Compact EEPROM-based Weight Functions
A. Kramer, C. K. Sin, R. Chu, and P. K. Ko
Department of Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720
Abstract
We are focusing on the development of a highly compact neural net weight
function based on the use of EEPROM devices. These devices have already
proven useful for analog weight storage, but existing designs rely on the
use of conventional voltage multiplication as the weight function, requiring
additional transistors per synapse. A parasitic capacitance between the
floating gate and the drain of the EEPROM structure leads to an unusual
I-V characteristic which can be used to advantage in designing a compact
synapse. This novel behavior is well characterized by a model we have
developed. A single-device circuit results in a 1-quadrant synapse function
which is nonlinear, though monotonic. A simple extension employing 2
EEPROMs results in a 2-quadrant function which is much more linear.
This approach offers the potential for more than a ten-fold increase in the
density of neural net implementations.
1 INTRODUCTION - ANALOG WEIGHTING
The recent surge of interest in neural networks and parallel analog computation has
motivated the need for compact analog computing blocks. Analog weighting is an
important computational function of this class. Analog weighting is the combining
of two analog values, one of which is typically varying (the input) and one of which
is typically fixed (the weight) or at least varying more slowly. The varying value
is "weighted" by the fixed value through the "weighting function", typically multiplication. Analog weighting is most interesting when the overall computational
task involves computing the "weighted sum of the inputs." That is, to compute
Σ_{i=1}^{n} f(w_i, V_i), where f is the weighting function and w = {w_1, w_2, . . . , w_n} and V = {V_1, V_2, . . . , V_n} are the n-dimensional analog-valued weight and input vectors. This weighted sum is simply the dot product in the case where the weighting function is multiplication.
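As a quick aside (not from the paper), the weighted sum above can be sketched in a few lines of Python; with multiplication as the weighting function it reduces to an ordinary dot product:

```python
import numpy as np

def weighted_sum(f, w, v):
    """Generic weighted sum: apply the weighting function f pairwise and sum."""
    return sum(f(wi, vi) for wi, vi in zip(w, v))

w = np.array([0.5, -1.0, 2.0])   # weights
v = np.array([1.0, 2.0, 3.0])    # inputs

# With multiplication as the weighting function, the sum is exactly the dot product.
mult = lambda wi, vi: wi * vi
print(weighted_sum(mult, w, v))  # 4.5
assert np.isclose(weighted_sum(mult, w, v), np.dot(w, v))
```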
For large n, the only way to perform this computation efficiently is to use compact
weighting functions and to take advantage of current summing. Using "conductive
multiplication" as the weighting function (weights stored as conductances of single
devices) results in an efficient implementation such as that shown in figure 1a. This
implementation is probably optimal, but in practice it is not possible to implement
small single-device programmable conductances which are linear.
[Figure 1 sketch: four synapse circuits; in the ideal case (a), a single element produces I = f(W, V).]
Figure 1: Weighting function implementations: (a) ideal, (b) conventional, (c) EEPROM-based storage, (d) compact EEPROM-based nonlinear weight function
1.1 CONVENTIONAL APPROACHES
The problem of implementing analog weighting is often divided into the separate
tasks of storing the fixed value (the weight) and combining the two analog values
through the weighting function (figure 1b). Conventional approaches to storing a
fixed analog weight value are to use either digital storage with some form of D/ A
conversion or to use volatile analog storage, which requires a large capacitor. Both
of these storage technologies require a large area.
The simplest and most widespread weighting function is multiplication [f(w, v) = wv]. Multiplication is attractive because of its mathematical and computational
simplicity. Multiplication is also a fairly straightforward operation to implement in
analog circuitry. When conventional technologies are used for weight storage, the
additional area required to provide a multiplication function is not significant. Of
course, the problem with this approach is that since a large area is required for
weight storage, the result is not sufficiently compact.
2 EEPROMS
EEPROMs are "electrically erasable, programmable, read-only memories". They
are essentially a JFET with a floating gate and a thin-oxide tunneling region between the floating gate and the drain (figure 2). A sufficiently high field across
the tunneling oxide will cause electrons to tunnel into or out of the floating gate,
effectively altering the threshold voltage of the device as seen from the top gate.
Normal operating (reading) voltages are sufficiently small to cause only insignificant
"disturbance programming" of the charge on the floating gate, so an EEPROM can
be viewed as a compact storage capacitor with a very long storage lifetime.
[Figure 2 sketch: plan view and cross section of the device, showing the floating gate, top oxide, source, and the thin tunneling-oxide region at the drain.]
Figure 2: EEPROM layout and cross section
Several groups have found that charge leakage on EEPROMs is sufficiently small to
guarantee that the threshold of a device can be retained with 4-8 bits of precision for
a period of years [Kramer, 1989][Holler, 1989]. There are several drawbacks to the
use of EEPROMs. Correct programming of these devices to the desired value is hard
to control and requires feedback. While the programming time for a single device
is less than a millisecond, because devices must be programmed one-at-a-time, the
time to program all the devices on a chip can be prohibitive. In addition, fabrication
of EEPROMs is a non-standard process requiring several additional masks and the
ability to make a thin tunneling oxide.
2.1 EEPROM-BASED WEIGHT STORAGE
The most straightforward manner to use an EEPROM in a weighting function is to
store the weight with the device. For example, the threshold of an EEPROM device
could be programmed to produce the desired bias current for an analog amplifier
(figure 1c). There are two advantages to this approach. Firstly, the weight storage
mechanism is divorced from the actual weight function computation and hence
places few constraints on it, and secondly, if the EEPROM is used in a static mode
(all applied voltages are constant), the exact I-V characteristics of the EEPROM
device are inconsequential.
The major disadvantage of this approach is that of inefficiency, as additional circuitry is needed to perform the weight function computation. An example of this
can be seen in a recent EEPROM-based neural net implementation developed by
the Intel corporation [Holler, 1989]. Though the weight value in this implementation is stored on only two EEPROMs, an additional 4 transistors are needed for
the multiplication function. In addition, though the circuit was designed to perform
multiplication the output is not quite linear under the best of conditions and, under
certain conditions, exhibits severe nonlinearity. Despite these limitations, this design demonstrates the advantage of EEPROM storage technology over conventional
approaches, as it is the most dense neural network implementation to date.
3 EEPROM I-V CHARACTERISTICS
Since linearity is difficult to implement and not a strict requirement of the weighting
function, we have investigated the possibility of using the I-V characteristics of an
EEPROM as the weight function. This approach has the advantage that a single
device could be used for both weight storage and weight function computation,
providing a very compact implementation. It is our hope that this approach will
lead to useful synapses of less than 200 um^2 in area, less than a tenth the area used
by the Intel synapse.
Though an EEPROM is a JFET device, a parasitic capacitance of the structure
results in an I-V characteristic which is unique. Conventional use of EEPROM
devices in digital circuitry does not make use of this fact, so that this effect has
not before been characterized or modeled. The floating gate of an EEPROM is
controlled via capacitive coupling by the top gate. In addition, the thin-ox tunneling
region between the floating gate and the drain creates a parasitic capacitor between
these two nodes. Though the area of this drain capacitor is small relative to that of
the top-gate floating-gate overlap area, the tunneling oxide is much thinner than the
insulating oxide between the two gates, resulting in a significant drain capacitance
(figure 3).
We have developed a model for an EEPROM which includes this parasitic drain
capacitance (figure 3). The basic contribution of this capacitance is to couple the
floating-gate voltage to the drain voltage. This is most obvious when the device is
saturated; while the current through a standard JFET is to first order independent
of drain voltage in this region, in the case of an EEPROM, the current has a square
law dependence on the drain voltage (equation 3). While this artifact of EEPROMs
makes them behave poorly as current sources, it may make them more useful as
single-device weighting functions.
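The coupling mechanism described above can be sketched numerically. The following fragment is an illustration only: the capacitance values and the square-law device model are assumptions of ours, not taken from the paper, but it shows how a parasitic drain capacitance makes the saturated current depend on Vds:

```python
# Hypothetical capacitance values (arbitrary units) -- for illustration only.
Cox, Cg, Cd = 1.0, 0.5, 0.2
Ctot = Cox + Cg + Cd

def vfg(vg, vd):
    """Floating-gate voltage as a capacitive divider; the drain couples in
    through the parasitic tunneling-oxide capacitance Cd."""
    return (Cg * vg + Cd * vd) / Ctot

def ids_sat(vg, vd, vt=0.0, kp=1.0):
    """Assumed square-law saturation current driven by the floating-gate overdrive."""
    ov = vfg(vg, vd) - vt
    return kp * max(ov, 0.0) ** 2

# Unlike an ideal current source, the saturated current keeps rising with Vds,
# because Cd couples the drain voltage onto the floating gate.
print(ids_sat(vg=2.0, vd=1.0) < ids_sat(vg=2.0, vd=4.0))  # True
```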
[Figure 3 sketch: EEPROM symbol with its floating gate, alongside the circuit model with top-gate capacitance Cg, channel-oxide capacitance Cox, and parasitic drain capacitance Cd, together with the corresponding capacitor areas.]
Figure 3: EEPROM model and capacitor areas
There are several ways to analyze our model depending on the level of accuracy
desired [Sin, 1991]. We present here the results of simplest of these which captures
the essential behavior of an EEPROM. This analysis is based on a linear channel
approximation and the equations which result are similar in form to those for a
normal JFET, with the addition of the dependence between the floating gate voltage
and the drain voltage and all capacitive coupling factors. The equations for drain
saturation voltage (Vds,sat), nonsaturated drain current (Ids,lin) and saturated drain current (Ids,sat) are:
[Equations (1)-(3): expressions for Vds,sat, Ids,lin, and Ids,sat in terms of Kp, Vg, Vds, Vt, and the capacitances Cox, Cg, and Cd. The drain voltage enters through the Cd*Vds term coupled onto the floating gate, which gives Ids,sat its square-law dependence on Vds.]
On EEPROM devices we have fabricated in house, our model matches measured I-V data well, especially in capturing the dependence of saturated drain current on drain voltage (figure 4).
[Figure 4 data: Ids (uA) versus Vds (V) over 0-5 V for several gate voltages (Vg = 1 V, 2 V, . . .); measured and simulated curves agree closely.]
Figure 4: EEPROM I-V, measured and simulated.
4 EEPROM-BASED WEIGHTING FUNCTIONS
One way to make a compact weight function using an EEPROM is to use the device I-V characteristics directly. This could be accomplished by storing the weight as the device threshold voltage (Vt), applying the input value as the drain-source voltage (Vds) and setting the top gate voltage to a constant reference value (figure 1d).
In this case the synapse would look exactly like the I-V measuring circuit and the
weighting function would be exactly the EEPROM I-V shown in figure 4, except
that rather than leaving the threshold voltage fixed and varying the gate voltage,
as was done to generate the curves shown, the gate voltage would be fixed to a
constant value and different curves would be generated by programming the device
threshold to different values.
While extremely compact (a single device), this function is only a one quadrant
function (both weight and input values must be positive or output is zero) and for
many applications this is not sufficient. An easy way to provide a two-quadrant
function based on a similar approach is to use two EEPROMs configured in a
common-input, differential-output (Iout = Ids+ - Ids-) scheme, as in the circuit depicted in figure 5. By programming the EEPROMs so that one is always active and one is always inactive, the output of the weight function can now be a "positive" or a "negative" current, depending on which device is chosen. Again, the weighting function is exactly the EEPROM I-V in this case.
In addition to providing a two-quadrant function, this two-device circuit offers another interesting possibility. The same differential output scheme can be made to provide a much more linear two-quadrant function if both "positive" and "negative" devices are programmed to be active (negative thresholds). The "weight" in this case is the difference in threshold values between the two devices (W = Vt- - Vt+). This scheme "subtracts" one device curve from the other. The model we have developed indicates that this has the effect of canceling out much of the nonlinearity and results in a function which has three distinct regions, two of which are linear in the input voltage and the weight value.
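The cancellation claim can be checked with a toy calculation. Assuming (as a simplification of ours, not the paper's full model) that both devices contribute square-law currents driven by the same effective gate signal, the quadratic terms cancel in the difference and the slope is set by the threshold difference:

```python
import numpy as np

vtp, vtm = 0.7, -0.3                         # hypothetical programmed thresholds Vt+ and Vt-
f = lambda a: (a - vtp)**2 - (a - vtm)**2    # differential of two square-law terms

# The second finite difference vanishes: the differential is linear in the drive `a`.
d2 = f(2.0) - 2*f(1.0) + f(0.0)
assert abs(d2) < 1e-12

# The slope equals 2*(Vt- - Vt+): proportional to the threshold difference,
# i.e. the stored "weight" W.
assert np.isclose(f(1.0) - f(0.0), 2*(vtm - vtp))
print("quadratic terms cancel; slope =", f(1.0) - f(0.0))
```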
[Figure 5 data: measured Iout (uA) versus Vin (V) over 0-5 V for weights W = -3 through +3, with Vds = Vin, Vref = const (2.5 V), W = (Vt+ - Vt-), and Iout = (Ids+ - Ids-).]
Figure 5: 2-quadrant, 2-EEPROM weighting function.
The first of these linear regions occurs when both devices are active and neither is saturated (both devices modeled by equation 2). In this case, subtracting Ids- from Ids+ cancels all nonlinearities and the differential is exactly the product of the input value (Vds) and the weight (Vt- - Vt+), with a scaling factor of Kp:

Iout = Kp Vds (Vt- - Vt+)    (4)
The other linear region occurs when both devices are saturated (both modeled
by equation 3). All nonlinearities also cancel in this case, but there is an offset
remaining and the scaling factor is modified:
[Equation (5): in this region Iout is again proportional to Vds (Vt- - Vt+), but with the modified scaling factor Kp Cg/(0.5Cox + Cg + Cd), plus an offset term proportional to (Vt- - Vt+).]
We have fabricated structures of this type and measured, as well as simulated, their functional characteristics. Measured data again agreed with our model (figure 5).
Note that the slope in this last region [scaling factor of Kp Cg/(0.5Cox + Cg + Cd)] will be strictly less than that in the first region [scaling factor Kp]. The model indicates
that one way to minimize this difference in slopes is to increase the size of the
parasitic drain capacitance (Cd) relative to the gate capacitance (Cg).
5 CONCLUSIONS
While EEPROM devices have already proven useful for nonvolatile analog storage, we have discovered and characterized novel functional characteristics of the
EEPROM device which should make them useful as analog weighting functions. A
parasitic drain-floating gate capacitance has been included in a model which accurately captures this behavior. Several compact nonlinear EEPROM-based weight
functions have been proposed, including a single-device one-quadrant function and
a more linear two-device two-quadrant function. Problems such as the usability of
nonlinear weighting functions, selection of optimal EEPROM device parameters and
potential fanout limitations of feeding the input into a low impedance node (drain)
must all be resolved before this technology can be used for a full-blown implementation. Our model will be helpful in this work. The approach of using inherent device
characteristics to build highly compact weighting functions promises to greatly improve the density and efficiency of massively parallel analog computation such as
that performed by neural networks.
Acknowledgements
Research sponsored by the Air Force Office of Scientific Research (AFOSR/JSEP)
under Contract Number F49620-90-C-0029.
References
M. Holler, et al., (1989) "An Electrically Trainable Artificial Neural Network (ETANN) with 10240 'Floating Gate' Synapses," Proceedings of the IJCNN-89, Washington D. C., 1989.
A. Kramer, et al., (1989) "EEPROM Device as a Reconfigurable Analog Element for Neural Networks," 1989 IEDM Technical Digest, Beaver Press, Alexandria, VA, Dec. 1989.
C. K. Sin, (1990) EEPROM as an Analog Storage Element, Master's Thesis, Dept.
of EECS, University of California at Berkeley, Berkeley, CA, Sept. 1990.
A More Powerful Two-Sample Test in High
Dimensions using Random Projection
Miles E. Lopes1   Laurent Jacob1   Martin J. Wainwright1,2
1Department of Statistics and 2EECS
University of California, Berkeley
Berkeley, CA 94720-3860
{mlopes,laurent,wainwrig}@stat.berkeley.edu
Abstract
We consider the hypothesis testing problem of detecting a shift between the means
of two multivariate normal distributions in the high-dimensional setting, allowing
for the data dimension p to exceed the sample size n. Our contribution is a new test
statistic for the two-sample test of means that integrates a random projection with
the classical Hotelling T 2 statistic. Working within a high-dimensional framework
that allows (p, n) → ∞, we first derive an asymptotic power function for our
test, and then provide sufficient conditions for it to achieve greater power than
other state-of-the-art tests. Using ROC curves generated from simulated data,
we demonstrate superior performance against competing tests in the parameter
regimes anticipated by our theoretical results. Lastly, we illustrate an advantage
of our procedure with comparisons on a high-dimensional gene expression dataset
involving the discrimination of different types of cancer.
1 Introduction
Two-sample hypothesis tests are concerned with the question of whether two samples of data are
generated from the same distribution. Such tests are among the most widely used inference procedures in treatment-control studies in science and engineering [1]. Application domains such
as molecular biology and fMRI have stimulated considerable interest in detecting shifts between
distributions in the high-dimensional setting, where the two samples of data {X1, . . . , Xn1} and {Y1, . . . , Yn2} are subsets of R^p, and n1, n2 ≪ p [e.g., 2-5]. In transcriptomics, for instance, p
gene expression measures on the order of hundreds or thousands may be used to investigate differences between two biological conditions, and it is often difficult to obtain sample sizes n1 and n2
larger than several dozen in each condition. In high-dimensional situations such as these, classical
methods may be ineffective, or not applicable at all. Likewise, there has been growing interest in
developing testing procedures that are better suited to deal with the effects of dimension [e.g., 6-10].
A fundamental instance of the general two-sample problem is the two-sample test of means with
Gaussian data. In this case, two independent sets of samples {X1 , . . . , Xn1 } and {Y1 , . . . , Yn2 } are
generated in an i.i.d. manner from p-dimensional multivariate normal distributions N (?1 , ?) and
N (?2 , ?) respectively, where the mean vectors ?1 , ?2 ? Rp and covariance matrix ? 0 are all
fixed and unknown. The hypothesis testing problem of interest is
H0 : ?1 = ?2 versus H1 : ?1 6= ?2 .
(1)
The most well-known test statistic for this problem is the Hotelling T^2 statistic, defined by

T^2 := (n1 n2 / (n1 + n2)) (X̄ − Ȳ)ᵀ Σ̂⁻¹ (X̄ − Ȳ),    (2)
where X̄ := (1/n1) Σ_{j=1}^{n1} Xj and Ȳ := (1/n2) Σ_{j=1}^{n2} Yj are the sample means, and Σ̂ is the pooled sample covariance matrix, given by

Σ̂ := (1/n) [ Σ_{j=1}^{n1} (Xj − X̄)(Xj − X̄)ᵀ + Σ_{j=1}^{n2} (Yj − Ȳ)(Yj − Ȳ)ᵀ ],

with n := n1 + n2 − 2.
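As a concrete aid (not part of the paper), a minimal numpy implementation of the classical statistic defined above:

```python
import numpy as np

def hotelling_T2(X, Y):
    """Two-sample Hotelling T^2 as in equation (2); rows of X and Y are samples.
    Requires the pooled sample covariance to be invertible (roughly p <= n)."""
    n1, n2 = len(X), len(Y)
    n = n1 + n2 - 2
    xbar, ybar = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - xbar, Y - ybar
    Sigma_hat = (Xc.T @ Xc + Yc.T @ Yc) / n        # pooled sample covariance
    d = xbar - ybar
    return (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(Sigma_hat, d)

# Tiny hand-checkable case: X = {0, 2}, Y = {3, 5} in one dimension.
# Pooled variance is 2 and the mean difference is -3, so T^2 = 1 * 9/2 = 4.5.
X = np.array([[0.0], [2.0]])
Y = np.array([[3.0], [5.0]])
print(hotelling_T2(X, Y))  # 4.5
```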
When p > n, the matrix Σ̂ is singular, and the Hotelling test is not well-defined. Even when p ≤ n, the Hotelling test is known to perform poorly if p is nearly as large as n. This behavior
was demonstrated in a seminal paper of Bai and Saranadasa [6] (or BS for short), who studied the
performance of the Hotelling test under (p, n) → ∞ with p/n → 1 − ε, and showed that the asymptotic power of the test suffers for small values of ε > 0. In subsequent years, a number of improvements on the Hotelling test in the high-dimensional setting have been proposed [e.g., 6-9].
In this paper, we propose a new test statistic for the two-sample test of means with multivariate
normal data, applicable when p ≥ n/2. We provide an explicit asymptotic power function for our test with (p, n) → ∞, and show that under certain conditions, our test has greater asymptotic
power than other state-of-the-art tests. These comparison results are valid with p/n tending to a
positive constant or infinity. In addition to its advantage in terms of asymptotic power, our procedure
specifies exact level-α critical values for multivariate normal data, whereas competing procedures offer only approximate level-α critical values. Furthermore, our experiments in Section 4 suggest
that the critical values of our test may also be more robust than those of competing tests. Lastly, the
computational cost of our procedure is modest in the n < p setting, being of order O(n^2 p).
The remainder of this paper is organized as follows. In Section 2, we provide background on hypothesis testing and describe our testing procedure. Section 3 is devoted to a number of theoretical
results about its performance. Theorem 1 in Section 3.1 provides an asymptotic power function,
and Theorems 2 and 3 in Sections 3.3 and 3.4 give sufficient conditions for achieving greater power
than state-of-the-art tests in the sense of asymptotic relative efficiency. In Section 4 we provide
performance comparisons with ROC curves on synthetic data against a broader collection of methods, including some recent kernel-based and non-parametric approaches such as MMD [11], KFDA
[12], and TreeRank [10]. Lastly, we study a high-dimensional gene expression dataset involving the
discrimination of different cancer types, demonstrating that our test's false positive rate is reliable in practice. We refer the reader to the preprint [13] for proofs of our theoretical results.
Notation. Let δ := μ1 − μ2 denote the shift vector between the distributions N(μ1, Σ) and N(μ2, Σ), and define the ordered pair of parameters θ := (δ, Σ). Let z_{1−α} denote the 1 − α quantile of the standard normal distribution, and let Φ be its cumulative distribution function. If A is a matrix in R^{p×p}, let |||A|||_2 denote its spectral norm (maximum singular value), and define the Frobenius norm |||A|||_F := sqrt(Σ_{i,j} A_{ij}^2). When all the eigenvalues of A are real, we denote them by λ_min(A) = λ_p(A) ≤ · · · ≤ λ_1(A) = λ_max(A). For a positive-definite covariance matrix Σ, let D_Σ := diag(Σ), and define the associated correlation matrix R := D_Σ^{−1/2} Σ D_Σ^{−1/2}. We use the notation f(n) ≲ g(n) if there is some absolute constant c such that the inequality f(n) ≤ c g(n) holds for all large n. If both f(n) ≲ g(n) and g(n) ≲ f(n) hold, then we write f(n) ≍ g(n). The notation f(n) = o(g(n)) means f(n)/g(n) → 0 as n → ∞.
2 Background and random projection method
For the remainder of the paper, we retain the set-up for the two-sample test of means (1) with
Gaussian data, assuming throughout that p ≥ n/2, and n = n1 + n2 − 2.
Review of hypothesis testing terminology. The primary focus of our results will be on the comparison of power between test statistics, and here we give precise meaning to this notion. When testing
a null hypothesis H0 versus an alternative hypothesis H1 , a procedure based on a test statistic T
specifies a critical value, such that H0 is rejected if T exceeds that critical value, and H0 is accepted otherwise. The chosen critical value fixes a trade-off between the risk of rejecting H0 when
H0 actually holds, and the risk of accepting H0 when H1 holds. The former error is referred to as
a type I error and the latter as a type II error. A test is said to have level α if the probability of committing a type I error is at most α. Finally, at a given level α, the power of a test is the probability of rejecting H0 under H1, i.e., 1 minus the probability of a type II error. When evaluating testing procedures at a given level α, we seek to identify the one with the greatest power.
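These definitions can be illustrated with a quick Monte Carlo for a simple one-sided z-test (an illustrative aside of ours, not the paper's test):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 50, 20000
z_crit = 1.6448536269514722            # the 0.95 quantile of N(0, 1)

def rejection_rate(mu):
    """Fraction of simulated datasets on which a one-sided z-test
    (H0: mu = 0, known unit variance) rejects at level alpha."""
    x = rng.normal(loc=mu, size=(trials, n))
    z = x.mean(axis=1) * np.sqrt(n)
    return float((z > z_crit).mean())

print(rejection_rate(0.0))   # type I error rate: close to alpha = 0.05
print(rejection_rate(0.5))   # power under the alternative mu = 0.5: close to 1
```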
Past work. The Hotelling T^2 statistic (2) discriminates between the hypotheses H0 and H1 by providing an estimate of the "statistical distance" separating the distributions N(μ1, Σ) and N(μ2, Σ). More specifically, the Hotelling statistic is essentially an estimate of the Kullback-Leibler (KL) divergence D_KL(N(μ1, Σ) ‖ N(μ2, Σ)) = (1/2) δᵀ Σ⁻¹ δ, where δ := μ1 − μ2. Due to the fact that the pooled sample covariance matrix Σ̂ in the definition of T^2 is not invertible when p > n, several
recent procedures have offered substitutes for the Hotelling statistic in the high-dimensional setting:
Bai and Saranadasa [6], Srivastava and Du [7, 8], Chen and Qin [9], hereafter BS, SD and CQ respectively. Up to now, the route toward circumventing this difficulty has been to form an estimate of
? that is diagonal, and hence easily invertible. We shall see later that this limited use of covariance
structure sacrifices power when the data exhibit non-trivial correlation. In this regard, our procedure is motivated by the idea that covariance structure may be used more effectively by testing with
projected samples in a space of lower dimension.
Intuition for random projection. To provide some further intuition for our method, it is possible
to consider the problem (1) in terms of a competition between the dimension p, and the statistical
distance separating H0 and H1 . On one hand, the accumulation of variance from a large number
of variables makes it difficult to discriminate between the hypotheses, and thus, it is desirable to
reduce the dimension of the data. On the other hand, most methods for reducing dimension will also
bring H0 and H1 ?closer together,? making them harder to distinguish. Mindful of the fact that the
Hotelling test measures the separation of H0 and H1 in terms of δᵀ Σ⁻¹ δ, we see that the statistical distance is driven by the Euclidean length of δ. Consequently, we seek to transform the data in such a way that the dimension is reduced, while the length of the shift δ is mostly preserved upon passing
to the transformed distributions. From this geometric point of view, it is natural to exploit the fact
that random projections can simultaneously reduce dimension and approximately preserve lengths
with high probability [14]. The use of projection-based test statistics has been considered previously
in Jacob et al. [15], Clémençon et al. [10], and Cuesta-Albertos et al. [16].
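The length-preservation property can be seen in a few lines of numpy (an illustrative sketch; the dimensions and scaling are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 1000, 50                         # ambient and projected dimensions (arbitrary)
delta = rng.normal(size=p)              # stand-in for a high-dimensional shift vector

# A Gaussian projection scaled by 1/sqrt(k) approximately preserves Euclidean length.
P = rng.normal(size=(p, k)) / np.sqrt(k)
ratio = np.linalg.norm(P.T @ delta) / np.linalg.norm(delta)
print(round(ratio, 2))                  # concentrates near 1 as k grows
```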
At a high level, our method can be viewed as a two step procedure. First, a single random projection
is drawn, and is used to map the samples from the high-dimensional space R^p to a low-dimensional space1 R^k, with k := ⌊n/2⌋. Second, the Hotelling T^2 test is applied to a new hypothesis testing
problem, H0,proj versus H1,proj , in the projected space. A decision is then pulled back to the original
problem by simply rejecting H0 whenever the Hotelling test rejects H0,proj .
Formal testing procedure. Let Pkᵀ ∈ R^{k×p} denote a random projection with i.i.d. N(0, 1) entries, drawn independently of the data, where k = ⌊n/2⌋. Conditioning on the drawn matrix Pkᵀ, the projected samples {Pkᵀ X1, . . . , Pkᵀ Xn1} and {Pkᵀ Y1, . . . , Pkᵀ Yn2} are distributed i.i.d. according to N(Pkᵀ μi, Pkᵀ Σ Pk) respectively, with i = 1, 2. Since n ≥ k, the projected data satisfy the usual conditions [17, p. 211] for applying the Hotelling T^2 procedure to the following new two-sample problem in the projected space R^k:
H0,proj : Pkᵀ μ1 = Pkᵀ μ2    versus    H1,proj : Pkᵀ μ1 ≠ Pkᵀ μ2.    (3)
For this projected problem, the Hotelling test statistic takes the form2

T_k^2 := (n1 n2 / (n1 + n2)) (X̄ − Ȳ)ᵀ Pk (Pkᵀ Σ̂ Pk)⁻¹ Pkᵀ (X̄ − Ȳ),

where X̄, Ȳ, and Σ̂ are as defined in Section 1. Lastly, define the critical value t_α := (k n / (n − k + 1)) F_{k,n−k+1}(α), where F_{k,n−k+1}(α) is the upper α quantile of the F_{k,n−k+1} distribution [17].
It is a basic fact about the classical Hotelling test that rejecting H0,proj when T_k^2 ≥ t_α is a level-α test for the projected problem (3) (e.g., see Muirhead [17, p. 217]). Inspection of the formula for T_k^2 shows that its distribution is the same under both H0 and H0,proj. Therefore, rejecting the original H0 when T_k^2 ≥ t_α is also a level α test for the original problem (1). Likewise, we define this as the condition for rejecting H0 at level α in our procedure for (1). We summarize our procedure below.
1 The choice of projected dimension k = ⌊n/2⌋ is explained in the preprint [13].
2 Note that Pkᵀ Σ̂ Pk is invertible with probability 1 when Pkᵀ has i.i.d. N(0, 1) entries.
1. Generate a single random matrix P_k^T with i.i.d. N(0, 1) entries.
2. Compute T_k^2, using P_k^T and the two sets of samples.
3. If T_k^2 ≥ t_α, reject H0; otherwise accept H0.

(⋆) Projected Hotelling test at level α for problem (1).
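As an illustration, the three steps of the procedure can be sketched in a few lines of Python. This is our own sketch, not the authors' code; the function name, the use of NumPy/SciPy, and the input layout (samples stored as rows of X and Y) are our assumptions.

```python
# Sketch of the projected Hotelling test: project both samples with a single
# random Gaussian matrix, then apply the classical Hotelling T^2 test in R^k.
import numpy as np
from scipy.stats import f as f_dist

def projected_hotelling(X, Y, alpha=0.05, seed=None):
    """X: (n1 x p) sample, Y: (n2 x p) sample. Returns (T2, t_alpha, reject)."""
    rng = np.random.default_rng(seed)
    n1, p = X.shape
    n2 = Y.shape[0]
    n = n1 + n2 - 2
    k = n // 2                              # projected dimension k = floor(n/2)
    P = rng.standard_normal((p, k))         # projection with i.i.d. N(0,1) entries
    Xp, Yp = X @ P, Y @ P                   # projected samples in R^k
    diff = Xp.mean(axis=0) - Yp.mean(axis=0)
    # pooled sample covariance of the projected data (equals P^T Sigma_hat P)
    S = ((n1 - 1) * np.cov(Xp, rowvar=False)
         + (n2 - 1) * np.cov(Yp, rowvar=False)) / n
    T2 = n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    # critical value: kn/(n-k+1) times the upper-alpha quantile of F_{k,n-k+1}
    t_alpha = k * n / (n - k + 1) * f_dist.ppf(1 - alpha, k, n - k + 1)
    return T2, t_alpha, T2 >= t_alpha       # reject H0 iff T2 >= t_alpha
```

For instance, with n_1 = n_2 = 50 and p = 200 (the regime used in Section 4), a large mean shift is detected with overwhelming probability even though p exceeds the total sample size.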
3  Main results and their consequences
This section is devoted to the statement and discussion of our main theoretical results, including
a characterization of the asymptotic power function of our test (Theorem 1), and comparisons of
asymptotic relative efficiency with state-of-the-art tests proposed in past work (Theorems 2 and 3).
3.1  Asymptotic power function
As is standard in high-dimensional asymptotics, we will consider a sequence of hypothesis testing
problems indexed by n, allowing the dimension p, mean vectors μ_1 and μ_2, and covariance matrix
Σ to implicitly vary as functions of n, with n → ∞. We also make another type of asymptotic
assumption, known as a local alternative [18, p. 193], which is commonplace in hypothesis testing.
The idea lying behind a local alternative assumption is that if the difficulty of discriminating between
H0 and H1 is "held fixed" with respect to n, then it is often the case that most testing procedures
have power tending to 1 under H1 as n → ∞. In such a situation, it is not possible to tell if one
test has greater asymptotic power than another. Consequently, it is standard to derive asymptotic
power results under the extra condition that H0 and H1 become harder to distinguish as n grows.
This theoretical device aids in identifying the conditions under which one test is more powerful
than another. The following local alternative (A1), and balancing assumption (A2), are similar to
those used in previous works [6–9] on problem (1). In particular, condition (A1) means that the
KL-divergence between N(μ_1, Σ) and N(μ_2, Σ) tends to 0 as n → ∞.
(A1) Suppose that δ^T Σ^{-1} δ = o(1).
(A2) Let there be a constant b ∈ (0, 1) such that n_1/n → b.
To set the notation for Theorem 1, it is important to notice that each time the procedure (⋆) is implemented, a draw of P_k^T induces a new test statistic T_k^2. To make this dependence clear, recall
θ := (δ, Σ), and let β(θ; P_k^T) denote the exact (non-asymptotic) power function of our level-α
test for problem (1), induced by a draw of P_k^T, as in (⋆). Another key quantity that depends on
P_k^T is the KL-divergence between the projected sampling distributions N(P_k^T μ_1, P_k^T Σ P_k) and
N(P_k^T μ_2, P_k^T Σ P_k). We denote this divergence by (1/2)Δ_k^2, and a simple calculation shows that

(1/2)Δ_k^2 = (1/2) δ^T P_k (P_k^T Σ P_k)^{-1} P_k^T δ.
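Numerically, this quantity is straightforward to evaluate from δ, Σ, and a draw of P_k. The following is a minimal sketch of ours (not taken from the paper); the function name is illustrative.

```python
import numpy as np

def delta_k_sq(delta, Sigma, P):
    """Delta_k^2 = delta^T P (P^T Sigma P)^{-1} P^T delta for a p x k projection P."""
    Pd = P.T @ delta
    return Pd @ np.linalg.solve(P.T @ Sigma @ P, Pd)
```

When k = p and P is invertible, Δ_k^2 reduces exactly to δ^T Σ^{-1} δ (twice the KL-divergence of the unprojected distributions); for k < p the projected divergence can only be smaller.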
Theorem 1. Under conditions (A1) and (A2), for almost all sequences of projections P_k^T,

β(θ; P_k^T) − Φ( −z_{1−α} + (b(1−b)/√2) · n Δ_k^2 ) → 0 as n → ∞.   (4)
Remarks. Note that if Δ_k^2 = 0, e.g. under H0, then Φ(−z_{1−α} + 0) = α, which corresponds to blind
guessing at level α. Consequently, the second term (b(1−b)/√2) · n Δ_k^2 determines the advantage
of our procedure over blind guessing. Since Δ_k^2 is proportional to the KL-divergence between the
projected sampling distributions, these observations conform to the intuition from Section 2 that the
KL-divergence measures the discrepancy between H0 and H1.
3.2  Asymptotic relative efficiency (ARE)
Having derived an asymptotic power function for our test in Theorem 1, we are now in position to
provide sufficient conditions for achieving greater power than two other recent procedures for problem (1): Srivastava and Du [7, 8] (SD), and Chen and Qin [9] (CQ). To the best of our knowledge,
these works represent the state of the art³ among tests for problem (1) with a known asymptotic
power function under (p, n) → ∞.
From Theorem 1, the asymptotic power function of our random projection-based test at level α is

β_RP(θ; P_k^T) := Φ( −z_{1−α} + (b(1−b)/√2) · n Δ_k^2 ).   (5)

The asymptotic power functions for the CQ and SD testing procedures at level α are

β_CQ(α) := Φ( −z_{1−α} + (b(1−b)/√2) · n ‖δ‖_2^2 / |||Σ|||_F )   and   β_SD(α) := Φ( −z_{1−α} + (b(1−b)/√2) · n δ^T D_σ^{-1} δ / |||R|||_F ).

Recall that D_σ := diag(Σ), and R denotes the correlation matrix associated with Σ. The functions
β_CQ and β_SD are derived under local alternatives and asymptotic assumptions that are similar to the
ones used here to obtain β_RP. In particular, all three functions can be obtained allowing p/n to tend
to an arbitrary positive constant or infinity.
A standard method of comparing asymptotic power functions under local alternatives is through
the concept of asymptotic relative efficiency (ARE) (e.g., see van der Vaart [18, p. 192]). Since Φ is
monotone increasing, the term added to −z_{1−α} inside the Φ functions above controls the power. To
compare power between tests, the ARE is simply defined via the ratio of such terms. More explicitly,
we define

ARE(β_CQ; β_RP) := ( n ‖δ‖_2^2 / |||Σ|||_F ) / ( n Δ_k^2 ),   and   ARE(β_SD; β_RP) := ( n δ^T D_σ^{-1} δ / |||R|||_F ) / ( n Δ_k^2 ).

Whenever the ARE is less than 1, our procedure is considered to have greater asymptotic power than
the competing test, with our advantage being greater for smaller values of the ARE. Consequently,
we seek sufficient conditions in Theorems 2 and 3 for ensuring that the ARE is small.
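Since the factor n cancels in each ratio, both ARE quantities can be computed directly from δ, Σ, and a draw of P_k. The sketch below is ours (function names are hypothetical) and simply transcribes the definitions.

```python
import numpy as np

def delta_k_sq(delta, Sigma, P):
    Pd = P.T @ delta
    return Pd @ np.linalg.solve(P.T @ Sigma @ P, Pd)

def are_cq_rp(delta, Sigma, P):
    # (n ||delta||_2^2 / |||Sigma|||_F) / (n Delta_k^2); the n's cancel
    return (delta @ delta / np.linalg.norm(Sigma, 'fro')) / delta_k_sq(delta, Sigma, P)

def are_sd_rp(delta, Sigma, P):
    d = np.diag(Sigma)                      # diagonal of D_sigma
    R = Sigma / np.sqrt(np.outer(d, d))     # correlation matrix of Sigma
    num = delta @ (delta / d) / np.linalg.norm(R, 'fro')
    return num / delta_k_sq(delta, Sigma, P)
```

As a sanity check, when Σ is the identity and P is a square invertible matrix (so that k = p and the projection loses nothing), both AREs reduce to 1/|||I|||_F = 1/√p, i.e. strictly less than 1 for every p > 1.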
In the present context, the analysis of ARE is complicated by the fact that the ARE varies with
n and depends on a random draw of P_k^T through Δ_k^2. Moreover, the quantity Δ_k^2, and hence the
ARE, are affected by the orientation of δ with respect to the eigenvectors of Σ. In order to consider
an average-case scenario, where no single orientation of δ is of particular importance, we place a
prior on the unit vector δ/‖δ‖_2, and assume that it is uniformly distributed on the unit sphere of R^p.
We emphasize that our procedure (⋆) does not rely on this assumption, and that it is only a device
for making an average-case comparison. Therefore, to be clear about the meaning of Theorems 2
and 3, we regard the ARE as a function of two random objects, P_k^T and δ/‖δ‖_2, and our probability
statements are made with this understanding. We complete the preparation for our comparison
theorems by isolating four assumptions with n → ∞.

(A3) The vector δ/‖δ‖_2 is uniformly distributed on the p-dimensional unit sphere, independent of P_k^T.
(A4) There is a constant a ∈ [0, 1) such that k/p → a.
(A5) The ratio (1/k) · tr(Σ) / (p λ_min(Σ)) = o(1).
(A6) The matrix D_σ = diag(Σ) satisfies |||D_σ^{-1}|||_2 / tr(D_σ^{-1}) = o(1).
3.3  Comparison with Chen and Qin [9]
The next result compares the asymptotic power of our projection-based test with that of Chen and
Qin [9]. The choice of ε_1 = 1 below (and in Theorem 3) is the reference for equal asymptotic
performance, with smaller values of ε_1 corresponding to better performance of random projection.

Theorem 2. Assume conditions (A3), (A4), and (A5). Fix a number ε_1 > 0, and let c(ε_1) be any
constant strictly greater than 1 / (ε_1 (1 − a^{1/4})^4). If the inequality

n ≥ c(ε_1) tr(Σ)^2 / |||Σ|||_F^2   (6)

holds for all large n, then P[ ARE(β_CQ; β_RP) ≤ ε_1 ] → 1 as n → ∞.
Interpretation. To interpret the result, note that Jensen's inequality implies that for any choice of
Σ, we have 1 ≤ tr(Σ)^2 / |||Σ|||_F^2 ≤ p. As such, it is reasonable to interpret this ratio as a measure of
the effective dimension of the covariance structure. The message of Theorem 2 is that as long as
the sample size n exceeds the effective dimension, then our projection-based test is asymptotically
superior to CQ. The ratio tr(Σ)^2 / |||Σ|||_F^2 can also be viewed as measuring the decay rate of the
spectrum of Σ, with tr(Σ)^2 ≪ |||Σ|||_F^2 · p indicating rapid decay. This condition means that the data
has low variance in "most" directions in R^p, and so projecting onto a random set of k directions will
likely map the data into a low-variance subspace in which it is harder for chance variation to explain
away the correct hypothesis, thereby resulting in greater power.

³Two other high-dimensional tests have been proposed in older works [6, 19, 20] that lead to the asymptotic power function β_CQ, but under more restrictive assumptions.
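To make the notion of effective dimension concrete, note that tr(Σ)^2 / |||Σ|||_F^2 depends only on the spectrum of Σ. The sketch below (our own) evaluates it for spectra mimicking the power-20 (fast) and power-5 (slow) decay profiles used in Section 4; the resulting numbers need not match the thresholds of condition (6), which also involve the constant c(·) from Theorem 2.

```python
import numpy as np

def effective_dim(eigvals):
    """tr(Sigma)^2 / |||Sigma|||_F^2, computed from the eigenvalues of Sigma."""
    lam = np.asarray(eigvals, dtype=float)
    return lam.sum() ** 2 / (lam ** 2).sum()

base = np.linspace(0.01, 1.0, 200)
fast = effective_dim(base ** 20)   # rapid spectral decay -> small effective dimension
slow = effective_dim(base ** 5)    # slower decay -> larger effective dimension
```

By Jensen's inequality the ratio always lies in [1, p] (with the value p attained by the identity covariance), and faster spectral decay shrinks it, making the sample-size condition (6) easier to satisfy.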
3.4  Comparison with Srivastava and Du [7, 8]
We now turn to comparison of asymptotic power with the test of Srivastava and Du (SD).
Theorem 3. In addition to the conditions of Theorem 2, assume that condition (A6) holds. Fix a
number ε_1 > 0, and let c(ε_1) be any constant strictly greater than 1 / (ε_1 (1 − a^{1/4})^4). If the inequality

n ≥ c(ε_1) ( tr(Σ)/p )^2 ( tr(D_σ^{-1}) / |||R|||_F )^2   (7)

holds for all large n, then P[ ARE(β_SD; β_RP) ≤ ε_1 ] → 1 as n → ∞.
Interpretation. Unlike the comparison with the CQ test, the correlation matrix R plays a large role
in determining the relative efficiency between our procedure and the SD test. The correlation matrix
enters in two different ways. First, the Frobenius norm |||R|||F is larger when the data variables are
more correlated. Second, correlation mitigates the growth of tr(D_σ^{-1}), since this trace is largest
when Σ is nearly diagonal and has a large number of small eigenvalues. Inspection of the SD test
statistic in [7] shows that it does not make any essential use of correlation. By contrast, our T_k^2
statistic does take correlation into account, and so it is understandable that correlated data enhance
the performance of our test relative to SD.
As a simple example, let ρ ∈ (0, 1) and consider a highly correlated situation where all variables
have correlation ρ with all other variables. Then R = (1 − ρ) I_{p×p} + ρ 11^T, where 1 ∈ R^p is the all-ones vector. We may also let Σ = R for simplicity. In this case, we see that |||R|||_F^2 = p + 2 (p choose 2) ρ^2 ≳ p^2, and tr(D_σ^{-1})^2 = tr(I_{p×p})^2 = p^2. This implies tr(D_σ^{-1})^2 / |||R|||_F^2 ≲ 1 and tr(Σ)/p = 1, and
then the sufficient condition (7) for outperforming SD is easily satisfied in terms of rates. We could
even let the correlation ρ decay at a rate of n^{−q} with q ∈ (0, 1/2), and (7) would still be satisfied
for large enough n. More generally, it is not necessary to use specially constructed covariance
matrices Σ to demonstrate the superior performance of our method. Section 4 illustrates simulations
involving randomly selected covariance matrices where T_k^2 is more powerful than SD.
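The norms appearing in this equicorrelation example are easy to verify numerically. This is our own check; the specific values p = 200 and ρ = 0.5 are chosen arbitrarily for illustration.

```python
import numpy as np

p, rho = 200, 0.5
R = (1 - rho) * np.eye(p) + rho * np.ones((p, p))    # equicorrelation matrix
fro2 = np.linalg.norm(R, 'fro') ** 2                 # = p + p(p-1)rho^2 = p + 2*C(p,2)*rho^2
tr_Dinv2 = np.trace(np.diag(1.0 / np.diag(R))) ** 2  # diag(R) = I, so this equals p^2
ratio_corr = tr_Dinv2 / fro2                         # bounded by a constant (roughly 1/rho^2)
ratio_uncorr = p ** 2 / p                            # same ratio when R = I: grows like p
```

The correlated ratio stays bounded as p grows, while the uncorrelated one grows linearly in p, which is exactly why condition (7) is easy to satisfy under strong correlation and impossible without it when p ≫ n.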
Conversely, it is possible to show that condition (7) requires non-trivial correlation. To see this,
first note that in the complete absence of correlation, we have |||R|||_F^2 = |||I_{p×p}|||_F^2 = p. Jensen's
inequality implies that tr(D_σ^{-1}) ≥ p^2 / tr(D_σ) = p^2 / tr(Σ), and so

( tr(Σ)/p )^2 ( tr(D_σ^{-1}) / |||R|||_F )^2 ≥ p.

Altogether, this shows if the data exhibits very low correlation, then (7) cannot hold when p grows faster than
n. This will be illustrated in the simulations of Section 4.
4  Performance comparisons on real and synthetic data
In this section, we compare our procedure to state-of-the-art methods on real and synthetic data,
illustrating the effects of the different factors involved in Theorems 2 and 3.
Comparison on synthetic data. In order to validate the consequences of our theory and compare
against other methods in a controlled fashion, we performed simulations in four settings: slow/fast
spectrum decay, and diagonal/random covariance structure. To consider two rates of spectrum decay,
we selected p equally spaced values between 0.01 and 1, and raised them to the power 20 for fast
decay and the power 5 for slow decay. Random covariance structure was generated by specifying
the eigenvectors of Σ as the column vectors of the orthogonal component of a QR decomposition of
a p × p matrix with i.i.d. N(0, 1) entries. In all cases, we sampled n_1 = n_2 = 50 data points from
two multivariate normal distributions in p = 200 dimensions, and repeated the process 500 times
with δ = 0 for H0, and 500 times with ‖δ‖_2 = 1 for H1. In the case of H1, δ was drawn uniformly
from the unit sphere, as in Theorems 2 and 3. We fixed the total amount of variance by setting
|||Σ|||_F = 50 in all cases. In addition to our random projection (RP)-based test, we implemented
the methods of BS [6], SD [7], and CQ [9], all of which are designed specifically for problem
(1) in the high-dimensional setting. For the sake of completeness, we also compare against recent
non-parametric procedures for the general two-sample problem that are based on kernel methods
(MMD) [11] and (KFDA) [12], as well as area-under-curve maximization (TreeRank) [10].
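The random covariance construction described above can be sketched as follows. The code is ours; the eigenvalue profile and the normalization |||Σ|||_F = 50 follow the text, while the function name and seeding are our assumptions.

```python
import numpy as np

def random_covariance(eigvals, seed=None):
    """Covariance with prescribed spectrum and eigenvectors taken as the
    orthogonal (Q) factor of a QR decomposition of an i.i.d. N(0,1) matrix."""
    rng = np.random.default_rng(seed)
    p = len(eigvals)
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    return Q @ np.diag(eigvals) @ Q.T

eigs = np.linspace(0.01, 1.0, 200) ** 20           # fast-decay spectrum (power 20)
Sigma = random_covariance(eigs, seed=0)
Sigma *= 50.0 / np.linalg.norm(Sigma, 'fro')       # fix total variance: |||Sigma|||_F = 50
```

Because the Frobenius norm is invariant under the orthogonal change of basis, rescaling the matrix is equivalent to rescaling its spectrum, so the decay profile is preserved exactly.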
The ROC curves from our simulations are displayed in the left block of four panels in Figure 1.
These curves bear out the results of Theorems 2 and 3 in several ways. First notice that fast spectral
decay improves the performance of our test relative to CQ, as expected from Theorem 2. If we set
a = 0 and ε_1 = 1 in Theorem 2, then condition (6) for outperforming CQ is approximately n ≥ 75
in the case of fast decay. Given that n = 50 + 50 − 2 = 98, the advantage of our method over CQ
in panels (b) and (d) is consistent with condition (6) being satisfied. In the case of slow decay, the
same settings of a and ε_1 indicate that n ≥ 246 is sufficient for outperforming CQ. Since the ROC
curve of our method is roughly the same as that of CQ in panels (a) and (c) (where again n = 98),
our condition (6) is somewhat conservative for slow decay at the finite sample level.
To study the consequences of Theorem 3, observe that when the covariance matrix Σ is generated
randomly, the amount of correlation is much larger than in the idealized case that Σ is diagonal.
Specifically, for a fixed value of tr(Σ), the quantity tr(D_σ^{-1}) / |||R|||_F is much smaller in the
presence of correlation. Consequently, when comparing (a) with (c), and (b) with (d), we see that
correlation improves the performance of our test relative to SD, as expected from the bound in
Theorem 3. More generally, the ROC curves illustrate that our method has an overall advantage
over BS, CQ, KFDA, and MMD. Note that KFDA and MMD are not designed specifically for the
n ≪ p regime. In the case of zero correlation, it is notable that the TreeRank procedure displays a
superior ROC curve to our method, given that it also employs a dimension reduction strategy.
Figure 1: Left and middle panels: ROC curves of several test statistics for two different choices of
correlation structure and decay rate. (a) Diagonal covariance slow decay, (b) Diagonal covariance
fast decay, (c) Random covariance slow decay, (d) Random covariance fast decay. Right panels: (e)
False positive rate against p-value threshold on the gene expression experiment of Section 4 for RP
(⋆), BS, CQ, SD and enrichment test, (f) zoom on the p-value < 0.1 region.
Comparison on high-dimensional gene expression data. The ability to identify gene sets having
different expression between two types of conditions, e.g., benign and malignant forms of a disease,
is of great value in many areas of biomedical research. Likewise, there is considerable motivation to
study our procedure in the context of detecting differential expression of p genes between two small
groups of patients of sizes n1 and n2 .
To compare the performance of our T_k^2 statistic against competitors CQ and SD in this type of application, we constructed a collection of 1680 distinct two-sample problems in the following manner,
using data from three genomic studies of ovarian [21], myeloma [22] and colorectal [23] cancers.
First, we randomly split the 3 datasets respectively into 6, 4, and 6 groups of approximately 50
patients. Next, we considered pairwise comparisons between all sets of patients on each of 14
biologically meaningful gene sets from the canonical pathways of MSigDB [24], with each gene set
containing between 75 and 128 genes. Since n_1 ≈ n_2 ≈ 50 for all patient sets, our collection of two-sample problems is genuinely high-dimensional. Specifically, we have 14 × ((6 choose 2) + (4 choose 2) + (6 choose 2)) = 504
problems under H0 and 14 × (6·4 + 6·4 + 6·6) = 1176 problems under H1, assuming that every
gene set was differentially expressed between two sets of patients with different cancers, and that no
gene set was differentially expressed between two sets of patients with the same cancer type.4
A natural performance measure for comparing test statistics is the actual false positive rate (FPR)
as a function of the nominal level α. When testing at level α, the actual FPR should be as close to α
as possible, but differences may occur if the distribution of the test statistic under H0 is not known
exactly (as is the case in practice). Figure 1 (e) shows that the curve for our procedure is closer to
the optimal diagonal line for most values of α than the competing curves. Furthermore, the lower-left corner of Figure 1 (e) is of particular importance, as practitioners are usually only interested in
p-values lower than 10^{-1}. Figure 1 (f) is a zoomed plot of this region and shows that the SD and
CQ tests commit too many false positives at low thresholds. Again, in this regime, our procedure
is closer to the diagonal and safely commits fewer than the allowed number of false positives. For
example, when thresholding p-values at 0.01, SD has an actual FPR of 0.03, and an even more
excessive FPR of 0.02 when thresholding at 0.001. The tests of CQ and BS are no better. The same
thresholds on the p-values of our test lead to false positive rates of 0.008 and 0 respectively.
With consideration to ROC curves, the samples arising from different cancer types are dissimilar
enough that BS, CQ, SD, and our method all obtain perfect ROC curves (no H1 case has a larger p-value than any H0 case). We also note that the hypergeometric test-based (HG) enrichment analysis
often used by experimentalists on this problem [25] gives a suboptimal area-under-curve of 0.989.
5  Conclusion
We have proposed a novel testing procedure for the two-sample test of means in high dimensions.
This procedure can be implemented in a simple manner by first projecting a dataset with a single
randomly drawn matrix, and then applying the standard Hotelling T 2 test in the projected space. In
addition to obtaining the asymptotic power of this test, we have provided interpretable conditions on
the covariance matrix Σ for achieving greater power than competing tests in the sense of asymptotic
relative efficiency. Specifically, our theoretical comparisons show that our test is well suited to
interesting regimes where most of the variance in the data can be captured in a relatively small
number of variables, or where the variables are highly correlated. Furthermore, in the realistic case
of (n, p) = (98, 200), these regimes were shown to correspond to favorable performance of our test
against several competitors in ROC curve comparisons on simulated data. Finally, we showed on
real gene expression data that our procedure was more reliable than competitors in terms of its false
positive rate. Extensions of this work may include more refined applications of random projection
to high-dimensional testing problems.
Acknowledgements. The authors thank Sandrine Dudoit, Anne Biton, and Peter Bickel for helpful
discussions. MEL gratefully acknowledges the support of the DOE CSGF Fellowship under grant
number DE-FG02-97ER25308, and LJ the support of Stand Up to Cancer. MJW was partially
supported by NSF grant DMS-0907632.
⁴Although this assumption could be violated by the existence of various cancer subtypes, or technical differences between original tissue samples, our initial step of randomly splitting the three cancer datasets into subsets guards against these effects.
References
[1] E. L. Lehmann and J. P. Romano. Testing statistical hypotheses. Springer Texts in Statistics. Springer, New York, third edition, 2005.
[2] Y. Lu, P. Liu, P. Xiao, and H. Deng. Hotelling's T2 multivariate profiling for detecting differential expression in microarrays. Bioinformatics, 21(14):3105–3113, Jul 2005.
[3] J. J. Goeman and P. Bühlmann. Analyzing gene expression data in terms of gene sets: methodological issues. Bioinformatics, 23(8):980–987, Apr 2007.
[4] D. V. D. Ville, T. Blue, and M. Unser. Integrated wavelet processing and spatial statistical testing of fMRI data. Neuroimage, 23(4):1472–1485, 2004.
[5] U. Ruttimann et al. Statistical analysis of functional MRI data in the wavelet domain. IEEE Transactions on Medical Imaging, 17(2):142–154, 1998.
[6] Z. Bai and H. Saranadasa. Effect of high dimension: by an example of a two sample problem. Statistica Sinica, 6:311–329, 1996.
[7] M. S. Srivastava and M. Du. A test for the mean vector with fewer observations than the dimension. Journal of Multivariate Analysis, 99:386–402, 2008.
[8] M. S. Srivastava. A test for the mean with fewer observations than the dimension under non-normality. Journal of Multivariate Analysis, 100:518–532, 2009.
[9] S. X. Chen and Y. L. Qin. A two-sample test for high-dimensional data with applications to gene-set testing. Annals of Statistics, 38(2):808–835, Feb 2010.
[10] S. Clémençon, M. Depecker, and N. Vayatis. AUC optimization and the two-sample problem. In Advances in Neural Information Processing Systems (NIPS 2009), 2009.
[11] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, Cambridge, MA, 2007.
[12] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, NIPS. MIT Press, 2007.
[13] M. E. Lopes, L. J. Jacob, and M. J. Wainwright. A more powerful two-sample test in high dimensions using random projection. Technical Report arXiv:1108.2401, 2011.
[14] S. S. Vempala. The Random Projection Method. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, 2004.
[15] L. Jacob, P. Neuvial, and S. Dudoit. Gains in power from structured two-sample tests of means on graphs. Technical Report arXiv:q-bio/1009.5173v1, 2010.
[16] J. A. Cuesta-Albertos, E. Del Barrio, R. Fraiman, and C. Matrán. The random projection method in goodness of fit for functional data. Computational Statistics & Data Analysis, 51(10):4814–4831, 2007.
[17] R. J. Muirhead. Aspects of Multivariate Statistical Theory. John Wiley & Sons, Inc., 1982.
[18] A. W. van der Vaart. Asymptotic Statistics. Cambridge, 2007.
[19] A. P. Dempster. A high dimensional two sample significance test. Annals of Mathematical Statistics, 29(4):995–1010, 1958.
[20] A. P. Dempster. A significance test for the separation of two highly multivariate small samples. Biometrics, 16(1):41–50, 1960.
[21] R. W. Tothill et al. Novel molecular subtypes of serous and endometrioid ovarian cancer linked to clinical outcome. Clin Cancer Res, 14(16):5198–5208, Aug 2008.
[22] J. Moreaux et al. A high-risk signature for patients with multiple myeloma established from the molecular classification of human myeloma cell lines. Haematologica, 96(4):574–582, Apr 2011.
[23] R. N. Jorissen et al. Metastasis-associated gene expression changes predict poor outcomes in patients with Dukes stage B and C colorectal cancer. Clin Cancer Res, 15(24):7642–7651, Dec 2009.
[24] A. Subramanian et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. USA, 102(43):15545–15550, Oct 2005.
[25] T. Beissbarth and T. P. Speed. GOstat: find statistically overrepresented gene ontologies within a group of genes. Bioinformatics, 20(9):1464–1465, Jun 2004.
Beyond Spectral Clustering - Tight Relaxations of
Balanced Graph Cuts
Simon Setzer
Saarland University, Saarbrücken, Germany
[email protected]
Matthias Hein
Saarland University, Saarbrücken, Germany
[email protected]
Abstract
Spectral clustering is based on the spectral relaxation of the normalized/ratio graph
cut criterion. While the spectral relaxation is known to be loose, it has been shown
recently that a non-linear eigenproblem yields a tight relaxation of the Cheeger
cut. In this paper, we extend this result considerably by providing a characterization of all balanced graph cuts which allow for a tight relaxation. Although
the resulting optimization problems are non-convex and non-smooth, we provide
an efficient first-order scheme which scales to large graphs. Moreover, our approach comes with the quality guarantee that given any partition as initialization
the algorithm either outputs a better partition or it stops immediately.
1 Introduction
The problem of finding the best balanced cut of a graph is an important problem in computer science [9, 24, 13]. It has been used for minimizing the communication cost in parallel computing,
reordering of sparse matrices, image segmentation and clustering. In particular, in machine learning
spectral clustering is one of the most popular graph-based clustering methods as it can be applied
to any graph-based data or to data where similarity information is available so that one can build
a neighborhood graph. Spectral clustering is originally based on a relaxation of the combinatorial
normalized/ratio graph cut problem, see [28]. The relaxation with the best known worst case approximation guarantee yields a semi-definite program, see [3]. However, it is practically infeasible for
graphs with more than 100 vertices due to the presence of O(n³) constraints, where n is the number
of vertices in the graph. In contrast, the computation of eigenvectors of a sparse graph scales easily
to large graphs. In a line of recent work [6, 26, 14] it has been shown that relaxations based on the nonlinear graph p-Laplacian lead to similar runtime performance while providing much better cuts.
In particular, for p = 1 one obtains a tight relaxation of the Cheeger cut, see [8, 26, 14].
In this work, we generalize this result considerably. Namely, we provide for almost any balanced
graph cut problem a tight relaxation into a continuous problem. This allows flexible modeling of
different graph cut criteria. The resulting non-convex, non-smooth continuous optimization problem
can be efficiently solved by our new method for the minimization of ratios of differences of convex
functions, called RatioDCA. Moreover, compared to [14], we also provide a more efficient way
how to solve the resulting convex inner problems by transferring recent methods from total variation
denoising, cf. [7], to the graph setting. In first experiments, we illustrate the effect of different
balancing terms and show improved clustering results of USPS and MNIST compared to [14].
2 Set Functions, Submodularity, Convexity and the Lovasz Extension
In this section we gather some material from the literature on set functions, submodularity and the
Lovasz extension, which we need in the next section. We refer the reader to [11, 4] for a more
detailed exposition. We work on weighted, undirected graphs G = (V, W ) with vertex set V and
a symmetric, non-negative weight matrix W. We define n := |V| and denote by Ā = V \ A the complement of A in V. Set functions are denoted with a hat, Ŝ, whereas the corresponding Lovasz extension is simply S. The indicator vector of a set A is written as 1_A. In the following we always assume that for any considered set function Ŝ it holds Ŝ(∅) = 0. The Lovasz extension is a way to extend a set function from 2^V to ℝ^V.
Definition 2.1 Let Ŝ : 2^V → ℝ be a set function with Ŝ(∅) = 0. Let f ∈ ℝ^V be ordered in increasing order f_1 ≤ f_2 ≤ ... ≤ f_n and define C_i = {j ∈ V | f_j > f_i}, where C_0 = V. Then S : ℝ^V → ℝ given by

    S(f) = Σ_{i=1}^{n} f_i ( Ŝ(C_{i-1}) - Ŝ(C_i) ) = Σ_{i=1}^{n-1} Ŝ(C_i) (f_{i+1} - f_i) + f_1 Ŝ(V)

is called the Lovasz extension of Ŝ. Note that S(1_A) = Ŝ(A) for all A ⊆ V.
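As an added illustration (not part of the original paper), the telescoping formula above can be evaluated directly; the Python sketch below assumes a set-function interface taking frozensets, and checks the property S(1_A) = Ŝ(A) by brute force on a small ground set.

```python
from itertools import combinations

def lovasz_extension(S_hat, f):
    """Evaluate S(f) = sum_i f_i (S_hat(C_{i-1}) - S_hat(C_i)), where the C_i are
    obtained by removing the entries of f in increasing order (C_0 = V).
    Ties may be broken arbitrarily: the affected terms telescope."""
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])
    remaining = set(range(n))
    total = 0.0
    for i in order:
        C_prev = frozenset(remaining)
        remaining.discard(i)
        total += f[i] * (S_hat(C_prev) - S_hat(frozenset(remaining)))
    return total

if __name__ == "__main__":
    n = 4
    S_hat = lambda A: min(len(A), n - len(A))  # balancing term of the Cheeger cut
    # the Lovasz extension agrees with S_hat on all indicator vectors
    for r in range(n + 1):
        for A in combinations(range(n), r):
            ind = [1.0 if i in A else 0.0 for i in range(n)]
            assert abs(lovasz_extension(S_hat, ind) - S_hat(frozenset(A))) < 1e-9
```

On an indicator vector 1_A the nonzero terms telescope from Ŝ(A) down to Ŝ(∅) = 0, which is exactly why S(1_A) = Ŝ(A) holds for any set function with Ŝ(∅) = 0.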
Note that for symmetric set functions Ŝ, that is Ŝ(A) = Ŝ(Ā) for all A ⊆ V, the property Ŝ(∅) = 0 implies Ŝ(V) = 0. A particularly interesting class of set functions are the submodular set functions, as their Lovasz extension is convex.
Definition 2.2 A set function F̂ : 2^V → ℝ is submodular if for all A, B ⊆ V,

    F̂(A ∪ B) + F̂(A ∩ B) ≤ F̂(A) + F̂(B).

F̂ is called strictly submodular if the inequality is strict whenever neither A ⊆ B nor B ⊆ A.

Note that symmetric submodular set functions are always non-negative, as for all A ⊆ V,

    2 F̂(A) = F̂(A) + F̂(Ā) ≥ F̂(A ∪ Ā) + F̂(A ∩ Ā) = F̂(V) + F̂(∅) = 0.
An important class of set functions for clustering are cardinality-based set functions.
Proposition 2.1 ([4]) Let e ∈ ℝ^V_+ and let g : ℝ_+ → ℝ be a concave function; then F̂ : A ↦ g(s(A)) is submodular. If F̂ : A ↦ g(s(A)) is submodular for all s ∈ ℝ^V_+, then g is concave.
The following properties hold for the Lovasz extension.
Proposition 2.2 ([11, 4]) Let S : ℝ^V → ℝ be the Lovasz extension of Ŝ : 2^V → ℝ with Ŝ(∅) = 0. Then:

• Ŝ is submodular if and only if S is convex,
• S is positively one-homogeneous,
• S(f) ≥ 0 for all f ∈ ℝ^V and S(1) = 0 if and only if Ŝ(A) ≥ 0 for all A ⊆ V and Ŝ(V) = 0,
• S(f + α1) = S(f) for all f ∈ ℝ^V, α ∈ ℝ if and only if Ŝ(V) = 0,
• S is even if Ŝ is symmetric.
One might wonder if the Lovasz extension of all submodular set functions generates the set of all positively one-homogeneous convex functions. This is not the case, as already Lovasz [19] gave a counter-example. In the next section we will be interested in the class of positively one-homogeneous, even, convex functions S with S(f + α1) = S(f) for all f ∈ ℝ^V. From the above proposition we deduce that these properties are fulfilled for the Lovasz extension of any symmetric, submodular set function. However, also for this special class there exists a counter-example. Take

    S(f) = ‖ f - (1/|V|) ⟨f, 1⟩ 1 ‖_∞ .

It fulfills all the stated conditions but it induces the set function Ŝ(A) := S(1_A) given as

    Ŝ(A) = (1/|V|) max{|A|, |V \ A|}  if 0 < |A| < |V|,    Ŝ(A) = 0  else.

It is easy to check that this function is not submodular. Thus different convex one-homogeneous functions can induce the same set function via Ŝ(A) := S(1_A).
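The non-submodularity claimed here is easy to verify by brute force. The following Python sketch (an illustration added for this write-up, not from the paper) checks the inequality of Definition 2.2 for the Cheeger balancing term min{|A|, |Ā|} and for the max-based set function from the counter-example.

```python
from itertools import combinations

def is_submodular(S_hat, V):
    """Brute-force test of F(A|B) + F(A&B) <= F(A) + F(B) over all subset pairs."""
    subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(sorted(V), r)]
    return all(
        S_hat(A | B) + S_hat(A & B) <= S_hat(A) + S_hat(B) + 1e-12
        for A in subsets for B in subsets
    )

if __name__ == "__main__":
    V = frozenset(range(4))
    n = len(V)
    cheeger = lambda A: min(len(A), n - len(A))  # concave in |A|, so submodular (Prop. 2.1)
    counterexample = lambda A: max(len(A), n - len(A)) / n if 0 < len(A) < n else 0.0
    assert is_submodular(cheeger, V)
    # e.g. A = {0,1}, B = {1,2}: 3/4 + 3/4 on the left exceeds 1/2 + 1/2 on the right
    assert not is_submodular(counterexample, V)
```

The violating pair in the comment is one concrete witness; the brute-force check confirms no such pair exists for the Cheeger balancing term.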
It is known [15] that a large class of functions, e.g. every f ∈ C²(ℝⁿ), can be written as a difference of convex functions. As submodular functions correspond to convex functions in the sense of the Lovasz extension, one can ask if the same result holds for set functions: is every set function a difference of submodular set functions? The following result has been reported in [21]. As some properties assumed in the proof in [21] do not hold, we give an alternative constructive proof.

Proposition 2.3 Every set function Ŝ : 2^V → ℝ can be written as the difference of two submodular functions. The corresponding Lovasz extension S : ℝ^V → ℝ can be written as a difference of convex functions.
Note that the proof of Proposition 2.3 is constructive. Thus we can always find the decomposition
of the set function into a difference of two submodular functions and thus also the decomposition of
its Lovasz extension into a difference of convex functions.
3 Tight Relaxations of Balanced Graph Cuts
In graph-based clustering a popular criterion to partition the graph is to minimize the cut cut(A, Ā), defined as

    cut(A, Ā) = Σ_{i ∈ A, j ∈ Ā} w_ij,

where (w_ij) ∈ ℝ^{|V| × |V|} are the non-negative, symmetric weights of the undirected graph G = (V, W), usually interpreted as similarities of vertices i and j. Direct minimization of the cut leads typically to very unbalanced partitions, where often just a single vertex is split off. Therefore one has to introduce a balancing term which biases the criterion towards balanced partitions. Two popular balanced graph cut criteria are the Cheeger cut RCC(A, Ā) and the ratio cut RCut(A, Ā):

    RCC(A, Ā) = cut(A, Ā) / min{|A|, |Ā|},
    RCut(A, Ā) = |V| cut(A, Ā) / (|A| |Ā|) = cut(A, Ā) (1/|A| + 1/|Ā|).
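For concreteness (an added sketch, not from the paper), these two criteria can be computed for a small graph as follows; the example graph of two unit-weight triangles joined by a single edge is hypothetical.

```python
def cut_value(W, A):
    """cut(A, A_bar) = sum of weights of edges crossing the partition (W symmetric)."""
    n = len(W)
    return sum(W[i][j] for i in A for j in range(n) if j not in A)

def cheeger_cut(W, A):
    return cut_value(W, A) / min(len(A), len(W) - len(A))

def ratio_cut(W, A):
    n = len(W)
    return cut_value(W, A) * (1.0 / len(A) + 1.0 / (n - len(A)))

if __name__ == "__main__":
    # two unit-weight triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
    n = 6
    W = [[0.0] * n for _ in range(n)]
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        W[i][j] = W[j][i] = 1.0
    A = {0, 1, 2}
    assert cut_value(W, A) == 1.0
    assert abs(cheeger_cut(W, A) - 1.0 / 3.0) < 1e-12
    assert abs(ratio_cut(W, A) - 2.0 / 3.0) < 1e-12
```

Splitting the bridge edge gives cut = 1 and balanced sides of size 3, hence RCC = 1/3 and RCut = 2/3 on this toy graph.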
We consider later on also their normalized versions. Spectral clustering is derived as relaxation
of the ratio cut criterion based on the second eigenvector of the graph Laplacian. While the second eigenvector can be efficiently computed, it is well-known that this relaxation is far from being
tight. In particular there exist graphs where the spectral relaxation is as bad [12] as the isoperimetric
inequality suggests [1]. In a recent line of work [6, 26, 14] it has been shown that a tight relaxation for the Cheeger cut can be achieved by moving from the linear eigenproblem to a nonlinear
eigenproblem associated to the nonlinear graph 1-Laplacian [14].
In this work we generalize this result considerably by showing in Theorem 3.1 that a tight relaxation
exists for every balanced graph cut measure which is of the form cut divided by balancing term.
More precisely, let Ŝ : 2^V → ℝ be a symmetric non-negative set function. Then a balanced graph cut criterion φ : 2^V → ℝ_+ of a partition (A, Ā) has the form

    φ(A) := cut(A, Ā) / Ŝ(A).    (1)

As we consider undirected graphs, the cut is a symmetric set function and thus φ(A) = φ(Ā). In order to get a balanced graph cut, Ŝ is typically chosen as a function of |A| (or some other type of volume) which is monotonically increasing on [0, |V|/2]. The first part of the theorem showing the
equivalence of combinatorial and continuous problem is motivated by a result derived by Rothaus
in [25] in the context of isoperimetric inequalities on Riemannian manifolds. It has been transferred
to graphs by Tillich and independently by Houdre in [27, 16]. We generalize their result further so
that it now holds for all possible non-negative symmetric set functions. In order to establish the link
to the result of Rothaus, we first state the following characterization
Lemma 3.1 A function S : ℝ^V → ℝ is positively one-homogeneous, even, convex and S(f + α1) = S(f) for all f ∈ ℝ^V, α ∈ ℝ if and only if S(f) = sup_{u ∈ U} ⟨u, f⟩, where U ⊂ ℝⁿ is a closed symmetric convex set and ⟨u, 1⟩ = 0 for any u ∈ U.
Theorem 3.1 Let G = (V, E) be a finite, weighted undirected graph, let S : ℝ^V → ℝ, and let Ŝ : 2^V → ℝ be symmetric with Ŝ(∅) = 0. Then

    inf_{A ⊆ V} cut(A, Ā) / Ŝ(A)  =  inf_{f ∈ ℝ^V} ( (1/2) Σ_{i,j=1}^n w_ij |f_i - f_j| ) / S(f),

if either one of the following two conditions holds:

1. S is positively one-homogeneous, even, convex and S(f + α1) = S(f) for all f ∈ ℝ^V, α ∈ ℝ, and Ŝ is defined as Ŝ(A) := S(1_A) for all A ⊆ V.

2. S is the Lovasz extension of the non-negative, symmetric set function Ŝ with Ŝ(∅) = 0.

Let f ∈ ℝ^V and denote by C_t := {i ∈ V | f_i > t}; then it holds under both conditions

    min_{t ∈ ℝ} cut(C_t, C̄_t) / Ŝ(C_t)  ≤  ( (1/2) Σ_{i,j=1}^n w_ij |f_i - f_j| ) / S(f).
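The second part of the theorem suggests a rounding procedure: sweep over the level sets of f and keep the best one. A small Python sketch (added illustration; the graph and the choice Ŝ = min{|A|, |Ā|} are hypothetical example inputs):

```python
def cut_value(W, A):
    n = len(W)
    return sum(W[i][j] for i in A for j in range(n) if j not in A)

def optimal_threshold(W, S_hat, f):
    """Return the level set C_t = {i : f_i > t} minimizing cut(C_t, C_t_bar)/S_hat(C_t)."""
    n = len(f)
    best_val, best_set = float("inf"), None
    for t in sorted(set(f))[:-1]:  # thresholds between consecutive distinct values
        C = frozenset(i for i in range(n) if f[i] > t)
        denom = S_hat(C)
        if denom > 0 and cut_value(W, C) / denom < best_val:
            best_val, best_set = cut_value(W, C) / denom, C
    return best_val, best_set

if __name__ == "__main__":
    # two unit-weight triangles joined by the edge (2,3); Cheeger balancing term
    n = 6
    W = [[0.0] * n for _ in range(n)]
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        W[i][j] = W[j][i] = 1.0
    S_hat = lambda A: min(len(A), n - len(A))
    f = [0.9, 1.0, 0.8, -0.7, -1.0, -0.9]  # a smooth, indicator-like vector
    val, C = optimal_threshold(W, S_hat, f)
    assert C == frozenset({0, 1, 2})
    assert abs(val - 1.0 / 3.0) < 1e-12
```

Thresholding at t between the two sign groups recovers the triangle partition, whose Cheeger value 1/3 lower-bounds the continuous ratio at any such f, as the theorem states.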
Theorem 3.1 can be generalized by replacing the cut with an arbitrary other set function. However,
the emphasis of this paper is to use the new degree of freedom for balanced graph clustering. The
more general approach will be discussed elsewhere. Note that the first condition in Theorem 3.1 implies that Ŝ is symmetric, as

    Ŝ(Ā) = S(1_Ā) = S(1 - 1_A) = S(-1_A) = S(1_A) = Ŝ(A).

Moreover, Ŝ is non-negative with Ŝ(∅) = Ŝ(V) = 0, as S is even, convex and positively one-homogeneous. For the second condition note that by Proposition 2.3 the Lovasz extension of any set function can be written as a difference of convex (d.c.) functions. As the total variation term in the numerator is convex, we thus have to minimize a ratio of a convex and a d.c. function. The efficient minimization of such problems will be the topic of the next section.
We would like to point out a related line of work for the case where the balancing term Ŝ is submodular and the balanced graph cut measure is directly optimized using submodular minimization techniques. In [23] this idea is proposed for the ratio cut and subsequently generalized [22, 17] so that every submodular balancing function Ŝ can be used. While the general framework is appealing, it is unclear if the minimization can be done efficiently. Moreover, note that Theorem 3.1 goes well beyond the case where Ŝ is submodular.
3.1 Examples of Balancing Set Functions
Theorem 3.1 opens up new modeling possibilities for clustering based on balanced graph cuts. We
discuss in the experiments differences and properties of the individual balancing terms. However,
it is out of the scope of this paper to answer the question which balancing term is the ?best?. An
answer to such a question is likely to be application-dependent. However, for a given random graph
model it might be possible to suggest a suitable balancing term given one knows how cut and volume
behave. A first step in this direction has been done in [20] where the limit of cut and volume has
been discussed for different neighborhood graph types.
In the following we assume that we work with graphs which have non-negative edge weights W = (w_ij) and non-negative vertex weights e : V → ℝ_+. The volume vol(A) of a set A ⊆ V is defined as vol(A) = Σ_{i ∈ A} e_i. The volume reduces to the cardinality if e_i = 1 for all i ∈ V (unnormalized case), or to the volume considered in the normalized cut, vol(A) = Σ_{i ∈ A} d_i, for e_i = d_i for all i ∈ V (normalized case), where d_i is the degree of vertex i. We denote by E the diagonal matrix with E_ii = e_i, i = 1, ..., n. Using general vertex weights allows us to present the unnormalized and
normalized case in a unified framework. Moreover, general vertex weights allow more modeling
freedom e.g. one can give two different vertices very large vertex weights and so implicitly enforce
that they will be in different partitions.
Name               | Ŝ(A)                                                                      | S(f)
Cheeger p-cut      | (vol(A) vol(Ā))^{1/p} / (vol(A)^{1/(p-1)} + vol(Ā)^{1/(p-1)})^{(p-1)/p}   | (Σ_{i=1}^n e_i |f_i - wmean_p(f)|^p)^{1/p}
Normalized p-cut   | (vol(A) vol(Ā)^p + vol(A)^p vol(Ā))^{1/p} / vol(V)                        | (Σ_{i=1}^n e_i |f_i - ⟨e, f⟩/vol(V)|^p)^{1/p}
Trunc. Cheeger cut | min{vol(A), vol(Ā), α vol(V)}                                             | gmax,α(f) - gmin,α(f)
Hard balanced cut  | 1 if min{|A|, |Ā|} ≥ K; 0 else                                            | gmax,K/|V|(f) - gmin,K/|V|(f) - gmax,(K-1)/|V|(f) + gmin,(K-1)/|V|(f)
Hard Cheeger cut   | 0 if min{|A|, |Ā|} < K; min{|A|, |Ā|} - (K-1) else                        | ‖f - median(f) 1‖₁ - gmax,(K-1)/|V|(f) + gmin,(K-1)/|V|(f)

Table 1: Examples of balancing set functions and their continuous counterpart. For the hard balanced cut and the hard Cheeger cut we have unit vertex weights, that is e_i ≡ 1.
We report here the Lovasz extension of two important set functions which will be needed in the sequel. For that we define the functions gmax,α and gmin,α as

    gmax,α(f) = max { ⟨λ, f⟩ | 0 ≤ λ_i ≤ e_i, i = 1, ..., n,  Σ_{i=1}^n λ_i = α vol(V) },
    gmin,α(f) = min { ⟨λ, f⟩ | 0 ≤ λ_i ≤ e_i, i = 1, ..., n,  Σ_{i=1}^n λ_i = α vol(V) },

and the weighted p-mean wmean_p(f) is defined as wmean_p(f) = arg min_{a ∈ ℝ} Σ_{i=1}^n e_i |f_i - a|^p. Note that gmax,α is convex, whereas gmin,α is concave. Both functions can be easily evaluated by sorting the entries of f.
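These two linear programs over the box-with-budget polytope reduce to a greedy fill after sorting. A Python sketch (added illustration; the vertex-weight vector e and the value of α are example inputs):

```python
def g_max(f, e, alpha):
    """gmax_alpha(f) = max <lam, f> s.t. 0 <= lam_i <= e_i and sum lam_i = alpha*vol(V):
    fill the budget alpha*vol(V) greedily at the largest entries of f."""
    budget = alpha * sum(e)
    total = 0.0
    for fi, ei in sorted(zip(f, e), key=lambda p: -p[0]):
        take = min(ei, budget)
        total += take * fi
        budget -= take
        if budget <= 0:
            break
    return total

def g_min(f, e, alpha):
    # minimizing <lam, f> is maximizing <lam, -f>
    return -g_max([-fi for fi in f], e, alpha)

if __name__ == "__main__":
    f, e = [3.0, 1.0, 2.0], [1.0, 1.0, 1.0]
    # budget = (2/3) * vol(V) = 2: take the two largest (resp. smallest) entries
    assert abs(g_max(f, e, 2.0 / 3.0) - 5.0) < 1e-9
    assert abs(g_min(f, e, 2.0 / 3.0) - 3.0) < 1e-9
```

With unit vertex weights and budget 2 the greedy fill picks the entries 3 and 2 (resp. 1 and 2), so gmax = 5 and gmin = 3 on this example.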
Proposition 3.1 Let Ŝ : 2^V → ℝ, Ŝ(A) := min{vol(A), vol(Ā)}. Then the Lovasz extension S : ℝ^V → ℝ is given by S(f) = ‖E(f - wmean₁(f) 1)‖₁.

Let e_i = 1 for all i ∈ V and let Ŝ : 2^V → ℝ be given by Ŝ(A) := min{|A|, |Ā|} if min{|A|, |Ā|} ≤ K, and Ŝ(A) := K else. Then the Lovasz extension S : ℝ^V → ℝ is given as S(f) = gmax,K/|V|(f) - gmin,K/|V|(f).
In Table 1 we collect a set of interesting set functions enforcing different levels of balancing. For the Cheeger and normalized p-cut family and the truncated Cheeger cut, the functions S are convex and not necessarily the Lovasz extension of the induced set functions Ŝ (first case in Theorem 3.1). In the case of the hard balanced and hard Cheeger cut, the set function Ŝ is not submodular. However, in both cases we know an explicit decomposition of the set function Ŝ into a difference of submodular functions, and thus their Lovasz extension S can be written as a difference of convex functions. The derivations can be found in the supplementary material.
4 Minimization of Ratios of Non-negative Differences of Convex Functions
In [14], the problem of computing the optimal Cheeger cut partition is formulated as a nonlinear
eigenproblem. Hein and Bühler show that the second eigenvector of the nonlinear 1-graph Laplacian
is equal to the indicator function of the optimal partition. In Theorem 3.1, we have generalized this
relation considerably. In this section, we discuss the efficient computation of critical points of the
continuous ratios of Theorem 3.1. We propose a general scheme called RatioDCA for minimizing
ratios of non-negative differences of convex functions and thus generalizes Algorithm 1 of [14]
which could handle only ratios of convex functions. As the optimization problem is non-smooth and
non-convex, only convergence to critical points can be guaranteed. However, we will show that for
every balanced graph cut criterion our algorithm improves a given partition or it terminates directly.
Note that such types of algorithms have been considered for specific graph cut criteria [23, 22, 2].
Figure 1: Left: Illustration of different balancing functions (rescaled so that they attain value |V|/2 at |V|/2). Right: Log-log plot of the duality gap of the inner problem vs. the number of iterations of PDHG (dashed) and FISTA (solid) in outer iterations 3 (black), 5 (blue) and 7 (red) of RatioDCA, corresponding to increasing difficulty of the problem. PDHG significantly outperforms FISTA.
4.1 General Scheme
The continuous optimization problem in Theorem 3.1 has the form

    min_{f ∈ ℝ^V} ( (1/2) Σ_{i,j=1}^n w_ij |f_i - f_j| ) / S(f),    (2)

where S is one-homogeneous and either convex or the Lovasz extension of a non-negative symmetric set function. By Proposition 2.3 the Lovasz extension of any set function can be written as a difference of one-homogeneous convex functions. Using the fourth property of Proposition 2.2, the Lovasz extension S is non-negative, that is S(f) ≥ 0 for all f ∈ ℝ^V. With the algorithm RatioDCA below, we provide a general scheme for the minimization of a ratio F(f) := R(f)/S(f), where R and S are non-negative and one-homogeneous and each can be written as a difference of convex functions: R(f) = R₁(f) - R₂(f) and S(f) = S₁(f) - S₂(f) with R₁, R₂, S₁, S₂ being convex.

Algorithm RatioDCA - Minimization of a non-negative ratio of 1-homogeneous d.c. functions
1: Initialization: f^0 = random with ‖f^0‖₂ = 1, λ^0 = F(f^0)
2: repeat
3:   s₁(f^k) ∈ ∂S₁(f^k), r₂(f^k) ∈ ∂R₂(f^k)
4:   f^{k+1} = arg min_{‖u‖₂ ≤ 1} { R₁(u) - ⟨u, r₂(f^k)⟩ + λ^k ( S₂(u) - ⟨u, s₁(f^k)⟩ ) }
5:   λ^{k+1} = ( R₁(f^{k+1}) - R₂(f^{k+1}) ) / ( S₁(f^{k+1}) - S₂(f^{k+1}) )
6: until |λ^{k+1} - λ^k| / λ^k < ε
7: Output: eigenvalue λ^{k+1} and eigenvector f^{k+1}.

In our setting, R(f) = R₁(f) = (1/2) Σ_{i,j=1}^n w_ij |f_i - f_j|. We refer to the convex optimization problem which has to be solved at each step in RatioDCA (line 4) as the inner problem.
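To make the scheme concrete, here is a deliberately small Python sketch (added for illustration, not the authors' implementation): R₁ is the total variation on a 4-vertex path, S₁ = Σᵢ |fᵢ - median(f)| is the Cheeger balancing extension, R₂ = S₂ = 0, and the inner problem is only approximated over a fixed candidate set that always contains the (normalized) current iterate, which is enough to preserve the monotone decrease of λ.

```python
def tv(f, edges):
    """R1(f) = (1/2) sum_ij w_ij |f_i - f_j| for unit weights, summed once per edge."""
    return sum(abs(f[i] - f[j]) for i, j in edges)

def median(f):
    s = sorted(f)
    return 0.5 * (s[(len(s) - 1) // 2] + s[len(s) // 2])

def balance(f):
    """S1(f) = sum_i |f_i - median(f)|, Lovasz extension of min{|A|, |A_bar|}."""
    m = median(f)
    return sum(abs(x - m) for x in f)

def subgrad_balance(f):
    # a subgradient of S1; valid here because equally many entries of our iterates
    # lie above and below the median (a simplifying assumption of this sketch)
    m = median(f)
    return [1.0 if x > m else (-1.0 if x < m else 0.0) for x in f]

def ratio_dca(f0, edges, candidates, max_iter=20):
    norm = lambda u: sum(x * x for x in u) ** 0.5
    f = f0[:]
    lams = [tv(f, edges) / balance(f)]
    for _ in range(max_iter):
        lam, s1 = lams[-1], subgrad_balance(f)
        # inner problem: min_{||u||<=1} R1(u) - lam*<u, s1(f^k)>, approximated over
        # a candidate pool containing the normalized current iterate
        pool = candidates + [[x / norm(f) for x in f]]
        f = min(pool, key=lambda u: tv(u, edges) - lam * sum(a * b for a, b in zip(u, s1)))
        lams.append(tv(f, edges) / balance(f))
        if abs(lams[-1] - lams[-2]) < 1e-12:
            break
    return f, lams

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3)]      # unit-weight path graph
    u_star = [0.5, 0.5, -0.5, -0.5]       # indicator-like split of the middle edge
    f, lams = ratio_dca([1.0, 0.8, -0.9, -1.0], edges, [u_star, [-x for x in u_star]])
    assert all(a >= b - 1e-12 for a, b in zip(lams, lams[1:]))  # monotone decrease
    assert abs(lams[-1] - 0.5) < 1e-9     # optimal Cheeger value of the path
```

Because the pool contains the current iterate, the inner objective value 0 is always achievable, so any strictly better candidate strictly decreases λ, mirroring Proposition 4.1 below.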
Proposition 4.1 The sequence f^k produced by RatioDCA satisfies F(f^k) > F(f^{k+1}) for all k ≥ 0, or the sequence terminates.
The sequence F(f^k) is not only monotonically decreasing but converges to a generalized nonlinear eigenvector as introduced in [14].

Theorem 4.1 Each cluster point f* of the sequence f^k produced by the RatioDCA is a nonlinear eigenvector with eigenvalue λ* = R(f*)/S(f*) ∈ [0, F(f^0)] in the sense that it fulfills

    0 ∈ ∂R₁(f*) - ∂R₂(f*) - λ* ( ∂S₁(f*) - ∂S₂(f*) ).

If S₁ - S₂ is continuously differentiable at f*, then F has a critical point at f*.
In the balanced graph cut problem (2) we minimize implicitly over non-constant functions. Thus
it is important to guarantee that the RatioDCA for this particular problem always converges to a
non-constant vector.
Lemma 4.1 For every balanced graph cut problem, the RatioDCA converges to a non-constant f*, given that the initial vector f^0 is non-constant.
Now we are ready to state the following key property of our balanced graph clustering algorithm.
Theorem 4.2 Let (A, Ā) be a given partition of V and let S : ℝ^V → ℝ_+ satisfy one of the conditions stated in Theorem 3.1. If one uses as initialization of RatioDCA f^0 = 1_A, then either RatioDCA terminates after one step or it yields an f^1 which after optimal thresholding as in Theorem 3.1 gives a partition (B, B̄) which satisfies

    cut(B, B̄) / Ŝ(B)  <  cut(A, Ā) / Ŝ(A).
The above "improvement theorem" implies that we can use the result of any other graph partitioning
method as initialization. In particular, we can always improve the result of spectral clustering.
4.2 Solution of the Convex Inner Optimization Problems
The performance of RatioDCA depends heavily on how fast we can solve the corresponding inner problem. We propose to use a primal-dual algorithm for the inner problem and show experimentally that this approach yields faster convergence than the FISTA method of [5], which was applied in [14]. Let us restrict our attention to the case where R(f) = R₁(f) = (1/2) Σ_{i,j=1}^n w_ij |f_i - f_j| and S₂ = 0. In other words, we apply the RatioDCA algorithm to (2) with S = S₁, which is what we need, e.g., for the tight relaxations of the Cheeger cut, normalized cut and truncated Cheeger cut families. Hence, the inner problem of the RatioDCA algorithm (line 4) has the form

    f^{k+1} = arg min_{‖u‖₂ ≤ 1} { (1/2) Σ_{i,j=1}^n w_ij |u_i - u_j| - λ^k ⟨u, s₁(f^k)⟩ }.    (3)
Recently, Arrow-Hurwicz-type primal-dual algorithms have become popular, e.g., in image processing, to solve problems whose objective function consists of a sum of convex terms, cf., e.g., [10, 7]. We propose to use the following primal-dual algorithm of [7], where it is referred to as Algorithm 2. We call this method a primal-dual hybrid gradient algorithm (PDHG) here, since this term is used for similar algorithms in the literature. Note that the operator P_{‖·‖_∞ ≤ 1} in the first step is the componentwise projection onto the interval [-1, 1]. For the sake of readability, we define the linear operator B : ℝ^V → ℝ^E by Bu = ( w_ij (u_i - u_j) )_{i,j=1}^n and its transpose is then B^T α = ( Σ_{j=1}^n w_ij (α_ij - α_ji) )_{i=1}^n.
Algorithm PDHG - Solution of the inner problem of RatioDCA for (2) and S convex
1: Initialization: u^0, ū^0, α^0 = 0, and γ, σ₀, τ₀ > 0 with σ₀τ₀ ≤ 1/‖B‖₂²
2: repeat
3:   α^{l+1} = P_{‖·‖_∞ ≤ 1}( α^l + σ_l B ū^l )
4:   u^{l+1} = (1/(1 + γτ_l)) ( u^l - τ_l ( B^T α^{l+1} - 2 λ^k s₁(f^k) ) )
5:   θ_l = 1/√(1 + 2γτ_l),  τ_{l+1} = θ_l τ_l,  σ_{l+1} = σ_l / θ_l
6:   ū^{l+1} = u^{l+1} + θ_l ( u^{l+1} - u^l )
7: until duality gap < ε
8: Output: f^{k+1} = u^{l+1} / ‖u^{l+1}‖₂
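As a concrete (and deliberately simplified) illustration of this primal-dual approach, the Python sketch below runs a basic, non-accelerated variant (θ ≡ 1, fixed step sizes, and the ℓ₂-ball constraint handled by projection instead of the normalization in step 8) on a toy instance of the inner problem (3). The graph, s₁ and λ are hypothetical example inputs; this is not the authors' implementation.

```python
def pdhg_inner(edges, w, s, lam, n, iters=3000, sigma=0.5, tau=0.5):
    """Basic primal-dual hybrid gradient for
    min_{||u||_2 <= 1} sum_e w_e |u_i - u_j| - lam*<u, s>.
    The dual variable alpha lives on the edges; P_{||.||_inf<=1} is a clip."""
    u = [0.0] * n
    ubar = u[:]
    alpha = [0.0] * len(edges)
    for _ in range(iters):
        # dual ascent step + projection onto the l_inf ball
        for k, (i, j) in enumerate(edges):
            a = alpha[k] + sigma * w[k] * (ubar[i] - ubar[j])
            alpha[k] = max(-1.0, min(1.0, a))
        # primal descent step + projection onto the l_2 ball
        u_old = u[:]
        grad = [-lam * si for si in s]
        for k, (i, j) in enumerate(edges):
            grad[i] += w[k] * alpha[k]
            grad[j] -= w[k] * alpha[k]
        u = [x - tau * g for x, g in zip(u_old, grad)]
        nrm = sum(x * x for x in u) ** 0.5
        if nrm > 1.0:
            u = [x / nrm for x in u]
        ubar = [2.0 * x - y for x, y in zip(u, u_old)]  # over-relaxation, theta = 1
    return u

if __name__ == "__main__":
    edges, w = [(0, 1), (1, 2), (2, 3)], [1.0, 1.0, 1.0]  # unit-weight path graph
    s, lam = [1.0, 1.0, -1.0, -1.0], 0.54
    u = pdhg_inner(edges, w, s, lam, n=4)
    obj = sum(w[k] * abs(u[i] - u[j]) for k, (i, j) in enumerate(edges)) \
        - lam * sum(a * b for a, b in zip(u, s))
    assert sum(x * x for x in u) <= 1.0 + 1e-9  # feasible
    assert obj < -0.05                          # clearly better than u = 0
```

The step sizes satisfy στ‖B‖₂² ≤ 1 for this path graph, the standard condition for convergence of this scheme.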
Although PDHG and FISTA have the same guaranteed convergence rate of O(1/l²), our experiments show that for clustering applications PDHG can outperform FISTA substantially. In Fig. 1, we illustrate this difference on a toy problem. Note that a single step takes about the same computation time for both algorithms, so that the number of iterations is a valid criterion for comparison. In the supplementary material, we also consider the inner problem of RatioDCA for the tight relaxation of the hard balanced cut. Although in this case we have to deal with S₂ ≠ 0 in the inner problem of RatioDCA, we can derive a similar PDHG method since the objective function is still a sum of convex terms.
5 Experiments
In a first experiment, we study the influence of the different balancing criteria on the obtained clustering. The data is a Gaussian mixture in ℝ²⁰ where the projection onto the first two dimensions is shown in Figure 2; the remaining 18 dimensions are just noise. The distribution of the 2000 points is [1200, 600, 200]. A symmetric k-NN graph with k = 20 is built with Gaussian weights

    w(x, y) = exp( -2 ‖x - y‖² / max{σ²_{x,k}, σ²_{y,k}} ),

where σ_{x,k} is the k-NN distance of point x. For better interpretation, we report
Figure 2: From left to right: Cheeger 1-cut, Normalized 1-cut, truncated Cheeger cut (TCC), hard
balanced cut (HBC), hard Cheeger cut (HCC). The criteria are the normalized ones, i.e., the vertex
weights are ei = di .
all resulting partitions with respect to all balanced graph cut criteria, cut and the size of the largest
component in the following table. The parameter for truncated, hard Cheeger cut and hard balanced
cut is set to K = 200. One observes that the normalized 1-cut results in a less balanced partition but
with a much smaller cut than the Cheeger 1-cut, which is itself less balanced than the hard Cheeger
cut. The latter is fully balanced but has an even higher cut. The truncated Cheeger cut has a smaller
cut than the hard balanced cut but its partition is not feasible. Note that the hard balanced cut is
similar to the normalized 1-cut but achieves a smaller cut at the price of a larger maximal component. Thus, the example nicely shows how the different balancing criteria influence the final partition.
Criterion \ Obj. | Cut   | max{|A|, |Ā|} | Ch. 1-cut | N. 1-cut | TCC200 | HBC200 | HCC200
Cheeger 1-cut    | 408.4 | 1301          | 0.099     | 0.079    | 2.042  | 408.4  | 0.817
Norm. 1-cut      | 178.3 | 1775          | 0.132     | 0.075    | 0.892  | 178.3  | 6.858
Trunc. Ch. cut   | 153.6 | 1945          | 0.513     | 0.263    | 0.768  | -      | -
Hard bal. cut    | 175.4 | 1785          | 0.134     | 0.076    | 0.877  | 175.4  | 10.96
Hard Ch. cut     | 639.2 | 1000          | 0.119     | 0.115    | 3.196  | 639.2  | 0.798
Next we perform unnormalized 1-spectral clustering on the full USPS, normal and extended¹ MNIST datasets (resp. 9298, 70000 and 630000 points) in the same setting as in [14] with no vertex weights, that is e_i = 1 for all i ∈ V. As clustering criterion for multi-partitioning we use the multicut version of the normalized 1-cut, given as

    RCut(C₁, ..., C_M) = Σ_{i=1}^M cut(C_i, C̄_i) / |C_i|.

We
successively subdivide clusters until the desired number of clusters (M = 10) is reached. This recursive partitioning scheme is used for all methods. In [14] the Cheeger 1-cut has been used which
is not compatible with the multi-cut criterion. We expect that using the normalized 1-cut for the
bipartitioning steps we should get better results. The results of the other methods for USPS and
MNIST (normal) are taken from [14]. Each bipartitioning step is initialized randomly. Out of 100
obtained multi-partitionings we report the results of the best clustering with respect to the multi-cut
criterion. The next table shows the obtained RCut and errors.
Dataset (Vertices/Edges)    |       | N. 1-cut | Ch. 1-cut [14] | S.&B. [26] | 1.1-SCl [6] | Standard spectral
USPS (9K/272K)              | RCut  | 0.6629   | 0.6661         | 0.6663     | 0.6676      | 0.8180
                            | Error | 0.1301   | 0.1349         | 0.1309     | 0.1308      | 0.1686
MNIST (Normal) (70K/1043K)  | RCut  | 0.1499   | 0.1507         | 0.1545     | 0.1529      | 0.2252
                            | Error | 0.1236   | 0.1244         | 0.1318     | 0.1293      | 0.1883
MNIST (Ext) (630K/9192K)    | RCut  | 0.0996   | 0.0997         | -          | -           | 0.1594
                            | Error | 0.1180   | 0.1223         | -          | -           | 0.2297
We see for all datasets improvements in the obtained cut. Also a slight decrease in the obtained
error can be observed. The improvements are not so drastic as the clustering is already very good.
The problem is that for both datasets one digit is split (0) and two are merged (4 and 9) resulting in
seemingly large errors. Similar results hold for the extended MNIST dataset. Note that the resulting
error is comparable to recently reported results on semi-supervised learning [18].
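As a self-contained check (added illustration, not from the paper), the multicut criterion RCut(C₁, ..., C_M) = Σᵢ cut(Cᵢ, C̄ᵢ)/|Cᵢ| used above can be computed as follows; the two-triangle example graph is hypothetical.

```python
def cut_value(W, A):
    n = len(W)
    return sum(W[i][j] for i in A for j in range(n) if j not in A)

def multicut_rcut(W, clusters):
    """RCut(C_1, ..., C_M) = sum_i cut(C_i, complement) / |C_i|."""
    return sum(cut_value(W, C) / len(C) for C in clusters)

if __name__ == "__main__":
    # two unit-weight triangles joined by the edge (2,3)
    n = 6
    W = [[0.0] * n for _ in range(n)]
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        W[i][j] = W[j][i] = 1.0
    parts = [{0, 1, 2}, {3, 4, 5}]
    assert abs(multicut_rcut(W, parts) - 2.0 / 3.0) < 1e-12
```

For a bipartition this reduces to the two-set RCut of Section 3, which is why recursive bipartitioning with the normalized 1-cut directly targets this multicut objective.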
¹ The extended MNIST dataset is generated by translating each original input image of MNIST by one pixel (8 directions).
References
[1] N. Alon and V. D. Milman. λ1, isoperimetric inequalities for graphs, and superconcentrators. J. Combin. Theory Ser. B, 38(1):73-88, 1985.
[2] R. Andersen and K. Lang. An algorithm for improving graph partitions. In Proc. of the 19th ACM-SIAM Symposium on Discrete Algorithms (SODA 2008), pages 651-660, 2008.
[3] S. Arora, J. R. Lee, and A. Naor. Expander flows, geometric embeddings and graph partitioning. In Proc. 36th Annual ACM Symp. on Theory of Computing (STOC), pages 222-231. ACM, 2004.
[4] F. Bach. Convex analysis and optimization with submodular functions, 2010. arXiv:1010.4207v2.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sciences, 2:183-202, 2009.
[6] T. Bühler and M. Hein. Spectral clustering based on the graph p-Laplacian. In L. Bottou and M. Littman, editors, Proc. of the 26th Int. Conf. on Machine Learning (ICML), pages 81-88. Omnipress, 2009.
[7] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120-145, 2011.
[8] F. Chung. Spectral Graph Theory. AMS, Providence, RI, 1997.
[9] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Develop., 17:420-425, 1973.
[10] E. Esser, X. Zhang, and T. F. Chan. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sciences, 3(4):1015-1046, 2010.
[11] S. Fujishige. Submodular Functions and Optimization, volume 58 of Annals of Discrete Mathematics. Elsevier B. V., Amsterdam, second edition, 2005.
[12] S. Guattery and G. L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19:701-719, 1998.
[13] L. Hagen and A. B. Kahng. Fast spectral methods for ratio cut partitioning and clustering. In Proc. IEEE Intl. Conf. on Computer-Aided Design, pages 10-13, November 1991.
[14] M. Hein and T. Bühler. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In Advances in Neural Information Processing Systems 23 (NIPS 2010), pages 847-855, 2010.
[15] J.-B. Hiriart-Urruty. Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In Convexity and Duality in Optimization, pages 37-70. 1985.
[16] C. Houdré. Mixed and isoperimetric estimates on the log-Sobolev constants of graphs and Markov chains. Combinatorica, 21:489-513, 2001.
[17] Y. Kawahara, K. Nagano, and Y. Okamoto. Submodular fractional programming for balanced clustering. Pattern Recognition Letters, 32:235-243, 2011.
[18] W. Liu, J. He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. In Proc. of the 27th Int. Conf. on Machine Learning (ICML), 2010.
[19] L. Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art (Bonn, 1982), pages 235-257. Springer, Berlin, 1983.
[20] M. Maier, U. von Luxburg, and M. Hein. Influence of graph construction on graph-based clustering measures. In Advances in Neural Information Processing Systems 21 (NIPS), pages 1025-1032, 2009.
[21] M. Narasimhan and J. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
[22] M. Narasimhan and J. Bilmes. Local search for balanced submodular clusterings. In 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[23] S. B. Patkar and H. Narayanan. Improving graph partitions using submodular functions. Discrete Appl. Math., 131(2):535-553, 2003.
[24] A. Pothen, H. D. Simon, and K.-P. Liou. Partitioning sparse matrices with eigenvectors of graphs. SIAM Journal on Matrix Analysis and Applications, 11:430-452, 1990.
[25] O. S. Rothaus. Analytic inequalities, isoperimetric inequalities and logarithmic Sobolev inequalities. Journal of Functional Analysis, 64:296-313, 1985.
[26] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In Proceedings of the 27th International Conference on Machine Learning, pages 1039-1046. Omnipress, 2010.
[27] J.-P. Tillich. Edge isoperimetric inequalities for product graphs. Discrete Mathematics, 213:291-320, 2000.
[28] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007.
exist:1 tutorial:1 fulfilled:1 blue:1 discrete:4 vol:24 key:1 imaging:5 graph:60 relaxation:18 sum:2 luxburg:2 inverse:2 letter:1 fourth:1 uncertainty:1 soda:1 almost:1 reader:1 family:2 sobolev:2 comparable:1 bound:1 ct:4 guaranteed:2 milman:1 annual:1 constraint:1 precisely:1 n3:1 ri:1 sake:1 generates:1 bonn:1 min:10 transferred:1 terminates:3 smaller:3 wi:1 appealing:1 s1:11 den:1 taken:1 discus:2 loose:1 needed:1 know:2 urruty:1 scl:1 drastic:1 liou:1 available:1 generalizes:1 apply:1 v2:1 spectral:17 enforce:1 alternative:1 subdivide:1 hat:1 original:1 clustering:27 cf:2 remaining:1 guattery:1 k1:2 build:1 establish:1 uj:1 objective:2 already:2 question:2 diagonal:1 unclear:1 gradient:1 distance:1 link:1 berlin:1 outer:1 topic:1 manifold:1 enforcing:1 illustration:1 ratio:14 providing:2 minimizing:2 balance:1 stoc:1 negative:15 stated:2 design:1 kahng:1 perform:1 datasets:3 markov:1 finite:1 november:1 behave:1 truncated:5 extended:2 communication:1 rn:2 arbitrary:1 introduced:1 complement:1 namely:1 optimized:1 componentwise:2 saarbr:2 nip:2 beyond:2 usually:1 below:1 pattern:1 program:1 built:1 max:4 pdhg:7 suitable:1 critical:3 difficulty:1 hybrid:1 power:1 indicator:2 scheme:5 improve:1 arora:1 ready:1 literature:2 l2:1 geometric:1 kf:1 reordering:1 fully:1 expect:1 mixed:1 interesting:2 degree:2 gather:1 thresholding:2 editor:1 balancing:13 rcut:7 ibm:1 elsewhere:1 compatible:1 repeat:2 transpose:1 infeasible:1 bias:1 allow:2 sparse:4 dimension:2 valid:1 far:1 obtains:1 uni:2 implicitly:2 r20:1 dealing:1 uai:1 assumed:1 discriminative:1 continuous:6 iterative:1 search:1 table:4 improving:2 bottou:1 necessarily:1 separator:1 pk:2 arrow:1 s2:8 noise:1 edition:1 positively:6 fig:1 referred:1 kbk22:1 explicit:1 theorem:16 kuk2:2 bad:1 specific:1 showing:2 r2:7 exists:2 mnist:8 ci:4 kx:1 sorting:1 gap:2 logarithmic:1 simply:1 likely:1 amsterdam:1 ordered:1 chang:1 springer:1 ch:4 gary:1 satisfies:2 acm:3 formulated:1 exposition:1 towards:1 feasible:1 hard:16 
fista:5 experimentally:1 aided:1 denoising:1 lemma:2 called:4 total:3 duality:4 combinatorica:1 latter:1 fulfills:2 unbalanced:1 constructive:2 |
Online Learning: Stochastic, Constrained, and
Smoothed Adversaries
Alexander Rakhlin
Department of Statistics
University of Pennsylvania
[email protected]
Karthik Sridharan
Toyota Technological Institute at Chicago
[email protected]
Ambuj Tewari
Computer Science Department
University of Texas at Austin
[email protected]
Abstract
Learning theory has largely focused on two main learning scenarios: the classical
statistical setting where instances are drawn i.i.d. from a fixed distribution, and
the adversarial scenario wherein, at every time step, an adversarially chosen instance is revealed to the player. It can be argued that in the real world neither of
these assumptions is reasonable. We define the minimax value of a game where
the adversary is restricted in his moves, capturing stochastic and non-stochastic
assumptions on data. Building on the sequential symmetrization approach, we define a notion of distribution-dependent Rademacher complexity for the spectrum
of problems ranging from i.i.d. to worst-case. The bounds let us immediately
deduce variation-type bounds. We study a smoothed online learning scenario and
show that exponentially small amount of noise can make function classes with
infinite Littlestone dimension learnable.
1 Introduction
In the papers [1, 10, 11], an array of tools has been developed to study the minimax value of diverse
sequential problems under the worst-case assumption on Nature. In [10], many analogues of the
classical notions from statistical learning theory have been developed, and these have been extended
in [11] for performance measures well beyond the additive regret. The process of sequential symmetrization emerged as a key technique for dealing with complicated nested minimax expressions.
In the worst-case model, the developed tools give a unified treatment to such sequential problems as
regret minimization, calibration of forecasters, Blackwell's approachability, $\Phi$-regret, and more.
Learning theory has been so far focused predominantly on the i.i.d. and the worst-case learning
scenarios. Much less is known about learnability in-between these two extremes. In the present
paper, we make progress towards filling this gap by proposing a framework in which it is possible
to variously restrict the behavior of Nature. By restricting Nature to play i.i.d. sequences, the results
boil down to the classical notions of statistical learning in the supervised learning scenario. By not
placing any restrictions on Nature, we recover the worst-case results of [10]. Between these two
endpoints of the spectrum, particular assumptions on the adversary yield interesting bounds on the
minimax value of the associated problem. Once again, the sequential symmetrization technique
arises as the main tool for dealing with the minimax value, but the proofs require more care than in
the i.i.d. or completely adversarial settings.
Adapting the game-theoretic language, we will think of the learner and the adversary as the two
players of a zero-sum repeated game. The adversary's moves will be associated with "data", while the moves of the learner correspond to a function or a parameter. This point of view is not new: game-theoretic
minimax analysis has been at the heart of statistical decision theory for more than half a century
(see [3]). In fact, there is a well-developed theory of minimax estimation when restrictions are put
on either the choice of the adversary or the allowed estimators by the player. We are not aware of a
similar theory for sequential problems with non-i.i.d. data.
The main contribution of this paper is the development of tools for the analysis of online scenarios
where the adversary's moves are restricted in various ways. In addition to general theory, we
consider several interesting scenarios which can be captured by our framework. All proofs are
deferred to the appendix.
2 Value of the Game
Let $\mathcal{F}$ be a closed subset of a complete separable metric space, denoting the set of moves of the learner. Suppose the adversary chooses from the set $\mathcal{X}$. Consider the Online Learning Model, defined as a $T$-round interaction between the learner and the adversary: on round $t = 1, \ldots, T$, the learner chooses $f_t \in \mathcal{F}$, the adversary simultaneously picks $x_t \in \mathcal{X}$, and the learner suffers loss $f_t(x_t)$. The goal of the learner is to minimize regret, defined as
$$\sum_{t=1}^T f_t(x_t) - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t).$$
It is a standard fact that simultaneity of the choices can be formalized by the first player choosing a mixed strategy; the second player then picks an action based on this mixed strategy, but not on its realization. We therefore consider randomized learners who predict a distribution $q_t \in \mathcal{Q}$ on every round, where $\mathcal{Q}$ is the set of probability distributions on $\mathcal{F}$, assumed to be weakly compact. The set of probability distributions on $\mathcal{X}$ (mixed strategies of the adversary) is denoted by $\mathcal{P}$.
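To make the protocol concrete, here is a minimal simulation of the $T$-round interaction for a one-dimensional linear loss $f(x) = f \cdot x$, with regret measured against the best fixed decision in hindsight. The grid over $\mathcal{F}$, the follow-the-leader learner, and the i.i.d. adversary are illustrative stand-ins, not part of the paper's formal setup:

```python
import random

def play_game(learner, adversary, T, F_grid):
    """Run the T-round online learning protocol and return the regret."""
    history, learner_loss = [], 0.0
    for t in range(T):
        f_t = learner(history)     # learner's move, based on past adversary moves
        x_t = adversary(history)   # adversary's simultaneous move
        learner_loss += f_t * x_t  # linear loss f(x) = f * x
        history.append(x_t)
    # loss of the best fixed decision in hindsight, over a grid of F
    best_loss = min(sum(f * x for x in history) for f in F_grid)
    return learner_loss - best_loss

random.seed(0)
F_grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
# a follow-the-leader learner and an i.i.d. adversary, purely for illustration
ftl = lambda h: min(F_grid, key=lambda f: sum(f * x for x in h))
iid = lambda h: random.uniform(-1.0, 1.0)
regret = play_game(ftl, iid, T=200, F_grid=F_grid)
print(round(regret, 3))
```

Against an i.i.d. adversary this simple learner already accumulates regret far below the trivial $O(T)$ level, previewing the distribution-dependent bounds developed below.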
We would like to capture the fact that sequences (x1 , . . . , xT ) cannot be arbitrary. This is achieved
by defining restrictions on the adversary, that is, subsets of "allowed" distributions for each round.
These restrictions limit the scope of available mixed strategies for the adversary.
Definition 1. A restriction $\mathcal{P}_{1:T}$ on the adversary is a sequence $\mathcal{P}_1, \ldots, \mathcal{P}_T$ of mappings $\mathcal{P}_t : \mathcal{X}^{t-1} \to 2^{\mathcal{P}}$ such that $\mathcal{P}_t(x_{1:t-1})$ is a convex subset of $\mathcal{P}$ for any $x_{1:t-1} \in \mathcal{X}^{t-1}$.
Note that the restrictions depend on the past moves of the adversary, but not on those of the player.
We will write $\mathcal{P}_t$ instead of $\mathcal{P}_t(x_{1:t-1})$ when $x_{1:t-1}$ is clearly defined. Using the notion of restrictions, we can give names to several types of adversaries that we will study in this paper.
(1) A worst-case adversary is defined by vacuous restrictions $\mathcal{P}_t(x_{1:t-1}) = \mathcal{P}$. That is, any mixed strategy is available to the adversary, including any deterministic point distribution.
(2) A constrained adversary is defined by $\mathcal{P}_t(x_{1:t-1})$ being the set of all distributions supported on the set $\{x \in \mathcal{X} : C_t(x_1, \ldots, x_{t-1}, x) = 1\}$ for some deterministic binary-valued constraint $C_t$. The deterministic constraint can, for instance, ensure that the length of the path determined by the moves $x_1, \ldots, x_t$ stays below the allowed budget.
(3) A smoothed adversary picks the worst-case sequence which gets corrupted by i.i.d. noise. Equivalently, we can view this as restrictions on the adversary who chooses the "center" (or a parameter) of the noise distribution.
Using techniques developed in this paper, we can also study the following adversaries (omitted due to lack of space):
(4) A hybrid adversary in the supervised learning game picks the worst-case label $y_t$, but is forced to draw the $x_t$-variable from a fixed distribution [6].
(5) An i.i.d. adversary is defined by a time-invariant restriction $\mathcal{P}_t(x_{1:t-1}) = \{p\}$ for every $t$ and some $p \in \mathcal{P}$.
For the given restrictions $\mathcal{P}_{1:T}$, we define the value of the game as
$$\mathcal{V}_T(\mathcal{P}_{1:T}) = \inf_{q_1 \in \mathcal{Q}} \sup_{p_1 \in \mathcal{P}_1} \mathbb{E}_{f_1, x_1} \, \inf_{q_2 \in \mathcal{Q}} \sup_{p_2 \in \mathcal{P}_2} \mathbb{E}_{f_2, x_2} \cdots \inf_{q_T \in \mathcal{Q}} \sup_{p_T \in \mathcal{P}_T} \mathbb{E}_{f_T, x_T} \left[ \sum_{t=1}^T f_t(x_t) - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t) \right] \qquad (1)$$
where $f_t$ has distribution $q_t$ and $x_t$ has distribution $p_t$. As in [10], the adversary is adaptive, that is, chooses $p_t$ based on the history of moves $f_{1:t-1}$ and $x_{1:t-1}$. At this point, the only difference from the setup of [10] is in the restrictions $\mathcal{P}_t$ on the adversary. Because these restrictions might not allow point distributions, the suprema over the $p_t$ in (1) cannot be equivalently written as suprema over the $x_t$.
A word about the notation. In [10], the value of the game is written as $\mathcal{V}_T(\mathcal{F})$, signifying that the main object of study is $\mathcal{F}$. In [11], it is written as $\mathcal{V}_T(\Phi, \Phi_T)$ since the focus is on the complexity of the set of transformations $\Phi_T$ and the payoff mapping $\Phi$. In the present paper, the main focus is indeed on the restrictions on the adversary, justifying our choice $\mathcal{V}_T(\mathcal{P}_{1:T})$ for the notation.
The first step is to apply the minimax theorem. To this end, we verify the necessary conditions. Our assumption that $\mathcal{F}$ is a closed subset of a complete separable metric space implies that $\mathcal{Q}$ is tight, and Prokhorov's theorem states that compactness of $\mathcal{Q}$ under the weak topology is equivalent to tightness [14]. Compactness under the weak topology allows us to proceed as in [10]. Additionally, we require that the restriction sets are compact and convex.
Theorem 1. Let $\mathcal{F}$ and $\mathcal{X}$ be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let $\mathcal{P}_{1:T}$ be the restrictions, and assume that for any $x_{1:t-1}$, $\mathcal{P}_t(x_{1:t-1})$ satisfies the necessary conditions for the minimax theorem to hold. Then
$$\mathcal{V}_T(\mathcal{P}_{1:T}) = \sup_{p_1 \in \mathcal{P}_1} \mathbb{E}_{x_1 \sim p_1} \cdots \sup_{p_T \in \mathcal{P}_T} \mathbb{E}_{x_T \sim p_T} \left[ \sum_{t=1}^T \inf_{f_t \in \mathcal{F}} \mathbb{E}_{x_t \sim p_t}[f_t(x_t)] - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t) \right]. \qquad (2)$$
The nested sequence of suprema and expected values in Theorem 1 can be re-written succinctly as
$$\mathcal{V}_T(\mathcal{P}_{1:T}) = \sup_{p \in \mathbf{P}} \mathbb{E}_{x_1 \sim p_1} \mathbb{E}_{x_2 \sim p_2(\cdot|x_1)} \cdots \mathbb{E}_{x_T \sim p_T(\cdot|x_{1:T-1})} \left[ \sum_{t=1}^T \inf_{f_t \in \mathcal{F}} \mathbb{E}_{x_t \sim p_t}[f_t(x_t)] - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t) \right]$$
$$= \sup_{p \in \mathbf{P}} \mathbb{E} \left[ \sum_{t=1}^T \inf_{f_t \in \mathcal{F}} \mathbb{E}_{x_t \sim p_t}[f_t(x_t)] - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t) \right] \qquad (3)$$
where the supremum is over all joint distributions $p$ over sequences, such that $p$ satisfies the restrictions as described below. Given a joint distribution $p$ on sequences $(x_1, \ldots, x_T) \in \mathcal{X}^T$, we denote the associated conditional distributions by $p_t(\cdot|x_{1:t-1})$. We can think of the choice $p$ as a sequence of oblivious strategies $\{p_t : \mathcal{X}^{t-1} \to \mathcal{P}\}_{t=1}^T$, mapping the prefix $x_{1:t-1}$ to a conditional distribution $p_t(\cdot|x_{1:t-1}) \in \mathcal{P}_t(x_{1:t-1})$. We will indeed call $p$ a "joint distribution" or an "oblivious strategy" interchangeably. We say that a joint distribution $p$ satisfies the restrictions if for any $t$ and any $x_{1:t-1} \in \mathcal{X}^{t-1}$, $p_t(\cdot|x_{1:t-1}) \in \mathcal{P}_t(x_{1:t-1})$. The set of all joint distributions satisfying the restrictions is denoted by $\mathbf{P}$. We note that Theorem 1 cannot be deduced immediately from the analogous result in [10], as it is not clear how the restrictions on the adversary per each round come into play after applying the minimax theorem. Nevertheless, it is comforting that the restrictions directly translate into the set $\mathbf{P}$ of oblivious strategies satisfying the restrictions.
Before continuing with our goal of upper-bounding the value of the game, we state the following interesting facts.
Proposition 2. There is an oblivious minimax optimal strategy for the adversary, and there is a corresponding minimax optimal strategy for the player that does not depend on its own moves.
The latter statement of the proposition is folklore for worst-case learning, yet we have not seen a proof of it in the literature. The proposition holds for all online learning settings with legal restrictions $\mathcal{P}_{1:T}$, encompassing also the no-restrictions setting of worst-case online learning [10]. The result crucially relies on the fact that the objective is external regret.
3 Symmetrization and Random Averages
Theorem 1 is a useful representation of the value of the game. As the next step, we upper bound
it with an expression which is easier to study. Such an expression is obtained by introducing
Rademacher random variables. This process can be termed sequential symmetrization and has been
exploited in [1, 10, 11]. The restrictions Pt , however, make sequential symmetrization considerably
more involved than in the papers cited above. The main difficulty arises from the fact that the set $\mathcal{P}_t(x_{1:t-1})$ depends on the sequence $x_{1:t-1}$, and symmetrization (that is, replacement of $x_s$ with $x'_s$) has to be done with care as it affects this dependence. Roughly speaking, in the process of symmetrization, a tangent sequence $x'_1, x'_2, \ldots$ is introduced such that $x_t$ and $x'_t$ are independent and identically distributed given "the past". However, "the past" is itself an interleaving choice of the original sequence and the tangent sequence.
Define the "selector function" $\chi : \mathcal{X} \times \mathcal{X} \times \{\pm 1\} \to \mathcal{X}$ by $\chi(x, x', \epsilon) = x'$ if $\epsilon = 1$ and $\chi(x, x', \epsilon) = x$ if $\epsilon = -1$. When $x_t$ and $x'_t$ are understood from the context, we will use the shorthand $\chi_t(\epsilon) := \chi(x_t, x'_t, \epsilon)$. In other words, $\chi_t$ selects between $x_t$ and $x'_t$ depending on the sign of $\epsilon$. Throughout the paper, we deal with binary trees, which arise from symmetrization [10]. Given some set $\mathcal{Z}$, a $\mathcal{Z}$-valued tree of depth $T$ is a sequence $\mathbf{z} = (z_1, \ldots, z_T)$ of $T$ mappings $z_i : \{\pm 1\}^{i-1} \to \mathcal{Z}$. The $T$-tuple $\epsilon = (\epsilon_1, \ldots, \epsilon_T) \in \{\pm 1\}^T$ defines a path. For brevity, we write $z_t(\epsilon)$ instead of $z_t(\epsilon_{1:t-1})$.
Given a joint distribution $p$, consider the $(\mathcal{X} \times \mathcal{X})^{T-1} \to \mathcal{P}(\mathcal{X} \times \mathcal{X})$-valued probability tree $\rho = (\rho_1, \ldots, \rho_T)$ defined by
$$\rho_t(\epsilon_{1:t-1})\big((x_1, x'_1), \ldots, (x_{t-1}, x'_{t-1})\big) = \big(p_t(\cdot|\chi_1(\epsilon_1), \ldots, \chi_{t-1}(\epsilon_{t-1})),\; p_t(\cdot|\chi_1(\epsilon_1), \ldots, \chi_{t-1}(\epsilon_{t-1}))\big).$$
In other words, the values of the mappings $\rho_t(\epsilon)$ are products of conditional distributions, where conditioning is done with respect to a sequence made from the $x_s$ and $x'_s$ depending on the signs $\epsilon_s$. We note that the difficulty in intermixing the $x$ and $x'$ sequences does not arise in i.i.d. or worst-case symmetrization. However, in-between these extremes the notational complexity seems to be unavoidable if we are to employ symmetrization and obtain a version of Rademacher complexity.
As an example, consider the "left-most" path $\epsilon = -\mathbf{1}$ in a binary tree of depth $T$, where $\mathbf{1} = (1, \ldots, 1)$ is a $T$-dimensional vector of ones. Then all the selectors $\chi(x_t, x'_t, \epsilon_t)$ choose the sequence $x_1, \ldots, x_T$. The probability tree $\rho$ on the "left-most" path is, therefore, defined by the conditional distributions $p_t(\cdot|x_{1:t-1})$; on the path $\epsilon = \mathbf{1}$, the conditional distributions are $p_t(\cdot|x'_{1:t-1})$.
Slightly abusing the notation, we will write $\rho_t(\epsilon)\big((x_1, x'_1), \ldots, (x_{t-1}, x'_{t-1})\big)$ for the probability tree since $\rho_t$ clearly depends only on the prefix up to time $t-1$. Throughout the paper, it will be understood that the tree $\rho$ is obtained from $p$ as described above. Since all the conditional distributions of $p$ satisfy the restrictions, so do the corresponding distributions of the probability tree $\rho$. By saying that $\rho$ satisfies the restrictions we then mean that $p \in \mathbf{P}$.
Sampling of a pair of $\mathcal{X}$-valued trees from $\rho$, written as $(\mathbf{x}, \mathbf{x}') \sim \rho$, is defined as the following recursive process: for any $\epsilon \in \{\pm 1\}^T$, $(x_1(\epsilon), x'_1(\epsilon)) \sim \rho_1(\epsilon)$ and
$$(x_t(\epsilon), x'_t(\epsilon)) \sim \rho_t(\epsilon)\big((x_1(\epsilon), x'_1(\epsilon)), \ldots, (x_{t-1}(\epsilon), x'_{t-1}(\epsilon))\big) \quad \text{for } 2 \le t \le T. \qquad (4)$$
To gain a better understanding of the sampling process, consider the first few levels of the tree. The roots $x_1, x'_1$ of the trees $\mathbf{x}, \mathbf{x}'$ are sampled from $p_1$, the conditional distribution for $t = 1$ given by $p$. Next, say, $\epsilon_1 = +1$. Then the "right" children of $x_1$ and $x'_1$ are sampled via $x_2(+1), x'_2(+1) \sim p_2(\cdot|x'_1)$ since $\chi_1(+1)$ selects $x'_1$. On the other hand, the "left" children $x_2(-1), x'_2(-1)$ are both distributed according to $p_2(\cdot|x_1)$. Now, suppose $\epsilon_1 = +1$ and $\epsilon_2 = -1$. Then $x_3(+1, -1), x'_3(+1, -1)$ are both sampled from $p_3(\cdot|x'_1, x_2(+1))$.
The proof of Theorem 3 reveals why such an intricate conditional structure arises, and Proposition 5 below shows that this structure greatly simplifies for the i.i.d. and worst-case situations. Nevertheless, the process described above allows us to define a unified notion of Rademacher complexity for the spectrum of assumptions between the two extremes.
Definition 2. The distribution-dependent sequential Rademacher complexity of a function class $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ is defined as
$$\mathfrak{R}_T(\mathcal{F}, p) = \mathbb{E}_{(\mathbf{x}, \mathbf{x}') \sim \rho} \, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t f(x_t(\epsilon)) \right]$$
where $\epsilon = (\epsilon_1, \ldots, \epsilon_T)$ is a sequence of i.i.d. Rademacher random variables and $\rho$ is the probability tree associated with $p$.
We now prove an upper bound on the value $\mathcal{V}_T(\mathcal{P}_{1:T})$ of the game in terms of this distribution-dependent sequential Rademacher complexity. The result cannot be deduced directly from [10], and it greatly increases the scope of problems whose learnability can now be studied in a unified manner.
Theorem 3. The minimax value is bounded as
$$\mathcal{V}_T(\mathcal{P}_{1:T}) \le 2 \sup_{p \in \mathbf{P}} \mathfrak{R}_T(\mathcal{F}, p). \qquad (5)$$
More generally, for any measurable function $M_t$ such that $M_t(p, f, \mathbf{x}, \mathbf{x}', \epsilon) = M_t(p, f, \mathbf{x}', \mathbf{x}, -\epsilon)$,
$$\mathcal{V}_T(\mathcal{P}_{1:T}) \le 2 \sup_{p \in \mathbf{P}} \mathbb{E}_{(\mathbf{x}, \mathbf{x}') \sim \rho} \, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t \big(f(x_t(\epsilon)) - M_t(p, f, \mathbf{x}, \mathbf{x}', \epsilon)\big) \right].$$
The following corollary provides a natural "centered" version of the distribution-dependent Rademacher complexity. That is, the complexity can be measured by relative shifts in the adversarial moves.
Corollary 4. For the game with restrictions $\mathcal{P}_{1:T}$,
$$\mathcal{V}_T(\mathcal{P}_{1:T}) \le 2 \sup_{p \in \mathbf{P}} \mathbb{E}_{(\mathbf{x}, \mathbf{x}') \sim \rho} \, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t \big(f(x_t(\epsilon)) - \mathbb{E}_{t-1} f(x_t(\epsilon))\big) \right]$$
where $\mathbb{E}_{t-1}$ denotes the conditional expectation of $x_t(\epsilon)$.
Example 1. Suppose $\mathcal{F}$ is a unit ball in a Banach space and $f(x) = \langle f, x \rangle$. Then
$$\mathcal{V}_T(\mathcal{P}_{1:T}) \le 2 \sup_{p \in \mathbf{P}} \mathbb{E}_{(\mathbf{x}, \mathbf{x}') \sim \rho} \, \mathbb{E}_{\epsilon} \left\| \sum_{t=1}^T \epsilon_t \big(x_t(\epsilon) - \mathbb{E}_{t-1} x_t(\epsilon)\big) \right\|.$$
Suppose the adversary plays a simple random walk (e.g., $p_t(x|x_1, \ldots, x_{t-1}) = p_t(x|x_{t-1})$ is uniform on a unit sphere). For simplicity, suppose this is the only strategy allowed by the set $\mathbf{P}$. Then $x_t(\epsilon) - \mathbb{E}_{t-1} x_t(\epsilon)$ are independent increments when conditioned on the history. Further, the increments do not depend on $\epsilon_t$. Thus, $\mathcal{V}_T(\mathcal{P}_{1:T}) \le 2 \, \mathbb{E} \left\| \sum_{t=1}^T Y_t \right\|$, where $\{Y_t\}$ is the corresponding random walk.
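The last bound in Example 1 is easy to probe numerically: for unit increments, $\mathbb{E}\|\sum_{t=1}^T Y_t\|_2^2 = T$, so $\mathbb{E}\|\sum_{t=1}^T Y_t\|_2 \le \sqrt{T}$ and the resulting regret guarantee is $O(\sqrt{T})$ rather than linear in $T$. A Monte Carlo sketch (the dimension, trial count, and seed are arbitrary illustrative choices):

```python
import math
import random

def unit_sphere(d, rng):
    """Uniform point on the unit sphere in R^d (Gaussian normalization trick)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def walk_norm(T, d, rng):
    """Norm of the endpoint of a T-step random walk with unit increments."""
    s = [0.0] * d
    for _ in range(T):
        y = unit_sphere(d, rng)
        s = [a + b for a, b in zip(s, y)]
    return math.sqrt(sum(x * x for x in s))

rng = random.Random(0)
T, d, trials = 400, 3, 200
est = sum(walk_norm(T, d, rng) for _ in range(trials)) / trials
print(round(est, 2), round(math.sqrt(T), 2))  # estimate vs. the sqrt(T) level
```

The estimate concentrates at the $\sqrt{T}$ scale, in contrast with an unconstrained adversary who can force the sum of moves to grow linearly in $T$.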
We now show that the distribution-dependent sequential Rademacher complexity for i.i.d. data is precisely the classical Rademacher complexity, and further show that the distribution-dependent sequential Rademacher complexity is always upper bounded by the worst-case sequential Rademacher complexity defined in [10].
Proposition 5. First, consider the i.i.d. restrictions $\mathcal{P}_t = \{p\}$ for all $t$, where $p$ is some fixed distribution on $\mathcal{X}$, and let $\rho$ be the process associated with the joint distribution $p = p^T$. Then
$$\mathfrak{R}_T(\mathcal{F}, p) = \mathcal{R}_T(\mathcal{F}, p), \quad \text{where} \quad \mathcal{R}_T(\mathcal{F}, p) = \mathbb{E}_{x_1, \ldots, x_T \sim p} \, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t f(x_t) \right] \qquad (6)$$
is the classical Rademacher complexity. Second, for any joint distribution $p$,
$$\mathfrak{R}_T(\mathcal{F}, p) \le \mathfrak{R}_T(\mathcal{F}), \quad \text{where} \quad \mathfrak{R}_T(\mathcal{F}) = \sup_{\mathbf{x}} \, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t f(x_t(\epsilon)) \right] \qquad (7)$$
is the sequential Rademacher complexity defined in [10].
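For intuition about the i.i.d. case (6), the classical Rademacher complexity can be estimated by Monte Carlo whenever the supremum over the class has a closed form. For the linear class $\{f : \|f\|_2 \le 1\}$ one has $\sup_f \sum_t \epsilon_t \langle f, x_t \rangle = \|\sum_t \epsilon_t x_t\|_2$, so the complexity is an expected norm and is at most $\sqrt{T}$ when $\|x_t\|_2 \le 1$. The class and the sampling distribution below are illustrative choices, not from the paper:

```python
import math
import random

def rademacher_linear(T, d, trials, rng):
    """Monte Carlo estimate of E_x E_eps sup_{||f||_2<=1} sum_t eps_t <f, x_t>,
    which equals E || sum_t eps_t x_t ||_2 for the linear class."""
    total = 0.0
    for _ in range(trials):
        s = [0.0] * d
        for _ in range(T):
            # x_t drawn i.i.d. uniform on the unit sphere (Gaussian trick)
            v = [rng.gauss(0.0, 1.0) for _ in range(d)]
            n = math.sqrt(sum(c * c for c in v))
            eps = rng.choice((-1.0, 1.0))
            s = [a + eps * c / n for a, c in zip(s, v)]
        total += math.sqrt(sum(c * c for c in s))
    return total / trials

rng = random.Random(1)
T = 256
est = rademacher_linear(T, d=5, trials=200, rng=rng)
print(round(est, 2), round(math.sqrt(T), 2))  # estimate vs. the sqrt(T) bound
```

By Proposition 5, this quantity coincides with the distribution-dependent sequential complexity for the i.i.d. restriction, and it lower-bounds nothing about the worst-case quantity (7), which is computed by a supremum over trees rather than an expectation.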
In the case of hybrid learning, the adversary chooses a sequence of pairs $(x_t, y_t)$ where the instances $x_t$ are i.i.d. but the labels $y_t$ are fully adversarial. The distribution-dependent Rademacher complexity in such a hybrid case can be upper bounded by a very natural quantity: a random average where the expectation is taken over the $x_t$ and a supremum is taken over $\mathcal{Y}$-valued trees. So, the distribution-dependent Rademacher complexity itself becomes a hybrid between the classical Rademacher complexity and the worst-case sequential Rademacher complexity. For more details, see Lemma 17 in the Appendix as another example of an analysis of the distribution-dependent sequential Rademacher complexity.
Distribution-dependent sequential Rademacher complexity enjoys many of the nice properties satisfied by both the classical and the worst-case Rademacher complexities. As shown in [10], these properties are handy tools for proving upper bounds on the value in various examples. We have: (a) if $\mathcal{F} \subseteq \mathcal{G}$, then $\mathfrak{R}(\mathcal{F}, p) \le \mathfrak{R}(\mathcal{G}, p)$; (b) $\mathfrak{R}(\mathcal{F}, p) = \mathfrak{R}(\mathrm{conv}(\mathcal{F}), p)$; (c) $\mathfrak{R}(c\mathcal{F}, p) = |c| \, \mathfrak{R}(\mathcal{F}, p)$ for all $c \in \mathbb{R}$; (d) for any $h$, $\mathfrak{R}(\mathcal{F} + h, p) = \mathfrak{R}(\mathcal{F}, p)$ where $\mathcal{F} + h = \{f + h : f \in \mathcal{F}\}$.
In addition to the above properties, upper bounds on $\mathfrak{R}(\mathcal{F}, p)$ can be derived via the sequential covering numbers defined in [10]. This notion of a cover captures the sequential complexity of a function class on a given $\mathcal{X}$-valued tree $\mathbf{x}$. One can then show an analogue of the Dudley integral bound, where the complexity is averaged with respect to the underlying process $(\mathbf{x}, \mathbf{x}') \sim \rho$.
4 Application: Constrained Adversaries
In this section, we consider adversaries who are deterministically constrained in the sequences of
actions they can play. It is often useful to consider scenarios where the adversary is worst case,
yet has some budget or constraint to satisfy while picking the actions. Examples of such scenarios
include, for instance, games where the adversary is constrained to make moves that are close in some
fashion to the previous move, linear games with bounded variance, and so on. Below we formulate
such games quite generally through arbitrary constraints that the adversary has to satisfy on each
round. We easily derive several results to illustrate the versatility of the developed framework.
For a $T$-round game, consider an adversary who is only allowed to play sequences $x_1, \ldots, x_T$ such that at round $t$ the constraint $C_t(x_1, \ldots, x_t) = 1$ is satisfied, where $C_t : \mathcal{X}^t \to \{0, 1\}$ represents the constraint on the sequence played so far. The constrained adversary can be viewed as a stochastic adversary with restrictions on the conditional distribution at time $t$ given by the set of all Borel distributions on the set $\mathcal{X}_t(x_{1:t-1}) = \{x \in \mathcal{X} : C_t(x_1, \ldots, x_{t-1}, x) = 1\}$. Since this set includes all point distributions on each $x \in \mathcal{X}_t$, the sequential complexity simplifies in a way similar to worst-case adversaries. We write $\mathcal{V}_T(C_{1:T})$ for the value of the game with the given constraints. Now, assume that for any $x_{1:t-1}$, the set of all distributions on $\mathcal{X}_t(x_{1:t-1})$ is weakly compact in a way similar to the compactness of $\mathcal{P}$. That is, $\mathcal{P}_t(x_{1:t-1})$ satisfies the necessary conditions for the minimax theorem to hold. We have the following corollaries of Theorems 1 and 3.
Corollary 6. Let $\mathcal{F}$ and $\mathcal{X}$ be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let $\{C_t : \mathcal{X}^t \to \{0, 1\}\}_{t=1}^T$ be the constraints. Then
$$\mathcal{V}_T(C_{1:T}) = \sup_{p \in \mathbf{P}} \mathbb{E} \left[ \sum_{t=1}^T \inf_{f_t \in \mathcal{F}} \mathbb{E}_{x_t \sim p_t}[f_t(x_t)] - \inf_{f \in \mathcal{F}} \sum_{t=1}^T f(x_t) \right] \qquad (8)$$
where $p$ ranges over all distributions over sequences $(x_1, \ldots, x_T)$ such that $C_t(x_{1:t}) = 1$ for all $t$.
Corollary 7. Let $\mathcal{T}$ be a set of pairs $(\mathbf{x}, \mathbf{x}')$ of $\mathcal{X}$-valued trees with the property that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$,
$$C_t(\chi_1(\epsilon_1), \ldots, \chi_{t-1}(\epsilon_{t-1}), x_t(\epsilon)) = C_t(\chi_1(\epsilon_1), \ldots, \chi_{t-1}(\epsilon_{t-1}), x'_t(\epsilon)) = 1.$$
The minimax value is bounded as
$$\mathcal{V}_T(C_{1:T}) \le 2 \sup_{(\mathbf{x}, \mathbf{x}') \in \mathcal{T}} \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t f(x_t(\epsilon)) \right].$$
More generally, for any measurable function $M_t$ such that $M_t(f, \mathbf{x}, \mathbf{x}', \epsilon) = M_t(f, \mathbf{x}', \mathbf{x}, -\epsilon)$,
$$\mathcal{V}_T(C_{1:T}) \le 2 \sup_{(\mathbf{x}, \mathbf{x}') \in \mathcal{T}} \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^T \epsilon_t \big(f(x_t(\epsilon)) - M_t(f, \mathbf{x}, \mathbf{x}', \epsilon)\big) \right].$$
Armed with these results, we can recover and extend some known results on online learning against budgeted adversaries. The first result says that if the adversary is not allowed to move by more than $\delta_t$ away from its previous average of decisions, the player has a strategy to exploit this fact and obtain lower regret. For the $\ell_2$-norm, such "total variation" bounds have been achieved in [4] up to a $\log T$ factor. Our analysis seamlessly incorporates variance measured in arbitrary norms, not just $\ell_2$. We emphasize that such certificates of learnability are not possible with the analysis of [10].
Proposition 8 (Variance Bound). Consider the online linear optimization setting with $\mathcal{F} = \{f : \Psi(f) \le R^2\}$ for a $\lambda$-strongly convex function $\Psi : \mathcal{F} \to \mathbb{R}_+$ on $\mathcal{F}$, and $\mathcal{X} = \{x : \|x\|_* \le 1\}$. Let $f(x) = \langle f, x \rangle$ for any $f \in \mathcal{F}$ and $x \in \mathcal{X}$. Consider the sequence of constraints $\{C_t\}_{t=1}^T$ given by $C_t(x_1, \ldots, x_{t-1}, x) = 1$ if $\|x - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_\tau\|_* \le \delta_t$ and $0$ otherwise. Then
$$\mathcal{V}_T(C_{1:T}) \le 2\sqrt{2}\, R \sqrt{\lambda^{-1} \textstyle\sum_{t=1}^T \delta_t^2}.$$
In particular, we obtain the following $\ell_2$ variance bound. Consider the case when $\Psi : \mathcal{F} \to \mathbb{R}_+$ is given by $\Psi(f) = \frac{1}{2}\|f\|_2^2$, $\mathcal{F} = \{f : \|f\|_2 \le 1\}$ and $\mathcal{X} = \{x : \|x\|_2 \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies $\|x_t - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_\tau\|_2 \le \delta_t$. In this case we can conclude that $\mathcal{V}_T(C_{1:T}) \le 2\sqrt{2} \sqrt{\sum_{t=1}^T \delta_t^2}$. We can also derive a variance bound over the simplex. Let $\Psi(f) = \sum_{i=1}^d f_i \log(d f_i)$, defined over the $d$-simplex $\mathcal{F}$, and let $\mathcal{X} = \{x : \|x\|_\infty \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies $\max_{j \in [d]} |x_t[j] - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_\tau[j]| \le \delta_t$. For any $f \in \mathcal{F}$, $\Psi(f) \le \log(d)$, and so we conclude that $\mathcal{V}_T(C_{1:T}) \le 2\sqrt{2 \log(d) \sum_{t=1}^T \delta_t^2}$.
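A small simulation of the $\ell_2$ case makes the constraint concrete: the adversary below always moves within $\delta$ of its running average, so $C_t$ holds with $\delta_t = \delta$, and the bound $2\sqrt{2}\sqrt{\sum_t \delta_t^2}$ stays small even as $T$ grows. The follow-the-leader learner is only an illustrative strategy; Proposition 8 bounds the minimax value, not the regret of this particular learner:

```python
import math
import random

def variance_bound(deltas):
    """The l2 bound of Proposition 8: 2 * sqrt(2) * sqrt(sum_t delta_t^2)."""
    return 2.0 * math.sqrt(2.0) * math.sqrt(sum(d * d for d in deltas))

def run_constrained_game(T, d, delta, rng):
    """Adversary keeps each move within delta of its running average (so the
    constraint C_t holds); the learner plays follow-the-leader over the ball."""
    xs, learner_loss = [], 0.0
    for t in range(T):
        s = [sum(x[i] for x in xs) for i in range(d)]
        norm_s = math.sqrt(sum(c * c for c in s))
        # FTL over {f : ||f||_2 <= 1}: minimize <f, s>  ->  f = -s / ||s||
        f = [0.0] * d if norm_s == 0.0 else [-c / norm_s for c in s]
        avg = [0.0] * d if not xs else [c / len(xs) for c in s]
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        nv = math.sqrt(sum(c * c for c in v))
        x = [a + delta * c / nv for a, c in zip(avg, v)]  # constrained move
        assert math.dist(x, avg) <= delta + 1e-9          # C_t is satisfied
        learner_loss += sum(fi * xi for fi, xi in zip(f, x))
        xs.append(x)
    s = [sum(x[i] for x in xs) for i in range(d)]
    best_loss = -math.sqrt(sum(c * c for c in s))  # best fixed f in hindsight
    return learner_loss - best_loss

rng = random.Random(2)
T, delta = 500, 0.05
regret = run_constrained_game(T, d=4, delta=delta, rng=rng)
print(round(regret, 3), round(variance_bound([delta] * T), 3))
```

With $\delta = 0.05$ and $T = 500$ the bound evaluates to about $3.16$, far below the trivial $O(T)$ regret available against an unconstrained adversary.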
The next proposition gives a bound whenever the adversary is constrained to choose its decision from a small ball around the previous decision.
Proposition 9 (Slowly-Changing Decisions). Consider the online linear optimization setting where the adversary's move at any time is close to the move during the previous time step. Let $\mathcal{F} = \{f : \Psi(f) \le R^2\}$ where $\Psi : \mathcal{F} \to \mathbb{R}_+$ is a $\lambda$-strongly convex function on $\mathcal{F}$ and $\mathcal{X} = \{x : \|x\|_* \le B\}$. Let $f(x) = \langle f, x \rangle$ for any $f \in \mathcal{F}$ and $x \in \mathcal{X}$. Consider the sequence of constraints $\{C_t\}_{t=1}^T$ given by $C_t(x_1, \ldots, x_{t-1}, x) = 1$ if $\|x - x_{t-1}\|_* \le \delta$ and $0$ otherwise. Then
$$\mathcal{V}_T(C_{1:T}) \le 2R\delta \sqrt{2T/\lambda}.$$
In particular, consider the case of a Euclidean-norm restriction on the moves. Let $\Psi : \mathcal{F} \to \mathbb{R}_+$ be given by $\Psi(f) = \frac{1}{2}\|f\|_2^2$, $\mathcal{F} = \{f : \|f\|_2 \le 1\}$ and $\mathcal{X} = \{x : \|x\|_2 \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies $\|x_t - x_{t-1}\|_2 \le \delta$. In this case we can conclude that $\mathcal{V}_T(C_{1:T}) \le 2\delta\sqrt{2T}$. For the case of decision-making on the simplex, we obtain the following result. Let $\Psi(f) = \sum_{i=1}^d f_i \log(d f_i)$, defined over the $d$-simplex $\mathcal{F}$, and let $\mathcal{X} = \{x : \|x\|_\infty \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies $\|x_t - x_{t-1}\|_\infty \le \delta$. In this case, note that for any $f \in \mathcal{F}$, $\Psi(f) \le \log(d)$, and so we can conclude that $\mathcal{V}_T(C_{1:T}) \le 2\delta\sqrt{2T \log(d)}$.
5 Application: Smoothed Adversaries
The development of smoothed analysis over the past decade is arguably one of the landmarks in
the study of complexity of algorithms. In contrast to the overly optimistic average complexity and
the overly pessimistic worst-case complexity, smoothed complexity can be seen as a more realistic
measure of an algorithm's performance. In their groundbreaking work, Spielman and Teng [13] showed
that the smoothed running time complexity of the simplex method is polynomial. This result explains
good performance of the method in practice despite its exponential-time worst-case complexity. In
this section, we consider the effect of smoothing on learnability.
It is well-known that there is a gap between the i.i.d. and the worst-case scenarios. In fact, we do not
need to go far for an example: a simple class of threshold functions on a unit interval is learnable in
the i.i.d. supervised learning scenario, yet difficult in the online worst-case model [8, 2, 9]. This fact
is reflected in the corresponding combinatorial dimensions: the Vapnik-Chervonenkis dimension is
one, whereas the Littlestone dimension is infinite. The proof of the latter fact, however, reveals
that the infinite number of mistakes on the part of the player is due to the infinite resolution of the
carefully chosen adversarial sequence. We can argue that this infinite precision is an unreasonable
assumption on the power of a real-world opponent. The idea of limiting the power of the malicious
adversary through perturbing the sequence can be traced back to Posner and Kulkarni [9]. The
authors considered on-line learning of functions of bounded variation, but in the so-called realizable
setting (that is, when labels are given by some function in the given class).
We define the smoothed online learning model as the following T-round interaction between the
learner and the adversary. On round t, the learner chooses f_t \in F; the adversary simultaneously
chooses x_t \in X, which is then perturbed by some noise s_t \sim \Omega, yielding a value \tilde{x}_t = \phi(x_t, s_t);
and the player suffers f_t(\tilde{x}_t). Regret is defined with respect to the perturbed sequence. Here
\phi : X \times S \to X is some measurable mapping; for instance, additive disturbances can be written as
\tilde{x} = \phi(x, s) = x + s. If \phi keeps x_t unchanged, that is \phi(x_t, s_t) = x_t, the setting is precisely
the standard online learning model. In the full information version, we assume that the choice \tilde{x}_t
is revealed to the player at the end of round t. We now recognize that the setting is nothing but a
particular way to restrict the adversary. That is, the choice x_t \in X defines a parameter of a mixed
strategy from which an actual move \phi(x_t, s_t) is drawn; for instance, for additive zero-mean Gaussian
noise, x_t defines the center of the distribution from which x_t + s_t is drawn. In other words, noise
does not allow the adversary to play any desired mixed strategy.
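To make the round structure concrete, here is a minimal sketch of one round under additive noise, where the perturbed move is x + s (the function names and the uniform-noise choice are illustrative assumptions, not taken from the text):

```python
import random

def smoothed_round(learner_f, adversary_x, noise_width, rng):
    """One round of the smoothed protocol: the adversary's move is perturbed
    by additive noise s ~ Unif[-noise_width/2, noise_width/2] before the
    learner is charged its loss at the perturbed point."""
    s = rng.uniform(-noise_width / 2, noise_width / 2)
    x_tilde = adversary_x + s          # the perturbed move the learner sees
    return learner_f(x_tilde), x_tilde

rng = random.Random(0)
threshold_loss = lambda x: float(x < 0.5)   # loss of one fixed threshold
loss, x_tilde = smoothed_round(threshold_loss, adversary_x=0.5,
                               noise_width=0.1, rng=rng)
```

The learner never sees the adversary's intended move x_t, only the perturbed x̃_t, which is exactly what limits the adversary's precision.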
The value of the smoothed online learning game (as defined in (1)) can be equivalently written as
\[
V_T = \inf_{q_1} \sup_{x_1} \mathbb{E}_{f_1 \sim q_1,\, s_1 \sim \Omega}\; \inf_{q_2} \sup_{x_2} \mathbb{E}_{f_2 \sim q_2,\, s_2 \sim \Omega} \cdots \inf_{q_T} \sup_{x_T} \mathbb{E}_{f_T \sim q_T,\, s_T \sim \Omega} \left[ \sum_{t=1}^{T} f_t(\phi(x_t, s_t)) - \inf_{f \in F} \sum_{t=1}^{T} f(\phi(x_t, s_t)) \right]
\]
where the infima are over q_t \in Q and the suprema are over x_t \in X. Using sequential symmetrization, we deduce the following upper bound on the value of the smoothed online learning game.
Theorem 10. The value of the smoothed online learning game is bounded above as
\[
V_T \le 2 \sup_{x_1 \in X} \mathbb{E}_{s_1 \sim \Omega} \mathbb{E}_{\epsilon_1} \cdots \sup_{x_T \in X} \mathbb{E}_{s_T \sim \Omega} \mathbb{E}_{\epsilon_T} \sup_{f \in F} \sum_{t=1}^{T} \epsilon_t f(\phi(x_t, s_t))
\]
We now demonstrate how Theorem 10 can be used to show learnability for smoothed learning of
threshold functions. First, consider the supervised game with threshold functions on a unit interval
(that is, non-homogenous hyperplanes). The moves of the adversary are pairs x = (z, y) with
z \in [0, 1] and y \in \{0, 1\}, and the binary-valued function class F is defined by
\[
F = \{ f_\theta(z, y) = |y - \mathbf{1}\{z < \theta\}| : \theta \in [0, 1] \}, \tag{9}
\]
that is, every function is associated with a threshold \theta \in [0, 1]. The class F has infinite Littlestone's
dimension and is not learnable in the worst-case online framework. Consider a smoothed scenario,
with the z-variable of the adversarial move (z, y) perturbed by an additive uniform noise \Omega =
Unif[-\sigma/2, \sigma/2] for some \sigma \ge 0. That is, the actual move revealed to the player at time t is
(z_t + s_t, y_t), with s_t \sim \Omega. Any non-trivial upper bound on regret has to depend on particular noise
assumptions, as \sigma = 0 corresponds to the case with infinite Littlestone dimension. For the uniform
disturbance, the intuition tells us that noise implies a margin, and we should expect a 1/\sigma complexity
parameter appearing in the bounds. The next lemma quantifies the intuition that additive noise limits
the precision of the adversary.
Lemma 11. Let \gamma_1, \ldots, \gamma_N be obtained by discretizing the interval [0, 1] into N = T^a bins
[\gamma_i, \gamma_{i+1}) of length T^{-a}, for some a \ge 3. Then, for any sequence z_1, \ldots, z_T \in [0, 1], with probability at least 1 - \frac{1}{\sigma T^{a-2}}, no two elements of the sequence z_1 + s_1, \ldots, z_T + s_T belong to the same
interval [\gamma_i, \gamma_{i+1}), where s_1, \ldots, s_T are i.i.d. Unif[-\sigma/2, \sigma/2].
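The separation event in Lemma 11 can be checked numerically. The sketch below (a Monte Carlo estimate with illustrative parameters; `collision_probability` is a hypothetical helper, not from the text) estimates how often two perturbed points land in the same bin of width T^(-a), which the lemma bounds by roughly 1/(σT^(a−2)):

```python
import random

def collision_probability(z, sigma, a, trials=2000, seed=1):
    """Monte Carlo estimate of the chance that some two perturbed points
    z_i + s_i, with s_i ~ Unif[-sigma/2, sigma/2], share a bin of width
    T**(-a) (the bad event of Lemma 11)."""
    T = len(z)
    width = T ** (-a)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = set()
        collided = False
        for zi in z:
            b = int((zi + rng.uniform(-sigma / 2, sigma / 2)) // width)
            if b in seen:
                collided = True
                break
            seen.add(b)
        hits += collided
    return hits / trials

# An adversarial constant sequence is still separated most of the time.
p = collision_probability([0.5] * 20, sigma=0.1, a=3)
```

With T = 20, σ = 0.1 and a = 3 the lemma's bound on the collision probability is 1/(σT) = 0.5, and the empirical estimate comes out comfortably below it even for the hardest (constant) sequence.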
We now observe that, conditioned on the event in Lemma 11, the upper bound on the value in
Theorem 10 is a supremum of N martingale difference sequences! We then arrive at:
Proposition 12. For the problem of smoothed online learning of thresholds in 1-D, the value is
\[
V_T \le 2 + \sqrt{2T\,(4 \log T + \log(1/\sigma))}
\]
What we found is somewhat surprising: for a problem which is not learnable in the online worst-case scenario, an exponentially small noise added to the moves of the adversary yields a learnable
problem. This shows, at least in the given example, that the worst-case analysis and Littlestone's
dimension are brittle notions which might be too restrictive in the real world, where some noise is
unavoidable. It is comforting that small additive noise makes the problem learnable!
The proof for smoothed learning of half-spaces in higher dimension follows the same route as the
one-dimensional exposition. For simplicity, assume the hyperplanes are homogenous and Z =
S^{d-1} \subset \mathbb{R}^d, Y = \{-1, 1\}, X = Z \times Y. Define F = \{ f_\theta(z, y) = \mathbf{1}\{ y \langle z, \theta \rangle > 0 \} : \theta \in S^{d-1} \}, and
assume that the noise is distributed uniformly on a square patch with side-length \sigma on the surface of
the sphere S^{d-1}. We can also consider other distributions, possibly with support on a d-dimensional
ball instead.
Proposition 13. For the problem of smoothed online learning of half-spaces,
\[
V_T = O\!\left( \sqrt{ dT \log\!\left( \frac{\log^3 T}{\sigma^{d-1}} + v_{d-2}\,\sigma^{-(d-1)} + \frac{1}{\sigma} \right) } \right)
\]
where v_{d-2} is a constant depending only on the dimension d.
We conclude that half-spaces are online learnable in the smoothed model, since the upper bound of
Proposition 13 guarantees existence of an algorithm which achieves this regret. In fact, for the two
examples considered in this section, the Exponential Weights Algorithm on the discretization given
by Lemma 11 is a (computationally infeasible) algorithm achieving the bound.
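As a sanity check of the last remark, here is a small sketch of the Exponential Weights Algorithm run over a finite grid of experts (standing in for the discretized thresholds of Lemma 11). The losses here are synthetic placeholders, and the learning rate follows the standard η = √(8 ln n / T) tuning, for which the classical regret guarantee √(T ln(n)/2) holds deterministically for losses in [0, 1]:

```python
import math
import random

def exponential_weights(losses, eta):
    """Run Exponential Weights over n experts; losses[t][i] in [0, 1] is
    expert i's loss at round t.  Returns (algorithm loss, best expert loss)."""
    n = len(losses[0])
    cum = [0.0] * n                      # cumulative loss of each expert
    alg = 0.0                            # expected loss of the algorithm
    for round_losses in losses:
        w = [math.exp(-eta * c) for c in cum]
        total = sum(w)
        alg += sum(wi / total * li for wi, li in zip(w, round_losses))
        cum = [c + l for c, l in zip(cum, round_losses)]
    return alg, min(cum)

random.seed(0)
T, n = 200, 16                           # rounds, size of the expert grid
losses = [[random.random() for _ in range(n)] for _ in range(T)]
eta = math.sqrt(8 * math.log(n) / T)
alg_loss, best_loss = exponential_weights(losses, eta)
regret = alg_loss - best_loss
```

The "computationally infeasible" qualifier in the text refers to the size of the grid needed for the continuous problem, not to the algorithm itself, which is linear in the number of experts per round.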
References
[1] J. Abernethy, A. Agarwal, P. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009.
[2] S. Ben-David, D. Pal, and S. Shalev-Shwartz. Agnostic online learning. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[3] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, 1985.
[4] E. Hazan and S. Kale. Better algorithms for benign bandits. In SODA, 2009.
[5] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. NIPS, 22, 2008.
[6] A. Lazaric and R. Munos. Hybrid stochastic-adversarial on-line learning. In COLT, 2009.
[7] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer-Verlag, New York, 1991.
[8] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285-318, 1988.
[9] S. Posner and S. Kulkarni. On-line learning of functions of bounded variation under various sampling schemes. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 439-445. ACM, 1993.
[10] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In NIPS, 2010. Full version available at arXiv:1006.1138.
[11] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Beyond regret. In COLT, 2011. Full version available at arXiv:1011.3168.
[12] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. JMLR, 11:2635-2670, Oct 2010.
[13] D. A. Spielman and S. H. Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385-463, 2004.
[14] A. W. Van Der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series, March 1996.
Inferring Interaction Networks using the IBP applied
to microRNA Target Prediction
Ziv Bar-Joseph
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Hai-Son Le
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Abstract
Determining interactions between entities and the overall organization and clustering of nodes in networks is a major challenge when analyzing biological and
social network data. Here we extend the Indian Buffet Process (IBP), a nonparametric Bayesian model, to integrate noisy interaction scores with properties of
individual entities for inferring interaction networks and clustering nodes within
these networks. We present an application of this method to study how microRNAs regulate mRNAs in cells. Analysis of synthetic and real data indicates that the
method improves upon prior methods, correctly recovers interactions and clusters,
and provides accurate biological predictions.
1
Introduction
Determining interactions between entities based on observations is a major challenge when analyzing biological and social network data [1, 12, 15]. In most cases we can obtain information regarding
each of the entities (individuals in social networks and proteins in biological networks) and some
information about possible relationships between them (friendships or conversation data for social
networks and motif or experimental data for biology). The goal is then to integrate these datasets
to recover the interaction network between the entities being studied. To simplify the analysis of
the data it is also beneficial to identify groups, or clusters, within these interaction networks. Such
groups can then be mapped to specific demographics or interests in the case of social networks or to
modules and pathways in biological networks [2].
A large number of generative models were developed to represent entities as members of a number of
classes. Many of these models are based on the stochastic blockmodel introduced in [19]. While the
number of classes in such models could be fixed, or provided by the user, nonparametric Bayesian
methods have been applied to allow this number to be inferred based on the observed data [9]. The
stochastic blockmodel was also further extended in [1] to allow mixed membership of entities within
these classes. An alternate approach is to use latent features to describe entities. [10] proposed a
nonparametric Bayesian matrix factorization method to learn the latent factors in relational data
whereas [12] presented a nonparametric model to study binary link data. All of these methods rely
on the pairwise link and interaction data and in most cases do not utilize properties of the individual
entities when determining interactions.
Here we present a model that extends the Indian Buffet Process (IBP) [7], a nonparametric Bayesian
prior over infinite binary matrices, to learn the interactions between entities with an unbounded
number of groups. Specifically, we represent each group as a latent feature and define interactions
between entities within each group. Such latent feature representation has been used in the past
to describe entities [7, 10, 12] and IBP is an appropriate nonparametric prior to infer the number
of latent features. However, unlike IBP our model utilizes interaction scores as priors and so the
1
model is not exchangeable anymore. We thus extend IBP by integrating it with Markov random
field (MRF) constraints, specifically pairwise potentials as in Ising model. MRF priors has been
combined with Dirichlet Process mixture models for image segmentation in a related work of Orbanz
and Buhmann [13]. Pairwise information is also used in the distance dependent Chinese restaurant
process [4] to encourage similar objects to be clustered. Our model is well suited for cases in
which we are provided with information on both link structure and the outcome of the underlying
interactions. In social networks such data can come from observations of conversation between
individuals followed by actions of the specific individuals (for example, travel), whereas in biology
it is suited for regulatory networks as discussed below.
We apply our model to study the microRNA (miRNA) target prediction problem. miRNAs were
recently discovered as a class of regulatory RNA molecules that regulate the levels of messenger
RNAs (mRNAs) (which are later translated to proteins) by binding and inhibiting their specific
targets [15]. They were shown to play an important role in a number of diseases including cancer,
and determining the set of genes that are targeted by each miRNA is an important question when
studying these diseases. Several methods were proposed to predict targets of miRNAs based on
their sequence1 . While these predictions are useful, due to the short length of miRNAs, they lead
to many false positives and some false negatives [8]. In addition to sequence information, it is
now possible to obtain the expression levels of miRNAs and their predicted mRNA targets using
microarrays. Since miRNAs inhibit their direct targets, integrating sequence and expression data
can improve predictions regarding the interactions between miRNAs and their targets. A number of
methods based on regression analysis were suggested for this task [8, 17]. While methods utilizing
expression data improved upon methods that only used sequence data, they often treated each target
mRNA in isolation. In contrast, it has now been shown that each miRNA often targets hundreds
of genes, and that miRNAs often work in groups to achieve a larger impact [14]. Thus, rather than
trying to infer a separate regression model for each mRNA we use our IBP extended model to infer
a joint regression model for a cluster of mRNAs and the set of miRNAs that regulate them. Such a
model would provide statistical confidence (since it combines several observations) while adhering
more closely to the underlying biology. In addition to inferring the interactions in the dataset such a
model would also provide a grouping for genes and miRNAs which can be used to improve function
prediction.
2
Computational model
Firstly, we derive a distribution on infinite binary matrices starting with a finite model and taking the
limit as the number of features goes to infinity. Secondly, we describe the application of our model
to the miRNA target prediction problem using a Gaussian additive model.
2.1
Interaction model
Let zik denote the (i, k) entry of a matrix Z and let zk denote the kth column of Z. The group
membership of N entities is defined by a (latent) binary matrix Z where zik = 1 if entity i belongs
to group k. Given Z, we say that entity i interacts with entity j if zik zjk = 1 for some k. Note that
two entities can interact through many groups where each group represents one type of interaction.
In many cases, a prior on such interactions can be obtained. Assume we have a N ? N symmetric
matrix W, where wij indicates the degree that we believe that entity i and j interact: wij > 0 if
entities i and j are more likely to interact and wij < 0 if they are less likely to do so.
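In code, the "interact through some group k" rule is just a check that two rows of Z share a non-zero column; a minimal illustrative sketch:

```python
def interactions(Z):
    """Entities i and j interact iff z_ik * z_jk = 1 for some group k,
    i.e. rows i and j of the membership matrix Z share a non-zero column."""
    N = len(Z)
    return [[i != j and any(Z[i][k] and Z[j][k] for k in range(len(Z[i])))
             for j in range(N)] for i in range(N)]

# Three entities, two groups: entities 0 and 1 share group 0; entity 2
# belongs to no group and therefore interacts with nobody.
Z = [[1, 0],
     [1, 1],
     [0, 0]]
M = interactions(Z)
```

Note that two entities can share several groups; this sketch only reports whether at least one shared group exists, which is the interaction relation defined above.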
Nonparametric prior for Z: Griffiths and Ghahramani [7] proposed the Indian Buffet Process (IBP)
as a nonparametric prior distribution on sparse binary matrices Z. The IBP can be derived from a
simple stochastic process, described by a culinary metaphor. In this metaphor, there are N customers
(entities) entering a restaurant and choosing from an infinite array of dishes (groups). The first
customer tries Poisson(\alpha) dishes, where \alpha is a parameter. The remaining customers enter one after
the others. The i-th customer tries a previously sampled dish k with probability m_k/i, where m_k is the
number of previous customers who have sampled this dish. He then samples a Poisson(\alpha/i) number of
new dishes. This process defines an exchangeable distribution on the equivalence classes of Z, which
are the set of binary matrices that map to the same left-ordered binary matrices [7]. Exchangeability
means that the order of the customers does not affect the distribution and that permutation of the data
does not change the resulting likelihood.

1 Genes that are targets of miRNAs contain the reverse complement of part of the miRNA sequence.
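The culinary metaphor translates directly into a sampler. The following stdlib-only sketch draws Z from the plain IBP (an illustration of the process just described, not the authors' implementation):

```python
import math
import random

def sample_ibp(N, alpha, seed=0):
    """Draw Z ~ IBP(alpha) via the culinary metaphor: customer i takes an
    existing dish k with probability m_k / i, then tries Poisson(alpha / i)
    brand-new dishes."""
    rng = random.Random(seed)

    def poisson(lam):                       # Knuth's method (small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    counts = []                             # counts[k] = m_k so far
    rows = []
    for i in range(1, N + 1):
        row = [int(rng.random() < m / i) for m in counts]
        row += [1] * poisson(alpha / i)     # new dishes, taken by customer i
        for k, took in enumerate(row):
            if k < len(counts):
                counts[k] += took
            else:
                counts.append(1)
        rows.append(row)
    K = len(counts)
    return [r + [0] * (K - len(r)) for r in rows]

Z = sample_ibp(N=10, alpha=2.0, seed=42)
```

Only the populated columns are ever stored, which is the same property the inference algorithm of Section 3 exploits.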
The prior knowledge on interactions discussed above (encoded by W) violates the exchangeability
of the IBP since the group membership probability depends on the identities of the entities whereas
exchangeability means that permutation of entities does not change the probability. In [11], Miller
et al. presented the phylogenetic Indian Buffet Process (pIBP), where they used a tree representation
to express non-exchangeability. In their model, the relationships among customers are encoded as
a tree allowing them to exploit the sum-product algorithm in defining the updates for an MCMC
sampler, without significantly increasing the computational burden when performing inference.
We combine the IBP with pairwise potentials using W, constraining the dish selection of customers.
Similar to the pIBP, the entries in z_k are not chosen independently given \theta_k but rather depend on
the particular assignment of the remaining entries. In the following sections, we start with a model
with a finite number of groups and consider the limit as the number of groups grows to derive the
nonparametric prior. Note that in our model, as in the original IBP [7], while the number of rows is
finite, the number of columns (features) could be infinite. We can thus define a prior on interactions
between entities (since their number is known in advance) while still allowing for an infinite number
of groups. This flexibility allows the group parameters to be drawn from an infinite mixtures of
priors which may lead to identical groups of entities each with a different set of parameters.
2.1.1
Prior on finite matrices Z
We have an N \times K binary matrix Z, where N is the number of entities and K is a fixed, finite
number of groups. In the IBP, each group/column k is associated with a parameter \theta_k, chosen from
a Beta(\alpha/K, 1) prior distribution, where \alpha is a hyperparameter:
\[
\theta_k \mid \alpha \sim \mathrm{Beta}\big(\tfrac{\alpha}{K}, 1\big), \qquad P(z_k \mid \theta_k) = \exp\Big( \sum_i (1 - z_{ik}) \log(1 - \theta_k) + z_{ik} \log \theta_k \Big)
\]
The joint probability of a column k and \theta_k in the IBP is:
\[
P(z_k, \theta_k \mid \alpha) = \frac{1}{B(\tfrac{\alpha}{K}, 1)} \exp\Big( \sum_i \big[ (1 - z_{ik}) \log(1 - \theta_k) + z_{ik} \log \theta_k \big] + \big( \tfrac{\alpha}{K} - 1 \big) \log \theta_k \Big) \tag{1}
\]
where B(\cdot) is the Beta function.
For our model, we add the new pairwise potentials on memberships of entities. Defining \phi_{z_k} =
\exp\big( \sum_{i<j} w_{ij} z_{ik} z_{jk} \big), the joint probability of a column k and \theta_k is:
\[
P(z_k, \theta_k \mid \alpha) = \frac{1}{Z'} \phi_{z_k} \exp\Big( \sum_i \big[ (1 - z_{ik}) \log(1 - \theta_k) + z_{ik} \log \theta_k \big] + \big( \tfrac{\alpha}{K} - 1 \big) \log \theta_k \Big) \tag{2}
\]
where Z' is the partition function. Note that the IBP is a special case of our model when all w's are
zeros (W = 0).
Following [7], we define the lof-equivalence classes [Z] as the sets of binary matrices mapped to
the same left-ordered binary matrices. The history h_i of a feature k at an entity i is defined as
(z_{1k}, \ldots, z_{(i-1)k}). When no object is specified, h refers to the full history. m_k and m_h denote the
number of non-zero entries of a feature k and a history h respectively. K_h is the number of features
possessing the history h while K_0 is the number of features having m_k = 0. K_+ = \sum_{h=1}^{2^N - 1} K_h is
the number of features for which m_k > 0.
By integrating over all values of \theta_k, we get the marginal probability of a binary matrix Z:
\[
P(Z) = \prod_{k=1}^{K} \int_0^1 P(z_k, \theta_k \mid \alpha)\, d\theta_k \tag{3}
\]
\[
= \prod_{k=1}^{K} \frac{1}{Z'} \phi_{z_k} \int_0^1 \exp\Big( \big( \tfrac{\alpha}{K} + m_k - 1 \big) \log \theta_k + (N - m_k) \log(1 - \theta_k) \Big)\, d\theta_k \tag{4}
\]
\[
= \prod_{k=1}^{K} \frac{1}{Z'} \phi_{z_k}\, B\big( \tfrac{\alpha}{K} + m_k,\; N - m_k + 1 \big) \tag{5}
\]
The partition function Z' can be written as: Z' = \sum_{h=0}^{2^N - 1} \phi_h\, B\big( \tfrac{\alpha}{K} + m_h,\; N - m_h + 1 \big).
2.1.2
Taking the infinite limit
The probability of a particular lof-equivalence class of binary matrices, [Z], is:
\[
P([Z]) = \sum_{Z} P(Z) = \frac{K!}{\prod_{h=0}^{2^N - 1} K_h!} \prod_{k=1}^{K} \frac{1}{Z'} \phi_{z_k}\, B\big( m_k + \tfrac{\alpha}{K},\; N - m_k + 1 \big) \tag{6}
\]
Taking the limit as K \to \infty, we can show that, with \psi = \sum_{h=1}^{2^N - 1} \phi_h \frac{(N - m_h)!\,(m_h - 1)!}{N!}:
\[
\lim_{K \to \infty} P([Z]) = \lim_{K \to \infty} \frac{K!}{\prod_{h=0}^{2^N - 1} K_h!} \prod_{k=1}^{K_+} \phi_{z_k} \frac{B\big(m_k + \tfrac{\alpha}{K},\, N - m_k + 1\big)}{B\big(\tfrac{\alpha}{K},\, N + 1\big)} \prod_{k=1}^{K} \frac{1}{Z'} B\big( \tfrac{\alpha}{K},\, N + 1 \big) \tag{7}
\]
\[
= \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N - 1} K_h!} \prod_{k=1}^{K_+} \phi_{z_k} \frac{(N - m_k)!\,(m_k - 1)!}{N!}\; \exp(-\alpha \psi) \tag{8}
\]
The detailed derivations are shown in the Appendix.
2.1.3
The generative process
We now describe a generative stochastic process for Z. It can be understood through a culinary metaphor,
where each row of Z corresponds to a customer and each column corresponds to a dish. We denote
by h(i) the value of z_{ik} in the complete history h. With \bar{\phi}_h = \phi_h \frac{(N - m_h)!\,(m_h - 1)!}{N!}, we define
\psi_i = \sum_{h : h_i = 0,\, h(i) = 1} \bar{\phi}_h, so that \psi = \sum_{i=1}^{N} \psi_i. Finally, let z_{<ik} be entries 1, \ldots, (i-1) of z_k.
Assume that we are provided with a compatibility score between pairs of customers. That is, we have
a value w_{ij} for the food preference similarity between customer i and customer j. Higher values of
w_{ij} indicate similar preferences, and customers with such values are more likely to select the same
dish. Therefore, the dishes a customer selects may depend on the choices of previous customers. The
first customer tries Poisson(\alpha \psi_1) dishes. The remaining customers enter one after the others. The
i-th customer selects dishes with a probability that partially depends on the selections of the previous
customers: the probability that a dish would be selected is \sum_{h : h_i = z_{<ik},\, h(i)=1} \bar{\phi}_h \big/ \sum_{h : h_i = z_{<ik}} \bar{\phi}_h.
He then samples a Poisson(\alpha \psi_i) number of new dishes. This process repeats until all customers
have made their selections. Although this process is not exchangeable, the sequential order of customers is not important: we get the same marginal distribution for any particular
order of customers. Let K_1^{(i)} denote the number of new dishes sampled by customer i; the probability of a particular matrix generated by this process is:
\[
P(Z) = \frac{\alpha^{K_+}}{\prod_{i=1}^{N} K_1^{(i)}!} \prod_{k=1}^{K_+} \bar{\phi}_{z_k}\; \exp(-\alpha \psi) \tag{9}
\]
If we only pay attention to the lof-equivalence classes [Z], then since there are \prod_{i=1}^{N} K_1^{(i)}! \big/ \prod_{h=1}^{2^N - 1} K_h! matrices generated by this process that map to the same equivalence class, multiplying P(Z) by this quantity recovers Equation (8). We show in the Appendix that in the case of the IBP, where \phi_h = 1 for all
histories h (when W = 0), this generative process simplifies to the Indian Buffet Process.
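For intuition, the dish-selection probability of this process can be evaluated exactly for tiny N by enumerating all completions of a column (exponential in N, so this is purely illustrative; the helper names are mine). With W = 0 the probability of taking an existing dish should reduce to the IBP value m/i, and positive couplings should raise it:

```python
import itertools
import math

def phi_bar(col, W):
    """phi_bar_h = exp(sum_{i<j} w_ij z_i z_j) (N - m)! (m - 1)! / N!
    for a complete column `col` with m = sum(col) >= 1 ones."""
    N, m = len(col), sum(col)
    pair = sum(W[i][j] for i in range(N) for j in range(i + 1, N)
               if col[i] and col[j])
    return (math.exp(pair) * math.factorial(N - m)
            * math.factorial(m - 1) / math.factorial(N))

def take_dish_prob(prefix, W):
    """Probability that the next customer takes a dish whose partial column
    is `prefix` (assumed to contain at least one 1): the ratio of summed
    phi_bar over completions with z_i = 1 to the sum over all completions."""
    N = len(W)
    i = len(prefix)                 # 0-based index of the deciding customer
    num = den = 0.0
    for zi in (0, 1):
        for rest in itertools.product((0, 1), repeat=N - i - 1):
            col = list(prefix) + [zi] + list(rest)
            v = phi_bar(col, W)
            den += v
            if zi:
                num += v
    return num / den

# With all weights zero this reduces to the IBP probability m / i = 1/2:
W0 = [[0.0] * 3 for _ in range(3)]
p_ibp = take_dish_prob([1], W0)
# A positive coupling between entities 0 and 1 raises the probability:
W1 = [[0.0, 2.0, 0.0], [2.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
p_coupled = take_dish_prob([1], W1)
```

This brute-force check also makes the non-exchangeability visible: with W \ne 0 the probability depends on which earlier entities took the dish, not just on how many.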
2.2
Regression model for mRNA expression
In this section, we describe the application of the nonparametric prior to the miRNA target prediction problem. However, the method is applicable in general settings where there is a way to model
properties of one entity from the properties of its interacting entities. Our input data are expression
profiles of M messenger RNA (mRNA) transcripts and R miRNA transcripts across T samples. Let
X = (x_1^T, \ldots, x_M^T)^T, where each row vector x_i is the expression profile of mRNA i in all samples.
Similarly, let Y = (y_1^T, \ldots, y_R^T)^T represent the expression profiles of the R miRNAs. Furthermore,
suppose we are given an M \times R matrix C, where c_{ij} is the prior likelihood score for the interaction of
mRNA i and miRNA j. Such a matrix C could be obtained from sequence-based miRNA target predictions as discussed above. Applying our interaction model to this problem, the set of N = M + R
entities is divided into two disjoint sets of mRNAs and miRNAs. Let Z = (U^T, V^T)^T, where U and
V are the group membership matrices for mRNAs and miRNAs respectively, and W is given by
\[
W = \begin{pmatrix} 0 & C \\ C^T & 0 \end{pmatrix}.
\]
Therefore, mRNA i and miRNA j interact through all groups k such that u_{ik} v_{jk} = 1.
2.2.1
Gaussian additive model
In the interaction model suggested by GenMiR++ [8], each miRNA expression profile is used to
explain the downregulation of the expression of its targeted mRNAs. Our model uses group-specific
and miRNA-specific coefficients (s = [s_1, \ldots, s_{K_+}]^T with s_k > 0 for groups, and r = [r_1, \ldots, r_R]^T
for all miRNAs) to model the downregulation effect. These coefficients represent the baseline effect
of group members and the strength of specific miRNAs, respectively. Using these parameters, the
expression level of a specific mRNA can be explained by summing over the expression profiles of all
miRNAs targeting it:
\[
x_i \sim N\Big( \mu - \sum_j \Big( r_j + \sum_{k : u_{ik} v_{jk} = 1} s_k \Big) y_j,\; \sigma^2 I \Big) \tag{10}
\]
where \mu represents the baseline expression for this mRNA and \sigma^2 is used to represent measurement noise.
Thus, under this model, the expression values of an mRNA are reduced from their baseline by a linear
combination of the expression values of the miRNAs that target it. The probability of the observed
data given Z is P(X, Y \mid Z, \Theta) \propto \exp\big( -\tfrac{1}{2\sigma^2} \sum_i (x_i - \bar{x}_i)^T (x_i - \bar{x}_i) \big), with \Theta = \{\mu, \sigma^2, s, r\}
and \bar{x}_i = \mu - \sum_j \big( r_j + \sum_{k : u_{ik} v_{jk} = 1} s_k \big) y_j.
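A minimal sketch of the model mean and the resulting spherical-Gaussian log-likelihood follows. One reading assumption is made explicit here: a miRNA j contributes only when it targets mRNA i through at least one group (otherwise its r_j term is dropped), since a non-targeting miRNA should not downregulate the mRNA:

```python
import math

def model_mean(U, V, Y, r, s, mu):
    """xbar_i = mu - sum_j (r_j + sum_{k: u_ik v_jk = 1} s_k) y_j, with the
    outer sum read as running over targeting miRNAs only (an assumption).
    Y[j] is the expression profile of miRNA j across the T samples."""
    M, R, T = len(U), len(V), len(Y[0])
    K = len(s)
    means = []
    for i in range(M):
        xbar = [mu] * T
        for j in range(R):
            shared = [k for k in range(K) if U[i][k] and V[j][k]]
            if not shared:
                continue                # miRNA j does not target mRNA i
            coeff = r[j] + sum(s[k] for k in shared)
            for t in range(T):
                xbar[t] -= coeff * Y[j][t]
        means.append(xbar)
    return means

def log_likelihood(X, means, sigma2):
    """Spherical-Gaussian log-likelihood of the observed mRNA profiles."""
    ll = 0.0
    for xi, mi in zip(X, means):
        for a, b in zip(xi, mi):
            ll += (-0.5 * math.log(2 * math.pi * sigma2)
                   - (a - b) ** 2 / (2 * sigma2))
    return ll

# One mRNA and one miRNA sharing group 0: the coefficient is r_0 + s_0 = 0.75.
means = model_mean(U=[[1]], V=[[1]], Y=[[1.0, 2.0]],
                   r=[0.5], s=[0.25], mu=3.0)
```

Here the predicted profile is [3 − 0.75·1, 3 − 0.75·2] = [2.25, 1.5]: the mRNA is pulled below its baseline in proportion to the expression of the miRNA that targets it.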
2.2.2
Priors for model variables
We use the following prior distributions for the variables in our model:
\[
s_k \sim \mathrm{Gamma}(\alpha_s, \beta_s), \quad r \sim N(0, \sigma_r^2 I), \quad \mu \sim N(0, \sigma_\mu^2 I), \quad 1/\sigma^2 \sim \mathrm{Gamma}(\alpha_v, \beta_v) \tag{11}
\]
where the \alpha's and \beta's are the shape and scale parameters. The parameters are given hyperpriors: 1/\sigma_r^2 \sim
\mathrm{Gamma}(a_r, b_r) and 1/\sigma_\mu^2 \sim \mathrm{Gamma}(a_\mu, b_\mu); \alpha_s, \beta_s, \alpha_v, \beta_v are also given Gamma hyperpriors.
3
Inference by MCMC
As with many nonparametric Bayesian models, exact inference is intractable. Instead we use a
Markov chain Monte Carlo (MCMC) method to sample from the posterior distribution of Z and \Theta.
Although our model allows Z to have an infinite number of columns, we only need to keep track of
the non-zero columns of Z, an important aspect which is exploited by several nonparametric Bayesian
models [7]. Our sampling algorithm involves a mix of Gibbs and Metropolis-Hastings steps which
are used to generate the new sample.
3.1
Sampling from populated columns of Z
Let m_{-ik} denote the number of one entries in z_k, not including z_{ik}. Also let z_{-ik} denote the
entries of z_k except z_{ik}, and let Z_{-(ik)} be the entire matrix Z except z_{ik}. The probability of an entry given
the remaining entries in a column can be derived by considering an ordering of customers such that
customer i is the last person in line and using the generative process in Section 2.1.3:
\[
P(z_{ik} = 1 \mid z_{-ik}) = \frac{\bar{\phi}_{z_{<ik}, z_{ik}=1}}{\bar{\phi}_{z_{<ik}, z_{ik}=1} + \bar{\phi}_{z_{<ik}, z_{ik}=0}}
= \frac{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\,(N - m_{-ik} - 1)!\; m_{-ik}!}{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\,(N - m_{-ik} - 1)!\; m_{-ik}! + (N - m_{-ik})!\,(m_{-ik} - 1)!}
= \frac{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\; m_{-ik}}{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\; m_{-ik} + (N - m_{-ik})}
\]
We could also get this result using the limiting probability in Equation (8). The probability of each
z_{ik} given all other variables is P(z_{ik} \mid X, Y, Z_{-(ik)}) \propto P(X, Y \mid Z_{-(ik)}, z_{ik})\, P(z_{ik} \mid z_{-ik}). We need
only condition on z_{-ik} since the columns of Z are generated independently.
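The closed-form conditional above is cheap to evaluate; a small sketch (the helper name is mine), which reduces to the IBP value m/N when all weights are zero:

```python
import math

def z_prior_probability(column, i, W):
    """P(z_ik = 1 | z_-ik) for a populated column (Section 3.1):
        exp(sum_{j != i} w_ij z_jk) * m / (exp(...) * m + (N - m)),
    where m counts the ones in the column excluding entry i."""
    N = len(column)
    m = sum(column) - column[i]
    coupling = math.exp(sum(W[i][j] * column[j] for j in range(N) if j != i))
    return coupling * m / (coupling * m + (N - m))

# Without weights entry 2 is turned on with probability m / N = 2/3 here;
# positive weights toward the column's current members raise it.
col = [1, 1, 0]
W0 = [[0.0] * 3 for _ in range(3)]
W1 = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
p_plain = z_prior_probability(col, 2, W0)
p_pulled = z_prior_probability(col, 2, W1)
```

In the full Gibbs step this prior term is multiplied by the data likelihood ratio P(X, Y | Z_-(ik), z_ik) before normalizing over z_ik ∈ {0, 1}.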
3.2
Sampling other variables
Sampling a new column of Z: New columns are columns that do not yet have any entries equal
to 1 (empty groups). When sampling for an entity i, we assume this is the last customer in line.
Therefore, based on the generative process described in Section 2.1.3, the number of new features
is Poisson(\alpha/N). For each new column, we need to sample a new group-specific coefficient variable
s_k. We can simply sample from the prior distribution given in Equation (11), since the probability
P(X, Y \mid Z, \Theta) is not affected by these new columns: no interactions are currently represented
by them.
Sampling $s_k$ for populated columns: Since we do not have a conjugate prior on $s$, we cannot compute the conditional likelihood directly, so we turn to Metropolis-Hastings to sample $s$. The proposal
distribution for a new value $s'_k$ given the old value $s_k$ is $q(s'_k \mid s_k) = \mathrm{Gamma}(h, s_k/h)$, where $h$ is the
shape parameter; the mean of this distribution is the old value $s_k$. The acceptance ratio is
$$A(s_k \to s'_k) = \min\left\{1,\ \frac{P(X, Y \mid Z, \Theta \setminus \{s_k\}, s'_k)\; p(s'_k \mid \alpha_s, \beta_s)\; q(s_k \mid s'_k)}{P(X, Y \mid Z, \Theta)\; p(s_k \mid \alpha_s, \beta_s)\; q(s'_k \mid s_k)}\right\}$$
In our experiments, h is selected so that the average acceptance rate is around 0.25 [5].
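A minimal sketch of this update (the `log_target` callable is a stand-in for $\log P(X, Y \mid Z, \Theta) + \log p(s_k \mid \alpha_s, \beta_s)$, which depends on model details not shown here; function names are illustrative):

```python
import math
import random

def mh_step_sk(s_old, log_target, h=5.0, rng=random):
    """One Metropolis-Hastings update for s_k.

    The proposal q(s' | s) = Gamma(shape=h, scale=s/h) has mean s."""
    s_new = rng.gammavariate(h, s_old / h)

    def log_q(x, given):
        # log density of Gamma(shape=h, scale=given/h) at x
        scale = given / h
        return (h - 1.0) * math.log(x) - x / scale - h * math.log(scale) - math.lgamma(h)

    # asymmetric proposal, so the Hastings correction q(s|s') / q(s'|s) is needed
    log_ratio = (log_target(s_new) - log_target(s_old)
                 + log_q(s_old, s_new) - log_q(s_new, s_old))
    if rng.random() < math.exp(min(0.0, log_ratio)):
        return s_new
    return s_old
```

A larger shape parameter `h` makes the proposal tighter around the current value, which is the knob tuned to hit the target acceptance rate.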
Sampling $r$, $y$, $\sigma^2$ and prior parameters: Closed-form formulas for the posterior distributions of
$r$, $y$ and $\sigma^2$ can be derived due to their conjugacy. For
example, the posterior distribution of $1/\sigma^2$ given the other variables is
$$\mathrm{Gamma}\left(\alpha_v + \frac{MT}{2},\ \left(\frac{1}{\beta_v} + \sum_i \frac{(x_i - \hat{x}_i)^\top (x_i - \hat{x}_i)}{2}\right)^{-1}\right).$$
Equations for the updates
of $r$ and $y$ are omitted due to lack of space. Gibbs sampling steps are used for $\sigma_r^2$ and $\sigma_y^2$ since we
can compute their posterior distributions with conjugate priors. For the prior parameters $\{\alpha_s, \beta_s, \alpha_v, \beta_v\}$,
we use the Metropolis-Hastings steps discussed previously.
4 Results
We name our method GroupMiR (Group MiRNA target prediction). In this section we compare the
performance of GroupMiR with GenMiR++ [8], one of the popular methods for predicting miRNA-mRNA interactions. Unlike our method, it does not use grouping of mRNAs
and attempts to predict each one separately. There are two other important differences between
GenMiR++ and our method: 1) GenMiR++ only considers interactions in the candidate set, while
our method considers all possible interactions; 2) GenMiR++ accepts a binary matrix as a candidate set, while our method allows continuous-valued scores. To the best of our knowledge, GenMiR++,
which uses a regression model for the interaction between entities, is the only appropriate method²
for comparison.
4.1 Synthetic data
We generated 9 synthetic datasets. Each dataset contains 20 miRNAs and 200 mRNAs. We set
the number of groups to K = 5 and T = 10 for all datasets. The miRNA membership V is a
random matrix with at most 5 ones in each column. The mRNA membership U is a random matrix
with density 0.1. The expression values of the mRNAs are generated from the model in Equation (10) with
$\sigma^2 = 1$. The remaining random variables are sampled as follows: $y \sim N(0, 1)$, $s \sim N(1, 0.1)$
and $r \sim N(0, 0.1)$. Since sequence-based predictions of miRNA-mRNA interactions rely
on short complementary regions, they often result in many more false positives than false negatives.
We thus introduce noise into the true binary interaction matrix $C_0$ by probabilistically changing each
zero value in that matrix to 1. We tested different noise probabilities: 0.1, 0.2, 0.4 and 0.8. We use
$C = 2C_0 - 1.8$, $\alpha = 1$, and the hyperprior parameters are set to generic values. Our sampler is run
for 2000 iterations and the first 1000 iterations are discarded as burn-in.
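The false-positive noise injection described above can be sketched as follows (`add_false_positives` is an illustrative name; `C0` is a nested 0/1 list):

```python
import random

def add_false_positives(C0, p, rng=random):
    # flip each zero entry to 1 with probability p; existing ones are kept,
    # mimicking the false-positive bias of sequence-based predictions
    return [[1 if (v == 0 and rng.random() < p) else v for v in row] for row in C0]
```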
² We also tested with the original IBP (by setting W = 0). The results for both the synthetic and real data were too weak to be comparable with GenMiR++. See Appendix.
Figure 1: The posterior distribution of K.
Figure 2: An example synthetic dataset. Panels: (a) truth, (b) noise 0.1, (c) noise 0.2, (d) noise 0.4, (e) noise 0.8.
Figure 1 plots the estimated posterior distribution of K from the samples of the 9 datasets for all
noise levels. As can be seen, when the noise level is small (0.1), the distributions are correctly
centered around K = 5. With increasing noise levels, the number of groups is overestimated.
However, GroupMiR still does very well at a noise level of 0.4 and estimates for the higher noise
level are also within a reasonable range.
We estimated a posterior mean for the interaction matrix Z by first ordering the columns of each
sampled Z and then selecting the mode from the set of Z matrices. GenMiR++ returns a score value
in [0, 1] for each potential interaction. To convert these to binary interactions we tested a number
of different threshold cutoffs: 0.5, 0.7 and 0.9. Figure 3 presents a number of quality measures
for the recovered interactions by the two methods. GroupMiR achieves the best F1 score across all
noise levels greatly improving upon GenMiR++ when high noise levels are considered (a reasonable
biological scenario). In general, while the precision is very high for all noise levels, recall drops to
a lower rate. From a biological point of view, precision is probably more important than recall since
each of the predictions needs to be experimentally tested, a process that is often time consuming and
expensive.
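The precision, recall, and F1 reported here follow the standard definitions over the binary interaction matrices; a minimal sketch (names are illustrative):

```python
def interaction_scores(true_Z, pred_Z):
    # flatten the two binary matrices into aligned (true, predicted) pairs
    pairs = [(t, p) for tr, pr in zip(true_Z, pred_Z) for t, p in zip(tr, pr)]
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```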
In addition to accurately recovering interactions between miRNAs and mRNAs, GroupMiR also
correctly recovers the groupings of mRNA and miRNAs. Figure 2 presents a graphical view of
the group membership in both the true model and the model recovered by GroupMiR for one of
the synthetic datasets. As can be seen, our method is able to accurately recover the groupings of
both miRNAs and mRNAs with moderate noise levels (up to 0.4). For the higher noise level (0.8)
the method assigns more groups than in the underlying model. However, most interactions are still
correctly recovered. These results hold for all datasets we tested (not shown due to lack of space).
[Figure 3 plots omitted: panels (a) Recall, (b) Accuracy, (c) Precision and (d) F1 Score, each shown as rates/scores in [0, 1] against noise levels 0.1, 0.2, 0.4 and 0.8.]
Figure 3: Performance of GroupMiR versus GenMiR++: Each data point is a synthetic dataset.
4.2 Application to mouse lung development
To test our method on real biological data, we used a mouse lung developmental dataset [6]. In this
study, the authors used microarrays to profile both miRNAs and mRNAs at 7 time points, which
include all recognized stages of lung development. We downloaded the log ratio normalized data
collected in this study. Duplicate samples were averaged and median values of all probes were
assigned to genes. As suggested in the paper, we used ratios to the last time point, resulting in
6 values for each mRNA and miRNA. Priors for interactions between miRNAs and mRNAs were
downloaded from the MicroCosm Targets³ database. Selecting genes with variance in the top 10%
led to 219 miRNAs and 1498 mRNAs, which were used for further analysis.
We collected 5000 samples of the interaction matrix Z following a 5000 iteration burn-in period.
Convergence of the MCMC chain is determined by monitoring trace plots of K in multiple chains.
³ http://www.ebi.ac.uk/enright-srv/microcosm/
[Figure 4 network diagram omitted: six interaction groups, panels (a)–(f). Node labels include miRNA families such as miR-30 (mmu-miR-30a/b/c/d/e), miR-29 (mmu-miR-29a/c), the miR-17-92 members mmu-miR-17 and mmu-miR-20a, mmu-miR-93/106a/106b, mmu-miR-27a/b, mmu-miR-16/195, and their target mRNAs (e.g., Brca1, Brca2, Lgals3, Top2a, Aurka).]
Figure 4: Interaction network recovered by GroupMiR: Each node is a pie chart corresponding to
its expression values in the 6 time points (red: up-regulation, green: down-regulation).
Since there are many more entries for real data compared to synthetic data, we computed a consensus
for Z by reordering the columns in each sample and averaging the entries across all matrices.
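One simple way to implement this consensus is to align columns by sorting them as binary tuples (a rough stand-in for the left-ordered form of IBP matrices; the function name is illustrative) and then average:

```python
def consensus_Z(samples):
    N = len(samples[0])
    K = max(len(Z[0]) for Z in samples)
    acc = [[0.0] * K for _ in range(N)]
    for Z in samples:
        # columns as tuples, sorted so that equivalent groups line up across samples
        cols = sorted(zip(*Z), reverse=True)
        for k, col in enumerate(cols):
            for i, v in enumerate(col):
                acc[i][k] += v / len(samples)
    return acc
```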
We further analyzed the network constructed from groups with at least 90% posterior probability.
The network recovered by GroupMiR is more connected (89 nodes and 208 edges) when compared
to the network recovered by GenMiR++ (using equivalent 0.9 threshold) with 37 nodes and 34 edges
(Appendix). We used Cytoscape [16] to visualize the 6 groups of interactions in Figure 4. The
network contains several groups of co-expressed miRNAs controlling sets of mRNA, in agreement
with previous biological studies [20].
To test the function of the clusters identified, we performed Gene Ontology (GO) enrichment analysis for the mRNAs using GOstat [3]. The full results (Bonferroni corrected) are presented in the Appendix. As can be seen, several cell-division categories are enriched in cluster (b), which is expected
when dealing with a developing organ (which undergoes several rounds of cell division). Other
significant functions include organelle organization and apoptosis, which are also associated with
development (cluster (c)). We performed a similar GO enrichment analysis for the GenMiR++ results
and for K-means when using the same set of mRNAs (setting k = 6 as in our model). In both cases
we did not find any significant enrichment, indicating that only by integrating sets of miRNAs with
the mRNAs for this data can we find functional biological groupings. See the Appendix for details.
We have also looked at the miRNAs controlling the different clusters and found that in a number of
cases these agreed with prior knowledge. Cluster (a) includes 2 members of the miR-17-92 cluster,
which is known to be critical to lung organogenesis [18]. The miRNA families miR-30, miR-29, miR-20
and miR-16, all identified by our method, were also reported to play roles in the early stages of lung
organogenesis [6]. It is important to point out that we did not filter miRNAs explicitly based on
expression; these miRNAs appeared in the results based on their strong effect on mRNA expression.
5 Conclusions
We have described an extension to IBP that allows us to integrate priors on interactions between
entities with measured properties for individual entities when constructing interaction networks.
The method was successfully applied to predict miRNA-mRNA interactions, and we have shown
that it works well on both synthetic and real data. While our focus in this paper was on a biological problem, several other datasets provide similar information, including social networking data.
Our method is appropriate for such datasets and can help when attempting to construct interaction
networks based on observations.
Acknowledgments
This work is supported in part by NIH grants 1RO1GM085022, 1U01HL108642 and NSF grant
DBI-0965316 to Z.B.J.
8
References
[1] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed membership stochastic blockmodels. The Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] Z. Bar-Joseph, G.K. Gerber, T.I. Lee, et al. Computational discovery of gene modules and regulatory networks. Nature Biotechnology, 21(11):1337–1342, 2003.
[3] T. Beißbarth and T.P. Speed. GOstat: find statistically overrepresented Gene Ontologies within a group of genes. Bioinformatics, 20(9):1464, 2004.
[4] David M. Blei and Peter Frazier. Distance dependent Chinese restaurant processes. In Johannes Fürnkranz and Thorsten Joachims, editors, ICML, pages 87–94. Omnipress, 2010.
[5] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327–335, 1995.
[6] J. Dong, G. Jiang, Y.W. Asmann, S. Tomaszek, et al. MicroRNA networks in mouse lung organogenesis. PLoS ONE, 5(5):4645–4652, 2010.
[7] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 18:475, 2006.
[8] J.C. Huang, T. Babak, T.W. Corson, et al. Using expression profiling data to identify human microRNA targets. Nature Methods, 4(12):1045–1049, 2007.
[9] C. Kemp, J.B. Tenenbaum, T.L. Griffiths, et al. Learning systems of concepts with an infinite relational model. In Proc. 21st Natl Conf. Artif. Intell. (1), page 381, 2006.
[10] E. Meeds, Z. Ghahramani, R.M. Neal, and S.T. Roweis. Modeling dyadic data with binary latent factors. In Advances in NIPS, 19:977, 2007.
[11] K.T. Miller, T.L. Griffiths, and M.I. Jordan. The phylogenetic Indian buffet process: A non-exchangeable nonparametric prior for latent features. In UAI, 2008.
[12] K.T. Miller, T.L. Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems, 2009.
[13] P. Orbanz and J.M. Buhmann. Nonparametric Bayesian image segmentation. International Journal of Computer Vision, 77(1):25–45, 2008.
[14] M.E. Peter. Targeting of mRNAs by multiple miRNAs: the next step. Oncogene, 29(15):2161–2164, 2010.
[15] N. Rajewsky. microRNA target predictions in animals. Nature Genetics, 38:S8–S13, 2006.
[16] P. Shannon, A. Markiel, O. Ozier, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Research, 13(11):2498, 2003.
[17] F. Stingo, Y. Chen, M. Vannucci, et al. A Bayesian graphical modeling approach to microRNA regulatory network inference. Ann. Appl. Statist., 2010.
[18] A. Ventura, A.G. Young, M.M. Winslow, et al. Targeted deletion reveals essential and overlapping functions of the miR-17-92 family of miRNA clusters. Cell, 132:875–886, 2008.
[19] Y.J. Wang and G.Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
[20] C. Xiao and K. Rajewsky. MicroRNA control in the immune system: basic principles. Cell, 136(1):26–36, 2009.
Analysis and Improvement of
Policy Gradient Estimation
Tingting Zhao, Hirotaka Hachiya, Gang Niu, and Masashi Sugiyama
Tokyo Institute of Technology
{tingting@sg., hachiya@sg., gang@sg., sugiyama@}cs.titech.ac.jp
Abstract
Policy gradient is a useful model-free reinforcement learning approach, but it
tends to suffer from instability of gradient estimates. In this paper, we analyze
and improve the stability of policy gradient methods. We first prove that the variance of gradient estimates in the PGPE (policy gradients with parameter-based
exploration) method is smaller than that of the classical REINFORCE method
under a mild assumption. We then derive the optimal baseline for PGPE, which
contributes to further reducing the variance. We also theoretically show that PGPE
with the optimal baseline is preferable to REINFORCE with the optimal
baseline in terms of the variance of gradient estimates. Finally, we demonstrate
the usefulness of the improved PGPE method through experiments.
1 Introduction
The goal of reinforcement learning (RL) is to find an optimal decision-making policy that maximizes
the return (i.e., the sum of discounted rewards) through interaction with an unknown environment
[13]. Model-free RL is a flexible framework in which decision-making policies are directly learned
without going through explicit modeling of the environment. Policy iteration and policy search are
two popular formulations of model-free RL.
In the policy iteration approach [6], the value function is first estimated and then policies are determined based on the learned value function. Policy iteration was demonstrated to work well in many
real-world applications, especially in problems with discrete states and actions [14, 17, 1]. Although
policy iteration can naturally deal with continuous states by function approximation [8], continuous
actions are hard to handle due to the difficulty of finding maximizers of value functions with respect
to actions. Moreover, since policies are indirectly determined via value function approximation,
misspecification of value function models can lead to inappropriate policies even in very simple
problems [15, 2]. Another limitation of policy iteration especially in physical control tasks is that
control policies can vary drastically in each iteration. This causes severe instability in the physical
system and thus is not favorable in practice.
Policy search is another approach to model-free RL that can overcome the limitations of policy
iteration [18, 4, 7]. In the policy search approach, control policies are directly learned so that the
return is maximized, for example, via a gradient method (called the REINFORCE method) [18], an
EM algorithm [4], and a natural gradient method [7]. Among them, the gradient-based method is
particularly useful in physical control tasks since policies are changed gradually. This ensures the
stability of the physical system.
However, since the REINFORCE method tends to have a large variance in the estimation of the
gradient directions, its naive implementation converges slowly [9, 10, 12]. Subtraction of the optimal
baseline [16, 5] can ease this problem to some extent, but the variance of gradient estimates is
still large. Furthermore, the performance heavily depends on the choice of an initial policy, and
appropriate initialization is not straightforward in practice.
1
To cope with this problem, a novel policy gradient method called policy gradients with parameter-based exploration (PGPE) was proposed recently [12]. In PGPE, an initial policy is drawn from
a prior probability distribution, and then actions are chosen deterministically. This construction
contributes to mitigating the problem of initial policy choice and stabilizing gradient estimates.
Moreover, by subtracting a moving-average baseline, the variance of gradient estimates can be further reduced. Through robot-control experiments, PGPE was demonstrated to achieve more stable
performance than existing policy-gradient methods.
The goal of this paper is to theoretically support the usefulness of PGPE, and to further improve its
performance. More specifically, we first give bounds of the gradient estimates of the REINFORCE
and PGPE methods. Our theoretical analysis shows that gradient estimates for PGPE have smaller
variance than those for REINFORCE under a mild condition. We then show that the moving-average
baseline for PGPE adopted in the original paper [12] has excess variance; we give the optimal
baseline for PGPE that minimizes the variance, following the line of [16, 5]. We further theoretically
show that PGPE with the optimal baseline is preferable to REINFORCE with the optimal
baseline in terms of the variance of gradient estimates. Finally, the usefulness of the improved PGPE
method is demonstrated through experiments.
2 Policy Gradients for Reinforcement Learning
In this section, we review policy gradient methods.
2.1 Problem Formulation
Let us consider a Markov decision problem specified by $(S, A, P_T, P_I, r, \gamma)$, where $S$ is a set of
$\ell$-dimensional continuous states, $A$ is a set of continuous actions, $P_T(s' \mid s, a)$ is the transition probability density from current state $s$ to next state $s'$ when action $a$ is taken, $P_I(s)$ is the probability
of initial states, $r(s, a, s')$ is an immediate reward for the transition from $s$ to $s'$ by taking action $a$,
and $0 < \gamma < 1$ is the discount factor for future rewards. Let $p(a \mid s, \theta)$ be a stochastic policy with
parameter $\theta$, which represents the conditional probability density of taking action $a$ in state $s$.
Let $h = [s_1, a_1, \ldots, s_T, a_T]$ be a trajectory of length $T$. Then the return (i.e., the discounted sum
of future rewards) along $h$ is given by
$$R(h) := \sum_{t=1}^{T} \gamma^{t-1} r(s_t, a_t, s_{t+1}).$$
The expected return for parameter $\theta$ is defined by
$$J(\theta) := \int p(h \mid \theta) R(h)\, dh, \quad \text{where} \quad p(h \mid \theta) = p(s_1) \prod_{t=1}^{T} p(s_{t+1} \mid s_t, a_t)\, p(a_t \mid s_t, \theta).$$
The goal of reinforcement learning is to find the optimal policy parameter $\theta^*$ that maximizes the
expected return $J(\theta)$:
$$\theta^* := \arg\max_\theta J(\theta).$$
2.2 Review of the REINFORCE Algorithm
In the REINFORCE algorithm [18], the policy parameter $\theta$ is updated via gradient ascent:
$$\theta \leftarrow \theta + \varepsilon \nabla_\theta J(\theta),$$
where $\varepsilon$ is a small positive constant. The gradient $\nabla_\theta J(\theta)$ is given by
$$\nabla_\theta J(\theta) = \int \nabla_\theta p(h \mid \theta) R(h)\, dh = \int p(h \mid \theta) \nabla_\theta \log p(h \mid \theta) R(h)\, dh$$
$$= \int p(h \mid \theta) \sum_{t=1}^{T} \nabla_\theta \log p(a_t \mid s_t, \theta) R(h)\, dh,$$
where we used the so-called "log trick": $\nabla_\theta p(h \mid \theta) = p(h \mid \theta) \nabla_\theta \log p(h \mid \theta)$. Since $p(h \mid \theta)$ is unknown, the expectation is approximated by the empirical average:
$$\nabla_\theta \hat{J}(\theta) = \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T} \nabla_\theta \log p(a_t^n \mid s_t^n, \theta) R(h^n),$$
where $h^n := [s_1^n, a_1^n, \ldots, s_T^n, a_T^n]$ is a roll-out sample.
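A direct transcription of this empirical average for a scalar policy parameter (illustrative names; `paths` holds N roll-outs as (states, actions, rewards) and `grad_log_pi(s, a)` is the score function of the policy model, assumed supplied):

```python
def reinforce_gradient(paths, grad_log_pi, gamma=0.99):
    total = 0.0
    for states, actions, rewards in paths:
        # discounted return R(h) = sum_t gamma^(t-1) r_t
        ret = sum(gamma**t * r for t, r in enumerate(rewards))
        # summed score sum_t d/dtheta log p(a_t | s_t, theta)
        score = sum(grad_log_pi(s, a) for s, a in zip(states, actions))
        total += ret * score
    return total / len(paths)
```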
Let us employ the Gaussian policy model with parameter $\theta = (\mu, \sigma)$, where $\mu$ is the mean vector
and $\sigma$ is the standard deviation:
$$p(a \mid s, \theta) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(a - \mu^\top s)^2}{2\sigma^2}\right).$$
Then the policy gradients are explicitly given as
$$\nabla_\mu \log p(a \mid s, \theta) = \frac{a - \mu^\top s}{\sigma^2}\, s \quad \text{and} \quad \nabla_\sigma \log p(a \mid s, \theta) = \frac{(a - \mu^\top s)^2 - \sigma^2}{\sigma^3}.$$
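These closed forms can be checked against numerical differentiation; a small sketch for a one-dimensional state, so that $\mu^\top s$ reduces to the product $\mu \cdot s$ (function names are illustrative):

```python
import math

def gaussian_policy_grads(a, s, mu, sigma):
    # score functions of the Gaussian policy for scalar state/action
    d = a - mu * s
    return d * s / sigma**2, (d * d - sigma**2) / sigma**3
```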
A drawback of REINFORCE is that the variance of the above policy gradients is large [10, 11],
which leads to slow convergence.
2.3 Review of the PGPE Algorithm
One of the reasons for the large variance of policy gradients in the REINFORCE algorithm is that the
empirical average is taken at each time step, which is caused by the stochasticity of policies.
In order to mitigate this problem, another method called policy gradients with parameter-based
exploration (PGPE) was proposed recently [11]. In PGPE, a linear deterministic policy,
$$\pi(a \mid s, \theta) = \theta^\top s,$$
is adopted, and stochasticity is introduced by considering a prior distribution over the policy parameter
$\theta$ with hyper-parameter $\rho$: $p(\theta \mid \rho)$. Since the entire history $h$ is solely determined by a single sample of the
parameter $\theta$ in this formulation, it is expected that the variance of gradient estimates can be reduced.
The expected return for hyper-parameter $\rho$ is expressed as
$$J(\rho) = \iint p(h \mid \theta)\, p(\theta \mid \rho)\, R(h)\, dh\, d\theta.$$
Differentiating this with respect to $\rho$, we have
$$\nabla_\rho J(\rho) = \iint p(h \mid \theta) \nabla_\rho p(\theta \mid \rho) R(h)\, dh\, d\theta = \iint p(h \mid \theta)\, p(\theta \mid \rho) \nabla_\rho \log p(\theta \mid \rho) R(h)\, dh\, d\theta,$$
where the log trick for $\nabla_\rho p(\theta \mid \rho)$ is used. We then approximate the expectation over $h$ and $\theta$ by the
empirical average:
$$\nabla_\rho \hat{J}(\rho) = \frac{1}{N} \sum_{n=1}^{N} \nabla_\rho \log p(\theta^n \mid \rho) R(h^n),$$
where each trajectory sample $h^n$ is drawn from $p(h \mid \theta^n)$ and the parameter $\theta^n$ is drawn from
$p(\theta^n \mid \rho)$.
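For a one-dimensional parameter with the Gaussian prior $p(\theta \mid \rho)$, $\rho = (\eta, \tau)$, the estimator can be sketched as follows (`rollout_return(theta)` is an assumed stand-in for running the deterministic policy and computing $R(h)$; names are illustrative):

```python
import random

def pgpe_gradient(eta, tau, rollout_return, N=100, rng=random):
    g_eta = 0.0
    g_tau = 0.0
    for _ in range(N):
        theta = rng.gauss(eta, tau)          # theta^n ~ p(theta | rho)
        R = rollout_return(theta)            # R(h^n) from one deterministic roll-out
        g_eta += (theta - eta) / tau**2 * R  # score of eta, weighted by the return
        g_tau += ((theta - eta)**2 - tau**2) / tau**3 * R
    return g_eta / N, g_tau / N
```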
Let us employ the Gaussian prior distribution with hyper-parameter $\rho = (\eta, \tau)$ to draw the parameter
vector $\theta$, where $\eta$ is the mean vector and $\tau$ is the vector consisting of the standard deviation in each
element:
$$p(\theta_i \mid \rho_i) = \frac{1}{\tau_i\sqrt{2\pi}} \exp\left(-\frac{(\theta_i - \eta_i)^2}{2\tau_i^2}\right).$$
Then the derivatives of $\log p(\theta \mid \rho)$ with respect to $\eta_i$ and $\tau_i$ are given as follows:
$$\nabla_{\eta_i} \log p(\theta \mid \rho) = \frac{\theta_i - \eta_i}{\tau_i^2} \quad \text{and} \quad \nabla_{\tau_i} \log p(\theta \mid \rho) = \frac{(\theta_i - \eta_i)^2 - \tau_i^2}{\tau_i^3}.$$

3 Variance of Gradient Estimates
In this section, we theoretically investigate the variance of gradient estimates in REINFORCE and
PGPE.
For multi-dimensional state spaces, we consider the trace of the covariance matrix of gradient vectors.
That is, for a random vector $A = (A_1, \ldots, A_\ell)^\top$, we define
$$\mathrm{Var}(A) = \mathrm{tr}\left(\mathbb{E}\left[(A - \mathbb{E}[A])(A - \mathbb{E}[A])^\top\right]\right) = \sum_{m=1}^{\ell} \mathbb{E}\left[(A_m - \mathbb{E}[A_m])^2\right], \qquad (1)$$
where $\mathbb{E}$ denotes the expectation. Let $B = \sum_{i=1}^{\ell} \tau_i^{-2}$, where $\ell$ is the dimensionality of the state $s$.
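Equation (1) is just the sum of per-component variances; an empirical version over sampled gradient vectors can be sketched as (illustrative name):

```python
def var_trace(vectors):
    # trace of the empirical covariance: sum_m E[(A_m - E[A_m])^2]
    n = len(vectors)
    dim = len(vectors[0])
    means = [sum(v[m] for v in vectors) / n for m in range(dim)]
    return sum((v[m] - means[m])**2 for v in vectors for m in range(dim)) / n
```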
Below, we consider a subset of the following assumptions:
Assumption (A): $r(s, a, s') \in [-\beta, \beta]$ for $\beta > 0$.
Assumption (B): $r(s, a, s') \in [\alpha, \beta]$ for $0 < \alpha < \beta$.
Assumption (C): For $\delta > 0$, there exist two series $\{c_t\}_{t=1}^T$ and $\{d_t\}_{t=1}^T$ such that $\|s_t\|_2 \geq c_t$ and
$\|s_t\|_2 \leq d_t$ hold with probability at least $(1-\delta)^{1/(2N)}$, respectively, over the choice of sample
paths, where $\|\cdot\|_2$ denotes the $\ell_2$-norm.
Note that Assumption (B) is stronger than Assumption (A). Let
$$L(T) = C_T \alpha^2 - \frac{D_T \beta^2}{2\pi}, \quad C_T = \sum_{t=1}^{T} c_t^2, \quad \text{and} \quad D_T = \sum_{t=1}^{T} d_t^2.$$
First, we analyze the variance of gradient estimates in PGPE (the proofs of all the theorems are
provided in the supplementary material):
Theorem 1. Under Assumption (A), we have the following upper bounds:
h
i
h
i
2
T 2
2
T 2
) B
) B
b
b
and
Var
?
J(?)
? 2?N(1??
,
Var ?? J(?)
? ? N(1??
2
?
(1??)
(1??)2
b is proportional to ? 2 (the upper
This theorem means that the upper bound of the variance of ?? J(?)
bound of squared rewards), B (the trace of the inverse Gaussian covariance), and (1?? T )2 /(1??)2 ,
b
and is inverse-proportional to sample size N . The upper bound of the variance of ?? J(?)
is twice
T 2
b
larger than that of ?? J(?). When T goes to infinity, (1 ? ? ) will converge to 1.
Next, we analyze the variance of gradient estimates in REINFORCE:
Theorem 2. Under Assumptions (B) and (C), we have the following lower bound with probability
at least $1 - \delta$:
$$\mathrm{Var}\big[\nabla_\mu \hat{J}(\theta)\big] \geq \frac{(1-\gamma^T)^2}{N \sigma^2 (1-\gamma)^2} L(T).$$
Under Assumptions (A) and (C), we have the following upper bound with probability at least $(1-\delta)^{1/2}$:
$$\mathrm{Var}\big[\nabla_\mu \hat{J}(\theta)\big] \leq \frac{D_T \beta^2 (1-\gamma^T)^2}{N \sigma^2 (1-\gamma)^2}.$$
Under Assumption (A), we have
$$\mathrm{Var}\big[\nabla_\sigma \hat{J}(\theta)\big] \leq \frac{2T \beta^2 (1-\gamma^T)^2}{N \sigma^2 (1-\gamma)^2}.$$
The upper bounds for REINFORCE are similar to those for PGPE, but they are monotone increasing
with respect to the trajectory length $T$. The lower bound for the variance of $\nabla_\mu \hat{J}(\theta)$ will be non-trivial
if it is positive, i.e., $L(T) > 0$. This can be fulfilled, e.g., if $\alpha$ and $\beta$ satisfy $2\pi C_T \alpha^2 > D_T \beta^2$.
Deriving a lower bound of the variance of $\nabla_\sigma \hat{J}(\theta)$ is left open as future work.
Finally, we compare the variance of gradient estimates in REINFORCE and PGPE:
Theorem 3. In addition to Assumptions (B) and (C), we assume L(T ) is positive and monotone increasing with respect to T . If there exists T0 such that L(T0 ) ? ? 2 B? 2 , then we have
b
b
Var[?? J(?)]
> Var[?? J(?)]
for all T > T0 , with probability at least 1 ? ?.
The above theorem means that PGPE is more favorable than REINFORCE in terms of the variance
of gradient estimates of the mean, if trajectory length T is large. This theoretical result would
partially support the experimental success of the PGPE method [12].
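A quick numeric illustration of this comparison (our own construction; we set d_t = 1 so that D_T = T, an assumption made purely for illustration since the paper defines d_t elsewhere):

```python
# Our illustration of Theorem 3's message: REINFORCE's variance upper bound
# keeps growing with T, while PGPE's bound saturates.
def reinforce_upper_bound(T, beta=2.0, gamma=0.9, sigma=1.0, N=100):
    D_T = float(T)  # illustrative placeholder for sum_t d_t^2 with d_t = 1
    return D_T * beta**2 * (1 - gamma**T)**2 / (N * sigma**2 * (1 - gamma)**2)

def pgpe_upper_bound(T, beta=2.0, gamma=0.9, B=1.0, N=100):
    return beta**2 * (1 - gamma**T)**2 * B / (N * (1 - gamma)**2)

r = [reinforce_upper_bound(T) for T in (10, 100, 1000)]
p = [pgpe_upper_bound(T) for T in (10, 100, 1000)]
assert r[0] < r[1] < r[2]   # grows without bound in T
assert p[2] - p[1] < 1e-3   # saturates at beta^2 * B / (N (1-gamma)^2)
```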
4 Variance Reduction by Subtracting Baseline
In this section, we give a method to reduce the variance of gradient estimates in PGPE and analyze
its theoretical properties.
4.1 Basic Idea of Introducing Baseline
It is known that the variance of gradient estimates can be reduced by subtracting a baseline b; for REINFORCE and PGPE, the modified gradient estimates are given by

∇_θ Ĵ_b(θ) = (1/N) Σ_{n=1}^N (R(h^n) − b) Σ_{t=1}^T ∇_θ log p(a_t^n | s_t^n, θ),
∇_ρ Ĵ_b(ρ) = (1/N) Σ_{n=1}^N (R(h^n) − b) ∇_ρ log p(θ^n | ρ).
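As a concrete sketch of the PGPE estimate above, the following minimal implementation (our own; it assumes a scalar Gaussian prior p(θ|ρ) with mean η and standard deviation τ, whose log-density derivatives are (θ−η)/τ² and ((θ−η)²−τ²)/τ³) computes the baseline-subtracted gradient:

```python
import numpy as np

# Minimal sketch (ours) of the baseline-subtracted PGPE gradient estimate
# for a scalar Gaussian prior p(theta|rho), rho = (eta, tau).
def pgpe_gradient(thetas, returns, eta, tau, b=0.0):
    thetas, returns = np.asarray(thetas, float), np.asarray(returns, float)
    d_eta = (thetas - eta) / tau**2                 # d/d_eta log p(theta|rho)
    d_tau = ((thetas - eta)**2 - tau**2) / tau**3   # d/d_tau log p(theta|rho)
    w = returns - b                                 # baseline subtraction
    return (w * d_eta).mean(), (w * d_tau).mean()

rng = np.random.default_rng(0)
thetas = rng.normal(1.5, 1.0, size=1000)
returns = np.ones_like(thetas)               # constant return
g_eta, g_tau = pgpe_gradient(thetas, returns, eta=1.5, tau=1.0, b=1.0)
assert g_eta == 0.0 and g_tau == 0.0         # with b = R, the estimate vanishes
```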
The adaptive reinforcement baseline [18] was derived as the exponential moving average of the past experience:

b(n) = ζ R(h^{n−1}) + (1 − ζ) b(n − 1),

where 0 < ζ ≤ 1. Based on this, an empirical gradient estimate with the moving-average baseline was proposed for REINFORCE [18] and PGPE [12].
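The recursion is straightforward to implement; a minimal sketch (ours; the decay symbol ζ is our naming choice, since it is garbled in this extraction):

```python
# Sketch of the moving-average baseline b(n) = zeta*R(h^{n-1}) + (1-zeta)*b(n-1).
def update_baseline(b_prev, last_return, zeta=0.1):
    return zeta * last_return + (1.0 - zeta) * b_prev

b = 0.0
for R in [1.0, 1.0, 1.0]:
    b = update_baseline(b, R, zeta=0.5)
# After three unit returns with zeta = 0.5: 0.5, 0.75, 0.875
assert b == 0.875
```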
The above moving-average baseline contributes to reducing the variance of gradient estimates. However, it was shown [5, 16] that the moving-average baseline is not optimal; the optimal baseline is, by definition, given as the minimizer of the variance of gradient estimates with respect to the baseline. Following this formulation, the optimal baseline for REINFORCE is given as follows [10]:

b*_REINFORCE := argmin_b Var[∇_θ Ĵ_b(θ)] = E[R(h) ‖Σ_{t=1}^T ∇_θ log p(a_t|s_t, θ)‖²] / E[‖Σ_{t=1}^T ∇_θ log p(a_t|s_t, θ)‖²].
However, only the moving-average baseline was introduced to PGPE so far [12], which is suboptimal. Below, we derive the optimal baseline for PGPE, and study its theoretical properties.
4.2 Optimal Baseline for PGPE
Let b*_PGPE be the optimal baseline for PGPE that minimizes the variance:

b*_PGPE := argmin_b Var[∇_ρ Ĵ_b(ρ)].

Then the following theorem gives the optimal baseline for PGPE:

Theorem 4. The optimal baseline for PGPE is given by

b*_PGPE = E[R(h) ‖∇_ρ log p(θ|ρ)‖²] / E[‖∇_ρ log p(θ|ρ)‖²],

and the excess variance for a baseline b is given by

Var[∇_ρ Ĵ_b(ρ)] − Var[∇_ρ Ĵ_{b*_PGPE}(ρ)] = ((b − b*_PGPE)² / N) E[‖∇_ρ log p(θ|ρ)‖²].
The above theorem gives an analytic-form expression of the optimal baseline for PGPE. When the expected return R(h) and the squared norm of the characteristic eligibility ‖∇_ρ log p(θ|ρ)‖² are independent of each other, the optimal baseline reduces to the average expected return E[R(h)]. However, the optimal baseline is generally different from the average expected return. The above theorem also shows that the excess variance is proportional to the squared difference of baselines (b − b*_PGPE)² and the expected squared norm of the characteristic eligibility E[‖∇_ρ log p(θ|ρ)‖²], and is inverse-proportional to the sample size N.
Next, we analyze the contribution of the optimal baseline to the variance with respect to the mean parameter η in PGPE:
Theorem 5. If r(s, a, s′) ≥ α > 0, we have the following lower bound:

Var[∇_η Ĵ(ρ)] − Var[∇_η Ĵ_{b*_PGPE}(ρ)] ≥ α²(1−γ^T)² B / (N(1−γ)²).

Under Assumption (A), we have the following upper bound:

Var[∇_η Ĵ(ρ)] − Var[∇_η Ĵ_{b*_PGPE}(ρ)] ≤ β²(1−γ^T)² B / (N(1−γ)²).

This theorem shows that the lower and upper bounds of the excess variance are proportional to α² and β² (the bounds of squared immediate rewards), B (the trace of the inverse Gaussian covariance), and (1−γ^T)²/(1−γ)², and are inverse-proportional to the sample size N. When T goes to infinity, (1−γ^T)² will converge to 1.
4.3 Comparison with REINFORCE
Next, we analyze the contribution of the optimal baseline for REINFORCE, and compare it with that for PGPE. It was shown [5, 16] that the excess variance for a baseline b in REINFORCE is given by

Var[∇_θ Ĵ_b(θ)] − Var[∇_θ Ĵ_{b*_REINFORCE}(θ)] = ((b − b*_REINFORCE)² / N) E[‖Σ_{t=1}^T ∇_θ log p(a_t|s_t, θ)‖²].
Based on this, we have the following theorem:

Theorem 6. Under Assumptions (B) and (C), we have the following bounds with probability at least 1 − δ:

C_T α²(1−γ^T)² / (Nσ²(1−γ)²) ≤ Var[∇_θ Ĵ(θ)] − Var[∇_θ Ĵ_{b*_REINFORCE}(θ)] ≤ β²(1−γ^T)² D_T / (Nσ²(1−γ)²).
The above theorem shows that the lower and upper bounds of the excess variance are monotone increasing with respect to trajectory length T. In terms of the amount of reduction in the variance of gradient estimates, Theorems 5 and 6 show that the optimal baseline for REINFORCE contributes more than that for PGPE.
Finally, based on Theorems 1 and 5 and on Theorems 2 and 6, we have the following theorem:

Theorem 7. Under Assumptions (B) and (C), we have

Var[∇_η Ĵ_{b*_PGPE}(ρ)] ≤ (1−γ^T)² (β² − α²) B / (N(1−γ)²),
Var[∇_θ Ĵ_{b*_REINFORCE}(θ)] ≤ (1−γ^T)² (β²D_T − α²C_T) / (Nσ²(1−γ)²),

where the latter inequality holds with probability at least 1 − δ.
This theorem shows that the upper bound of the variance of gradient estimates for REINFORCE with the optimal baseline is still monotone increasing with respect to trajectory length T. On the other hand, since (1−γ^T)² ≤ 1, the above upper bound of the variance of gradient estimates in PGPE with the optimal baseline can be further upper-bounded as Var[∇_η Ĵ_{b*_PGPE}(ρ)] ≤ (β² − α²)B / (N(1−γ)²), which is independent of T. Thus, when trajectory length T is large, the variance of gradient estimates in REINFORCE with the optimal baseline may be significantly larger than the variance of gradient estimates in PGPE with the optimal baseline.
5 Experiments
In this section, we experimentally investigate the usefulness of the proposed method, PGPE with the
optimal baseline.
5.1 Illustrative Data
Let the state space S be one-dimensional and continuous, and let the initial state be randomly chosen from the standard normal distribution. The action space A is also set to be one-dimensional and continuous. The transition dynamics of the environment is set at s_{t+1} = s_t + a_t + ε, where ε ∼ N(0, 0.5²) is stochastic noise. The immediate reward is defined as r = exp(−s²/2 − a²/2) + 1, which is bounded as 1 < r ≤ 2. The discount factor is set at γ = 0.9.
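This environment is simple to simulate; the following sketch (ours; for brevity it uses a deterministic linear policy a = θs rather than the Gaussian policies used in the actual experiments) rolls out one trajectory and accumulates the discounted return:

```python
import numpy as np

# Sketch of the illustrative environment: s' = s + a + eps, eps ~ N(0, 0.5^2),
# reward r = exp(-s^2/2 - a^2/2) + 1 in (1, 2], discount gamma = 0.9.
def rollout(theta, T=10, gamma=0.9, rng=None):
    rng = rng or np.random.default_rng(0)
    s, ret = rng.standard_normal(), 0.0
    for t in range(T):
        a = theta * s                           # simple linear policy (our choice)
        r = np.exp(-s**2 / 2 - a**2 / 2) + 1
        ret += gamma**t * r
        s = s + a + 0.5 * rng.standard_normal()
    return ret

ret = rollout(theta=-1.0, T=10)
# Rewards lie in (1, 2], so the return is bracketed by the geometric sums.
lo = sum(0.9**t * 1 for t in range(10))
hi = sum(0.9**t * 2 for t in range(10))
assert lo < ret <= hi
```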
Here, we compare the following five methods: REINFORCE without any baseline, REINFORCE with the optimal baseline (OB), PGPE without any baseline, PGPE with the moving-average baseline (MB), and PGPE with the optimal baseline (OB). For fair comparison, all of these methods use the same parameter setup: the mean and standard deviation of the Gaussian distribution are set at −1.5 and 1, respectively; the number of episodic samples is set at N = 100; and the trajectory length is set at T = 10 or 50. We then calculate the variance of gradient estimates over 100 runs.
Table 1 summarizes the results, showing that the variance of REINFORCE is overall larger than that of PGPE. A notable difference between REINFORCE and PGPE is that the variance of REINFORCE
Table 1: Variance and bias of estimated gradients for the illustrative data.

                      T = 10                                 T = 50
                Variance           Bias              Variance              Bias
Method          mean     std      mean     std       mean       std       mean     std
REINFORCE      13.2570  26.9173  -0.3102  -1.5098   188.3860  278.3095   -1.8126  -5.1747
REINFORCE-OB    0.0914   0.1203   0.0672   0.1286     0.5454    0.8996   -0.2988  -0.2008
PGPE            0.9707   1.6855  -0.0691   0.1319     1.6572    3.3720   -0.1048  -0.3293
PGPE-MB         0.2127   0.3238   0.0828  -0.1295     0.4123    0.8332    0.0925  -0.2556
PGPE-OB         0.0372   0.0685  -0.0164   0.0512     0.0850    0.1815    0.0480  -0.0779

[Figure 1: Variance of gradient estimates with respect to the mean parameter through policy-update iterations for the illustrative data (variance in log10 scale, 50 iterations). Panels: (a) REINFORCE and REINFORCE-OB; (b) PGPE, PGPE-MB and PGPE-OB; (c) REINFORCE and PGPE; (d) REINFORCE-OB and PGPE-OB.]

[Figure 2: Return as functions of the number of episodic samples N for the illustrative data. Panels: (a) Good initial policy; (b) Poor initial policy.]
significantly grows as T increases, whereas that of PGPE is not influenced that much by T . This well
agrees with our theoretical analysis in Section 3. The results also show that the variance of PGPE-OB (the proposed method) is much smaller than that of PGPE-MB. REINFORCE-OB contributes
highly to reducing the variance especially when T is large, which also well agrees with our theory.
However, PGPE-OB still provides much smaller variance than REINFORCE-OB.
We also investigate the bias of gradient estimates of each method. We regard gradients estimated
with N = 1000 as true gradients, and compute the bias of gradient estimates when N = 100. The
results are also included in Table 1, showing that introduction of baselines does not increase the bias;
rather, it tends to reduce the bias.
Next, we investigate the variance of gradient estimates when policy parameters are updated over iterations. In this experiment, we set N = 10 and T = 20, and the variance is computed from 50 runs. Policies are updated over 50 iterations. In order to evaluate the variance in a stable manner, we repeat the above experiments 20 times with a random choice of the initial mean parameter from [−3.0, −0.1], and investigate the average variance of gradient estimates with respect to the mean parameter over 20 trials, in log10 scale.
The results are summarized in Figure 1. Figure 1(a) compares the variance of REINFORCE
with/without baselines, whereas Figure 1(b) compares the variance of PGPE with/without baselines.
These plots show that introduction of baselines contributes highly to the reduction of the variance
over iterations. Figure 1(c) compares the variance of REINFORCE and PGPE without baselines,
showing that PGPE provides much more stable gradient estimates than REINFORCE. Figure 1(d)
compares the variance of REINFORCE and PGPE with the optimal baselines, showing that gradient estimates obtained by PGPE-OB are much smaller than those by REINFORCE-OB. Overall, in
terms of the variance of gradient estimates, the proposed PGPE-OB compares favorably with other
methods.
Next, we evaluate returns obtained by each method. The trajectory length is fixed at T = 20, and
the maximum number of policy-update iterations is set at 50. We investigate average returns over 20
runs as functions of the number of episodic samples N. We have two experimental results for different initial policies. Figure 2(a) shows the results when the initial mean parameter is chosen randomly from [−1.6, −0.1], which tends to perform well. The graph shows that PGPE-OB performs the best,
especially when N < 5; then REINFORCE-OB follows with a small margin. PGPE-MB and plain
PGPE also work reasonably well, although they are slightly unstable due to larger variance. Plain
REINFORCE is highly unstable, which is caused by the huge variance of gradient estimates (see
Figure 1 again).
Figure 2(b) describes the results when the initial mean parameter is chosen randomly from [−3.0, −0.1], which tends to result in poorer performance. In this setup, the difference among the
compared methods is more significant than the case with good initial policies. Overall, plain REINFORCE performs very poorly, and even REINFORCE-OB tends to be outperformed by the PGPE
methods. This means that REINFORCE is very sensitive to the choice of initial policies. Among
the PGPE methods, the proposed PGPE-OB works very well and converges quickly.
5.2 Cart-Pole Balancing
Finally, we evaluate the performance of our proposed method on a more complex task: cart-pole balancing [3]. A pole is hinged to the top of a cart, and the goal is to swing the pole up by moving the cart appropriately and to keep the pole balanced at the top.
The state space S is two-dimensional and continuous, consisting of the angle θ ∈ [0, 2π] and the angular velocity θ̇ ∈ [−3π, 3π] of the pole. The action space A is one-dimensional and continuous, corresponding to the force applied to the cart (note that we cannot directly control the pole, but only indirectly through moving the cart). We use the Gaussian policy model for REINFORCE and the linear policy model for PGPE, where the state s is non-linearly transformed into a feature space via a basis-function vector. We use 20 Gaussian kernels with standard deviation σ = 0.5 as the basis functions, where the kernel centers are distributed over the following grid points: {0, π/2, π, 3π/2} × {−3π, −3π/2, 0, 3π/2, 3π}. The dynamics of the pole (i.e., the update rule of the angle and the angular velocity) is given by

θ_{t+1} = θ_t + θ̇_{t+1} Δt  and  θ̇_{t+1} = θ̇_t + ((9.8 sin(θ_t) − αwl θ̇_t² sin(2θ_t)/2 + α cos(θ_t) a_t) / (4l/3 − αwl cos²(θ_t))) Δt,

where α = 1/(W + w) and a_t is the action taken at time t. We set the problem parameters as follows: the mass of the cart W = 8 [kg], the mass of the pole w = 2 [kg], and the length of the pole l = 0.5 [m]. We set the time step Δt for the position and velocity updates at 0.01 [s] and for action selection at 0.1 [s]. The reward function is defined as r(s_t, a_t, s_{t+1}) = cos(θ_{t+1}); that is, the higher the pole is, the more reward we obtain. The initial policy is chosen randomly, and the initial-state probability density is set to be uniform. The agent collects N = 100 episodic samples with trajectory length T = 40. The discount factor is set at γ = 0.9.
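The update rule above can be implemented directly; a minimal sketch (ours), using the stated constants:

```python
import math

# Sketch of the pole dynamics from the text: W = 8 kg (cart), w = 2 kg (pole),
# l = 0.5 m, alpha = 1/(W + w), integration step dt = 0.01 s.
W, w, l, dt = 8.0, 2.0, 0.5, 0.01
alpha = 1.0 / (W + w)

def step(theta, theta_dot, a):
    num = (9.8 * math.sin(theta)
           - alpha * w * l * theta_dot**2 * math.sin(2 * theta) / 2
           + alpha * math.cos(theta) * a)
    den = 4.0 * l / 3.0 - alpha * w * l * math.cos(theta)**2
    theta_dot_next = theta_dot + num / den * dt
    theta_next = theta + theta_dot_next * dt
    return theta_next, theta_dot_next

# At the bottom equilibrium (theta = pi) with zero velocity and zero force,
# the pole stays put; the reward cos(theta) there is -1 (pole hanging down).
th, thd = step(math.pi, 0.0, 0.0)
assert abs(th - math.pi) < 1e-9 and abs(thd) < 1e-9
```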
We investigate average returns over 10 trials as functions of policy-update iterations. The return at each trial is computed over 100 test episodic samples (which are not used for policy learning). The experimental results are plotted in Figure 3, showing that the improvement of both plain REINFORCE and REINFORCE-OB tends to be slow, and all PGPE methods outperform the REINFORCE methods overall. Among the PGPE methods, the proposed PGPE-OB converges faster than the others.
Conclusion

In this paper, we analyzed and improved the stability of the policy gradient method called PGPE (policy gradients with parameter-based exploration). We theoretically showed that, under a mild condition, PGPE provides more stable gradient estimates than the classical REINFORCE method. We also derived the optimal baseline for PGPE, and theoretically showed that PGPE with the optimal baseline is preferable to REINFORCE with the optimal baseline in terms of the variance of gradient estimates. Finally, we demonstrated the usefulness of PGPE with the optimal baseline through experiments.

[Figure 3: Performance of policy learning on the cart-pole balancing task: return R over policy-update iterations (up to 300) for REINFORCE, REINFORCE-OB, PGPE, PGPE-MB, and PGPE-OB.]
Acknowledgments: TZ and GN were supported by the MEXT scholarship and the GCOE program,
HH was supported by the FIRST program, and MS was supported by MEXT KAKENHI 23120004.
References

[1] N. Abe, P. Melville, C. Pendus, C. K. Reddy, D. L. Jensen, V. P. Thomas, J. J. Bennett, G. F. Anderson, B. R. Cooley, M. Kowalczyk, M. Domick, and T. Gardinier. Optimizing debt collections using constrained reinforcement learning. In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 75–84, 2010.
[2] J. Baxter, P. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. Journal of Artificial Intelligence Research, 15:351–381, 2001.
[3] M. Bugeja. Non-linear swing-up and stabilizing control of an inverted pendulum system. In Proceedings of IEEE Region 8 EUROCON, volume 2, pages 437–441, 2003.
[4] P. Dayan and G. E. Hinton. Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271–278, 1997.
[5] E. Greensmith, P. L. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5:1471–1530, 2004.
[6] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[7] S. Kakade. A natural policy gradient. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1531–1538, Cambridge, MA, 2002. MIT Press.
[8] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[9] P. Marbach and J. N. Tsitsiklis. Approximate gradient methods in policy-space optimization of Markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111–148, 2004.
[10] J. Peters and S. Schaal. Policy gradient methods for robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.
[11] F. Sehnke, C. Osendorfer, T. Rückstieß, A. Graves, J. Peters, and J. Schmidhuber. Policy gradients with parameter-based exploration for control. In Proceedings of the 18th International Conference on Artificial Neural Networks, pages 387–396, 2008.
[12] F. Sehnke, C. Osendorfer, T. Rückstieß, A. Graves, J. Peters, and J. Schmidhuber. Parameter-exploring policy gradients. Neural Networks, 23(4):551–559, 2010.
[13] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, USA, 1998.
[14] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994.
[15] L. Weaver and J. Baxter. Reinforcement learning from state and temporal differences. Technical report, Department of Computer Science, Australian National University, 1999.
[16] L. Weaver and N. Tao. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 538–545, 2001.
[17] J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language, 21(2):231–422, 2007.
[18] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Non-parametric Group Orthogonal Matching Pursuit
for Sparse Learning with Multiple Kernels
Vikas Sindhwani and Aurélie C. Lozano
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
{vsindhw,aclozano}@us.ibm.com
Abstract
We consider regularized risk minimization in a large dictionary of Reproducing
kernel Hilbert Spaces (RKHSs) over which the target function has a sparse representation. This setting, commonly referred to as Sparse Multiple Kernel Learning
(MKL), may be viewed as the non-parametric extension of group sparsity in linear
models. While the two dominant algorithmic strands of sparse learning, namely
convex relaxations using l1 norm (e.g., Lasso) and greedy methods (e.g., OMP),
have both been rigorously extended for group sparsity, the sparse MKL literature
has so far mainly adopted the former with mild empirical success. In this paper, we
close this gap by proposing a Group-OMP based framework for sparse MKL. Unlike l1 -MKL, our approach decouples the sparsity regularizer (via a direct l0 constraint) from the smoothness regularizer (via RKHS norms), which leads to better
empirical performance and a simpler optimization procedure that only requires a
black-box single-kernel solver. The algorithmic development and empirical studies are complemented by theoretical analyses in terms of Rademacher generalization bounds and sparse recovery conditions analogous to those for OMP [27] and
Group-OMP [16].
1 Introduction
Kernel methods are widely used to address a variety of learning problems including classification, regression, structured prediction, data fusion, clustering and dimensionality reduction [22, 23]. However, choosing an appropriate kernel and tuning the corresponding hyper-parameters can be highly
challenging, especially when little is known about the task at hand. In addition, many modern problems involve multiple heterogeneous data sources (e.g. gene functional classification, prediction of
protein-protein interactions) each necessitating the use of a different kernel. This strongly suggests
avoiding the risks and limitations of single kernel selection by considering flexible combinations of
multiple kernels. Furthermore, it is appealing to impose sparsity to discard noisy data sources. As
several papers have provided evidence in favor of using multiple kernels (e.g. [19, 14, 7]), the multiple kernel learning problem (MKL) has generated a large body of recent work [13, 5, 24, 33], and
become the focal point of the intersection between non-parametric function estimation and sparse
learning methods traditionally explored in linear settings.
Given a convex loss function, the MKL problem is usually formulated as the minimization of empirical risk together with a mixed norm regularizer, e.g., the square of the sum of individual RKHS
norms, or variants thereof, that have a close relationship to the Group Lasso criterion [30, 2]. Equivalently, this formulation may be viewed as simultaneous optimization of both the non-negative convex combination of kernels, as well as prediction functions induced by this combined kernel. In
constraining the combination of kernels, the l1 penalty is of particular interest as it encourages sparsity in the supporting kernels, which is highly desirable when the number of kernels considered is
large. The MKL literature has rapidly evolved along two directions: one concerns scalability of optimization algorithms beyond the early pioneering proposals based on Semi-definite programming
or Second-order Cone programming [13, 5] to simpler and more efficient alternating optimization
schemes [20, 29, 24]; while the other concerns the use of lp norms [10, 29] to construct complex
non-sparse kernel combinations with the goal of outperforming 1-norm MKL which, as reported in
several papers, has demonstrated mild success in practical applications.
The class of Orthogonal Matching Pursuit techniques has recently received considerable attention, as
a competitive alternative to Lasso. The basic OMP algorithm originates from the signal-processing
community and is similar to forward greedy feature selection, except that it performs re-estimation
of the model parameters in each iteration, which has been shown to contribute to improved accuracy.
For linear models, some strong theoretical performance guarantees and empirical support have been
provided for OMP [31] and its extension for variable group selection, Group-OMP [16]. In particular
it was shown in [25, 9] that OMP and Lasso exhibit competitive theoretical performance guarantees.
It is therefore desirable to investigate the use of Matching Pursuit techniques in the MKL framework
and whether one may be able to improve upon existing MKL methods.
Our contributions in this paper are as follows. We propose a non-parametric kernel-based extension
to Group-OMP [16]. In terms of the feature space (as opposed to function space) perspective of
kernel methods, this allows Group-OMP to handle groups that can potentially contain infinite features. By adding regularization in Group-OMP, we allow it to handle settings where the sample size
might be smaller than the number of features in any group. Rather than imposing a mixed l1/RKHS-norm regularizer as in group-Lasso based MKL, a group-OMP based approach allows us to consider
the exact sparse kernel selection problem via l0 regularization instead. Note that in contrast to the
group-lasso penalty, the l0 penalty by itself has no effect on the smoothness of each individual component. This allows for a clear decoupling between the role of the smoothness regularizer (namely,
an RKHS regularizer) and the sparsity regularizer (via the l0 penalty). Our greedy algorithms allow
for simple and flexible optimization schemes that only require a black-box solver for standard learning algorithms. In this paper, we focus on multiple kernel learning with Regularized least squares
(RLS). We provide a bound on the Rademacher complexity of the hypothesis sets considered by
our formulation. We derive conditions analogous to OMP [27] and Group-OMP [16] to guarantee
the ?correctness? of kernel selection. We close this paper with empirical studies on simulated and
real-world datasets that confirm the value of our methods.
2 Learning Over an RKHS Dictionary
In this section, we set up some notation and give a brief background before introducing our main objective function and describing our algorithm in the next section. Let H₁, . . . , H_N be a collection of Reproducing Kernel Hilbert Spaces with associated kernel functions k₁, . . . , k_N defined on the input space X ⊆ R^d. Let H denote the sum space of functions,

H = H₁ ⊕ H₂ ⊕ · · · ⊕ H_N = {f : X → R | f(x) = Σ_{j=1}^N f_j(x), x ∈ X, f_j ∈ H_j, j = 1 . . . N}.
Let us equip this space with the following l_p norms,

‖f‖_{l_p(H)} = inf { (Σ_{j=1}^N ‖f_j‖^p_{H_j})^{1/p} : f(x) = Σ_{j=1}^N f_j(x), x ∈ X, f_j ∈ H_j, j = 1 . . . N }.   (1)
It is now natural to consider a regularized risk minimization problem over such an RKHS dictionary, given a collection of training examples {x_i, y_i}_{i=1}^l,

argmin_{f ∈ H} (1/l) Σ_{i=1}^l V(y_i, f(x_i)) + λ‖f‖²_{l_p(H)},   (2)

where V(·, ·) is a convex loss function such as the squared loss in the Regularized Least Squares (RLS) algorithm or the hinge loss in the SVM method. If this problem again has elements of an RKHS structure, then, via the Representer Theorem, it can again be reduced to a finite-dimensional problem and efficiently solved.
Let q = p/(2 − p) and let us define the q-convex hull of the set of kernel functions to be the following,

co_q(k₁ . . . k_N) = { k_μ : X × X → R | k_μ(x, z) = Σ_{j=1}^N μ_j k_j(x, z), Σ_{j=1}^N μ_j^q = 1, μ_j ≥ 0 },

where μ ∈ R^N. It is easy to see that the non-negative combination of kernels, k_μ, is itself a valid kernel with an associated RKHS H_{k_μ}. With this definition, [17] show the following,
‖f‖_{l_p(H)} = inf { ‖f‖_{H_{k_μ}} : k_μ ∈ co_q(k₁ . . . k_N) }.   (3)
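A small numeric sanity check (our own) of the construction behind co_q: any non-negative combination of Gram matrices is again a symmetric positive semi-definite kernel matrix.

```python
import numpy as np

# Combine a linear and a Gaussian Gram matrix with non-negative weights and
# verify the result is still a valid (symmetric PSD) kernel matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
K1 = X @ X.T                                                      # linear kernel
K2 = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # Gaussian kernel
mu = np.array([0.3, 0.7])                                         # mu_j >= 0
K = mu[0] * K1 + mu[1] * K2
eigs = np.linalg.eigvalsh(K)
assert np.allclose(K, K.T) and eigs.min() > -1e-10
```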
This relationship connects Tikhonov regularization with l_p norms over H to regularization over RKHSs parameterized by the kernel functions k_μ. This leads to a large family of "multiple kernel learning" algorithms (whose variants are also sometimes referred to as lq-MKL) where the basic idea is to solve an equivalent problem,
argmin_{f ∈ H_{k_μ}, μ ∈ Δ_q} (1/l) Σ_{i=1}^l V(y_i, f(x_i)) + λ‖f‖²_{H_{k_μ}},   (4)
where ?q = {? ? RN : k?kq = 1, ?nj=1 ?j ? 0}. For a fixed ?, the optimization over f ? Hk? is
recognizable as an RKHS problem for which a standard black box solver may be used. The weights $\theta$ may then be optimized in an alternating minimization scheme, although several other optimization procedures may also be used (see e.g., [4]). The case where p = 1 is of particular interest in the setting when the size of the RKHS dictionary is large but the unknown target function can be approximated in a much smaller number of RKHSs. This leads to a large family of sparse multiple kernel learning algorithms that have a strong connection to the Group Lasso [2, 20, 29].
3 Multiple Kernel Learning with Group Orthogonal Matching Pursuit
Let us recall the $l_0$ pseudo-norm, which is the cardinality of the sparsest representation of $f$ in the dictionary, $\|f\|_{l_0(\mathcal{H})} = \min\{|J| : f = \sum_{j \in J} f_j\}$. We now pose the following exact sparse kernel selection problem,
$$\arg\min_{f \in \mathcal{H}} \frac{1}{l}\sum_{i=1}^{l} V(y_i, f(x_i)) + \lambda \|f\|^2_{l^2(\mathcal{H})} \quad \text{subject to } \|f\|_{l_0(\mathcal{H})} \le s \qquad (5)$$
It is important to note the following: when using a dictionary of universal kernels, e.g., Gaussian kernels with different bandwidths, the presence of the regularization term $\|f\|^2_{l^2(\mathcal{H})}$ is critical (i.e., $\lambda > 0$) since otherwise the labeled data can be perfectly fit by any single kernel. In other words, the kernel selection problem is ill-posed. While conceptually simple, our formulation is quite different from those proposed earlier since the role of a smoothness regularizer (via the $\|f\|^2_{l^2(\mathcal{H})}$ penalty) is decoupled from the role of a sparsity regularizer (via the constraint $\|f\|_{l_0(\mathcal{H})} \le s$). Moreover, the latter is imposed directly rather than through a p = 1 penalty, making the spirit of our approach closer to Group Orthogonal Matching Pursuit (Group-OMP [16]) where groups are formed by very high-dimensional (infinite for Gaussian kernels) feature spaces associated with the kernels. It has been
observed in recent work [10, 29] on $l_1$-MKL that sparsity alone does not lead to improvements in real-world empirical tasks, and hence several methods have been proposed to explore $l_q$-norm MKL with q > 1 in Eqn. 4, making MKL depart away from sparsity in kernel combinations. By contrast, we note that as $q \to \infty$, $p \to 2$. Our approach gives a direct knob both on smoothness (via $\lambda$) and sparsity (via $s$), with a solution path along these dimensions that differs from that offered by Group-Lasso based $l_q$-MKL as q is varied. By combining the $l_0$ pseudo-norm with RKHS norms, our method is conceptually reminiscent of the elastic net [32] (also see [26, 12, 21]). If kernels arise from different subsets of input variables, our approach is also related to sparse additive models [18].
Our algorithm, MKL-GOMP, is outlined below for regularized least squares. Extensions for other loss functions, e.g., hinge loss for SVMs, can also be similarly derived. In the description of the algorithm, our notation is as follows: for any function $f$ belonging to an RKHS $\mathcal{F}_k$ with kernel function $k(\cdot,\cdot)$, we denote the regularized objective function as $R_\lambda(f, y) = \frac{1}{l}\sum_{i=1}^{l} (y_i - f(x_i))^2 + \lambda \|f\|^2_{\mathcal{F}_k}$, where $\|\cdot\|_{\mathcal{F}}$ denotes the RKHS norm. Recall that the minimizer $f^\star = \arg\min_{f \in \mathcal{F}} R_\lambda(f, y)$ is given by solving the linear system $\alpha = (K + \lambda l I)^{-1} y$, where $K$ is the gram matrix of the kernel on the labeled data, and by setting $f^\star(x) = \sum_{i=1}^{l} \alpha_i k(x, x_i)$. Moreover, the objective value achieved by the minimizer is $R_\lambda(f^\star, y) = \lambda y^T (K + \lambda l I)^{-1} y$. Note that MKL-GOMP should not be confused with Kernel Matching Pursuit [28], whose goal is different: it is designed to sparsify $\alpha$ in a single-kernel setting. The MKL-GOMP procedure iteratively expands the hypothesis space, $\mathcal{H}_{\mathcal{G}^{(1)}} \subseteq \mathcal{H}_{\mathcal{G}^{(2)}} \subseteq \ldots \subseteq \mathcal{H}_{\mathcal{G}^{(i)}}$, by greedily selecting kernels from a given dictionary, where $\mathcal{G}^{(i)} \subseteq \{1 \ldots N\}$ is a subset of indices and $\mathcal{H}_{\mathcal{G}} = \bigcup_{j \in \mathcal{G}} \mathcal{H}_j$. Note that each $\mathcal{H}_{\mathcal{G}}$ is an RKHS with kernel $\sum_{j \in \mathcal{G}} k_j$ (see Section 6 in [1]). The selection criterion is the best improvement, $I(f^{(i)}, \mathcal{H}_j)$, given by a new hypothesis space $\mathcal{H}_j$ in reducing the norm of the current residual $r^{(i)} = y - f^{(i)}$, where $f^{(i)} = [f^{(i)}(x_1) \ldots f^{(i)}(x_l)]^T$, by finding the best regularized (smooth) approximation. Note
that since $\min_{g \in \mathcal{H}_j} R_\lambda(g, r) \le R_\lambda(0, r) \le \|r\|_2^2$, the value of the improvement function,
$$I(f^{(i)}, \mathcal{H}_j) = \|r^{(i)}\|_2^2 - \min_{g \in \mathcal{H}_j} R_\lambda(g, r^{(i)})$$
is always non-negative. Once a kernel is selected, the function is re-estimated by learning in $\mathcal{H}_{\mathcal{G}^{(i)}}$. Note that since $\mathcal{H}_{\mathcal{G}}$ is an RKHS whose kernel function is the sum $\sum_{j \in \mathcal{G}} k_j$, we can use a simple RLS linear system solver for refitting. Unlike group-Lasso based MKL, we do not need an iterative kernel reweighting step, which essentially arises as a mechanism to transform the less convenient group sparsity norms into reweighted squared RKHS norms. MKL-GOMP converges when the best improvement is no better than $\epsilon$.
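The two closed-form facts used above, the minimizer $\alpha = (K + \lambda l I)^{-1} y$ and the optimal value $R_\lambda(f^\star, y) = \lambda y^T (K + \lambda l I)^{-1} y$, can be checked numerically. A sketch with made-up data (assuming, as in the text, the 1/l-averaged squared loss inside $R_\lambda$):

```python
import numpy as np

rng = np.random.default_rng(1)
l, lam = 30, 0.05
Z = rng.normal(size=(l, 5))
K = Z @ Z.T                      # a valid gram matrix
y = rng.normal(size=l)

alpha = np.linalg.solve(K + lam * l * np.eye(l), y)
f = K @ alpha                    # f*(x_i) on the training points

# R_lam(f*, y) = (1/l) * sum_i (y_i - f_i)^2 + lam * ||f*||^2,
# with ||f*||^2 = alpha^T K alpha for f* = sum_i alpha_i k(., x_i).
obj_direct = np.mean((y - f) ** 2) + lam * alpha @ K @ alpha
obj_closed = lam * y @ np.linalg.solve(K + lam * l * np.eye(l), y)
```

The two objective values agree, and perturbing $\alpha$ can only increase the direct objective, confirming optimality.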
• Input: Data matrix $X = [x_1 \ldots x_l]^T$, label vector $y \in \mathbb{R}^l$, kernel dictionary $\{k_j(\cdot,\cdot)\}_{j=1}^{N}$, precision $\epsilon > 0$
• Output: Selected kernels $\mathcal{G}^{(i)}$ and a function $f^{(i)} \in \mathcal{H}_{\mathcal{G}^{(i)}}$
• Initialization: $\mathcal{G}^{(0)} = \emptyset$, $f^{(0)} = 0$, set residual $r^{(0)} = y$
• for $i = 0, 1, 2, \ldots$
  1. Kernel Selection: For all $j \notin \mathcal{G}^{(i)}$, set:
     $I(f^{(i)}, \mathcal{H}_j) = \|r^{(i)}\|_2^2 - \min_{g \in \mathcal{H}_j} R_\lambda(g, r^{(i)}) = r^{(i)T}\big(I - \lambda(K_j + \lambda l I)^{-1}\big) r^{(i)}$
     Pick $j^{(i)} = \arg\max_{j \notin \mathcal{G}^{(i)}} I(f^{(i)}, \mathcal{H}_j)$
  2. Convergence Check: if $I(f^{(i)}, \mathcal{H}_{j^{(i)}}) \le \epsilon$, break
  3. Refitting: Set $\mathcal{G}^{(i+1)} = \mathcal{G}^{(i)} \cup \{j^{(i)}\}$. Set $f^{(i+1)}(x) = \sum_{j=1}^{l} \alpha_j k(x, x_j)$ where $k = \sum_{j \in \mathcal{G}^{(i+1)}} k_j$ and $\alpha = \big(\sum_{j \in \mathcal{G}^{(i+1)}} K_j + \lambda l I\big)^{-1} y$
  4. Update Residual: $r^{(i+1)} = y - f^{(i+1)}$ where $f^{(i+1)} = [f^{(i+1)}(x_1) \ldots f^{(i+1)}(x_l)]^T$.
end
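The loop above translates almost line for line into NumPy when the gram matrices are precomputed. The following is our own illustrative sketch, not the authors' implementation; the toy data and parameter choices are made up:

```python
import numpy as np

def mkl_gomp(grams, y, lam, eps=1e-6, max_iter=None):
    """Greedy kernel selection for regularized least squares (sketch).

    grams: list of l x l gram matrices K_j.
    Returns the selected indices, dual coefficients and final residual.
    """
    l = len(y)
    Id = np.eye(l)
    selected, alpha, r = [], np.zeros(l), y.astype(float).copy()
    max_iter = len(grams) if max_iter is None else max_iter
    for _ in range(max_iter):
        best_j, best_gain = None, eps
        for j in range(len(grams)):
            if j in selected:
                continue
            # I(f, H_j) = r^T (I - lam*(K_j + lam*l*I)^{-1}) r
            gain = r @ (r - lam * np.linalg.solve(grams[j] + lam * l * Id, r))
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:          # convergence: best improvement <= eps
            break
        selected.append(best_j)
        K = sum(grams[j] for j in selected)   # refit in the sum-kernel RKHS
        alpha = np.linalg.solve(K + lam * l * Id, y)
        r = y - K @ alpha
    return selected, alpha, r

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)  # 2 informative features
grams = [np.outer(X[:, j], X[:, j]) for j in range(6)]     # one linear kernel each
selected, alpha, r = mkl_gomp(grams, y, lam=0.01, max_iter=2)
```

On this toy problem the two informative feature kernels are picked first and the residual norm shrinks substantially after refitting.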
Remarks: Note that our algorithm can be applied to multivariate problems with group structure among outputs, similar to Multivariate Group-OMP [15]. In particular, in our experiments on multiclass datasets, we treat all outputs as a single group and evaluate each kernel for selection based on how well the total residual is reduced across all outputs simultaneously. Kernel matrices are normalized to unit trace or to have uniform variance of data points in their associated feature spaces, as in [10, 33]. In practice, we can also monitor error on a validation set to decide the optimal degree of sparsity. For efficiency, we can precompute the matrices $Q_j = (I - \lambda(K_j + \lambda l I)^{-1})^{1/2}$ so that $I(f^{(i)}, \mathcal{H}_j) = \|Q_j r\|_2^2$ can be very quickly evaluated at selection time, and/or reduce the search space by considering a random subsample of the dictionary.
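The shortcut can be sketched as follows. The matrix $M_j = I - \lambda(K_j + \lambda l I)^{-1}$ is symmetric positive semi-definite (its eigenvalues are $1 - \lambda/(\mu + \lambda l)$ for eigenvalues $\mu \ge 0$ of $K_j$), so its principal square root $Q_j$ exists, and $\|Q_j r\|_2^2$ reproduces the selection score (our illustrative code and variable names):

```python
import numpy as np

rng = np.random.default_rng(1)
l, lam = 25, 0.1
Z = rng.normal(size=(l, 4))
K = Z @ Z.T                      # one gram matrix from the dictionary

M = np.eye(l) - lam * np.linalg.inv(K + lam * l * np.eye(l))
w, V = np.linalg.eigh(M)         # M is symmetric PSD
Q = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T   # Q = M^(1/2)

r = rng.normal(size=l)           # a residual vector
gain_fast = np.linalg.norm(Q @ r) ** 2   # ||Q r||_2^2
gain_direct = r @ M @ r          # r^T (I - lam*(K + lam*l*I)^{-1}) r
```

Since Q depends only on the kernel and not on the residual, it can be computed once per dictionary entry and reused at every selection step.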
4 Theoretical Analysis
Our analysis is composed of two parts. In the first part, we establish generalization bounds for the hypothesis spaces considered by our formulation, based on the notion of Rademacher complexity. The second component of our theoretical analysis consists of deriving conditions under which MKL-GOMP can recover good solutions. While the first part can be seen as characterizing the "statistical convergence" of our method, the second part characterizes its "numerical convergence" as an optimization method, and is required to complement the first part. This is because matching pursuit methods can be deemed to solve an exact sparse problem approximately, while regularized methods (e.g. $l_1$-norm MKL) solve an approximate problem exactly. We therefore need to show that MKL-GOMP recovers a solution that is close to an optimum solution of the exact sparse problem.
4.1 Rademacher Bounds
Theorem 1. Consider the hypothesis space of sufficiently sparse and smooth functions¹,
$$\mathcal{H}_{\tau,s} = \big\{ f \in \mathcal{H} : \|f\|^2_{l^2(\mathcal{H})} \le \tau,\; \|f\|_{l_0(\mathcal{H})} \le s \big\}$$
Let $\delta \in (0,1)$ and $\kappa = \sup_{x \in \mathcal{X},\, j=1\ldots N} k_j(x,x)$. Let $\rho$ be any probability distribution on $(x,y) \in \mathcal{X}\times\mathbb{R}$ satisfying $|y| \le M$ almost surely, and let $\{x_i, y_i\}_{i=1}^{l}$ be randomly sampled according to $\rho$. Define $\hat{f} = \arg\min_{f \in \mathcal{H}_{\tau,s}} \frac{1}{l}\sum_{i=1}^{l} (y_i - f(x_i))^2$ to be the empirical risk minimizer and $f^\star = \arg\min_{f \in \mathcal{H}_{\tau,s}} R(f)$ to be the true risk minimizer in $\mathcal{H}_{\tau,s}$, where $R(f) = \mathbb{E}_{(x,y)\sim\rho} (y - f(x))^2$ denotes the true risk. Then, with probability at least $1-\delta$ over random draws of samples of size $l$,
$$R(\hat{f}) \le R(f^\star) + 8L\sqrt{\frac{s\tau\kappa}{l}} + 4L^2\sqrt{\frac{\log(3/\delta)}{2l}} \qquad (6)$$
where $\|y - f\|_\infty \le L = (M + \sqrt{s\tau\kappa})$.
The proof is given in supplementary material, but can also be reasoned as follows. In the standard single-RKHS case, the Rademacher complexity can be upper bounded by a quantity that is proportional to the square root of the trace of the Gram matrix, which is further upper bounded by $\sqrt{l\kappa}$. In our case, any collection of s-sparse functions from a dictionary of N RKHSs reduces to a single RKHS whose kernel is the sum of s base kernels, and hence the corresponding trace can be bounded by $ls\kappa$ for all possible subsets of size s. Once it is established that the empirical Rademacher complexity of $\mathcal{H}_{\tau,s}$ is upper bounded by $\sqrt{\frac{s\tau\kappa}{l}}$, the generalization bound follows from well-known results [6] tailored to regularized least squares regression with bounded target variable.
For $l_1$-norm MKL, in the context of margin-based loss functions, Cortes et al., 2010 [8] bound the Rademacher complexity as $\sqrt{\frac{ce\lceil\log N\rceil \tau\kappa}{l}}$, where $\lceil\cdot\rceil$ is the ceiling function that rounds to the next integer, $e$ is the exponential and $c = \frac{23}{22}$. Using VC-based lower-bound arguments, they point out that the $\sqrt{\log N}$ dependence on N is essentially optimal. By contrast, our greedy approach with sequential regularized risk minimization imposes direct control over degree of sparsity as well as smoothness, and hence the Rademacher complexity in our case is independent of N. If $s = O(\log N)$, the bounds are similar. A critical difference between $l_1$-norm MKL and sparse greedy approximations, however, is that the former is convex and hence the empirical risk can be minimized exactly in the hypothesis space whose complexity is bounded by Rademacher analysis. This is not true in our case, and therefore, to complement Rademacher analysis, we need conditions under which good solutions can be recovered.
4.2 Exact Recovery Conditions in Noiseless Settings
We now assume that the regression function $f_\rho(x) = \int y\, d\rho(y|x)$ is sparse, i.e., $f_\rho \in \mathcal{H}_{\mathcal{G}_{good}}$ for some subset $\mathcal{G}_{good}$ of $s$ "good" kernels, and that it is sufficiently smooth in the sense that for some $\lambda > 0$, given sufficient samples, the empirical minimizer $\hat{f} = \arg\min_{f \in \mathcal{H}_{\mathcal{G}_{good}}} R_\lambda(f, y)$ gives near optimal generalization as per Theorem 1. In this section our main concern is to characterize Group-OMP like conditions under which MKL-GOMP will be able to learn $f_\rho$ by recovering the support $\mathcal{G}_{good}$ exactly.

¹ Note that Tikhonov regularization using a penalty term $\lambda\|\cdot\|^2$, and Ivanov regularization, which uses a ball constraint $\|\cdot\|^2 \le \tau$, return identical solutions for some one-to-one correspondence between $\lambda$ and $\tau$.
Let us denote $r^{(i)} = f_\rho - f^{(i)}$ as the residual function at step $i$ of the algorithm. Initially, $r^{(0)} = f_\rho \in \mathcal{H}_{\mathcal{G}_{good}}$. Our argument is inductive: if at any step $i$, $r^{(i)} \in \mathcal{H}_{\mathcal{G}_{good}}$ and we can always guarantee that $\max_{j \in \mathcal{G}_{good}} I(f^{(i)}, \mathcal{H}_j) > \max_{j \notin \mathcal{G}_{good}} I(f^{(i)}, \mathcal{H}_j)$, i.e., a good kernel offers better greedy improvement, then it is clear that the algorithm correctly expands the hypothesis space and never makes a mistake. Without loss of generality, let us rearrange the dictionary so that $\mathcal{G}_{good} = \{1 \ldots s\}$. For any function $f \in \mathcal{H}_{\mathcal{G}_{good}}$, we now wish to derive the following upper bound,
$$\frac{\|(I(f, \mathcal{H}_{s+1}) \ldots I(f, \mathcal{H}_N))\|_\infty}{\|(I(f, \mathcal{H}_1) \ldots I(f, \mathcal{H}_s))\|_\infty} \le \rho_{\mathcal{H}}(\mathcal{G}_{good})^2 \qquad (7)$$
Clearly, a sufficient condition for exact recovery is $\rho_{\mathcal{H}}(\mathcal{G}_{good}) < 1$.
We need some notation to state our main result. Let $s = |\mathcal{G}_{good}|$, i.e., the number of good kernels. For any matrix $A \in \mathbb{R}^{ls \times l(N-s)}$, let $\|A\|_{(2,1)}$ denote the matrix norm induced by the following vector norms: for any vector $u = [u_1 \ldots u_s] \in \mathbb{R}^{ls}$ define $\|u\|_{(2,1)} = \sum_{i=1}^{s} \|u_i\|_2$; and similarly, for any vector $v = [v_1 \ldots v_{N-s}] \in \mathbb{R}^{l(N-s)}$ define $\|v\|_{(2,1)} = \sum_{i=1}^{N-s} \|v_i\|_2$. Then, $\|A\|_{(2,1)} = \sup_{v \in \mathbb{R}^{l(N-s)}} \frac{\|Av\|_{(2,1)}}{\|v\|_{(2,1)}}$. We can now state the following:
Theorem 2. Given the kernel dictionary $\{k_j(\cdot,\cdot)\}_{j=1}^{N}$ with associated gram matrices $\{K_j\}_{j=1}^{N}$ over the labeled data, MKL-GOMP correctly recovers the good kernels, i.e., $\mathcal{G}^{(s)} = \mathcal{G}_{good}$, if
$$\rho_{\mathcal{H}}(\mathcal{G}_{good}) = \|C_{\lambda,\mathcal{H}}(\mathcal{G}_{good})\|_{(2,1)} < 1$$
where $C_{\lambda,\mathcal{H}}(\mathcal{G}_{good}) \in \mathbb{R}^{ls \times l(N-s)}$ is a coherence matrix whose $(i,j)$-th block of size $l \times l$, $i \in \mathcal{G}_{good}$, $j \notin \mathcal{G}_{good}$, is given by,
$$C_{\lambda,\mathcal{H}}(\mathcal{G}_{good})_{i,j} = K_{\mathcal{G}_{good}}\, Q_i \Big(\sum_{k \in \mathcal{G}_{good}} Q_k K^2_{\mathcal{G}_{good}} Q_k\Big)^{-1} Q_j\, K_{\mathcal{G}_{good}} \qquad (8)$$
where $K_{\mathcal{G}_{good}} = \sum_{j \in \mathcal{G}_{good}} K_j$ and $Q_j = (I - \lambda(K_j + \lambda l I)^{-1})^{1/2}$, $j = 1 \ldots N$.
The proof is given in supplementary material. This result is analogous to sparse recovery conditions for OMP and $l_1$ methods and their (linear) group counterparts. In the noiseless setting, Tropp [27] gives an exact recovery condition of the form $\|X_{good}^{\dagger} X_{bad}\|_1 < 1$, where $X_{good}$ and $X_{bad}$ refer to the restriction of the data matrix to good and bad features, and $\|\cdot\|_1$ refers to the $l_1$ induced matrix norm. Intriguingly, the same paper shows that this condition is also sufficient for the Basis Pursuit $l_1$ minimization problem. For Group-OMP [16], the condition generalizes to involve a group sensitive matrix norm on the same matrix objects. Likewise, Bach [2] generalizes the Lasso variable selection consistency conditions to apply to Group Lasso and then further to non-parametric $l_1$-MKL. The above result is similar in spirit. A stronger sufficient condition can be derived by requiring $\|Q_j K_{\mathcal{G}_{good}}\|_2$ to be sufficiently small for all $j \notin \mathcal{G}_{good}$. Intuitively, this means that smooth functions in $\mathcal{H}_{\mathcal{G}_{good}}$ cannot be well approximated by using smooth functions induced by the "bad" kernels, so that MKL-GOMP is never led to making a mistake.
5 Empirical Studies
We report empirical results on a collection of simulated datasets and 3 classification problems from
computational cell biology. In all experiments, as in [10, 33], candidate kernels are normalized
multiplicatively to have uniform variance of data points in their associated feature spaces.
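One common instance of this multiplicative normalization, and the one we sketch here (an assumption on our part; [10, 33] should be consulted for the exact scheme), rescales each gram matrix so that the variance of the data points in its feature space equals one, using var = tr(K)/l − 1ᵀK1/l²:

```python
import numpy as np

def normalize_gram(K):
    """Rescale K so the feature-space variance of the data points is 1.

    variance = mean_i ||phi(x_i) - mean_phi||^2 = tr(K)/l - sum(K)/l^2
    """
    l = K.shape[0]
    var = np.trace(K) / l - K.sum() / l ** 2
    return K / var

rng = np.random.default_rng(2)
Z = rng.normal(size=(30, 8))
K = normalize_gram(Z @ Z.T)
l = K.shape[0]
```

Because the scaling is a single positive scalar per kernel, it preserves positive semi-definiteness and the relative geometry within each feature space.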
5.1 Adaptability to Data Sparsity - Simulated Setting
We adapt the experimental setting proposed by [10], where the sparsity of the target function is explicitly controlled and the optimal subset of kernels is varied from requiring the entire dictionary to requiring a single kernel. Our goal is to study the solution paths offered by MKL-GOMP in comparison to lq-norm MKL. For consistency, we use squared loss in all experiments (lq-MKL with SVM hinge loss behaves similarly). We implemented
Figure 1: Simulated Setting: Adaptability to Data Sparsity. [Three panels plot test error, the percentage of kernels selected, and the chosen value of λ against ν(β), the fraction of noise kernels (in %, over 0, 44, 66, 82, 92, 98), for 1-norm, 4/3-norm, 2-norm, 4-norm and ∞-norm (=RLS) MKL, MKL-GOMP, and the Bayes error.]
lq-norm MKL for regularized least squares (RLS) using an alternating minimization scheme adapted from [17, 29]. Different binary classification datasets³ with 50 labeled examples are randomly generated by sampling the two classes from 50-dimensional isotropic Gaussian distributions with equal covariance matrices (identity) and equal but opposite means $\mu_1 = 1.75\,\beta/\|\beta\|$ and $\mu_2 = -\mu_1$, where $\beta$ is a binary vector encoding the true underlying sparsity. The fraction of zero components in $\beta$ is a measure for the feature sparsity of the learning problem. For each dataset, a linear kernel (normalized as in [10]) is generated from each feature and the resulting dictionary is input to MKL-GOMP and lq-norm MKL. For each level of sparsity, a training set of size 50 and validation and test sets of size 10000 are generated 10 times and average classification errors are reported. For each run, the validation error is monitored as kernel selection progresses in MKL-GOMP and the number of kernels with smallest validation error is chosen. The regularization parameters for both MKL-GOMP and lq-norm MKL are similarly chosen using the validation set. Figure 1 shows test error rates as a function of sparsity of the target function: from non-sparse (all kernels needed) to extremely sparse (only 1 kernel needed). We recover the observations also made in [10]: $l_1$-norm MKL excels in extremely sparse settings where a single kernel carries the whole discriminative information of the learning problem. However, in the other scenarios it mostly performs worse than the other q > 1 variants, despite the fact that the vector $\beta$ remains sparse in all but the uniform scenario. As q is increased, the error rate in these settings improves but deteriorates in sparse settings. As reported in [11], the elastic net MKL approach of [26] performs similarly to $l_1$-MKL in the hinge loss case. As can be seen in the figure, the error curve of MKL-GOMP tends to be below the lower envelope of the error rates given by lq-MKL solutions. To adapt to the sparsity of the problem, lq methods clearly need to tune q, requiring several fresh invocations of the appropriate lq-MKL solver. On the other hand, in MKL-GOMP the hypothesis space grows as a function of the iteration number and the solution trajectory naturally expands sequentially in the direction of decreasing sparsity. The right plot in Figure 1 shows the number of kernels selected by MKL-GOMP and the optimal value of $\lambda$, suggesting that MKL-GOMP adapts to the sparsity and smoothness of the learning problem.
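The data-generating process just described can be sketched as follows (a simplified reconstruction with reduced sizes; kernel normalization, the validation protocol and the lq-MKL baselines are omitted, and all names are ours):

```python
import numpy as np

def make_sparse_gaussian_data(n, d, n_zero, sep=1.75, seed=0):
    """Two Gaussian classes with opposite means +/- sep*beta/||beta||,
    where beta is a binary vector with n_zero zero (noise) entries."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(d)
    beta[: d - n_zero] = 1.0                     # informative features first
    mu = sep * beta / np.linalg.norm(beta)
    labels = 2 * rng.integers(0, 2, size=n) - 1  # -1 / +1
    X = rng.normal(size=(n, d)) + labels[:, None] * mu
    # One linear kernel per feature forms the candidate dictionary.
    grams = [np.outer(X[:, j], X[:, j]) for j in range(d)]
    return X, labels.astype(float), grams

X, y, grams = make_sparse_gaussian_data(n=50, d=10, n_zero=8)
```

Varying `n_zero` from 0 to d−1 reproduces the sweep from a non-sparse to an extremely sparse target.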
5.2 Protein Subcellular Localization

The multiclass generalization of $l_1$-MKL proposed in [33] (MCMKL) is state of the art methodology in predicting protein subcellular localization, an important cell biology problem that concerns the estimation of where a protein resides in a cell so that, for example, the identification of drug targets can be aided. We use three multiclass datasets: PSORT+, PSORT- and PLANT, provided by the authors of [33] at http://www.fml.tuebingen.mpg.de/raetsch/suppl/protsubloc, together with a dictionary of 69 kernels derived with biological insight: 2 kernels on phylogenetic trees, 3 kernels based on similarity to known proteins (BLAST E-values), and 64 kernels based on amino-acid sequence patterns. The statistics of the three datasets are as follows: PSORT+ has 541 proteins labeled with 4 location classes, PSORT- has 1444 proteins in 5 classes and PLANT is
³ Provided by the authors of [10] at mldata.org/repository/data/viewslug/mkl-toy/
Figure 2: Protein Subcellular Localization Results. [Bar chart of performance (higher is better, roughly 75-100) on psort+, psort- and plant for mklgomp, mcmkl, the sum of all kernels, the best single kernel, and other published systems.]
a 4-class problem with 940 proteins. For each dataset, results are averaged over 10 splits of the dataset into training and test sets. We used exactly the same experimental protocol, data splits and evaluation methodology as given in [33]: the hyper-parameters of MKL-GOMP (sparsity and the regularization parameter $\lambda$) were tuned based on 3-fold cross-validation; results on PSORT+, PSORT- are F1-scores averaged over the classes while those on PLANT are Matthews correlation coefficients⁴. Figure 2 compares MKL-GOMP against MCMKL, baselines such as using the sum of all the kernels and using the best single kernel, and results from other prediction systems proposed in the literature. As can be seen, MKL-GOMP slightly outperforms MCMKL on the PSORT+ and PSORT- datasets and is slightly worse on PLANT, where RLS with the sum of all the kernels also performs very well. On the two PSORT datasets, [33] report selecting 25 kernels using MCMKL. On the other hand, on average, MKL-GOMP selects 14 kernels on PSORT+, 15 on PSORT- and 24 kernels on PLANT. Note that MKL-GOMP is applied in multivariate mode: the kernels are selected based on their utility to reduce the total residual error across all target classes.
6 Conclusion
By proposing a Group-OMP based framework for sparse multiple kernel learning, analyzing theoretically the performance of the resulting methods in relation to the dominant convex relaxation-based
approach, and demonstrating the value of our framework through extensive experimental studies,
we believe greedy methods arise as a natural alternative for tackling MKL problems. Relevant
directions for future research include extending our theoretical analysis to the stochastic setting,
investigating complex multivariate structures and groupings over outputs, e.g., by generalizing the
multivariate version of Group-OMP [15], and extending our algorithm to incorporate interesting
structured kernel dictionaries [3].
Acknowledgments
We thank Rick Lawrence, David S. Rosenberg and Ha Quang Minh for helpful conversations and
support for this work.
References
[1] N. Aronszajn. Theory of reproducing kernel Hilbert spaces. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
[2] F. Bach. Consistency of the group lasso and multiple kernel learning. JMLR, 9:1179–1225, 2008.
[3] F. Bach. High-dimensional non-linear variable selection through hierarchical kernel learning. Technical report, HAL 00413473, 2009.
[4] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Technical report, HAL 00413473, 2010.
⁴ See http://www.fml.tuebingen.mpg.de/raetsch/suppl/protsubloc/protsubloc-wabi08-supp.pdf
[5] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, 2004.
[6] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, 2002.
[7] A. Ben-Hur and W. S. Noble. Kernel methods for predicting protein–protein interactions. Bioinformatics, 21, January 2005.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In ICML, 2010.
[9] A. K. Fletcher and S. Rangan. Orthogonal matching pursuit from noisy measurements: A new analysis. In NIPS, 2009.
[10] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. lp-norm multiple kernel learning. JMLR, 12:953–997, 2011.
[11] M. Kloft, U. Rückert, and P. Bartlett. A unifying view of multiple kernel learning. In European Conference on Machine Learning (ECML), 2010.
[12] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics, 38(6):3660–3695, 2010.
[13] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27–72, December 2004.
[14] G. R. G. Lanckriet, T. De Bie, N. Cristianini, M. I. Jordan, and W. S. Noble. A statistical framework for genomic data fusion. Bioinformatics, 20, November 2004.
[15] A. C. Lozano and V. Sindhwani. Block variable selection in multivariate regression and high-dimensional causal inference. In NIPS, 2010.
[16] A. C. Lozano, G. Swirszcz, and N. Abe. Group orthogonal matching pursuit for variable selection and prediction. In NIPS, 2009.
[17] C. Micchelli and M. Pontil. Learning the kernel function via regularization. JMLR, 6:1099–1125, 2005.
[18] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) (JRSSB), 71(5):1009–1030, 2009.
[19] P. Pavlidis, J. Cai, J. Weston, and W. S. Noble. Learning gene functional classifications from multiple data types. Journal of Computational Biology, 9:401–411, 2002.
[20] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[21] G. Raskutti, M. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Technical Report 795, Statistics Department, UC Berkeley, 2010.
[22] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[23] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[24] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. J. Mach. Learn. Res., 7, December 2006.
[25] T. Zhang. Sparse recovery with orthogonal matching pursuit under RIP. Computing Research Repository, 2010.
[26] R. Tomioka and T. Suzuki. Sparsity-accuracy tradeoff in MKL. In NIPS Workshop: Understanding Multiple Kernel Learning Methods. Technical report, arXiv:1001.2615v1, 2010.
[27] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50(10):2231–2242, 2004.
[28] P. Vincent and Y. Bengio. Kernel matching pursuit. Machine Learning, 48:165–188, 2002.
[29] Z. Xu, R. Jin, H. Yang, I. King, and M. R. Lyu. Simple and efficient multiple kernel learning by group lasso. In ICML, 2010.
[30] M. Yuan, A. Ekici, Z. Lu, and R. Monteiro. Dimension reduction and coefficient estimation in multivariate linear regression. Journal of the Royal Statistical Society Series B, 69(3):329–346, 2007.
[31] T. Zhang. On the consistency of feature selection using greedy least squares regression. J. Mach. Learn. Res., 10, June 2009.
[32] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, 67(2):301–320, 2005.
[33] A. Zien and C. S. Ong. Multiclass multiple kernel learning. In ICML, 2007.
Sparse Inverse Covariance Matrix Estimation
Using Quadratic Approximation
Cho-Jui Hsieh, Mátyás A. Sustik, Inderjit S. Dhillon, and Pradeep Ravikumar
Department of Computer Science
University of Texas at Austin
Austin, TX 78712 USA
{cjhsieh,sustik,inderjit,pradeepr}@cs.utexas.edu
Abstract
The ℓ1-regularized Gaussian maximum likelihood estimator has been shown to
have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random
Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem which is a regularized log-determinant program. In
contrast to other state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse
Gaussian MLE problem. We show that our method is superlinearly convergent,
and also present experimental results using synthetic and real application data that
demonstrate the considerable improvements in performance of our method when
compared to other state-of-the-art methods.
1 Introduction
Gaussian Markov Random Fields; Covariance Estimation. Increasingly, in modern settings statistical problems are high-dimensional, where the number of parameters is large when compared to
the number of observations. An important class of such problems involves estimating the graph
structure of a Gaussian Markov random field (GMRF) in the high-dimensional setting, with applications ranging from inferring gene networks and analyzing social interactions. Specifically, given
n independently drawn samples {y1, y2, . . . , yn} from a p-variate Gaussian distribution, so that yi ∼ N(μ, Σ), the task is to estimate its inverse covariance matrix Σ⁻¹, also referred to as the precision or concentration matrix. The non-zero pattern of this inverse covariance matrix Σ⁻¹ can
be shown to correspond to the underlying graph structure of the GMRF. An active line of work in
high-dimensional settings where p > n is thus based on imposing some low-dimensional structure,
such as sparsity or graphical model structure on the model space. Accordingly, a line of recent
papers [2, 8, 20] has proposed an estimator that minimizes the Gaussian negative log-likelihood regularized by the ℓ1 norm of the entries (off-diagonal entries) of the inverse covariance matrix. The
resulting optimization problem is a log-determinant program, which is convex, and can be solved in
polynomial time.
Existing Optimization Methods for the regularized Gaussian MLE. Due in part to its importance,
there has been an active line of work on efficient optimization methods for solving the ℓ1-regularized
Gaussian MLE problem. In [8, 2] a block coordinate descent method has been proposed which is
called the graphical lasso or GLASSO for short. Other recent algorithms proposed for this problem
include PSM that uses projected subgradients [5], ALM using alternating linearization [14], IPM an
inexact interior point method [11] and SINCO a greedy coordinate descent method [15].
For typical high-dimensional statistical problems, optimization methods typically suffer sub-linear
rates of convergence [1]. This would be too expensive for the Gaussian MLE problem, since the
number of matrix entries scales quadratically with the number of nodes. Luckily, the log-determinant
problem has special structure; the log-determinant function is strongly convex and one can observe
linear (i.e. geometric) rates of convergence for the state-of-the-art methods listed above. However,
at most linear rates in turn become infeasible when the problem size is very large, with the number
of nodes in the thousands and the number of matrix entries to be estimated in the millions. Here
we ask the question: can we obtain superlinear rates of convergence for the optimization problem
underlying the ℓ1-regularized Gaussian MLE?
One characteristic of these state-of-the-art methods is that they are first-order iterative methods that
mainly use gradient information at each step. Such first-order methods have become increasingly
popular in recent years for high-dimensional problems in part due to their ease of implementation,
and because they require very little computation and memory at each step. The caveat is that they
have at most linear rates of convergence [3]. For superlinear rates, one has to consider second-order
methods which at least in part use the Hessian of the objective function. There are however some
caveats to the use of such second-order methods in high-dimensional settings. First, a straightforward implementation of each second-order step would be very expensive for high-dimensional
problems. Secondly, the log-determinant function in the Gaussian MLE objective acts as a barrier
function for the positive definite cone. This barrier property would be lost under quadratic approximations so there is a danger that Newton-like updates will not yield positive-definite matrices, unless
one explicitly enforces such a constraint in some manner.
Our Contributions. In this paper, we present a new second-order algorithm to solve the ℓ1-regularized Gaussian MLE. We perform Newton steps that use iterative quadratic approximations of the
Gaussian negative log-likelihood, but with three innovations that enable finessing the caveats detailed above. First, we provide an efficient method to compute the Newton direction. As in recent
methods [12, 9], we build on the observation that the Newton direction computation is a Lasso problem, and perform iterative coordinate descent to solve this Lasso problem. However, the naive approach has an update cost of O(p²) for performing each coordinate descent update in the inner loop, which makes this approach infeasible for this problem. But we show how a careful arrangement and
caching of the computations can reduce this cost to O(p). Secondly, we use an Armijo-rule based
step size selection rule to obtain a step-size that ensures sufficient descent and positive-definiteness
of the next iterate. Thirdly, we use the form of the stationary condition characterizing the optimal
solution to then focus the Newton direction computation on a small subset of free variables, in a
manner that preserves the strong convergence guarantees of second-order descent.
Here is a brief outline of the paper. In Section 3, we present our algorithm that combines quadratic
approximation, Newton?s method and coordinate descent. In Section 4, we show that our algorithm
is not only convergent but superlinearly so. We summarize the experimental results in Section 5,
using real application data from [11] to compare the algorithms, as well as synthetic examples which
reproduce experiments from [11]. We observe that our algorithm performs overwhelmingly better
(quadratic instead of linear convergence) than the other solutions described in the literature.
2 Problem Setup
Let y be a p-variate Gaussian random vector, with distribution N(μ, Σ). We are given n independently drawn samples {y1, . . . , yn} of this random vector, so that the sample covariance matrix can
be written as
    S = (1/n) Σ_{k=1}^n (y_k − μ̂)(y_k − μ̂)^T,   where   μ̂ = (1/n) Σ_{i=1}^n y_i.   (1)
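As a concrete illustration, (1) is a one-liner in any numerical library; the sketch below (NumPy, with helper names of our own choosing) computes S from an n × p matrix whose rows are the samples:

```python
import numpy as np

def sample_covariance(Y):
    """Empirical covariance (1): Y holds the samples y_1, ..., y_n as rows."""
    mu_hat = Y.mean(axis=0)              # mu-hat = (1/n) * sum_i y_i
    Z = Y - mu_hat                       # center every sample
    return Z.T @ Z / Y.shape[0]          # (1/n) * sum_k (y_k - mu)(y_k - mu)^T

# n = 500 samples of a 3-variate Gaussian
rng = np.random.default_rng(0)
Y = rng.standard_normal((500, 3))
S = sample_covariance(Y)
```

Note that this is the biased (divide-by-n) estimator appearing in (1), not the unbiased divide-by-(n − 1) variant.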
Given some regularization penalty λ > 0, the ℓ1-regularized Gaussian MLE for the inverse covariance matrix can be estimated by solving the following regularized log-determinant program:

    arg min_{X ≻ 0} { −log det X + tr(SX) + λ‖X‖₁ } = arg min_{X ≻ 0} f(X),   (2)

where ‖X‖₁ = Σ_{i,j=1}^p |X_ij| is the elementwise ℓ1 norm of the p × p matrix X. Our results can also be extended to allow a regularization term of the form ‖Λ ∘ X‖₁ = Σ_{i,j=1}^p λ_ij |X_ij|, i.e. different nonnegative weights can be assigned to different entries. This would include for instance the popular off-diagonal ℓ1 regularization variant where we penalize Σ_{i≠j} |X_ij|, but not the diagonal entries. The addition of such ℓ1 regularization promotes sparsity in the inverse covariance matrix, and thus encourages sparse graphical model structure. For further details on the background of ℓ1 regularization in the context of GMRFs, we refer the reader to [20, 2, 8, 15].
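For reference, the objective f in (2) can be evaluated directly; a minimal sketch (NumPy, hypothetical helper name) that uses a Cholesky factorization both to test positive definiteness and to obtain the log-determinant:

```python
import numpy as np

def objective(X, S, lam):
    """f(X) = -log det X + tr(SX) + lam * ||X||_1 from (2)."""
    try:
        L = np.linalg.cholesky(X)        # fails if X is not positive definite
    except np.linalg.LinAlgError:
        return np.inf
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -logdet + np.trace(S @ X) + lam * np.abs(X).sum()

# at X = I: f = tr(S) + lam * p, since log det I = 0 and ||I||_1 = p
```

The same Cholesky-based evaluation is what makes the step-size search of Section 3.2 cheap to combine with the positive-definiteness check.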
3 Quadratic Approximation Method
Our approach is based on computing iterative quadratic approximations to the regularized Gaussian
MLE objective f(X) in (2). This objective function f can be seen to comprise two parts, f(X) ≡ g(X) + h(X), where

    g(X) = −log det X + tr(SX)   and   h(X) = λ‖X‖₁.   (3)
The first component g(X) is twice differentiable, and strictly convex, while the second part
h(X) is convex but non-differentiable. Following the standard approach [17, 21] to building a
quadratic approximation around any iterate X_t for such composite functions, we build the second-order Taylor expansion of the smooth component g(X). The second-order expansion for the log-determinant function (see for instance [4, Chapter A.4.3]) is given by log det(X_t + Δ) ≈ log det X_t + tr(X_t⁻¹Δ) − (1/2) tr(X_t⁻¹ Δ X_t⁻¹ Δ). We introduce W_t = X_t⁻¹ and write the second-order approximation ḡ_{X_t}(Δ) to g(X) = g(X_t + Δ) as

    ḡ_{X_t}(Δ) = tr((S − W_t)Δ) + (1/2) tr(W_t Δ W_t Δ) − log det X_t + tr(S X_t).   (4)
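The quality of this expansion is easy to check numerically; the sketch below (NumPy) compares log det(X_t + Δ) against its first- and second-order approximations for a small symmetric Δ:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
X = A @ A.T + 4 * np.eye(4)            # a positive definite iterate X_t
W = np.linalg.inv(X)                   # W_t = X_t^{-1}

D = rng.standard_normal((4, 4))
D = 1e-4 * (D + D.T)                   # a small symmetric step Delta

exact = np.linalg.slogdet(X + D)[1]
first = np.linalg.slogdet(X)[1] + np.trace(W @ D)
second = first - 0.5 * np.trace(W @ D @ W @ D)
# the second-order error is O(||Delta||^3); the first-order error is only O(||Delta||^2)
```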
The Newton direction D_t for the entire objective f(X) can then be written as the solution of the regularized quadratic program:

    D_t = arg min_Δ ḡ_{X_t}(Δ) + h(X_t + Δ).   (5)

This Newton direction can be used to compute iterative estimates {X_t} for solving the optimization problem in (2). In the sequel, we will detail three innovations which make this approach feasible.
Firstly, we provide an efficient method to compute the Newton direction. As in recent methods [12],
we build on the observation that the Newton direction computation is a Lasso problem, and perform
iterative coordinate descent to find its solution. However, the naive approach has an update cost of O(p²) for performing each coordinate descent update in the inner loop, which makes this approach infeasible for this problem. We show how a careful arrangement and caching of the computations
can reduce this cost to O(p). Secondly, we use an Armijo-rule based step size selection rule to obtain
a step-size that ensures sufficient descent and positive-definiteness of the next iterate. Thirdly, we
use the form of the stationary condition characterizing the optimal solution to then focus the Newton
direction computation on a small subset of free variables, in a manner that preserves the strong
convergence guarantees of second-order descent. We outline each of these three innovations in the
following three subsections. We then detail the complete method in Section 3.4.
3.1 Computing the Newton Direction
The optimization problem in (5) is an ℓ1-regularized least squares problem, also called Lasso [16]. It is straightforward to verify that for a symmetric matrix Δ we have tr(W_t Δ W_t Δ) = vec(Δ)^T (W_t ⊗ W_t) vec(Δ), where ⊗ denotes the Kronecker product and vec(X) is the vectorized listing of the elements of matrix X.
In [7, 18] the authors show that coordinate descent methods are very efficient for solving lasso type
problems. However, an obvious way to update each element of Δ to solve for the Newton direction in (5) needs O(p²) floating point operations, since Q := W_t ⊗ W_t is a p² × p² matrix, thus yielding an O(p⁴) procedure for approximating the Newton direction. As we show below, our implementation reduces the cost of one variable update to O(p) by exploiting the structure of Q, or in other words the specific form of the second order term tr(W_t Δ W_t Δ). Next, we discuss the details.
For notational simplicity we will omit the Newton iteration index t in the derivations that follow.
(Hence, the notation for ḡ_{X_t} is also simplified to ḡ.) Furthermore, we omit the use of a separate index for the coordinate descent updates. Thus, we simply use D to denote the current iterate approximating the Newton direction and use D′ for the updated direction. Consider the coordinate descent update for the variable X_ij, with i < j, that preserves symmetry: D′ = D + μ(e_i e_j^T + e_j e_i^T). The solution of the one-variable problem corresponding to (5) yields μ:

    arg min_μ ḡ(D + μ(e_i e_j^T + e_j e_i^T)) + 2λ|X_ij + D_ij + μ|.   (6)

As a matter of notation: we use x_i to denote the i-th column of the matrix X. We expand the terms appearing in the definition of ḡ after substituting D′ = D + μ(e_i e_j^T + e_j e_i^T) for Δ in (4), and omit the terms not dependent on μ. The contribution of tr(SD′) − tr(W D′) yields 2μ(S_ij − W_ij), while
the regularization term contributes 2λ|X_ij + D_ij + μ|, as seen from (6). The quadratic term can be rewritten using tr(AB) = tr(BA) and the symmetry of D and W to yield:

    tr(W D′ W D′) = tr(W D W D) + 4μ w_i^T D w_j + 2μ² (W_ij² + W_ii W_jj).   (7)

In order to compute the single variable update we seek the minimum of the following function of μ:

    (1/2)(W_ij² + W_ii W_jj) μ² + (S_ij − W_ij + w_i^T D w_j) μ + λ|X_ij + D_ij + μ|.   (8)

Letting a = W_ij² + W_ii W_jj, b = S_ij − W_ij + w_i^T D w_j, and c = X_ij + D_ij, the minimum is achieved for:

    μ = −c + S(c − b/a, λ/a),   (9)

where S(z, r) = sign(z) max{|z| − r, 0} is the soft-thresholding function. The values of a and c
are easy to compute. The main cost arises while computing the third term contributing to coefficient b, namely w_i^T D w_j. Direct computation requires O(p²) time. Instead, we maintain U = DW by updating two rows of the matrix U for every variable update in D, costing O(p) flops, and then compute w_i^T u_j using also O(p) flops. Another way to view this arrangement is that we maintain a decomposition W D W = Σ_{k=1}^p w_k u_k^T throughout the process by storing the u_k vectors, allowing O(p) computation of update (9). In order to maintain the matrix U we also need to update two coordinates of each u_k when D_ij is modified. We can compactly write the row updates of U as follows: u_i· ← u_i· + μ w_j· and u_j· ← u_j· + μ w_i·, where u_i· refers to the i-th row vector of U.
We note that the calculation of the Newton direction can be simplified if X is a diagonal matrix. For instance, if we are starting from a diagonal matrix X_0, the terms w_i^T D w_j equal D_ij /((X_0)_ii (X_0)_jj), which are independent of each other, implying that we only need to update each variable according to (9) once, and the resulting D will be the optimum of (5). Hence, the time cost of finding the first Newton direction is reduced from O(p³) to O(p²).
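A direct transcription of update (9) and the O(p) bookkeeping of U = DW can be sketched as follows (NumPy; off-diagonal case i < j, and the function names are ours, not the released implementation's):

```python
import numpy as np

def soft_threshold(z, r):
    """S(z, r) = sign(z) * max(|z| - r, 0)."""
    return np.sign(z) * max(abs(z) - r, 0.0)

def coordinate_update(D, U, W, S, X, lam, i, j):
    """One update of D_ij = D_ji via (9), for i < j, keeping U = D W current."""
    a = W[i, j] ** 2 + W[i, i] * W[j, j]
    b = S[i, j] - W[i, j] + W[i] @ U[:, j]   # w_i^T D w_j read off the cache U
    c = X[i, j] + D[i, j]
    mu = -c + soft_threshold(c - b / a, lam / a)
    D[i, j] += mu
    D[j, i] += mu
    # rows i and j of U = D W are the only ones that change: O(p) work
    U[i, :] += mu * W[j, :]
    U[j, :] += mu * W[i, :]
    return mu

# demo on a random 3x3 instance
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
X = A @ A.T + 3 * np.eye(3)              # current iterate (positive definite)
W = np.linalg.inv(X)
M = rng.standard_normal((3, 3))
S_emp = 0.5 * (M + M.T)                  # a symmetric stand-in for the sample covariance
D = np.zeros((3, 3))
U = D @ W
mu = coordinate_update(D, U, W, S_emp, X, 0.1, 0, 1)
```

Each call costs O(p): one cached inner product plus two row updates, exactly the arrangement described above.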
3.2 Computing the Step Size
Following the computation of the Newton direction D_t, we need to find a step size α ∈ (0, 1] that ensures positive definiteness of the next iterate X_t + αD_t and sufficient decrease in the objective function.
We adopt Armijo's rule [3, 17] and try step-sizes α ∈ {β⁰, β¹, β², . . .} with a constant decrease rate 0 < β < 1 (typically β = 0.5) until we find the smallest k ∈ N with α = β^k such that X_t + αD_t (a) is positive-definite, and (b) satisfies the following condition:

    f(X_t + αD_t) ≤ f(X_t) + ασΔ_t,   Δ_t = tr(∇g(X_t) D_t) + λ‖X_t + D_t‖₁ − λ‖X_t‖₁   (10)

where 0 < σ < 0.5 is a constant. To verify positive definiteness, we use a Cholesky factorization costing O(p³) flops during the objective function evaluation to compute log det(X_t + αD_t), and this step dominates the computational cost in the step-size computations. In the Appendix, in Lemma 9 we show that for any X_t and D_t there exists an ᾱ_t > 0 such that (10) and the positive-definiteness of X_t + αD_t are satisfied for any α ∈ (0, ᾱ_t], so we can always find a step size satisfying (10) and the positive-definiteness even if we do not have the exact Newton direction. Following the line search and the Newton step update X_{t+1} = X_t + αD_t, we efficiently compute W_{t+1} = X_{t+1}⁻¹ by reusing the Cholesky decomposition of X_{t+1}.
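The backtracking loop is compact; a sketch (NumPy, with illustrative helper names: f evaluates the objective and grad_g returns the smooth gradient S − X⁻¹) follows. The demo uses a plain gradient-descent direction only for illustration; in QUIC, D would be the Newton direction D_t:

```python
import numpy as np

def line_search(X, D, S, lam, f, grad_g, sigma=0.25, beta=0.5, max_steps=30):
    """Armijo rule (10): shrink alpha until X + alpha*D is positive definite
    and the objective decreases sufficiently."""
    delta = (np.sum(grad_g(X) * D)                    # tr(grad g(X) D); both symmetric
             + lam * (np.abs(X + D).sum() - np.abs(X).sum()))
    alpha = 1.0
    for _ in range(max_steps):
        Xn = X + alpha * D
        try:
            np.linalg.cholesky(Xn)                    # (a) positive definite?
        except np.linalg.LinAlgError:
            alpha *= beta
            continue
        if f(Xn) <= f(X) + alpha * sigma * delta:     # (b) sufficient decrease
            return alpha, Xn
        alpha *= beta
    raise RuntimeError("no acceptable step size found")

# tiny demo on a 2x2 problem
S = np.diag([2.0, 0.5])
lam = 0.01
f = lambda M: -np.linalg.slogdet(M)[1] + np.trace(S @ M) + lam * np.abs(M).sum()
grad_g = lambda M: S - np.linalg.inv(M)
X0 = np.eye(2)
D = -grad_g(X0)      # alpha = 1 leaves the positive definite cone, so backtracking kicks in
alpha, X1 = line_search(X0, D, S, lam, f, grad_g)
```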
3.3 Identifying which variables to update
In this section, we propose a way to select which variables to update that uses the stationary condition
of the Gaussian MLE problem. At the start of any outer loop computing the Newton direction, we
partition the variables into free and fixed sets based on the value of the gradient. Specifically, we classify the (X_t)_ij variable as fixed if |∇_ij g(X_t)| < λ − ε and (X_t)_ij = 0, where ε > 0 is small. (We used ε = 0.01 in our experiments.) The remaining variables then constitute the free set. The
following lemma shows the property of the fixed set:
Lemma 1. For any X_t and the corresponding fixed and free sets S_fixed, S_free, the optimized update on the fixed set would not change any of the coordinates. In other words, the solution of the following optimization problem is Δ = 0:

    arg min_Δ f(X_t + Δ)   such that   Δ_ij = 0 ∀(i, j) ∈ S_free.
The proof is given in Appendix 7.2.3. Based on the above observation, we perform the inner loop
coordinate descent updates restricted to the free set only (to find the Newton direction). This reduces
the number of variables over which we perform the coordinate descent from O(p²) to the number of non-zeros in X_t, which in general is much smaller than p² when λ is large and the solution is
sparse. We have observed huge computational gains from this modification, and indeed in our main
theorem we show the superlinear convergence rate for the algorithm that includes this heuristic.
The attractive facet of this modification is that it leverages the sparsity of the solution and intermediate iterates in a manner that falls within a block coordinate descent framework. Specifically, suppose
as detailed above at any outer loop Newton iteration, we partition the variables into the fixed and
free set, and then first perform a Newton update restricted to the fixed block, followed by a Newton
update on the free block. According to Lemma 1 a Newton update restricted to the fixed block does
not result in any changes.
In other words, performing the inner loop coordinate descent updates restricted to the free set is
equivalent to two block Newton steps restricted to the fixed and free sets consecutively. Note further,
that the union of the free and fixed sets is the set of all variables, which as we show in the convergence
analysis in the appendix, is sufficient to ensure the convergence of the block Newton descent.
But would the size of free set be small? We initialize X0 to the identity matrix, which is indeed
sparse. As the following lemma shows, if the limit of the iterates (the solution of the optimization
problem) is sparse, then after a finite number of iterations, the iterates Xt would also have the same
sparsity pattern.
Lemma 2. Assume {X_t} converges to X*. If for some index pair (i, j), |∇_ij g(X*)| < λ (so that X*_ij = 0), then there exists a constant t̄ > 0 such that for all t > t̄, the iterates X_t satisfy

    |∇_ij g(X_t)| < λ and (X_t)_ij = 0.   (11)

The proof comes directly from Lemma 11 in the Appendix. Note that |∇_ij g(X*)| < λ implying X*_ij = 0 follows from the optimality condition of (2). A similar (so-called shrinking) strategy is used in SVM or ℓ1-regularized logistic regression problems, as mentioned in [19]. In Appendix 7.4 we show in experiments that this strategy can reduce the size of the variable set very quickly.
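In code, the split of Section 3.3 is a pair of elementwise masks over the gradient ∇g(X) = S − X⁻¹; a sketch (NumPy, hypothetical helper name):

```python
import numpy as np

def free_mask(X, S, lam, eps=0.01):
    """Boolean mask of *free* variables per Section 3.3: (i, j) is fixed
    when X_ij = 0 and |grad_ij g(X)| < lam - eps; everything else is free."""
    G = S - np.linalg.inv(X)               # grad g(X)
    fixed = (X == 0) & (np.abs(G) < lam - eps)
    return ~fixed

# starting from X0 = I, only entries with a large enough gradient are freed
X0 = np.eye(2)
S = np.array([[1.2, 0.05],
              [0.05, 0.9]])
mask = free_mask(X0, S, lam=0.5)
```

In this small example the off-diagonal gradients (0.05 in magnitude) fall below λ − ε, so only the diagonal entries end up in the free set.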
3.4 The Quadratic Approximation based Method
We now have the machinery for a description of our algorithm QUIC standing for QUadratic Inverse
Covariance. A high level summary of the algorithm is shown in Algorithm 1, while the the full
details are given in Algorithm 2 in the Appendix.
Algorithm 1: Quadratic Approximation method for Sparse Inverse Covariance Learning (QUIC)
Input: Empirical covariance matrix S, scalar λ, initial X_0, inner stopping tolerance ε
Output: Sequence of X_t converging to arg min_{X ≻ 0} f(X), where f(X) = −log det X + tr(SX) + λ‖X‖₁.
1: for t = 0, 1, . . . do
2:   Compute W_t = X_t⁻¹.
3:   Form the second order approximation f̄_{X_t}(Δ) := ḡ_{X_t}(Δ) + h(X_t + Δ) to f(X_t + Δ).
4:   Partition the variables into free and fixed sets based on the gradient, see Section 3.3.
5:   Use coordinate descent to find the Newton direction D_t = arg min_Δ f̄_{X_t}(Δ) over the free variable set, see (6) and (9). (A Lasso problem.)
6:   Use an Armijo-rule based step-size selection to get α s.t. X_{t+1} = X_t + αD_t is positive definite and the objective value sufficiently decreases, see (10).
7: end for
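The outer loop of Algorithm 1 can be prototyped directly from the pieces above. The sketch below (NumPy, our own helper names, no free-set pruning, dense sweeps over all coordinates including the diagonal, whose quadratic coefficient reduces to W_ii²) is illustrative only; the released QUIC is a far more careful C++ implementation:

```python
import numpy as np

def soft_threshold(z, r):
    return np.sign(z) * max(abs(z) - r, 0.0)

def newton_direction(X, W, S, lam, sweeps=30):
    """Solve the Lasso subproblem (5) by cyclic coordinate descent, keeping
    the cache U = D W so every coordinate update is O(p)."""
    p = X.shape[0]
    D = np.zeros((p, p))
    U = np.zeros((p, p))
    for _ in range(sweeps):
        for i in range(p):
            for j in range(i, p):
                if i == j:
                    a = W[i, i] ** 2               # diagonal analogue of (8)
                else:
                    a = W[i, j] ** 2 + W[i, i] * W[j, j]
                b = S[i, j] - W[i, j] + W[i] @ U[:, j]
                c = X[i, j] + D[i, j]
                mu = -c + soft_threshold(c - b / a, lam / a)
                D[i, j] += mu
                U[i, :] += mu * W[j, :]
                if i != j:
                    D[j, i] += mu
                    U[j, :] += mu * W[i, :]
    return D

def quic_sketch(S, lam, iters=15, sigma=0.25, beta=0.5):
    """Outer Newton loop of Algorithm 1 for tiny p, without free-set pruning."""
    p = S.shape[0]
    X = np.eye(p)
    f = lambda M: (-np.linalg.slogdet(M)[1] + np.trace(S @ M)
                   + lam * np.abs(M).sum())
    for _ in range(iters):
        W = np.linalg.inv(X)
        D = newton_direction(X, W, S, lam)
        delta = np.sum((S - W) * D) + lam * (np.abs(X + D).sum()
                                             - np.abs(X).sum())
        alpha = 1.0
        for _ in range(50):                        # Armijo rule (10)
            Xn = X + alpha * D
            try:
                np.linalg.cholesky(Xn)
                if f(Xn) <= f(X) + alpha * sigma * delta:
                    X = Xn
                    break
            except np.linalg.LinAlgError:
                pass
            alpha *= beta
    return X
```

On a small sample covariance this drives the stationarity condition of (2) to high accuracy within a handful of outer iterations, which is exactly the behavior the convergence analysis of Section 4 predicts for the full method.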
4 Convergence Analysis
In this section, we show that our algorithm has strong convergence guarantees. Our first main result
shows that our algorithm does converge to the optimum of (2). Our second result then shows that
the asymptotic convergence rate is actually superlinear, specifically quadratic.
4.1 Convergence Guarantee
We build upon the convergence analysis in [17, 21] of the block coordinate gradient descent method
applied to composite objectives. Specifically, [17, 21] consider iterative updates where at each
iteration t they update just a block of variables J_t. They then consider a Gauss-Seidel rule:

    ∪_{j=0,...,T−1} J_{t+j} ⊇ N   ∀t = 1, 2, . . . ,   (12)
where N is the set of all variables and T is a fixed number. Note that the condition (12) ensures that
each block of variables will be updated at least once every T iterations. Our Newton steps with the
free set modification is a special case of this framework: we set J_{2t}, J_{2t+1} to be the fixed and free sets
respectively. As outlined in Section 3.3, our selection of the fixed sets ensures that a block update
restricted to the fixed set would not change any values since these variables in fixed sets already
satisfy the coordinatewise optimality condition. Thus, while our algorithm only explicitly updates
the free set block, this is equivalent to updating variables in fixed and free blocks consecutively. We
also have J_{2t} ∪ J_{2t+1} = N, implying the Gauss-Seidel rule with T = 3.
Further, the composite objectives in [17, 21] have the form F(x) = g(x) + h(x), where g(x) is smooth (continuously differentiable), and h(x) is non-differentiable but separable. Note that in our case, the smooth component is the log-determinant function g(X) = −log det X + tr(SX), while the non-differentiable separable component is h(x) = λ‖x‖₁. However, [17, 21] impose the additional assumption that g(x) is smooth over the domain Rⁿ. In our case, g(x) is smooth over the restricted domain of the positive definite cone S^p_{++}. In the appendix, we extend the analysis
so that convergence still holds under our setting. In particular, we prove the following theorem in
Appendix 7.2:
Theorem 1. In Algorithm 1, the sequence {Xt } converges to the unique global optimum of (2).
4.2 Asymptotic Convergence Rate
In addition to convergence, we further show that our algorithm has a quadratic asymptotic convergence rate.
Theorem 2. Our algorithm QUIC converges quadratically, that is, for some constant 0 < κ < 1:

    lim_{t→∞} ‖X_{t+1} − X*‖_F / ‖X_t − X*‖_F² = κ.
The proof, given in Appendix 7.3, first shows that the step size as computed in Section 3.2 would
eventually become equal to one, so that we would be eventually performing vanilla Newton updates.
Further we use the fact that after a finite number of iterations, the sign pattern of the iterates converges to the sign pattern of the limit. From these two assertions, we build on the convergence rate
result for constrained Newton methods in [6] to show that our method is quadratically convergent.
5 Experiments
In this section, we compare our method QUIC with other state-of-the-art methods on both synthetic
and real datasets. We have implemented QUIC in C++, and all the experiments were executed on
2.83 GHz Xeon X5440 machines with 32G RAM and Linux OS.
We include the following algorithms in our comparisons:
• ALM: the Alternating Linearization Method proposed by [14]. We use their MATLAB source
code for the experiments.
• GLASSO: the block coordinate descent method proposed by [8]. We used their Fortran code
available from cran.r-project.org, version 1.3 released on 1/22/09.
• PSM: the Projected Subgradient Method proposed by [5]. We use the MATLAB source code available at http://www.cs.ubc.ca/~schmidtm/Software/PQN.html.
• SINCO: the greedy coordinate descent method proposed by [15]. The code can be downloaded
from https://projects.coin-or.org/OptiML/browser/trunk/sinco.
• IPM: an inexact interior point method proposed by [11]. The source code can be downloaded from http://www.math.nus.edu.sg/~mattohkc/Covsel-0.zip.
Since some of the above implementations do not support the generalized regularization term ‖Λ ∘ X‖₁, our comparisons use λ‖X‖₁ as the regularization term.
The GLASSO algorithm description in [8] does not clearly specify the stopping criterion for the
Lasso iterations. Inspection of the available Fortran implementation has revealed that a separate
Table 1: The comparisons on synthetic datasets. p stands for the dimension, ‖Σ⁻¹‖₀ indicates the number of nonzeros in the ground truth inverse covariance matrix, ‖X*‖₀ is the number of nonzeros in the solution, and ε is a specified relative error of the objective value. * indicates that the run time exceeds our time limit of 30,000 seconds (8.3 hours). The results show that QUIC is overwhelmingly faster than other methods, and is the only one which is able to scale up to solve problems where p = 10000.
pattern  p      ‖Σ⁻¹‖₀  λ      ‖X*‖₀   ε      QUIC    ALM     Glasso  PSM     IPM     Sinco
chain    1000   2998    0.4    3028    10⁻²   0.30    18.89   23.28   15.59   86.32   120.0
chain    1000   2998    0.4    3028    10⁻⁶   2.26    41.85   45.1    34.91   151.2   520.8
chain    4000   11998   0.4    11998   10⁻²   11.28   922     1068    567.9   3458    5246
chain    4000   11998   0.4    11998   10⁻⁶   53.51   1734    2119    1258    5754    *
chain    10000  29998   0.4    29998   10⁻²   216.7   13820   *       8450    *       *
chain    10000  29998   0.4    29998   10⁻⁶   986.6   28190   *       19251   *       *
random   1000   10758   0.12   10414   10⁻²   0.52    42.34   10.31   20.16   71.62   60.75
random   1000   10758   0.12   10414   10⁻⁶   1.2     28250   20.43   59.89   116.7   683.3
random   1000   10758   0.075  55830   10⁻²   1.17    65.64   17.96   23.53   78.27   576.0
random   1000   10758   0.075  55830   10⁻⁶   6.87    *       60.61   91.7    145.8   4449
random   4000   41112   0.08   41910   10⁻²   23.25   1429    1052    1479    4928    7375
random   4000   41112   0.08   41910   10⁻⁶   160.2   *       2561    4232    8097    *
random   4000   41112   0.05   247444  10⁻²   65.57   *       3328    2963    5621    *
random   4000   41112   0.05   247444  10⁻⁶   478.8   *       8356    9541    13650   *
random   10000  91410   0.08   89652   10⁻²   337.7   26270   21298   *       *       *
random   10000  91410   0.08   89652   10⁻⁶   1125    *       *       *       *       *
random   10000  91410   0.04   392786  10⁻²   803.5   *       *       *       *       *
random   10000  91410   0.04   392786  10⁻⁶   2951    *       *       *       *       *
threshold is computed and is used for these inner iterations. We found that under certain conditions
the threshold computed is smaller than the machine precision and as a result the overall algorithm
occasionally displayed erratic convergence behavior and slow performance. We modified the Fortran
implementation of GLASSO to correct this error.
5.1 Comparisons on synthetic datasets
We first compare the run times of the different methods on synthetic data. We generate the two
following types of graph structures for the underlying Gaussian Markov Random Fields:
• Chain Graphs: The ground truth inverse covariance matrix Σ⁻¹ is set to be Σ⁻¹_{i,i−1} = −0.5 and Σ⁻¹_{i,i} = 1.25.
• Graphs with Random Sparsity Structures: We use the procedure mentioned in Example 1 in [11] to generate inverse covariance matrices with random non-zero patterns. Specifically, we first generate a sparse matrix U with nonzero elements equal to ±1, set Σ⁻¹ to be U^T U, and then add a diagonal term to ensure Σ⁻¹ is positive definite. We control the number of nonzeros in U so that the resulting Σ⁻¹ has approximately 10p nonzero elements.
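The chain-graph ground truth and the sampling step can be sketched as follows (NumPy; the helper names are ours):

```python
import numpy as np

def chain_precision(p):
    """Tridiagonal ground truth: Sigma^{-1}_{i,i} = 1.25, Sigma^{-1}_{i,i-1} = -0.5."""
    return (1.25 * np.eye(p)
            - 0.5 * np.eye(p, k=1)
            - 0.5 * np.eye(p, k=-1))

def sample_gmrf(Theta, n, seed=0):
    """Draw n i.i.d. samples from N(0, Theta^{-1})."""
    rng = np.random.default_rng(seed)
    Sigma = np.linalg.inv(Theta)
    return rng.multivariate_normal(np.zeros(Theta.shape[0]), Sigma, size=n)

Theta = chain_precision(100)
Y = sample_gmrf(Theta, n=50)        # n = p/2 mimics the sample-starved setting above
```

The tridiagonal matrix is strictly diagonally dominant in the interior and remains positive definite for any p, so the inversion in the sampler is always well defined.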
Given the inverse covariance matrix Σ⁻¹, we draw a limited number, n = p/2, of i.i.d. samples from the corresponding GMRF distribution, to simulate the high-dimensional setting. We then compare
the algorithms listed above when run on these samples.
We can use the minimum-norm sub-gradient defined in Lemma 5 in Appendix 7.2 as the stopping
condition, and computing it is easy because X⁻¹ is available in QUIC. Table 1 shows the results for timing comparisons on the synthetic datasets. We vary the dimensionality from 1000 and 4000 to 10000 for each dataset. For chain graphs, we select λ so that the solution had the (approximately) correct number of nonzero elements. To test the performance of the algorithms under different parameters (λ), for the random sparsity pattern we test the speed under two values of λ: one which discovers the correct number of nonzero elements, and one which discovers 5 times that number. We report the time for each algorithm to achieve an ε-accurate solution defined by f(X^k) − f(X*) < εf(X*). Table 1 shows the results for ε = 10⁻² and 10⁻⁶, where ε = 10⁻² tests the ability of an algorithm to get a
good initial guess (the nonzero structure), and ε = 10⁻⁶ tests whether an algorithm can achieve an accurate solution. Table 1 shows that QUIC is consistently and overwhelmingly faster than other methods, both initially with ε = 10⁻², and at ε = 10⁻⁶. Moreover, for the p = 10000 random pattern there are p² = 100 million variables; the selection of fixed/free sets helps QUIC to focus on only a very small part of the variables, and it can achieve an accurate solution in about 15 minutes, while the other methods fail to even produce an initial guess within 8 hours. Notice that our λ setting is smaller than in [14] because here we focus on the λ which discovers the true structure; therefore the comparison between ALM and PSM differs from [14].
5.2 Experiments on real datasets
We use the real world biology datasets preprocessed by [11] to compare the performance of our
method with other state-of-the-art methods. The regularization parameter λ is set to 0.5 according to the experimental setting in [11]. Results on the following datasets are shown in Figure 1: Estrogen (p = 692), Arabidopsis (p = 834), Leukemia (p = 1,255), Hereditary (p = 1,869). We plot the relative error (f(X_t) − f(X*))/f(X*) (on a log scale) against time in seconds. On these real
a linear convergence rate. Overall, QUIC can be ten times faster than other methods, and faster still when higher accuracy is desired.
6 Acknowledgements
We would like to thank Professor Kim-Chuan Toh for providing the data set and the IPM code.
We would also like to thank Professor Katya Scheinberg and Shiqian Ma for providing the ALM
implementation. This research was supported by NSF grant IIS-1018426 and CCF-0728879. ISD
acknowledges support from the Moncrief Grand Challenge Award.
[Figure 1: four panels plotting relative error (log scale) against time in seconds for ALM, Sinco, PSM, Glasso, IPM, and QUIC: (a) Estrogen, p = 692; (b) Arabidopsis, p = 834; (c) Leukemia, p = 1,255; (d) hereditarybc, p = 1,869.]
Figure 1: Comparison of algorithms on real datasets. The results show QUIC converges faster than
other methods.
References
[1] A. Agarwal, S. Negahban, and M. Wainwright. Convergence rates of gradient methods for high-dimensional statistical recovery. In NIPS, 2010.
[2] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9, 2008.
[3] D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 7th printing edition, 2009.
[5] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. UAI, 2008.
[6] J. Dunn. Newton's method and the Goldstein step-length rule for constrained minimization problems. SIAM J. Control and Optimization, 18(6):659–674, 1980.
[7] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, July 2008.
[9] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
[10] E. S. Levitin and B. T. Polyak. Constrained minimization methods. U.S.S.R. Computational Math. and Math. Phys., 6:1–50, 1966.
[11] L. Li and K.-C. Toh. An inexact interior point method for l1-regularized sparse covariance selection. Mathematical Programming Computation, 2:291–315, 2010.
[12] L. Meier, S. Van de Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B, 70:53–71, 2008.
[13] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
[14] K. Scheinberg, S. Ma, and D. Goldfarb. Sparse inverse covariance selection via alternating linearization methods. NIPS, 2010.
[15] K. Scheinberg and I. Rish. Learning sparse Gaussian Markov networks using a greedy coordinate ascent approach. In J. Balcázar, F. Bonchi, A. Gionis, and M. Sebag, editors, Machine Learning and Knowledge Discovery in Databases, volume 6323 of Lecture Notes in Computer Science, pages 196–212. Springer Berlin / Heidelberg, 2010.
[16] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 58:267–288, 1996.
[17] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117:387–423, 2007.
[18] T. T. Wu and K. Lange. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2(1):224–244, 2008.
[19] G.-X. Yuan, K.-W. Chang, C.-J. Hsieh, and C.-J. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. Journal of Machine Learning Research, 11:3183–3234, 2010.
[20] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94:19–35, 2007.
[21] S. Yun and K.-C. Toh. A coordinate gradient descent method for l1-regularized convex minimization. Computational Optimizations and Applications, 48(2):273–307, 2011.
Autonomous Learning of Action Models for Planning
Neville Mehta
Prasad Tadepalli
Alan Fern
School of Electrical Engineering and Computer Science
Oregon State University, Corvallis, OR 97331, USA.
{mehtane,tadepall,afern}@eecs.oregonstate.edu
Abstract
This paper introduces two new frameworks for learning action models for planning. In the mistake-bounded planning framework, the learner has access to a
planner for the given model representation, a simulator, and a planning problem
generator, and aims to learn a model with at most a polynomial number of faulty
plans. In the planned exploration framework, the learner does not have access to a
problem generator and must instead design its own problems, plan for them, and
converge with at most a polynomial number of planning attempts. The paper reduces learning in these frameworks to concept learning with one-sided error and
provides algorithms for successful learning in both frameworks. A specific family
of hypothesis spaces is shown to be efficiently learnable in both the frameworks.
1 Introduction
Planning research typically assumes that the planning system is provided complete and correct models of the actions. However, truly autonomous agents must learn these models. Moreover, model
learning, planning, and plan execution must be interleaved, because agents need to plan long before
perfect models are learned. This paper formulates and analyzes the learning of deterministic action
models used in planning for goal achievement. It has been shown that deterministic STRIPS actions
with a constant number of preconditions can be learned from raw experience with at most a polynomial number of plan prediction mistakes [8]. In spite of this positive result, compact action models
in fully observable, deterministic domains are not always efficiently learnable. For example,
action models represented as arbitrary Boolean functions are not efficiently learnable under standard
cryptographic assumptions such as the hardness of factoring [2].
Learning action models for planning is different from learning an arbitrary function from states
and actions to next states, because one can ignore modeling the effects of some actions in certain
contexts. For example, most people who drive do not ever learn a complete model of the dynamics
of their vehicles; while they might accurately know the stopping distance or turning radius, they
could be oblivious to many aspects that an expert auto mechanic is comfortable with. To capture
this intuition, we introduce the concept of an adequate model, that is, a model that is sound and
sufficiently complete for planning for a given class of goals. When navigating a city, any spanning
tree of the transportation network connecting the places of interest would be an adequate model.
We define two distinct frameworks for learning adequate models for planning and then characterize
sufficient conditions for success in these frameworks. In the mistake-bounded planning (MBP)
framework, the goal is to continually solve user-generated planning problems while learning action
models and guarantee at most a polynomial number of faulty plans or mistakes. We assume that
in addition to the problem generator, the learner has access to a sound and complete planner and
a simulator (or the real world). We also introduce a more demanding planned exploration (PLEX)
framework, where the learner needs to generate its own problems to refine its action model. This
requirement translates to an experiment-design problem, where the learner needs to design problems
in a goal language to refine the action models.
The MBP and PLEX frameworks can be reduced to over-general query learning, concept learning
with strictly one-sided error, where the learner is only allowed to make false positive mistakes [7].
This is ideally suited for the autonomous learning setting in which there is no oracle who can provide
positive examples of plans or demonstrations, but negative examples are observed when the agent's
plans fail to achieve their goals. We introduce mistake-bounded and exact learning versions of
this learning framework and show that they are strictly more powerful than the recently introduced
KWIK framework [4]. We view an action model as a set of state-action-state transitions and ensure
that the learner always maintains a hypothesis which includes all transitions in some adequate model.
Thus, a sound plan is always in the learner?s search space, while it may not always be generated. As
the learner gains more experience in generating plans, executing them on the simulator, and receiving
observations, the hypothesis is incrementally refined until an adequate model is discovered. To
ground our analysis, we show that a general family of hypothesis spaces is learnable in polynomial
time in the two frameworks given appropriate goal languages. This family includes a generalization
of propositional STRIPS operators with conditional effects.
2 Over-General Query Learning
We first introduce a variant of a concept-learning framework that serves as formal underpinning
of our model-learning frameworks. This variant is motivated by the principle of "optimism under
uncertainty", which is at the root of several related algorithms in reinforcement learning [1, 3].
A concept is a set of instances. A hypothesis space H is a set of strings or hypotheses, each of
which represents a concept. The size of the concept is the length of the smallest hypothesis that
represents it. Without loss of generality, H can be structured as a (directed acyclic) generalization
graph, where the nodes correspond to sets of equivalent hypotheses representing a concept and there
is a directed edge from node n1 to node n2 if and only if the concept at n1 is strictly more general
than (a strict superset of) that at n2 .
Definition 2.1. The height of H is a function of n and is the length of the longest path from a root
node to any node representing concepts of size n in the generalization graph of H.
Definition 2.2. A hypothesis h is consistent with a set of negative examples Z if h ∩ Z = ∅. Given a set of negative examples Z consistent with a target hypothesis h, the version space of action models is the subset of all hypotheses in H that are consistent with Z and is denoted as M(Z).
Definition 2.3. H is well-structured if, for any negative example set Z which has a consistent target hypothesis in H, the version space M(Z) contains a most general hypothesis mgh(Z). Further, H is efficiently well-structured if there exists an algorithm that can compute mgh(Z ∪ {z}) from mgh(Z) and a new example z in time polynomial in the size of mgh(Z) and z.
Lemma 2.1. Any finite hypothesis space H is well-structured if and only if it is closed under union.
Proof. (If) Let Z be a set of negative examples and let H′ = ⋃_{h∈M(Z)} h represent the unique union of all concepts represented by hypotheses in M(Z). Because H is closed under union and finite, H′ must be in H. If ∃z ∈ H′ ∩ Z, then z ∈ h ∩ Z for some h ∈ M(Z). This is a contradiction, because all h ∈ M(Z) are consistent with Z. Consequently, H′ is consistent with Z, and is in M(Z). It is more general than (is a superset of) every other hypothesis in M(Z) because it is their union.
(Only if) Let h1, h2 be any two hypotheses in H and Z be the set of all instances not included in either h1 or h2. Both h1 and h2 are consistent with the examples in Z. As H is well-structured, mgh(Z) must also be in the version space M(Z), and consequently in H. However, mgh(Z) = h1 ∪ h2 because it cannot include any element outside h1 ∪ h2 and must include all elements within. Hence, h1 ∪ h2 is in H, which makes it closed under union.
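As a concrete illustration of the lemma, consider a toy instance space with hypotheses given directly as sets of instances. When the space is closed under union, the most general hypothesis consistent with a set of negative examples Z is simply the union of all consistent hypotheses. This sketch (not from the paper) makes that computation explicit:

```python
from itertools import combinations


def powerset(iterable):
    """All subsets of an iterable, as frozensets (trivially closed under union)."""
    s = list(iterable)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]


def mgh(hypothesis_space, Z):
    """Most general hypothesis consistent with negative examples Z.

    Assumes hypothesis_space is closed under union (Lemma 2.1): the union
    of all consistent hypotheses is then itself a consistent hypothesis,
    and it is the most general one.  Returns None if nothing is consistent.
    """
    consistent = [h for h in hypothesis_space if not (h & Z)]
    if not consistent:
        return None
    union = frozenset().union(*consistent)
    assert union in hypothesis_space  # closure under union guarantees this
    return union


H = powerset({1, 2, 3})
print(sorted(mgh(H, frozenset({2}))))  # -> [1, 3]
```

With Z = {2}, the consistent hypotheses are exactly the subsets avoiding 2, and their union {1, 3} is the mgh, matching the "if" direction of the proof.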
In the over-general query (OGQ) framework, the teacher selects a target concept c ∈ H. The learner outputs a query in the form of a hypothesis h ∈ H, where h must be at least as general as c. The teacher responds with yes if h ⊆ c and the episode ends; otherwise, the teacher gives a counterexample x ∈ h − c. The learner then outputs a new query, and the cycle repeats.
Definition 2.4. A hypothesis space is OGQ-learnable if there exists a learning algorithm for the
OGQ framework that identifies the target c with the number of queries and total running time that
is polynomial in the size of c and the size of the largest counterexample.
Theorem 1. H is learnable in the OGQ framework if and only if H is efficiently well-structured and
its height is a polynomial function.
Proof. (If) If H is efficiently well-structured, then the OGQ learner can always output the mgh,
guaranteed to be more general than the target concept, in polynomial time. Because the maximum
number of hypothesis refinements is bounded by the polynomial height of H, it is learnable in the
OGQ framework.
(Only if) If H is not well-structured, then ∃h1, h2 ∈ H such that h1 ∪ h2 ∉ H. The teacher can delay picking its target concept, but always provide counterexamples from outside both h1 and h2. At some point, these counterexamples will force the learner to choose between h1 and h2, because their union is not in the hypothesis space. Once the learner makes its choice, the teacher can choose the other hypothesis as its target concept c, resulting in the learner's hypothesis not being more general than c. If H is not efficiently well-structured, then there exist Z and z such that computing mgh(Z ∪ {z}) from mgh(Z) and a new example z cannot be done in polynomial time. If the teacher picks mgh(Z ∪ {z}) as the target concept and only provides counterexamples from Z ∪ {z}, then the learner cannot have polynomial running time. Finally, the teacher can always provide counterexamples that force the learner to take the longest path in H's generalization graph. Thus, if H does not have polynomial height, then the number of queries will not be polynomial.
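The positive direction of this theorem can be made concrete for the space of disjunctions over Boolean literals, which is closed under union and has polynomial height. The following sketch (a toy illustration, not code from the paper) runs the OGQ protocol: the learner starts with the most general hypothesis (all 2n literals) and, on each counterexample x ∈ h − c, performs the mgh update by dropping every literal that x satisfies:

```python
from itertools import product


def covers(h, x):
    """A disjunction h (set of literals (index, value)) covers assignment x
    if some literal agrees with x."""
    return any(x[i] == b for (i, b) in h)


def ogq_learn_disjunction(n, teacher):
    """OGQ learner for disjunctions over n Boolean variables.

    h starts as the union of all 2n literals (most general), so it is
    always at least as general as the target; each counterexample strictly
    shrinks h, so at most 2n queries are needed (polynomial height).
    """
    h = {(i, b) for i in range(n) for b in (False, True)}
    while True:
        x = teacher(h)                       # None signals "yes": h covers exactly c
        if x is None:
            return h
        h -= {(i, x[i]) for i in range(n)}   # mgh update: drop literals x satisfies


def make_teacher(n, target):
    """Brute-force teacher: return some x in h - target, or None if h <= target."""
    def teacher(h):
        for x in product((False, True), repeat=n):
            if covers(h, x) and not covers(target, x):
                return x
        return None
    return teacher


target = {(0, True), (2, False)}             # the disjunction x0 OR (NOT x2)
learned = ogq_learn_disjunction(3, make_teacher(3, target))
print(learned == target)  # -> True
```

Target literals are never removed (a counterexample, by definition, satisfies none of them), so the learner's hypothesis stays over-general throughout, exactly as the framework requires.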
2.1 A Comparison of Learning Frameworks
In order to compare the OGQ framework to other learning frameworks, we first define the over-general mistake-bounded (OGMB) learning framework, in which the teacher selects a target concept c from H and presents an arbitrary instance x from the instance space to the learner for a prediction. An inclusion mistake is made when the learner predicts x ∈ c although x ∉ c; an exclusion mistake is made when the learner predicts x ∉ c although x ∈ c. The teacher presents the true label to the learner if a mistake is made, and then presents the next instance to the learner, and so on.
Definition 2.5. A hypothesis space is OGMB-learnable if there exists a learning algorithm for the
OGMB framework that never makes any exclusion mistakes and its number of inclusion mistakes
and the running time on each instance are both bounded by polynomial functions of the size of the
target concept and the size of the largest instance seen by the learner.
In the following analysis, we let the name of a framework denote the set of hypothesis spaces learnable in that framework.
Theorem 2. OGQ ⊊ OGMB.
Proof. We can construct an OGMB learner from the OGQ learner as follows. When the OGQ
learner makes a query h, we use h to make predictions for the OGMB learner. As h is guaranteed
to be over-general, it never makes an exclusion mistake. Any instance x on which it makes an
inclusion mistake must be in h − c and this is returned to the OGQ learner. The cycle repeats with
the OGQ learner providing a new query. Because the OGQ learner makes only a polynomial number
of queries and takes polynomial time for query generation, the simulated OGMB learner makes only
a polynomial number of mistakes and runs in at most polynomial time per instance. The converse
does not hold in general because the queries of the OGQ learner are restricted to be ?proper?, that is,
they must belong to the given hypothesis space. While the OGMB learner can maintain the version
space of all consistent hypotheses of a polynomially-sized hypothesis space, the OGQ learner can
only query with a single hypothesis and there may not be any hypothesis that is guaranteed to be
more general than the target concept.
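The constructive half of this reduction is mechanical enough to write down. In this toy sketch (an illustration under simplifying assumptions, with hypotheses as plain sets), the OGMB predictor answers with the current over-general query, so exclusion mistakes are impossible, and every inclusion mistake is exactly a counterexample in h − c that refines the query:

```python
class OGMBFromOGQ:
    """OGMB-style predictor built from an OGQ-style learner (Theorem 2).

    predict() answers membership in the current over-general hypothesis h,
    so it can never make an exclusion mistake; an inclusion mistake on x
    means x is in h - c, and refine(h, x) plays the role of the OGQ
    learner's next query.
    """

    def __init__(self, initial_query, refine):
        self.h = initial_query      # over-general hypothesis, h is a superset of c
        self.refine = refine        # refine(h, x) -> next over-general query

    def predict(self, x):
        return x in self.h

    def observe(self, x, label):
        if self.predict(x) and not label:   # inclusion mistake: x in h - c
            self.h = self.refine(self.h, x)


# Toy use: instance space {1..4}, target c = {1, 2}; refinement just drops x.
learner = OGMBFromOGQ({1, 2, 3, 4}, lambda h, x: h - {x})
learner.observe(3, False)
print(learner.predict(3))  # -> False
```

Each refinement call corresponds to one OGQ query, so a polynomial query bound for the OGQ learner translates directly into a polynomial inclusion-mistake bound here.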
If the learner is allowed to ask queries outside H, such as queries of the form h1 ∪ ... ∪ hn for all
hi in the version space, then over-general learning is possible. In general, if the learner is allowed
to ask about any polynomially-sized, polynomial-time computable hypothesis, then it is as powerful
as OGMB, because it can encode the computation of the OGMB learner inside a polynomial-sized
circuit and query with that as the hypothesis. We call this the OGQ+ framework and claim the
following theorem (the proof is straightforward).
Theorem 3. OGQ+ = OGMB.
The Knows-What-It-Knows (KWIK) learning framework [4] is similar to the OGMB framework
with one key difference: it does not allow the learner to make any prediction when it does not know
the correct answer. In other words, the learner either makes a correct prediction or simply abstains
from making a prediction and gets the true label from the teacher. The number of abstentions is
bounded by a polynomial in the target size and the largest instance size. The set of hypothesis
spaces learnable in the mistake-bound (MB) framework is a strict subset of that learnable in the
probably-approximately-correct (PAC) framework [5], leading to the following result.
Theorem 4. KWIK ⊊ OGMB ⊊ MB ⊊ PAC.
Proof. OGMB ⊊ MB: Every hypothesis space that is OGMB-learnable is MB-learnable because the
OGMB learner is additionally constrained to not make an exclusion mistake. However, every MBlearnable hypothesis space is not OGMB-learnable. Consider the hypothesis space of conjunctions
of n Boolean literals (positive or negative). A single exclusion mistake is sufficient for an MB learner
to learn this hypothesis space. In contrast, after making an inclusion mistake, the OGMB learner
can only exclude that example from the candidate set. As there is exactly one positive example, this
could force the OGMB learner to make an exponential number of mistakes (similar to guessing an
unknown password).
KWIK ⊊ OGMB: If a concept class is KWIK-learnable, it is also OGMB-learnable; when the
KWIK learner does not know the true label, the OGMB learner simply predicts that the instance is
positive and gets corrected if it is wrong. However, every OGMB-learnable hypothesis space is not
KWIK-learnable. Consider the hypothesis space of disjunctions of n Boolean literals. The OGMB
learner begins with a disjunction over all possible literals (both positive and negative) and hence
predicts all instances as positive. A single inclusion mistake is sufficient for the OGMB learner
to learn this hypothesis space. On the other hand, the teacher can supply the KWIK learner with
an exponential number of positive examples, because the KWIK learner cannot ever know that the
target does not include all possible instances; this implies that the number of abstentions is not
polynomially bounded.
This theorem demonstrates that KWIK is too conservative a framework for model learning: any
prediction that might be a mistake is disallowed. This makes it impossible to learn even simple
concept classes such as pure disjunctions.
3 Planning Components
A factored planning domain P is a tuple (V, D, A, T), where V = {v1, . . . , vn} is the set of variables, D is the domain of the variables in V, and A is the set of actions. S = D^n represents the state space and T ⊆ S × A × S is the transition relation, where (s, a, s′) ∈ T signifies that taking action a in state s results in state s′. As we only consider learning deterministic action models, the transition relation is in fact a function, although the learner's hypothesis space may include nondeterministic models. The domain parameters, n, |D|, and |A|, characterize the size of P and are implicit in all claims of complexity in the rest of this paper.
Definition 3.1. An action model is a relation M ⊆ S × A × S.
A planning problem is a pair (s0, g), where s0 ∈ S and the goal condition g is an expression chosen from a goal language G and represents a set of states in which it evaluates to true. A state s satisfies a goal g if and only if g is true in s. Given a planning problem (s0, g), a plan is a sequence of states and actions s0, a1, . . . , ap, sp, where the state sp satisfies the goal g. The plan is sound with respect to (M, g) if (si−1, ai, si) ∈ M for 1 ≤ i ≤ p.
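The soundness condition is a direct membership check on each transition of the plan. The following hypothetical checker (an illustration, not from the paper) takes a plan as an alternating state/action list, a model as a set of (s, a, s′) triples, and a goal as a predicate over states:

```python
def is_sound(plan, model, goal):
    """Check soundness of a plan s0, a1, s1, ..., ap, sp w.r.t. (model, goal):
    every transition (s_{i-1}, a_i, s_i) must be in the model, and the
    final state sp must satisfy the goal.

    `plan` alternates states and actions: [s0, a1, s1, ..., ap, sp].
    """
    states, actions = plan[::2], plan[1::2]
    transitions_ok = all(
        (states[i], actions[i], states[i + 1]) in model
        for i in range(len(actions))
    )
    return transitions_ok and goal(states[-1])


# Toy two-state domain:
M = {("A", "go", "B"), ("B", "go", "A")}
print(is_sound(["A", "go", "B"], M, lambda s: s == "B"))  # -> True
```

Note that soundness is relative to the model M, not to the true transition function T; a plan can be sound with respect to an over-general M and still fail on execution, which is exactly the gap the learning frameworks below exploit.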
Definition 3.2. A planner for the hypothesis space and goal language pair (H, G) is an algorithm that takes M ∈ H and (s0, g ∈ G) as inputs and outputs a plan or signals failure. It is sound with respect to (H, G) if, given any M and (s0, g), it produces a sound plan with respect to (M, g) or signals failure. It is complete with respect to (H, G) if, given any M and (s0, g), it produces a sound plan whenever one exists with respect to (M, g).
We generalize the definition of soundness from its standard usage in the literature in order to apply to
nondeterministic action models, where the nondeterminism is "angelic": the planner can control
the outcome of actions when multiple outcomes are possible according to its model [6]. One way to
implement such a planner is to do forward search through all possible action and outcome sequences
and return an action sequence if it leads to a goal under some outcome choices. Our analysis is
agnostic to plan quality or plan length and applies equally well to suboptimal planners. This is
motivated by the fact that optimal planning is hard for most domains, but suboptimal planning such
as hierarchical planning can be quite efficient and practical.
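The brute-force search alluded to above can be sketched in a few lines. In this toy version (an illustration under simplifying assumptions, not the paper's implementation), nondeterminism is treated angelically: whenever the model lists several outcomes (s, a, s′) for the same state and action, the search is free to pursue any of them:

```python
from collections import deque


def angelic_plan(model, s0, goal, actions, max_depth=10):
    """Breadth-first forward search with angelic nondeterminism.

    The planner may pick any outcome s' with (s, a, s') in the model.
    Returns an alternating state/action plan [s0, a1, s1, ...] ending in
    a goal state, or None.  A brute-force sketch, not an efficient planner.
    """
    if goal(s0):
        return [s0]
    frontier = deque([[s0]])
    seen = {s0}
    while frontier:
        plan = frontier.popleft()
        s = plan[-1]
        if len(plan) // 2 >= max_depth:
            continue
        for a in actions:
            for (u, act, v) in model:
                if u == s and act == a and v not in seen:
                    new_plan = plan + [a, v]
                    if goal(v):
                        return new_plan
                    seen.add(v)
                    frontier.append(new_plan)
    return None


# "flip" can land in B or C; the angelic planner simply chooses C.
M = {("A", "flip", "B"), ("A", "flip", "C"), ("B", "flip", "A")}
print(angelic_plan(M, "A", lambda s: s == "C", ["flip"]))  # -> ['A', 'flip', 'C']
```

Because the search returns a plan as soon as some outcome choice reaches the goal, it is complete in the angelic sense of Definition 3.2 (up to the depth bound), and any plan it returns is sound with respect to the model by construction.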
Definition 3.3. A planning mistake occurs if either the planner signals failure when a sound plan
exists with respect to the transition function T or when the plan output by the planner is not sound
with respect to T .
Definition 3.4. Let P be a planning domain and G be a goal language. An action model M is adequate for G in P if M ⊆ T and the existence of a sound plan with respect to (T, g ∈ G) implies the existence of a sound plan with respect to (M, g). H is adequate for G if ∃M ∈ H such that M is adequate for G.
An adequate model may be partial or incomplete in that it may not include every possible transition
in the transition function T . However, the model is sufficient to produce a sound plan with respect to
(T, g) for every goal g in the desired language. Thus, the more limited the goal language, the more
incomplete the adequate model can be. In the example of a city map, if the goal language excludes
certain locations, then the spanning tree could exclude them as well, although not necessarily so.
Definition 3.5. A simulator of the domain is always situated in the current state s. It takes an action a as input, transitions to the state s′ resulting from executing a in s, and returns the current state s′. Given a goal language G, a problem generator generates an arbitrary problem (s0, g ∈ G) and sets the state of the simulator to s0.
4 Mistake-Bounded Planning Framework
This section constructs the MBP framework that allows learning and planning to be interleaved for
user-generated problems. It actualizes the teacher of the OGQ framework by combining a problem
generator, a planner, and a simulator, and interfaces with the OGQ learner to learn action models as
hypotheses over the space of possible state transitions for each action. It turns out that the one-sided
mistake property is needed for autonomous learning because the learner can only learn by generating
plans and observing the results; if the learner ever makes an exclusion error, there is no guarantee of
finding a sound plan even when one exists and the learner cannot recover from such mistakes.
Definition 4.1. Let G be a goal language such that H is adequate for it. H is learnable in the MBP
framework if there exists an algorithm A that interacts with a problem generator over G, a sound
and complete planner with respect to (H, G), and a simulator of the planning domain P, and outputs
a plan or signals failure for each planning problem while guaranteeing at most a polynomial number
of planning mistakes. Further, A must respond in time polynomial in the domain parameters and the
length of the longest plan generated by the planner, assuming that a call to the planner, simulator,
or problem generator takes O(1) time.
The goal language is picked such that the hypothesis space is adequate for it. We cannot bound the
time for the convergence of A, because there is no limit on when the mistakes are made.
Theorem 5. H is learnable in the MBP framework if H is OGQ-learnable.
Proof. Algorithm 1 is a general schema for action model learning in the MBP framework.

Algorithm 1 MBP Learning Schema
Input: Goal language G
1: M ← OGQ-Learner() // Initial query
2: loop
3:   (s, g) ← ProblemGenerator(G)
4:   plan ← Planner(M, (s, g))
5:   if plan ≠ false then
6:     for (s̄, a, s̄′) in plan do
7:       s′ ← Simulator(a)
8:       if s′ ≠ s̄′ then
9:         M ← OGQ-Learner((s, a, s̄′))
10:        print mistake
11:        break
12:      s ← s′
13:    if no mistake then
14:      print plan

The model M begins with the initial query from OGQ-Learner. ProblemGenerator provides a planning problem and initializes the current state of Simulator. Given M and the planning problem, Planner always outputs a plan if one exists because H is adequate for G (it contains a "target" adequate model) and M is at least as general as every adequate model. If Planner signals failure, then there is no plan for it. Otherwise, the plan is executed through Simulator until an observed transition conflicts with the predicted transition. If such a transition is found, it is supplied to OGQ-Learner and M is updated with the next query; otherwise, the plan is output. If H is OGQ-learnable, then OGQ-Learner will only be called a polynomial number of times, every call taking polynomial time. As the number of planning mistakes is polynomial and every response of Algorithm 1 is polynomial in the runtime of OGQ-Learner and the length of the longest plan, H is learnable in the MBP framework.
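A runnable sketch of this learning loop is given below, with toy stand-ins for all four components (every name here is hypothetical, chosen for illustration). The hidden dynamics cycle three states under a single action; the learner starts from the most general model (all transitions possible) and discards each transition that the simulator refutes:

```python
def mbp_loop(ogq_learner, planner, simulator, problem_generator, episodes=20):
    """Toy run of the MBP schema: plan under the current over-general model,
    execute on the simulator, and feed the first faulty predicted
    transition back to the learner as a counterexample."""
    M = ogq_learner(None)                      # initial most-general query
    mistakes = 0
    for _ in range(episodes):
        s0, g = problem_generator()
        plan = planner(M, s0, g)
        if plan is None:
            continue                           # planner signals failure
        for (s, a, s_pred) in plan:
            if simulator(s, a) != s_pred:      # observed vs. predicted outcome
                M = ogq_learner((s, a, s_pred))
                mistakes += 1
                break
    return M, mistakes


def T(s, a):                                   # hidden true dynamics: "inc" cycles
    return (s + 1) % 3


hypothesis = {(s, "inc", t) for s in range(3) for t in range(3)}  # most general


def ogq_learner(counterexample):
    if counterexample is not None:
        hypothesis.discard(counterexample)     # drop the refuted transition
    return set(hypothesis)


def planner(M, s, g, depth=5):
    """Depth-bounded DFS for a chained transition sequence from s to g under M."""
    if s == g:
        return []
    if depth == 0:
        return None
    for (u, a, v) in sorted(M):
        if u == s:
            rest = planner(M, v, g, depth - 1)
            if rest is not None:
                return [(u, a, v)] + rest
    return None


M_final, k = mbp_loop(ogq_learner, planner, T, lambda: (0, 2))
final_plan = planner(M_final, 0, 2)
print(final_plan)  # -> [(0, 'inc', 1), (1, 'inc', 2)]
```

Because correct transitions are never discarded, a sound plan always remains in the search space, and each faulty plan removes at least one incorrect transition, so the number of mistakes is bounded just as the proof requires.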
The above result generalizes the work on learning STRIPS operator models from raw experience
(without a teacher) in [8] to arbitrary hypothesis spaces by identifying sufficiency conditions. (A
family of hypothesis spaces considered later in this paper subsumes propositional STRIPS by capturing conditional effects.) It also clarifies the notion of an adequate model, which can be much
simpler than the true transition model, and the influence of the goal language on the complexity of
learning action models.
5 Planned Exploration Framework
The MBP framework is appropriate when mistakes are permissible on user-given problems as long
as their total number is limited and not for cases where no mistakes are permitted after the training
period. In the planned exploration (PLEX) framework, the agent seeks to learn an action model for
the domain without an external problem generator by generating planning problems for itself. The
key issue here is to generate a reasonably small number of planning problems such that solving them
would identify a deterministic action model. Learning a model in the PLEX framework involves
knowing where it is deficient and then planning to reach states that are informative, which entails
formulating planning problems in a goal language. This framework provides a polynomial sample
convergence guarantee which is stronger than a polynomial mistake bound of the MBP framework.
Without a problem generator that can change the simulator's state, it is impossible for the simulator
to transition freely between strongly connected components (SCCs) of the transition graph. Hence,
we make the assumption that the transition graph is a disconnected union of SCCs and require only
that the agent learn the model for a single SCC that contains the initial state of the simulator.
Definition 5.1. Let P be a planning domain whose transition graph is a union of SCCs. (H, G)
is learnable in the PLEX framework if there exists an algorithm A that interacts with a sound and
complete planner with respect to (H, G) and the simulator for P and outputs a model M ∈ H that
is adequate for G within the SCC that contains the initial state s0 of the simulator after a polynomial
number of planning attempts. Further, A must run in polynomial time in the domain parameters and
the length of the longest plan output by the planner, assuming that every call to the planner and the
simulator takes O(1) time.
A key step in planned exploration is designing appropriate planning problems. We call these experiments because the goal of solving these problems is to disambiguate nondeterministic action
models. In particular, the agent tries to reach an informative state where the current model is nondeterministic.
Definition 5.2. Given a model M, the set of informative states is I(M) = {s : (s, a, s′), (s, a, s″) ∈ M ∧ s′ ≠ s″}, where a is said to be informative in s.
Definition 5.3. A set of goals G is a cover of a set of states R if ∪_{g∈G} {s : s satisfies g} = R.
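A minimal realization of the informative-states definition, with a model given as an explicit set of transition tuples (an illustrative encoding):

```python
# Hypothetical helper realizing Definition 5.2: the informative states of a
# model M represented as a set of transition tuples (s, a, s').
def informative_states(M):
    next_states = {}
    for (s, a, t) in M:
        next_states.setdefault((s, a), set()).add(t)
    # a state is informative if some action has more than one predicted outcome
    return {s for (s, a), outs in next_states.items() if len(outs) > 1}
```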
Given the goal language G and a model M, the problem of experiment design is to find a set of goals
G ⊆ G such that the sets of states that satisfy the goals in G collectively cover all informative states
I(M ). If it is possible to plan to achieve one of these goals, then either the plan passes through a
state where the model is nondeterministic or it executes successfully and the agent reaches the final
goal state; in either case, an informative action can be executed and the observed transition is used
to refine the model. If none of the goals in G can be successfully planned for, then no informative
states for that action are reachable. We formalize these intuitions below.
Definition 5.4. The width of (H, G) is defined as max_{M∈H} min_{G⊆G : G is a cover of I(M)} |G|, where min_G |G| = ∞ if there is no G ⊆ G to cover a nonempty I(M).
Definition 5.5. (H, G) permits efficient experiment design if, for any M ∈ H, (1) there exists an
algorithm (EXPERIMENTDESIGN) that takes M and G as input and outputs a polynomial-sized
cover of I(M) in polynomial time, and (2) there exists an algorithm (INFOACTIONSTATES) that
takes M and a state s as input and outputs an informative action and two (distinct) predicted next
states according to M in polynomial time.
If (H, G) permits efficient experiment design, then it has polynomial width because no algorithm
can always guarantee to output a polynomial-sized cover otherwise.
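For intuition, EXPERIMENTDESIGN is trivial in the special case of a flat state space whose goal language can name individual states: one equality goal per informative state covers I(M). The sketch below assumes that simplified setting; it is not the paper's general construction.

```python
# Toy EXPERIMENTDESIGN for a flat (unfactored) state space, where the goal
# language contains a "state = s" goal for every state s. One goal per
# informative state is then a cover of I(M). All names are illustrative.
def experiment_design(M):
    next_states = {}
    for (s, a, t) in M:
        next_states.setdefault((s, a), set()).add(t)
    informative = {s for (s, a), outs in next_states.items() if len(outs) > 1}
    return [("state", s) for s in sorted(informative)]
```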
Theorem 6. (H, G) is learnable in the PLEX framework if it permits efficient experiment design,
and H is adequate for G and is OGQ-learnable.
Proof. Algorithm 2 is a general schema for action model learning in the PLEX framework. The
model M begins with the initial query from OGQ-LEARNER. Given M and G, EXPERIMENTDESIGN
computes a polynomial-sized cover G. If G is empty, then the model cannot be refined further;
otherwise, given M and a goal g ∈ G, PLANNER may signal failure if either no state satisfies g or
states satisfying g are not reachable from the current state of the simulator. If PLANNER signals
failure on all of the goals, then none of the informative states are reachable and M cannot be refined
further. If PLANNER does output a plan, then the plan is executed through SIMULATOR until an
observed transition conflicts with the predicted transition. If such a transition is found, it is supplied
to OGQ-LEARNER and M is updated with the next query. If the plan executes successfully, then
INFOACTIONSTATES provides an informative action with the corresponding set of two resultant
states according to M; OGQ-LEARNER is supplied with the transition of the goal state, the
informative action, and the incorrectly predicted next state, and M is updated with the new query. A
new cover is computed every time M is updated, and the process continues until all experiments are
exhausted. If (H, G) permits efficient experiment design, then every cover can be computed in
polynomial time and INFOACTIONSTATES is efficient. If H is OGQ-learnable, then OGQ-LEARNER
will only be called a polynomial number of times and it can output a new query in polynomial time.
As the number of failures per successful plan is bounded by a polynomial in the width w of (H, G),
the total number of calls to PLANNER is polynomial. Further, as the innermost loop of Algorithm 2
is bounded by the longest length l of a plan, its running time is a polynomial in the domain
parameters and l. Thus, (H, G) is learnable in the PLEX framework.

Algorithm 2 PLEX LEARNING SCHEMA
Input: Initial state s, goal language G
Output: Model M
 1: M ← OGQ-LEARNER() // Initial query
 2: loop
 3:   G ← EXPERIMENTDESIGN(M, G)
 4:   if G = ∅ then
 5:     return M
 6:   for g ∈ G do
 7:     plan ← PLANNER(M, (s, g))
 8:     if plan ≠ false then
 9:       break
10:   if plan = false then
11:     return M
12:   for (s̄, a, s̄′) in plan do
13:     s′ ← SIMULATOR(a)
14:     if s′ ≠ s̄′ then
15:       M ← OGQ-LEARNER((s, a, s̄′))
16:       break
17:     s ← s′
18:   if M has not been updated then
19:     (a, S̄′) ← INFOACTIONSTATES(M, s)
20:     s′ ← SIMULATOR(a)
21:     M ← OGQ-LEARNER((s, a, s̄′ ∈ S̄′ \ {s′}))
22:     s ← s′
23: return M
6 A Hypothesis Family for Action Modeling
This section proves the learnability of a hypothesis-space family for action modeling in the MBP and
PLEX frameworks. Let U = {u1, u2, . . .} be a polynomial-sized set of polynomially computable
basis hypotheses (polynomial in the relevant parameters), where each ui represents a deterministic set
of transition tuples. Let Power(U) = {∪_{u∈H} u : H ⊆ U} and Pairs(U) = {u1 ∪ u2 : u1, u2 ∈ U}.
Lemma 6.1. Power(U) is OGQ-learnable.
Proof. Power(U) is efficiently well-structured, because it is closed under union by definition and
the new mgh can be computed by removing any basis hypotheses that are not consistent with the
counterexample; this takes polynomial time as U is of polynomial size. At the root of the generalization
graph of Power(U) is the hypothesis ∪_{u∈U} u and at the leaf is the empty hypothesis. Because
U is of polynomial size and the longest path from the root to the leaf involves removing a single
component at a time, the height of Power(U) is polynomial.
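The mgh update in this proof can be sketched directly, modeling basis hypotheses as frozensets of transition tuples (an illustrative encoding, not the paper's): a counterexample is a transition the current hypothesis wrongly includes, and every basis hypothesis containing it is dropped.

```python
# OGQ-style update for Power(U): the most general hypothesis (mgh) is the
# union of the surviving basis hypotheses; a counterexample removes every
# basis hypothesis that contains the refuted transition.
def ogq_update(basis, counterexample):
    return [u for u in basis if counterexample not in u]

def mgh(basis):
    return set().union(*basis) if basis else set()
```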
Lemma 6.2. Power(U) is learnable in the MBP framework.
Proof. This follows from Lemma 6.1 and Theorem 5.
Lemma 6.3. For any goal language G, (Power(U), G) permits efficient experiment design if
(Pairs(U), G) permits efficient experiment design.
Proof. Any informative state for a hypothesis in Power(U) is an informative state for some hypothesis
in Pairs(U), and vice versa. Hence, a cover for (Pairs(U), G) would be a cover for (Power(U), G).
Consequently, if (Pairs(U), G) permits efficient experiment design, then the efficient algorithms
EXPERIMENTDESIGN and INFOACTIONSTATES are directly applicable to (Power(U), G).
Lemma 6.4. For any goal language G, (Power(U), G) is learnable in the PLEX framework if
(Pairs(U), G) permits efficient experiment design and Power(U) is adequate for G.
Proof. This follows from Lemmas 6.1 and 6.3, and Theorem 6.
We now define a hypothesis space that is a concrete member of the family. Let an action production
r be defined as "act : pre → post", where act(r) is an action and the precondition pre(r) and
postcondition post(r) are conjunctions of "variable = value" literals.
Definition 6.1. A production r is triggered by a transition (s, a, s′) if s satisfies the precondition
pre(r) and a = act(r). A production r is consistent with (s, a, s′) if either (1) r is not triggered
by (s, a, s′) or (2) s′ satisfies post(r) and all variables not mentioned in post(r) have the same
values in both s and s′.
A production represents the set of all consistent transitions that trigger it. All the variables in pre(r)
must take their specified values in a state to trigger r; when r is triggered, post(r) defines the values
in the next state. An example of an action production is "Do : v1 = 0, v2 = 1 → v1 = 2, v3 = 1". It
is triggered only when the Do action is executed in a state in which v1 = 0 and v2 = 1, and defines
the value of v1 to be 2 and v3 to be 1 in the next state, with all other variables staying unchanged.
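These semantics can be encoded directly; the dict-based state representation below is an illustrative choice, and the example production is the one from the text.

```python
# A production is (action, pre, post) with pre/post as {variable: value}
# conjunctions (Definition 6.1); states are {variable: value} dicts.
def triggered(prod, s, a):
    action, pre, _ = prod
    return a == action and all(s.get(v) == val for v, val in pre.items())

def successor(prod, s):
    # post values override; all unmentioned variables stay unchanged
    t = dict(s)
    t.update(prod[2])
    return t

# the paper's example production
do_rule = ("Do", {"v1": 0, "v2": 1}, {"v1": 2, "v3": 1})
```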
Let k-SAP be the hypothesis space of models represented by a set of action productions (SAP)
with no more than k variables per production. If U is the set of productions, then
|U| = O(|A| Σ_{i=1}^{k} n^i (|D| + 1)^{2i}) = O(|A| n^k |D|^{2k}), because a production can have one of |A| actions,
up to k relevant variables figuring on either side of the production, and each variable set to a value
in its domain. As U is of polynomial size, k-SAP is an instance of the family of basis action models.
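For tiny, arbitrary parameter values the count can be checked by brute force: each of the n variables is either absent from a production or carries a value in pre, in post, or in both, giving (|D|+1)^2 options per variable, and the enumeration (which also counts the trivial empty production) stays below the bound |A| Σ_{i=1}^{k} n^i (|D|+1)^{2i}.

```python
from itertools import product
from math import comb

A, n, D, k = 1, 3, 2, 2          # hypothetical tiny parameters
opts = (D + 1) ** 2              # per variable: absent, pre=v, post=v, or both

count = 0
for roles in product(range(opts), repeat=n):
    if sum(1 for r in roles if r != 0) <= k:   # at most k mentioned variables
        count += 1
count *= A

closed = A * sum(comb(n, i) * (opts - 1) ** i for i in range(k + 1))
bound = A * sum(n ** i * (D + 1) ** (2 * i) for i in range(1, k + 1))
```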
Moreover, if Conj is the goal language consisting of all goals that can be expressed as conjunctions
of "variable = value" literals, then (Pairs(k-SAP), Conj) permits efficient experiment design.
Lemma 6.5. (k-SAP, Conj) is learnable in the PLEX framework if k-SAP is adequate for Conj.
7 Conclusion
The main contributions of the paper are the development of the MBP and PLEX frameworks for
learning action models and the characterization of sufficient conditions for efficient learning in these
frameworks. It also provides results on learning a family of hypothesis spaces that is, in some ways,
more general than standard action modeling languages. For example, unlike propositional STRIPS
operators, k-SAP captures the conditional effects of actions.
While STRIPS-like languages served us well in planning research by creating a common useful
platform, they are not designed from the point of view of learnability or planning efficiency. Many
domains such as robotics and real-time strategy games are not amenable to such clean and simple action specification languages. This suggests an approach in which the learner considers increasingly
complex models as dictated by its planning needs. For example, the model learner might start with
small values of k in k-SAP and then incrementally increase k until a value is found that is adequate
for the goals encountered. In general, this motivates a more comprehensive framework in which
planning and learning are tightly integrated, which is the premise of this paper. Another direction is to investigate better exploration methods that go beyond using optimistic models to include
Bayesian and utility-guided optimal exploration.
8 Acknowledgments
We thank the reviewers for their helpful feedback. This research is supported by the Army Research
Office under grant number W911NF-09-1-0153.
References
[1] R. Brafman and M. Tennenholtz. R-MAX - A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213-231, 2002.
[2] M. Kearns and L. Valiant. Cryptographic Limitations on Learning Boolean Formulae and Finite
Automata. In Annual ACM Symposium on Theory of Computing, 1989.
[3] L. Li. A Unifying Framework for Computational Reinforcement Learning Theory. PhD thesis,
Rutgers University, 2009.
[4] L. Li, M. Littman, and T. Walsh. Knows What It Knows: A Framework for Self-Aware Learning.
In ICML, 2008.
[5] N. Littlestone. Mistake Bounds and Logarithmic Linear-Threshold Learning Algorithms. PhD
thesis, U.C. Santa Cruz, 1989.
[6] B. Marthi, S. Russell, and J. Wolfe. Angelic Semantics for High-Level Actions. In ICAPS,
2007.
[7] B. K. Natarajan. On Learning Boolean Functions. In Annual ACM Symposium on Theory of
Computing, 1987.
[8] T. Walsh and M. Littman. Efficient Learning of Action Schemas and Web-Service Descriptions.
In AAAI, 2008.
Structural Probit and Ramp Loss
Joseph Keshet
TTI-Chicago
[email protected]
David McAllester
TTI-Chicago
[email protected]
Abstract
We consider latent structural versions of probit loss and ramp loss. We show that
these surrogate loss functions are consistent in the strong sense that for any feature
map (finite or infinite dimensional) they yield predictors approaching the infimum
task loss achievable by any linear predictor over the given features. We also give
finite sample generalization bounds (convergence rates) for these loss functions.
These bounds suggest that probit loss converges more rapidly. However, ramp
loss is more easily optimized on a given sample.
1
Introduction
Machine learning has become a central tool in areas such as speech recognition, natural language
translation, machine question answering, and visual object detection. In modern approaches to these
applications systems are evaluated with quantitative performance metrics. In speech recognition one
typically measures performance by the word error rate. In machine translation one typically uses
the BLEU score. Recently the IBM deep question answering system was trained to optimize the
Jeopardy game show score. The PASCAL visual object detection challenge is scored by average
precision in recovering object bounding boxes. No metric is perfect and any metric is controversial,
but quantitative metrics provide a basis for quantitative experimentation and quantitative experimentation has led to real progress. Here we adopt the convention that a performance metric is given as a
task loss, a measure of a quantity of error or cost such as the word error rate in speech recognition.
We consider general methods for minimizing task loss at evaluation time.
Although the goal is to minimize task loss, most systems are trained by minimizing a surrogate loss
different from task loss. A surrogate loss is necessary when using scale-sensitive regularization in
training a linear classifier. A linear classifier selects the output that maximizes an inner product of a
feature vector and a weight vector. The output of a linear classifier does not change when the weight
vector is scaled down. But for most regularizers of interest, such as a norm of the weight vector,
scaling down the weight vector drives the regularizer to zero. So directly regularizing the task loss
of a linear classifier is meaningless.
For binary classification standard surrogate loss functions include log loss, hinge loss, probit loss,
and ramp loss. Unlike binary classification, however, the applications mentioned above involve
complex (or structured) outputs. The standard surrogate loss functions for binary classification have
generalizations to the structured output setting. Structural log loss is used in conditional random
fields (CRFs) [7]. Structural hinge loss is used in structural SVMs [13, 14]. Structural probit loss is
defined and empirically evaluated in [6]. A version of structural ramp loss is defined and empirically
evaluated in [3] (but see also [12] for a treatment of the fundamental motivation for ramp loss). All
four of these structural surrogate loss functions are defined formally in section 2.[1]

[1] The definition of ramp loss used here is slightly different from that in [3].
This paper is concerned with developing a better theoretical understanding of the relationship between surrogate loss training and task loss testing for structured labels. Structural ramp loss is
justified in [3] as being a tight upper bound on task loss. But of course the tightest upper bound on
task loss is the task loss itself. Here we focus on generalization bounds and consistency. A finite
sample generalization bound for probit loss was stated implicitly in [9] and an explicit probit loss
bound is given in [6]. Here we review the finite sample bounds for probit loss and prove a finite
sample bound for ramp loss. Using these bounds we show that probit loss and ramp loss are both
consistent in the sense that for any arbitrary feature map (possibly infinite dimensional) optimizing
these surrogate loss functions with appropriately weighted regularization approaches, in the limit of
infinite training data, the minimum loss achievable by a linear predictor over the given features. No
convex surrogate loss function, such as log loss or hinge loss, can be consistent in this sense: for
any nontrivial convex surrogate loss function one can give examples (a single feature suffices) where
the learned weight vector is perturbed by outliers but where the outliers do not actually influence the
optimal task loss.
Both probit loss and ramp loss can be optimized in practice by stochastic gradient descent. Ramp
loss is simpler and easier to implement. The subgradient update for ramp loss is similar to a perceptron update: the update is a difference between a "good" feature vector and a "bad" feature
vector. Ramp loss updates are closely related to updates derived from n-best lists in training machine
translaiton systems [8, 2]. Ramp loss updates regularized by early stopping have been shown to be
effective in phoneme alignment [10]. It is also shown in [10] that in the limit of large weight vectors
the expected ramp loss update converges to the true gradient of task loss. This result suggests consistency for ramp loss, a suggestion confirmed here. A practical stochastic gradient descent algorithm
for structural probit loss is given in [6] where it is also shown that probit loss can be effective for
phoneme recognition. Although the generalization bounds suggest that probit loss converges faster
than ramp loss, ramp loss seems easier to optimize.
We formulate all the notions of loss in the presence of latent structure as well as structured labels.
Latent structure is information that is not given in the labeled data but is constructed by the prediction
algorithm. For example, in natural language translation the alignment between the words in the
source and the words in the target is not explicitly given in a translation pair. Grammatical structures
are also not given in a translation pair but may be constructed as part of the translation process. In
visual object detection the position of object parts is not typically annotated in the labeled data but
part position estimates may be used as part of the recognition algorithm. Although the presence of
latent structure makes log loss and hinge loss non-convex, latent strucure seems essential in many
applications. Latent structural log loss, and the notion of a hidden CRF, is formulated in [11]. Latent
structural hinge loss, and the notion of a latent structural SVM, is formulated in [15].
2 Formal Setting and Review
We consider an arbitrary input space X and a finite label space Y. We assume a source probability
distribution over labeled data, i.e., a distribution over pairs (x, y), where we write E_{x,y}[f(x, y)] for
the expectation of f(x, y). We assume a loss function L such that for any two labels y and ŷ we have
that L(y, ŷ) ∈ [0, 1] is the loss (or cost) when the true label is y and we predict ŷ. We will work with
infinite-dimensional feature vectors. We let ℓ2 be the set of finite-norm infinite-dimensional vectors,
i.e., the set of all square-summable infinite sequences of real numbers. We will be interested in linear
predictors involving latent structure. We assume a finite set Z of "latent labels". For example, we
might take Z to be the set of all parse trees of source and target sentences in a machine translation
system. In machine translation the label y is typically a sentence with no parse tree specified. We can
recover the pure structural case, with no latent information, by taking Z to be a singleton set. It will
be convenient to define S to be the set of pairs of a label and a latent label. An element s of S will
be called an augmented label and we define L(y, s) by L(y, (ŷ, z)) = L(y, ŷ). We assume a feature
map Φ such that for an input x and augmented label s we have Φ(x, s) ∈ ℓ2 with ||Φ(x, s)|| ≤ 1.[2]
Given an input x and a weight vector w ∈ ℓ2 we define the prediction ŝ_w(x) as follows.

  ŝ_w(x) = argmax_s w^T Φ(x, s)
[2] We note that this setting covers the finite-dimensional case because the range of the feature map can be
taken to be a finite-dimensional subset of ℓ2; we are not assuming a universal feature map.
Our goal is to use the training data to learn a weight vector w so as to minimize the expected loss
on newly drawn labeled data E_{x,y}[L(y, ŝ_w(x))]. We will assume an infinite sequence of training
data (x1, y1), (x2, y2), (x3, y3), . . . drawn IID from the source distribution and use the following
notations.

  L(w, x, y) = L(y, ŝ_w(x))            L(w) = E_{x,y}[L(w, x, y)]
  L* = inf_{w∈ℓ2} L(w)                 L̂_n(w) = (1/n) Σ_{i=1}^{n} L(w, x_i, y_i)

We adopt the convention that in the definition of L(w, x, y) we break ties in the definition of ŝ_w(x) in
favor of augmented labels of larger loss. We will refer to this as pessimistic tie breaking.
Here we define latent structural log loss, hinge loss, ramp loss and probit loss as follows.

  Llog(w, x, y) = ln(1 / Pw(y|x)) = ln Zw(x) - ln Zw(x, y)

    Zw(x) = Σ_s exp(w^T Φ(x, s)),   Zw(x, y) = Σ_z exp(w^T Φ(x, (y, z)))

  Lhinge(w, x, y) = max_s [w^T Φ(x, s) + L(y, s)] - max_z w^T Φ(x, (y, z))

  Lramp(w, x, y) = max_s [w^T Φ(x, s) + L(y, s)] - max_s w^T Φ(x, s)
                 = max_s [w^T Φ(x, s) + L(y, s)] - w^T Φ(x, ŝ_w(x))

  Lprobit(w, x, y) = E_ε[L(y, ŝ_{w+ε}(x))]
In the definition of probit loss we take ε to be zero-mean unit-variance isotropic Gaussian noise:
for each feature dimension j we have that ε_j is an independent zero-mean unit-variance Gaussian
variable.[3] More generally we will write E_ε[f(ε)] for the expectation of f(ε) where ε is Gaussian
noise. It is interesting to note that Llog, Lhinge, and Lramp are all naturally differences of convex
functions and hence can be optimized by CCCP.
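Since Lprobit is an expectation over Gaussian perturbations of the weight vector, it can be estimated by Monte Carlo. The two-label toy problem below, with unit-norm features, is illustrative.

```python
import random

# Monte Carlo sketch of latent structural probit loss
# E_eps[L(y, s_hat_{w+eps}(x))] for a two-label toy problem; all names
# and values are illustrative.
Phi = {"good": (1.0, 0.0), "bad": (0.0, 1.0)}   # unit-norm features

def task_loss(s):
    return 0.0 if s == "good" else 1.0

def probit_loss(w, n_samples=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        wp = (w[0] + rng.gauss(0, 1), w[1] + rng.gauss(0, 1))
        pred = max(Phi, key=lambda s: wp[0] * Phi[s][0] + wp[1] * Phi[s][1])
        total += task_loss(pred)
    return total / n_samples
```

For w = (1.0, 0.5) the exact value is P[N(0, √2) > 0.5] ≈ 0.36, and the estimate concentrates around it; a large margin drives the loss toward zero.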
In the case of binary classification we have S = Y = {-1, 1}, Φ(x, y) = (1/2) y Φ(x), L(y, y′) = 1_{y≠y′},
and we define the margin m = y w^T Φ(x). We then have the following, where the expression for
Lprobit(w, x, y) assumes ||Φ(x)|| = 1.

  Llog(w, x, y) = ln(1 + e^{-m})              Lhinge(w, x, y) = max(0, 1 - m)
  Lramp(w, x, y) = min(1, max(0, 1 - m))      Lprobit(w, x, y) = P_{ε∼N(0,1)}[ε ≤ -m]
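These four binary-case losses are easy to evaluate as functions of the margin; the probit case uses the standard normal CDF, written here via erf.

```python
import math

def binary_losses(m):
    # log, hinge, ramp, and probit loss at margin m (unit-norm features)
    log_l = math.log(1 + math.exp(-m))
    hinge_l = max(0.0, 1.0 - m)
    ramp_l = min(1.0, hinge_l)
    probit_l = 0.5 * (1 + math.erf(-m / math.sqrt(2)))   # P[eps <= -m]
    return log_l, hinge_l, ramp_l, probit_l
```

At margin 0 this gives log loss ln 2, hinge and ramp loss 1, and probit loss 1/2.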
Returning to the general case we consider the relationship between hinge and ramp loss. First we
consider the case where Z is a singleton set ? the case of no latent structure. In this case hinge
loss is convex in w ? the hinge loss becomes a maximum of linear functions. Ramp loss, however,
remains a difference of nonlinear convex functions even for Z singleton. Also, in the case where Z
is singleton one can easily see that hinge loss is unbounded ? wrong labels may score arbitrarily
better than the given label. Hinge loss remains unbounded in case of non-singleton Z. Ramp loss,
on the other hand, is bounded by 1 as follows.
  Lramp(w, x, y) = max_s [w^T Φ(x, s) + L(y, s)] - w^T Φ(x, ŝ_w(x))
                 ≤ max_s [w^T Φ(x, s) + 1] - w^T Φ(x, ŝ_w(x)) = 1
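These relationships (Lhinge ≥ Lramp ≥ task loss, and Lramp ≤ 1) can be checked numerically on a toy problem with an enumerated augmented-label set; the scores below stand in for w^T Φ(x, s) and are illustrative.

```python
import math

# Toy structured example: S enumerates (label, latent) pairs, with
# precomputed scores standing in for w.Phi(x, s); all values illustrative.
S = [("A", 0), ("A", 1), ("B", 0), ("B", 1)]
score = {("A", 0): 1.0, ("A", 1): 0.5, ("B", 0): 2.0, ("B", 1): 0.2}

def task_loss(y, s):
    return 0.0 if s[0] == y else 1.0

y = "A"
log_loss = math.log(sum(math.exp(score[s]) for s in S)) \
         - math.log(sum(math.exp(score[s]) for s in S if s[0] == y))
hinge_loss = max(score[s] + task_loss(y, s) for s in S) \
           - max(score[s] for s in S if s[0] == y)
ramp_loss = max(score[s] + task_loss(y, s) for s in S) - max(score[s] for s in S)
s_hat = max(S, key=lambda s: score[s])       # the prediction s_hat_w(x)
realized_loss = task_loss(y, s_hat)
```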
Next, as is emphasized in [3], we note that ramp loss is a tighter upper bound on task loss than
is hinge loss. To see this we first note that it is immediate that Lhinge(w, x, y) ≥ Lramp(w, x, y).
Furthermore, the following derivation shows Lramp(w, x, y) ≥ L(w, x, y), where we assume pessimistic tie breaking in the definition of ŝ_w(x).

  Lramp(w, x, y) = max_s [w^T Φ(x, s) + L(y, s)] - w^T Φ(x, ŝ_w(x))
                 ≥ w^T Φ(x, ŝ_w(x)) + L(y, ŝ_w(x)) - w^T Φ(x, ŝ_w(x)) = L(y, ŝ_w(x))

[3] In infinite dimension we have that with probability one ||ε|| = ∞ and hence w + ε is not in ℓ2. The measure
underlying E_ε[f(ε)] is a Gaussian process. However, we still have that for any unit-norm feature vector Φ the
inner product ε^T Φ is distributed as a zero-mean unit-norm scalar Gaussian and Lprobit(w, x, y) is therefore
well defined.
But perhaps the most important property of ramp loss is the following.

  lim_{α→∞} Lramp(αw, x, y) = L(w, x, y)    (1)

This can be verified by noting that as α goes to infinity the maximum of the first term in ramp loss
must occur at s = ŝ_w(x).
Next we note that optimizing Lramp through subgradient descent (rather than CCCP) yields the
following update rule (here we ignore regularization).

  Δw ∝ Φ(x, ŝ_w(x)) - Φ(x, ŝ⁺_w(x, y))    (2)

  ŝ⁺_w(x, y) = argmax_s [w^T Φ(x, s) + L(y, s)]
We will refer to (2) as the ramp loss update rule. The following is proved in [10] under mild
conditions on the probability distribution over pairs (x, y).
∇_w L(w) = lim_{α→∞} α E_{x,y}[Φ(x, ŝ⁺_{αw}(x, y)) − Φ(x, ŝ_{αw}(x))]    (3)
Equation (3) expresses a relationship between the expected ramp loss update and the gradient of generalization loss. Significant empirical success has been achieved with the ramp loss update rule using early stopping regularization [10]. But both (1) and (3) suggest that regularized ramp loss should be consistent, as is confirmed here.
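As an illustration, the ramp loss update rule (2) can be sketched for a toy problem with a small discrete set of augmented labels. The label set, feature map, and 0/1 task loss below are assumptions made for the sketch, not details from the paper:

```python
S = [0, 1, 2]  # toy set of augmented labels

def phi(x, s):
    # Illustrative feature map: a one-hot block scaled by the input x.
    v = [0.0, 0.0, 0.0]
    v[s] = x
    return v

def task_loss(y, s):
    # 0/1 task loss.
    return 0.0 if y == s else 1.0

def argmax_s(w, x, add_loss_for=None):
    # Plain argmax gives s_hat_w(x); add_loss_for=y gives the
    # loss-augmented argmax s_hat+_w(x, y).
    def score(s):
        base = sum(wj * pj for wj, pj in zip(w, phi(x, s)))
        extra = task_loss(add_loss_for, s) if add_loss_for is not None else 0.0
        return base + extra
    return max(S, key=score)

def ramp_update(w, x, y, eta=0.1):
    # One step of the ramp loss update rule (2).
    s_hat = argmax_s(w, x)
    s_plus = argmax_s(w, x, add_loss_for=y)
    return [wj + eta * (a - b)
            for wj, a, b in zip(w, phi(x, s_hat), phi(x, s_plus))]
```

When the current prediction and the loss-augmented prediction differ, the step moves the weights toward the features of the predicted structure and away from those of the loss-augmented one.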
Finally it is worth noting that Lramp and Lprobit are meaningful for an arbitrary prediction space S,
label space Y, and loss function L(y, s) between a label and a prediction. Log loss and hinge loss
can be generalized to arbitrary prediction and label spaces provided that we assume a compatibility
relation between predictions and labels. The framework of independent prediction and label spaces
is explored more fully in [5] where a notion of weak-label SVM is defined subsuming both ramp
and hinge loss as special cases.
3 Consistency of Probit Loss
We start with the consistency of probit loss which is easier to prove. We consider the following
learning rule where the regularization parameter λ_n is some given function of n.
ŵ_n = argmin_w [ L̂_n^probit(w) + (λ_n/(2n)) ||w||² ]    (4)
We now prove the following fairly straightforward consequence of a generalization bound appearing
in [6].
Theorem 1 (Consistency of Probit Loss). For ŵ_n defined by (4), if the sequence λ_n increases without bound, and λ_n ln n/n converges to zero, then with probability one over the draw of the infinite sample we have lim_{n→∞} L_probit(ŵ_n) = L*.
Unfortunately, and in contrast to simple binary SVMs, for a latent binary SVM (an LSVM) there exists an infinite sequence w_1, w_2, w_3, . . . such that L_probit(w_n) approaches L* but L(w_n) remains bounded away from L* (we omit the example here). However, the learning algorithm (4) achieves consistency in the sense that the stochastic predictor defined by ŵ_n + ε, where ε is Gaussian noise, has a loss which converges to L*.
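The stochastic predictor ŵ_n + ε can be simulated directly, which also gives a Monte Carlo estimate of probit loss. The toy two-label linear predictor and data below are assumptions made for the sketch, not the paper's setting:

```python
import random

random.seed(0)
S = [0, 1]  # toy label set

def phi(x, s):
    # One feature per candidate label, scaled by the input.
    return [x if s == 0 else 0.0, x if s == 1 else 0.0]

def predict(w, x):
    return max(S, key=lambda s: sum(wj * pj for wj, pj in zip(w, phi(x, s))))

def probit_loss_mc(w, x, y, n_samples=20000):
    # Estimate E_eps[ L(y, s_hat_{w+eps}(x)) ] with 0/1 task loss by
    # averaging over Gaussian perturbations of the weights.
    errors = 0
    for _ in range(n_samples):
        w_pert = [wj + random.gauss(0.0, 1.0) for wj in w]
        if predict(w_pert, x) != y:
            errors += 1
    return errors / n_samples
```

With w = [0, 0] the perturbed prediction is a coin flip, so the estimate is close to 0.5; a large margin drives it toward 0.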
To prove theorem 1 we start by reviewing the generalization bound of [6]. The departure point for this generalization bound is the following PAC-Bayesian theorem, where P and Q range over probability measures on a given space of predictors and L(Q) and L̂_n(Q) are defined as expectations over selecting a predictor from Q.
Theorem 2 (from [1], see also [4]). For any fixed prior distribution P and fixed λ > 1/2 we have that with probability at least 1 − δ over the draw of the training data the following holds simultaneously for all Q.
L(Q) ≤ (1 / (1 − 1/(2λ))) [ L̂_n(Q) + (λ/n)(KL(Q, P) + ln(1/δ)) ]    (5)
For the space of linear predictors we take the prior P to be the zero-mean unit-variance Gaussian distribution and for w ∈ ℓ2 we define the distribution Q_w to be the unit-variance Gaussian centered at w. This gives the following corollary of (5).
Corollary 1 (from [6]). For fixed λ_n > 1/2 we have that with probability at least 1 − δ over the draw of the training data the following holds simultaneously for all w ∈ ℓ2.
L_probit(w) ≤ (1 / (1 − 1/(2λ_n))) [ L̂_n^probit(w) + (λ_n/n)((1/2)||w||² + ln(1/δ)) ]    (6)
To prove theorem 1 from (6) we consider an arbitrary unit-norm weight vector w* and an arbitrary scalar γ > 0. Setting δ to 1/n², and noting that ŵ_n is the minimizer of the right hand side of (6), we have the following with probability at least 1 − 1/n² over the draw of the sample.
L_probit(ŵ_n) ≤ (1 / (1 − 1/(2λ_n))) [ L̂_n^probit(γw*) + (λ_n/n)((1/2)γ² + 2 ln n) ]    (7)
A standard Chernoff bound argument yields that for w* and γ > 0 selected prior to drawing the sample, we have the following with probability at least 1 − 1/n² over the choice of the sample.
L̂_n^probit(γw*) ≤ L_probit(γw*) + √(ln n / n)    (8)
Combining (7) and (8) with a union bound yields that with probability at least 1 − 2/n² we have the following.
L_probit(ŵ_n) ≤ (1 / (1 − 1/(2λ_n))) [ L_probit(γw*) + √(ln n / n) + (λ_n/n)((1/2)γ² + 2 ln n) ]
Because the probability that the above inequality is violated goes as 1/n², with probability one over the draw of the sample we have the following.
lim_{n→∞} L_probit(ŵ_n) ≤ lim_{n→∞} (1 / (1 − 1/(2λ_n))) [ L_probit(γw*) + √(ln n / n) + (λ_n/n)((1/2)γ² + 2 ln n) ]
Under the conditions on λ_n given in the statement of theorem 1 we then have
lim_{n→∞} L_probit(ŵ_n) ≤ L_probit(γw*).
Because this holds with probability one for any γ, the following must also hold with probability one.
lim_{n→∞} L_probit(ŵ_n) ≤ lim_{γ→∞} L_probit(γw*)    (9)
Now consider
lim_{γ→∞} L_probit(γw, x, y) = lim_{γ→∞} E_ε[L(γw + ε, x, y)] = lim_{σ→0} E_ε[L(w + σε, x, y)].
We have that lim_{σ→0} E_ε[L(w + σε, x, y)] is determined by the augmented labels s that are tied for the maximum value of w^⊤Φ(x, s). There is some probability distribution over these tied values that occurs in the limit of small σ. Under the pessimistic tie breaking in the definition of L(w, x, y) we then get lim_{γ→∞} L_probit(γw, x, y) ≤ L(w, x, y). This in turn gives the following.
lim_{γ→∞} L_probit(γw) = E_{x,y}[ lim_{γ→∞} L_probit(γw, x, y) ] ≤ E_{x,y}[L(w, x, y)] = L(w)    (10)
Combining (9) and (10) yields lim_{n→∞} L_probit(ŵ_n) ≤ L(w*). Since for any w* this holds with probability one, with probability one we also have lim_{n→∞} L_probit(ŵ_n) ≤ L*. Finally we note L_probit(w) = E_ε[L(w + ε)] ≥ L*, which then gives theorem 1.
4 Consistency of Ramp Loss
Now we consider the following ramp loss training equation.
ŵ_n = argmin_w [ L̂_n^ramp(w) + (λ_n/(2n)) ||w||² ]    (11)
The main result of this paper is the following.
Theorem 3 (Consistency of Ramp Loss). For ŵ_n defined by (11), if the sequence λ_n/ln² n increases without bound, and the sequence λ_n/(n ln n) converges to zero, then with probability one over the draw of the infinite sample we have lim_{n→∞} L_probit((ln n)ŵ_n) = L*.
As with theorem 1, theorem 3 is derived from a finite sample generalization bound. The bound is derived from (6) by upper bounding L̂_n^probit(w/σ) in terms of L̂_n^ramp(w). From section 3 we have that lim_{σ→0} L_probit(w/σ, x, y) ≤ L(w, x, y) ≤ L_ramp(w, x, y). This can be converted to the following lemma for finite σ, where we recall that S is the set of augmented labels s = (y, z).
Lemma 1.
L_probit(w/σ, x, y) ≤ L_ramp(w, x, y) + σ + σ√(8 ln(|S|/σ))
Proof. We first prove that for any σ > 0 we have
L_probit(w/σ, x, y) ≤ σ + max_{s: m(s)≤M} L(y, s)    (12)
where
m(s) = w^⊤ΔΦ(s),    ΔΦ(s) = Φ(x, ŝ_w(x)) − Φ(x, s),    M = σ√(8 ln(|S|/σ)).
To prove (12) we note that for m(s) > M we have the following, where P[Ψ(ε)] abbreviates E_ε[1[Ψ(ε)]].
P[ŝ_{w+σε}(x) = s] ≤ P[(w + σε)^⊤ΔΦ(s) ≤ 0] = P[ε^⊤ΔΦ(s) ≥ m(s)/σ]
                 ≤ P_{ε∼N(0,1)}[ε ≥ M/(2σ)] ≤ exp(−M²/(8σ²)) = σ/|S|
E_ε[L(y, ŝ_{w+σε}(x))] ≤ P[∃s: m(s) > M, ŝ_{w+σε}(x) = s] + max_{s: m(s)≤M} L(y, s)
                      ≤ σ + max_{s: m(s)≤M} L(y, s)
The following calculation shows that (12) implies the lemma.
L_probit(w/σ, x, y) ≤ σ + max_{s: m(s)≤M} L(y, s)
                  ≤ σ + max_{s: m(s)≤M} [L(y, s) − m(s) + M]
                  ≤ σ + max_s [L(y, s) − m(s)] + M
                  = σ + L_ramp(w, x, y) + M
Inserting lemma 1 into (6) we get the following.
Theorem 4. For λ_n > 1/2 we have that with probability at least 1 − δ over the draw of the training data the following holds simultaneously for all w and σ > 0.
L_probit(w/σ) ≤ (1 / (1 − 1/(2λ_n))) [ L̂_n^ramp(w) + σ + σ√(8 ln(|S|/σ)) + (λ_n/n)(||w||²/(2σ²) + ln(1/δ)) ]    (13)
To prove theorem 3 we now take σ_n = 1/ln n and replace λ_n in (13) by λ_n/ln² n. We then have that ŵ_n is the minimizer of the right hand side of (13). This observation yields the following for any unit-norm vector w* and scalar γ > 0, where we have set δ = 1/n².
L_probit((ln n)ŵ_n) ≤ (1 / (1 − ln² n/(2λ_n))) [ L̂_n^ramp(γw*) + (1 + √(8 ln(|S| ln n)))/ln n + λ_n γ²/(2n) + 2λ_n/(n ln n) ]    (14)
As in section 3, we use a Chernoff bound for the single vector w* and scalar γ to bound L̂_n^ramp(γw*) in terms of L_ramp(γw*) and then take the limit as n → ∞ to get the following with probability one.
lim_{n→∞} L_probit((ln n)ŵ_n) ≤ L_ramp(γw*)
The remainder of the proof is the same as in section 3, but where we now use lim_{γ→∞} L_ramp(γw*) = L(w*), whose proof we omit.
5 A Comparison of Convergence Rates
To compare the convergence rates implicit in (6) and (13) we note that in (13) we can optimize σ as a function of the other quantities in the bound.4 An approximately optimal value for σ is (λ_n||w||²/n)^{1/3}, which gives the following.
L_probit(w/σ) ≤ (1 / (1 − 1/(2λ_n))) [ L̂_n^ramp(w) + (λ_n||w||²/n)^{1/3}(3/2 + √(8 ln(|S|/σ))) + (λ_n/n) ln(1/δ) ]    (15)
We have that (15) gives O((λ_n||ŵ_n||²/n)^{1/3}) as opposed to (6) which gives O(λ_n||ŵ_n||²/n). This suggests that while probit loss and ramp loss are both consistent, ramp loss may converge more slowly.
6 Discussion and Open Problems
The contributions of this paper are a consistency theorem for latent structural probit loss and both
a generalization bound and a consistency theorem for latent structural ramp loss. These bounds
suggest that probit loss converges more rapidly. However, we have only proved upper bounds on
generalization loss and it remains possible that these upper bounds, while sufficient to show consistency, are not accurate characterizations of the actual generalization loss. Finding more definitive
statements, such as matching lower bounds, remains an open problem.
The definition of ramp loss used here is not the only one possible. In particular we can consider the
following variant.
L'_ramp(w, x, y) = max_s [w^⊤Φ(x, s)] − max_s [w^⊤Φ(x, s) − L(y, s)]
Relations (1) and (3) both hold for L'_ramp as well as L_ramp. Experiments indicate that L'_ramp performs somewhat better than L_ramp under early stopping of subgradient descent. However, it seems that it is not possible to prove a bound of the form of (15) for L'_ramp. A frustrating observation is that L'_ramp(0, x, y) = 0. Finding a meaningful finite-sample statement for L'_ramp remains an open problem.
The isotropic Gaussian noise distribution used in the definition of L_probit is not optimal. A uniformly tighter upper bound on generalization loss is achieved by optimizing the posterior in the PAC-Bayesian theorem. Finding a practical, more optimal use of the PAC-Bayesian theorem also remains an open problem.
4
In the consistency proof it was more convenient to set σ = 1/ln n, which is plausibly nearly optimal anyway.
References
[1] Olivier Catoni. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical
Learning. Institute of Mathematical Statistics Lecture Notes-Monograph Series, 2007.
[2] D. Chiang, K. Knight, and W. Wang. 11,001 new features for statistical machine translation. In Proc. NAACL, 2009.
[3] Chuong B. Do, Quoc Le, Choon Hui Teo, Olivier Chapelle, and Alex Smola. Tighter bounds for structured estimation. In NIPS, 2008.
[4] Pascal Germain, Alexandre Lacasse, Francois Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In ICML, 2009.
[5] Ross Girshick, Pedro Felzenszwalb, and David McAllester. Object detection with grammar
models. In NIPS, 2011.
[6] Joseph Keshet, David McAllester, and Tamir Hazan. PAC-Bayesian approach for minimization of phoneme error rate. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011.
[7] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289, 2001.
[8] P. Liang, A. Bouchard-Côté, D. Klein, and B. Taskar. An end-to-end discriminative approach to machine translation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), 2006.
[9] David McAllester. Generalization bounds and consistency for structured labeling. In G. Bakir, T. Hofmann, B. Schölkopf, A. Smola, B. Taskar, and S. V. N. Vishwanathan, editors, Predicting Structured Data. MIT Press, 2007.
[10] David A. McAllester, Tamir Hazan, and Joseph Keshet. Direct loss minimization for structured
prediction. In Advances in Neural Information Processing Systems 24, 2010.
[11] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. PAMI, 29, 2007.
[12] R. Collobert, F. H. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML, 2006.
[13] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems 17, 2003.
[14] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[15] Chun-Nam John Yu and T. Joachims. Learning structural SVMs with latent variables. In International Conference on Machine Learning (ICML), 2009.
Using a Large Image Database
Joshua T. Abbott
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Katherine A. Heller
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Zoubin Ghahramani
Department of Engineering
University of Cambridge
Cambridge, CB2 1PZ, U.K.
[email protected]
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
tom [email protected]
Abstract
How do people determine which elements of a set are most representative of that
set? We extend an existing Bayesian measure of representativeness, which indicates the representativeness of a sample from a distribution, to define a measure of
the representativeness of an item to a set. We show that this measure is formally
related to a machine learning method known as Bayesian Sets. Building on this
connection, we derive an analytic expression for the representativeness of objects
described by a sparse vector of binary features. We then apply this measure to a
large database of images, using it to determine which images are the most representative members of different sets. Comparing the resulting predictions to human
judgments of representativeness provides a test of this measure with naturalistic
stimuli, and illustrates how databases that are more commonly used in computer
vision and machine learning can be used to evaluate psychological theories.
1 Introduction
The notion of "representativeness" appeared in cognitive psychology as a proposal for a heuristic
that people might use in the place of performing a probabilistic computation [1, 2]. For example, we
might explain why people believe that the sequence of heads and tails HHTHT is more likely than
HHHHH to be produced by a fair coin by saying that the former is more representative of the output
of a fair coin than the latter. This proposal seems intuitive, but raises a new problem: How is representativeness itself defined? Various proposals have been made, connecting representativeness to
existing quantities such as similarity [1] (itself an ill-defined concept [3]), or likelihood [2]. Tenenbaum and Griffiths [4] took a different approach to this question, providing a "rational analysis"
of representativeness by trying to identify the problem that such a quantity solves. They proposed
that one sense of representativeness is being a good example of a concept, and then showed how
this could be quantified via Bayesian inference. The resulting model outperformed similarity and
likelihood in predicting human representativeness judgments for two kinds of simple stimuli.
In this paper, we extend this definition of representativeness, and provide a more comprehensive
test of this account using naturalistic stimuli. The question of what makes a good example of a
concept is of direct relevance to computer scientists as well as cognitive scientists, providing a way
to build better systems for retrieving images or documents relevant to a user's query. However, the
model presented by Tenenbaum and Griffiths [4] is overly restrictive in requiring the concept to
be pre-defined, and has not been tested in the context of a large-scale information retrieval system.
We extend the Bayesian measure of representativeness to apply to the problem of deciding which
objects are good examples of a set of objects, show that the resulting model is closely mathematically
related to an existing machine learning method known as Bayesian Sets [5], and compare this model
to similarity and likelihood as an account of people's judgments of the extent to which images drawn
from a large database are representative of different concepts. In addition, we show how measuring
the representativeness of items in sets can also provide a novel method of finding outliers in sets.
By extending the Bayesian measure of representativeness to apply to sets of objects and testing it
with a large image database, we are taking the first steps towards a closer integration of the methods
of cognitive science and machine learning. Cognitive science experiments typically use a small set
of artificial stimuli, and evaluate different models by comparing them to human judgments about
those stimuli. Machine learning makes use of large datasets, but relies on secondary sources of
"cognitive" input, such as the labels people have applied to images. We combine these methods by
soliciting human judgments to test cognitive models with a large set of naturalistic stimuli. This
provides the first experimental comparison of the Bayesian Sets algorithm to human judgments, and
the first evaluation of the Bayesian measure of representativeness in a realistic applied setting.
The plan of the paper is as follows. Section 2 provides relevant background information, including
psychological theories of representativeness and the definition of Bayesian Sets. Section 3 then
introduces our extended measure of representativeness, and shows how it relates to Bayesian Sets.
Section 4 describes the dataset derived from a large image database that we use for evaluating this
measure, together with the other psychological models we use for comparison. Section 5 presents
the results of an experiment soliciting human judgments about the representativeness of different
images. Section 6 provides a second form of evaluation, focusing on identifying outliers from sets.
Finally, Section 7 concludes the paper.
2 Background
To approach our main question of which elements of a set are most representative of that set, we first
review previous psychological models of representativeness with a particular focus on the rational
model proposed by Tenenbaum and Griffiths [4]. We then introduce Bayesian Sets [5].
2.1 Representativeness
While the notion of representativeness has been most prominent in the literature on judgment and
decision-making, having been introduced by Kahneman and Tversky [1], similar ideas have been
explored in accounts of human categorization and inductive inference [6, 7]. In these accounts,
representativeness is typically viewed as a form of similarity between an outcome and a process
or an object and a concept. Assume some data d has been observed, and we want to evaluate
its representativeness of a hypothesized process or concept h. Then d is representative of h if
it is similar to the observations h typically generates. Computing similarity requires defining a
similarity metric. In the case where we want to evaluate the representativeness of an outcome to a
set, we might use metrics of the kind that are common in categorization models: an exemplar model
defines similarity in terms of the sum of the similarities to the other objects in the set (e.g., [8, 9]),
while a prototype model defines similarity in terms of the similarity to a prototype that captures the
characteristics of the set (e.g., [10]).
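To make the contrast concrete, here is a toy sketch of the two similarity metrics for binary feature vectors. The matching-based similarity and majority-vote prototype are illustrative assumptions, not definitions taken from the cited models:

```python
def sim(a, b):
    # Fraction of features on which two items agree.
    return sum(1 for u, v in zip(a, b) if u == v) / len(a)

def exemplar_score(x, examples):
    # Exemplar model: sum of similarities to every member of the set.
    return sum(sim(x, e) for e in examples)

def prototype_score(x, examples):
    # Prototype model: similarity to the majority-feature prototype.
    proto = [1 if 2 * sum(e[j] for e in examples) >= len(examples) else 0
             for j in range(len(x))]
    return sim(x, proto)

examples = [[1, 1, 0], [1, 0, 0], [1, 1, 0]]  # toy category members
```

Under either metric, an item sharing the set's typical features scores higher than one that does not.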
An alternative to similarity is the idea that representativeness might track the likelihood function
P (d|h) [11]. The main argument for this proposed equivalence is that the more frequently h leads
to observing d, the more representative d should be of h. However, people's judgments from the
coin flip example with which we started the paper go against this idea of equivalence, since both
flips have equal likelihood yet people tend to judge HHTHT as more representative of a fair coin.
Analyses of typicality have also argued against the adequacy of frequency for capturing people's
judgments about what makes a good example of a category [6].
Tenenbaum and Griffiths [4] took a different approach to this question, asking what problem representativeness might be solving, and then deriving an optimal solution to that problem. This approach
is similar to that taken in Shepard's [12] analysis of generalization, and to Anderson's [13] idea of rational analysis. The resulting rational model of representativeness takes the problem to be one
of selecting a good example, where the best example is the one that best provides evidence for the
target process or concept relative to possible alternatives. Given some observed data d and a set of hypothetical sources, H, we assume that a learner uses Bayesian inference to infer which h ∈ H
generated d. Tenenbaum and Griffiths [4] defined the representativeness of d for h to be the evidence
that d provides in favor of a specific h relative to its alternatives,
R(d, h) = log [ P(d|h) / Σ_{h'≠h} P(d|h')P(h') ]    (1)
where P(h') in the denominator is the prior distribution on hypotheses, re-normalized over h' ≠ h.
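Equation 1 can be illustrated with the coin-flip example from the introduction. The alternative hypotheses and their uniform prior below are assumptions made for the sketch:

```python
import math

# Hypothesized coins: probability of heads under each hypothesis.
hypotheses = {"fair": 0.5, "biased-heads": 0.9, "biased-tails": 0.1}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def likelihood(seq, h):
    p = hypotheses[h]
    return math.prod(p if c == "H" else 1.0 - p for c in seq)

def representativeness(seq, h):
    # R(d, h) = log[ P(d|h) / sum_{h' != h} P(d|h') P(h') ],
    # with the prior re-normalized over the alternatives h' != h.
    alt = [h2 for h2 in hypotheses if h2 != h]
    z = sum(prior[h2] for h2 in alt)
    denom = sum(likelihood(seq, h2) * prior[h2] / z for h2 in alt)
    return math.log(likelihood(seq, h) / denom)
```

HHTHT scores higher than HHHHH as evidence for the fair coin, even though the two sequences have equal likelihood under it, matching the judgment described in the text.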
2.2 Bayesian Sets
If given a small set of items such as "ketchup", "mustard", and "mayonnaise" and asked to produce other examples that fit into this set, one might give examples such as "barbecue sauce" or "honey". This task is an example of clustering on-demand, in which the original set of items represents some concept or cluster such as "condiment" and we are to find other items that would fit appropriately into this set. Bayesian Sets is a formalization of this process in which items are ranked by a model-based probabilistic scoring criterion, measuring how well they fit into the original cluster [5].
More formally, given a data collection D, and a subset of items D_s = {x_1, . . . , x_N} ⊆ D representing a concept, the Bayesian Sets algorithm ranks an item x* ∈ {D \ D_s} by the following scoring criterion
Bscore(x*) = p(x*, D_s) / (p(x*) p(D_s))    (2)
This ratio intuitively compares the probability that x* and D_s were generated by some statistical model with the same, though unknown, model parameters θ, versus the probability that x* and D_s were generated by some statistical model with different model parameters θ_1 and θ_2.
Each of the three terms in Equation 2 is a marginal likelihood and can be expressed as an integral over θ, since the model parameter is assumed to be unknown:
p(x*) = ∫ p(x*|θ) p(θ) dθ,    p(D_s) = ∫ [∏_{n=1}^N p(x_n|θ)] p(θ) dθ,    and
p(x*, D_s) = ∫ [∏_{n=1}^N p(x_n|θ)] p(x*|θ) p(θ) dθ.
For computational efficiency reasons, Bayesian Sets is typically run on binary data. Thus, each item in the data collection, x_i ∈ D, is represented as a binary feature vector x_i = (x_{i1}, . . . , x_{iJ}) where x_{ij} ∈ {0, 1}, and defined under a model in which each element of x_i has an independent Bernoulli distribution p(x_i|θ) = ∏_j θ_j^{x_{ij}} (1 − θ_j)^{1−x_{ij}} and conjugate Beta prior p(θ|α, β) = ∏_j [Γ(α_j + β_j)/(Γ(α_j)Γ(β_j))] θ_j^{α_j−1} (1 − θ_j)^{β_j−1}. Under these assumptions, the scoring criterion for Bayesian Sets reduces to
Bscore(x*) = p(x*, D_s) / (p(x*) p(D_s)) = ∏_j [(α_j + β_j)/(α_j + β_j + N)] (α̃_j/α_j)^{x*_j} (β̃_j/β_j)^{1−x*_j}    (3)
where α̃_j = α_j + Σ_{n=1}^N x_{nj} and β̃_j = β_j + N − Σ_{n=1}^N x_{nj}. The logarithm of this score is linear in x* and can be computed efficiently as
log Bscore(x*) = c + Σ_j s_j x*_j    (4)
where c = Σ_j [log(α_j + β_j) − log(α_j + β_j + N) + log β̃_j − log β_j], s_j = log α̃_j − log α_j − log β̃_j + log β_j, and x*_j is the jth component of x*.
The Bayesian Sets method has been tested with success on numerous datasets, over various applications including content-based image retrieval [14] and analogical reasoning with relational data [15].
Motivated by this method, we now turn to extending the previous measure of representativeness for
a sample from a distribution, to define a measure of representativeness for an item to a set.
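Equations (3) and (4) translate directly into code. A compact sketch for binary vectors, with illustrative hyperparameters (α_j = β_j = 1) and toy data:

```python
import math

def log_bscore(x_star, D_s, alpha=None, beta=None):
    # log Bscore(x*) under independent Beta-Bernoulli features (Equation 4).
    J, N = len(x_star), len(D_s)
    alpha = alpha or [1.0] * J
    beta = beta or [1.0] * J
    score = 0.0
    for j in range(J):
        nj = sum(x[j] for x in D_s)          # count of feature j in the set
        a_tilde = alpha[j] + nj
        b_tilde = beta[j] + N - nj
        score += math.log(alpha[j] + beta[j]) - math.log(alpha[j] + beta[j] + N)
        if x_star[j]:
            score += math.log(a_tilde) - math.log(alpha[j])
        else:
            score += math.log(b_tilde) - math.log(beta[j])
    return score

D_s = [[1, 1, 0], [1, 0, 0], [1, 1, 0]]      # toy concept set
```

An item sharing the set's active features (e.g. [1, 1, 0]) receives a higher score than one that does not (e.g. [0, 0, 1]).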
3 A Bayesian Measure of Representativeness for Sets of Objects
The Bayesian measure of representativeness introduced by Tenenbaum and Griffiths [4] indicated
the representativeness of data d for a hypothesis h. However, in many cases we might not know what
statistical hypothesis best describes the concept that we want to illustrate through an example. For
instance, in an image retrieval problem, we might just have a set of images that are all assigned to the
same category, without a clear idea of the distribution that characterizes that category. In this section,
we show how to extend the Bayesian measure of representativeness to indicate the representativeness
of an element of a set, and how this relates to the Bayesian Sets method summarized above.
Formally, we have a set of data Ds and we want to know how representative an element d of that set
is of the whole set. We can perform an analysis similar to that given for the representativeness of d
to a hypothesis, and obtain the expression
R(d, D_s) = P(d|D_s) / Σ_{D'≠D_s} P(d|D')P(D')    (5)
which is simply Equation 1 with hypotheses replaced by datasets. The quantities that we need to compute to apply this measure, P(d|D_s) and P(D'), we obtain by marginalizing over all hypotheses. For example, P(d|D_s) = Σ_h P(d|h)P(h|D_s), being the posterior predictive distribution associated with D_s. If the hypotheses correspond to the continuous parameters of a generative model, then this is better expressed as P(d|D_s) = ∫ P(d|θ)P(θ|D_s) dθ.
In the case where the set of possible datasets that is summed over in the denominator is large, this denominator will approximate Σ_{D'} P(d|D')P(D'), which is just P(d). This allows us to observe that this measure of representativeness will actually closely approximate the logarithm of the quantity Bscore produced by Bayesian Sets for the dataset D_s, with
R(d, D_s) = log [ P(d|D_s) / Σ_{D'≠D_s} P(d|D')P(D') ] ≈ log [ P(d|D_s) / P(d) ] = log [ P(d, D_s) / (P(d)P(D_s)) ] = log Bscore(d)
This relationship provides a link between the cognitive science literature on representativeness and
the machine learning literature on information retrieval, and a new way to evaluate psychological
models of representativeness.
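The identity on the right of this approximation can be checked numerically: for a Beta-Bernoulli model, log[P(d, D_s)/(P(d)P(D_s))] matches the closed form in Equation 3. A one-feature sketch with assumed data and α = β = 1:

```python
from math import gamma, log

def marginal(xs, a=1.0, b=1.0):
    # Beta-Bernoulli marginal likelihood of a list of 0/1 observations.
    n1 = sum(xs)
    n0 = len(xs) - n1
    beta_fn = lambda p, q: gamma(p) * gamma(q) / gamma(p + q)
    return beta_fn(a + n1, b + n0) / beta_fn(a, b)

D_s = [1, 1, 0, 1]   # toy set: one binary feature per item
d = [1]              # candidate item with the feature present

lhs = log(marginal(d + D_s) / (marginal(d) * marginal(D_s)))
# Closed form from Equation 3 with alpha = beta = 1 and N = 4:
N, n1 = len(D_s), sum(D_s)
rhs = log((1.0 + 1.0) / (1.0 + 1.0 + N)) + log((1.0 + n1) / 1.0)
```

The two quantities agree to floating-point precision.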
4 Evaluating Models of Representativeness Using Image Databases
Having developed a measure of the representativeness of an item in a set of objects, we now focus
on the problem of evaluating this measure. The evaluation of psychological theories has historically
tended to use simple artificial stimuli, which provide precision at the cost of ecological validity. In
the case of representativeness, the stimuli previously used by Tenenbaum and Griffiths [4] to evaluate
different representativeness models consisted of 4 coin flip sequences and 45 arguments based on
predicates applied to a set of 10 mammals. One of the aims of this paper is to break the general trend
of using such restricted kinds of stimuli, and the formal relationship between our rational model and
Bayesian Sets allows us to do so. Any dataset that can be represented as a sparse binary matrix can
be used to test the predictions of our measure.
We formulate our evaluation problem as one of determining how representative an image is of a
labeled set of images. Using an existing image database of naturalistic scenes, we can better test
the predictions of different representativeness theories with stimuli much more in common with the
environment humans naturally confront. In the rest of this section, we present the dataset used for
evaluation and outline the implementations of existing models of representativeness we compare our
rational Bayesian model against.
4.1 Dataset
We use the dataset presented in [14], a subset of images taken from the Corel database commonly
used in content-based image retrieval systems. The images in the dataset are partitioned into 50
labeled sets depicting unique categories, with varying numbers of images in each set (the mean is
264). The dataset is of particular interest for testing models of representativeness as each image
Algorithm 1 Representativeness Framework
input: a set of items, Dw, for a particular category label w
for each item xi ∈ Dw do
    let Dwi = {Dw \ xi}
    compute score(xi, Dwi)
end for
rank items in Dw by this score
output: ranked list of items in Dw
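This leave-one-out loop is model-agnostic; a minimal sketch (the list-based types and names are illustrative, not taken from the paper's implementation):

```python
def rank_by_representativeness(items, score_fn):
    """Rank items by how well each represents the rest of its set.

    items    : list of feature vectors sharing one category label
    score_fn : callable(item, rest_of_set) -> float; higher means more
               representative (Bayesian, likelihood, prototype, ...)
    """
    scored = []
    for i, x in enumerate(items):
        rest = items[:i] + items[i + 1:]   # Dw \ xi
        scored.append((score_fn(x, rest), i))
    # best-first list of item indices
    return [i for _, i in sorted(scored, reverse=True)]
```

Any of the four scoring functions compared below can be plugged in as `score_fn` without changing the loop.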
Figure 1: Results of the Bayesian model applied to the set labelled coast. (a) The top nine ranked
images. (b) The bottom nine ranked images.
from the Corel database comes with multiple labels given by human judges. The labels have been
criticized for not always being of high quality [16], which provides an additional (realistic) challenge
for the models of representativeness that we aim to evaluate.
The images in this dataset are represented as 240-dimensional feature vectors, composed of 48
Gabor texture features, 27 Tamura texture features, and 165 color histogram features. The images
were additionally preprocessed through a binarization stage, transforming the entire dataset into a
sparse binary matrix that represents the features which most distinguish each image from the rest of
the dataset. Details of the construction of this feature representation are presented in [14].
4.2 Models of Representativeness
We compare our Bayesian model against a likelihood model and two similarity models: a prototype
model and an exemplar model. We build upon a simple leave-one-out framework to allow a fair
comparison of these different representativeness models. Given a set of images with a particular
category label, we iterate through each image in the set and compute a score for how well this image
represents the rest of the set (see Algorithm 1). In this framework, only score(xi , Dwi ) varies across
the different models. We present the different ways to compute this score below.
Bayesian model. Since we have already shown the relationship between our rational measure and
Bayesian Sets, the score in this model is computed efficiently via Equation 2. The hyperparameters
α and β are set empirically from the entire dataset, α = κm, β = κ(1 − m), where m is the mean
of x over all images, and κ is a scaling factor. An example of using this measure on the set of 299
images for category label coast is presented in Figure 1. Panels (a) and (b) of this figure show the
top nine and bottom nine ranked images, respectively, where it is quite apparent that the top ranked
images depict a better set of coast examples than the bottom rankings. It also becomes clear how
poorly this label applies to some of the images in the bottom rankings, which is an important issue
if using the labels provided with the Corel database as part of a training set for learning algorithms.
Likelihood model. This model treats representativeness judgments of an item x* as p(x*|Ds) for a set
Ds = {x1, . . . , xN}. Since this probability can also be expressed as p(x*, Ds)/p(Ds), we can derive an
efficient scheme for computing the score similar to the Bayesian Sets scoring criterion by making
the same model assumptions. The likelihood model scoring criterion is

Lscore(x*) = p(x*, Ds) / p(Ds) = Π_j [1 / (αj + βj + N)] α̃j^{x*j} β̃j^{1 − x*j}        (6)

where α̃j = αj + Σ_{n=1}^{N} xnj and β̃j = βj + N − Σ_{n=1}^{N} xnj. The logarithm of this score is also
linear in x* and can be computed efficiently as

log Lscore(x*) = c + Σ_j wj x*j        (7)

where c = Σ_j [log β̃j − log(αj + βj + N)] and wj = log α̃j − log β̃j. The hyperparameters α and
β are initialized to the same values used in the Bayesian model.
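Because the log likelihood score is linear in the features, scoring a whole category is a single matrix-vector product. A hedged sketch of this criterion (dense numpy arrays for clarity, although the paper's feature matrices are sparse):

```python
import numpy as np

def log_lscore(x, D, alpha, beta):
    # x: (J,) binary item; D: (N, J) binary set; alpha, beta: (J,) priors.
    N = D.shape[0]
    s = D.sum(axis=0)
    a_t = alpha + s              # alpha-tilde in eq. (6)
    b_t = beta + N - s           # beta-tilde in eq. (6)
    const = np.sum(np.log(b_t) - np.log(alpha + beta + N))
    w = np.log(a_t) - np.log(b_t)
    return const + w @ x         # = log p(x | Ds)
```

This equals the posterior predictive probability of x under independent Beta-Bernoulli features, where p(feature j = 1 | Ds) = α̃j / (αj + βj + N).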
Prototype model. In this model we define a prototype vector xproto to be the modal features for a
set of items Ds. The similarity measure then becomes

Pscore(x*) = exp{−λ dist(x*, xproto)}        (8)

where dist(·, ·) is the Hamming distance between the two vectors and λ is a free parameter. Since
we are primarily concerned with ranking images, λ does not need to be optimized as it plays the role
of a scaling constant.
Exemplar model. We define the exemplar model using a similar scoring metric to the prototype
model, except rather than computing the distance of x* to a single prototype, we compute a distance
for each item in the set Ds. Our similarity measure is thus computed as

Escore(x*) = Σ_{xj ∈ Ds} exp{−λ dist(x*, xj)}        (9)

where dist(·, ·) is the Hamming distance between two vectors and λ is a free parameter. In this
case, λ does need to be optimized, as the sum means that different values for λ can result in different
overall similarity scores.
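Both similarity scores reduce to Hamming distances on the binary features. A minimal sketch (the modal prototype is computed by per-feature majority vote, and λ is fixed arbitrarily for illustration):

```python
import numpy as np

def pscore(x, D, lam=1.0):
    # Prototype score: similarity to the modal (majority-vote) features.
    proto = (D.mean(axis=0) >= 0.5).astype(int)
    return np.exp(-lam * np.sum(x != proto))

def escore(x, D, lam=1.0):
    # Exemplar score: summed similarity to every item in the set.
    return np.sum(np.exp(-lam * np.sum(x != D, axis=1)))
```

For ranking, pscore is invariant to λ (it is a monotone rescaling), while escore is not, which is why only the exemplar model's λ needs fitting.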
5 Modeling Human Ratings of Representativeness
Given a set of images provided with a category label, how do people determine which images are
good or bad examples of that category? In this section we present an experiment which evaluates
our models through comparison with human judgments of the representativeness of images.
5.1 Methods
A total of 500 participants (10 per category) were recruited via Amazon Mechanical Turk and compensated $0.25. The stimuli were created by identifying the top 10 and bottom 10 ranked images
for each of the 50 categories for the Bayesian, likelihood, and prototype models and then taking the
union of these sets for each category. The exemplar model was excluded in this process as it required
optimization of its ? parameter, meaning that the best and worst images could not be determined in
advance. The result was a set of 1809 images, corresponding to an average of 36 images per category. Participants were shown a series of images and asked to rate how good an example each image
was of the assigned category label. The order of images presented was randomized across subjects.
Image quality ratings were made on a scale of 1-7, with a rating of 1 meaning the image is a very
bad example and a rating of 7 meaning the image is a very good example.
5.2 Results
Once the human ratings were collected, we computed the mean ratings for each image and the mean
of the top 10 and bottom 10 results for each algorithm used to create the stimuli. We also computed
[Figure 2: grouped bar chart; vertical axis: mean quality ratings (3 to 6); bars: Bayes, Likelihood, Prototype; groups: Top 10 Rankings, Bottom 10 Rankings.]
Figure 2: Mean quality ratings of the top 10 and bottom 10 rankings of the different representativeness models over 50 categories. Error bars show one standard error. The vertical axis is bounded by
the best possible top 10 ratings and the worst possible bottom 10 ratings across categories.
bounds for the ratings based on the optimal set of top 10 and bottom 10 images per category. These
are the images which participants rated highest and lowest, regardless of which algorithm was used
to create the stimuli. The mean rating for the optimal top 10 images was slightly less than the
highest possible rating allowed (m = 6.018, se = 0.074), while the mean rating for the optimal
bottom 10 images was significantly higher than the lowest possible rating allowed (m = 2.933,
se = 0.151). The results are presented in Figure 2. The Bayesian model had the overall highest
ratings for its top 10 rankings (m = 5.231, se = 0.026) and the overall lowest ratings for its
bottom 10 rankings (m = 3.956, se = 0.031). The other models performed significantly worse,
with likelihood giving the next highest top 10 (m = 4.886, se = 0.028), and next lowest bottom
10 (m = 4.170, se = 0.031), and prototype having the lowest top 10 (m = 4.756, se = 0.028),
and highest bottom 10 (m = 4.249, se = 0.031). We tested for statistical significance via pairwise
t-tests on the mean differences of the top and bottom 10 ratings over all 50 categories, for each pair
of models. The Bayesian model outperformed both other algorithms (p < .001).
As a second analysis, we ran a Spearman rank-order correlation to examine how well the actual
scores from the models fit with the entire set of human judgments. Although we did not explicitly
ask participants to rank images, their quality ratings implicitly provide an ordering on the images
that can be compared against the models. This also gives us an opportunity to evaluate the exemplar
model, optimizing its ? parameter to maximize the fit to the human data. To perform this correlation
we recorded the model scores over all images for each category, and then computed the correlation of
each model with the human judgments within that category. Correlations were then averaged across
categories. The Bayesian model had the best mean correlation (ρ = 0.352), while likelihood (ρ =
0.220), prototype (ρ = 0.160), and the best exemplar model (λ = 2.0, ρ = 0.212) all performed
less well. Paired t-tests showed that the Bayesian model produced statistically significantly better
performance than the other three models (all p < .01).
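The per-category rank-order analysis averages a Spearman correlation over the 50 categories. A sketch with a hand-rolled, tie-free Spearman (to avoid a scipy dependency; the dict layout and synthetic data are illustrative):

```python
import numpy as np

def spearman_rho(a, b):
    # Spearman correlation = Pearson correlation of the ranks.
    # (Tie-free version; tied values would need average ranks.)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def mean_rank_correlation(model_scores, human_ratings):
    # Average the per-category correlation, as in the evaluation above.
    rhos = [spearman_rho(np.asarray(model_scores[c], dtype=float),
                         np.asarray(human_ratings[c], dtype=float))
            for c in model_scores]
    return float(np.mean(rhos))
```

Averaging within-category correlations (rather than pooling all images) keeps category-level differences in rating scale from inflating or deflating the result.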
5.3 Discussion
Overall, the Bayesian model of representativeness provided the best account of people's judgments
of which images were good and bad examples of the different categories. The mean ratings over the
entire dataset were best predicted by our model, indicating that on average, the model predictions
for images in the top 10 results were deemed of high quality and the predictions for images in the
bottom 10 results were deemed of low quality. Since the images from the Corel database come
with labels given by human judges, few images are actually very bad examples of their prescribed
labels. This explains why the ratings for the bottom 10 images are not much lower. Additionally,
there was some variance as to which images the Mechanical Turk workers considered to be "most
representative". This explains why the ratings for the top 10 images are not much higher, and thus
why the difference between top and bottom 10 on average is not larger. When comparing the actual
Table 1: Model comparisons for the outlier experiment

Model           Average Outlier Position    S.E.
Bayesian Sets   0.805                       ± 0.014
Likelihood      0.779                       ± 0.013
Prototype       0.734                       ± 0.015
Exemplar        0.734                       ± 0.016
scores from the different models against the ranked order of human quality ratings, the Bayesian
account was also significantly more accurate than the other models. While the actual correlation
value was less than 1, the dataset was rather varied in terms of quality for each category and thus it
was not expected to be a perfect correlation. The methods of the experiment were also not explicitly
testing for this effect, providing another source of variation in the results.
6 Finding Outliers in Sets
Measuring the representativeness of items in sets can also provide a novel method of finding outliers
in sets. Outliers are defined as an observation that appears to deviate markedly from other members
of the sample in which it occurs [17]. Since models of representativeness can be used to rank items
in a set by how good an example they are of the entire set, outliers should receive low rankings.
The performance of these different measures in detecting outliers provides another indirect means
of assessing their quality as measures of representativeness.
To empirically test this idea we can take an image from a particular category and inject it into
all other categories, and see whether the different measures can identify it as an outlier. To find
a good candidate image we used the top ranking image per category as ranked by the Bayesian
model. We justify this method because the Bayesian model had the best performance in predicting
human quality judgments. Thus, the top ranked image for a particular category is assumed to be a
bad example of the other categories. We evaluated how low this outlier was ranked by each of the
representativeness measures 50 times, testing the models with a single injected outlier from each
category to get a more robust measure. The final evaluation was based on the normalized outlier
ranking for each category (position of outlier divided by total number of images in the category),
averaged over the 50 injections. The closer this quantity is to 1, the lower the ranking of outliers.
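The evaluation statistic is one line on top of the ranking produced by any of the scores; a sketch (names are illustrative):

```python
def normalized_outlier_position(ranking, outlier):
    """ranking: item ids ordered most- to least-representative.

    Returns (1-based position of the outlier) / (number of items);
    values near 1 mean the outlier landed near the bottom, i.e. the
    representativeness measure detected it.
    """
    return (ranking.index(outlier) + 1) / len(ranking)
```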
The results of this analysis are depicted in Table 1, where it can be seen that the Bayesian model
outperforms the other models. It is interesting to note that these measures are all quite distant from
1. We interpret this as another indication of the noisiness of the original image labels in the dataset
since there were a number of images in each category that were ranked lower than the outlier.
7 Conclusions
We have extended an existing Bayesian model of representativeness to handle sets of items and
showed how it closely approximates a method of clustering on demand (Bayesian Sets) that had
been developed in machine learning. We exploited this relationship to allow us to evaluate a set
of psychological models of representativeness using a large database of naturalistic images. Our
Bayesian measure of representativeness significantly outperformed other proposed accounts in predicting human judgments of how representative images were of different categories. These results
provide strong evidence for this characterization of representativeness, and a new source of validation for the Bayesian Sets algorithm. We also introduced a novel method of detecting outliers in sets
of data using our representativeness measure, and showed that it outperformed other measures. We
hope that the combination of methods from cognitive science and computer science that we used
to obtain these results is the first step towards closer integration between these disciplines, linking
psychological theories and behavioral methods to sophisticated algorithms and large databases.
Acknowledgments. This work was supported by grants IIS-0845410 from the National Science Foundation and
FA-9550-10-1-0232 from the Air Force Office of Scientific Research to TLG and a National Science Foundation
Postdoctoral Fellowship to KAH.
References
[1] D. Kahneman and A. Tversky. Subjective probability: A judgment of representativeness. Cognitive Psychology, 3:430-454, 1972.
[2] G. Gigerenzer. On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103:592, 1996.
[3] G. L. Murphy and D. L. Medin. The role of theories in conceptual coherence. Psychological Review, 92:289-316, 1985.
[4] J. B. Tenenbaum and T. L. Griffiths. The rational basis of representativeness. In Proc. 23rd Annu. Conf. Cogn. Sci. Soc., pages 1036-1041, 2001.
[5] Z. Ghahramani and K. A. Heller. Bayesian sets. In Advances in Neural Information Processing Systems, volume 18, 2005.
[6] C. B. Mervis and E. Rosch. Categorization of natural objects. Annual Review of Psychology, 32:89-115, 1981.
[7] D. N. Osherson, E. E. Smith, O. Wilkie, A. Lopez, and E. Shafir. Category-based induction. Psychological Review, 97:185, 1990.
[8] D. L. Medin and M. M. Schaffer. Context theory of classification learning. Psychological Review, 85:207-238, 1978.
[9] R. M. Nosofsky. Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13:87-108, 1987.
[10] S. K. Reed. Pattern recognition and categorization. Cognitive Psychology, 3:393-407, 1972.
[11] G. Gigerenzer and U. Hoffrage. How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102:684, 1995.
[12] R. N. Shepard. Towards a universal law of generalization for psychological science. Science, 237:1317-1323, 1987.
[13] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[14] K. A. Heller and Z. Ghahramani. A simple Bayesian framework for content-based image retrieval. IEEE Conference on Computer Vision and Pattern Recognition, 2:2110-2117, 2006.
[15] R. Silva, K. A. Heller, and Z. Ghahramani. Analogical reasoning with relational Bayesian sets. International Conference on AI and Statistics, 2007.
[16] H. Müller, S. Marchand-Maillet, and T. Pun. The truth about Corel - evaluation in image retrieval. International Conference on Image and Video Retrieval, 2002.
[17] F. Grubbs. Procedures for detecting outlying observations in samples. Technometrics, 11:1-21, 1969.
Order Reduction for Dynamical Systems
Describing the Behavior of Complex Neurons
Thomas B. Kepler
Biology Dept.
L. F. Abbott
Physics Dept.
Eve Marder
Biology Dept.
Brandeis University
Waltham, MA 02254
Abstract
We have devised a scheme to reduce the complexity of dynamical
systems belonging to a class that includes most biophysically realistic
neural models. The reduction is based on transformations of variables
and perturbation expansions and it preserves a high level of fidelity to
the original system. The techniques are illustrated by reductions of the
Hodgkin-Huxley system and an augmented Hodgkin-Huxley system.
INTRODUCTION
For almost forty years, biophysically realistic modeling of neural systems has followed
the path laid out by Hodgkin and Huxley (Hodgkin and Huxley, 1952). Their seminal
work culminated in the accurately detailed description of the membrane currents
expressed by the giant axon of the squid Loligo, as a system of four coupled non-linear
differential equations. Soon afterward (and ongoing now) simplified, abstract models
were introduced that facilitated the conceptualization of the model's behavior, e.g.
(FitzHugh, 1961). Yet the mathematical relationships between these conceptual models
and the realistic models have not been fully investigated. Now that neurophysiology is
telling us that most neurons are complicated and subtle dynamical systems, this situation
is in need of change. We suggest that a systematic program of simplification in which
a realistic model of given complexity spawns a family of simplified meta-models of
varying degrees of abstraction could yield considerable advantage. In any such scheme,
the number of dynamical variables, or order, must be reduced, and it seems efficient and
reasonable to do this first. This paper will be concerned with this step only. A sketch
of a more thoroughgoing scheme proceeding ultimately to the binary formal neurons of
Hopfield (Hopfield, 1982) has been presented elsewhere (Abbott and Kepler, 1990).
There are at present several reductions of the Hodgkin-Huxley (HH) system (FitzHugh,
1961; Krinskii and Kokoz, 1973; Rose and Hindmarsh, 1989) but all of them suffer to
varying degrees from a lack of generality and/or insufficient realism.
We will present a scheme of perturbation analyses which provide a power-series
approximation of the original high-order system and whose leading term is a lower-order
system (see (Kepler et ai., 1991) for a full discussion). The techniques are general and
can be applied to many models. Along the way we will refer to the HH system for
concreteness and illustrations. Then, to demonstrate the generality of the techniques and
to exhibit the theoretical utility of our approach, we will incorporate the transient outward
current described in (Connor and Stevens, 1972) and modeled in (Connor et al., 1977)
known as IA. We will reduce the resulting sixth-order system to both third- and second-order systems.
EQUIVALENT POTENTIALS
Many systems modeling excitable neural membrane consist of a differential equation
expressing current conservation
C dV/dt + I(V, {xi}) = Ie(t)        (1)
where V is the membrane potential difference, C is the membrane capacitance, Ie(t) is the applied
current, and I(V, {xi}) is the total ionic current expressed as a function of V and the xi, which are gating
variables described by equations of the form
dxi/dt = ki(V) (x̄i(V) − xi)        (2)
providing the balance of the system's description. The ubiquity of the membrane
potential and its role as "command variable" in these model systems suggests that we
might profit by introducing potential-like variables in place of the gating variables. We
define the equivalent potential (EP) for each of the gating variables by
Vi = x̄i⁻¹(xi)        (3)
In realistic neural models, the function x̄i is ordinarily sigmoid and hence invertible. The
chain rule may be applied to give us new equations of motion. Since no approximations
have yet been made, the system expressed in these variables gives exactly the same
evolution for V as the original system. The evolution of the whole HH system expressed
in EPs is shown in fig. 1. There is something striking about this collection of plots. The
transformation to EPs now suggests that of the four available degrees of freedom, only
two are actually utilized. Specifically, Vm is nearly indistinguishable from V, and Vh and
Vn are likewise quite similar. This strongly suggests that we form averages and
differences of EPs within the two classes.
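For a single gating variable with a logistic steady-state curve, the equivalent potential of eq. (3) can be made concrete as follows (the half-activation voltage and slope here are illustrative numbers, not Hodgkin-Huxley fits):

```python
import math

def x_inf(V, V_half=-40.0, k=5.0):
    # Sigmoid steady-state curve x-bar(V) for a gating variable.
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def equivalent_potential(x, V_half=-40.0, k=5.0):
    # EP of eq. (3): the membrane potential at which the gating
    # variable x would be at steady state. Requires 0 < x < 1,
    # which holds for a sigmoid x-bar.
    return V_half + k * math.log(x / (1.0 - x))
```

Because x̄ is monotonic, the change of variables is exactly invertible, so rewriting the system in EPs loses no information; the approximations enter only later, in the perturbation expansion.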
PERTURBATION SERIES
In the general situation, the EPs
must be segregated into two or
more classes. One class will
contain the true membrane
potential V. Members of this
class will be subscripted with Greek letters μ, ν, etc., while the
others will be subscripted with Latin indices i, j, etc. We make,
within each class, a change of
variables to 1) a new representative EP taken as a weighted
average over all members of the
class, and 2) differences between
each member and their average.
The transformations and their
inverses are
Figure 1: Behavior of equivalent potentials in repetitive firing mode of the Hodgkin-Huxley system (each panel plots one EP against t in msec).
φ = Σ_μ aμ Vμ ,    ψ = Σ_i ai Vi ,    δμ = Vμ − ⟨V⟩ ,    δi = Vi − ⟨V⟩        (4)

and

Vμ = φ − Σ_ν aν δν + δμ ,    Vi = ψ − Σ_j aj δj + δi        (5)

where ⟨V⟩ denotes the weighted average over the corresponding class.
We constrain the ai and the aμ to sum to one. The a's will not be taken as constants,
but will be allowed to depend on φ and ψ. We expect, however, that their variation will
be small so that most of the time dependence of φ and ψ will be carried by the V's. We
differentiate eqs. (4), use the inverse transformations of eq. (5) and expand to first order
in the δ's to get
dψ/dt = Σ_i ai [ki(φ)/x̄i′(ψ)] (x̄i(φ) − x̄i(ψ)) + O(δ)        (6)
and the new current conservation equation,
C dφ/dt + a0 I(φ, {x̄μ(φ)}, {x̄i(ψ)}) = Ie(t) + O(δ)        (7)
This is still a current conservation equation, only now we have renormalized the
capacitance in a state-dependent way through a0. The coefficient of the δ's in eq. (6) will
be small, at least in the neighborhood of the equilibrium point, as long as the basic
premise of the expansion holds. No such guarantee is made about the corresponding
coefficient in eq. (7). Therefore we will choose the a's to make the correction term
second order in the δ's by setting the coefficient of each ai and aμ to zero. For the aμ we
get

aμ = Īμ [a0 A + C kμ]⁻¹        (8)

for μ ≠ 0, where

Īj ≡ x̄j′ (∂I/∂xj)        (9)

and we use the abbreviation A ≡ Σ_μ Īμ. And for μ = 0,
CX~
-
u/'O -
CE
cxyky
v.o
+ Cil.o
=0
(10)
Now the time derivatives of the a's vanish at the equilibrium point, and it is with the
neighborhood of this point that we must be primarily concerned. Ignoring these terms
yields surprisingly good results even far from equilibrium. This choice having been
made, we solve for a0 as the root of the polynomial

a0 A − Ī0 − C Σ_{ν≠0} kν Īν [a0 A + C kν]⁻¹ = 0        (11)
whose order is equal to the number of EPs combining to form φ. The time dependence
of ψ is given by specifying the ai. This may be done as for the aμ to get
(12)
EXAMPLE: HODGKIN-HUXLEY + IA
For the specific cases in which the HH system is reduced from fourth order to second,
by combining V and Vm to form φ and combining Vh and Vn to form ψ, the plan outlined
above works without any further meddling, and yields a very faithful reduction. Also
straightforward is the reduction of the sixth-order system given by Connor et al. (Connor
et al., 1977) in which the HH system is supplemented by IA (HH + A) to third order. In
this reduction, the EP for the IA activation variable, a, joins Vh and Vn in the formation
of ψ. Alternatively, we may reduce to a second-order system in which Va joins with V
and Vm to form φ and the EPs for n, h and the IA inactivation variable, b, are combined
to form ψ. This is not as straightforward. A direct application of eq. (12) produces a
curve of singularities where the denominator vanishes in the expression for dψ/dt; on one
side dψ/dt has the same sign as φ − ψ (which it should) and on the other side it does not.
Some additional decisions must be made here. We may certainly take this to be an
indication that the reduction is breaking down, but through good fortune we are able to
salvage it. This matter is dealt with in more detail elsewhere (Kepler et al., 1991). The
reduced models are related in that the first is recovered when the maximum conductance
Order Reduction for Dynamical Systems
of IA is set to zero in either of the other two.
Figure 2 shows the voltage trace of an HH + A cell that is first hyperpolarized and then
suddenly depolarized to above threshold. Traces from all three systems (full, 3rd order,
2nd order) are shown superimposed. This example focuses on the phenomenon of
post-inhibitory latency to firing.

Figure 2: Response of the full HH+A (solid line), 3rd order (dashed) and 2nd order
systems to a current step, showing latency to firing.

When an HH cell is depolarized sufficiently to produce firing, the onset of the first action
potential is immediate and virtually independent of the degree of hyperpolarization
experienced immediately beforehand. In contrast, the same cell with an IA now shows a
latency to firing which depends monotonically on the depth to which it had been
hyperpolarized immediately prior to depolarization.
This is most clearly seen in fig. 3, showing the phase portrait of the second-order system.
The dφ/dt = 0 nullcline has acquired a second branch. In order to get from the initial
(hyperpolarized) location, the phase point must crawl over this obstacle, and the lower it
starts, the farther it has to climb.

Figure 3: Phase portrait of the event shown in fig. 2, for the 2nd order reduced system.

Figure 4 shows the firing frequency as a function of the injected current, for the full HH
and HH+A systems (solid lines), the HH second order and HH+A third order systems
(dashed lines), and the HH+A second order system
(dotted line).* Note that the first reduction matches the full system quite well in both
cases. The second reduction, however, does not do as well. It does get the qualitative
features right, though. The expansion of the dynamic range for the frequency of firing
is still present, though squeezed into a much smaller interval on the current axis. The
bifurcation occurs at nearly the right place and seems to have the proper character, i.e.,
saddle rather than Hopf, though this has not been rigorously investigated.
Figure 4: Firing frequency as a function of injected current (μA). Solid: full systems;
dashed: 2nd order HH & 3rd order HH+A; dotted: 2nd order HH+A. (From Kepler et
al. 1991)
The reduced systems
are intended to be dynamically
realistic, to respond accurately
to the kind of time-dependent
external currents that would be
encountered in real networks.
To put this to the test, we ran
simulations in which I_external(t)
was given by a sum of sinusoids
of equal amplitude and randomly
chosen frequency and phase.
Figure 5 illustrates the remarkable match between the full
HH + A system and the third-order reduction, when such an
irregular (quasiperiodic) current
signal is used to drive them.
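The driving signal just described is simple to generate. The sketch below (Python/NumPy, not part of the original paper) builds such a quasiperiodic current as a sum of equal-amplitude sinusoids; the frequency range and component count are our own assumptions, since the text does not specify them:

```python
import numpy as np

def quasiperiodic_current(t, n_components=8, amplitude=1.0, seed=0):
    """Sum of equal-amplitude sinusoids with randomly chosen frequencies and
    phases, as used to drive the full and reduced HH+A models. The frequency
    range (cycles/ms) is an assumed illustrative choice."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(0.01, 0.2, n_components)      # assumed range
    phases = rng.uniform(0.0, 2 * np.pi, n_components)
    t = np.asarray(t, dtype=float)
    # outer(t, freqs) has shape (len(t), n_components); sum the components
    return amplitude * np.sum(
        np.sin(2 * np.pi * np.outer(t, freqs) + phases), axis=1)

t = np.arange(0.0, 500.0, 0.1)   # 500 ms window, as in Figure 5
I_ext = quasiperiodic_current(t)
```

The resulting waveform is irregular but deterministic for a fixed seed, which makes side-by-side comparison of the full and reduced systems straightforward.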
CONCLUSION
We have presented a systematic approach to the reduction of order for a class of
dynamical systems that includes the Hodgkin-Huxley system, the Connor et al. IA
extension of the HH system, and many other realistic neuron models. As mentioned at
the outset, these procedures are merely the first steps in a more comprehensive program
of simplification. In this way, the conceptual advantage of abstract models may be joined
to the biophysical realism of physiologically derived models in a smooth and tractable
manner, and the benefits of simplicity may be enjoyed with a clear conscience.
*For purposes of comparison, the HH system used here is as modified by Connor et
al. (1977), but with IA removed and the leakage reversal potential adjusted to give the
same resting potential as the HH+A cell.
Figure 5: Response of the HH+A system to irregular current injection over 500 ms.
Solid line: full system; dashed line: 3rd order reduction.
Acknowledgment
This work was supported by National Institutes of Health grant T32NS07292 (TBK),
Department of Energy Contract DE-AC0276-ER03230 (LFA) and National Institutes of
Mental Health grant MH46742 (EM).
REFERENCES
L.F. Abbott and T.B. Kepler, 1990, in Proceedings of the XI Sitges Conference on Neural Networks, in press.
J.A. Connor and C.F. Stevens, 1971, J. Physiol., Lond. 213, 31.
J.A. Connor, D. Walter and R. McKown, 1977, Biophys. J. 18, 81.
R. FitzHugh, 1961, Biophys. J. 1, 445.
A.L. Hodgkin and A.F. Huxley, 1952, J. Physiol. 117, 500.
J.J. Hopfield, 1982, Proc. Nat. Acad. Sci. 79, 2554.
T.B. Kepler, L.F. Abbott and E. Marder, 1991, submitted to Biol. Cybern.
V.I. Krinskii and Yu.M. Kokoz, 1973, Biofizika 18, 506.
R.M. Rose and J.L. Hindmarsh, 1989, Proc. R. Soc. Lond. B 237, 267.
block-sparse structure
Anatoli Juditsky?
Fatma K?l?nc? Karzan?
Arkadi Nemirovski?
Boris Polyak?
Abstract
We discuss new methods for the recovery of signals with block-sparse structure,
based on `1 -minimization. Our emphasis is on the efficiently computable error
bounds for the recovery routines. We optimize these bounds with respect to the
method parameters to construct the estimators with improved statistical properties. We justify the proposed approach with an oracle inequality which links the
properties of the recovery algorithms and the best estimation performance.
1 Introduction
Suppose an observation y ∈ R^m is available, where
y = Ax + u + Dξ.   (1)
Here A is a given m × n sensing matrix, x ∈ R^n is an unknown vector, u is an unknown (deterministic) nuisance parameter, known to belong to a certain set U ⊂ R^m, D ∈ R^{m×m} is a known noise intensity matrix, and ξ ∈ R^m is random noise with standard normal distribution.
We aim to recover a linear transformation w = Bx of the signal x, where B is a given N × n matrix, under the assumption that w is block-sparse. Namely, the space W (= R^N) where w "lives" is represented as W = R^{n_1} × ... × R^{n_K}, so that w = Bx ∈ R^N is a block vector: w = [w[1]; ...; w[K]] with blocks w[k] = B[k]x ∈ R^{n_k}, 1 ≤ k ≤ K, where B[k], 1 ≤ k ≤ K, are n_k × n matrices. The s-block-sparsity of w means that at most a given number s of the blocks w[k], 1 ≤ k ≤ K, are nonzero.
To motivate interest in the presented model, let us consider two examples.
Tracking of a singularly perturbed linear system. Consider a discrete-time linear system
z[i] = Gz[i−1] + w[i] + Fξ[i], i = 1, 2, ..., z[0] ∈ R^d,
where the Fξ[i] are random perturbations with ξ[i] being i.i.d. standard normal vectors, ξ[i] ∼ N(0, I_d), and G, F ∈ R^{d×d} are known matrices. We assume that the perturbation vectors w[i] ∈ R^d, i = 1, 2, ..., are mostly zeros, but a small proportion of the w[i] are nonvanishing unknown vectors in R^d. Suppose that we are given the linear noisy observation y ∈ R^m, such that y = A[z[0]; ...; z[K]] + σξ, where the matrix A ∈ R^{m×d(K+1)} and the noise intensity σ > 0 are known, and ξ ∼ N(0, I_m). Given y, our objective is to recover the sequence of perturbations w = [w[k]], k = 1, ..., K, with w[k] ∈ R^d, and the trajectory z of the system.
*LJK, Université J. Fourier, B.P. 53, 38041 Grenoble Cedex 9, France, [email protected]
†Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA, [email protected]
‡Georgia Institute of Technology, Atlanta, Georgia 30332, USA, [email protected] Research of the second and the third authors was supported by the Office of Naval Research grant #240667V.
§Institute of Control Sciences of Russian Academy of Sciences, Moscow 117997, Russia, [email protected]
To fit the tracking problem into the basic framework, let us decompose z = x + ζ, where x = [x[0]; ...; x[K]] with x[i] = Gx[i−1] + w[i], x[0] = z[0], and ζ = [ζ[0]; ...; ζ[K]] with ζ[i] = Gζ[i−1] + Fξ[i], ζ[0] = 0. Then
y = Ax + [Aζ + σξ],
where the distribution of Aζ + σξ is normal with zero mean and covariance matrix D² = AV Aᵀ + σ²I. Here the Kd × Kd covariance matrix V of ζ has the block structure with blocks
V^{k,ℓ} = Cov(ζ[k], ζ[ℓ]) = Σ_{i=1}^{k∧ℓ} G^{k−i} F Fᵀ (Gᵀ)^{ℓ−i},
with the empty sum (for k∧ℓ = 0) equal to 0 by convention.
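For illustration, these covariance blocks can be evaluated directly from the closed form and checked against the defining recursion ζ[i] = Gζ[i−1] + Fξ[i]. The sketch below (Python/NumPy; the function name and small test dimensions are our own, not from the paper) does exactly that:

```python
import numpy as np

def noise_cov_blocks(G, F, K):
    """Covariance blocks V[k][l] = Cov(zeta[k], zeta[l]) of the accumulated
    perturbation zeta[i] = G zeta[i-1] + F xi[i], zeta[0] = 0, using the
    closed form sum_{i=1}^{min(k,l)} G^{k-i} (F F^T) (G^T)^{l-i}."""
    d = G.shape[0]
    Q = F @ F.T
    # powers of G: P[j] = G^j for j = 0, ..., K-1
    P = [np.eye(d)]
    for _ in range(K - 1):
        P.append(P[-1] @ G)
    V = [[np.zeros((d, d)) for _ in range(K + 1)] for _ in range(K + 1)]
    for k in range(1, K + 1):
        for l in range(1, K + 1):
            V[k][l] = sum(P[k - i] @ Q @ P[l - i].T
                          for i in range(1, min(k, l) + 1))
    return V
```

A useful consistency check: the diagonal blocks obey the Lyapunov recursion V[k][k] = G V[k−1][k−1] Gᵀ + F Fᵀ, and for l > k one has V[k][l] = V[k][k] (Gᵀ)^{l−k}.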
Image reconstruction with regularization by Total Variation (TV) [21, 7]. Here one looks to recover an image Z ∈ R^{n1×n2} from a blurred noisy observation y: y = Az + σξ, y ∈ R^m, where z = Col(Z) ∈ R^n, n = n1 n2, A ∈ R^{m×n} is the matrix of a discrete convolution, σ > 0 is known, and ξ ∼ N(0, I_m). We assume that the image z may be decomposed as z = x + v, where v is a "regular component", which is modeled by restricting v to belong to the set V of "smooth images"; let w = Bx ∈ R^{2n} be the (discretized) gradient of the x-component at the points of the grid. In this example Bx naturally splits into 2-dimensional blocks, and TV is nothing but the sum of the `2-norms of these blocks. We suppose that w is (nearly) sparse. When denoting u = Av we come to the observation model y = Ax + u + Dξ, with u ∈ U = AV, D = σI, and ξ ∼ N(0, I_m).
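As a concrete illustration of this example, the sketch below (Python/NumPy, our own; the forward-difference convention for the discrete gradient is an assumed choice, as the paper does not fix one) computes the 2-dimensional gradient blocks of an image and the resulting TV as the sum of the Euclidean norms of the blocks:

```python
import numpy as np

def grad_blocks(X):
    """Discrete gradient of an n1 x n2 image: one 2-vector block per pixel,
    using forward differences with zeros at the border (assumed convention)."""
    gx = np.zeros_like(X, dtype=float)
    gy = np.zeros_like(X, dtype=float)
    gx[:-1, :] = X[1:, :] - X[:-1, :]
    gy[:, :-1] = X[:, 1:] - X[:, :-1]
    # shape (n1*n2, 2): each row is one 2-dimensional gradient block
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

def total_variation(X):
    """Isotropic TV = L_[1,2] of the gradient blocks: the sum over pixels of
    the Euclidean norm of the 2-dimensional gradient block."""
    W = grad_blocks(X)
    return float(np.linalg.norm(W, axis=1).sum())
```

This makes the connection explicit: TV is precisely the block-`1 norm L_[1,2] of w = Bx for this representation structure.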
The recovery routines we consider are based on block-`1 minimization, i.e., the estimate ŵ(y) of w = Bx is ŵ = Bx̂(y), where x̂(y) is obtained by minimizing the norm Σ_{k=1}^K ||B[k]z||_(k) over signals z ∈ R^n with Az "fitting," in a certain precise sense, the observations y. Here || · ||_(k), 1 ≤ k ≤ K, are norms, given in advance, on the spaces R^{n_k} where the blocks of w take their values.
In the sequel we refer to the collection B = (B, n_1, ..., n_K, || · ||_(1), ..., || · ||_(K)), given in advance, as the representation structure. Given such a structure B and a matrix A, our goal is to understand how well one can recover the s-block-sparse transform Bx by appropriately implemented block-`1 minimization.
Related Compressed Sensing research. Our situation and goal form a straightforward extension of the usual block-sparsity Compressed Sensing framework. Indeed, the standard representation structure B = I_n, n_k = 1, || · ||_(k) = | · |, 1 ≤ k ≤ K = n, leads to the standard Compressed Sensing setting: recovering a sparse signal x ∈ R^n from its noisy observations (1) via `1 minimization. With the same B = I_n and a nontrivial block structure {n_k, || · ||_(k)}, k = 1, ..., K, we arrive at block-sparsity and the
related block-`1 minimization routines considered in numerous recent papers. There is a number
of applications where block-sparsity seems to arise naturally (see, e.g., [10] and references therein).
Several methods of estimation and selection extending the plain `1 -minimization to block sparsity
were proposed and investigated recently. Most of the related research is focused so far on block
regularization schemes, i.e., Lasso-type algorithms
x̂(y) ∈ Argmin_{z=[z[1];...;z[K]] ∈ R^n = R^{n_1}×...×R^{n_K}} { ||Az − y||₂² + λ L_[1,2](z) },  L_[1,2](z) = Σ_{k=1}^K ||z[k]||₂,
|| · ||₂ being the "usual" `2-norm on R^{n_k}. In particular, the huge literature on plain Lasso has a significant counterpart on group Lasso; see, e.g., [1, 2, 8, 9, 10, 11, 16, 19, 20, 22, 23], and references
therein. Another classical Compressed Sensing estimator, the Dantzig Selector, is studied in the block-sparse case in [12, 17]. The available theoretical results allow one to bound the errors of recovery in terms of the magnitude of the observation noise and the "s-concentration" of the true signal x (that is, its L_[1,2] distance from the space of signals with at most s nonzero blocks). Typically, these results deal with the quadratic risks of estimation and rely on a natural block analogy ("Block RIP," see, e.g., [10]) of the celebrated Restricted Isometry property for the sensing matrix A, introduced by Candès and Tao [5], or on a block analogy [18] of the Restricted Eigenvalue property from [3].
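One standard way to solve group-Lasso problems of the form above is proximal gradient descent with block soft-thresholding. The sketch below (Python/NumPy) is a generic solver of this type, offered as an illustration of the block-`1 penalty; it is not one of the estimators analyzed in this paper:

```python
import numpy as np

def block_soft_threshold(z, blocks, tau):
    """Proximal operator of tau * L_[1,2]: shrink each block toward zero."""
    out = z.copy()
    for idx in blocks:
        nrm = np.linalg.norm(z[idx])
        out[idx] = 0.0 if nrm <= tau else (1.0 - tau / nrm) * z[idx]
    return out

def group_lasso_ista(A, y, blocks, lam, n_iter=500):
    """Minimize ||Az - y||_2^2 + lam * sum_k ||z[k]||_2 by proximal gradient
    (ISTA). `blocks` is a list of index lists, one per block."""
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/L, L = Lipschitz const
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ z - y)
        z = block_soft_threshold(z - step * grad, blocks, step * lam)
    return z
```

With A = I the minimizer is available in closed form (each block of y is shrunk by lam/2 in Euclidean norm), which gives a convenient correctness check.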
Contributions of this work. To the best of our knowledge, the conditions used when studying theoretical properties of block-sparse recovery (with a notable exception of the Mutual Block Incoherence condition of [9]) are unverifiable. The latter means that, given the matrix A, one cannot decide in any reasonable time whether the (block-) RI or RE property holds with given parameters. While the efficient verifiability of a condition is by no means necessary for a condition to be meaningful and useful, we believe that verifiability has its own value and is worthy of being investigated. In particular, the verifiability property allows us to design new recovery routines with explicit confidence bounds for the recovery error, and to optimize these bounds with respect to the method parameters.
Thus, the major novelty in what follows is the emphasis on verifiable conditions on A and the representation structure which guarantee good recovery of Bx from noisy observations of Ax, provided that Bx is nearly s-block-sparse and the observation noise is low. In this respect, this work extends the results of [15, 13, 14], where `1-recovery of the "usual" sparse vectors was considered (in the first two papers, in the case of uncertain-but-bounded observation errors, and in the third, in the case of Gaussian observation noise). We propose new routines of block-sparse recovery which explicitly utilize the verifiability certificate (the contrast matrix) and show how these routines may be tuned to attain the best performance bounds.
The rest of the manuscript is organized as follows. In Section 2 we give the detailed problem statement and introduce the family Qs,q(κ), 1 ≤ q ≤ ∞, of conditions which underlie the subsequent developments. Then in Section 2.3 we introduce the recovery routines and provide bounds for their risks. We discuss the properties of the conditions Qs,q in Section 3. In particular, in Section 3.1 we show how one can efficiently verify the condition Qs,∞ (the strongest of the family Qs,q). Then in Section 3.2 we provide an oracle inequality which shows that the condition Qs,∞ is also necessary for recovery of block-sparse signals in `∞-norm.
2 Accuracy bounds for `1-recovery routines

2.1 Problem statement and notations
Let w = Bx ∈ W = R^{n_1} × ... × R^{n_K}. To streamline the presentation, we restrict ourselves to the case where all the norms || · ||_(k) on the factors of the representation are the usual `r-norms, 1 ≤ r ≤ ∞: ||w[k]||_r = (Σ_{i=1}^{n_k} |w[k]_i|^r)^{1/r}, i.e., the representation structures we consider are B = (B, n_1, ..., n_K, || · ||_r). Let r* = r/(r−1), so that || · ||_{r*} is the norm conjugate to || · ||_r. A vector w = [w[1]; ...; w[K]] from W is called s-block-sparse if the number of nonzero blocks w[k] ∈ R^{n_k} in w is at most s.

For w ∈ W, we call the number ||w[k]||_r the magnitude of the k-th block of w, and denote by w^s the representation vector obtained from w by zeroing out all but the s largest in magnitude blocks of w (with ties resolved arbitrarily). For w ∈ W and 1 ≤ p ≤ ∞, we denote by L_[p,r](w) the || · ||_p-norm of the vector [||w[1]||_r; ...; ||w[K]||_r], so that L_[p,r](·) is a norm on W with conjugate norm L*_[p,r](w) = || [||w[1]||_{r*}; ...; ||w[K]||_{r*}] ||_{p*}, p* = p/(p−1). Given a positive integer s ≤ K, we set L_{s,[p,r]}(w) = L_[p,r](w^s); note that L_{s,[p,r]}(·) is a norm on W. When the representation structure B of x (and thus the norm || · ||_r) is fixed, we use the notation L_p, L*_p, and L_{s,p} instead of L_[p,r], L*_[p,r], and L_{s,[p,r]}.
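The quantities just introduced are straightforward to compute. The following sketch (Python/NumPy, ours; blocks are passed as index lists) evaluates the block magnitudes, the norm L_[p,r], and the truncation w^s used to measure s-concentration:

```python
import numpy as np

def block_norms(w, blocks, r=2.0):
    """Vector of block magnitudes [||w[1]||_r; ...; ||w[K]||_r]."""
    return np.array([np.linalg.norm(w[idx], r) for idx in blocks])

def L(w, blocks, p=1.0, r=2.0):
    """The norm L_[p,r](w): the l_p norm of the vector of block magnitudes."""
    return float(np.linalg.norm(block_norms(w, blocks, r), p))

def top_s_blocks(w, blocks, s):
    """w^s: keep the s largest-magnitude blocks of w, zero out the rest."""
    mags = block_norms(w, blocks)
    keep = np.argsort(mags)[::-1][:s]
    out = np.zeros_like(w)
    for k in keep:
        out[blocks[k]] = w[blocks[k]]
    return out
```

In this notation, x belongs to X(s, υ) exactly when L(w − top_s_blocks(w, blocks, s), blocks, p=1) ≤ υ for w = Bx.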
The recovery problem we are interested in is as follows: suppose we are given an indirect observation (cf. (1))
y = Ax + u + Dξ
of an unknown signal x ∈ R^n. Here A ∈ R^{m×n}, and u + Dξ is the observation error; in this error, u is an unknown nuisance known to belong to a given compact convex set U ⊂ R^m symmetric w.r.t. the origin, D ∈ R^{m×m} is known, and ξ ∼ N(0, I_m).

We want to recover x and the representation w = Bx of x, knowing in advance that this representation is nearly s-block-sparse for some given s. Specifically, we consider the set
X(s, υ) = {x ∈ R^n : L_1(Bx − [Bx]^s) ≤ υ}.
A recovery routine is a Borel function x̂(y) : R^m → R^n, and we characterize the performance of such a routine by its L_p-risk of recovering w = Bx by ŵ(y) = Bx̂(y):
Risk_p(ŵ(·)|s, D, υ, ε) = inf {δ : Prob_ξ{L_p(ŵ(y) − w) ≤ δ ∀(u ∈ U, x ∈ X(s, υ))} ≥ 1 − ε};
here 0 ≤ ε ≤ 1 and 1 ≤ p ≤ ∞. In other words, Risk_p(ŵ(·)|s, D, υ, ε) ≤ δ if and only if there exists a set Ξ ⊂ R^m of "good realizations of ξ" such that Prob{ξ ∈ Ξ} ≥ 1 − ε and the L_[p,r]-norm of B[x̂(y) − x] is ≤ δ whenever ξ ∈ Ξ, u ∈ U, and whenever x ∈ R^n is such that Bx can be approximated by an s-block-sparse representation vector within accuracy υ (measured in the L_[1,r]-norm).
2.2 Condition Qs,q(κ)

Let a sensing matrix A and a representation structure B = (B, n_1, ..., n_K, || · ||_r) be given, and let s ≤ K be a positive integer, q ∈ [1, ∞] and κ > 0. We say that a pair (H, || · ||), where H ∈ R^{m×M} and || · || is a norm on R^M, satisfies the condition Qs,q(κ) associated with the matrix A and B, if
∀x ∈ R^n : L_{s,q}(Bx) ≤ s^{1/q} ||H^T Ax|| + κ s^{1/q − 1} L_1(Bx).   (2)
The following is an evident observation.

Observation 2.1 Given A and a representation structure B, let (H, || · ||) satisfy Qs,q(κ). Then (H, || · ||) satisfies Qs,q′(κ′) for all q′ ∈ (1, q) and κ′ ≥ κ. Besides this, if s′ ≤ s is a positive integer, then ((s/s′)^{1/q} H, || · ||) satisfies Qs′,q((s′/s)^{1 − 1/q} κ).

Whenever (B, n_1, ..., n_K, || · ||_r) is the standard representation structure, meaning that B is the identity matrix, n_k = 1 for all k, and || · ||_r = | · |, the condition Qs,q(κ) reduces to the condition Hs,q(κ) introduced in [14].
2.3 `1-Recovery Routines

We consider two block-sparse recovery routines.
Regular `1 recovery is given by
x̂_reg(y) ∈ Argmin_z { L_1(Bz) : ||H^T(Az − y)|| ≤ ρ },
where H ∈ R^{m×M}, || · || and ρ > 0 are parameters of the construction.

Theorem 2.1 Let s be a positive integer, q ∈ [1, ∞] and κ ∈ (0, 1/2). Let also ε ∈ (0, 1). Assume that the parameters H, || · ||, ρ of the regular `1-recovery are such that
A. (H, || · ||) satisfies the condition Qs,q(κ) associated with the matrix A and the representation structure B;
B. there exists a set Ξ ⊂ R^m such that Prob(ξ ∈ Ξ) ≥ 1 − ε and
||H^T(u + Dξ)|| ≤ ρ ∀(u ∈ U, ξ ∈ Ξ).   (3)
Then for all 1 ≤ p ≤ q and υ > 0,
Risk_p(Bx̂_reg(y)|s, D, υ, ε) ≤ (4s)^{1/p} (2ρ + s^{−1}υ)/(1 − 2κ), 1 ≤ p ≤ q.   (4)
Penalized `1 recovery is
x̂_pen(y) ∈ Argmin_z { L_1(Bz) + 2s ||H^T(Az − y)|| },
where H ∈ R^{m×M}, || · || and a positive integer s are parameters of the construction. The accuracy of the penalized recovery is given by the following analogue of Theorem 2.1:

Theorem 2.2 Let s be a positive integer, q ∈ [1, ∞] and κ ∈ (0, 1/2). Let also ε ∈ (0, 1). Assume that the parameters H, || · ||, s of the penalized recovery and a ρ ≥ 0 satisfy conditions A, B from Theorem 2.1. Then for all 1 ≤ p ≤ q and υ > 0 we have
Risk_p(Bx̂_pen(y)|s, D, υ, ε) ≤ 2(2s)^{1/p} (2ρ + s^{−1}υ)/(1 − 2κ), 1 ≤ p ≤ q,   (5)
cf. (4).
3 Evaluating Condition Qs,∞(κ)

The condition Qs,q(κ) of Section 2.2 is closely related to known conditions introduced to study the properties of recovery routines in the context of block-sparsity. Let us consider the representation structure with B = I_n. If the norm || · || in (2) is chosen to be the `∞-norm, we have the following obvious observation:

(!) Let H satisfy Qs,q(κ), and let β̂ be the maximum of the Euclidean norms of the columns of H. Then
∀x ∈ R^n : L_{s,q}(x) ≤ β̂ s^{1/q} ||Ax||₂ + κ s^{1/q − 1} L_1(x).

Note that conditions of this kind with κ < 1/2 and || · ||_r = || · ||₂ play a crucial role in the performance analysis of group-Lasso and the Dantzig Selector. For example, the error bounds for Lasso recovery obtained in [18] rely upon the Restricted Eigenvalue assumption RE(s, γ), which reads: there is γ > 0 such that
L_2(x^s) ≤ (1/γ) ||Ax||₂ whenever 3 L_1(x^s) ≤ L_1(x − x^s).
Hence L_{s,1}(x) ≤ √s L_{s,2}(x) ≤ (√s/γ) ||Ax||₂ whenever 4 L_{s,1}(x) ≤ L_1(x), so that
∀x ∈ R^n : L_{s,1}(x) ≤ (s^{1/2}/γ) ||Ax||₂ + (1/4) L_1(x)   (6)
(observe that (6) is nothing but the "block version" of the Compatibility condition from [4]).
The bad news is that, in general, the condition Qs,q(κ), as well as the RE and Compatibility conditions, cannot be verified efficiently. Specifically, given a sensing matrix A and a representation structure B, it seems to be difficult even to verify that a pair (H, || · ||) satisfies the condition Qs,q(κ) associated with A, B, let alone to synthesize an H which satisfies this condition and results in the best possible error bounds (4), (5) for the regular and the penalized `1-recoveries. The good news is that when || · ||_r is the uniform norm || · ||_∞ and, in addition, q = ∞, the condition Qs,q(κ) becomes fully computationally tractable.¹ We intend to demonstrate also that this condition Qs,∞(κ) is in fact necessary for the bounds of the form (4), (5) to be valid when p = ∞.
3.1 Condition Qs,∞(κ), case r = ∞: tractability

Consider the case of the representation structure B∞ = (B, n_1, ..., n_K, || · ||_∞). We have the following result.

Proposition 3.1 Let || · ||_(k) = || · ||_∞ for all k ≤ K, and let a positive integer s and reals κ > 0, ε ∈ (0, 1) be given.
(i) Assume that a triple (H, || · ||, ρ), where H ∈ R^{m×M}, || · || is a norm on R^M, and ρ ≥ 0, is such that
(!) (H, || · ||) satisfies Qs,∞(κ), and the set Ξ = {ξ : ||H^T[u + Dξ]|| ≤ ρ ∀u ∈ U} is such that Prob(ξ ∈ Ξ) ≥ 1 − ε.
Then there exist N = n_1 + ... + n_K vectors h^1, ..., h^N in R^m and an N × N block matrix V = [V^{kℓ}], k, ℓ = 1, ..., K (the blocks V^{kℓ} of V are n_k × n_ℓ matrices) such that
(a) B = V B + [h^1, ..., h^N]^T A,
(b) ||V^{kℓ}||_{∞,∞} ≤ s^{−1} κ ∀k, ℓ ≤ K (here ||V^{kℓ}||_{∞,∞} = max_j ||Row_j(V^{kℓ})||_1, Row_j(M) being the j-th row of M),   (7)
(c) Prob{Ξ+ := {ξ : max_{u∈U} u^T h^i + |(Dξ)^T h^i| ≤ ρ, 1 ≤ i ≤ N}} ≥ 1 − ε.
(ii) Whenever vectors h^1, ..., h^N ∈ R^m and a matrix V = [V^{kℓ}], k, ℓ = 1, ..., K, satisfy (7), the m × N matrix H = [h^1, ..., h^N], the norm || · ||_∞ on R^N, and ρ form a triple satisfying (!).

¹Recall that by Observation 2.1, q = ∞ corresponds to the strongest among the conditions Qs,q(κ) associated with A and a given representation structure B, and ensures the validity of the bounds (4) and (5) in the largest possible range, 1 ≤ p ≤ ∞, of values of p.
Discussion. Let a sensing matrix A ∈ R^{m×n} and a representation structure B∞ be given, along with a positive integer s, an uncertainty set U, and the quantities D and ε. Recall that Theorems 2.1, 2.2 say that if a triple (H, || · ||, ρ) is such that (H, || · ||) satisfies Qs,∞(κ) with κ < 1/2, and H, ρ are such that for the set
Ξ = {ξ : ||H^T[u + Dξ]|| ≤ ρ ∀u ∈ U}
it holds Prob(Ξ) ≥ 1 − ε, then for all υ ≥ 0, for the regular `1 recovery associated with (H, || · ||, ρ) and for the penalized `1 recovery associated with (H, || · ||, s), one has
Risk_p(Bx̂|s, D, υ, ε) ≤ 2(2s)^{1/p} (2ρ + s^{−1}υ)/(1 − 2κ), 1 ≤ p ≤ ∞.   (8)
Proposition 3.1 states that when applying this result, we lose nothing by restricting ourselves to triples H = [h^1, ..., h^N] ∈ R^{m×N}, N = n_1 + ... + n_K, || · || = || · ||_∞ on R^N, ρ ≥ 0 which can be augmented by an appropriately chosen N × N matrix V so as to satisfy relations (7). In the rest of this discussion, it is assumed that we are speaking about triples (H, || · ||, ρ) satisfying the just defined restrictions.

Now, as far as the bounds (8) are concerned, they are completely determined by two parameters, κ (which should be < 1/2) and ρ; the smaller these parameters, the better the bounds. In what follows we address the issue of efficient synthesis of matrices H with as good as possible values of κ and ρ.

Observe, first, that H = [h^1, ..., h^N] and κ should admit an extension by a matrix V to a solution of the system of constraints (7). Let ψ_U(h) = max_{u∈U} u^T h. Note that the restriction
Prob{Ξ+ = {ξ : ψ_U(h^i) + |(Dξ)^T h^i| ≤ ρ, 1 ≤ i ≤ N}} ≥ 1 − ε   (9)
implies that
ρ ≥ max_{1≤i≤N} [ψ_U(h^i) + erfinv(ε/2) ||D^T h^i||₂],
where erfinv(·) is the inverse error function², and it is implied by
ρ ≥ ψ_U(h^i) + erfinv(ε/(2N)) ||D^T h^i||₂, 1 ≤ i ≤ N.   (10)
Ignoring the "gap" between erfinv(ε/2) and erfinv(ε/(2N)), we can safely model the restriction (9) by the system of convex constraints (10). Thus, the set Gs of admissible (κ, ρ) can be safely approximated by the computationally tractable convex set
Ĝs = {(κ, ρ) : ∃H = [h^1, ..., h^N] ∈ R^{m×N} and V = [V^{kℓ} ∈ R^{n_k×n_ℓ}], k, ℓ = 1, ..., K, such that
B = V B + H^T A, ||V^{kℓ}||_{∞,∞} ≤ κ/s, 1 ≤ k, ℓ ≤ K,
max_{u∈U} u^T h^i + erfinv(ε/(2N)) ||D^T h^i||₂ ≤ ρ, 1 ≤ i ≤ N}.
3.2 Condition Qs,∞(κ), case r = ∞: necessity

Let the representation structure B∞ = (B, n_1, ..., n_K, || · ||_∞) be fixed. From the above discussion we know that if, for some κ < 1/2 and ρ > 0, there exist H = [h^1, ..., h^N] ∈ R^{m×N} and V = [V^{kℓ} ∈ R^{n_k×n_ℓ}], k, ℓ = 1, ..., K, satisfying (7), then regular `1-recovery with appropriate choice of parameters ensures that
Risk_∞(Bx̂_reg|s, D, υ, ε) ≤ (2ρ + s^{−1}υ)/(1 − 2κ).   (11)
We are about to demonstrate that this implication can be "nearly inverted":
²i.e., u = erfinv(δ) means that (2π)^{−1/2} ∫_u^∞ e^{−t²/2} dt = δ.
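In this convention, erfinv(δ) is the upper δ-quantile of the standard Gaussian distribution. A minimal implementation (ours; scipy.stats.norm.isf would serve equally well) can be obtained by bisection on the Gaussian tail Q(u) = erfc(u/√2)/2:

```python
import math

def erfinv_paper(delta):
    """The paper's erfinv: the u solving
    (1/sqrt(2*pi)) * integral_u^inf exp(-t^2/2) dt = delta,
    i.e. the upper delta-quantile of N(0,1), found by bisection on the
    decreasing tail function Q(u) = erfc(u/sqrt(2))/2."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > delta:
            lo = mid   # tail still too heavy: the quantile is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, erfinv(0.025) is the familiar two-sided 95% normal quantile, about 1.96.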
Proposition 3.2 Let a sensing matrix A, an uncertainty set U, and reals υ > 0, ε ∈ (0, 1) be given. Assume also that the observation error "is present," specifically, that for every r > 0 the set {u + De : u ∈ U, ||e||₂ ≤ r} contains a neighborhood of the origin.
Given a positive integer S, assume that there exists a recovery routine x̂ satisfying an error bound of the form (11), namely,
∀(x ∈ R^n, u ∈ U) : Prob_ξ{||B[x̂(y) − x]||_∞ ≤ ρ + S^{−1} L_1(Bx − [Bx]^S)} ≥ 1 − ε
for some ρ > 0. Then there exist H = [h^1, ..., h^N] ∈ R^{m×N} and V = [V^{kℓ} ∈ R^{n_k×n_ℓ}], k, ℓ = 1, ..., K, satisfying
(a) B = V B + H^T A,
(b) ||V^{kℓ}||_{∞,∞} ≤ 2S^{−1} ∀k, ℓ ≤ K,   (12)
(c) ρ̂ := max_{1≤i≤N} max_{u∈U} [u^T h^i + erfinv(ε/(2N)) ||D^T h^i||₂] ≤ 2ρ erfinv(ε/(2N))/erfinv(ε/2),
and such that
Prob{Ξ+ := {ξ : max_{u∈U} u^T h^i + |(Dξ)^T h^i| ≤ ρ̂, 1 ≤ i ≤ N}} ≥ 1 − ε.
u?U
The latter exactly means that the exhibited H satisfies the condition Qs,? (?) (see Proposition 3.1)
1
k
for s ?nearly as large as S,? namely, s ? ?S
2 . Further, H = [h , ..., h ], ? satisfy conditions (10)
erfinv( )
. As a
(and thus ? condition B of Theorem 2.1), with ? being ?nearly ??, namely, ? ? 2? erfinv(2N
2)
S
consequence, under the premise of the proposition, we have for s ? 8 (cf (11)):
Risk? (Bb
xreg |s, D, ?, ) ? 8?
3.3
erfinv( 2N
)
+ 2s?1 ?.
erfinv( 2 )
Condition Qs,? (?), case r = 2: a verifiable sufficient condition
In this section we consider the case of the representation structure B₂ = (B, n_1, ..., n_K, || · ||₂). A verifiable sufficient condition for Qs,∞(κ) is given by the following

Proposition 3.3 Let a sensing matrix A and a representation structure B₂ be given. Let N = n_1 + ... + n_K, and let an N × N matrix V = [V^{kℓ}], k, ℓ = 1, ..., K (V^{kℓ} is n_k × n_ℓ), and an m × N matrix H satisfy the relation
B = V B + H^T A.   (13)
Let
μ̂(V) = max_{1≤k,ℓ≤K} σ_max(V^{kℓ}),   (14)
where σ_max stands for the maximal singular value. Then for all s ≤ K we have:
L_{s,∞}(Bx) ≤ L_∞(H^T Ax) + μ̂(V) L_1(Bx) ∀x.   (15)
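The certificate (13)-(15) is easy to check numerically. The sketch below (Python/NumPy, ours) computes μ̂(V) as the maximum of the blockwise spectral norms; the test constructs a valid pair (H, V) for B = I via V = I − H^T A and verifies inequality (15) on a random signal:

```python
import numpy as np

def mu_hat(V, block_sizes):
    """mu-hat(V) = max over block pairs (k, l) of the largest singular value
    of the n_k x n_l block V^{kl}."""
    offs = np.concatenate([[0], np.cumsum(block_sizes)])
    mx = 0.0
    for k in range(len(block_sizes)):
        for l in range(len(block_sizes)):
            blk = V[offs[k]:offs[k + 1], offs[l]:offs[l + 1]]
            mx = max(mx, np.linalg.norm(blk, 2))  # spectral norm of the block
    return mx
```

Note that for B = I any H yields a feasible V = I − H^T A in (13), so the synthesis problem reduces to making μ̂(I − H^T A) small.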
Suppose that the matrix A, the representation structure B2 , the uncertainty set U, and the parameters
D, are given. Let us assume that the triple H, k ? k = L? (?), and ? can be augmented by an
appropriately chosen block N ? N matrix V to satisfy the system of convex constraints (13),
(14). Our objective now is to synthesize the matrix H = [H k ? Rm?nk ]K
k=1 which satisfies the
relationship (3) with ?as good as possible? value of ?.
Let us compute a bound on the probability of deviations of the variable $\|(H^k)^T D\xi\|_2$. Note that the distribution of $\|(H^k)^T D\xi\|_2^2$ coincides with that of the random variable $\zeta_k = \sum_{i=1}^{n_k} v_i[k]\,\xi_i^2$, where $\xi_1, ..., \xi_{n_k}$ are i.i.d. $N(0,1)$ and $v[k] = [\sigma_1^2[k], ..., \sigma_{n_k}^2[k]]$, $\sigma_i[k]$ being the principal singular values of $(H^k)^T D$. To bound the deviation probabilities for $\zeta_k$ we use the bound of [6] for the deviation of the weighted $\chi^2$:
$$\mathrm{Prob}\left\{\sum_{i=1}^{n_k} v_i[k]\,\xi_i^2 \ge \|v[k]\|_1 + 2\|v[k]\|_2\,\theta\right\} \le 2\exp\left\{-\frac{\theta^2}{4\|v[k]\|_2^2 + 4\theta\|v[k]\|_\infty}\right\}.$$
When substituting $\|v[k]\|_\infty = \sigma_{\max}^2[k]$, $\|v[k]\|_2 \le \sigma_{\max}^2[k]\sqrt{n_k}$, and $\|v[k]\|_1 = \|\sigma[k]\|_2^2$, where $\sigma_{\max}[k]$ is the maximal singular value and $\|\sigma[k]\|_2$ is the Frobenius norm of $(H^k)^T D$, after a simple algebra we come to
$$\mathrm{Prob}\left\{\|(D\xi)^T H[k]\|_2 \ge \|\sigma[k]\|_2 + \sigma_{\max}[k]\sqrt{4\ln(2K\epsilon^{-1}) + 2\sqrt{n_k \ln(2K\epsilon^{-1})}}\right\} \le \frac{\epsilon}{K}.$$
Let $\psi_{\mathcal{U}}(H^k) = \max_{u \in \mathcal{U}} \|u^T H[k]\|_2$. Then the chance constraint
$$\mathrm{Prob}\left\{\xi : \psi_{\mathcal{U}}(H^k) + \|(D\xi)^T H[k]\|_2 \le \beta,\ 1 \le k \le K\right\} \ge 1 - \epsilon$$
is satisfied for
$$\beta \ge \max_k\left[\psi_{\mathcal{U}}(H^k) + \|D^T H[k]\|_F + \sigma_{\max}(D^T H[k])\sqrt{4\ln(2K\epsilon^{-1}) + 2\sqrt{n_k \ln(2K\epsilon^{-1})}}\,\right]$$
(here $\|\cdot\|_F$ stands for the Frobenius norm). In particular, in the case $\mathcal{U} = \{0\}$ (there is no nuisance), the set $\mathcal{G}_s$ of admissible $(\beta, \kappa)$ can be safely approximated by the computationally tractable convex set
$$\widehat{\mathcal{G}}_s = \Big\{(\beta, \kappa) :\ \exists\, H = [H^k \in \mathbb{R}^{m \times n_k}]_{k=1}^K,\ V = [V^{k\ell} \in \mathbb{R}^{n_k \times n_\ell}]_{k,\ell=1}^K:$$
$$B = VB + H^T A,\quad \sigma_{\max}(V^{k\ell}) \le \frac{\kappa}{s},\ 1 \le k, \ell \le K,$$
$$\beta \ge \|D^T H[k]\|_F + \sigma_{\max}(D^T H[k])\sqrt{4\ln(2K\epsilon^{-1}) + 2\sqrt{n_k \ln(2K\epsilon^{-1})}},\ 1 \le k \le K\Big\}.$$
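The reduction of $\|(H^k)^T D\xi\|_2^2$ to a weighted $\chi^2$ used above is a plain consequence of the singular value decomposition, and the identity is deterministic; a small sketch confirms it (the matrix below is an arbitrary stand-in for $(H^k)^T D$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_k = 8, 5
M = rng.standard_normal((n_k, m))        # stands in for (H^k)^T D

# Singular values sigma_i give the chi-square weights v_i = sigma_i^2
U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
v = sigma ** 2

# For any xi, ||M xi||^2 = sum_i v_i * ((Vt xi)_i)^2, and for xi ~ N(0, I_m)
# the coordinates of Vt xi are again i.i.d. N(0, 1).
xi = rng.standard_normal(m)
lhs = float(np.sum((M @ xi) ** 2))
rhs = float(np.sum(v * (Vt @ xi) ** 2))
print(lhs, rhs)

# ||v||_1 equals the squared Frobenius norm of M, as used in the substitution step
print(float(np.sum(v)), float(np.sum(M ** 2)))
```

The two printed pairs agree, which is exactly the representation the tail bound is applied to.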
We have mentioned in the Introduction that, to the best of our knowledge, the only previously proposed verifiable sufficient condition for the validity of block $\ell_1$ recovery is the Mutual Block Incoherence condition [9]. We aim now to demonstrate that this condition is covered by Proposition 3.3.
The Mutual Block Incoherence condition deals with the case where $B = I$ and all block norms are $\|\cdot\|_2$-norms. Let the sensing matrix $A$ in question be partitioned as $A = [A[1], ..., A[K]]$, where $A[k]$, $k = 1, ..., K$, has $n_k$ columns. Let us define the mutual block-incoherence $\mu$ of $A$ w.r.t. the representation structure in question as follows:
$$\mu = \max_{\substack{1 \le k, \ell \le K,\\ k \ne \ell}} \sigma_{\max}\!\left(C_k^{-1} A^T[k] A[\ell]\right), \qquad [C_k := A^T[k] A[k]] \qquad (16)$$
provided that all matrices $C_k$, $1 \le k \le K$, are nonsingular; otherwise $\mu = \infty$. Note that in the case of the standard representation structure, the just defined quantity is nothing but the standard mutual incoherence known from the Compressed Sensing literature.
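Definition (16) is directly computable; a small sketch (the random matrix and the equal block sizes are arbitrary illustrative choices):

```python
import numpy as np

def mutual_block_incoherence(A, block_sizes):
    """mu of (16): max over k != l of sigma_max(C_k^{-1} A[k]^T A[l])."""
    ends = np.cumsum(block_sizes)
    starts = ends - np.asarray(block_sizes)
    blocks = [A[:, s:e] for s, e in zip(starts, ends)]
    C_inv = []
    for Ak in blocks:
        C = Ak.T @ Ak
        if np.linalg.matrix_rank(C) < C.shape[0]:
            return np.inf                  # some C_k singular: mu = infinity
        C_inv.append(np.linalg.inv(C))
    mu = 0.0
    for i, Ak in enumerate(blocks):
        for j, Al in enumerate(blocks):
            if i != j:
                # spectral norm = largest singular value
                mu = max(mu, np.linalg.norm(C_inv[i] @ Ak.T @ Al, 2))
    return mu

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 9)) / np.sqrt(20)
print(mutual_block_incoherence(A, [3, 3, 3]))
```

With a single block there are no off-diagonal pairs and the function returns 0, matching the remark that for the standard representation structure this reduces to the usual mutual incoherence.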
We have the following observation.
Proposition 3.4 Given an $m \times n$ sensing matrix $A$ and a representation structure $\mathcal{B}_2$ with $B = I$, let $A = [A[1], ..., A[K]]$ be the corresponding partition of $A$.
Let $\mu$ be the mutual block-incoherence of $A$ with respect to $\mathcal{B}_2$. Assuming $\mu < \infty$, we set
$$H = \frac{1}{1+\mu}\left[A[1]C_1^{-1}, A[2]C_2^{-1}, ..., A[K]C_K^{-1}\right], \qquad C_k = A^T[k]A[k]. \qquad (17)$$
Then the contrast matrix $H$ along with the matrix $V = I - H^T A$ satisfies condition (13) (where $B = I$) and condition (14) with $\widehat{\sigma}(V) \le \frac{\mu}{1+\mu}$. As a result, applying Proposition 3.3, we conclude that whenever
$$s < \frac{1+\mu}{2\mu}, \qquad (18)$$
the pair $(H, L_\infty(\cdot))$ satisfies $Q_{s,2}(\kappa)$ with $\kappa = \frac{\mu s}{1+\mu} < 1/2$.
Note that Proposition 3.4 essentially covers the results of [9], where the authors prove, under a condition which is marginally stronger than (18), that an appropriate version of block-$\ell_1$ recovery allows one to recover exactly every block-sparse signal from the noiseless observation $y = Ax$.
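The construction (17) is easy to validate numerically: build $H$, set $V = I - H^T A$, and check (13) with $B = I$ together with the bound $\widehat{\sigma}(V) \le \mu/(1+\mu)$. The random instance and sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, K, nk = 20, 3, 3
A = rng.standard_normal((m, K * nk)) / np.sqrt(m)
blocks = [A[:, k * nk:(k + 1) * nk] for k in range(K)]
C_inv = [np.linalg.inv(Ak.T @ Ak) for Ak in blocks]

# mutual block-incoherence (16)
mu = max(np.linalg.norm(C_inv[k] @ blocks[k].T @ blocks[l], 2)
         for k in range(K) for l in range(K) if k != l)

# contrast matrix H of (17), and V = I - H^T A
H = np.hstack([blocks[k] @ C_inv[k] for k in range(K)]) / (1.0 + mu)
V = np.eye(K * nk) - H.T @ A

# (13) with B = I: I = V + H^T A holds by construction
assert np.allclose(np.eye(K * nk), V + H.T @ A)

# sigma-hat(V) of (14): the diagonal blocks equal (mu/(1+mu)) I, and the
# off-diagonal blocks have spectral norm at most mu/(1+mu) by definition of mu
sig_hat = max(np.linalg.norm(V[k * nk:(k + 1) * nk, l * nk:(l + 1) * nk], 2)
              for k in range(K) for l in range(K))
assert sig_hat <= mu / (1.0 + mu) + 1e-10
print(mu, sig_hat)
```

The diagonal blocks of $V$ come out to exactly $\frac{\mu}{1+\mu}I$, so the bound of the proposition is attained.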
References
[1] F. Bach. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res., 9:1179–1225, 2008.
[2] Z. Ben-Haim and Y. C. Eldar. Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements. Technical report, 2010. http://arxiv.org/abs/1009.0906.
[3] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Stat., 37(4):1705–1732, 2008.
[4] P. Bühlmann and S. van de Geer. On the conditions used to prove oracle results for the Lasso. Electron. J. Statist., 3:1360–1392, 2009.
[5] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inf. Theory, 51:4203–4215, 2006.
[6] L. Cavalier, G. K. Golubev, D. Picard, and A. B. Tsybakov. Oracle inequalities for inverse problems. Ann. Statist., 30(3):843–874, 2002.
[7] A. Chambolle. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20(1-2):89–97, 2004.
[8] C. Chesneau and M. Hebiri. Some theoretical results on the grouped variables Lasso. Mathematical Methods of Statistics, 27(4):317–326, 2008.
[9] Y. C. Eldar, P. Kuppinger, and H. Bölcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. on Signal Processing, 58(6):3042–3054, 2010.
[10] Y. C. Eldar and M. Mishali. Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory, 55(11):5302–5316, 2009.
[11] J. Huang and T. Zhang. The benefit of group sparsity. Annals of Stat., 38(4):1978–2004, 2010.
[12] G. M. James, P. Radchenko, and J. Lv. DASSO: connections between the Dantzig selector and Lasso. J. Roy. Statist. Soc. Ser. B, 71(1):127–142, 2009.
[13] A. B. Juditsky, F. Kılınç-Karzan, and A. S. Nemirovski. Verifiable conditions of $\ell_1$ recovery for sparse signals with sign restrictions. Math. Progr., 127(1):89–122, 2010. http://www.optimization-online.org/DB_HTML/2009/03/2272.html.
[14] A. B. Juditsky and A. S. Nemirovski. Accuracy guarantees for $\ell_1$-recovery. Technical report, 2010. http://www.optimization-online.org/DB_HTML/2010/10/2778.html.
[15] A. B. Juditsky and A. S. Nemirovski. On verifiable sufficient conditions for sparse signal recovery via $\ell_1$ minimization. Math. Progr., 127(1):57–88, 2010. Special issue on machine learning.
[16] H. Liu and J. Zhang. Estimation consistency of the group Lasso and its applications. Journal of Machine Learning Research - Proceedings Track, 5:376–383, 2009.
[17] H. Liu, J. Zhang, X. Jiang, and J. Liu. The group Dantzig selector. Journal of Machine Learning Research - Proceedings Track, 9:461–468, 2010.
[18] K. Lounici, M. Pontil, A. Tsybakov, and S. van de Geer. Oracle inequalities and optimal inference under group sparsity. Technical report, 2010. http://arxiv.org/pdf/1007.1771.
[19] Y. Nardi and A. Rinaldo. On the asymptotic properties of the group Lasso estimator for linear models. Electron. J. Statist., 2:605–633, 2008.
[20] G. Obozinski, M. J. Wainwright, and M. I. Jordan. Support union recovery in high-dimensional multivariate regression. Annals of Stat., 39(1):1–47, 2011.
[21] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259–268, 1992.
[22] M. Stojnic, F. Parvaresh, and B. Hassibi. On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. on Signal Processing, 57(8):3075–3085, 2009.
[23] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Roy. Stat. Soc. Ser. B, 68(1):49–67, 2006.
Structured sparse coding via lateral inhibition
Karol Gregor
Janelia Farm, HHMI
19700 Helix Drive
Ashburn, VA, 20147
[email protected]
Arthur Szlam
The City College of New York
Convent Ave and 138th st
New York, NY, 10031
[email protected]
Yann LeCun
New York University
715 Broadway, Floor 12
New York, NY, 10003
[email protected]
Abstract
This work describes a conceptually simple method for structured sparse coding
and dictionary design. Supposing a dictionary with K atoms, we introduce a
structure as a set of penalties or interactions between every pair of atoms. We
describe modifications of standard sparse coding algorithms for inference in this
setting, and describe experiments showing that these algorithms are efficient. We
show that interesting dictionaries can be learned for interactions that encode tree
structures or locally connected structures. Finally, we show that our framework
allows us to learn the values of the interactions from the data, rather than having
them pre-specified.
1
Introduction
Sparse modeling (Olshausen and Field, 1996; Aharon et al., 2006) is one of the most successful
recent signal processing paradigms. A set of $N$ data points $X$ in the Euclidean space $\mathbb{R}^d$ is written as the approximate product of a $d \times k$ dictionary $W$ and $k \times N$ coefficients $Z$, where each column of $Z$ is penalized for having many non-zero entries. In equations, if we take the approximation to $X$ in the least squares sense, and the penalty on the coefficient matrix to be the $l_1$ norm, we wish to find
$$\operatorname*{argmin}_{Z,W} \sum_k \|Wz_k - x_k\|^2 + \lambda\|z_k\|_1. \qquad (1)$$
In (Olshausen and Field, 1996), this model is introduced as a possible explanation of the emergence
of orientation selective cells in the primary visual cortex V1; the matrix representing W corresponds
to neural connections.
It is sometimes appropriate to enforce more structure on Z than just sparsity. For example, we
may wish to enforce a tree structure on Z, so that certain basis elements can be used by any data
point, but others are specific to a few data points; or more generally, a graph structure on Z that
specifies which elements can be used with which others. Various forms of structured sparsity are
explored in (Kavukcuoglu et al., 2009; Jenatton et al., 2010; Kim and Xing, 2010; Jacob et al., 2009;
Baraniuk et al., 2009). From an engineering perspective, structured sparse models allow us to access
or enforce information about the dependencies between codewords, and to control the expressive
power of the model without losing reconstruction accuracy. From a biological perspective, structured
sparsity is interesting because structure and sparsity are present in neocortical representations. For
example, neurons in the same mini-columns of V1 are receptive to similar orientations and activate
together. Similarly neurons within columns in the inferior temporal cortex activate together and
correspond to object parts.
In this paper we introduce a new formulation of structured sparsity. The l1 penalty is replaced with a
set of interactions between the coding units corresponding to intralayer connections in the neocortex.
For every pair of units there is an interaction weight that specifies the cost of simultaneously activating both units. We will describe several experiments with the model. In the first set of experiments
1
Figure 1: Model (3) in the locally connected setting of subsection 3.3. Code units are placed in a two-dimensional grid above the image (here represented in 1-d for clarity). A given unit connects to a
small neighborhood of an input via W and to a small neighborhood of code units via S. The S is
present and positive (inhibitory) if the distance d between units satisfies r1 < d < r2 for some radii.
we set the interactions to reflect a prespecified structure. In one example we create a locally connected network with inhibitory connections in a ring around every unit. Trained with natural images,
this leads to dictionaries with Gabor-like edge elements with similar orientations placed in nearby
locations, leading to the pinwheel patterns analogous to those observed in V1 of higher mammals.
We also place the units on a tree and place inhibitory interactions between different branches of the
tree, resulting in edges of similar orientation being placed in the same branch of the tree, see for
example (Hyvarinen and Hoyer, 2001). In the second set of experiments we learn the values of the
lateral connections instead of setting them, in effect learning the structure. When trained on images
of faces, the system learns to place different facial features at correct locations in the image.
The rest of this paper is organized as follows: in the rest of this section, we will introduce our model,
and describe its relationship between the other structured sparsity mentioned above. In section 2, we
will describe the algorithms we use for optimizing the model. Finally, in section 3, we will display
the results of experiments showing that the algorithms are efficient, and that we can effectively learn
dictionaries with a given structure, and even learn the structure.
1.1
Structured sparse models
We start with a model that creates a representation Z of data points X via W by specifying a set of
disallowed index pairs of $Z$: $U = \{(i_1, j_1), (i_2, j_2), ..., (i_k, j_k)\}$, meaning that representations $Z$ are not allowed if both $Z_i \ne 0$ and $Z_j \ne 0$ for any given pair $(i, j) \in U$. Here we constrain $Z \ge 0$. The inference problem can be formulated as
$$\min_{Z \ge 0} \sum_{j=1}^N \|WZ_j - X_j\|^2, \quad \text{subject to} \quad ZZ^T(i, j) = 0,\ (i, j) \in U.$$
Then the Lagrangian of the energy with respect to $Z$ is
$$\sum_{j=1}^N \|WZ_j - X_j\|^2 + Z_j^T S Z_j, \qquad (2)$$
where $S_{ij}$ are the dual variables to each of the constraints in $U$, and are 0 for the unconstrained pairs.
A local minimum of the constrained problem is a saddle point for (2). At such a point, Sij can be
interpreted as the weight of the inhibitory connection between Wi and Wj necessary to keep them
from simultaneously activating. This observation will be the starting point for this paper.
1.2
Lateral inhibition model
In practice, it is useful to soften the constraints in U to a fixed, prespecified penalty, instead of a
maximization over S as would be suggested by the Lagrangian form. This allows some points to
2
use proscribed activations if they are especially important for the reconstruction. To use units with both positive and negative activations, we take absolute values and obtain
$$\min_{W,Z} \sum_j \|WZ_j - X_j\|^2 + |Z_j|^T S |Z_j|, \qquad \|W_j\| = 1\ \forall j, \qquad (3)$$
where $|Z_j|$ denotes the vector obtained from the vector $Z_j$ by taking the absolute value of each component, and $Z_j$ is the $j$th column of $Z$. $S$ will usually be chosen to be symmetric and have 0 on the diagonal. As before, instead of taking absolute values, we can instead constrain $Z \ge 0$, allowing us to write the penalty as $Z_j^T S Z_j$. Finally, note that we can also allow $S$ to be negative, implementing excitatory interactions between neurons. One then has to prevent the sparsity term from going to minus infinity by limiting the amount of excitation a given element can experience (see the algorithm section for details).
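For concreteness, the per-sample objective of (3) is a few lines of code. The instance below is an arbitrary random illustration; the scale factor `lam` on the penalty is a convenience (setting `lam = 1` recovers (3) exactly):

```python
import numpy as np

def lateral_inhibition_energy(W, S, x, z, lam=1.0):
    """||W z - x||^2 + lam * |z|^T S |z|: the per-sample objective of (3)."""
    r = W @ z - x
    a = np.abs(z)
    return float(r @ r + lam * (a @ S @ a))

rng = np.random.default_rng(0)
d, k = 16, 32
W = rng.standard_normal((d, k))
W /= np.linalg.norm(W, axis=0)             # the constraint ||W_j|| = 1
S = np.abs(rng.standard_normal((k, k)))
S = 0.5 * (S + S.T)                        # symmetric ...
np.fill_diagonal(S, 0.0)                   # ... with zero diagonal
x = rng.standard_normal(d)
z = rng.standard_normal(k)
print(lateral_inhibition_energy(W, S, x, z))
```

Note that with $z = 0$ the energy is exactly $\|x\|^2$, the baseline any useful code must beat.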
The Lagrangian optimization tries to increase the inhibition between a forbidden pair whenever
it activates. If our goal is to learn the interactions, rather than enforce the ones we have chosen,
then it makes sense to do the opposite, and decrease entries of S corresponding to pairs which are
often activated simultaneously. To force a nontrivial solution and encourage S to economize a fixed
amount of inhibitory power, we also propose the model
$$\min_S \min_{W,Z} \sum_j \|WZ_j - X_j\|^2 + |Z_j|^T S |Z_j|, \qquad (4)$$
$$Z \ge 0,\quad \|W_j\| = 1\ \forall j,\quad 0 \le S \le \beta,\quad S = S^T,\ \text{ and } |S_j|_1 = \alpha\ \forall j.$$
Here, $\alpha$ and $\beta$ control the total inhibitory power of the activation of an atom in $W$, and how much the inhibitory power can be concentrated in a few interactions (i.e. the sparsity of the interactions). As above, usually one would also fix $S$ to be 0 on the diagonal.
1.3 Lateral inhibition and weighted $l_1$

Suppose we have fixed $S$ and $W$, and are inferring $z$ from a datapoint $x$. Furthermore, suppose that a subset $I$ of the indices of $z$ do not inhibit each other. Then if $I^c$ is the complement of $I$, for any fixed value of $z_{I^c}$ (here the subscript refers to indices of the column vector $z$), the cost of using $z_I$ is given by
$$\|W_I z_I - x\|^2 + \sum_{i \in I} \lambda_i |z_i|, \quad \text{where } \lambda_i = \sum_{j \in I^c} S_{ij} |z_j|.$$
Thus for $z_{I^c}$ fixed, we get a weighted lasso in $z_I$.

1.4 Relation with previous work
As mentioned above, there is a growing literature on structured dictionary learning and structured
sparse coding. The works in (Baraniuk et al., 2009; Huang et al., 2009) use a greedy approach for
structured sparse coding based on OMP or CoSaMP. These methods are fast when there is an efficient method for searching the allowable additions to the active set of coefficients at each greedy
update, for example if the coefficients are constrained to lie on a tree. These works also have
provable recovery properties when the true coefficients respect the structure, and when the dictionaries satisfy certain incoherence properties. A second popular basic framework is group sparsity
(Kavukcuoglu et al., 2009; Jenatton et al., 2010; Kim and Xing, 2010; Jacob et al., 2009). In these
works the coefficients are arranged into a predetermined set of groups, and the sparsity term penalizes the number of active groups, rather than the number of active elements. This approach has the
advantage that the resulting inference problems are convex, and many of the works can guarantee
convergence of their inference schemes to the minimal energy.
In our framework, the interactions in S can take any values, giving a different kind of flexibility. Although our framework does not have a convex inference, the algorithms we propose experimentally
efficiently find good codes for every S we have tried. Also note that in this setting, recovery theorems with incoherence assumptions are not applicable, because we will learn the dictionaries, and
3
so there is no guarantee that the dictionaries will satisfy such conditions. Finally, a major difference
between the methods presented here and those in the other works is that we can learn the S from the
data simultaneously with dictionary; as far as we know, this is not possible via the above mentioned
works.
The interaction between a set of units of the form $z^T R z + \theta^T z$ was originally used in Hopfield nets (Hopfield, 1982); there the $z$ are binary vectors and the inference is deterministic. Boltzmann machines (Ackley et al., 1985) have a similar term, but the $z$ and the inference are stochastic, e.g. Markov chain Monte Carlo. With $S$ fixed, one can consider our work a special case of real-valued Hopfield nets with $R = W^T W + S$ and $\theta = W^T x$; because of the form of $R$ and $\theta$, fast inference schemes from sparse coding can be used. When we learn $S$, the constraints on $S$ serve the same purpose as the contrastive terms in the updates of a Boltzmann machine.
In (Garrigues and Olshausen, 2008) lateral connections were modeled as the connections of an Ising
model with the Ising units deciding which real valued units (from which input was reconstructed)
were on. The system learned to typically connect similar orientations at a given location. Our
model is related but different - it has no second layer, the lateral connections control real instead
of binary values and the inference and learning is simpler, at the cost of a true generative model.
In (Druckmann and Chklovskii, 2010) the lateral connections were trained so that solutions zt to a
related ODE starting from the inferred code of z = z0 of an input x would map via W to points
close to x. In that work, the lateral connections were trained in response to the dictionary, rather
than simultaneously with it, and did not participate in inference.
In (Garrigues and Olshausen, 2010) the coefficients were given by a Laplacian scale mixture prior,
leading to multiplicative modulation, as in this work. However, in contrast, in our case the sparsity
coefficients are modulated by the units in the same layer, and we learn the modulation, as opposed
to the fixed topology in (Garrigues and Olshausen, 2010).
2
Algorithms
In this section we will describe several algorithms to solve the problems in (3) and (4). The basic
framework will be to alternate between updates to $Z$, $W$, and, if desired, $S$. First we discuss
methods for solving for Z with W and S fixed.
2.1
Inferring Z from W , X, and S.
The Z update is the most time sensitive, in the sense that the other variables are fixed after training, and only Z is inferred at test time. In general, any iterative algorithm that can be used for
the weighted basis pursuit problem can be adapted to our setting; the weights just change at each
iteration. We will describe versions of FISTA (Beck and Teboulle, 2009) and coordinate descent
(Wu and Lange, 2008; Li and Osher, 2009). While we cannot prove that the algorithms converge to
the minimum, in all the applications we have tried, they perform very well.
2.1.1
A FISTA like algorithm
The ISTA (Iterated Shrinkage Thresholding Algorithm) minimizes the energy $\|Wz - x\|^2 + \lambda|z|_1$ by following gradient steps in the first term with a "shrinkage"; this can be thought of as gradient steps where any coordinate which crosses zero is thresholded. In equations:
$$z^{t+1} = \mathrm{sh}_{\lambda/L}\!\left(z^t - \tfrac{1}{L} W^T(Wz^t - x)\right),$$
where $\mathrm{sh}_a(b) = \mathrm{sign}(b)\, h_a(|b|)$ and $h_a(b) = \max(b - a, 0)$. In the case where $z$ is constrained to be nonnegative, sh reduces to $h$. In this paper, $\lambda$ is a vector depending on the current value of $z$, rather than a fixed scalar. After each update, $\lambda$ is updated by $\lambda^{t+1} = S|z^{t+1}|$.
Nesterov's accelerated gradient descent has been found to be effective in the basis pursuit setting, where it is called FISTA (Beck and Teboulle, 2009). In essence one adds to the $z$ update a momentum that approaches one with appropriate speed. Specifically, the update equation on $z$
Algorithm 1 ISTA
function ISTA(X, Z, W, L)
    Require: L > largest eigenvalue of $W^T W$.
    Initialize: Z = 0
    repeat
        $\lambda = S|Z|$
        $Z = \mathrm{sh}_{\lambda/L}(Z - \frac{1}{L} W^T(WZ - X))$
    until change in Z is below a threshold
end function

Algorithm 2 Coordinate Descent
function CoD(X, Z, W, S, $\bar{S}$)
    Require: $\bar{S} = I - W^T W$
    Initialize: Z = 0; $B = W^T X$; $\lambda = 0$
    repeat
        $\bar{Z} = h_\lambda(B)$
        $k = \operatorname{argmax}_i |Z_i - \bar{Z}_i|$
        $B = B + \bar{S}_{\cdot k}(\bar{Z}_k - Z_k)$
        $\lambda = \lambda + S_{\cdot k}(\bar{Z}_k - Z_k)$
        $Z_k = \bar{Z}_k$
    until change in Z is below a threshold
    $Z = h_\lambda(B)$
end function
becomes
$$y^t = \mathrm{sh}_{\lambda/L}\!\left(z^t - \tfrac{1}{L} W^T(Wz^t - x)\right), \qquad z^{t+1} = y^t + r_t\,(y^t - y^{t-1}),$$
$$r_t = \frac{u_t - 1}{u_{t+1}}, \qquad u_{t+1} = \frac{1 + \sqrt{1 + 4u_t^2}}{2}, \qquad u_1 = 1.$$
Although our problem is not convex and we do not have any of the normal guarantees, empirically, the Nesterov acceleration works extremely well.
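A minimal NumPy sketch of the accelerated iteration above; the only change from standard FISTA is that the shrinkage thresholds are recomputed from the inhibition term at every step. The problem sizes and the small scaling of $S$ are arbitrary illustrative choices:

```python
import numpy as np

def sh(a, b):
    """Shrinkage sh_a(b) = sign(b) * max(|b| - a, 0), with a vector of thresholds a."""
    return np.sign(b) * np.maximum(np.abs(b) - a, 0.0)

def fista_lateral(W, S, x, n_iter=200):
    """FISTA for ||W z - x||^2 + |z|^T S |z| with inhibition-dependent thresholds."""
    L = 1.01 * np.linalg.norm(W, 2) ** 2   # a bit above the largest eigenvalue of W^T W
    z = np.zeros(W.shape[1])
    y_prev = np.zeros(W.shape[1])
    u = 1.0
    for _ in range(n_iter):
        lam = S @ np.abs(z)                # lambda^t = S |z^t|
        y = sh(lam / L, z - W.T @ (W @ z - x) / L)
        u_next = (1.0 + np.sqrt(1.0 + 4.0 * u * u)) / 2.0
        z = y + ((u - 1.0) / u_next) * (y - y_prev)
        y_prev, u = y, u_next
    return y_prev

rng = np.random.default_rng(0)
d, k = 16, 32
W = rng.standard_normal((d, k))
W /= np.linalg.norm(W, axis=0)             # unit-norm atoms
S = 0.05 * np.abs(rng.standard_normal((k, k)))
S = 0.5 * (S + S.T)
np.fill_diagonal(S, 0.0)
x = rng.standard_normal(d)

z = fista_lateral(W, S, x)
print(np.linalg.norm(W @ z - x))
```

With a mild penalty and an overcomplete dictionary, the residual drops far below $\|x\|$, illustrating that the varying thresholds do not break the acceleration in practice.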
2.1.2
Coordinate descent
The coordinate descent algorithm iteratively selects a single coordinate $k$ of $z$ and, fixing the other coordinates, does a line search to find the value of $z(k)$ with the lowest energy. The coordinate selection can be done by picking the entry with the largest gradient (Wu and Lange, 2008), or by approximating the value of the energy after the line search (Li and Osher, 2009). Suppose at the $t$th step we have chosen to update the $k$th coordinate of $z^t$. Because $S$ is zero on its main diagonal, the penalty term is not quadratic in $z^{t+1}(k)$, but is simply $\lambda(k)z^{t+1}(k)$, where $\lambda = Sz^t$ (which only depends on the currently fixed coordinates). Thus there is an explicit solution $z^{t+1}(k) = h_\lambda(B(k))$, where $B$ is $z^t - W^T(Wz^t - x)$. Just like in the setting of basis pursuit, this has the nice property that by updating $B$ and $\lambda$, and using a precomputed $W^T W$, each update only requires $O(K)$ operations, where $K$ is the number of atoms in the dictionary; in particular, the dictionary only needs to be multiplied by $x$ once. In fact, when the actual solution is very sparse and the dictionary is large, the cost of all the iterations is often less than the cost of multiplying $W^T x$.
We will use coordinate descent for a bilinear model below; in this case, we alternate updates of the
left coefficients with updates of the right coefficients.
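A runnable sketch of Algorithm 2 (nonnegative codes, so $h$ plays the role of sh; atoms are assumed unit-norm so that the coordinate minimizer is exactly $h_\lambda(B(k))$). The instance, with the input set to a single atom so that a sparse code exists, is an arbitrary illustration:

```python
import numpy as np

def cod_lateral(W, S, x, n_iter=500):
    """Coordinate descent of Algorithm 2 (nonnegative codes, unit-norm atoms)."""
    k = W.shape[1]
    S_bar = np.eye(k) - W.T @ W            # \bar{S} = I - W^T W
    z = np.zeros(k)
    B = W.T @ x                            # tracks z - W^T (W z - x)
    lam = np.zeros(k)                      # tracks S z
    for _ in range(n_iter):
        z_bar = np.maximum(B - lam, 0.0)   # h_lam(B)
        j = int(np.argmax(np.abs(z - z_bar)))
        if abs(z[j] - z_bar[j]) < 1e-10:
            break                          # converged
        delta = z_bar[j] - z[j]
        B += S_bar[:, j] * delta           # rank-one update, O(K) work
        lam += S[:, j] * delta
        z[j] = z_bar[j]
    return np.maximum(B - lam, 0.0)

rng = np.random.default_rng(0)
d, k = 16, 32
W = rng.standard_normal((d, k))
W /= np.linalg.norm(W, axis=0)
S = 0.05 * np.abs(rng.standard_normal((k, k)))
S = 0.5 * (S + S.T)
np.fill_diagonal(S, 0.0)
x = W[:, 0]                                # a single atom, so a sparse code exists

z = cod_lateral(W, S, x)
print(np.linalg.norm(W @ z - x))
```

On this instance the algorithm converges after selecting a single coordinate, matching the claim that very sparse solutions cost less than a full $W^T x$ multiply's worth of iterations.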
2.2
Updating W and S
The updates to W and S can be made after each new z is coded, or can be made in batches, say after
a pass through the data. In the case of per datapoint updates, we can proceed via a gradient descent:
the derivative of all of our models with respect to $W$ for a fixed $x$ and $z$ is $(Wz - x)z^T$. The batch
updates to W can be done as in K-SVD (Aharon et al., 2006).
It is easier to update S in (4) in batch mode, because of the constraints. With W and Z fixed, the
constrained minimization of S is a linear program. We have found that it is useful to average the
current S with the minimum of the linear program in the update.
3
Experiments
In this section we test the models (3,4) in various experimental settings.
3.1
Inference
First we test the speed of convergence and the quality of the resulting state of the ISTA, FISTA, and coordinate descent algorithms. We use the example of section 3.4, where the input consists of image patches and the connections in $S$ define a tree. Figure 2 shows the energy after each iteration
[Figure 2, left panel: energy versus iteration for FISTA, ISTA, and CD (coordinate descent), averaged over all data points.]

  $\lambda$    $\|X - WZ\|^2 + \lambda|Z|^T S|Z|$      $\lambda|Z|^T S|Z|$
               FISTA    ISTA     CoD                   FISTA    ISTA    CoD
  0.8          21.67    21.67    21.79                 1e-9     .01     0
  0.4          21.44    21.43    21.79                 .05      .08     .03
  0.2          21.12    21.12    21.68                 .28      .32     .21
  0.1          20.63    20.67    21.19                 .87      .94     .78
  0.05         19.64    19.67    19.94                 2.01     2.07    2.0
Figure 2: On the left: the energy values after each iteration of the 3 methods, averaged over all the data points. On the right: values of the average energy $\frac{1}{N}\sum_j \|WZ_j - X_j\|^2 + \lambda|Z_j|^T S|Z_j|$ and of the average $S$ sparsity $\frac{1}{N}\sum_j |Z_j|^T S|Z_j|$. The "oracle" best tree-structured output, computed by using an exhaustive search over the projections of each data point onto each branch of the tree, has average energy 20.58 and sparsity 0. $S$, $W$, and $X$ are as in section 3.4.
of the three methods, averaged over all the data points. We can see that coordinate descent very quickly
moves to its resting state (note that each iteration is much cheaper as well, only requiring a few
column operations), but does not on average tend to be quite as good a code as ISTA or FISTA. We
also see that FISTA gets as good a code as ISTA but after far fewer iterations.
To test the absolute quality of the methods, we also measure against the "oracle": the lowest possible energy when none of the constraints are broken, that is, when $|z|^T S|z| = 0$. This energy is obtained by exhaustive search over the projections of each data point onto each branch of the tree. In the table in Figure 2, we give the values of the average energy $\frac{1}{N}\sum_j \|WZ_j - X_j\|^2 + \lambda|Z_j|^T S|Z_j|$ and of the sparsity energy $\frac{1}{N}\sum_j \lambda|Z_j|^T S|Z_j|$ for various values of $\lambda$. Notice that for low values of $\lambda$, the methods presented here give codes with better energy than the best possible code on the tree, because the penalty is small enough to allow deviations from the tree structure; but when the $\lambda$ parameter is increased, the algorithms still compare well against the exhaustive search.
3.2 Scaling
An interesting property of the models (3,4) is their scaling: if the input is re-scaled by a constant
factor, the optimal code is re-scaled by the same factor. Thus the model preserves the scale information, and the input doesn't need to be normalized. This is not the case in the standard l1 sparse coding model (1): for example, if the input becomes small, the optimal code is zero.
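The scaling property can be verified directly. The sketch below (our illustration, with random placeholder W, S, and x) implements cyclic coordinate descent for the energy ||x − W z||² + λ|z|^T S|z| with a symmetric, zero-diagonal S: each coordinate update is a soft-threshold whose threshold is set by the currently active units. Every operation involved is positively homogeneous, so feeding in c·x produces (up to floating-point rounding) exactly c times the code obtained for x.

```python
import numpy as np

def soft(v, t):
    """Scalar soft-thresholding."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def coord_descent(W, x, S, lam, n_sweeps=50):
    """Cyclic coordinate descent for ||x - W z||^2 + lam * |z|^T S |z|,
    with S symmetric, nonnegative, and zero on the diagonal.  The update
    for unit k is a soft-threshold with threshold lam * (S |z|)_k."""
    n = W.shape[1]
    z = np.zeros(n)
    col_sq = np.sum(W * W, axis=0)
    for _ in range(n_sweeps):
        for k in range(n):
            r = x - W @ z + W[:, k] * z[k]        # residual excluding unit k
            thresh = lam * (S[k] @ np.abs(z))     # inhibition from active units
            z[k] = soft(W[:, k] @ r, thresh) / col_sq[k]
    return z

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 12))
S = rng.random((12, 12))
S = 0.5 * (S + S.T)
np.fill_diagonal(S, 0.0)
x = rng.standard_normal(8)

c = 2.5
z1 = coord_descent(W, x, S, lam=0.3)
z2 = coord_descent(W, c * x, S, lam=0.3)      # the same problem, re-scaled input
```

Because both the reconstruction term and the penalty are quadratic in z, substituting z = c·u turns the energy for input c·x into c² times the energy for input x, so the minimizer re-scales by c; the iterates of the algorithm do the same.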
In this subsection we train the model (3) on image patches. In the first part of the experiment we
preprocess each image patch by subtracting its mean and set the elements of S to be all equal and
positive except for zeros on the diagonal. In the second part of the experiment we use the original
image patches without any preprocessing. However, since the mean is the strongest component, we
introduce the first example of structure: We select one of the components of z and disconnect it from
all the other components. The resulting S is equal to a positive constant everywhere except on the
diagonal, the first row, and the first column, where it is zero. After training in this setting we obtain the usual edge detectors (see Figure 3a), except for the first component, which learns the mean. In the first setting the result is simply a set of edge detectors. Experimentally, explicitly removing the mean before training is better, as the training converges a lot more quickly.
3.3 Locally connected net
In this section we impose a structure motivated by the connectivity of cortical layer V1. The cortical
layer has a two dimensional structure (with depth) with locations corresponding to the locations in
the input image. The sublayer 4 contains simple cells with edge like receptive fields. Each such
cell receives input from a small neighborhood of the input image at its corresponding location. We
model this by placing units in a two dimensional grid above the image and connecting each unit to
a small neighborhood of the input (Figure 1). We also bind connection weights for units that are
far enough from each other to reduce the number of parameters without affecting the local structure
(Gregor and LeCun, 2010). Next we connect each unit by inhibitory interactions (the S matrix) to
units in its ring-shaped neighborhood: there is a connection between two units if their distance d
Figure 3: (a) Filters learned on the original unprocessed image patches. The S matrix was fully connected except for the unit corresponding to the upper left corner, which was not connected to any other unit and learned the mean. The other units typically learned edge detectors. (b) Filters learned in the tree structure. Here S_ij = 0 if one of i and j is a descendant of the other, and S_ij = S0 · d(i, j) otherwise, where d(i, j) is the distance between the units in the tree. The filters in a given branch are of a similar orientation and get refined as we walk down the tree.
Figure 4: (a-b) Filters learned on images in the locally connected framework with the local inhibition shown in Figure 1. The local inhibition matrix has positive value S_ij = S0 > 0 if the distance between code units z_i and z_j satisfies r1 < d(i, j) < r2, and S_ij = 0 otherwise. The input size was 40 × 40 pixels and the receptive field size was 10 × 10 pixels. The net learned to place filters of similar orientations close together. (a) Images were preprocessed by subtracting the local mean and dividing by the standard deviation, each of width 1.65 pixels. The resulting filters are sharp edge detectors and can therefore be naturally imbedded in two dimensions. (b) Only the local mean, of width 5 pixels, was subtracted. This results in a larger range of frequencies that is harder to imbed in two dimensions. (c-d) Filters trained on 10 × 10 image patches with the mean subtracted and then normalized. (c) The inhibition matrix was the same as in (a-b). (d) This time there was an l1 penalty on each code unit and the lateral interaction matrix S was excitatory: S_ij < 0 if d(i, j) < r2 and zero otherwise.
satisfies r1 < d < r2 for some radii r1 and r2 (alternatively, we can put r1 = 0 and create excitatory interactions in a smaller neighborhood). With this arrangement, units that turn on simultaneously are typically either close to each other (within r1) or far from each other (more distant than r2).
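The ring-shaped inhibition just described is easy to construct explicitly. The sketch below is our own illustration (grid size, radii, and strength S0 are arbitrary choices): code units sit on a 2-D grid, and S_ij = S0 exactly when the Euclidean grid distance between units i and j lies strictly between r1 and r2.

```python
import numpy as np

def ring_inhibition(side, r1, r2, s0):
    """Build S for units on a side x side grid: S_ij = s0 when the grid
    distance of units i and j lies in (r1, r2), and S_ij = 0 otherwise."""
    ii, jj = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    pos = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    # pairwise Euclidean distances between all grid positions
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    return np.where((d > r1) & (d < r2), s0, 0.0)

S = ring_inhibition(side=6, r1=1.5, r2=3.0, s0=0.2)
```

By construction S is symmetric with a zero diagonal; immediate neighbors (distance 1 ≤ r1) are not inhibited, while units at intermediate distances are.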
Training on image patches results in the filters shown in Figure 4. We see that filters with similar orientations are placed together, as is observed in V1 (and in other experiments on group sparsity, for example (Hyvarinen and Hoyer, 2001)). Here we obtain these patterns by the presence of inhibitory connections.
3.4 Tree structure
In this experiment we place the units z on a tree and desire that the units that are on for a given
input lie on a single branch of the tree. We define Sij = 0 if i is descendant of j or vice versa and
Sij = S0 · d(i, j) otherwise, where S0 > 0 is a constant and d(i, j) is the distance between the nodes i and j (the number of links it takes to get from one to the other).
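The tree-structured S can be built directly from a parent array. The sketch below is illustrative (the small binary tree is our own example): S_ij = 0 whenever i and j lie on a common branch, i.e., one is an ancestor of the other, and S_ij = S0 · d(i, j) otherwise, with d the number of links between the nodes.

```python
import numpy as np

def tree_S(parent, s0):
    """Inhibition matrix on a tree given parent indices (root has parent -1):
    S_ij = 0 if i and j are on the same branch, else s0 * d(i, j)."""
    n = len(parent)

    def chain(i):
        # the path from node i up to the root, starting with i itself
        path = [i]
        while parent[i] >= 0:
            i = parent[i]
            path.append(i)
        return path

    chains = [chain(i) for i in range(n)]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j or i in chains[j] or j in chains[i]:
                continue                       # same branch: no inhibition
            # first common node on the two root-paths = lowest common ancestor
            common = next(a for a in chains[i] if a in chains[j])
            d = chains[i].index(common) + chains[j].index(common)
            S[i, j] = s0 * d
    return S

# a small binary tree: node 0 is the root, 1 and 2 its children, 3-6 the leaves
parent = [-1, 0, 0, 1, 1, 2, 2]
S = tree_S(parent, s0=0.5)
```

Units on a single branch (e.g., a leaf and its ancestors) can be active together at no cost, while units on different branches pay a penalty that grows with their tree distance.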
We trained (3) on image patches. The model learns to place low-frequency filters close to the root of the tree, and as we go down the branches the filters "refine" their parents (Figure 3b).
3.5 A convolutional image parts model
Figure 5: On the left: the dictionary of 16 × 16 filters learned by the convolutional model on faces. On the right: some low energy configurations, generated randomly as in Section 3.5. Each active filter has response 1.
We give an example of learning S in a convolutional setting. We use the centered faces from the faces in the wild dataset, available at http://vis-www.cs.umass.edu/lfw/. From each of the 13233 images we subsample by a factor of two and pick a random 48 × 48 patch. The 48 × 48 image x is then contrast normalized to x − b ∗ x, where b is a 5 × 5 averaging box filter; the images are collected into the 48 × 48 × 13233 data set X.
We then train a model minimizing the energy

    Σ_i || Σ_{j=1}^{20} W_j ∗ z_ji − X_i ||² + p(z)^T S p(z),
    subject to  α ≥ S ≥ 0,  S = S^T,  |S_j|_1 = β.

Here the code vector z is written as a 48 × 48 × 20 feature map. The pooling operator p takes the average of the absolute value of each 8 × 8 patch on each of the 20 maps, and outputs a vector of size 6 × 6 × 20 = 720. α is set to 72, and β to .105. Note that these two numbers roughly specify the number of zeros in the solution of the S problem to be 1600.
The energy is minimized via the batch procedure. The updates for Z are done via coordinate descent
(coordinate descent in the convolutional setting works exactly as before), the updates for W via least
squares, and at each update, S is averaged with .05 of the solution to the linear program in S with
fixed Z and W . W is initialized via random patches from X, and S is initialized as the all ones
matrix, with zeros on the diagonal. In Figure 5 the dictionary W is displayed.
To visualize the S which is learned, we will try to use it to generate new images. Without any data
to reconstruct the model will collapse to zero, so we will constrain z to have a fixed number of unit
entries, and run a few steps of a greedy search to decide which entries should be on. That is: we
initialize z to have 5 random entries set to one, and the rest zero. At each step, we pick one of the
nonzero entries, set it to zero, and find the new entry of z which is cheapest to set to one, namely,
the minimum of the entries in Sp(z) which are not currently turned on. We repeat this until the configuration is stable. Some results are displayed in Figure 5.
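The greedy generation procedure can be written down compactly. The toy version below is our simplification: it drops the convolutional pooling and works with a plain binary code z and configuration cost z^T S z. At each step one active unit is removed and the cheapest unit under S @ z is switched back on (possibly the same one), so the cost never increases; the random S here is a placeholder for the learned inhibition matrix.

```python
import numpy as np

def greedy_config(S, z, n_passes=20):
    """Greedy search for a low-energy active set under inhibition S:
    repeatedly drop one active unit and re-add the unit that is cheapest
    under S @ z.  The cost z @ S @ z is non-increasing at every swap."""
    z = z.copy()
    for _ in range(n_passes):
        z_prev = z.copy()
        for i in np.flatnonzero(z):
            z[i] = 0.0
            cost = S @ z
            cost[z > 0] = np.inf          # only consider units that are off
            z[np.argmin(cost)] = 1.0
        if np.array_equal(z, z_prev):
            break                         # configuration is stable
    return z

rng = np.random.default_rng(3)
S = rng.random((30, 30))
S = 0.5 * (S + S.T)
np.fill_diagonal(S, 0.0)

z0 = np.zeros(30)
z0[rng.choice(30, size=5, replace=False)] = 1.0   # 5 random active units
z = greedy_config(S, z0)
e0, e = float(z0 @ S @ z0), float(z @ S @ z)
```

Since the removed unit itself is always a candidate for re-insertion, each swap can only lower (or keep) the cost, so the search settles into a stable low-energy configuration.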
The interesting thing about this experiment is the fact that no filter is ever allowed to see global information, except through S. However, even though W is blind to anything larger than a 16 × 16 patch, through the inhibition of S the model is able to learn the placement of facial structures and
long edges.
References
Ackley, D., Hinton, G., and Sejnowski, T. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147-169.
Aharon, M., Elad, M., and Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322.
Baraniuk, R. G., Cevher, V., Duarte, M. F., and Hegde, C. (2009). Model-based compressive sensing.
Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring. ICASSP'09, pages 693-696.
Druckmann, S. and Chklovskii, D. (2010). Over-complete representations on recurrent neural networks can support persistent percepts.
Garrigues, P. and Olshausen, B. (2008). Learning horizontal connections in a sparse coding model of natural images. Advances in Neural Information Processing Systems, 20:505-512.
Garrigues, P. and Olshausen, B. (2010). Group sparse coding with a Laplacian scale mixture prior. In Lafferty, J., Williams, C. K. I., Shawe-Taylor, J., Zemel, R., and Culotta, A., editors, Advances in Neural Information Processing Systems 23, pages 676-684.
Gregor, K. and LeCun, Y. (2010). Emergence of complex-like cells in a temporal product network with local receptive fields. arXiv preprint arXiv:1006.0448.
Hopfield, J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 79(8):2554.
Huang, J., Zhang, T., and Metaxas, D. N. (2009). Learning with structured sparsity. In ICML, page 53.
Hyvarinen, A. and Hoyer, P. (2001). A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413-2423.
Jacob, L., Obozinski, G., and Vert, J.-P. (2009). Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 433-440, New York, NY, USA. ACM.
Jenatton, R., Mairal, J., Obozinski, G., and Bach, F. (2010). Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning (ICML).
Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. (2009). Learning invariant features through topographic filter maps. In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR'09). IEEE.
Kim, S. and Xing, E. P. (2010). Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, pages 543-550.
Li, Y. and Osher, S. (2009). Coordinate descent optimization for l1 minimization with application to compressed sensing; a greedy algorithm. Inverse Problems and Imaging, 3(3):487-503.
Olshausen, B. and Field, D. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609.
Wu, T. T. and Lange, K. (2008). Coordinate descent algorithms for lasso penalized regression. Annals of Applied Statistics, 2:224.
Regularized Laplacian Estimation and
Fast Eigenvector Approximation
Patrick O. Perry
Information, Operations, and Management Sciences
NYU Stern School of Business
New York, NY 10012
[email protected]
Michael W. Mahoney
Department of Mathematics
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of
a data graph Laplacian exactly solve certain regularized Semi-Definite Programs
(SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to
the manner in which l2-regularized or l1-regularized l2-regression (often called
Ridge regression and Lasso regression, respectively) can be interpreted in terms
of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the
regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this
regularized estimation problem can be computed very quickly by running, e.g.,
the fast diffusion-based PageRank procedure for computing an approximation to
the first nontrivial eigenvector of the graph Laplacian. Empirical results are also
provided to illustrate the manner in which approximate eigenvector computation
implicitly performs statistical regularization, relative to running the corresponding
exact algorithm.
1 Introduction
Approximation algorithms and heuristic approximations are commonly used to speed up the running time of algorithms in machine learning and data analysis. In some cases, the outputs of these
approximate procedures are "better" than the output of the more expensive exact algorithms, in the sense that they lead to more robust results or more useful results for the downstream practitioner. Recently, Mahoney and Orecchia formalized these ideas in the context of computing the first nontrivial eigenvector of a graph Laplacian [1]. Recall that, given a graph G on n nodes or equivalently its n × n Laplacian matrix L, the top nontrivial eigenvector of the Laplacian exactly optimizes the Rayleigh quotient, subject to the usual constraints. This optimization problem can equivalently be expressed as a vector optimization program with the objective function f(x) = x^T L x, where x is an n-dimensional vector, or as a Semi-Definite Program (SDP) with objective function F(X) = Tr(LX), where X is an n × n symmetric positive semi-definite matrix. This first nontrivial vector is, of course, of widespread interest in applications due to its usefulness for graph
partitioning, image segmentation, data clustering, semi-supervised learning, etc. [2, 3, 4, 5, 6, 7].
In this context, Mahoney and Orecchia asked the question: do the popular diffusion-based procedures used to compute a quick approximation to the first nontrivial eigenvector of L (such as running the Heat Kernel, performing a Lazy Random Walk, or computing the PageRank function) exactly solve some other, regularized version of the Rayleigh quotient objective function? Understanding this
algorithmic-statistical tradeoff is clearly of interest if one is interested in very large-scale applications, where performing statistical analysis to derive an objective and then calling a black box solver
to optimize that objective exactly might be too expensive. Mahoney and Orecchia answered the
above question in the affirmative, with the interesting twist that the regularization is on the SDP
formulation rather than the usual vector optimization problem. That is, these three diffusion-based procedures exactly optimize a regularized SDP with objective function F(X) + (1/η) G(X), for some regularization function G(·) to be described below, subject to the usual constraints.
In this paper, we extend the Mahoney-Orecchia result by providing a statistical interpretation of
their approximation procedure. Our interpretation will be analogous to the manner in which l2-regularized or l1-regularized l2-regression (often called Ridge regression and Lasso regression,
respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on
the coefficient vector of the regression problem. In more detail, we will set up a sampling model,
whereby the graph Laplacian is interpreted as an observation from a random process; we will posit
the existence of a "population Laplacian" driving the random process; and we will then define an
estimation problem: find the inverse of the population Laplacian. We will show that the maximum a
posteriori probability (MAP) estimate of the inverse of the population Laplacian leads to a regularized SDP, where the objective function F (X) = Tr(LX) and where the role of the penalty function
G(·) is to encode prior assumptions about the population Laplacian. In addition, we will show that when G(·) is the log-determinant function, then the MAP estimate leads to the Mahoney-Orecchia
regularized SDP corresponding to running the PageRank heuristic. Said another way, the solutions
to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Moreover, by Mahoney and Orecchia's main result, the solution to this regularized SDP can be computed very quickly: rather than solving the SDP with a black-box solver or explicitly computing the pseudoinverse of the Laplacian, one can simply run
the fast diffusion-based PageRank heuristic for computing an approximation to the first nontrivial
eigenvector of the Laplacian L.
The next section describes some background. Section 3 then describes a statistical framework for
graph estimation; and Section 4 describes prior assumptions that can be made on the population
Laplacian. These two sections will shed light on the computational implications associated with
these prior assumptions; but more importantly they will shed light on the implicit prior assumptions
associated with making certain decisions to speed up computations. Then, Section 5 will provide
an empirical evaluation, and Section 6 will provide a brief conclusion. Additional discussion is
available in the Appendix of the technical report version of this paper [8].
2 Background on Laplacians and diffusion-based procedures
A weighted symmetric graph G is defined by a vertex set V = {1, . . . , n}, an edge set E ⊆ V × V, and a weight function w : E → R+, where w is assumed to be symmetric (i.e., w(u, v) = w(v, u)). In this case, one can construct a matrix, L0 ∈ R^{V × V}, called the combinatorial Laplacian of G:

    L0(u, v) = −w(u, v)           when u ≠ v,
    L0(u, u) = d(u) − w(u, u)     otherwise,

where d(u) = Σ_v w(u, v) is called the degree of u. By construction, L0 is positive semidefinite. Note that the all-ones vector, often denoted 1, is an eigenvector of L0 with eigenvalue zero, i.e., L0 1 = 0. For this reason, 1 is often called the trivial eigenvector of L0. Letting D be a diagonal matrix with D(u, u) = d(u), one can also define a normalized version of the Laplacian: L = D^{−1/2} L0 D^{−1/2}. Unless explicitly stated otherwise, when we refer to the Laplacian of a graph, we will mean the normalized Laplacian.
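As a concrete check of these definitions, the sketch below (our illustration; the example weight matrix is arbitrary) builds L0 = D − W and L = D^{−1/2} L0 D^{−1/2} for a small weighted graph and verifies the stated facts: L0 1 = 0, L0 is positive semidefinite, and D^{1/2} 1 lies in the null space of the normalized Laplacian.

```python
import numpy as np

def laplacians(Wt):
    """Combinatorial and normalized Laplacians of a weighted graph.
    Wt is the symmetric weight matrix: Wt[u, v] = w(u, v)."""
    d = Wt.sum(axis=1)                 # degrees d(u) = sum_v w(u, v)
    L0 = np.diag(d) - Wt               # combinatorial Laplacian
    Dmh = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    L = Dmh @ L0 @ Dmh                 # normalized Laplacian
    return L0, L

# a weighted 4-cycle (connected, so the zero eigenvalue is simple)
Wt = np.array([[0, 1, 0, 2],
               [1, 0, 3, 0],
               [0, 3, 0, 1],
               [2, 0, 1, 0]], dtype=float)
L0, L = laplacians(Wt)
```

Note that the trivial eigenvector of L (rather than of L0) is D^{1/2} 1, which is why the constraint X D^{1/2} 1 = 0 appears in the SDPs below.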
In many situations, e.g., to perform spectral graph partitioning, one is interested in computing the first nontrivial eigenvector of a Laplacian. Typically, this vector is computed "exactly" by calling a black-box solver; but it could also be approximated with an iteration-based method (such as the Power Method or Lanczos Method) or by running a random walk-based or diffusion-based method to the asymptotic state. These random walk-based or diffusion-based methods assign positive and negative "charge" to the nodes, and then they let the distribution of charge evolve according to dynamics derived from the graph structure. Three canonical evolution dynamics are the following:
Heat Kernel. Here, the charge evolves according to the heat equation ∂H_t/∂t = −L H_t. Thus, the vector of charges evolves as H_t = exp(−tL) = Σ_{k=0}^∞ ((−t)^k / k!) L^k, where t ≥ 0 is a time parameter, times an input seed distribution vector.
PageRank. Here, the charge at a node evolves by either moving to a neighbor of the current node or teleporting to a random node. More formally, the vector of charges evolves as

    R_α = α (I − (1 − α) M)^{−1},    (1)

where M is the natural random walk transition matrix associated with the graph and where α ∈ (0, 1) is the so-called teleportation parameter, times an input seed vector.
Lazy Random Walk. Here, the charge either stays at the current node or moves to a neighbor. Thus, if M is the natural random walk transition matrix associated with the graph, then the vector of charges evolves as some power of W_α = αI + (1 − α)M, where α ∈ (0, 1) represents the "holding probability," times an input seed vector.
In each of these cases, there is a parameter (t, α, and the number of steps of the Lazy Random Walk) that controls the "aggressiveness" of the dynamics and thus how quickly the diffusive process equilibrates; and there is an input "seed" distribution vector. Thus, e.g., if one is interested in global spectral graph partitioning, then this seed vector could be a vector with entries drawn from {−1, +1} uniformly at random, while if one is interested in local spectral graph partitioning [9, 10, 11, 12], then this vector could be the indicator vector of a small "seed set" of nodes. See Appendix A of [8] for a brief discussion of local and global spectral partitioning in this context.
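The PageRank dynamics of Eqn. (1) can be sanity-checked numerically: the closed form R_α applied to a seed equals the fixed point of the diffusion r ← α·seed + (1 − α)·M r. The sketch below uses a small illustrative graph and takes M to be the column-stochastic random-walk matrix (so that M acts on column vectors of charge); these conventions are ours, chosen for the sketch.

```python
import numpy as np

def pagerank_closed_form(M, alpha, seed):
    """R_alpha seed, with R_alpha = alpha * (I - (1 - alpha) M)^{-1}, cf. Eqn. (1)."""
    n = M.shape[0]
    return alpha * np.linalg.solve(np.eye(n) - (1.0 - alpha) * M, seed)

def pagerank_diffusion(M, alpha, seed, n_iter=300):
    """The equivalent charge dynamics: r <- alpha * seed + (1 - alpha) * M r."""
    r = seed.copy()
    for _ in range(n_iter):
        r = alpha * seed + (1.0 - alpha) * (M @ r)
    return r

# a small undirected graph on 4 nodes; M is the column-stochastic
# random-walk matrix, so M @ r redistributes charge to neighbors
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = A / A.sum(axis=0)
seed = np.array([1.0, 0.0, 0.0, 0.0])
r_exact = pagerank_closed_form(M, alpha=0.15, seed=seed)
r_iter = pagerank_diffusion(M, alpha=0.15, seed=seed)
```

The iteration contracts at rate (1 − α), so a few hundred steps suffice; the resulting vector is a probability distribution concentrated near the seed node.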
Mahoney and Orecchia showed that these three dynamics arise as solutions to SDPs of the form

    minimize_X   Tr(LX) + (1/η) G(X)
    subject to   X ⪰ 0,
                 Tr(X) = 1,                    (2)
                 X D^{1/2} 1 = 0,
where G is a penalty function (shown to be the generalized entropy, the log-determinant, and a certain matrix-p-norm, respectively [1]) and where η is a parameter related to the aggressiveness of the diffusive process [1]. Conversely, solutions to the regularized SDP of (2) for appropriate values of η can be computed exactly by running one of the above three diffusion-based procedures. Notably, when G = 0, the solution to the SDP of (2) is uu^T, where u is the smallest nontrivial eigenvector of L. More generally and in this precise sense, the Heat Kernel, PageRank, and Lazy Random Walk dynamics can be seen as "regularized" versions of spectral clustering and Laplacian eigenvector computation. Intuitively, the function G(·) is acting as a penalty function, in a manner analogous to the l2 or l1 penalty in Ridge regression or Lasso regression, and by running one of these three dynamics one is implicitly making assumptions about the form of G(·). In this paper, we provide a statistical framework to make that intuition precise.
3 A statistical framework for regularized graph estimation
Here, we will lay out a simple Bayesian framework for estimating a graph Laplacian. Importantly,
this framework will allow for regularization by incorporating prior information.
3.1 Analogy with regularized linear regression
It will be helpful to keep in mind the Bayesian interpretation of regularized linear regression. In that context, we observe n predictor-response pairs in R^p × R, denoted (x_1, y_1), . . . , (x_n, y_n); the goal is to find a vector β such that β^T x_i ≈ y_i. Typically, we choose β by minimizing the residual sum of squares, i.e., F(β) = RSS(β) = Σ_i ||y_i − β^T x_i||², or a penalized version of it. For Ridge regression, we minimize F(β) + λ||β||²_2; while for Lasso regression, we minimize F(β) + λ||β||_1.
The additional terms in the optimization criteria (i.e., λ||β||²_2 and λ||β||_1) are called penalty functions; and adding a penalty function to the optimization criterion can often be interpreted as incorporating prior information about β. For example, we can model y_1, . . . , y_n as independent random observations with distributions dependent on β. Specifically, we can suppose y_i is a Gaussian random variable with mean β^T x_i and known variance σ². This induces a conditional density for the
vector y = (y_1, . . . , y_n):

    p(y | β) ∝ exp{−(1/(2σ²)) F(β)},    (3)

where the constant of proportionality depends only on y and σ. Next, we can assume that β itself is random, drawn from a distribution with density p(β). This distribution is called a prior, since it encodes prior knowledge about β. Without loss of generality, the prior density can be assumed to take the form

    p(β) ∝ exp{−U(β)}.    (4)
Since the two random variables are dependent, upon observing y, we have information about β. This information is encoded in the posterior density, p(β | y), computed via Bayes' rule as

    p(β | y) ∝ p(y | β) p(β) ∝ exp{−(1/(2σ²)) F(β) − U(β)}.    (5)

The MAP estimate of β is the value that maximizes p(β | y); equivalently, it is the value of β that minimizes −log p(β | y). In this framework, we can recover the solution to Ridge regression or Lasso regression by setting U(β) = (λ/(2σ²)) ||β||²_2 or U(β) = (λ/(2σ²)) ||β||_1, respectively. Thus, Ridge regression can be interpreted as imposing a Gaussian prior on β, and Lasso regression can be interpreted as imposing a double-exponential prior on β.
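As a small illustration of this penalty-as-prior view, the MAP estimate under the Gaussian prior is exactly the Ridge solution (X^T X + λI)^{−1} X^T y, which can also be obtained by ordinary least squares on an augmented design matrix. The sketch below uses synthetic data of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, lam = 50, 5, 2.0
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# MAP estimate under the Gaussian prior U(beta) = (lam / (2 sigma^2)) ||beta||^2
# is Ridge regression, available in closed form:
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# the same estimate via plain least squares on an augmented system:
# append sqrt(lam) * I rows to X and zeros to y
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

# for comparison, the unpenalized least-squares fit:
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The two Ridge computations agree, and the prior shrinks the estimate toward zero relative to ordinary least squares, exactly as a Gaussian prior centered at the origin should.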
3.2 Bayesian inference for the population Laplacian
For our problem, suppose that we have a connected graph with n nodes; or, equivalently, that we have L, the normalized Laplacian of that graph. We will view this observed graph Laplacian, L, as a "sample" Laplacian, i.e., as a random object whose distribution depends on a true "population" Laplacian, here denoted ℒ. As with the linear regression example, this induces a conditional density for L, to be denoted p(L | ℒ). Next, we can assume prior information about the population Laplacian in the form of a prior density, p(ℒ); and, given the observed Laplacian, we can estimate the population Laplacian by maximizing its posterior density, p(ℒ | L).
Thus, to apply the Bayesian formalism, we need to specify the conditional density of $L$ given $\mathcal{L}$. In the context of linear regression, we assumed that the observations followed a Gaussian distribution. A graph Laplacian is not just a single observation: it is a positive semidefinite matrix with a very specific structure. Thus, we will take $L$ to be a random object with expectation $\mathcal{L}$, where $\mathcal{L}$ is another normalized graph Laplacian. Although, in general, $L$ can be distinct from $\mathcal{L}$, we will require that the nodes in the population and sample graphs have the same degrees. That is, if $d = (d(1), \ldots, d(n))$ denotes the "degree vector" of the graph, and $D = \mathrm{diag}(d(1), \ldots, d(n))$, then we can define
$$\mathcal{X} = \{X : X \succeq 0,\; X D^{1/2} \mathbf{1} = 0,\; \mathrm{rank}(X) = n-1\}, \qquad (6)$$
in which case the population Laplacian and the sample Laplacian will both be members of $\mathcal{X}$. To model $L$, we will choose a distribution for positive semi-definite matrices analogous to the Gaussian distribution: a scaled Wishart matrix with expectation $\mathcal{L}$. Note that, although it captures the trait that $L$ is positive semi-definite, this distribution does not accurately model every feature of $L$. For example, a scaled Wishart matrix does not necessarily have ones along its diagonal. However, the mode of the density is at $\mathcal{L}$, a Laplacian; and for large values of the scale parameter, most of the mass will be on matrices close to $\mathcal{L}$. Appendix B of [8] provides a more detailed heuristic justification for the use of the Wishart distribution.
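The two defining constraints in Eqn. (6) are easy to verify numerically. As a quick illustrative check (ours, with made-up data): for a connected graph, the normalized Laplacian $L = I - D^{-1/2} A D^{-1/2}$ is positive semidefinite, has $D^{1/2}\mathbf{1}$ in its null space, and has rank $n-1$.

```python
import numpy as np

# adjacency matrix of a small connected graph (a 4-cycle plus one chord)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)                      # degree vector
D_isqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - D_isqrt @ A @ D_isqrt  # normalized Laplacian

# L D^{1/2} 1 = 0: the vector sqrt(d) spans the null space
null_vec = np.sqrt(d)
assert np.allclose(L @ null_vec, 0.0)

eigvals = np.linalg.eigvalsh(L)
assert eigvals.min() > -1e-10          # positive semidefinite
assert np.sum(eigvals > 1e-10) == 3    # rank n - 1 for a connected graph
```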
To be more precise, let $m \ge n-1$ be a scale parameter, and suppose that $L$ is distributed over $\mathcal{X}$ as a $\tfrac{1}{m}\,\mathrm{Wishart}(\mathcal{L}, m)$ random variable. Then, $E[L \mid \mathcal{L}] = \mathcal{L}$, and $L$ has conditional density
$$p(L \mid \mathcal{L}) \propto \frac{\exp\{-\tfrac{m}{2}\, \mathrm{Tr}(L \mathcal{L}^+)\}}{|\mathcal{L}|^{m/2}}, \qquad (7)$$
where $|\cdot|$ denotes pseudodeterminant (product of nonzero eigenvalues). The constant of proportionality depends only on $L$, $d$, $m$, and $n$; and we emphasize that the density is supported on $\mathcal{X}$. Eqn. (7) is analogous to Eqn. (3) in the linear regression context, with $1/m$, the inverse of the sample size parameter, playing the role of the variance parameter $\sigma^2$. Next, suppose we know that $\mathcal{L}$ is a random object drawn from a prior density $p(\mathcal{L})$. Without loss of generality,
$$p(\mathcal{L}) \propto \exp\{-U(\mathcal{L})\}, \qquad (8)$$
for some function $U$, supported on a subset $\bar{\mathcal{X}} \subseteq \mathcal{X}$. Eqn. (8) is analogous to Eqn. (4) from the linear regression example. Upon observing $L$, the posterior distribution for $\mathcal{L}$ is
$$p(\mathcal{L} \mid L) \propto p(L \mid \mathcal{L})\, p(\mathcal{L}) \propto \exp\{-\tfrac{m}{2}\, \mathrm{Tr}(L \mathcal{L}^+) + \tfrac{m}{2} \log |\mathcal{L}^+| - U(\mathcal{L})\}, \qquad (9)$$
with support determined by $\bar{\mathcal{X}}$. Eqn. (9) is analogous to Eqn. (5) from the linear regression example.
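The scaled-Wishart sampling model can be simulated to make the statement $E[L \mid \mathcal{L}] = \mathcal{L}$ concrete: draw $L$ as an average of $m$ outer products of Gaussian vectors with (singular) covariance $\mathcal{L}$, and check that the empirical mean approaches $\mathcal{L}$. This is an illustrative sketch of ours; the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# population Laplacian: normalized Laplacian of a small connected graph
n = 4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))
calL = np.eye(n) - D_isqrt @ A @ D_isqrt

# factor calL = B B' via its eigendecomposition (calL is singular, rank n-1)
w, V = np.linalg.eigh(calL)
B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

def sample_scaled_wishart(m):
    """Draw L ~ (1/m) Wishart(calL, m) as an average of m outer products."""
    G = B @ rng.normal(size=(n, m))    # columns g_i ~ N(0, calL)
    return (G @ G.T) / m

# the sample mean of many draws should be close to calL
L_bar = np.mean([sample_scaled_wishart(200) for _ in range(500)], axis=0)
assert np.max(np.abs(L_bar - calL)) < 0.05
```

Note that each draw also inherits the support constraints of Eqn. (6): every sampled $L$ is positive semidefinite with $L D^{1/2}\mathbf{1} = 0$, since the Gaussian vectors live in the column space of $\mathcal{L}$.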
If we denote by $\hat{\mathcal{L}}$ the MAP estimate of $\mathcal{L}$, then it follows that $\hat{\mathcal{L}}^+$ is the solution to the program
$$\mathrm{minimize}_X \;\; \mathrm{Tr}(L X) + \tfrac{2}{m}\, U(X^+) - \log |X| \quad \text{subject to } X \in \bar{\mathcal{X}} \subseteq \mathcal{X}. \qquad (10)$$
Note the similarity with the Mahoney-Orecchia regularized SDP of (2). In particular, if $\bar{\mathcal{X}} = \{X : \mathrm{Tr}(X) = 1\} \cap \mathcal{X}$, then the two programs are identical except for the factor of $\log |X|$ in the optimization criterion.
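Because the log-determinant term is a spectral function, a program of this form with the trace constraint can be solved essentially in closed form: working in the eigenbasis of $L$ restricted to the $(n-1)$-dimensional subspace orthogonal to $D^{1/2}\mathbf{1}$, minimizing $\mathrm{Tr}(LX) - \tfrac{1}{\eta}\log|X|$ subject to $\mathrm{Tr}(X)=1$ separates into $\min \sum_i \ell_i x_i - \tfrac{1}{\eta}\sum_i \log x_i$ with $\sum_i x_i = 1$, whose solution is $x_i = 1/(\eta(\ell_i + \nu))$ with $\nu$ found by a one-dimensional root search. The sketch below is our own illustration of this reduction, not code from the paper.

```python
import numpy as np

def regularized_sdp_logdet(L, eta):
    """Solve min Tr(L X) - (1/eta) log|X|  s.t.  Tr(X) = 1, X psd,
    X D^{1/2} 1 = 0, on the rank-(n-1) subspace (illustrative sketch)."""
    w, V = np.linalg.eigh(L)
    # drop the trivial eigenpair (eigenvalue ~0, eigenvector ~ D^{1/2} 1)
    ell, U = w[1:], V[:, 1:]
    # optimal eigenvalues: x_i = 1 / (eta * (ell_i + nu)), nu solving sum x_i = 1
    lo, hi = -ell.min() + 1e-12, 1e6
    for _ in range(200):                      # bisection on nu
        nu = 0.5 * (lo + hi)
        s = np.sum(1.0 / (eta * (ell + nu)))
        if s > 1:
            lo = nu
        else:
            hi = nu
    x = 1.0 / (eta * (ell + nu))
    return U @ np.diag(x) @ U.T

# example on a small normalized Laplacian
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
Di = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - Di @ A @ Di
X = regularized_sdp_logdet(L, eta=5.0)
assert abs(np.trace(X) - 1.0) < 1e-6          # trace constraint holds
assert np.allclose(X @ np.sqrt(d), 0.0, atol=1e-8)  # X D^{1/2} 1 = 0
```

Smaller $\eta$ spreads the eigenvalues $x_i$ more evenly (stronger regularization); as $\eta \to \infty$, $X$ concentrates on the smallest nontrivial eigenvector of $L$.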
4 A prior related to the PageRank procedure
Here, we will present a prior distribution for the population Laplacian that will allow us to leverage
the estimation framework of Section 3; and we will show that the MAP estimate of L for this prior
is related to the PageRank procedure via the Mahoney-Orecchia regularized SDP. Appendix C of [8]
presents priors that lead to the Heat Kernel and Lazy Random Walk in an analogous way; in both of
these cases, however, the priors are data-dependent in the strong sense that they explicitly depend
on the number of data points.
4.1 Prior density
The prior we will present will be based on neutrality and invariance conditions; and it will be supported on $\mathcal{X}$, i.e., on the subset of positive-semidefinite matrices that was the support set for the conditional density defined in Eqn. (7). In particular, recall that, in addition to being positive semidefinite, every matrix in the support set has rank $n-1$ and satisfies $X D^{1/2} \mathbf{1} = 0$. Note that because the prior depends on the data (via the orthogonality constraint induced by $D$), this is not a prior in the fully Bayesian sense; instead, the prior can be considered as part of an empirical or pseudo-Bayes estimation procedure.
The prior we will specify depends only on the eigenvalues of the normalized Laplacian, or equivalently on the eigenvalues of the pseudoinverse of the Laplacian. Let $\mathcal{L}^+ = \tau O \Lambda O'$ be the spectral decomposition of the pseudoinverse of the normalized Laplacian $\mathcal{L}$, where $\tau \ge 0$ is a scale factor, $O \in \mathbb{R}^{n \times (n-1)}$ is an orthogonal matrix, and $\Lambda = \mathrm{diag}(\lambda(1), \ldots, \lambda(n-1))$, where $\sum_v \lambda(v) = 1$. Note that the values $\lambda(1), \ldots, \lambda(n-1)$ are unordered and that the vector $\lambda = (\lambda(1), \ldots, \lambda(n-1))$ lies in the unit simplex. If we require that the distribution for $\lambda$ be exchangeable (invariant under permutations) and neutral ($\lambda(v)$ independent of the vector $(\lambda(u)/(1-\lambda(v)) : u \neq v)$, for all $v$), then the only non-degenerate possibility is that $\lambda$ is Dirichlet-distributed with parameter vector $(\alpha, \ldots, \alpha)$ [13]. The parameter $\alpha$, to which we refer as the "shape" parameter, must satisfy $\alpha > 0$ for the density to be defined. In this case,
$$p(\mathcal{L}) \propto p(\tau) \prod_{v=1}^{n-1} \lambda(v)^{\alpha-1}, \qquad (11)$$
where $p(\tau)$ is a prior for $\tau$. Thus, the prior weight on $\mathcal{L}$ only depends on $\tau$ and $\lambda$. One implication is that the prior is "nearly" rotationally invariant, in the sense that $p(P' \mathcal{L} P) = p(\mathcal{L})$ for any rank-$(n-1)$ projection matrix $P$ satisfying $P D^{1/2} \mathbf{1} = 0$.
4.2 Posterior estimation and connection to PageRank
To analyze the MAP estimate associated with the prior of Eqn. (11) and to explain its connection with the PageRank dynamics, the following proposition is crucial.
Proposition 4.1. Suppose the conditional likelihood for $L$ given $\mathcal{L}$ is as defined in (7) and the prior density for $\mathcal{L}$ is as defined in (11). Define $\hat{\mathcal{L}}$ to be the MAP estimate of $\mathcal{L}$. Then, $[\mathrm{Tr}(\hat{\mathcal{L}}^+)]^{-1} \hat{\mathcal{L}}^+$ solves the Mahoney-Orecchia regularized SDP (2), with $G(X) = -\log |X|$ and $\eta$ as given in Eqn. (12) below.
Proof. For $\mathcal{L}$ in the support set of the posterior, define $\tau = \mathrm{Tr}(\mathcal{L}^+)$ and $\Theta = \tau^{-1} \mathcal{L}^+$, so that $\mathrm{Tr}(\Theta) = 1$. Further, $\mathrm{rank}(\Theta) = n - 1$. Express the prior in the form of Eqn. (8) with function $U$ given by
$$U(\mathcal{L}) = -\log\{p(\tau)\, |\Theta|^{\alpha-1}\} = -(\alpha-1) \log |\Theta| - \log p(\tau),$$
where, as before, $|\cdot|$ denotes pseudodeterminant. Using (9) and the relation $|\mathcal{L}^+| = \tau^{n-1} |\Theta|$, the posterior density for $\mathcal{L}$ given $L$ is
$$p(\mathcal{L} \mid L) \propto \exp\Big\{-\tfrac{m\tau}{2}\, \mathrm{Tr}(L\Theta) + \tfrac{m + 2(\alpha-1)}{2} \log |\Theta| + g(\tau)\Big\},$$
where $g(\tau) = \tfrac{m(n-1)}{2} \log \tau + \log p(\tau)$. Suppose $\hat{\mathcal{L}}$ maximizes the posterior likelihood. Define $\hat{\tau} = \mathrm{Tr}(\hat{\mathcal{L}}^+)$ and $\hat{\Theta} = [\hat{\tau}]^{-1} \hat{\mathcal{L}}^+$. In this case, $\hat{\Theta}$ must minimize the quantity $\mathrm{Tr}(L\Theta) - \eta^{-1} \log |\Theta|$, where
$$\eta = \frac{m \hat{\tau}}{m + 2(\alpha - 1)}. \qquad (12)$$
Thus $\hat{\Theta}$ solves the regularized SDP (2) with $G(X) = -\log |X|$.
Mahoney and Orecchia showed that the solution to (2) with $G(X) = -\log |X|$ is closely related to the PageRank matrix, $R_\gamma$, defined in Eqn. (1). By combining Proposition 4.1 with their result, we get that the MAP estimate of $\mathcal{L}$ satisfies $\hat{\mathcal{L}}^+ \propto D^{-1/2} R_\gamma D^{1/2}$; conversely, $R_\gamma \propto D^{1/2} \hat{\mathcal{L}}^+ D^{-1/2}$. Thus, the PageRank operator of Eqn. (1) can be viewed as a degree-scaled regularized estimate of the pseudoinverse of the Laplacian. Moreover, prior assumptions about the spectrum of the graph Laplacian have direct implications on the optimal teleportation parameter. Specifically, Mahoney and Orecchia's Lemma 2 shows how $\eta$ is related to the teleportation parameter $\gamma$, and Eqn. (12) shows how the optimal $\eta$ is related to prior assumptions about the Laplacian.
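Eqn. (1) appears earlier in the paper and is not reproduced in this section. For concreteness, the sketch below computes a personalized PageRank vector by the standard power iteration $\pi \leftarrow \gamma s + (1-\gamma) M \pi$ with the column-stochastic transition matrix $M = A D^{-1}$; this particular convention is an assumption of ours, and the names are illustrative.

```python
import numpy as np

def personalized_pagerank(A, s, gamma=0.15, n_iter=1000):
    """Power iteration for pi = gamma * s + (1 - gamma) * (A D^{-1}) pi."""
    d = A.sum(axis=0)
    M = A / d                  # column-stochastic transition matrix A D^{-1}
    pi = s.copy()
    for _ in range(n_iter):
        pi = gamma * s + (1 - gamma) * (M @ pi)
    return pi

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
s = np.array([1.0, 0.0, 0.0, 0.0])        # seed distribution
pi = personalized_pagerank(A, s, gamma=0.15)
assert abs(pi.sum() - 1.0) < 1e-10        # remains a probability vector
assert pi[0] == pi.max()                  # mass concentrates near the seed
```

Larger teleportation $\gamma$ keeps the vector closer to the seed (stronger implicit regularization); smaller $\gamma$ lets the diffusion spread toward the degree-weighted stationary distribution.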
5 Empirical evaluation
In this section, we provide an empirical evaluation of the performance of the regularized Laplacian estimator, compared with the unregularized estimator. To do this, we need a ground truth population Laplacian $\mathcal{L}$ and a noisily-observed sample Laplacian $L$. Thus, in Section 5.1, we construct a family of distributions for $\mathcal{L}$; importantly, this family will be able to represent both low-dimensional graphs and expander-like graphs. Interestingly, the prior of Eqn. (11) captures some of the qualitative features of both of these types of graphs (as the shape parameter is varied). Then, in Section 5.2, we describe a sampling procedure for $L$ which, superficially, has no relation to the scaled Wishart conditional density of Eqn. (7). Despite this model misspecification, the regularized estimator $\hat{L}_\eta$ outperforms $L$ for many choices of the regularization parameter $\eta$.
5.1 Ground truth generation and prior evaluation
The ground truth graphs we generate are motivated by the Watts-Strogatz "small-world" model [14]. To generate a ground truth population Laplacian, $\mathcal{L}$ (equivalently, a population graph), we start with a two-dimensional lattice of width $w$ and height $h$, and thus $n = wh$ nodes. Points in the lattice are connected to their four nearest neighbors, making adjustments as necessary at the boundary. We then perform $s$ edge-swaps: for each swap, we choose two edges uniformly at random and then we swap the endpoints. For example, if we sample edges $i_1 \sim j_1$ and $i_2 \sim j_2$, then we replace these edges with $i_1 \sim j_2$ and $i_2 \sim j_1$. Thus, when $s = 0$, the graph is the original discretization of a low-dimensional space; and as $s$ increases to infinity, the graph becomes more and more like a uniformly chosen 4-regular graph (which is an expander [15] and which bears similarities with an Erdős-Rényi random graph [16]). Indeed, each edge swap is a step of the Metropolis algorithm toward a uniformly chosen random graph with a fixed degree sequence. For the empirical evaluation presented here, $h = 7$ and $w = 6$; but the results are qualitatively similar for other values.
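The lattice-plus-edge-swaps construction can be sketched in a few lines (an illustrative implementation of ours; unlike a careful Metropolis chain, this simple version allows occasional self-loops and parallel edges):

```python
import random
from collections import Counter

def lattice_edges(w, h):
    """Edges of a w-by-h grid graph (4-nearest-neighbor connectivity)."""
    edges = []
    for x in range(w):
        for y in range(h):
            if x + 1 < w:
                edges.append((y * w + x, y * w + x + 1))
            if y + 1 < h:
                edges.append((y * w + x, (y + 1) * w + x))
    return edges

def edge_swaps(edges, s, seed=0):
    """Perform s random edge swaps: (i1-j1, i2-j2) -> (i1-j2, i2-j1)."""
    rng = random.Random(seed)
    edges = list(edges)
    for _ in range(s):
        a, b = rng.sample(range(len(edges)), 2)
        (i1, j1), (i2, j2) = edges[a], edges[b]
        edges[a], edges[b] = (i1, j2), (i2, j1)
    return edges

E = lattice_edges(6, 7)
assert len(E) == 2 * 6 * 7 - 6 - 7      # mu = 71 edges, matching the text
E_swapped = edge_swaps(E, s=4)
assert len(E_swapped) == len(E)          # swaps preserve the edge count
# each node's degree (endpoint multiplicity) is preserved as well
deg = Counter(u for e in E for u in e)
deg_swapped = Counter(u for e in E_swapped for u in e)
assert deg == deg_swapped
```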
Figure 1 compares the expected order statistics (sorted values) for the Dirichlet prior of Eqn. (11) with the expected eigenvalues of $\Theta = \mathcal{L}^+ / \mathrm{Tr}(\mathcal{L}^+)$ for the small-world model. In particular, in Figure 1(a), we show the behavior of the order statistics of a Dirichlet distribution on the $(n-1)$-dimensional simplex with scalar shape parameter $\alpha$, as a function of $\alpha$. For each value of the shape $\alpha$, we generated a random $(n-1)$-dimensional Dirichlet vector, $\lambda$, with parameter vector $(\alpha, \ldots, \alpha)$; we computed the $n-1$ order statistics of $\lambda$ by sorting its components; and we repeated this procedure for 500 replicates and averaged the values. Figure 1(b) shows a corresponding plot for the ordered eigenvalues of $\Theta$. For each value of $s$ (normalized, here, by the number of edges $\mu$, where $\mu = 2wh - w - h = 71$), we generated the normalized Laplacian, $L$, corresponding to the random $s$-edge-swapped grid; we computed the $n-1$ nonzero eigenvalues of $\Theta$; and we performed 1000 replicates of this procedure and averaged the resulting eigenvalues.
Interestingly, the behavior of the spectrum of the small-world model as the edge-swaps increase is qualitatively quite similar to the behavior of the Dirichlet prior order statistics as the shape parameter $\alpha$ increases. In particular, note that for small values of the shape parameter $\alpha$ the first few order-statistics are well-separated from the rest; and that as $\alpha$ increases, the order statistics become
concentrated around $1/(n-1)$. Similarly, when the edge-swap parameter $s = 0$, the top two eigenvalues (corresponding to the width-wise and height-wise coordinates on the grid) are well-separated from the bulk; as $s$ increases, the top eigenvalues quickly merge into the bulk; and eventually, as $s$ goes to infinity, the distribution becomes very close to that of a uniformly chosen 4-regular graph.

[Figure 1: Analytical and empirical priors. 1(a) shows the Dirichlet distribution order statistics versus the shape parameter; and 1(b) shows the spectrum of $\Theta$ as a function of the rewiring parameter.]
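The Figure 1(a) procedure (averaged order statistics of a symmetric Dirichlet) can be reproduced in a few lines; this is our own sketch, using $n - 1 = 41$ to match the $6 \times 7$ grid:

```python
import numpy as np

def mean_dirichlet_order_stats(alpha, dim, n_rep=500, seed=0):
    """Average sorted components of Dirichlet(alpha, ..., alpha) samples."""
    rng = np.random.default_rng(seed)
    draws = rng.dirichlet([alpha] * dim, size=n_rep)
    return np.sort(draws, axis=1).mean(axis=0)

stats_small = mean_dirichlet_order_stats(alpha=0.1, dim=41)
stats_large = mean_dirichlet_order_stats(alpha=2.0, dim=41)

# for small alpha the top order statistics are well-separated from the rest;
# for large alpha the order statistics concentrate around 1/(n-1)
assert stats_small[-1] > 2 * stats_large[-1]
assert abs(stats_small.sum() - 1.0) < 1e-8   # each draw lies on the simplex
```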
5.2 Sampling procedure, estimation performance, and optimal regularization behavior
Finally, we evaluate the estimation performance of a regularized estimator of the graph Laplacian and compare it with an unregularized estimate. To do so, we construct the population graph $\mathcal{G}$ and its Laplacian $\mathcal{L}$, for a given value of $s$, as described in Section 5.1. Let $\mu$ be the number of edges in $\mathcal{G}$. The sampling procedure used to generate the observed graph $G$ and its Laplacian $L$ is parameterized by the sample size $m$. (Note that this parameter is analogous to the Wishart scale parameter in Eqn. (7), but here we are sampling from a different distribution.) We randomly choose $m$ edges with replacement from $\mathcal{G}$; and we define the sample graph $G$ and corresponding Laplacian $L$ by setting the weight of $i \sim j$ equal to the number of times we sampled that edge. Note that the sample graph $G$ over-counts some edges in $\mathcal{G}$ and misses others.
We then compute the regularized estimate $\hat{L}_\eta$, up to a constant of proportionality, by solving (implicitly!) the Mahoney-Orecchia regularized SDP (2) with $G(X) = -\log |X|$. We define the unregularized estimate $\bar{L}$ to be equal to the observed Laplacian, $L$. Given a population Laplacian $\mathcal{L}$, we define $\tau = \tau(\mathcal{L}) = \mathrm{Tr}(\mathcal{L}^+)$ and $\Theta = \Theta(\mathcal{L}) = \tau^{-1} \mathcal{L}^+$. We define $\hat{\tau}_\eta$, $\hat{\Theta}_\eta$, $\bar{\tau}$, and $\bar{\Theta}$ similarly to the population quantities. Our performance criterion is the relative Frobenius error $\|\Theta - \hat{\Theta}_\eta\|_F / \|\Theta - \bar{\Theta}\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm ($\|A\|_F = [\mathrm{Tr}(A'A)]^{1/2}$). Appendix D of [8] presents similar results when the performance criterion is the relative spectral norm error.
Figures 2(a), 2(b), and 2(c) show the regularization performance when $s = 4$ (an intermediate value) for three different values of $m/\mu$. In each case, the mean error and one standard deviation around it are plotted as a function of $\eta/\bar{\tau}$, as computed from 100 replicates; here, $\bar{\tau}$ is the mean value of $\tau$ over all replicates. The implicit regularization clearly improves the performance of the estimator for a large range of $\eta$ values. (Note that the regularization parameter in the regularized SDP (2) is $1/\eta$, and thus smaller values along the X-axis correspond to stronger regularization.) In particular, when the data are very noisy, e.g., when $m/\mu = 0.2$, as in Figure 2(a), improved results are seen only for very strong regularization; for intermediate levels of noise, e.g., $m/\mu = 1.0$, as in Figure 2(b) (in which case $m$ is chosen such that $G$ and $\mathcal{G}$ have the same number of edges counting multiplicity), improved performance is seen for a wide range of values of $\eta$; and for low levels of noise, Figure 2(c) illustrates that improved results are obtained for moderate levels of implicit regularization. Figures 2(d) and 2(e) illustrate similar results for $s = 0$ and $s = 32$.
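The edge-sampling step and the error criterion can be sketched as follows (our own illustration; for simplicity the final check uses the unregularized estimate in both slots of the ratio, where the relative error is 1 by construction, rather than the SDP solution):

```python
import numpy as np

def laplacian_from_edges(edges, weights, n):
    """Weighted normalized Laplacian from an edge list."""
    A = np.zeros((n, n))
    for (i, j), wt in zip(edges, weights):
        A[i, j] += wt
        A[j, i] += wt
    d = A.sum(axis=1)
    Di = np.diag(1.0 / np.sqrt(np.where(d > 0, d, 1.0)))
    return np.eye(n) - Di @ A @ Di

def sample_graph(edges, m, seed=0):
    """Sample m edges with replacement; the weight of an edge is its count."""
    rng = np.random.default_rng(seed)
    return np.bincount(rng.integers(len(edges), size=m), minlength=len(edges))

def theta(L):
    """Theta = L^+ / Tr(L^+), built from the pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return Lp / np.trace(Lp)

def rel_frobenius_error(Theta_pop, Theta_hat, Theta_bar):
    return (np.linalg.norm(Theta_pop - Theta_hat, 'fro')
            / np.linalg.norm(Theta_pop - Theta_bar, 'fro'))

# population graph: a 6-cycle; sample m = 61 edges with replacement
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
L_pop = laplacian_from_edges(edges, [1.0] * len(edges), n)
counts = sample_graph(edges, m=61)
L_samp = laplacian_from_edges(edges, counts, n)

Theta_pop = theta(L_pop)
Theta_bar = theta(L_samp)                       # unregularized estimate
assert abs(np.trace(Theta_bar) - 1.0) < 1e-8
err = rel_frobenius_error(Theta_pop, Theta_bar, Theta_bar)
assert abs(err - 1.0) < 1e-12
```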
[Figure 2: Regularization performance. 2(a) through 2(e) plot the relative Frobenius norm error versus the (normalized) regularization parameter $\eta/\bar{\tau}$, for various values of the (normalized) number of edges, $m/\mu$, and the edge-swap parameter, $s$: (a) $m/\mu = 0.2$, $s = 4$; (b) $m/\mu = 1.0$, $s = 4$; (c) $m/\mu = 2.0$, $s = 4$; (d) $m/\mu = 2.0$, $s = 0$; (e) $m/\mu = 2.0$, $s = 32$. Recall that the regularization parameter in the regularized SDP (2) is $1/\eta$, and thus smaller values along the X-axis correspond to stronger regularization. 2(f) plots the optimal regularization parameter $\eta^*/\bar{\tau}$ as a function of sample proportion for different fractions of edge swaps.]
As when regularization is implemented explicitly, in all these cases, we observe a "sweet spot" where there is an optimal value for the implicit regularization parameter. Figure 2(f) illustrates how the optimal choice of $\eta$ depends on parameters defining the population Laplacians and sample Laplacians. In particular, it illustrates how $\eta^*$, the optimal value of $\eta$ (normalized by $\bar{\tau}$), depends on the sampling proportion $m/\mu$ and the swaps per edge $s/\mu$. Observe that as the sample size $m$ increases, $\eta^*$ converges monotonically to $\bar{\tau}$; and, further, that higher values of $s$ (corresponding to more expander-like graphs) correspond to higher values of $\eta^*$. Both of these observations are in direct agreement with Eqn. (12).
6 Conclusion
We have provided a statistical interpretation for the observation that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve a certain regularized version of the problem. One might be tempted to view our results as "unfortunate," in that it is not straightforward to interpret the priors presented in this paper. Instead, our results should be viewed as making explicit the implicit prior assumptions associated with making certain decisions (that are already made in practice) to speed up computations.
Several extensions suggest themselves. The most obvious might be to try to obtain Proposition 4.1
with a more natural or empirically-plausible model than the Wishart distribution; to extend the empirical evaluation to much larger and more realistic data sets; to apply our methodology to other
widely-used approximation procedures; and to characterize when implicitly regularizing an eigenvector leads to better statistical behavior in downstream applications where that eigenvector is used.
More generally, though, we expect that understanding the algorithmic-statistical tradeoffs that we
have illustrated will become increasingly important in very large-scale data analysis applications.
References
[1] M. W. Mahoney and L. Orecchia. Implementing regularization implicitly via approximate eigenvector computation. In Proceedings of the 28th International Conference on Machine Learning, pages 121-128, 2011.
[2] D.A. Spielman and S.-H. Teng. Spectral partitioning works: Planar graphs and finite element meshes. In FOCS '96: Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages 96-107, 1996.
[3] S. Guattery and G.L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19:701-719, 1998.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[6] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of the 20th International Conference on Machine Learning, pages 290-297, 2003.
[7] J. Leskovec, K.J. Lang, A. Dasgupta, and M.W. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics, 6(1):29-123, 2009. Also available at: arXiv:0810.1355.
[8] P. O. Perry and M. W. Mahoney. Regularized Laplacian estimation and fast eigenvector approximation. Technical report. Preprint: arXiv:1110.1757 (2011).
[9] D.A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC '04: Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 81-90, 2004.
[10] R. Andersen, F.R.K. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 475-486, 2006.
[11] F.R.K. Chung. The heat kernel as the pagerank of a graph. Proceedings of the National Academy of Sciences of the United States of America, 104(50):19735-19740, 2007.
[12] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi. A spectral algorithm for improving graph partitions with applications to exploring data graphs locally. Technical report. Preprint: arXiv:0912.0681 (2009).
[13] J. Fabius. Two characterizations of the Dirichlet distribution. The Annals of Statistics, 1(3):583-587, 1973.
[14] D.J. Watts and S.H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440-442, 1998.
[15] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43:439-561, 2006.
[16] B. Bollobás. Random Graphs. Academic Press, London, 1985.
Nonnegative dictionary learning in the exponential
noise model for adaptive music signal representation
Cédric Févotte
CNRS LTCI; Télécom ParisTech
75014, Paris, France
[email protected]
Onur Dikmen
CNRS LTCI; Télécom ParisTech
75014, Paris, France
[email protected]
Abstract
In this paper we describe a maximum likelihood approach for dictionary learning
in the multiplicative exponential noise model. This model is prevalent in audio
signal processing where it underlies a generative composite model of the power
spectrogram. Maximum joint likelihood estimation of the dictionary and expansion coefficients leads to a nonnegative matrix factorization problem where the
Itakura-Saito divergence is used. The optimality of this approach is in question because the number of parameters (which include the expansion coefficients) grows
with the number of observations. In this paper we describe a variational procedure
for optimization of the marginal likelihood, i.e., the likelihood of the dictionary
where the activation coefficients have been integrated out (given a specific prior).
We compare the output of both maximum joint likelihood estimation (i.e., standard Itakura-Saito NMF) and maximum marginal likelihood estimation (MMLE)
on real and synthetical datasets. The MMLE approach is shown to embed automatic model order selection, akin to automatic relevance determination.
1 Introduction
In this paper we address the task of nonnegative dictionary learning described by
$$V \approx W H, \qquad (1)$$
where $V$, $W$, $H$ are nonnegative matrices of dimensions $F \times N$, $F \times K$ and $K \times N$, respectively. $V$ is the data matrix, where each column $v_n$ is a data point, $W$ is the dictionary matrix, with columns $\{w_k\}$ acting as "patterns" or "explanatory variables" representative of the data, and $H$ is the activation matrix, with columns $\{h_n\}$. For example, in this paper we will be interested in music data such that $V$ is a time-frequency spectrogram matrix and $W$ is a collection of spectral signatures of latent elementary audio components. The most common approach to nonnegative dictionary learning is nonnegative matrix factorization (NMF) [1], which consists in retrieving the factorization (1) by solving
$$\min_{W,H} D(V|WH) \overset{\mathrm{def}}{=} \sum_{fn} d(v_{fn} \mid [WH]_{fn}) \quad \text{s.t. } W, H \ge 0, \qquad (2)$$
where $d(x|y)$ is a measure of fit between nonnegative scalars, $v_{fn}$ are the entries of $V$, and $A \ge 0$ expresses nonnegativity of the entries of matrix $A$. The cost function $D(V|WH)$ is often a likelihood function $-\log p(V|W,H)$ in disguise, e.g., the Euclidean distance underlies additive Gaussian noise, the Kullback-Leibler (KL) divergence underlies Poissonian noise, while the Itakura-Saito (IS) divergence underlies multiplicative exponential noise [2]. The latter noise model will be central to this work because it underlies a suitable generative model of the power spectrogram, as shown in [3] and later recalled.
A criticism of NMF is that little can be said about the asymptotic optimality of the learnt
dictionary W. Indeed, because W is estimated jointly with H, the total number of parameters FK +
KN grows with the number of data points N. As such, this paper instead addresses optimization of
the likelihood in the marginal model described by
p(V|W) = ∫_H p(V|W,H) p(H) dH,   (3)
where H is treated as a random latent variable with prior p(H). The evaluation and optimization of
the marginal likelihood is not trivial in general, and this paper is precisely devoted to these tasks in
the multiplicative exponential noise model.
The maximum marginal likelihood estimation approach we seek here is related to IS-NMF in such
a way that Latent Dirichlet Allocation (LDA) [4] is related to Latent Semantic Indexing (pLSI)
[5]. LDA and pLSI are two estimators in the same model, but LDA seeks estimation of the topic
distributions in the marginal model, from which the topic weights describing each document have
been integrated out. In contrast, pLSI (which is essentially equivalent to KL-NMF as shown in [6])
performs maximum joint likelihood estimation (MJLE) for the topics and weights. Blei et al. [4]
show the better performance of LDA with respect to (w.r.t) pLSI. Welling et al. [7] also report similar
results with a discussion, stating that deterministic latent variable models assign zero probability to
input configurations that do not appear in the training set. A similar approach is Discrete Component
Analysis (DCA) [8], which considers maximum marginal a posteriori estimation in the Gamma-Poisson (GaP) model [9]; see also [10] for maximum marginal likelihood estimation in the same
model. In this paper, we will follow the same objective for the multiplicative exponential noise
model.
We will describe a variational algorithm for the evaluation and optimization of (3); note that the
algorithm exploits specificities of the model and is not a mere adaptation of LDA or DCA to an
alternative setting. We will consider a nonnegative Generalized inverse-Gaussian (GIG) distribution
as a prior for H, a flexible distribution which takes the Gamma and inverse-Gamma as special
cases. As will be detailed later, this work relates to recent work by Hoffman et al. [11], which
considers full Bayesian integration of W and H (both assumed random) in the exponential noise
model, in a nonparametric setting allowing for model order selection. We will show that our simpler maximum likelihood approach inherently performs model selection as well, by automatically
pruning ?irrelevant? dictionary elements. Applied to a short well structured piano sequence, our
approach is shown to capture the correct number of components, corresponding to the expected note
spectra, and outperforms the nonparametric Bayesian approach of [11].
The paper is organized as follows. Section 2 introduces the multiplicative exponential noise model
with the prior distribution for the expansion coefficients p(H). Sections 3 and 4 describe the MJLE
and MMLE approaches, respectively. Section 5 reports results on synthetic and real audio data.
Section 6 concludes.
2 Model
The generative model assumed in this paper is
v_{fn} = v̂_{fn} · ε_{fn},   (4)

where v̂_{fn} = Σ_k w_{fk} h_{kn} and ε_{fn} is a nonnegative multiplicative noise with exponential distribution ε_{fn} ∼ exp(−ε_{fn}). In other words, and under independence assumptions, the likelihood
function is

p(V|W,H) = Π_{fn} (1/v̂_{fn}) exp(−v_{fn}/v̂_{fn}).   (5)
When V is a power spectrogram matrix such that v_{fn} = |x_{fn}|² and {x_{fn}} are the complex-valued
short-time Fourier transform (STFT) coefficients of some signal data, where f typically acts as a
frequency index and n acts as a time-frame index, it was shown in [3] that an equivalent generative
model of v_{fn} is

x_{fn} = Σ_k c_{fkn},   c_{fkn} ∼ N_c(0, w_{fk} h_{kn}),   (6)

where N_c refers to the circular complex Gaussian distribution.¹ In other words, the exponential
multiplicative noise model underlies a generative composite model of the STFT. The complex-valued matrix {c_{fkn}}_{fn}, referred to as the k-th component, is characterized by a spectral signature w_k,
amplitude-modulated in time by the frame-dependent coefficient h_{kn}, which accounts for nonstationarity. In analogy with LDA or DCA, if our data consisted of word counts, with f indexing words
and n indexing documents, then the columns of W would describe topics and c_{fkn} would denote
the number of occurrences of word f stemming from topic k in document n.
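The equivalence between (4) and the composite Gaussian model (6) is easy to illustrate by simulation. In this hedged sketch (the values in `wh` are arbitrary stand-ins for w_{fk}h_{kn}, not taken from the paper), the squared modulus of a sum of independent circular complex Gaussians is checked to behave like an exponential variable with mean Σ_k w_{fk}h_{kn}:

```python
import math
import random

random.seed(1)
wh = [0.5, 1.2, 0.3]      # toy values of w_{fk} h_{kn}, k = 1..3 (assumed)
vhat = sum(wh)            # expected power, here 2.0

def sample_power():
    # x = sum_k c_k with c_k ~ N_c(0, wh[k]): real/imag parts are N(0, wh[k]/2)
    re = sum(random.gauss(0.0, math.sqrt(s / 2.0)) for s in wh)
    im = sum(random.gauss(0.0, math.sqrt(s / 2.0)) for s in wh)
    return re * re + im * im

n = 50000
powers = [sample_power() for _ in range(n)]
mean_power = sum(powers) / n
below_median = sum(p < vhat * math.log(2) for p in powers) / n

assert abs(mean_power - vhat) < 0.05   # exponential mean is vhat
assert abs(below_median - 0.5) < 0.02  # exponential median is vhat * ln 2
```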
In our setting W is considered a free deterministic parameter to be estimated by maximum likelihood. In contrast, H is treated as a nonnegative random latent variable over which we will integrate.
It is assigned a GIG prior, such that

h_{kn} ∼ GIG(α_k, β_k, γ_k),   (7)

with

GIG(x | α, β, γ) = [(β/γ)^{α/2} / (2 K_α(2√(βγ)))] x^{α−1} exp(−(βx + γ/x)),   (8)

where K_α is a modified Bessel function of the second kind and x, β and γ are nonnegative scalars.
The GIG distribution unifies the Gamma (α > 0, γ = 0) and inverse-Gamma (α < 0, β = 0)
distributions. Its sufficient statistics are x, 1/x and log x, and in particular we have

⟨x⟩ = [K_{α+1}(2√(βγ)) / K_α(2√(βγ))] √(γ/β),   ⟨1/x⟩ = [K_{α−1}(2√(βγ)) / K_α(2√(βγ))] √(β/γ),   (9)

where ⟨x⟩ denotes expectation. Although all derivations and the implementation are done for the
general case, in practice we will only consider the special case of the Gamma distribution, for simplicity.
In that case, the β parameter merely acts as a scale parameter, which we fix so as to solve the scale
ambiguity between the columns of W and the rows of H. We will also assume the shape parameters
{α_k} fixed to arbitrary values (typically, α_k = 1, which corresponds to the exponential distribution).
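Equation (9) can be sanity-checked numerically. The sketch below is self-contained: the Bessel function K_ν is evaluated from its integral representation K_ν(z) = ∫_0^∞ e^{−z cosh t} cosh(νt) dt by simple quadrature, and the closed-form GIG expectations are compared against direct numerical integration of the unnormalized density x^{α−1} e^{−βx−γ/x} (hyperparameter values are arbitrary):

```python
import math

def bessel_k(nu, z, tmax=10.0, steps=20000):
    # K_nu(z) = int_0^inf exp(-z*cosh(t)) * cosh(nu*t) dt, trapezoidal rule
    h = tmax / steps
    total = 0.5 * (math.exp(-z)
                   + math.exp(-z * math.cosh(tmax)) * math.cosh(nu * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def gig_moments_closed_form(alpha, beta, gamma):
    # equation (9)
    z = 2 * math.sqrt(beta * gamma)
    k0 = bessel_k(alpha, z)
    ex = bessel_k(alpha + 1, z) / k0 * math.sqrt(gamma / beta)
    einv = bessel_k(alpha - 1, z) / k0 * math.sqrt(beta / gamma)
    return ex, einv

def gig_moments_numeric(alpha, beta, gamma, xmax=50.0, steps=200000):
    # brute-force integration of the unnormalized GIG density
    h = xmax / steps
    z0 = ex = einv = 0.0
    for i in range(1, steps + 1):
        x = i * h
        p = x ** (alpha - 1) * math.exp(-beta * x - gamma / x)
        z0 += p; ex += x * p; einv += p / x
    return ex / z0, einv / z0

cf = gig_moments_closed_form(1.5, 2.0, 3.0)
num = gig_moments_numeric(1.5, 2.0, 3.0)
assert abs(cf[0] - num[0]) / num[0] < 1e-3
assert abs(cf[1] - num[1]) / num[1] < 1e-3
```

These two expectations are all the VB algorithm of Section 4.2 needs from the variational GIG factors.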
Given the generative model specified by equations (4) and (7) we now describe two estimators for
W.
3 Maximum joint likelihood estimation
3.1 Estimator
The joint (penalized) log-likelihood of W and H is defined by

C_JL(W, H) ≝ log p(V|W,H) + log p(H)   (10)
         = −D_IS(V|WH) − Σ_{kn} [(1 − α_k) log h_{kn} + β_k h_{kn} + γ_k/h_{kn}] + cst,   (11)

where D_IS(V|WH) is defined as in equation (2) with d_IS(x|y) = x/y − log(x/y) − 1 (Itakura-Saito divergence) and "cst" denotes terms constant w.r.t. W and H. The subscript JL stands for joint
likelihood, and the estimation of W by maximization of C_JL(W, H) will be referred to as maximum
joint likelihood estimation (MJLE).
3.2 MM algorithm for MJLE
We describe an iterative algorithm which sequentially updates W given H and H given W. Each of
the two steps can be achieved in a minorization-maximization (MM) setting [12], where the original
problem is replaced by the iterative optimization of an easier-to-optimize auxiliary function. We first
describe the update of H, from which the update of W will be easily deduced. Given W, our task
consists in maximizing C(H) = −D_IS(V|WH) − L(H), where L(H) = Σ_{kn} (1 − α_k) log h_{kn} +
β_k h_{kn} + γ_k/h_{kn}. Using Jensen's inequality to majorize the convex part of D_IS(V|WH) (terms in
v_{fn}/ṽ_{fn}) and a first order Taylor approximation to majorize its concave part (terms in log ṽ_{fn}), as in
[13], the functional

G(H, H̃) = −Σ_{kn} [p_{kn}/h_{kn} + q_{kn} h_{kn}] − L(H) + cst,   (12)

where p_{kn} = h̃²_{kn} Σ_f w_{fk} v_{fn}/ṽ²_{fn}, q_{kn} = Σ_f w_{fk}/ṽ_{fn} and ṽ_{fn} = [WH̃]_{fn}, can be shown to be
a tight lower bound of C(H), i.e., G(H, H̃) ≤ C(H) and G(H̃, H̃) = C(H̃). Its iterative maximization w.r.t. H, where H̃ = H^(i) acts as the current iterate at iteration i, produces an ascent
algorithm, such that C(H^(i+1)) ≥ C(H^(i)). The update is easily shown to amount to solving an
order 2 polynomial with a single positive root given by

h_{kn} = [(α_k − 1) + √((α_k − 1)² + 4(p_{kn} + γ_k)(q_{kn} + β_k))] / [2(q_{kn} + β_k)].   (13)

The update preserves nonnegativity given positive initialization. By exchangeability of W and H
when the data is transposed (Vᵀ = HᵀWᵀ), and dropping the penalty term (α_k = 1, β_k = 0,
γ_k = 0), the update of W is given by the multiplicative update

w_{fk} = w̃_{fk} √[ (Σ_n h_{kn} v_{fn}/ṽ²_{fn}) / (Σ_n h_{kn}/ṽ_{fn}) ],   (14)

which is known from [13].

¹A complex random variable has distribution N_c(µ, λ) if and only if its real and imaginary parts are independent and distributed as N(ℜ(µ), λ/2) and N(ℑ(µ), λ/2), respectively.
4 Maximum marginal likelihood estimation
4.1 Estimator
We define the marginal log-likelihood objective function as

C_ML(W) ≝ log ∫ p(V|W,H) p(H) dH.   (15)
The subscript ML stands for marginal likelihood, and the estimation of W by maximization of
CML (W ) will be referred to as maximum marginal likelihood estimation (MMLE). Note that in
Bayesian estimation the term marginal likelihood is sometimes used as a synonym for the model
evidence, which is the likelihood of data given the model, i.e., where all random parameters (including W ) have been marginalized. This is not the case here where W is treated as a deterministic
parameter and marginal likelihood only refers to the likelihood of W , where H has been integrated
out. The integral in equation (15) is intractable given our model. In the next section we resort to a
variational Bayes procedure for the evaluation and maximization of CML (W ).
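For intuition about what is being integrated, consider a toy problem with F = K = N = 1 and an exponential prior h ∼ exp(−h): the integral in (15) is then one-dimensional and can be evaluated both by direct quadrature and by naive Monte Carlo over the prior. The sketch below (arbitrary toy values v and w) checks that the two estimates of p(v|w) agree:

```python
import math
import random

v, w = 1.5, 2.0  # a single observation and dictionary weight (toy values)

def lik(h):
    # p(v | w, h) under the multiplicative exponential noise model, equation (5)
    return math.exp(-v / (w * h)) / (w * h)

# quadrature for p(v|w) = int_0^inf p(v|w,h) exp(-h) dh (midpoint rule)
steps, hmax = 200000, 60.0
dh = hmax / steps
numeric = sum(lik((i + 0.5) * dh) * math.exp(-(i + 0.5) * dh)
              for i in range(steps)) * dh

# naive Monte Carlo with h drawn from the exponential prior
random.seed(3)
n = 200000
mc = sum(lik(random.expovariate(1.0)) for _ in range(n)) / n

assert abs(mc - numeric) / numeric < 0.02
```

In the full model the integral couples all h_{kn} through the product WH, which is why the paper resorts to a variational bound rather than quadrature.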
4.2 Variational algorithm for MMLE
In the following we propose an iterative lower bound evaluation/maximization procedure for
approximate maximization of C_ML(W). We will construct a bound B(W, W̃) such that
∀(W, W̃), C_ML(W) ≥ B(W, W̃), where W̃ acts as the current iterate and W acts as the free parameter over which the bound is maximized. The maximization is approximate in that the bound
will only satisfy B(W̃, W̃) ≤ C_ML(W̃), i.e., is loosely tight in the current update W̃, which fails to
ensure ascent of the objective function like in the MM setting of Section 3.2.
We propose to construct the bound from a variational Bayes perspective [14]. The following inequality holds for any distribution function q(H):

C_ML(W) ≥ ⟨log p(V|W,H)⟩_q + ⟨log p(H)⟩_q − ⟨log q(H)⟩_q ≝ B_q^vb(W).   (16)

The inequality becomes an equality when q(H) = p(H|V,W); when the latter is available in closed
form, the EM algorithm consists in using q̃(H) = p(H|V, W̃), maximizing B_q̃^vb(W) w.r.t. W,
and iterating. The true posterior of H being intractable in our case, we take q(H) to be a factorized,
parametric distribution q_θ(H), whose parameter θ is updated so as to tighten B_q^vb(W̃) to C_ML(W̃).
Like in [11], we choose q_θ(H) to be in the same family as the prior, such that

q_θ(H) = Π_{kn} GIG(ᾱ_{kn}, β̄_{kn}, γ̄_{kn}).   (17)
kn
The first term of B_q^vb(W) essentially involves the expectation of −D_IS(V|WH) w.r.t. the variational distribution q_θ(H). The product WH introduces some coupling of the coefficients of H
(via the sum Σ_k w_{fk} h_{kn}) which makes the integration difficult. Following [11] and similar to
Section 3.2, we propose to lower bound this term using Jensen's and Taylor's type inequalities to
majorize the convex and concave parts of −D_IS(V|WH). The contributions of the elements of H
become decoupled w.r.t. k, which allows for evaluation and maximization of the bound. This leads
to

⟨log p(V|H,W)⟩_q ≥ −Σ_{fn} [ Σ_k φ²_{fkn} v_{fn} ⟨1/(w_{fk} h_{kn})⟩_q + log ω_{fn} + (1/ω_{fn}) Σ_k w_{fk} ⟨h_{kn}⟩_q − 1 ],   (18)

where {ω_{fn}} and {φ_{fkn}} are nonnegative free parameters such that Σ_k φ_{fkn} = 1. We define
B_{θ,ω,φ}(W) as B_q^vb(W) but where the expectation of the joint log-likelihood is replaced by its lower
bound given by the right side of equation (18). From there, our algorithm is a two-step procedure consisting
in 1) computing θ̄, ω̄, φ̄ so as to tighten B_{θ,ω,φ}(W̃) to C_ML(W̃), and 2) maximizing B_{θ̄,ω̄,φ̄}(W)
w.r.t. W. The corresponding updates are given next. Note that evaluation of the bound only involves
expectations of h_{kn} and 1/h_{kn} w.r.t. the GIG distribution, which is readily given by equation (9).
Step 1: Tightening the bound. Given the current dictionary update W̃, run the following fixed-point
equations:

φ_{fkn} = (w̃_{fk} / ⟨1/h_{kn}⟩_q) / [Σ_j w̃_{fj} / ⟨1/h_{jn}⟩_q],   ω_{fn} = Σ_j w̃_{fj} ⟨h_{jn}⟩_q,

ᾱ_{kn} = α_k,   β̄_{kn} = β_k + Σ_f w̃_{fk}/ω_{fn},   γ̄_{kn} = γ_k + Σ_f v_{fn} φ²_{fkn}/w̃_{fk}.
Step 2: Optimizing the bound. Given the variational distribution q̄ = q_θ̄ from the previous step,
update W as

w_{fk} = w̃_{fk} √[ (Σ_n v_{fn} ⟨1/h_{kn}⟩_q̄^{−1} [Σ_j w̃_{fj} ⟨1/h_{jn}⟩_q̄^{−1}]^{−2}) / (Σ_n ⟨h_{kn}⟩_q̄ [Σ_j w̃_{fj} ⟨h_{jn}⟩_q̄]^{−1}) ].   (19)

The VB update has a similar form to the MM update of equation (14), but the contributions of H are
replaced by expected values w.r.t. the variational distribution.
4.3 Relation to other works
A variational algorithm using the activation matrix H and the latent components C = {c_{fkn}} as
hidden data can easily be devised, as sketched in [2]. Including C in the variational distribution also
allows to decouple the contributions of the activation coefficients w.r.t. k, but leads from our experience to a looser bound, a finding also reported in [11]. In a fully Bayesian setting, Hoffman et al.
[11] assume Gamma priors for both W and H. The model is such that v̂_{fn} = Σ_k θ_k w_{fk} h_{kn}, where
θ_k acts as a component weight parameter. The number of components is potentially infinite but,
in a nonparametric setting, the prior for θ_k favors a finite number of active components. Posterior
inference of the parameters W, H, {θ_k} is achieved in a variational setting similar to Section 4.2,
by maximizing a lower bound on p(V). In contrast to this method, our approach does not require one to
specify a prior for W, leads to simple updates for W that are directly comparable to IS-NMF, and
experiments will reveal that our approach embeds model order selection as well, by automatically
pruning unnecessary columns of W, without resorting to the nonparametric framework.
[Figure 1: Marginal likelihood C_ML (a) and joint likelihood C_JL (b) versus number of components
K; (c) C_ML values corresponding to dictionaries estimated by C_JL maximization. Panels: (a) C_ML by MMLE, (b) C_JL by MJLE, (c) C_ML by MJLE.]
5 Experiments
In this section, we study the performance of the MJLE and MMLE methods on both synthetic and
real-world datasets.² The prior hyperparameters are fixed to α_k = 1, γ_k = 0 (exponential distribution) and β_k = 1, i.e., h_{kn} ∼ exp(−h_{kn}). We used 5000 algorithm iterations and nonnegative
random initializations in all cases. In order to minimize the odds of getting stuck in local optima, we
adapted the deterministic annealing method proposed in [15] for MMLE. Deterministic annealing
is applied by multiplying the entropy term −⟨log q(H)⟩ in the lower bound in (16) by 1/η^(i). The
initial η^(0) is chosen in (0, 1) and increased through iterations. In our experiments, we set η^(0) = 0.6
and updated it with the rule η^(i+1) = min(1, 1.005 η^(i)).
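The annealing schedule is straightforward to reproduce; in this sketch (η is our notation for the annealing parameter, and the constants come from the text) the parameter grows geometrically from 0.6 until it is clipped at 1, at which point the unmodified bound (16) is recovered:

```python
eta = 0.6                 # initial annealing parameter, eta^(0)
schedule = [eta]
while schedule[-1] < 1.0:
    schedule.append(min(1.0, 1.005 * schedule[-1]))

assert all(b >= a for a, b in zip(schedule, schedule[1:]))  # nondecreasing
assert schedule[-1] == 1.0   # clipped at 1: the original bound is recovered
assert len(schedule) < 200   # the schedule saturates after roughly 100 iterations
```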
5.1 Swimmer dataset
First, we consider the synthetical Swimmer dataset [16], for which the ground truth of the dictionary
is available. The dataset is composed of 256 images of size 32 × 32, representing a swimmer built
of an invariant torso and 4 limbs. Each of the 4 limbs can be in one of 4 positions and the dataset
is formed of all combinations. Hence, the ground truth dictionary corresponds to the collection of
individual limb positions. As explained in [16] the torso is an unidentifiable component that can
be paired with any of the limbs, or even split among the limbs. In our experiments, we mapped the
values in the dataset onto the range [1, 100] and multiplied with exponential noise, see some samples
in Fig. 2 (a).
We ran the MM and VB algorithms (for MJLE and MMLE, respectively) for K = 1 . . . 20 and the
joint and marginal log-likelihood end values (after the 5000 iterations) are displayed in Fig. 1. The
marginal log-likelihood is here approximated by its lower bound, as described in Section 4.2. In
Fig. 1(a) and (b) the respective objective criteria (CML and CJL ) maximized by MMLE and MJLE
are shown. The increase of CML stops after K = 16, whereas CJL continues to increase as K gets
larger. Fig. 1 (c) displays the corresponding marginal likelihood values, CML , of the dictionaries
obtained by MJLE in Fig. 1 (b); this figure empirically shows that maximizing the joint likelihood
does not necessarily imply maximization of the marginal likelihood. These figures display the mean
and standard deviation values obtained from 7 experiments.
The likelihood values increase with the number of components, as expected from nested models.
However, the marginal likelihood stagnates after K = 16. Manual inspection reveals that past
this value of K, the extra columns of W are pruned to zero, leaving the criterion unchanged. Hence,
MMLE appears to embed automatic order selection, similar to automatic relevance determination
[17, 18]. The dictionaries learnt from MJLE and MMLE with K = 20 components are shown in
Fig. 2 (b) and (c). As can be seen from Fig. 2 (b), MJLE produces spurious or duplicated components. In contrast, the ground truth is well recovered with MMLE.
²MATLAB code is available at http://perso.telecom-paristech.fr/~dikmen/nips11/
[Figure 2: Data samples and dictionaries learnt on the swimmer dataset with K = 20. Panels: (a) Data, (b) W_MJLE, (c) W_MMLE.]
5.2 A piano excerpt
In this section, we consider the piano data used in [3]. It is a toy audio sequence recorded in real
conditions, consisting of four notes played all together in the first measure and in all possible pairs in
the subsequent measures. A power spectrogram with analysis window of size 46 ms was computed,
leading to F = 513 frequency bins and N = 676 time frames. We ran MMLE with K = 20 on the
spectrogram. We reconstructed STFT component estimates from the factorization ŴĤ, where Ŵ is
the MMLE dictionary estimate and Ĥ = ⟨H⟩_q. We used the minimum mean square error (MMSE)
estimate given by ĉ_{fkn} = g_{fkn} · x_{fn}, where g_{fkn} is the time-frequency Wiener mask defined by
g_{fkn} = ŵ_{fk} ĥ_{kn} / Σ_j ŵ_{fj} ĥ_{jn}. The estimated dictionary and the reconstructed components in the time domain after inverse STFT are shown in Fig. 3 (a). Out of the 20 components, 12 were assigned to zero
during inference. The remaining 8 are displayed. 3 of the nonzero dictionary columns have very
small values, leading to inaudible reconstructions. The five significant dictionary vectors correspond
to the frequency templates of the four notes and the transients. For comparison, we applied the nonparametric approach by Hoffman et al. [11] on the same data with the same hyperparameters for H.
The estimated dictionary and the reconstructed components are presented in Fig. 3 (b). 10 out of
20 components had very small weight values. The most significant 8 of the remaining components
are presented in the figure. These components do not exactly correspond to individual notes and
transients as they did with MMLE. The fourth note is mainly represented in the fifth component, but
partially appears in the first three components as well. In general, the performance of the nonparametric approach depends more on initialization, i.e., requires more repetitions than MMLE. For the
above results, we used 200 repetitions for the nonparametric method and 20 for MMLE (without
annealing, same stopping criterion) and chose the repetition with the highest likelihood.
5.3 Decomposition of a real song
In this last experiment, we decompose the first 40 seconds of God Only Knows by the Beach Boys.
This song was produced in mono and we retrieved a downsampled version of it at 22kHz from the
CD release. We computed a power spectrogram with 46 ms analysis window and ran our VB algorithm with K = 50. Fig. 4 displays the original data, and two examples of estimated time-frequency
masks and reconstructed components. The figure also shows the variance of the reconstructed components and the evolution of the variational bound along iterations. In this example, 5 components
out of the 50 are completely pruned in the factorization and 7 others are inaudible. Such decomposition can be used in various music editing settings, for example for mono to stereo remixing, see,
e.g., [3].
[Figure 3: The estimated dictionary and the reconstructed components by MMLE (a) and by the nonparametric approach of Hoffman et al. (b), with K = 20. Panels show W_MMLE, c_MMLE, W_Hoffman and c_Hoffman.]
[Figure 4 panels: log power data spectrogram; variance of reconstructed components; temporal data; variational bound against iterations; time-frequency Wiener masks of components 13 and 18; reconstructed components 13 and 18.]
Figure 4: Decomposition results of a real song. The Wiener masks take values between 0 (white)
and 1 (black). The first example of reconstructed component captures the first chord of the song,
repeated 4 times in the intro. The other component captures the cymbal, which starts with the first
verse of the song.
Acknowledgments
This work is supported by project ANR-09-JCJC-0073-01 TANGERINE (Theory and applications
of nonnegative matrix factorization).
6 Conclusions
In this paper we have challenged the standard NMF approach to nonnegative dictionary learning,
based on maximum joint likelihood estimation, with a better-posed approach consisting in maximum
marginal likelihood estimation. The proposed algorithm based on variational inference has computational complexity comparable to standard NMF/MJLE. Our experiments on synthetic and real
data have brought out a very attractive feature of MMLE, namely its ability to automatically discard irrelevant
columns in the dictionary, without resorting to elaborate schemes such as Bayesian nonparametrics.
References
[1] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788-791, 1999.
[2] C. Févotte and A. T. Cemgil. Nonnegative matrix factorisations as probabilistic inference in composite models. In Proc. 17th European Signal Processing Conference (EUSIPCO), pages 1913-1917, Glasgow, Scotland, Aug. 2009.
[3] C. Févotte, N. Bertin, and J.-L. Durrieu. Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis. Neural Computation, 21(3):793-830, Mar. 2009.
[4] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, Jan. 2003.
[5] Thomas Hofmann. Probabilistic latent semantic indexing. In Proc. 22nd International Conference on Research and Development in Information Retrieval (SIGIR), 1999.
[6] E. Gaussier and C. Goutte. Relation between PLSA and NMF and implications. In Proc. 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'05), pages 601-602, New York, NY, USA, 2005. ACM.
[7] M. Welling, C. Chemudugunta, and N. Sutter. Deterministic latent variable models and their pitfalls. In SIAM Conference on Data Mining (SDM), pages 196-207, 2008.
[8] W. L. Buntine and A. Jakulin. Discrete component analysis. In Lecture Notes in Computer Science, volume 3940, pages 1-33. Springer, 2006.
[9] John F. Canny. GaP: A factor model for discrete data. In Proceedings of the 27th ACM International Conference on Research and Development of Information Retrieval (SIGIR), pages 122-129, 2004.
[10] O. Dikmen and C. Févotte. Maximum marginal likelihood estimation for nonnegative dictionary learning. In Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP'11), Prague, Czech Republic, 2011.
[11] M. Hoffman, D. Blei, and P. Cook. Bayesian nonparametric matrix factorization for recorded music. In Proc. 27th International Conference on Machine Learning (ICML), Haifa, Israel, 2010.
[12] D. R. Hunter and K. Lange. A tutorial on MM algorithms. The American Statistician, 58:30-37, 2004.
[13] Y. Cao, P. P. B. Eggermont, and S. Terebey. Cross Burg entropy maximization and its application to ringing suppression in image reconstruction. IEEE Transactions on Image Processing, 8(2):286-292, Feb. 1999.
[14] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2008. ISBN-13: 978-0387310732.
[15] K. Katahira, K. Watanabe, and M. Okada. Deterministic annealing variant of variational Bayes method. In International Workshop on Statistical-Mechanical Informatics 2007 (IW-SMI 2007), 2007.
[16] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[17] D. J. C. MacKay. Probable networks and plausible predictions - a review of practical Bayesian models for supervised neural networks. Network: Computation in Neural Systems, 6(3):469-505, 1995.
[18] C. M. Bishop. Bayesian PCA. In Advances in Neural Information Processing Systems (NIPS), pages 382-388, 1999.
A reinterpretation of the policy oscillation
phenomenon in approximate policy iteration
Paul Wagner
Department of Information and Computer Science
Aalto University School of Science
PO Box 15400, FI-00076 Aalto, Finland
[email protected]
Abstract
A majority of approximate dynamic programming approaches to the reinforcement learning problem can be categorized into greedy value function methods and
value-based policy gradient methods. The former approach, although fast, is well
known to be susceptible to the policy oscillation phenomenon. We take a fresh
view to this phenomenon by casting a considerable subset of the former approach
as a limiting special case of the latter. We explain the phenomenon in terms of this
view and illustrate the underlying mechanism with artificial examples. We also
use it to derive the constrained natural actor-critic algorithm that can interpolate
between the aforementioned approaches. In addition, it has been suggested in the
literature that the oscillation phenomenon might be subtly connected to the grossly
suboptimal performance in the Tetris benchmark problem of all attempted approximate dynamic programming methods. We report empirical evidence against such
a connection and in favor of an alternative explanation. Finally, we report scores in
the Tetris problem that improve on existing dynamic programming based results.
1 Introduction
We consider the reinforcement learning problem in which one attempts to find a good policy for
controlling a stochastic nonlinear dynamical system. Many approaches to the problem are value-based and build on the methodology of simulation-based approximate dynamic programming [1, 2,
3, 4, 5]. In this setting, there is no fixed set of data to learn from, but instead the target system, or
typically a simulation of it, is actively sampled during the learning process. This learning setting is
often described as interactive learning (e.g., [1, §3]).
The majority of these methods can be categorized into greedy value function methods (critic-only)
and value-based policy gradient methods (actor-critic) (e.g., [1, 6]). The former approach, although
fast, is susceptible to potentially severe policy oscillations in the presence of approximations. This phenomenon is known as the policy oscillation (or policy chattering) phenomenon [7, 8]. The latter
approach has better convergence guarantees, with the strongest case being for Monte Carlo evaluation with “compatible” value function approximation. In this case, convergence w.p.1 to a local
optimum can be established under mild assumptions [9, 6, 4].
Bertsekas has recently called attention to the currently not well understood policy oscillation phenomenon [7]. He suggests that a better understanding of it is needed and that such understanding
“has the potential to alter in fundamental ways our thinking about approximate DP.” He also notes
that little progress has been made on this topic in the past decade. In this paper, we will try to shed
more light on this topic. The motivation is twofold. First, the policy oscillation phenomenon is intimately connected to some aspects of the learning dynamics at the very heart of approximate dynamic
An extended version of this paper is available at http://users.ics.tkk.fi/pwagner/.
programming; the lack of understanding in the former implies a lack of understanding in the latter.
In the long run, this state might well be holding back important theoretical developments in the field.
Second, methods not susceptible to oscillations have a much better suboptimality bound [7], which
gives also immediate value to a better understanding of oscillation-predisposing conditions.
The policy oscillation phenomenon is strongly associated in the literature with the popular Tetris
benchmark problem. This problem has been used in numerous studies to evaluate different learning
algorithms (see [10, 11]). Several studies, including [12, 13, 14, 11, 15, 16, 17], have been conducted using a standard set of features that were originally proposed in [12]. This setting has posed
considerable difficulties to some approximate dynamic programming methods. Impressively fast
initial improvement followed by severe degradation was reported in [12] using a greedy approximate policy iteration method. This degradation has been taken in the literature as a manifestation of
the policy oscillation phenomenon [12, 8].
Policy gradient and greedy approximate value iteration methods have shown much more stable behavior in the Tetris problem [13, 14], although it has seemed that this stability tends to come at the
price of speed (see esp. [13]). Still, the performance levels reached by even these methods fall way
short of what is known to be possible. The typical performance levels obtained with approximate
dynamic programming methods have been around 5,000 points [12, 8, 13, 16], while an improvement to around 20,000 points has been obtained in [14] by considerably lowering the discount factor.
On the other hand, performance levels between 300,000 and 900,000 points were obtained recently
with the very same features using the cross-entropy method [11, 15]. It has been hypothesized in
[7] that this grossly suboptimal performance of even the best-performing approximate dynamic programming methods might also have some subtle connection to the oscillation phenomenon. In this
paper, we will also briefly look into these potential connections.
The structure of the paper is as follows. After providing background in Section 2, we discuss the
policy oscillation phenomenon in Section 3 along with three examples, one of which is novel and
generalizes the others. We develop a novel view to the policy oscillation phenomenon in Sections 4
and 5. We validate the view also empirically in Section 6 and proceed to looking for the suggested
connection between the oscillation phenomenon and the convergence issues in the Tetris problem.
We report empirical evidence that indeed suggests a shared explanation to the policy degradation
observed in [12, 8] and the early stagnation of all the rest of the attempted approximate dynamic
programming methods. However, it seems that this explanation is not primarily related to the oscillation phenomenon but to numerical instability.
2 Background
A Markov decision process (MDP) is defined by a tuple M = (S, A, P, r), where S and A denote the state and action spaces. St ∈ S and At ∈ A denote random variables on time t, and s, s′ ∈ S and a, b ∈ A denote state and action instances. P(s, a, s′) = P(St+1 = s′ | St = s, At = a) defines the transition dynamics and r(s, a) ∈ ℝ defines the expected immediate reward function. A (soft-)greedy policy π̃(a|s, Q) is a (stochastic) mapping from states to actions and is based on the value function Q. A parameterized policy π(a|s, θ) is a stochastic mapping from states to actions and is based on the parameter vector θ. Note that we use π̃ to denote a (soft-)greedy policy, not an optimal policy. The action value functions Q(s, a) and A(s, a) are estimators of the γ-discounted cumulative reward Σt γ^t E[r(St, At) | S0 = s, A0 = a, π] that follows some (s, a) under some π. The state value function V is an estimator of such cumulative reward that follows some s.
In policy iteration, the current policy is fully evaluated, after which a policy improvement step is
taken based on this evaluation. In optimistic policy iteration, policy improvement is based on an
incomplete evaluation. In value iteration, just a one-step lookahead improvement is made at a time.
In greedy value function reinforcement learning (e.g., [2, 3]), the current policy on iteration k is
usually implicit and is greedy (and thus deterministic) with respect to the value function Qk−1 of the previous policy:

π̃(a|s, Qk−1) = 1 if a = arg max_b Qk−1(s, b), and 0 otherwise.  (1)
Improvement is obtained by estimating a new value function Qk for this policy, after which the
process repeats. Soft-greedy iteration is obtained by slightly softening π̃ in some way so that π̃(a|s, Qk−1) > 0, ∀a, s, the Gibbs soft-greedy policy class with a temperature τ (Boltzmann exploration) being a common choice:

π̃(a|s, Qk−1) ∝ e^(Qk−1(s,a)/τ) .  (2)

We note that (1) becomes approximated by (2) arbitrarily closely as τ → 0 and that this corresponds to scaling the action values toward infinity.
A common choice for approximating Q is to obtain a least-squares fit using a linear-in-parameters approximator Q̂ with the feature basis φ̄:

Q̂k(s, a, wk) = wk⊤ φ̄(s, a) ≈ Qk(s, a) .  (3)
For the soft-greedy case, one option is to use an approximator that will obtain an approximation of an advantage function (see [9]):¹

Âk(s, a, wk) = wk⊤ ( φ̄(s, a) − Σb π̃(b|s, Âk−1) φ̄(s, b) ) ≈ Ak(s, a) .  (4)
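The feature construction in (4) simply centers the basis under the current policy, so that the fitted function has zero mean over actions in every state. A small sketch of this construction and the least-squares fit, under assumed tabular shapes (states × actions × features) and noiseless targets for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 5, 3, 4                            # states, actions, feature dimension (illustrative)

phi = rng.normal(size=(S, A, d))             # basis phi(s, a)
pi = np.full((S, A), 1.0 / A)                # current policy (uniform for simplicity)

# Centered basis of Eq. (4): psi(s,a) = phi(s,a) - sum_b pi(b|s) phi(s,b)
mean_phi = np.einsum('sa,sad->sd', pi, phi)  # E_pi[phi(s, .)]
psi = phi - mean_phi[:, None, :]

# Least-squares fit A_hat(s,a) = w^T psi(s,a); noiseless targets, so the fit is exact.
w_true = rng.normal(size=d)
targets = psi @ w_true
w, *_ = np.linalg.lstsq(psi.reshape(-1, d), targets.reshape(-1), rcond=None)
```

By construction Σb π(b|s) ψ(s, b) = 0, so the fitted Â has zero expectation under π in every state, which is what makes it an advantage rather than a Q estimate.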
Convergence properties depend on how the estimation is performed and on the function approximator class with which Q is being approximated. For greedy approximate policy iteration in the
general case, policy convergence is guaranteed only up to bounded sustained oscillation [2]. Optimistic variants can permit asymptotic convergence in parameters, although the corresponding policy
can manifest sustained oscillation even then [8, 2, 7]. For the case of greedy approximate value
iteration, a line of research has provided solid (although restrictive) conditions for the approximator
class for having asymptotic parameter convergence (reviewed in, e.g., [3]), whereas the question of
policy convergence in these cases has been left quite open. In the rest of the paper, our focus will be
on non-optimistic approximate policy iteration.
In policy gradient reinforcement learning (e.g., [9, 6, 4, 5]), the current policy on iteration k is
explicitly represented using some differentiable stochastic policy class π(θ), the Gibbs policy with some basis φ being a common choice:

π(a|s, θ) ∝ e^(θ⊤ φ(s,a)) .  (5)
Improvement is obtained via stochastic gradient ascent: θk+1 = θk + αk ∂J(θk)/∂θ. In actor-critic (value-based policy gradient) methods that implement a policy gradient based approximate policy iteration scheme, the so-called “compatibility condition” is fulfilled if the value function is approximated using (4) with φ̄ = φ and π(θk) in place of π̃(Âk−1) (e.g., [9]). In this case, the value function parameter vector w becomes the natural gradient estimate ∇̃ for the policy π(a|s, θ), leading to the natural actor-critic algorithm [13, 4]:

∇̃ = w .  (6)

Here, convergence w.p.1 to a local optimum is established for Monte Carlo evaluation under standard assumptions (properly diminishing step-sizes and ergodicity of the sampling process, roughly speaking) [9, 6]. Convergence into bounded suboptimality is obtained under temporal difference evaluation [6, 5].
3 The policy oscillation phenomenon
It is well known that greedy policy iteration can be non-convergent under approximations. The
widely used projected equation approach can manifest convergence behavior that is complex and not
well understood, including bounded but potentially severe sustained policy oscillations [7, 8, 18].
Similar consequences arise in the context of partial observability for approximate or incomplete state
estimation (e.g., [19, 20, 21]).
It is important to remember that sustained policy oscillation can take place even under (asymptotic)
value function convergence (e.g., [7, 8]). Policy convergence can be established under various restrictions. Continuously soft-greedy action selection (which is essentially a step toward the policy
¹ The approach in [4] is needed to permit temporal difference evaluation in this case.
gradient approach) has been found to have a beneficial effect in cases of both value function approximation and partial observability [22]. A notable approach is introduced in [7] wherein it is
also shown that the suboptimality bound for a converging policy sequence is much better. Interestingly, for the special case of Monte Carlo estimation of action values, it is also possible to establish
convergence by solely modifying the exploration scheme, which is known as consistent exploration
[23] or MCESP [24].
Figure 1: Oscillatory examples. Boxes marked with yk denote observations (aggregated states).
Circles marked with wk illustrate receptive fields of the basis functions. Only non-zero rewards are
shown. Start states: s1 (1.1), s0 (1.2), and s1 (1.3). Arrows leading out indicate termination.
The setting likely to be the simplest possible in which oscillation occurs even with Monte Carlo policy evaluation is depicted in Figure 1.1 (adapted from [21]). The actions al and ar are available in the decision states s1 and s2. Both states are observed as y1. The only reward is obtained with the decision sequence (s1, al; s2, ar). Greedy value function methods that operate without state estimation will oscillate between the policies π(y1) = al and π(y1) = ar, excluding the exceptions mentioned above. This example can also be used to illustrate how local optima can arise in the presence of approximations by changing the sign of the reward that follows (s2, ar) (see [20]). Figure 1.2 (adapted from [25]) shows a more complex case in which a deterministic optimal solution is attainable. The actions a1, a2, and a3 are available in the only decision state s0, which is observed as y0. Oscillation will occur when using temporal difference evaluation but not with Monte Carlo evaluation. These two POMDP examples are trivially equivalent to value function approximation using hard aggregation. Figure 1.3 (a novel example inspired by the classical XOR problem) shows how similar counterexamples can be constructed also for the case of softer value function approximation. The action values are approximated with Q̂(s1, al) = w1,l, Q̂(s2, al) = 0.5w1,l + 0.5w2,l, Q̂(s3, al) = w2,l, Q̂(s1, ar) = w1,r, Q̂(s2, ar) = 0.5w1,r + 0.5w2,r, and Q̂(s3, ar) = w2,r. The only reward is obtained with the decision sequence (s1, al; s2, ar; s3, al). Oscillation will occur even with Monte Carlo evaluation. For other examples, see e.g. [8, 19].
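The oscillation in Figure 1.1 can be reproduced in closed form. Assuming Monte Carlo evaluation mixes the two underlying states of y1 in proportion to their visit frequencies (a modeling simplification for illustration, not a derivation from the paper), the aggregated action values seen by the greedy step flip the policy on every iteration:

```python
def aggregated_q(p):
    # Action values observed through the aggregated observation y1, where
    # p = Pr(a_l | y1). Underlying values: Q(s1,a_l)=1-p, Q(s1,a_r)=0,
    # Q(s2,a_l)=0, Q(s2,a_r)=1; y1 is visited as s1 (always) and s2 (w.p. p).
    w1, w2 = 1.0, p                          # relative visit frequencies of s1, s2
    q_al = (w1 * (1.0 - p) + w2 * 0.0) / (w1 + w2)
    q_ar = (w1 * 0.0 + w2 * 1.0) / (w1 + w2)
    return q_al, q_ar

eps = 0.05                                   # soft-greedy exploration level
p, choices = 1.0 - eps, []
for _ in range(6):
    q_al, q_ar = aggregated_q(p)
    p = 1.0 - eps if q_al > q_ar else eps    # (soft-)greedy improvement step
    choices.append('a_l' if p > 0.5 else 'a_r')
print(choices)                               # alternates: a_r, a_l, a_r, ...
```

Each greedy switch makes the previously unrewarded action look attractive again through the other hidden state, so the process never settles; the stochastic policy p = 1/2 is the attractor, and it is unreachable for a (near-)deterministic policy class.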
A detailed description of the oscillation phenomenon can be found in [8, §6.4] (see also [12, 7]),
where it is described in terms of cyclic sequences in the so-called greedy partition of the value
function parameter space. Although this line of research has provided a concise description of
the phenomenon, it has not fully answered the question of why approximations can introduce such
cyclic sequences in greedy policy iteration and why strong convergence guarantees exist for the
policy gradient based counterpart of this methodology. We will proceed by taking a different view
by casting a considerable subset of the former approach as a special case of the latter.
4 Approximations and attractive stochastic policies
In this section, we briefly and informally examine how policy oscillation arises in the examples in
Section 3. In all cases, oscillation is caused by the presence of an attractive stochastic policy, these
attractors being induced by approximations. In the case of partial observability without proper state
estimation (Figure 1.1), the policy class is incapable of representing differing action distributions
for the same observation with differing histories. This makes the optimal sequence (y1 , al ; y1 , ar )
inexpressible for deterministic policies, whereas a stochastic policy can still emit it every now and
then by chance. In Figure 1.3, the same situation is arrived at due to the insufficient capacity of the
approximator: the specified value function approximator cannot express such value estimates that
would lead to an implicit greedy policy that attains the optimal sequence (s1 , al ; s2 , ar ; s3 , al ). Generally speaking, in these cases, oscillation follows from a mismatch between the main policy class
and the exploration policy class: stochastic exploration can occasionally reach the reward, but the
deterministic main policy is incapable of exploiting this opportunity. The opportunity nevertheless
keeps appearing in the value function, leading to repeated failing attempts to exploit it. Consistent
exploration avoids the problem by limiting exploration to only expressible sequences.
Temporal difference evaluation effectively solves for an implicit Markov model [26], i.e., it gains
variance reduction in policy evaluation by making the Markov assumption. When this assumption
fails, the value function shows non-existent improvement opportunities. In Figure 1.2, an incorrect
Markov assumption leads to a TD solution that corresponds to a model in which, e.g., the actually
impossible sequence (y0, a2, r = 0; y2, ·, r = +1; end) becomes possible and attractive. Generally
speaking, oscillation results in this case from perceived but non-existent improvement opportunities
that vanish once an attempt is made to exploit them. This vanishing is caused by changes in the
sampling distribution that leads to a different implicit Markov model and, consequently, to a different
fixed point (see [27, 18]).
In summary, stochastic policies can become attractive due to deterministically unreachable or completely non-existing improvement opportunities that appear in the value function. In all cases, the
class of stochastic policies allows gradually increasing the attempt of exploitation of such an opportunity until it is either optimally exploited or it has vanished enough so as to have no advantage over
alternatives, at which point a stochastic equilibrium is reached.
5 Policy oscillation as sustained overshooting
In this section, we focus more carefully on how attractive stochastic policies lead to sustained policy
oscillation when viewed within the policy gradient framework. We begin by looking at a natural
actor-critic algorithm that uses the Gibbs policy class (5). We iterate by fully estimating Âk in (4) for the current policy π(θk), as shown in [4], and then a gradient update is performed using (6):

θk+1 = θk + α ∇̃k .  (7)
Now let us consider some policy π(θk) from such a policy sequence generated by (7) and denote the corresponding value function estimate by Âk and the natural gradient estimate by ∇̃k. It is shown in [13] that taking a very long step in the direction of the natural gradient ∇̃k will approach in the limit a greedy update (1) for the value function Âk:

lim(α→∞) π(a|s, θk + α∇̃k) = π̃(a|s, Âk) ,  θk ↛ ∞, ∇̃k ≠ 0, ∀s, a .  (8)
The resulting policy will have the form

π(a|s, θk + α∇̃k) ∝ e^(θk⊤ φ(s,a) + α ∇̃k⊤ φ(s,a)) .  (9)

The proof in [13] is based on the term α ∇̃k⊤ φ(s, a) dominating the sum when α → ∞. Thus, this type of a greedy update is a special case of a natural gradient update in which the step-size approaches infinity.
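The dominance argument behind (8) and (9) is easy to check numerically: with a large enough step-size, the Gibbs distribution collapses onto the action maximizing ∇̃k⊤ φ, i.e., the greedy policy with respect to the compatible value estimate. A sketch with made-up numbers (one feature per action, so θ⊤ φ(s, a) reduces to the a-th component of θ):

```python
import numpy as np

def softmax(x):
    z = x - np.max(x)                    # stabilized Gibbs distribution
    e = np.exp(z)
    return e / e.sum()

theta = np.array([1.0, -0.5, 0.3])       # current policy parameters
nat_grad = np.array([0.2, 0.7, -0.1])    # natural gradient = advantage weights w

print(softmax(theta + 1.0 * nat_grad))   # moderate step: still stochastic
print(softmax(theta + 1e6 * nat_grad))   # huge step: ~deterministic at argmax of nat_grad
```

With the huge step, the bounded θ contributes nothing to the ordering, so the result is the greedy policy with respect to ∇̃ alone, exactly the mechanism the limit in (8) formalizes.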
However, the requirement that θk ↛ ∞ will hold only during the first step using a constant α ≈ ∞, assuming a bounded initial θ. Thus, natural gradient based policy iteration using such a very large but constant step-size does not approach greedy value function based policy iteration after the first such iteration. Little is needed, however, to make the equality apply in the case of full policy iteration. The cleanest way, in theory, is to use a steeply increasing step-size schedule.

Theorem 1. Let π(θk) denote the kth policy obtained from (7) using the step-sizes α[0,k−1] and natural gradients ∇̃[0,k−1]. Let π̃(wk) denote the kth policy obtained from (1) with infinitely small added softness and using a value function (4), with φ̄ = φ and Â(w0) being evaluated for π(θ0). Assume θ0 to be bounded from infinity. Assume all ∇̃k to be bounded from zero and infinity. If α0 → ∞ and αk/αk−1 → ∞, ∀k > 0, then π(θk+1) =lim π̃(wk).
Proof. The equivalence after the first iteration is proven in [13] with the requirement that the sum in (9) is dominated by the last term α0 ∇̃0⊤ φ(s, a). For α0 → ∞, this holds if θ0 is bounded and ∇̃0 ≠ 0. By writing the parameter vector after the second iteration as θ2 = θ0 + α0 ∇̃0 + α1 ∇̃1, the sum becomes

θ0⊤ φ(s, a) + α0 ∇̃0⊤ φ(s, a) + α1 ∇̃1⊤ φ(s, a) .  (10)

The requirement for the result in [13] to still apply is that the last term keeps dominating the sum. Assuming θ0 ↛ ∞, ∇̃0 ↛ ∞, and ∇̃1 ≠ 0, then this condition is maintained if α1 → ∞ and α1/α0 → ∞. That is, the step-size in the second iteration needs to approach infinity much faster than the step-size in the first iteration. The rest follows by induction.
However, once the first update is performed using such a very large step-size, it is no longer possible
to revert back to more conventional step-size schedules: once θ has become very large, any update performed using a much smaller α will have virtually no effect on the policy. In the following, a
more practical alternative is discussed that both avoids the related numerical issues and that allows
gradual interpolation back toward conventional policy gradient iteration. It also makes it easier to
illustrate the resulting process, which we will do shortly. However, a slight modification to the
natural actor-critic algorithm is required.
More precisely, we constrain the magnitude of θ by enforcing ‖θ‖ ≤ c after each update, where ‖θ‖ is some measure of the magnitude of θ and c is some positive constant. Here the update equation (7) is replaced by:

θk+1 = θk + α∇̃k  if ‖θk + α∇̃k‖ ≤ c,  and  θk+1 = βc (θk + α∇̃k)  otherwise,  (11)

where βc = c / ‖θk + α∇̃k‖.
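Update (11) is a norm projection after an ordinary natural-gradient step. A minimal sketch (illustrative values; the Euclidean norm is one concrete choice for ‖·‖):

```python
import numpy as np

def cnac_update(theta, nat_grad, alpha, c):
    # One step of Eq. (11): natural-gradient update followed by the norm constraint.
    theta_new = theta + alpha * nat_grad
    norm = np.linalg.norm(theta_new)
    if norm > c:
        theta_new = (c / norm) * theta_new   # beta_c = c / ||theta_k + alpha * nat_grad||
    return theta_new

theta = np.array([3.0, 4.0])
step = cnac_update(theta, np.array([0.0, 1.0]), alpha=1.0, c=2.0)
print(step, np.linalg.norm(step))            # rescaled back onto the radius-c sphere
```

With a very large α and finite c this behaves like soft-greedy iteration at temperature τ = 1/c, since only the direction of θ survives the projection; lowering α interpolates back toward an ordinary natural-gradient step.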
Theorem 2. Let π(θk) and π̃(wk) be as previously, except that (7) is replaced with (11). Let Â(w0) be evaluated for π(θ0). Assume θ0 ↛ ∞ and all ∇̃k ≠ 0. If c → ∞ and α/c → ∞, then π(θk+1) =lim π̃(wk).

Proof. The proof in [13] for a single iteration of the unmodified algorithm requires that the last term of the sum in (9) dominates. This holds if α/‖θk‖ → ∞ and ∇̃k ≠ 0. This is ensured during the first iteration by having θ0 ↛ ∞. After the kth iteration, ‖θk‖ ≤ c due to the constraint in (11), and the last term will dominate as long as α/c → ∞ and ∇̃k ≠ 0.
The constraint affects the policy π(θk+1) only when ‖θk + α∇̃k‖ > c, in which case the magnitude of the parameter vector is scaled down with a factor βc so that it becomes equal to c. This has a diminishing effect on the resulting policy as c → ∞ because the Gibbs distribution becomes increasingly insensitive to scaling of the parameter vector when its magnitude approaches infinity:

lim π(βc θ) = π(θ) ,  ∀βc, θ such that ‖βc θ‖ → ∞, ‖θ‖ → ∞ .  (12)
With a constant α ≈ ∞ and finite c, the resulting constrained natural actor-critic algorithm (CNAC) is analogous to soft-greedy iteration in which on-policy Boltzmann exploration with temperature τ = 1/c is used: constraining the magnitude of θ will effectively ensure some minimum level of stochasticity in the corresponding policy (there is a mismatch between the algorithms even for τ = 1/c whenever ‖θ‖ ≠ 1). If the soft-greedy method uses (4) for policy evaluation, then exact equivalence in the limit is obtained when c → ∞ while maintaining α/c → ∞. Lowering α interpolates toward a conventional natural gradient method. These considerations apply also for (3) in place of (4) in the soft-greedy method if the indices of the maximizing actions become estimated equally in both cases: arg max_a Â(s, a, wA) = arg max_b Q̂(s, b, wQ), ∀s.
Greedy policy iteration searches in the space of deterministic policies. As noted, the sequence of
greedy policies that is generated by such a process can be approximated arbitrarily closely with the
Gibbs policy class (2) with τ → 0. For this class, the parameters of all deterministic policies lie at
infinity in different directions in the parameter space, whereas stochastic policies are obtained with
finite parameter values (except for vanishingly narrow directions along diagonals). From this point
of view, the search is conducted on the surface of an ∞-radius sphere: each iteration performs a
jump from infinity in one direction to infinity in some other direction. Based on Theorems 1 and 2,
we observe that the policy sequence that results from these jumps can be approximated arbitrarily
closely with a natural actor-critic method using very large step-sizes.
The soundness of such a process obviously requires some special structure for the gradient landscape. In informal terms, what suffices is that the performance landscape has a monotonically
increasing profile up to infinity in the direction of a gradient that is estimated at any point. This
condition is established if all attractors in the parameter space reside at infinity and if the gradient field is not, loosely speaking, too “curled”. Although we ignore the latter requirement, we note
that the former requirement is satisfied when only deterministic attractors exist in the Gibbs policy space. This condition holds when the problem is fully Markovian and the value function is
represented exactly, which leads to the standard result for MDPs stating that there always exists a
deterministic policy that is globally optimal, that there are no locally optimal policies and that any
potential stochastic optimal policies are not attractive (e.g., [1, §A.2]). However, when these conditions do not hold and there is an attractor in the policy space that corresponds to a stochastic policy, there is a finite attractor in the parameter space that resides inside the ∞-radius sphere.
Figure 2: Performance landscapes and estimated gradient fields for the examples in Figure 1.
The required special structure is clearly visible in Figure 2.1a, in which the performance landscape
and the gradient field are shown for the fully observable (and thus Markovian) counterpart of the
problem from Figure 1.1. This structure can be seen also in Figure 2.2a, in which the problem from
Figure 1.2 is evaluated using Monte Carlo evaluation. The redundant parameters θs1,al and θs2,al in the former and θy0,a3 in the latter were fixed to zero. In these cases, movement in the direction of
the natural gradient keeps improving performance up to infinity, i.e., there are no finite optima in the
way. This structure is clearly lost in Figure 2.1b, which shows the evaluation for the non-Markovian
problem from Figure 1.1. The same holds for the temporal differences based gradient field for the
problem from Figure 1.2 that is shown in Figure 2.2b. In essence, the sustained policy oscillation
that results from using very large step-sizes or greedy updates in the latter two cases (2.1b and 2.2b)
is caused by sustained overshooting over the finite attractor in the policy parameter space.
Another implication of the equivalence between very long natural gradient updates and greedy updates is that, contrary to what is sometimes suggested in the literature, the natural actor-critic approach has an inherent capability for a speed that is comparable to parametric greedy approaches
with linear-in-parameters value function approximation. This is because whatever initial improvement speed can be achieved with the latter due to greedy updates, the same speed can be also
achieved with the former using the same basis together with very long steps and constraining. This
effectively corresponds to an attempt to exploit whatever remains of the special structure of a Markovian problem, making the use of a very large α in constrained policy improvement analogous to using a small τ in policy evaluation. Constraining ‖θ‖ also enables interpolating back toward conventional natural policy gradient learning (in addition to offering a crude way of maintaining explorativity): in cases of partial Markovianity, very long (near-infinite) natural gradient steps can be
used to quickly find the rough direction of the strongest attractors, after which gradually decreasing
the step-size allows an ascent toward some finite attractor.
6 Empirical results
In this section, we apply several variants of the natural actor-critic algorithm and some greedy policy
iteration algorithms to the Tetris problem using the standard features from [12]. For policy improvement, we use the original natural actor-critic (NAC) from [4], a constrained one (CNAC) that uses
(11) and a very large α, and a soft-greedy policy iteration algorithm (SGPI) that uses (2). For policy
evaluation, we use LSPE [28] and an SVD-based batch solver (pinv). The B matrix in LSPE was
initialized to 0.5I and the policy was updated after every 100th episode. We used the advantage
estimator from (4) unless otherwise stated. We used a simple initial policy (θmaxh = θholes = −3)
that scores around 20 points.
[Figure 3 panels (average score vs. iteration): SGPI with τ ∈ {0.1, 0.01, 0.001}; CNAC with c ∈ {10, 50, 500}; NAC with α ∈ {50, 75, 1000, 10000}; SGPI, CNAC, and NAC (α = 500, α = 10^10) using the Q estimator from (3); NAC scores and condition numbers for γ ∈ {0.8, 0.9, 0.975, 1}.]
Figure 3: Empirical results for the Tetris problem. See the text for details.
Figure 3.1 shows that with soft-greedy policy iteration (SGPI), it is in fact possible to avoid policy
degradation by using a suitable amount of softening. Results for constrained natural actor-critic
(CNAC) with α = 10^10 are shown in Figure 3.2. The algorithm can indeed emulate greedy updates
(SGPI) and the associated policy degradation. Unconstrained natural actor-critic (NAC), shown in
Figure 3.3, failed to match the behavior and speed of SGPI and CNAC with any step-size (only
selected step-sizes are shown). Results for all algorithms when using the Q estimator in (3) are
shown in Figure 3.4 (technically, CNAC and NAC are not using a natural gradient now). SGPI and
CNAC match perfectly while reaching transiently a level around 50,000 points in just 2 iterations.
We did observe a presence of oscillation-predisposing structure during several runs. There were
optima at finite parameter values along several consecutively estimated gradient directions, but these
optima did not usually form closed basins of attraction in the full parameter space. At such points,
the performance landscape was reminiscent of what was illustrated in Figure 2.1b, except that there
was a tendency for a slope toward an open end of the valley (ridge) at finite distance. As a result,
oscillations were mainly transient with suitably chosen learning parameter values.
However, a commonality among all the algorithms was that the relevant matrices became quickly
highly ill-conditioned. This was the case especially when using (4), with which condition numbers
were typically above 10^9 upon convergence/stagnation. Figures 3.5 and 3.6 show performance levels
and typical condition numbers for NAC with different discount factors. It can be seen that the inferior
results obtained with a too high γ (cf. [14, 12]) are associated with serious ill-conditioning.
In contrast to typical approximate dynamic programming methods, the cross-entropy method involves numerically more stable computations and, moreover, the computations are based on information from a distribution of policies. Currently, we expect that the policy oscillation or chattering
phenomenon is not the main cause of either policy degradation or stagnation in this problem. Instead, it seems that, for both greedy and gradient approaches, the explanation is related to numerical
instabilities that stem possibly both from the involved estimators and from insufficient exploration.
Acknowledgments
We thank Lucian Buşoniu and Dimitri Bertsekas for valuable discussions. This work has been
financially supported by the Academy of Finland through the Centre of Excellence Programme.
References
[1] C. Szepesvári. Algorithms for reinforcement learning. Morgan & Claypool Publishers, 2010.
[2] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2005.
[3] L. Buşoniu, R. Babuška, B. De Schutter, and D. Ernst. Reinforcement learning and dynamic programming using function approximators. CRC Press, 2010.
[4] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180–1190, 2008.
[5] S. Bhatnagar, R. S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[6] V. R. Konda and J. N. Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143–1166, 2004.
[7] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Technical report, Massachusetts Institute of Technology, Cambridge, US, 2010.
[8] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996.
[9] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, 2000.
[10] C. Thiery and B. Scherrer. Building Controllers for Tetris. ICGA Journal, 32(1):3–11, 2009.
[11] I. Szita and A. Lőrincz. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12):2936–2941, 2006.
[12] D. P. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Technical report, Massachusetts Institute of Technology, Cambridge, US, 1996.
[13] S. M. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, 2002.
[14] M. Petrik and B. Scherrer. Biasing approximate dynamic programming with a lower discount factor. In Advances in Neural Information Processing Systems, 2008.
[15] C. Thiery and B. Scherrer. Improvements on learning Tetris with cross entropy. ICGA Journal, 32(1):23–33, 2009.
[16] V. Farias and B. Roy. Tetris: A study of randomized constraint sampling. In Probabilistic and Randomized Methods for Design Under Uncertainty, pages 189–201. Springer, 2006.
[17] V. Desai, V. Farias, and C. Moallemi. A smoothed approximate linear program. In Advances in Neural
Information Processing Systems, 2009.
[18] G. J. Gordon. Reinforcement learning with function approximation converges to a region. In Advances
in Neural Information Processing Systems, 2001.
[19] S. P. Singh, T. Jaakkola, and M. I. Jordan. Learning without state-estimation in partially observable
markovian decision processes. In Proceedings of the Eleventh International Conference on Machine
Learning, volume 31, page 37, 1994.
[20] M. D. Pendrith and M. J. McGarity. An analysis of direct reinforcement learning in non-markovian
domains. In Proceedings of the Fifteenth International Conference on Machine Learning, 1998.
[21] T. J. Perkins. Action value based reinforcement learning for POMDPs. Technical report, University of
Massachusetts, Amherst, MA, USA, 2001.
[22] T. J. Perkins and D. Precup. A convergent form of approximate policy iteration. In Advances in Neural
Information Processing Systems, 2003.
[23] P. A. Crook and G. Hayes. Consistent exploration improves convergence of reinforcement learning on
POMDPs. In AAMAS 2007 Workshop on Adaptive and Learning Agents, 2007.
[24] T. J. Perkins. Reinforcement learning for POMDPs based on action values and stochastic optimization. In
Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 199?204. American
Association for Artificial Intelligence, 2002.
[25] G. J. Gordon. Chattering in SARSA(?). Technical report, Carnegie Mellon University, Pittsburgh, PA,
USA, 1996.
[26] R. Parr, L. Li, G. Taylor, C. Painter-Wakefield, and M. L. Littman. An analysis of linear models, linear
value-function approximation, and feature selection for reinforcement learning. In Proceedings of the
25th International Conference on Machine learning, pages 752?759. ACM, 2008.
[27] D. P. Bertsekas and H. Yu. Q-learning and enhanced policy iteration in discounted dynamic programming.
In Decision and Control (CDC), 2010 49th IEEE Conference on, pages 1409?1416. IEEE, 2010.
[28] A. Nedi?c and D. P. Bertsekas. Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems: Theory and Applications, 13(1?2):79?110, 2003.
Lei Yuan
Arizona State University
Tempe, AZ, 85287
[email protected]
Jun Liu
Arizona State University
Tempe, AZ, 85287
[email protected]
Jieping Ye
Arizona State University
Tempe, AZ, 85287
[email protected]
Abstract
The group Lasso is an extension of the Lasso for feature selection on (predefined)
non-overlapping groups of features. The non-overlapping group structure limits
its applicability in practice. There have been several recent attempts to study a
more general formulation, where groups of features are given, potentially with
overlaps between the groups. The resulting optimization is, however, much more
challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal
several key properties of the proximal operator associated with the overlapping
group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms
for the optimization. We have performed empirical evaluations using both synthetic and the breast cancer gene expression data set, which consists of 8,141
genes organized into (overlapping) gene sets. Experimental results show that the
proposed algorithm is more efficient than existing state-of-the-art algorithms.
1 Introduction
Problems with high dimensionality have become common over the recent years. The high dimensionality poses significant challenges in building interpretable models with high prediction accuracy.
Regularization has been commonly employed to obtain more stable and interpretable models. A
well-known example is the penalization of the ℓ1 norm of the estimator, known as Lasso [25]. The ℓ1 norm regularization has achieved great success in many applications. However, in some applications [28], we are interested in finding important explanatory factors in predicting the response variable, where each explanatory factor is represented by a group of input features. In such cases, the selection of important features corresponds to the selection of groups of features. As an extension of Lasso, group Lasso [28] based on the combination of the ℓ1 norm and the ℓ2 norm has been
proposed for group feature selection, and quite a few efficient algorithms [16, 17, 19] have been
proposed for efficient optimization. However, the non-overlapping group structure in group Lasso
limits its applicability in practice. For example, in microarray gene expression data analysis, genes
may form overlapping groups as each gene may participate in multiple pathways [12].
Several recent work [3, 12, 15, 18, 29] studies the overlapping group Lasso, where groups of features
are given, potentially with overlaps between the groups. The resulting optimization is, however,
much more challenging to solve due to the group overlaps. When optimizing the overlapping group
Lasso problem, one can reformulate it as a second order cone program and solve it by a generic
toolbox, which, however, does not scale well. Jenatton et al. [13] proposed an alternating algorithm
called SLasso for solving the equivalent reformulation. However, SLasso involves an expensive
matrix inversion at each alternating iteration, and there is no known global convergence rate for
such an alternating procedure. A reformulation [5] was also proposed such that the original problem
can be solved by the Alternating Direction Method of Multipliers (ADMM), which involves solving
a linear system at each iteration, and may not scale well for high dimensional problems. Argyriou
et al. [1] adopted the proximal gradient method for solving the overlapping group lasso, and a
fixed point method was developed to compute the proximal operator. Chen et al. [6] employed a
smoothing technique to solve the overlapping group Lasso problem. Mairal [18] proposed to solve
the proximal operator associated with the overlapping group Lasso defined as the sum of the ℓ∞ norms, which, however, is not applicable to the overlapping group Lasso defined as the sum of the ℓ2 norms considered in this paper.
In this paper, we develop an efficient algorithm for the overlapping group Lasso penalized problem
via the accelerated gradient descent method. The accelerated gradient descent method has recently
received increasing attention in machine learning due to the fast convergence rate even for nonsmooth convex problems. One of the key operations is the computation of the proximal operator
associated with the penalty. We reveal several key properties of the proximal operator associated
with the overlapping group Lasso penalty, and proposed several possible reformulations that can
be solved efficiently. The main contributions of this paper include: (1) we develop a low-cost pre-processing procedure to identify (and then remove) zero groups in the proximal operator, which
dramatically reduces the size of the problem to be solved; (2) we propose one dual formulation
and two proximal splitting formulations for the proximal operator; (3) for the dual formulation, we
further derive the duality gap which can be used to check the quality of the solution and determine
the convergence of the algorithm. We have performed empirical evaluations using both synthetic
data and the breast cancer gene expression data set, which consists of 8,141 genes organized into
(overlapping) gene sets. Experimental results demonstrate the efficiency of the proposed algorithm
in comparison with existing state-of-the-art algorithms.
Notations: ‖·‖ denotes the Euclidean norm, and 0 denotes a vector of zeros. SGN(·) and sgn(·) are defined component-wise as: 1) if t = 0, then SGN(t) = [−1, 1] and sgn(t) = 0; 2) if t > 0, then SGN(t) = {1} and sgn(t) = 1; and 3) if t < 0, then SGN(t) = {−1} and sgn(t) = −1. G_i ⊆ {1, 2, . . . , p} denotes an index set, and x_{G_i} denotes the sub-vector of x restricted to G_i.
2 The Overlapping Group Lasso
We consider the following overlapping group Lasso penalized problem:
    min_{x∈R^p} f(x) = l(x) + φ_{λ1,λ2}(x)    (1)

where l(·) is a smooth convex loss function, e.g., the least squares loss,

    φ_{λ1,λ2}(x) = λ1 ‖x‖_1 + λ2 Σ_{i=1}^g w_i ‖x_{G_i}‖    (2)
is the overlapping group Lasso penalty, λ1 ≥ 0 and λ2 ≥ 0 are regularization parameters, w_i > 0, i = 1, 2, . . . , g, G_i ⊆ {1, 2, . . . , p} contains the indices corresponding to the i-th group of features, and ‖·‖ denotes the Euclidean norm. Note that the first term in (2) can be absorbed into the second term, which however would introduce p additional groups. The g groups of features are pre-specified, and they may overlap. The penalty in (2) is a special case of the more general Composite Absolute Penalty (CAP) family [29]. When the groups are disjoint with λ1 = 0 and λ2 > 0, the model in (1) reduces to the group Lasso [28]. If λ1 > 0 and λ2 = 0, then the model in (1) reduces to the standard Lasso [25].
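The penalty in (2) is straightforward to evaluate directly. The following sketch (helper name and test values are illustrative, NumPy assumed) computes φ_{λ1,λ2}(x) for possibly overlapping index groups:

```python
import numpy as np

def overlapping_group_lasso_penalty(x, groups, weights, lam1, lam2):
    """Evaluate phi(x) = lam1*||x||_1 + lam2 * sum_i w_i*||x_{G_i}||_2
    for (possibly overlapping) index groups G_i."""
    l1_part = lam1 * np.sum(np.abs(x))
    group_part = lam2 * sum(
        w * np.linalg.norm(x[np.asarray(g)]) for g, w in zip(groups, weights)
    )
    return l1_part + group_part

# Two overlapping groups sharing feature 2.
x = np.array([3.0, -4.0, 0.0, 1.0])
groups = [[0, 1, 2], [2, 3]]
weights = [1.0, 1.0]
val = overlapping_group_lasso_penalty(x, groups, weights, lam1=0.5, lam2=1.0)
# l1 part: 0.5*8 = 4; group part: ||(3,-4,0)|| + ||(0,1)|| = 5 + 1 = 6; total 10.
```

Note that a feature shared by two groups (here, index 2) is counted once in every group norm it belongs to, which is what makes the penalty non-separable across groups.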
In this paper, we propose to make use of the accelerated gradient descent (AGD) [2, 21, 22] for solving (1), due to its fast convergence rate. The algorithm is called "FoGLasso", which stands for Fast overlapping Group Lasso. One of the key steps in the proposed FoGLasso algorithm is the computation of the proximal operator associated with the penalty in (2); we present an efficient algorithm for the computation in the next section.
In FoGLasso, we first construct a model for approximating f(·) at the point x as:

    f_{L,x}(y) = [l(x) + ⟨l′(x), y − x⟩] + φ_{λ1,λ2}(y) + (L/2)‖y − x‖²    (3)

where L > 0. The model f_{L,x}(y) consists of the first-order Taylor expansion of the smooth function l(·) at the point x, the non-smooth penalty φ_{λ1,λ2}(y), and a regularization term (L/2)‖y − x‖². Next, a sequence of approximate solutions {x_i} is computed as follows: x_{i+1} = arg min_y f_{L_i,s_i}(y), where the search point s_i is an affine combination of x_{i−1} and x_i as s_i = x_i + β_i(x_i − x_{i−1}), for a properly chosen coefficient β_i, and L_i is determined by line search according to the Armijo-Goldstein rule
so that L_i is appropriate for s_i, i.e., f(x_{i+1}) ≤ f_{L_i,s_i}(x_{i+1}). A key building block in FoGLasso is the minimization of (3), whose solution is known as the proximal operator [20]. The computation of the proximal operator is the main technical contribution of this paper. The pseudo-code of FoGLasso is summarized in Algorithm 1, where the proximal operator π(·) is defined in (4). In practice, we can terminate Algorithm 1 if the change of the function values corresponding to adjacent iterations is within a small value, say 10^-5.
Algorithm 1 The FoGLasso Algorithm
Input: L_0 > 0, x_0, k
Output: x_{k+1}
1: Initialize x_1 = x_0, α_{−1} = 0, α_0 = 1, and L = L_0.
2: for i = 1 to k do
3:   Set β_i = (α_{i−2} − 1)/α_{i−1}, s_i = x_i + β_i(x_i − x_{i−1})
4:   Find the smallest L = 2^j L_{i−1}, j = 0, 1, . . . such that f(x_{i+1}) ≤ f_{L,s_i}(x_{i+1}) holds, where x_{i+1} = π_{λ1/L,λ2/L}(s_i − (1/L) l′(s_i))
5:   Set L_i = L and α_{i+1} = (1 + √(1 + 4α_i²))/2
6: end for
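As a minimal runnable sketch of Algorithm 1, the loop below uses a least-squares loss and the special case λ2 = 0, so that the proximal step reduces to the elementwise soft-threshold; the full method would substitute the overlapping-group proximal operator of Section 3. Function names and test values are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fog_lasso_sketch(A, b, lam1, k=300, L0=1.0):
    """Accelerated proximal gradient with Armijo-Goldstein line search,
    following Algorithm 1, for l(x) = 0.5*||Ax - b||^2 and lam2 = 0
    (plain soft-threshold prox)."""
    p = A.shape[1]
    x_prev = x = np.zeros(p)
    a_prev, a = 0.0, 1.0                 # alpha_{i-2}, alpha_{i-1}
    L = L0
    loss = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(k):
        beta = (a_prev - 1.0) / a
        s = x + beta * (x - x_prev)      # search point s_i
        grad = A.T @ (A @ s - b)         # gradient of the smooth part at s_i
        while True:                      # find the smallest L = 2^j * L_{i-1}
            x_new = soft_threshold(s - grad / L, lam1 / L)
            d = x_new - s
            if loss(x_new) <= loss(s) + grad @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        x_prev, x = x, x_new
        a_prev, a = a, (1.0 + np.sqrt(1.0 + 4.0 * a * a)) / 2.0
    return x
```

The line-search test is exactly f(x_{i+1}) ≤ f_{L,s_i}(x_{i+1}) with the common penalty term cancelled from both sides.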
3 The Associated Proximal Operator and Its Efficient Computation
The proximal operator associated with the overlapping group Lasso penalty is defined as follows:

    π_{λ1,λ2}(v) = arg min_{x∈R^p} { g_{λ1,λ2}(x) ≡ (1/2)‖x − v‖² + φ_{λ1,λ2}(x) },    (4)

which is a special case of (1) obtained by setting l(x) = (1/2)‖x − v‖². It can be verified that the approximate solution x_{i+1} = arg min_y f_{L_i,s_i}(y) is given by x_{i+1} = π_{λ1/L_i,λ2/L_i}(s_i − (1/L_i) l′(s_i)). Recently, it has been shown in [14] that the efficient computation of the proximal operator is key to many sparse learning algorithms. Next, we focus on the efficient computation of π_{λ1,λ2}(v) in (4) for a given v.
The rest of this section is organized as follows. In Section 3.1, we discuss some key properties of
the proximal operator, based on which we propose a pre-processing technique that will significantly
reduce the size of the problem. We then proposed to solve it via the dual formulation in Section 3.2,
and the duality gap is also derived. Several alternative methods for solving the proximal operator
via proximal splitting methods are discussed in Section 3.3.
3.1 Key Properties of the Proximal Operator
We first reveal several basic properties of the proximal operator π_{λ1,λ2}(v).

Lemma 1. Suppose that λ1, λ2 ≥ 0, and w_i > 0, for i = 1, 2, . . . , g. Let x* = π_{λ1,λ2}(v). The following holds: 1) if v_i > 0, then 0 ≤ x*_i ≤ v_i; 2) if v_i < 0, then v_i ≤ x*_i ≤ 0; 3) if v_i = 0, then x*_i = 0; 4) SGN(v) ⊆ SGN(x*); and 5) π_{λ1,λ2}(v) = sgn(v) ⊙ π_{λ1,λ2}(|v|).
Proof. When λ1, λ2 ≥ 0, and w_i ≥ 0, for i = 1, 2, . . . , g, the objective function g_{λ1,λ2}(·) is strictly convex, thus x* is the unique minimizer. We first show that if v_i > 0, then 0 ≤ x*_i ≤ v_i. If x*_i > v_i, then we can construct an x̂ as follows: x̂_j = x*_j, j ≠ i, and x̂_i = v_i. Similarly, if x*_i < 0, then we can construct an x̂ as follows: x̂_j = x*_j, j ≠ i, and x̂_i = 0. It is easy to verify that x̂ achieves a lower objective function value than x* in both cases. We can prove the second and the third properties using similar arguments. Finally, we can prove the fourth and the fifth properties using the definition of SGN(·) and the first three properties.
Next, we show that π_{λ1,λ2}(·) can be directly derived from π_{0,λ2}(·) by soft-thresholding. Thus, we only need to focus on the case when λ1 = 0. This simplifies the optimization in (4). It is an extension of the result for Fused Lasso in [10].
Theorem 1. Let u = sgn(v) ⊙ max(|v| − λ1, 0), and

    π_{0,λ2}(u) = arg min_{x∈R^p} { h_{λ2}(x) ≡ (1/2)‖x − u‖² + λ2 Σ_{i=1}^g w_i ‖x_{G_i}‖ }.    (5)

Then, the following holds: π_{λ1,λ2}(v) = π_{0,λ2}(u).
Proof. Denote the unique minimizer of h_{λ2}(·) as x*. The sufficient and necessary condition for the optimality of x* is:

    0 ∈ ∂h_{λ2}(x*) = x* − u + ∂φ_{0,λ2}(x*),    (6)

where ∂h_{λ2}(x) and ∂φ_{0,λ2}(x) are the sub-differential sets of h_{λ2}(·) and φ_{0,λ2}(·) at x, respectively. Next, we need to show 0 ∈ ∂g_{λ1,λ2}(x*). The sub-differential of g_{λ1,λ2}(·) at x* is given by

    ∂g_{λ1,λ2}(x*) = x* − v + ∂φ_{λ1,λ2}(x*) = x* − v + λ1 SGN(x*) + ∂φ_{0,λ2}(x*).    (7)

It follows from the definition of u that u − v ∈ −λ1 SGN(u). Using the fourth property in Lemma 1, we have SGN(u) ⊆ SGN(x*). Thus,

    u − v ∈ −λ1 SGN(x*).    (8)

It follows from (6)-(8) that 0 ∈ ∂g_{λ1,λ2}(x*).
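Theorem 1's two-stage reduction can be illustrated numerically. For the special case of disjoint groups, the remaining problem (5) has a closed-form "block soft-threshold" solution; this closed form does not hold for overlapping groups, which instead require the dual method of Section 3.2. The values below are illustrative assumptions:

```python
import numpy as np

lam1, lam2 = 0.5, 1.0

# Step 1 (Theorem 1): elementwise soft-thresholding removes the l1 term.
v = np.array([2.0, -0.5, 1.5, 0.2])
u = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)   # u = [1.5, 0, 1.0, 0]

# Step 2: solve the remaining problem (5).  For *disjoint* groups this is
# a per-group scaling of u ("block soft-threshold").
groups, weights = [[0, 1], [2, 3]], [1.0, 1.0]
x = u.copy()
for g, w in zip(groups, weights):
    idx = np.asarray(g)
    nrm = np.linalg.norm(u[idx])
    x[idx] = 0.0 if nrm <= lam2 * w else (1.0 - lam2 * w / nrm) * u[idx]
# x = [0.5, 0, 0, 0]: the second group is zeroed because ||u_{G_2}|| <= lam2*w_2.
```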
It follows from Theorem 1 that we only need to focus on the optimization of (5) in the following
discussion. The difficulty in the optimization of (5) lies in the large number of groups that may
overlap. In practice, many groups will be zero, thus achieving a sparse solution (a sparse solution
is desirable in many applications). However, the zero groups are not known in advance. The key
question we aim to address is how we can identify as many zero groups as possible to reduce the
complexity of the optimization. Next, we present a sufficient condition for a group to be zero.
Lemma 2. Denote the minimizer of h_{λ2}(·) in (5) by x*. If the i-th group satisfies ‖u_{G_i}‖ ≤ λ2 w_i, then x*_{G_i} = 0, i.e., the i-th group is zero.
Proof. We decompose h_{λ2}(x) into two parts as follows:

    h_{λ2}(x) = [ (1/2)‖x_{G_i} − u_{G_i}‖² + λ2 w_i ‖x_{G_i}‖ ] + [ (1/2)‖x_{Ḡ_i} − u_{Ḡ_i}‖² + λ2 Σ_{j≠i} w_j ‖x_{G_j}‖ ],    (9)

where Ḡ_i = {1, 2, . . . , p} \ G_i is the complementary set of G_i. We consider the minimization of h_{λ2}(x) in terms of x_{G_i} when x_{Ḡ_i} = x*_{Ḡ_i} is fixed. It can be verified that if ‖u_{G_i}‖ ≤ λ2 w_i, then x*_{G_i} = 0 minimizes both terms in (9) simultaneously. Thus we have x*_{G_i} = 0.
Lemma 2 may not identify many true zero groups due to the strong condition imposed. The lemma below weakens the condition in Lemma 2. Intuitively, for a group G_i, we first identify all existing zero groups that overlap with G_i, and then compute the overlapping index subset S_i of G_i as:

    S_i = ∪_{j≠i, x*_{G_j}=0} (G_j ∩ G_i).    (10)

We can show that x*_{G_i} = 0 if ‖u_{G_i \ S_i}‖ ≤ λ2 w_i is satisfied. Note that this condition is much weaker than the condition in Lemma 2, which requires that ‖u_{G_i}‖ ≤ λ2 w_i.

Lemma 3. Denote the minimizer of h_{λ2}(·) by x*. Let S_i, a subset of G_i, be defined in (10). If ‖u_{G_i \ S_i}‖ ≤ λ2 w_i holds, then x*_{G_i} = 0.
Proof. Suppose that we have identified a collection of zero groups. By removing these groups, the original problem (5) can be reduced to:

    min_{x(I_1)∈R^{|I_1|}} (1/2)‖x(I_1) − u(I_1)‖² + λ2 Σ_{i∈G_1} w_i ‖x_{G_i \ S_i}‖,

where I_1 is the reduced index set, i.e., I_1 = {1, 2, . . . , p} \ ∪_{i: x*_{G_i}=0} G_i, and G_1 = {i : x*_{G_i} ≠ 0} is the index set of the remaining non-zero groups. Note that ∀i ∈ G_1, G_i \ S_i ⊆ I_1. By applying Lemma 2 again, we show that if ‖u_{G_i \ S_i}‖ ≤ λ2 w_i holds, then x*_{G_i \ S_i} = 0. Thus, x*_{G_i} = 0.
Lemma 3 naturally leads to an iterative procedure for identifying the zero groups: for each group G_i, if ‖u_{G_i}‖ ≤ λ2 w_i, then we set u_{G_i} = 0; we cycle through all groups repeatedly until u does not change. Let p′ = |{u_i : u_i ≠ 0}| be the number of nonzero elements in u, let g′ = |{u_{G_i} : u_{G_i} ≠ 0}| be the number of nonzero groups, and let x* denote the minimizer of h_{λ2}(·). It follows from Lemma 3 and Lemma 1 that if u_i = 0, then x*_i = 0. Therefore, by applying the above iterative procedure, we can find the minimizer of (5) by solving a reduced problem that has p′ ≤ p variables and g′ ≤ g groups. With some abuse of notation, we still use (5) to denote the resulting reduced problem. In addition, from Lemma 1, we only focus on u > 0 in the following discussion, and the analysis can be easily generalized to the general u.
3.2 Reformulation as an Equivalent Smooth Convex Optimization Problem
It follows from the first two properties of Lemma 1 that we can rewrite (5) as:

    π_{0,λ2}(u) = arg min_{x∈R^p, 0 ⪯ x ⪯ u} h_{λ2}(x),    (11)

where ⪯ denotes the element-wise inequality, and

    h_{λ2}(x) = (1/2)‖x − u‖² + λ2 Σ_{i=1}^g w_i ‖x_{G_i}‖,

and the minimizer of h_{λ2}(·) is constrained to be non-negative due to u > 0 (refer to the discussion at the end of Section 3.1).
Making use of the dual norm of the Euclidean norm ‖·‖, we can rewrite h_{λ2}(x) as:

    h_{λ2}(x) = max_{Y∈Ω} (1/2)‖x − u‖² + Σ_{i=1}^g ⟨x, Y^i⟩,    (12)

where Ω is defined as follows:

    Ω = {Y ∈ R^{p×g} : Y^i_{Ḡ_i} = 0, ‖Y^i‖ ≤ λ2 w_i, i = 1, 2, . . . , g},

Ḡ_i is the complementary set of G_i, Y is a sparse matrix satisfying Y_{ij} = 0 if the i-th feature does not belong to the j-th group, i.e., i ∉ G_j, and Y^i denotes the i-th column of Y. As a result, we can reformulate (11) as the following min-max problem:

    min_{x∈R^p, 0 ⪯ x ⪯ u} max_{Y∈Ω} { ψ(x, Y) = (1/2)‖x − u‖² + ⟨x, Y e⟩ },    (13)

where e ∈ R^g is a vector of ones. It is easy to verify that ψ(x, Y) is convex in x and concave in Y, and the constraint sets are closed and convex for both x and Y. Thus, (13) has a saddle point, and the min-max can be exchanged.
It is easy to verify that for a given Y, the optimal x minimizing ψ(x, Y) in (13) is given by

    x = max(u − Y e, 0).    (14)

Plugging (14) into (13), we obtain the following minimization problem with regard to Y:

    min_{Y∈Ω} { ω(Y) = −ψ(max(u − Y e, 0), Y) }.    (15)
Our methodology for minimizing h_{λ2}(·) defined in (5) is to first solve (15), and then construct the solution to h_{λ2}(·) via (14). Using standard optimization techniques, we can show that the function ω(·) is continuously differentiable with Lipschitz continuous gradient. We include the detailed proof in the supplemental material for completeness. Therefore, we convert the non-smooth problem (11) into the smooth problem (15), making smooth convex optimization tools applicable. In this paper, we employ the accelerated gradient descent to solve (15), due to its fast convergence property. Note that the Euclidean projection onto the set Ω can be computed in closed form. We would like to emphasize here that the problem (15) may have a much smaller size than (4).
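As a sketch, (15) can be minimized with plain projected gradient descent, recovering the primal solution through (14); the paper instead uses accelerated gradient descent, and the constant step size 1/g used here is a conservative assumption, not the paper's choice. Assumes u > 0 per Section 3.1:

```python
import numpy as np

def prox_dual(u, groups, weights, lam2, n_iter=500):
    """Solve the dual (15) by projected gradient descent and recover the
    primal minimizer of (5) via x = max(u - Y e, 0).  Assumes u > 0."""
    p, g = len(u), len(groups)
    Y = np.zeros((p, g))
    masks = np.zeros((p, g), dtype=bool)        # masks[j, i] is True iff j in G_i
    for i, grp in enumerate(groups):
        masks[np.asarray(grp), i] = True
    step = 1.0 / g                              # conservative constant step size
    for _ in range(n_iter):
        x = np.maximum(u - Y.sum(axis=1), 0.0)  # eq. (14)
        Y = Y + step * (x[:, None] * masks)     # gradient step on omega
        for i, w in enumerate(weights):         # project onto Omega: scale each
            nrm = np.linalg.norm(Y[:, i])       # column so that ||Y^i|| <= lam2*w_i
            r = lam2 * w
            if nrm > r:
                Y[:, i] *= r / nrm
    return np.maximum(u - Y.sum(axis=1), 0.0), Y
```

For disjoint groups the result can be checked against the closed-form block soft-threshold; for overlapping groups the duality gap of Section 3.2.1 serves as the stopping criterion instead.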
3.2.1 Computing the Duality Gap
We show how to estimate the duality gap of the min-max problem (13), which can be used to check the quality of the solution and determine the convergence of the algorithm.

For any given approximate solution Ỹ ∈ Ω for ω(Y), we can construct the approximate solution x̃ = max(u − Ỹ e, 0) for h_{λ2}(x). The duality gap for the min-max problem (13) at the point (x̃, Ỹ) can be computed as:

    gap(Ỹ) = max_{Y∈Ω} ψ(x̃, Y) − min_{x∈R^p, 0 ⪯ x ⪯ u} ψ(x, Ỹ).    (16)

The main result of this subsection is summarized in the following theorem:
Theorem 2. Let gap(Ỹ) be the duality gap defined in (16). Then, the following holds:

    gap(Ỹ) = Σ_{i=1}^g ( λ2 w_i ‖x̃_{G_i}‖ − ⟨x̃_{G_i}, Ỹ^i_{G_i}⟩ ).    (17)

In addition, we have

    ω(Ỹ) − ω(Y*) ≤ gap(Ỹ),    (18)

    h_{λ2}(x̃) − h_{λ2}(x*) ≤ gap(Ỹ).    (19)
Proof. Denote (x*, Y*) as the optimal solution to the problem (13). From (12)-(15), we have

    −ω(Ỹ) = ψ(x̃, Ỹ) = min_{0 ⪯ x ⪯ u} ψ(x, Ỹ) ≤ ψ(x*, Ỹ),    (20)

    ψ(x*, Ỹ) ≤ max_{Y∈Ω} ψ(x*, Y) = ψ(x*, Y*) = −ω(Y*),    (21)

    h_{λ2}(x*) = ψ(x*, Y*) = min_{0 ⪯ x ⪯ u} ψ(x, Y*) ≤ ψ(x̃, Y*),    (22)

    ψ(x̃, Y*) ≤ max_{Y∈Ω} ψ(x̃, Y) = h_{λ2}(x̃).    (23)

Incorporating (11), (20)-(23), we prove (17)-(19).
In our experiments, we terminate the algorithm when the estimated duality gap is less than 10^-10.
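The gap formula (17) can be evaluated directly. A sketch (assuming Y is Ω-feasible and u > 0 per Section 3.1; names and test values illustrative):

```python
import numpy as np

def duality_gap(u, Y, groups, weights, lam2):
    """Duality gap of (13) per equation (17), with x = max(u - Y e, 0):
    gap(Y) = sum_i ( lam2*w_i*||x_{G_i}|| - <x_{G_i}, Y^i_{G_i}> ).
    Each summand is non-negative by Cauchy-Schwarz since ||Y^i|| <= lam2*w_i."""
    x = np.maximum(u - Y.sum(axis=1), 0.0)
    total = 0.0
    for i, (grp, w) in enumerate(zip(groups, weights)):
        idx = np.asarray(grp)
        total += lam2 * w * np.linalg.norm(x[idx]) - x[idx] @ Y[idx, i]
    return total

u = np.array([3.0, 4.0, 0.3, 0.4])
groups, weights, lam2 = [[0, 1], [2, 3]], [1.0, 1.0], 1.0
Y0 = np.zeros((4, 2))
g0 = duality_gap(u, Y0, groups, weights, lam2)      # at Y = 0: x = u, gap = 5 + 0.5
Yopt = np.zeros((4, 2))
Yopt[[0, 1], 0] = u[[0, 1]] / 5.0                   # column scaled to the ball boundary
Yopt[[2, 3], 1] = u[[2, 3]]                         # already inside the ball
gopt = duality_gap(u, Yopt, groups, weights, lam2)  # at the dual optimum: gap = 0
```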
3.3 Proximal Splitting Methods
Recently, a family of proximal splitting methods [8] have been proposed for converting a challenging
optimization problem into a series of sub-problems with a closed form solution. We consider two
reformulations of the proximal operator (4), based on the Dykstra-like Proximal Splitting Method
and the alternating direction method of multipliers (ADMM). The efficiency of these two methods
for overlapping Group Lasso will be demonstrated in the next section.
In [5], Boyd et al. suggested that the original overlapping group lasso problem (1) can be reformulated and solved by ADMM directly. We include the implementation of ADMM for our comparative
study. We provide the details of all three reformulations in the supplemental materials.
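To make the Dykstra-like splitting idea concrete, here is a generic sketch (our own construction, not the paper's implementation) that evaluates the proximal operator of a sum $f+g$ using only the individual operators $\mathrm{prox}_f$ and $\mathrm{prox}_g$. As a check, for $\lambda_1\|\cdot\|_1 + \lambda_2\|\cdot\|_2$ (a single group covering all coordinates) the exact prox is known to be the composition of the two shrinkage steps.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: prox of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l2(v, t):
    """Block shrinkage: prox of t*||.||_2."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def dykstra_prox(v, prox_f, prox_g, n_iter=2000):
    """Dykstra-like proximal splitting [8]: iterates toward prox_{f+g}(v)
    using only prox_f and prox_g, with correction terms p and q."""
    x, p, q = v.copy(), np.zeros_like(v), np.zeros_like(v)
    for _ in range(n_iter):
        y = prox_f(x + p)
        p = x + p - y
        x = prox_g(y + q)
        q = y + q - x
    return x
```

Each iteration costs one evaluation of each prox, which is why the method's running time grows with the number of groups (one projection per group per Dykstra step), as observed in the experiments below.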
4 Experiments
In this section, extensive experiments are performed to demonstrate the efficiency of our proposed
methods. We use both synthetic data sets and a real world data set and the evaluation is done in
various problem size and precision settings. The proposed algorithms are mainly implemented in
Matlab, with the proximal operator implemented in standard C for improved efficiency. Several
state-of-the-art methods are also included for comparison purposes, including the SLasso algorithm developed by Jenatton et al. [13], the ADMM reformulation [5], the Prox-Grad method by Chen et
al. [6] and the Picard-Nesterov algorithm by Argyriou et al. [1].
4.1 Synthetic Data
In the first set of simulation we consider only the key component of our algorithm, the proximal
operator. The group indices are predefined such that G1 = {1, 2, . . . , 10}, G2 = {6, 7, . . . , 20}, . . .,
with each group overlapping half of the previous group. 100 examples are generated for each set of
fixed problem size p and group size g, and the results are summarized in Figure 1. As we can observe
from the figure, the dual formulation yields the best performance, followed closely by ADMM and
then the Dykstra method. We can also observe that our method scales very well to high dimensional
problems, since even with $p = 10^6$, the proximal operator can be computed in a few seconds. It
is also not surprising that Dykstra method is much more sensitive to the number of groups, which
equals to the number of projections in one Dykstra step.
To illustrate the effectiveness of our pre-processing technique, we repeat the previous experiment by
removing the pre-processing step. The results are shown in the right plot of Figure 1. As we can observe from the figure, the proposed pre-processing technique effectively reduces the computational time.

Figure 1: Time comparison for computing the proximal operators. The group number is fixed in the left figure and the problem size is fixed in the middle figure. In the right figure, the effectiveness of the pre-processing technique is illustrated.

As is evident from Figure 1, the dual formulation proposed in Section 3.2 consistently outperforms other proximal splitting methods. In the following experiments, only the dual method will be
used for computing the proximal operator, and our method will then be called "FoGLasso".
4.2 Gene Expression Data
We have also conducted experiments to evaluate the efficiency of the proposed algorithm using the
breast cancer gene expression data set [26], which consists of 8,141 genes in 295 breast cancer
tumors (78 metastatic and 217 non-metastatic). For the sake of analyzing microarrays in terms
of biologically meaningful gene sets, different approaches have been used to organize the genes
into (overlapping) gene sets. In our experiments, we follow [12] and employ the following two
approaches for generating the overlapping gene sets (groups): pathways [24] and edges [7]. For
pathways, the canonical pathways from the Molecular Signatures Database (MSigDB) [24] are used.
It contains 639 groups of genes, of which 637 groups involve the genes in our study. The statistics of
the 637 gene groups are summarized as follows: the average number of genes in each group is 23.7,
the largest gene group has 213 genes, and 3,510 genes appear in these 637 groups with an average
appearance frequency of about 4. For edges, the network built in [7] will be used, and we follow [12]
to extract 42,594 edges from the network, leading to 42,594 overlapping gene sets of size 2. All
8,141 genes appear in the 42,594 groups with an average appearance frequency of about 10. The
experimental settings are as follows: we solve (1) with the least squares loss $l(x) = \frac{1}{2}\|Ax - b\|^2$, and we set $w_i = \sqrt{|G_i|}$ and $\lambda_1 = \lambda_2 = \gamma \times \lambda_1^{\max}$, where $|G_i|$ denotes the size of the $i$-th group $G_i$, $\lambda_1^{\max} = \|A^T b\|_\infty$ (the zero point is a solution to (1) if $\lambda_1 \ge \lambda_1^{\max}$), and $\gamma$ is chosen from the set $\{5 \times 10^{-1}, 2 \times 10^{-1}, 1 \times 10^{-1}, 5 \times 10^{-2}, 2 \times 10^{-2}, 1 \times 10^{-2}, 5 \times 10^{-3}, 2 \times 10^{-3}, 1 \times 10^{-3}\}$.
Comparison with SLasso, Prox-Grad and ADMM We first compare our proposed FoGLasso
with the SLasso algorithm [13], ADMM [5] and Prox-Grad [6]. The comparisons are based on
the computational time, since all these methods have efficient Matlab implementations with key
components written in C. For a given ?, we first run SLasso till a certain precision level is reached,
and then run the others until they achieve an objective function value smaller than or equal to that
of SLasso. Different precision levels of the solutions are evaluated such that a fair comparison can
be made. We vary the number of genes involved, and report the total computational time (seconds)
including all nine regularization parameters in Figure 2. We can observe that: 1) for all precision
levels, our proposed FoGLasso is much more efficient than SLasso, ADMM and Prox-Grad; 2) the
advantage of FoGLasso over other three methods in efficiency grows with the increasing number of
genes (variables). For example, with the grouping by pathways, FoGLasso is about 25 and 70 times
faster than SLasso for 1000 and 2000 genes, respectively; and 3) the efficiency on edges is inferior
to that on pathways, due to the larger number of overlapping groups. Additional scalability study of
our proposed method using larger problem size can be found in the supplemental materials.
Comparison with Picard-Nesterov Since the code acquired for Picard-Nesterov is implemented
purely in Matlab, a computational time comparison might not be fair. Therefore, only the number
of iterations required for convergence is reported, as both methods adopt the first order method.
We use edges to generate the groups, and vary the problem size from 100 to 400, using the same
set of regularization parameters. For each problem, we record both the number of outer iterations
(the gradient steps) and the total number of inner iterations (the steps required for computing the
proximal operators).

Figure 2: Comparison of SLasso [13], ADMM [5], Prox-Grad [6] and our proposed FoGLasso algorithm in terms of computational time (in seconds and in the logarithmic scale) when different numbers of genes (variables) are involved. Different precision levels are used for comparison.

Table 1: Comparison of FoGLasso and Picard-Nesterov using different numbers (p) of genes and various precision levels. For each particular method, the first row denotes the number of outer iterations required for convergence, while the second row represents the total number of inner iterations.

    Precision Level |       10^-2       |        10^-4        |        10^-6
    p               |  100   200   400  |  100    200    400  |  100    200    400
    FoGLasso        |   81   189   353  |  192    371   1299  |  334    507   1796
                    |  288   401   921  |  404    590   1912  |  547    727   2387
    Picard-Nesterov |   78   176   325  |  181    304   1028  |  318    504   1431
                    | 8271  6.8e4 2.2e5 | 2.6e4  1.0e5  7.8e5 | 5.1e4  1.3e5  1.1e6

The average number of iterations among all the regularization parameters is
summarized in Table 1. As we can observe from the table, though the Picard-Nesterov method often takes fewer outer iterations to converge, it takes far more inner iterations to compute the proximal operator. It is straightforward to verify that the inner iterations in the Picard-Nesterov method and our proposed method have the same complexity of $O(pg)$.
5 Conclusion
In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem based on the accelerated gradient descent method. We reveal several key properties of the
proximal operator associated with the overlapping group Lasso, and compute the proximal operator
via solving the smooth and convex dual problem. Numerical experiments on both synthetic and
the breast cancer data set demonstrate the efficiency of the proposed algorithm. Although with an
inexact proximal operator, the optimal convergence rate of the accelerated gradient descent might
not be guaranteed [23, 11], the algorithm performs quite well empirically. A theoretical analysis on
the convergence property will be an interesting future direction. In the future, we also plan to apply
the proposed algorithm to other real-world applications involving overlapping groups.
Acknowledgments
This work was supported by NSF IIS-0812551, IIS-0953662, MCB-1026710, CCF-1025177, NGA
HM1582-08-1-0016, and NSFC 60905035, 61035003.
References
[1] A. Argyriou, C.A. Micchelli, M. Pontil, L. Shen, and Y. Xu. Efficient first order methods for linear composite regularizers. Arxiv preprint arXiv:1104.1436, 2011.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] H. D. Bondell and B. J. Reich. Simultaneous regression shrinkage, variable selection and clustering of predictors with oscar. Biometrics, 64:115-123, 2008.
[4] J. F. Bonnans and A. Shapiro. Optimization problems with perturbations: A guided tour. SIAM Review, 40(2):228-264, 1998.
[5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. 2010.
[6] X. Chen, Q. Lin, S. Kim, J.G. Carbonell, and E.P. Xing. An efficient proximal gradient method for general structured sparse learning. Arxiv preprint arXiv:1005.4717, 2010.
[7] H. Y. Chuang, E. Lee, Y. T. Liu, D. Lee, and T. Ideker. Network-based classification of breast cancer metastasis. Molecular Systems Biology, 3(140), 2007.
[8] P.L. Combettes and J.C. Pesquet. Proximal splitting methods in signal processing. Arxiv preprint arXiv:0912.3522, 2009.
[9] J. M. Danskin. The theory of max-min and its applications to weapons allocation problems. Springer-Verlag, New York, 1967.
[10] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302-332, 2007.
[11] B. He and X. Yuan. An accelerated inexact proximal point algorithm for convex minimization. 2010.
[12] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In ICML, 2009.
[13] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523, 2009.
[14] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In ICML, 2010.
[15] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, 2010.
[16] H. Liu, M. Palatucci, and J. Zhang. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In ICML, 2009.
[17] J. Liu, S. Ji, and J. Ye. Multi-task feature learning via efficient $\ell_{2,1}$-norm minimization. In UAI, 2009.
[18] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In NIPS, 2010.
[19] L. Meier, S. Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B, 70:53-71, 2008.
[20] J.-J. Moreau. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France, 93:273-299, 1965.
[21] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[22] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[23] R.T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14:877, 1976.
[24] A. Subramanian et al. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences, 102(43):15545-15550, 2005.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58(1):267-288, 1996.
[26] M. J. Van de Vijver et al. A gene-expression signature as a predictor of survival in breast cancer. The New England Journal of Medicine, 347(25):1999-2009, 2002.
[27] Y. Ying, C. Campbell, and M. Girolami. Analysis of svm with indefinite kernels. In NIPS, 2009.
[28] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of The Royal Statistical Society Series B, 68(1):49-67, 2006.
[29] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468-3497, 2009.
A Brain-Machine Interface Operating with a
Real-Time Spiking Neural Network Control
Algorithm
Julie Dethier*
Department of Bioengineering
Stanford University, CA 94305
[email protected]
Paul Nuyujukian
Department of Bioengineering
School of Medicine
Stanford University, CA 94305
[email protected]
Chris Eliasmith
Centre for Theoretical Neuroscience
University of Waterloo, Canada
[email protected]
Terry Stewart
Centre for Theoretical Neuroscience
University of Waterloo, Canada
[email protected]
Shauki A. Elassaad
Department of Bioengineering
Stanford University, CA 94305
[email protected]
Krishna V. Shenoy
Department of Electrical Engineering
Department of Bioengineering
Department of Neurobiology
Stanford University, CA 94305
[email protected]
Kwabena Boahen
Department of Bioengineering
Stanford University, CA 94305
[email protected]
Abstract
Motor prostheses aim to restore function to disabled patients. Despite compelling
proof of concept systems, barriers to clinical translation remain. One challenge
is to develop a low-power, fully-implantable system that dissipates only minimal
power so as not to damage tissue. To this end, we implemented a Kalman-filter
based decoder via a spiking neural network (SNN) and tested it in brain-machine
interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained
to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation
runs in real-time and its closed-loop performance is quite comparable to that of the
standard Kalman filter. The success of this closed-loop decoder holds promise for
hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major
obstacle to the successful clinical translation of neural motor prostheses.
*Present: Research Fellow F.R.S.-FNRS, Systmod Unit, University of Liège, Belgium.
1 Cortically-controlled motor prostheses: the challenge
Motor prostheses aim to restore function for severely disabled patients by translating neural signals
from the brain into useful control signals for prosthetic limbs or computer cursors. Several proof
of concept demonstrations have shown encouraging results, but barriers to clinical translation still
remain. One example is the development of a fully-implantable system that meets power dissipation
constraints, but is still powerful enough to perform complex operations. A recently reported closed-loop cortically-controlled motor prosthesis is capable of producing quick, accurate, and robust computer cursor movements by decoding neural signals (threshold-crossings) from a 96-electrode array
in rhesus macaque premotor/motor cortex [1]-[4]. This, and previous designs (e.g., [5]), employ
versions of the Kalman filter, ubiquitous in statistical signal processing. Such a filter and its variants
are the state-of-the-art decoder for brain-machine interfaces (BMIs) in humans [5] and monkeys [2].
While these recent advances are encouraging, clinical translation of such BMIs requires fully-implanted systems, which in turn impose severe power dissipation constraints. Even though it is an
open, actively-debated question as to how much of the neural prosthetic system must be implanted,
we note that there are no reports to date demonstrating a fully implantable 100-channel wireless
transmission system, motivating performing decoding within the implanted chip. This computation
is constrained by a stringent power budget: a 6 × 6 mm² implant must dissipate less than 10 mW to avoid heating the brain by more than 1 °C [6], which is believed to be important for long term cell
health. With this power budget, current approaches can not scale to higher electrode densities or to
substantially more computer-intensive decode/control algorithms.
The feasibility of mapping a Kalman-filter based decoder algorithm [1]-[4] on to a spiking neural
network (SNN) has been explored off-line (open-loop). In these off-line tests, the SNN's performance virtually matched that of the standard implementation [7]. These simulations provide confidence that this algorithm (and others similar to it) could be implemented using an ultra-low-power
approach potentially capable of meeting the severe power constraints set by clinical translation. This
neuromorphic approach uses very-large-scale integrated systems containing microelectronic analog
circuits to morph neural systems into silicon chips [8, 9]. These neuromorphic circuits may yield
tremendous power savings (50 nW per silicon neuron [10]) over digital circuits because they use
physical operations to perform mathematical computations (analog approach). When implemented
on a chip designed using the neuromorphic approach, a 2,000-neuron SNN network can consume as
little as 100?W.
Demonstrating this approach's feasibility in a closed-loop system running in real-time is a key,
non-incremental step in the development of a fully implantable decoding chip, and is necessary
before proceeding with fabricating and implanting the chip. As noise, delay, and over-fitting play
a more important role in the closed-loop setting, it is not obvious that the SNN's stellar open-loop performance will hold up. In addition, performance criteria are different in the closed-loop and open-loop settings (e.g., time per target vs. root mean squared error). Therefore, a SNN of a different
size may be required to meet the desired specifications. Here we present results and assess the
performance and viability of the SNN Kalman-filter based decoder in real-time, closed-loop tests,
with the monkey performing a center-out-and-back target acquisition task. To achieve closed-loop
operation, we developed an embedded Matlab implementation that ran a 2,000-neuron version of
the SNN in real-time on a PC. We achieved almost a 50-fold speed-up by performing part of the
computation in a lower-dimensional space defined by the formal method we used to map the Kalman
filter on to the SNN. This shortcut allowed us to run a larger SNN in real-time than would otherwise
be possible.
2 Spiking neural network mapping of control theory algorithms
As reported in [11], a formal methodology, called the Neural Engineering Framework (NEF), has
been developed to map control-theory algorithms onto a computational fabric consisting of a highly
heterogeneous population of spiking neurons simply by programming the strengths of their connections. These artificial neurons are characterized by a nonlinear multi-dimensional-vector-to-spike-rate function, $a_i(x(t))$ for the $i$-th neuron, with parameters (preferred direction, maximum firing rate, and spiking threshold) drawn randomly from a wide distribution (standard deviation ≈ mean).
Figure 1: NEF's three principles. Representation ($x \to a_i(x) \to \hat{x} = \sum_i a_i(x)\,\phi_i^x$, with $a_i(x) = G(\alpha_i \tilde{\phi}_i^x \cdot x + J_i^{bias})$). 1D tuning curves of a population of 50 leaky integrate-and-fire neurons. The neurons' tuning curves map control variables ($x$) to spike rates ($a_i(x)$); this nonlinear transformation is inverted by linear weighted decoding. $G()$ is the neurons' nonlinear current-to-spike-rate function. Transformation ($y = Ax \to b_j(A\hat{x})$, $A\hat{x} = \sum_i a_i(x)\,A\phi_i^x$). SNN with populations $b_k(t)$ and $a_j(t)$ representing $y(t)$ and $x(t)$. Feedforward and recurrent weights are determined by $B'$ and $A'$, as described next. Dynamics ($\dot{x} = Ax \to x = h * A'x$, $A' = \tau A + I$). The system's dynamics is captured in a neurally plausible fashion by replacing integration with the synapses' spike response, $h(t)$, and replacing the matrices with $A' = \tau A + I$ and $B' = \tau B$ to compensate.
The neural engineering approach to configuring SNNs to perform arbitrary computations is underlined by three principles (Figure 1) [11]-[14]:
Representation is defined by nonlinear encoding of $x(t)$ as a spike rate, $a_i(x(t))$ (represented by the neuron tuning curve), combined with optimal weighted linear decoding of $a_i(x(t))$ to recover an estimate of $x(t)$, $\hat{x}(t) = \sum_i a_i(x(t))\,\phi_i^x$, where $\phi_i^x$ are the decoding weights.
Transformation is performed by using alternate decoding weights in the decoding operation to
map transformations of x(t) directly into transformations of ai (x(t)). For example, y(t) = Ax(t)
is represented by the spike rates $b_j(A\hat{x}(t))$, where unit $j$'s input is computed directly from unit $i$'s output using $A\hat{x}(t) = \sum_i a_i(x(t))\,A\phi_i^x$, an alternative linear weighting.
Dynamics brings the first two principles together and adds the time dimension to the circuit. This
principle aims at reuniting the control-theory and neural levels by modifying the matrices to render
the system neurally plausible, thereby permitting the synapses' spike response, $h(t)$ (i.e., impulse response), to capture the system's dynamics. For example, for $h(t) = \tau^{-1} e^{-t/\tau}$, $\dot{x} = Ax(t)$ is realized by replacing $A$ with $A' = \tau A + I$. This so-called neurally plausible matrix yields an equivalent dynamical system: $x(t) = h(t) * A'x(t)$, where convolution replaces integration.
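This substitution can be checked numerically. The sketch below (our own construction, not the paper's code) Euler-integrates the ideal system $\dot{x} = Ax$ next to the synaptic form $\dot{s} = (A's - s)/\tau$ with $A' = \tau A + I$; since $(A's - s)/\tau = As$ algebraically, the two trajectories coincide.

```python
import numpy as np

def simulate(A, x0, dt=1e-3, T=1.0, tau=0.01):
    """Euler-integrate dx/dt = A x (ideal) and ds/dt = (A's - s)/tau
    (first-order synapse h(t) = exp(-t/tau)/tau) with A' = tau*A + I."""
    A = np.asarray(A, dtype=float)
    Ap = tau * A + np.eye(A.shape[0])       # neurally plausible matrix
    x = np.asarray(x0, dtype=float).copy()
    s = x.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x)                # ideal dynamics
        s = s + dt * ((Ap @ s) - s) / tau   # synaptic dynamics
    return x, s
```

For a 2D oscillator ($A = [[0, 1], [-1, 0]]$) the two trajectories agree to numerical precision, which is the content of the Dynamics principle: the synapse's low-pass filtering is absorbed into $A'$.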
The nonlinear encoding process, from a multi-dimensional stimulus, $x(t)$, to a one-dimensional soma current, $J_i(x(t))$, to a firing rate, $a_i(x(t))$, is specified as:
$$a_i(x(t)) = G(J_i(x(t))). \qquad (1)$$
Here $G$ is the neurons' nonlinear current-to-spike-rate function, which is given by
$$G(J_i(x)) = \left\{ \tau^{\mathrm{ref}} - \tau^{RC} \ln\left( 1 - J_{th}/J_i(x) \right) \right\}^{-1}, \qquad (2)$$
for the leaky integrate-and-fire model (LIF). The LIF neuron has two behavioral regimes: sub-threshold and super-threshold. The sub-threshold regime is described by an RC circuit with time constant $\tau^{RC}$. When the sub-threshold soma voltage reaches the threshold, $V_{th}$, the neuron emits a spike $\delta(t - t_n)$. After this spike, the neuron is reset and rests for $\tau^{\mathrm{ref}}$ seconds (absolute refractory period) before it resumes integrating. $J_{th} = V_{th}/R$ is the minimum input current that produces spiking. Ignoring the soma's RC time-constant when specifying the SNN's dynamics is reasonable because the neurons cross threshold at a rate that is proportional to their input current, which thus sets the spike rate instantaneously, without any filtering [11].
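Equation (2) can be evaluated directly. The sketch below uses illustrative constants ($\tau^{\mathrm{ref}} = 2$ ms, $\tau^{RC} = 20$ ms, $J_{th} = 1$) that are our assumptions, not values from the paper; it returns zero below threshold and saturates at $1/\tau^{\mathrm{ref}}$ for large currents.

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02, J_th=1.0):
    """LIF current-to-rate curve, Eq. (2):
    G(J) = 1 / (tau_ref - tau_rc * ln(1 - J_th/J)) for J > J_th, else 0.
    The constants here are illustrative, not taken from the paper."""
    J = np.atleast_1d(np.asarray(J, dtype=float))
    rate = np.zeros_like(J)
    above = J > J_th               # only super-threshold currents spike
    rate[above] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - J_th / J[above]))
    return rate
```

Note that $1 - J_{th}/J \in (0, 1)$ for $J > J_{th}$, so the logarithm is negative and the denominator stays positive: the rate rises monotonically with current and never exceeds the refractory limit $1/\tau^{\mathrm{ref}}$.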
The conversion from a multi-dimensional stimulus, $x(t)$, to a one-dimensional soma current, $J_i$, is performed by assigning to the neuron a preferred direction, $\tilde{\phi}_i^x$, in the stimulus space and taking the dot-product:
$$J_i(x(t)) = \alpha_i \tilde{\phi}_i^x \cdot x(t) + J_i^{bias}, \qquad (3)$$
where $\alpha_i$ is a gain or conversion factor, and $J_i^{bias}$ is a bias current that accounts for background activity. For a 1D space, $\tilde{\phi}_i^x$ is either $+1$ or $-1$ (drawn randomly), for ON and OFF neurons, respectively. The resulting tuning curves are illustrated in Figure 1, left.
The linear decoding process is characterized by the synapses' spike response, $h(t)$ (i.e., post-synaptic currents), and the decoding weights, $\phi_i^x$, which are obtained by minimizing the mean square error. A single noise term, $\eta$, takes into account all sources of noise, which have the effect of introducing uncertainty into the decoding process. Hence, the transmitted firing rate can be written as $a_i(x(t)) + \eta_i$, where $a_i(x(t))$ represents the noiseless set of tuning curves and $\eta_i$ is a random variable picked from a zero-mean Gaussian distribution with variance $\sigma^2$. Consequently, the mean square error can be written as [11]:
$$E = \frac{1}{2}\Big\langle \left[ x(t) - \hat{x}(t) \right]^2 \Big\rangle_{x,\eta,t} = \frac{1}{2}\Big\langle \Big[ x(t) - \sum_i \big( a_i(x(t)) + \eta_i \big)\,\phi_i^x \Big]^2 \Big\rangle_{x,\eta,t} \qquad (4)$$
where $\langle \cdot \rangle_{x,\eta}$ denotes integration over the range of $x$ and $\eta$, the expected noise. We assume that the
noise is independent and has the same variance for each neuron [11], which yields:
*"
#2 +
1
1
x
E=
x(t) ? ? ai (x(t))?i
+ ? 2 ?(?ix )2 ,
(5)
2
2
i
i
x,t
where
?2
is the noise variance ?i ? j . This expression is minimized by:
N
?ix = ? ??1
i j ? j,
(6)
j
with ?i j = ai (x)a j (x) x + ? 2 ?i j , where ? is the Kronecker delta function matrix, and ? j =
xa j (x) x [11]. One consequence of modeling noise in the neural representation is that the matrix
? is invertible despite the use of a highly overcomplete representation. In a noiseless representation,
? is generally singular because, due to the large number of neurons, there is a high probability of
having two neurons with similar tuning curves leading to two similar rows in ?.
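The decoder computation of Equation 6 amounts to a regularized least-squares solve. A minimal numpy sketch follows; the tuning curves, noise scale, and stimulus grid are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 201)                 # sampled 1D stimuli
n = 20
enc = rng.choice([-1.0, 1.0], n)                 # ON/OFF preferred directions
bias = rng.uniform(0.1, 1.0, (n, 1))
a = np.maximum(0.0, enc[:, None] * xs[None, :] + bias)  # toy tuning curves a_i(x)

sigma2 = 0.1                                     # assumed noise variance
Gamma = a @ a.T / xs.size + sigma2 * np.eye(n)   # <a_i a_j>_x + sigma^2 * delta_ij
Upsilon = a @ xs / xs.size                       # <x a_j(x)>_x
phi = np.linalg.solve(Gamma, Upsilon)            # Equation 6

x_hat = phi @ a                                  # decoded stimulus estimate
```

The σ² term on the diagonal is what keeps Γ well-conditioned despite the overcomplete representation; without it, two neurons with near-identical tuning would make the solve unstable.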
3  Kalman-filter based cortical decoder
In the 1960s, Kalman described a method that uses linear filtering to track the state of a dynamical system throughout time using a model of the dynamics of the system as well as noisy measurements [15]. The model dynamics gives an estimate of the state of the system at the next time step. This estimate is then corrected using the observations (i.e., measurements) at this time step. The relative weights for these two pieces of information are given by the Kalman gain, K [15, 16]. Whereas the Kalman gain is updated at each iteration, the state and observation matrices (defined below), and the corresponding noise matrices, are assumed constant.
In the case of prosthetic applications, the system's state vector is the cursor's kinematics, x_t = [vel_t^x, vel_t^y, 1], where the constant 1 allows for a fixed offset compensation. The measurement vector, y_t, is the neural spike rate (spike counts in each time step) of 192 channels of neural threshold crossings. The system's dynamics is modeled by:

x_t = A x_{t−1} + w_t ,    (7)
y_t = C x_t + q_t ,    (8)

where A is the state matrix, C is the observation matrix, and w_t and q_t are additive, Gaussian noise sources with w_t ∼ N(0, W) and q_t ∼ N(0, Q). The model parameters (A, C, W and Q) are fit with training data by correlating the observed hand kinematics with the simultaneously measured neural signals (Figure 2).
For an efficient decoding, we derived the steady-state update equation by replacing the adaptive Kalman gain by its steady-state formulation: K = (I + W Cᵀ Q⁻¹ C)⁻¹ W Cᵀ Q⁻¹. This yields the following estimate of the system's state:

x_t = (I − KC) A x_{t−1} + K y_t = M_x^DT x_{t−1} + M_y^DT y_t ,    (9)
[Figure 2 appears here: a raster of 192 neural channels, hand x- and y-velocity traces over roughly 14 s, and cursor trajectories (trials 0034-0049).]

Figure 2: Neural and kinematic measurements (monkey J, 2011-04-16, 16 continuous trials) used to fit the standard Kalman filter model. a. The 192 cortical recordings fed as input to fit the Kalman filter's matrices (color code refers to the number of threshold crossings observed in each 50ms bin). b. Hand x- and y-velocity measurements correlated with the neural data to obtain the Kalman filter's matrices. c. Cursor kinematics of 16 continuous trials under direct hand control.
where M_x^DT = (I − KC)A and M_y^DT = K are the discrete-time (DT) Kalman matrices. The steady-state formulation improves efficiency with little loss in accuracy because the optimal Kalman gain rapidly converges (typically in less than 100 iterations). Indeed, in neural applications under both open-loop and closed-loop conditions, the difference between the full Kalman filter and its steady-state implementation falls to within 1% in a few seconds [17]. This simplifying assumption reduces the execution time for decoding a typical neuronal firing rate signal approximately seven-fold [17], a critical speed-up for real-time applications.
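The steady-state gain can be obtained by simply iterating the standard Kalman covariance recursion until K stops changing, then freezing it. A minimal sketch, not the authors' Matlab code, with toy system matrices (the 3-state system and channel count are placeholders):

```python
import numpy as np

def steady_state_kalman(A, C, W, Q, n_iter=200):
    """Iterate the covariance/gain recursion until convergence and return
    the steady-state update matrices M_x and M_y of Equation 9."""
    n = A.shape[0]
    P = W.copy()
    for _ in range(n_iter):
        P_pred = A @ P @ A.T + W                              # predict covariance
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Q)  # Kalman gain
        P = (np.eye(n) - K @ C) @ P_pred                      # correct covariance
    M_x = (np.eye(n) - K @ C) @ A   # applied to the previous state x_{t-1}
    M_y = K                         # applied to the measurement y_t
    return M_x, M_y

# Toy 3-state system (x-velocity, y-velocity, constant offset), 5 channels.
rng = np.random.default_rng(1)
A = np.diag([0.9, 0.9, 1.0])
C = rng.standard_normal((5, 3))
W = 0.01 * np.eye(3)
Q = 0.1 * np.eye(5)
M_x, M_y = steady_state_kalman(A, C, W, Q)
```

Once M_x and M_y are frozen, each decoding step is just two matrix-vector products, which is the seven-fold saving cited above.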
4  Kalman filter with a spiking neural network
To implement the Kalman filter with a SNN by applying the NEF, we first convert Equation 9 from DT to continuous time (CT), and then replace the CT matrices with neurally plausible ones, which yields:

x(t) = h(t) ∗ [ A′ x(t) + B′ y(t) ] ,    (10)

where A′ = τ M_x^CT + I and B′ = τ M_y^CT, with M_x^CT = (M_x^DT − I)/Δt and M_y^CT = M_y^DT/Δt the CT Kalman matrices, and Δt = 50ms the discrete time step; τ is the synaptic time-constant.
The jth neuron's input current (see Equation 3) is computed from the system's current state, x(t), which is computed from estimates of the system's previous state (x̂(t) = Σ_i a_i(t) φ_i^x) and current input (ŷ(t) = Σ_k b_k(t) φ_k^y) using Equation 10. This yields:

J_j(x(t)) = α_j φ̃_j^x · x(t) + J_j^bias
          = α_j φ̃_j^x · ( h(t) ∗ [A′ x̂(t) + B′ ŷ(t)] ) + J_j^bias
          = α_j φ̃_j^x · ( h(t) ∗ [A′ Σ_i a_i(t) φ_i^x + B′ Σ_k b_k(t) φ_k^y] ) + J_j^bias    (11)

This last equation can be written in a neural network form:

J_j(x(t)) = h(t) ∗ ( Σ_i ω_ji a_i(t) + Σ_k ω_jk b_k(t) ) + J_j^bias ,    (12)

where ω_ji = α_j φ̃_j^x · A′ φ_i^x and ω_jk = α_j φ̃_j^x · B′ φ_k^y are the recurrent and feedforward weights, respectively.
5  Efficient implementation of the SNN
In this section, we describe the two distinct steps carried out when implementing the SNN: creating
and running the network. The first step has no computational constraints whereas the second must
be very efficient in order to be successfully deployed in the closed-loop experimental setting.
[Figure 3 appears here: a schematic contrasting the two ways of computing a 1000-neuron pool's recurrent input.]

Figure 3: Computing a 1000-neuron pool's recurrent connections. a. Using connection weights requires multiplying a 1000×1000 matrix by a 1000×1 vector. b. Operating in the lower-dimensional state space requires multiplying a 1×1000 vector by a 1000×1 vector to get the decoded state, multiplying this state by a component of the A′ matrix to update it, and multiplying the updated state by a 1000×1 vector to re-encode it as firing rates, which are then used to update the soma current for every neuron.
Network creation: This step generates, for a specified number of neurons composing the network, the gain α_j, bias current J_j^bias, preferred direction φ̃_j^x, and decoding weight φ_j^x for each neuron. The preferred directions φ̃_j^x are drawn randomly from a uniform distribution over the unit sphere. The maximum firing rate, max G(J_j(x)), and the normalized x-axis intercept, G(J_j(x)) = 0, are drawn randomly from a uniform distribution on [200, 400] Hz and [−1, 1], respectively. From these two specifications, α_j and J_j^bias are computed using Equation 2 and Equation 3. The decoding weights φ_j^x are computed by minimizing the mean square error (Equation 6).
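Solving for α_j and J_j^bias from a target maximum rate and intercept can be done in closed form by inverting the LIF rate curve. A sketch under stated assumptions: the rate expression G(J) = 1/(τ_ref − τ_RC ln(1 − J_th/J)) and a +1 encoder in 1D are assumed, and the parameter values come from Table 1:

```python
import numpy as np

def gain_bias(max_rate, x_int, tau_rc=0.02, tau_ref=0.001, J_th=1.0):
    """Return (alpha, J_bias) such that a LIF neuron with a +1 encoder fires at
    `max_rate` Hz for x = 1 and sits exactly at threshold (J = J_th) at x = x_int."""
    # Invert G(J) = 1 / (tau_ref - tau_rc * log(1 - J_th / J)) at max_rate:
    J_max = J_th / (1.0 - np.exp((tau_ref - 1.0 / max_rate) / tau_rc))
    # Two linear conditions: alpha*1 + J_bias = J_max and alpha*x_int + J_bias = J_th.
    alpha = (J_max - J_th) / (1.0 - x_int)
    J_bias = J_th - alpha * x_int
    return alpha, J_bias

alpha, J_bias = gain_bias(max_rate=300.0, x_int=-0.5)
```

Drawing max_rate from [200, 400] Hz and x_int from [−1, 1] and applying this solve per neuron reproduces the randomized tuning-curve population described above.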
For efficient implementation, we used two 1D integrators (i.e., two recurrent neuron pools, with each pool representing a scalar) rather than a single 3D integrator (i.e., one recurrent neuron pool, with the pool representing a 3D vector by itself) [13]. The constant 1 is fed to the 1D integrators as an input, rather than continuously integrated as part of the state vector. We also replaced the b_k(t) units' spike rates (Figure 1, middle) with the 192 neural measurements (spike counts in 50ms bins), which is equivalent to choosing φ_k^y from a standard basis (i.e., a unit vector with 1 at the kth position and 0 everywhere else) [7].
Network simulation: This step runs the simulation to update the soma current for every neuron, based on input spikes. The soma voltage is then updated following RC circuit dynamics. Gaussian noise is normally added at this step, the rest of the simulation being noiseless. Neurons with soma voltage above threshold generate a spike and enter their refractory period. The neuron firing rates are decoded using the linear decoding weights to get the updated state values, x- and y-velocity. These values are smoothed with a filter identical to h(t), but with τ set to 5ms instead of 20ms to avoid introducing significant delay. Then the simulation step starts over again.
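One time step of such a simulation might look as follows. This is a toy sketch, not the xPC implementation; the pool size, input currents, and time step are invented, while the RC, threshold, and refractory parameters follow Table 1:

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt = 50, 0.001                    # toy pool size, 1 ms update
tau_rc, v_th, t_ref = 0.02, 1.0, 0.001
v = rng.uniform(0.0, 1.0, n)         # soma voltages
refractory = np.zeros(n)             # time left in the refractory period
J = rng.uniform(0.5, 2.0, n)         # input currents for this step

# Integrate RC dynamics, dv/dt = (J - v) / tau_rc, for non-refractory neurons.
active = refractory <= 0.0
v[active] += dt / tau_rc * (J[active] - v[active])
refractory = np.maximum(0.0, refractory - dt)

# Neurons crossing threshold spike, reset, and enter the refractory period.
spiked = v >= v_th
v[spiked] = 0.0
refractory[spiked] = t_ref
```

In the full decoder, the resulting spike train would then be filtered by h(t) and decoded with the φ weights before the next step.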
In order to ensure rapid execution of the simulation step, neuron interactions are not updated directly using the connection matrix (Equation 12), but rather indirectly with the decoding matrix φ_j^x, dynamics matrix A′, and preferred direction matrix φ̃_j^x (Equation 11). To see why this is more efficient, suppose we have 1000 neurons in the a population for each of the state vector's two scalars. Computing the recurrent connections using connection weights requires multiplying a 1000×1000 matrix by a 1000-dimensional vector (Figure 3a). This requires 10⁶ multiplications and about 10⁶ sums. Decoding each scalar (i.e., Σ_i a_i(t) φ_i^x), however, requires only 1000 multiplications and 1000 sums. The decoded state vector is then updated by multiplying it by the (diagonal) A′ matrix, another 2 products and 1 sum. The updated state vector is then encoded by multiplying it with the neurons' preferred direction vectors, another 1000 multiplications per scalar (Figure 3b). The resulting total of about 3000 operations is nearly three orders of magnitude fewer than using the connection weights to compute the identical transformation.
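The operation-count argument is easy to verify numerically: the factored decode-transform-encode route produces exactly the same soma currents as the full connection matrix, because the matrix is rank-one per state dimension. A sketch for one scalar, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000                                   # neurons representing one scalar
a = rng.poisson(5.0, n).astype(float)      # instantaneous firing rates
alpha = rng.uniform(5.0, 15.0, n)          # gains
enc = rng.choice([-1.0, 1.0], n)           # 1D preferred directions
phi = rng.standard_normal(n) / n           # decoding weights
A_p = 0.8                                  # the relevant component of A'

# Full-matrix route (Figure 3a): ~10^6 multiply-adds.
W = (alpha * enc)[:, None] * (A_p * phi)[None, :]   # n x n connection matrix
J_full = W @ a

# Factored route (Figure 3b): decode, transform, re-encode (~3n operations).
x_hat = phi @ a               # decode the scalar from the rates
x_new = A_p * x_hat           # apply the dynamics component
J_fact = alpha * enc * x_new  # re-encode as soma currents
```

The two routes agree to machine precision while differing by three orders of magnitude in work, matching the argument above.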
To measure the speedup, we simulated a 2,000-neuron network on a computer running Matlab 2011a (Intel Core i7, 2.7-GHz, Mac OS X Lion). Although the exact run-times depend on the computing hardware and software, the run-time reduction factor should remain approximately constant across platforms. For each reported result, we ran the simulation 10 times to obtain a reliable estimate of the execution time. The run-time for neuron interactions using the recurrent connection weights was 9.9ms and dropped to 2.7µs in the lower-dimensional space, approximately a 3,500-fold speedup. Only the recurrent interactions benefit from the speedup, the execution time for the rest of the operations remaining constant. The run-time for a 50ms network simulation using the recurrent connection weights was 0.94s and dropped to 0.0198s in the lower-dimensional space, a 47-fold speedup. These results demonstrate the efficiency the lower-dimensional space offers, which made the closed-loop application of SNNs possible.

Table 1: Model parameters

  Symbol          Range                  Description
  max G(J_j(x))   200-400 Hz             Maximum firing rate
  G(J_j(x)) = 0   −1 to 1                Normalized x-axis intercept
  J_j^bias        Satisfies first two    Bias current
  α_j             Satisfies first two    Gain factor
  φ̃_j^x           ‖φ̃_j^x‖ = 1            Preferred-direction vector
  σ²              0.1                    Gaussian noise variance
  τ_j^RC          20 ms                  RC time constant
  τ_j^ref         1 ms                   Refractory period
  τ_j^PSC         20 ms                  PSC time constant
6  Closed-loop implementation
An adult male rhesus macaque (monkey J) was trained to perform a center-out-and-back reaching
task for juice rewards to one of eight targets, with a 500ms hold time (Figure 4a) [1]. All animal
protocols and procedures were approved by the Stanford Institutional Animal Care and Use Committee. Hand position was measured using a Polaris optical tracking system at 60Hz (Northern
Digital Inc.). Neural data were recorded from two 96-electrode silicon arrays (Blackrock Microsystems) implanted in the dorsal pre-motor and motor cortex. These recordings (-4.5 RMS threshold crossing applied to each electrode's signal) yielded tuned activity for the direction and speed of arm
movements. As detailed in [1], a standard Kalman filter model was fit by correlating the observed
hand kinematics with the simultaneously measured neural signals, while the monkey moved his arm
to acquire virtual targets (Figure 2). The resulting model was used in a closed-loop system to control
an on-screen cursor in real-time (Figure 4a, Decoder block). A steady-state version of this model
serves as the standard against which the SNN implementation's performance is compared.
We built a SNN using the NEF methodology based on derived Kalman filter parameters mentioned
above. This SNN was then simulated on an xPC Target (Mathworks) x86 system (Dell T3400, Intel Core 2 Duo E8600, 3.33GHz). It ran in closed-loop, replacing the standard Kalman filter as the
decoder block in Figure 4a. The parameter values listed in Table 1 were used for the SNN implementation. We ensured that the time constants τ_i^RC, τ_i^ref, and τ_i^PSC were smaller than the implementation's
time step (50ms). Noise was not explicitly added. It arose naturally from the fluctuations produced
by representing a scalar with filtered spike trains, which has been shown to have effects similar to
Gaussian noise [11]. For the purpose of computing the linear decoding weights (i.e., φ), we modeled
the resulting noise as Gaussian with a variance of 0.1.
A 2,000-neuron version of the SNN-based decoder was tested in a closed-loop system, the largest
network our embedded MatLab implementation could run in real-time. There were 1206 trials
total among which 301 (center-outs only) were performed with the SNN and 302 with the standard
(steady-state) Kalman filter. The block structure was randomized and interleaved, so that there is
no behavioral bias present in the findings. 100 trials under hand control are used as a baseline
comparison. Success corresponds to a target acquisition under 1500ms, with 500ms hold time.
Success rates were higher than 99% on all blocks for the SNN implementation and 100% for the
standard Kalman filter. The average time to acquire the target was slightly slower for the SNN
(Figure 5b), 711ms vs. 661ms, respectively; we believe this could be improved by using more neurons in the SNN.¹ The average distance to target (Figure 5a) and the average velocity of the
cursor (Figure 5c) are very similar.
¹ Off-line, the SNN performed better as we increased the number of neurons [7].
[Figures 4 and 5 appear here. Figure 4's panels show neural spikes feeding the decoder and BMI cursor trajectories for the Kalman decoder (trials 2056-2071) and the SNN decoder (trials 1748-1763); Figure 5's panels compare mean distance to target, the target acquisition time histogram, and mean cursor velocity for the standard Kalman filter, hand control, and the spiking neural network.]
Figure 4: Experimental setup and results. a. Data are recorded from two 96-channel silicon electrode arrays implanted in dorsal pre-motor and motor cortex of an adult male monkey performing a center-out-and-back reach task for juice rewards to one of eight targets with a 500ms hold time. b. BMI position kinematics of 16 continuous trials for the standard Kalman filter implementation. c. BMI position kinematics of 16 continuous trials for the SNN implementation.
Figure 5: SNN (red) performance compared to the standard Kalman filter (blue); hand trials are shown for reference (yellow). The SNN achieves results similar to the standard Kalman filter implementation (success rates are higher than 99% on all blocks). a. Plot of distance to target vs. time after target onset for different control modalities. The thicker traces represent the average time when the cursor first enters the acceptance window until successfully entering for the 500ms hold time. b. Histogram of target acquisition time. c. Plot of mean cursor velocity vs. time.
7  Conclusions and future work
The SNN's performance was quite comparable to that produced by a standard Kalman filter implementation. The 2,000-neuron network had success rates higher than 99% on all blocks, with
mean distance to target, target acquisition time, and mean cursor velocity curves very similar to
the ones obtained with the standard implementation. Future work will explore whether these results extend to additional animals. As the Kalman filter and its variants are the state-of-the-art in
cortically-controlled motor prostheses [1]-[5], these simulations provide confidence that similar levels of performance can be attained with a neuromorphic system, which can potentially overcome the
power constraints set by clinical applications.
Our ultimate goal is to develop an ultra-low-power neuromorphic chip for prosthetic applications
onto which control-theory algorithms can be mapped using the NEF. As our next step in this direction, we will begin exploring this mapping with Neurogrid, a hardware platform with sixteen programmable neuromorphic chips that can simulate up to a million spiking neurons in real-time [9].
However, bandwidth limitations prevent Neurogrid from realizing random connectivity patterns. It
can only connect each neuron to thousands of others if neighboring neurons share common inputs
– just as they do in the cortex. Such columnar organization may be possible with NEF-generated networks if preferred-direction vectors are assigned topographically rather than randomly. Implementing this constraint effectively is a subject of ongoing research.
Acknowledgment

This work was supported in part by the Belgian American Education Foundation (J. Dethier), Stanford NIH Medical Scientist Training Program (MSTP) and Soros Fellowship (P. Nuyujukian), DARPA Revolutionizing Prosthetics program (N66001-06-C-8005, K. V. Shenoy), and two NIH Director's Pioneer Awards (DP1-OD006409, K. V. Shenoy; DPI-OD000965, K. Boahen).
References

[1] V. Gilja, Towards clinically viable neural prosthetic systems, Ph.D. Thesis, Department of Computer Science, Stanford University, 2010, pp 19-22 and pp 57-73.
[2] V. Gilja, P. Nuyujukian, C.A. Chestek, J.P. Cunningham, J.M. Fan, B.M. Yu, S.I. Ryu, and K.V. Shenoy, A high-performance continuous cortically-controlled prosthesis enabled by feedback control design, 2010 Neuroscience Meeting Planner, San Diego, CA: Society for Neuroscience, 2010.
[3] P. Nuyujukian, V. Gilja, C.A. Chestek, J.P. Cunningham, J.M. Fan, B.M. Yu, S.I. Ryu, and K.V. Shenoy, Generalization and robustness of a continuous cortically-controlled prosthesis enabled by feedback control design, 2010 Neuroscience Meeting Planner, San Diego, CA: Society for Neuroscience, 2010.
[4] V. Gilja, C.A. Chestek, I. Diester, J.M. Henderson, K. Deisseroth, and K.V. Shenoy, Challenges and opportunities for next-generation intra-cortically based neural prostheses, IEEE Transactions on Biomedical Engineering, 2011, in press.
[5] S.P. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia, Journal of Neural Engineering, vol. 5, 2008, pp 455-476.
[6] S. Kim, P. Tathireddy, R.A. Normann, and F. Solzbacher, Thermal impact of an active 3-D microelectrode array implanted in the brain, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, 2007, pp 493-501.
[7] J. Dethier, V. Gilja, P. Nuyujukian, S.A. Elassaad, K.V. Shenoy, and K. Boahen, Spiking neural network decoder for brain-machine interfaces, IEEE Engineering in Medicine & Biology Society Conference on Neural Engineering, Cancun, Mexico, 2011, pp 396-399.
[8] K. Boahen, Neuromorphic microchips, Scientific American, vol. 292(5), 2005, pp 56-63.
[9] R. Silver, K. Boahen, S. Grillner, N. Kopell, and K.L. Olsen, Neurotech for neuroscience: unifying concepts, organizing principles, and emerging tools, Journal of Neuroscience, vol. 27(44), 2007, pp 11807-11819.
[10] J.V. Arthur and K. Boahen, Silicon neuron design: the dynamical systems approach, IEEE Transactions on Circuits and Systems, vol. 58(5), 2011, pp 1034-1043.
[11] C. Eliasmith and C.H. Anderson, Neural engineering: computation, representation, and dynamics in neurobiological systems, MIT Press, Cambridge, MA, 2003.
[12] C. Eliasmith, A unified approach to building and controlling spiking attractor networks, Neural Computation, vol. 17, 2005, pp 1276-1314.
[13] R. Singh and C. Eliasmith, Higher-dimensional neurons explain the tuning and dynamics of working memory cells, The Journal of Neuroscience, vol. 26(14), 2006, pp 3667-3678.
[14] C. Eliasmith, How to build a brain: from function to implementation, Synthese, vol. 159(3), 2007, pp 373-388.
[15] R.E. Kalman, A new approach to linear filtering and prediction problems, Transactions of the ASME, Journal of Basic Engineering, vol. 82(Series D), 1960, pp 35-45.
[16] G. Welch and G. Bishop, An introduction to the Kalman filter, University of North Carolina at Chapel Hill, Chapel Hill, NC, vol. 95(TR 95-041), 1995, pp 1-16.
[17] W.Q. Malik, W. Truccolo, E.N. Brown, and L.R. Hochberg, Efficient decoding with steady-state Kalman filter in neural interface systems, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19(1), 2011, pp 25-34.
Trace Lasso: a trace norm regularization for correlated designs
Édouard Grave
INRIA, Sierra Project-team
École Normale Supérieure, Paris
edouard.grave@inria.fr

Guillaume Obozinski
INRIA, Sierra Project-team
École Normale Supérieure, Paris
guillaume.obozinski@inria.fr

Francis Bach
INRIA, Sierra Project-team
École Normale Supérieure, Paris
francis.bach@inria.fr
Abstract
Using the ℓ1-norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm of the selected covariates, which is a convex surrogate of their rank, as the criterion of model complexity. We analyze the properties of our norm, describe an optimization algorithm based on reweighted least-squares, and illustrate the behavior of this norm on synthetic data, showing that it is more adapted to strong correlations than competing methods such as the elastic net.
1  Introduction
The concept of parsimony is central in many scientific domains. In the context of statistics, signal
processing or machine learning, it takes the form of variable or feature selection problems, and is
commonly used in two situations: first, to make the model or the prediction more interpretable or
cheaper to use, i.e., even if the underlying problem does not admit sparse solutions, one looks for the
best sparse approximation. Second, sparsity can also be used given prior knowledge that the model
should be sparse. Many methods have been designed to learn sparse models, namely methods based
on greedy algorithms [1, 2], Bayesian inference [3] or convex optimization [4, 5].
In this paper, we focus on the regularization by sparsity-inducing norms. The simplest example
of such norms is the `1 -norm, leading to the Lasso, when used within a least-squares framework.
In recent years, a large body of work has shown that the Lasso was performing optimally in highdimensional low-correlation settings, both in terms of prediction [6], estimation of parameters or
estimation of supports [7, 8]. However, most data exhibit strong correlations, with various correlation structures, such as clusters (i.e., close to block-diagonal covariance matrices) or sparse graphs,
such as for example problems involving sequences (in which case, the covariance matrix is close to
a Toeplitz matrix [9]). In these situations, the Lasso is known to have stability problems: although
its predictive performance is not disastrous, the selected predictor may vary a lot (typically, given
two correlated variables, the Lasso will only select one of the two, at random).
Several remedies have been proposed to this instability. First, the elastic net [10] adds a strongly
convex penalty term (the squared ℓ2-norm) that will stabilize selection (typically, given two correlated variables, the elastic net will select both). However, it is blind to the exact
correlation structure, and while strong convexity is required for some variables, it is not for others. Another solution is to consider the group Lasso, which divides the predictors into groups and penalizes the sum of the ℓ2-norms of these groups [11]. This is known to accommodate strong correlations within groups [12]; however, it requires knowing the groups in advance, which is not always possible. A third line of research has focused on sampling-based techniques [13, 14, 15].
An ideal regularizer should thus take into account the design (like the group Lasso, with oracle
groups), but without requiring human intervention (like the elastic net); it should thus add strong
convexity only where needed, and not modifying variables where things behave correctly. In this
paper, we propose a new norm towards this end.
More precisely, we make the following contributions:
- We propose in Section 2 a new norm based on the trace norm (a.k.a. nuclear norm) that interpolates between the ℓ1-norm and the ℓ2-norm depending on correlations.
- We show that there is a unique minimum when penalizing with this norm in Section 2.2.
- We provide optimization algorithms based on reweighted least-squares in Section 3.
- We study the second-order expansion around independence and relate it to existing work on including correlations in Section 4.
- We perform synthetic experiments in Section 5, where we show that the trace Lasso outperforms existing norms in strong-correlation regimes.
Notations. Let M ∈ R^{n×p}. We use superscripts for the columns of M, i.e., M^(i) denotes the i-th
column, and subscripts for the rows, i.e., M_i denotes the i-th row. For M ∈ R^{p×p}, diag(M) ∈ R^p
is the diagonal of the matrix M, while for u ∈ R^p, Diag(u) ∈ R^{p×p} is the diagonal matrix whose
diagonal elements are the u_i. Let S be a subset of {1, ..., p}; then u_S is the vector u restricted to
the support S, with 0 outside the support S. We denote by S_p the set of symmetric matrices of size
p. We will use various matrix norms; here are the notations we use: ||M||_* is the trace norm, i.e.,
the sum of the singular values of the matrix M; ||M||_op is the operator norm, i.e., the maximum
singular value of the matrix M; ||M||_F is the Frobenius norm, i.e., the ℓ2-norm of the singular
values, which is also equal to sqrt(tr(M^T M)); and ||M||_{2,1} is the sum of the ℓ2-norms of the columns
of M: ||M||_{2,1} = Σ_{i=1}^p ||M^(i)||_2.
2 Definition and properties of the trace Lasso
We consider the problem of predicting y ∈ R, given a vector x ∈ R^p, assuming a linear model

y = w^T x + ε,

where ε is an additive (typically Gaussian) noise with mean 0 and variance σ². Given a training set
X = (x_1, ..., x_n)^T ∈ R^{n×p} and y = (y_1, ..., y_n)^T ∈ R^n, a widely used method to estimate the
parameter vector w is penalized empirical risk minimization:

ŵ ∈ argmin_w (1/n) Σ_{i=1}^n ℓ(y_i, w^T x_i) + λ f(w),     (1)

where ℓ is a loss function used to measure the error we make by predicting w^T x_i instead of y_i, while
f is a regularization term used to penalize complex models. This second term helps avoid overfitting, especially in the case where we have many more parameters than observations, i.e., n ≪ p.
2.1 Related work
We will now present some classical penalty functions for linear models which are widely used in the
machine learning and statistics community. The first one, known as Tikhonov regularization [16] or
ridge [17], is the squared ℓ2-norm. When used with the square loss, estimating the parameter vector
w is done by solving a linear system. One of the main drawbacks of this penalty function is the fact
that it does not perform variable selection and thus does not behave well in sparse high-dimensional
settings.
Hence, it is natural to penalize linear models by the number of variables used by the model. Unfortunately, this criterion, sometimes denoted by ||·||_0 (ℓ0-penalty), is not convex and solving the
problem in Eq. (1) is generally NP-hard [18]. Thus, a convex relaxation for this problem was introduced, replacing the size of the selected subset by the ℓ1-norm of w. This estimator is known
as the Lasso [4] in the statistics community and basis pursuit [5] in signal processing. Under some
assumptions, the two problems are in fact equivalent (see for example [19] and references therein).
When two predictors are highly correlated, the Lasso has a very unstable behavior: it often only
selects the variable that is the most correlated with the residual. On the other hand, Tikhonov
regularization tends to shrink coefficients of correlated variables together, leading to a very stable
behavior. In order to get the best of both worlds, stability and variable selection, Zou and Hastie
introduced the elastic net [10], which is the sum of the ℓ1-norm and squared ℓ2-norm. Unfortunately,
this estimator needs two regularization parameters and is not adaptive to the precise correlation
structure of the data. Some authors also proposed to use pairwise correlations between predictors
to interpolate more adaptively between the ℓ1-norm and squared ℓ2-norm, by introducing a method
called the pairwise elastic net [20] (see comparisons with our approach in Section 5).
Finally, when one has more knowledge about the data, for example clusters of variables that should
be selected together, one can use the group Lasso [11]. Given a partition (S_i) of the set of variables,
it is defined as the sum of the ℓ2-norms of the restricted vectors w_{S_i}:

||w||_GL = Σ_{i=1}^k ||w_{S_i}||_2.

The effect of this penalty function is to introduce sparsity at the group level: variables in a group are
selected all together. One of the main drawbacks of this method, which is also sometimes one of its
qualities, is the fact that one needs to know the partition of the variables, and so one needs to have a
good knowledge of the data.
2.2 The ridge, the Lasso and the trace Lasso
In this section, we show that Tikhonov regularization and the Lasso penalty can be viewed as norms
of the matrix X Diag(w). We then introduce a new norm involving this matrix.
The solution of empirical risk minimization penalized by the ℓ1-norm or ℓ2-norm is not equivariant
to rescaling of the predictors X^(i), so it is common to normalize the predictors. When normalizing
the predictors X^(i), and penalizing by Tikhonov regularization or by the Lasso, people are implicitly using a regularization term that depends on the data or design matrix X. In fact, there is an
equivalence between normalizing the predictors and not normalizing them, using the two following
reweighted ℓ2 and ℓ1-norms instead of Tikhonov regularization and the Lasso:

||w||_2^2 = Σ_{i=1}^p ||X^(i)||_2^2 w_i^2   and   ||w||_1 = Σ_{i=1}^p ||X^(i)||_2 |w_i|.     (2)
These two norms can be expressed using the matrix X Diag(w):

||w||_2 = ||X Diag(w)||_F   and   ||w||_1 = ||X Diag(w)||_{2,1},

and a natural question arises: are there other relevant choices of functions or matrix norms? A
classical measure of the complexity of a model is the number of predictors used by this model,
which is equal to the size of the support of w. This penalty being non-convex, people use its convex
relaxation, which is the ℓ1-norm, leading to the Lasso.
Here, we propose a different measure of complexity which can be shown to be more adapted in
model selection settings [21]: the dimension of the subspace spanned by the selected predictors.
This is equal to the rank of the selected predictors, or also to the rank of the matrix X Diag(w).
Like the size of the support, this function is non-convex, and we propose to replace it by a convex
surrogate, the trace norm, leading to the following penalty that we call the "trace Lasso":

Ω(w) = ||X Diag(w)||_*.
The trace Lasso has some interesting properties: if all the predictors are orthogonal, then it is equal
to the ℓ1-norm. Indeed, we have the decomposition

X Diag(w) = Σ_{i=1}^p ||X^(i)||_2 w_i (X^(i) / ||X^(i)||_2) e_i^T,

where the e_i are the vectors of the canonical basis. Since the predictors are orthogonal and the e_i are
orthogonal too, this gives the singular value decomposition of X Diag(w) and we get

||X Diag(w)||_* = Σ_{i=1}^p ||X^(i)||_2 |w_i| = ||X Diag(w)||_{2,1}.
On the other hand, if all the predictors are equal to X^(1), then

X Diag(w) = X^(1) w^T,

and we get ||X Diag(w)||_* = ||X^(1)||_2 ||w||_2 = ||X Diag(w)||_F, which is equivalent to Tikhonov
regularization. Thus when two predictors are strongly correlated, our norm will behave like
Tikhonov regularization, while for almost uncorrelated predictors, it will behave like the Lasso.
Always having a unique minimum is an important property for a statistical estimator, as it is a first
step towards stability. The trace Lasso, by adding strong convexity exactly in the direction of highly
correlated covariates, always has a unique minimum, and is thus much more stable than the Lasso.
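As a quick numerical check of the two limiting cases above, the following NumPy sketch (our own illustration, not code from the paper) evaluates ||X Diag(w)||_* for an orthogonal design and for a design whose columns are all identical:

```python
import numpy as np

def trace_lasso(X, w):
    """Trace Lasso penalty Omega(w) = ||X Diag(w)||_* (sum of singular values)."""
    return np.linalg.svd(X @ np.diag(w), compute_uv=False).sum()

rng = np.random.default_rng(0)
w = rng.standard_normal(4)

# Orthogonal unit-norm predictors: the penalty collapses to the l1-norm of w.
Q, _ = np.linalg.qr(rng.standard_normal((10, 4)))
assert np.isclose(trace_lasso(Q, w), np.abs(w).sum())

# All predictors identical (unit norm): the penalty collapses to the l2-norm of w.
x = rng.standard_normal(10)
X_dup = np.tile((x / np.linalg.norm(x))[:, None], (1, 4))
assert np.isclose(trace_lasso(X_dup, w), np.linalg.norm(w))
```

In the orthogonal case the columns of X Diag(w) form an SVD with singular values |w_i|; in the duplicated case the matrix is rank one with singular value ||w||_2, matching the derivations above.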
Proposition 1. If the loss function ℓ is strongly convex with respect to its second argument, then the
solution of the empirical risk minimization penalized by the trace Lasso, i.e., Eq. (1), is unique.
The technical proof of this proposition can be found in [22], and consists in showing that in the flat
directions of the loss function, the trace Lasso is strongly convex.
2.3 A new family of penalty functions
In this section, we introduce a new family of penalties, inspired by the trace Lasso, allowing us to
write the ℓ1-norm, the ℓ2-norm and the newly introduced trace Lasso as special cases. In fact, we
note that ||Diag(w)||_* = ||w||_1 and ||p^{−1/2} 1^T Diag(w)||_* = ||w^T||_* = ||w||_2. In other words,
we can express the ℓ1 and ℓ2-norms of w using the trace norm of a given matrix times the matrix
Diag(w). A natural question to ask is: what happens when using a matrix P other than the identity
or the row vector p^{−1/2} 1^T, and what are good choices of such matrices? Therefore, we introduce
the following family of penalty functions:
Definition 1. Let P ∈ R^{k×p}, all of its columns having unit norm. We introduce the norm Ω_P as

Ω_P(w) = ||P Diag(w)||_*.
Proof. The positive homogeneity and triangle inequality are direct consequences of the linearity of
w ↦ P Diag(w) and the fact that ||·||_* is a norm. Since all the columns of P are not equal to zero,
we have

P Diag(w) = 0 ⟹ w = 0,

and so Ω_P separates points and thus is a norm.
As stated before, the ℓ1 and ℓ2-norms are special cases of the family of norms we just introduced.
Another important penalty that can be expressed as a special case is the group Lasso, with non-overlapping groups. Given a partition (S_j) of the set {1, ..., p}, the group Lasso is defined by

||w||_GL = Σ_{S_j} ||w_{S_j}||_2.

We define the matrix P^GL by

P^GL_{ij} = 1/sqrt(|S_k|)  if i and j are in the same group S_k,  and 0 otherwise.
Figure 1: Unit balls for various values of P^T P. See the text for the values of P^T P. (Best seen in
color.)
Then,

P^GL Diag(w) = Σ_{S_j} (1_{S_j} / sqrt(|S_j|)) w_{S_j}^T.     (3)

Using the fact that (S_j) is a partition of {1, ..., p}, the vectors 1_{S_j} are orthogonal and so are the
vectors w_{S_j}. Hence, after normalizing the vectors, Eq. (3) gives a singular value decomposition of
P^GL Diag(w) and so the group Lasso penalty can be expressed as a special case of our family of
norms:

||P^GL Diag(w)||_* = Σ_{S_j} ||w_{S_j}||_2 = ||w||_GL.
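This identity is easy to check numerically. The sketch below (our own illustration; the two-group partition is a hypothetical example) builds P^GL for a partition of five variables and compares the two sides:

```python
import numpy as np

def omega(P, w):
    """Omega_P(w) = ||P Diag(w)||_*."""
    return np.linalg.svd(P @ np.diag(w), compute_uv=False).sum()

# Hypothetical partition of {0, ..., 4} into two groups.
groups = [[0, 1], [2, 3, 4]]
p = 5
P_gl = np.zeros((p, p))
for g in groups:
    for i in g:
        for j in g:
            P_gl[i, j] = 1.0 / np.sqrt(len(g))   # 1/sqrt(|S_k|) within each group

rng = np.random.default_rng(1)
w = rng.standard_normal(p)
group_lasso = sum(np.linalg.norm(w[g]) for g in groups)
assert np.isclose(omega(P_gl, w), group_lasso)
```

Each column of P_gl has unit norm (a column in a group of size m has m entries equal to 1/sqrt(m)), so P_gl is a valid member of the family of Definition 1.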
In the following proposition, we show that our norm only depends on the value of P^T P. This is an
important property for the trace Lasso, where P = X, since it underlies the fact that this penalty
only depends on the correlation matrix X^T X of the covariates.
Proposition 2. Let P ∈ R^{k×p}, all of its columns having unit norm. We have

Ω_P(w) = ||(P^T P)^{1/2} Diag(w)||_*.
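Proposition 2 can be verified numerically; in the sketch below (our own illustration), the matrix square root of P^T P is formed from its eigendecomposition:

```python
import numpy as np

def nuclear(M):
    """Trace (nuclear) norm: sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(2)
k, p = 7, 4
P = rng.standard_normal((k, p))
P /= np.linalg.norm(P, axis=0)            # unit-norm columns, as in Definition 1
w = rng.standard_normal(p)

# (P^T P)^{1/2} from the eigendecomposition of the PSD matrix P^T P.
evals, U = np.linalg.eigh(P.T @ P)
root = U @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ U.T

assert np.isclose(nuclear(P @ np.diag(w)), nuclear(root @ np.diag(w)))
```

The equality holds because both matrices have the same Gram matrix Diag(w) P^T P Diag(w), hence the same singular values.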
We plot the unit ball of our norm for the following values of P^T P (see Figure 1):

[1 0.9 0.1; 0.9 1 0.1; 0.1 0.1 1],   [1 0.7 0.49; 0.7 1 0.7; 0.49 0.7 1],   [1 1 0; 1 1 0; 0 0 1]

(rows of each 3×3 matrix separated by semicolons).
We can lower bound and upper bound our norms by the ℓ2-norm and ℓ1-norm respectively. This
shows that, as for the elastic net, our norms interpolate between the ℓ1-norm and the ℓ2-norm. But
the main difference between the elastic net and our norms is the fact that our norms are adaptive,
and require a single regularization parameter to tune. In particular for the trace Lasso, when two
covariates are strongly correlated, it will be close to the ℓ2-norm, while when two covariates are
almost uncorrelated, it will behave like the ℓ1-norm. This is a behavior close to the one of the
pairwise elastic net [20].
Proposition 3. Let P ∈ R^{k×p}, all of its columns having unit norm. We have

||w||_2 ≤ Ω_P(w) ≤ ||w||_1.
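A small randomized check of these sandwich bounds (our own sketch; the 1e-10 tolerances only absorb floating-point error):

```python
import numpy as np

def omega(P, w):
    """Omega_P(w) = ||P Diag(w)||_*."""
    return np.linalg.svd(P @ np.diag(w), compute_uv=False).sum()

rng = np.random.default_rng(3)
for _ in range(100):
    P = rng.standard_normal((6, 5))
    P /= np.linalg.norm(P, axis=0)        # unit-norm columns
    w = rng.standard_normal(5)
    val = omega(P, w)
    assert np.linalg.norm(w) <= val + 1e-10    # lower bound: the l2-norm
    assert val <= np.abs(w).sum() + 1e-10      # upper bound: the l1-norm
```

The lower bound holds because the trace norm dominates the Frobenius norm and ||P Diag(w)||_F = ||w||_2 for unit-norm columns; the upper bound follows from subadditivity over the rank-one terms w_i P^(i) e_i^T.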
2.4 Dual norm
The dual norm is an important quantity for both optimization and theoretical analysis of the estimator. Unfortunately, we are not able in general to obtain a closed form expression of the dual norm for
the family of norms we just introduced. However we can obtain a bound, which is exact for some
special cases:
Proposition 4. The dual norm, defined by Ω*_P(u) = max_{Ω_P(v) ≤ 1} u^T v, can be bounded by:

Ω*_P(u) ≤ ||P Diag(u)||_op.
Proof. Using the fact that diag(P^T P) = 1, we have

u^T v = tr(Diag(u) P^T P Diag(v)) ≤ ||P Diag(u)||_op ||P Diag(v)||_*,

where the inequality comes from the fact that the operator norm ||·||_op is the dual norm of the trace
norm. The definition of the dual norm then gives the result.
As a corollary, we can bound the dual norm by a constant times the ℓ∞-norm:

Ω*_P(u) ≤ ||P Diag(u)||_op ≤ ||P||_op ||Diag(u)||_op = ||P||_op ||u||_∞.

Using Proposition 3, we also have the inequality Ω*_P(u) ≥ ||u||_∞.
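These two bounds can be probed numerically. In the sketch below (our own illustration), the dual norm is lower-bounded by evaluating u^T v over candidate directions v rescaled to Ω_P(v) = 1; the signed coordinate vectors are included because each has Ω_P = 1 and attains |u_i|:

```python
import numpy as np

def omega(P, w):
    """Omega_P(w) = ||P Diag(w)||_*."""
    return np.linalg.svd(P @ np.diag(w), compute_uv=False).sum()

rng = np.random.default_rng(4)
P = rng.standard_normal((6, 5))
P /= np.linalg.norm(P, axis=0)            # unit-norm columns
u = rng.standard_normal(5)

# Upper bound from Proposition 4: ||P Diag(u)||_op.
upper = np.linalg.svd(P @ np.diag(u), compute_uv=False).max()

# Lower bound on max_{Omega_P(v) <= 1} u^T v: random directions plus the
# signed coordinate vectors (each has Omega_P = 1 and attains |u_i|).
signs = np.where(u >= 0, 1.0, -1.0)
candidates = np.vstack([rng.standard_normal((2000, 5)), np.diag(signs)])
lower = max(u @ v / omega(P, v) for v in candidates)

assert np.abs(u).max() - 1e-9 <= lower <= upper + 1e-9
```

The sampled value sits between ||u||_∞ and ||P Diag(u)||_op, as the corollary predicts.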
3 Optimization algorithm
In this section, we introduce an algorithm to estimate the parameter vector w when the loss function
is equal to the square loss, ℓ(y, w^T x) = (1/2)(y − w^T x)², and the penalty is the trace Lasso. It is
straightforward to extend this algorithm to the family of norms indexed by P. The problem we
consider is thus

min_w (1/2) ||y − Xw||_2^2 + λ ||X Diag(w)||_*.
We could optimize this cost function by subgradient descent, but this is quite inefficient: computing
the subgradient of the trace Lasso is expensive and the rate of convergence of subgradient descent
is quite slow. Instead, we consider an iteratively reweighted least-squares method. First, we need to
introduce a well-known variational formulation for the trace norm [23]:
Proposition 5. Let M ∈ R^{n×p}. The trace norm of M is equal to

||M||_* = (1/2) inf_{S ⪰ 0} tr(M^T S^{−1} M) + tr(S),

and the infimum is attained for S = (M M^T)^{1/2}.
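Proposition 5 is easy to verify numerically for a generic square matrix (so that the optimal S is invertible); the sketch below is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((5, 5))           # square, almost surely invertible
nuc = np.linalg.svd(M, compute_uv=False).sum()

# Optimal S = (M M^T)^{1/2}, built from the eigendecomposition of M M^T.
evals, U = np.linalg.eigh(M @ M.T)
S = U @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ U.T

# Evaluate (1/2)(tr(M^T S^{-1} M) + tr(S)) at the optimal S.
objective = 0.5 * (np.trace(M.T @ np.linalg.solve(S, M)) + np.trace(S))
assert np.isclose(objective, nuc)
```

At the optimum, both tr(M^T S^{−1} M) and tr(S) equal the sum of the singular values of M, so the objective matches ||M||_*.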
Using this proposition, we can reformulate the previous optimization problem as

min_w inf_{S ⪰ 0} (1/2) ||y − Xw||_2^2 + (λ/2) w^T Diag(diag(X^T S^{−1} X)) w + (λ/2) tr(S).

This problem is jointly convex in (w, S) [24]. In order to optimize this objective function by alternating the minimization over w and S, we need to add a term (λ μ_i / 2) tr(S^{−1}). Otherwise, the infimum
over S could be attained at a non-invertible S, leading to a non-convergent algorithm. The infimum
over S is then attained for S = (X Diag(w)² X^T + μ_i I)^{1/2}.
Optimizing over w is a least-squares problem penalized by a reweighted ℓ2-norm equal to w^T D w,
where D = Diag(diag(X^T S^{−1} X)). It is equivalent to solving the linear system

(X^T X + λD) w = X^T y.

This can be done efficiently by using a conjugate gradient method. Since the cost of multiplying
(X^T X + λD) by a vector is O(np), solving the system has a complexity of O(knp), where k ≤ n + 1
is the number of iterations needed to converge (see Theorem 10.2.5 of [9]). Using warm restarts, k
can be even smaller than n, since the linear system we are solving does not change a lot from an
iteration to another. Below we summarize the algorithm:

ITERATIVE ALGORITHM FOR ESTIMATING w
Input: the design matrix X, the initial guess w_0, number of iterations N, sequence μ_i.
For i = 1...N:
- Compute the eigenvalue decomposition U Diag(s_k) U^T of X Diag(w_{i−1})² X^T.
- Set D = Diag(diag(X^T S^{−1} X)), where S^{−1} = U Diag(1/sqrt(s_k + μ_i)) U^T.
- Set w_i by solving the system (X^T X + λD) w = X^T y.

For the sequence μ_i, we use a decreasing sequence converging to ten times the machine precision.
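A minimal NumPy sketch of this algorithm (our own illustration, not the authors' code): it uses a fixed geometric sequence for μ_i and a direct linear solve in place of conjugate gradient, which is fine at toy scale. With two identical predictors, the solution ties their coefficients together, as expected from the strong convexity added along correlated directions:

```python
import numpy as np

def trace_lasso_irls(X, y, lam, n_iter=50):
    """Sketch of the iterative algorithm for
    min_w 0.5 * ||y - X w||_2^2 + lam * ||X Diag(w)||_*."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]        # initial guess
    for mu in np.logspace(0, -8, n_iter):           # decreasing sequence mu_i
        s, U = np.linalg.eigh(X @ np.diag(w**2) @ X.T)
        S_inv = U @ np.diag(1.0 / np.sqrt(np.clip(s, 0.0, None) + mu)) @ U.T
        D = np.diag(np.diag(X.T @ S_inv @ X))
        w = np.linalg.solve(X.T @ X + lam * D, X.T @ y)   # direct solve, not CG
    return w

rng = np.random.default_rng(6)
x1, x3 = rng.standard_normal(50), rng.standard_normal(50)
X = np.column_stack([x1, x1, x3])                   # columns 0 and 1 identical
X /= np.linalg.norm(X, axis=0)
y = X @ np.array([1.0, 1.0, 0.0]) + 0.01 * rng.standard_normal(50)

w_hat = trace_lasso_irls(X, y, lam=0.1)
assert np.isclose(w_hat[0], w_hat[1], atol=1e-6)    # correlated coefficients tied
```

With exactly duplicated columns, subtracting the two corresponding rows of the linear system gives λ d (w_1 − w_2) = 0 with d > 0, so the two coefficients coincide at every iteration.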
3.1 Choice of λ
We now give a method to choose the initial parameter λ of the regularization path. In fact, we know
that the vector 0 is a solution if and only if λ ≥ Ω*(X^T y) [25]. Thus, we need to start the path at
λ = Ω*(X^T y), corresponding to the empty solution 0, and then decrease λ. Using the inequalities
on the dual norm we obtained in the previous section, we get

||X^T y||_∞ ≤ Ω*(X^T y) ≤ ||X||_op ||X^T y||_∞.

Therefore, starting the path at λ = ||X||_op ||X^T y||_∞ is a good choice.
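This choice can be sanity-checked: since λ = ||X||_op ||X^T y||_∞ upper-bounds Ω*(X^T y), the zero vector must be a global minimizer of the penalized objective. The sketch below (our own illustration) verifies this on random perturbations:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((30, 10))
X /= np.linalg.norm(X, axis=0)            # unit-norm columns
y = rng.standard_normal(30)

# Start of the path: lam_max upper-bounds the dual norm of X^T y,
# so w = 0 must be a global minimizer of the penalized objective.
lam_max = np.linalg.norm(X, 2) * np.abs(X.T @ y).max()

def obj(w):
    penalty = np.linalg.svd(X @ np.diag(w), compute_uv=False).sum()
    return 0.5 * np.sum((y - X @ w) ** 2) + lam_max * penalty

base = obj(np.zeros(10))
for _ in range(200):
    assert obj(1e-3 * rng.standard_normal(10)) >= base - 1e-12
```

By convexity, checking small perturbations around zero is only a numerical illustration, not a proof; the guarantee comes from the dual-norm condition above.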
4 Approximation around the Lasso
We recall that when P = I ∈ R^{p×p}, our norm is equal to the ℓ1-norm, and we want to understand
its behavior when P departs from the identity. Thus, we compute a second order approximation of
our norm around the Lasso: we add a small perturbation Δ ∈ S_p to the identity matrix, and using
Prop. 6 of [22], we obtain the following second order approximation:

||(I + Δ) Diag(w)||_* = ||w||_1 + diag(Δ)^T |w|
  + Σ_{|w_i|>0} Σ_{|w_j|>0} (Δ_{ji}|w_i| − Δ_{ij}|w_j|)² / (4(|w_i| + |w_j|))
  + Σ_{|w_i|=0} Σ_{|w_j|>0} (Δ_{ij}|w_j|)² / (2|w_j|) + o(||Δ||²).

We can rewrite this approximation as

||(I + Δ) Diag(w)||_* = ||w||_1 + diag(Δ)^T |w| + Σ_{i,j} Δ_{ij}² (|w_i| − |w_j|)² / (4(|w_i| + |w_j|)) + o(||Δ||²),

using a slight abuse of notation, considering that the last term is equal to 0 when w_i = w_j = 0. The
second order term is quite interesting: it shows that when two covariates are correlated, the effect of
the trace Lasso is to shrink the corresponding coefficients toward each other. We also note that this
term is very similar to pairwise elastic net penalties, which are of the form |w|^T P |w|, where P_{ij} is
a decreasing function of Δ_{ij}.
5 Experiments
In this section, we perform experiments on synthetic data to illustrate the behavior of the trace Lasso
and other classical penalties when there are highly correlated covariates in the design matrix. The
support S of w is equal to {1, ..., k}, where k is the size of the support. For i in the support of
w, w_i is independently drawn from a uniform distribution over [−1, 1]. The observations x_i are
drawn from a multivariate Gaussian with mean 0 and covariance matrix Σ. For the first setting, Σ
is set to the identity; for the second setting, Σ is block diagonal with blocks equal to 0.2 I + 0.8·11^T,
corresponding to clusters of four variables; finally, for the third setting, we set Σ_{ij} = 0.95^{|i−j|},
corresponding to a Toeplitz design. For each method, we choose the best λ. We perform a first
series of experiments (p = 1024, n = 256) for which we report the estimation error. For the second
series of experiments (p = 512, n = 128), we report the Hamming distance between the estimated
support and the true support.
In all six graphs of Figure 2, we observe behaviors that are typical of the Lasso, ridge and elastic net:
the Lasso performs very well on very sparse models but its performance degrades for denser models.
The elastic net performs better than the Lasso in settings where there are strongly correlated covariates, thanks to its strongly convex ℓ2 term. In setting 1, since the variables are uncorrelated, there
is no reason to couple their selection. This suggests that the Lasso should be the most appropriate
convex regularization. The trace Lasso approaches the Lasso when n is much larger than p, but the
weak coupling induced by empirical correlations is sufficient to slightly decrease its performance
compared to that of the Lasso. By contrast, in settings 2 and 3, the trace Lasso outperforms other
methods (including the pairwise elastic net) since variables that should be selected together are indeed correlated. As for the pairwise elastic net, since it takes into account the correlations between
variables, it is not surprising that in experiments 2 and 3 it performs better than methods that do not.
We do not have a compelling explanation for its superior performance in experiment 1.
Figure 2: Left: estimation error (p = 1024, n = 256), right: support recovery (p = 512, n = 128).
(Best seen in color. e-net stands for elastic net, pen stands for pairwise elastic net and trace
stands for trace Lasso. Error bars are obtained over 20 runs.)
6 Conclusion
We introduce a new penalty function, the trace Lasso, which takes advantage of the correlations
between covariates to add strong convexity exactly in the directions where it is needed, unlike the elastic
net, for example, which blindly adds a squared ℓ2-norm term in every direction. We show on
synthetic data that this adaptive behavior leads to better estimation performance. In the future, we
want to show that if a dedicated norm using prior knowledge, such as the group Lasso, can be used,
the trace Lasso will behave similarly and its performance will not degrade too much, providing
theoretical guarantees for such adaptivity. Finally, we will seek applications of this estimator in
inverse problems such as deblurring, where the design matrix exhibits strong correlation structure.
Acknowledgments
Guillaume Obozinski and Francis Bach are supported in part by the European Research Council
(SIERRA ERC-239993).
References
[1] S.G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[2] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. Advances in Neural Information Processing Systems, 22, 2008.
[3] M.W. Seeger. Bayesian inference and optimal design for the sparse linear model. The Journal of Machine Learning Research, 9:759–813, 2008.
[4] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[5] S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1999.
[6] P.J. Bickel, Y. Ritov, and A.B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[7] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
[8] M.J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
[9] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[10] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 67(2):301–320, 2005.
[11] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1):49–67, 2006.
[12] F. Bach. Consistency of the group Lasso and multiple kernel learning. The Journal of Machine Learning Research, 9:1179–1225, 2008.
[13] F. Bach. Bolasso: model consistent Lasso estimation through the bootstrap. In Proceedings of the 25th International Conference on Machine Learning, pages 33–40. ACM, 2008.
[14] H. Liu, K. Roeder, and L. Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. Advances in Neural Information Processing Systems, 23, 2010.
[15] N. Meinshausen and P. Bühlmann. Stability selection. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 72(4):417–473, 2010.
[16] A. Tikhonov. Solution of incorrectly formulated problems and the regularization method. In Soviet Math. Dokl., volume 5, page 1035, 1963.
[17] A.E. Hoerl and R.W. Kennard. Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
[18] G. Davis, S. Mallat, and M. Avellaneda. Adaptive greedy approximations. Constructive Approximation, 13(1):57–98, 1997.
[19] E.J. Candès and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
[20] A. Lorbert, D. Eis, V. Kostina, D.M. Blei, and P.J. Ramadge. Exploiting covariate similarity in sparse regression via the pairwise elastic net. JMLR: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 9:477–484, 2010.
[21] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. 2001.
[22] E. Grave, G. Obozinski, and F. Bach. Trace Lasso: a trace norm regularization for correlated designs. Technical report, arXiv:1109.1990, 2011.
[23] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. Advances in Neural Information Processing Systems, 19:41, 2007.
[24] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[25] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S.J. Wright, editors, Optimization for Machine Learning, 2011.
Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint
Bharath K. Sriperumbudur, Gatsby Unit, University College London ([email protected])
Kenji Fukumizu, The Institute of Statistical Mathematics, Tokyo ([email protected])
Gert R. G. Lanckriet, Dept. of ECE, UC San Diego ([email protected])
Abstract
The goal of this paper is to investigate the advantages and disadvantages of learning in Banach spaces over Hilbert spaces. While many works have been carried
out in generalizing Hilbert methods to Banach spaces, in this paper, we consider
the simple problem of learning a Parzen window classifier in a reproducing kernel
Banach space (RKBS), which is closely related to the notion of embedding probability measures into an RKBS, in order to carefully understand its pros and cons
over the Hilbert space classifier. We show that while this generalization yields
richer distance measures on probabilities compared to its Hilbert space counterpart, it suffers from a serious computational drawback limiting its practical applicability, which therefore demonstrates the need for developing efficient
learning algorithms in Banach spaces.
1 Introduction
Kernel methods have been popular in machine learning and pattern analysis for their superior performance on a wide spectrum of learning tasks. They are broadly established as an easy way to
construct nonlinear algorithms from linear ones, by embedding data points into reproducing kernel
Hilbert spaces (RKHSs) [1, 14, 15]. Over the last few years, generalization of these techniques to
Banach spaces has gained interest. This is because any two Hilbert spaces over a common scalar
field with the same dimension are isometrically isomorphic while Banach spaces provide more variety in geometric structures and norms that are potentially useful for learning and approximation.
To sample the literature, classification in Banach spaces, and more generally in metric spaces, was studied in [3, 22, 11, 5]. Minimizing a loss function subject to a regularization condition on a norm
in a Banach space was studied by [3, 13, 24, 21] and online learning in Banach spaces was considered in [17]. While all these works have focused on theoretical generalizations of Hilbert space
methods to Banach spaces, the practical viability and inherent computational issues associated with
the Banach space methods has so far not been highlighted. The goal of this paper is to study the
advantages/disadvantages of learning in Banach spaces in comparison to Hilbert space methods, in
particular, from the point of view of embedding probability measures into these spaces.
The concept of embedding probability measures into RKHS [4, 6, 9, 16] provides a powerful and
straightforward method to deal with high-order statistics of random variables. An immediate application of this notion is to problems of comparing distributions based on finite samples: examples
include tests of homogeneity [9], independence [10], and conditional independence [7]. Formally,
suppose we are given the set P(X ) of all Borel probability measures defined on the topological
space X , and the RKHS (H, k) of functions on X with k as its reproducing kernel (r.k.). If k is
measurable and bounded, then we can embed P in H as
$$P \mapsto \int_{\mathcal{X}} k(\cdot, x)\, dP(x). \qquad (1)$$
Given the embedding in (1), the RKHS distance between the embeddings of P and Q defines a pseudo-metric between P and Q as
$$\gamma_k(P, Q) := \left\| \int_{\mathcal{X}} k(\cdot, x)\, dP(x) - \int_{\mathcal{X}} k(\cdot, x)\, dQ(x) \right\|_{\mathcal{H}}. \qquad (2)$$
It is clear that when the embedding in (1) is injective, P and Q can be distinguished based on their embeddings ∫_X k(·, x) dP(x) and ∫_X k(·, x) dQ(x). [18] related RKHS embeddings to the problem of binary classification by showing that γ_k(P, Q) is the negative of the optimal risk associated with the Parzen window classifier in H. Extending this classifier to Banach spaces and studying the highlights/issues associated with this generalization will throw light on the same for more complex Banach space learning algorithms. With this motivation, in this paper, we consider the generalization of the notion of RKHS embedding of probability measures to Banach spaces, in particular reproducing kernel Banach spaces (RKBSs) [24], and then compare the properties of the RKBS embedding to its RKHS counterpart.
To derive RKHS based learning algorithms, it is essential to appeal to the Riesz representation
theorem (as an RKHS is defined by the continuity of evaluation functionals), which establishes the
existence of a reproducing kernel. This theorem hinges on the fact that a notion of inner product can
be defined on Hilbert spaces. In this paper, as in [24], we deal with RKBSs that are uniformly Fréchet differentiable and uniformly convex (called s.i.p. RKBS), as many Hilbert space arguments, most importantly the Riesz representation theorem, can be carried over to such spaces through the notion of the semi-inner-product (s.i.p.) [12], which is a more general structure than an inner product.
on Zhang et al. [24], who recently developed RKBS counterparts of RKHS based algorithms like
regularization networks, support vector machines, kernel principal component analysis, etc., we
provide a review of s.i.p. RKBS in Section 3. We present our main contributions in Sections 4 and
5. In Section 4, first, we derive an RKBS embedding of P into B* as
$$P \mapsto \int_{\mathcal{X}} K(\cdot, x)\, dP(x), \qquad (3)$$
where B is an s.i.p. RKBS with K as its reproducing kernel (r.k.) and B* is the topological dual of B. Note that (3) is similar to (1), but more general than (1) as K in (3) need not be positive definite (pd), in fact, not even symmetric (see Section 3; also see Examples 2 and 3). Based on (3), we define
$$\gamma_K(P, Q) := \left\| \int_{\mathcal{X}} K(\cdot, x)\, dP(x) - \int_{\mathcal{X}} K(\cdot, x)\, dQ(x) \right\|_{\mathcal{B}^*},$$
a pseudo-metric on P(X ), which we show to be the negative of the optimal risk associated with the
Parzen window classifier in B*. Second, we characterize the injectivity of (3) in Section 4.1, wherein we show that the characterizations obtained for the injectivity of (3) are similar to those obtained for (1) and coincide with the latter when B is an RKHS. Third, in Section 4.2, we consider the empirical estimation of γ_K(P, Q) based on finite random samples drawn i.i.d. from P and Q and study its consistency and the rate of convergence. This is useful in applications like two-sample tests (also in binary classification, as it relates to the consistency of the Parzen window classifier) where different P and Q are to be distinguished based on the finite samples drawn from them, and it is important that the estimator is consistent for the test to be meaningful. We show that the consistency and the rate of convergence of the estimator depend on the Rademacher type of B*. This result coincides with the one obtained for γ_k when B is an RKHS.
The above mentioned results, while similar to results obtained for RKHS embeddings, are significantly more general, as they apply to RKBSs, which subsume RKHSs. We can therefore expect to obtain "richer" metrics γ_K than when being restricted to RKHSs (see Examples 1-3). On the other hand, one disadvantage of the RKBS framework is that γ_K(P, Q) cannot be computed in a closed form unlike γ_k (see Section 4.3). Though this could seriously limit the practical impact of the RKBS embeddings, in Section 5, we show that closed form expressions for γ_K and its empirical estimator can be obtained for some non-trivial Banach spaces (see Examples 1-3). However, the critical drawback of the RKBS framework is that the computation of γ_K and its empirical estimator is significantly more involved and expensive than in the RKHS framework, which means a simple kernel algorithm like a Parzen window classifier, when generalized to Banach spaces, suffers from a serious computational drawback, thereby limiting its practical impact. Given the advantages of learning in Banach space over Hilbert space, this work therefore demonstrates the need for the
development of efficient algorithms in Banach spaces in order to make the problem of learning in
Banach spaces worthwhile compared to its Hilbert space counterpart. The proofs of the results in
Sections 4 and 5 are provided in the supplementary material.
2 Notation
We introduce some notation that is used throughout the paper. For a topological space X, C(X) (resp. C_b(X)) denotes the space of all continuous (resp. bounded continuous) functions on X. For a locally compact Hausdorff space X, f ∈ C(X) is said to vanish at infinity if for every ε > 0 the set {x : |f(x)| ≥ ε} is compact. The class of all continuous f on X which vanish at infinity is denoted as C_0(X). For a Borel measure μ on X, L^p(X, μ) denotes the Banach space of p-power (p ≥ 1) μ-integrable functions. For a function f defined on R^d, f̂ and f^∨ denote the Fourier and inverse Fourier transforms of f. Since f̂ and f^∨ on R^d can be defined in L^1, L^2 or more generally in distributional senses, they should be treated in the appropriate sense depending on the context. In the L^1 sense, the Fourier and inverse Fourier transforms of f ∈ L^1(R^d) are defined as:
$$\hat{f}(y) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} f(x)\, e^{-i\langle y, x\rangle}\, dx \quad \text{and} \quad f^{\vee}(y) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} f(x)\, e^{i\langle y, x\rangle}\, dx,$$
where i denotes the imaginary unit √(−1). φ_P := ∫_{R^d} e^{i⟨·, x⟩} dP(x) denotes the characteristic function of P.
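As a quick numerical illustration of the last definition, the sketch below (ours, not part of the paper) compares the empirical characteristic function of a sample from N(0, 1) with its closed form φ_P(t) = exp(−t²/2); all names and parameter choices here are our own.

```python
import numpy as np

# Sketch (ours): the characteristic function phi_P(t) = E[e^{itX}] of
# P = N(0, 1) has the closed form exp(-t^2/2); the empirical version
# (1/m) * sum_j e^{i t X_j} converges to it at roughly O(1/sqrt(m)).
rng = np.random.default_rng(3)
X = rng.standard_normal(100_000)

t = 1.0
phi_emp = np.mean(np.exp(1j * t * X))
phi_true = np.exp(-t**2 / 2)
print(abs(phi_emp - phi_true))  # small: roughly O(1/sqrt(m))
```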
3 Preliminaries: Reproducing Kernel Banach Spaces
In this section, we briefly review the theory of RKBSs, which was recently studied by [24] in the
context of learning in Banach spaces. Let X be a prescribed input space.
Definition 1 (Reproducing kernel Banach space). An RKBS B on X is a reflexive Banach space of functions on X such that its topological dual B* is isometric to a Banach space of functions on X and the point evaluations are continuous linear functionals on both B and B*.
Note that if B is a Hilbert space, then the above definition of RKBS coincides with that of an RKHS. Let (·, ·)_B be a bilinear form on B × B* wherein (f, g*)_B := g*(f), f ∈ B, g* ∈ B*. Theorem 2 in [24] shows that if B is an RKBS on X, then there exists a unique function K : X × X → C, called the reproducing kernel (r.k.) of B, such that the following hold:
(a1) K(x, ·) ∈ B, K(·, x) ∈ B*, x ∈ X,
(a2) f(x) = (f, K(·, x))_B, f*(x) = (K(x, ·), f*)_B, f ∈ B, f* ∈ B*, x ∈ X.
Note that K satisfies K(x, y) = (K(x, ·), K(·, y))_B and therefore K(·, x) and K(x, ·) are reproducing kernels for B and B* respectively. When B is an RKHS, K is indeed the r.k. in the usual sense.
Though an RKBS has exactly one r.k., different RKBSs may have the same r.k. (see Example 1), unlike an RKHS, where no two RKHSs can have the same r.k. (by the Moore-Aronszajn theorem [4]). Due to the lack of an inner product in B (unlike in an RKHS), it can be shown that the r.k. for a general RKBS can be any arbitrary function on X × X for a finite set X [24]. In order to have a substitute for inner products in the Banach space setting, [24] considered RKBSs B that are uniformly Fréchet differentiable and uniformly convex (referred to as s.i.p. RKBS), as this allows Hilbert space arguments to be carried over to B (most importantly, an analogue of the Riesz representation theorem holds; see Theorem 3) through the notion of the semi-inner-product (s.i.p.) introduced by [12]. In the following, we first present results related to general s.i.p. spaces and then consider s.i.p. RKBS.
Definition 2 (S.i.p. space). A Banach space B is said to be uniformly Fréchet differentiable if for all f, g ∈ B,
$$\lim_{t \in \mathbb{R},\, t \to 0} \frac{\|f + tg\|_{\mathcal{B}} - \|f\|_{\mathcal{B}}}{t}$$
exists and the limit is approached uniformly for f, g in the unit sphere of B. B is said to be uniformly convex if for all ε > 0, there exists a δ > 0 such that ‖f + g‖_B ≤ 2 − δ for all f, g ∈ B with ‖f‖_B = ‖g‖_B = 1 and ‖f − g‖_B ≥ ε. B is called an s.i.p. space if it is both uniformly Fréchet differentiable and uniformly convex.
Note that uniform Fréchet differentiability and uniform convexity are properties of the norm associated with B. [8, Theorem 3] has shown that if B is an s.i.p. space, then there exists a unique function [·, ·]_B : B × B → C, called the semi-inner-product, such that for all f, g, h ∈ B and λ ∈ C:
(a3) [f + g, h]_B = [f, h]_B + [g, h]_B,
(a4) [λf, g]_B = λ[f, g]_B, [f, λg]_B = λ̄[f, g]_B,
(a5) [f, f]_B =: ‖f‖_B² > 0 for f ≠ 0,
(a6) (Cauchy-Schwartz) |[f, g]_B|² ≤ ‖f‖_B² ‖g‖_B²,
and
$$\lim_{t \in \mathbb{R},\, t \to 0} \frac{\|f + tg\|_{\mathcal{B}} - \|f\|_{\mathcal{B}}}{t} = \frac{\mathrm{Re}([g, f]_{\mathcal{B}})}{\|f\|_{\mathcal{B}}}, \quad f, g \in \mathcal{B},\ f \neq 0,$$
where Re(α) and ᾱ represent the real part and complex conjugate of a complex number α. Note that an s.i.p. in general does not satisfy conjugate symmetry ([f, g]_B = \overline{[g, f]_B} for all f, g ∈ B) and therefore is not linear in the second argument, unless B is a Hilbert space, in which case the s.i.p. coincides with the inner product.
Suppose B is an s.i.p. space. Then for each h ∈ B, f ↦ [f, h]_B defines a continuous linear functional on B, which can be identified with a unique element h* ∈ B*, called the dual function of h. By this definition of h*, we have h*(f) = (f, h*)_B = [f, h]_B, f, h ∈ B. Using the structure of the s.i.p., [8, Theorem 6] provided the following analogue in B to the Riesz representation theorem of Hilbert spaces.
Theorem 3 ([8]). Suppose B is an s.i.p. space. Then
(a7) (Riesz representation theorem) For each g ∈ B*, there exists a unique h ∈ B such that g = h*, i.e., g(f) = [f, h]_B, f ∈ B, and ‖g‖_{B*} = ‖h‖_B.
(a8) B* is an s.i.p. space with respect to the s.i.p. defined by [h*, f*]_{B*} := [f, h]_B, f, h ∈ B, and ‖h*‖_{B*} := [h*, h*]_{B*}^{1/2}.
For more details on s.i.p. spaces, we refer the reader to [8]. A concrete example of an s.i.p. space is as follows, which will prove to be useful in Section 5. Let (X, A, μ) be a measure space and B := L^p(X, μ) for some p ∈ (1, +∞). It is an s.i.p. space with dual B* := L^q(X, μ) where q = p/(p − 1). For each f ∈ B, its dual element in B* is f* = f̄|f|^{p−2}/‖f‖_{L^p(X,μ)}^{p−2}. Consequently, the semi-inner-product on B is
$$[f, g]_{\mathcal{B}} = g^*(f) = \frac{\int_{\mathcal{X}} f\, \bar{g}\, |g|^{p-2}\, d\mu}{\|g\|_{L^p(\mathcal{X},\mu)}^{p-2}}. \qquad (4)$$
Having introduced s.i.p. spaces, we now discuss the s.i.p. RKBS studied by [24]. Using the Riesz representation for s.i.p. spaces (see (a7)), Theorem 9 in [24] shows that if B is an s.i.p. RKBS, then there exists a unique r.k. K : X × X → C and an s.i.p. kernel G : X × X → C such that:
(a9) G(x, ·) ∈ B for all x ∈ X, K(·, x) = (G(x, ·))*, x ∈ X,
(a10) f(x) = [f, G(x, ·)]_B, f*(x) = [K(x, ·), f]_B for all f ∈ B, x ∈ X.
It is clear that G(x, y) = [G(x, ·), G(y, ·)]_B, x, y ∈ X. Since s.i.p.s in general do not satisfy conjugate symmetry, G need not be Hermitian nor pd [24, Section 4.3]. The r.k. K and the s.i.p. kernel G coincide when span{G(x, ·) : x ∈ X} is dense in B, which is the case when B is an RKHS [24, Theorems 2, 10 and 11]. This means when B is an RKHS, the conditions (a9) and (a10) reduce to the well-known reproducing properties of an RKHS, with the s.i.p. reducing to an inner product.
4 RKBS Embedding of Probability Measures
In this section, we present our main contributions of deriving and analyzing the RKBS embedding of probability measures, which generalizes the theory of RKHS embeddings. First, we would like to remind the reader that the RKHS embedding in (1) can be derived by choosing F = {f : ‖f‖_H ≤ 1} in
$$\gamma_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| \int_{\mathcal{X}} f\, dP - \int_{\mathcal{X}} f\, dQ \right|.$$
See [19, 20] for details. Similar to the RKHS case, in Theorem 4, we show that the RKBS embeddings can be obtained by choosing F = {f : ‖f‖_B ≤ 1} in γ_F(P, Q). Interestingly, though B does not have an inner product, it can be seen that the structure of a semi-inner-product is sufficient to generate an embedding similar to (1).
Theorem 4. Let B be an s.i.p. RKBS defined on a measurable space X with G as the s.i.p. kernel and K as the reproducing kernel, with both G and K being measurable. Let F = {f : ‖f‖_B ≤ 1} and G be bounded. Then
$$\gamma_K(P, Q) := \gamma_{\mathcal{F}}(P, Q) = \left\| \int_{\mathcal{X}} K(\cdot, x)\, dP(x) - \int_{\mathcal{X}} K(\cdot, x)\, dQ(x) \right\|_{\mathcal{B}^*}. \qquad (5)$$
Based on Theorem 4, it is clear that P can be seen as being embedded into B* as P ↦ ∫_X K(·, x) dP(x), and γ_K(P, Q) is the distance between the embeddings of P and Q. Therefore, we arrive at an embedding which looks similar to (1) and coincides with (1) when B is an RKHS.
Given these embeddings, two questions need to be answered for them to be practically useful: (i) When is the embedding injective? and (ii) Can γ_K(P, Q) in (5) be estimated consistently and computed efficiently from finite random samples drawn i.i.d. from P and Q? The significance of (i) is that if (3) is injective, then such an embedding can be used to differentiate between different P and Q, which can then be used in applications like two-sample tests to differentiate between P and Q based on samples drawn i.i.d. from them, provided the answer to (ii) is affirmative. These questions are answered in the following sections.
Before that, we show how these questions are important in binary classification. Following [18], it can be shown that γ_K is the negative of the optimal risk associated with a Parzen window classifier in B* that separates the class-conditional distributions P and Q (refer to the supplementary material for details). This means that if (3) is not injective, then the maximum risk is attained for some P ≠ Q, i.e., distinct distributions are not classifiable. Therefore, the injectivity of (3) is of primal importance in applications. In addition, the question in (ii) is critical as well, as it relates to the consistency of the Parzen window classifier.
4.1 When is (3) injective?
The following result provides various characterizations for the injectivity of (3), which are similar to (but more general than) those obtained for the injectivity of (1) and coincide with the latter when B is an RKHS.
Theorem 5 (Injectivity of γ_K). Suppose B is an s.i.p. RKBS defined on a topological space X with K and G as its r.k. and s.i.p. kernel respectively. Then the following hold:
(a) Let X be a Polish space that is also locally compact Hausdorff. Suppose G is bounded and K(x, ·) ∈ C_0(X) for all x ∈ X. Then (3) is injective if B is dense in C_0(X).
(b) Suppose the conditions in (a) hold. Then (3) is injective if B is dense in L^p(X, μ) for any Borel probability measure μ on X and some p ∈ [1, ∞).
Since it is not easy to check the denseness of B in C_0(X) or L^p(X, μ), in Theorem 6 we present an easily checkable characterization of the injectivity of (3) when K is bounded, continuous and translation invariant on R^d. Note that Theorem 6 generalizes the characterization (see [19, 20]) for the injectivity of the RKHS embedding in (1).
Theorem 6 (Injectivity of γ_K for translation invariant K). Let X = R^d. Suppose K(x, y) = ψ(x − y), where ψ : R^d → R is of the form
$$\psi(x) = \int_{\mathbb{R}^d} e^{i\langle x, \omega\rangle}\, d\Lambda(\omega)$$
and Λ is a finite complex-valued Borel measure on R^d. Then (3) is injective if supp(Λ) = R^d. In addition, if K is symmetric, then the converse holds.
Remark 7. If ψ in Theorem 6 is a real-valued pd function, then by Bochner's theorem, Λ has to be real, nonnegative and symmetric, i.e., Λ(dω) = Λ(−dω). Since ψ need not be a pd function for K to be a real, symmetric r.k. of B, Λ need not be nonnegative. More generally, if ψ is a real-valued function on R^d, then Λ is conjugate symmetric, i.e., Λ(dω) = \overline{Λ(−dω)}. An example of a translation invariant, real and symmetric (but not pd) r.k. that satisfies the conditions of Theorem 6 can be obtained with ψ(x) = (4x⁶ + 9x⁴ − 18x² + 15) exp(−x²). See Example 3 for more details.
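Remark 7's example kernel can be checked numerically: if ψ were pd, Bochner's theorem would force its Fourier transform to be nonnegative. The sketch below (ours; any positive overall scale is dropped, since it does not affect signs) evaluates the cosine transform of ψ on a grid and finds a negative value, confirming that ψ is not pd.

```python
import numpy as np

# Numerical check (ours) that psi(x) = (4x^6 + 9x^4 - 18x^2 + 15) exp(-x^2)
# is not positive definite: by Bochner's theorem, a pd psi would have a
# nonnegative Fourier transform, but psi_hat dips below zero near w ~ 1.77.
def psi(x):
    return (4*x**6 + 9*x**4 - 18*x**2 + 15) * np.exp(-x**2)

dx = 1e-3
x = np.arange(-10.0, 10.0, dx)

def psi_hat(w):
    # psi is even, so its Fourier transform is the real cosine transform
    return dx * np.sum(psi(x) * np.cos(w * x))

print(psi_hat(0.0))    # positive
print(psi_hat(1.77))   # negative: psi is not pd
```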
4.2 Consistency Analysis
Consider a two-sample test, wherein given two sets of random samples, {X_j}_{j=1}^m and {Y_j}_{j=1}^n, drawn i.i.d. from distributions P and Q respectively, it is required to test whether P = Q or not. Given a metric γ_K on P(X), the problem can equivalently be posed as testing for γ_K(P, Q) = 0 or not, based on {X_j}_{j=1}^m and {Y_j}_{j=1}^n, in which case γ_K(P, Q) is estimated based on these random samples. For the test to be meaningful, it is important that this estimate of γ_K is consistent. [9] showed that γ_K(P_m, Q_n) is a consistent estimator of γ_K(P, Q) when B is an RKHS, where P_m := (1/m) Σ_{j=1}^m δ_{X_j}, Q_n := (1/n) Σ_{j=1}^n δ_{Y_j}, and δ_x represents the Dirac measure at x ∈ X. Theorem 9 generalizes the consistency result in [9] by showing that γ_K(P_m, Q_n) is a consistent estimator of γ_K(P, Q) and the rate of convergence is O(m^{(1−t)/t} + n^{(1−t)/t}) if B* is of type t, 1 < t ≤ 2. Before we present the result, we define the type of a Banach space B [2, p. 303].
Definition 8 (Rademacher type of B). Let 1 ≤ t ≤ 2. A Banach space B is said to be of t-Rademacher type (or, more shortly, of type t) if there exists a constant C* such that for any N ≥ 1 and any {f_j}_{j=1}^N ⊂ B:
$$\Big( \mathbb{E}\Big\| \sum_{j=1}^{N} \varepsilon_j f_j \Big\|_{\mathcal{B}}^{t} \Big)^{1/t} \le C^* \Big( \sum_{j=1}^{N} \|f_j\|_{\mathcal{B}}^{t} \Big)^{1/t},$$
where {ε_j}_{j=1}^N are i.i.d. Rademacher (symmetric ±1-valued) random variables.
Clearly, every Banach space is of type 1. Since having type t' for t' > t implies having type t, let us define t*(B) := sup{t : B has type t}.
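In a Hilbert space, the type-2 inequality of Definition 8 holds with C* = 1 and with equality, since Rademacher variables are uncorrelated. The sketch below (ours) verifies this exactly in ℓ² by enumerating all 2^N sign patterns.

```python
import itertools
import numpy as np

# Sketch (ours): in a Hilbert space (here R^6 with the Euclidean norm),
# E||sum_j eps_j f_j||^2 = sum_j ||f_j||^2 for Rademacher eps_j, so the
# type-2 inequality of Definition 8 holds with C* = 1 and equality.
rng = np.random.default_rng(1)
fs = rng.standard_normal((4, 6))           # N = 4 vectors f_j in R^6

# exact expectation over all 2^N Rademacher sign patterns
lhs = np.mean([np.linalg.norm((np.array(eps)[:, None] * fs).sum(axis=0))**2
               for eps in itertools.product([-1, 1], repeat=4)])
rhs = (np.linalg.norm(fs, axis=1)**2).sum()
print(lhs, rhs)                            # equal
```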
Theorem 9 (Consistency of γ_K(P_m, Q_n)). Let B be an s.i.p. RKBS. Assume ν := sup{√(G(x, x)) : x ∈ X} < ∞. Fix δ ∈ (0, 1). Then with probability 1 − δ over the choice of samples {X_j}_{j=1}^m drawn i.i.d. from P and {Y_j}_{j=1}^n drawn i.i.d. from Q, we have
$$|\gamma_K(P_m, Q_n) - \gamma_K(P, Q)| \le 2 C^* \nu \left( m^{\frac{1-t}{t}} + n^{\frac{1-t}{t}} \right) + \sqrt{18 \nu^2 \log(4/\delta)} \left( m^{-\frac{1}{2}} + n^{-\frac{1}{2}} \right),$$
where t = t*(B*) and C* is some universal constant.
It is clear from Theorem 9 that if t*(B*) ∈ (1, 2], then γ_K(P_m, Q_n) is a consistent estimator of γ_K(P, Q). In addition, the best rate is obtained if t*(B*) = 2, which is the case if B is an RKHS. In Section 5, we will provide examples of s.i.p. RKBSs that satisfy t*(B*) = 2.
4.3 Computation of γ_K(P, Q)
We now consider the problem of computing γ_K(P, Q) and γ_K(P_m, Q_n). Define λ_P := ∫_X K(·, x) dP(x). Consider
$$\gamma_K^2(P, Q) = \|\lambda_P - \lambda_Q\|_{\mathcal{B}^*}^2 \overset{(a_5)}{=} [\lambda_P - \lambda_Q, \lambda_P - \lambda_Q]_{\mathcal{B}^*} \overset{(a_3)}{=} [\lambda_P, \lambda_P - \lambda_Q]_{\mathcal{B}^*} - [\lambda_Q, \lambda_P - \lambda_Q]_{\mathcal{B}^*}$$
$$= \Big[\int_{\mathcal{X}} K(\cdot, x)\, dP(x),\ \lambda_P - \lambda_Q\Big]_{\mathcal{B}^*} - \Big[\int_{\mathcal{X}} K(\cdot, x)\, dQ(x),\ \lambda_P - \lambda_Q\Big]_{\mathcal{B}^*}$$
$$\overset{(\dagger)}{=} \int_{\mathcal{X}} [K(\cdot, x), \lambda_P - \lambda_Q]_{\mathcal{B}^*}\, dP(x) - \int_{\mathcal{X}} [K(\cdot, x), \lambda_P - \lambda_Q]_{\mathcal{B}^*}\, dQ(x)$$
$$= \int_{\mathcal{X}} \Big[K(\cdot, x),\ \int_{\mathcal{X}} K(\cdot, y)\, d(P - Q)(y)\Big]_{\mathcal{B}^*} d(P - Q)(x), \qquad (6)$$
where (†) is proved in the supplementary material. (6) is not reducible, as the s.i.p. is not linear in the second argument unless B is a Hilbert space. This means γ_K(P, Q) is not representable in terms of the kernel function K(x, y), unlike in the case of B being an RKHS, in which case the s.i.p. in (6) reduces to an inner product, providing
$$\gamma_K^2(P, Q) = \iint_{\mathcal{X}} K(x, y)\, d(P - Q)(x)\, d(P - Q)(y).$$
Since this issue holds for any P, Q ∈ P(X), it also holds for P_m and Q_n, which means γ_K(P_m, Q_n) cannot be computed in a closed form in terms of the kernel K(x, y), unlike in the case of an RKHS, where γ_K(P_m, Q_n) can be written as a simple V-statistic that depends only on K(x, y) computed at {X_j}_{j=1}^m and {Y_j}_{j=1}^n. This is one of the main drawbacks of the RKBS approach, where the s.i.p. structure does not allow closed form representations in terms of the kernel K (also see [24], where regularization algorithms derived in RKBS are not solvable unlike in an RKHS), and therefore could limit its practical viability. However, in the following section, we present non-trivial examples of s.i.p. RKBSs for which γ_K(P, Q) and γ_K(P_m, Q_n) can be obtained in closed forms.
5 Concrete Examples of RKBS Embeddings
In this section, we present examples of RKBSs and then derive the corresponding γ_K(P, Q) and γ_K(P_m, Q_n) in closed forms. To elaborate, we present three examples that cover the spectrum: Example 1 deals with an RKBS (in fact a family of RKBSs induced by the same r.k.) whose r.k. is pd, Example 2 with an RKBS whose r.k. is not symmetric and therefore not pd, and Example 3 with an RKBS whose r.k. is symmetric but not pd. These examples show that the Banach space embeddings result in richer metrics on P(X) than those obtained through RKHS embeddings.
Example 1 (K is positive definite). Let μ be a finite nonnegative Borel measure on R^d. Then for any 1 < p < ∞ with q = p/(p − 1),
$$\mathcal{B}^{pd}_p(\mathbb{R}^d) := \Big\{ f_u(x) = \int_{\mathbb{R}^d} u(t)\, e^{i\langle x, t\rangle}\, d\mu(t) : u \in L^p(\mathbb{R}^d, \mu),\ x \in \mathbb{R}^d \Big\} \qquad (7)$$
is an RKBS with K(x, y) = G(x, y) = (μ(R^d))^{(p−2)/p} ∫_{R^d} e^{−i⟨x−y, t⟩} dμ(t) as the r.k. and
$$\gamma_K(P, Q) = \Big\| \int_{\mathbb{R}^d} e^{i\langle x, \cdot\rangle}\, d(P - Q)(x) \Big\|_{L^q(\mathbb{R}^d, \mu)} = \|\phi_P - \phi_Q\|_{L^q(\mathbb{R}^d, \mu)}. \qquad (8)$$
First note that K is a translation invariant pd kernel on R^d, as it is the Fourier transform of the nonnegative finite Borel measure μ, which follows from Bochner's theorem. Therefore, though the s.i.p. kernel and the r.k. of an RKBS need not be symmetric, the space in (7) is an interesting example of an RKBS which is induced by a pd kernel. In particular, it can be seen that many RKBSs (B^{pd}_p(R^d) for any 1 < p < ∞) have the same r.k. (ignoring the scaling factor, which can be made one for any p by choosing μ to be a probability measure). Second, note that B^{pd}_p is an RKHS when p = q = 2, and therefore (8) generalizes γ_k(P, Q) = ‖φ_P − φ_Q‖_{L²(R^d, μ)}. By Theorem 6, it is clear that γ_K in (8) is a metric on P(R^d) if and only if supp(μ) = R^d. Refer to the supplementary material for an interpretation of B^{pd}_p(R^d) as a generalization of Sobolev space [23, Chapter 10].
Example 2 (K is not symmetric). Let μ be a finite nonnegative Borel measure such that its moment-generating function, i.e., M_μ(x) := ∫_{R^d} e^{⟨x, t⟩} dμ(t), exists. Then for any 1 < p < ∞ with q = p/(p − 1),
$$\mathcal{B}^{ns}_p(\mathbb{R}^d) := \Big\{ f_u(x) = \int_{\mathbb{R}^d} u(t)\, e^{\langle x, t\rangle}\, d\mu(t) : u \in L^p(\mathbb{R}^d, \mu),\ x \in \mathbb{R}^d \Big\}$$
is an RKBS with K(x, y) = G(x, y) = (M_μ(qx))^{(p−2)/p} M_μ(x(q − 1) + y) as the r.k. Suppose P and Q are such that M_P and M_Q exist. Then
$$\gamma_K(P, Q) = \Big\| \int_{\mathbb{R}^d} e^{\langle x, \cdot\rangle}\, d(P - Q)(x) \Big\|_{L^q(\mathbb{R}^d, \mu)} = \|M_P - M_Q\|_{L^q(\mathbb{R}^d, \mu)},$$
which is the weighted L^q distance between the moment-generating functions of P and Q. It is easy to see that if supp(μ) = R^d, then γ_K(P, Q) = 0 ⇒ M_P = M_Q a.e. ⇒ P = Q, which means γ_K is a metric on P(R^d). Note that K is not symmetric (for q ≠ 2) and therefore is not pd. When p = q = 2, K(x, y) = M_μ(x + y) is pd and B^{ns}_p(R^d) is an RKHS.
Example 3 (K is symmetric but not positive definite). Let ψ(x) = A e^{−x²}(4x⁶ + 9x⁴ − 18x² + 15) with A := (1/243)(4π²/25)^{1/6}. Then
$$\mathcal{B}^{snpd}_{3/2}(\mathbb{R}) := \Big\{ f_u(x) = \int_{\mathbb{R}} (x - t)^2\, e^{-\frac{3(x-t)^2}{2}}\, u(t)\, dt : u \in L^{3/2}(\mathbb{R}),\ x \in \mathbb{R} \Big\}$$
is an RKBS with r.k. K(x, y) = G(x, y) = ψ(x − y). Clearly, ψ and therefore K are not pd (though symmetric on R), as
$$\hat{\psi}(x) = -\frac{\sqrt{\pi}\, e^{-\frac{x^2}{4}}}{34992}\big(x^6 - 39x^4 + 216x^2 - 324\big)$$
is not nonnegative at every x ∈ R. Refer to the supplementary material for the derivation of K and ψ̂. In addition, γ_K(P, Q) = ‖∫_R ψ(· − x) d(P − Q)(x)‖_{L^q(R)} = ‖(ψ̂ (φ_P − φ_Q))^∨‖_{L^q(R)}, where θ(t) := t² e^{−3t²/2} denotes the window appearing in the definition of B^{snpd}_{3/2} above. Since supp(ψ̂) = R, we have γ_K(P, Q) = 0 ⇒ (ψ̂(φ_P − φ_Q))^∨ = 0 ⇒ ψ̂(φ_P − φ_Q) = 0 ⇒ φ_P = φ_Q a.e., which implies P = Q and therefore γ_K is a metric on P(R).
So far, we have presented different examples of RKBSs, wherein we have demonstrated the nature of the r.k., derived the Banach space embeddings in closed form, and studied the conditions under which they are injective. These examples also show that the RKBS embeddings result in richer distance measures on probabilities compared to those obtained by the RKHS embeddings, an advantage gained by moving from Hilbert to Banach spaces. Now, we consider the problem of computing γ_K(P_m, Q_n) in closed form and its consistency. In Section 4.3, we showed that γ_K(P_m, Q_n) does not have a nice closed form expression, unlike in the case of B being an RKHS. However, in the following, we show that for K in Examples 1-3, γ_K(P_m, Q_n) has a closed form expression for certain choices of q. Let us consider the estimation of γ_K(P, Q):
$$\gamma_K^q(P_m, Q_n) = \Big\| \int_{\mathcal{X}} b(x, \cdot)\, d(P_m - Q_n)(x) \Big\|_{L^q(\mathcal{X}, \mu)}^{q} = \int_{\mathcal{X}} \Big| \int_{\mathcal{X}} b(x, t)\, d(P_m - Q_n)(x) \Big|^q d\mu(t)$$
$$= \int_{\mathcal{X}} \Big| \frac{1}{m}\sum_{j=1}^{m} b(X_j, t) - \frac{1}{n}\sum_{j=1}^{n} b(Y_j, t) \Big|^q d\mu(t), \qquad (9)$$
where b(x, t) = e^{i⟨x, t⟩} in Example 1, b(x, t) = e^{⟨x, t⟩} in Example 2, and b(x, t) = ψ(x − t) with q = 3 and μ being the Lebesgue measure in Example 3. Since the duals of the RKBSs considered in Examples 1-3 are of type min(q, 2) for 1 < q < ∞ [2, p. 304], by Theorem 9, γ_K(P_m, Q_n) estimates γ_K(P, Q) consistently at a convergence rate of O(m^{max(1−q,−1)/min(q,2)} + n^{max(1−q,−1)/min(q,2)}) for q ∈ (1, ∞), with the best rate of O(m^{−1/2} + n^{−1/2}) attainable when q ∈ [2, ∞). This means for q ∈ (2, ∞), the same rate as attainable by the RKHS can be achieved. Now, the problem reduces to computing γ_K(P_m, Q_n). Note that (9) cannot be computed in a closed form for all q; see the discussion in the supplementary material about approximating γ_K(P_m, Q_n). However, when q = 2, (9) can be computed very efficiently in closed form (in terms of K) as a V-statistic [9], given by
$$\gamma_K^2(P_m, Q_n) = \frac{1}{m^2}\sum_{j,l=1}^{m} K(X_j, X_l) + \frac{1}{n^2}\sum_{j,l=1}^{n} K(Y_j, Y_l) - \frac{2}{mn}\sum_{j=1}^{m}\sum_{l=1}^{n} K(X_j, Y_l). \qquad (10)$$
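The V-statistic (10) is straightforward to implement. The sketch below (ours) uses a Gaussian RBF kernel as one concrete pd choice of K; the bandwidth and data are illustrative assumptions. The statistic vanishes for identical samples and is bounded away from zero for well-separated ones.

```python
import numpy as np

# The V-statistic (10), with a Gaussian RBF kernel as one concrete pd
# choice of K (bandwidth and data below are our illustrative assumptions).
def gamma_K_sq(X, Y, sigma=1.0):
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    m, n = len(X), len(Y)
    return (K(X, X).sum() / m**2 + K(Y, Y).sum() / n**2
            - 2 * K(X, Y).sum() / (m * n))

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
same = gamma_K_sq(X, X)                                    # identical samples
diff = gamma_K_sq(X, rng.standard_normal((200, 2)) + 3.0)  # shifted Q
print(same, diff)   # same is 0 up to rounding; diff is clearly positive
```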
More generally, it can be shown that if q = 2s, s ∈ N, then (9) reduces to
$$\gamma_K^q(P_m, Q_n) = \underbrace{\int_{\mathcal{X}} \cdots \int_{\mathcal{X}}}_{q}\ \underbrace{\int_{\mathcal{X}} \prod_{j=1}^{s} b(x_{2j-1}, t)\, \overline{b(x_{2j}, t)}\, d\mu(t)}_{A(x_1, \ldots, x_q)}\ \prod_{j=1}^{q} d(P_m - Q_n)(x_j), \qquad (11)$$
for which closed form computation is possible for appropriate choices of b and μ. Refer to the supplementary material for the derivation of (11). For b and μ as in Example 1, we have A(x_1, ..., x_q) = (μ(R^d))^{(2−p)/p} K(Σ_{j=1}^s x_{2j−1}, Σ_{j=1}^s x_{2j}), while for b and μ as in Example 2, we have A(x_1, ..., x_q) = M_μ(Σ_{j=1}^q x_j). By appropriately choosing ψ and θ in Example 3, we can obtain a closed form expression for A(x_1, ..., x_q), which is proved in the supplementary material. Note that choosing s = 1 in (11) results in (10). (11) shows that γ_K^q(P_m, Q_n) can be computed in a closed form in terms of A at a complexity of O(m^q), assuming m = n, which means the least complexity is obtained for q = 2. The above discussion shows that for appropriate choices of q, i.e., q ∈ {2, 4, 6, ...}, the RKBS embeddings in Examples 1-3 are useful in practice, as γ_K(P_m, Q_n) is consistent and has a closed form expression. However, the drawback of the RKBS framework is that the computation of γ_K(P_m, Q_n) is more involved than its RKHS counterpart.
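To illustrate the O(m^q) cost for q > 2, the sketch below (ours) implements the q = 4 (s = 2) case of (11) for Example 1 in one dimension with μ = N(0, 1), for which A(x₁, x₂, x₃, x₄) reduces to exp(−((x₁ + x₃) − (x₂ + x₄))²/2). All names and parameter choices are our own assumptions.

```python
import numpy as np

# Sketch (ours) of the q = 4 estimator (11) for Example 1 in d = 1 with
# mu = N(0, 1), so A(x1, x2, x3, x4) = exp(-((x1+x3) - (x2+x4))**2 / 2).
# The quadruple sum over the signed measure P_m - Q_n is folded into a
# quadratic form over pairwise sums, showing the O(m^q) cost for q > 2.
def gamma4(X, Y):
    z = np.concatenate([X, Y])
    w = np.concatenate([np.full(len(X), 1.0 / len(X)),
                        np.full(len(Y), -1.0 / len(Y))])
    u = np.add.outer(z, z).ravel()     # all pairwise sums z_a + z_c
    v = np.outer(w, w).ravel()         # matching products of signed weights
    G = np.exp(-np.subtract.outer(u, u)**2 / 2)
    return v @ G @ v                   # = gamma_K^4(P_m, Q_n)

rng = np.random.default_rng(4)
X = rng.standard_normal(20)
print(gamma4(X, X.copy()))                       # 0 up to rounding
print(gamma4(X, rng.standard_normal(20) + 2.0))  # clearly positive
```

The pair trick keeps the memory at O((m + n)²) entries per axis, but the arithmetic cost is still quartic in the sample size, in contrast to the quadratic cost of (10).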
6 Conclusion & Discussion
With a motivation to study the advantages/disadvantages of generalizing Hilbert space learning algorithms to Banach spaces, in this paper, we generalized the notion of RKHS embedding of probability measures to Banach spaces, in particular RKBSs that are uniformly Fréchet differentiable and uniformly convex; note that this is equivalent to generalizing an RKHS based Parzen window classifier to RKBS. While we showed that most of the results for RKHSs, like injectivity of the embedding, consistency of the Parzen window classifier, etc., nicely generalize to RKBSs, yielding richer distance measures on probabilities, the generalized notion is less attractive in practice compared to its RKHS counterpart because of the computational disadvantage associated with it. Since most of the existing literature on generalizing kernel methods to Banach spaces deals with more complex algorithms than the simple Parzen window classifier considered in this paper, we believe that most of these algorithms may have limited practical applicability, though they are theoretically appealing. This therefore raises an important open problem: developing computationally efficient Banach space based learning algorithms.
Acknowledgments
The authors thank the anonymous reviewers for their constructive comments, which improved the presentation of the paper. Part of the work was done while B. K. S. was a Ph.D. student at UC San Diego. B. K. S. and G. R. G. L. acknowledge support from the National Science Foundation (grants DMS-MSPA 0625409 and IIS-1054960). K. F. was supported in part by JSPS KAKENHI (B) 22300098.
References
[1] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337-404, 1950.
[2] B. Beauzamy. Introduction to Banach Spaces and their Geometry. North-Holland, The Netherlands, 1985.
[3] K. Bennett and E. Bredensteiner. Duality and geometry in SVM classifiers. In Proc. 17th International Conference on Machine Learning, pages 57-64, 2000.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, London, UK, 2004.
[5] R. Der and D. Lee. Large-margin classification in Banach spaces. In JMLR Workshop and Conference Proceedings, volume 2, pages 91-98. AISTATS, 2007.
[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5:73-99, 2004.
[7] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489-496, Cambridge, MA, 2008. MIT Press.
[8] J. R. Giles. Classes of semi-inner-product spaces. Trans. Amer. Math. Soc., 129:436-446, 1967.
[9] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513-520. MIT Press, 2007.
[10] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 585-592. MIT Press, 2008.
[11] M. Hein, O. Bousquet, and B. Schölkopf. Maximal margin classification for metric spaces. J. Comput. System Sci., 71:333-359, 2005.
[12] G. Lumer. Semi-inner-product spaces. Trans. Amer. Math. Soc., 100:29-43, 1961.
[13] C. A. Micchelli and M. Pontil. A function representation for learning in Banach spaces. In Conference on Learning Theory, 2004.
[14] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, UK, 2004.
[16] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proc. 18th International Conference on Algorithmic Learning Theory, pages 13-31. Springer-Verlag, Berlin, Germany, 2007.
[17] K. Sridharan and A. Tewari. Convex games in Banach spaces. In Conference on Learning Theory, 2010.
[18] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, G. R. G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1750-1758. MIT Press, 2009.
[19] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In R. Servedio and T. Zhang, editors, Proc. of the 21st Annual Conference on Learning Theory, pages 111-122, 2008.
[20] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517-1561, 2010.
[21] H. Tong, D.-R. Chen, and F. Yang. Least square regression with ℓp-coefficient regularization. Neural Computation, 22:3221-3235, 2010.
[22] U. von Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research, 5:669-695, 2004.
[23] H. Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, UK, 2005.
[24] H. Zhang, Y. Xu, and J. Zhang. Reproducing kernel Banach spaces for machine learning. Journal of Machine Learning Research, 10:2741-2775, 2009.
9
| 4278 |@word briefly:1 norm:3 c0:4 checkable:1 open:1 attainable:2 thereby:1 reduction:1 moment:1 seriously:1 rkhs:43 interestingly:1 imaginary:1 existing:1 comparing:1 dx:2 written:1 v:1 characterization:4 provides:2 math:3 zhang:4 prove:1 hermitian:1 introduce:1 classifiability:1 theoretically:1 indeed:1 nor:1 window:10 provided:2 bounded:5 notation:2 klq:5 affirmative:1 developed:1 nj:1 pseudo:2 every:3 ti:6 isometrically:1 exactly:1 classifier:13 demonstrates:2 uk:4 schwartz:1 unit:3 converse:1 grant:1 berlinet:1 platt:3 positive:3 before:2 limit:3 bilinear:1 analyzing:1 studied:5 bredensteiner:1 limited:1 lumer:1 practical:6 unique:5 acknowledgment:1 yj:7 testing:1 practice:2 definite:3 pontil:1 universal:1 empirical:3 significantly:2 cannot:3 gkb:2 risk:4 context:2 measurable:3 equivalent:1 demonstrated:1 reviewer:1 straightforward:1 williams:1 convex:6 focused:1 m2:1 estimator:7 importantly:2 deriving:1 mq:3 embedding:21 gert:2 notion:8 limiting:2 resp:2 diego:2 suppose:8 lanckriet:4 element:2 expensive:1 distributional:1 reducible:1 culotta:1 eihx:2 sun:1 mentioned:1 pd:14 convexity:1 complexity:2 cristianini:1 depend:1 raise:1 easily:1 various:1 chapter:1 derivation:1 distinct:1 london:2 approached:1 choosing:5 whose:3 richer:5 supplementary:8 valued:4 posed:1 statistic:4 rwe:1 highlighted:1 transform:1 online:1 a9:2 differentiate:2 advantage:5 differentiable:5 ucl:1 product:15 maximal:1 fr:6 roweis:2 ihy:1 kh:2 dirac:1 olkopf:10 convergence:4 extending:1 rademacher:3 generating:2 derive:3 depending:1 ac:2 throw:1 soc:3 kenji:1 implies:2 riesz:6 rasch:1 closely:1 tokyo:1 drawback:5 kb:8 duals:1 material:7 hx:1 fix:1 generalization:6 preliminary:1 anonymous:1 hold:7 practically:1 considered:4 exp:1 cb:1 algorithmic:1 a2:1 estimation:2 proc:3 teo:1 establishes:1 weighted:1 hoffman:1 fukumizu:8 mit:5 clearly:2 pn:2 rial:1 derived:3 kakenhi:1 consistently:2 check:1 polish:1 sense:3 letr:1 koller:2 germany:1 issue:3 classification:7 dual:5 denoted:1 
development:1 uc:2 field:1 construct:1 having:3 nicely:1 zz:1 x4:2 represents:1 look:1 t2:1 inherent:1 serious:2 few:1 national:1 homogeneity:1 geometry:2 lebesgue:1 interest:1 a5:2 investigate:1 evaluation:2 yielding:1 light:1 sens:1 primal:1 fu:2 injective:10 unless:2 taylor:1 re:2 hein:1 theoretical:1 giles:1 cover:1 disadvantage:5 a6:1 applicability:2 reflexive:1 uniform:2 jsps:1 characterize:1 answer:1 st:1 borgwardt:1 international:2 lee:1 yl:2 parzen:10 concrete:2 von:1 ek:1 supp:4 student:1 north:1 coefficient:1 satisfy:3 mp:2 depends:1 view:1 closed:16 sup:3 contribution:2 square:1 who:1 characteristic:1 efficiently:2 yield:1 generalize:2 bharath:2 suffers:2 definition:5 a10:2 sriperumbudur:4 servedio:1 kl2:1 echet:6 involved:2 dm:1 associated:8 proof:1 con:1 proved:2 popular:1 dimensionality:1 hilbert:25 carefully:1 attained:1 dt:1 isometric:1 x6:2 supervised:1 wherein:4 improved:1 amer:3 done:1 though:6 ihx:3 smola:4 hand:1 aronszajn:2 nonlinear:1 a7:2 lack:1 continuity:1 defines:2 believe:1 concept:1 counterpart:6 hausdorff:2 regularization:4 symmetric:14 moore:1 deal:4 attractive:1 game:1 qn1:1 coincides:4 generalized:3 l1:3 pro:1 fj:2 recently:2 superior:1 common:1 functional:1 jp:1 volume:1 banach:45 interpretation:1 kluwer:1 refer:4 cambridge:5 rd:31 consistency:9 mathematics:1 pm:28 shawe:1 pq:1 moving:1 etc:2 showed:3 verlag:1 certain:1 binary:3 der:1 integrable:1 seen:3 injectivity:10 bochner:2 semi:7 relates:2 ii:1 reduces:3 gretton:7 academic:1 bach:1 sphere:1 ofp:1 a1:1 impact:2 regression:1 ae:1 metric:11 kmp:1 kernel:35 limt:2 represent:1 achieved:1 addition:4 publisher:1 appropriately:1 sch:10 unlike:7 comment:1 subject:1 hz:2 induced:2 lafferty:1 sridharan:1 jordan:1 yang:1 bengio:1 easy:3 viability:2 embeddings:20 variety:1 independence:3 enough:1 xj:9 identified:1 bpd:3 inner:14 reduce:1 whether:1 expression:5 song:2 remark:1 useful:5 generally:4 clear:5 tewari:1 netherlands:1 transforms:2 locally:2 ph:1 differentiability:1 generate:1 
exist:1 estimated:2 broadly:1 drawn:5 year:1 luxburg:1 inverse:2 powerful:1 classifiable:1 arrive:1 throughout:1 reader:2 family:1 sobolev:1 scaling:1 topological:5 ehx:3 nonnegative:6 annual:1 infinity:2 bp:1 x2:5 bousquet:2 fourier:5 answered:2 argument:4 prescribed:1 span:1 bns:2 min:3 developing:2 representable:1 conjugate:4 lp:5 eih:1 appealing:1 restricted:1 invariant:4 computationally:1 discus:1 singer:2 studying:1 generalizes:3 apply:1 worthwhile:1 appropriate:3 distinguished:2 rkhss:4 shortly:1 existence:1 substitute:1 thomas:1 denotes:4 include:1 a4:1 hinge:1 ism:1 approximating:1 micchelli:1 question:4 dependence:1 usual:1 said:4 dp:12 distance:7 separate:1 thank:1 sci:1 berlin:1 cauchy:1 trivial:2 assuming:1 remind:1 providing:1 minimizing:1 equivalently:1 potentially:1 negative:3 finite:9 acknowledge:1 immediate:1 subsume:1 ucsd:1 reproducing:16 arbitrary:1 introduced:2 required:1 established:1 trans:3 pattern:2 agnan:1 fp:1 max:2 analogue:2 power:1 critical:2 treated:1 solvable:1 mn:1 carried:3 xq:4 review:2 geometric:1 literature:2 l2:1 kf:13 nice:1 embedded:1 loss:1 expect:1 highlight:1 interesting:1 foundation:1 sufficient:1 consistent:6 dq:7 viewpoint:1 editor:5 translation:4 ktb:1 last:1 supported:1 denseness:1 allow:1 understand:1 institute:1 wide:1 dimension:1 qn:25 author:1 made:1 san:2 coincide:3 far:2 qx:1 functionals:2 compact:3 rthe:1 xi:3 spectrum:2 continuous:6 nature:1 ignoring:1 symmetry:2 schuurmans:1 complex:5 aistats:1 significance:1 main:3 dense:3 motivation:2 n2:1 x1:4 xu:1 referred:1 borel:7 elaborate:1 scattered:1 gatsby:2 tong:1 kfj:1 lq:3 xl:1 comput:1 vanish:2 jmlr:1 third:1 mspa:1 theorem:30 embed:1 showing:2 appeal:1 svm:1 a3:2 essential:1 exists:8 workshop:1 gained:2 importance:1 margin:2 chen:1 generalizing:4 scalar:1 wendland:1 holland:1 springer:1 a8:1 satisfies:2 ma:2 conditional:3 goal:2 presentation:1 consequently:1 lipschitz:1 bennett:1 uniformly:11 reducing:1 principal:1 called:5 x2j:2 isomorphic:1 ece:2 duality:1 
meaningful:2 formally:1 college:1 support:2 latter:2 constructive:1 dept:1 |
3,621 | 4,279 | Anatomically Constrained Decoding of Finger
Flexion from Electrocorticographic Signals
Zuoguan Wang
Department of ECSE
Rensselaer Polytechnic Inst.
Troy, NY 12180
[email protected]
Gerwin Schalk
Wadsworth Center
NYS Dept of Health
Albany, NY, 12201
[email protected]
Qiang Ji
Department of ECSE
Rensselaer Polytechnic Inst.
Troy, NY 12180
[email protected]
Abstract
Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some
BCI approaches begin by decoding kinematic parameters of movements from
brain signals, and then proceed to using these signals, in absence of movements,
to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give
information about kinematic parameters (e.g., hand velocity or finger flexion). The
decoding approaches in these demonstrations usually employed classical classification/regression algorithms that derive a linear mapping between brain signals
and outputs. However, they typically only incorporate little prior information
about the target kinematic parameter. In this paper, we show that different types of
anatomical constraints that govern finger flexion can be exploited in this context.
Specifically, we incorporate these constraints in the construction, structure, and
the probabilistic functions of a switched non-parametric dynamic system (SNDS)
model. We then apply the resulting SNDS decoder to infer the flexion of individual fingers from the same ECoG dataset used in a recent study. Our results show
that the application of the proposed model, which incorporates anatomical constraints, improves decoding performance compared to the results in the previous
work. Thus, the results presented in this paper may ultimately lead to neurally
controlled hand prostheses with full fine-grained finger articulation.
1 Introduction
Brain computer interfaces (BCIs) allow people to control devices directly using brain signals [19].
Because BCI systems directly convert brain signals into commands to control output devices, they
can be used by people with severe paralysis. Core components of any BCI system are the feature extraction algorithm that extracts those brain signal features that represent the subject's intent,
and the decoding algorithm that translates those features into output commands to control artificial
actuators.
Substantial efforts in signal processing and machine learning have been devoted to decoding algorithms. Many of these efforts focused on classifying discrete brain states. The linear and non-linear
classification algorithms used in these efforts are reviewed in [12, 1, 10]. The simplest translation
algorithms use linear models to model the relationship between brain signals and limb movements.
This linear relationship can be defined using different algorithms, including multiple linear regression, pace regression [8], or ridge regression [13]. Other studies have explored the use of non-linear
methods, including neural networks [15], multilayer perceptrons [7], and support vector machines
[7]. Despite substantial efforts, it is still unclear whether non-linear methods can provide consistent
benefits over linear methods in the BCI context.
What is common to current linear and non-linear methods is that they are often used to model the
instantaneous relationship between brain signals and particular behavioral parameters. Thus, they
[Figure 1: normalized amplitude vs. time (1 s scale); panel (a) marks extension (S1), flexion (S2), and rest (S3) segments of two flexion traces; panel (b) is a state diagram.]
Figure 1: (a) Examples of two flexion traces. (b) A diagram of possible state transitions for finger
movements.
do not account for the temporal evolution of movement parameters, and can also not directly provide uncertainty in their predictions. Furthermore, existing methods do not offer opportunities to
incorporate prior knowledge about the target model system. In the example of finger flexion, existing methods cannot readily account for the physiological, physical, and mechanical constraints
that affect the flexion of different fingers. The main question we sought to answer with this study
is whether mathematical decoding algorithms that can make use of the temporal evolution of movement parameters, that can incorporate uncertainty, and that can also incorporate prior knowledge,
would provide improved decoding results compared to an existing algorithm that utilized only the
instantaneous relationship.
Some previous studies evaluated models that can utilize temporal evolutions. These include the
Kalman filter (KF) that explicitly characterizes the temporal evolution of movement parameters [20].
One important benefit offered by Kalman filters (KFs) is that as a probabilistic method, it can provide
confidence estimates for its results. Hidden Markov Models (HMMs) represent another dynamic
model that can allow to model the latent space both spatially and temporally. As a generalization
of HMMs and KFs, switching linear dynamic systems (SLDSs) provide more expressive power for
sequential data. Standard SLDS has also been used in BCI research, where it was used for inference
of hand motion from motor cortical neurons [21]. Apart from its expressive power, as a probabilistic
graphical model, SLDS has a flexible structural framework that facilitates the incorporation of prior
knowledge by specifying parameters or structures. Nevertheless, no previous study has evaluated a
method that can utilize temporal evolutions, incorporate uncertainty, and make use of different types
of constraints.
The proposed SNDS addresses several limitations of SLDS in terms of modeling the anatomical
constraints of the finger flexion. We applied the SNDS technique to a dataset used in previous studies ([8]) to decode from ECoG signals the flexion of individual fingers, and we compared decoding
results when we did and did not use anatomical constraints (i.e., for SNDS/regression and regression). Our results show that incorporation of anatomical constraints substantially improved decoding
results compared to when we did not incorporate this information. We attribute this improvement to
the following technical advances. First, and most importantly, we introduce a prior model based on
SNDS, which takes advantage of anatomical constraints about finger flexion. Second, to effectively
model the duration of movement patterns, our model solves the ?Markov assumption? problem more
efficiently by modeling the dependence of state transition on the continuous state variable. Third,
because estimation of continuous transition is crucial to accurate prediction, we applied kernel density estimation to model the continuous state transition. Finally, we developed effective learning and
inference methods for the SNDS model.
2 Modeling of Finger Flexion
Figure 1 (a) shows two examples for typical flexion traces. From this figure, we can make the
following observations:
[Figure 2: graphical model with state nodes S_{t-1}, S_t on top, position nodes Y_{t-1}, Y_t (connected by a KDE-based transition) in the middle, and observation nodes Z_{t-1}, Z_t at the bottom.]
Figure 2: SNDS model, in which S_t, Y_t, and Z_t represent the moving state, the real finger position, and the measurement of the finger position at time t, respectively.
1. The movement of fingers can be categorized into three states: extension (state S1 ), flexion
(state S2 ) and rest (rest state S3 ).
2. For each state, there are particular predominant movement patterns. In the extension state
S1 , the finger keeps moving away from the rest position. In the flexion state S2 , the finger
moves back to the rest position. In the rest state S3 , there are only very small movements.
3. For either state S1 or state S2 , the movement speed is relatively low toward full flexion or
full extension, but faster in between. For the rest state, the speed stays close to zero.
4. The natural flexion or extension of fingers are limited to certain ranges due to the physical
constraints of our hand.
5. The transition between different states is not random. Figure 1 (b) shows the four possible
transitions between three states. The extension state and flexion state can transfer to each
other, while the rest state can only follow the flexion state and can only precede the extension state. This is also easy to understand from our common sense about natural finger
flexion. When the finger is extended, it is impossible for it to directly transition into the
rest state without experiencing flexion first. Similarly, fingers can not transition from rest
state to flexion state without first going through the extension state.
6. Figure 1 (b) discusses four possible ways of state transitions. The probability of these
transitions depends on the finger position. For example, in the situation at hand, it is
unlikely that the extension state transfers to the flexion state right after the extension state
begins. At the same time, it is more likely to occur when the finger has extended enough
and is near the end. Similar situations occur at other state transitions.
In summary, the observations described above provide constraints that govern finger flexion patterns.
Using the methods described below, we will build a computational model that incorporates these
constraints and that can systematically learn the movement patterns from data.
3 Model Construction
In this section, we show how the constraints summarized above are incorporated into the construction of the finger flexion model. The overall structure of our model is shown in Figure 2. The top
layer S represents moving states that include the extension state (S1 ), flexion state (S2 ), and rest
state (S3 ). The middle layer (continuous state variable) represents the real finger position, and the
bottom layer (observation) Z the measurements of finger positions. We discuss each layer in detail
below.
3.1 State Transitions
In the standard SLDS, the probability of duration τ of state i is, according to the Markov assumption, defined as follows:

P(τ) = q_ii^τ (1 - q_ii)    (1)

where q_ii denotes the self-transition probability of state i. Equation 1
states that the probability of staying in a given state decreases exponentially with time. This behavior
can not provide an adequate representation for many natural temporal phenomena. The natural finger
[Figure 3: two density plots, panels (A) and (B); the x-axis is normalized amplitude, the y-axis is the PDF.]
Figure 3: (a) Probability density function (PDF) of Y_{t-1} given S_{t-1} = extension and S_t = flexion; (b) probability density function of Y_{t-1} given S_{t-1} = flexion and S_t = extension.
flexion is an example. It usually takes a certain amount of time for fingers to finish extension or
flexion. Thus, the duration of certain movement patterns will deviate from the distribution described
by Equation 1.
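As a quick numerical illustration of this point, the following sketch (with an arbitrary self-transition probability q_ii = 0.9, not a value from the paper) shows that the geometric duration law of Equation 1 always puts its mode at the shortest possible stay, which is a poor match for finger movements of fairly fixed length:

```python
# Sketch: under Eq. (1), P(tau) = q_ii^tau * (1 - q_ii), so durations decay
# geometrically. q_ii = 0.9 is an arbitrary illustrative value.
def duration_pmf(q_ii, tau):
    """Probability of staying exactly tau more steps in state i."""
    return (q_ii ** tau) * (1.0 - q_ii)

pmf = [duration_pmf(0.9, t) for t in range(200)]
# The mode is always at tau = 0: short stays are most likely, whatever q_ii is.
assert max(pmf) == pmf[0]
# The PMF sums to (almost) 1 over a long horizon.
assert abs(sum(pmf) - 1.0) < 1e-6
```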
This limitation of the state duration model has been investigated by [2, 14]. In fact, in many cases the temporal variance depends on the spatial variance, i.e., the state transition depends on the continuous state variables. In the context of finger flexion, as discussed in Section 2, the transition between moving states depends on the finger position. In the model shown in Figure 2, the variable S_t therefore has an incoming arrow not only from S_{t-1} but also from Y_{t-1}:
P(S_t | Y_{t-1}, S_{t-1}) = P(Y_{t-1}, S_{t-1}, S_t) / P(Y_{t-1}, S_{t-1})
                          = P(Y_{t-1} | S_{t-1}, S_t) P(S_t | S_{t-1}) P(S_{t-1}) / P(Y_{t-1}, S_{t-1})
                          = P(Y_{t-1} | S_{t-1}, S_t) P(S_t | S_{t-1}) / P(Y_{t-1} | S_{t-1})    (2)
where P(Y_{t-1} | S_{t-1}) is a normalization term that does not depend on S_t. P(S_t | S_{t-1}) is the state transition, the same as in an HMM or a standard SLDS. P(Y_{t-1} | S_{t-1}, S_t) is the posterior probability of Y_{t-1} given the state transition from S_{t-1} to S_t, and it plays the central role in controlling state transitions: it directly ties the state transition to the finger position. We take the transition between the extension and flexion states as an example to give an intuitive explanation. Figure 3(a) shows that the transition from extension to flexion most probably happens at a finger position between 1.5 and 2.5, which is near the extension end of the movement. Similarly, Figure 3(b) implies that when the finger position is between -0.6 and -0.3, which is the flexion end of the finger movement, the transition from flexion to extension has a high probability.
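The reweighting in Equation 2 can be sketched as follows; the base transition probabilities and position likelihoods below are made-up illustrative numbers, not values learned in the paper:

```python
# Sketch of Eq. (2): the effective transition P(S_t | Y_{t-1}, S_{t-1}) is the
# base Markov transition P(S_t | S_{t-1}) reweighted by the position likelihood
# P(Y_{t-1} | S_{t-1}, S_t) for each candidate transition, then renormalized.
# All numbers here are hypothetical.

def position_dependent_transition(y_prev, s_prev, trans, pos_lik):
    """trans[s][s2] = P(S_t=s2 | S_{t-1}=s); pos_lik[(s, s2)](y) = P(y | s -> s2)."""
    states = list(trans[s_prev].keys())
    w = {s: trans[s_prev][s] * pos_lik[(s_prev, s)](y_prev) for s in states}
    z = sum(w.values())
    return {s: w[s] / z for s in states}

trans = {"ext": {"ext": 0.9, "flex": 0.1}}
# Near full extension (y around 2), an ext -> flex transition becomes far more likely.
pos_lik = {("ext", "ext"): lambda y: 1.0 if y < 1.5 else 0.2,
           ("ext", "flex"): lambda y: 0.05 if y < 1.5 else 0.9}

p_mid = position_dependent_transition(0.5, "ext", trans, pos_lik)
p_top = position_dependent_transition(2.0, "ext", trans, pos_lik)
assert p_top["flex"] > p_mid["flex"]  # more likely to start flexing near the top
```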
3.2 Continuous State Transition
In SLDSs, the Y transition is linearly modeled. However, in our model, the continuous state transition is still highly nonlinear during the extension and flexion states. This is mainly because the finger
movement speed is uneven (fast in the middle but slow at the beginning and end). Modeling the continuous state transition properly is important for accurate decoding of finger movement. Here we
propose a nonparametric method with which continuous state transitions are modeled using kernel
density estimation [3]. A Gaussian kernel is the most common choice because of its effectiveness and tractability. With a Gaussian kernel, the estimated joint distribution p̂(Y_{t-1}, Y_t) under each state can be obtained by:

p̂(Y_{t-1} = y_{t-1}, Y_t = y_t) = 1/(N h_{Y_{t-1}} h_{Y_t}) * sum_{j=1}^{N} K((y_{t-1} - y_{j-1})/h_{Y_{t-1}}) K((y_t - y_j)/h_{Y_t})    (3)

where K(·) is a given kernel function, h_{Y_{t-1}} and h_{Y_t} are numeric bandwidths for Y_{t-1} and Y_t, and N is the total number of training examples. Our choice for K(·) is the Gaussian kernel K(t) = (2π)^{-1/2} e^{-t^2/2}. The bandwidths h_{Y_{t-1}} and h_{Y_t} are estimated via a leave-one-out likelihood criterion
[Figure 4: three scatter plots, panels (A), (B), and (C), of Y_t versus Y_{t-1}; axes are normalized amplitude.]
Figure 4: (a) kernel locations for p̂(Y_{t-1}, Y_t) under the extension state; (b) kernel locations for p̂(Y_{t-1}, Y_t) under the flexion state; (c) kernel locations for p̂(Y_{t-1}, Y_t) under the rest state. Numbers on the axes are the normalized amplitudes of the fingers' flexion.
[9], which maximizes:

L_CV(h_{Y_{t-1}}, h_{Y_t}) = prod_{i=1}^{N} p̂_{h_{Y_{t-1}}, h_{Y_t}, -i}(y_{i-1}, y_i)    (4)

where p̂_{h_{Y_{t-1}}, h_{Y_t}, -i}(y_{i-1}, y_i) denotes the density estimated with (y_{i-1}, y_i) deleted. p̂(Y_{t-1}, Y_t) provides a much more accurate representation of the continuous state transition than does a linear model.
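A minimal sketch of Equations 3 and 4, assuming synthetic Gaussian training pairs and a small bandwidth grid search in place of whatever optimizer the authors used:

```python
import numpy as np

# Sketch: 2-D product-Gaussian KDE for p(Y_{t-1}, Y_t), with bandwidths picked
# by a leave-one-out likelihood grid search (Eq. 4). Data and grid are synthetic.
rng = np.random.default_rng(0)
pairs = rng.normal(size=(80, 2))  # stand-in for (y_{t-1}, y_t) training pairs

def kde(data, h1, h2, q):
    """Eq. (3): product-Gaussian KDE evaluated at query point q = (y_{t-1}, y_t)."""
    d1 = (q[0] - data[:, 0]) / h1
    d2 = (q[1] - data[:, 1]) / h2
    k = np.exp(-0.5 * (d1 ** 2 + d2 ** 2)) / (2.0 * np.pi)
    return k.sum() / (len(data) * h1 * h2)

def loo_log_lik(data, h1, h2):
    """Log of Eq. (4): leave-one-out likelihood of the held-out pair."""
    total = 0.0
    for i in range(len(data)):
        rest = np.delete(data, i, axis=0)
        total += np.log(kde(rest, h1, h2, data[i]) + 1e-300)
    return total

grid = [0.2, 0.5, 1.0]  # arbitrary candidate bandwidths
best = max(((h1, h2) for h1 in grid for h2 in grid),
           key=lambda hh: loo_log_lik(pairs, hh[0], hh[1]))
p = kde(pairs, best[0], best[1], np.array([0.0, 0.0]))
assert p > 0.0
```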
Figure 4 gives an example of the kernel locations for p̂(Y_{t-1}, Y_t) under each of the three states (trained with part of the data from thumb flexion of subject A). Even though the kernel locations do not by themselves represent the joint distribution p̂(Y_{t-1}, Y_t), they help to give some insight into the relationship between Y_{t-1} and Y_t. Each panel in Figure 4 describes the temporal transition pattern of one movement state. For the extension state, all kernel locations are above the diagonal, which means that statistically Y_t is greater than Y_{t-1}, i.e., the fingers are moving up. Also, the farther the kernel locations are from the diagonal, the larger the value of Y_t - Y_{t-1}, which implies a greater moving speed at time t. In the extension state, the moving speed around average flexion is statistically greater than around the two extremes (full flexion and full extension). A similar argument applies to the flexion state in Figure 4(b). For the rest state, the kernel locations lie almost along the diagonal, which means Y_t = Y_{t-1}, i.e., the fingers are not moving. The ability to model this non-linear dependence of speed on position under each state is critical for precise prediction of the flexion trace.
3.3 Observation Model
Z is the observation, i.e., the finger flexion trace directly mapped from the ECoG signals by another regression algorithm; in this paper, we employ pace regression for this mapping. We make the assumption that under each movement pattern, Z_t depends linearly on Y_t and is corrupted by Gaussian noise. Specifically, this relationship can be represented by a linear Gaussian model [4]:

Z_t = α(s) Y_t + w(s),    w(s) ~ N(β(s), σ(s)^2)    (5)

The parameters α(s), β(s), and σ(s)^2 can be estimated from the training data via

α(s) = (E[ZY] - E[Z]E[Y]) / (E[Y^2] - E^2[Y]),
β(s) = E[Z] - α(s) E[Y],
σ(s)^2 = E[Z^2] - E^2[Z] - (E[ZY] - E[Z]E[Y])^2 / (E[Y^2] - E^2[Y]),

where E denotes the statistical expectation, approximated by the sample mean.
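These moment estimators can be checked on synthetic data; the true parameter values below (alpha = 1.5, beta = 0.3, sigma = 0.2) are arbitrary:

```python
import numpy as np

# Sketch of the Section 3.3 moment estimators for Z = alpha*Y + w, w ~ N(beta, sigma^2),
# applied to synthetic (Y, Z) samples with known ground truth.
rng = np.random.default_rng(1)
y = rng.uniform(-1.0, 2.0, size=5000)
z = 1.5 * y + rng.normal(0.3, 0.2, size=5000)

cov = (z * y).mean() - z.mean() * y.mean()        # E[ZY] - E[Z]E[Y]
var_y = (y ** 2).mean() - y.mean() ** 2           # E[Y^2] - E^2[Y]
alpha = cov / var_y
beta = z.mean() - alpha * y.mean()
sigma2 = (z ** 2).mean() - z.mean() ** 2 - cov ** 2 / var_y

assert abs(alpha - 1.5) < 0.05
assert abs(beta - 0.3) < 0.05
assert abs(sigma2 - 0.04) < 0.01   # true sigma^2 = 0.2^2
```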
3.4 Learning and Inference
3.4.1 Learning
All variables of the SNDS model are incorporated during learning. Finger flexion states are estimated from the behavioral flexion traces (e.g., Figure 1(a)). Specifically, samples on the extension parts of the traces are labeled with state "extension," samples on the flexion parts are labeled with state "flexion," and samples during rest are labeled with state "rest." Y is the true flexion trace, which we approximate with the data glove measurements. Z is the observation, for which we use the output of pace regression.

All parameters Θ in our model (Figure 2) consist of three components: the state transition parameter Θ_S, the continuous state transition parameter Θ_Y, and the observation parameter Θ_O. For the state transition parameter Θ_S, as discussed in Equation 2, P(S_t | S_{t-1}) and P(Y_{t-1} | S_{t-1}, S_t) are learned from the training data. P(S_t | S_{t-1}) can be obtained simply by counting; however, we need to enforce the constraints described in Section 2(5): the entries of the conditional probability table of P(S_t | S_{t-1}) that correspond to impossible state transitions are set to zero. P(Y_{t-1} | S_{t-1}, S_t) is estimated by kernel density estimation using the one-dimensional form of Equation 3. The continuous state transition parameter Θ_Y includes the joint distribution p̂(Y_{t-1}, Y_t), which is estimated using Equation 3, with bandwidths selected by the criterion in Equation 4. The observation parameter Θ_O includes α(s), β(s), and σ(s)^2, which are estimated using the equations in Section 3.3.
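A minimal sketch of learning the constrained transition table by counting; the state sequence below is a made-up toy example, and the forbidden set encodes the two transitions ruled out in Section 2(5):

```python
# Sketch: estimate P(S_t | S_{t-1}) by counting, with the anatomically
# impossible transitions (rest -> flexion, extension -> rest) forced to zero.
FORBIDDEN = {("rest", "flexion"), ("extension", "rest")}
STATES = ["extension", "flexion", "rest"]

def learn_transition(seq):
    counts = {s: {t: 0 for t in STATES} for s in STATES}
    for a, b in zip(seq, seq[1:]):
        if (a, b) not in FORBIDDEN:  # constraint from Section 2(5)
            counts[a][b] += 1
    table = {}
    for a in STATES:
        total = sum(counts[a].values())
        table[a] = {b: (counts[a][b] / total if total else 0.0) for b in STATES}
    return table

seq = ["rest", "extension", "extension", "flexion", "rest",
       "extension", "flexion", "flexion", "rest"]  # toy labeled sequence
T = learn_transition(seq)
assert T["rest"]["flexion"] == 0.0       # forbidden entry stays zero
assert T["extension"]["rest"] == 0.0     # forbidden entry stays zero
assert abs(sum(T["extension"].values()) - 1.0) < 1e-9
```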
3.4.2 Inference
Given the time course of the ECoG signals, our goal is to infer the time course of finger flexion. This is a typical filtering problem, that is, recursively estimating the posterior distribution of S_t and Y_t given the observations from the beginning up to time t, i.e., Z_{1:t}:
P(S_t, Y_t | Z_{1:t}) ∝ P(Z_t | S_t, Y_t, Z_{1:t-1}) P(S_t, Y_t | Z_{1:t-1})
= P(Z_t | S_t, Y_t) [ sum_{S_{t-1}} ∫ P(S_t, Y_t | S_{t-1}, Y_{t-1}) P(S_{t-1}, Y_{t-1} | Z_{1:t-1}) dY_{t-1} ]
= P(Z_t | S_t, Y_t) [ sum_{S_{t-1}} ∫ P(S_t | S_{t-1}, Y_{t-1}) P(Y_t | S_t, Y_{t-1}) P(S_{t-1}, Y_{t-1} | Z_{1:t-1}) dY_{t-1} ]    (6)
where P(S_{t-1}, Y_{t-1} | Z_{1:t-1}) is the filtering result of the previous step. We note, however, that not all the continuous variables in our model follow a Gaussian distribution, because kernel density estimation was used to model the dynamics of the continuous state variable. Hence, it is infeasible to update the posterior distribution P(S_t, Y_t | Z_{1:t}) analytically at each step. To cope with this issue, we adopted a numerical sampling method based on particle filtering [6] to propagate and update the discretely approximated distribution over time.
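A toy particle-filter update in the spirit of Equation 6; the dynamics, noise levels, and observation model below are illustrative stand-ins, not the learned SNDS components:

```python
import numpy as np

# Sketch: propagate discrete-state and continuous-position particles through
# toy dynamics, reweight by a Gaussian observation likelihood, then resample.
rng = np.random.default_rng(2)
N = 1000
states = rng.integers(0, 3, size=N)        # particle moving states (0/1/2)
positions = rng.normal(0.0, 1.0, size=N)   # particle finger positions
weights = np.full(N, 1.0 / N)

def pf_step(states, positions, weights, z, sigma_obs=0.3):
    # Propagate: toy stand-ins for P(S_t | S_{t-1}, Y_{t-1}) and P(Y_t | S_t, Y_{t-1}).
    drift = np.where(states == 0, 0.1, np.where(states == 1, -0.1, 0.0))
    positions = positions + drift + rng.normal(0.0, 0.05, size=len(positions))
    # Reweight by the observation likelihood P(Z_t | S_t, Y_t).
    lik = np.exp(-0.5 * ((z - positions) / sigma_obs) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    # Resample to fight weight degeneracy.
    idx = rng.choice(len(weights), size=len(weights), p=weights)
    return states[idx], positions[idx], np.full(len(weights), 1.0 / len(weights))

states, positions, weights = pf_step(states, positions, weights, z=1.0)
# After conditioning on z = 1.0, the particle cloud shifts toward the observation.
assert abs(positions.mean() - 1.0) < 0.5
```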
4 Experiments
4.1 Data Collection
This section gives a brief overview of data collection and feature extraction. A more comprehensive
description is given in [8]. The study included five subjects ? three women (subjects A, C and E)
and two men (subject B and D). Each subject had a 48- or 64-electrode grid placed over the frontoparietal-temporal region including parts of sensorimotor cortex. During the experiment, the subjects
were asked to repeatedly flex and extend specific individual fingers according to visual cues that
were given on a video screen. Typically, the subjects flexed the indicated finger 3-5 times over a
period of 1.5-3 s and then rested for 2 s. The data collection for each subject lasted 10 min, which
yielded an average of 30 trials for each finger. The flexion of each finger was measured by a data
glove (5DT Data Glove 5 Ultra, Fifth Dimension Technologies), which digitized the flexion of each
finger at 12 bit resolution.
The ECoG signals from the electrode grid were recorded using the general-purpose BCI2000 system
[17, 16] connected to a Neuroscan Synamps2 system. All electrodes were referenced to an inactive
electrode. The signals were further amplified, bandpass filtered between 0.15 and 200 Hz, and
digitized at 1000 Hz. Each dataset was visually inspected and those channels that did not clearly
contain ECoG activity were removed, which resulted in 48, 63, 47, 64 and 61 channels (for subjects
A-E respectively) for subsequent analyses.
4.2 Feature Extraction
Feature extraction was identical to that in [8]. In short, we first re-referenced the signals using a common average reference (CAR), which subtracted (1/H) sum_{q=1}^{H} s_q from each channel, where H is the total number of channels and s_q is the signal collected at the qth channel at the particular time. For each 100-ms time slice (overlapping by 50 ms) and each channel, we converted these time-series ECoG data into the frequency domain using an autoregressive model of order 20 [11]. Using
this model, we derived frequency amplitudes between 0 and 500 Hz in 1-Hz bins. ECoG features were
extracted by averaging these frequency amplitudes across five frequency ranges, i.e., 8-12 Hz, 18-24
Hz, 75-115 Hz, 125-159 Hz, and 159-175 Hz. In addition to the frequency features described above,
we obtained the Local Motor Potential (LMP) [18] by averaging the raw time-domain signal at each
channel over 100-ms time window. This resulted in 6 features for each of the ECoG channels, e.g.,
a total of 288 features from 48 channels.
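The preprocessing chain can be sketched as follows; note that the paper derives spectral amplitudes from an AR(20) model, whereas this sketch substitutes an FFT magnitude for brevity, and the channel count and signal are synthetic:

```python
import numpy as np

# Sketch of the Section 4.2 pipeline: common average re-referencing, band-averaged
# spectral amplitudes over the five listed ranges, plus the Local Motor Potential.
rng = np.random.default_rng(3)
H, T = 8, 1000                      # channels, samples (fs = 1000 Hz, 1-s slice)
sig = rng.normal(size=(H, T))

car = sig - sig.mean(axis=0, keepdims=True)   # subtract (1/H) * sum_q s_q
assert np.allclose(car.sum(axis=0), 0.0)      # CAR removes the common mean

freqs = np.fft.rfftfreq(T, d=1.0 / 1000.0)    # 0..500 Hz in 1-Hz bins
amp = np.abs(np.fft.rfft(car, axis=1))        # FFT magnitude stands in for AR(20)
bands = [(8, 12), (18, 24), (75, 115), (125, 159), (159, 175)]
feats = np.stack([amp[:, (freqs >= lo) & (freqs <= hi)].mean(axis=1)
                  for lo, hi in bands], axis=1)
lmp = car.mean(axis=1, keepdims=True)         # Local Motor Potential (time average)
features = np.hstack([feats, lmp])            # 6 features per channel
assert features.shape == (H, 6)
```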
4.3 Evaluation
We defined a movement period as the time between 1000 ms prior to movement onset and 1000 ms after movement offset. Movement onset was defined as the time when the finger's flexion value exceeded an empirically defined threshold. Conversely, movement offset was defined as the time when the finger's flexion value fell below that threshold and no movement onset was detected within the next 1200 ms [8]. To achieve a dataset with relatively balanced movement and rest periods, we
discarded all data outside the movement period. For each finger, we used 5-fold cross validation to
evaluate the performance of our modeling and inference algorithms that are described in more detail
in the following sections, i.e., 4/5th of data was used for training and 1/5th of data was used for
testing. Finally, we compared the performance with that achieved using pace regression (which had
been used in [8]). To do this, we used the PaceRegression algorithm implemented in the Java-based
Weka package [5].
4.4 Results
To give an impression of the qualitative improvement of our modeling algorithms described above
compared to pace regression, we first provide a qualitative example of the results achieved with each
method on the index finger of subject A. These results are shown in Figure 5. In this figure, the top
panel shows results achieved using pace regression and the middle figure shows results achieved
using SNDS. In each of these two panels, the thin dotted line shows the actual flexion of the index
finger (concatenated for five movement periods), and the thick solid line shows the flexion decoded
using pace regression/SNDS. This figure demonstrates qualitatively that the decoding of finger flexion achieved using SNDS much better approximates the actual finger flexion than does pace regression. We also observe that SNDS produces much smoother predictions, which is mainly due to
the consideration of temporal evolution of movement parameters in SNDS. The bottom panel again
shows the actual flexion pattern (thin dotted line) as well as the finger flexion state (1=flexion, 2=extension, 3=rest; thick solid line). These results demonstrate that the state of finger flexion (which
cannot be directly inferred using a method that does not incorporate a state machine (such as pace
regression)) can be accurately inferred using SNDS. In addition to the qualitative comparison provided above, Table 1 gives a quantitative comparison between the results achieved using SNDS and
pace regression. The results presented in this table give mean square errors (MSE) (min/max/mean
computed across the cross validation folds). They show that for all fingers and all subjects, the
results achieved using SNDS are superior to those achieved using pace regression. The overall average of mean square error reduces from 0.86 (pace regression) to 0.64 (SNDS). This improvement
of SNDS compared to pace regression was highly statistically significant: when we computed a paired t-test on the mean square errors for all fingers and subjects between pace regression and SNDS, the resulting p-value was << 0.001.
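The reported comparison can be reproduced in miniature from Table 1; the sketch below runs a paired t-test on ten of the "mean" MSE values (the four reconstructed fingers of subjects A and B, plus the thumb and index finger of subject C), computing the t statistic by hand rather than with scipy:

```python
import math

# Paired comparison on a subset of Table 1 mean MSEs:
# A thumb/index/middle/ring, B thumb/index/middle/ring, C thumb/index.
pace = [0.58, 0.64, 0.77, 0.86, 0.65, 0.63, 0.68, 0.52, 0.83, 0.78]
snds = [0.35, 0.44, 0.63, 0.73, 0.46, 0.44, 0.49, 0.39, 0.53, 0.46]

d = [p - s for p, s in zip(pace, snds)]   # per-condition improvement
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
t = mean_d / math.sqrt(var_d / n)         # paired t statistic, df = n - 1

assert min(d) > 0   # SNDS has the lower mean error in every one of these pairs
assert t > 5.0      # strongly significant even on this subset
```

With these ten pairs the statistic lands near t ≈ 9.7 on 9 degrees of freedom, consistent with the paper's p << 0.001 over all fingers and subjects.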
5 Discussion
This paper demonstrates that anatomical constraints can be successfully captured to build switched
non-parametric dynamic systems that decode finger flexion from ECoG signals. We also show
that the resulting computational models infer the flexion of individual fingers more accurately
than does pace regression, an established technique that has recently been used on the same
dataset. This improvement is made possible by dividing the flexion activity into several moving states
(S_t), modeling the state transitions over time, making those transitions dependent on the finger
position (the continuous state variable Y_t), and accurately modeling the transition pattern of the
continuous state variable under each moving state using kernel density estimation.
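The kernel density estimation step mentioned above can be sketched as a Parzen-window estimate: each observed transition of the continuous variable contributes a Gaussian bump. The sample values and bandwidth below are hypothetical, not taken from the paper.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Parzen-window estimate: average of Gaussian bumps centred on samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Hypothetical finger-position increments observed under one moving state.
increments = [0.10, 0.12, 0.09, 0.11, 0.30]
p = gaussian_kde(increments, bandwidth=0.05)
```

One such density would be fit per moving state, giving each state its own transition pattern for the continuous variable.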
[Figure 5: three panels (A), (B), (C), each plotting normalized amplitude against time (s) over 0-20 s; see the caption below.]
Figure 5: (a) Actual finger flexion (dotted trace) and decoded finger flexion (solid trace) using pace
regression (mean square error 0.68); (b) Actual finger flexion (dotted trace) and decoded finger
flexion (solid trace) using SNDS (mean square error 0.40); (c) Actual finger flexion (dotted trace)
and state prediction (solid trace).
Table 1: Comparison of decoding performance between pace regression and SNDS. Results
are given, for a particular finger and subject, as mean square errors between actual and decoded
movement (minimum, maximum and mean across all cross validation folds).
Subject  Alg.   Thumb           Index Finger    Middle Finger   Ring Finger     Little Finger   Avg.
A        pace   0.49/0.64/0.58  0.61/0.68/0.64  0.74/0.84/0.77  0.77/0.93/0.86  0.74/0.85/0.81  0.73
A        SNDS   0.27/0.45/0.35  0.40/0.51/0.44  0.57/0.76/0.63  0.64/0.86/0.73  0.52/0.68/0.59  0.54
B        pace   0.56/0.81/0.65  0.46/0.99/0.63  0.47/0.87/0.68  0.43/0.62/0.52  0.46/0.85/0.60  0.62
B        SNDS   0.31/0.62/0.46  0.32/0.80/0.44  0.32/0.67/0.49  0.25/0.50/0.39  0.23/0.64/0.40  0.43
C        pace   0.69/1.03/0.83  0.73/0.79/0.78  0.79/1.07/0.87  0.79/1.01/0.89  0.81/1.12/0.97  0.87
C        SNDS   0.33/0.85/0.53  0.35/0.54/0.46  0.48/0.60/0.54  0.44/0.76/0.61  0.60/0.95/0.73  0.56
D        pace   1.18/1.42/1.29  0.82/1.21/1.07  0.90/1.05/0.99  0.98/1.17/1.09  1.17/1.43/1.27  1.14
D        SNDS   0.97/1.28/1.15  0.75/1.08/0.94  0.82/0.90/0.87  0.92/1.04/0.96  0.94/1.19/1.00  0.98
E        pace   0.94/1.09/1.03  0.76/1.15/0.96  0.56/0.98/0.80  0.85/1.04/0.94  0.71/1.05/0.90  0.93
E        SNDS   0.75/1.01/0.84  0.57/1.00/0.75  0.44/0.77/0.63  0.63/0.82/0.73  0.43/0.90/0.68  0.71
Generally, this improvement in decoding performance is possible because the computational model
puts different types of constraints on the possible flexion predictions. In other words, the model may
not be able to produce all possible finger flexion patterns. However, the constraints that we put on
these finger flexions are based on actual natural finger flexions, and thus should not be limiting
for other natural flexions of individual fingers. To what extent the constraints used here
generalize to simultaneous movements of multiple fingers remains to be explored.
There are some directions in which this work could be further improved. First, to reduce the computational complexity caused by kernel density estimation, non-linear transition functions can be
used to model the continuous state transitions. Second, more efficient inference methods could be
developed to replace standard particle sampling. Finally, the methods presented in this paper could
be extended to allow for simultaneous decoding of all five fingers instead of one at a time.
References
[1] Bashashati, Ali, Fatourechi, Mehrdad, Ward, Rabab K., and Birch, Gary E. A survey of signal
processing algorithms in brain-computer interfaces based on electrical brain signals. J. Neural
Eng., 4(2):R32+, June 2007. ISSN 1741-2552. doi: 10.1088/1741-2560/4/2/R03.
[2] Ferguson, J. Variable duration models for speech. In Proc. Symp. on the Application of Hidden
Markov Models to Text and Speech, pp. 143–179, 1980.
[3] Frank, Eibe, Trigg, Leonard, Holmes, Geoffrey, and Witten, Ian H. Naive Bayes for regression.
In Machine Learning, pp. 5–26, 1998.
[4] Friedman, Nir, Goldszmidt, Moises, and Lee, Thomas J. Bayesian network classification with
continuous attributes: Getting the best of both discretization and parametric fitting. In ICML,
pp. 179–187. Morgan Kaufmann, 1998.
[5] Hall, Mark, Frank, Eibe, Holmes, Geoffrey, Pfahringer, Bernhard, Reutemann, Peter, and Witten,
Ian H. The WEKA data mining software: an update. SIGKDD Explor. Newsl., 11:10–18,
November 2009. ISSN 1931-0145.
[6] Isard, Michael and Blake, Andrew. Condensation - conditional density propagation for visual
tracking. International Journal of Computer Vision, 29:5–28, 1998.
[7] Kim, Kyung Hwan, Kim, Sung Shin, and Kim, Sung June. Superiority of nonlinear mapping in
decoding multiple single-unit neuronal spike trains: A simulation study. Journal of
Neuroscience Methods, 150(2):202–211, 2006. ISSN 0165-0270.
[8] Kubánek, J, Miller, K J, Ojemann, J G, Wolpaw, J R, and Schalk, G. Decoding flexion of
individual fingers using electrocorticographic signals in humans. J Neural Eng, 6(6):066001,
Dec 2009.
[9] Loader, Clive R. Bandwidth selection: Classical or plug-in? The Annals of Statistics, 27(2):
415–438, 1999. ISSN 00905364. URL http://www.jstor.org/stable/120098.
[10] Lotte, F, Congedo, M, Lécuyer, A, Lamarche, F, and Arnaldi, B. A review of classification
algorithms for EEG-based brain-computer interfaces. J Neural Eng, 4(2):R1–R13, Jun 2007.
[11] Marple, S. L. Digital spectral analysis: with applications. Prentice-Hall, Inc., Upper Saddle
River, NJ, USA, 1986. ISBN 0-132-14149-3.
[12] Muller, K.-R., Anderson, C.W., and Birch, G.E. Linear and nonlinear methods for brain-computer
interfaces. Neural Systems and Rehabilitation Engineering, IEEE Transactions on,
11(2):165–169, June 2003. ISSN 1534-4320. doi: 10.1109/TNSRE.2003.814484.
[13] Mulliken, Grant H., Musallam, Sam, and Andersen, Richard A. Decoding trajectories from
posterior parietal cortex ensembles. J. Neurosci., 28(48):12913–12926, 2008.
[14] Russell, M. and Moore, R. Explicit modelling of state occupancy in hidden Markov models
for automatic speech recognition. In ICASSP, volume 10, pp. 5–8, Apr 1985.
[15] Sanchez, Justin C., Erdogmus, Deniz, and Principe, Jose C. Comparison between nonlinear
mappings and linear state estimation to model the relation from motor cortical neuronal firing
to hand movements. In Proceedings of SAB Workshop, pp. 59–65, 2002.
[16] Schalk, G and Mellinger, J. A Practical Guide to Brain-Computer Interfacing with BCI2000.
Springer, 2010.
[17] Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N., and Wolpaw, J. R. BCI2000: a
general-purpose brain-computer interface (BCI) system. Biomedical Engineering, IEEE
Transactions on, 51(6):1034–1043, June 2004.
[18] Schalk, G, Kubánek, J, Miller, K J, Anderson, N R, Leuthardt, E C, Ojemann, J G, Limbrick, D,
Moran, D, Gerhardt, L A, and Wolpaw, J R. Decoding two-dimensional movement trajectories
using electrocorticographic signals in humans. J Neural Eng, 4(3):264–75, Sep 2007.
[19] Wolpaw, Jonathan R. Brain-computer interfaces (BCIs) for communication and control. In
ACM SIGACCESS, Assets '07, pp. 1–2. ACM, 2007.
[20] Wu, Wei, Black, Michael J., Gao, Yun, Bienenstock, Elie, Serruya, Mijail, Shaikhouni, Ali,
and Donoghue, John P. Neural decoding of cursor motion using a Kalman filter, 2003.
[21] Wu, Wei, Black, M.J., Mumford, D., Gao, Yun, Bienenstock, E., and Donoghue, J.P. A switching
Kalman filter model for the motor cortical coding of hand motion. In IEMBS, volume 3,
pp. 2083–2086, Sept. 2003. doi: 10.1109/IEMBS.2003.1280147.
Navigating through Temporal Difference
Peter Dayan
Centre for Cognitive Science &. Department of Physics
University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW
dayantcns.ed.ac.uk
Abstract
Barto, Sutton and Watkins [2] introduced a grid task as a didactic example of temporal difference planning and asynchronous dynamical programming. This paper considers the effects of changing the coding of the
input stimulus, and demonstrates that the self-supervised learning of a
particular form of hidden unit representation improves performance.
1 INTRODUCTION
Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an
agent moving around a finite grid such as the one in figure 1 (the agent is incapable
of crossing the barrier) trying to reach a goal whose position it does not know. If
it can predict how far away from the goal it is at the current step, and how far
away from the goal it is at the next step, after making a move, then it can decide
whether or not that move was helpful or harmful. If, in addition, it can record this
fact, then it can learn how to navigate to the goal. This generation of actions from
predictions is closely related to the mechanism of dynamical programming.
TD is used to learn the predictions in the first place. Consider the agent moving
around randomly on the grid, receiving a negative reinforcement of -1 for every
move it makes apart from moves which take it onto the goal. In this case, if it can
estimate, from every location it visits, how much reinforcement (discounted by how
soon it arrives) it will get before it next reaches the goal, it will be predicting how
far away it is, based on the random method of selecting actions. TD's mechanism
of learning is to force the predictions to be consistent; the prediction from location
a should be -1 more than the average of the predictions from the locations that can
be reached in one step (hence the extra -1 reinforcement) from a.
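This consistency condition is exactly the TD(0) update. A minimal sketch on a one-dimensional corridor (a stand-in for the grid; the geometry and parameters are illustrative): the learned values become monotonic in distance from the goal, which is what makes them usable for criticising actions.

```python
import random

def td0_corridor(n_states, episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value learning on states 0..n-1 with the goal at n-1.

    Each move costs -1 except the move onto the goal, matching the text,
    so V[s] approaches minus the expected number of steps to the goal
    under the random policy.
    """
    rng = random.Random(seed)
    V = [0.0] * n_states
    for _ in range(episodes):
        s = rng.randrange(n_states - 1)       # start anywhere but the goal
        while s != n_states - 1:
            s2 = max(0, min(n_states - 1, s + rng.choice([-1, 1])))
            r = 0.0 if s2 == n_states - 1 else -1.0
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2
    return V

values = td0_corridor(5)  # values increase monotonically toward the goal state
```

With values in hand, an action that moves to a higher-valued neighbour is judged helpful, exactly as described above.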
If the agent initially selects each action with the same probability, then the estimate
of future reinforcement from a will be monotonically related to how many steps a
is away from the goal. This makes the predictions useful for criticising actions as
above. In practice, the agent will modify its actions according to this criticism at
the same time as learning the predictions based on those actions.
Barto, Sutton and Watkins [2] develop this example, and show how the TD mechanism coupled with a punctate representation of the stimulus (referred to as R_BSW
below) finds the optimal paths to the goal. R_BSW ignores the cues shown in figure 1,
and devotes one input unit to each location on the grid, which fires if and only if
the agent is at that place.
TD methods can however work with more general codes. Section 2 considers alternative representations, including ones that are sensitive to the orientation of the
agent as it moves through the grid, and section 3 looks at a restricted form of latent learning - what the agent can divine about its environment in the absence of
reinforcement. Both techniques can improve the speed of learning.
2 ALTERNATE REPRESENTATIONS
Stimulus representations, the means by which the agent finds out from the environment where it is, can be classified along two dimensions; whether they are punctate
or distributed, and whether they are directionally sensitive or in register with the
world.
Over most of the grid, a 'sensible' distributed representation, such as a coarse-coded
one, would be expected to make learning faster, as information about the value and
action functions could be shared across adjacent grid points. There are points of
discontinuity in the actions, as in the region above the right hand arm of the barrier,
but they are few. In his PhD thesis [9], Watkins considers a rather similar problem
to that in figure 1, and solves it using his variant of TD, Q-learning, based on a CMAC
[1] coarse-coded representation of the space. Since his agent moves in a continuous
bounded space, rather than being confined merely to discrete grid points, something
of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes
the agent move ten times more slowly in a closed section of the space. This has a
similar effect to the barrier in inducing a discontinuity in the action space. Despite
the CMACS forcing the system to share information across such discontinuities, they
were able to learn the task quickly.
The other dimension over which representations may vary involves the extent to
which they are sensitive to the direction in which the agent is facing. This is of
interest if the agent must construe its location from the cues around the grid. In this
case, rather than moving North, South, East or West, which are actions registered
with the world, the agent should only move Ahead, Left or Right (Behind is disabled
as an additional constraint), whose effects are also orientation dependent. This,
together with the fact that the representation will be less compact (it having a
larger input dimensionality) should make learning slower. Dynamical programming
and its equivalents are notoriously subject to Bellman's curse of dimensionality, an
engineering equivalent of exponential explosion in search.
Table 1 shows four possible representations classified along these two dimensions.
                            Coarseness
                            Punctate    Distributed
Directionally sensitive     R_4X        R_A
Directionally insensitive   R_BSW       R_CMAC

Table 1: Representations.
R_BSW is the representation Barto, Sutton and Watkins used. R_4X is punctate
and directionally sensitive - it devotes four units to every grid point, one of which
fires for each possible orientation of the agent. R_CMAC, the equivalent of Watkins'
representation, was not simulated, because its capabilities would not differ markedly
from those of the mapping-based representation developed in the next section.
R_A is rather different from the other representations; it provides a test of a representation which is more
directly associated with the sensory information that might be
available directly from the cues. Figure 2 shows how R_A works. Various identifiable
cues, C1 ... Cc (c = 7 in the figure) are scattered around the outside of the grid,
and the agent has a fictitious 'retina' which rotates with it. This retina is divided
into a number of angular buckets (8 in the figure), and each bucket has c units, the
i-th one of which responds if the cue Ci is visible in that bucket. This representation
is clearly directionally sensitive (if the agent is facing a different way, then so is its
retina, and so no cue will be visible in the same bucket as it was before), and also
distributed, since in general more than one cue will be visible from every location.
Note that there is no restriction on the number of units that can fire in each bucket
at any time - more than one will fire if more than one cue is visible there. Also,
under the present system R_A will in general not work if its coding is ambiguous
- grid points must be distinguishable. Finally, it should be clear that R_A is not
biologically plausible.
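The construction of R_A can be sketched as follows; the coordinates, cue layout, and bucket count below are illustrative stand-ins for the geometry of figure 2.

```python
import math

def retina_code(agent_xy, heading, cues, n_buckets=8):
    """One unit per (angular bucket, cue); a unit fires when its cue's
    bearing relative to the agent's heading falls in its bucket.

    Several units may fire in one bucket if several cues are visible there.
    """
    code = [[0] * len(cues) for _ in range(n_buckets)]
    for i, (cx, cy) in enumerate(cues):
        bearing = math.atan2(cy - agent_xy[1], cx - agent_xy[0]) - heading
        bucket = int((bearing % (2.0 * math.pi)) / (2.0 * math.pi) * n_buckets)
        code[bucket % n_buckets][i] = 1
    return code

cues = [(10, 0), (0, 10), (-10, 0), (0, -10)]          # four landmark cues
facing_north = retina_code((0, 0), math.pi / 2, cues)  # the code rotates with the agent
facing_east = retina_code((0, 0), 0.0, cues)
```

Rotating the agent rotates the whole code, which is the directional sensitivity discussed above; the code is distributed because every cue contributes one firing unit.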
Figure 3 shows the learning curves for the three representations simulated. Each
point is generated by switching off the learning temporarily after a certain number
of iterations, starting the agent from everywhere in the grid, and averaging how
many steps it takes in getting to the goal over and above the minimum necessary. It
is apparent that R_4X is substantially worse, but, surprisingly, that R_A is actually
better than R_BSW. This implies that the added advantage of its distributed nature more than outweighs its disadvantages of having more components and being
directionally sensitive.
One of the motivations behind studying alternate representations comes from the experimental findings on place cells in the hippocampi of rats (amongst other species). These
are cells that fire only when the rat is at a certain location in its environment.
Although their existence has led to many hypotheses about rat cognitive mapping
(see [5] for a substantial discussion of place cells and mapping), it is important to
note that even with a map, there remains the computationally intensive problem of
navigation addressed, in this paper, by TD. R_A, being closely related to the input
stimuli, is quite unlike a place-cell code - the other representations all bear some
similarities.
3 GOAL-FREE LEARNING
One of the problems with the TD system as described is that it is incapable of latent
learning in the absence of reinforcement or a goal. If the goal is just taken away, but
the -1 reinforcements are still applied at each step, then the values assigned to each
location will tend to -∞. If both are removed, then although the agent will wander
about its environment with random gay abandon, it will not pick up anything that
could be used to speed subsequent learning. Latent learning experiments with rats
in dry mazes prove fairly conclusively that rats running mazes in the absence of
rewards and punishments learn almost as much as rats that are reinforced.
One way to solve this problem is suggested by Sutton's DYNA architecture [7].
Briefly, this constructs a map of place × action → next place, and takes steps
in the fictitious world constructed from its map in-between taking steps in the real
world, as a way of ironing out the computational 'bumps' (ie inconsistencies) in the
value and action functions.
Instead, it is possible to avoid constructing a complete map by altering the representation of the environment used for learning the prediction function and optimal
actions . The section on representations concluded that coarse-coded representations
are generally better than punctate ones, since information can be shared between
neighbouring points. However, not all neighbouring points are amenable to this
sharing, because of discontinuities in the value and action functions. If there were
a way of generating a coarse coded representation (generally from a punctate one)
that is sensitive to the structure of the task, rather than arbitrarily assigned by
the environment, it should provide the base for faster learning still. In this case,
neighbouring points should only be coded together if they are not separated by the
barrier. The initial exploration would allow the agent to learn this much about the
structure of the environment.
Consider a set of units whose job is to predict the future discounted sum of firings
of the raw input lines. Using R_BSW during the initial stage of learning when the
actions are still random, if the agent is at location (3,3) of the grid, say, then the
discounted prediction of how often it will be in (3,4) (ie the frequency with which
the single unit representing (3,4) will fire) will be high, since this location is close.
However, the prediction for (7,11) will be low, because it is very unlikely to get
there quickly. Consider the effect of the barrier: locations on opposite sides of it, eg
(1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid,
are far apart in the task. This means that the discounted prediction of how often
the agent will be at (1,6) given that it starts at (2,6), will be proportionately lower.
Overall, the prediction units should act like a coarse code, sensitive to the structure of the task. As required, this information about the environment is entirely
independent of whether or not the agent is reinforced during its exploration. In
fact, the resulting 'map' will be more accurate if it is not, as its exploration will be
more random. The output of the prediction units is taken as an additional source
of information for the value and action functions.
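These prediction units can be sketched with a TD rule of their own: each unit s' learns the discounted future frequency with which the punctate input for s' will fire (in later terminology, a successor-representation-like quantity). The corridor world and parameters below are illustrative; no goal or reinforcement is involved, matching the latent-learning setting.

```python
import random

def learn_prediction_units(n_states, gamma=0.9, alpha=0.1,
                           steps=20000, seed=0):
    """M[s][sp] ~ expected discounted future firings of input unit sp,
    given the agent is now at s, learned during a random walk.

    Nearby (task-reachable) states end up with large predictions and
    distant ones with small predictions, giving a task-shaped coarse code.
    """
    rng = random.Random(seed)
    M = [[0.0] * n_states for _ in range(n_states)]
    s = 0
    for _ in range(steps):
        s2 = max(0, min(n_states - 1, s + rng.choice([-1, 1])))
        for sp in range(n_states):
            fired = 1.0 if s2 == sp else 0.0
            M[s][sp] += alpha * (fired + gamma * M[s2][sp] - M[s][sp])
        s = s2
    return M

M = learn_prediction_units(6)
```

In the grid task, two locations on opposite sides of the barrier would be far apart under this measure even though they are close in Euclidean distance, which is exactly the property argued for in the text.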
Since their main aim is to create intelligently distributed representations from punctate ones, it is only appropriate to use these prediction units for R_BSW and R_4X.
Figure 4 compares average learning curves for R_BSW with and without these extra mapping units, and with and without 6000 steps of latent learning (LL) in the
absence of any reinforcement. A significant improvement is apparent.
Figure 5 shows one set of predictions based on the R_BSW representation¹ after a
few un-reinforced iterations. The predictions are clearly fairly well developed and
smooth - a predictable exponentially decaying hump. The only deviations from
this are at the barrier and along the edges, where the effects of impermeability and
immobility are apparent.
Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by
which time the agent reaches the goal almost optimally. The predictions degenerate
from being roughly radially symmetric (bar the barrier) to being highly asymmetric.
Once the agent has learnt how to get to the goal from some location, the path it will
follow, and so the locations it will visit from there, is largely fixed. The asymptotic
values of the predictions will therefore be 0 for units not on the path, and γ^r for
those on the path, where r is the number of steps since the agent's start point and
γ is the discounting factor weighting immediate versus distant reinforcement. This
is a severe limitation, since it implies that the topological information present in the
early stages of learning evaporates, and with it almost all the benefits
of the prediction units.
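This asymptotic value follows in one line. Writing x_{s'}(t) for the firing of the input unit of location s' at step t, and assuming the learned policy is deterministic so that each on-path location is visited exactly once, r steps after the start (a sketch of the indexing convention, not taken verbatim from the paper):

```latex
P(s') \;=\; \mathbb{E}\Big[\textstyle\sum_{t \ge 0} \gamma^{t}\, x_{s'}(t)\Big]
      \;=\;
\begin{cases}
  \gamma^{r} & \text{if } s' \text{ lies on the path, visited at step } r,\\
  0          & \text{otherwise.}
\end{cases}
```

All spatial structure off the single followed path is therefore lost, which is the degeneration visible between Figures 5 and 6.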
4 DISCUSSION
Navigation comprises two problems: where the agent and the goals in its environment are, and how it can get to them. Having some form of cognitive map, as is
suggested by the existence of place cells, addresses the first, but leaves open the
second. For the case of one goal, the simple TD method described here is one
solution.
TD planning methods are clearly robust to changes in the way the input stimulus is represented. Distributed codes, particularly ones that allow for the barrier,
make learning faster. This is even true for R_A, which is sensitive to the orientation
of the agent. All these results require each location to have a unique representation - Mozer and Bachrach [4] and Chrisley [3] and references therein look at how
ambiguities can be resolved using information on the sequence of states the agent
traverses.
Since these TD planning methods are totally general, just like dynamical programming, they are unlikely to scale well. Some evidence for this comes from the relatively poor performance of R_4X, with its quadrupled input dimension. This puts
the onus back either onto dividing the task into manageable chunks, or onto more
sophisticated representation.
Acknowledgements
I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell,
David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Connectionist Group at Amherst, and a spatial learning workshop at King's College
Cambridge for their helpful comments. This work was funded by SERC.
¹Note that these are normalised to a maximum value of 10, for graphical convenience.
References
[1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model
Articulation Controller (CMAC). Transactions of the ASME: Journal
of Dynamical Systems, Measurement and Control, 97, pp 220-227.
[2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential
Decision Making. Technical Report 89-95, Computer and Information Science,
University of Massachusetts, Amherst, MA.
[3] Chrisley, RL (1990). Cognitive map construction and use: A parallel distributed approach. In DS Touretzky, J Elman, TJ Sejnowski, & GE Hinton,
editors, Proceedings of the 1990 Connectionist Models Summer School. San
Mateo, CA: Morgan Kaufmann.
[4] Mozer, MC, & Bachrach, J (1990). Discovering the structure of a reactive
environment by exploration. In D Touretzky, editor, Advances in Neural Information Processing Systems 2, pp 439-446. San Mateo, CA: Morgan Kaufmann.
[5] O'Keefe, J & Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford,
England: Oxford University Press.
[6] Sutton, RS (1988). Learning to predict by the methods of temporal differences.
Machine Learning, 3, pp 9-44.
[7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting
based on approximating dynamic programming. In Proceedings of the Seventh
International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann.
[8] Sutton, RS, & Barto, AG. To appear. Time-derivative models of Pavlovian
conditioning. In M Gabriel & JW Moore, editors, Learning and Computational
Neuroscience. Cambridge, MA: MIT Press.
[9] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England.
[Fig 1: The grid task - goal, barrier, and identifiable cues C1-C7 placed around the grid; the agent has an orientation.]
[Fig 2: The 'retina' for R_A - the cues C1-C7 fall into angular buckets of the rotating retina; a dot marks a firing unit.]
Dayan
[Figure content lost in extraction; both plots show average extra steps to goal against learning iterations (1 to 1000), with curve labels 4X, BSW, 'No map', 'Map, DO LL' and 'Map, LL'.]
Fig 3: Different representations
Fig 4: Mapping with 'R BSW
Fig 5: Initial predictions from (5,6)
Fig 6: Predictions after 2000 iterations
An Application of Tree-Structured Expectation Propagation for Channel Decoding
Pablo M. Olmos†, Luis Salamanca†, Juan J. Murillo-Fuentes†, Fernando Pérez-Cruz‡
†
Dept. of Signal Theory and Communications, University of Sevilla
41092 Sevilla, Spain
{olmos,salamanca,murillo}@us.es
‡
Dept. of Signal Theory and Communications, University Carlos III in Madrid
28911 Leganés (Madrid), Spain
[email protected]
Abstract
We show an application of a tree structure for approximate inference in graphical
models using the expectation propagation algorithm. These approximations are
typically used over graphs with short-range cycles. We demonstrate that these
approximations also help in sparse graphs with long-range loops, as the ones
used in coding theory to approach channel capacity. For asymptotically large sparse graphs, the expectation propagation algorithm together with the tree structure yields a completely disconnected approximation to the graphical model but, for finite-length practical sparse graphs, the tree-structure approximation to the
code graph provides accurate estimates for the marginal of each variable. Furthermore, we propose a new method for constructing the tree structure on the fly that
might be more amenable for sparse graphs with general factors.
1 Introduction
Belief propagation (BP) has become the standard procedure to decode channel codes, since in 1996
MacKay [7] proposed BP to decode codes based on low-density parity-check (LDPC) matrices with
linear complexity. A rate r = k/n LDPC code can be represented as a sparse factor graph with
n variable nodes (typically depicted on the left side) and n − k factor nodes (on the right side), in
which the number of edges is linear in n [15]. The first LDPC codes [6] presented a regular structure,
in which all variables and factors had, respectively, ` and r connections, i.e. an (`, r) LDPC code.
But the analysis of their limiting decoding performance, when n tends to infinity for a fixed rate,
showed that they do not approach the channel capacity [15]. To improve the performance of regular
LDPC codes, we can define an (irregular) LDPC ensemble as the set of codes randomly generated
according to the degree distribution (DD) from the edge perspective as follows:
$$\lambda(x) = \sum_{i=1}^{l_{\max}} \lambda_i x^{i-1} \qquad \text{and} \qquad \rho(x) = \sum_{j=1}^{r_{\max}} \rho_j x^{j-1},$$

where the fraction of edges with left degree i (from variables to factors) is given by λ_i and the fraction of edges with right degree j (from factors to variables) is given by ρ_j. The left (right) degree of an edge is the degree of the variable (factor) node it is connected to. The rate of the code is then given by $r = 1 - \int_0^1 \rho(x)\,dx \big/ \int_0^1 \lambda(x)\,dx$, and the total number of edges by $E = n/(\sum_i \lambda_i/i)$.
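These ensemble formulas are easy to check numerically. The sketch below is an illustration of the rate and edge-count expressions above (our own code, not the paper's software), evaluated for the regular (3, 6) ensemble, where λ(x) = x² and ρ(x) = x⁵:

```python
from fractions import Fraction

def design_rate(lam, rho):
    """Design rate r = 1 - (int_0^1 rho(x)dx) / (int_0^1 lam(x)dx).

    lam, rho map a degree d to the fraction of edges with that left/right
    degree; since lam(x) = sum_i lam_i x^(i-1), int_0^1 lam(x)dx = sum_i lam_i/i.
    """
    int_lam = sum(Fraction(li) / i for i, li in lam.items())
    int_rho = sum(Fraction(rj) / j for j, rj in rho.items())
    return 1 - int_rho / int_lam

def num_edges(lam, n):
    """Total number of edges E = n / (sum_i lam_i / i)."""
    return n / float(sum(Fraction(li) / i for i, li in lam.items()))

# Regular (3, 6) ensemble: every edge has left degree 3 and right degree 6,
# i.e. lam(x) = x^2 and rho(x) = x^5.
lam, rho = {3: 1}, {6: 1}
print(design_rate(lam, rho))   # 1 - (1/6)/(1/3) = 1/2
print(num_edges(lam, 1024))    # 1024/(1/3) = 3072.0
```

For a regular (ℓ, r) ensemble this reduces to the familiar r = 1 − ℓ/r and E = nℓ.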
Although optimized irregular LDPC codes can achieve the channel capacity with a decoder based on
BP [15], they present several drawbacks. First, the error floor in those codes increases significantly,
because capacity achieving LDPC ensembles with BP decoding have a large fraction of variables
with two connections and they present low minimum distances. Second, the maximum number of
ones per column, l_max, tends to infinity to approach capacity. These problems limit the BP decoding
performance of capacity approaching codes, when we work with finite length codes used in real
applications.
Approximate inference in graphical models can be solved using more accurate methods that significantly improve the BP performance, especially for dense graphs with short-range loops. A non-exhaustive list of methods is: generalized BP [22], expectation propagation (EP) [10], fractional BP [19], linear programming [17] and power EP [8]. A detailed list of contributions for approximate inference can be found in [18] and the references therein. But it is a common belief that BP
is sufficiently accurate to decode LDPC codes and other approximate inference algorithms would
not outperform BP decoding significantly, if at all. In this paper, we challenge that belief and show
that more accurate approximate inference algorithms for graphical models can also improve the BP
decoding performance for LDPC codes, which are sparse graphical models with long-range loops.
We particularly focus on tree-structured approximations for inference in graphical models [9] using
the expectation propagation (EP) algorithm, because it presents a simple algorithmic implementation
for LDPC decoding transmitted over the binary erasure channel (BEC)¹, although other higher-order inference algorithms might be suitable for this problem as well, since a connection between some of them was proven in [20]. We show the results for the BEC, because it has a simple structure amenable
for deeper analysis and most of its properties carry over to actual communications channels [14].
The EP with a tree-structured approximation can be presented in a similar way as the BP decoder
for an LDPC code over the BEC [11], with similar run-time complexity. We show that a decoder
based on EP with a tree-structured approximation converges to the BP solution for the asymptotic
limit n ? ?, for finite-length graphs the performance is otherwise improved significantly [13,
11]. For finite graphs, the presence of cycles in the graph degrades the BP estimate and we show
that the EP solution with a tree-structured approximation is less sensitive to the presence of such
loops and provides more accurate estimates for the marginal of each bit. This makes the expectation propagation with a tree-structured approximation (for short, we refer to this algorithm as tree-structured EP or TEP) a more practical decoding algorithm for finite-length LDPC codes.
Besides, the analysis of the application of the tree-structured EP to channel decoding over the BEC
leads to another way of fixing the approximating tree structure different from the one proposed in
[9] for dense codes with positive correlation potentials. In channel coding, the factors of the graph
are parity-checks and the correlations are high but can change from positive to negative by the flip of
a single variable. Therefore, the pair-wise mutual information is zero for any two variables (unless
the factor only contains two variables) and we could not define a prefixed tree structure with the
algorithm in [9]. In contrast, we propose a tree structure that is learnt on the fly based on the graph
itself, hence it might be amenable for other potentials and sparser graphs.
The rest of the paper is organized as follows. In Section 2, we present the peeling decoder, which is
the interpretation of the BP algorithm for LDPC codes over the BEC, and how it can be extended to
incorporate the tree-structured EP decoding procedure. In Section 3, we analyze the TEP decoder
performance for LDPC codes in both the asymptotic and the finite-length regimes. We provide an
estimation of the TEP decoder error rate for a given LDPC ensemble. We conclude the paper in
Section 5.
2 Tree-structured EP and the peeling decoder
The BP algorithm was proposed as a message passing algorithm [5] but, for the BEC, it exhibits
a simpler formulation, in which the non-erased variable nodes are removed from the graph in each
iteration [4], because we either have absolute certainty about the received bit (0 or 1) or complete
ignorance (?). The BP under this interpretation is referred to as the peeling decoder (PD) [3, 15] and
it is easily described using the factor graph of the code. The first step is to initialize the graph by
removing all the variable nodes corresponding to non-erased bits. When removing a one-valued non-erased variable node, the parities of the factors it was connected to are flipped. After the initialization
1 The BEC allows binary transmission, in which the bits are either erased with probability ε or arrive without error otherwise. The capacity for this channel is 1 − ε and is achieved with equiprobable inputs [2].
stage, the algorithm proceeds over the resulting graph by removing a factor and a variable node in
each step:
1. It looks for any factor linked to a single variable (a check node of degree one). The peeling
decoder copies the parity of this factor into the variable node and removes the factor.
2. It removes the variable node that we have just de-erased. If the variable was assigned a one,
it changes the parity of the factors it was connected to.
3. It repeats Steps 1 and 2 until all the variable nodes have been removed, successful decoding,
or until there are no degree-one factors left, unsuccessful decoding.
We illustrate an example of the PD for a 1/2-rate code with four variables in Figure 1. The first and
last bits have not been erased and when we remove them from the graph, the second factor is singly connected to the third variable, which can now be de-erased (Figure 1(b)). Finally, the first factor is singly connected to the second variable, decoding the transmitted codeword (Figure 1(c)).
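The three steps above can be sketched directly on a residual-graph representation. The code below is a minimal illustrative implementation of the peeling decoder (the data layout and names are our own choices, not from [3, 15]):

```python
def peeling_decode(checks, received):
    """Peeling decoder (BP over the BEC).

    checks:   list of lists; checks[j] holds the variable indices of check j.
    received: list with entries 0, 1 or None (None = erasure '?').
    Returns the de-erased word; entries stay None if the decoder gets stuck.
    """
    word = list(received)
    parity = [0] * len(checks)            # current parity of each factor
    residual = [set(c) for c in checks]   # erased variables still attached
    # Initialization: remove the known variables, flipping parities for ones.
    for j, c in enumerate(checks):
        for v in c:
            if word[v] is not None:
                residual[j].discard(v)
                parity[j] ^= word[v]
    progress = True
    while progress:
        progress = False
        for j in range(len(checks)):
            if len(residual[j]) == 1:     # Step 1: a degree-one factor
                v = residual[j].pop()
                word[v] = parity[j]       # copy its parity into the variable
                for k in range(len(checks)):   # Step 2: remove the variable
                    if v in residual[k]:
                        residual[k].discard(v)
                        parity[k] ^= word[v]
                progress = True
    return word

# Toy code with checks v0+v1 = 0 and v1+v2+v3 = 0; bits v1, v2 erased.
print(peeling_decode([[0, 1], [1, 2, 3]], [0, None, None, 1]))  # [0, 0, 1, 1]
```

In the toy run, the first check becomes degree one after initialization, de-erases v1, and that in turn leaves the second check singly connected to v2.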
[Figure content lost in extraction; the three panels show variables V1-V4 with observations p(Y1 = 0|V1), p(Y2 = ?|V2), p(Y3 = ?|V3), p(Y4 = 1|V4) and checks P1, P2 at successive peeling stages.]
Figure 1: Example of the PD algorithm for LDPC channel decoding in the erasure channel.
The analysis of the PD for fixed-rate codes, proposed in [3, 4], makes it possible to compute its threshold in the
BEC. This result can be used to optimize the DD to build irregular LDPC codes that, as n tends to
infinity, approach the channel capacity. However, as already discussed, these codes present higher
error floors, because they present many variables with only two edges, and they usually present poor
finite-length performance due to the slow convergence to the asymptotic limit [15].
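A standard way to compute this threshold numerically (an illustration of density evolution over the BEC, not the expected-graph-evolution method of [3, 4]) is to iterate the fixed-point recursion x_{t+1} = ε·λ(1 − ρ(1 − x_t)) and bisect on ε; the function names below are our own:

```python
def bp_threshold(lam_poly, rho_poly, tol=1e-4, iters=5000):
    """BP/PD threshold over the BEC by density evolution plus bisection.

    Iterates x_{t+1} = eps * lam(1 - rho(1 - x_t)) from x_0 = eps and
    declares success if the erased fraction has (numerically) vanished.
    """
    def decodes(eps):
        x = eps
        for _ in range(iters):
            x = eps * lam_poly(1.0 - rho_poly(1.0 - x))
        return x < 1e-4

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if decodes(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Regular (3, 6): lam(x) = x^2, rho(x) = x^5; the known threshold is ~ 0.4294.
eps_bp = bp_threshold(lambda x: x ** 2, lambda x: x ** 5)
print(eps_bp)
```

The bisection recovers the value ε_BP ≈ 0.4294 quoted later in the text for the regular (3, 6) ensemble.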
2.1 The TEP decoder
The tree-structured EP overlaps a tree over the variables on the graph to further impose pairwise
marginal constraints. In the procedure proposed in [9] the tree was defined by measuring the mutual
information between a pair of variables, before running the EP algorithm. The mutual information
between a pair of variables is zero for parity-check factors with more than two variables, so we need to define the structure in another way. We propose to define the tree structure on the fly. Let's assume that we run the PD in the previous section and it yields an unsuccessful decoding. Any factor
of degree two in the remaining graph either tells us that the connected variables are equal (if the
parity check is zero), or opposite (if the parity check is one). We should link these two variables
by the tree structure, because their marginal would provide further information to the remaining
erased variables in the graph. The proposed algorithm actually replaces one variable by the other
and iterates until a factor of degree one is created and more variables can be de-erased. When
this happens a tree structure has been created, in which the pairwise marginal constraint provides
information that was not available with single-marginal approximations.
The TEP decoder can be explained in a similar fashion as the PD decoder, in which instead of
looking for degree-one factors, we look for factors of degree one and two. We initialize the TEP decoder, as
the PD, by removing all known variable nodes and updating the parity checks for the variables that
are one. The TEP then removes a variable and a factor per iteration:
1. It looks for a factor of degree one or two.
2. If a factor of degree one is found, the TEP recovers the associated variable, performing the
Steps 1 and 2 of the PD previously described.
3. If a factor of degree two is found, the decoder removes it from the graph together with
one of the variable nodes connected to it and the two associated edges. Then, it reconnects
3
to the remaining variable node all the factors that were connected to the removed variable
node. The parities of the factors re-connected to the remaining variable node are reversed
if the removed factor had parity one.
4. Steps 1-3 are repeated until all the variable nodes have been removed, successful decoding,
or the graph runs out of factors of degree one or two, unsuccessful decoding.
The process of removing a factor of degree two is sketched in Figure 2. First, the variable V1 inherits the connections of V2 (solid lines), see Figure 2(b). Finally, the factor P1 and the variable V2
can be removed (Figure 2(c)), because they have no further implication in the decoding process. V2
is de-erased once V1 is de-erased. The TEP removes a factor and a variable node per iteration, as
the PD does. The removal of a factor and a variable does not increase the complexity of the TEP
decoder compared to the BP algorithm. Both TEP and BP algorithms have complexity O(n).
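Step 3 amounts to a GF(2) substitution: the removed variable is expressed as the remaining one plus the parity of the removed check. The sketch below is our own minimal representation of this step, reproducing the scenario of Figure 3:

```python
def tep_step(residual, parity, u, v, p):
    """TEP removal of a degree-two check with parity p linking variables u, v.

    Eliminates v through the GF(2) substitution v = u XOR p: every remaining
    check that contained v now contains u instead, and its parity absorbs p.
    If a check contained both u and v, the pair cancels (u XOR u = 0) and the
    check loses two edges -- this is how new degree-one checks appear.
    Returns (u, p) so that v can be recovered later as v = u XOR p.
    """
    for k in range(len(residual)):
        if v in residual[k]:
            residual[k].discard(v)
            parity[k] ^= p
            if u in residual[k]:
                residual[k].discard(u)   # edges cancel in GF(2)
            else:
                residual[k].add(u)
    return (u, p)

# Scenario of Figure 3: besides the removed degree-two check {V1, V2} with
# parity 0, the two variables share the degree-three check P4 = {V1, V2, V3}.
residual, parity = [{1, 2, 3}], [0]
tep_step(residual, parity, u=1, v=2, p=0)
print(residual, parity)   # [{3}] [0] -> P4 became a degree-one check on V3
```

Once V1 is later de-erased, V2 follows from the stored relation V2 = V1 XOR p.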
[Figure content lost in extraction; the three panels show variables V1, V2 and checks P1 (degree two), P2, P3.]
Figure 2: In (a) we show two variable nodes, V1 and V2, that share a factor of degree two, P1. In (b), V1 inherits the connections of V2 (solid lines). In (c), we show the graph once P1 and V2 have been removed. If P1 has parity one, the parities of P2 and P3 are reversed.
By removing factors of degree two, we eventually create factors of degree one, whenever we find a scenario equivalent to the one depicted in Figure 3. Consider two variable nodes connected
to a factor of degree two that also share another factor with degree three, as illustrated in Figure
3(a). When we remove the factor P3 and the variable node V2 , the factor P4 is now degree one,
as illustrated in Figure 3(b). At the beginning of the decoding algorithm, it is unlikely that the two
variable nodes in a factor of degree two also share a factor of degree three. However, as we remove
variables and factors, the probability of this event grows.
Note that, when we remove a factor of degree two connected to variables V1 and V2 , in terms of
the EP algorithm, we are including a pairwise factor between both variables. Therefore, the TEP
equivalent tree structure is not fixed a priori and we construct it along the decoding process. Also, the
steps of the TEP decoder can be presented as a linear combination of the columns of the parity-check
matrix of the code and hence its solution is independent of the processing order.
[Figure content lost in extraction; the two panels show variables V1, V2, V3 and checks P1-P4 before and after the removal.]
Figure 3: In (a), the variables V1 and V2 are connected to a degree-two factor, P3, and they also share a factor of degree three, P4. In (b) we show the graph once the TEP has removed P3 and V2.
3 TEP analysis: expected graph evolution
We now sketch the proof of why the TEP decoder outperforms BP. The actual proof can be found in [12] (available as supplementary material). Both the PD and the TEP decoder sequentially reduce
the LDPC graph by removing check nodes of degree one or two. As a consequence, the decoding
process yields a sequence of residual graphs and their associated DD. The DD sequence of the
residual graphs constitutes a sufficient statistic to analyze this random process [1]. In [3, 4], the
sequence of residual graphs follows a typical path or expected evolution [15]. The authors make use
of Wormald's theorem in [21] to describe this path as the solution of a set of differential equations and characterized the typical deviation from it. For the PD, we have an analytical form for the evolution of the number of degree-one factors as the decoding progresses, r1(τ, ε), as a function of the decoding time, τ, and the erasure rate, ε. The PD threshold ε_BP is the maximum ε for which r1(τ, ε) > 0, ∀τ. In [1, 15], the authors show that particular decoding realizations are Gaussian distributed around r1(τ, ε), with a variance of order δ_BP/n, where δ_BP can be computed from the LDPC DD. They also provide the following approximation to the block error probability of elements of an LDPC ensemble:

$$\mathbb{E}_{\mathrm{LDPC}[\lambda(x),\rho(x),n]}\left[P_W^{BP}(\mathcal{C}, \epsilon)\right] \approx Q\!\left(\frac{\sqrt{n}\,(\epsilon_{BP} - \epsilon)}{\alpha_{BP}}\right), \qquad (1)$$

where P_W^BP(C, ε) is the average block error probability for the code C ∈ LDPC[λ(x), ρ(x), n]. For
the TEP decoder, the analysis follows a similar path, but its derivation is more involved. For arbitrarily large codes, the expected graph evolution during the TEP decoding is computed in [12], with a set of non-linear differential equations. They track down the expected progression of the fraction of edges with left degree i, l_i(τ) for i = 1, ..., l_max, and right degree j, r_j(τ) for j = 1, ..., r_max, as the TEP decoder proceeds, where τ is a normalized time: if u is the TEP iteration index and E is the total number of edges in the original graph, then τ = u/E. By Wormald's theorem [21], any real decoding realization does not differ from the solution of such equations by a factor larger than O(E^{-1/6}). The TEP threshold, ε_TEP, is found as the maximum erasure rate ε such that

$$r_{TEP}(\tau) \doteq r_1(\tau) + r_2(\tau) > 0, \qquad \forall \tau \in [0, n/E], \qquad (2)$$

where r_TEP(τ) is computed by solving the system of differential equations in [12], and ε_TEP ≥ ε_BP.
Let us illustrate the accuracy of the model derived to analyze the TEP decoder properties. In Figure 4(a), for a regular (3, 6) code with n = 2^17 and ε = 0.415, we compare the solution of the system of differential equations for R1(τ) = r1(τ)E and R2(τ) = r2(τ)E, depicted by thick solid lines, with 30 simulated decoding trajectories, depicted by thin dashed lines. We can see that the empirical curves are tightly distributed around the predicted curves. Indeed, the distribution tends very quickly in n to a Gaussian [1, 15]. All curves are plotted with respect to the evolution of the normalized size of the graph at each time instant, denoted by e(τ), so that the decoding process starts on the right at e(τ = 0) ≈ 0.415 and, if successful, finishes at e(τ_END) = 0. In Figure 4(b) we reproduce, with identical conclusions, the same experiment for the irregular DD LDPC code defined by:

$$\lambda(x) = \frac{5}{6}x + \frac{1}{6}x^3, \qquad (3)$$

$$\rho(x) = x^5. \qquad (4)$$
For the TEP decoder to perform better than the BP decoder, it needs to significantly increase the
number of check nodes of degree one that are created, which happens if two variable nodes share a degree-two check together with a degree-three check node, as illustrated earlier in Figure 3(a). In [12], we compute the probability that two variable nodes that share a check node of degree two also share another check node (scenario S). If we randomly choose a particular degree-two check node at time τ, the probability of scenario S is:

$$P_S(\tau) = \frac{(l_{avg}(\tau) - 1)^2 (r_{avg}(\tau) - 1)}{e(\tau) E}, \qquad (5)$$

where l_avg(τ) and r_avg(τ) are, respectively, the average left and right edge degrees, and e(τ) is the fraction of remaining edges in the graph. As the TEP decoder progresses, l_avg(τ) increases, because the remaining variables in the graph inherit the connections of the variables that have been removed, and e(τ) decreases, therefore creating new factors of degree one and improving on the BP/PD performance. However, note that in the limit n → ∞, P_S(τ = 0) = 0. Therefore, to improve the PD solution in this regime we require that l_avg(τ′) → ∞ for some τ′. The solution of the TEP decoder differential equations does not satisfy this property. For instance, in Figure 5(a), we plot the expected evolution of r1(τ) and r2(τ) for n → ∞ and the (3, 6) regular LDPC ensemble when we are just above the BP threshold for this code, which is ε_BP ≈ 0.4294. Unlike Figure 4(a), r1(τ) and r2(τ) go to zero before e(τ) cancels: the TEP decoder gets stuck before completing the decoding process. In Figure 5(b), we include the computed evolution of l_avg(τ). As shown, the fraction of degree-two check nodes vanishes before l_avg(τ) becomes infinite. We conclude that, in the asymptotic limit n → ∞, the EP with a tree structure is not able to outperform the BP solution, which is optimal since LDPC codes become cycle-free [15].
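Equation (5) makes the finite-length advantage explicit: at the start of decoding, P_S is of order 1/n, so the gain over BP is largest for short codes. A quick numerical check (illustrative only; the function name is ours):

```python
def p_scenario_s(l_avg, r_avg, e_frac, E):
    """Probability (5) that a randomly chosen degree-two check links two
    variables that also share another check node (scenario S)."""
    return (l_avg - 1) ** 2 * (r_avg - 1) / (e_frac * E)

# Regular (3, 6) code at the start of decoding: l_avg = 3, r_avg = 6,
# e = 1 and E = 3n edges, so P_S = 20 / (3n) -> 0 as n grows.
for n in (2 ** 9, 2 ** 12, 2 ** 17):
    print(n, p_scenario_s(3, 6, 1.0, 3 * n))
```

As decoding proceeds, l_avg grows and e shrinks, so P_S increases well above this initial value.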
[Figure content lost in extraction; both panels plot R_i(τ), i = 1, 2, on a logarithmic scale against the residual graph normalized size e(τ).]
Figure 4: In (a), for a regular (3, 6) code with n = 2^17 and ε = 0.415, we compare the solution of the system of differential equations for R1(τ) = r1(τ)E and R2(τ) = r2(τ)E (thick solid lines) with 30 simulated decoding trajectories (thin dashed lines). In (b), we reproduce the same experiment for the irregular LDPC in (3) and (4) for ε = 0.47.
[Figure content lost in extraction; panel (a) plots r_i(τ), i = 1, 2, and panel (b) plots l_avg(τ), against the residual graph normalized size e(τ).]
Figure 5: For the regular (3, 6) ensemble and ε_BP ≈ 0.4294, in (a) we plot the expected evolution of r1(τ) and r2(τ) for n → ∞. In (b), we include the computed evolution of l_avg(τ) for this case.
3.1 Analysis in the finite-length regime
In the finite-length regime, the TEP decoder emerges as a powerful decoding algorithm. At a complexity similar to BP, i.e. of order O(n), it is able to further improve the BP solution thanks to a more
accurate estimate of the marginal for each bit. We illustrate the TEP decoder performance for some
regular and irregular finite-length LDPC codes. We first consider a rate 1/2 regular (3, 6) LDPC
code. This ensemble has no asymptotic error floor [15] and we plot the word error rate obtained
with the TEP and the BP decoders with different code lengths in Figure 6(a). In Figure 6(b), we
include the results for the irregular DD in (3) and (4), where we can see that in all cases BP and TEP
converge to the same error floor but, as in previous examples, the TEP decoder provides significant
gains in the waterfall region, and these gains are larger for shorter codes.
[Figure content lost in extraction; both panels plot the word error rate against the channel erasure probability ε.]
Figure 6: TEP (solid lines) and BP (dashed lines) decoding performance for a regular (3, 6) LDPC code in (a), and the irregular LDPC in (3) and (4) in (b), with code lengths n = 2^9, 2^10, 2^11 and 2^12.
The expected graph evolution during the TEP decoding in [12], which provides the average presence in the graph of degree-one and degree-two check nodes as the decoder proceeds, can be used to derive a coarse estimation of the TEP decoder probability of error for a given LDPC ensemble, similar to (1) for the BP decoder. Using the regular (3, 6) code as an example, in Figure 5(a) we plot the solution for r1(τ) in the case n → ∞. Let τ* be the time at which the decoder gets stuck, i.e. r1(τ*) + r2(τ*) = 0. In Figure 7, we plot the solution for the evolution of r1(τ, n, ε_BP) with respect to e(τ) for a (3, 6) regular code at ε = ε_BP = ε_TEP. To avoid confusion, in the following we explicitly include the dependence on n and ε in r1(τ, n, ε). The code lengths considered are n = 2^12, 2^13, 2^14, 2^15, 2^16 and 2^17. For finite-length values, we observe that r1(τ*, n, ε_BP) is not zero and, indeed, a closer look shows that the following approximation is reasonably tight:

$$r_1(\tau^*, n, \epsilon_{TEP}) \approx \gamma_{TEP}\, n^{-1}, \qquad (6)$$

where we compute γ_TEP from the ensemble. For the (3, 6) regular case, we obtain γ_TEP ≈ 0.3198 [12]. The idea to estimate the TEP decoder performance at ε = ε_BP + Δε is to assume that any particular realization will succeed almost surely as long as the fraction of degree-one check nodes at τ* is positive. For ε = ε_BP + Δε, we can approximate r1(τ*, n, ε) as follows:

$$r_1(\tau^*, n, \epsilon) = \left.\frac{\partial r_1(\tau, n, \epsilon)}{\partial \epsilon}\right|_{\tau=\tau^*,\,\epsilon=\epsilon_{TEP}} \Delta\epsilon + \gamma_{TEP}\, n^{-1}. \qquad (7)$$

In [1, 15], it is shown that simulated trajectories for the evolution of degree-one check nodes under BP are asymptotically Gaussian distributed, and this is observed for the TEP decoder as well. Furthermore, the variance is of order δ(τ)/n, where δ(τ) depends on the ensemble and the decoder [1]. To estimate the TEP decoder error rate, we compute the probability that the fraction of degree-one check nodes at τ* is positive. Since it is distributed as N(r1(τ*, n, ε_TEP), δ(τ*)/n), we get

$$\mathbb{E}_{\mathrm{LDPC}[\lambda(x),\rho(x),n]}\left[P_W^{TEP}(\mathcal{C}, \epsilon)\right] \approx 1 - Q\!\left(\frac{\left.\frac{\partial r_1(\tau,n,\epsilon)}{\partial \epsilon}\right|_{\tau=\tau^*,\,\epsilon=\epsilon_{TEP}} \Delta\epsilon + \gamma_{TEP}\, n^{-1}}{\sqrt{\delta(\tau^*)/n}}\right) = Q\!\left(\frac{\sqrt{n}\,(\epsilon_{TEP} - \epsilon)}{\alpha_{TEP}} + \frac{\gamma_{TEP}}{\sqrt{n\,\delta(\tau^*)}}\right), \qquad (8)$$

where

$$\alpha_{TEP} = \sqrt{\delta(\tau^*)}\left(\left.\frac{\partial r_1(\tau, n, \epsilon)}{\partial \epsilon}\right|_{\tau=\tau^*,\,\epsilon=\epsilon_{TEP}}\right)^{-1}. \qquad (9)$$

Finally, note that, since for n → ∞ we know that the TEP and the BP decoders converge to the same solution, we can approximate α_TEP ≈ α_BP. Besides, we have empirically observed that the variances of trajectories under BP and TEP decoding are quite similar so, for simplicity, we set δ(τ*) in (8) equal to its BP value, whose analytic solution can be found in [16, 1]. Hence, we consider the TEP decoder expected evolution to estimate the parameter γ_TEP in (8). In Figure 7(b), we compare the TEP performance for the regular (3, 6) ensemble (solid lines) with the approximation in (8) (dashed lines), using the approximation α_TEP ≈ α_BP = 0.56036, δ(τ*) ≈ 0.0526 and γ_TEP ≈ 0.3198. We plot the results for code lengths of n = 2^9, 2^10, 2^11 and 2^12. As we can see, for the shortest code length, the model seems to slightly over-estimate the error probability, but this mismatch vanishes for the rest of the cases, obtaining a tight estimate.
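With the constants quoted above for the regular (3, 6) ensemble, the scaling-law estimate can be evaluated directly. The sketch below follows our reconstruction of (8) (symbol names are ours) and is an illustration, not the paper's software:

```python
import math

def q_func(x):
    """Gaussian tail Q(x) = P(N(0, 1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tep_word_error(n, eps, eps_th=0.4294, alpha=0.56036, delta=0.0526,
                   gamma=0.3198):
    """Scaling-law estimate (8) of the TEP block error rate:
    Q( sqrt(n) * (eps_th - eps) / alpha + gamma / sqrt(n * delta) ).
    Default constants are those quoted for the (3, 6) ensemble."""
    arg = math.sqrt(n) * (eps_th - eps) / alpha + gamma / math.sqrt(n * delta)
    return q_func(arg)

# Word error estimates for the (3, 6) ensemble at eps = 0.40.
for n in (2 ** 9, 2 ** 10, 2 ** 11, 2 ** 12):
    print(n, tep_word_error(n, eps=0.40))
```

The positive γ_TEP/√(nδ(τ*)) term is the finite-length bonus of the TEP decoder: it enlarges the Q-function argument, and it fades as n grows, consistent with the waterfall gains observed for short codes.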
[Figure content lost in extraction; panel (a) plots r1(τ, n, ε_TEP) against e(τ) for several code lengths, and panel (b) plots the word error rate against the channel erasure probability ε.]
Figure 7: In (a), we plot the solution for r1(τ) with respect to e(τ) for a (3, 6) regular code at ε = ε_BP = ε_TEP, for n = 2^12, 2^13, 2^14, 2^15, 2^16, 2^17 and n → ∞. In (b), we compare the TEP performance for the regular (3, 6) ensemble (solid lines) with the approximation in (8) (dashed lines), using α_TEP ≈ α_BP = 0.56036, δ(τ*) ≈ 0.0526 and γ_TEP ≈ 0.3198, for code lengths n = 2^9, 2^10, 2^11 and 2^12.
4 Conclusions
In this paper, we consider a tree structure for approximate inference in sparse graphical models
using the EP algorithm. We have shown that, for finite-length LDPC sparse graphs, the accuracy
of the marginal estimation with the method proposed significantly outperforms the BP estimate
for the same graph. As a consequence, the decoding error rates are clearly improved. This result
is remarkable in itself, as BP was considered the gold standard for LDPC decoding, and it was
assumed that the long-range cycles and sparse nature of these factor graphs did not lend themselves to the application of more accurate approximate inference algorithms designed for dense graphs
with short-range cycles. Additionally, the application of LDPC decoding showed us a different way
of learning the tree structure that might be amenable for general factors.
5 Acknowledgments
This work was partially funded by the Spanish government (Ministerio de Educación y Ciencia, TEC2009-14504-C02-01,02, Consolider-Ingenio 2010 CSD2008-00010), Universidad Carlos III (CCG10-UC3M/TIC-5304) and the European Union (FEDER).
References
[1] Abdelaziz Amraoui, Andrea Montanari, Tom Richardson, and Rüdiger Urbanke. Finite-length scaling for iteratively decoded LDPC ensembles. IEEE Transactions on Information Theory, 55(2):473–498, 2009.
[2] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, USA, 1991.
[3] Michael Luby, Michael Mitzenmacher, Amin Shokrollahi, Daniel Spielman, and Volker Stemann. Practical loss-resilient codes. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 150–159, 1997.
[4] Michael Luby, Michael Mitzenmacher, Amin Shokrollahi, Daniel Spielman, and Volker Stemann. Efficient erasure correcting codes. IEEE Transactions on Information Theory, 47(2):569–584, Feb. 2001.
[5] David J. C. MacKay. Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory, 45(2):399–431, 1999.
[6] David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University
Press, 2003.
[7] David J. C. MacKay and Radford M. Neal. Near Shannon limit performance of low density parity check codes. Electronics Letters, 32:1645–1646, 1996.
[8] T. Minka. Power EP. Technical report, MSR-TR-2004-149, 2004. http://research.microsoft.com/~minka/papers/.
[9] Thomas Minka and Yuan Qi. Tree-structured approximations by expectation propagation. In Proceedings
of the Neural Information Processing Systems Conference, (NIPS), 2003.
[10] Thomas P. Minka. Expectation Propagation for approximate Bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence (UAI 2001), pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
[11] Pablo M. Olmos, Juan José Murillo-Fuentes, and Fernando Pérez-Cruz. Tree-structure expectation propagation for decoding LDPC codes over binary erasure channels. In 2010 IEEE International Symposium on Information Theory, ISIT, Austin, Texas, 2010.
[12] P.M. Olmos, J.J. Murillo-Fuentes, and F. Pérez-Cruz. Tree-structure expectation propagation for LDPC decoding in erasure channels. Submitted to IEEE Transactions on Information Theory, 2011.
[13] P.M. Olmos, J.J. Murillo-Fuentes, and F. Pérez-Cruz. Tree-structured expectation propagation for decoding finite-length LDPC codes. IEEE Communications Letters, 15(2):235–237, Feb. 2011.
[14] P. Oswald and A. Shokrollahi. Capacity-achieving sequences for the erasure channel. IEEE Transactions on Information Theory, 48(12):3017–3028, Dec. 2002.
[15] Tom Richardson and Ruediger Urbanke. Modern Coding Theory. Cambridge University Press, Mar.
2008.
[16] T. Nozaki, K. Kasai, and K. Sakaniwa. Analytical solution of covariance evolution for irregular LDPC codes. e-prints, November 2010.
[17] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches. IEEE Transactions on Information Theory, 51(11):3697–3717, November 2005.
[18] Martin J. Wainwright and Michael I. Jordan. Graphical Models, Exponential Families, and Variational
Inference. Foundations and Trends in Machine Learning, 2008.
[19] W. Wiegerinck and T. Heskes. Fractional belief propagation. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA, December 2002. MIT Press.
[20] M. Welling, T. Minka, and Y.W. Teh. Structured region graphs: Morphing EP into GBP. In UAI, 2005.
[21] Nicholas C. Wormald. Differential equations for random processes and random graphs. Annals of Applied Probability, 5(4):1217–1235, 1995.
[22] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, July 2005.
Efficient inference in matrix-variate Gaussian models
with iid observation noise
Oliver Stegle1
Max Planck Institutes
Tübingen, Germany
[email protected]
Christoph Lippert1
Max Planck Institutes
Tübingen, Germany
[email protected]
Joris Mooij
Institute for Computing and Information Sciences
Radboud University
Nijmegen, The Netherlands
[email protected]
Neil Lawrence
Department of Computer Science
University of Sheffield
Sheffield, UK
[email protected]
Karsten Borgwardt
Max Planck Institutes & Eberhard Karls Universität
Tübingen, Germany
[email protected]
Abstract
Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models
that explicitly account for iid observation noise. Computational tractability can be
retained by exploiting the Kronecker product between row and column covariance
matrices. Using this framework, we show how to generalize the Graphical Lasso
in order to learn a sparse inverse covariance between features while accounting for
a low-rank confounding covariance between samples. We show practical utility on
applications to biology, where we model covariances with more than 100,000 dimensions. We find greater accuracy in recovering biological network structures
and are able to better reconstruct the confounders.
1 Introduction
Matrix-variate normal (MVN) models have important applications in various fields. These models
have been used as regularizer for multi-output prediction, jointly modeling the similarity between
tasks and samples [1]. In related work in Gaussian processes (GPs), generalizations of MVN distributions have been used for inference of vector-valued functions [2, 3]. These models with Kronecker
factored covariance have applications in geostatistics [4], statistical testing on matrix-variate data [5]
and statistical genetics [6].
In prior work, different covariance functions for rows and columns have been combined in a flexible
manner. For example, Dutilleul and Zhang et al. [7, 1] have performed estimation of free-form
covariances with different norm penalties. In other applications for prediction [2] and dimension
reduction [8], combinations of free-form covariances with squared exponential covariances have
been used.
1 These authors contributed equally to this work.
In the absence of iid observation noise, an efficient inference scheme also known as the "flip-flop algorithm" can be derived. In this iterative approach, estimation of the respective covariances is
decoupled by rotating the data with respect to one of the covariances to optimize parameters of the
other [7, 1]. While this simplifying assumption of noise-free matrix-variate data has been used with
some success, there are clear motivations for including iid noise in the model. For example, Bonilla
et al. [2] have shown that in multi-task regression a noise free GP with Kronecker structure leads to
a cancelation of information sharing between the various prediction tasks. This effect, also known
from the geostatistics literature [4], eliminates any benefit from multivariate prediction compared
to naïve approaches. Alternatively, when including observation noise in the model, computational tractability has been limited to smaller datasets. The covariance matrix no longer directly factorizes into a Kronecker product, thus rendering simple approaches such as the "flip-flop algorithm" inappropriate.
Here, we address these shortcomings and propose a general framework for efficient inference in
matrix-variate normal models that include iid observation noise. Although in this model the covariance matrix no longer factorizes into a Kronecker product, we show how efficient parameter
inference can still be done. To this end, we provide derivations of both the log-likelihood and gradients with respect to hyperparameters that can be computed in the same asymptotic runtime as
iterations of the "flip-flop algorithm" on a noise-free model. This allows for parameter learning of covariance matrices of size 10⁵ × 10⁵, or even bigger, which would not be possible if done naïvely.
First, we show how for any combination of covariances, evaluation of model likelihood and gradients
with respect to individual covariance parameters is tractable. Second, we apply this framework
to structure learning in Gaussian graphical models, while accounting for a confounding non-iid
sample structure. This generalization of the Graphical Lasso [9, 10] (GLASSO) allows to jointly
learn and account for a sparse inverse covariance matrix between features and a structured (nondiagonal) sample covariance. The low rank component of the sample covariance is used to account
for confounding effects, as is done in other models for genomics [11, 12].
We illustrate this generalization called ?Kronecker GLASSO? on synthetic datasets and heterogeneous protein signaling and gene expression data, where the aim is to recover the hidden network
structures. We show that our approach is able to recover the confounding structure, when it is known,
and reveals sparse biological networks that are in better agreement with known components of the
latent network structure.
2 Efficient inference in Kronecker Gaussian processes
Assume we are given a data matrix Y ∈ ℝ^{N×D} with N rows and D columns, where N is the
number of samples with D features each. As an example, think of N as a number of micro-array
experiments, where in each experiment the expression levels of the same D genes are measured;
here, y_{rc} would be the expression level of gene c in experiment r. Alternatively, Y could represent
multi-variate targets in a multi-task prediction setting, with rows corresponding to tasks and columns
to features. This setting occurs in geostatistics, where the entries yrc correspond to ecological measurements taken on a regular grid.
First we introduce some notation. For any L × M matrix A, we define vec(A) to be the vector obtained by concatenating the columns of A; further, let A ⊗ B denote the Kronecker product (or tensor product) between matrices A and B:

vec(A) = ( a₁₁, a₂₁, …, a_{LM} )ᵀ,    A ⊗ B = [ a₁₁B  a₁₂B  …  a₁ₘB ;  a₂₁B  a₂₂B  …  a₂ₘB ;  …  ;  a_{L1}B  a_{L2}B  …  a_{LM}B ].
For modeling Y as a matrix-variate normal distribution with iid observation noise, we first introduce N × D additional latent variables Z, which can be thought of as the noise-free observations. The data Y is then given by Z plus iid Gaussian observation noise:

p(Y | Z, σ²) = N( vec(Y) | vec(Z), σ² I_{N·D} ).    (1)
If the covariance between rows and columns of the noise-free observations Z factorizes, we may assume a zero-mean matrix-variate normal model for Z:

p(Z | C, R) = exp{ −½ Tr[ C⁻¹ Zᵀ R⁻¹ Z ] } / ( (2π)^{N·D/2} |R|^{D/2} |C|^{N/2} ),

which can be equivalently formulated as a multivariate normal distribution:

= N( vec(Z) | 0_{N·D}, C(θ_C) ⊗ R(θ_R) ).    (2)

Here, the matrix C is a D × D column covariance matrix and R is an N × N row covariance matrix that may depend on hyperparameters θ_C and θ_R respectively. Marginalizing over the noise-free observations Z results in the Kronecker Gaussian process model of the observed data Y:

p(Y | C, R, σ²) = N( vec(Y) | 0_{N·D}, C(θ_C) ⊗ R(θ_R) + σ² I_{N·D} ).    (3)

For notational convenience we will drop the dependency on the hyperparameters θ_C, θ_R and σ². Note that for σ² = 0, the likelihood model in Equation (3) reduces to the matrix-variate normal distribution in Equation (2).
2.1 Efficient parameter estimation
For efficient optimization of the log likelihood, L = ln p(Y | C, R, σ²), with respect to the hyperparameters, we exploit an identity that allows us to write a matrix product with a Kronecker product matrix in terms of ordinary matrix products:

(C ⊗ R) vec(Y) = vec(Rᵀ Y C).    (4)

We also exploit the compatibility of a Kronecker product plus a constant diagonal term with the eigenvalue decomposition:

(C ⊗ R + σ² I) = (U_C ⊗ U_R)(S_C ⊗ S_R + σ² I)(U_Cᵀ ⊗ U_Rᵀ),    (5)

where C = U_C S_C U_Cᵀ is the eigenvalue decomposition of C, and similarly for R.
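Both identities are easy to check numerically on small random matrices; the following sketch (our own illustration, using numpy's column-major vec convention) does so:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, sigma2 = 4, 3, 0.5

# Random symmetric positive-definite row and column covariances.
A = rng.standard_normal((N, N)); R = A @ A.T + N * np.eye(N)
B = rng.standard_normal((D, D)); C = B @ B.T + D * np.eye(D)
Y = rng.standard_normal((N, D))

vec = lambda M: M.reshape(-1, order='F')  # stack the columns of M

# Identity (4): (C kron R) vec(Y) = vec(R^T Y C) for symmetric C, R.
lhs = np.kron(C, R) @ vec(Y)
rhs = vec(R.T @ Y @ C)
assert np.allclose(lhs, rhs)

# Identity (5): the eigendecompositions of C and R diagonalize C kron R + sigma2*I.
SC, UC = np.linalg.eigh(C)
SR, UR = np.linalg.eigh(R)
K = np.kron(C, R) + sigma2 * np.eye(N * D)
K_rebuilt = np.kron(UC, UR) @ np.diag(np.kron(SC, SR) + sigma2) @ np.kron(UC, UR).T
assert np.allclose(K, K_rebuilt)
```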
Likelihood evaluation. Using these identities, the log of the likelihood in Equation (3) follows as

L = −(N·D/2) ln(2π) − ½ ln| S_C ⊗ S_R + σ² I | − ½ vec(U_Rᵀ Y U_C)ᵀ (S_C ⊗ S_R + σ² I)⁻¹ vec(U_Rᵀ Y U_C).    (6)

This term can be interpreted as a multivariate normal distribution with diagonal covariance matrix (S_C ⊗ S_R + σ² I) on the rotated data vec(U_Rᵀ Y U_C), similar to an approach that is used to speed up mixed models in genetics [13].
Gradient evaluation. Derivatives of the log marginal likelihood with respect to a particular covariance parameter θ_R can be expressed as

d/dθ_R L = −½ diag( (S_C ⊗ S_R + σ² I)⁻¹ )ᵀ diag( S_C ⊗ (U_Rᵀ (dR/dθ_R) U_R) ) + ½ vec(Ỹ)ᵀ vec( U_Rᵀ (dR/dθ_R) U_R Ỹ S_C ),    (7)

where vec(Ỹ) = (S_C ⊗ S_R + σ² I)⁻¹ vec(U_Rᵀ Y U_C). Analogous expressions follow for partial derivatives with respect to θ_C and the noise level σ². Full details of all derivations, including derivatives w.r.t. σ², can be found in the supplementary material.
Runtime and memory complexity. A naïve implementation for optimizing the likelihood (3) with respect to the hyperparameters would have runtime complexity O(N³D³) and memory complexity O(N²D²). Using the likelihood and derivative as expressed in Equations (6) and (7), each evaluation with new kernel parameters involves solving the symmetric eigenvalue problems of both R and C, together having a runtime complexity of O(N³ + D³). Explicit evaluation of any matrix Kronecker products is not necessary, resulting in a low memory complexity of O(N² + D²).
3 Graphical Lasso in the presence of confounders
Estimation of sparse inverse covariance matrices is widely used to identify undirected network structures from observational data. However, non-iid observations due to hidden confounding variables
may hinder accurate recovery of the true network structure. If not accounted for, confounders may
lead to a large number of false positive edges. This is of particular relevance in biological applications, where observational data are often heterogeneous, combining measurements from different
labs, data obtained under various perturbations or from a range of measurement platforms.
As an application of the framework described in Section 2, we here propose an approach to learning sparse inverse covariance matrices between features, while accounting for covariation between
samples due to confounders. First, we briefly review the ?orthogonal? approaches that account for
the corresponding types of sample and feature covariance we set out to model.
3.1 Explaining feature dependencies using the Graphical Lasso
A common approach to model relationships between variables in a graphical model is the GLASSO.
It has been used in the context of biological studies to recover the hidden network structure of
gene-gene interrelationships [14], for instance. The GLASSO assumes a multivariate Gaussian distribution on features with a sparse precision (inverse covariance) matrix. The sparsity is induced by
an L1 penalty on the entries of C⁻¹, the inverse of the feature covariance matrix.
Under the simplifying assumption of iid samples, the posterior distribution of Y under this model is proportional to

p(Y, C⁻¹) = p(C⁻¹) ∏_{r=1}^{N} N( Y_{r,:} | 0_D, C ).    (8)

Here, the prior on the precision matrix C⁻¹ is

p(C⁻¹) ∝ exp( −λ ‖C⁻¹‖₁ ) 𝟙[C⁻¹ ≻ 0],    (9)

with ‖A‖₁ defined as the sum over all absolute values of the matrix entries. Note that this prior is only nonzero for positive-definite C⁻¹.
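Taking the log of Equation (8) with prior (9) yields, up to constants, the familiar Graphical-Lasso objective. The helper below is our own illustrative sketch of that objective for a candidate precision matrix P = C⁻¹:

```python
import numpy as np

def glasso_log_joint(Y, P, lam):
    """Unnormalized log of Eq. (8) with prior (9), up to constants:
    (N/2) log|P| - (1/2) tr(Y^T Y P) - lam * ||P||_1, for P = C^{-1}."""
    N = Y.shape[0]
    if np.linalg.eigvalsh(P).min() <= 0:  # prior (9): zero mass outside the PD cone
        return -np.inf
    logdet = np.linalg.slogdet(P)[1]
    return 0.5 * N * logdet - 0.5 * np.trace(Y.T @ Y @ P) - lam * np.abs(P).sum()
```

Maximizing this over P is exactly the problem solved by off-the-shelf GLASSO implementations [9, 10].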
3.2 Modeling confounders using the Gaussian process latent variable model
Confounders are unobserved variables that can lead to spurious associations between observed variables and to covariation between samples. A possible approach to identify such confounders is
dimensionality reduction. Here we briefly review two dimensionality reduction methods, dual probabilistic PCA and its generalization, the Gaussian process latent variable model (GPLVM) [15].
In the context of applications, these methods have previously been applied to identify regulatory
processes [16], and to recover confounding factors with broad effects on many features [11, 12].
In dual probabilistic PCA [15], the observed data Y is explained as a linear combination of K latent variables ("factors"), plus independent observation noise. The model is as follows:

Y = XW + E,

where X ∈ ℝ^{N×K} contains the values of the K latent variables, and W ∈ ℝ^{K×D} contains independent standard-normally distributed weights that specify the mapping between latent and observed variables. Finally, E ∈ ℝ^{N×D} contains iid Gaussian noise with E_{rc} ∼ N(0, σ²). Marginalizing over the weights W yields the data likelihood:

p(Y | X) = ∏_{c=1}^{D} N( Y_{:,c} | 0_N, XXᵀ + σ² I_N ).    (10)

Learning the latent factors X and the observation noise variance σ² can be done by maximum likelihood. The more general GPLVM [15] is obtained by replacing XXᵀ in (10) with a more general Gram matrix R, with R_{rs} = κ( (x_{r1}, …, x_{rK}), (x_{s1}, …, x_{sK}) ) for some covariance function κ : ℝ^K × ℝ^K → ℝ.
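As a sketch of Equation (10) (illustrative, not the authors' code), the dual PPCA marginal likelihood shares one N × N covariance across all D columns of Y:

```python
import numpy as np

def dual_ppca_loglik(Y, X, sigma2):
    """Log-likelihood of Eq. (10): the columns of Y are iid N(0, X X^T + sigma2*I)."""
    N, D = Y.shape
    K = X @ X.T + sigma2 * np.eye(N)   # shared N x N column covariance
    logdet = np.linalg.slogdet(K)[1]
    quad = np.trace(Y.T @ np.linalg.solve(K, Y))
    return -0.5 * (D * N * np.log(2 * np.pi) + D * logdet + quad)
```

Because K appears D times, a single factorization of K suffices for all columns; this is the same structure exploited by the GPLVM.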
3.3 Combining the two models
We propose to combine these two different explanations of the data into one coherent model. Instead of treating either the samples or the features as being (conditionally) independent, we aim to learn a joint covariance for the observed data matrix Y. This model, called Kronecker GLASSO, is a special instance of the Kronecker Gaussian process model introduced in Section 2, as the data likelihood can be written as:

p(Y | R, C⁻¹) = N( vec(Y) | 0_{N·D}, C ⊗ R + σ² I_{N·D} ).    (11)

Here, we build on the model components introduced in Section 3.2 and Section 3.1. We use the sparse L1 penalty (9) for the feature inverse covariance C⁻¹ and use a linear kernel for the covariance on rows, R = XXᵀ + δ² I_N. Learning the model parameters proceeds via MAP inference, optimizing the log likelihood implied by Equation (11) with respect to X and C⁻¹, and the hyperparameters σ², δ². By combining the GLASSO and GPLVM in this way, we can recover network structure in the presence of confounders.

An equivalent generative model can be obtained in a similar way as in dual probabilistic PCA. The main difference is that now, the rows of the weight matrix W are sampled from a N(0_D, C) distribution instead of a N(0_D, I_D) distribution. This generative model for Y given latent variables X ∈ ℝ^{N×K} and feature covariance C ∈ ℝ^{D×D} is of the form Y = XW + δV + E, where W ∈ ℝ^{K×D}, V ∈ ℝ^{N×D} and E ∈ ℝ^{N×D} are jointly independent with distributions vec(W) ∼ N(0_{K·D}, C ⊗ I_K), vec(V) ∼ N(0_{N·D}, C ⊗ I_N) and vec(E) ∼ N(0_{N·D}, σ² I_{N·D}).
3.4 Inference in the joint model
As already mentioned in Section 2, parameter inference in the Kronecker GLASSO model implied by Equation (11), when done naïvely, is intractable for all but very low dimensional data matrices Y. Even using the tricks discussed in Section 2, free-form sparse inverse covariance updates for C⁻¹ are intractable under the L1 penalty when depending on gradient updates.

Similarly to Section 2, the first step towards efficient inference is to introduce N × D additional latent variables Z, which can be thought of as the noise-free observations:

p(Y | Z, σ²) = N( vec(Y) | vec(Z), σ² I_{N·D} )    (12)

p(Z | R, C) = N( vec(Z) | 0_{N·D}, C ⊗ R ).    (13)

We consider the latent variables Z as additional model parameters. We now optimize the distribution p(Y, C⁻¹ | Z, R, σ²) = p(Y | Z, σ²) p(Z | R, C) p(C⁻¹) with respect to the unknown parameters Z, C⁻¹, σ², and R (which depends on X and kernel parameters θ_R) by iterating through the following steps:

1. Optimize for σ², R after integrating out Z, for fixed C:

argmax_{σ², θ_R, X} p(Y | C, R(θ_R, X), σ²) = argmax_{σ², θ_R, X} N( vec(Y) | 0_{N·D}, C ⊗ R(θ_R, X) + σ² I_{N·D} )    (14)

2. Calculate the expectation of Z for fixed R, C, and σ²:

vec(Z̃) = (C ⊗ R)(C ⊗ R + σ² I_{N·D})⁻¹ vec(Y)

3. Optimize C̃⁻¹ for fixed R and Z̃:

argmax_{C̃⁻¹} p(C̃⁻¹ | Z̃, R) = argmax_{C̃⁻¹} N( vec(Z̃) | 0, C̃ ⊗ R ) p(C̃⁻¹)

and set C = C̃.

As a stopping criterion we consider the relative reduction of the negative log-marginal likelihood (Equation (11)) plus the regularizer on C⁻¹. The choice to optimize C̃⁻¹ for fixed Z̃ is motivated by computational considerations, as this subproblem then reduces to conventional GLASSO; a full EM approach with latent variables Z does not seem feasible. Step 1 can be done using the efficient likelihood evaluations and gradients described in Section 2. We will now discuss step 3 in more detail.
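Step 2 has a closed form that can be evaluated without ever building the N·D × N·D covariance, by reusing the eigendecompositions from Section 2. A sketch (our own, under the notation above):

```python
import numpy as np

def posterior_mean_Z(Y, C, R, sigma2):
    """Step 2: E[Z], i.e. (C kron R)(C kron R + sigma2*I)^{-1} vec(Y),
    evaluated without forming any Kronecker product."""
    SC, UC = np.linalg.eigh(C)
    SR, UR = np.linalg.eigh(R)
    S = np.outer(SR, SC)          # eigenvalues of C kron R, arranged as N x D
    W = S / (S + sigma2)          # per-eigendirection shrinkage weights
    return UR @ (W * (UR.T @ Y @ UC)) @ UC.T
```

The cost is again O(N³ + D³) rather than the naive O(N³D³).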
[Figure 1: (a) precision-recall curve; (b) ground-truth network over the 50 nodes; (c) GLASSO, (d) Kron GLASSO and (e) Ideal GLASSO reconstructed networks.]
Figure 1: Network reconstruction on the simulated example. (a) Precision-recall curve, when varying the sparsity penalty λ. Compared are the standard GLASSO, our algorithm with Kronecker structure (Kronecker GLASSO) and, as a reference, an idealized setting applying standard GLASSO to a similar dataset without confounding influences (Ideal GLASSO). The model that accounts for confounders approaches the performance of an idealized model, while standard GLASSO finds a large fraction of false positive edges. (b) Ground truth network. (c-e) Recovered networks for GLASSO, Kronecker GLASSO and Ideal GLASSO at 40% recall (star in (a)). False positive predicted edges are colored in red. Because of the effect of confounders, standard GLASSO predicted an excess of edges to 4 of the nodes.
Optimizing for C̃⁻¹. The third step, optimizing with respect to C̃⁻¹, can be done efficiently, using similar ideas as in Section 2. First consider:

ln N( vec(Z̃) | 0_{N·D}, C̃ ⊗ R ) = −(N·D/2) ln(2π) − ½ ln| C̃ ⊗ R | − ½ vec(Z̃)ᵀ (C̃ ⊗ R)⁻¹ vec(Z̃).

Now, using the Kronecker identity (4) and

ln|A ⊗ B| = rank(B) ln|A| + rank(A) ln|B|,

we can rewrite the log likelihood as:

ln[ N( vec(Z̃) | 0, C̃ ⊗ R ) p(C̃⁻¹) ] = −(N·D/2) ln(2π) − ½ D ln|R| + ½ N ln|C̃⁻¹| − ½ Tr( Z̃ᵀ R⁻¹ Z̃ C̃⁻¹ ) + ln p(C̃⁻¹).

Thus we obtain a standard GLASSO problem with covariance matrix Z̃ᵀ R⁻¹ Z̃:

argmax_{C̃⁻¹ ≻ 0} p(C̃⁻¹ | Z̃, R) = argmax_{C̃⁻¹ ≻ 0} { −½ Tr( Z̃ᵀ R⁻¹ Z̃ C̃⁻¹ ) + ½ N ln|C̃⁻¹| − λ ‖C̃⁻¹‖₁ }.    (15)

The inverse sample covariance R⁻¹ in Equation (15) rotates the data covariance, similar as in the established flip-flop algorithm for inference in matrix-variate normal distributions [7, 1].
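The chain of identities behind Equation (15) can be checked numerically. The following sketch (our own, with small random matrices) verifies that the naive evaluation of ln N(vec(Z̃) | 0, C̃ ⊗ R) agrees with the matrix-normal form used in the derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 5, 4
A = rng.standard_normal((N, N)); R = A @ A.T + np.eye(N)
B = rng.standard_normal((D, D)); C = B @ B.T + np.eye(D)
Z = rng.standard_normal((N, D))
z = Z.reshape(-1, order='F')

# Naive evaluation of ln N(vec(Z) | 0, C kron R).
K = np.kron(C, R)
naive = -0.5 * (N * D * np.log(2 * np.pi) + np.linalg.slogdet(K)[1]
                + z @ np.linalg.solve(K, z))

# Matrix-normal form: -(ND/2)ln(2pi) - (D/2)ln|R| - (N/2)ln|C| - (1/2)tr(Z^T R^{-1} Z C^{-1}).
matnorm = -0.5 * (N * D * np.log(2 * np.pi)
                  + D * np.linalg.slogdet(R)[1]
                  + N * np.linalg.slogdet(C)[1]
                  + np.trace(np.linalg.solve(R, Z) @ np.linalg.inv(C) @ Z.T))
assert np.isclose(naive, matnorm)
```

With R = I, the trace term reduces to the usual empirical scatter Z̃ᵀZ̃, recovering the standard GLASSO objective.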
4 Experiments
In this Section, we describe three experiments with the generalized GLASSO.
4.1 Simulation study
First, we considered an artificial dataset to illustrate the effect of confounding factors on the solution
quality of sparse inverse covariance estimation. We created synthetic data, with N = 100 samples
and D = 50 features according to the generative model described in Section 3.3. We generated
the sparse inverse column covariance C?1 choosing edges at random with a sparsity level of 1%.
Non-zero entries of the inverse covariance were drawn from a Gaussian with mean 1 and variance
2. The row covariance matrix R was created from K = 3 random factors xk , each drawn from
unit variance iid Gaussian variables. The weighting between the confounders and the iid component
δ² was set such that the factors explained equal variance, which corresponds to a moderate extent
of confounding influences. Finally, we added independent Gaussian observation noise, choosing a
signal-to-noise ratio of 10%.
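A minimal generator in the spirit of this setup might look as follows; the exact normalizations (how the 1% sparsity, the equal-variance weighting and the 10% signal-to-noise ratio are implemented) are our assumptions, not taken from the paper:

```python
import numpy as np

def simulate(N=100, D=50, K=3, density=0.01, noise_ratio=0.1, seed=0):
    """Sketch of the synthetic-data generator: sparse inverse column
    covariance, low-rank-plus-diagonal row covariance, iid observation noise."""
    rng = np.random.default_rng(seed)
    # Sparse symmetric inverse column covariance: ~1% random edges,
    # nonzero entries from N(1, 2); diagonal shifted to make it PD.
    P = np.zeros((D, D))
    mask = np.triu(rng.random((D, D)) < density, k=1)
    P[mask] = rng.normal(1.0, np.sqrt(2.0), size=int(mask.sum()))
    P = P + P.T
    P += (abs(np.linalg.eigvalsh(P).min()) + 1.0) * np.eye(D)
    C = np.linalg.inv(P)
    # Row covariance: K unit-variance factors plus an iid component,
    # weighted so both parts contribute comparable variance.
    X = rng.standard_normal((N, K))
    R = X @ X.T + K * np.eye(N)
    # Z ~ MN(0, R, C) via Cholesky factors, then iid observation noise.
    Z = np.linalg.cholesky(R) @ rng.standard_normal((N, D)) @ np.linalg.cholesky(C).T
    sigma2 = noise_ratio * Z.var()  # one reading of "10% signal-to-noise"
    Y = Z + rng.normal(0.0, np.sqrt(sigma2), size=(N, D))
    return Y, P, X
```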
[Figure 2: (a) precision-recall curve; (b) ground-truth network over the 11 proteins (praf, pmek, plcg, PIP2, PIP3, p44/42, pakts473, PKA, PKC, P38, pjnk); (c) GLASSO and (d) Kron GLASSO reconstructed networks.]
Figure 2: Network reconstruction of a protein signaling network from Sachs et al. (a) Precision-recall curve, when varying the sparsity penalty λ. Compared are the standard GLASSO and our algorithm with Kronecker structure (Kronecker GLASSO). Standard GLASSO, not accounting for confounders, found more false positive edges for a wide range of recall rates. (b) Ground truth network. (c-d) Recovered networks for GLASSO and Kronecker GLASSO at 40% recall (star in (a)). False positive edge predictions are colored in red.
Next, we applied different methods to reconstruct the true simulated network. We considered standard GLASSO and our Kronecker model that accounts for the confounding influence (Kronecker
GLASSO). For reference, we also considered an idealized setting, applying GLASSO to a similar
dataset without the confounding effects (Ideal GLASSO), obtained by setting X = 0_{N×K} in the
generative model. To determine an appropriate latent dimensionality of Kronecker GLASSO, we
used the BIC criterion on multiple restarts with K = 1 to K = 5 latent factors. For all models
we varied the sparsity parameter of the graphical lasso, setting ? = 5x , with x linearly interpolated
between ?8 and 3. The solution set of lasso-based algorithms is typically unstable and depends on
slight variation of the data. To improve the stability of all methods, we employed stability selection [17], applying each algorithm for all regularization parameters 100 times to randomly drawn
subsets containing 90% of the data. We then considered edges that were found in at least 50% of all
100 restarts.
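The stability-selection loop described above can be sketched as follows. Note that the base estimator here (thresholded partial correlations) is only a stand-in for the actual graphical-lasso fit at a fixed regularization level, and all function names are our own.

```python
import numpy as np

def edges_from_partial_corr(Y, thresh=0.2):
    """Stand-in base estimator: adjacency from thresholded partial correlations."""
    P = np.linalg.inv(np.cov(Y, rowvar=False) + 1e-3 * np.eye(Y.shape[1]))
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)
    A = np.abs(pcorr) > thresh
    np.fill_diagonal(A, False)
    return A

def stability_selection(Y, estimator, n_runs=100, frac=0.9, keep=0.5, seed=0):
    """Re-run the estimator on random subsamples of `frac` of the rows and
    keep only edges selected in at least `keep` of the runs."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    counts = np.zeros((d, d))
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += estimator(Y[idx])
    return counts / n_runs >= keep
```

In the paper's setup the estimator would be the (Kronecker) graphical lasso at one value of the sparsity penalty, repeated over the grid of penalties.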
Figure 1a shows the precision-recall curve for each algorithm. Kronecker GLASSO performed
considerably better than standard GLASSO, approaching the performance of the ideal model without confounders. Figures 1b-d show the reconstructed networks at 40% recall. While Kronecker
GLASSO reconstructed the same network as the ideal model, standard GLASSO found an excess of
false positive edges.
4.2
Network reconstruction of protein-signaling networks
Important practical applications of the GLASSO include the reconstruction of gene and protein
networks. Here, we revisit the extensively studied protein signaling data from Sachs et al. [18]. The
dataset provides observational data of the activations of 11 proteins under various external stimuli.
We combined measurements from the first 3 experiments, yielding a heterogeneous mix of 2,666
samples that are not expected to be an iid sample set. To make the inference more difficult, we
selected a random fraction of 10% of the samples, yielding a final data matrix of size 266 × 11.
We used the directed ground truth network and moralized the graph structure to obtain an undirected
ground truth network. Parameter choice and stability selection were done as in the simulation study.
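Moralizing the directed ground truth (so it can be compared with an undirected estimate) means connecting ("marrying") all parents of each node and then dropping edge directions. A minimal sketch, with our own representation of the DAG:

```python
def moralize(parents):
    """parents: dict mapping each node to the list of its parents in a DAG.
    Returns the edge set of the moralized (undirected) graph."""
    edges = set()
    for child, pa in parents.items():
        for p in pa:                       # drop direction on parent -> child
            edges.add(frozenset((p, child)))
        for i in range(len(pa)):           # marry all pairs of co-parents
            for j in range(i + 1, len(pa)):
                edges.add(frozenset((pa[i], pa[j])))
    return edges
```

For the collider a -> c <- b, moralization adds the extra undirected edge a - b on top of the two undirected parent-child edges.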
Figure 2 shows the results. Analogous to the simulation setting, the Kronecker GLASSO model
found true network links with greater accuracy than standard graphical lasso. These results suggest
that our model is suitable to account for confounding variation as it occurs in real settings.
4.3
Large-scale application to yeast gene expression data
Next, we considered an application to large-scale gene expression profiling data from yeast. We
revisited the dataset from Smith et al. [19], consisting of 109 genetically diverse yeast strains, each of
which has been expression profiled in two environmental conditions (glucose and ethanol). Because
[Figure 3 panels: (a) Confounder reconstruction: r² correlation with true confounder vs. number of features (genes), comparing GPLVM and Kronecker GLasso; (b) GLASSO consistency (68%); (c) Kron. GLASSO consistency (74%).]
Figure 3: (a) Correlation coefficient between learned confounding factor and true environmental condition for
different subsets of all features (genes). Compared are the standard GPLVM model with a linear covariance
and our proposed model that accounts for low rank confounders and sparse gene-gene relationships (Kronecker
GLASSO). Kronecker GLASSO is able to better recover the hidden confounder by accounting for the covariance structure between genes. (b,c) Consistency of edges on the largest network with 1,000 nodes learnt on the
joint dataset, comparing the results when combining both conditions with those for a single condition (glucose).
the confounder in this dataset is known explicitly, we tested the ability of Kronecker GLASSO to
recover it from observational data. Because of missing complete ground truth information, we could
not evaluate the network reconstruction quality directly. An appropriate regularization parameter
was selected by means of cross validation, evaluating the marginal likelihood on a test set (analogous
to the procedure described in [10]). To simplify the comparison to the known confounding factor,
we chose a fixed number of confounders that we set to K = 1.
Recovery of the known confounder Figure 3a shows the r2 correlation coefficient between the
inferred factor and the true environmental condition for increasing number of features (genes) that
were used for learning. In particular for small numbers of genes, accounting for the network structure between genes improved the ability to recover the true confounding effect.
Consistency of obtained networks Next, we tested the consistency when applying GLASSO and
Kronecker GLASSO to data that combines both conditions, glucose and ethanol, comparing to the
recovered network from a single condition alone (glucose). The respective networks are shown in
Figures 3b and 3c. The Kronecker GLASSO model identifies more consistent edges, which shows
the susceptibility of standard GLASSO to the confounder, here the environmental influence.
5
Conclusions and Discussion
We have shown an efficient scheme for parameter learning in matrix-variate normal distributions
with iid observation noise. By exploiting some linear algebra tricks, we have shown how hyperparameter optimization for the row and column covariances can be carried out without evaluating
the prohibitive full covariance, thereby greatly reducing computational and memory complexity. To
the best of our knowledge, these measures have not previously been proposed, despite their general
applicability.
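The linear-algebra shortcut referred to here is, in essence, that the eigendecompositions of the row and column covariances jointly diagonalize a Kronecker-plus-noise covariance, so solves against it never require forming the prohibitive ND × ND matrix. A small self-contained check of this identity (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, s2 = 4, 3, 0.5

# Random symmetric positive definite row and column covariances.
A = rng.standard_normal((N, N)); R = A @ A.T + N * np.eye(N)
B = rng.standard_normal((D, D)); C = B @ B.T + D * np.eye(D)
y = rng.standard_normal(N * D)

# Naive solve against the full ND x ND covariance.
x_naive = np.linalg.solve(np.kron(R, C) + s2 * np.eye(N * D), y)

# Fast solve: with R = Ur diag(Sr) Ur^T and C = Uc diag(Sc) Uc^T,
# (R kron C + s2 I)^-1 = (Ur kron Uc) diag(1/(Sr_i Sc_j + s2)) (Ur kron Uc)^T,
# applied via matrix products on the N x D reshaping of y.
Sr, Ur = np.linalg.eigh(R)
Sc, Uc = np.linalg.eigh(C)
Z = Ur.T @ y.reshape(N, D) @ Uc        # rotate into the joint eigenbasis
Z /= np.outer(Sr, Sc) + s2             # divide by the joint eigenvalues
x_fast = (Ur @ Z @ Uc.T).reshape(-1)

assert np.allclose(x_naive, x_fast)
```

The fast path costs two small eigendecompositions plus a few N × D matrix products, versus a dense solve on an ND × ND system.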
As an application of our framework, we have proposed a method that accounts for confounding influences while estimating a sparse inverse covariance structure. Our approach extends the Graphical
Lasso, generalizing the rigid assumption of iid samples to more general sample covariances. For
this purpose, we employ a Kronecker product covariance structure and learn a low-rank covariance
between samples, thereby accounting for potential confounding influences. We provided synthetic
and real world examples where our method is of practical use, reducing the number of false positive
edges learned.
Acknowledgments This research was supported by the FP7 PASCAL II Network of Excellence.
OS received funding from the Volkswagen Foundation. JM was supported by NWO, the Netherlands
Organization for Scientific Research (VENI grant 639.031.036).
References
[1] Y. Zhang and J. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In Advances in Neural Information Processing Systems, 2010.
[2] E. Bonilla, K.M. Chai, and C. Williams. Multi-task Gaussian process prediction. Advances in Neural Information Processing Systems, 20:153–160, 2008.
[3] M.A. Alvarez and N.D. Lawrence. Computationally efficient convolved multiple output Gaussian processes. Journal of Machine Learning Research, 12:1425–1466, 2011.
[4] H. Wackernagel. Multivariate Geostatistics: An Introduction with Applications. Springer Verlag, 2003.
[5] G.I. Allen and R. Tibshirani. Inference with transposable data: Modeling the effects of row and column correlations. Arxiv preprint arXiv:1004.0209, 2010.
[6] M. Lynch and B. Walsh. Genetics and Analysis of Quantitative Traits. Sinauer Associates Inc., U.S., 1998.
[7] P. Dutilleul. The MLE algorithm for the matrix normal distribution. Journal of Statistical Computation and Simulation, 64(2):105–123, 1999.
[8] K. Zhang, B. Schölkopf, and D. Janzing. Invariant Gaussian process latent variable models and application in causal discovery. In Uncertainty in Artificial Intelligence, 2010.
[9] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432, 2008.
[11] J.T. Leek and J.D. Storey. Capturing heterogeneity in gene expression studies by surrogate variable analysis. PLoS Genetics, 3(9):e161, 2007.
[12] O. Stegle, L. Parts, R. Durbin, and J. Winn. A Bayesian framework to account for complex non-genetic factors in gene expression levels greatly increases power in eQTL studies. PLoS Computational Biology, 6(5):e1000770, 2010.
[13] C. Lippert, J. Listgarten, Y. Liu, C.M. Kadie, R.I. Davidson, and D. Heckerman. FaST linear mixed models for genome-wide association studies. Nature Methods, 8:833–835, 2011.
[14] P. Menéndez, Y.A.I. Kourmpetis, C.J.F. ter Braak, and F.A. van Eeuwijk. Gene regulatory networks from multifactorial perturbations using graphical lasso: Application to the DREAM4 challenge. PLoS One, 5(12):e14147, 2010.
[15] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005.
[16] K.Y. Yeung and W.L. Ruzzo. Principal component analysis for clustering gene expression data. Bioinformatics, 17(9):763, 2001.
[17] N. Meinshausen and P. Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010.
[18] K. Sachs, O. Perez, D. Pe'er, D.A. Lauffenburger, and G.P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523, 2005.
[19] E.N. Smith and L. Kruglyak. Gene–environment interaction in yeast gene expression. PLoS Biology, 6(4):e83, 2008.
Directed Graph Embedding: an Algorithm based on
Continuous Limits of Laplacian-type Operators
Marina Meilă
Department of Statistics
University of Washington
Seattle, WA 98195
[email protected]
Dominique C. Perrault-Joncas
Department of Statistics
University of Washington
Seattle, WA 98195
[email protected]
Abstract
This paper considers the problem of embedding directed graphs in Euclidean
space while retaining directional information. We model the observed graph as
a sample from a manifold endowed with a vector field, and we design an algorithm that separates and recovers the features of this process: the geometry of the
manifold, the data density and the vector field. The algorithm is motivated by our
analysis of Laplacian-type operators and their continuous limit as generators of
diffusions on a manifold. We illustrate the recovery algorithm on both artificially
constructed and real data.
1
Motivation
Recent advances in graph embedding and visualization have focused on undirected graphs, for which
the graph Laplacian properties make the analysis particularly elegant [1, 2]. However, there is
an important number of graph data, such as social networks, alignment scores between biological
sequences, and citation data, which are naturally asymmetric. A commonly used approach for this
type of data is to disregard the asymmetry by studying the spectral properties of $W + W^T$ or $W^T W$, where $W$ is the affinity matrix of the graph.
Some approaches have been offered to preserve the asymmetry information contained in data: [3],
[4], [5] or to define directed Laplacian operators [6]. Although quite successful, these works adopt
a purely graph-theoretical point of view. Thus, they are not concerned with the generative process
that produces the graph, nor with the interpretability and statistical properties of their algorithms.
In contrast, we view the nodes of a directed graph as a finite sample from a manifold in Euclidean
space, and the edges as macroscopic observations of a diffusion kernel between neighboring points
on the manifold. We explore how this diffusion kernel determines the overall connectivity and
asymmetry of the resulting graph and demonstrate how Laplacian-type operators of this graph can
offer insights into the underlying generative process.
Based on the analysis of the Laplacian-type operators, we derive an algorithm that, in the limit of infinite sample and vanishing bandwidth, recovers the key features of the sampling process: manifold
geometry, sampling distribution, and local directionality, up to their intrinsic indeterminacies.
2
Model
The first premise here is that we observe a directed graph G, with $n$ nodes, having weights $W = [W_{ij}]$ for the edge from node $i$ to node $j$. In keeping with common Laplacian-based embedding approaches, we assume that G is a geometric random graph constructed from $n$ points sampled according to distribution $p = e^{-U}$ on an unobserved compact smooth manifold $M \subset \mathbb{R}^l$ of known intrinsic dimension $d \leq l$. The edge weight $W_{ij}$ is then determined by a directed similarity kernel $k_\epsilon(x_i, x_j)$ with bandwidth $\epsilon$. The directional component of $k_\epsilon(x_i, x_j)$ will be taken to be derived
from a vector field $r$ on M, which assigns a preferred direction between weights $W_{ij}$ and $W_{ji}$. The choice of a vector field $r$ to characterize the directional component of G might seem restrictive at first. In the asymptotic limit of $\epsilon \to 0$ and $n \to \infty$, however, kernels are characterized by their diffusion, drift, and source components [7]. As such, $r$ is sufficient to characterize any directionality associated with a drift component and, as it turns out, the component of $r$ normal to M in $\mathbb{R}^l$ can also be used to characterize any source component. As for the diffusion component, it is not possible to uniquely identify it from G alone [8]. Some absolute knowledge of M is needed to say anything about it. Hence, without loss of generality, we will construct $k_\epsilon(x,y)$ so that the diffusion component ends up being isotropic and constant, i.e. equal to the Laplace-Beltrami operator $\Delta$ on M.
The schematic of this generative process is shown in the top left of Figure 1 below.
From left to right: the graph generative process mapping the sample on M to the geometric random graph G via the kernel $k_\epsilon(x,y)$, then the subsequent embedding $\Phi_n$ of G by operators $H^{(\alpha)}_{aa,n}$, $H^{(\alpha)}_{ss,n}$ (defined in section 3.1). As these operators converge to their respective limits, $H^{(\alpha)}_{aa}$ and $H^{(\alpha)}_{ss}$, so will $\Phi_n \to \Phi$, $p_n \to p$, and $r_n \to r$. We design an algorithm that, given G, produces the top right embedding ($\Phi_n$, $p_n$, and $r_n$).

Figure 1: Schematic of our framework.
The question is then as follows: can the generative process' geometry M, distribution $p = e^{-U}$, and directionality $r$ be recovered from G? In other words, is there an embedding of G in $\mathbb{R}^m$, $m \geq d$, that approximates all three components of the process and that is also consistent as the sample size increases and the bandwidth vanishes? In the case of undirected graphs, the theory of Laplacian eigenmaps [1] and Diffusion maps [9] answers this question in the affirmative, in that the geometry of M and $p = e^{-U}$ can be inferred using spectral graph theory. The aim here is to build on the undirected problem and recover all three components of the generative process from a directed graph G.
The spectral approach to undirected graph embedding relies on the fact that eigenfunctions of the
Laplace-Beltrami operator are known to preserve the local geometry of M [1]. With a consistent
empirical Laplace-Beltrami operator based on G, its eigenvectors also recover the geometry of M
and converge to the corresponding eigenfunctions on M. For a directed graph G, an additional
operator is needed to recover the local directional component r, but the principle remains the same.
The schematic for this is shown in Figure 1, where two operators ($H^{(\alpha)}_{ss,n}$, introduced in [9] for undirected embeddings, and $H^{(\alpha)}_{aa,n}$, a new operator defined in section 3.1) are used to obtain the embedding $\Phi_n$, the distribution $p_n$, and the vector field $r_n$. As $H^{(\alpha)}_{aa,n}$ and $H^{(\alpha)}_{ss,n}$ converge to $H^{(\alpha)}_{aa}$ and $H^{(\alpha)}_{ss}$, $\Phi_n$, $p_n$, and $r_n$ also converge to $\Phi$, $p$, and $r$, where $\Phi$ is the local geometry preserving embedding of M into $\mathbb{R}^m$.
The algorithm we propose in Section 4 will calculate the matrices corresponding to $H^{(\alpha)}_{\cdot,n}$ from the graph G, and with their eigenvectors will find estimates for the node coordinates $\Phi$, the directional component $r$, and the sampling distribution $p$. In the next section we briefly describe the mathematical models of the diffusion processes that our model relies on.
2.1
Problem Setting
The similarity kernel $k_\epsilon(x,y)$ can be used to define transport operators on M. The natural transport operator is defined by normalizing $k_\epsilon(x,y)$ as
$$T_\epsilon[f](x) = \int_M \frac{k_\epsilon(x,y)}{p_\epsilon(x)}\, f(y)\,p(y)\,dy\,, \quad \text{where } p_\epsilon(x) = \int_M k_\epsilon(x,y)\,p(y)\,dy\,. \qquad (1)$$
$T_\epsilon[f](x)$ represents the diffusion of a distribution $f(y)$ by the transition density $k_\epsilon(x,y)p(y)/\int k_\epsilon(x,y')p(y')\,dy'$. The eigenfunctions of this infinitesimal operator are the continuous limit of the eigenvectors of the transition probability matrix $P = D^{-1}W$ given by normalizing the affinity matrix $W$ of G by $D = \mathrm{diag}(W\mathbf{1})$ [10]. Meanwhile, the infinitesimal transition
$$\frac{\partial f}{\partial t} = \lim_{\epsilon \to 0} \frac{(T_\epsilon - I)f}{\epsilon} \qquad (2)$$
defines the backward equation for this diffusion process over M based on kernel $k_\epsilon$. Obtaining the explicit expression for transport operators like (2) is then the main technical challenge.
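On the finite graph, the discrete counterpart of this normalization is simply the row-stochastic matrix $P = D^{-1}W$; a quick sketch:

```python
import numpy as np

W = np.array([[0.0, 2.0, 1.0],   # toy directed affinity matrix
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

d = W.sum(axis=1)                # out-degrees, D = diag(W 1)
P = W / d[:, None]               # transition matrix P = D^-1 W

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```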
2.2
Choice of Kernel
In order for $T_\epsilon[f]$ to have the correct asymptotic form, some hypotheses about the similarity kernel $k_\epsilon(x,y)$ are required. The hypotheses are best presented by considering the decomposition of $k_\epsilon(x,y)$ into symmetric $h_\epsilon(x,y) = h_\epsilon(y,x)$ and anti-symmetric $a_\epsilon(x,y) = -a_\epsilon(y,x)$ components:
$$k_\epsilon(x,y) = h_\epsilon(x,y) + a_\epsilon(x,y)\,. \qquad (3)$$
The symmetric component $h_\epsilon(x,y)$ is assumed to satisfy the following properties: 1. $h_\epsilon(\|y-x\|^2) = h(\|y-x\|^2/\epsilon)/\epsilon^{d/2}$, and 2. $h \geq 0$ and $h$ is exponentially decreasing as $\|y-x\| \to \infty$. This form of symmetric kernel was used in [9] to analyze the diffusion map. For the asymmetric part of the similarity kernel, we assume the form
$$a_\epsilon(x,y) = \frac{r(x,y)}{2}\cdot(y-x)\,\frac{h(\|y-x\|^2/\epsilon)}{\epsilon^{d/2}}\,, \qquad (4)$$
with $r(x,y) = r(y,x)$ so that $a_\epsilon(x,y) = -a_\epsilon(y,x)$. Here $r(x,y)$ is a smooth vector field on the manifold that gives an orientation to the asymmetry of the kernel $k_\epsilon(x,y)$. It is worth noting that the dependence of $r(x,y)$ on both $x$ and $y$ implies that $r: M \times M \to \mathbb{R}^l$, with $\mathbb{R}^l$ the ambient space of M; however, in the asymptotic limit, the dependence in $y$ is only important "locally" ($x = y$), and as such it is appropriate to think of $r(x,x)$ as being a vector field on M. As a side note, it is worth pointing out that even though the form of $a_\epsilon(x,y)$ might seem restrictive at first, it is sufficiently rich to describe any vector field. This can be seen by taking $r(x,y) = (w(x) + w(y))/2$ so that at $x = y$ the resulting vector field is given by $r(x,x) = w(x)$ for an arbitrary vector field $w(x)$.
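For sampled points, the resulting graph weights are $W_{ij} = k_\epsilon(x_i, x_j)$. A sketch of building such a matrix with a Gaussian $h$ and a constant vector field follows; the function name, the constant-field choice, and the parameter values are ours, not the paper's.

```python
import numpy as np

def directed_affinity(X, r, eps=0.5):
    """W[i, j] = h_ij + a_ij with Gaussian h and drift term
    a_ij = (r / 2) . (x_j - x_i) * h_ij  (constant vector field r)."""
    d = X.shape[1]
    diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = x_j - x_i
    h = np.exp(-(diff ** 2).sum(-1) / eps) / eps ** (d / 2)
    a = 0.5 * (diff @ r) * h                      # anti-symmetric component
    return h + a

X = np.random.default_rng(0).standard_normal((5, 2))
W = directed_affinity(X, r=np.array([0.3, -0.1]))
asym = (W - W.T) / 2                              # recovers the a_ij part
assert np.allclose(asym, -asym.T)
assert ((W + W.T) / 2 > 0).all()                  # symmetric part is the Gaussian h
```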
3
Continuous Limit of Laplacian Type Operators
We are now ready to state the main asymptotic result.
Proposition 3.1 Let M be a compact, closed, smooth manifold of dimension $d$ and $k_\epsilon(x,y)$ an asymmetric similarity kernel satisfying the conditions of section 2.2. Then for any function $f \in C^2(M)$, the integral operator based on $k_\epsilon$ has the asymptotic expansion
$$\int_M k_\epsilon(x,y)f(y)\,dy = m_0 f(x) + \epsilon\, g(f(x),x) + o(\epsilon)\,, \qquad (5)$$
where
$$g(f(x),x) = \frac{m_2}{2}\left(\omega(x)f(x) + \Delta f(x) + r\cdot\nabla f(x) + f(x)\,\nabla\cdot r + c(x)f(x)\right) \qquad (6)$$
and $m_0 = \int_{\mathbb{R}^d} h(\|u\|^2)\,du$, $m_2 = \int_{\mathbb{R}^d} u_i^2\, h(\|u\|^2)\,du$.
The proof can be found in [8] along with the definitions of $\omega(x)$ and $c(x)$ in (6). For now, it suffices to say that $\omega(x)$ corresponds to an interaction between the symmetric kernel $h$ and the curvature of M, and was first derived in [9]. Meanwhile, $c(x)$ is a new term that originates from the interaction between $h$ and the component of $r$ that is normal to M in the ambient space $\mathbb{R}^l$. Proposition 3.1 foreshadows a general fact about spectral embedding algorithms: in most cases, Laplacian operators confound the effects of spatial proximity, sampling density and directional flow due to the presence of the various terms above.
3.1
Anisotropic Limit Operators
Proposition 3.1 above can be used to derive the limits of a variety of Laplacian type operators
associated with spectral embedding algorithms like [5, 6, 3]. Although we will focus primarily on
a few operators that give the most insight into the generative process and enable us to recover the
model defined in Figure 1, we first present four distinct families of operators for completeness.
These operator families are inspired by the anisotropic family of operators that [9] introduced for
undirected graphs, which make use of anisotropic kernels of the form:
$$k^{(\alpha)}_\epsilon(x,y) = \frac{k_\epsilon(x,y)}{p^\alpha_\epsilon(x)\,p^\alpha_\epsilon(y)}\,, \qquad (7)$$
with $\alpha \in [0,1]$, where $\alpha = 0$ is the isotropic limit. To normalize the anisotropic kernels, we need to redefine the outdegree distribution of $k^{(\alpha)}_\epsilon$ as $p^{(\alpha)}_\epsilon(x) = \int_M k^{(\alpha)}_\epsilon(x,y)\,p(y)\,dy$. From (7), four families of diffusion processes of the form $f_t = H^{(\alpha)}[f](x)$ can be derived depending on which
kernel is normalized and which outdegree distribution is used for the normalization. Specifically,
we define transport operators by normalizing the asymmetric $k^{(\alpha)}_\epsilon$ or symmetric $h^{(\alpha)}_\epsilon$ kernels with the asymmetric $p_\epsilon$ or symmetric $q_\epsilon = \int_M h_\epsilon(x,y)\,p(y)\,dy$ outdegree distribution.¹ To keep track of all
options, we introduce the following notation: the operators will be indexed by the type of kernel and
outdegree distribution they correspond to (symmetric or asymmetric), with the first index identifying
the kernel and the second index identifying the outdegree distribution. For example, the family of
anisotropic limit operators introduced by [9] is defined by normalizing the symmetric kernel by
the symmetric outdegree distribution; hence they will be denoted as $H^{(\alpha)}_{ss}$, with the superscript corresponding to the anisotropic power $\alpha$.
Proposition 3.2 With the above notation,
$$H^{(\alpha)}_{aa}[f] = \Delta f - 2(1-\alpha)\,\nabla U\cdot\nabla f + r\cdot\nabla f \qquad (8)$$
$$H^{(\alpha)}_{as}[f] = \Delta f - 2(1-\alpha)\,\nabla U\cdot\nabla f - cf + (\alpha-1)(r\cdot\nabla U)f - (\nabla\cdot r)f + r\cdot\nabla f \qquad (9)$$
$$H^{(\alpha)}_{sa}[f] = \Delta f - 2(1-\alpha)\,\nabla U\cdot\nabla f + \big(c + \nabla\cdot r + (\alpha-1)\,r\cdot\nabla U\big)f \qquad (10)$$
$$H^{(\alpha)}_{ss}[f] = \Delta f - 2(1-\alpha)\,\nabla U\cdot\nabla f\,. \qquad (11)$$
The proof of this proposition, which can be found in [8], follows from repeated application of Proposition 3.1 to $p(y)$ or $q(y)$ and then to $k^\alpha_\epsilon(x,y)$ or $h^\alpha_\epsilon(x,y)$, as well as the fact that
$$\frac{1}{p^\alpha_\epsilon} = \frac{1}{p^\alpha}\left[1 - \epsilon\,\alpha\left(\omega + \frac{\Delta p}{p} + 2r\cdot\frac{\nabla p}{p} + 2\nabla\cdot r + c\right)\right] + o(\epsilon)\,.$$
Thus, if we use the asymmetric $k_\epsilon$ and $p_\epsilon$, we get $H^{(\alpha)}_{aa}$, defined by the advected diffusion equation (8). In general, $H^{(\alpha)}_{aa}$ is not hermitian, so it commonly has complex eigenvectors. This makes embedding directed graphs with this operator problematic. Nevertheless, $H^{(1)}_{aa}$ will play an important role in extracting the directionality of the sampling process.

If we use the symmetric kernel $h_\epsilon$ but the asymmetric outdegree distribution $p_\epsilon$, we get the family of operators $H^{(\alpha)}_{sa}$, of which the WCut of [3] is a special case ($\alpha = 0$). If we reverse the above, i.e. use $k_\epsilon$ and $q_\epsilon$, we obtain $H^{(\alpha)}_{as}$. This turns out to be merely a combination of $H^{(\alpha)}_{aa}$ and $H^{(\alpha)}_{sa}$.
1
The reader may notice that there are in fact eight possible combinations of kernel and degree distribution,
since the anisotropic kernel (7) could also be defined using a symmetric or asymmetric outdegree distribution.
However, there are only four distinct asymptotic results and they are all covered by using one kernel (symmetric
or asymmetric) and one degree distribution (symmetric or asymmetric) throughout.
Algorithm 1 Directed Embedding
Input: Affinity matrix $W_{i,j}$ and embedding dimension $m$ ($m \geq d$)
1. $S \leftarrow (W + W^T)/2$ (Steps 1–6 estimate the coordinates as in [11])
2. $q_i \leftarrow \sum_{j=1}^n S_{i,j}$, $Q = \mathrm{diag}(q)$
3. $V \leftarrow Q^{-1} S Q^{-1}$
4. $q^{(1)}_i \leftarrow \sum_{j=1}^n V_{i,j}$, $Q^{(1)} = \mathrm{diag}(q^{(1)})$
5. $H^{(1)}_{ss,n} \leftarrow (Q^{(1)})^{-1} V$
6. Compute $\Phi$, the $n \times (m+1)$ matrix with orthonormal columns containing the $m+1$ largest right eigenvectors (by eigenvalue) of $H^{(1)}_{ss,n}$, as well as $\Lambda$, the $(m+1) \times (m+1)$ diagonal matrix of eigenvalues. Eigenvectors 2 to $m+1$ from $\Phi$ are the $m$ coordinates of the embedding.
7. Compute $\pi$, the left eigenvector of $H^{(1)}_{ss,n}$ with eigenvalue 1. (Steps 7–8 estimate the density)
8. $\pi \leftarrow \pi / \sum_{i=1}^n \pi_i$ is the density distribution over the embedding.
9. $p_i \leftarrow \sum_{j=1}^n W_{i,j}$, $P = \mathrm{diag}(p)$ (Steps 9–13 estimate the vector field $r$)
10. $T \leftarrow P^{-1} W P^{-1}$
11. $p^{(1)}_i \leftarrow \sum_{j=1}^n T_{i,j}$, $P^{(1)} = \mathrm{diag}(p^{(1)})$
12. $H^{(1)}_{aa,n} \leftarrow (P^{(1)})^{-1} T$
13. $R \leftarrow (H^{(1)}_{aa,n} - H^{(1)}_{ss,n})\,\Phi/2$. Columns 2 to $m+1$ of $R$ are the vector field components in the direction of the corresponding coordinates of the embedding.
direction of the corresponding coordinates of the embedding.
(?)
Finally, if we only consider the symmetric kernel h and degree distribution q , we recover Hss , the
anisotropic kernels of [9] for symmetric graphs. This operator for ? = 1 is shown to separate the
manifold from the probability distribution [11] and will be used as part of our recovery algorithm.
4
Isolating the Vector Field r
Our aim is to estimate the manifold M, the density distribution $p = e^{-U}$, and the vector field $r$. The first two components of the data can be recovered from $H^{(1)}_{ss}$ as shown in [11] and summarized in Algorithm 1.

At this juncture, one feature of the generative process is missing: the vector field $r$. The natural approach for recovering $r$ is to isolate the linear operator $r\cdot\nabla$ from $H^{(\alpha)}_{aa}$ by subtracting $H^{(\alpha)}_{ss}$:
$$H^{(\alpha)}_{aa} - H^{(\alpha)}_{ss} = r\cdot\nabla\,. \qquad (12)$$
The advantage of recovering $r$ in operator form as in (12) is that $r\cdot\nabla$ is coordinate free. In other words, as long as the chosen embedding of M is diffeomorphic to M,² (12) can be used to express the component of $r$ that lies in the tangent space $TM$, which we denote by $r_{\|}$.
Specifically, let $\Phi$ be a diffeomorphic embedding of M; the component of $r$ along coordinate $\Phi_k$ is then given by $r\cdot\nabla\Phi_k = r_k$, and so, in general,
$$r_{\|} = r\cdot\nabla\Phi\,. \qquad (13)$$
The subtle point that only $r_{\|}$ is recovered from (13) follows from the fact that the operator $r\cdot\nabla$ is only defined along M and hence any directional derivative is necessarily along $TM$.
Equation (13) and the previous observations are the basis for Algorithm 1, which recovers the three
important features of the generative process for an asymmetric graph with affinity matrix W .
A similar approach can be employed to recover c + ∇·r, or simply ∇·r if r has no component
perpendicular to the tangent space TM (meaning that c ≡ 0). Recovering c + ∇·r is achieved by
taking advantage of the fact that

(H^(1)_sa − H^(1)_ss) = (c + ∇·r),  (14)

² A diffeomorphic embedding is guaranteed by using the eigendecomposition of H^(1)_ss.
which is a diagonal operator. Taking into account that for finite n, (H^(1)_sa,n − H^(1)_ss,n) is not perfectly
diagonal, using φ_n ≈ 1_n (vector of ones), i.e. (H^(1)_sa,n − H^(1)_ss,n)[1_n] = (c_n + ∇·r_n), has been found
empirically to be more stable than simply extracting the diagonal of (H^(1)_sa,n − H^(1)_ss,n).
5 Experiments
Artificial Data For illustrative purposes, we begin by applying our method to an artificial example.
We use the planet Earth as a manifold with a topographic density distribution, where sampling
probability is proportional to elevation. We also consider two vector fields: the first is parallel to the
line of constant latitude and purely tangential to the sphere, while the second is parallel to the line
of constant longitude with a component of the vector field perpendicular to the manifold. The true
model with constant latitude vector field is shown in Figure 2, along with the estimated density and
vector field projected on the true manifold (sphere).
Figure 2: (a): Sphere with latitudinal vector field, i.e. East-West asymmetry, with Wew > Wwe if node w
lies to the West of node e. The graph nodes are sampled non-uniformly, with the topographic map of the world
as sampling density. We sample n = 5000 nodes, and observe only the resulting W matrix, but not the node
locations. From W , our algorithm estimates the sample locations (geometry), the vector field (black arrows)
generating the observed asymmetries, and the sampling distribution at each data point (colormap). (b) Vector
fields on a spherical region (blue), and their estimates (red): latitudinal vector field tangent to the manifold
(left) and longitudinal vector field with component perpendicular to manifold tangent plane (right).
Both the estimated density and vector field agree with the true model, demonstrating that for artificial
data, the recovery Algorithm 1 performs quite well. We note that the estimated density does not
recover all the details of the original density, even for large sample size (here n = 5000 with ε = 0.07).
Meanwhile, the estimated vector field performs quite well even when the sampling is reduced
to n = 500 with ε = 0.1. This can be seen in Figure 2(b), where the true and estimated vector fields
are superimposed. Figure 2 also demonstrates how r·∇ only recovers the tangential component of
r. The estimated geometry is not shown on any of these figures, since the success of the diffusion
map in recovering the geometry for such a simple manifold is already well established [2, 9].
Real Data The National Longitudinal Survey of Youth (NLSY) 1979 Cohort is a representative sample of young men and women in the United States who were followed from 1979 to 2000 [12, 13].
The aim here is to use this survey to obtain a representation of the job market as a diffusion process
over a manifold.
The data set consists of a sample of 7,816 individual career sequences of length 64, listing the jobs
a particular individual held every quarter between the ages of 20 and 36. Each token in the sequence
identifies a job. Each job corresponds to an industry × occupation pair. There are 25 unique industry
and 20 unique occupation indices. Out of the 500 possible pairings, approximately 450 occur in the
data, with only 213 occurring with sufficient frequency to be included here. Thus, our graph G has
213 nodes - the jobs - and our observations consist of 7,816 walks between the graph nodes.
We convert these walks to a directed graph with affinity matrix W . Specifically, Wij represents the
number of times a transition from job i to job j was observed (Note that this matrix is asymmetric,
6
i.e Wij 6= Wji ). Normalizing each row i of W by its outdegree di gives P = diag(di )?1 W , the
non-parametric maximum likelihood estimator for the Markov chain over G for the progression
(0)
of career sequences. This Markov chain has as limit operator Haa , as the granularity of the job
market increases along with the number of observations. Thus, in trying to recover the geometry,
distribution and vector field, we are actually interested in estimating the full advective effect of the
(0)
diffusion process generated by Haa ; that is, we want to estimate r ? ? ? 2?U ? ? where we can use
(0)
(1)
?2?U ? ? = Hss ? Hss to complement Algorithm 1.
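The construction of W and P = diag(di)^{-1} W from the observed career walks can be sketched as follows (the function name and the zero-outdegree guard are our assumptions):

```python
import numpy as np

def transition_matrix(walks, n_nodes):
    """Count observed transitions into W and row-normalize by the
    outdegrees d to get the MLE Markov transition matrix P."""
    W = np.zeros((n_nodes, n_nodes))
    for walk in walks:
        for i, j in zip(walk[:-1], walk[1:]):
            W[i, j] += 1.0            # W_ij counts transitions i -> j
    d = W.sum(axis=1)                 # outdegrees
    safe = np.where(d > 0, d, 1.0)    # leave rows with no outgoing moves at zero
    return W, W / safe[:, None]
```

The count matrix W is generally asymmetric, which is exactly what the recovery algorithm exploits.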
Figure 3: Embedding the job market along with the field r − 2∇U over the first two non-constant eigenvectors.
The color map corresponds to the mean monthly wage in dollars (a) and to the female proportion (b) for each
job.
We obtain an embedding of the job market that describes the relative position of jobs, their distribution, and the natural time progression from each job. Of these, the relative position and natural
time progression are the most interesting. Together, they summarize the job market dynamics by
describing which jobs are naturally "close" as well as where they can lead in the future. From a
public policy perspective, this can potentially improve focus on certain jobs for helping individuals
attain better upward mobility.
The job market was found to be a high dimensional manifold. We present only the first two dimensions,
that is, the second and third eigenvectors of H^(0)_ss, since the first eigenvector is uninformative
(constant) by construction.
such as wages and gender. Figure 3 displays this two-dimensional sub-embedding along with the
directional information r − 2∇U for each dimension. The plot shows very little net progression
toward regions of increasing mean salary3 . This is somewhat surprising, but it is easy to overstate
this observation: diffusion alone would be enough to move the individuals towards higher salary.
What Figure 3(a) suggests is that there appear to be no "external forces" advecting individuals towards higher salary. Nevertheless, there appear to be other external forces at play in the job market:
Figure 3 (b), which is analogous to Figure 3 (a), but with gender replacing the salary color scheme,
suggests that these forces push individuals towards greater gender differentiation. This is especially
true amongst male-dominated jobs which appear to be advected toward the left edge of the embedding. Hence, this simple analysis of the job market can be seen as an indication that males and
females tend to move away from each other over time, while neither seems to have a monopoly on
high- or low- paying jobs.
6 Discussion
This paper makes three contributions: (1) it introduces a manifold-based generative model for directed graphs with weighted edges, (2) it obtains asymptotic results for operators constructed from
the directed graphs, and (3) these asymptotic results lead to a natural algorithm for estimating the
model.
³ It is worth noting that in the NLSY data set, high paying jobs are teacher, nurse and mechanic. This is due
to the fact that the career paths observed stop at age 36, which is relatively early in an individual's career.
Generative Models that assume that data are sampled from a manifold are standard for undirected
graphs, but to our knowledge, none have yet been proposed for directed graphs. When W is symmetric, it is natural to assume that it depends on the points' proximity. For asymmetric affinities W,
one must include an additional component to explain the asymmetry. In the asymptotic limit, this is
tantamount to defining a vector field on the manifold.
Algorithm We have used from [9] the idea of defining anisotropic kernels (indexed by α) in order to
separate the density p and the manifold geometry M. Also, we adopted their general assumptions
about the symmetric part of the kernel. As a consequence, the recovery algorithm for p and M is
identical to theirs.
However, insofar as the asymmetric part of the kernel is concerned, everything, starting from the
definition and the introduction of the vector field r as a way to model the asymmetry, through the
derivation of the asymptotic expression for the symmetric plus asymmetric kernel, is new. We go
significantly beyond the elegant idea of [9] regarding the use of anisotropic kernels by analyzing the
four distinct renormalizations possible for a given α, each of them combining different aspects of
M, p and r. Only the successful (and novel) combination of two different anisotropic operators is
able to recover the directional flow r.
Algorithm 1 is natural, but we do not claim it is the only possible one in the context of our model.
For instance, we can also use H^(α)_sa to recover the operator ∇·r (which empirically seems to have
worse numerical properties than r·∇). In the National Longitudinal Survey of Youth study, we
were interested in the whole advective term, so we estimated it from a different combination of
operators. Depending on the specific question, other features of the model could be obtained.
Limit Results Proposition 3.1 is a general result on the asymptotics of asymmetric kernels. Recovering the manifold and r is just one, albeit the most useful, of the many ways of exploiting these
results. For instance, H^(0)_sa is the limit operator of the operators used in [3] and [5]. The limit analysis
could be extended to other digraph embedding algorithms such as [4, 6].
How general is our model? Any kernel can be decomposed into a symmetric and an asymmetric
part, as we have done. The assumptions on the symmetric part h are standard. The paper of [7] goes
one step further from these assumptions; we will discuss it in relationship with our work shortly.
The more interesting question is how limiting are our assumptions regarding the choice of kernel,
especially the asymmetric part, which we parameterized as a(x, y) = r/2 · (y − x) h(x, y) in (4).
In the asymptotic limit, this choice turns out to be fully general, at least up to the identifiable aspects
of the model. For a more detailed discussion of this issue, see [8].
In [7], Ting, Huang and Jordan presented asymptotic results for a general family of kernels that
includes asymmetric and random kernels. Our k can be expressed in the notation of [7] by taking
wx(y) ← 1 + r(x, y)·(y − x), rx(y) ← 1, K0 ← h, h ← √ε. Their assumptions are more general than
the assumptions we make here, yet our model is general up to what can be identified from G alone.
The distinction arises because [7] focuses on the graph construction methods from an observed
sample of M, while we focus on explaining an observed directed graph G through a manifold
generative process. Moreover, while the [7] results can be used to analyze data from directed graphs,
they differ from our Proposition 3.1. Specifically, with respect to the limit in Theorem 3 from
[7], we obtain the additional source terms f(x)∇·r and c(x)f(x) that follow from not enforcing
conservation of mass while defining the operators H^(α)_sa and H^(α)_as.
We applied our theory of directed graph embedding to the analysis of the career sequences in
Section 5, but asymmetric affinity data abound in other social contexts, and in the physical and
life sciences. Indeed, any "similarity" score that is obtained from a likelihood of the form
Wvu = likelihood(u|v) is generally asymmetric. Hence our methods can be applied to study not
only social networks, but also patterns of human movement, road traffic, and trade relations, as well
as alignment scores in molecular biology. Finally, the physical interpretation of our model also
makes it naturally applicable to physical models of flows.
Acknowledgments
This research was partially supported by NSF awards IIS-0313339 and IIS-0535100.
References
[1] Belkin and Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002.
[2] Nadler, Lafon, and Coifman. Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. In Neural Information Processing Systems Conference, 2006.
[3] Meila and Pentney. Clustering by weighted cuts in directed graphs. In SIAM Data Mining Conference, 2007.
[4] Zhou, Huang, and Schölkopf. Learning from labeled and unlabeled data on a directed graph. In International Conference on Machine Learning, pages 1041–1048, 2005.
[5] Zhou, Schölkopf, and Hofmann. Semi-supervised learning on directed graphs. In Advances in Neural Information Processing Systems, volume 17, pages 1633–1640, 2005.
[6] Fan R. K. Chung. The diameter and Laplacian eigenvalues of directed graphs. Electr. J. Comb., 13, 2006.
[7] Ting, Huang, and Jordan. An analysis of the convergence of graph Laplacians. In International Conference on Machine Learning, 2010.
[8] Dominique Perrault-Joncas and Marina Meilă. Directed graph embedding: an algorithm based on continuous limits of Laplacian-type operators. Technical Report TR 587, University of Washington, Department of Statistics, November 2011.
[9] Coifman and Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21:6–30, 2006.
[10] Mikhail Belkin and Partha Niyogi. Convergence of Laplacian eigenmaps. Preprint, short version NIPS 2008, 2008.
[11] Coifman, Lafon, Lee, Maggioni, Warner, and Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. In Proceedings of the National Academy of Sciences, pages 7426–7431, 2005.
[12] United States Department of Labor. National Longitudinal Survey of Youth 1979 cohort. http://www.bls.gov/nls/, retrieved October 2011.
[13] Marc A. Scott. Affinity models for career sequences. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3):417–436, 2011.
t-divergence Based Approximate Inference
Nan Ding2 , S.V. N. Vishwanathan1,2 , Yuan Qi2,1
Departments of 1 Statistics and 2 Computer Science
Purdue University
[email protected], [email protected], [email protected]
Abstract
Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions. We
extend the idea of approximate inference to the t-exponential family by defining
a new t-divergence. This divergence measure is obtained via convex duality between the log-partition function of the t-exponential family and a new t-entropy.
We illustrate our approach on the Bayes Point Machine with a Student's t-prior.
1 Introduction
The exponential family of distributions is ubiquitous in statistical machine learning. One prominent application is their use in modeling conditional independence between random variables via a
graphical model. However, when the number of random variables is large, and the underlying graph
structure is complex, a number of computational issues need to be tackled in order to make inference
feasible. Therefore, a number of approximate techniques have been brought to bear on the problem.
Two prominent approximate inference techniques include the Monte Carlo Markov Chain (MCMC)
method [1], and the deterministic method [2, 3].
Deterministic methods are gaining significant research traction, mostly because of their high efficiency and practical success in many applications. Essentially, these methods are premised on the
search for a proxy in an analytically solvable distribution family that approximates the true underlying distribution. To measure the closeness between the true and the approximate distributions,
the relative entropy between these two distributions is used. When working with the exponential
family, one uses the Shannon-Boltzmann-Gibbs (SBG) entropy in which case the relative entropy is
the well-known Kullback-Leibler (KL) divergence [2]. Numerous well-known algorithms in the exponential family, such as the mean field method [2, 4] and expectation propagation [3, 5], are based
on this criterion.
The thin-tailed nature of the exponential family makes it unsuitable for designing algorithms which
are potentially robust against certain kinds of noisy data. Notable work including [6, 7] utilizes
mixture/split exponential family based approximate model to improve the robustness. Meanwhile,
effort has also been devoted to develop alternate, generalized distribution families in statistics [e.g.
8, 9], statistical physics [e.g. 10, 11], and most recently in machine learning [e.g. 12]. Of particular
interest to us is the t-exponential family¹, which was first proposed by Tsallis and co-workers [10,
13, 14]. It is a special case of the more general φ-exponential family of Naudts [11, 15–17]. Related
work in [18] has applied the t-exponential family to generalize logistic regression and obtain an
algorithm that is robust against certain types of label noise.
In this paper, we attempt to generalize deterministic approximate inference by using the t-exponential family. In other words, the approximate distribution used is from the t-exponential
family. To obtain the corresponding divergence measure as in the exponential family, we exploit the
¹ Sometimes, also called the q-exponential family or the Tsallis distribution.
convex duality between the log-partition function of the t-exponential family and a new t-entropy2
to define the t-divergence. To illustrate the usage of the above procedure, we use it for approximate
inference in the Bayes Point Machine (BPM) [3] but with a Student's t-prior.
The rest of the paper is organized as follows. Section 2 consists of a brief review of the t-exponential
family. In Section 3 a new t-entropy is defined as the convex dual of the log-partition function of the
t-exponential family. In Section 4, the t-divergence is derived and is used for approximate inference
in Section 5. Section 6 illustrates the inference approach by applying it to the Bayes Point Machine
with a Student's t-prior, and we conclude the paper with a discussion in Section 7.
2 The t-exponential Family and Related Entropies
The t-exponential family was first proposed by Tsallis and co-workers [10, 13, 14]. It is defined as

p(x; θ) := exp_t(⟨Φ(x), θ⟩ − g_t(θ)),  (1)

where

exp_t(x) := exp(x) if t = 1, and [1 + (1 − t)x]_+^{1/(1−t)} otherwise.  (2)

The inverse of the exp_t function is called log_t. Note that the log-partition function, g_t(θ), in (1)
preserves convexity and satisfies

∇_θ g_t(θ) = E_q[Φ(x)].  (3)
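A minimal numerical sketch of exp_t as defined in (2); the implementation details (clipping via `np.maximum`) and the function name are our assumptions:

```python
import numpy as np

def exp_t(x, t):
    """exp_t(x) = exp(x) for t = 1, and [1 + (1 - t) x]_+^(1 / (1 - t)) otherwise."""
    if t == 1.0:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - t) * np.asarray(x, dtype=float), 0.0)
    return base ** (1.0 / (1.0 - t))
```

For t > 1 the tails of exp_t decay polynomially rather than exponentially, which is the source of the heavy-tailed members of the family such as the Student's t-distribution.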
Here q(x) is called the escort distribution of p(x), and is defined as

q(x) := p(x)^t / ∫ p(x)^t dx.  (4)
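For a discrete p, the escort distribution (4) is a one-liner (the name is ours):

```python
import numpy as np

def escort(p, t):
    """Escort distribution of (4): q_i = p_i^t / sum_j p_j^t."""
    pt = np.asarray(p, dtype=float) ** t
    return pt / pt.sum()
```

For t < 1 the escort flattens p towards uniform, for t > 1 it sharpens it, and t = 1 leaves p unchanged.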
See the supplementary material for a proof of convexity of g_t(θ) based on material from [17], and a
detailed review of the t-exponential family of distributions.
There are various generalizations of the Shannon-Boltzmann-Gibbs (SBG) entropy which are proposed in statistical physics, and paired with the t-exponential family of distributions. Perhaps the
most well-known among them is the Tsallis entropy [10]:

H_tsallis(p) := −∫ p(x)^t log_t p(x) dx.  (5)
Naudts [11, 15, 16, 17] proposed a more general framework, wherein the familiar exp and log
functions are generalized to exp_φ and log_φ functions which are defined via a function φ. These
generalized functions are used to define a family of distributions, and corresponding to this family
an entropy-like measure called the information content I_φ(p) as well as its divergence measure are
defined. The information content is the dual of a function F(θ), where
∇_θ F(θ) = E_p[Φ(x)].  (6)
Setting φ(p) = p^t in the Naudts framework recovers the t-exponential family defined in (1). Interestingly, when φ(p) = (1/t) p^{2−t}, the information content I_φ is exactly the Tsallis entropy (5).
Another well-known non-SBG entropy is the Rényi entropy [19]. The Rényi α-entropy (when
α ≠ 1) of the probability distribution p(x) is defined as:

H_α(p) = 1/(1 − α) log(∫ p(x)^α dx).  (7)
Besides these entropies proposed in statistical physics, it is also worth noting efforts that work with
generalized linear models or utilize different divergence measures, such as [5, 8, 20, 21].
It is well known that the negative SBG entropy is the Fenchel dual of the log-partition function of an
exponential family distribution. This fact is crucially used in variational inference [2]. Although all
² Although closely related, our t-entropy definition is different from either the Tsallis entropy [10] or the
information content in [17]. Nevertheless, it can be regarded as an example of the generalized framework of
the entropy proposed in [8].
of the above generalized entropies are useful in their own way, none of them satisfy this important
property for the t-exponential family. In the following sections we attempt to find an entropy which
satisfies this property, and outline the principles of approximate inference using the t-exponential
family. Note that although our main focus is the t-exponential family, we believe that our results can
also be extended to the more general φ-exponential family of Naudts [15, 17].
3 Convex Duality and the t-Entropy
Definition 1 (Inspired by Wainwright and Jordan [2]) The t-entropy of a distribution p(x; θ) is
defined as

H_t(p(x; θ)) := −∫ q(x; θ) log_t p(x; θ) dx = −E_q[log_t p(x; θ)].  (8)

where q(x; θ) is the escort distribution of p(x; θ). It is straightforward to verify that the t-entropy is
non-negative. Furthermore, the following theorem establishes the duality between −H_t and g_t. The
proof is provided in the supplementary material. This extends Theorem 3.4 of [2] to the t-entropy.
Theorem 2 For any μ, define θ(μ) (if it exists) to be the parameter of the t-exponential family s.t.

μ = E_{q(x;θ(μ))}[Φ(x)] = ∫ Φ(x) q(x; θ(μ)) dx.  (9)

Then

g_t*(μ) = −H_t(p(x; θ(μ))) if θ(μ) exists, and +∞ otherwise.  (10)

where g_t*(μ) denotes the Fenchel dual of g_t(θ). By duality it also follows that

g_t(θ) = sup_μ {⟨μ, θ⟩ − g_t*(μ)}.  (11)
From Theorem 2, it is obvious that H_t(·) is a concave function. Below, we derive the t-entropy
function corresponding to two commonly used distributions. See Figure 1 for a graphical illustration.
Example 1 (t-entropy of Bernoulli distribution) Assume the Bernoulli distribution is Bern(p)
with parameter p. The t-entropy is

H_t(p) = [−p^t log_t p − (1 − p)^t log_t(1 − p)] / [p^t + (1 − p)^t] = [(p^t + (1 − p)^t)^{−1} − 1] / (t − 1)  (12)
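The escort-average definition and the closed form in (12) can be cross-checked numerically; a sketch (all names are ours):

```python
import numpy as np

def log_t(x, t):
    # inverse of exp_t
    return np.log(x) if t == 1.0 else (x ** (1.0 - t) - 1.0) / (1.0 - t)

def t_entropy_bernoulli(p, t):
    """H_t of Bern(p) computed directly from the escort average in (8)."""
    Z = p ** t + (1.0 - p) ** t
    return -(p ** t * log_t(p, t) + (1.0 - p) ** t * log_t(1.0 - p, t)) / Z
```

At t = 1 the escort normalizer is 1 and the expression reduces to the familiar SBG entropy of a Bernoulli variable.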
Example 2 (t-entropy of Student's t-distribution) Assume that a k-dim Student's t-distribution
p(x; μ, Σ, v) is given by (54); then the t-entropy of p(x; μ, Σ, v) is given by

H_t(p(x)) = −1/(1 − t) (1 + v^{−1}k) Ψ^{−2/(v+k)} + 1/(1 − t),  (13)

where K = (vΣ)^{−1}, v = 2/(t − 1) − k, and Ψ = Γ((v+k)/2) / (Γ(v/2) (πv)^{k/2} |Σ|^{1/2}).

3.1 Relation with the Tsallis Entropy
Using (4), (5), and (8), the relation between the t-entropy and Tsallis entropy is obvious. Basically,
the t-entropy is a normalized version of the Tsallis entropy,

H_t(p) = −(1 / ∫ p(x)^t dx) ∫ p(x)^t log_t p(x) dx = (1 / ∫ p(x)^t dx) H_tsallis(p).  (14)
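Relation (14) is straightforward to verify for a discrete distribution (a sketch; names are ours):

```python
import numpy as np

def log_t(x, t):
    return np.log(x) if t == 1.0 else (x ** (1.0 - t) - 1.0) / (1.0 - t)

def tsallis_entropy(p, t):
    # H_tsallis(p) = -sum p^t log_t p, the discrete analogue of (5)
    return float(-np.sum(p ** t * log_t(p, t)))

def t_entropy(p, t):
    # H_t(p) = H_tsallis(p) normalized by sum p^t, as in (14)
    pt = p ** t
    return float(-np.sum(pt * log_t(p, t)) / pt.sum())
```

At t = 1 the normalizer is 1 and the two entropies coincide with the SBG entropy.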
Figure 1: t-entropy corresponding to two well known probability distributions. Left: the Bernoulli
distribution Bern(x; p); Right: the Student's t-distribution St(x; 0, σ², v), where v = 2/(t − 1) − 1.
One can recover the SBG entropy by setting t = 1.0.
3.2 Relation with the Rényi Entropy
We can equivalently rewrite the Rényi entropy as:

H_α(p) = −log[(∫ p(x)^α dx)^{−1/(1−α)}].  (15)

The t-entropy of p(x) (when t ≠ 1) is equal to

H_t(p) = −(∫ p(x)^t log_t p(x) dx) / (∫ p(x)^t dx) = −log_t[(∫ p(x)^t dx)^{−1/(1−t)}].  (16)

Therefore, when α = t,

H_t(p) = −log_t(exp(−H_α(p))).  (17)

When t and α → 1, both entropies go to the SBG entropy.
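Identity (17) can be checked numerically for a discrete distribution (a sketch; the names are ours and t ≠ 1 is assumed):

```python
import numpy as np

def log_t(x, t):
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def renyi_entropy(p, a):
    # H_alpha(p) = log(sum p^alpha) / (1 - alpha), discrete analogue of (7)
    return float(np.log(np.sum(p ** a)) / (1.0 - a))

def t_entropy(p, t):
    # H_t(p) = -sum q log_t p with q the escort of p
    pt = p ** t
    return float(-np.sum(pt * log_t(p, t)) / pt.sum())
```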
4 The t-divergence
Recall that the Bregman divergence defined by a convex function −H between p and p̃ is [22]:

D(p‖p̃) = −H(p) + H(p̃) + ∫ (dH(p̃)/dp̃) (p(x) − p̃(x)) dx.  (18)
For the SBG entropy, it is easy to verify that the Bregman divergence leads to the relative SBG-entropy (also widely known as the Kullback-Leibler (KL) divergence). Analogously, one can define
the t-divergence³ as the Bregman divergence or relative entropy based on the t-entropy.
Definition 3 The t-divergence, which is the relative t-entropy between two distributions p(x) and
p̃(x), is defined as

D_t(p‖p̃) = ∫ q(x) log_t p(x) − q(x) log_t p̃(x) dx.  (19)
The following theorem states the relationship between the relative t-entropy and the Bregman divergence. The proof is provided in the supplementary material.
Theorem 4 The t-divergence is the Bregman divergence defined on the negative t-entropy −Ht(p).
³Note that the t-divergence is not a special case of the divergence measure of Naudts [17] because the entropies are defined differently, although the derivations are fairly similar in spirit.
The t-divergence plays a central role in the variational inference that will be derived shortly. It also preserves the following properties:

- Dt(p‖p̂) ≥ 0, ∀p, p̂. The equality holds only for p = p̂.
- Dt(p‖p̂) ≠ Dt(p̂‖p).
Example 3 (Relative t-entropy between Bernoulli distributions) Assume that two Bernoulli distributions Bern(p₁) and Bern(p₂) are given; then the relative t-entropy Dt(p₁‖p₂) between these two distributions is:

$$D_t(p_1\|p_2) = \frac{p_1^t \log_t p_1 + (1-p_1)^t\log_t(1-p_1) - p_1^t\log_t p_2 - (1-p_1)^t\log_t(1-p_2)}{p_1^t + (1-p_1)^t} \tag{20}$$

$$= \frac{1 - p_1^t\, p_2^{1-t} - (1-p_1)^t(1-p_2)^{1-t}}{(1-t)\left(p_1^t + (1-p_1)^t\right)} \tag{21}$$
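The two expressions (20) and (21) should agree, and the divergence should be nonnegative and asymmetric. A minimal check (arbitrary example parameters):

```python
def log_t(x, t):
    """Deformed logarithm with t != 1."""
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def dt_bernoulli_20(p1, p2, t):
    """Relative t-entropy via equation (20): escort-weighted difference of t-logs."""
    num = (p1 ** t * log_t(p1, t) + (1 - p1) ** t * log_t(1 - p1, t)
           - p1 ** t * log_t(p2, t) - (1 - p1) ** t * log_t(1 - p2, t))
    return num / (p1 ** t + (1 - p1) ** t)

def dt_bernoulli_21(p1, p2, t):
    """Same divergence via the simplified closed form (21)."""
    num = 1.0 - p1 ** t * p2 ** (1 - t) - (1 - p1) ** t * (1 - p2) ** (1 - t)
    return num / ((1 - t) * (p1 ** t + (1 - p1) ** t))

p1, p2, t = 0.7, 0.4, 1.5
d12 = dt_bernoulli_20(p1, p2, t)
d21 = dt_bernoulli_20(p2, p1, t)
```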
Example 4 (Relative t-entropy between Student's t-distributions) Assume that two Student's t-distributions p₁(x; μ₁, Σ₁, v) and p₂(x; μ₂, Σ₂, v) are given; then the relative t-entropy Dt(p₁‖p₂) between these two distributions is:

$$D_t(p_1\|p_2) = \int q_1(x)\log_t p_1(x) - q_1(x)\log_t p_2(x)\,dx \tag{22}$$

$$= \frac{\Psi_1^{2/(v+k)}}{1-t}\left(1 + v^{-1}k\right) - \frac{\Psi_2^{2/(v+k)}}{1-t}\left(1 + (\mu_1-\mu_2)^\top K_2\,(\mu_1-\mu_2) + \mathrm{Tr}\left(K_2\Sigma_1\right)\right) \tag{23}$$

where $K_2 = (v\Sigma_2)^{-1}$ and $\Psi_1$, $\Psi_2$ are defined as in Example 2 with $\Sigma = \Sigma_1$, $\Sigma_2$ respectively.
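As a sanity check on Example 4, the closed form can be compared with brute-force quadrature in one dimension. The sketch below assumes our reading of (23), namely Dt = Ψ₁^{2/(v+k)}(1 + k/v)/(1−t) − Ψ₂^{2/(v+k)}(1 + (μ₁−μ₂)ᵀK₂(μ₁−μ₂) + Tr(K₂Σ₁))/(1−t) with K₂ = (vΣ₂)⁻¹; the parameter values are arbitrary.

```python
import math

v, k = 3.0, 1                      # degrees of freedom and dimension; t = 1 + 2/(v + k)
t = 1.0 + 2.0 / (v + k)

def psi(s2):
    # Psi = (pi v)^{k/2} Gamma(v/2) |Sigma|^{1/2} / Gamma((v+k)/2), here with k = 1
    return math.sqrt(math.pi * v) * math.gamma(v / 2) * math.sqrt(s2) / math.gamma((v + 1) / 2)

def st_pdf(x, mu, s2, Psi):
    return (1.0 + (x - mu) ** 2 / (v * s2)) ** (-(v + 1) / 2) / Psi

def log_t(u):
    return (u ** (1.0 - t) - 1.0) / (1.0 - t)

mu1, s1 = 0.0, 1.0                 # p1 = St(x; 0, 1, v)
mu2, s2 = 0.8, 1.5                 # p2 = St(x; 0.8, 1.5, v); arbitrary example
P1, P2 = psi(s1), psi(s2)

# numerical divergence: D_t = int q1 (log_t p1 - log_t p2) dx, with escort q1 = p1**t / Z
lo, hi, n = -300.0, 300.0, 120001
h = (hi - lo) / (n - 1)
Z = acc = 0.0
for i in range(n):
    x = lo + i * h
    w = h * (0.5 if i in (0, n - 1) else 1.0)   # trapezoid weights
    p1x = st_pdf(x, mu1, s1, P1)
    p1t = p1x ** t
    Z += w * p1t
    acc += w * p1t * (log_t(p1x) - log_t(st_pdf(x, mu2, s2, P2)))
dt_numeric = acc / Z

# closed form (23), with K2 = 1 / (v * s2)
K2 = 1.0 / (v * s2)
dt_closed = (P1 ** (2.0 / (v + k)) * (1 + k / v)
             - P2 ** (2.0 / (v + k)) * (1 + (mu1 - mu2) ** 2 * K2 + K2 * s1)) / (1 - t)
```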
Figure 2: The t-divergence between: Left: Bern(p₁) and Bern(p₂ = 0.5); Middle: St(x; μ, 1, v) and St(x; 0, 1, v); Right: St(x; 0, σ², v) and St(x; 0, 1, v), where v = 2/(t − 1) − 1.
5 Approximate Inference in the t-Exponential Family
In essence, deterministic approximate inference finds an approximate distribution from an analytically tractable distribution family which minimizes the relative entropy (e.g. the KL-divergence in the exponential family) with the true distribution. Since the relative entropy is not symmetric, the results of minimizing D(p‖p̂) and D(p̂‖p) are different. In the main body of the paper we describe methods which minimize D(p‖p̂), where p̂ comes from the t-exponential family. Algorithms which minimize D(p̂‖p) are described in the supplementary material.
Given an arbitrary probability distribution p(x), in order to obtain a good approximation p̂(x; θ) in the t-exponential family, we minimize the relative t-entropy (19):

$$\hat{p} = \operatorname*{argmin}_{\hat{p}}\, D_t(p\,\|\,\hat{p}) = \int q(x)\log_t p(x) - q(x)\log_t \hat{p}(x;\theta)\,dx. \tag{24}$$

Here q(x) = (1/Z) p(x)ᵗ denotes the escort of the original distribution p(x). Since

$$\hat{p}(x;\theta) = \exp_t\left(\langle \Phi(x), \theta\rangle - g_t(\theta)\right), \tag{25}$$
using the fact that ∇θ gt(θ) = E_q̂[Φ(x)], one can take the derivative of (24) with respect to θ:

$$E_q[\Phi(x)] = E_{\hat{q}}[\Phi(x)]. \tag{26}$$
In other words, the approximate distribution can be obtained by matching the escort expectation of
Φ(x) between the two distributions.
The escort expectation matching in (26) is reminiscent of the moment matching in Power-EP [5] or the Fractional BP [23] algorithm, where the approximate distribution is obtained by

$$E_{\hat{p}}[\Phi(x)] = E_{p^{\alpha}\hat{p}^{1-\alpha}/Z}[\Phi(x)]. \tag{27}$$
The main reason for using the t-divergence, however, is not to address the computational or convergence issues as is done in the case of power EP/fractional BP. In contrast, we use the generalized
exponential family (t-exponential family) to build our approximate models. In this context, the
t-divergence plays the same role as KL divergence in the exponential family.
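A small numerical illustration of why escort expectations are the natural moments here. Example 5 (referenced below) states that the escort of a k-dim Student's t with t = 1 + 2/(v + k) is again a Student's t whose mean is μ and whose second central moment is Σ; the sketch below checks this claim by quadrature in one dimension (all parameter values are arbitrary).

```python
import math

v, mu, s = 5.0, 1.2, 0.7      # St(x; mu, s, v); arbitrary example values
t = 1.0 + 2.0 / (v + 1)       # k = 1
Psi = math.sqrt(math.pi * v) * math.gamma(v / 2) * math.sqrt(s) / math.gamma((v + 1) / 2)

def pdf(x):
    return (1.0 + (x - mu) ** 2 / (v * s)) ** (-(v + 1) / 2) / Psi

# escort q(x) = p(x)**t / Z; compute its mean and variance by trapezoidal quadrature
lo, hi, n = -400.0, 400.0, 400001
h = (hi - lo) / (n - 1)
Z = m1 = m2 = 0.0
for i in range(n):
    x = lo + i * h
    w = h * (0.5 if i in (0, n - 1) else 1.0)
    q = pdf(x) ** t
    Z += w * q
    m1 += w * q * x
    m2 += w * q * x * x
mean_q = m1 / Z
var_q = m2 / Z - mean_q ** 2
```

The escort mean matches μ and the escort variance matches the scale s, which is exactly what makes matching escort expectations of Φ(w) = [w, wwᵀ] tractable in Section 6.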
To illustrate our ideas on a non-trivial problem, we apply escort expectation matching to the Bayes
Point Machine (BPM) [3] with a Student's t-distribution prior.
6 Bayes Point Machine with Student's t-Prior
Let D = {(x₁, y₁), . . . , (xₙ, yₙ)} be the training data. Consider a linear model parametrized by the k-dim weight vector w. For each training data point (xᵢ, yᵢ), the conditional distribution of the label yᵢ given xᵢ and w is modeled as [3]:

$$t_i(w) = p(y_i \mid x_i, w) = \epsilon + (1 - 2\epsilon)\,\Theta(y_i \langle w, x_i\rangle), \tag{28}$$
where Θ(z) is the step function: Θ(z) = 1 if z > 0 and Θ(z) = 0 otherwise. By making a standard i.i.d. assumption about the data, the posterior distribution can be written as

$$p(w \mid D) \propto p_0(w)\prod_i t_i(w), \tag{29}$$
where p₀(w) denotes a prior distribution. Instead of using a multivariate Gaussian distribution as a prior as was done by Minka [3], we will use a Student's t-prior, because we want to build robust models:

$$p_0(w) = St(w; 0, I, v). \tag{30}$$
As it turns out, the posterior p(w | D) is infeasible to obtain in practice. Therefore we will find a multivariate Student's t-distribution to approximate the true posterior:

$$p(w \mid D) \approx \hat{p}(w) = St(w; \hat{\mu}, \hat{\Sigma}, \hat{v}). \tag{31}$$
In order to obtain such a distribution, we implement the Bayesian online learning method [24], which is also known as the Assumed Density Filter [25]. The extension to expectation propagation is similar to [3] and omitted due to space limitation. The main idea is to process data points one by one and update the posterior by using escort moment matching. Assume the approximate distribution after processing (x₁, y₁), . . . , (xᵢ₋₁, yᵢ₋₁) to be p̂ᵢ₋₁(w) and define

$$\hat{p}_0(w) = p_0(w) \tag{32}$$

$$p_i(w) \propto \hat{p}_{i-1}(w)\,t_i(w) \tag{33}$$

Then the approximate posterior p̂ᵢ(w) is updated as

$$\hat{p}_i(w) = St(w; \mu^{(i)}, \Sigma^{(i)}, v) = \operatorname*{argmin}_{\mu,\Sigma}\, D_t\left(p_i(w)\,\|\,St(w; \mu, \Sigma, v)\right). \tag{34}$$
Because p̂ᵢ(w) is a k-dim Student's t-distribution with degree of freedom v, for which Φ(w) = [w, w wᵀ] and t = 1 + 2/(v + k) (see example 5 in Appendix A), it turns out that we only need

$$\int q_i(w)\, w\, dw = \int \hat{q}_i(w)\, w\, dw, \quad\text{and} \tag{35}$$

$$\int q_i(w)\, w w^\top dw = \int \hat{q}_i(w)\, w w^\top dw. \tag{36}$$
Here q̂ᵢ(w) ∝ p̂ᵢ(w)ᵗ, qᵢ(w) ∝ p̂ᵢ₋₁(w)ᵗ t̃ᵢ(w), and

$$\tilde{t}_i(w) = t_i(w)^t = \epsilon^t + \left((1-\epsilon)^t - \epsilon^t\right)\Theta(y_i\langle w, x_i\rangle). \tag{37}$$

Denote p̂ᵢ₋₁(w) = St(w; μ^{(i−1)}, Σ^{(i−1)}, v) and q̂ᵢ₋₁(w) = St(w; μ^{(i−1)}, vΣ^{(i−1)}/(v + 2), v + 2) (also see example 5), and we make use of the following relations:
$$Z_1 = \int \hat{p}_{i-1}(w)\,\tilde{t}_i(w)\,dw \tag{38}$$

$$= \epsilon^t + \left((1-\epsilon)^t - \epsilon^t\right)\int_{-\infty}^{z} St(x; 0, 1, v)\,dx \tag{39}$$

$$Z_2 = \int \hat{q}_{i-1}(w)\,\tilde{t}_i(w)\,dw \tag{40}$$

$$= \epsilon^t + \left((1-\epsilon)^t - \epsilon^t\right)\int_{-\infty}^{z} St(x; 0, v/(v+2), v+2)\,dx \tag{41}$$

$$g = \frac{1}{Z_2}\nabla_{\mu} Z_1 = y_i\,\rho\, x_i \tag{42}$$

$$G = \frac{1}{Z_2}\nabla_{\Sigma} Z_1 = -\frac{1}{2}\,\rho\,\frac{y_i\langle x_i, \mu^{(i-1)}\rangle}{x_i^\top \Sigma^{(i-1)} x_i}\; x_i x_i^\top \tag{43}$$

where

$$\rho = \frac{\left((1-\epsilon)^t - \epsilon^t\right) St(z; 0, 1, v)}{Z_2\sqrt{x_i^\top \Sigma^{(i-1)} x_i}} \quad\text{and}\quad z = \frac{y_i\langle x_i, \mu^{(i-1)}\rangle}{\sqrt{x_i^\top \Sigma^{(i-1)} x_i}}.$$
Equations (39) and (41) are analogous to Eq. (5.17) in [3]. By assuming that a regularity condition holds⁴, ∫ and ∇ can be interchanged in ∇Z₁ of (42) and (43). Combining with (38) and (40), we obtain the escort expectations of pᵢ(w) from Z₁ and Z₂ (similar to Eq. (5.12) and (5.13) in [3]),
$$E_q[w] = \frac{1}{Z_2}\int \hat{q}_{i-1}(w)\,\tilde{t}_i(w)\, w\, dw = \mu^{(i-1)} + \Sigma^{(i-1)} g \tag{44}$$

$$E_q[w w^\top] - E_q[w]E_q[w]^\top = \frac{1}{Z_2}\int \hat{q}_{i-1}(w)\,\tilde{t}_i(w)\, w w^\top dw - E_q[w]E_q[w]^\top = r\,\Sigma^{(i-1)} - \Sigma^{(i-1)}\left(g g^\top - 2G\right)\Sigma^{(i-1)} \tag{45}$$

where r = Z₁/Z₂ and E_q[·] means the expectation with respect to qᵢ(w).
Since the mean and variance of the escort of p̂ᵢ(w) are μ^{(i)} and Σ^{(i)} (again see example 5), after combining with (42) and (43),

$$\mu^{(i)} = E_q[w] = \mu^{(i-1)} + \rho\, y_i\, \Sigma^{(i-1)} x_i \tag{46}$$

$$\Sigma^{(i)} = E_q[w w^\top] - E_q[w]E_q[w]^\top = r\,\Sigma^{(i-1)} - (\Sigma^{(i-1)} x_i)\,\rho\left(\rho + \frac{y_i\langle x_i, \mu^{(i)}\rangle}{x_i^\top \Sigma^{(i-1)} x_i}\right)(\Sigma^{(i-1)} x_i)^\top. \tag{47}$$
6.1 Results
In the above Bayesian online learning algorithm, every time a new data point xₙ comes in, p(θ | x₁, . . . , xₙ₋₁) is used as a prior, and the posterior is computed by incorporating the likelihood p(xₙ | θ). The Student's t-distribution is a more conservative, or non-subjective, prior than the Gaussian distribution because of its heavy-tailed nature. More specifically, it means that the Student's t-based BPM can be more strongly influenced by newly arriving points.
In many binary classification problems, it is assumed that the underlying classification hyperplane is always fixed. However, in some real situations, this assumption might not hold. Especially, in

⁴This is a fairly standard technical requirement which is often proved using the Dominated Convergence Theorem (see e.g. Section 9.2 of Rosenthal [26]).
Figure 3: The number of wrong signs between w and wᵇ. Left: case I; Right: case II
Table 1: The classification error of all the data points

          Gauss   v=3     v=10
Case I    0.337   0.242   0.254
Case II   0.150   0.130   0.128
an online learning problem, the data sequence coming in is time dependent. It is possible that the underlying classifier is also time dependent. For a scenario like this, we require our learning machine to be able to self-adjust over time given the data.
In our experiment, we build a synthetic online dataset which mimics the above scenario; that is, the underlying classification hyperplane is changed during certain time intervals. Our sequence of data is composed of 4000 data points randomly generated from a 100-dimensional isotropic Gaussian distribution N(0, I). The sequence can be partitioned into 10 sub-sequences of length 400. During each sub-sequence s, there is a base weight vector w^b_{(s)} ∈ {−1, +1}^{100}. Each point x(i) of the subsequence is labeled as y(i) = sign(w_{(i)}^⊤ x(i)), where w(i) = w^b_{(s)} + n and n is a random noise from [−0.1, +0.1]^{100}. The base weight vector w^b_{(s)} can be (I) totally randomly generated, or (II) generated based on the base weight vector w^b_{(s−1)} in the following way:

$$w^b_{(s)j} = \begin{cases} \text{Rand}\{-1, +1\} & j \in [400s - 399,\, 400s] \\ w^b_{(s-1)j} & \text{otherwise.} \end{cases} \tag{48}$$
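A generator for the case II base-weight sequence can be sketched as follows. Note one assumption: the index range printed in (48) reads [400s − 399, 400s], which indexes time steps rather than the 100 weight coordinates; since the text states that only 10% of the base weight vector changes per sub-sequence, the sketch below assumes blocks of 10 coordinates are redrawn. All names here are illustrative, not from the paper.

```python
import random

random.seed(0)
DIM, BLOCK, NSUB = 100, 10, 10   # 100-dim weights; 10% (assumed) refreshed per sub-sequence

def next_base(prev, s):
    """Case II update: redraw one block of coordinates uniformly from {-1,+1}, keep the rest."""
    w = list(prev)
    for j in range(s * BLOCK, (s + 1) * BLOCK):   # assumed reading of the index range in (48)
        w[j] = random.choice([-1, 1])
    return w

bases = [[random.choice([-1, 1]) for _ in range(DIM)]]
for s in range(1, NSUB):
    bases.append(next_base(bases[-1], s))

# how many coordinates actually changed between consecutive base vectors
changed = [sum(1 for j in range(DIM) if bases[s][j] != bases[s - 1][j])
           for s in range(1, NSUB)]
```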
Namely, only 10% of the base weight vector is changed based upon the previous base weight vector. We compare the Bayes Point Machine with a Student's t-prior (with v = 3 and v = 10) against one with a Gaussian prior. For both methods, ε = 0.01. We report (1) for each point, the number of different signs between the base weight vector and the mean of the posterior, and (2) the error rate over all the points. According to Fig. 3 and Table 1, we find that the Bayes Point Machine with the Student's t-prior adjusts itself significantly faster than with the Gaussian prior, and it also ends up with better classification results. We believe that this mostly results from its heavy-tailedness.
7 Discussion
In this paper, we investigated the convex duality of the log-partition function of the t-exponential family, and defined a new t-entropy. By using the t-divergence as a divergence measure, we proposed approximate inference on the t-exponential family by matching the expectation of the escort distributions. The results in this paper can be extended to the more generalized φ-exponential family of Naudts [15].

The t-divergence based approximate inference is only applied to a toy example here. The focus of our future work is on utilizing this approach in various graphical models. Especially, it is important to investigate a new family of graphical models based on heavy-tailed distributions for applications involving noisy data.
References
[1] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall, 1995.
[2] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[3] T. Minka. Expectation Propagation for approximate Bayesian inference. PhD thesis, MIT Media Lab, Cambridge, USA, 2001.
[4] Y. Weiss. Comparing the mean field method and belief propagation for approximate inference in MRFs. In David Saad and Manfred Opper, editors, Advanced Mean Field Methods. MIT Press, 2001.
[5] T. Minka. Divergence measures and message passing. Report 173, Microsoft Research, 2005.
[6] C. Bishop, N. Lawrence, T. Jaakkola, and M. Jordan. Approximating posterior distributions in belief networks using mixtures. In Advances in Neural Information Processing Systems 10, 1997.
[7] G. Bouchard and O. Zoeter. Split variational inference. In Proc. Intl. Conf. Machine Learning, 2009.
[8] P. Grünwald and A. Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32(4):1367-1433, 2004.
[9] C. R. Shalizi. Maximum likelihood estimation for q-exponential (Tsallis) distributions, 2007. URL http://arxiv.org/abs/math.ST/0701854.
[10] C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys., 52:479-487, 1988.
[11] J. Naudts. Deformed exponentials and logarithms in generalized thermostatistics. Physica A, 316:323-334, 2002. URL http://arxiv.org/pdf/cond-mat/0203489.
[12] T. D. Sears. Generalized Maximum Entropy, Convexity, and Machine Learning. PhD thesis, Australian National University, 2008.
[13] A. Sousa and C. Tsallis. Student's t- and r-distributions: Unified derivation from an entropic variational principle. Physica A, 236:52-57, 1994.
[14] C. Tsallis, R. S. Mendes, and A. R. Plastino. The role of constraints within generalized nonextensive statistics. Physica A: Statistical and Theoretical Physics, 261:534-554, 1998.
[15] J. Naudts. Generalized thermostatistics based on deformed exponential and logarithmic functions. Physica A, 340:32-40, 2004.
[16] J. Naudts. Generalized thermostatistics and mean-field theory. Physica A, 332:279-300, 2004.
[17] J. Naudts. Estimators, escort probabilities, and φ-exponential families in statistical physics. Journal of Inequalities in Pure and Applied Mathematics, 5(4), 2004.
[18] N. Ding and S. V. N. Vishwanathan. t-logistic regression. In Richard Zemel, John Shawe-Taylor, John Lafferty, Chris Williams, and Aron Culotta, editors, Advances in Neural Information Processing Systems 23, 2010.
[19] A. Rényi. On measures of information and entropy. In Proc. 4th Berkeley Symposium on Mathematics, Statistics and Probability, pages 547-561, 1960.
[20] J. D. Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. Annual Conf. Computational Learning Theory, volume 12, pages 125-133. ACM Press, New York, NY, 1999.
[21] I. Csiszár. Information type measures of differences of probability distributions and indirect observations. Studia Math. Hungarica, 2:299-318, 1967.
[22] K. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211-246, 2001. Special issue on Theoretical Advances in On-line Learning, Game Theory and Boosting.
[23] W. Wiegerinck and T. Heskes. Fractional belief propagation. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 438-445, 2003.
[24] M. Opper. A Bayesian approach to online learning. In On-line Learning in Neural Networks, pages 363-378. Cambridge University Press, 1998.
[25] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In UAI, 1998.
[26] J. S. Rosenthal. A First Look at Rigorous Probability Theory. World Scientific Publishing, 2006.
Richard E. Turner?
Computational and Biological Learning Lab, Department of Engineering
University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK
[email protected]
Maneesh Sahani
Gatsby Computational Neuroscience Unit, University College London
Alexandra House, 17 Queen Square, London, WC1N 3AR, UK
[email protected]
Abstract
A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although
signal processing provides algorithms for so-called amplitude- and frequencydemodulation (AFD), there are well known problems with all of the existing
methods. Motivated by the fact that AFD is ill-posed, we approach the problem
using probabilistic inference. The new approach, called probabilistic amplitude
and frequency demodulation (PAFD), models instantaneous frequency using an
auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form
of expectation propagation is used for inference. We demonstrate that although
PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings.
1
Introduction
Amplitude and frequency demodulation (AFD) is the process by which a signal (yt ) is decomposed
into the product of a slowly varying envelope or amplitude component (at ) and a quickly varying
sinusoidal carrier (cos(?t )), that is yt = at cos(?t ). In its general form this is an ill-posed problem
[1], and so algorithms must impose implicit or explicit assumptions about the form of carrier and
envelope to realise a solution. In this paper we make the standard assumption that the amplitude
variables are slowly varying positive variables, and the derivatives of the carrier phase, ?t = ?t ?
?t?1 called the instantaneous frequencies (IFs), are also slowly varying variables.
It has been argued that the subbands of speech are well characterised by such a representation [2, 3]
and so AFD has found a range of applications in audio processing including audio coding [4, 2],
speech enhancement [5] and source separation [6], and it is used in hearing devices [5]. AFD has
been used as a scientific tool to investigate the perception of sounds [7]. AFD is also of importance
in neural signal processing applications. Aggregate field measurements such as those collected at the
scalp by electroencephalography (EEG) or within tissue as local field potentials often exhibit transient sharp spectral lines at characteristic frequencies. Within each such band, both the amplitude of
the oscillation and the precise center frequencies may vary with time; and both of these phenomena
may reveal important elements of the mechanism by which the field oscillation arises.
*Richard Turner would like to thank the Laboratory for Computational Vision, New York University, New York, NY 10003-6603, USA, where he carried out this research.
Despite the fact that AFD has found a wide range of important applications, there are well-known
problems with existing AFD algorithms [8, 1, 9, 10, 5]. Because of these problems, the Hilbert
method, which recovers an amplitude from the magnitude of the analytic signal, is still considered
to be the benchmark despite a number of limitations [11, 12]. In this paper, we show examples of demodulation of synthetic, audio, and hippocampal theta rhythm signals using various AFD techniques
that highlight some of the anomalies associated with existing methods.
Motivated by the deficiencies in the existing methods this paper develops a probabilistic form of
AFD. This development begins in the next section where we reinterpret two existing probabilistic
algorithms in the context of AFD. The limitations of these methods suggest an improved model
(section 2) which we demonstrate on a range of synthetic and natural signals (sections 4 and 5).
1.1 Simple models for probabilistic amplitude and frequency demodulation
In this paper, we view demodulation as an estimation problem in which a signal is fit with a sinusoid
of time-varying amplitude and phase,
$$y_t = \Re\left(a_t \exp(i\phi_t)\right) + \epsilon_t. \tag{1}$$
The expression also includes a noise term which will be modeled as a zero-mean Gaussian with variance σ_y², that is p(ε_t) = Norm(ε_t; 0, σ_y²). We are interested in the situation where the IF of the sinusoid varies slowly around a mean value ω̄. In this case, the phase can be expressed in terms of the integrated mean frequency and a small perturbation, φ_t = ω̄t + θ_t.
Clearly, the problem of inferring a_t and φ_t from y_t is ill-posed, and results will depend on the specification of prior distributions over the amplitude and phase perturbation variables. Our goal in this paper is to specify such prior distributions directly, but this will require the development of new techniques to handle the resulting non-linearities. A simpler alternative is to generate the sinusoidal signal from a rotating two-dimensional phasor. For example, re-parametrizing the likelihood in terms of the components x_{1,t} = a_t cos(θ_t) and x_{2,t} = a_t sin(θ_t) results in a linear likelihood function

y_t = a_t (cos(ω̄t) cos(θ_t) − sin(ω̄t) sin(θ_t)) + ε_t = cos(ω̄t) x_{1,t} − sin(ω̄t) x_{2,t} + ε_t = w_t^⊤ x_t + ε_t.

Here the phasor components, which have been collected into a vector x_t^⊤ = [x_{1,t}, x_{2,t}], are multiplied by time-varying weights, w_t^⊤ = [cos(ω̄t), −sin(ω̄t)]. To complete the model, prior distributions can now be specified over x_t. One choice that results in a particularly simple inference algorithm is a Gaussian one-step auto-regressive (AR(1)) prior,

$$p(x_{k,t} \mid x_{k,t-1}) = \text{Norm}(x_{k,t};\, \lambda x_{k,t-1},\, \sigma_x^2). \tag{2}$$
When the dynamical parameter tends to unity (λ → 1) and the dynamical noise variance to zero (σ_x² → 0), the dynamics become very slow, and this slowness is inherited by the phase perturbations and amplitudes. This model is an instance of the Bayesian Spectrum Estimation (BSE) model [13] (when λ = 1), but re-interpreted in terms of amplitude- and frequency-modulated sinusoids, rather
than fixed frequency basis functions. As the model is a linear Gaussian state space model, exact
inference proceeds via the Kalman smoothing algorithm.
Before discussing the properties of BSE in the context of fitting amplitude- and frequency-modulated
sinusoids, we derive an equivalent model by returning to the likelihood function (eq. 1). Now the
full complex representation of the sinusoid is retained. As before, the real part corresponds to the
observed data, but the imaginary part is now treated explicitly as missing data,
$$y_t = \Re\left(x_{1,t}\cos(\bar\omega t) - x_{2,t}\sin(\bar\omega t) + i\,x_{1,t}\sin(\bar\omega t) + i\,x_{2,t}\cos(\bar\omega t)\right) + \epsilon_t. \tag{3}$$
The new form of the likelihood function can be expressed in vector form, y_t = [1, 0] z_t + ε_t, using a new set of variables, z_t, which are rotated versions of the original variables, z_t = R(ω̄t) x_t, where

$$R(\omega) = \begin{bmatrix} \cos(\omega) & -\sin(\omega) \\ \sin(\omega) & \cos(\omega) \end{bmatrix}. \tag{4}$$
An auto-regressive expression for the new variables, z_t, can now be found using the fact that rotation matrices commute, R(ω₁ + ω₂) = R(ω₁)R(ω₂) = R(ω₂)R(ω₁), together with the expression for the dynamics of the original variables, x_t (eq. 2),

$$z_t = \lambda R(\bar\omega)\,R(\bar\omega(t-1))\, x_{t-1} + R(\bar\omega t)\,\epsilon_t = \lambda R(\bar\omega)\, z_{t-1} + \tilde\epsilon_t \tag{5}$$

where the noise is a zero-mean Gaussian with covariance ⟨ε̃_t ε̃_t^⊤⟩ = R(ω̄t)⟨ε_t ε_t^⊤⟩R^⊤(ω̄t) = σ_x² I.
This equivalent formulation of the BSE model is called the Probabilistic Phase Vocoder (PPV) [14].
Again exact inference is possible using the Kalman smoothing algorithm.
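The algebra behind eq. (5) - the composition property of rotation matrices and the resulting AR(1) recursion for z_t - can be confirmed numerically with a few lines of Python (all numerical values are arbitrary):

```python
import math, random

def R(w):
    """2x2 rotation matrix, as in eq. (4)."""
    return [[math.cos(w), -math.sin(w)], [math.sin(w), math.cos(w)]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

random.seed(1)
wbar, lam = 0.3, 0.99

# composition / commutativity: R(w1 + w2) = R(w1) R(w2)
w1, w2 = 0.7, -1.2
lhsM, rhsM = R(w1 + w2), matmul(R(w1), R(w2))

# simulate one AR(1) step of x_t as in eq. (2), then check that z_t = R(wbar t) x_t obeys (5)
x_prev = [random.gauss(0, 1), random.gauss(0, 1)]
e = [random.gauss(0, 0.1), random.gauss(0, 0.1)]
x = [lam * x_prev[i] + e[i] for i in range(2)]
T = 7
z, z_prev = matvec(R(wbar * T), x), matvec(R(wbar * (T - 1)), x_prev)
lhs = z                                                       # z_t
rhs_pred = [lam * matvec(R(wbar), z_prev)[i] + matvec(R(wbar * T), e)[i] for i in range(2)]
```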
1.2 Problems with simple models for probabilistic amplitude and frequency demodulation
BSE-PPV is used to demodulate synthetic and natural signals in Figs. 1, 2 and 7. The decomposition
is compared to the Hilbert method. These examples immediately reveal several problems with BSE-PPV. Perhaps most unsatisfactory is the fact that the IF estimates are often ill behaved, to the extent
that they go negative, especially in regions where the amplitude of the signal is low. It is easy to
understand why this occurs by considering the prior distribution over amplitude and phase implied
by our choice of prior distribution over xt (or equivalently over zt ),
$$p(a_t, \phi_t \mid a_{t-1}, \phi_{t-1}) = \frac{a_t}{2\pi\sigma_x^2}\,\exp\left(-\frac{1}{2\sigma_x^2}\left[a_t^2 + \lambda^2 a_{t-1}^2 - 2\lambda\, a_t a_{t-1} \cos(\phi_t - \phi_{t-1} - \bar\omega)\right]\right). \tag{6}$$
Phase and amplitude are dependent in the implied distribution, which is conditionally a uniform
distribution over phase when the amplitude is zero and a strongly peaked von Mises distribution
[15] when the amplitude is large. Consequently, the model favors more highly variable IFs at low
amplitudes. In some applications this may be desirable, but for signals like sounds it presents a
problem. First it may assign substantial probability to unphysical negative IFs. Second, the same
noiseless signal at different intensities will yield different estimated IF content. Third, the complex
coupling makes it difficult to select domain-appropriate time-scale parameters. Consideration of
IF reveals yet another problem. When the phase-perturbations vary slowly (λ → 1), there is no
correlation between successive IFs (⟨ω_t ω_{t−1}⟩ − ⟨ω_t⟩⟨ω_{t−1}⟩ ≈ 0). One of the main goals of the
model was to capture correlated IFs through time, and the solution is to move to priors with higher
order temporal dependencies.
In the next section we will propose a new model for PAFD which addresses these problems, retaining
the same likelihood function, but modifying the prior to include independent distributions over the
phase and amplitude variables.
Figure 1: Comparison of AFD methods on a sinusoidally amplitude- and frequency-modulated sinusoid in broad-band noise. Estimated values are shown in red. The gray areas show the region where the true amplitude falls below the noise floor (a < σ_y) and the estimates become less accurate. See section 4 for details.
2 PAFD using Auto-regressive and generalized von Mises distributions
We have argued that the amplitude and phase variables in a model for PAFD should be independently parametrized, but that this introduces difficulties as the likelihood is highly non-linear in
these variables. This section and the next develop the tools necessary to handle this non-linearity.
Figure 2: AFD of a starling song. Top: The original waveform with estimated envelopes, shifted
apart vertically to aid visualization. The light gray bar indicates the problematic low amplitude
region. Bottom panels: IF estimates superposed onto the spectrum of the signal. PAFD tracks the
FM/AM well, but the other methods have artifacts.
An important initial consideration is whether to use a representation for phase which is wrapped, φ ∈ (−π, π], or unwrapped, φ ∈ ℝ. Although the latter has the advantage of implying simpler dynamics, it leads to a potential infinity of local modes at multiples of 2π, making inference extremely difficult. It is therefore necessary to work with wrapped phases, and a sensible starting point for a prior is thus the von Mises distribution,

$$p(\phi \mid k, \mu) = \frac{1}{2\pi I_0(k)}\,\exp\left(k\cos(\phi - \mu)\right) = \text{vonMises}(\phi;\, k, \mu). \tag{7}$$
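The normalizer in (7) is easy to confirm numerically: integrating exp(k cos(φ − μ)) over one period should give 2π I₀(k), with I₀ computed from its standard power series. A minimal check (arbitrary k and μ):

```python
import math

def I0(k, terms=60):
    """Modified Bessel function of the first kind, order 0, via its power series."""
    return sum((k / 2.0) ** (2 * m) / math.factorial(m) ** 2 for m in range(terms))

def vonmises_unnorm(phi, k, mu):
    return math.exp(k * math.cos(phi - mu))

k, mu = 2.5, 0.7
n = 20001
h = 2 * math.pi / (n - 1)
total = 0.0
for i in range(n):
    phi = -math.pi + i * h
    w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid rule; spectrally accurate here
    total += w * h * vonmises_unnorm(phi, k, mu)
```

Because the integrand is smooth and periodic, the trapezoid rule converges extremely fast, so a modest grid already matches 2πI₀(k) to near machine precision.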
The two parameters, the concentration (k) and the mean (μ), determine the circular variance and mean of the distribution respectively. The normalizing constant is given by a modified Bessel function of the first kind, I₀(k). Crucially for our purposes, the von Mises distribution can be obtained by taking a bivariate isotropic Gaussian with an arbitrary mean, and conditioning onto the unit-circle
(this connects with BSE-PPV, see eq. 6). The Generalized von Mises distribution is formed in an
identical way when the bivariate Gaussian is anisotropic [16]. These constructions suggest a simple
extension to time-series data by conditioning a temporal bivariate Gaussian time-series onto the unit
circle at all sample times. For example, when two independent Gaussian AR(2) distributions are
used to construct the prior we have,
$$p(x_{1:2,1:T}) \propto \prod_{t=1}^{T} \mathbb{1}(x_{1,t}^2 + x_{2,t}^2 = 1) \prod_{m=1}^{2} \text{Norm}(x_{m,t};\, \lambda_1 x_{m,t-1} + \lambda_2 x_{m,t-2},\, \sigma_x^2). \qquad (8)$$
where $\mathbb{1}(x_{1,t}^2 + x_{2,t}^2 = 1)$ is an indicator function representing the unit circle constraint. Upon a change of variables $x_{1,t} = \cos(\phi_t)$, $x_{2,t} = \sin(\phi_t)$ this yields,
$$p(\phi_{1:T} \mid k_1, k_2) \propto \prod_{t=1}^{T} \exp\left(k_1 \cos(\phi_t - \phi_{t-1}) + k_2 \cos(\phi_t - \phi_{t-2})\right), \qquad (9)$$
where $k_1 = \lambda_1(1 - \lambda_2)/\sigma_x^2$ and $k_2 = \lambda_2/\sigma_x^2$. One of the attractive features of this prior is that when
it is combined with the likelihood (eq. 1) the resulting posterior distribution over phase variables
is a temporal version of the Generalized von Mises distribution. That is, it can be expressed as a
bivariate anisotropic Gaussian, which is constrained to the unit circle. It is this representation which
will prove essential for inference.
Having established a candidate prior over phases, we turn to the amplitude variables. With one eye upon the fact that the prior over phases can be interpreted as a product of a Gaussian and a constraint, we employ a prior of a similar form for the amplitude variables: a truncated Gaussian AR($\tau$) process,
$$p(a_{1:T} \mid \lambda_{1:\tau}, \sigma^2) \propto \prod_{t=1}^{T} \mathbb{1}(a_t \geq 0)\, \text{Norm}\!\left(a_t;\, \sum_{t'=1}^{\tau} \lambda_{t'} a_{t-t'},\, \sigma^2\right). \qquad (10)$$
The model formed from equations 1, 9 and 10 will be termed Probabilistic Amplitude and Frequency
Demodulation. PAFD is closely related to the BSE-PPV model [13, 14]. Moreover, when the
phase variables are drawn from a uniform distribution (k1 = k2 = 0) it reduces to the convex
amplitude demodulation model [17], which itself is a form of probabilistic amplitude demodulation
[18, 19, 20]. The AR prior over phases has also been used in a regression setting [21].
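The generative structure of equations (9) and (10) can be sketched numerically. The snippet below forward-samples a phase track by treating each factor of Eq. (9) as the conditional $p(\phi_t \mid \phi_{t-1}, \phi_{t-2})$ (an approximation: the exact conditionals of the joint also involve factors from future time steps), draws amplitudes from a truncated Gaussian random walk in the spirit of Eq. (10) with $\tau = 1$, and combines them into a modulated signal. All parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_factor(k1, k2, phi1, phi2, n_grid=2048):
    # Draw phi from p(phi) proportional to exp(k1*cos(phi-phi1) + k2*cos(phi-phi2)),
    # the local factor of Eq. (9), by inverse-CDF sampling on a dense grid.
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    logp = k1 * np.cos(grid - phi1) + k2 * np.cos(grid - phi2)
    p = np.exp(logp - logp.max())
    cdf = np.cumsum(p)
    return grid[np.searchsorted(cdf / cdf[-1], rng.uniform())]

# Forward-sample a phase track (approximate, as noted above).
T, k1, k2 = 500, 50.0, 10.0           # illustrative concentrations
phi = np.zeros(T)
for t in range(2, T):
    phi[t] = sample_factor(k1, k2, phi[t - 1], phi[t - 2])

# Amplitudes from a truncated Gaussian AR(1) walk, enforcing 1(a_t >= 0).
a = np.ones(T)
for t in range(1, T):
    draw = rng.normal(0.95 * a[t - 1], 0.05)
    while draw < 0:                   # rejection step for the positivity constraint
        draw = rng.normal(0.95 * a[t - 1], 0.05)
    a[t] = draw

y = a * np.cos(phi)                   # noiseless modulated signal
print(y.shape)
```

The grid-based inverse-CDF step is a generic way to sample any one-dimensional (generalized) von Mises factor without specialized samplers.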
3 Inference via expectation propagation
The PAFD model introduced in the last section contains three separate types of non-linearity: the
multiplicative interaction in the likelihood, the unit circle constraint, and the positivity constraint. Of
these, it is the circular constraint which is most challenging as the development of general purpose
machine learning methods for handling hard, non-convex constraints is an open research problem.
Following [22], we propose a novel method which uses expectation propagation (EP) [23] to replace
the hard constraints with soft, local, Gaussian approximations which are iteratively refined.
In order to apply EP, the model is first rewritten into a simpler form. Making use of the fact that an AR($\tau$) process can be rewritten as an equivalent multi-dimensional AR(1) model with $\tau$ states, we concatenate the latent variables into an augmented state vector, $s_t^{\mathsf{T}} = [a_t, a_{t-1}, \ldots, a_{t-\tau+1}, x_{1,t}, x_{2,t}, x_{1,t-1}, x_{2,t-1}]$, and express the model as a product of clique potentials in terms of this variable,
$$p(y_{1:T}, s_{1:T}) \propto \prod_{t=1}^{T} \psi_t(s_t, s_{t-1})\, \phi_t(s_{1,t}, s_{1+\tau,t}, s_{2+\tau,t}), \quad \text{where} \quad \psi_t(s_t, s_{t-1}) = \text{Norm}(s_t; \Lambda_s s_{t-1}, \Sigma_s),$$
$$\phi_t(a_t, x_{1,t}, x_{2,t}) = \text{Norm}\!\left(y_t;\, a_t(\cos(\bar{\omega} t) x_{1,t} - \sin(\bar{\omega} t) x_{2,t}),\, \sigma_y^2\right) \mathbb{1}(a_t \geq 0)\, \mathbb{1}(x_{1,t}^2 + x_{2,t}^2 = 1).$$
(See the supplementary material for details of the dynamical matrices $\Lambda_s$ and $\Sigma_s$.) In this new form the constraints have been incorporated with the non-linear likelihood into the potential $\phi_t$, leaving a standard Gaussian dynamical potential $\psi_t(s_t, s_{t-1})$. Using EP we approximate the posterior
distribution using a product of forward, backward and constrained-likelihood messages [24],
$$q(s_{1:T}) = \prod_{t=1}^{T} \alpha_t(s_t)\, \beta_t(s_t)\, \tilde{\phi}_t(a_t, x_{1,t}, x_{2,t}) = \prod_{t=1}^{T} q_t(s_t). \qquad (11)$$
The messages should be interpreted as follows: $\alpha_t(s_t)$ is the effect of $\psi_t(s_{t-1}, s_t)$ and $q(s_{t-1})$ on the belief $q(s_t)$, whilst $\beta_t(s_t)$ is the effect of $\psi_{t+1}(s_t, s_{t+1})$ and $q(s_{t+1})$ on the belief $q(s_t)$. Finally, $\tilde{\phi}_t(a_t, x_{1,t}, x_{2,t})$ is the effect of the likelihood and the constraints on the belief $q(s_t)$. All
of these messages will be un-normalized Gaussians. The updates for the messages can be found by
removing the messages from q(s1:T ) that correspond to the effect of a particular potential. These
messages are replaced by the corresponding potential. The deleted messages are then updated by
moment matching the two distributions. The updates for the forward and backward messages are
a straightforward application of EP and result in updates that are nearly identical to those used for
Kalman smoothing. The updates for the constrained likelihood potential are more complicated:
$$\text{update } \tilde{\phi}_t \text{ such that } q(s_t) \overset{\text{MOM}}{=} \hat{p}(s_t) \propto \alpha_t(s_t)\, \beta_t(s_t)\, \phi_t(a_t, x_{1,t}, x_{2,t}). \qquad (12)$$
The difficulty is the moment computation, which we evaluate in two stages. First, we integrate over the amplitude variable, which involves computing the moments of a truncated Gaussian and is therefore computationally efficient. Second, we numerically integrate over the one-dimensional phase variable. For the details we again refer the reader to the supplementary material.
A standard forward-backward message update schedule was used. Adaptive damping improved the numerical stability of the algorithm substantially. The computational complexity of PAFD is $O(T(N + \tau^3))$, where $N$ is the number of points used to compute the integral over the phase variable. For the experiments we used a second-order process over the amplitude variables ($\tau = 2$) and $N = 1000$ integration points. In this case, the 16–32 forward-backward passes required for convergence took one minute on a modern laptop computer for signals of length $T = 1000$.
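The truncated-Gaussian moment computation at the heart of the EP update can be written in closed form. The sketch below gives the mean and variance of a univariate $\text{Norm}(m, s^2)$ restricted to $[0, \infty)$, using the standard lower-truncated-normal formulas; it is a minimal stand-in for the (multivariate, per-timestep) computation the paper performs, not the paper's code.

```python
import math

def truncnorm_moments(m, s):
    # Mean and variance of N(m, s^2) truncated to [0, inf) -- the inner
    # computation needed when moment-matching the amplitude in update (12).
    beta = -m / s                                        # standardized truncation point
    phi = math.exp(-0.5 * beta * beta) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2)))   # P(X >= 0)
    lam = phi / Phi                                      # inverse Mills ratio
    mean = m + s * lam
    var = s * s * (1.0 + beta * lam - lam * lam)
    return mean, var

print(truncnorm_moments(0.0, 1.0))
```

For $m = 0$, $s = 1$ this reduces to the half-normal moments, a quick sanity check on the formulas.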
4 Application to synthetic signals
One of the main challenges posed by the evaluation of AFD algorithms is that the ground truth for real-world signals is unknown. This means that a quantitative comparison of different schemes must take an indirect approach. The first set of evaluations presented here uses synthetic signals, for which the ground truth is known. In particular, we consider amplitude- and frequency-modulated sinusoids, $y_t = a_t \cos(\theta_t)$ where $a_t = 1 + \sin(2\pi f_a t)$ and $\frac{1}{2\pi}\frac{d\theta}{dt} = \bar{f} + \Delta_f \sin(2\pi f_f t)$, which have been corrupted by Gaussian noise. Fig. 1 compares AFD of one such signal ($\bar{f} = 50$ Hz, $f_a = 8$ Hz, $f_f = 5$ Hz and $\Delta_f = 25$ Hz) by the Hilbert, BSE-PPV and PAFD methods. Fig. 3 summarizes the results at different noise levels in terms of the signal to noise ratio (SNR) of the estimated variables and the reconstructed signal, i.e. $\text{SNR}(a) = 10 \log_{10} \sum_{t=1}^{T} a_t^2 - 10 \log_{10} \sum_{t=1}^{T} (a_t - \hat{a}_t)^2$. PAFD consistently outperforms the other methods by this measure. Furthermore, Fig. 4 demonstrates that PAFD can be used to accurately reconstruct missing sections of this signal, outperforming BSE-PPV.
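The synthetic test signal is easy to reproduce. The sketch below generates the AM/FM sinusoid with the modulation parameters quoted for Fig. 1 (the sample rate and noise level are assumptions, not given in the text) and implements the SNR metric used in Fig. 3.

```python
import numpy as np

fs = 8000.0                                       # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
f_bar, f_a, f_f, delta_f = 50.0, 8.0, 5.0, 25.0   # parameters quoted for Fig. 1

a = 1.0 + np.sin(2 * np.pi * f_a * t)                        # amplitude modulation a_t
inst_freq = f_bar + delta_f * np.sin(2 * np.pi * f_f * t)    # instantaneous frequency in Hz
theta = 2 * np.pi * np.cumsum(inst_freq) / fs                # integrate the IF to get the phase
clean = a * np.cos(theta)

rng = np.random.default_rng(1)
y = clean + 0.1 * rng.standard_normal(t.size)                # additive Gaussian noise (level assumed)

def snr_db(truth, estimate):
    # SNR = 10 log10 sum_t truth_t^2 - 10 log10 sum_t (truth_t - estimate_t)^2
    err = truth - estimate
    return 10 * np.log10(np.sum(truth ** 2)) - 10 * np.log10(np.sum(err ** 2))

print(round(snr_db(clean, y), 1))
```

The same `snr_db` helper applies unchanged to envelope and IF estimates, which is how the per-variable panels of Fig. 3 are scored.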
[Figure 3 panels (plot data omitted): SNR of envelope, IF, and denoised-signal estimates (dB) vs. input SNR (dB) for the Hilbert, BSE-PPV and PAFD methods.]
Figure 3: Noisy synthetic data. SNR of estimated variables as a function of the SNR of the signal.
Envelopes (left), IFs (center) and denoised signal (right). Solid markers denote examples in Fig. 1.
5 Application to real world signals
Having validated PAFD on simple synthetic examples, we now consider real-world signals. Birdsong is used as a prototypical signal as it has strong frequency-modulation content. We isolate a
300ms component of a starling song using a bandpass filter and apply AFD. Fig. 2 shows that PAFD
can track the underlying frequency modulation even though there is noise in the signal which causes
the other methods to fail. This example forms the basis of two important robustness and consistency
tests. In the first, spectrally matched noise is added to the signal and the IFs and amplitudes are reestimated and compared to those derived from the clean signal. Fig. 5 shows that the PAFD method
is considerably more robust to this manipulation than both the Hilbert and BSE-PPV methods. In the
second test, regions of the signal are removed and the model's predictions for the missing regions
are compared to the estimates derived from the clean signal (see fig. 6). Once again PAFD is more
accurate. As a final test of PAFD we consider the important neuroscientific task of estimating the
phase, equivalently the IF, of theta oscillations in an EEG signal. The EEG signal typically contains
broadband noise and so a conventional analysis applies a band-pass filter before using the Hilbert
method to estimate the IF. Although this improves the estimates markedly, the noise component
cannot be completely eradicated which leads to artifacts in the IF estimates (see Fig. 7). In contrast
[Figure 4 panels (plot data omitted): SNR (dB) vs. time /ms for PAFD and BSE-PPV (top); two reconstruction examples with estimated envelopes and IFs over time /s (bottom).]
Figure 4: Missing synthetic data experiments. TOP: SNR of estimated variables as a function of gap
duration in the input signal. Envelopes (left), IFs (center) and denoised signal (right). Solid markers
indicate the examples shown in the bottom rows of the figure. BOTTOM: Two examples of PAFD
reconstruction. Light gray regions indicate missing sections of the signal.
[Figure 5 panels (plot data omitted): SNR of envelope and IF estimates (dB) vs. input SNR (dB) for the Hilbert, BSE-PPV and PAFD methods.]
Figure 5: Noisy bird song experiments. SNR of estimated variables as compared to those estimated
from the clean signal, as a function of the SNR of the input signal. Envelopes (left), IFs (right).
PAFD returns sensible estimates from both the filtered and original signal. Critically, both estimates
are similar to one another suggesting the new estimation scheme is reliable.
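For reference, the conventional Hilbert baseline discussed above amounts to forming the discrete analytic signal and reading off its magnitude and phase derivative. The sketch below implements this with an FFT; the demo tone, its sample rate, and the absence of a band-pass filtering step are simplifying assumptions (a real EEG pipeline would filter first, as the text notes).

```python
import numpy as np

def analytic_signal(y):
    # Discrete analytic signal via the FFT: zero out negative frequencies and
    # double the positive ones (the classical Hilbert-transform construction).
    N = y.size
    Y = np.fft.fft(y)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(Y * h)

fs = 1000.0                                    # assumed sample rate for the demo tone
t = np.arange(0, 1.0, 1.0 / fs)
y = (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 80 * t)

z = analytic_signal(y)
envelope = np.abs(z)                           # amplitude estimate
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # IF estimate in Hz
```

On this clean, band-limited, periodic tone the FFT construction is exact, which is precisely the regime where the Hilbert method works; the failures described in the text arise when noise pushes the amplitude toward zero.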
6 Conclusion
Amplitude and frequency demodulation is a difficult, ill-posed estimation problem. We have developed a new inferential solution called probabilistic amplitude and frequency demodulation which
employs a von Mises time-series prior over phase, constructed by conditioning a bivariate Gaussian
auto-regressive distribution onto the unit circle. The construction naturally leads to an expectation
propagation inference scheme which approximates the hard constraints using soft local Gaussians.
[Figure 6 panels (plot data omitted): SNR (dB) vs. gap duration /ms for PAFD and BSE-PPV (top); reconstruction examples with estimated envelopes (PAFD, HE, BSE-PPV) and IFs over time /s (bottom).]
Figure 6: Missing natural data experiments. TOP: SNR of estimated variables as a function of gap
duration in the input signal. Envelopes (left), IFs (center) and denoised signal (right). Solid markers
indicate the examples shown in the bottom rows of the figure. BOTTOM: Two examples of PAFD
reconstruction. Light gray regions indicate missing sections of the signal.
[Figure 7 panels (plot data omitted): envelopes and IF estimates (frequency /Hz) vs. time /s for raw and band-pass filtered EEG.]
Figure 7: Comparison of AFD methods on EEG data. The left hand side shows estimates derived from the raw EEG signal, whilst the right shows estimates derived from a band-pass filtered version. The gray areas show the region where the true amplitude falls below the noise floor ($a_t < \sigma_y$), where conventional methods fail.
We have demonstrated the utility of the new method on synthetic and natural signals, where it outperformed conventional approaches. Future research will consider extensions of the model to multiple
sinusoids, and learning the model parameters so that the algorithm can adapt to novel signals.
Acknowledgments
Richard Turner was funded by the EPSRC, and Maneesh Sahani by the Gatsby Charitable Foundation.
References
[1] P. J. Loughlin and B. Tacer. On the amplitude- and frequency-modulation decomposition of signals. The Journal of the Acoustical Society of America, 100(3):1594–1601, 1996.
[2] J. L. Flanagan. Parametric coding of speech spectra. The Journal of the Acoustical Society of America, 68:412–419, 1980.
[3] P. Clark and L. E. Atlas. Time-frequency coherent modulation filtering of nonstationary signals. Signal Processing, IEEE Transactions on, 57(11):4323–4332, Nov. 2009.
[4] J. L. Flanagan and R. M. Golden. Phase vocoder. Bell System Technical Journal, pages 1493–1509, 1966.
[5] S. M. Schimmel. Theory of Modulation Frequency Analysis and Modulation Filtering, with Applications to Hearing Devices. PhD thesis, University of Washington, 2007.
[6] L. E. Atlas and C. Janssen. Coherent modulation spectral filtering for single-channel music source separation. In Proceedings of the IEEE Conference on Acoustics Speech and Signal Processing, 2005.
[7] Z. M. Smith, B. Delgutte, and A. J. Oxenham. Chimaeric sounds reveal dichotomies in auditory perception. Nature, 416(6876):87–90, 2002.
[8] J. Dugundji. Envelopes and pre-envelopes of real waveforms. IEEE Transactions on Information Theory, 4:53–57, 1958.
[9] O. Ghitza. On the upper cutoff frequency of the auditory critical-band envelope detectors in the context of speech perception. The Journal of the Acoustical Society of America, 110(3):1628–1640, 2001.
[10] F. G. Zeng, K. Nie, S. Liu, G. Stickney, E. Del Rio, Y. Y. Kong, and H. Chen. On the dichotomy in auditory perception between temporal envelope and fine structure cues (L). The Journal of the Acoustical Society of America, 116(3):1351–1354, 2004.
[11] D. Vakman. On the analytic signal, the Teager-Kaiser energy algorithm, and other methods for defining amplitude and frequency. IEEE Journal of Signal Processing, 44(4):791–797, 1996.
[12] G. Girolami and D. Vakman. Instantaneous frequency estimation and measurement: a quasi-local method. Measurement Science and Technology, 13(6):909–917, 2002.
[13] Y. Qi, T. P. Minka, and R. W. Picard. Bayesian spectrum estimation of unevenly sampled nonstationary data. In International Conference on Acoustics, Speech, and Signal Processing, 2002.
[14] A. T. Cemgil and S. J. Godsill. Probabilistic phase vocoder and its application to interpolation of missing values in audio signals. In 13th European Signal Processing Conference, Antalya, Turkey, 2005.
[15] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[16] R. Gatto and S. R. Jammalamadaka. The generalized von Mises distribution. Statistical Methodology, 4:341–353, 2007.
[17] G. Sell and M. Slaney. Solving demodulation as an optimization problem. IEEE Transactions on Audio, Speech and Language Processing, 18:2051–2066, November 2010.
[18] R. E. Turner and M. Sahani. Probabilistic amplitude demodulation. In Independent Component Analysis and Signal Separation, pages 544–551, 2007.
[19] R. E. Turner and M. Sahani. Statistical inference for single- and multi-band probabilistic amplitude demodulation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 5466–5469, 2010.
[20] R. E. Turner and M. Sahani. Demodulation as probabilistic inference. IEEE Transactions on Audio, Speech and Language Processing, 2011.
[21] J. Breckling. The Analysis of Directional Time Series: Application to Wind Speed and Direction. Springer-Verlag, 1989.
[22] J. P. Cunningham. Algorithms for Understanding Motor Cortical Processing and Neural Prosthetic Systems. PhD thesis, Stanford University, Department of Electrical Engineering, Stanford, California, USA, 2009.
[23] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT Media Lab, 2001.
[24] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216–233. Morgan Kaufmann Publishers, 2002.
Message-Passing for Approximate MAP Inference
with Latent Variables
Jiarong Jiang
Dept. of Computer Science
University of Maryland, CP
[email protected]
Piyush Rai
School of Computing
University of Utah
[email protected]
Hal Daumé III
Dept. of Computer Science
University of Maryland, CP
[email protected]
Abstract
We consider a general inference setting for discrete probabilistic graphical models
where we seek maximum a posteriori (MAP) estimates for a subset of the random
variables (max nodes), marginalizing over the rest (sum nodes). We present a hybrid message-passing algorithm to accomplish this. The hybrid algorithm passes
a mix of sum and max messages depending on the type of source node (sum or
max). We derive our algorithm by showing that it falls out as the solution of a particular relaxation of a variational framework. We further show that the Expectation
Maximization algorithm can be seen as an approximation to our algorithm. Experimental results on synthetic and real-world datasets, against several baselines,
demonstrate the efficacy of our proposed algorithm.
1 Introduction
Probabilistic graphical models provide a compact and principled representation for capturing complex statistical dependencies among a set of random variables. In this paper, we consider the general
maximum a posteriori (MAP) problem in which we want to maximize over a subset of the variables
(max nodes, denoted X), marginalizing the rest (sum nodes, denoted Z). This problem is termed
the Marginal-MAP problem. A typical example is the minimum Bayes risk (MBR) problem [1], where the goal is to find an assignment $\hat{x}$ which optimizes a loss $\ell(\hat{x}, x)$ with regard to some usually unknown truth $x$. Since $x$ is latent, we need to marginalize it before optimizing with respect to $\hat{x}$.
Although the specific problems of estimating marginals and estimating MAP individually have been
studied extensively [2, 3, 4], similar developments for the more general problem of simultaneous
marginal and MAP estimation are lacking. More recently, [5] proposed a method based on optimizing a variational objective on specific graph structures; it is a development simultaneous with the method
we propose in this paper (please refer to the supplementary material for further details and other
related work).
This problem is fundamentally difficult. As mentioned in [6, 7], even for a tree-structured model,
we cannot solve the Marginal-MAP problem exactly in poly-time unless P = NP. Moreover, it has been shown [8] that even if a joint distribution $p(x, z)$ belongs to the exponential family, the corresponding marginal distribution $p(x) = \sum_z p(x, z)$ is in general not in the exponential family (with a
very short list of exceptions, such as Gaussian random fields). This means that we cannot directly
apply algorithms for MAP inference to our task. Motivated by this problem, we propose a hybrid
message passing algorithm which is both intuitive and justified according to variational principles.
Our hybrid message passing algorithm uses a mix of sum and max messages with the message type
depending on the source node type.
Experimental results on chain and grid structured synthetic data sets and another real-world dataset
show that our hybrid message-passing algorithm works favorably compared to standard sum-product, standard max-product, or the Expectation-Maximization algorithm which iteratively provides MAP and marginal estimates. Our estimates can be further improved by a few steps of local
search [6]. Therefore, using the solution found by our hybrid algorithm to initialize some local
search algorithms largely improves the performance on both accuracy and convergence speed, compared to the greedy stochastic search method described in [6]. We also give an example in Sec. 5
of how our algorithm can also be used to solve other practical problems which can be cast under the
Marginal-MAP framework. In particular, the Minimum Bayes Risk [9] problem for decomposable
loss-functions can be readily solved under this framework.
2 Problem Setting
In our setting, the nodes in a graphical model with discrete random variables are divided into two
sets: max and sum nodes. We denote a graph $G = (V, E)$, $V = X \cup Z$, where $X$ is the set of nodes for which we want to compute the MAP assignment (max nodes), and $Z$ is the set of nodes for which we need the marginals (sum nodes). Let $x = \{x_1, \ldots, x_m\}$ ($x_s \in X_s$) and $z = \{z_1, \ldots, z_n\}$ ($z_s \in Z_s$) be the random variables associated with the nodes in $X$ and $Z$ respectively. The exponential family distribution $p$ over these random variables is defined as follows:
$$p_\theta(x, z) = \exp\left[\langle \theta, \phi(x, z) \rangle - A(\theta)\right]$$
where $\phi(x, z)$ is the sufficient statistics of the enumeration of all node assignments, and $\theta$ is the vector of canonical or exponential parameters. $A(\theta) = \log \sum_{x,z} \exp[\langle \theta, \phi(x, z) \rangle]$ is the log-partition function. In this paper, we consider only pairwise node interactions and use the standard overcomplete representation of the sufficient statistics [10] (defined by indicator function $\mathbb{I}$ later).
The general MAP problem can be formalized as the following maximization problem:
$$x^* = \arg\max_x \sum_z p_\theta(x, z) \qquad (1)$$
with corresponding marginal probabilities of the $z$ nodes, given $x^*$:
$$p(z_s \mid x^*) = \sum_{z \setminus \{z_s\}} p(z \mid x^*), \quad s = 1, \ldots, n \qquad (2)$$
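On a tiny model, Eqs. (1) and (2) can be evaluated by brute-force enumeration, which makes a useful reference point for the approximate algorithms that follow. The model below (two binary max nodes joined through one binary sum node, with arbitrary illustrative potentials) is our own toy example, not one from the paper.

```python
import itertools
import numpy as np

# Toy pairwise model: chain x1 - z - x2, where x1 and x2 are max nodes
# and z is a sum node.  All variables are binary; potentials are arbitrary.
theta_x1z = np.array([[1.2, 0.1], [0.1, 1.2]])
theta_zx2 = np.array([[0.3, 1.0], [0.9, 0.3]])

def joint(x1, z, x2):
    # Unnormalized p(x1, z, x2) under the pairwise exponential-family form.
    return np.exp(theta_x1z[x1, z] + theta_zx2[z, x2])

# Marginal-MAP, Eq. (1): x* = argmax_x sum_z p(x, z)
best, best_score = None, -1.0
for x1, x2 in itertools.product(range(2), repeat=2):
    score = sum(joint(x1, z, x2) for z in range(2))
    if score > best_score:
        best, best_score = (x1, x2), score

# Sum-node marginal given x*, Eq. (2): p(z | x*)
pz = np.array([joint(best[0], z, best[1]) for z in range(2)])
pz /= pz.sum()
print(best, pz)
```

Enumeration costs time exponential in the number of nodes, which is exactly why the message-passing approximations described next are needed.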
Before proceeding, we introduce some notations for clarity of exposition: Subscripts s, u, t, etc.
denote nodes in the graphical model. $z_s$, $x_s$ are sum and max random variables respectively, associated with some node $s$. $v_s$ can be either a sum ($z_s$) or a max ($x_s$) random variable, associated with
some node s. N (s) is the set of neighbors of node s. Xs , Zs , Vs are the state spaces from which xs ,
zs , vs take values.
2.1 Message Passing Algorithms
The sum-product and max-product algorithms are standard message-passing algorithms for inferring
marginal and MAP estimates respectively in probabilistic graphical models. Their idea is to store
a belief state associated with each node, and iteratively passing messages between adjacent nodes,
which are used to update the belief states. It is known [11] that these algorithms are guaranteed to
converge to the exact solution on trees or polytrees. On loopy graphs, they are no longer guaranteed
to converge, but they can still provide good estimates when converged [12].
In the standard sum product algorithm, the message Mts passed from node s to one of its neighbors
t is as follows:
$$M_{ts}(v_s) \leftarrow \kappa \sum_{v_t' \in V_t} \left\{ \exp[\theta_{st}(v_s, v_t') + \theta_t(v_t')] \prod_{u \in N(t) \setminus s} M_{ut}(v_t') \right\} \qquad (3)$$
where $\kappa$ is a normalization constant. When the messages converge, i.e. $\{M_{ts}, M_{st}\}$ does not change for every pair of nodes $s$ and $t$, the belief (pseudomarginal distribution) for the node $s$ is given by $\mu_s(v_s) = \kappa \exp\{\theta_s(v_s)\} \prod_{t \in N(s)} M_{ts}(v_s)$. The outgoing messages for the max-product algorithm have the same form but with a maximization instead of a summation in Eq. (3). After convergence, the MAP assignment for each node is the assignment with the highest max-marginal probability.
On loopy graphs, tree-reweighted sum and max product [13, 14] can help find upper bounds for the marginal or MAP problem. They decompose the loopy graph into several spanning trees and reweight the messages by the edge appearance probability.
2.2 Local Search Algorithm
Eq (1) can be viewed as doing a variable elimination for z nodes first, followed by a maximization
over x. Its maximization step may be performed using heuristic search techniques [7, 6]. Eq (2) can
be computed by running standard sum-product over $z$, given the MAP assignment $x^*$. In [6], the
assignment for the MAP nodes are found by greedily searching the best neighboring assignments
which only differs on one node. However, the hybrid algorithm we propose allows simultaneously
approximating both Eq (1) and Eq (2).
3 HYBRID MESSAGE PASSING
In our setting, we wish to compute MAP estimates for one set of nodes and marginals for the rest.
One possible approach is to run standard sum/max product algorithms over the graph, and find the
most-likely assignment for each max node according to the maximum of sum or max marginals¹. These naïve approaches have their own shortcomings; for example, although using standard max-product may perform reasonably when there are many max nodes, it inevitably ignores the effect of
sum nodes which should ideally be summed over. This is analogous to the difference between EM
for Gaussian mixture models and K-means. (See Sec. 6)
3.1 ALGORITHM
We now present a hybrid message-passing algorithm which passes sum-style or max-style messages
based on the type of nodes from which the message originates. In the hybrid message-passing
algorithm, a sum node sends sum messages to its neighbors and a max node sends max messages.
The type of message passed depends on the type of source node, not the destination node.
More specifically, the outgoing messages from a source node are as follows:
• Message from sum node $t$ to any neighbor $s$:

$$M_{ts}(v_s) \leftarrow \kappa_1 \sum_{z_t \in Z_t} \exp\big[\theta_{st}(v_s, z_t) + \theta_t(z_t)\big] \prod_{u \in N(t)\setminus s} M_{ut}(z_t) \qquad (4)$$

• Message from max node $t$ to any neighbor $s$:

$$M_{ts}(v_s) \leftarrow \kappa_2 \max_{x_t' \in X_t} \exp\big[\theta_{st}(v_s, x_t') + \theta_t(x_t')\big] \prod_{u \in N(t)\setminus s} M_{ut}(x_t') \qquad (5)$$
where $\kappa_1, \kappa_2$ are normalization constants. Algorithm 1 shows the hybrid message-passing procedure.
Algorithm 1 Hybrid Message-Passing Algorithm
Inputs: Graph $G = (V, E)$, $V = X \cup Z$, potentials $\theta_s$, $s \in V$ and $\theta_{st}$, $(s, t) \in E$.
1. Initialize the messages to some arbitrary value.
2. For each node $s \in V$ in $G$, do the following until the messages converge (or the maximum number of iterations is reached):
   • If $s \in X$, update messages by Eq. (5).
   • If $s \in Z$, update messages by Eq. (4).
3. Compute the local belief for each node $s$: $\beta_s(v_s) = \kappa \exp\{\theta_s(v_s)\} \prod_{t \in N(s)} M_{ts}(v_s)$.
4. For all $x_s \in X$, return $\arg\max_{x_s \in X_s} \beta_s(x_s)$.
5. For all $z_s \in Z$, return $\beta_s(z_s)$.
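As a sketch of Algorithm 1, the snippet below mixes the two update rules on a chain: each node sends a max-style (Eq. 5) or sum-style (Eq. 4) message depending on its own type, and the final beliefs yield MAP assignments for the max nodes and pseudomarginals for the sum nodes. Everything here (chain topology, array layout, names) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def hybrid_chain(theta_node, theta_edge, is_max, iters=50):
    """Hybrid message passing (Algorithm 1) on a chain.
    is_max[t] == True  -> node t sends max-style messages (Eq. 5);
    is_max[t] == False -> node t sends sum-style messages (Eq. 4).
    Returns (MAP assignments for max nodes, pseudomarginals for sum nodes)."""
    n, k = len(theta_node), len(theta_node[0])
    msgs = {(t, s): np.full(k, 1.0 / k)
            for t in range(n) for s in (t - 1, t + 1) if 0 <= s < n}
    for _ in range(iters):
        for (t, s) in msgs:
            edge = theta_edge[min(s, t)]
            pair = edge if t < s else edge.T              # reorient to [v_t, v_s]
            incoming = np.ones(k)
            for u in (t - 1, t + 1):
                if 0 <= u < n and u != s:
                    incoming *= msgs[(u, t)]
            terms = np.exp(pair + theta_node[t][:, None]) * incoming[:, None]
            m = terms.max(0) if is_max[t] else terms.sum(0)   # Eq. 5 vs. Eq. 4
            msgs[(t, s)] = m / m.sum()
    map_out, marg_out = {}, {}
    for s in range(n):
        b = np.exp(theta_node[s])
        for t in (s - 1, s + 1):
            if 0 <= t < n:
                b *= msgs[(t, s)]
        b /= b.sum()
        if is_max[s]:
            map_out[s] = int(b.argmax())   # step 4 of Algorithm 1
        else:
            marg_out[s] = b                # step 5 of Algorithm 1
    return map_out, marg_out
```

On a two-node graph with one max node $x$ and one sum node $z$, the belief at the max node is proportional to $\sum_z p(x, z)$, so the returned assignment is the exact marginal-MAP solution there.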
When there is only a single type of node in the graph, the hybrid algorithm reduces to the standard
max or sum-product algorithm. Otherwise, it passes different messages simultaneously and gives an
approximation to the MAP assignment on the max nodes as well as to the marginals on the sum nodes. On loopy graphs, we can also apply this scheme to pass hybrid tree-reweighted messages between nodes to obtain marginal and MAP estimates (see Appendix C of the supplementary material).
¹ Running the standard sum-product algorithm and choosing the maximum-likelihood assignment for the max nodes is also called maximum marginal decoding [15, 16].
3.2 VARIATIONAL DERIVATION
In this section, we show that the Marginal-MAP problem can be framed in a variational framework, and the hybrid message-passing algorithm turns out to be a solution of it (a detailed derivation is in Appendix A of the supplementary material). To see this, we construct a new graph $G_{\bar x}$ with the $x_s$ assignments fixed to be $\bar x \in X = X_1 \times \cdots \times X_m$, so the log-partition function $A(\theta_{\bar x})$ of the graph $G_{\bar x}$ is

$$A(\theta_{\bar x}) = \log \sum_z p(\bar x, z) + \log A(\theta) = \log p(\bar x) + \mathrm{const} \qquad (6)$$

As the constant only depends on the log-partition function of the original graph and does not vary with different assignments of the MAP nodes, $A(\theta_{\bar x})$ exactly estimates the log-likelihood of assignment $\bar x$. Therefore $\arg\max_{\bar x \in X} \log p(\bar x) = \arg\max_{\bar x \in X} A(\theta_{\bar x})$. Moreover, $A(\theta_{\bar x})$ can be approximated by the following [10]:

$$A(\theta_{\bar x}) \approx \sup_{\mu \in M(G_{\bar x})} \langle \theta, \mu \rangle + H_{\mathrm{Bethe}}(\mu) \qquad (7)$$
?)
where M (Gx? ) is the following marginal polytope of graph Gx :
?
?
? fixed to its assignment ?
with x
? ?s (zs ), ?st (vs , vt ): marginals
1
if xs = x
?s
M (Gx? ) = ?
?s (xs ) =
?
?
0
else
(8)
Recall that $v_s$ stands for $x_s$ or $z_s$. $H_{\mathrm{Bethe}}(\mu)$ is the Bethe entropy of the graph:

$$H_{\mathrm{Bethe}}(\mu) = \sum_s H_s(\mu_s) - \sum_{(s,t) \in E} I_{st}(\mu_{st}), \qquad H_s(\mu_s) = -\sum_{v_s \in V_s} \mu_s(v_s) \log \mu_s(v_s),$$
$$I_{st}(\mu_{st}) = \sum_{(v_s, v_t) \in V_s \times V_t} \mu_{st}(v_s, v_t) \log \frac{\mu_{st}(v_s, v_t)}{\mu_s(v_s)\,\mu_t(v_t)} \qquad (9)$$
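On a tree, plugging the exact marginals into the objective $\langle \theta, \mu \rangle + H_{\mathrm{Bethe}}(\mu)$ recovers the log-partition function exactly, which gives a quick numeric sanity check of Eq. (9). The toy chain below is an illustrative construction, not from the paper.

```python
import itertools
import numpy as np

def bethe_check(theta_node, theta_edge):
    """Evaluate <theta, mu> + H_Bethe(mu) of Eq. (9) at the *exact* marginals
    of a chain MRF; on trees this value equals log Z.  Returns both numbers."""
    n, k = len(theta_node), len(theta_node[0])
    logp = np.zeros([k] * n)
    for v in itertools.product(range(k), repeat=n):
        logp[v] = (sum(theta_node[i][v[i]] for i in range(n))
                   + sum(theta_edge[i][v[i], v[i + 1]] for i in range(n - 1)))
    logZ = np.log(np.exp(logp).sum())
    p = np.exp(logp - logZ)
    mu_s = [p.sum(axis=tuple(j for j in range(n) if j != i)) for i in range(n)]
    mu_st = [p.sum(axis=tuple(j for j in range(n) if j not in (i, i + 1)))
             for i in range(n - 1)]
    avg = sum((m * t).sum() for m, t in zip(mu_s, theta_node))
    avg += sum((m * t).sum() for m, t in zip(mu_st, theta_edge))
    H = -sum((m * np.log(m)).sum() for m in mu_s)                 # node entropies
    I = sum((m * np.log(m / np.outer(mu_s[i], mu_s[i + 1]))).sum()
            for i, m in enumerate(mu_st))                         # edge mutual info
    return avg + H - I, logZ
```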
For readability, we use $\mu_{\mathrm{sum}}, \mu_{\max}$ to subsume the node and pairwise marginals for sum/max nodes, and $\mu_{\mathrm{sum}\to\max}, \mu_{\max\to\mathrm{sum}}$ are the pairwise marginals for edges between different types of nodes. The direction here is used to be consistent with the distinction of the constraints as well as of the messages. Solving the Marginal-MAP problem is therefore equivalent to solving the following optimization problem:

$$\max_{\bar x \in X}\ \sup_{\mu_{\mathrm{other}} \in M(G_{\bar x})} \langle \theta, \mu \rangle + H_{\mathrm{Bethe}}(\mu) \;\Leftrightarrow\; \sup_{\mu_{\max} \in M_{\bar x}}\ \sup_{\mu_{\mathrm{other}} \in M(G_{\bar x})} \langle \theta, \mu \rangle + H_{\mathrm{Bethe}}(\mu) \qquad (10)$$
$\mu_{\mathrm{other}}$ contains all other node/pairwise marginals except $\mu_{\max}$. The Bethe entropy terms can be written as ($H$ is the entropy and $I$ is the mutual information)

$$H_{\mathrm{Bethe}}(\mu) = H_{\mu_{\max}} + H_{\mu_{\mathrm{sum}}} - I_{\mu_{\max}\to\mu_{\max}} - I_{\mu_{\mathrm{sum}}\to\mu_{\mathrm{sum}}} - I_{\mu_{\max}\to\mu_{\mathrm{sum}}} - I_{\mu_{\mathrm{sum}}\to\mu_{\max}}$$
If we force $\mu$ to satisfy the second condition in (8), the entropy of the max nodes $H_{\mu_{\max}} = H_s(\mu_s) = 0$, $\forall s \in X$, and the mutual information between max nodes $I_{\mu_{\max}\to\mu_{\max}} = I_{st}(x_s, x_t) = 0$, $\forall s, t \in X$. For the mutual information between different types of nodes, we can either force $x_s$ to have integral solutions, relax $x_s$ to have non-integral solutions, or relax $x_s$ in one direction². In practice, we relax the mutual information on the messages from sum nodes to max nodes, so the mutual information in the other direction is

$$I_{\mu_{\max}\to\mu_{\mathrm{sum}}} = I_{st}(x_s, z_t) = \sum_{(x_s, z_t) \in X_s \times Z_t} \mu_{st}(x_s, z_t) \log \frac{\mu_{st}(x_s, z_t)}{\mu_s(x_s)\,\mu_t(z_t)} = \sum_{z_t \in Z_t} \mu_{st}(x^*, z_t) \log \frac{\mu_{st}(x^*, z_t)}{\mu_s(x^*)\,\mu_t(z_t)} = 0, \quad \forall s \in X,\ t \in Z,$$

where $x^*$ is the assigned state of $x$ at node $s$. Finally, we only require the sum nodes to satisfy the normalization and marginalization conditions; the entropy of the sum nodes, the mutual information between sum nodes, and that from sum nodes to max nodes can be nonzero.
The above process relaxes the polytope $M(G_{\bar x})$ to be $M_{\bar x} \times L_z(G_{\bar x})$, where

$$L_z(G_{\bar x}) = \left\{ \mu \ge 0 \;\middle|\; \begin{array}{l} \sum_{z_s} \mu_s(z_s) = 1, \quad \mu_s(x_s) = 1 \text{ iff } x_s = \bar x_s, \\ \sum_{z_t} \mu_{st}(v_s, z_t) = \mu_s(v_s), \quad \sum_{z_s} \mu_{st}(z_s, v_t) = \mu_t(v_t), \\ \mu_{st}(x_s, z_t) = \mu_t(z_t) \text{ iff } x_s = \bar x_s, \\ \mu_{st}(x_s, x_t) = 1 \text{ iff } x_s = \bar x_s,\ x_t = \bar x_t \end{array} \right\}$$
² This results in four different relaxations for different combinations of message types, and the hybrid algorithm performed empirically the best.
This analysis results in the following optimization problem:

$$\sup_{\mu_{\max} \in M_{\bar x}}\ \sup_{\mu_{\mathrm{others}} \in M(G_{\bar x})} \langle \theta, \mu \rangle + H(\mu_{\mathrm{sum}}) - I(\mu_{\mathrm{sum}\to\mathrm{sum}}) - I(\mu_{\mathrm{sum}\to\max})$$

Further relaxing the $\bar x_s$ to have non-integral solutions, define

$$L(G) = \Big\{ \mu \ge 0 \;\Big|\; \textstyle\sum_{v_s} \mu_s(v_s) = 1,\ \sum_{v_t} \mu_{st}(v_s, v_t) = \mu_s(v_s),\ \sum_{v_s} \mu_{st}(v_s, v_t) = \mu_t(v_t) \Big\}$$

Finally we get

$$\sup_{\mu \in L(G)} \langle \theta, \mu \rangle + H(\mu_{\mathrm{sum}}) - I(\mu_{\mathrm{sum}\to\mathrm{sum}}) - I(\mu_{\mathrm{sum}\to\max}) \qquad (11)$$
So $M_{\bar x} \times M_z(G_{\bar x}) \subseteq M_{\bar x} \times L_z(G_{\bar x}) \subseteq L(G)$. Unfortunately, $M_{\bar x} \times M_z(G_{\bar x})$ is not guaranteed to be convex, and we can only obtain an approximate solution to the problem defined in Eq (11). Taking the Lagrangian formulation, for an $x$ node the partial derivative of the Lagrangian with respect to $\mu_s(x_s)$, $s \in X$, keeps the same form as in the max-product derivation [10], and the situations are identical for $\mu_s(z_s)$, $s \in Z$, and the pairwise pseudo-marginals, so the hybrid message-passing algorithm provides a solution to Eq (11) (see Appendix A of the supplementary material for a detailed derivation).
4 Expectation Maximization
Another plausible approach to solving the Marginal-MAP problem is the Expectation Maximization (EM) algorithm [17], typically used for maximum likelihood parameter estimation in latent variable models. In our setting, the variables $Z$ correspond to the latent variables. We now show one way of approaching this problem by applying the sum-product and max-product algorithms in the E and M steps respectively. To see this, let us first define³:

$$F(\tilde p, x) = E_{\tilde p}[\log p(x, z)] + H(\tilde p(z)) \qquad (12)$$

where $H(\tilde p) = -E_{\tilde p}[\log \tilde p(z)]$.
Then EM can be interpreted as a joint maximization of the function $F$ [18]: at iteration $t$, for the E-step, $\tilde p^{(t)}$ is set to be the $\tilde p$ that maximizes $F(\tilde p, x^{(t-1)})$, and for the M-step, $x^{(t)}$ is the $x$ that maximizes $F(\tilde p^{(t)}, x)$. Given $F$, the following two properties⁴ show that jointly maximizing the function $F$ is equivalent to maximizing the objective function $p(x) = \sum_z p(x, z)$:
1. With the value of $x$ fixed in the function $F$, the unique solution to maximizing $F(\tilde p, x)$ is given by $\tilde p(z) = p(z|x)$.
2. If $\tilde p(z) = p(z|x)$, then $F(\tilde p, x) = \log p(x) = \log \sum_z p(x, z)$.
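Both properties are easy to verify numerically on a small tabular joint $p(x, z)$; the table, the seed, and the names below are illustrative assumptions.

```python
import numpy as np

def F(p_tilde, x, joint):
    """F(p~, x) = E_p~[log p(x, z)] + H(p~) from Eq. (12), for a tabular
    joint[x, z] and a distribution p_tilde over z."""
    return float((p_tilde * np.log(joint[x])).sum()
                 - (p_tilde * np.log(p_tilde)).sum())

rng = np.random.default_rng(2)
joint = rng.random((3, 4))
joint /= joint.sum()                                          # toy p(x, z): 3 x 4
posteriors = [joint[x] / joint[x].sum() for x in range(3)]    # p(z | x)
```

Property 1 follows from $F(\tilde p, x) = \log p(x) - \mathrm{KL}(\tilde p \,\|\, p(z|x))$, which also gives Property 2 at $\tilde p = p(z|x)$.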
4.1 Expectation Maximization via Message Passing
Now we can derive the EM algorithm for solving the Marginal-MAP problem by jointly maximizing the function $F$. In the E-step, we need to estimate $\tilde p(z) = p(z|x)$ given $x$. This can be done by fixing the $x$ values at their MAP assignments and running the sum-product algorithm over the resulting graph. The M-step works by maximizing $E_{\tilde p(z|\bar x)} \log p_\theta(x, z)$, where $\bar x$ is the assignment given by the previous M-step. This is equivalent to maximizing $E_{z \sim \tilde p(z|\bar x)} \log p_\theta(x|z)$, as the $\log p_\theta(z)$ term in the maximization is independent of $x$. $\max_x E_{z \sim \tilde p(z|\bar x)} \log p_\theta(x|z) = \max_x \sum_z p(z|\bar x) \langle \theta, \phi(x, z) \rangle$, which in the overcomplete representation [10] can be approximated by

$$\sum_{s \in X,\, i} \Big( \theta_{s;i} + \sum_{t \in Z,\, j} \mu_{t;j}\,\theta_{st;ij} \Big)\, \mathbb{I}_{s;i}(x_s) \;+ \sum_{(s,t) \in E,\; s,t \in X}\ \sum_{(i,j)} \theta_{st;ij}\, \mathbb{I}_{st;ij}(x_s, x_t) \;+\; C$$

where $C$ subsumes the terms irrelevant to the maximization over $x$, and $\mu_t$ is the pseudo-marginal of node $t$ given $\bar x$⁵. Then, the M-step amounts to running the max-product algorithm with the potentials on the $x$ nodes modified accordingly. Summarizing, the EM algorithm for solving marginal-MAP estimation can be interpreted as follows:
• E-step: Fix $x_s$ to be the MAP assignment value from iteration $(k-1)$ and run sum-product to get the beliefs on the sum nodes $z_s$, say $\mu_t$, $t \in Z$.
³ By directly applying Jensen's inequality to the objective function $\max_x \log \sum_z p(x, z)$.
⁴ The proofs are straightforward, following Lemmas 1 and 2 in [18], pages 4–5. More details are in Appendix B of the supplementary material.
⁵ A detailed derivation is in Appendix B.4 of the supplementary material.
• M-step: Build a new graph $\tilde G = (\tilde V, \tilde E)$ containing only the max nodes: $\tilde V = X$ and $\tilde E = \{(s, t) \mid (s, t) \in E,\ s, t \in X\}$. For each max node $s$ in the graph, set its potential to $\bar\theta_{s;i} = \theta_{s;i} + \sum_j \theta_{st;ij}\,\mu_{t;j}$, where $t \in Z$ and $(s, t) \in E$, and set $\bar\theta_{st;ij} = \theta_{st;ij}$ for all $(s, t) \in \tilde E$. Run max-product over this new graph and update the MAP assignment.
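In the fully tabular case, where the E-step's sum-product reduces to exact conditioning and the M-step's max-product reduces to an exact argmax over a small table, the alternation above collapses to a few lines. This caricature only shows the E/M alternation, not the graphical-model machinery; the table and names are illustrative.

```python
import numpy as np

def em_marginal_map(joint, x0=0, iters=20):
    """EM for argmax_x sum_z p(x, z) on a tabular joint (rows x, columns z).
    E-step: p~(z) = p(z | x)  -- exact conditioning stands in for sum-product.
    M-step: x = argmax_x E_p~[log p(x, z)]  -- exact argmax stands in for
    max-product on the reduced graph."""
    x = x0
    for _ in range(iters):
        p_tilde = joint[x] / joint[x].sum()                # E-step
        x_new = int(np.argmax(np.log(joint) @ p_tilde))    # M-step
        if x_new == x:
            break                                          # fixed point reached
        x = x_new
    return x
```

Like any EM variant, this can get stuck in local optima depending on the initialization; on the small table used in the check below it reaches the global marginal-MAP solution.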
4.2 Relationship with the Hybrid Algorithm
Apart from the fact that the hybrid algorithm passes the different messages simultaneously while EM does it iteratively, to see the connection with the hybrid algorithm, let us first consider the message passed in the E-step at iteration $k$. The $x_s$ are fixed at the last assignment, which maximizes the message at iteration $k-1$, denoted as $x^*$ here. The $M_{ut}^{(k-1)}$ are the messages computed at iteration $k-1$:

$$M_{ts}^{(k)}(z_s) = \kappa_1 \Big\{ \exp\big[\theta_{st}(z_s, x_t^*) + \theta_t(x_t^*)\big] \prod_{u \in N(t)\setminus s} M_{ut}^{(k-1)}(x_t^*) \Big\} \qquad (13)$$
Now assume there exists an iterative algorithm which, at each iteration, computes the messages used in both steps of the message-passing variant of the EM algorithm, denoted $\tilde M_{ts}$. Eq (13) then becomes

$$\tilde M_{ts}^{(k)}(z_s) = \kappa_1 \max_{x_t'} \Big\{ \exp\big[\theta_{st}(z_s, x_t') + \theta_t(x_t')\big] \prod_{u \in N(t)\setminus s} \tilde M_{ut}^{(k-1)}(x_t') \Big\}$$

So the max nodes (the $x$'s) should pass max messages to their neighbors (the $z$'s), which is what the hybrid message-passing algorithm does.
In the M-step for EM (as discussed in Sec. 4), all the sum nodes $t$ are removed from the graph and the parameters of the adjacent max nodes are modified as $\bar\theta_{s;i} = \theta_{s;i} + \sum_j \theta_{st;ij}\,\mu_{t;j}$. $\mu_t$ is computed by sum-product in the E-step of iteration $k$, and these sum messages are used (in the form of the marginals $\mu_t$) in the subsequent M-step (with the sum nodes removed). However, a max node may prefer different assignments according to different neighboring nodes. With such uncertainties, especially during the first few iterations, it is very likely that making hard decisions will lead directly to bad local optima. In comparison, the hybrid message-passing algorithm passes mixed messages instead of making deterministic assignments in each iteration.
5 MBR Decoding
Most work on finding "best" solutions in graphical models focuses on the MAP estimation problem: find the $x$ that maximizes $p_\theta(x)$. In many practical applications, one wishes to find an $x$ that minimizes some risk, parameterized by a given loss function. This is the minimum Bayes risk (MBR) setting, which has proven useful in a number of domains, such as speech recognition [9], natural language parsing [19, 20], and machine translation [1]. We are given a loss function $\ell(x, \hat x)$ which measures the loss of $\hat x$ assuming $x$ is the truth. We assume losses are non-negative. Given this loss function, the minimum Bayes risk solution is the minimizer of Eq (14):

$$\mathrm{MBR}_\ell = \arg\min_{\hat x} E_{x \sim p}[\ell(x, \hat x)] = \arg\min_{\hat x} \sum_x p(x)\,\ell(x, \hat x) \qquad (14)$$
We now assume that $\ell$ decomposes over the structure of $x$. In particular, suppose that $\ell(x, \hat x) = \sum_{c \in C} \ell(x_c, \hat x_c)$, where $C$ is some set of cliques in $x$, and $x_c$ denotes the variables associated with that clique. For example, for Hamming loss, the cliques are simply the set of pairs of vertices of the form $(x_i, \hat x_i)$, and the loss simply counts the number of disagreements. Such decomposability is widely assumed in structured prediction algorithms [21, 22].
Assume $\ell_c(x, x') \le L$ for all $c, x, x'$. Therefore $\ell(x, x') \le |C| L$. We can then expand Eq (14) into the following:

$$\mathrm{MBR}_\ell = \arg\min_{\hat x} \sum_x p(x)\,\ell(x, \hat x) = \arg\max_{\hat x} \sum_x p(x)\,\big(|C| L - \ell(x, \hat x)\big) = \arg\max_{\hat x} \sum_x \exp\Big[ \langle \theta, x \rangle + \log \sum_c \big[L - \ell(x_c, \hat x_c)\big] - A(\theta) \Big]$$
This resulting expression has exactly the same form as the MAP-with-marginal problem, where $x$ is the variable being marginalized and $\hat x$ is the variable being maximized. Fig. 1 shows a simple example of transforming a MAP lattice problem into an MBR problem under Hamming loss. Therefore, we can apply our hybrid algorithm to solve the MBR problem.
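For Hamming loss the MBR decoder also has a well-known closed form: the expected risk separates per position, so the minimizer is the per-position argmax of the marginals. The brute-force check below confirms this on a tiny distribution; representing $p$ as a dictionary over configurations is an illustrative choice.

```python
import itertools
import numpy as np

def mbr_hamming(p, n, k):
    """Brute-force minimum-Bayes-risk decoding (Eq. 14) under Hamming loss.
    p: dict mapping length-n configurations (tuples over range(k)) to probs."""
    best, best_risk = None, np.inf
    for xh in itertools.product(range(k), repeat=n):
        risk = sum(q * sum(a != b for a, b in zip(x, xh)) for x, q in p.items())
        if risk < best_risk:
            best, best_risk = xh, risk
    return best

def marginal_decode(p, n, k):
    """Per-position argmax of the marginals: the closed-form Hamming-loss
    MBR decoder, since the expected Hamming risk separates over positions."""
    marg = np.zeros((n, k))
    for x, q in p.items():
        for i, v in enumerate(x):
            marg[i, v] += q
    return tuple(int(row.argmax()) for row in marg)
```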
[Figure 1: The augmented model for solving the MBR problem under Hamming loss over a 6-node simple lattice.]
[Figure 2: Comparison of various algorithms for marginals on a 10-node chain graph: average KL-divergence on the sum nodes vs. % of sum nodes, for max product, sum product, and hybrid message passing.]
6 EXPERIMENTS
We perform the experiments on synthetic datasets as well as a real-world protein side-chain prediction dataset [23], and compare our hybrid message-passing algorithm (both its standard belief
propagation and the tree-reweighted belief propagation (TRBP) versions) against a number of baselines such as the standard sum/max product based MAP estimates, EM, TRBP, and the greedy local
search algorithm proposed in [6].
6.1 Synthetic Data
For synthetic data, we first take a 10-node chain graph with varying splits of sum vs. max nodes and random potentials. Each node can take one of two states (0/1). The node and edge potentials are drawn from Uniform(0,1), and we randomly pick nodes in the graph to be sum or max nodes.
For this small graph, the true assignment is computable by explicitly maximizing

$$p(x) = \sum_z p(x, z) = \frac{1}{Z} \sum_z \prod_{s \in V} \psi_s(v_s) \prod_{(s,t) \in E} \psi_{st}(v_s, v_t),$$

where $Z$ is some normalization constant and $\psi_s(v_s) = \exp \theta_s(v_s)$.
First, we compare the various algorithms on the MAP assignments. Assume that the aforementioned maximization gives the assignment $x^* = (x_1^*, \ldots, x_n^*)$ and some algorithm gives the approximate assignment $x = (x_1, \ldots, x_n)$. The metrics we use here are the 0/1 loss and the Hamming loss.
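The paper does not spell out the formulas behind these error rates, so the following are the standard definitions, which we assume are the ones used (the normalization of the Hamming loss by sequence length is our assumption):

```python
def zero_one_loss(x_true, x_hat):
    """1 if the two full assignments differ anywhere, else 0."""
    return int(any(a != b for a, b in zip(x_true, x_hat)))

def hamming_loss(x_true, x_hat):
    """Fraction of positions on which the two assignments disagree."""
    return sum(a != b for a, b in zip(x_true, x_hat)) / len(x_true)
```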
[Figure 3: Comparison of various algorithms for MAP estimates on a 10-node chain graph: 0/1 loss (left) and Hamming loss (right), plotted as error rate vs. % of sum nodes for max, sum, hybrid, EM, and max/sum/hybrid + local search.]
Fig. 3 shows the loss on the assignment of the max nodes. As the number of sum nodes goes up, the accuracy of the standard sum-product based estimation (sum) gets better, whereas the accuracy of the standard max-product based estimation (max) worsens. However, our hybrid message-passing algorithm (hybrid), on average, results in the lowest loss compared to the other baselines, with running times similar to the sum/max product algorithms.
We also compare a stochastic greedy search approach described in [6], initialized by the results of the sum/max/hybrid algorithm (sum/max/hybrid + local search). As shown in [6], local search with sum-product initialization empirically performs better than with max-product, so later on we only compare against local search using sum-product initialization (LS). Among the three initialization methods, starting from the hybrid algorithm's results allows the search algorithm to find, in very few steps,
[Figure 4: Approximate log-partition function scores on a 50-node tree (left) and an 8×10 grid (right), normalized by the result of the hybrid algorithm; curves for max, sum, hybrid, LS, and hybrid+LS (tree) and for TR-max, TR-sum, TR-hybrid, LS, and TR-hybrid+LS (grid) vs. % of sum nodes.]
the local optimum, which often happened to be the global optimum as well. In particular, it only
takes 1 or 2 steps of search in the 10-node chain case and 1 to 3 steps in the 50-node tree case.
Next, we experiment with marginal estimation. Fig. 2 shows the mean KL-divergence on the marginals for the three message-passing algorithms (each averaged over 100 random experiments) compared to the true marginals of p(z|x). The greedy search of [6] is not included since it only provides the MAP, not marginals. The x-axis shows the percentage of sum nodes in the graph. Just like in the MAP case, our hybrid method consistently produces the smallest KL-divergence compared to the others.
When the computation of the truth is intractable, the log-likelihood of the assignment can be approximated by the log-partition function with the Bethe approximation according to Sec. 3.2. Note that this is exact on trees. Here, we use a 50-node tree with binary node states and an 8×10 grid with various state-space sizes 1 ≤ |Y_s| ≤ 20. On the grid graph, we apply tree-reweighted sum or max product [14, 13] and our hybrid version based on TRBP. For the edge appearance probabilities in TRBP, we apply a common approach that uses a greedy algorithm to find spanning trees with as many uncovered edges as possible until all the edges in the graph are covered at least once. Even if the message-passing algorithms are not guaranteed to converge on loopy graphs, we can still compare the best result they provide after a certain number of iterations.
Fig. 4 presents the results. In the tree case, as expected, using the hybrid message-passing algorithm's result to initialize the local search algorithm performs the best. On the grid graph, the local search algorithm initialized by the sum-product results works well when there are few max nodes, but since the search space grows exponentially with the number of max nodes, it can take hundreds of steps to find the optimum. On the other hand, because the hybrid TRBP starts in a good area, it consistently achieves the highest likelihood among all four algorithms with fewer extra steps.
6.2 Real-world Data
We then experiment with the protein side-chain prediction dataset [23, 24], which consists of a set of protein structures for which we need to find the lowest-energy assignment for the rotamer residues. There are two sets of residues: core residues and surface residues. The core residues are the residues which are connected to more than 19 other residues, and the surface ones are the others. Since the MAP results are usually lower on the surface residues than on the core residues [24], we choose the surface residues to be max nodes and the core residues to be the sum nodes. The ground truth is given by the maximum likelihood assignment of the residues, so we do not expect better results on the core nodes, but we hope that any improvement in the accuracy on the surface nodes can make up for the loss on the core nodes and thus give a better performance overall. As shown in Table 1, the improvements of the hybrid methods on the surface nodes are larger than the losses on the core nodes, thus improving the overall performance.

Table 1: Accuracy on the 1st angle (χ1) and on the 1st & 2nd angles (χ1 ∧ χ2)

χ1           | ALL    | SURFACE | CORE
sum product  | 0.7900 | 0.7564  | 0.8325
max product  | 0.7900 | 0.7555  | 0.8336
hybrid       | 0.7910 | 0.7573  | 0.8336
TRBP         | 0.7942 | 0.7608  | 0.8364
hybrid TRBP  | 0.7950 | 0.7626  | 0.8359

χ1 ∧ χ2      | ALL    | SURFACE | CORE
sum product  | 0.6482 | 0.6069  | 0.7005
max product  | 0.6512 | 0.6064  | 0.7078
hybrid       | 0.6485 | 0.6051  | 0.7033
TRBP         | 0.6592 | 0.6112  | 0.7174
hybrid TRBP  | 0.6597 | 0.6140  | 0.7186
References
[1] Shankar Kumar and William Byrne. Minimum Bayes-risk decoding for statistical machine translation. In HLT-NAACL, 2004.
[2] David Sontag and Tommi Jaakkola. New outer bounds on the marginal polytope. In In Advances in
Neural Information Processing Systems, 2007.
[3] Amir Globerson and Tommi Jaakkola. Fixing max-product: Convergent message passing algorithms for
map lp-relaxations. In NIPS, 2007.
[4] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for graph-structured
linear programs: proximal projections, convergence and rounding schemes. In ICML, 2008.
[5] Qiang Liu and Alexander Ihler. Variational algorithms for marginal map. In UAI, 2011.
[6] James D. Park. MAP Complexity Results and Approximation Methods. In UAI, 2002.
[7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press,
2009.
[8] Shaul K. Bar-Lev, Daoud Bshouty, Peter Enis, Gerard Letac, I-Li Lu, and Donald Richards. The diagonal multivariate natural exponential families and their classification. Journal of Theoretical Probability, pages 883–929, 1994.
[9] Vaibhava Goel and William J. Byrne. Minimum Bayes-risk automatic speech recognition. Computer
Speech and Language, 14(2), 2000.
[10] M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference.
Foundations and Trends in Machine Learning, 2008.
[11] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan
Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[12] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In NIPS, 2000.
[13] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Exact map estimates by tree agreement.
In NIPS, 2002.
[14] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation
algorithms and approximate ml estimation by pseudo-moment matching. In AISTATS, 2003.
[15] Mark Johnson. Why doesn't EM find good HMM POS-taggers? In EMNLP, pages 296–305, 2007.
[16] Pradeep Ravikumar, Martin J. Wainwright, and Alekh Agarwal. Message-passing for graph-structured
linear programs: Proximal methods and rounding schemes, 2008.
[17] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 1977.
[18] Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368, 1999.
[19] Slav Petrov and Dan Klein. Discriminative log-linear grammars with latent variables. In NIPS, 2008.
[20] Ivan Titov and James Henderson. A latent variable model for generative dependency parsing. In IWPT,
2007.
[21] Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. Learning structured prediction
models: a large margin approach. 2004.
[22] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, Yasemin Altun, and Yoram Singer. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[23] Chen Yanover, Talya Meltzer, and Yair Weiss. Linear programming relaxations and belief propagation: an empirical study. Journal of Machine Learning Research, 7:1887–1907, 2006.
[24] Chen Yanover, Ora Schueler-Furman, and Yair Weiss. Minimizing and learning energy functions for side-chain prediction. In RECOMB, 2007.
Inference in continuous-time change-point models
Florian Stimberg
Computer Science, TU Berlin
[email protected]
Manfred Opper
Computer Science, TU Berlin
[email protected]
Andreas Ruttor
Computer Science, TU Berlin
[email protected]
Guido Sanguinetti
School of Informatics, University of Edinburgh
[email protected]
Abstract
We consider the problem of Bayesian inference for continuous-time multi-stable
stochastic systems which can change both their diffusion and drift parameters at
discrete times. We propose exact inference and sampling methodologies for two
specific cases where the discontinuous dynamics is given by a Poisson process
and a two-state Markovian switch. We test the methodology on simulated data,
and apply it to two real data sets in finance and systems biology. Our experimental
results show that the approach leads to valid inferences and non-trivial insights.
1 Introduction
Continuous-time stochastic models play a prominent role in many scientific fields, from biology to
physics to economics. While it is often possible to easily simulate from a stochastic model, it is often
hard to solve inference or parameter estimation problems, or to assess quantitatively the fit of a model
to observations. In recent years this has motivated an increasing interest in the machine learning
and statistics community in Bayesian inference approaches for stochastic dynamical systems, with
applications ranging from biology [1?3] to genetics [4] to spatio-temporal systems [5].
In this paper, we are interested in modelling and inference for systems exhibiting multi-stable behavior. These systems are characterized by stable periods and rapid transitions between different
equilibria. Very common in physical and biological sciences, they are also highly relevant in economics and finance, where unexpected events can trigger sudden changes in trading behavior [6].
While there have been a number of approaches to Bayesian change-point inference [7–9], most of
them expect the observations to be independent and coming directly from the change-point process.
In many systems this is not the case because observations are only available from a dynamic process whose parameters are change-point processes. There have been other algorithms for detecting
indirectly observed change-point processes [10], but we emphasize that we are also (and sometimes
mostly) interested in the dynamical parameters of the system.
We present both an exact and an MCMC-based approach for Bayesian inference in multi-stable
stochastic systems. We describe in detail two specific scenarios: the classic change-point process
scenario whereby the latent process has a new value at each jump and a bistable scenario where the
latent process is a stochastic telegraph process. We test extensively our model on simulated data,
showing good convergence properties of the sampling algorithm. We then apply our approach to
two very diverse data sets in finance and systems biology, demonstrating that the approach leads to
valid inferences and interesting insights in the nature of the system.
2 The generative model
We consider a system of N stochastic differential equations (SDE)
dx_i = (A_i(t) − λ_i x_i) dt + σ_i(t) dW_i ,   (1)
of the Ornstein–Uhlenbeck type for i = 1, ..., N, which are driven by independent Wiener processes W_i(t). The time dependencies in the drift A_i(t) and in the diffusion terms σ_i(t) will account for sudden changes in the system and will be further modelled by stochastic Markov jump processes. Our prior assumption is that change points, where A_i and σ_i change their values, constitute Poisson events. This means that the times Δt between consecutive change points are independent exponentially distributed random variables with density p(Δt) = f exp(−f Δt), where f denotes their expected number per time unit. We will consider two different models for the values of A_i and σ_i in this paper:
• Model 1 assumes that at each of the change points A_i and σ_i are drawn independently from fixed prior densities p_A(·) and p_σ(·). The number of change points up to time t is counted by the Poisson process ν(t), so that A_i(t) = A_i^{ν(t)} and σ_i(t) = σ_i^{ν(t)} are piecewise constant functions of time.
• Model 2 restricts the parameters A_i(t) and σ_i(t) to two possible values A_i^0, A_i^1, σ_i^0, and σ_i^1, which are time-independent random variables with corresponding priors. We select the parameters according to the telegraph process μ(t), which switches between μ = 0 and μ = 1 at each change point.
For both models, A_i(t) and σ_i(t) are unobserved. However, we have a data set of M noisy observations Y ≡ {y_1, ..., y_M} of the process x(t) = (x_1(t), ..., x_N(t)) at discrete times t_j, j = 1, ..., M, i.e. we assume that y_j = x(t_j) + ξ_j with independent Gaussian noise ξ_j ∼ N(0, σ_o²).
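The generative process of model 1 is straightforward to simulate. The following sketch (one dimension, Euler–Maruyama discretization, all parameter values and prior choices hypothetical) produces a latent path together with noisy observations y_j = x(t_j) + ξ_j:

```python
import numpy as np

def simulate_model1(T=100.0, dt=0.01, f=0.05, lam=0.5, sigma_o=0.2, seed=0):
    """Simulate one OU path whose drift offset A and diffusion sigma jump
    to fresh prior draws at the change points of a Poisson(f) process."""
    rng = np.random.default_rng(seed)
    # change-point times: exponential inter-arrival times with rate f
    taus = []
    t = rng.exponential(1.0 / f)
    while t < T:
        taus.append(t)
        t += rng.exponential(1.0 / f)
    # piecewise-constant parameter values, one fresh draw per segment
    n_seg = len(taus) + 1
    A_vals = rng.normal(0.0, 2.0, size=n_seg)        # prior p_A: hypothetical
    sig_vals = rng.lognormal(-1.0, 0.5, size=n_seg)  # prior p_sigma: hypothetical
    # Euler-Maruyama integration of dx = (A(t) - lam*x) dt + sigma(t) dW
    n = int(T / dt)
    ts = np.arange(n) * dt
    seg = np.searchsorted(taus, ts)   # which segment each time step falls in
    x = np.zeros(n)
    for k in range(1, n):
        A, sig = A_vals[seg[k - 1]], sig_vals[seg[k - 1]]
        x[k] = x[k - 1] + (A - lam * x[k - 1]) * dt + sig * np.sqrt(dt) * rng.normal()
    # noisy observations on a sparse grid: y_j = x(t_j) + N(0, sigma_o^2)
    obs_idx = np.arange(0, n, 100)
    y = x[obs_idx] + sigma_o * rng.normal(size=obs_idx.size)
    return ts, x, taus, ts[obs_idx], y
```

Inference then runs on (t_obs, y) alone; the path, the change points, and the segment parameters are all latent.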
3 Bayesian Inference
Given data Y we are interested in the posterior distribution of all unobserved quantities, which are
the paths of the stochastic processes X ≡ x_{[0:T]}, Z ≡ (A_{[0:T]}, σ_{[0:T]}) in a time interval [0 : T] and the model parameters θ = ({λ_i}). For simplicity, we have not used a prior for the rate f and treated it as a fixed quantity. The joint probability of these quantities is given by
p(Y, X, Z, θ) = p(Y|X) p(X|Z, θ) p(Z) p(θ)   (2)
A Gibbs sampling approach to this distribution is nontrivial, because the sample paths are infinite
dimensional objects, and a naive temporal discretization may lead to potential extra errors.
Inference is greatly facilitated by the fact that conditioned on Z and θ, X is an Ornstein–Uhlenbeck process, i.e. a Gaussian Markov process. Since also the data likelihood p(Y|X) is Gaussian, it is possible to integrate out the process X analytically, leading to a marginal posterior

p(Z|Y, θ) ∝ p(Y|Z, θ) p(Z)   (3)
over the simpler piecewise constant sample paths of the jump processes. Details on how to compute the likelihood p(Y|Z, θ) are given in the supplementary material.

When inference on posterior values X is required, we can use the fact that X|Y, Z, θ is an inhomogeneous Ornstein–Uhlenbeck process, which allows for an explicit analytical computation of marginal means and variances at each time.
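Concretely, between change points each coordinate follows an OU process with constant parameters, whose transition density is Gaussian with closed-form moments. These are standard OU identities, written out here for illustration rather than taken from the paper:

```python
import math

def ou_transition_moments(x0, delta, A, lam, sigma):
    """Mean and variance of x(t + delta) given x(t) = x0 under
    dx = (A - lam * x) dt + sigma dW with constant A and sigma."""
    e = math.exp(-lam * delta)
    mean = x0 * e + (A / lam) * (1.0 - e)           # relaxes towards A / lam
    var = sigma ** 2 / (2.0 * lam) * (1.0 - e * e)  # saturates at sigma^2 / (2 lam)
    return mean, var
```

Chaining these Gaussian transitions across the observation times, together with the Gaussian noise model, is what makes the marginal likelihood p(Y|Z, θ) computable in closed form.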
The jump processes Z = {τ, φ} are completely determined by the set of change points τ ≡ {τ_j} and the actual values φ ≡ {A^j, σ^j} to which the system jumps at the change points. Since p(Z) = p(φ|τ) p(τ) and p(φ|τ, Y, θ) ∝ p(Y|Z, θ) p(φ|τ), we can see that conditioned on a set of, say, m change points, the distribution of φ is a finite (and usually relatively low) dimensional integral from which one can draw samples using standard methods. In fact, if the prior density of the drift values p_A is a Gaussian, then it is easy to see that also the posterior is Gaussian.
4 MCMC sampler architecture
We use a Metropolis-within-Gibbs sampler, which alternates between sampling the parameters θ and φ from p(θ|Y, τ, φ) and p(φ|Y, τ, θ), and the positions τ of the change points from p(τ|Y, φ, θ). Sampling from p(θ|Y, τ, φ), as well as sampling the σ_i from p(φ|Y, τ, θ), is done by a Gaussian random walk Metropolis-Hastings sampler on the logarithm of the parameters, to ensure positivity. Sampling the A_i, on the other hand, can be done directly if the prior p(A_i) is Gaussian, because then p(A_i|Y, τ, θ, {σ_i}) is also Gaussian.
Finally, we need to draw change points from their density p(τ|Y, φ, θ) ∝ p(Y|Z, θ) p(φ|τ) p(τ). Their number m is a random variable with a Poisson prior distribution, and for fixed m each τ_i is uniformly distributed in [0 : T]. Therefore the prior probability of the sorted list τ_1, ..., τ_m is given by

p(τ_1, ..., τ_m | f) ∝ f^m e^{−f T} .   (4)
For sampling change points we use a Metropolis-Hastings step, which accepts a proposal τ′ for the positions of the change points with probability

A = min{ 1, [ p(τ′|Y, φ, θ) q(τ|τ′) ] / [ p(τ|Y, φ, θ) q(τ′|τ) ] } ,   (5)

where q(τ′|τ) is the proposal probability to generate τ′ starting from τ. Otherwise the old sample is used again. As proposal for a new τ-path we choose one of three (model 1) or five (model 2) possible actions, which modify the current sample:
• Moving a change point: One change point is chosen at random with equal probability and
the new jump time is drawn from a normal distribution with the old jump time as the mean.
The normal distribution is truncated at the neighboring jump times to ensure that the order
of jump times stays the same.
• Adding a change point: We use a uniform distribution over the whole time interval [0 : T] to draw the time of the added jump. In case of model 1 the parameter set φ for the new interval stays the same and is only changed in the following update of all the φ sets. For model 2 it is randomly decided if the telegraph process μ(t) is inverted before or after the new change point. This is necessary to allow μ to change on both ends.
• Removing a change point: The change point to remove is chosen at random. For model 1 the newly joined interval inherits the parameters with equal probability from the interval before or after the removed change point. As for adding a change point, when using model 2 we choose to either invert μ after or before the removed jump time.
For model 2 we also need the option to add or remove two jumps, because adding or removing one jump will result in inverting the whole process after or before it, which leads to poor acceptance rates. When adding or removing two jumps instead, μ only changes between these two jumps.
• Adding two change points: The first change point is drawn as for adding a single one,
the second one is drawn uniformly from the interval between the new and the next change
point.
• Removing two change points: We choose one of the change points, except the last one, at
random and delete it along with the following one.
While the proposal does not use any information from the data, it is very fast to compute and quickly converges to reasonable states, although we initialize the change points simply by drawing from p(τ).
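For the "move a change point" action the truncated-normal proposal is asymmetric, so the Hastings correction q(τ|τ′)/q(τ′|τ) must include the truncation normalizers. The following is an illustrative sketch of this single move for model 1; the log-posterior over change-point configurations is left as an abstract callable, and all helper names are invented here:

```python
import math, random

def trunc_normal_logpdf(x, mu, s, lo, hi):
    # log density of N(mu, s^2) truncated to the interval (lo, hi)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    log_norm = math.log(phi((hi - mu) / s) - phi((lo - mu) / s))
    return (-0.5 * ((x - mu) / s) ** 2
            - math.log(s * math.sqrt(2.0 * math.pi)) - log_norm)

def sample_trunc_normal(mu, s, lo, hi, rng):
    while True:                       # simple rejection sampling
        x = rng.gauss(mu, s)
        if lo < x < hi:
            return x

def move_change_point(taus, log_post, T, s=1.0, rng=random):
    """One Metropolis-Hastings update of a randomly chosen change point
    (assumes taus is non-empty). `log_post(taus)` stands in for
    log p(Y|Z) + log p(taus) and is abstract in this sketch."""
    i = rng.randrange(len(taus))
    lo = taus[i - 1] if i > 0 else 0.0
    hi = taus[i + 1] if i + 1 < len(taus) else T
    new = sample_trunc_normal(taus[i], s, lo, hi, rng)
    prop = taus[:i] + [new] + taus[i + 1:]
    log_a = (log_post(prop) - log_post(taus)
             + trunc_normal_logpdf(taus[i], new, s, lo, hi)   # q(tau | tau')
             - trunc_normal_logpdf(new, taus[i], s, lo, hi))  # q(tau' | tau)
    if log_a >= 0.0 or rng.random() < math.exp(log_a):
        return prop
    return taus
```

The add/remove moves additionally change the dimension of τ, so their acceptance ratios carry the prior factor f^m e^{−fT} from equation (4) as well as the corresponding proposal probabilities.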
5 Exact inference
In the case of small systems described by model 2 it is also feasible to calculate the marginal probability distribution q(μ, x, t) for the state variables x, μ at time t of the posterior process directly. For that purpose, we use a smoothing algorithm, which is quite similar to the well-known method
for state inference in hidden Markov models. In order to improve clarity we only discuss the case of a one-dimensional Ornstein–Uhlenbeck process x(t) here, but the generalization to multiple dimensions is straightforward.

Figure 1: Comparison of the results of the MCMC sampler and the exact inference: (top left) true path of x (black) and the noisy observations (blue crosses); (bottom left) true path of μ (black) and posterior of p(μ = 1) from the exact inference (green) and the MCMC sampler (red dashed); (right) convergence of the sampler: mean difference between sampler result and exact inference of p(μ = 1) for different numbers of samples (red crosses) and the result of power law regression for more than 100 samples (green).
As our model has the Markov property, the exact marginal posterior is given by

q(μ, x, t) = (1/L) p(μ, x, t) ψ(μ, x, t).   (6)

Here p(μ, x, t) denotes the marginal filtering distribution, which is the probability density of the state (x, μ) at time t conditioned on the observations up to time t. The normalization constant L is equal to the total likelihood of all observations. And the last factor ψ(μ, x, t) is the likelihood of the observations after time t under the condition that the process started with state (x, μ) at time t.
The initial condition for the forward message p(μ, x, t) is the prior over the initial state of the system. The time evolution of the forward message is given by the forward Chapman–Kolmogorov equation

∂p(μ, x, t)/∂t = [ −∂/∂x (A_μ − λx) + (σ_μ²/2) ∂²/∂x² ] p(μ, x, t) + Σ_{ν≠μ} [ f_{ν→μ} p(ν, x, t) − f_{μ→ν} p(μ, x, t) ] .   (7)

Here f_{μ→ν} denotes the transition rate from discrete state μ to discrete state ν ∈ {0, 1} of model 2, which has the values

f_{0→1} = f_{1→0} = f ,   f_{0→0} = f_{1→1} = 0 .   (8)
Including an observation y_j at time t_j leads to a jump of the filtering distribution,

p(μ, x, t_j⁺) = p(μ, x, t_j⁻) p(y_j | x),   (9)

where p(y_j | x) denotes the local likelihood of that observation given by the noise model, and p(μ, x, t_j⁻) and p(μ, x, t_j⁺) are the values of the forward message directly before and after time point t_j. By integrating equation (7) forward in time from the first observation to the last, we obtain the exact solution to the filtering problem of our model.
Similarly, we integrate backward in time from the last observation at time T to the first one in order to compute ψ(μ, x, t). The initial condition here is ψ(μ, x, t_N⁺) = 1. Between observations the time evolution of the backward message is given by the backward Chapman–Kolmogorov equation

−∂ψ(μ, x, t)/∂t = [ (A_μ − λx) ∂/∂x + (σ_μ²/2) ∂²/∂x² ] ψ(μ, x, t) + Σ_{ν≠μ} f_{μ→ν} [ ψ(ν, x, t) − ψ(μ, x, t) ] .   (10)

And each observation is taken into account by the jump condition

ψ(μ, x, t_j⁻) = ψ(μ, x, t_j⁺) p(y_j | x(t_j)).   (11)
Figure 2: Synthetic results on a four-dimensional diffusion process with diagonal diffusion matrix: (top left) true paths with subsampled data points (dots); (top right) intensity of the posterior point process (the probability of a change point in a given interval is given by the integral of the intensity); actual change points are shown as vertical dotted lines; (bottom row) posterior processes for A (left) and σ² (right) with a one standard deviation confidence interval; true paths are shown as black dashed lines.
Afterwards, L q(μ, x, t) can be calculated by multiplying the forward message p(μ, x, t) and the backward message ψ(μ, x, t). Normalizing that quantity according to

∫ Σ_μ q(μ, x, t) dx = 1   (12)

then gives us the marginal posterior as well as the total likelihood L = p(y_1, ..., y_N | A, b, ...) of all observations. Note that we only need to calculate L for one time point, as it is a time-independent quantity. Minimizing −log L as a function of the parameters can then be used to obtain maximum
likelihood estimates. As an analytical solution for both equations (7) and (10) does not exist, we
have to integrate them numerically on a grid. A detailed description is given in the supplementary
material.
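As a toy illustration of such a grid scheme, here is a naive explicit-Euler step for the forward equation (7) of model 2. It uses plain central differences with no upwinding or careful boundary treatment, so it is only usable for small time steps on a wide grid; the paper's actual scheme is the one described in the supplementary material:

```python
import numpy as np

def forward_step(p, x, dt, A, lam, sig, f):
    """One explicit Euler step of the forward Chapman-Kolmogorov equation
    for model 2. p has shape (2, G): density over the grid x for mu = 0, 1."""
    dx = x[1] - x[0]
    new = np.empty_like(p)
    for mu in (0, 1):
        flux = (A[mu] - lam * x) * p[mu]      # drift flux (A_mu - lam x) p
        drift = -np.gradient(flux, dx)        # -d/dx of the drift flux
        diff = 0.5 * sig[mu] ** 2 * np.gradient(np.gradient(p[mu], dx), dx)
        switch = f * (p[1 - mu] - p[mu])      # telegraph switching in and out
        new[mu] = p[mu] + dt * (drift + diff + switch)
    return new
```

Total probability mass is (approximately) conserved as long as the density stays negligible at the grid boundaries, which is a useful sanity check on any discretization of equation (7).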
6 Results

6.1 Synthetic Data
As a first consistency check, we tested the model on simulated data. The availability of an exact
solution to the inference problem provides us with an excellent way of monitoring convergence
of our sampler. Figure 1 shows the results of sampling on data generated from model 2, with
parameter settings such that only the diffusion constant changes, making it a fairly challenging
problem. Despite the rather noisy nature of the data (top left panel), the approach gives a reasonable
reconstruction of the latent switching process (left panel, bottom). The comparison between exact
inference and MCMC is also instructive, showing that the sampled posterior does indeed converge
to the true posterior after a relatively short burn-in period (Figure 1, right panel). A power law regression of the mean absolute difference between exact and MCMC (after burn-in) on the number of samples yields a decrease with approximately the square root of the number of samples (exponent 0.48), as expected.

Figure 3: Stochastic gene expression during competence: (left) fluorescence intensity for comS protein over 36 hrs; (right) inferred comK activation profile using model 2 (see text).
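The n^{-1/2} decay observed here is the generic Monte Carlo rate for independent samples; a quick toy check with i.i.d. draws rather than the actual sampler (all values hypothetical):

```python
import numpy as np

def mc_error(n, reps=200, seed=0):
    """Average absolute error of the Monte Carlo estimate of a mean
    (true mean 0) computed from n i.i.d. standard normal draws."""
    rng = np.random.default_rng(seed)
    return float(np.mean([abs(rng.normal(size=n).mean()) for _ in range(reps)]))

# going from n to 100 n samples should shrink the error by roughly a factor of 10
```

For a correlated MCMC chain the same rate holds with n replaced by the effective sample size, so the fitted exponent of 0.48 is consistent with a well-mixing sampler.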
To test the performance of the inference approach on model 1, we simulated data from a four-dimensional diffusion process with diagonal diffusion, with change points in the drift and diffusion
(at the same times). The results of the sampling based inference are shown in Figure 2. Once
again, the results indicate that the sampled distribution was able to accurately identify the change
points (top right panel) and the values of the parameters (bottom panels). The results are based
on 260,000 samples and were obtained in approximately twelve hours on a standard workstation.
Unfortunately in this higher dimensional example we do not have access to the true posterior, as
numerical integration of a high dimensional PDE proved computationally prohibitive.
6.2 Characterization of noise in stochastic gene expression
Recent developments in microscopy technology have led to the startling discovery that stochasticity
plays a crucial role in biology [11]. A particularly interesting development is the distinction between
intrinsic and extrinsic noise [12]: given a biological system, intrinsic noise arises as a consequence of
fluctuations due to the low numbers of the molecular species composing the system, while extrinsic
noise is caused by external changes influencing the system of interest. A currently open question
is how to characterize mathematically the difference between intrinsic and extrinsic noise, and a
widely mooted opinion is that either the amplitude or the spectral characteristics of the two types
of noise should be different [13]. To provide a proof-of-principle investigation into these issues, we
tested our model on real stochastic gene expression data subject to extrinsic noise in Bacillus subtilis
[14]. Here, single-cell fluorescence levels of the protein comS were assayed through time-lapse
microscopy over a period of 36 hours. During this period, the protein was subjected to extrinsic noise
in the form of activation of the regulator comK, which controls comS expression with a switch-like
behavior (Hill coefficient 5). Activation of comS produces a striking phenotype called competence,
whereby the cell stops dividing, becoming visibly much longer than sister cells. The data used is
shown in Figure 3, left panel.
To determine whether the noise characteristics are different in the presence of comK activity, we modelled the data using two different models: model 2, where both the offset A and the diffusion σ can take two different values, and a constrained version of model 2 where the diffusion constant cannot switch (as in [15]). In both cases we sampled 500,000 posterior samples, discarding an initial burn-in of 10,000 samples. Both models predict two clear change points representing the activation and inactivation of comK at approximately 5 and 23 hrs respectively (Figure 3, right panel, showing model 2 results). Also both models are in close agreement on the inferred kinetic parameters A, b, and λ (Figure 4, left panel, showing a comparison of the λ posteriors), consistently with the fact that the mean trajectory for both models must be the same.

Naturally, model 2 predicted two different values for the diffusion constant depending on the activity state of comK (Figure 4, central panel). The two posterior distributions for σ₁ and σ₂ appear to be well separated, lending support to the unconstrained version of model 2 being a better description
Figure 4: Stochastic gene expression during competence: (left) posterior estimates of λ (solid) for switching σ (red) and non-switching σ² (blue) with common prior (dashed); (center) posterior estimates of σ₁² (red solid), σ₂² (green solid) and the non-switching σ² posterior (blue solid) with common prior (dashed); (right) posterior distribution of f(A, b, σ₁, σ₂) (see text), indicating the incompatibility of the simple birth-death model of steady state with the data.
of the data. While this is an interesting result in itself, it is perhaps not surprising. We can gain some
insights by considering the underlying discrete dynamics of comS protein counts, which our model
approximates as a continuous variable [16]. As we are dealing with bacterial cells, transcription and
translation are tightly coupled, so that we can reasonably assume that protein production is given by
a Poisson process. At steady state in the absence of comK, the production of comS proteins will be
given by a birth-death process with birth rate b and death rate λ, while in the presence of comK the birth rate would change to A + b. Defining

λ₀ = b/λ ,   λ₁ = (A + b)/λ ,   (13)

this simple birth-death model implies a Poisson distribution of the steady state comS protein levels in the two comK states, with parameters λ₀ and λ₁ respectively. Unfortunately, we only measure the
counts of comS protein up to a proportionality constant (due to the arbitrary units of fluorescence);
this means that the basic property of Poisson distributions of having the same mean and variance
cannot be tested easily. However, if we consider the ratio of signal to noise ratios in the two states,
we obtain a quantity which is independent of the fluorescence units, namely

( N̄₁ / stdev(N₁) ) / ( N̄₀ / stdev(N₀) ) = √(λ₁/λ₀) = √((A + b)/b) .   (14)
This relationship is not enforced in our model, but, if the simple birth-death interpretation is supported by the data, it should emerge naturally in the posterior distributions. To test this, we plot in
Figure 4, right panel, the posterior distribution of

f(A, b, σ₁, σ₂) = ( (A + b)/σ₂ ) / ( b/σ₁ ) − √((A + b)/b) ,   (15)
the difference between the posterior estimate of the ratio of the signal to noise ratios in the two comK
states and the prediction from the birth-death model. The overwhelming majority of the posterior
probability mass is away from zero, indicating that the data does not support the predictions of the
birth-death interpretation of the steady states. A possible explanation of this unexpected result is
that the continuous approximation breaks down in the low abundance state (corresponding to no
comK activation); the expected number of particles in the comK inactive state is given by λ₀ and
has posterior mean 25.8. The breaking down of the OU approximation for these levels of protein
expression would be surprising, and would sound a call for caution when using SDEs to model
single cell data as advocated in large parts of the literature [2]. An alternative and biologically more
exciting explanation would be that the assumption that the decay rates are the same irrespective of
the activity of comK is wrong. Notice that, if we assumed different decay rates in the two states,
the first term in equation (15) would not change, while the second would scale with a factor √(λ₀/λ₁).
Our results would then predict that comK regulation at the transcriptional level alone cannot explain
the data, and that comS dynamics must be regulated both transcriptionally and post-transcriptionally.
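The statistic in equation (15) vanishes exactly when σ₂/σ₁ = √((A + b)/b), which is what the birth-death picture would require; a quick numeric check with hypothetical parameter values:

```python
import math

def f_stat(A, b, sigma1, sigma2):
    """Equation (15): ratio of signal-to-noise ratios in the two comK states
    minus the birth-death prediction sqrt((A + b) / b)."""
    return ((A + b) / sigma2) / (b / sigma1) - math.sqrt((A + b) / b)

# hypothetical parameter values
A, b, s1 = 3.0, 1.0, 0.5
s2 = s1 * math.sqrt((A + b) / b)   # diffusion consistent with birth-death
```

With these values f_stat(A, b, s1, s2) is zero; a posterior for this statistic concentrated away from zero, as in Figure 4 (right), is therefore evidence against the simple birth-death interpretation.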
Figure 5: Analysis of DAX data: (left) monthly closing values with data points (red crosses); (center) A process, with notable events highlighted in the figure ("German NASDAQ", dot-com bubble, early 2000s recession, global financial crisis); (right) σ process.
6.3 Change point detection in financial data
As an example of another application of our methodology, we applied model 1 to financial data
taken from the German stock exchange (DAX). The data, shown in Figure 5, consists of monthly
closing values; we subsampled it at quarterly values. The posterior processes for A and σ are shown
in the central and right panels of Figure 5 respectively. An inspection of these results reveals several
interesting change points which can be related to known events: for convenience, we highlight
a few of them in the central panel of Figure 5. Clearly evident are the changes caused by the
introduction of the Neuer Markt (the German equivalent of the NASDAQ) in 1997, as well as the
dot-com bubble (and subsequent recession) in the early 2000s and the global financial crisis in 2008.
Interestingly, in our results the diffusion (or volatility as is more commonly termed in financial
modelling) seems not to be particularly affected by recent events (after surging for the Neuer Markt).
A possible explanation is the rather long time interval between data points: volatility is expected to
be particularly high on the micro-time scale, or at best the daily scale. Therefore the effective
sampling rate we use may be too sparse to capture these changes.
7 Discussion
In this paper, we proposed a Bayesian approach to inference in multi-stable systems. The basic model is a system of SDEs whose drift and diffusion coefficients can change abruptly at random, exponentially distributed times. We describe the approach in two special models: a system of SDEs with coefficients changing at change points from a Poisson process (model 1) and a system of SDEs whose coefficients can change between two sets of values according to a random telegraph
process (model 2). Each model is particularly suitable for specific applications: while model 1
is important in financial modelling and industrial application, model 2 extends a number of similar
models already employed in systems biology [3,15,17]. Testing our model(s) in specific applications
reveals that it often leads to interpretable predictions. For example, in the analysis of DAX data, the
model correctly captures known important events such as the dot-com bubble. In an application to
biological data, the model leads to non-obvious predictions of considerable biological interest.
In regard to the computational costs stated in this paper, it has to be noted that the sampler was
implemented in Matlab. A new implementation in C++ for model 2 showed over 12 times faster
computational times for a data set with 10 OU processes and 2 telegraph processes. A similar
improvement is to be expected for model 1.
There are several interesting possible avenues to further this work. While the inference scheme
we propose is practical in many situations, scaling to higher dimensional problems may become
computationally intensive. It would therefore be interesting to investigate approximate inference
solutions like the ones presented in [15]. Another interesting direction would be to extend the
current work to a factorial design; these can be important, particularly in biological applications
where multiple factors can interact in determining gene expression [17, 18]. Finally, our models are
naturally non-parametric in the sense that the number of change points is not a priori determined.
It would be interesting to explore further non-parametric extensions where the system can exist in
a finite but unknown number of regimes, in the spirit of non-parametric models for discrete time
dynamical systems [19].
References

[1] Neil D. Lawrence, Guido Sanguinetti, and Magnus Rattray. Modelling transcriptional regulation using Gaussian processes. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. 2007.

[2] Darren J. Wilkinson. Stochastic Modelling for Systems Biology. Chapman & Hall / CRC, London, 2006.

[3] Guido Sanguinetti, Andreas Ruttor, Manfred Opper, and Cédric Archambeau. Switching regulatory models of cellular stress response. Bioinformatics, 25:1280–1286, 2009.

[4] Ido Cohn, Tal El-Hay, Nir Friedman, and Raz Kupferman. Mean field variational approximation for continuous-time Bayesian networks. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI), 2009.

[5] Andreas Ruttor and Manfred Opper. Approximate inference in reaction-diffusion processes. JMLR W&CP, 9:669–676, 2010.

[6] Tobias Preis, Johannes Schneider, and H. Eugene Stanley. Switching processes in financial markets. Proceedings of the National Academy of Sciences USA, 108(19):7674–7678, 2011.

[7] Paul Fearnhead and Zhen Liu. Efficient Bayesian analysis of multiple changepoint models with dependence across segments. Statistics and Computing, 21(2):217–229, 2011.

[8] Paolo Giordani and Robert Kohn. Efficient Bayesian inference for multiple change-point and mixture innovation models. Journal of Business and Economic Statistics, 26(1):66–77, 2008.

[9] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state persistence. In Proc. International Conference on Machine Learning, July 2008.

[10] Yunus Saatci, Ryan Turner, and Carl Edward Rasmussen. Gaussian process change point models. In ICML, pages 927–934, 2010.

[11] Vahid Shahrezaei and Peter Swain. The stochastic nature of biochemical networks. Curr. Opin. in Biotech., 19(4):369–374, 2008.

[12] Michael B. Elowitz, Arnold J. Levine, Eric D. Siggia, and Peter S. Swain. Stochastic gene expression in a single cell. Science, 297(5584):1129–1131, 2002.

[13] Avigdor Eldar and Michael B. Elowitz. Functional roles for noise in genetic circuits. Nature, 467(7312):167–173, 2010.

[14] Gürol M. Süel, Jordi Garcia-Ojalvo, Louisa M. Liberman, and Michael B. Elowitz. An excitable gene regulatory circuit induces transient cellular differentiation. Nature, 440(7083):545–550, 2006.

[15] Manfred Opper, Andreas Ruttor, and Guido Sanguinetti. Approximate inference in continuous time gaussian-jump processes. In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1822–1830. 2010.

[16] N. G. van Kampen. Stochastic Processes in Physics and Chemistry. North-Holland, Amsterdam, 1981.

[17] Manfred Opper and Guido Sanguinetti. Learning combinatorial transcriptional dynamics from gene expression data. Bioinformatics, 26(13):1623–1629, 2010.

[18] H. M. Shahzad Asif and Guido Sanguinetti. Large scale learning of combinatorial transcriptional dynamics from gene expression. Bioinformatics, 27(9):1277–1283, 2011.

[19] Matthew Beal, Zoubin Ghahramani, and Carl Edward Rasmussen. The infinite hidden Markov model. In S. Becker, S. Thrun, and L. Saul, editors, Advances in Neural Information Processing Systems 14, pages 577–584. 2002.
called:1 experimental:1 indicating:2 select:1 support:2 arises:1 bioinformatics:3 mcmc:6 tested:3 instructive:1 |
Efficient anomaly detection using
bipartite k-NN graphs
Kumar Sricharan
Department of EECS
University of Michigan
Ann Arbor, MI 48104
[email protected]
Alfred O. Hero III
Department of EECS
University of Michigan
Ann Arbor, MI 48104
[email protected]
Abstract
Learning minimum volume sets of an underlying nominal distribution is a very effective approach to anomaly detection. Several approaches to learning minimum
volume sets have been proposed in the literature, including the K-point nearest
neighbor graph (K-kNNG) algorithm based on the geometric entropy minimization (GEM) principle [4]. The K-kNNG detector, while possessing several desirable characteristics, suffers from high computation complexity, and in [4] a
simpler heuristic approximation, the leave-one-out kNNG (L1O-kNNG) was proposed. In this paper, we propose a novel bipartite k-nearest neighbor graph (BP-kNNG) anomaly detection scheme for estimating minimum volume sets. Our
bipartite estimator retains all the desirable theoretical properties of the K-kNNG,
while being computationally simpler than the K-kNNG and the surrogate L1O-kNNG detectors. We show that BP-kNNG is asymptotically consistent in recovering the p-value of each test point. Experimental results are given that illustrate
the superior performance of BP-kNNG as compared to the L1O-kNNG and other
state of the art anomaly detection schemes.
1 Introduction
Given a training set of normal events, the anomaly detection problem aims to identify unknown,
anomalous events that deviate from the normal set. This novelty detection problem arises in applications where failure to detect anomalous activity could lead to catastrophic outcomes, for example,
detection of faults in mission-critical systems, quality control in manufacturing and medical diagnosis.
Several approaches have been proposed for anomaly detection. One class of algorithms assumes a
family of parametrically defined nominal distributions. Examples include Hotelling's T test and the
Fisher F-test, which are both based on a Gaussian distribution assumption. The drawback of these
algorithms is model mismatch: the supposed distribution need not be a correct representation of the
nominal data, which can then lead to poor false alarm rates. More recently, several non-parametric
methods based on minimum volume (MV) set estimation have been proposed. These methods aim to
find the minimum volume set that recovers a certain probability mass α with respect to the unknown
probability density of the nominal events. If a new event falls within the MV set, it is classified as
normal and otherwise as anomalous.
Estimation of minimum volume sets is a difficult problem, especially for high dimensional data.
There are two types of approaches to this problem: (1) transform the MV estimation problem to an
equivalent density level set estimation problem, which requires estimation of the nominal density;
and (2) directly identify the minimal set using function approximation and non-parametric estimation [10, 6, 9]. Both types of approaches involve explicit approximation of high dimensional
quantities - the multivariate density function in the first case and the boundary of the minimum volume
set in the second and are therefore not easily applied to high dimensional problems.
The GEM principle developed by Hero [4] for determining MV sets circumvents the above difficulties by using the asymptotic theory of random Euclidean graphs instead of function approximation. However, the GEM based K-kNNG anomaly detection scheme proposed in [4] is computationally difficult. To address this issue, a surrogate L1O-kNNG anomaly detection scheme was proposed
in [4]. L1O-kNNG is computationally simpler than K-kNNG, but loses some desirable properties of
the K-kNNG, including asymptotic consistency, as shown below.
In this paper, we use the GEM principle to develop a bipartite k-nearest neighbor (k-NN) graph-based anomaly detection algorithm. BP-kNNG retains the desirable properties of the GEM principle
and as a result inherits the following features: (i) it is not restricted to linear or even convex decision
regions, (ii) it is completely non-parametric, (iii) it is optimal in that it converges to the uniformly
most powerful (UMP) test when the anomalies are drawn from a mixture of the nominal density and
the uniform density, (iv) it does not require knowledge of anomalies in the training sample, (v) it is
asymptotically consistent in recovering the p-value of the test point and (vi) it produces estimated
p-values, allowing for false positive rate control.
K-LPE [13] and RRS [7] are anomaly detection methods which are also based on k-NN graphs. BP-kNNG
differs from L1O-kNNG, K-LPE and RRS in the following respects. L1O-kNNG, K-LPE
in significant computational savings. In addition, the K-LPE and RRS test statistics involve only
the k-th nearest neighbor distance, while the statistic in BP-kNNG, like the L1O-kNNG, involves
summation of the power weighted distance of all the edges in the k-NN graph. This will result
in increased robustness to outliers in the training sample. Finally, we will show that the mean
square rate of convergence of p-values in BP-kNNG, O(T^{-2/(2+d)}), is faster as compared to the
convergence rate of K-LPE, O(T^{-2/5} + T^{-6/5d}), where T is the size of the nominal training sample
and d is the dimension of the data.
The rest of this paper is organized as follows. In Section 2, we outline the statistical framework
for minimum volume set anomaly detection. In Section 3, we describe the GEM principle and the
K-kNNG and L1O-kNNG anomaly detection schemes proposed in [4]. Next, in Section 4, we
develop our bipartite k-NN graph (BP-kNNG) method for anomaly detection. We show consistency
of the method and compare its computational complexity with that of the K-kNNG, L1O-kNNG and
K-LPE algorithms. In Section 5, we show simulation results that illustrate the superior performance
of BP-kNNG over L1O-kNNG. We also show that our method compares favorably to other state of
the art anomaly detection schemes when applied to real world data from the UCI repository [1]. We
conclude with a short discussion in Section 6.
2 Statistical novelty detection
The problem setup is as follows. We assume that a training sample X_T = {X_1, ..., X_T} of d-dimensional vectors is available. Given a new sample X, the objective is to declare X to either be
a "nominal" event consistent with X_T or an "anomalous" event which deviates from X_T. We seek to
find a functional D and corresponding detection rule D(x) > 0 so that X is declared to be nominal if
D(x) > 0 holds and anomalous otherwise. The acceptance region is given by A = {x : D(x) > 0}.
We seek to further constrain the choice of D to allow as few false negatives as possible for a fixed
allowance of false positives.
To formulate this problem, we adopt the standard statistical framework for testing composite hypotheses. We assume that the training sample X_T is an i.i.d. sample drawn from an unknown d-dimensional probability distribution f_0(x) on [0, 1]^d. Let X have density f on [0, 1]^d. The anomaly
detection problem can be formulated as testing the hypotheses H_0 : f = f_0 versus H_1 : f ≠ f_0.
For a given α ∈ (0, 1), we seek an acceptance region A that satisfies Pr(X ∈ A | H_0) ≥ 1 − α.
This requirement maintains the false positive rate at a level no greater than α. Let A = {A : ∫_A f_0(x)dx ≥ 1 − α} denote the collection of acceptance regions of level α. The most suitable
acceptance region from the collection A would be the set which minimizes the false negative rate.
Assume that the density f is bounded above by some constant C. In this case the false negative rate
is bounded by Cλ(A), where λ(·) is the Lebesgue measure in R^d. Consider the relaxed problem of
minimizing the upper bound Cλ(A), or equivalently the volume λ(A) of A. The optimal acceptance
region with a maximum false alarm rate α is therefore given by the minimum volume set of level α:

Λ_α = min{λ(A) : ∫_A f_0(x)dx ≥ 1 − α}.

Define the minimum entropy set of level α to be

Ω_α = min{H_α(A) : ∫_A f_0(x)dx ≥ 1 − α},

where H_α(A) = (1 − α)^{-1} ∫_A f_0^α(x)dx is the Rényi α-entropy of the density f_0 over the set A. It can be
shown that when f_0 is a Lebesgue density in R^d, the minimum volume set and the minimum entropy
set are equivalent, i.e. Ω_α and Λ_α are identical. Therefore, the optimal decision rule for a given level
of false alarm α is to declare an anomaly if X ∉ Ω_α.
This decision rule has a strong optimality property [4]: when f_0 is Lebesgue continuous and has
no "flat" regions over its support, this decision rule is a uniformly most powerful (UMP) test at level
1 − α for the null hypothesis that the test point has density f(x) equal to the nominal f_0(x) versus
the alternative hypothesis that f(x) = (1 − ε)f_0(x) + εU(x), where U(x) is the uniform density
over [0, 1]^d and ε ∈ [0, 1]. Furthermore, the power function is given by

β = Pr(X ∉ Ω_α | H_1) = (1 − ε)α + ε(1 − λ(Ω_α)).
3 GEM principle
In this section, we briefly review the geometric entropy minimization (GEM) principle method [4]
for determining minimum entropy sets Ω_α of level α. The GEM method directly estimates the critical region Ω_α for detecting anomalies using minimum coverings of subsets of points in a nominal
training sample. These coverings are obtained by constructing minimal graphs, e.g., the k-minimal
spanning tree or the k-nearest neighbor graph, covering a K-point subset that is a given proportion
of the training sample. Points in the training sample that are not covered by the K-point minimal
graphs are identified as tail events.
In particular, let X_{K,T} denote one of the C(T,K) K-point subsets of X_T. The k-nearest neighbors
(k-NN) of a point X_i ∈ X_{K,T} are the k closest points to X_i among X_{K,T} \ X_i. Denote the
corresponding set of edges between X_i and its k-NN by {e_{i(1)}, ..., e_{i(k)}}. For any subset X_{K,T},
define the total power weighted edge length of the k-NN graph on X_{K,T} with power weighting γ
(0 < γ < d), as

L_{kNN}(X_{K,T}) = Σ_{i=1}^{K} Σ_{l=1}^{k} |e_{t_i(l)}|^γ,

where {t_1, ..., t_K} are the indices of X_i ∈ X_{K,T}. Define the K-kNNG to be the K-point
k-NN graph having minimal length min_{X_{T,K} ⊂ X_T} L_{kNN}(X_{T,K}) over all C(T,K) subsets X_{K,T}. Denote
the corresponding length-minimizing subset of K points by X*_{K,T} = argmin_{X_{K,T} ⊂ X_T} L_{kNN}(X_{K,T}).

The K-kNNG thus specifies a minimal graph covering X*_{K,T} of size K. This graph can be viewed as
capturing the densest regions of X_T. If X_T is an i.i.d. sample from a multivariate density f_0(x) and
if lim_{K,T→∞} K/T = ρ, then the set X*_{K,T} converges a.s. to the minimum α-entropy set containing
a proportion of at least ρ of the mass of f_0(x), where α = 1 − γ/d [4]. This set can be used to
perform anomaly detection.
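For intuition, the edge-length functional L_kNN is easy to compute for a fixed subset. The following Python sketch (illustrative code, not from [4]) contrasts a tight cluster with a spread-out subset; the exhaustive minimization over all C(T,K) subsets is omitted, since that search is precisely what makes the K-kNNG expensive.

```python
import math

def knn_edge_lengths(i, pts, k):
    """Distances from pts[i] to its k nearest neighbors among the other points."""
    dists = sorted(math.dist(pts[i], p) for j, p in enumerate(pts) if j != i)
    return dists[:k]

def total_power_weighted_length(subset, k, gamma):
    """L_kNN(X_{K,T}): sum of gamma-powered k-NN edge lengths over the subset."""
    return sum(e ** gamma
               for i in range(len(subset))
               for e in knn_edge_lengths(i, subset, k))

# A tightly clustered subset has a much smaller total edge length than a
# spread-out one, which is why the minimizing subset tracks the densest regions.
tight = [(0.0, 0.0), (0.01, 0.0), (0.0, 0.01), (0.01, 0.01)]
spread = [(0.0, 0.0), (0.9, 0.0), (0.0, 0.9), (0.9, 0.9)]
print(total_power_weighted_length(tight, 2, 1.0) <
      total_power_weighted_length(spread, 2, 1.0))  # True
```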
3.1 K-kNNG anomaly detection
Given a test sample X, denote the pooled sample X_{T+1} = X_T ∪ {X} and determine the K-kNNG
graph over X_{T+1}. Declare X to be an anomaly if X ∉ X*_{K,T+1} and nominal otherwise. When the
density f_0 is Lebesgue continuous, it follows from [4] that as K, T → ∞, this anomaly detection
algorithm has false alarm rate that converges to α = 1 − K/T and power that converges to that of
the minimum volume set test of level α. An identical detection scheme based on the K-minimal
spanning tree has also been developed in [4].
The K-kNNG anomaly detection scheme therefore offers a direct approach to detecting outliers
while bypassing the more difficult problems of density estimation and level set estimation in high dimensions. However, this algorithm requires construction of k-nearest neighbor graphs (or k-minimal
spanning trees) over C(T,K) different subsets. For each input test point, the runtime of this algorithm
is therefore O(dK^2 C(T,K)). As a result, the K-kNNG method is not well suited for anomaly detection
for large sample sizes.
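To see why, note how quickly the number of candidate subsets grows (a back-of-the-envelope check with hypothetical values of T and K, not a computation from the paper):

```python
import math

# Number of candidate K-point subsets the exact K-kNNG search ranges over.
# Even a modest training set makes exhaustive search hopeless.
T, K = 1000, 900          # corresponds to a false alarm rate alpha = 1 - K/T = 0.1
n_subsets = math.comb(T, K)
print(n_subsets > 10**100)  # True: astronomically many candidate subsets
```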
3.2 L1O-kNNG
To address the computational problems of K-kNNG, Hero [4] proposed implementing the K-kNNG
for the simplest case K = T − 1. The runtime of this algorithm for each input test point is O(dT^2).
Clearly, the L1O-kNNG is of much lower complexity than the K-kNNG scheme. However, the L1O-kNNG detects anomalies at a fixed false alarm rate 1/(T + 1), where T is the training sample size.
To detect anomalies at a higher false alarm rate α*, one would have to subsample the training set
and only use T* = 1/α* − 1 training samples. This destroys any hope for asymptotic consistency
of the L1O-kNNG.
In the next section, we propose a different GEM based algorithm that uses bipartite graphs. The
algorithm has a much faster runtime than the L1O-kNNG and, unlike the L1O-kNNG,
is asymptotically consistent and can operate at any specified alarm rate α. We describe our algorithm
below.
4 BP-kNNG
Let {X_N, X_M} be a partition of X_T with card{X_N} = N and card{X_M} = M = T − N
respectively. As above, let X_{K,N} denote one of the C(N,K) subsets of K distinct points from X_N.
Define the bipartite k-NN graph on {X_{K,N}, X_M} to be the set of edges linking each X_i ∈ X_{K,N}
to its k nearest neighbors in X_M. Define the total power weighted edge length of this bipartite
k-NN graph with power weighting γ (0 < γ < d) and a fixed number of edges s (1 ≤ s ≤ k)
corresponding to each vertex X_i ∈ X_{K,N} to be

L_{s,k}(X_{K,N}, X_M) = Σ_{i=1}^{K} Σ_{l=k−s+1}^{k} |e_{t_i(l)}|^γ,

where {t_1, ..., t_K} are the indices of X_i ∈ X_{K,N} and {e_{t_i(1)}, ..., e_{t_i(k)}} are the k-NN edges in
the bipartite graph originating from X_{t_i} ∈ X_{K,N}. Define the bipartite K-kNNG graph to be the one
having minimal weighted length min_{X_{N,K} ⊂ X_N} L_{s,k}(X_{N,K}, X_M) over all C(N,K) subsets X_{K,N}. Denote
the corresponding minimizing subset of K points of X_{K,N} by X*_{K,N} = argmin_{X_{K,N} ⊂ X_N} L_{s,k}(X_{K,N}, X_M).
Using the theory of partitioned k-NN graph entropy estimators [11], it follows that as k/M → 0,
k, N → ∞ and for fixed s, the set X*_{K,N} converges a.s. to the minimum α-entropy set Ω_{1−ρ}
containing a proportion of at least ρ of the mass of f_0(x), where ρ = lim_{K,N→∞} K/N and α =
1 − γ/d.
This suggests using the bipartite k-NN graph to detect anomalies in the following way. Given a
test point X, denote the pooled sample X_{N+1} = X_N ∪ {X} and determine the optimal bipartite
K-kNNG graph X*_{K,N+1} over {X_{K,N+1}, X_M}. Now declare X to be an anomaly if X ∉ X*_{K,N+1}
and nominal otherwise. It is clear that by the GEM principle, this algorithm detects false alarms at
a rate that converges to α = 1 − K/T and power that converges to that of the minimum volume set
test of level α.
We can equivalently determine X*_{K,N+1} as follows. For each X_i ∈ X_N, construct

d_{s,k}(X_i) = Σ_{l=k−s+1}^{k} |e_{i(l)}|^γ.

For each test point X, define

d_{s,k}(X) = Σ_{l=k−s+1}^{k} |e_{X(l)}|^γ,

where {e_{X(1)}, ..., e_{X(k)}} are the k-NN edges from X to X_M. Now, choose the K points among X_N ∪ X
with the K smallest of the N + 1 edge lengths {d_{s,k}(X_i), X_i ∈ X_N} ∪ {d_{s,k}(X)}. Because of
the bipartite nature of the construction, this is equivalent to choosing X*_{K,N+1}. This leads to the
proposed BP-kNNG anomaly detection algorithm described by Algorithm 1.
4.1 BP-kNNG p-value estimates
The p-value is a score between 0 and 1 that is associated with the likelihood that a given point X 0
comes from a specified nominal distribution. The BP-kNNG generates an estimate of the p-value
Algorithm 1 Anomaly detection scheme using bipartite k-NN graphs
1. Input: Training samples X_T, test samples X, false alarm rate α
2. Training phase
a. Create partition {X_N, X_M}
b. Construct k-NN bipartite graph on partition
c. Compute k-NN lengths d_{s,k}(X_i) for each X_i ∈ X_N: d_{s,k}(X_i) = Σ_{l=k−s+1}^{k} |e_{i(l)}|^γ
3. Test phase: detect anomalous points
for each input test sample X do
Compute k-NN length d_{s,k}(X) = Σ_{l=k−s+1}^{k} |e_{X(l)}|^γ
if (1/N) Σ_{X_i ∈ X_N} 1(d_{s,k}(X_i) < d_{s,k}(X)) ≥ 1 − α then
Declare X to be anomalous
else
Declare X to be non-anomalous
end if
end for
that is asymptotically consistent, guaranteeing that the BP-kNNG detector is a consistent novelty
detector.
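As an illustration, the two phases of Algorithm 1 can be sketched in plain Python (a brute-force toy implementation with assumed parameter values, not the authors' Matlab code; a production version would answer the k-NN queries with a k-d tree):

```python
import math
import random

def d_sk(x, X_M, k, s, gamma):
    """d_{s,k}(x): sum of the s largest among the k smallest distances
    from x to the reference partition X_M, each raised to the power gamma."""
    dists = sorted(math.dist(x, p) for p in X_M)[:k]
    return sum(e ** gamma for e in dists[k - s:])

def bp_knng_train(X_N, X_M, k, s, gamma):
    """Training phase: precompute d_{s,k}(X_i) for every X_i in X_N (done once)."""
    return [d_sk(xi, X_M, k, s, gamma) for xi in X_N]

def bp_knng_test(x, train_lengths, X_M, k, s, gamma, alpha):
    """Test phase: declare x anomalous when the fraction of training points
    with strictly smaller length is at least 1 - alpha."""
    d = d_sk(x, X_M, k, s, gamma)
    frac_smaller = sum(t < d for t in train_lengths) / len(train_lengths)
    return frac_smaller >= 1 - alpha  # True -> anomalous

# Toy nominal data: a 2-d Gaussian, loosely following the Section 5 setup.
random.seed(0)
nominal = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(1000)]
X_N, X_M = nominal[:100], nominal[100:]
lengths = bp_knng_train(X_N, X_M, k=5, s=2, gamma=1.0)
print(bp_knng_test((1.0, 1.0), lengths, X_M, 5, 2, 1.0, 0.05))  # far outlier
print(bp_knng_test((0.0, 0.0), lengths, X_M, 5, 2, 1.0, 0.05))  # near the mode
```

Only the test-phase function runs per query; the training lengths are reused across all test points, which is the source of the runtime savings discussed in Section 4.3.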
Specifically, for a given test point X_0, the true p-value associated with X_0 in a minimum
volume set test is given by p_true(X_0) = ∫_{S(X_0)} f_0(z)dz, where S(X_0) = {z : f_0(z) ≤ f_0(X_0)} and
E(X_0) = {z : f_0(z) = f_0(X_0)}. p_true(X_0) is the minimal level α at which X_0 would be rejected.
The empirical p-value associated with the BP-kNNG is defined as

p_bp(X_0) = (1/N) Σ_{X_i ∈ X_N} 1(d_{s,k}(X_i) ≥ d_{s,k}(X_0)).   (1)
4.2 Asymptotic consistency and optimal convergence rates
Here we prove that the BP-kNNG detector is asymptotically consistent by showing that for a fixed
number of edges s, E[(p_bp(X_0) − p_true(X_0))^2] → 0 as k/M → 0, k, N → ∞. In the process,
we also obtain rates of convergence of this mean-squared error. These rates depend on k, N and M
and result in the specification of an optimal number of neighbors k and an optimal partition ratio
N/M that achieve the best trade-off between bias and variance of the p-value estimates p_bp(X_0).
We assume that the density f_0 (i) is bounded away from 0 and ∞ and is continuous on its support
S, (ii) has no flat spots over its support set and (iii) has a finite number of modes. Let E denote the
expectation w.r.t. the density f_0, and B, V denote the bias and variance operators. Throughout this
section, assume without loss of generality that {X_1, ..., X_N} ∈ X_N and {X_{N+1}, ..., X_T} ∈ X_M.
Bias: We first introduce the oracle p-value p_orac(X_0) = (1/N) Σ_{X_i ∈ X_N} 1(f_0(X_i) ≤ f_0(X_0))
and note that E[p_orac(X_0)] = p_true(X_0). The distance e_{i(l)} of a point X_i ∈ X_N to its l-th
nearest neighbor in X_M is related to the bipartite l-nearest neighbor density estimate f̂_l(X_i) =
(l − 1)/(M c_d e_{i(l)}^d) (section 2.3, [11]), where c_d is the unit ball volume in d dimensions. Let

e(X) = Σ_{l=k−s+1}^{k} (f̂_l(X) (k − 1)/(l − 1))^{α−1} − s(f(X))^{α−1}

and

Δ(X_i, X_0) = Δ_i = (f(X_i))^{α−1} − (f(X_0))^{α−1}.

We then have

B[p_bp(X_0)] = E[p_bp(X_0)] − p_true(X_0) = E[p_bp(X_0) − p_orac(X_0)]
= E[1(d_{s,k}(X_1) ≥ d_{s,k}(X_0))] − E[1(f(X_1) ≤ f(X_0))]
= E[1(e(X_1) − e(X_0) + Δ_1 ≥ 0) − 1(Δ_1 ≥ 0)].
This bias will be non-zero when 1(e(X_1) − e(X_0) + Δ_1 ≥ 0) ≠ 1(Δ_1 ≥ 0). First we investigate
this condition when Δ_1 > 0. In this case, for 1(e(X_1) − e(X_0) + Δ_1 ≥ 0) ≠ 1(Δ_1 ≥ 0), we need
−e(X_1) + e(X_0) ≥ Δ_1. Likewise, when Δ_1 ≤ 0, 1(e(X_1) − e(X_0) + Δ_1 ≥ 0) ≠ 1(Δ_1 ≥ 0) occurs
when e(X_1) − e(X_0) > |Δ_1|.

From the theory developed in [11], for any fixed s, |e(X)| = O((k/M)^{1/d}) + O(1/√k) with probability greater than 1 − o(1/M). This implies that

B[p_bp(X_0)] = E[1(e(X_1) − e(X_0) + Δ_1 ≥ 0) − 1(Δ_1 ≥ 0)]
= Pr{|Δ_1| = O((k/M)^{1/d} + 1/√k)} + o(1/M) = O((k/M)^{1/d} + 1/√k),   (2)

where the last step follows from our assumption that the density f_0 is continuous and has a finite
number of modes.
Variance: Define b_i = 1(e(X_i) − e(X_0) + Δ_i ≥ 0) − 1(Δ_i ≥ 0). We can compute the variance
in a similar manner to the bias as follows (for additional details, please refer to the supplementary
material):

V[p_bp(X_0)] = (1/N) V[1(e(X_1) − e(X_0) + Δ_1 ≥ 0)] + ((N − 1)/N) Cov[b_1, b_2]
= O(1/N) + E[b_1 b_2] − (E[b_1]E[b_2]) = O(1/N + (k/M)^{2/d} + 1/k).   (3)
Consistency of p-values: From (2) and (3), we obtain an asymptotic representation of the estimated p-value: E[(p_bp(X_0) − p_true(X_0))^2] = O((k/M)^{2/d}) + O(1/k) + O(1/N). This implies that
p_bp converges in mean square to p_true, for a fixed number of edges s, as k/M → 0, k, N → ∞.

Optimal choice of parameters: The optimal choice of k to minimize the MSE is given by k =
Θ(M^{2/(2+d)}). For fixed M + N = T, to minimize MSE, N should then be chosen to be of the
order O(M^{(4+d)/(4+2d)}), which implies that M = Θ(T). The mean square convergence rate for
this optimal choice of k and partition ratio N/M is given by O(T^{−2/(2+d)}). In comparison, the
K-LPE method requires that k grows with the sample size at rate k = Θ(T^{2/5}). The mean square
rate of convergence of the p-values in K-LPE is then given by O(T^{−2/5} + T^{−6/5d}). The rate of
convergence of the p-values is therefore faster in the case of BP-kNNG as compared to K-LPE.
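To make these scalings concrete, the following sketch picks k and the partition sizes according to the stated rates, treating all Θ(·) and O(·) constants as 1 (purely an illustrative assumption, not a prescription from the paper):

```python
def rate_optimal_params(T, d):
    """Rate-optimal settings up to constants: k = M^(2/(2+d)),
    N = M^((4+d)/(4+2d)), with M = T - N taking most of the sample."""
    M = T  # M = Theta(T): start from the full sample size
    N = round(M ** ((4.0 + d) / (4 + 2 * d)))
    M = T - N
    k = round(M ** (2.0 / (2 + d)))
    return k, N, M

# With d = 2 and T = 10^4 this gives k on the order of 100 and N on the
# order of 10^3, in line with the N = 10^3 used in the Section 5.1 experiments.
k, N, M = rate_optimal_params(10**4, 2)
print(k, N, M)
```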
4.3 Comparison of run time complexity
Here we compare the complexity of BP-kNNG with that of K-kNNG, L1O-kNNG and K-LPE. For a
single query point X, the runtime of K-kNNG is O(dK^2 C(T,K)), while the complexity of the surrogate
L1O-kNN algorithm and the K-LPE is O(dT^2). On the other hand, the complexity of the proposed
BP-kNNG algorithm is dominated by the computation of d_k(X_i) for each X_i ∈ X_N and d_k(X),
which is O(dNM) = O(dT^{(8+3d)/(4+2d)}) = o(dT^2).

For the K-kNNG, L1O-kNNG and K-LPE, a new k-NN graph has to be constructed on {X_N ∪ {X}}
for every new query point X. On the other hand, because of the bipartite construction of our k-NN
graph, d_k(X_i) for each X_i ∈ X_N needs to be computed and stored only once. For every new query
X that comes in, the cost to compute d_k(X) is only O(dM) = O(dT). For a total of L query points,
the overall runtime complexity of our algorithm is therefore much smaller than the L1O-kNNG, K-LPE and K-kNNG anomaly detection schemes: O(dT(T^{(4+d)/(4+2d)} + L)) compared to O(dLT^2),
O(dLT^2) and O(dLK^2 C(T,K)) respectively.
5 Simulation comparisons
We compare the L1O-kNNG and the bipartite K-kNNG schemes on a simulated data set. The
training set contains 1000 realizations drawn from a 2-dimensional Gaussian density f_0 with mean
0 and diagonal covariance with identical component variances of σ = 0.1. The test set contains 500
realizations drawn from 0.8f_0 + 0.2U, where U is the uniform density on [0, 1]^2. Samples from the
uniform distribution are classified to be anomalies. The percentage of anomalies in the test set is
therefore 20%.
[Figure 1(a): ROC curves (false positive rate vs. true positive rate) for L1O-kNNG and BP-kNNG. The labeled "clairvoyant" curve is the ROC of the UMP anomaly detector.]
[Figure 1(b): Observed vs. desired false alarm rates for L1O-kNNG and BP-kNNG.]
Figure 1: Comparison of performance of L1O-kNNG and BP-kNNG.
Data set         Sample size   Dimension   Anomaly class
HTTP (KDD'99)    567497        3           attack (0.4%)
Forest           286048        10          class 4 vs class 2 (0.9%)
Mulcross         262144        4           2 clusters (10%)
SMTP (KDD'99)    95156         3           attack (0.03%)
Shuttle          49097         9           class 2,3,5,6,7 vs class 1 (7%)

Table 1: Description of data used in anomaly detection experiments.
The distribution f_0 has essential support on the unit square. For this simple case the minimum
volume set of level α is a disk centered at the origin with radius √(2σ² log(1/α)). The power of the
uniformly most powerful (UMP) test is 1 − 2πσ² log(1/α).
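This closed form is easy to check numerically (an illustrative verification using the Gaussian tail probability, not part of the original experiments):

```python
import math

# For an isotropic 2-d Gaussian with per-component variance sigma^2,
# P(||X|| > r) = exp(-r^2 / (2 sigma^2)).  Setting this tail mass to alpha
# recovers the minimum-volume disk radius r = sqrt(2 sigma^2 log(1/alpha)).
sigma, alpha = 0.1, 0.05
r = math.sqrt(2 * sigma**2 * math.log(1 / alpha))
tail = math.exp(-r**2 / (2 * sigma**2))
print(abs(tail - alpha) < 1e-12)   # True: the disk has false alarm rate alpha
area = math.pi * r**2              # lambda(Omega) = 2*pi*sigma^2*log(1/alpha)
print(area)
```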
L1O-kNNG and BP-kNNG were implemented in Matlab 7.6 on a 2 GHz Intel processor with
3 GB of RAM. The value of k was set to 5. For the BP-kNNG, we set s = 1, N = 100 and
M = 900. In Fig. 1(a), we compare the detection performance of L1O-kNNG and BP-kNNG
against the "clairvoyant" UMP detector in terms of the ROC. We note that the proposed BP-kNNG
is closer to the optimal UMP test as compared to the L1O-kNNG. In Fig. 1(b) we note the close
is closer to the optimal UMP test as compared to the L1O-kNNG. In Fig. 1(b) we note the close
agreement between desired and observed false alarm rates for BP-kNNG. Note that the L1O-kNNG
significantly underestimates its false alarm rate for higher levels of true false alarm. In the case
of the L1O-kNNG, it took an average of 60 ms to test each instance for possible anomaly. The
total run-time was therefore 60 × 500 = 30000 ms. For the BP-kNNG, for a single instance, it took an
average of 57 ms. When all the instances were processed together, the total run time was only 97 ms.
This significant savings in runtime is due to the fact that the bipartite graph does not have to be
constructed separately for each new test instance; it suffices to construct it once on the entire data
set.
5.1 Experimental comparisons
In this section, we compare our algorithm to several other state of the art anomaly detection algorithms, namely: MassAD [12], isolation forest (or iForest) [5], two distance-based methods
ORCA [2] and K-LPE [13], a density-based method LOF [3], and the one-class support vector
machine (or 1-SVM) [9]. All the methods are tested on the five largest data sets used in [5]. The
data characteristics are summarized in Table 1. One of the anomaly data generators is Mulcross [8]
and the other four are from the UCI repository [1]. Full details about the data can be found in [5].
The comparison performance is evaluated in terms of averaged AUC (area under ROC curve) and
processing time (a total of training and test time). Results for BP-kNNG are compared with results
for L1O-kNNG, K-LPE, MassAD, iForest and ORCA in Table 2. The results for MassAD, iForest
and ORCA are reproduced from [12]. MassAD and iForest were implemented in Matlab and tested
on an AMD Opteron machine with a 1.8 GHz processor and 4 GB memory. The results for ORCA,
                         AUC                                     Time (secs)
Data sets    BP    L10  K-LPE  Mass  iF    ORCA    BP    L10    K-LPE  Mass  iF    ORCA
HTTP         0.99  NA   NA     1.00  1.00  0.36    3.81  .10/i  .19/i  34    147   9487
Forest       0.86  NA   NA     0.91  0.87  0.83    7.54  .18/i  .18/i  18    79    6995
Mulcross     1.00  NA   NA     0.99  0.96  0.33    4.68  .26/i  .17/i  17    75    2512
SMTP         0.90  NA   NA     0.86  0.88  0.87    0.74  .11/i  .17/i  7     26    267
Shuttle      0.99  NA   NA     0.99  1.00  0.60    1.54  .45/i  .16/i  4     15    157
Table 2: Comparison of anomaly detection schemes in terms of AUC and run-time for BP-kNNG
(BP) against L1O-kNNG (L10), K-LPE, MassAD (Mass), iForest (iF) and ORCA. When reporting
results for L1O-kNNG and K-LPE, we report the processing time per test instance (/i). We are
unable to report the AUC for K-LPE and L1O-kNNG because of the large processing time. We note
that BP-kNNG compares favorably in terms of AUC while also requiring the least run-time.
                  Desired false alarm
Data sets         0.01    0.02    0.05    0.1     0.2
HTTP (KDD'99)     0.007   0.015   0.063   0.136   0.216
Forest            0.009   0.015   0.035   0.071   0.150
Mulcross          0.008   0.014   0.040   0.096   0.186
SMTP (KDD'99)     0.006   0.017   0.046   0.099   0.204
Shuttle           0.026   0.030   0.045   0.079   0.179

Table 3: Comparison of desired and observed false alarm rates for BP-kNNG. There is good agreement between the desired and observed rates.
LOF and 1-SVM were conducted using the same experimental setting but on a faster 2.3 GHz
machine. We exclude the results for LOF and 1-SVM in Table 2 because MassAD, iForest and
ORCA have been shown to outperform LOF and 1-SVM in [12].
We implemented BP-kNNG, L1O-kNNG and K-LPE in Matlab on an Intel 2 GHz processor with 3
GB RAM. We note that this machine is comparable to the AMD Opteron machine with a 1.8 GHz
processor. We choose T = 10^4 training samples and fix k = 50 in all three cases. For BP-kNNG,
we fix s = 5 and N = 10^3. When reporting results for L1O-kNNG and K-LPE, we report the
processing time per test instance (/i). We are unable to report the AUC for K-LPE because of the
large processing time and for L1O-kNNG because it cannot operate at high false alarm rates.
From the results in Table 2, we see that BP-kNNG performs comparably in terms of AUC to the
other algorithms, while having the least processing time across all algorithms (implemented on
different, but comparable machines). In addition, BP-kNNG allows the specification of a threshold
for anomaly detection at a desired false alarm rate. This is corroborated by the results in Table 3,
where we see that the observed false alarm rates across the different data sets are close to the desired
false alarm rate.
6 Conclusions
The geometric entropy minimization (GEM) principle was introduced in [4] to extract minimal set
coverings that can be used to detect anomalies from a set of training samples. In this paper we
propose a bipartite k-nearest neighbor graph (BP-kNNG) anomaly detection algorithm based on the
GEM principle. BP-kNNG inherits the theoretical optimality properties of GEM methods including
consistency, while being an order of magnitude faster than the methods proposed in [4].
We compared BP-kNNG against state of the art anomaly detection algorithms and showed that BP-kNNG compares favorably in terms of both ROC performance and computation time. In addition,
BP-kNNG enjoys several other advantages including the ability to detect anomalies at a desired false
alarm rate. In BP-kNNG, the p-values of each test point can also be easily computed (1), making
BP-kNNG easily extendable to incorporating false discovery rate constraints.
References
[1] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007.
[2] S. D. Bay and M. Schwabacher. Mining distance-based outliers in near linear time with randomization and a simple pruning rule. In Proceedings of the ninth ACM SIGKDD international
conference on Knowledge discovery and data mining, KDD '03, pages 29-38, New York, NY,
USA, 2003. ACM.
[3] M. M. Breunig, H. Kriegel, R. T. Ng, and J. Sander. LOF: identifying density-based local
outliers. In Proceedings of the 2000 ACM SIGMOD international conference on Management
of data, SIGMOD '00, pages 93-104, New York, NY, USA, 2000. ACM.
[4] A. O. Hero. Geometric entropy minimization (GEM) for anomaly detection and localization. In
Proc. Advances in Neural Information Processing Systems (NIPS), pages 585-592. MIT Press,
2006.
[5] F. T. Liu, K. M. Ting, and Z. Zhou. Isolation forest. In Proceedings of the 2008 Eighth IEEE
International Conference on Data Mining, pages 413-422, Washington, DC, USA, 2008. IEEE
Computer Society.
[6] C. Park, J. Z. Huang, and Y. Ding. A computable plug-in estimator of minimum volume sets
for novelty detection. Operations Research, 58(5):1469-1480, 2010.
[7] S. Ramaswamy, R. Rastogi, and K. Shim. Efficient algorithms for mining outliers from large
data sets. SIGMOD Rec., 29:427-438, May 2000.
[8] D. M. Rocke and D. L. Woodruff. Identification of Outliers in Multivariate Data. Journal of
the American Statistical Association, 91(435):1047-1061, 1996.
[9] B. Schölkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt. Support Vector Method
for Novelty Detection. In Advances in Neural Information Processing Systems, volume 12, 2000.
[10] C. Scott and R. Nowak. Learning minimum volume sets. J. Machine Learning Res., 7:665-704,
2006.
[11] K. Sricharan, R. Raich, and A. O. Hero. Empirical estimation of entropy functionals with
confidence. ArXiv e-prints, December 2010.
[12] K. M. Ting, G. Zhou, T. F. Liu, and J. S. C. Tan. Mass estimation and its applications. In
Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and
data mining, KDD '10, pages 989-998, New York, NY, USA, 2010. ACM.
[13] M. Zhao and V. Saligrama. Anomaly detection with score functions based on nearest neighbor
graphs. Computing Research Repository, abs/0910.5461, 2009.
Variance Reduction in Monte-Carlo Tree Search
Joel Veness
University of Alberta
Marc Lanctot
University of Alberta
Michael Bowling
University of Alberta
[email protected]
[email protected]
[email protected]
Abstract
Monte-Carlo Tree Search (MCTS) has proven to be a powerful, generic planning
technique for decision-making in single-agent and adversarial environments. The
stochastic nature of the Monte-Carlo simulations introduces errors in the value estimates, both in terms of bias and variance. Whilst reducing bias (typically through
the addition of domain knowledge) has been studied in the MCTS literature, comparatively little effort has focused on reducing variance. This is somewhat surprising, since variance reduction techniques are a well-studied area in classical
statistics. In this paper, we examine the application of some standard techniques
for variance reduction in MCTS, including common random numbers, antithetic
variates and control variates. We demonstrate how these techniques can be applied
to MCTS and explore their efficacy on three different stochastic, single-agent settings: Pig, Can't Stop and Dominion.
1 Introduction
Monte-Carlo Tree Search (MCTS) has become a popular approach for decision making in large
domains. The fundamental idea is to iteratively construct a search tree, whose internal nodes contain
value estimates, by using Monte-Carlo simulations. These value estimates are used to direct the
growth of the search tree and to estimate the value under the optimal policy from each internal node.
This general approach [6] has been successfully adapted to a variety of challenging problem settings,
including Markov Decision Processes, Partially Observable Markov Decision Processes, Real-Time
Strategy games, Computer Go and General Game Playing [15, 22, 2, 9, 12, 10].
Due to its popularity, considerable effort has been made to improve the efficiency of Monte-Carlo
Tree Search. Noteworthy enhancements include the addition of domain knowledge [12, 13], parallelization [7], Rapid Action Value Estimation (RAVE) [11], automated parameter tuning [8] and rollout policy optimization [21]. Somewhat surprisingly however, the application of classical variance
reduction techniques to MCTS has remained unexplored. In this paper we survey some common
variance reduction ideas and show how they can be used to improve the efficiency of MCTS.
For our investigation, we studied three stochastic games: Pig [16], Can't Stop [19] and Dominion
[24]. We found that substantial increases in performance can be obtained by using the appropriate
combination of variance reduction techniques. To the best of our knowledge, our work constitutes
the first investigation of classical variance reduction techniques in the context of MCTS. By showing
some examples of these techniques working in practice, as well as discussing the issues involved in
their application, this paper aims to bring this useful set of techniques to the attention of the wider
MCTS community.
2 Background
We begin with a short overview of Markov Decision Processes and online planning using Monte-Carlo Tree Search.
2.1 Markov Decision Processes
A Markov Decision Process (MDP) is a popular formalism [4, 23] for modeling sequential decision
making problems. Although more general setups exist, it will be sufficient to limit our attention to
the case of finite MDPs. Formally, a finite MDP is a triplet (S, A, P0), where S is a finite, non-empty set of states, A is a finite, non-empty set of actions and P0 is the transition probability kernel
that assigns to each state-action pair (s, a) ∈ S × A a probability measure over S × R that we denote
by P0(· | s, a). S and A are known as the state space and action space respectively. Without loss
of generality, we assume that the state always contains the current time index t ∈ N. The transition
probability kernel gives rise to the state transition kernel P(s, a, s′) := P0({s′} × R | s, a), which
gives the probability of transitioning from state s to state s′ if action a is taken in s. An agent's
behavior can be described by a policy that defines, for each state s ∈ S, a probability measure over
A denoted by π(· | s). At each time t, the agent communicates an action At ∼ π(· | St) to the system
in state St ∈ S. The system then responds with a state-reward pair (St+1, Rt+1) ∼ P0(· | St, At),
where St+1 ∈ S and Rt+1 ∈ R. We will assume that each reward lies within [rmin, rmax] ⊂ R and
that the system executes for only a finite number of steps n ∈ N, so that t ≤ n. Given a sequence
of random variables At, St+1, Rt+1, ..., An−1, Sn, Rn describing the execution of the system up to
time n from a state st, the return from st is defined as X_st := Σ_{i=t+1}^{n} Ri. The return X_{st,at} with
respect to a state-action pair (st, at) ∈ S × A is defined similarly, with the added constraint that
At = at. An optimal policy, denoted by π*, is a policy that maximizes the expected return E[X_st]
for all states st ∈ S. A deterministic optimal policy always exists for this class of MDPs.
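To make the definitions above concrete, the sketch below samples the return X_st = Σ_{i=t+1}^{n} Ri by simulating a policy in a tiny finite MDP. The two-state MDP, the action name and the transition-table encoding are invented for illustration only; they are not from the paper.

```python
import random

def sample_return(transitions, policy, state, horizon, rng):
    """Sample one return X_st: the sum of rewards collected over `horizon` steps.

    transitions[s][a] is a list of (probability, next_state, reward) triples,
    a finite encoding of the kernel P0(. | s, a) described above.
    """
    total = 0.0
    for _ in range(horizon):
        action = policy(state, rng)
        r, cum = rng.random(), 0.0
        for prob, nxt, reward in transitions[state][action]:
            cum += prob
            if r <= cum:
                state, total = nxt, total + reward
                break
    return total

# A toy 2-state MDP with a single action (hypothetical).
transitions = {
    "s0": {"roll": [(0.5, "s0", 1.0), (0.5, "s1", 0.0)]},
    "s1": {"roll": [(1.0, "s1", 2.0)]},
}
policy = lambda s, rng: "roll"

rng = random.Random(0)
est = sum(sample_return(transitions, policy, "s0", 5, rng) for _ in range(2000)) / 2000
```

Averaging many such sampled returns gives a Monte-Carlo estimate of E[X_st] under the policy.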
2.2 Online Monte-Carlo Planning in MDPs
If the state space is small, an optimal action can be computed offline for each state using techniques
such as exhaustive Expectimax Search [18] or Q-Learning [23]. Unfortunately, state spaces too large
for these approaches are regularly encountered in practice. One way to deal with this is to use online
planning. This involves repeatedly using search to compute an approximation to the optimal action
from the current state. This effectively amortizes the planning effort across multiple time steps, and
implicitly focuses the approximation effort on the relevant parts of the state space.
A popular way to construct an online planning algorithm is to use a depth-limited version of an exhaustive search technique (such as Expectimax Search) in conjunction with iterative deepening [18].
Although this approach works well in domains with limited stochasticity, it scales poorly in highly
stochastic MDPs. This is because of the exhaustive enumeration of all possible successor states at
chance nodes. This enumeration severely limits the maximum search depth that can be obtained
given reasonable time constraints. Depth-limited exhaustive search is generally outperformed by
Monte-Carlo planning techniques in these situations.
A canonical example of online Monte-Carlo planning is 1-ply rollout-based planning [3]. It combines a default policy π with a one-ply lookahead search. At each time t < n, given a starting
state st, for each at ∈ A and with t < i < n, E[X_{st,at} | Ai ∼ π(· | Si)] is estimated by generating trajectories St+1, Rt+1, ..., An−1, Sn, Rn of agent-system interaction. From these trajectories, sample means X̄_{st,at} are computed for all at ∈ A. The agent then selects the action
At := argmax_{a∈A} X̄_{st,a}, and observes the system response (St+1, Rt+1). This process is then
repeated until time n. Under some mild assumptions, this technique is provably superior to executing the default policy [3]. One of the main advantages of rollout-based planning compared with
exhaustive depth-limited search is that a much larger search horizon can be used. The disadvantage however is that if π is suboptimal, then E[X_{st,a} | Ai ∼ π(· | Si)] < E[X_{st,a} | Ai ∼ π*(· | Si)]
for at least one state-action pair (st, a) ∈ S × A, which implies that at least some value estimates
constructed by 1-ply rollout-based planning are biased. This can lead to mistakes which cannot be
corrected through additional sampling. The bias can be reduced by incorporating more knowledge
into the default policy, however this can be both difficult and time consuming.
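The 1-ply rollout procedure just described can be sketched in a few lines: for each action, sample returns under the default policy, form the sample means X̄_{st,a}, and act greedily. The `simulate` callback (one sampled return for a given state-action pair) is an assumed interface, not from the paper.

```python
import random

def rollout_plan(actions, simulate, state, num_rollouts, rng):
    """Return (best_action, best_mean): the greedy choice under Monte-Carlo
    estimates of E[X_{st,a}] built from `num_rollouts` default-policy rollouts."""
    best_action, best_mean = None, float("-inf")
    for a in actions:
        mean = sum(simulate(state, a, rng) for _ in range(num_rollouts)) / num_rollouts
        if mean > best_mean:
            best_action, best_mean = a, mean
    return best_action, best_mean

# Demo with a hypothetical simulator where action "b" is better by +1 on average.
rng = random.Random(0)
noisy = lambda state, a, rng: rng.random() + (1.0 if a == "b" else 0.0)
best_action, best_mean = rollout_plan(["a", "b"], noisy, None, 100, rng)
```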
Monte-Carlo Tree Search algorithms improve on this procedure, by providing a means to construct
asymptotically consistent estimates of the return under the optimal policy from simulation trajectories. The UCT algorithm [15] in particular has been shown to work well in practice. Like rollout-based planning, it uses a default policy to generate trajectories of agent-system interaction. However
now the construction of a search tree is also interleaved within this process, with nodes corresponding to states and edges corresponding to state-action pairs. Initially, the search tree consists of a
single node, which represents the current state st at time t. One or more simulations are then performed. We will use T_m ⊆ S to denote the set of states contained within the search tree after m ∈ N
simulations. Associated with each state-action pair (s, a) ∈ S × A is an estimate X̄^m_{s,a} of the return
under the optimal policy and a count T^m_{s,a} ∈ N representing the number of times this state-action
pair has been visited after m simulations, with T^0_{s,a} := 0 and X̄^0_{s,a} := 0.
Each simulation can be broken down into four phases, selection, expansion, rollout and backup.
Selection involves traversing a path from the root node to a leaf node in the following manner: for
each non-leaf, internal node representing some state s on this path, the UCB [1] criterion is applied
to select an action until a leaf node corresponding to state sl is reached. If U(B^m_s) denotes the uniform
distribution over the set of unexplored actions B^m_s := {a ∈ A : T^m_{s,a} = 0}, and T^m_s := Σ_{a∈A} T^m_{s,a},
UCB at state s selects

    A^{m+1}_s := argmax_{a∈A} X̄^m_{s,a} + c √( log(T^m_s) / T^m_{s,a} )        (1)

if B^m_s = ∅, or A^{m+1}_s ∼ U(B^m_s) otherwise. The ratio of exploration to exploitation is controlled
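A direct transcription of the selection rule in Equation (1), assuming the per-action statistics (X̄^m_{s,a}, T^m_{s,a}) are kept in a dictionary; unexplored actions are drawn uniformly first, as in the text. The data layout is illustrative, not from the paper.

```python
import math
import random

def ucb_select(children, c, rng):
    """children: action -> (value_estimate, visit_count). Implements Eq. (1)."""
    unexplored = [a for a, (_, n) in children.items() if n == 0]
    if unexplored:
        return rng.choice(unexplored)                  # A ~ U(B_s)
    total = sum(n for _, n in children.values())       # T_s
    best, best_score = [], float("-inf")
    for a, (x, n) in children.items():
        score = x + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = [a], score
        elif score == best_score:
            best.append(a)                             # break ties uniformly
    return rng.choice(best)
```

With equal value estimates, the rarely visited action wins through its larger exploration bonus.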
by the positive constant c ∈ R. In the case of more than one maximizing action, ties are broken
uniformly at random. Provided sl is non-terminal, the expansion phase is then executed, by selecting
an action Al ∼ π(· | sl), observing a successor state Sl+1 = sl+1, and then adding a node to the
search tree so that T_{m+1} = T_m ∪ {sl+1}. Higher values of c increase the level of exploration,
which in turn leads to more shallow and symmetric tree growth. The rollout phase is then invoked,
which for l < i < n, executes actions Ai ∼ π(· | Si). At this point, a complete agent-system
execution trajectory (at, st+1, rt+1, ..., an−1, sn, rn) from st has been realized. The backup phase
then assigns, for t ≤ k < n,

    X̄^{m+1}_{sk,ak} ← X̄^m_{sk,ak} + (1 / (T^m_{sk,ak} + 1)) ( Σ_{i=t+1}^{n} ri − X̄^m_{sk,ak} ),        T^{m+1}_{sk,ak} ← T^m_{sk,ak} + 1,
to each (sk, ak) ∈ T_{m+1} occurring on the realized trajectory. Notice that for all (s, a) ∈ S × A, the
value estimate X̄^m_{s,a} corresponds to the average return of the realized simulation trajectories passing
through state-action pair (s, a). After the desired number of simulations k has been performed in
state st, the action with the highest expected return at := argmax_{a∈A} X̄^k_{st,a} is selected. With an
appropriate [15] value of c, as m → ∞, the value estimates converge to the expected return under
the optimal policy. However, due to the stochastic nature of the UCT algorithm, each value estimate
X̄^m_{s,a} is subject to error, in terms of both bias and variance, for finite m. While previous work (see
Section 1) has focused on improving these estimates by reducing bias, little attention has been given
to improvements via variance reduction. The next section describes how the accuracy of UCT's
value estimates can be improved by adapting classical variance reduction techniques to MCTS.
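The backup update above is the standard incremental-mean form. A small sketch, with node statistics kept in a dictionary (an assumed data layout, not code from the paper):

```python
def backup(node_stats, path, rewards, t):
    """Update every (s_k, a_k) on the realized path with the return from time t.

    node_stats: (state, action) -> [mean_return, visit_count]
    rewards[i] holds r_{i+1}, so sum(rewards[t:]) is sum_{i=t+1}^{n} r_i.
    The incremental form is algebraically identical to re-averaging all returns.
    """
    ret = sum(rewards[t:])
    for key in path:
        mean, count = node_stats[key]
        node_stats[key] = [mean + (ret - mean) / (count + 1), count + 1]

# Demo: two identical trajectories through one node.
stats = {("s", "a"): [0.0, 0]}
backup(stats, [("s", "a")], [1.0, 2.0, 3.0], 0)
first = list(stats[("s", "a")])
backup(stats, [("s", "a")], [1.0, 2.0, 3.0], 0)
second = list(stats[("s", "a")])
```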
3 Variance Reduction in MCTS
This section describes how three variance reduction techniques (control variates, common random numbers and antithetic variates) can be applied to the UCT algorithm. Each subsection begins with
a short overview of each variance reduction technique, followed by a description of how UCT can be
modified to efficiently incorporate it. Whilst we restrict our attention to planning in MDPs using the
UCT algorithm, the ideas and techniques we present are quite general. For example, similar modifications could be made to the Sparse Sampling [14] or AMS [5] algorithms for planning in MDPs, or
to the POMCP algorithm [22] for planning in POMDPs. In what follows, given an independent and
identically distributed sample (X1, X2, ..., Xn), the sample mean is denoted by X̄ := (1/n) Σ_{i=1}^{n} Xi.
Provided E[X] exists, X̄ is an unbiased estimator of E[X] with variance n^{-1} Var[X].
3.1 Control Variates
An improved estimate of E[X] can be constructed if we have access to an additional statistic Y
that is correlated with X, provided that μY := E[Y] exists and is known. To see this, note that if
Z := X + c(Y − E[Y]), then Z̄ is an unbiased estimator of E[X], for any c ∈ R. Y is called the
control variate. One can show that Var[Z] is minimised for c* := −Cov[X, Y] / Var[Y]. Given a
sample (X1, Y1), (X2, Y2), ..., (Xn, Yn) and setting c = c*, the control variate enhanced estimator

    X̄_cv := (1/n) Σ_{i=1}^{n} [Xi + c*(Yi − μY)]        (2)

is obtained, with variance

    Var[X̄_cv] = (1/n) ( Var[X] − Cov[X, Y]^2 / Var[Y] ).

Thus the total variance reduction is dependent on the strength of correlation between X and Y. For
the optimal value of c, the variance reduction obtained by using Z in place of X is 100 · Corr[X, Y]^2
percent. In practice, both Var[Y] and Cov[X, Y] are unknown and need to be estimated from data.
One solution is to use the plug-in estimator Cn := −Ĉov[X, Y] / V̂ar[Y], where Ĉov and V̂ar
denote the sample covariance and sample variance respectively. This estimate can be constructed
offline using an independent sample or be estimated online. Although replacing c* with an online
estimate of Cn in Equation 2 introduces bias, this modified estimator is still consistent [17]. Thus
online estimation is a reasonable choice for large n; we revisit the issue of small n later. Note
that X̄_cv can be efficiently computed with respect to Cn by maintaining X̄ and Ȳ online, since
X̄_cv = X̄ + Cn(Ȳ − μY).
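A compact sketch of Equation (2), with the plug-in coefficient Cn estimated from the same sample; the helper below is illustrative, not code from the paper.

```python
def control_variate_mean(xs, ys, mu_y):
    """Estimate E[X] via Eq. (2), using Cn = -Cov^[X,Y] / Var^[Y] from the sample."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)
    var_y = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
    c = -cov / var_y
    return x_bar + c * (y_bar - mu_y)
```

When X and Y are perfectly correlated, the estimator is exact regardless of the sample, illustrating the 100 · Corr[X, Y]^2 percent reduction in the extreme case.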
Application to UCT. Control variates can be applied recursively, by redefining the return X_{s,a} for
every state-action pair (s, a) ∈ S × A to

    Z_{s,a} := X_{s,a} + c_{s,a} ( Y_{s,a} − E[Y_{s,a}] ),        (3)

provided E[Y_{s,a}] exists and is known for all (s, a) ∈ S × A, and Y_{s,a} is a function of the random
variables At, St+1, Rt+1, ..., An−1, Sn, Rn that describe the complete execution of the system after
action a is performed in state s. Notice that a separate control variate will be introduced for each
state-action pair. Furthermore, as E[Z_{st,at} | Ai ∼ π(· | Si)] = E[X_{st,at} | Ai ∼ π(· | Si)], for all
policies π, for all (st, at) ∈ S × A and for all t < i < n, the inductive argument [15] used to
establish the asymptotic consistency of UCT still applies when control variates are introduced in
this fashion.

Finding appropriate control variates whose expectations are known in advance can prove difficult.
This situation is further complicated in UCT where we seek a set of control variates {Y_{s,a}} for all
(s, a) ∈ S × A. Drawing inspiration from advantage sum estimators [25], we now provide a general
class of control variates designed for application in UCT. Given a realization of a random simulation
trajectory St = st, At = at, St+1 = st+1, At+1 = at+1, ..., Sn = sn, consider control
variates of the form

    Y_{st,at} := Σ_{i=t}^{n−1} I[b(S_{i+1})] − P[b(S_{i+1}) | Si = si, Ai = ai],        (4)

where b : S → {true, false} denotes a boolean function of state and I denotes the binary indicator
function. In this case, the expectation

    E[Y_{st,at}] = Σ_{i=t}^{n−1} E[ I[b(S_{i+1})] | Si = si, Ai = ai ] − P[b(S_{i+1}) | Si = si, Ai = ai] = 0,

for all (st, at) ∈ S × A. Thus, using control variates of this form simplifies the task to specifying
a state property that is strongly correlated with the return, such that P[b(S_{i+1}) | Si = si, Ai = ai] is
known for all (si, ai) ∈ S × A, for all t ≤ i < n. This considerably reduces the effort required to
find an appropriate set of control variates for UCT.
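Equation (4) reduces to a simple running sum over the trajectory. A sketch, where `b` is the boolean state property and `probs[i]` supplies the known conditional probability P[b(S_{i+1}) | S_i, A_i]; both are assumed to be provided by the domain.

```python
def indicator_control_variate(next_states, probs, b):
    """Y from Eq. (4): sum over the trajectory of I[b(S_{i+1})] - P[b(S_{i+1}) | S_i, A_i].

    Zero-mean by construction, so no stored expectation is needed."""
    return sum((1.0 if b(s) else 0.0) - p for s, p in zip(next_states, probs))
```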
3.2 Common Random Numbers
Consider comparing the expectation E[Y] to E[Z], where both Y := g(X) and Z := h(X) are
functions of a common random variable X. This can be framed as estimating the value of δ_{Y,Z},
where δ_{Y,Z} := E[g(X)] − E[h(X)]. If the expectations E[g(X)] and E[h(X)] were estimated from
two independent samples X1 and X2, the estimator ĝ(X1) − ĥ(X2) would be obtained, with variance
Var[ĝ(X1) − ĥ(X2)] = Var[ĝ(X1)] + Var[ĥ(X2)]. Note that no covariance term appears since
X1 and X2 are independent samples. The technique of common random numbers suggests setting
X1 = X2 if Cov[ĝ(X1), ĥ(X2)] is positive. This gives the estimator δ̂_{Y,Z}(X1) := ĝ(X1) − ĥ(X1),
with variance Var[ĝ(X1)] + Var[ĥ(X1)] − 2 Cov[ĝ(X1), ĥ(X1)], which is an improvement whenever
Cov[ĝ(X1), ĥ(X1)] is positive. This technique cannot be applied indiscriminately however, since a
variance increase will result if the estimates are negatively correlated.
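The idea is easy to see in code: feed the same draws to both functions, so shared noise cancels in the difference. A minimal sketch under the assumption that uniform draws stand in for the common random variable X:

```python
import random

def crn_difference(g, h, n, seed):
    """Estimate E[g(X)] - E[h(X)] with common random numbers: identical
    samples X_i drive both g and h, exploiting their positive covariance."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    return sum(g(x) - h(x) for x in xs) / n
```

With g(x) = h(x) + 1 the estimate is exact for any sample size, whereas estimating the two expectations from independent samples would leave Monte-Carlo noise in the difference.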
Application to UCT. Rather than directly reducing the variance of the individual return estimates,
common random numbers can instead be applied to reduce the variance of the estimated differences
in return X̄^m_{s,a} − X̄^m_{s,a′}, for each pair of distinct actions a, a′ ∈ A in a state s. This has the benefit
of reducing the effect of variance both in determining the action at := argmax_{a∈A} X̄^m_{s,a} selected
by UCT in state st and in the actions argmax_{a∈A} X̄^m_{s,a} + c √( log(T^m_s) / T^m_{s,a} ) selected by UCB as the
search tree is constructed.

As each estimate X̄^m_{s,a} is a function of realized simulation trajectories originating from state-action
pair (s, a), a carefully chosen subset of the stochastic events determining the realized state transitions
now needs to be shared across future trajectories originating from s so that Cov[X̄^m_{s,a}, X̄^m_{s,a′}] is
positive for all m ∈ N and for all distinct pairs of actions a, a′ ∈ A. Our approach is to use the same
chance outcomes to determine the trajectories originating from state-action pairs (s, a) and (s, a′) if
T^i_{s,a} = T^j_{s,a′}, for any a, a′ ∈ A and i, j ∈ N. This can be implemented by using T_{s,a} to index into
a list of stored stochastic outcomes E_s defined for each state s. By only adding a new outcome to
E_s when T_{s,a} exceeds the number of elements in E_s, the list of common chance outcomes can be
efficiently generated online. This idea can be applied recursively, provided that the shared chance
events from the current state do not conflict with those defined at any possible ancestor state.
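The per-state outcome list E_s can be implemented in a few lines. The class below is a sketch of the indexing scheme described above; `draw` stands in for the domain's chance-event generator (e.g. a dice roll) and is an assumed interface, not from the paper.

```python
import random

class SharedOutcomes:
    """Store chance outcomes E_s for one state s; the i-th simulation through
    ANY action (i.e. when T_{s,a} == i) replays outcome E_s[i], so trajectories
    for different actions share their chance events."""

    def __init__(self, draw, rng):
        self.draw, self.rng = draw, rng
        self.outcomes = []   # E_s, grown lazily
        self.counts = {}     # T_{s,a} per action

    def next_outcome(self, action):
        i = self.counts.get(action, 0)
        self.counts[action] = i + 1
        if i >= len(self.outcomes):          # extend E_s on demand
            self.outcomes.append(self.draw(self.rng))
        return self.outcomes[i]

# Demo: two actions replay the same three simulated dice rolls.
es = SharedOutcomes(lambda rng: rng.randint(1, 6), random.Random(0))
rolls_a = [es.next_outcome("a") for _ in range(3)]
rolls_b = [es.next_outcome("b") for _ in range(3)]
```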
3.3 Antithetic Variates
Consider estimating E[X] with h̄(X, Y) := (1/2)(h̄1(X) + h̄2(Y)), the average of two unbiased estimates
h̄1(X) and h̄2(Y), computed from two identically distributed samples X = (X1, X2, ..., Xn) and
Y = (Y1, Y2, ..., Yn). The variance of h̄(X, Y) is

    (1/4)( Var[h̄1(X)] + Var[h̄2(Y)] ) + (1/2) Cov[h̄1(X), h̄2(Y)].        (5)

The method of antithetic variates exploits this identity, by deliberately introducing a negative correlation between h̄1(X) and h̄2(Y). The usual way to do this is to construct X and Y from pairs of
sample points (Xi, Yi) such that Cov[h1(Xi), h2(Yi)] < 0 for all i ≤ n. So that h̄2(Y) remains an
unbiased estimate of E[X], care needs to be taken when making Y depend on X.
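For a uniform driving variable, the classic antithetic pairing is (u, 1 − u). The sketch below estimates E[f(U)] with such pairs; for monotone f the two halves are negatively correlated, making the covariance term in Equation (5) negative. This is a generic illustration, not the paper's game-specific construction.

```python
import random

def antithetic_mean(f, n, rng):
    """Average n antithetic pairs (f(u) + f(1 - u)) / 2 for u ~ Uniform(0, 1)."""
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n
```

In the extreme case f(u) = u the pair average is constant, so the estimator has zero variance, which plain Monte-Carlo with 2n independent draws would not achieve.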
Application to UCT. Like the technique of common random numbers, antithetic variates can
be applied to UCT by modifying the way simulation trajectories are sampled. Whenever a node
representing (si, ai) ∈ S × A is visited during the backup phase of UCT, the realized trajectory
si+1, ri+1, ai+1, ..., sn, rn from (si, ai) is now stored in memory if T^m_{si,ai} mod 2 ≡ 0. The next
time this node is visited during the selection phase, the previous trajectory is used to predetermine
one or more antithetic events that will (partially) drive subsequent state transitions for the current
simulation trajectory. After this, the memory used to store the previous simulation trajectory is
reclaimed. This technique can be applied to all state-action pairs inside the tree, provided that the
antithetic events determined by any state-action pair do not overlap with the antithetic events defined
by any possible ancestor.
4 Empirical Results
This section begins with a description of our test domains, and how our various variance reduction
ideas can be applied to them. We then investigate the performance of UCT when enhanced with
various combinations of these techniques.
4.1 Test Domains
Pig is a turn-based jeopardy dice game that can be played with one or more players [20]. Players
roll two dice each turn and keep a turn total. At each decision point, they have two actions, roll and
stop. If they decide to stop, they add their turn total to their total score. Normally, dice rolls add to
the player's turn total, with the following exceptions: if a single 1 is rolled, the turn total will be reset
and the turn ended; if a double 1 is rolled, then the player's turn will end along with their total score being
reset to 0. These possibilities make the game highly stochastic.
Can't Stop is a dice game where the goal is to obtain three complete columns by reaching the
highest level in each of the 2-12 columns [19]. This is done by repeatedly rolling 4 dice and playing
zero or more pairing combinations. Once a pairing combination is played, a marker is placed on
the associated column and moved upwards. Only three distinct columns can be used during any
given turn. If the dice are rolled and no legal pairing combination can be made, the player loses
all of the progress made towards completing columns on this turn. After rolling and making a legal
pairing, a player can choose to lock in their progress by ending their turn. A key component of the
game involves correctly assessing the risk associated with not being able to make a legal dice pairing
given the current board configuration.
Dominion is a popular turn-based, deck-building card game [24]. It involves acquiring cards by
spending the money cards in your current deck. Bought cards have certain effects that allow you to
buy more cards, get more money, draw more cards, and earn victory points. The goal is to get as
many victory points as possible.
In all cases, we used solitaire variants of the games where the aim is to maximize the number of
points given a fixed number of turns. All of our domains can be represented as finite MDPs. The
game of Pig contains approximately 2.4 × 10^6 states. Can't Stop and Dominion are significantly
more challenging, containing in excess of 10^24 and 10^30 states respectively.
4.2 Application of Variance Reduction Techniques
We now describe the application of each technique to the games of Pig, Can't Stop and Dominion.
Control Variates. The control variates used for all domains were of the form specified by Equation
4 in Section 3.1. In Pig, we used a boolean function that returned true if we had just performed the
roll action and obtained at least one 1. This control variate has an intuitive interpretation, since we
would expect the return from a single trajectory to be an underestimate if it contained more rolls
with a 1 than expected, and an overestimate if it contained fewer rolls with a 1 than expected. In
Can't Stop, we used a similarly inspired boolean function that returned true if we could not make
a legal pairing from our most recent roll of the 4 dice. In Dominion, we used a boolean function
that returned whether we had just played an action that let us randomly draw a hand with 8 or more
money to spend. This is a significant occurrence, as 8 money is needed to buy a Province, the highest
scoring card in the game. Strong play invariably requires purchasing as many Provinces as possible.
We used a mixture of online and offline estimation to determine the values of c_{s,a} to use in Equation
3. When T^m_{s,a} ≥ 50, the online (empirical) estimate −Cov[X_{s,a}, Y_{s,a}] / Var[Y_{s,a}] was used. If T^m_{s,a} < 50,
the constants 6.0, 6.0 and −0.7 were used for Pig, Can't Stop and Dominion respectively. These
constants were obtained by computing offline (empirical) estimates of −Cov[X_{s,a}, Y_{s,a}] / Var[Y_{s,a}] across a
representative sample of game situations. This combination gave better performance than either
scheme in isolation.
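The mixed online/offline coefficient scheme above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the class name, the running-sum layout and the corrected-return helper are assumptions; only the 50-visit threshold, the fallback constants and the formula c = −Cov[X, Y]/Var[Y] come from the text.

```python
# Sketch of a per-(state, action) control-variate coefficient that falls
# back to an offline constant until enough online samples accumulate.
class ControlVariateStats:
    def __init__(self, offline_c):
        self.offline_c = offline_c  # e.g. 6.0 for Pig
        self.n = 0
        self.sum_x = self.sum_y = 0.0
        self.sum_xy = self.sum_yy = 0.0

    def update(self, x, y):
        # x: observed return; y: control variate observed on the same trajectory.
        self.n += 1
        self.sum_x += x
        self.sum_y += y
        self.sum_xy += x * y
        self.sum_yy += y * y

    def coefficient(self):
        # Offline constant until 50 visits, then the online estimate
        # -Cov[X, Y] / Var[Y] computed from running sums.
        if self.n < 50:
            return self.offline_c
        mean_x = self.sum_x / self.n
        mean_y = self.sum_y / self.n
        cov = self.sum_xy / self.n - mean_x * mean_y
        var = self.sum_yy / self.n - mean_y * mean_y
        return -cov / var if var > 0 else self.offline_c

    def corrected(self, x, y, expected_y=0.0):
        # Control-variate-adjusted return: X + c * (Y - E[Y]).
        return x + self.coefficient() * (y - expected_y)
```

In use, each simulated trajectory would call `update` once and back up `corrected(...)` instead of the raw return.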
Common Random Numbers. To apply the ideas in Section 3.2, we need to specify the future
chance events to be shared across all of the trajectories originating from each state. Since a player's
final score in Pig is strongly dependent on their dice rolls, it is natural to consider sharing one or
more future dice roll outcomes. By exploiting the property in Pig that each roll event is independent
of the current state, our implementation shares a batch of roll outcomes large enough to drive a
complete simulation trajectory. So that these chance events don't conflict, we limited the sharing of
roll events to just the root node. A similar technique was used in Can't Stop. We found this scheme
to be superior to sharing a smaller number of future roll outcomes and applying the ideas in Section
3.2 recursively. In Dominion, stochasticity is caused by drawing cards from the top of a deck that is
periodically shuffled. Here we implemented common random numbers by recursively sharing preshuffled deck configurations across the actions at each state. The motivation for this kind of sharing
is that it should reduce the chance of one action appearing better than another simply because of
'luckier' shuffles.
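A minimal sketch of root-level common random numbers: every candidate action is scored on the same pre-generated batch of chance outcomes, so differences in estimated value reflect the actions rather than luck. The toy `simulate` function is a hypothetical stand-in for a full rollout; it is not the rules of Pig or Can't Stop.

```python
import random

def simulate(action, rolls):
    # Toy payoff: the action acts as a multiplier on the shared roll sequence.
    return sum(action * r for r in rolls)

def evaluate_actions_crn(actions, n_rolls, rng):
    # Generate one batch of dice rolls at the root and reuse it for
    # every candidate action (common random numbers).
    shared_rolls = [rng.randint(1, 6) for _ in range(n_rolls)]
    return {a: simulate(a, shared_rolls) for a in actions}
```

Without the shared batch, each action would be evaluated on its own rolls and the comparison would carry extra variance from the differing chance events.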
Antithetic Variates. To apply the ideas in Section 3.3, we need to describe how the antithetic
events are constructed from previous simulation trajectories. In Pig, a negative correlation between
the returns of pairs of simulation trajectories can be induced by forcing the roll outcomes in the
second trajectory to oppose those occurring in the first trajectory. Exploiting the property that the
relative worth of each pair of dice outcomes is independent of state, a list of antithetic roll outcomes
can be constructed by mapping each individual roll outcome in the first trajectory to its antithetic
partner. For example, a lucky roll of a 6 was paired with the unlucky roll of a 1. A similar idea
is used in Can't Stop; however, the situation is more complicated, since the relative worth of each
[Figure 1 graphs omitted: left panel "MSE and Bias2 of Roll Value Estimator vs. Simulations in UCT", right panel "MSE and Bias2 in Value Difference Estimator vs. Simulations in UCT"; both plot MSE and Bias2 against log2(Simulations) from 4 to 15.]
Figure 1: The estimated variance of the value estimates for the Roll action and estimated differences between
actions on turn 1 in Pig.
chance event varies from state to state. Our solution was to develop a state-dependent heuristic
ranking function, which would assign an index between 0 and 1295 to the 6^4 distinct chance events
for a given state. Chance events that are favorable in the current state are assigned low indexes, while
unfavorable events are assigned high index values. When simulating a non-antithetic trajectory, the
ranking for each chance event is recorded. Later when the antithetic trajectory needs to be simulated,
the previously recorded rank indexes are used to compute the relevant antithetic event for the current
state. This approach can be applied in a wide variety of domains where the stochastic outcomes can
be ordered by how 'lucky' they are, e.g., suppliers' price fluctuations, rare catastrophic events, or
higher than average click-through-rates. For Dominion, a number of antithetic mappings were tried,
but none provided any substantial reduction in variance. The complexity of how cards can be played
to draw more cards from one's deck makes a good or bad shuffle intricately dependent on the exact
composition of cards in one's deck, of which there are intractably many possibilities with no obvious
symmetries.
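The rank-based antithetic mapping described above can be sketched as follows, assuming a stand-in luckiness function (the authors' state-dependent heuristic is not specified in this excerpt): events are ranked from lucky to unlucky, and the antithetic partner of the event ranked i among n events is the one ranked n − 1 − i.

```python
def antithetic_event(event, state_events, luckiness):
    # Rank the chance events for the current state from lucky (index 0)
    # to unlucky (index n - 1), then reflect the rank to find the
    # antithetic partner.
    ranked = sorted(state_events, key=luckiness, reverse=True)
    rank = ranked.index(event)
    return ranked[len(ranked) - 1 - rank]
```

During the first trajectory the rank of each realized event would be recorded; during the antithetic trajectory the recorded ranks are reflected through this mapping in the (possibly different) current state.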
4.3 Experimental Setup
Each variance reduction technique is evaluated in combination with the UCT algorithm, with varying levels of search effort. In Pig, the default (rollout) policy plays the roll and stop actions with
probability 0.8 and 0.2 respectively. In Can't Stop, the default policy will end the turn if a column
has just been finished, otherwise it will choose to re-roll with probability 0.85. In Dominion, the
default policy incorporates some simple domain knowledge that favors obtaining higher cost cards
and avoiding redundant actions. The UCB constant c in Equation 1 was set to 100.0 for both Pig
and Dominion and 5500.0 for Can't Stop.
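For reference, the selection rule that the constant c tunes (Equation 1 appears in an earlier section, outside this excerpt) has the standard UCB1 form: pick the action maximizing estimated value plus an exploration bonus. The sketch below assumes that standard form, with illustrative names.

```python
import math

def ucb_select(actions, value, count, total_count, c):
    # Standard UCB1 score: empirical mean plus c * sqrt(ln N / n_a).
    def score(a):
        if count[a] == 0:
            return float("inf")  # try unvisited actions first
        return value[a] + c * math.sqrt(math.log(total_count) / count[a])
    return max(actions, key=score)
```

Larger c values (such as the 5500.0 used for Can't Stop, whose returns span a wide range) push the search toward less-visited actions.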
4.4 Evaluation
We performed two sets of experiments. The first is used to gain a deeper understanding of the role
of bias and variance in UCT. The next set of results is used to assess the overall performance of UCT
when augmented with our variance reduction techniques.
Bias versus Variance. When assessing the quality of an estimator using mean squared error
(MSE), it is well known that the estimation error can be decomposed into two terms, bias and variance. Therefore, when assessing the potential impact of variance reduction, it is important to know
just how much of the estimation error is caused by variance as opposed to bias. Since the game
of Pig has approximately 2.4 × 10^6 states, we can solve it offline using Expectimax Search. This allows us to
compute the expected return E[X_{s_1} | π*] of the optimal action (roll) at the starting state s_1. We use
this value to compute both the bias-squared and variance component of the MSE for the estimated
return of the roll action at s_1 when using UCT without variance reduction. This is shown in the
leftmost graph of Figure 1. It seems that the dominating term in the MSE is the bias-squared. This
is misleading however, since the absolute error is not the only factor in determining which action
is selected by UCT. More important instead is the difference between the estimated returns for each
action, since UCT ultimately ends up choosing the action with the largest estimated return. As Pig
has just two actions, we can also compute the MSE of the estimated difference in return between
rolling and stopping using UCT without variance reduction. This is shown by the rightmost graph
[Figure 2 graphs omitted: three panels, "Pig MCTS Performance Results", "Can't Stop MCTS Performance Results" and "Dominion MCTS Performance Results", plotting average score against the number of simulations (16 to 2,048) for the Base, AV, CRN, CV and CVCRN variants; the Dominion panel has no AV curve.]
Figure 2: Performance Results for Pig, Can't Stop, and Dominion with 95% confidence intervals shown.
Values on the vertical axis of each graph represent the average score.
in Figure 1. Here we see that variance is the dominating component (the bias is within ±2) when
the number of simulations is less than 1024. The role of bias and variance will of course vary from
domain to domain, but this result suggests that variance reduction may play an important role when
trying to determine the best action.
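The decomposition used in this analysis, MSE = bias^2 + variance, can be checked empirically from repeated runs of an estimator against a known true value. The helper below is illustrative and not part of the paper's tooling.

```python
def mse_decomposition(estimates, true_value):
    # Empirical MSE, squared bias, and variance of an estimator, computed
    # from repeated independent estimates of a known true value.
    n = len(estimates)
    mean = sum(estimates) / n
    bias_sq = (mean - true_value) ** 2
    var = sum((e - mean) ** 2 for e in estimates) / n
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return mse, bias_sq, var
```

The identity MSE = bias^2 + variance holds exactly for these empirical quantities, which is what makes the two panels of Figure 1 directly comparable.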
Search Performance. Figure 2 shows the results of our variance reduction methods on Pig, Can't
Stop and Dominion. Each data point for Pig, Can't Stop and Dominion is obtained by averaging
the scores obtained across 50,000, 10,000 and 10,000 games respectively. Such a large number
of games is needed to obtain statistically significant results due to the highly stochastic nature of
each domain. 95% confidence intervals are shown for each data point. In Pig, the best approach
consistently outperforms the base version of UCT, even when given twice the number of simulations.
In Can't Stop, the best approach gave a performance increase roughly equivalent to using base UCT
with 50-60% more simulations. The results also show a clear benefit to using variance reduction
techniques in the challenging game of Dominion. Here the best combination of variance reduction
techniques leads to an improvement roughly equivalent to using 25-40% more simulations. The use
of antithetic variates in both Pig and Can't Stop gave a measurable increase in performance; however,
the technique was less effective than either control variates or common random numbers. Control
variates was particularly helpful across all domains, and even more effective when combined with
common random numbers.
5 Discussion
Although our UCT modifications are designed to be lightweight, some additional overhead is unavoidable. Common random numbers and antithetic variates increase the space complexity of UCT
by a multiplicative constant. Control variates typically increase the time complexity of each value
backup by a constant. These factors need to be taken into consideration when evaluating the benefits of variance reduction for a particular domain. Note that surprising results are possible; for
example, if generating the underlying chance events is expensive, using common random numbers
or antithetic variates can even reduce the computational cost of each simulation. Ultimately, the
effectiveness of variance reduction in MCTS is both domain and implementation specific. That said,
we would expect our techniques to be useful in many situations, especially in noisy domains or if
each simulation is computationally expensive. In our experiments, the overhead of every technique
was dominated by the cost of simulating to the end of the game.
6 Conclusion
This paper describes how control variates, common random numbers and antithetic variates can
be used to improve the performance of Monte-Carlo Tree Search by reducing variance. Our main
contribution is to describe how the UCT algorithm can be modified to efficiently incorporate these
techniques in practice. In particular, we provide a general approach that significantly reduces the
effort needed to recursively apply control variates. Using these methods, we demonstrated substantial
performance improvements on the highly stochastic games of Pig, Can't Stop and Dominion. Our
work should be of particular interest to those using Monte-Carlo planning in highly stochastic or
resource limited settings.
References
[1] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. JMLR, 3:397–422, 2002.
[2] Radha-Krishna Balla and Alan Fern. UCT for Tactical Assault Planning in Real-Time Strategy Games. In IJCAI, pages 40–45, 2009.
[3] Dimitri P. Bertsekas and David A. Castanon. Rollout algorithms for stochastic scheduling problems. Journal of Heuristics, 5(1):89–108, 1999.
[4] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1st edition, 1996.
[5] Hyeong S. Chang, Michael C. Fu, Jiaqiao Hu, and Steven I. Marcus. An Adaptive Sampling Algorithm for Solving Markov Decision Processes. Operations Research, 53(1):126–139, January 2005.
[6] Guillaume Chaslot, Sander Bakkes, Istvan Szita, and Pieter Spronck. Monte-Carlo Tree Search: A New Framework for Game AI. In Fourth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2008), 2008.
[7] Guillaume M. Chaslot, Mark H. Winands, and H. Jaap Herik. Parallel Monte-Carlo Tree Search. In Proceedings of the 6th International Conference on Computers and Games, pages 60–71, Berlin, Heidelberg, 2008. Springer-Verlag.
[8] Guillaume M.J-B. Chaslot, Mark H.M. Winands, Istvan Szita, and H. Jaap van den Herik. Cross-entropy for Monte-Carlo Tree Search. ICGA, 31(3):145–156, 2008.
[9] Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In Proceedings Computers and Games 2006. Springer-Verlag, 2006.
[10] Hilmar Finnsson and Yngvi Bjornsson. Simulation-based Approach to General Game Playing. In Twenty-Third AAAI Conference on Artificial Intelligence (AAAI 2008), pages 259–264, 2008.
[11] S. Gelly and D. Silver. Combining online and offline learning in UCT. In Proceedings of the 17th International Conference on Machine Learning, pages 273–280, 2007.
[12] Sylvain Gelly and Yizao Wang. Exploration exploitation in Go: UCT for Monte-Carlo Go. In NIPS Workshop on On-line trading of Exploration and Exploitation, 2006.
[13] Sylvain Gelly, Yizao Wang, Rémi Munos, and Olivier Teytaud. Modification of UCT with patterns in Monte-Carlo Go. Technical Report 6062, INRIA, France, November 2006.
[14] Michael J. Kearns, Yishay Mansour, and Andrew Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov Decision Processes. In IJCAI, pages 1324–1331, 1999.
[15] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In ECML, pages 282–293, 2006.
[16] Todd W. Neller and Clifton G.M. Presser. Practical play of the dice game pig. Undergraduate Mathematics and Its Applications, 26(4):443–458, 2010.
[17] Barry L. Nelson. Control variate remedies. Operations Research, 38(6):974–992, 1990.
[18] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ, 2nd edition, 2003.
[19] Sid Sackson. Can't Stop. Ravensburger, 1980.
[20] John Scarne. Scarne on dice. Harrisburg, PA: Military Service Publishing Co, 1945.
[21] David Silver and Gerald Tesauro. Monte-Carlo simulation balancing. In ICML, page 119, 2009.
[22] David Silver and Joel Veness. Monte-Carlo Planning in Large POMDPs. In Advances in Neural Information Processing Systems 23, pages 2164–2172, 2010.
[23] Csaba Szepesvári. Reinforcement learning algorithms for MDPs, 2009.
[24] Donald X. Vaccarino. Dominion. Rio Grande Games, 2008.
[25] Martha White and Michael Bowling. Learning a value analysis tool for agent evaluation. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI), pages 1976–1981, 2009.
Empirical models of spiking in neural populations
Lars Büsing
Gatsby Computational Neuroscience Unit
University College London, UK
[email protected]
Jakob H. Macke
Gatsby Computational Neuroscience Unit
University College London, UK
[email protected]
John P. Cunningham
Department of Engineering
University of Cambridge, UK
[email protected]
Byron M. Yu
ECE and BME
Carnegie Mellon University
[email protected]
Krishna V. Shenoy
Department of Electrical Engineering
Stanford University
[email protected]
Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London, UK
[email protected]
Abstract
Neurons in the neocortex code and compute as part of a locally interconnected
population. Large-scale multi-electrode recording makes it possible to access
these population processes empirically by fitting statistical models to unaveraged
data. What statistical structure best describes the concurrent spiking of cells within
a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects
only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth
dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find
that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces
compare models whose observation models are either derived from a Gaussian
or point-process models, finding that the non-Gaussian model provides slightly
better goodness-of-fit and more realistic population spike counts.
1 Introduction
Multi-electrode array recording and similar methods provide measurements of activity from dozens
of neurons simultaneously, and thus allow unprecedented insights into the statistical structure of
neural population activity. To exploit this potential we need methods that identify the temporal dynamics of population activity and link it to external stimuli and observed behaviour. These statistical
models of population activity are essential for understanding neural coding at a population level [1]
and can have practical applications for Brain Machine Interfaces [2].
Two frameworks for modelling the temporal dynamics of cortical population recordings have recently become popular. Generalised Linear spike-response Models (GLMs) [1, 3, 4, 5] model the
influence of spiking history, external stimuli or other neural signals on the firing of a neuron. Here,
the interdependence of different neurons is modelled by terms that link the instantaneous firing rate
of each neuron to the recent spiking history of the population. The parameters of the GLM can be
1
learned efficiently by convex optimisation [3, 4, 5, 6]. Such models have been successful in a range
of studies and systems, including retinal [1] and cortical [7] population recordings.
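The coupled-GLM firing-rate computation described above can be sketched as follows; this is an illustrative assumption about its shape (an exponential link over filtered spike history), with hypothetical names, not any specific paper's implementation.

```python
import math

def glm_rate(baseline, coupling, history):
    # Conditional intensity of one neuron in a coupled GLM:
    # exp(baseline + sum over neurons j and lags l of
    #     coupling-filter weight * past spike count).
    # coupling[j][l]: filter weight of neuron j at lag l;
    # history[j][l]: spike count of neuron j, l bins in the past.
    drive = baseline
    for j in range(len(coupling)):
        for l in range(len(coupling[j])):
            drive += coupling[j][l] * history[j][l]
    return math.exp(drive)
```

Because the rate is log-linear in the parameters, the log-likelihood of spike data under this model is concave, which is what makes fitting by convex optimisation possible.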
An alternative is provided by latent variable models such as Gaussian Process Factor Analysis [8]
or other state-space models [9, 10, 11]. In this approach, shared variability (or 'noise correlation')
is modelled by an unobserved process driving the population, which is sometimes characterised as
'common input' [12, 13]. One advantage of this approach is that the trajectories of the latent state
provide a compact, low-dimensional representation of the population which can be used to visualise
population activity, and link it to observed behaviour [14].
1.1 Comparing coupled generalised linear models and latent variable models
Three lines of argument suggest that latent dynamical models may provide a better fit to cortical
population data than the spike-response GLM. First, prevalent recording apparatus, such as extracellular grid electrodes, sample neural populations very sparsely making it unlikely that much of
the observed shared variability is a consequence of direct physical interaction. Hence, the coupling
filters of a GLM rather reflect statistical interactions (sometimes called functional connectivity).
Without direct synaptic coupling, it is unlikely that variability is shared exclusively by particular
pairs of units; instead, it will generally be common to many cells, an assumption explicit in the
latent variable approach, where shared variability results from the model of cortical dynamics.
Second, most cortical population recordings find that shared variability across neurons is dominated
by a central peak at zero time lag (i.e. the strongest correlation is instantaneous) [15, 16], and has
broad, positive, sometimes asymmetric flanks, decaying slowly with lag time. Correlations with
these properties arise naturally in dynamical system models. The common input from the latent state
induces instantaneous correlations, and the evolution of the latent system typically yields positive
temporal correlations over moderate timescales. By contrast, GLMs couple instantaneous rate to the
recent spiking of other neurons, but not to their simultaneous activity, making zero-lag correlation
hard to model. (As we show below in 'Methods', the inclusion of simultaneous terms would lead to
invalid models.) Instead, the common approach is to discretise time very finely so that an off-zero
peak can be brought close to simultaneity. This increases computational load, and often requires
discretisation finer than the time-scale of interest, perhaps even finer than the recording resolution
(e.g. for 2-photon calcium imaging). In addition, positive history coupling in a GLM may lead to
loops of self-excitation, predicting unrealistically high firing rates?a trend that must be countered
by long-term negative self-coupling. Thus, while it is certainly not impossible to reproduce neural
correlation structure with GLMs [1], they do not seem to be the natural choice for modelling time-series of spike-counts with instantaneous correlations.
Third, recording time, and therefore the data available to fit a model, is usually limited in vivo,
especially in behaving animals. This paucity of data places strong constraints on the number of
parameters than can be identified. In dynamical system models, the parameter count grows linearly
with population size (for a constant latent dimension), whereas the parameters of a coupled GLM
depend quadratically on the number of neurons. Thus, GLMs may have many more parameters, and
depend on aggressive regularisation techniques to avoid over-fitting to small datasets.
Here we show that population activity in monkey motor cortex is better fit by a dynamical system
model than by a spike-response GLM; and that the dynamical system, but not a GLM of the same
temporal resolution, accurately reproduces the temporal structure of cross-correlations in these data.
1.2 Comparing dynamical system models with spiking or Gaussian observations
Many studies of population latent variable models assume Gaussian observation noise [8, 17] (but
see, e.g. [2, 11, 13, 18]). Given that spikes are discrete events in time, it seems more natural to use
a Poisson [10] or other point-process model [19] or, at coarser timescales, a count-process model.
However, it is unclear what (if any) advantage such more realistic models confer. For example,
Poisson decoding models do not always outperform Gaussian ones [2, 11]. Here, we describe a
latent linear dynamical system whose count distribution, when conditioned on all past observations,
is Poisson (the Poisson linear dynamical system, or PLDS). Using a co-smoothing metric, we show
that this (computationally more expensive) count model predicts spike counts in our data better than
a Gaussian linear dynamical system (GLDS). The two models give substantially different population
spike-count distributions, and the count approach is also more accurate on this measure than either
the GLDS or GLM.
2 Methods
2.1 Dynamical systems with count observations and time-varying mean rates
We first consider the count-process latent dynamical model (PLDS). Denote by y^i_{kt} the observed
spike-count of neuron i ∈ {1, . . . , q} at time bin t ∈ {1, . . . , T} of trial k ∈ {1, . . . , N}, and by y_k =
vec(y_{k, i=1:q, t=1:T}) the qT × 1 vector of all data observed on trial k. Neurons are assumed to
be conditionally independent given the low-dimensional latent state xkt (of dimensionality p with
p < q). Thus, correlated neural variability arises from variations of this latent population state, and
not from direct interaction between neurons. Conditioned on x and the recent spiking history st , the
activity of neuron i at time t is given by a Poisson distribution with mean

E[y^i_{kt} | s_{kt}, x_{kt}] = exp([C x_{kt} + d + D s_{kt}]_i),          (1)
where the q × p matrix C determines how each neuron is related to the latent state x_{kt}, and the
q-dimensional vector d controls the mean firing rates of the population. The history term st is a
vector of all relevant recent spiking in the population [1, 3, 7, 20]. For example, one choice to model
spike refractoriness would set skt to the counts at the previous time point skt = yk,(t?1) , and D to
a diagonal matrix of size q ? q with negative diagonal entries. In general, however, s and D may
contain entries that reflect temporal dependence on a longer time-scale. However, to maintain the
conditional independence of neurons given latent state the matrix D (of size q ? dim(s)) is constrained to have zero values at all entries corresponding to cross-neuron couplings. The exponential
nonlinearity ensures that the conditional firing rate of each neuron is positive. Furthermore, while
conditioned on the latent state and the recent spiking history the count in each bin is Poisson distributed (hence the model name), samples from the model are not Poisson as they are affected both
by variations in the underlying state and the single-neuron history.
We assume that the latent population state $x_{kt}$ evolves according to driven linear Gaussian dynamics:
$$x_{k1} \sim \mathcal{N}(x_o, Q_o) \qquad (2)$$
$$x_{k(t+1)} \mid x_{kt} \sim \mathcal{N}(A x_{kt} + b_t, Q) \qquad (3)$$
Here, $x_o$ and $Q_o$ denote the average value and the covariance of the initial state $x_1$ of each trial. The $p \times p$ matrix $A$ specifies the deterministic component of the evolution from one state to the next, and the matrix $Q$ gives the covariance of the innovations that perturb the latent state at each time step. The "driving inputs" $b_t$, which add to the latent state, allow the model to capture time-varying structure in the firing rates that is consistent across trials. Such time-varying mean firing rates are usually characterised by the peri-stimulus time histogram (PSTH), which requires $q \cdot T$ parameters to estimate for each stimulus. Here, by contrast, time-varying means are captured by the driving inputs into the latent state, and so only $p \cdot T$ parameters are needed to describe all the PSTHs.
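As an illustrative sketch (not the authors' code), the generative process of equations (1)–(3) can be simulated directly; all dimensions and parameter values below are arbitrary stand-ins, and the history term $D s_{kt}$ is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: q neurons, p latent dimensions, T time bins.
q, p, T = 8, 2, 50

# Arbitrary stand-in parameters (not fitted values).
A = 0.95 * np.eye(p)                    # latent dynamics matrix (eq. 3)
Q = 0.05 * np.eye(p)                    # innovation covariance (eq. 3)
x0, Q0 = np.zeros(p), 0.1 * np.eye(p)   # initial-state distribution (eq. 2)
b = 0.02 * rng.standard_normal((T, p))  # driving inputs b_t
C = 0.5 * rng.standard_normal((q, p))   # loading matrix
d = np.full(q, -1.5)                    # baseline log firing rates

def sample_trial():
    """Sample one trial: latent trajectory (eqs. 2-3), then Poisson
    counts with conditional mean exp(C x_t + d) (eq. 1, history omitted)."""
    x = np.zeros((T, p))
    y = np.zeros((T, q), dtype=int)
    x[0] = rng.multivariate_normal(x0, Q0)
    for t in range(T):
        if t > 0:
            x[t] = rng.multivariate_normal(A @ x[t - 1] + b[t - 1], Q)
        y[t] = rng.poisson(np.exp(C @ x[t] + d))
    return x, y

x, y = sample_trial()
print(y.shape, y.sum())
```

Because all neurons share the same latent trajectory, samples drawn this way exhibit correlated variability even though counts are conditionally independent given the state.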
2.2 Expectation-Maximisation for the PLDS model
We use an EM algorithm, similar to those described before [10, 11, 12], to learn the parameters $\theta = \{C, D, d, A, Q, Q_o, x_o\}$. The E-step requires the posterior distribution $P(\tilde{x}_k \mid y_k, \theta)$ over the latent trajectories $\tilde{x}_k = \mathrm{vec}(x_{k,1:T})$ given the data and our current estimate of the parameters $\theta$. As this distribution is not available in closed form, we approximate it by a multivariate Gaussian, $P(\tilde{x}_k \mid y_k, \theta) \approx \mathcal{N}(\mu_k, \Sigma_k)$. As $\tilde{x}_k$ is a vector of length $pT$, $\mu_k$ and $\Sigma_k$ are of size $pT \times 1$ and $pT \times pT$, respectively. We find the mean $\mu_k$ and the covariance $\Sigma_k$ of this Gaussian via a global Laplace approximation [21], i.e. by maximising the log-posterior $P(\tilde{x}_k, y_k)$ of each trial over $\tilde{x}_k$, setting $\mu_k = \arg\max_{\tilde{x}} P(\tilde{x} \mid y_k, \theta)$ to be the latent trajectory that achieves this maximum, and $\Sigma_k = -\left(\nabla\nabla_{\tilde{x}} \log P(\tilde{x} \mid y_k, \theta)\big|_{\tilde{x}=\mu_k}\right)^{-1}$ to be the negative inverse Hessian of the log-posterior at its maximum. The log-posterior on trial $k$ is given by
$$\log P(\tilde{x}_k \mid y_k, \theta) = \mathrm{const} + \sum_{t=1}^{T} \left( y_{kt}^{\top} \left(C x_{kt} + D s_{kt} + d\right) - \sum_{i=1}^{q} \exp\left[C x_{kt} + D s_{kt} + d\right]_i \right)$$
$$- \frac{1}{2}\left(x_{k1} - x_o\right)^{\top} Q_o^{-1} \left(x_{k1} - x_o\right) - \frac{1}{2} \sum_{t=1}^{T-1} \left(x_{k,t+1} - A x_{kt} - b_t\right)^{\top} Q^{-1} \left(x_{k,t+1} - A x_{kt} - b_t\right) \qquad (4)$$
Log-posteriors of this type are concave and hence unimodal [5, 6], and the Markov structure of the
latent dynamics makes it possible to compute a Newton update in O(T ) time [22]. Furthermore, it
has previously been observed that the Laplace approximation performs well for similar models with
Poisson observations [23]. We checked the quality of the Laplace approximation for our parameter
settings by drawing samples from the true posterior in a few cases. The agreement was generally
good, with only some minor deviations between the approximated and sampled covariances.
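The Laplace step can be illustrated on a simplified single-bin analogue of the model (one observation vector, a Gaussian prior on the state, no dynamics); the parameter values are arbitrary stand-ins, not the authors' code. Because the log-posterior is concave, Newton iterations converge to the mode, whose negative inverse Hessian gives the approximate covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
q, p = 10, 2                              # neurons, latent dimensions
C = 0.4 * rng.standard_normal((q, p))     # stand-in loading matrix
d = np.full(q, -1.0)                      # stand-in baseline log-rates
Q0inv = np.eye(p)                         # prior precision (prior mean 0)
y = rng.poisson(1.0, size=q)              # observed counts in one bin

def grad_hess(x):
    """Gradient and Hessian of the NEGATIVE log-posterior
    -[ y.(Cx+d) - sum_i exp(Cx+d)_i - x' Q0inv x / 2 ]."""
    rate = np.exp(C @ x + d)
    g = -C.T @ (y - rate) + Q0inv @ x
    H = C.T @ (rate[:, None] * C) + Q0inv
    return g, H

x = np.zeros(p)
for _ in range(50):                       # Newton descent on a convex objective
    g, H = grad_hess(x)
    step = np.linalg.solve(H, g)
    x = x - step
    if np.linalg.norm(step) < 1e-12:
        break

mu = x                                    # Laplace mean
Sigma = np.linalg.inv(grad_hess(mu)[1])   # Laplace covariance
print(mu, np.diag(Sigma))
```

In the full model the same idea applies to the whole trajectory; the Markov structure makes the Hessian block-tridiagonal, which is what permits the O(T) Newton update mentioned above.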
The M-step requires optimisation of the expected joint log-likelihood with respect to the parameters $\theta$, i.e. $\theta_{\mathrm{new}} = \arg\max_{\theta'} \mathcal{L}(\theta')$ with
$$\mathcal{L}(\theta') = \sum_k \int \left[ \log P(y_k \mid x, \theta') + \log P(x \mid \theta') \right] \mathcal{N}(x \mid \mu_k, \Sigma_k)\, dx. \qquad (5)$$
This integral can be evaluated in closed form, and efficiently optimised over the parameters: $\mathcal{L}(\theta')$ is jointly concave in the parameters $C$, $d$, $D$, and the updates with respect to the dynamics parameters $A$, $Q$, $Q_o$, $x_o$ and the driving inputs $b_t$ can be calculated analytically.
Our use of the Laplace approximation in the E-step breaks the usual guarantee of non-decreasing likelihoods in EM. Furthermore, the full likelihood of the model can only be approximated using sampling techniques [11]. We therefore monitored convergence using the leave-one-neuron-out prediction score [8] that we also used for comparisons with alternative methods (see below): for each trial in the test-set, and for each neuron $i$, we calculate its most likely firing rate given the activity of the other neurons $y^{-i}_{k,1:T}$, and then compare this prediction against the observed activity. If implemented naively, this requires $q$ inferences of the latent state from the activity of $q-1$ neurons. However, this computation can be sped up by an order of magnitude by first finding the most likely state given all neurons, and then performing one Newton update for each held-out neuron from this initial state. While this approximate approach yielded accurate results, we only used it for tracking convergence of the algorithm, not for reporting the results in section 3.1.
2.3 Alternative models: Generalised Linear Models and Gaussian dynamical systems
The spike-response GLM models the instantaneous rate of neuron $i$ at time $t$ by a generalised linear form [4] with input covariates representing stimulus (or time) and population spike history:
$$\lambda^i_{kt} = E\left[y^i_{kt} \mid s_{kt}\right] = \exp\left([b_t + d + D s_{kt}]_i\right). \qquad (6)$$
The coupling matrix $D$ describes dependence both on the history of firing in the same neuron and on spiking in other neurons, and the $q \times 1$ vectors $b_t$ model time-varying mean firing rates. The parameters are estimated by minimising the negative log-likelihood $\mathcal{L}_{\mathrm{dat}} = -\sum_{k,t,i} \left( y^i_{kt} \log \lambda^i_{kt} - \lambda^i_{kt} \right)$. While equation (6) is similar to the definition of the PLDS model in equation (1), the models differ in their treatment of shared variability: the GLM has no latent state $x_t$ and so shared variance is modelled through the cross-coupling terms of the matrix $D$, which are set to 0 in the PLDS.
As the number of parameters in the GLM is quadratic in population size, it may be prone to overfitting on small datasets. To improve the generalisation ability of the GLM we added a sparsity-inducing L1 prior on the coupling parameters and a smoothness prior on the PSTH parameters $b_t$, and minimized the (convex) cost function using methods described in [24]:
$$\mathcal{L}(b, d, D) = \mathcal{L}_{\mathrm{dat}} + \lambda_1 \sum_{ij} |D_{ij}| + \frac{1}{2\lambda_2} \sum_t b_t^{\top} K_{\lambda_3}^{-1} b_t. \qquad (7)$$
Here, the regularization parameter $\lambda_1$ determines the sparsity of the solution $D$, $\lambda_2$ is the prior variance of the smoothing prior, and $K_{\lambda_3}(t, s) = \exp\left(-(s - t)^2/\lambda_3^2\right)$ is a squared-exponential prior on the time-varying firing rates $b_t$ which ensures their smoothness over time.
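As a hedged sketch of the penalised objective in equations (6)–(7) (not the authors' implementation): the Poisson negative log-likelihood is combined with the L1 coupling penalty and a squared-exponential smoothness penalty applied to each neuron's time series of offsets. All data and parameter values below are illustrative, and the history term is taken to be the previous bin's counts:

```python
import numpy as np

rng = np.random.default_rng(2)
q, T = 6, 40
lam1, lam2, lam3 = 0.1, 0.1, 2.0          # illustrative regulariser settings

# Illustrative data and parameters for a single trial.
y = rng.poisson(0.3, size=(T, q))         # binned spike counts
b = 0.1 * rng.standard_normal((T, q))     # time-varying offsets b_t
d = np.full(q, -1.2)                      # baselines
D = 0.05 * rng.standard_normal((q, q))    # coupling matrix

def penalised_cost(b, d, D, y):
    """Negative Poisson log-likelihood of eq. (6) (up to the log y! constant)
    plus the L1 coupling penalty and the squared-exponential smoothness
    penalty of eq. (7), the latter applied to each neuron's time series."""
    s = np.vstack([np.zeros(q), y[:-1]])  # history: previous bin's counts
    eta = b + d + s @ D.T                 # log-rates, shape (T, q)
    nll = np.sum(np.exp(eta) - y * eta)
    l1 = lam1 * np.abs(D).sum()
    t = np.arange(T)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / lam3 ** 2)
    Kinv = np.linalg.inv(K + 1e-6 * np.eye(T))   # jitter for stability
    smooth = sum(b[:, i] @ Kinv @ b[:, i] for i in range(q)) / (2 * lam2)
    return nll + l1 + smooth

print(penalised_cost(b, d, D, y))
```

Since the objective is convex in $(b, d, D)$, any generic convex solver (or the orthant-wise method of [24], which handles the non-smooth L1 term) can minimise it.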
It is important to note that GLMs with Poisson conditionals cannot easily be extended to allow for instantaneous couplings between neurons. Suppose that we sought a model whose couplings were only instantaneous, with conditional distributions $y_{it} \mid y_{-i,t} \sim \mathrm{Poiss}\left(D_{(i,-i)} y_{-i,t}\right)$. It can be verified that the model $P(y) = \frac{1}{Z} \exp\left(y^{\top} J y\right) / \prod_i y_i!$, which could be regarded as the Poisson equivalent of the Ising model [25], would provide such a structure (as long as $J$ has a zero diagonal). In this model, $P(y_{it} \mid y_{-i,t}) \propto \exp\left(y_{i,t} \sum_{j \neq i} D_{ij} y_{j,t}\right) / y_{i,t}!$. One might imagine that the parameters $J$ could be learnt by maximizing each of the conditional likelihoods over a row of $J$ (effectively maximising the pseudo-likelihood), and one could sample counts by Gibbs sampling, again exploiting the fact that the conditional distributions are all Poisson. However, an obvious prerequisite would be that a $Z$ exists for which the model is normalised. Unfortunately, this becomes impossible as soon as any entry of $J$ is positive. For example, if entry $J_{ij}$ is positive, then we can easily construct a firing pattern $y$ for which probabilities diverge. Let the pattern $y(n)$ have value $n$ at entries $i$ and $j$, and zeros otherwise. Then, for large $n$, we find that $\log P(y(n)) \approx n^2 J_{ij} - 2\log(n!)$, which is dominated by the quadratic term, and therefore diverges, rendering the model unnormalizable. Thus, this "Poisson equivalent" of the Ising model cannot model positive interactions between neurons, limiting its value.
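The divergence argument can be checked numerically. The sketch below (our own illustration, with an arbitrary positive $J_{ij} = 0.1$ and all other entries zero) evaluates the unnormalised log-probability $n^2 J_{ij} - 2\log(n!)$ of the pattern $y(n)$; it grows without bound, so no finite normaliser $Z$ can exist:

```python
import math

def log_unnorm_prob(n, Jij=0.1):
    """log of exp(y'Jy) / prod(y_i!) for the pattern y(n): n spikes at
    units i and j, zero elsewhere, with a single positive entry J_ij.
    This equals n^2 * Jij - 2*log(n!)."""
    return Jij * n * n - 2 * math.lgamma(n + 1)

vals = [log_unnorm_prob(n) for n in (1, 10, 100, 1000)]
print(vals)  # the quadratic term eventually dominates 2*log(n!) ~ 2n*log(n)
```

The values dip at small $n$ (where the factorials dominate) but grow without bound thereafter, confirming that the quadratic coupling term wins asymptotically.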
The Poisson likelihood of the PLDS requires approximation and is computationally cumbersome.
An apparently less veridical alternative would be to model counts as conditionally Gaussian given
the latent state. We used the EM algorithm [9] to fit a linear dynamical system model with Gaussian
noise and driving inputs [17] (GLDS). In comparison with the Poisson model, the GLDS has an
additional set of q parameters corresponding to the variances of the Gaussian observation noise.
Finally, we also compared PLDS to Gaussian Process Factor Analysis (GPFA) [8], a Gaussian model
in which the latent trajectories are drawn not from a linear dynamical system, but from a more
general Gaussian Process with (here) a squared-exponential kernel. We did not include the driving
inputs bt in this model, and used the full model for co-smoothing, i.e. we did not orthogonalise its
filters as was done in [8].
We quantified goodness-of-fit using two measures of "leave-one-neuron-out prediction" accuracy on test data (see [8] for more detail). Each neuron's firing rate was first predicted using the activity of all other neurons on each test trial. For the GLM (but not PLDS), predictions reported were based on the past activity of other neurons, but also used the observed past activity of the neuron being predicted (results exploiting all data from other neurons were similar). Then we calculated the difference between the total variance and the residual variance around this prediction, $M_{i,k} = \mathrm{var}(y^i_{k,1:T}) - \mathrm{MSE}(y^i_{k,1:T}, y_{\mathrm{pred}})$. Here, the predicted firing rate is a vector of length $T$, and the variance is computed over all times $t = 1, \dots, T$ in trial $k$. Positive values indicate that prediction is more accurate than a constant prediction equal to the true mean activity of that neuron on that trial. We also constructed a receiver operating characteristic (ROC) for deciding, based on the predicted firing rates, which bins were likely to contain at least one spike, and measured the area under this curve (AUC) [7, 26]. This measure ranges between 0.5 and 1, with a value of 1 reflecting correct identification of spike-containing bins, even if the predicted number of spikes is incorrect.
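A minimal sketch of the two goodness-of-fit measures (the function names and the synthetic data are our own, not the authors' code):

```python
import numpy as np

def var_minus_mse(y_true, y_pred):
    """Score M_{i,k}: variance of a neuron's counts over a trial minus the
    MSE of the prediction; positive means better than predicting the mean."""
    return np.var(y_true) - np.mean((y_true - y_pred) ** 2)

def auc_spike_detection(y_true, rate_pred):
    """Area under the ROC curve for detecting spike-containing bins,
    ranking bins by predicted rate (Mann-Whitney formulation)."""
    spike = y_true > 0
    pos, neg = rate_pred[spike], rate_pred[~spike]
    if len(pos) == 0 or len(neg) == 0:
        return 0.5
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties          # ties count half, as usual

rng = np.random.default_rng(3)
rate = np.exp(rng.standard_normal(200) - 1.5)   # synthetic predicted rates
y = rng.poisson(rate)                            # synthetic observed counts
print(var_minus_mse(y, rate), auc_spike_detection(y, rate))
```

The AUC is deliberately insensitive to the predicted spike count in each bin, only to the ranking of bins, which is why the two measures can order models differently.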
2.4 Details of neural recordings and choice of parameters
We evaluated the methods described above on multi-electrode recordings from the motor cortex
of a behaving monkey. The details of the data are described elsewhere [8]. Briefly, spikes were
recorded with a 96-electrode array (Blackrock, Salt Lake City, UT) implanted into motor areas of a
rhesus macaque (monkey G) performing a delayed center-out reach task. For the analyses presented
here, data came from 108 trials on which the monkey was instructed to reach to one target. We used
1200 ms of data from each trial, from 200ms before target onset until the cue to move was presented.
We included 92 units (single and multi-units) with robust delay activity. Spike trains were binned at
10ms which resulted in 8.13% of bins containing at least one spike, and in 0.61% of bins containing
more than one spike. For goodness-of-fit analyses, we performed 4-fold cross-validation, splitting
the data into four non-overlapping test folds with 27 trials each.
For the PLDS model, dimensionality of the latent state varied from 1 to 20. Models either had no direct history-dependence (i.e. $D = 0$), or used spike history mapped to a set of 4 basis functions formed by orthogonalising decaying exponentials with time constants 0.1, 10, 20, and 40 ms (similar to those used in [1]). The history term $s_t$ was then obtained by projecting spike counts in the previous 100 ms onto each of these functions. The exponential with 0.1 ms time constant effectively covered only the previous time bin and was thus able to model refractoriness. In this case, $D$ was of size $q \times 4q$, with only 4 non-zero elements in each row. For the GLM, we varied the sparsity parameter $\lambda_1$ from 0 to 1 (yielding estimates of $D$ that ranged from a dense matrix to entirely 0), and computed prediction performance at each prior setting. After exploratory runs, the parameters of the smoothness prior were set to $\lambda_2 = 0.1$ and $\lambda_3 = 20$ ms.
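The history basis can be sketched as follows; the orthogonalisation scheme (QR over 10 ms bins spanning the previous 100 ms) is our assumption, as the text does not spell it out:

```python
import numpy as np

# Decaying-exponential history basis over the previous 100 ms (10 bins of
# 10 ms), orthogonalised by QR; the exact scheme in the paper may differ.
bins = np.arange(1, 11) * 10.0                # 10 ... 100 ms into the past
taus = np.array([0.1, 10.0, 20.0, 40.0])      # time constants in ms
raw = np.exp(-bins[:, None] / taus[None, :])  # shape (10 bins, 4 functions)
basis, _ = np.linalg.qr(raw)                  # orthonormal columns

# s_t: project a neuron's recent counts (most recent bin first) onto the basis.
recent_counts = np.array([1, 0, 0, 2, 0, 0, 0, 1, 0, 0], dtype=float)
s_t = basis.T @ recent_counts
print(basis.shape, s_t)

# The tau = 0.1 ms column decays within the first bin, so its orthonormalised
# version is concentrated there -- it effectively models refractoriness.
```

Stacking one such 4-vector per neuron yields the $4q$-dimensional history term, consistent with $D$ being of size $q \times 4q$ with 4 non-zero entries per row.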
3 Results
3.1 Goodness-of-fit of dynamical system models and GLMs
We first compared the goodness-of-fit of PLDS with $p = 5$ latent dimensions against those of GLMs. For all choices of the regularization parameter $\lambda_1$ tested, we found that the prediction performance of
Figure 1: Quantifying goodness-of-fit. A) Prediction performance (variance minus mean-squared error on test-set) of various coupled GLMs (10 ms history; 2 variants with 100 ms history; 150 ms history) plotted against sparsity in the filter matrix $D$ generated by different choices of $\lambda_1$. For all $\lambda_1$, GLM prediction was poorer than that of PLDS with $p = 5$. Error bars on PLDS performance are standard errors of the mean across trials. B) As A, measuring performance by area under the ROC curve (AUC). C) Prediction performance of different latent variable models (GPFA, and LDSs with Gaussian, Poisson or history-dependent Poisson noise) on the test-set. Black dots indicate dimensionalities where PLDS with 100 ms history is significantly better than GLDS ($p < 0.05$, pairwise comparisons of trials). PLDS outperforms alternatives, and performance plateaus at small latent dimensionalities. D) As C, but using AUC to quantify prediction performance. The ordering of the methods (at the optimal dimensionality) is similar, but there is no advantage of PLDS for higher dimensional models.
GLMs was inferior to that of PLDS (Fig. 1A). This was true for GLMs with history terms of length
10ms, 100ms or 150ms (with 1, 4 or 5 basis functions each, which were equivalent to the history
functions used for the spiking-history in the dynamical system model, with an additional 80 ms
time-constant exponential as the 5th basis function). To ensure that this difference in performance
is not due to the GLM over-fitting the terms $b_t$ (which have $q \cdot T$ parameters for the GLM, but only $p \cdot T$ parameters for PLDS), we fitted both GLMs and PLDS without those filters. In this case, the
prediction performance of both models decreased slightly, but the latent variable models still had
substantially better prediction performance.
Our performance metric based on the mean-squared error is sensitive both to the prediction of which
bins contain spikes, as well as to how many they contain. To quantify the accuracy with which our
models predicted only the absence or presence of spikes, we calculated the area under the curve
(AUC) of the receiver operating characteristic [7]. As can be seen in Fig. 1 B the PLDS outperformed
the GLMs over all choices of the regularization parameter $\lambda_1$.
Next, we investigated whether a more realistic spiking noise model would further improve the performance of the dynamical system model, and how this would depend on the latent dimensionality $p$. We therefore compared our models (GPFA, GLDS, PLDS, PLDS with 100 ms history) for different choices of the latent dimensionality $p$. When quantifying prediction performance using the mean-squared error, we found that for all four models, prediction performance on the test-set increased strongly with dimensionality for small dimensions, but plateaued at about 8 to 10 dimensions (see
Fig. 1C). Thus, of the models considered here, a low-dimensional latent variable provides the best
fit to the data.
Figure 2: Temporal structure of cross-correlations. A) Average temporal cross-correlation in four
groups of neurons (color-coded from most to least correlated), and comparison with correlations
captured by the dynamical system models with Gaussian, Poisson or history-dependent Poisson
noise. All three model correlations agree well with the data. B) Comparison of GLMs with differing
history-dependence with cortical recordings; the correlations of the models differ markedly from
those of the data, and do not have a peak at zero time-lag.
We also found that models with the more realistic spiking noise model (PLDS, and PLDS 100ms)
had a small, but consistent performance benefit over the computationally more efficient Gaussian
models (GLDS, GPFA). However, for the dataset and comparison considered here (which was based
on predicting the mean activity averaged over all possible spiking histories), we only found a small
advantage of also adding single-neuron dynamics (i.e. the spike-history filters in D) to the spiking
noise model. If we compared the models using their ability to predict population activity on the
next time-step from the observed population history, single-neuron filters did have an effect. In this
prediction task, PLDS with history filters performed best, in particular better than GLMs.
When using AUC rather than mean-squared-error to quantify prediction performance, we found
similar results: Low-dimensional models showed best performance, spiking models slightly outperformed Gaussian ones, and adding single-neuron dynamics yielded only a small benefit. In addition,
when using AUC, the performance benefit of PLDS over GLDS was smaller, and was significant
only at those state-dimensionalities for which overall prediction performance was best. Finally, both
GPFA and GLDS at p = 5 outperformed all GLMs, both for using AUC and mean-squared-error.
Thus, all four of our latent variable models provided superior fits to the dataset than GLMs.
3.2
Reproducing the correlations of cortical population activity
In the introduction, we argued that dynamical system models would be more appropriate for capturing the typical temporal structure of cross-neural correlations in cortical multi-cell recordings. We
explicitly tested this claim in our cortical recordings. First, we subtracted the time-varying mean
firing rate (PSTH) of each neuron to eliminate correlations induced by similarity in mean firing
rates. Then, we calculated time-lagged cross-correlations for each pair of neurons, using 10ms bins.
For display purposes, we divided neurons into 4 groups (color coded in Fig. 2) according to their
total correlation (using summed correlation coefficients with all other neurons), and calculated the
average pairwise correlation in each group. Fig. 2A shows the resulting average time-lagged correlations, and shows that both dynamical system models accurately capture this aspect of the correlation
structure of the data. In contrast, Fig. 2B shows that the temporal correlations of the GLM differ markedly from the real data.¹ As mentioned before, this GLM is also fit at 10 ms resolution, leaving
open the possibility that fitting it at a finer temporal resolution would yield samples which more
closely reflect the recorded correlations.
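A sketch of the time-lagged cross-correlation analysis (PSTH subtraction followed by per-neuron normalisation); the synthetic shared-latent data and the function name are our own illustration, not the authors' code:

```python
import numpy as np

def lagged_cross_correlation(counts, max_lag):
    """Time-lagged cross-correlations after PSTH subtraction.
    counts: (trials, T, q) binned spike counts. Returns an array of shape
    (2*max_lag+1, q, q); entry [l, i, j] correlates neuron i at time t
    with neuron j at time t + lag, for lag = l - max_lag."""
    resid = counts - counts.mean(axis=0, keepdims=True)     # remove PSTH
    K, T, q = resid.shape
    resid = resid / resid.reshape(-1, q).std(axis=0)        # per-neuron scale
    corr = np.zeros((2 * max_lag + 1, q, q))
    for li, lag in enumerate(range(-max_lag, max_lag + 1)):
        if lag >= 0:
            a, b = resid[:, :T - lag, :], resid[:, lag:, :]
        else:
            a, b = resid[:, -lag:, :], resid[:, :T + lag, :]
        corr[li] = np.einsum('kti,ktj->ij', a, b) / (K * a.shape[1])
    return corr

# Synthetic data: a shared latent drive induces zero-lag correlations.
rng = np.random.default_rng(4)
shared = rng.standard_normal((20, 100, 1))                  # common drive
counts = rng.poisson(np.exp(-1.5 + 0.5 * shared + np.zeros((20, 100, 3))))
cc = lagged_cross_correlation(counts, max_lag=5)
print(cc.shape, cc[5, 0, 1])
```

Because the shared drive hits all neurons in the same bin, the resulting cross-correlograms peak at zero lag, which is the signature the dynamical system models reproduce in Fig. 2A.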
3.3 Reproducing the distribution of spike-counts across the population
In the above, we showed that the PLDS model outperforms both Gaussian models and GLMs with
respect to our performance-metric, and that samples from both dynamical systems accurately capture the temporal correlation structure of the data. Finally, we looked at an aggregate measure of
¹We used $\lambda_1 = 0$ (no regularization) for this figure; results with $\lambda_1$ optimized for prediction performance vastly underestimate correlations in the data.
Figure 3: Modeling population spike counts. Distribution of the population spike counts, and comparison with distributions from PLDS, GLDS and two versions of the GLM with 150 ms history dependence (GLM with no regularization, GLM2 with optimal sparsity).
population activity, namely the distribution of population spike counts, i.e. the distribution of the
total number of spikes across the population per time bin. This distribution is influenced both by the
single-neuron spike-count distributions and second- and higher-order correlations across neurons.
Fig. 3 shows that the PLDS model accurately reproduces the spike-count distribution in the data,
whereas the other two models do not. The GLDS model underestimates the frequency of high spike
counts, despite accurately matching both the mean and the variance of the distribution. For the GLM
(using 150ms history, and either no regularization or optimal regularization), the frequency of rare
events is either over- or under-estimated. This could be further indication that the GLM does not
fully capture the fact that variability is shared across many cells in the population.
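The population spike-count distribution can be computed with a simple histogram of summed counts per bin. The sketch below uses synthetic surrogates (our own construction, not the recorded data) to illustrate why shared variability fattens the tail relative to independent Poisson spiking:

```python
import numpy as np

def population_count_distribution(counts, max_count=60):
    """Distribution of the summed spike count across the population per time
    bin. counts: (..., q) array; sums over neurons, then histograms (counts
    above max_count are dropped before normalising)."""
    total = counts.sum(axis=-1).ravel()
    hist = np.bincount(total, minlength=max_count + 1)[:max_count + 1]
    return hist / hist.sum()

rng = np.random.default_rng(5)
q, n_bins, rate = 92, 5000, 0.08
# Surrogate 1: independent Poisson neurons.
indep = rng.poisson(rate, size=(n_bins, q))
# Surrogate 2: a shared multiplicative gain (mean ~1) across the population,
# mimicking shared variability; this fattens the population-count tail.
gain = np.exp(0.5 * rng.standard_normal((n_bins, 1)) - 0.125)
shared = rng.poisson(rate * gain * np.ones((1, q)))
p_ind = population_count_distribution(indep)
p_sh = population_count_distribution(shared)
print(p_ind[20:].sum(), p_sh[20:].sum())   # tail mass beyond 20 spikes/bin
```

The shared-gain surrogate plays the role of the latent state in the PLDS: bins where the gain is high produce many spikes in most neurons simultaneously, producing exactly the heavy tail seen in the recorded data.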
4 Discussion
We explored a statistical model of cortical population recordings based on a latent dynamical system
with count-process observations. We argued that such a model provides a more natural modeling
choice than coupled spike-response GLMs for cortical array-recordings; and indeed, this model did
fit motor-cortical multi-unit recording better, and more faithfully reproduced the temporal structure
of cross-neural correlations. GLMs have many attractive properties, and given the flexibility of
the model class, it is impossible to rule out that some coupled GLM with finer temporal resolution,
possibly nonlinear history dependencies, and cleverly chosen regularization would yield better cross-validation performance. We argued here that latent variable models yield a more appropriate model
of cross-neural correlations with zero-lag peaks: In GLMs, one has to use a fine discretization of
the time-axis (which can be computationally intensive) or work in continuous time to achieve this.
Thus, they might constitute good point-process models at fine time-scales, but arguably not the right
count-process model to model neural recordings at coarser time-scales.
We also showed that a model with count-process observations yields better fits to our data than
ones with a Gaussian noise model, and that it has a more realistic distribution of population spike
counts. Given that spiking data is discrete and therefore non-Gaussian, this might not seem surprising. However, it is important to note that the Gaussian model has free parameters for the single-neuron variability, whereas the conditional variance of the Poisson model is constrained to equal the
mean. For data in which this assumption is invalid, use of other count models, such as a negative
binomial distribution, might be more appropriate. In addition, fitting the PLDS model requires simplifying approximations, and these approximations could offset any gain in prediction performance.
As measured by our co-smoothing metrics, the performance advantage of our count-process over the
Gaussian noise model was small, and the question of whether this advantage would justify the considerable additional computational cost of the count-process model will depend on the application at
hand. In addition, any comparison of statistical models depends on the data used, as different methods are appropriate for datasets with different properties. For the recordings we considered here,
a dynamical system model with count-process observations worked best, but there will be datasets
for which either GLMs, or GLDS or GPFA provide the most appropriate model. Finally, the choice
of the most appropriate model depends on the analysis or prediction question of interest. While
we used a co-smoothing metric to quantify model performance, different models might be more
suitable for decoding reaching movements from population activity [11], or inferring the underlying
anatomical connectivity from extracellular recordings.
Acknowledgements
We acknowledge support from the Gatsby Charitable Foundation, an EU Marie Curie Fellowship to JHM,
EPSRC EP/H019472/1 to JPC, the Defense Advanced Research Projects Agency (DARPA) through "Reorganization and Plasticity to Accelerate Injury Recovery" (REPAIR; N66001-10-C-2010), NIH CRCNS R01-NS054283 to KVS and MS, as well as NIH Pioneer 1DP1OD006409 to KVS.
References
[1] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[2] G. Santhanam, B. M. Yu, V. Gilja, S. I. Ryu, A. Afshar, M. Sahani, and K. V. Shenoy. Factor-analysis methods for higher-performance neural prostheses. J Neurophysiol, 102(2):1315–1330, 2009.
[3] E. S. Chornoboy, L. P. Schramm, and A. F. Karr. Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59(4):265–275, 1988.
[4] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, London, 1989.
[5] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network, 15(4):243–262, 2004.
[6] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ Press, 2004.
[7] W. Truccolo, L. R. Hochberg, and J. P. Donoghue. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci, 13(1):105–111, 2010.
[8] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102(1):614–635, 2009.
[9] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Comput, 11(2):305–345, 1999.
[10] A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Comput, 15(5):965–991, 2003.
[11] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski. Population decoding of motor cortical activity using a generalized linear model with hidden states. J Neurosci Methods, 189(2):267–280, 2010.
[12] J. E. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems, 18(4):375–407, 2007.
[13] M. Vidne, Y. Ahmadian, J. Shlens, J. W. Pillow, J. Kulkarni, E. J. Chichilnisky, E. P. Simoncelli, and L. Paninski. A common-input model of a complete network of ganglion cells in the primate retina. In Computational and Systems Neuroscience, 2010.
[14] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology, 17(5):609–618, 2007.
[15] D. Y. Tso, C. D. Gilbert, and T. N. Wiesel. Relationships between horizontal interactions and functional architecture in cat striate cortex revealed by cross-correlation analysis. J Neurosci, 6(4):1160–1170, 1986.
[16] A. Jackson, V. J. Gee, S. N. Baker, and R. N. Lemon. Synchrony between neurons with similar muscle fields in monkey motor cortex. Neuron, 38(1):115–125, 2003.
[17] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput, 18(1):80–118, 2006.
[18] B. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. In Advances in Neural Information Processing Systems, volume 18, pages 1545–1552. MIT Press, Cambridge, 2006.
[19] J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Inferring neural firing rates from spike trains using Gaussian processes. Advances in Neural Information Processing Systems, 20:329–336, 2008.
[20] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput, 16(5):971–998, 2004.
[21] B. Yu, J. Cunningham, K. Shenoy, and M. Sahani. Neural decoding of movements: from linear to nonlinear trajectory models. In Neural Information Processing, pages 586–595. Springer, 2008.
[22] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. J Comput Neurosci, 29(1-2):107–126, 2010.
[23] Y. Ahmadian, J. W. Pillow, and L. Paninski. Efficient Markov chain Monte Carlo methods for decoding neural spike trains. Neural Comput, 23(1):46–96, 2011.
[24] G. Andrew and J. Gao. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning, pages 33–40. ACM, 2007.
[25] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, 2006.
[26] T. D. Wickens. Elementary Signal Detection Theory. Oxford University Press, 2002.
distributed:1 benefit:3 curve:2 dimension:4 cortical:15 calculated:5 pillow:3 instructed:1 adaptive:1 offit:1 approximate:2 compact:1 reproduces:3 global:1 overfitting:1 receiver:2 assumed:1 nelder:1 gilja:1 continuous:1 latent:43 jhm:1 learn:1 nature:2 robust:1 flank:1 argmaxx:1 mse:3 investigated:1 did:4 timescales:2 dense:1 linearly:1 neurosci:4 noise:11 arise:1 n2:1 x1:1 neuronal:1 fig:7 crcns:1 roc:1 gatsby:7 inferring:2 explicit:1 exponential:6 comput:6 third:1 dozen:1 load:1 xt:1 explored:1 offset:1 blackrock:1 essential:1 naively:1 exists:1 adding:2 effectively:2 magnitude:1 nat:1 conditioned:3 paninski:7 likely:3 ganglion:1 gao:2 visual:1 ykt:5 tracking:1 gpfa:8 springer:1 determines:2 acm:1 conditional:6 quantifying:2 invalid:2 shared:9 absence:1 considerable:1 hard:1 included:1 typical:2 characterised:2 generalisation:1 justify:1 called:1 total:3 ece:1 college:3 support:1 arises:1 kulkarni:2 tested:2 correlated:3 |
3,633 | 429 | Connectionist Music Composition Based on
Melodic and Stylistic Constraints
Michael C. Mozer
Department of Computer Science
and Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Todd Soukup
Department of Electrical
and Computer Engineering
University of Colorado
Boulder, CO 80309-0425
Abstract
We describe a recurrent connectionist network, called CONCERT, that uses a
set of melodies written in a given style to compose new melodies in that
style. CONCERT is an extension of a traditional algorithmic composition technique in which transition tables specify the probability of the next note as a
function of previous context. A central ingredient of CONCERT is the use of a
psychologically-grounded representation of pitch.
1 INTRODUCTION
In creating music, composers bring to bear a wealth of knowledge about musical conventions. If we hope to build automatic music composition systems that can mimic the abilities of human composers, it will be necessary to incorporate knowledge about musical
conventions into the systems. However, this knowledge is difficult to express: even human composers are unaware of many of the constraints under which they operate.
In this paper, we describe a connectionist network that composes melodies. The network
is called CONCERT, an acronym for connectionist composer of erudite tunes. Musical
knowledge is incorporated into CONCERT via two routes. First, CONCERT is trained on a
set of sample melodies from which it extracts rules of note and phrase progressions.
Second, we have built a representation of pitch into CONCERT that is based on psychological studies of human perception. This representation, and an associated theory of generalization proposed by Shepard (1987), provides CONCERT with a basis for judging the
similarity among notes, for selecting a response, and for restricting the set of alternatives
that can be considered at any time.
2 TRANSITION TABLE APPROACHES TO COMPOSITION
We begin by describing a traditional approach to algorithmic music composition using
Markov transition tables. This simple but interesting technique involves selecting notes
sequentially according to a table that specifies the probability of the next note as a function of the current note (Dodge & Jerse, 1985). The tables may be hand-constructed according to certain criteria or they may be set up to embody a particular musical style. In
the latter case, statistics are collected over a set of examples (hereafter, the training set)
and the table entries are defined to be the transition probabilities in these examples.
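The statistics-gathering variant above is easy to sketch in code. Below is a minimal first-order (order-1) illustration in Python; the note names and the two-melody training set are invented for the example:

```python
import random
from collections import Counter, defaultdict

def build_transition_table(melodies):
    """Count first-order note-to-note transitions in the training set and
    normalize the counts into transition probabilities."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for cur, nxt in zip(melody, melody[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for cur, ctr in counts.items()}

def sample_next(table, cur, rng=random):
    """Select the next note in accordance with the table's distribution."""
    notes, probs = zip(*table[cur].items())
    return rng.choices(notes, weights=probs, k=1)[0]

# Invented training set, for illustration only.
training_set = [["C4", "D4", "E4", "C4"], ["C4", "E4", "G4", "C4"]]
table = build_transition_table(training_set)
# table["C4"] is {"D4": 0.5, "E4": 0.5}: each continuation was seen once.
```

A melody is composed by repeatedly calling `sample_next`. A higher-order table would key on the previous n-1 notes instead of one, which is exactly where the exponential blow-up described above sets in.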
In melodies of any complexity, musical structure cannot be fully described by pairwise
statistics. To capture additional structure, the transition table can be generalized from a
two-dimensional array to n dimensions. In the n -dimensional table, often referred to as
a table of order n -1, the probability of the next note is indicated as a function of the previous n -1 notes. Unfortunately, extending the transition table in this manner gives rise
to two problems. First, the size of the table explodes exponentially with the amount of
context and rapidly becomes unmanageable. Second, a table representing the high-order
structure masks whatever low-order structure is present.
Kohonen (1989) has proposed a scheme by which only the relevant high-order structure
is represented. The scheme is symbolic algorithm that, given a training set of examples,
produces a collection of rules - a context-sensitive grammar - sufficient for reproducing most or all of the structure inherent in the set. However, because the algorithm attempts to produce deterministic rules - rules that always apply in a given context - the
algorithm will not discover regularities unless they are absolute; it is not equipped to deal
with statistical properties of the data. Both Kohonen's musical grammar and the transition table approach suffer from the further drawback that a symbolic representation of
notes does not facilitate generalization. For instance, invariance under transposition is
not directly representable. In addition, other similarities are not encoded, for example,
the congruity of octaves.
Connectionist learning algorithms offer the potential of overcoming the various limitations of transition table approaches and Kohonen musical grammars. Connectionist algorithms are able to discover relevant structure and statistical regularities in sequences
(e.g., Elman, 1990; Mozer, 1989), and to consider varying amounts of context, noncontiguous context, and combinations of low-order and high-order regularities. Connectionist approaches also promise better generalization through the use of distributed representations. In a local representation, where each note is represented by a discrete symbol,
the sort of statistical contingencies that can be discovered are among notes. However, in
a distributed representation, where each note is represented by a set of continuous feature
values, the sort of contingencies that can be discovered are among features. To the extent that two notes share features, featural regularities discovered for one note may
transfer to the other note.
3 THE CONCERT ARCHITECTURE
CONCERT is a recurrent network architecture of the sort studied by Elman (1990). A
melody is presented to it, one note at a time, and its task at each point in time is to predict
the next note in the melody. Using a training procedure described below, CONCERT's
connection strengths are adjusted so that it can perform this task correctly for a set of
training examples. Each example consists of a sequence of notes, each note being
characterized by a pitch and a duration. The current note in the sequence is represented
in the input layer of CONCERT, and the prediction of the next note is represented in the
output layer. As Figure 1 indicates, the next note is encoded in two different ways: The
next-note-distributed (or NND) layer contains CONCERT'S internal representation of the
note, while the next-note-local (or NNL) layer contains one unit for each alternative. For
now, it should suffice to say that the representation of a note in the NND layer, as well as
in the input layer, is distributed, i.e., a note is indicated by a pattern of activity across the
units. Because such patterns of activity can be quite difficult to interpret, the NNL layer
provides an alternative, explicit representation of the possibilities.
Figure 1: The CONCERT Architecture
The context layer represents the temporal context in which a prediction is made.
When a new note is presented in the input layer, the current context activity pattern is integrated with the new note to form a new context representation. Although CONCERT
could readily be wired up to behave as a k -th order transition table, the architecture is far
more general. The training procedure attempts to determine which aspects of the input
sequence are relevant for making future predictions and retain only this task-relevant information in the context layer. This contrasts with Todd's (1989) seminal work on connectionist composition in which the recurrent context connections are prewired and fixed,
which makes the nature of the information Todd's model retains independent of the examples on which it is trained.
Once CONCERT has been trained, it can be run in composition mode to create new pieces.
This involves first seeding CONCERT with a short sequence of notes, perhaps the initial
notes of one of the training examples. From this point on, the output of CONCERT can be
fed back to the input, allowing CONCERT to continue generating notes without further
external input. Generally, the output of CONCERT does not specify a single note with absolute certainty; instead, the output is a probability distribution over the set of candidates.
It is thus necessary to select a particular note in accordance with this distribution. This is
the role of the selection process depicted in Figure 1.
3.1 ACTIVATION RULES AND TRAINING PROCEDURE
The activation rule for the context units is

    c_i(n) = s( Σ_j w_ij x_j(n) + Σ_j v_ij c_j(n-1) ) ,    (1)

where c_i(n) is the activity of context unit i following processing of input note n (which we refer to as step n), x_j(n) is the activity of input unit j at step n, w_ij is the connection strength from unit j of the input to unit i of the context layer, v_ij is the connection strength from unit j to unit i within the context layer, and s is a sigmoid activation function rescaled to the range (-1,1). Units in the NND layer follow a similar rule:

    nnd_i(n) = s( Σ_j u_ij c_j(n) ) ,

where nnd_i(n) is the activity of NND unit i at step n and u_ij is the strength of connection from context unit j to NND unit i.
The transformation from the NND layer to the NNL layer is achieved by first computing
the distance between the NND representation, nnd(n), and the target (distributed) representation of each pitch i, p_i:

    d_i = | nnd(n) - p_i | ,

where |·| denotes the L2 vector norm. This distance is an indication of how well the
NND representation matches a particular pitch. The activation of the NNL unit
corresponding to pitch i, nnl_i, increases inversely with the distance:

    nnl_i(n) = e^(-d_i) / Σ_j e^(-d_j) .
This normalized exponential transform (proposed by Bridle, 1990, and Rumelhart, in
press) produces an activity pattern over the NNL units in which each unit has activity in
the range (0,1) and the activity of all units sums to 1. Consequently, the NNL activity
pattern can be interpreted as a probability distribution - in this case, the probability that
the next note has a particular pitch.
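This distance-to-probability mapping is an ordinary softmax over negative distances (Bridle's normalized exponential). A small self-contained sketch; the two-dimensional target vectors here are arbitrary stand-ins, not the actual distributed pitch representations:

```python
import math

def nnl_activations(nnd, targets):
    """Turn an NND output vector into a probability distribution over
    pitches: each pitch i gets weight exp(-d_i), where d_i is the L2
    distance to its target representation, normalized to sum to 1."""
    def l2(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    d = {pitch: l2(nnd, rep) for pitch, rep in targets.items()}
    z = sum(math.exp(-di) for di in d.values())
    return {pitch: math.exp(-di) / z for pitch, di in d.items()}

# Stand-in target representations (not the real PHCCCF codes).
targets = {"C4": (0.0, 0.0), "D4": (1.0, 0.0), "E4": (2.0, 0.0)}
probs = nnl_activations((0.1, 0.0), targets)
# The pitch whose target lies closest to the NND vector ("C4" here)
# receives the largest probability.
```

In composition mode, the note actually emitted would then be drawn from this distribution rather than chosen greedily.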
CONCERT is trained using the back propagation unfolding-in-time procedure (Rumelhart,
Hinton, & Williams, 1986) using the log likelihood error measure
    E = - Σ_{p,n} log nnl_tgt(n,p) ,

where p is an index over pieces in the training set and n an index over notes within a piece; tgt is the target pitch for note n of piece p.

3.2 PITCH REPRESENTATION
Having described CONCERT's architecture and training procedure, we turn to the
representation of pitch. To accommodate a variety of music, CONCERT needs the ability
to represent a range of about four octaves. Using standard musical notation, these pitches
are labeled as follows: C1, D1, ... , B1, C2, D2, ... B2, C3, ... C5, where C1 is the
lowest pitch and C5 the highest. Sharps are denoted by a #, e.g., F#3. The range
C1-C5 spans 49 pitches.
One might argue that the choice of a pitch representation is not critical because back propagation can, in principle, discover an alternative representation well suited to the task.
In practice, however, researchers have found that the choice of external representation is
a critical determinant of the network's ultimate performance (e.g., Denker et aI., 1987;
Mozer, 1987). Quite simply, the more task-appropriate information that is built into the
network, the easier the job the learning algorithm has. Because we are asking the net-
Connectionist Music Composition Based on Melodic and Stylistic Constraints
work to make predictions about melodies that people have composed or to generate
melodies that people perceive as pleasant, we have furnished CONCERT with a
psychologically-motivated representation of pitch. By this, we mean that notes that people judge to be similar have similar representations in the network, indicating that the
representation in the head matches the representation in the network.
Shepard (1982) has studied the similarity of pitches by asking people to judge the perceived similarity of pairs of pitches. He has proposed a theory of generalization
(Shepard, 1987) in which the similarity of two items is exponentially related to their distance in an internal or "psychological" representational space. (This is one justification
for the NNL layer computing an exponential function of distance.) Based on psychophysical experiments, he has proposed a five-dimensional space for the representation of
pitch, depicted in Figure 2.
Figure 2: Pitch Representation Proposed by Shepard (1982): pitch height, the chromatic circle, and the circle of fifths.
In this space, each pitch specifies a point along the pitch height (or PH) dimension, an
(x ,y) coordinate on the chromatic circle (or CC), and an (x ,y) coordinate on the circle of
fifths (or CF). We will refer to this representation as PHCCCF, after its three components. The pitch height component specifies the logarithm of the frequency of a pitch;
this logarithmic transform places tonal half-steps at equal spacing from one another along
the pitch height axis. In the chromatic circle, neighboring pitches are a tonal half-step
apart. In the circle of fifths, the perfect fifth of a pitch is the next pitch immediately
counterclockwise. Figure 2 shows the relative magnitude of the various components to
scale. The proximity of two pitches in the five-dimensional PHCCCF space can be determined simply by computing the Euclidean distance between their representations.
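The PHCCCF coordinates can be written down directly from this description. In the sketch below the component weights H, R_CC and R_CF are placeholders set to 1.0 (Figure 2 draws the actual relative magnitudes to scale, and they are not reproduced here); pitches are given as semitone offsets above C1:

```python
import math

# Placeholder component weights; Shepard's relative magnitudes differ.
H, R_CC, R_CF = 1.0, 1.0, 1.0

def phcccf(semitone):
    """Five-dimensional PHCCCF point for a pitch given in semitones above C1."""
    height = H * semitone                          # pitch height (log frequency)
    cc = 2 * math.pi * (semitone % 12) / 12        # chromatic circle angle
    cf = 2 * math.pi * ((7 * semitone) % 12) / 12  # circle of fifths: +7 semitones
    return (height,
            R_CC * math.cos(cc), R_CC * math.sin(cc),
            R_CF * math.cos(cf), R_CF * math.sin(cf))

def pitch_distance(a, b):
    """Euclidean proximity of two pitches in PHCCCF space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(phcccf(a), phcccf(b))))
```

With these coordinates, pitches an octave apart (12 semitones) coincide on both circles and differ only in height, while a perfect fifth (7 semitones) is a single step away on the circle of fifths even though it is nearly opposite on the chromatic circle.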
A straightforward scheme for translating the PHCCCF representation into an activity pattern over a set of connectionist units is to use five units, one for pitch height and two
pairs to encode the (x ,y ) coordinates of the pitch on the two circles. Due to several problems, we have represented each circle over a set of 6 binary-valued units that preserves
the essential distance relationships among tones on the circles (Mozer, 1990). The
PHCCCF representation thus consists of 13 units altogether. Rests (silence) are assigned
a code that distinguishes them from all pitches. The end of a piece is coded by several
rests.
4 SIMULATION EXPERIMENTS
4.1 LEARNING THE STRUCTURE OF DIATONIC SCALES
In this simulation, we trained CONCERT on a set of diatonic scales in various keys over a
one octave range, e.g., D1 E1 F#1 G1 A1 B1 C#2 D2. Thirty-seven such scales
can be made using pitches in the C1-C5 range. The training set consisted of 28 scales
- roughly 75% of the corpus - selected at random, and the test set consisted of the
remaining 9. In 10 replications of the simulation using 20 context units, CONCERT
mastered the training set in approximately 55 passes. Generalization performance was
tested by presenting the scales in the test set one note at a time and examining CONCERT's
prediction. Of the 63 notes to be predicted in the test set, CONCERT achieved remarkable
performance: 98.4% correct. The few errors were caused by transposing notes one full
octave or one tonal half step.
To compare CONCERT with a transition table approach, we built a second-order transition
table from the training set data and measured its performance on the test set. The transition table prediction (i.e., the note with highest probability) was correct only 26.6% of the
time. The transition table is somewhat of a straw man in this environment: A transition
table that is based on absolute pitches is simply unable to generalize correctly. Even if
the transition table encoded relative pitches, a third-order table would be required to master the environment. Kohonen's musical grammar faces the same difficulties as a transition table.
4.2 LEARNING INTERSPERSED RANDOM WALK SEQUENCES
The sequences in this simulation were generated by interspersing the elements of two
simple random walk sequences. Each interspersed sequence had the following form: a1, b1, a2, b2, ..., a5, b5, where a1 and b1 are randomly selected pitches, a_{i+1} is one step up or down from a_i on the C major scale, and likewise for b_{i+1} and b_i. Each sequence
consisted of ten notes. CONCERT, with 25 context units, was trained on 50 passes through
a set of 200 examples and was then tested on an additional 100. Because it is impossible
to predict the second note in the interspersed sequences (b1) from the first (a1), this
prediction was ignored for the purpose of evaluating CONCERT's performance. CONCERT
achieved a performance of 91.7% correct. About half the errors were ones in which CONCERT transposed a correct prediction by an octave. Excluding these errors, performance
improved to 95.8% correct.
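Generating this training environment is straightforward. One detail the description leaves open is what happens when a walk reaches the edge of the available range; the sketch below simply clamps at the boundaries, which is an assumption of this illustration:

```python
import random

C_MAJOR = ["C3", "D3", "E3", "F3", "G3", "A3", "B3", "C4"]

def interspersed_walks(scale=C_MAJOR, steps=5, rng=random):
    """Return one sequence a1, b1, a2, b2, ..., a5, b5 built from two
    independent random walks on the given scale (clamped at the ends)."""
    def walk(start):
        pos, path = start, [start]
        for _ in range(steps - 1):
            pos = min(max(pos + rng.choice([-1, 1]), 0), len(scale) - 1)
            path.append(pos)
        return path
    a = walk(rng.randrange(len(scale)))
    b = walk(rng.randrange(len(scale)))
    return [scale[i] for pair in zip(a, b) for i in pair]

seq = interspersed_walks()   # ten notes: a1, b1, ..., a5, b5
```

Predicting a_{i+1} requires looking back two positions (to a_i) while ignoring the intervening b note, which is why a fixed second-order transition table fares so much worse than CONCERT here.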
To capture the structure in this environment, a transition table approach would need to
consider at least the previous two notes. However, such a transition table is not likely to
generalize well because, if it is to be assured of predicting a note at step n correctly, it must observe the note at step n-2 in the context of every possible note at step n-1. We
constructed a second-order transition table from CONCERT'S training set. Using a testing
criterion analogous to that used to evaluate CONCERT, the transition table achieved a performance level on the test set of only 67.1 % correct. Kohonen's musical grammar would
face the same difficulty as the transition table in this environment.
Connectionist Music Composition Based on Melodic and Stylistic Constraints
4.3 GENERATING NEW MELODIES IN THE STYLE OF BACH
In a final experiment, we trained CONCERT on the melody line of a set of ten simple
minuets and marches by J. S. Bach. The pieces had several voices, but the melody generally appeared in the treble voice. Importantly, to naive listeners the extracted melodies
sounded pleasant and coherent without the accompaniment.
In the training data, each piece was terminated with a rest marker (the only rests in the
pieces). This allowed CONCERT to learn not only the notes within a piece but also when
the end of the piece was reached. Further, each major piece was transposed to the key of
C major and each minor piece to the key of A minor. This was done to facilitate learning
because the pitch representation does not take into account the notion of musical key; a
more sophisticated pitch representation might avoid the necessity of this step.
In this simulation, each note was represented by a duration as well as a pitch. The duration representation consisted of five units and was somewhat analogous to the PHCCCF
representation for pitch. It allowed for the representation of sixteenth, eighth, quarter,
and half notes, as well as triplets. Also included in this simulation were two additional
input units. One indicated whether the piece was in a major versus minor key, the other
indicated whether the piece was in 3/4 meter versus 2/4 or 4/4. These inputs were fixed
for a given piece.
Learning the examples involves predicting a total of 1,260 notes altogether, no small feat.
CONCERT was trained with 40 hidden units for 3000 passes through the training set. The
learning rate was gradually lowered from .0004 to .0002. By the completion of training,
CONCERT could correctly predict about 95% of the pitches and 95% of the durations
correctly. New pieces can be created by presenting a few notes to start and then running
CONCERT in composition mode. One example of a composition produced by CONCERT is
shown in Figure 3. The primary deficiency of CONCERT's compositions is that they are
lacking in global coherence.
Figure 3: A Sample Composition Produced by CONCERT
5 DISCUSSION
Initial results from CONCERT are encouraging. CONCERT is able to learn musical structure
of varying complexity, from random walk sequences to Bach pieces containing nearly
200 notes. We presented two examples of structure that CONCERT can learn but that cannot be captured by a simple transition table or by Kohonen's musical grammar.
Beyond a more systematic examination of alternative architectures, work on CONCERT is
heading in two directions. First, the pitch representation is being expanded to account for
the perceptual effects of musical context and musical key. Second, CONCERT is being extended to better handle the processing of global structure in music. It is unrealistic to expect that CONCERT, presented with a linear string of notes, could induce not only local relationships among the notes, but also more global phrase structure, e.g., an AABA phrase
pattern. To address the issue of global structure, we have designed a network that
operates at several different temporal resolutions simultaneously (Mozer, 1990).
Acknowledgements
This research was supported by NSF grant IRI-9058450 and grant 90-21 from the James S.
McDonnell Foundation. Our thanks to Paul Smolensky, Yoshiro Miyata, Debbie Breen,
and Geoffrey Hinton for helpful comments regarding this work, and to Hal Eden and
Darren Hardy for technical assistance.
References
Bridle, J. (1990). Training stochastic model recognition algorithms as networks can lead to maximum mutual
information estimation of parameters. In D. S. Touretzky (Ed.), Advances in neural information processing systems 2 (pp. 211-217). San Mateo, CA: Morgan Kaufmann.
Dodge, C., & Jerse, T. A. (1985). Computer music: Synthesis, composition, and performance. New York:
Schirmer Books.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14, 179-212.
Kohonen, T. (1989). A self-learning musical grammar, or "Associative memory of the second kind". Proceedings of the 1989 International Joint Conference on Neural Networks, 1-5.
Mozer, M. C. (1987). RAMBOT: A connectionist expert system that learns by example. In M. Caudill & C.
Butler (Eds.), Proceedings of the IEEE First Annual International Conference on Neural Networks (pp.
693-700). San Diego, CA: IEEE Publishing Services.
Mozer, M. C. (1989). A focused back-propagation algorithm for temporal pattern recognition. Complex Systems, 3, 349-381.
Mozer, M. C. (1990). Connectionist music composition based on melodic, stylistic, and psychophysical constraints (Tech Report CU-CS-495-90). Boulder, CO: University of Colorado, Department of Computer Science.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in
the microstructure of cognition. Volume I: Foundations (pp. 318-362). Cambridge, MA: MIT
PresslBradford Books.
Rumelhart, D. E. (in press). Connectionist processing and learning as statistical inference. In Y. Chauvin & D.
E. Rumelhart (Eds.), Backpropagation: Theory, architectures, and applications. Hillsdale, NJ: Erlbaum.
Shepard, R. N. (1982). Geometrical approximations to the structure of musical pitch. Psychological Review, 89,
305-333.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237,
1317-1323.
Todd, P. M. (1989). A connectionist approach to algorithmic composition. Computer Music Journal, 13,
27-43.
| 429 |@word determinant:1 cu:1 norm:1 d2:1 simulation:6 accommodate:1 necessity:1 initial:2 contains:2 selecting:2 hereafter:1 accompaniment:1 hardy:1 current:3 activation:4 written:1 readily:1 must:1 seeding:1 designed:1 concert:49 half:5 selected:2 item:1 tone:1 short:1 transposition:1 provides:2 five:4 height:5 along:2 constructed:2 c2:1 replication:1 consists:2 compose:1 manner:1 pairwise:1 mask:1 roughly:1 elman:3 embody:1 encouraging:1 equipped:1 becomes:1 begin:1 discover:3 notation:1 suffice:1 lowest:1 kind:1 interpreted:1 string:1 finding:1 transformation:1 nj:1 temporal:3 certainty:1 every:1 ti:1 whatever:1 unit:25 grant:2 service:1 engineering:1 local:3 todd:4 accordance:1 approximately:1 might:2 studied:2 mateo:1 co:3 range:6 bi:2 thirty:1 testing:1 practice:1 backpropagation:1 procedure:5 universal:1 induce:1 melodic:5 symbolic:2 cannot:2 selection:1 context:21 impossible:1 seminal:1 deterministic:1 williams:2 straightforward:1 iri:1 duration:4 focused:1 resolution:1 immediately:1 perceive:1 rule:7 array:1 importantly:1 d1:2 handle:1 notion:1 coordinate:3 justification:1 analogous:2 target:2 diego:1 colorado:3 us:1 element:1 rumelhart:6 recognition:2 labeled:1 role:1 electrical:1 capture:2 rescaled:1 highest:2 mozer:12 environment:4 complexity:2 trained:8 dodge:2 basis:1 joint:1 represented:7 various:3 listener:1 describe:2 quite:2 encoded:3 valued:1 say:1 grammar:7 ability:2 statistic:2 transform:2 final:1 associative:1 sequence:12 indication:1 net:1 kohonen:7 relevant:4 neighboring:1 rapidly:1 representational:1 sixteenth:1 regularity:4 extending:1 produce:3 wired:1 generating:2 perfect:1 recurrent:3 completion:1 measured:1 minor:3 job:1 predicted:1 involves:3 judge:2 c:1 convention:2 direction:1 drawback:1 correct:6 nnd:9 stochastic:1 exploration:1 human:3 translating:1 melody:13 hillsdale:1 microstructure:1 generalization:6 adjusted:1 extension:1 proximity:1 considered:1 algorithmic:3 predict:3 cognition:1 major:4 a2:1 purpose:1 perceived:1 
On Learning Discrete Graphical Models Using
Greedy Methods
Christopher C. Johnson
University of Texas at Austin
[email protected]
Ali Jalali
University of Texas at Austin
[email protected]
Pradeep Ravikumar
University of Texas at Austin
[email protected]
Abstract
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a
forward-backward greedy algorithm as applied to general statistical models. As
a special case, we then apply this algorithm to learn the structure of a discrete
graphical model via neighborhood estimation. As a corollary of our general result,
we derive sufficient conditions on the number of samples n, the maximum node-degree d and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Ω(d² log p), in contrast to existing convex-optimization based algorithms that require a sample complexity of Ω(d³ log p). Further, the greedy algorithm only requires a restricted strong convexity condition, which is typically milder than irrepresentability assumptions.
We corroborate these results using numerical simulations at the end.
1 Introduction
Undirected graphical models, also known as Markov random fields, are used in a variety of domains,
including statistical physics, natural language processing and image analysis among others. In this
paper we are concerned with the task of estimating the graph structure G of a Markov random field
(MRF) over a discrete random vector X = (X_1, X_2, …, X_p), given n independent and identically distributed samples {x^(1), x^(2), …, x^(n)}. This underlying graph structure encodes conditional independence assumptions among subsets of the variables, and thus plays an important role in a broad
range of applications of MRFs.
Existing approaches: Neighborhood Estimation, Greedy Local Search. Methods for estimating such
graph structure include those based on constraint and hypothesis testing [22], and those that estimate
restricted classes of graph structures such as trees [8], polytrees [11], and hypertrees [23]. A recent
class of successful approaches for graphical model structure learning are based on estimating the local neighborhood of each node. One subclass of these for the special case of bounded degree graphs
involve the use of exhaustive search so that their computational complexity grows at least as quickly
as O(p^d), where d is the maximum neighborhood size in the graphical model [1, 4, 9]. Another
subclass use convex programs to learn the neighborhood structure: for instance [20, 17, 16] estimate
the neighborhood set for each vertex r ∈ V by optimizing its ℓ1-regularized conditional likelihood; [15, 10] use ℓ1/ℓ2-regularized conditional likelihood. Even these methods, however, need to solve regularized convex programs with typically polynomial computational cost of O(p^4) or O(p^6), and are still expensive for large problems. Another popular class of approaches are based on using a score metric and searching for the best scoring structure from a candidate set of graph structures. Exact search is typically NP-hard [7]; indeed for general discrete MRFs, not only is the search space
intractably large, but calculation of typical score metrics itself is computationally intractable since
they involve computing the partition function associated with the Markov random field [26]. Such
methods thus have to use approximations and search heuristics for tractable computation. Question:
Can one use local procedures that are as inexpensive as the heuristic greedy approaches, and yet
come with the strong statistical guarantees of the regularized convex program based approaches?
High-dimensional Estimation; Greedy Methods. There has been an increasing focus in recent years
on high-dimensional statistical models where the number of parameters p is comparable to or even
larger than the number of observations n. It is now well understood that consistent estimation is possible even under such high-dimensional scaling if some low-dimensional structure is imposed on the
model space. Of relevance to graphical model structure learning is the structure of sparsity, where
a sparse set of non-zero parameters entail a sparse set of edges. A surge of recent work [5, 12]
has shown that ℓ1-regularization for learning such sparse models can lead to practical algorithms with strong theoretical guarantees. A line of recent work (cf. paragraph above) has thus leveraged this sparsity-inducing nature of ℓ1-regularization to propose and analyze convex programs based on regularized log-likelihood functions. A related line of recent work on learning sparse models has focused on "stagewise" greedy algorithms. These perform simple forward steps (adding parameters
greedily), and possibly also backward steps (removing parameters greedily), and yet provide strong
statistical guarantees for the estimate after a finite number of greedy steps. The forward greedy variant which performs just the forward step has appeared in various guises in multiple communities: in
machine learning as boosting [13], in function approximation [24], and in signal processing as basis
pursuit [6]. In the context of statistical model estimation, Zhang [28] analyzed the forward greedy
algorithm for the case of sparse linear regression; and showed that the forward greedy algorithm is
sparsistent (consistent for model selection recovery) under the same "irrepresentable" condition as that required for sparsistency of the Lasso. Zhang [27] analyzes a more general greedy algorithm for sparse linear regression that performs forward and backward steps, and showed that it is sparsistent under a weaker restricted eigenvalue condition. Here we ask the question: Can we provide an analysis of a general forward-backward algorithm for parameter estimation in general statistical models? Specifically, we need to extend the sparsistency analysis of [28] to general non-linear models, which requires a subtler analysis due to the circular requirement of having to control the third-order terms in the Taylor series expansion of the log-likelihood, which in turn requires the estimate to be well-behaved. Such extensions in the case of ℓ1-regularization occur for instance in [20, 25, 3].
Our Contributions. In this paper, we address both questions above. In the first part, we analyze the
forward backward greedy algorithm [28] for general statistical models. We note that even though we
consider the general statistical model case, our analysis is much simpler and accessible than [28],
and would be of use even to a reader interested in just the linear model case of Zhang [28]. In the
second part, we use this to show that when combined with neighborhood estimation, the forward
backward variant applied to local conditional log-likelihoods provides a simple computationally
tractable method that adds and deletes edges, but comes with strong sparsistency guarantees. We
reiterate that the our first result on the sparsistency of the forward backward greedy algorithm for
general objectives is of independent interest even outside the context of graphical models. As we
show, the greedy method is better than the ℓ1-regularized counterpart in [20] theoretically, as well as experimentally. The sufficient condition on the parameters imposed by the greedy algorithm is a restricted strong convexity condition [19], which is weaker than the irrepresentable condition required by [20]. Further, the number of samples required for sparsistent graph recovery scales as O(d² log p), where d is the maximum node degree, in contrast to O(d³ log p) for the ℓ1-regularized counterpart. We corroborate this in our simulations, where we find that the greedy algorithm requires
fewer observations than [20] for sparsistent graph recovery.
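As a quick numeric illustration of the gap between the two scalings quoted above (our own toy computation, with all constants set to one), the ratio of the two sample-complexity requirements reduces to a factor of d:

```python
import math

# Toy comparison of the two sample-complexity scalings: n ~ d^2 log p for the
# greedy method versus n ~ d^3 log p for the l1-regularized approach. Unit
# constants are purely illustrative; the real bounds carry problem-dependent
# constants.
def n_greedy(d, p):
    return d ** 2 * math.log(p)

def n_l1(d, p):
    return d ** 3 * math.log(p)

ratio = n_l1(10, 1000) / n_greedy(10, 1000)  # the log(p) factors cancel
```

For d = 10, the illustrative ratio is a ten-fold saving in samples, independent of p.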
2 Review, Setup and Notation
2.1 Markov Random Fields
Let X = (X1 , . . . , Xp ) be a random vector, each variable Xi taking values in a discrete set X
of cardinality m. Let G = (V, E) denote a graph with p nodes, corresponding to the p variables
{X1 , . . . , Xp }. A pairwise Markov random field over X = (X1 , . . . , Xp ) is then specified by
nodewise and pairwise functions θ_r : 𝒳 → ℝ for all r ∈ V, and θ_rt : 𝒳 × 𝒳 → ℝ for all (r, t) ∈ E:

    P(x) ∝ exp( Σ_{r∈V} θ_r(x_r) + Σ_{(r,t)∈E} θ_rt(x_r, x_t) ).    (1)

In this paper, we largely focus on the case where the variables are binary with 𝒳 = {−1, +1}, where we can rewrite (1) in the Ising model form [14], for some set of parameters {θ_r} and {θ_rt}, as

    P(x) ∝ exp( Σ_{r∈V} θ_r x_r + Σ_{(r,t)∈E} θ_rt x_r x_t ).    (2)
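To make the Ising form (2) concrete, the following toy sketch (our own illustration, not from the paper) evaluates the unnormalized weights and normalizes them by brute force, which is feasible only for very small p:

```python
import itertools
import math

def ising_unnormalized(x, theta_node, theta_edge, edges):
    """Unnormalized weight exp(sum_r theta_r x_r + sum_(r,t) theta_rt x_r x_t)."""
    energy = sum(theta_node[r] * x[r] for r in range(len(x)))
    energy += sum(theta_edge[(r, t)] * x[r] * x[t] for (r, t) in edges)
    return math.exp(energy)

def ising_distribution(p, theta_node, theta_edge, edges):
    """Exact distribution over {-1,+1}^p by brute-force normalization (small p only)."""
    states = list(itertools.product([-1, +1], repeat=p))
    weights = [ising_unnormalized(x, theta_node, theta_edge, edges) for x in states]
    z = sum(weights)  # partition function, intractable for large p
    return {x: w / z for x, w in zip(states, weights)}

# Example: 3-node chain 0-1-2 with ferromagnetic couplings and zero node weights.
edges = [(0, 1), (1, 2)]
dist = ising_distribution(3, [0.0, 0.0, 0.0], {(0, 1): 0.5, (1, 2): 0.5}, edges)
```

With zero node weights the distribution is sign-symmetric, and aligned configurations receive more mass than misaligned ones, as expected for positive couplings.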
2.2 Graphical Model Selection
Let 𝒟 := {x^(1), …, x^(n)} denote the set of n samples, where each p-dimensional vector x^(i) ∈ {1, …, m}^p is drawn i.i.d. from a distribution P_θ* of the form (1), for parameters θ* and graph G = (V, E*) over the p variables. Note that the true edge set E* can also be expressed as a function of the parameters as

    E* = {(r, t) ∈ V × V : θ*_rt ≠ 0}.    (3)

The graphical model selection task consists of inferring this edge set E* from the samples 𝒟. The goal is to construct an estimator Ê_n for which P[Ê_n = E*] → 1 as n → ∞. Denote by N*(r) the set of neighbors of a vertex r ∈ V, so that N*(r) = {t : (r, t) ∈ E*}. Then the graphical model selection problem is equivalent to that of estimating the neighborhoods N̂_n(r) ⊆ V, so that P[N̂_n(r) = N*(r), ∀r ∈ V] → 1 as n → ∞.

For any pair of random variables X_r and X_t, the parameter θ_rt fully characterizes whether there is an edge between them, and can be estimated via its conditional likelihood. In particular, defining θ_r := (θ_r1, …, θ_rp), our goal is to use the conditional likelihood of X_r conditioned on X_{V∖r} to estimate θ_r and hence its neighborhood N(r). This conditional distribution of X_r conditioned on X_{V∖r} generated by (2) is given by the logistic model

    P(X_r = x_r | X_{V∖r} = x_{V∖r}) = exp(θ_r x_r + Σ_{t∈V∖r} θ_rt x_r x_t) / (1 + exp(θ_r + Σ_{t∈V∖r} θ_rt x_t)).

Given the n samples 𝒟, the corresponding conditional log-likelihood is given by

    L(θ_r; 𝒟) = (1/n) Σ_{i=1}^n [ log(1 + exp(θ_r x_r^(i) + Σ_{t∈V∖r} θ_rt x_r^(i) x_t^(i))) − θ_r x_r^(i) − Σ_{t∈V∖r} θ_rt x_r^(i) x_t^(i) ].    (4)
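A direct transcription of the per-node loss (4) — assuming the standard logistic form of the conditional likelihood; function and variable names are our own — might look like:

```python
import numpy as np

def conditional_nll(theta_r, r, X):
    """Conditional negative log-likelihood for node r, following the logistic
    form of Eq. (4): theta_r[r] is the node weight, theta_r[t] (t != r) is the
    edge weight theta_rt, and X is an n-by-p array of +/-1 samples."""
    n, p = X.shape
    mask = np.arange(p) != r
    # a_i = theta_r x_r^(i) + sum_{t != r} theta_rt x_r^(i) x_t^(i)
    a = theta_r[r] * X[:, r] + X[:, r] * (X[:, mask] @ theta_r[mask])
    return float(np.mean(np.log1p(np.exp(a)) - a))

rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(100, 4))
val = conditional_nll(np.zeros(4), 0, X)  # all-zero parameters give log(2)
```

At θ = 0 every sample contributes log 2, a handy sanity check for any implementation of this loss.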
In Section 4, we study a greedy algorithm (Algorithm 2) that finds these node neighborhoods N̂_n(r) = Supp(θ̂_r) of each random variable X_r separately, by a greedy stagewise optimization of the conditional log-likelihood of X_r conditioned on X_{V∖r}. The algorithm then combines these neighborhoods to obtain a graph estimate Ê_n using an "OR" rule: Ê_n = ∪_r {(r, t) : t ∈ N̂_n(r)}. Other rules, such as the "AND" rule that adds an edge only if it occurs in each of the respective node neighborhoods, could be used to combine the node-neighborhoods into a graph estimate. We show in Theorem 2 that the neighborhood selection by the greedy algorithm succeeds in recovering the exact node-neighborhoods with high probability, so that by a union bound, the graph estimates using either the AND or OR rules would be exact with high probability as well.
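The OR/AND combination step is simple enough to sketch directly (a minimal illustration; names are our own):

```python
def combine_neighborhoods(neighborhoods, rule="OR"):
    """Combine per-node neighborhood estimates into an undirected edge set.

    neighborhoods: dict mapping node r -> set of estimated neighbors N(r).
    OR adds edge (r, t) if t is in N(r) or r is in N(t); AND requires both.
    """
    edges = set()
    for r in neighborhoods:
        for t in neighborhoods[r]:
            a, b = min(r, t), max(r, t)
            if rule == "OR" or r in neighborhoods[t]:
                edges.add((a, b))
    return edges

# Asymmetric toy estimates: node 1 claims 2 as a neighbor, but not vice versa.
nbrs = {0: {1}, 1: {0, 2}, 2: set()}
```

On these toy estimates the OR rule keeps the disputed edge (1, 2) while the AND rule drops it.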
Before we describe this greedy algorithm and its analysis in Section 4 however, we first consider
the general statistical model case in the next section. We first describe the forward backward greedy
algorithm of Zhang [28] as applied to general statistical models, followed by a sparsistency analysis
for this general case. We then specialize these general results in Section 4 to the graphical model
case. The next section is thus of independent interest even outside the context of graphical models.
3 Greedy Algorithm for General Losses
Consider a random variable Z with distribution P, and let Z_1^n := {Z_1, …, Z_n} denote n observations drawn i.i.d. according to P. Suppose we are interested in estimating some parameter θ* ∈ ℝ^p of the distribution P that is sparse; denote its number of non-zeroes by s* := ‖θ*‖_0. Let L : ℝ^p × 𝒵^n → ℝ be some loss function that assigns a cost to any parameter θ ∈ ℝ^p, for a given set of observations Z_1^n. For ease of notation, in the sequel we adopt the shorthand L(θ) for L(θ; Z_1^n). We assume that θ* satisfies E_Z[∇L(θ*)] = 0.
Algorithm 1 Greedy forward-backward algorithm for finding a sparse optimizer of L(θ)

Input: Data 𝒟 := {x^(1), …, x^(n)}, stopping threshold ε_S, backward step factor ν ∈ (0, 1)
Output: Sparse optimizer θ̂

θ̂^(0) ← 0; Ŝ^(0) ← ∅; k ← 1
while true do  {Forward Step}
    (j*, α*) ← arg min_{j ∈ (Ŝ^(k−1))^c; α} L(θ̂^(k−1) + α e_j; 𝒟)
    Ŝ^(k) ← Ŝ^(k−1) ∪ {j*}
    δ_f^(k) ← L(θ̂^(k−1); 𝒟) − L(θ̂^(k−1) + α* e_{j*}; 𝒟)
    if δ_f^(k) ≤ ε_S then
        break
    end if
    θ̂^(k) ← arg min_{θ : supp(θ) ⊆ Ŝ^(k)} L(θ; 𝒟)
    k ← k + 1
    while true do  {Backward Step}
        j† ← arg min_{j ∈ Ŝ^(k−1)} L(θ̂^(k−1) − θ̂_j^(k−1) e_j; 𝒟)
        if L(θ̂^(k−1) − θ̂_{j†}^(k−1) e_{j†}; 𝒟) − L(θ̂^(k−1); 𝒟) > ν δ_f^(k) then
            break
        end if
        Ŝ^(k−1) ← Ŝ^(k) − {j†}
        θ̂^(k−1) ← arg min_{θ : supp(θ) ⊆ Ŝ^(k−1)} L(θ; 𝒟)
        k ← k − 1
    end while
end while
We now consider the forward backward greedy algorithm in Algorithm 1 that rewrites the algorithm
in [27] to allow for general loss functions. The algorithm starts with an empty set of active variables Ŝ^(0) and gradually adds (and removes) variables to the active set until it meets the stopping criterion. This algorithm has two major steps: the forward step and the backward step. In the forward step, the algorithm finds the best next candidate and adds it to the active set as long as it improves the loss function by at least ε_S; otherwise the stopping criterion is met and the algorithm terminates. Then, in the backward step, the algorithm checks the influence of all variables in the presence of the newly added variable. If one or more of the previously added variables do not contribute at least νε_S to the loss function, then the algorithm removes them from the active set. This procedure ensures that at each round the loss function improves by at least (1 − ν)ε_S, and hence the algorithm terminates within a finite number of steps.
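As an illustration of the interplay between forward and backward steps, here is a sketch specialized to the least-squares loss, where every restricted minimization has a closed form via `lstsq`. It is our own simplification (the forward step refits the whole active set rather than optimizing a single new weight), not the paper's implementation:

```python
import numpy as np

def foba_least_squares(X, y, eps_stop, nu=0.5):
    """Forward-backward greedy in the spirit of Algorithm 1, specialized to
    L(theta) = ||y - X theta||^2 / (2n). Illustrative sketch only."""
    n, p = X.shape
    S, theta = [], np.zeros(p)

    def restricted_fit(support):
        th = np.zeros(p)
        if support:
            th[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        return th

    def loss(th):
        r = y - X @ th
        return float(r @ r) / (2 * n)

    while True:
        # Forward step: add the coordinate whose refit lowers the loss most.
        candidates = [(loss(restricted_fit(S + [j])), j)
                      for j in range(p) if j not in S]
        l_best, j_best = min(candidates)
        delta_f = loss(theta) - l_best
        if delta_f <= eps_stop:  # stopping criterion
            break
        S.append(j_best)
        theta = restricted_fit(S)
        # Backward steps: drop variables whose removal costs at most nu*delta_f.
        while len(S) > 1:
            drops = [(loss(restricted_fit([k for k in S if k != j])), j) for j in S]
            l_drop, j_drop = min(drops)
            if l_drop - loss(theta) > nu * delta_f:
                break
            S.remove(j_drop)
            theta = restricted_fit(S)
    return theta, set(S)

# Noiseless toy problem with true support {0, 3}.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3]
theta_hat, S_hat = foba_least_squares(X, y, eps_stop=1e-6)
```

Larger ν makes the backward step more aggressive about pruning, since a variable is removed whenever its removal cost is within ν times the latest forward gain.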
We state the assumptions on the loss function under which sparsistency is guaranteed. Let us first recall the definition of restricted strong convexity from Negahban et al. [18]. Specifically, for a given set S, the loss function is said to satisfy restricted strong convexity (RSC) with parameter κ_l with respect to the set S if

    L(θ + Δ; Z_1^n) − L(θ; Z_1^n) − ⟨∇L(θ; Z_1^n), Δ⟩ ≥ (κ_l / 2) ‖Δ‖_2²    for all Δ ∈ S.    (5)

We can now define sparsity restricted strong convexity as follows. Specifically, we say that the loss function L satisfies RSC(k) with parameter κ_l if it satisfies RSC with parameter κ_l for the set {Δ ∈ ℝ^p : ‖Δ‖_0 ≤ k}.

In contrast, we say that the loss function satisfies restricted strong smoothness (RSS) with parameter κ_u with respect to a set S if

    L(θ + Δ; Z_1^n) − L(θ; Z_1^n) − ⟨∇L(θ; Z_1^n), Δ⟩ ≤ (κ_u / 2) ‖Δ‖_2²    for all Δ ∈ S.

We can define RSS(k) similarly: the loss function L satisfies RSS(k) with parameter κ_u if it satisfies RSS with parameter κ_u for the set {Δ ∈ ℝ^p : ‖Δ‖_0 ≤ k}. Given any constants κ_l and κ_u, and a sample based loss function L, we can typically use concentration based arguments to obtain bounds on the sample size required so that the RSS and RSC conditions hold with high probability.

Another property of the loss function that we require is an upper bound λ_n on the ℓ_∞ norm of the gradient of the loss at the true parameter θ*, i.e., λ_n ≥ ‖∇L(θ*)‖_∞. This captures the "noise level" of the samples with respect to the loss. Here too, we can typically use concentration arguments to show for instance that λ_n ≤ c_n (log(p)/n)^{1/2}, for some constant c_n > 0, with high probability.
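For intuition, in the least-squares case the Taylor remainder equals Δᵀ(XᵀX/n)Δ/2, so the RSC(k) and RSS(k) constants are exactly the smallest and largest eigenvalues over k-sparse principal submatrices of the sample covariance. A brute-force check (our own toy code; exponential in p, so toy sizes only) might be:

```python
import itertools
import numpy as np

def sparse_eigenrange(X, k):
    """Brute-force RSC/RSS constants for the squared loss: the extreme
    eigenvalues of k-by-k principal submatrices of X^T X / n. Since the
    Taylor remainder of the squared loss is Delta^T (X^T X / n) Delta / 2,
    these are exactly kappa_l and kappa_u for RSC(k)/RSS(k)."""
    n, p = X.shape
    sigma = X.T @ X / n
    lo, hi = np.inf, -np.inf
    for S in itertools.combinations(range(p), k):
        w = np.linalg.eigvalsh(sigma[np.ix_(S, S)])
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 6))
kappa_l, kappa_u = sparse_eigenrange(X, 2)
```

For well-conditioned Gaussian designs with n much larger than k log p, both constants concentrate near 1, so the ratio ρ = κ_u/κ_l appearing in the theory stays small.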
Theorem 1 (Sparsistency). Suppose the loss function L(θ) satisfies RSC(η s*) and RSS(η s*) with parameters κ_l and κ_u, for some η ≥ 2 + 4ρ² (√((ρ² − ρ)/s*) + √2)², with ρ = κ_u/κ_l. Moreover, suppose that the true parameters θ* satisfy min_{j∈S*} |θ*_j| > √(32 ρ ε_S / κ_l). Then, if we run Algorithm 1 with stopping threshold ε_S ≥ (8ρη/κ_l) s* λ_n², the output θ̂ with support Ŝ satisfies:

(a) Error Bound: ‖θ̂ − θ*‖_2 ≤ (2/κ_l) √s* (λ_n √η + √(2 κ_u ε_S)).
(b) No False Exclusions: S* − Ŝ = ∅.
(c) No False Inclusions: Ŝ − S* = ∅.
Proof. The proof of the theorem hinges on three main lemmas: Lemmas 1 and 2, which are simple consequences of the forward and backward steps failing when the greedy algorithm stops, and Lemma 3, which uses these two lemmas and extends techniques from [21] and [19] to obtain an ℓ_2 bound on the error. Provided these lemmas hold, we then show below that the greedy algorithm is sparsistent. However, these lemmas require a priori that the RSC and RSS conditions hold for sparsity size |S* ∪ Ŝ|. Thus, we use the result in Lemma 4 that if RSC(η s*) holds, then the solution when the algorithm terminates satisfies |Ŝ| ≤ (η − 1)s*, and hence |Ŝ ∪ S*| ≤ η s*. Thus, we can then apply Lemmas 1, 2 and 3 to complete the proof, as detailed below.

(a) The result follows directly from Lemma 3, noting that |Ŝ ∪ S*| ≤ η s*. In this lemma, we show that the upper bound holds by drawing on fixed-point techniques in [21] and [19], and by using a simple consequence of the forward step failing when the greedy algorithm stops.

(b) We follow the chaining argument in [27]. For any τ ∈ ℝ, we have

    τ |{j ∈ S* − Ŝ : |θ*_j|² > τ}| ≤ ‖θ*_{S*−Ŝ}‖_2² ≤ ‖θ* − θ̂‖_2² ≤ (8 η s* λ_n²)/κ_l² + (16 κ_u ε_S)/κ_l² |S* − Ŝ|,

where the last inequality follows from part (a) and the inequality (a + b)² ≤ 2a² + 2b². Now, setting τ = 32 κ_u ε_S / κ_l² and dividing both sides by τ/2, we get

    2 |{j ∈ S* − Ŝ : |θ*_j|² > τ}| ≤ (η s* λ_n²)/(2 κ_u ε_S) + |S* − Ŝ|.

Substituting |{j ∈ S* − Ŝ : |θ*_j|² > τ}| = |S* − Ŝ| − |{j ∈ S* − Ŝ : |θ*_j|² ≤ τ}|, we get

    |S* − Ŝ| ≤ |{j ∈ S* − Ŝ : |θ*_j|² ≤ τ}| + (η s* λ_n²)/(2 κ_u ε_S) ≤ |{j ∈ S* − Ŝ : |θ*_j|² ≤ τ}| + 1/2,

due to the setting of the stopping threshold ε_S. This in turn entails that

    |S* − Ŝ| ≤ |{j ∈ S* − Ŝ : |θ*_j|² ≤ τ}| = 0,

by our assumption on the size of the minimum entry of θ*.

(c) From Lemma 2, which provides a simple consequence of the backward step failing when the greedy algorithm stops, for Δ̂ = θ̂ − θ* we have (ε_S/κ_u) |Ŝ − S*| ≤ ‖Δ̂_{Ŝ−S*}‖_2² ≤ ‖Δ̂‖_2², so that using Lemma 3 and the fact that |S* − Ŝ| = 0, we obtain |Ŝ − S*| ≤ (4 η s* λ_n² κ_u)/(ε_S κ_l²) ≤ 1/2, due to the setting of the stopping threshold ε_S.
Algorithm 2 Greedy forward-backward algorithm for pairwise discrete graphical model learning

Input: Data 𝒟 := {x^(1), …, x^(n)}, stopping threshold ε_S, backward step factor ν ∈ (0, 1)
Output: Estimated edges Ê

for r ∈ V do
    Run Algorithm 1 with the loss L(θ) set as in (4), to obtain θ̂_r with support N̂_r
end for
Output Ê = ∪_r {(r, t) : t ∈ N̂_r}
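A heavily simplified sketch of the per-node step (forward steps only, with candidate weights restricted to a small grid; the backward steps and the exact refitting of Algorithm 1 are omitted, and all names are our own) can still recover neighborhoods on easy toy data:

```python
import numpy as np

def node_nll(X, r, support, coef):
    """Conditional NLL of node r given candidate neighbors `support` with edge
    weights `coef` (logistic form as in Eq. (4); node bias omitted)."""
    n = X.shape[0]
    a = X[:, r] * (X[:, support] @ coef) if support else np.zeros(n)
    return float(np.mean(np.log1p(np.exp(a)) - a))

def greedy_neighborhood(X, r, eps_stop):
    """Forward-only sketch of the per-node step of Algorithm 2: repeatedly add
    the (neighbor, weight) pair that most decreases the conditional NLL, over a
    small weight grid, stopping once the gain falls below eps_stop."""
    p = X.shape[1]
    grid = np.linspace(-1.0, 1.0, 21)
    support, coef = [], []
    current = node_nll(X, r, support, np.array(coef))
    while True:
        best = None
        for j in range(p):
            if j == r or j in support:
                continue
            for w in grid:
                l = node_nll(X, r, support + [j], np.array(coef + [w]))
                if best is None or l < best[0]:
                    best = (l, j, w)
        if best is None or current - best[0] <= eps_stop:
            break
        current = best[0]
        support.append(best[1])
        coef.append(best[2])
    return set(support)

# Toy data: node 1 copies node 0 with 10% flips; nodes 2 and 3 are independent.
rng = np.random.default_rng(0)
n = 2000
x0 = rng.choice([-1, 1], size=n)
flips = rng.choice([1, -1], size=n, p=[0.9, 0.1])
X = np.column_stack([x0, x0 * flips,
                     rng.choice([-1, 1], size=n), rng.choice([-1, 1], size=n)])
```

On this data the strongly coupled pair is found while the independent nodes yield empty neighborhoods, since their best achievable gain stays below the stopping threshold.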
3.1 Lemmas for Theorem 1
We list the simple lemmas that characterize the solution obtained when the algorithm terminates,
and on which the proof of Theorem 1 hinges.
Lemma 1 (Stopping Forward Step). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have

    L(θ̂) − L(θ*) < √(2 |S* − Ŝ| κ_u ε_S) ‖θ̂ − θ*‖_2.

Lemma 2 (Stopping Backward Step). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have

    (ε_S / κ_u) |Ŝ − S*| ≤ ‖θ̂_{Ŝ−S*}‖_2².

Lemma 3 (Stopping Error Bound). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have

    ‖θ̂ − θ*‖_2 ≤ (2/κ_l) ( λ_n √(|S* ∪ Ŝ|) + √(2 |S* − Ŝ| κ_u ε_S) ).

Lemma 4 (Stopping Size). If ε_S > 4ρ² (√((ρ² − ρ)/s*) + √2)² (λ_n² / κ_u), and RSC(η s*) holds for η as in Theorem 1, then Algorithm 1 stops with k ≤ (η − 1) s*.

Notice that if ε_S ≥ (8ρη/κ_l) (η²/(4ρ²)) λ_n², then the assumption of this lemma is satisfied. Hence, for large values of s*, namely s* ≥ η²/(4ρ²), it suffices to have ε_S ≥ (8ρη/κ_l) s* λ_n².
4 Greedy Algorithm for Pairwise Graphical Models
Suppose we are given set of n i.i.d. samples D := {x(1) , . . . , x(n) }, drawn from a pairwise Ising
model as in (2), with parameters ?? , and graph G = (V, E ? ). It will be useful to denote the maximum
node-degree in the graph E ? by d. As we will show, our model selection performance depends
critically on this parameter d. We propose Algorithm 2 for estimating the underlying graphical
model from the n samples D.
Theorem 2 (Pairwise Sparsistency). Suppose we run Algorithm 2 with stopping threshold ε_S ≥ c₁ (d log p)/n, where d is the maximum node degree in the graphical model, and the true parameters satisfy min_{j∈S*} |θ*_j| > c₂ √ε_S as well as √ε_S ≤ c₃/d, and suppose further that the number of samples scales as

    n > c₄ d² log p,

for some constants c₁, c₂, c₃, c₄. Then, with probability at least 1 − c′ exp(−c″ n), the output θ̂ supported on Ŝ satisfies:

(a) No False Exclusions: E* − Ê = ∅.
(b) No False Inclusions: Ê − E* = ∅.
Proof. This theorem is a corollary to our general Theorem 1. We first show that the conditions of
Theorem 1 hold under the assumptions in this corollary.
RSC, RSS. We first note that the conditional log-likelihood loss function in (4) corresponds to a logistic likelihood. Moreover, the covariates are all binary, and bounded, and hence also sub-Gaussian.
[19, 2] analyze the RSC and RSS properties of generalized linear models, of which logistic models
are an instance, and show that the following result holds if the covariates are sub-Gaussian. Let
δL(Δ; θ*) := L(θ* + Δ) − L(θ*) − ⟨∇L(θ*), Δ⟩ be the second-order Taylor series remainder. Then, Proposition 2 in [19] states that there exist constants κ_{l1} and κ_{l2}, independent of n and p, such that with probability at least 1 − c₁ exp(−c₂ n), for some constants c₁, c₂ > 0,

    δL(Δ; θ*) ≥ ‖Δ‖_2 ( κ_{l1} ‖Δ‖_2 − κ_{l2} √(log(p)/n) ‖Δ‖_1 )    for all Δ : ‖Δ‖_2 ≤ 1.

Thus, if ‖Δ‖_0 ≤ k := ηd, then ‖Δ‖_1 ≤ √k ‖Δ‖_2, so that

    δL(Δ; θ*) ≥ ‖Δ‖_2² ( κ_{l1} − κ_{l2} √(k log(p)/n) ) ≥ (κ_{l1}/2) ‖Δ‖_2²,

if n > 4(κ_{l2}/κ_{l1})² ηd log(p). In other words, with probability at least 1 − c₁ exp(−c₂ n), the loss function L satisfies RSC(k) with parameter κ_{l1}, provided n > 4(κ_{l2}/κ_{l1})² ηd log(p). Similarly, it follows from [19, 2] that there exist constants κ_{u1} and κ_{u2} such that with probability at least 1 − c̄₁ exp(−c̄₂ n),

    δL(Δ; θ*) ≤ ‖Δ‖_2 ( κ_{u1} ‖Δ‖_2 + κ_{u2} √(log(p)/n) ‖Δ‖_1 )    for all Δ : ‖Δ‖_2 ≤ 1,

so that by a similar argument, with probability at least 1 − c̄₁ exp(−c̄₂ n), the loss function L satisfies RSS(k) with parameter κ_{u1}, provided n > 4(κ_{u2}/κ_{u1})² ηd log(p).

Noise Level. Next, we obtain a bound on the noise level λ_n ≥ ‖∇L(θ*)‖_∞, following arguments similar to [20]. Let W denote the gradient ∇L(θ*) of the loss function (4). Any entry of W has the form W_t = (1/n) Σ_{i=1}^n Z_rt^(i), where Z_rt^(i) = x_t^(i) (x_r^(i) − P(x_r = 1 | x_{∖r}^(i))) are zero-mean, i.i.d. and bounded, |Z_rt^(i)| ≤ 1. Thus, an application of Hoeffding's inequality yields that P[|W_t| > δ] ≤ 2 exp(−2nδ²). Applying a union bound over the indices of W, we get P[‖W‖_∞ > δ] ≤ 2 exp(−2nδ² + log(p)). Thus, if λ_n = (log(p)/n)^{1/2}, then ‖W‖_∞ ≤ λ_n with probability at least 1 − exp(−nλ_n² + log(p)).

We can now verify that, under the assumptions in the corollary, the conditions on the stopping threshold ε_S and on the minimum absolute value of the non-zero parameters min_{j∈S*} |θ*_j| are satisfied. Moreover, from the discussion above, under the sample size scaling in the corollary, the required RSC and RSS conditions hold as well. Thus, Theorem 1 yields that each node neighborhood is recovered with no false exclusions or inclusions with probability at least 1 − c′ exp(−c″ n). An application of a union bound over all nodes completes the proof.
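The Hoeffding/union-bound step is easy to check numerically. This small simulation (our own, with an illustrative constant of 3 in place of the paper's) draws bounded zero-mean entries and confirms that the sup-norm of the averaged gradient-like vector stays below 3√(log(p)/n):

```python
import numpy as np

# Empirical check of the noise-level bound: for i.i.d. bounded zero-mean
# entries, the max over p coordinates of an n-term average concentrates at the
# sqrt(log(p)/n) scale, as used to bound ||grad L(theta*)||_inf above.
rng = np.random.default_rng(42)
n, p, trials = 2000, 100, 50
exceed = 0
for _ in range(trials):
    Z = rng.choice([-1.0, 1.0], size=(n, p))  # bounded, zero-mean, |Z| <= 1
    W = Z.mean(axis=0)                        # one "gradient" vector
    if np.max(np.abs(W)) > 3.0 * np.sqrt(np.log(p) / n):
        exceed += 1
rate = exceed / trials
```

By Hoeffding plus a union bound, the exceedance probability here is at most 2p^{-3.5} per trial, so no exceedances should be observed.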
Remarks. The sufficient condition on the parameters imposed by the greedy algorithm is a restricted strong convexity condition [19], which is weaker than the irrepresentable condition required by [20]. Further, the number of samples required for sparsistent graph recovery scales as O(d² log p), where d is the maximum node degree, in contrast to O(d³ log p) for the ℓ1-regularized counterpart. We corroborate this in our simulations, where we find that the greedy algorithm requires fewer observations than [20] for sparsistent graph recovery.

We also note that the result can be extended to the general pairwise graphical model case, where each random variable takes values in the range {1, …, m}. In that case, the conditional likelihood of each node conditioned on the rest of the nodes takes the form of a multiclass logistic model, and the greedy algorithm would take the form of a "group" forward-backward greedy algorithm, which would add or remove all the parameters corresponding to an edge as a group. Our analysis naturally extends to such a group greedy setting as well. The analysis for RSC and RSS remains the same, and for bounds on λ_n, see equation (12) in [15]. We defer further discussion due to the lack of space.
[Figure 1: plots omitted in this text version.]

Fig 1: Plots of success probability P[N̂(r) = N*(r), ∀r ∈ V] versus the control parameter β(n, p, d) = n/[20 d log(p)] for the Ising model on (a) chain (line graph, d = 2), (b) 4-nearest neighbor (grid graph, d = 4) and (c) star graph (d = 0.1p), for graph sizes p ∈ {36, 64, 100}; panel (d) overlays the chain, 4-nearest neighbor and star results. The coupling parameters are chosen randomly as θ*_st = ±0.50, for both the greedy and node-wise ℓ1-regularized logistic regression methods. As our theorem suggests and these figures show, the greedy algorithm requires fewer samples to recover the exact structure of the graphical model.
5 Experimental Results
We now present experimental results that illustrate the power of Algorithm 2 and support our theoretical guarantees. We simulated structure learning of different graph structures and compared the
learning rates of our method to that of node-wise ℓ1-regularized logistic regression as outlined in [20].

We performed experiments using 3 different graph structures: (a) chain (line graph), (b) 4-nearest neighbor (grid graph) and (c) star graph. For each experiment, we assumed a pairwise binary Ising model in which each θ*_rt = ±1 randomly. For each graph type, we generated a set of n i.i.d. samples {x^(1), …, x^(n)} using Gibbs sampling. We then attempted to learn the structure of the model using both Algorithm 2 as well as node-wise ℓ1-regularized logistic regression. We then compared the actual graph structure with the empirically learned graph structures. If the graph structures matched completely then we declared the result a success; otherwise we declared the result a failure. We compared these results over a range of sample sizes (n) and averaged the results for each sample size over a batch of size 10. For all greedy experiments we set the stopping threshold ε_S = c log(np)/n, where c is a tuning constant, as suggested by Theorem 2, and set the backward step threshold ν = 0.5. For all logistic regression experiments we set the regularization parameter λ_n = c′ √(log(p)/n), where c′ was set via cross-validation.

Figure 1 shows the results for the chain (d = 2), grid (d = 4) and star (d = 0.1p) graphs using both Algorithm 2 and node-wise ℓ1-regularized logistic regression for three different graph sizes p ∈ {36, 64, 100} with mixed (random sign) couplings. For each sample size, we generated a batch of 10 different graphical models and averaged the probability of success (complete structure learned) over the batch. Each curve then represents the probability of success versus the control parameter β(n, p, d) = n/[20 d log(p)], which increases with the sample size n. These results support our theoretical claims and demonstrate the efficiency of the greedy method in comparison to node-wise ℓ1-regularized logistic regression [20].
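For reproducibility, data generation along these lines can be sketched as follows (a toy Gibbs sampler for a binary Ising chain; burn-in and sizes are our own illustrative choices, not the paper's):

```python
import numpy as np

def gibbs_chain(theta, n_samples, burn=200, rng=None):
    """Gibbs sampler for a binary Ising chain with edge weight theta[i]
    between nodes i and i+1 (no node biases). At each site the conditional is
    P(x_r = +1 | rest) = 1 / (1 + exp(-2 * field)), field = sum of
    theta_rt * x_t over the (at most two) chain neighbors."""
    rng = rng or np.random.default_rng(0)
    p = len(theta) + 1
    x = rng.choice([-1, 1], size=p)
    out = np.empty((n_samples, p))
    for s in range(burn + n_samples):
        for r in range(p):
            field = 0.0
            if r > 0:
                field += theta[r - 1] * x[r - 1]
            if r < p - 1:
                field += theta[r] * x[r + 1]
            prob_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[r] = 1 if rng.random() < prob_plus else -1
        if s >= burn:
            out[s - burn] = x
    return out

samples = gibbs_chain(np.array([0.8, 0.8, 0.8]), 1000)
corr_adjacent = np.mean(samples[:, 0] * samples[:, 1])
corr_far = np.mean(samples[:, 0] * samples[:, 3])
```

For a chain, adjacent-node correlations should be strong (tanh of the coupling, about 0.66 here) and decay with graph distance, which makes the neighborhood structure statistically recoverable.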
6 Acknowledgements
We would like to acknowledge the support of NSF grant IIS-1018426.
References
[1] P. Abbeel, D. Koller, and A. Y. Ng. Learning factor graphs in polynomial time and sample complexity. Jour. Mach. Learning Res., 7:1743–1788, 2006.
[2] A. Agarwal, S. Negahban, and M. Wainwright. Convergence rates of gradient methods for high-dimensional statistical recovery. In NIPS, 2010.
[3] F. Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384–414, 2010.
[4] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov random fields from samples: Some easy observations and algorithms. In RANDOM 2008.
[5] E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2006.
[6] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Computing, 20(1):33–61, 1998.
[7] D. Chickering. Learning Bayesian networks is NP-complete. Proceedings of AI and Statistics, 1995.
[8] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. Info. Theory, 14(3):462–467, 1968.
[9] I. Csiszár and Z. Talata. Consistent estimation of the basic neighborhood structure of Markov random fields. The Annals of Statistics, 34(1):123–145, 2006.
[10] C. Dahinden, M. Kalisch, and P. Bühlmann. Decomposition and model selection for large contingency tables. Biometrical Journal, 52(2):233–252, 2010.
[11] S. Dasgupta. Learning polytrees. In Uncertainty in Artificial Intelligence, pages 134–141, 1999.
[12] D. Donoho and M. Elad. Maximal sparsity representation via ℓ1 minimization. Proc. Natl. Acad. Sci., 100:2197–2202, March 2003.
[13] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28:337–374, 2000.
[14] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.
[15] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. In Inter. Conf. on AI and Statistics (AISTATS) 14, 2011.
[16] S.-I. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using ℓ1-regularization. In Neural Information Processing Systems (NIPS) 19, 2007.
[17] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3), 2006.
[18] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In Neural Information Processing Systems (NIPS) 22, 2009.
[19] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In Arxiv, 2010.
[20] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[21] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
[22] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. MIT Press, 2000.
[23] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003.
[24] V. N. Temlyakov. Greedy approximation. Acta Numerica, 17:235–409, 2008.
[25] S. van de Geer. High-dimensional generalized linear models and the lasso. The Annals of Statistics, 36:614–645, 2008.
[26] D. J. A. Welsh. Complexity: Knots, Colourings, and Counting. LMS Lecture Note Series. Cambridge University Press, Cambridge, 1993.
[27] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Neural Information Processing Systems (NIPS) 21, 2008.
[28] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555–568, 2009.
9
| 4290 |@word polynomial:2 norm:1 physik:1 d2:4 simulation:3 r:12 bn:1 decomposition:2 covariance:1 liu:1 series:3 score:2 existing:2 recovered:1 yet:2 additive:1 numerical:1 partition:1 remove:3 plot:1 greedy:46 fewer:2 intelligence:2 provides:2 boosting:2 node:19 contribute:1 simpler:1 zhang:6 c2:4 consists:1 specialize:1 shorthand:1 combine:2 paragraph:1 theoretically:1 pairwise:9 inter:1 indeed:1 surge:1 actual:1 cardinality:1 increasing:1 provided:3 estimating:6 underlying:2 bounded:4 notation:2 moreover:3 matched:1 unified:2 finding:1 guarantee:6 subclass:2 act:1 k2:10 control:6 grant:1 kalisch:1 before:1 understood:1 local:4 xv:5 consequence:3 acad:1 zeitschrift:1 mach:1 meet:1 acta:1 dantzig:1 meinshausen:1 suggests:1 polytrees:2 ease:1 range:3 averaged:2 practical:1 testing:1 atomic:1 union:3 xr:17 procedure:2 word:1 get:3 irrepresentable:3 selection:11 context:3 influence:1 applying:1 sparsistent:7 equivalent:1 imposed:3 convex:5 focused:1 decomposable:2 recovery:7 assigns:1 estimator:3 rule:4 searching:1 annals:6 play:1 suppose:5 exact:3 us:1 hypothesis:1 expensive:1 zrt:3 ising:6 role:1 capture:1 ensures:1 pradeepr:1 pd:1 convexity:6 complexity:4 covariates:2 rewrite:2 ali:1 efficiency:1 basis:2 completely:1 k0:4 various:1 describe:2 artificial:2 neighborhood:17 outside:2 exhaustive:1 saunders:1 heuristic:2 larger:2 solve:1 elad:1 say:2 drawing:1 otherwise:2 statistic:9 itself:1 eigenvalue:1 propose:2 reconstruction:1 maximal:1 p4:1 remainder:1 inducing:1 csisz:1 convergence:1 empty:1 requirement:1 r1:1 derive:1 coupling:2 illustrate:1 nearest:4 z1n:9 strong:11 dividing:1 recovering:1 c:2 come:2 met:1 require:3 suffices:1 abbeel:1 hypertrees:1 proposition:1 rothman:1 extension:1 hold:9 exp:14 bj:2 claim:1 lm:1 substituting:1 major:1 optimizer:2 adopt:1 a2:1 bickel:1 failing:3 estimation:10 proc:1 uhlmann:1 utexas:3 minimization:1 beitrag:1 mit:1 gaussian:2 ej:4 cr:1 corollary:5 focus:2 l1regularization:1 likelihood:13 check:1 contrast:4 greedily:2 
milder:1 mrfs:2 stopping:14 nn:1 sb:25 typically:5 chow:1 koller:2 interested:2 tao:1 arg:4 among:2 special:2 apriori:1 field:7 construct:1 ng:1 sampling:1 kw:2 broad:1 represents:1 yu:2 others:1 np:3 sanghavi:1 causation:1 randomly:2 sparsistency:9 welsh:1 n1:1 friedman:1 interest:2 circular:1 analyzed:1 pradeep:1 natl:1 regularizers:2 chain:5 edge:9 respective:1 tree:3 taylor:2 re:1 theoretical:3 rsc:13 instance:4 corroborate:3 ar:1 zn:1 cost:2 vertex:2 subset:1 entry:1 successful:1 johnson:1 too:1 characterize:1 combined:1 st:2 jour:1 negahban:4 siam:1 accessible:1 sequel:1 lee:1 physic:1 quickly:1 satisfied:2 leveraged:1 possibly:1 hoeffding:1 conf:1 ganapathi:1 supp:1 de:1 star:5 b2:1 biometrical:1 satisfy:3 reiterate:1 depends:1 performed:1 break:2 try:1 view:1 analyze:3 characterizes:1 start:1 recover:1 candes:1 defer:1 ferromagnetismus:1 contribution:1 square:1 largely:1 yield:2 bayesian:1 critically:1 knot:1 minj:2 definition:1 inexpensive:1 failure:1 naturally:1 associated:1 proof:6 recovers:1 stop:7 newly:1 popular:1 ask:1 colouring:1 recall:1 improves:1 follow:1 improved:1 though:1 just:2 p6:1 until:1 sly:1 christopher:1 lack:1 logistic:16 stagewise:2 behaved:1 grows:1 k22:4 requiring:1 true:6 verify:1 counterpart:3 regularization:5 hence:5 spirtes:1 ex1:1 round:1 self:1 width:1 chaining:1 criterion:2 generalized:2 complete:3 demonstrate:1 performs:2 l1:4 image:1 wise:5 empirically:1 extend:1 cambridge:2 gibbs:1 ai:2 smoothness:1 tuning:1 consistency:2 grid:3 similarly:2 inclusion:3 outlined:1 language:1 entail:2 add:5 recent:5 showed:2 exclusion:3 optimizing:1 irrepresentability:1 inequality:3 binary:3 success:7 der:1 scoring:1 analyzes:1 minimum:2 signal:1 ii:1 multiple:1 levina:1 calculation:1 cross:1 long:1 bach:1 ravikumar:5 prediction:1 mrf:1 variant:2 regression:15 basic:1 metric:2 arxiv:1 agarwal:1 c1:5 zur:1 separately:1 completes:1 rest:1 undirected:1 lafferty:1 alij:1 presence:1 noting:1 backwards:1 counting:1 identically:1 concerned:1 easy:1 
variety:1 independence:1 hastie:1 lasso:3 cn:2 multiclass:1 texas:3 whether:1 remark:1 useful:1 detailed:1 involve:2 exist:2 nsf:1 notice:1 talata:1 sign:1 estimated:2 tibshirani:1 nodewise:1 discrete:8 numerica:1 dasgupta:1 group:3 threshold:8 deletes:1 drawn:3 d3:3 groupsparse:1 backward:21 graph:35 year:1 run:3 uncertainty:1 extends:2 reader:1 electronic:1 scaling:3 comparable:1 bound:11 followed:1 guaranteed:1 occur:1 constraint:1 x2:1 encodes:1 declared:2 u1:4 argument:5 min:5 glymour:1 according:1 march:1 terminates:4 ur:1 subtler:1 restricted:9 gradually:1 invariant:1 computationally:2 equation:1 scheines:1 previously:1 remains:1 turn:2 tractable:2 end:6 pursuit:2 apply:2 batch:3 rp:6 cf:1 include:1 graphical:21 hinge:2 k1:3 approximating:1 objective:1 question:3 added:2 occurs:1 concentration:2 rt:10 dependence:1 jalali:2 nr:1 said:1 gradient:3 simulated:1 sci:2 mail:1 index:1 kk:1 setup:1 theorie:1 info:1 perform:1 upper:2 observation:6 markov:9 finite:2 acknowledge:1 defining:1 extended:1 community:1 buhlmann:1 pair:1 required:7 specified:1 c3:1 z1:1 c4:2 learned:2 nip:4 trans:1 address:2 suggested:1 below:2 pattern:1 appeared:1 sparsity:6 program:4 including:1 wainwright:4 power:1 natural:1 regularized:14 zhu:1 mossel:1 review:1 l2:3 acknowledgement:1 fully:1 loss:21 bresler:1 permutation:1 mixed:1 lecture:1 srebro:1 versus:2 validation:1 contingency:1 degree:5 vasuki:1 sufficient:3 xp:4 consistent:3 austin:1 supported:4 last:1 intractably:1 side:1 weaker:3 allow:1 neighbor:5 taking:1 absolute:1 sparse:11 distributed:1 van:1 curve:1 forward:23 adaptive:1 temlyakov:1 selector:1 active:4 assumed:1 xi:1 search:6 table:1 learn:3 nature:1 expansion:1 domain:1 aistats:1 main:2 noise:2 n2:1 x1:4 fig:1 guise:1 inferring:1 sub:2 candidate:2 chickering:1 third:1 removing:1 theorem:12 xt:7 list:1 intractable:1 false:5 adding:1 conditioned:4 chen:1 ez:1 expressed:1 u2:4 corresponds:1 satisfies:12 conditional:11 asutin:2 goal:2 donoho:2 hard:1 experimentally:1 
typical:1 specifically:3 wt:2 lemma:20 geer:1 experimental:2 concordant:1 succeeds:1 attempted:1 support:5 relevance:1 |
Improving Topic Coherence with
Regularized Topic Models
David Newman
University of California, Irvine
[email protected]
Edwin V. Bonilla
Wray Buntine
NICTA & Australian National University
{edwin.bonilla, wray.buntine}@nicta.com.au
Abstract
Topic models have the potential to improve search and browsing by extracting
useful semantic themes from web pages and other text documents. When learned
topics are coherent and interpretable, they can be valuable for faceted browsing,
results set diversity analysis, and document retrieval. However, when dealing with
small collections or noisy text (e.g. web search result snippets or blog posts),
learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models.
Our regularizers work by creating a structured prior over words that reflect broad
patterns in the external data. Using thirteen datasets we show that both regularizers
improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more
useful across a broader range of text data.
1
Introduction
Topic modeling holds much promise for improving the ways users search, discover, and organize
online content by automatically extracting semantic themes from collections of text documents.
Learned topics can be useful in user interfaces for ad-hoc document retrieval [18]; driving faceted
browsing [14]; clustering search results [19]; or improving display of search results by increasing
result diversity [10]. When the text being modeled is plentiful, clear and well written (e.g. large
collections of abstracts from scientific literature), learned topics are usually coherent, easily understood, and fit for use in user interfaces. However, topics are not always consistently coherent, and
even with relatively well written text, one can learn topics that are a mix of concepts or hard to
understand [1, 6]. This problem is exacerbated for content that is sparse or noisy, such as blog posts,
tweets, or web search result snippets. Take for instance the task of learning categories in clustering
search engine results. A few searches with Carrot2, Yippee, or WebClust quickly demonstrate that
consistently learning meaningful topic facets is a difficult task [5].
Our goal in this paper is to improve the coherence, interpretability and ultimate usability of learned
topics. To achieve this we propose QUAD-REG and CONV-REG, two new methods for regularizing
topic models, which produce more coherent and interpretable topics. Our work is predicated on
recent evidence that a pointwise mutual information-based score (PMI-Score) is highly correlated
with human-judged topic coherence [15, 16]. We develop two Bayesian regularization formulations that are designed to improve PMI-Score. We experiment with five search result datasets from
7M Blog posts, four search result datasets from 1M News articles, and four datasets of Google
search results. Using these thirteen datasets, our experiments demonstrate that both regularizers
consistently improve topic coherence and interpretability, as measured separately by PMI-Score and
human judgements. To the best of our knowledge, our models are the first to address the problem
of learning topics when dealing with limited and/or noisy text content. This work opens up new
application areas for topic modeling.
2
Topic Coherence and PMI-Score
Topics learned from a statistical topic model are formally a multinomial distribution over words,
and are often displayed by printing the 10 most probable words in the topic. These top-10 words
usually provide sufficient information to determine the subject area and interpretation of a topic,
and distinguish one topic from another. However, topics learned on sparse or noisy text data are
often less coherent, difficult to interpret, and not particularly useful. Some of these noisy topics
can be vaguely interpretable, but contain (in the top-10 words) one or two unrelated words, while
other topics can be practically incoherent. In this paper we wish to improve topic models learned on
document collections where the text data is sparse and/or noisy. We postulate that using additional
(possibly external) data will regularize the learning of the topic models.
Therefore, our goal is to improve topic coherence. Topic coherence ? meaning semantic coherence
? is a human judged quality that depends on the semantics of the words, and cannot be measured
by model-based statistical measures that treat the words as exchangeable tokens. Fortunately, recent
work has demonstrated that it is possible to automatically measure topic coherence with near-human
accuracy [16, 15] using a score based on pointwise mutual information (PMI). In that work they
showed (using 6000 human evaluations) that the PMI-Score broadly agrees with human-judged
topic coherence. The PMI-Score is motivated by measuring word association between all pairs of
words in the top-10 topic words. PMI-Score is defined as follows:
    PMI-Score(w) = (1/45) Σ_{i<j} PMI(w_i, w_j),    i, j ∈ {1, . . . , 10},        (1)

where

    PMI(w_i, w_j) = log [ P(w_i, w_j) / (P(w_i) P(w_j)) ],        (2)

and 45 is the number of PMI scores over the set of distinct word pairs in the top-10 words. A key
aspect of this score is that it uses external data, that is, data not used during topic modeling. This
data could come from a variety of sources, for example the corpus of 3M English Wikipedia articles.
For this paper, we will use both PMI-Score and human judgements to measure topic coherence.
Note that we can measure the PMI-Score of an individual topic, or for a topic model of T topics (in
that case PMI-Score will refer to the average of T PMI-Scores). This PMI-Score, and the idea of
using external data to measure it, forms the foundation of our idea for regularization.
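As a concrete illustration, the PMI-Score of Equations (1) and (2) can be computed as in the minimal sketch below. This is our own illustration, not code from the paper: it assumes word and word-pair probabilities have already been estimated from the external corpus, and the names `pmi_score`, `p_single`, and `p_joint` are ours.

```python
import itertools
import math

def pmi_score(top10, p_single, p_joint):
    """Equation (1): average PMI over the distinct pairs of a topic's top-10 words.

    p_single[w] and p_joint[(wi, wj)] (keys sorted alphabetically) are word and
    word-pair probabilities estimated from external data, e.g. Wikipedia.
    """
    pairs = list(itertools.combinations(top10, 2))
    total = 0.0
    for wi, wj in pairs:
        joint = p_joint[tuple(sorted((wi, wj)))]
        total += math.log(joint / (p_single[wi] * p_single[wj]))  # Equation (2)
    return total / len(pairs)  # 45 pairs when len(top10) == 10
```

For a full top-10 list this averages over the 45 pairs named in Equation (1); independent word pairs contribute zero to the score.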
3
Regularized Topic Models
In this section we describe our approach to regularization in topic models by proposing two different methods: (a) a quadratic regularizer (QUAD-REG) and (b) a convolved Dirichlet regularizer
(CONV-REG). We start by introducing the standard notation in topic modeling and the baseline
latent Dirichlet allocation method (LDA, [4, 9]).
3.1
Topic Modeling and LDA
Topic models are a Bayesian version of probabilistic latent semantic analysis [11]. In standard
LDA topic modeling each of D documents in the corpus is modeled as a discrete distribution over T
latent topics, and each topic is a discrete distribution over the vocabulary of W words. For document
d, the distribution over topics, θ_t|d, is drawn from a Dirichlet distribution Dir[α]. Likewise, each
distribution over words, φ_w|t, is drawn from a Dirichlet distribution, Dir[β].

For the ith token in a document, a topic assignment, z_id, is drawn from θ_t|d and the word, x_id, is
drawn from the corresponding topic, φ_w|z_id. Hence, the generative process in LDA is given by:

    θ_t|d ∼ Dirichlet[α]        φ_w|t ∼ Dirichlet[β]        (3)
    z_id ∼ Mult[θ_t|d]          x_id ∼ Mult[φ_w|z_id].      (4)

We can compute the posterior distribution of the topic assignments via Gibbs sampling by writing down the joint probability, integrating out θ and φ, and following a few simple mathematical
manipulations to obtain the standard Gibbs sampling update:

    p(z_id = t | x_id = w, z_{-i}) ∝ (N_wt^{-i} + β) / (N_t^{-i} + Wβ) · (N_td^{-i} + α),        (5)

where z_{-i} denotes the set of topic assignment variables except the ith variable; N_wt is the number
of times word w has been assigned to topic t; N_td is the number of times topic t has been assigned
to document d, and N_t = Σ_{w=1}^W N_wt.

Given samples from the posterior distribution we can compute point estimates of the document-topic
proportions θ_t|d and the word-topic probabilities φ_w|t. We will denote henceforth φ_t as the vector
of word probabilities for a given topic t and analogously for other variables.
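For reference, one sweep of the collapsed Gibbs sampler implied by Equation (5) can be sketched as below. This is a generic textbook-style implementation under our own function and variable names, not the authors' code.

```python
import numpy as np

def gibbs_pass(docs, z, Nwt, Ntd, Nt, alpha, beta, rng):
    """One sweep of collapsed Gibbs sampling for LDA, Equation (5).

    docs: list of documents, each a list of word ids
    z:    current topic assignment for every token, same shape as docs
    Nwt:  W x T word-topic counts; Ntd: T x D topic-document counts; Nt: length-T totals
    """
    W, T = Nwt.shape
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            # remove this token's assignment from the counts (the "-i" in Eq. (5))
            Nwt[w, t] -= 1; Ntd[t, d] -= 1; Nt[t] -= 1
            # p(z = t) proportional to (Nwt + beta)/(Nt + W*beta) * (Ntd + alpha)
            p = (Nwt[w, :] + beta) / (Nt + W * beta) * (Ntd[:, d] + alpha)
            t = int(rng.choice(T, p=p / p.sum()))
            z[d][i] = t
            Nwt[w, t] += 1; Ntd[t, d] += 1; Nt[t] += 1
    return z
```

After each sweep the count arrays remain consistent with the assignments, which is a convenient invariant to check when debugging a sampler.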
3.2
Regularization via Structured Priors
To learn better topic models for small or noisy collections we introduce structured priors on φ_t based
upon external data, which has a regularization effect on the standard LDA model. More specifically,
our priors on φ_t will depend on the structural relations of the words in the vocabulary as given by
external data, which will be characterized by the W × W "covariance" matrix C. Intuitively, C
is a matrix that captures the short-range dependencies between words in the external data. More
importantly, we are only interested in relatively frequent terms from the vocabulary, so C will be a
sparse matrix and hence computations are still feasible for our methods to be used in practice.
3.3
Quadratic Regularizer (QUAD-REG)
Here we use a standard quadratic form with a trade-off factor. Therefore, given a matrix of word
dependencies C, we can use the prior:

    p(φ_t | C) ∝ (φ_t^T C φ_t)^ν        (6)

for some power ν. Note we do not know the normalization factor but for our purposes of MAP
estimation we do not need it. The log posterior (omitting irrelevant constants) is given by:

    L_MAP = Σ_{i=1}^W N_it log φ_i|t + ν log φ_t^T C φ_t.        (7)

Optimizing Equation (7) with respect to φ_w|t subject to the constraints Σ_{i=1}^W φ_i|t = 1, we obtain
the following fixed point update:

    φ_w|t ← (1 / (N_t + 2ν)) ( N_wt + 2ν φ_w|t Σ_{i=1}^W C_iw φ_i|t / (φ_t^T C φ_t) ).        (8)

We note that unlike other topic models in which a covariance or correlation structure is used (as
in the correlated topic model, [3]) in the context of correlated priors for θ_t|d, our method does not
require the inversion of C, which would be impractical for even modest vocabulary sizes.

By using the update in Equation (8) we obtain the values for φ_w|t. This means we no longer have
neat conjugate priors for φ_w|t and thus the sampling in Equation (5) does not hold. Instead, at the
end of each major Gibbs cycle, φ_w|t is re-estimated and the corresponding Gibbs update becomes:

    p(z_id = t | x_id = w, z_{-i}, φ_w|t) ∝ φ_w|t (N_td^{-i} + α).        (9)
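A minimal sketch of the fixed-point iteration in Equation (8) for a single topic is given below. This is our own illustration, assuming a symmetric dependency matrix C (so that Σ_i C_iw φ_i|t is a plain matrix-vector product); the function and variable names are ours.

```python
import numpy as np

def quad_reg_update(Nwt_t, C, phi_t, nu, n_iter=10):
    """Fixed-point iteration of Equation (8) for a single topic t.

    Nwt_t: length-W word counts for topic t; C: W x W symmetric word-dependency
    matrix; phi_t: current word probabilities; nu: regularization trade-off.
    """
    Nt = Nwt_t.sum()
    for _ in range(n_iter):
        Cphi = C @ phi_t               # (C phi)_w = sum_i C_iw phi_i|t for symmetric C
        quad = phi_t @ Cphi            # phi_t^T C phi_t
        phi_t = (Nwt_t + 2.0 * nu * phi_t * Cphi / quad) / (Nt + 2.0 * nu)
    return phi_t
```

A useful sanity check is that the update preserves normalization: summing the numerator over w gives N_t + 2ν, so each iterate stays on the probability simplex.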
3.4
Convolved Dirichlet Regularizer (CONV-REG)
Another approach to leveraging information on word dependencies from external data is to consider
that each φ_t is a mixture of word probabilities ψ_t, where the coefficients are constrained by the
word-pair dependency matrix C:

    φ_t ∝ C ψ_t        where        ψ_t ∼ Dirichlet(β1).        (10)

Each topic has a different ψ_t drawn from a Dirichlet, thus the model is a convolved Dirichlet. This
means that we convolve the supplied topic to include a spread of related words. Then we have that:

    p(w | z = t, C, ψ_t) = Π_{i=1}^W ( Σ_{j=1}^W C_ij ψ_j|t )^{N_it}.        (11)
Table 1: Search result datasets came from a collection of 7M Blogs, a collection of 1M News articles,
and the web. The first two collections were indexed with Lucene. The queries below were issued to
create five Blog datasets, four News datasets, and four Web datasets. Search result set sizes ranged
from 1000 to 18,590. For Blogs and News, half of each dataset was set aside for Test, and Train was
sampled from the remaining half. For Web, Train was the top-40 search results.
Name          Query                                     # Results   DTest   DTrain
Blogs
  beijing     beijing olympic ceremony                      5024     2512       39
  climate     climate change                               14,932     7466       58
  obama       obama debate                                 18,590     9295       72
  palin       palin interview                              10,478     5239       40
  vista       vista problem                                  4214     2107       32
News
  baseball    major league baseball game team player         3774     1887       29
  drama       television show series drama                   3024     1512       23
  health      health medicine insurance                      1655      828       25
  legal       law legal crime court                          2976     1488       23
Web
  depression  depression                                     1000     1000       40
  migraine    migraine                                       1000     1000       40
  america     america                                        1000     1000       40
  south africa  south africa                                 1000     1000       40
We obtain the MAP solution to ψ_t by optimizing:

    L_MAP = Σ_{i=1}^W N_it log ( Σ_{j=1}^W C_ij ψ_j|t ) + Σ_{j=1}^W (β − 1) log ψ_j|t
            s.t. Σ_{j=1}^W ψ_j|t = 1.        (12)

Solving for ψ_w|t we obtain:

    ψ_w|t ∝ Σ_{i=1}^W [ N_it C_iw / ( Σ_{j=1}^W C_ij ψ_j|t ) ] ψ_w|t + β.        (13)
We follow the same semi-collapsed inference procedure used for QUAD-REG, with the updates in
Equations (13) and (10) producing the values for φ_w|t to be used in the semi-collapsed sampler (9).
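The updates of Equations (13) and (10) for one topic can be sketched as follows. This is our own illustration under our own names; normalizing ψ_t after each step is an implementation choice, not something the derivation prescribes.

```python
import numpy as np

def conv_reg_update(Nwt_t, C, beta, n_iter=10):
    """Fixed-point updates for Equations (13) and (10), for a single topic t.

    Nwt_t: length-W word counts for topic t; C: W x W word-dependency matrix;
    beta: Dirichlet parameter on psi_t. Returns phi_t, normalized to sum to 1.
    """
    W = C.shape[0]
    psi = np.full(W, 1.0 / W)                      # initial psi_t
    for _ in range(n_iter):
        denom = C @ psi                            # (C psi)_i = sum_j C_ij psi_j|t
        psi = psi * (C.T @ (Nwt_t / denom)) + beta  # Equation (13), unnormalized
        psi /= psi.sum()
    phi = C @ psi                                  # Equation (10): phi_t prop. to C psi_t
    return phi / phi.sum()
```

With C equal to the identity the convolution has no effect and the fixed point reduces to the familiar smoothed count estimate, which is a convenient correctness check.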
4
Search Result Datasets
Text datasets came from a collection of 7M Blogs (from ICWSM 2009), a collection of 1M News
articles (LDC Gigaword), and the Web (using Google?s search). Table 1 shows a summary of the
datasets used. These datasets provide a diverse range of content for topic modeling. Blogs are often
written in a chatty and informal style, which tends to produce topics that are difficult to interpret.
News articles are edited to a higher standard, so learned topics are often fairly interpretable when
one models, say, thousands of articles. However, our experiments use 23-29 articles, limiting the
data for topic learning. Snippets from web search result present perhaps the most sparse data. For
each dataset we created the standard bag-of-words representation and performed fairly standard
tokenization. We created a vocabulary of terms that occurred at least five times (or two times, for the
Web datasets), after excluding stopwords. We learned the topic models on the Train data set, setting
T = 15 for Blogs datasets, T = 10 for News datasets, and T = 8 for the Web datasets.
Construction of C: The word co-occurrence data for regularization was obtained from the entire
LDC dataset of 1M articles (for News), a subset of the 7M blog posts (for Blogs), and using all 3M
English Wikipedia articles (for Web). Word co-occurrence was computed using a sliding window
of ten words to emphasize short-range dependency. Note that we only kept positive PMI values.
For each dataset we created a W × W matrix of co-occurrence counts using the 2000-most frequent
terms in the vocabulary for that dataset, thereby maintaining reasonably good sparsity for these data.
Selecting most-frequent terms makes sense because our objective is to improve PMI-Score, which
is defined over the top-10 topic words, which tend to involve relatively high-frequency terms. Using
high-frequency terms also avoids potential numerical problems of large PMI values arising from
co-occurrence of rare terms.
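A simple sketch of this construction, counting co-occurrences in a ten-word sliding window and keeping only positive PMI values, is given below. The function name, the dict-of-pairs representation, and the window-counting convention are our own assumptions; the paper does not specify its exact counting scheme.

```python
from collections import Counter
from itertools import combinations
import math

def build_C(docs, vocab, window=10):
    """Word-pair PMI values from a ten-word sliding window over an external corpus.

    docs: list of token lists; vocab: set of frequent terms to keep.
    Returns a sparse dict mapping sorted pairs (wi, wj) -> positive PMI.
    """
    single, joint, n_windows = Counter(), Counter(), 0
    for doc in docs:
        for start in range(max(1, len(doc) - window + 1)):
            win = {w for w in doc[start:start + window] if w in vocab}
            n_windows += 1
            single.update(win)
            joint.update(combinations(sorted(win), 2))
    C = {}
    for (wi, wj), n_ij in joint.items():
        pmi = math.log(n_ij * n_windows / (single[wi] * single[wj]))
        if pmi > 0:                     # keep positive PMI values only
            C[(wi, wj)] = pmi
    return C
```

Restricting `vocab` to the 2000 most frequent terms keeps the resulting matrix sparse, matching the motivation given in the text.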
Figure 1: PMI-Score and test perplexity of regularized methods vs. LDA on Blogs, T = 15. Both
regularization methods improve PMI-Score and perplexity for all datasets, with the exception of
"vista" where QUAD-REG has slightly higher perplexity.
5
Experiments
In this section we evaluate our regularized topic models by reporting the average PMI-Score over 10
different runs, each computed using Equations (1) and (2) (and then in Section 5.4, we use human
judgements). Additionally, we report the average test data perplexity over 10 samples from the
posterior across ten independent chains, where each perplexity is calculated using:
    Perp(x^test) = exp( −(1/N^test) log p(x^test) ),
    log p(x^test) = Σ_{dw} N_dw^test log Σ_t φ_w|t θ_t|d,        (14)

    θ_t|d = (α + N_td) / (Tα + N_d),        φ_w|t = (β + N_wt) / (Wβ + N_t).        (15)

The document mixture θ_t|d is learned from test data, and the log probability of the test words is computed using this mixture. Each φ_w|t is computed by Equation (15) for the baseline LDA model, and
it is used directly for the QUAD-REG and CONV-REG methods. For the Gibbs sampling algorithms
we set α = 0.05 N/(DT) and β = 0.01 (initially). This setting of α allocates 5% of the probability mass
for smoothing. We run the sampling for 300 iterations; applied the fixed point iterations (on the
regularized models) 10 times every 20 Gibbs iterations and ran 10 different random initializations
(computing average over these runs). We used T = 10 for the News datasets, T = 15 for the Blogs
datasets and T = 8 for the Web datasets. Note that test perplexity is computed on DTest (Table 1) that
is at least an order of magnitude larger than the training data. After some preliminary experiments,
we fixed QUAD-REG's regularization parameter to ν = 0.5 N/T.
5.1
Results
Figures 1 and 2 show the average PMI-Scores and average test perplexities for the Blogs and News
datasets. For Blogs (Figure 1) we see that our regularized models consistently improve PMI-Score
and test perplexity on all datasets with the exception of the "vista" dataset where QUAD-REG has
slightly higher perplexity. For News (Figure 2) we see that both regularization methods improve
PMI-Score and perplexity for all datasets. Hence, we can conclude that our regularized models not
only provide a good characterization of the collections but also improve the coherence of the learned
topics as measured by the PMI-Score. It is reasonable to expect both PMI-Score and perplexity to
improve as semantically related words should be expected in topic models, so with little data, our
regularizers push both measures in a positive direction.
5.2
Coherence of Learned Topics
Table 2 shows selected topics learned by LDA and our QUAD-REG model. To obtain correspondence of topics (for this experiment), we initialized the QUAD-REG model with the converged LDA
model. Overall, our regularized model tends to learn topics that are more focused on a particular subject, contain fewer spurious words, and therefore are easier to interpret. The following list
explains how the regularized version of the topic is more useful:
Figure 2: PMI-Score and test perplexity of regularized methods vs. LDA on News, T = 10. Both
regularization methods improve PMI-Score and perplexity for all datasets.
Table 2: Selected topics improved by regularization. Each pair first shows an LDA topic and the
corresponding topic produced by QUAD-REG (initialized from the converged LDA model). QUAD-REG's PMI-Scores were always better than LDA's on these examples. The regularized versions tend
to be more focused on a particular subject and easier to interpret.
Name     Model  Topic
beijing  LDA    girl phony world yang fireworks interest maybe miaoke peiyi young
         REG    girl yang peiyi miaoke lin voice real lip music sync
obama    LDA    palin biden sarah running mccain media hilton stein paris john
         REG    palin sarah mate running biden vice governor selection alaska choice
drama    LDA    wire david place police robert baltimore corner friends com simon
         REG    drama episode characters series cop cast character actors detective emmy
legal    LDA    saddam american iraqi iraq judge against charges minister thursday told
         REG    iraqi saddam iraq military crimes tribunal against troops accused officials
beijing  QUAD-REG has better focus on the names and issues involved in the controversy over the
Chinese replacing the young girl doing the actual singing at the Olympic opening ceremony
with the girl who lip-synched.
obama    QUAD-REG focuses on Sarah Palin's selection as a GOP Vice Presidential candidate, while
LDA has a less clear theme including the story of Paris Hilton giving Palin fashion advice.
drama    QUAD-REG learns a topic related to television police dramas, while LDA narrowly focuses
on David Simon's The Wire along with other scattered terms: robert and friends.
legal    LDA topic is somewhat related to Saddam Hussein's appearance in court, but includes
uninteresting terms such as: thursday, and told. The QUAD-REG topic is an overall better
category relating to the tribunal and charges against Saddam Hussein.
5.3
Modeling of Google Search Results
Are our regularized topic models useful for building facets in a clustering-web-search-results type
of application? Figure 3 (top) shows the average PMI-Score (mean ± two standard errors over
10 runs) for the four searches described in Table 1 (Web dataset) and the average perplexity using
top-1000 results as test data (bottom). In all cases QUAD-REG and CONV-REG learn better topics,
as measured by PMI-Score, compared to those learned by LDA. Additionally, whereas QUAD-REG
exhibits slightly higher values of perplexity compared to LDA, CONV-REG consistently improved
perplexity on all four search datasets. This level of improvement in PMI-Score through regularization was not seen in News or Blogs likely because of the greater sparsity in these data.
5.4
Human Evaluation of Regularized Topic Models
So far we have evaluated our regularized topic models by assessing (a) how faithful their representation is to the collection of interest, as measured by test perplexity, and (b) how coherent they are,
Figure 3: PMI-Score and test perplexity of regularized methods vs. LDA on Google search results.
Both methods improve PMI-Score and CONV-REG also improves test perplexity, which is computed
using top-1000 results as test data (therefore top-1000 test perplexity is not reported).
as given by the PMI-Score. Ultimately, we have hypothesized that humans will find our regularized
topic models more semantically coherent than baseline LDA and therefore more useful for tasks
such as document clustering, search and browsing. To test this hypothesis we performed further experiments where we asked humans to directly compare our regularized topics with LDA topics and
choose which is more coherent. As our experimental results in this section show, our regularized
topic models significantly outperform LDA based on actual human judgements.
To evaluate our models with human judgments we used Amazon Mechanical Turk (AMT, https://www.mturk.com), where we asked workers to compare topic pairs (one topic given by one
of our regularized models and the other topic given by LDA) and to answer explicitly which topic
was more coherent according to how clearly they represented a single theme/idea/concept. To keep
the cognitive load low (while still having a fair and sound evaluation of the topics) we described
each topic by its top-10 words. We provided an additional option "...Can't decide..." indicating
that the user could not find a qualitative difference between the topics presented. We also included
control comparisons to filter out bad workers. These control comparisons were done by replacing
a randomly-selected topic word with an intruder word. To have aligned (matched) pairs of topics,
the sampling procedure of our regularized topic models was initialized with LDA's topic assignment
obtained after convergence of Gibbs sampling. These experiments produced a total of 3650 topic-comparison human evaluations and the results can be seen in Figure 4.
6
Related Work
Several authors have investigated the use of domain knowledge from external sources in topic modeling. For example, [7, 8] propose a method for combining topic models with ontological knowledge
to tag web pages. They constrain the topics in an LDA-based model to be amongst those in the given
ontology. [20] also use statistical topic models with a predefined set of topics to address the task of
query classification. Our goal is different to theirs in that we are not interested in constraining the
learned topics to those in the external data but rather in improving the topics in small or noisy collections by means of regularization. Along a similar vein, [2] incorporate domain knowledge into topic
models by encouraging some word pairs to have similar probability within a topic. Their method,
as ours, is based on replacing the standard Dirichlet prior over word-topic probabilities. However,
unlike our approach that is entirely data-driven, it appears that their method relies on interactive
feedback from the user or on the careful selection of words within an ontological concept.
The effect of structured priors in LDA has been investigated by [17] who showed that learning
hierarchical Dirichlet priors over the document-topic distribution can provide better performance
than using a symmetric prior. Our work is motivated by the fact that priors matter but is focused on a
rather different use case of topic models, i.e. when we are dealing with small or noisy collections and
want to improve the coherence of the topics by re-defining the prior on the word-topic distributions.
Priors that introduce correlations in topic models have been investigated by [3]. Unlike our work
that considers priors on the word-topic distributions (φ_w|t), they introduce a correlated prior on the
Figure 4: The proportion of times workers in Amazon Mechanical Turk selected each topic model as
showing better coherence. In nearly all cases our regularized models outperform LDA. CONV-REG
outperforms LDA in 11 of 13 datasets. QUAD-REG never performs worse than LDA (at the dataset
level). On average (from 3650 topic comparisons) workers selected QUAD-REG as more coherent
57% of the time while they selected LDA as more coherent only 37% of the time. Similarly, they
chose CONV-REG's topics as more coherent 56% of the time, and LDA as more coherent only 39%
of the time. These results are statistically significant at the 5% level of significance when performing
a paired t-test on the total values across all datasets. Note that the two bars corresponding to each
dataset do not add up to 100% as the remaining mass corresponds to "...Can't decide..." responses.
topic proportions (θt|d). In our approach, considering similar priors for φw|t to those studied by [3]
would be infeasible, as they would require the inverse of a W × W covariance matrix.
Network structures associated with a collection of documents are used in [12] in order to "smooth"
the topic distributions of the PLSA model [11]. Our methods are different in that they do not require
the collection under study to have an associated network structure as we aim at addressing the
different problem of regularizing topic models on small or noisy collections. Additionally, their work
is focused on regularizing the document-topic distributions instead of the word-topic distributions.
Finally, the work in [13], contemporary to ours, also addresses the problem of improving the quality
of topic models. However, our approach focuses on exploiting the knowledge provided by external
data given the noisy and/or small nature of the collection of interest.
7 Discussion & Conclusions
In this paper we have proposed two methods for regularization of LDA topic models based upon
the direct inclusion of word dependencies in our word-topic prior distributions. We have shown that
our regularized models can improve the coherence of learned topics significantly compared to the
baseline LDA method, as measured by the PMI-Score and assessed by human workers in Amazon
Mechanical Turk. While our focus in this paper has been on small, and small and noisy datasets, we
would expect our regularization methods also to be effective on large and noisy datasets. Note that
mixing and rate of convergence may be more of an issue with larger datasets, since our regularizers
use a semi-collapsed Gibbs sampler. We will address these large noisy collections in future work.
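The PMI-Score referred to above is computed from co-occurrence statistics of each topic's top words in an external corpus. The sketch below is a minimal toy version of that idea (the corpus, window size, and averaging scheme are illustrative assumptions, not the paper's exact setup): it estimates PMI(wi, wj) = log p(wi, wj) / (p(wi) p(wj)) from sliding-window counts and averages it over all pairs of a topic's top words.

```python
import itertools
import math
from collections import Counter

def pmi_score(topic_words, documents, window=10, eps=1e-12):
    """Average PMI over all pairs of a topic's top words.

    Co-occurrence probabilities are estimated from sliding windows over an
    external reference corpus (here: a toy list of token lists).
    """
    single = Counter()
    pair = Counter()
    n_windows = 0
    for doc in documents:
        for start in range(max(1, len(doc) - window + 1)):
            w = set(doc[start:start + window])
            n_windows += 1
            for t in w:
                single[t] += 1
            for a, b in itertools.combinations(sorted(w), 2):
                pair[(a, b)] += 1
    scores = []
    for a, b in itertools.combinations(sorted(set(topic_words)), 2):
        p_a = single[a] / n_windows
        p_b = single[b] / n_windows
        p_ab = pair[(a, b)] / n_windows
        scores.append(math.log((p_ab + eps) / (p_a * p_b + eps)))
    return sum(scores) / len(scores)

# Toy reference corpus: two "baseball" documents and two "legal" documents.
corpus = [["ball", "bat", "game", "team"], ["game", "team", "win"],
          ["court", "law", "judge"], ["law", "judge", "appeal"]]
coherent = pmi_score(["game", "team"], corpus)      # words that co-occur
incoherent = pmi_score(["game", "judge"], corpus)   # words that never co-occur
```

A topically mixed word pair never co-occurs in the reference corpus, so its score is driven strongly negative, which is the effect the PMI-Score exploits.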
Acknowledgments
NICTA is funded by the Australian Government as represented by the Department of Broadband,
Communications and the Digital Economy and the Australian Research Council through the ICT
Centre of Excellence program. DN was also supported by an NSF EAGER Award, an IMLS Research Grant, and a Google Research Award.
References
[1] L. AlSumait, D. Barbará, J. Gentle, and C. Domeniconi. Topic significance ranking of LDA generative
models. In ECML/PKDD, 2009.
[2] D. Andrzejewski, X. Zhu, and M. Craven. Incorporating domain knowledge into topic modeling via
Dirichlet forest priors. In ICML, 2009.
[3] David M. Blei and John D. Lafferty. Correlated topic models. In NIPS, 2005.
[4] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[5] Claudio Carpineto, Stanislaw Osinski, Giovanni Romano, and Dawid Weiss. A survey of web clustering
engines. ACM Comput. Surv., 41(3), 2009.
[6] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. Reading tea leaves: How humans interpret
topic models. In NIPS, 2009.
[7] Chaitanya Chemudugunta, America Holloway, Padhraic Smyth, and Mark Steyvers. Modeling documents
by combining semantic concepts with unsupervised statistical learning. In ISWC, 2008.
[8] Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. Combining concept hierarchies and
statistical topic models. In CIKM, 2008.
[9] T. Griffiths and M. Steyvers. Probabilistic topic models. In Latent Semantic Analysis: A Road to Meaning,
2006.
[10] Shengbo Guo and Scott Sanner. Probabilistic latent maximal marginal relevance. In SIGIR, 2010.
[11] Thomas Hofmann. Probabilistic latent semantic indexing. In SIGIR, 1999.
[12] Qiaozhu Mei, Deng Cai, Duo Zhang, and ChengXiang Zhai. Topic modeling with network regularization.
In WWW, 2008.
[13] David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. Optimizing
semantic coherence in topic models. In EMNLP, 2011.
[14] D.M. Mimno and A. McCallum. Organizing the OCA: learning faceted subjects from a library of digital
books. In JCDL, 2007.
[15] D. Newman, J.H. Lau, K. Grieser, and T. Baldwin. Automatic evaluation of topic coherence. In NAACL
HLT, 2010.
[16] D. Newman, Y. Noh, E. Talley, S. Karimi, and T. Baldwin. Evaluating topic models for digital libraries.
In JCDL, 2010.
[17] H. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.
[18] Xing Wei and W. Bruce Croft. LDA-based document models for ad-hoc retrieval. In SIGIR, 2006.
[19] Hua-Jun Zeng, Qi-Cai He, Zheng Chen, Wei-Ying Ma, and Jinwen Ma. Learning to cluster web search
results. In SIGIR, 2004.
[20] Haijun Zhai, Jiafeng Guo, Qiong Wu, Xueqi Cheng, Huawei Sheng, and Jin Zhang. Query classification
based on regularized correlated topic model. In Proceedings of the International Joint Conference on Web
Intelligence and Intelligent Agent Technology, 2009.
Clustered Multi-Task Learning Via Alternating
Structure Optimization
Jiayu Zhou, Jianhui Chen, Jieping Ye
Computer Science and Engineering
Arizona State University
Tempe, AZ 85287
{jiayu.zhou, jianhui.chen, jieping.ye}@asu.edu
Abstract
Multi-task learning (MTL) learns multiple related tasks simultaneously to improve
generalization performance. Alternating structure optimization (ASO) is a popular
MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many
real world applications. As an alternative MTL approach, clustered multi-task
learning (CMTL) assumes that multiple tasks follow a clustered structure, i.e.,
tasks are partitioned into a set of groups where tasks in the same group are similar
to each other, and that such a clustered structure is unknown a priori. The objectives in ASO and CMTL differ in how multiple tasks are related. Interestingly,
we show in this paper the equivalence relationship between ASO and CMTL, providing significant new insights into ASO and CMTL as well as their inherent relationship. The CMTL formulation is non-convex, and we adopt a convex relaxation
to the CMTL formulation. We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation
of ASO, and show that the proposed convex CMTL formulation is significantly
more efficient especially for high-dimensional data. In addition, we present three
algorithms for solving the convex CMTL formulation. We report experimental
results on benchmark datasets to demonstrate the efficiency of the proposed algorithms.
1 Introduction
Many real-world problems involve multiple related classification/regression tasks. A naive approach
task relatedness is not exploited. Recently, there is a growing interest in multi-task learning (MTL),
where we learn multiple related tasks simultaneously by extracting appropriate shared information
across tasks. In MTL, multiple tasks are expected to benefit from each other, resulting in improved
generalization performance. The effectiveness of MTL has been demonstrated empirically [1, 2, 3]
and theoretically [4, 5, 6]. MTL has been applied in many applications including biomedical informatics [7], marketing [1], natural language processing [2], and computer vision [3].
Many different MTL approaches have been proposed in the past; they differ in how the relatedness among different tasks is modeled. Evgeniou et al. [8] proposed the regularized MTL which
constrained the models of all tasks to be close to each other. The task relatedness can also be modeled by constraining multiple tasks to share a common underlying structure [4, 6, 9, 10]. Ando
and Zhang [5] proposed a structural learning formulation, which assumed multiple predictors for
different tasks shared a common structure on the underlying predictor space. For linear predictors,
they proposed the alternating structure optimization (ASO) that simultaneously performed inference
on multiple tasks and discovered the shared low-dimensional predictive structure. ASO has been
1
shown to be effective in many practical applications [2, 11, 12]. One limitation of the original ASO
formulation is that it involves a non-convex optimization problem and a globally optimal solution is
not guaranteed. A convex relaxation of ASO called CASO was proposed and analyzed in [13].
Many existing MTL formulations are based on the assumption that all tasks are related. In practical
applications, the tasks may exhibit a more sophisticated group structure where the models of tasks
from the same group are closer to each other than those from a different group. There have been
many prior work along this line of research, known as clustered multi-task learning (CMTL). In
[14], the mutual relatedness of tasks was estimated and knowledge of one task could be transferred
to other tasks in the same cluster. Bakker and Heskes [15] used clustered multi-task learning in a
Bayesian setting by considering a mixture of Gaussians instead of single Gaussian priors. Evgeniou
et al. [8] proposed the task clustering regularization and showed how cluster information could
be encoded in MTL, and however the group structure was required to be known a priori. Xue et
al. [16] introduced the Dirichlet process prior which automatically identified subgroups of related
tasks. In [17], a clustered MTL framework was proposed that simultaneously identified clusters
and performed multi-task inference. Because the formulation is non-convex, they also proposed a
convex relaxation to obtain a global optimum [17]. Wang et al. [18] used a similar idea to consider
clustered tasks by introducing an inter-task regularization.
The objective in CMTL differs from many MTL formulations (e.g., ASO which aims to identify a
shared low-dimensional predictive structure for all tasks) which are based on the standard assumption that each task can learn equally well from any other task. In this paper, we study the inherent
relationship between these two seemingly different MTL formulations. Specifically, we establish
the equivalence relationship between ASO and a specific formulation of CMTL, which performs
simultaneous multi-task learning and task clustering: First, we show that CMTL performs clustering on the tasks, while ASO performs projection on the features to find a shared low-rank structure.
Next, we show that the spectral relaxation of the clustering (on tasks) in CMTL and the projection
(on the features) in ASO lead to an identical regularization, related to the negative Ky Fan k-norm
of the weight matrix involving all task models, thus establishing their equivalence relationship. The
presented analysis provides significant new insights into ASO and CMTL as well as their inherent
relationship. To our best knowledge, the clustering view of ASO has not been explored before.
One major limitation of the ASO/CMTL formulation is that it involves a non-convex optimization,
as the negative Ky Fan k-norm is concave. We propose a convex relaxation of CMTL, and establish
the equivalence relationship between the proposed convex relaxation of CMTL and the convex ASO
formulation proposed in [13]. We show that the proposed convex CMTL formulation is significantly
more efficient especially for high-dimensional data. We further develop three algorithms for solving
the convex CMTL formulation based on the block coordinate descent, accelerated projected gradient, and gradient descent, respectively. We have conducted experiments on benchmark datasets
including School and Sarcos; our results demonstrate the efficiency of the proposed algorithms.
Notation: Throughout this paper, R^d denotes the d-dimensional Euclidean space, I denotes the
identity matrix of appropriate size, and N denotes the set of natural numbers. S^m_+ denotes the set of
symmetric positive semi-definite matrices of size m by m. A ⪯ B means that B − A is positive
semi-definite. tr(X) is the trace of X.
2 Multi-Task Learning: ASO and CMTL
Assume we are given a multi-task learning problem with m tasks; each task i ∈ N_m is associated
with a set of training data {(x_1^i, y_1^i), . . . , (x_{n_i}^i, y_{n_i}^i)} ⊂ R^d × R, and a linear predictive function f_i:
f_i(x_j^i) = w_i^T x_j^i, where w_i is the weight vector of the i-th task, d is the data dimensionality, and n_i
is the number of samples of the i-th task. We denote W = [w_1, . . . , w_m] ∈ R^{d×m} as the weight
matrix to be estimated. Given a loss function ℓ(·, ·), the empirical risk is given by:

    L(W) = Σ_{i=1}^m (1/n_i) Σ_{j=1}^{n_i} ℓ(w_i^T x_j^i, y_j^i).

We study the following multi-task learning formulation: min_W L(W) + Ω(W), where Ω encodes
our prior knowledge about the m tasks. Next, we review ASO and CMTL and explore their inherent
relationship.
2.1 Alternating structure optimization
In ASO [5], all tasks are assumed to share a common feature space Θ ∈ R^{h×d}, where h ≤
min(m, d) is the dimensionality of the shared feature space and Θ has orthonormal rows, i.e.,
ΘΘ^T = I_h. The predictive function of ASO is: f_i(x_j^i) = w_i^T x_j^i = u_i^T x_j^i + v_i^T Θ x_j^i, where the weight
w_i = u_i + Θ^T v_i consists of two components: the weight u_i for the high-dimensional
feature space and the weight v_i for the low-dimensional space based on Θ. ASO minimizes the
following objective function: L(W) + α Σ_{i=1}^m ||u_i||_2^2, subject to ΘΘ^T = I_h, where α is the
regularization parameter for task relatedness. We can further improve the formulation by including
a penalty, β Σ_{i=1}^m ||w_i||_2^2, to improve the generalization performance as in traditional supervised
learning. Since u_i = w_i − Θ^T v_i, we obtain the following ASO formulation:

    min_{W, {v_i}, Θ: ΘΘ^T = I_h}  L(W) + Σ_{i=1}^m ( α ||w_i − Θ^T v_i||_2^2 + β ||w_i||_2^2 ).    (1)

2.2 Clustered multi-task learning
In CMTL, we assume that the tasks are clustered into k < m clusters, and the index set of the
j-th cluster is defined as I_j = {v | v ∈ cluster j}. We denote the mean of the j-th cluster by
w̄_j = (1/n_j) Σ_{v∈I_j} w_v. For a given W = [w_1, · · · , w_m], the sum-of-squares error (SSE) function in
K-means clustering is given by [19, 20]:

    Σ_{j=1}^k Σ_{v∈I_j} ||w_v − w̄_j||_2^2 = tr(W^T W) − tr(F^T W^T W F),    (2)

where the matrix F ∈ R^{m×k} is an orthogonal cluster indicator matrix with F_{i,j} = 1/√n_j if i ∈ I_j and
F_{i,j} = 0 otherwise. If we ignore the special structure of F and keep the orthogonality requirement
only, the relaxed SSE minimization problem is:

    min_{F: F^T F = I_k}  tr(W^T W) − tr(F^T W^T W F),    (3)

resulting in the following penalty function for CMTL:

    Ω_CMTL0(W, F) = α ( tr(W^T W) − tr(F^T W^T W F) ) + β tr(W^T W),    (4)

where the first term is derived from the K-means clustering objective and the second term is to
improve the generalization performance. Combining Eq. (4) with the empirical error term L(W), we
obtain the following CMTL formulation:

    min_{W, F: F^T F = I_k}  L(W) + Ω_CMTL0(W, F).    (5)
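The identity in Eq. (2) between the K-means sum-of-squares error and the trace form can be checked numerically. The sketch below (dimensions and the cluster assignment are arbitrary illustrative choices) builds the normalized indicator matrix F and compares the two sides:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 10, 6, 2
W = rng.standard_normal((d, m))          # task weight vectors as columns of W
labels = np.array([0, 0, 0, 1, 1, 1])    # an arbitrary assignment of tasks to clusters

# Left-hand side of Eq. (2): sum of squared distances to each cluster mean.
sse = 0.0
for j in range(k):
    cols = W[:, labels == j]
    sse += ((cols - cols.mean(axis=1, keepdims=True)) ** 2).sum()

# Normalized cluster indicator F with F[i, j] = 1/sqrt(n_j) if task i is in cluster j.
F = np.zeros((m, k))
for j in range(k):
    idx = np.where(labels == j)[0]
    F[idx, j] = 1.0 / np.sqrt(len(idx))

# Right-hand side of Eq. (2): the trace form.
trace_form = np.trace(W.T @ W) - np.trace(F.T @ W.T @ W @ F)
```

The two quantities agree to machine precision, and F satisfies F^T F = I_k as required.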
2.3 Equivalence of ASO and CMTL

In the ASO formulation in Eq. (1), it is clear that the optimal v_i is given by v_i* = Θ w_i. Thus, the
penalty in ASO has the following equivalent form:

    Ω_ASO(W, Θ) = Σ_{i=1}^m ( α ||w_i − Θ^T Θ w_i||_2^2 + β ||w_i||_2^2 )
                = α ( tr(W^T W) − tr(W^T Θ^T Θ W) ) + β tr(W^T W),    (6)

resulting in the following equivalent ASO formulation:

    min_{W, Θ: ΘΘ^T = I_h}  L(W) + Ω_ASO(W, Θ).    (7)

The penalty of the ASO formulation in Eq. (7) looks very similar to the penalty of the CMTL
formulation in Eq. (5); however, the operations involved are fundamentally different. In the CMTL
formulation in Eq. (5), the matrix F operates on the task dimension, as it is derived from the
K-means clustering on the tasks, while in the ASO formulation in Eq. (7), the matrix Θ operates
on the feature dimension, as it aims to identify a shared low-dimensional predictive structure for all
tasks. Although different in the mathematical formulation, we show in the following theorem that
the objectives of CMTL and ASO are equivalent.
Theorem 2.1. The objectives of CMTL in Eq. (5) and ASO in Eq. (7) are equivalent if the cluster
number, k, in K-means equals the size, h, of the shared low-dimensional feature space.

Proof. Denote Q(W) = L(W) + (α + β) tr(W^T W), with α, β > 0. Then, CMTL and ASO solve
the following optimization problems:

    min_{W, F: F^T F = I_k}  Q(W) − α tr(W F F^T W^T),
    min_{W, Θ: ΘΘ^T = I_h}  Q(W) − α tr(W^T Θ^T Θ W),

respectively. Note that in both CMTL and ASO, the first term Q is independent of F or Θ, for a
given W. Thus, the optimal F and Θ for these two optimization problems are given by solving:

    [CMTL]  max_{F: F^T F = I_k}  tr(W F F^T W^T),
    [ASO]   max_{Θ: ΘΘ^T = I_h}  tr(W^T Θ^T Θ W).

Since W W^T and W^T W share the same set of nonzero eigenvalues, by the Ky Fan theorem [21], both problems above achieve exactly the same maximum objective value: ||W^T W||_(k) =
Σ_{i=1}^k λ_i(W^T W), where λ_i(W^T W) denotes the i-th largest eigenvalue of W^T W and ||W^T W||_(k)
is known as the Ky Fan k-norm of the matrix W^T W. Plugging the results back into the original objective,
the optimization problem for both CMTL and ASO becomes min_W Q(W) − α ||W^T W||_(k). This
completes the proof of this theorem.
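The Ky Fan argument can be verified numerically: with F spanning the top-k eigenvectors of W^T W and Θ the top-k eigenvectors of W W^T, both relaxed objectives attain the sum of the k largest eigenvalues. A small sketch with a random W (sizes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 8, 5, 2
W = rng.standard_normal((d, m))

# Ky Fan k-norm of W^T W: sum of its k largest eigenvalues.
evals_m, evecs_m = np.linalg.eigh(W.T @ W)       # eigenvalues in ascending order
ky_fan = evals_m[-k:].sum()

# CMTL side: F = top-k eigenvectors of W^T W (an orthonormal m x k matrix).
F = evecs_m[:, -k:]
cmtl_obj = np.trace(W @ F @ F.T @ W.T)

# ASO side: rows of Theta = top-k eigenvectors of W W^T (Theta Theta^T = I_k).
evals_d, evecs_d = np.linalg.eigh(W @ W.T)
Theta = evecs_d[:, -k:].T
aso_obj = np.trace(W.T @ Theta.T @ Theta @ W)
```

Both maximizers hit the same value because W W^T and W^T W share their nonzero spectrum.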
3 Convex Relaxation of CMTL
The formulation in Eq. (5) is non-convex. A natural approach is to perform a convex relaxation on
CMTL. We first reformulate the penalty in Eq. (5) as follows:

    Ω_CMTL0(W, F) = α tr( W ((1 + η)I − F F^T) W^T ),    (8)

where η is defined as η = β/α > 0. Since F^T F = I_k, the following holds:

    (1 + η)I − F F^T = η(1 + η)(ηI + F F^T)^{-1}.

Thus, we can reformulate Ω_CMTL0 in Eq. (8) in the following equivalent form:

    Ω_CMTL1(W, F) = αη(1 + η) tr( W (ηI + F F^T)^{-1} W^T ),    (9)

resulting in the following equivalent CMTL formulation:

    min_{W, F: F^T F = I_k}  L(W) + Ω_CMTL1(W, F).    (10)

Following [13, 17], we obtain the following convex relaxation of Eq. (10), called cCMTL:

    min_{W, M}  L(W) + Ω_cCMTL(W, M)   s.t.  tr(M) = k,  M ⪯ I,  M ∈ S^m_+,    (11)

where Ω_cCMTL(W, M) is defined as:

    Ω_cCMTL(W, M) = αη(1 + η) tr( W (ηI + M)^{-1} W^T ).    (12)

The optimization problem in Eq. (11) is jointly convex with respect to W and M [9].
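The matrix identity used to pass from Eq. (8) to Eq. (9) holds because F F^T has eigenvalues in {0, 1} whenever F^T F = I_k. A quick numerical check (dimensions and η chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, eta = 6, 2, 0.5

# Any orthonormal m x k matrix F (QR factor of a random matrix).
F, _ = np.linalg.qr(rng.standard_normal((m, k)))
FFt = F @ F.T

# (1 + eta) I - F F^T  versus  eta (1 + eta) (eta I + F F^T)^{-1}.
lhs = (1 + eta) * np.eye(m) - FFt
rhs = eta * (1 + eta) * np.linalg.inv(eta * np.eye(m) + FFt)
```

On the eigenvalue 1 both sides give η, and on the eigenvalue 0 both give 1 + η, so the matrices agree.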
3.1 Equivalence of cASO and cCMTL
A convex relaxation (cASO) of the ASO formulation in Eq. (7) has been proposed in [13]:

    min_{W, S}  L(W) + Ω_cASO(W, S)   s.t.  tr(S) = h,  S ⪯ I,  S ∈ S^d_+,    (13)

where Ω_cASO is defined as:

    Ω_cASO(W, S) = αη(1 + η) tr( W^T (ηI + S)^{-1} W ).    (14)

The cASO formulation in Eq. (13) and the cCMTL formulation in Eq. (11) differ in their regularization components: the respective Hessians of the regularization with respect to W are different.
Similar to Theorem 2.1, our analysis shows that cASO and cCMTL are equivalent.
Theorem 3.1. The objectives of the cCMTL formulation in Eq. (11) and the cASO formulation
in Eq. (13) are equivalent if the cluster number, k, in K-means equals the size, h, of the shared
low-dimensional feature space.

Proof. Define the following two convex functions of W:

    g_cCMTL(W) = min_M  tr( W (ηI + M)^{-1} W^T ),  s.t.  tr(M) = k,  M ⪯ I,  M ∈ S^m_+,    (15)

and

    g_cASO(W) = min_S  tr( W^T (ηI + S)^{-1} W ),  s.t.  tr(S) = h,  S ⪯ I,  S ∈ S^d_+.    (16)

The cCMTL and cASO formulations can be expressed as unconstrained optimization problems w.r.t. W:

    [cCMTL]  min_W  L(W) + c · g_cCMTL(W),
    [cASO]   min_W  L(W) + c · g_cASO(W),

where c = αη(1 + η). Let h = k ≤ min(d, m). Next, we show that for a given W, g_cCMTL(W) =
g_cASO(W) holds.

Let W = Q_1 Σ Q_2^T be the SVD of W, and let M = P_1 Λ_1 P_1^T and S = P_2 Λ_2 P_2^T be the
eigendecompositions of M and S (M and S are symmetric positive semi-definite), respectively, where
Σ = diag{σ_1, σ_2, . . .}, Λ_1 = diag{λ_1^(1), λ_2^(1), . . . , λ_m^(1)}, and Λ_2 = diag{λ_1^(2), λ_2^(2), . . . , λ_d^(2)}.
Let q ≤ k denote the rank of Σ. It follows from the basic properties of the trace that:

    tr( W (ηI + M)^{-1} W^T ) = tr( (ηI + Λ_1)^{-1} P_1^T Q_2 Σ^2 Q_2^T P_1 ).

The problem in Eq. (15) is thus equivalent to:

    min_{P_1, Λ_1}  tr( (ηI + Λ_1)^{-1} P_1^T Q_2 Σ^2 Q_2^T P_1 ),
    s.t.  P_1 P_1^T = I,  P_1^T P_1 = I,  Σ_{i=1}^m λ_i^(1) = k.    (17)

It can be shown that the optimal P_1* is given by P_1* = Q_2, and the optimal Λ_1* is given by solving the
following simple (convex) optimization problem [13]:

    Λ_1* = argmin_{Λ_1}  Σ_{i=1}^q σ_i^2 / (η + λ_i^(1)),
    s.t.  Σ_i λ_i^(1) = k,  0 ≤ λ_i^(1) ≤ 1.    (18)

It follows that g_cCMTL(W) = tr( (ηI + Λ_1*)^{-1} Σ^2 ). Similarly, we can show that
g_cASO(W) = tr( (ηI + Λ_2*)^{-1} Σ^2 ), where

    Λ_2* = argmin_{Λ_2}  Σ_{i=1}^q σ_i^2 / (η + λ_i^(2)),
    s.t.  Σ_i λ_i^(2) = h,  0 ≤ λ_i^(2) ≤ 1.

It is clear that when h = k, Λ_1* = Λ_2* holds. Therefore, we have g_cCMTL(W) = g_cASO(W). This
completes the proof.
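Eq. (18) is a small convex problem over the eigenvalues only. One way to solve it (a sketch using bisection on the KKT multiplier; the paper's own routine from [13, 17] may differ) follows from stationarity of the Lagrangian: at the optimum each eigenvalue takes the form clip(σ_i/√ν − η, 0, 1) for a multiplier ν > 0 chosen so that the trace constraint Σλ_i = k holds.

```python
import numpy as np

def solve_eig_problem(sigma, k, eta, tol=1e-12):
    """Solve  min sum_i sigma_i^2 / (eta + lam_i)
       s.t.  sum_i lam_i = k,  0 <= lam_i <= 1   (Eq. (18)).

    KKT conditions give lam_i = clip(sigma_i / sqrt(nu) - eta, 0, 1) for a
    multiplier nu > 0; we bisect on nu until the trace constraint is met.
    """
    sigma = np.asarray(sigma, dtype=float)

    def lam(nu):
        return np.clip(sigma / np.sqrt(nu) - eta, 0.0, 1.0)

    lo, hi = 1e-12, 1.0
    while lam(hi).sum() > k:          # grow hi until the sum drops below k
        hi *= 2.0
    while hi - lo > tol:              # sum(lam(nu)) is decreasing in nu
        mid = 0.5 * (lo + hi)
        if lam(mid).sum() > k:
            lo = mid
        else:
            hi = mid
    return lam(0.5 * (lo + hi))

sigma = np.array([3.0, 2.0, 1.0, 0.5])     # illustrative singular values of W
k, eta = 2, 0.1
lam_star = solve_eig_problem(sigma, k, eta)
obj_star = (sigma**2 / (eta + lam_star)).sum()
# Any feasible point (e.g. uniform lam_i = k / len(sigma)) cannot do better.
obj_unif = (sigma**2 / (eta + k / len(sigma))).sum()
```

The solution allocates larger eigenvalues of M to the directions with larger singular values of W, which is what makes the relaxation favor a few dominant shared directions.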
Remark 3.2. In the functional of cASO in Eq. (16) the variable to be optimized is S ∈ S^d_+, while
in the functional of cCMTL in Eq. (15) the optimization variable is M ∈ S^m_+. In many practical
MTL problems the data dimensionality d is much larger than the task number m, and in such cases
cCMTL is significantly more efficient in terms of both time and space. The equivalence relationship
established in Theorem 3.1 thus provides an (equivalent) efficient implementation of cASO, especially
for high-dimensional problems.
4 Optimization Algorithms
In this section, we propose to employ three different methods, i.e., Alternating Optimization Method
(altCMTL), Accelerated Projected Gradient Method (apgCMTL), and Direct Gradient Descent
Method (graCMTL), respectively, for solving the convex relaxation in Eq. (11). Note that we focus
on smooth loss functions in this paper.
4.1 Alternating Optimization Method
The Alternating Optimization Method (altCMTL) is similar to the Block Coordinate Descent (BCD)
method [22], in which each variable is optimized alternately with the other variables fixed. The
pseudo-code of altCMTL is provided in the supplemental material. Note that using techniques similar to the ones from [23], we can show that altCMTL finds the globally optimal solution to
Eq. (11). The altCMTL algorithm involves the following two steps in each iteration:

Optimization of W. For a fixed M, the optimal W can be obtained by solving:

    min_W  L(W) + c tr( W (ηI + M)^{-1} W^T ).    (19)

The problem above is smooth and convex, and can be solved using gradient-type methods [22]. In the
special case of a least-squares loss function, the problem in Eq. (19) admits an analytic solution.

Optimization of M. For a fixed W, the optimal M can be obtained by solving:

    min_M  tr( W (ηI + M)^{-1} W^T ),  s.t.  tr(M) = k,  M ⪯ I,  M ∈ S^m_+.    (20)

From Theorem 3.1, the optimal M for Eq. (20) is given by M = Q Λ* Q^T, where Q collects the right
singular vectors of W and Λ* is obtained from Eq. (18). The problem in Eq. (18) can be solved
efficiently using techniques similar to those in [17].
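The two steps above can be sketched end-to-end for a least-squares loss with a design matrix shared across tasks (an assumption made here for brevity; with per-task designs the W-step becomes a larger coupled linear system). The W-step diagonalizes (ηI + M)^{-1} so each rotated task reduces to a ridge solve; the M-step eigendecomposes W^T W and solves Eq. (18) by bisection on the KKT multiplier:

```python
import numpy as np

def m_step(W, k, eta, tol=1e-10):
    """Optimal M for fixed W (Eq. (20)): eigenvectors of W^T W, eigenvalues
    from the small convex problem of Eq. (18) (bisection on the multiplier)."""
    evals, Q = np.linalg.eigh(W.T @ W)
    sigma = np.sqrt(np.clip(evals, 0.0, None))

    def lam(nu):
        return np.clip(sigma / np.sqrt(nu) - eta, 0.0, 1.0)

    lo, hi = 1e-12, 1.0
    while lam(hi).sum() > k:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lam(mid).sum() > k else (lo, mid)
    return Q @ np.diag(lam(hi)) @ Q.T

def w_step(X, Y, M, c, eta):
    """Optimal W for fixed M (Eq. (19)) under L(W) = ||X W - Y||_F^2 / n:
    diagonalize (eta I + M)^{-1} and solve one ridge system per rotated task."""
    n = X.shape[0]
    A = np.linalg.inv(eta * np.eye(M.shape[0]) + M)
    d_vals, V = np.linalg.eigh(A)
    Yt = Y @ V                                # rotate tasks into A's eigenbasis
    Wt = np.empty((X.shape[1], Y.shape[1]))
    G = X.T @ X / n
    for j, dj in enumerate(d_vals):
        Wt[:, j] = np.linalg.solve(G + c * dj * np.eye(G.shape[0]),
                                   X.T @ Yt[:, j] / n)
    return Wt @ V.T                           # rotate back

def objective(X, Y, W, M, c, eta):
    n = X.shape[0]
    A = np.linalg.inv(eta * np.eye(M.shape[0]) + M)
    return ((X @ W - Y) ** 2).sum() / n + c * np.trace(W @ A @ W.T)

rng = np.random.default_rng(3)
n, d, m, k = 50, 8, 6, 2
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, m))
c, eta = 0.1, 0.5

M = (k / m) * np.eye(m)        # feasible start: tr(M) = k, 0 <= M <= I
objs = []
for _ in range(10):
    W = w_step(X, Y, M, c, eta)
    M = m_step(W, k, eta)
    objs.append(objective(X, Y, W, M, c, eta))
```

Because each step is an exact minimization over its block, the objective sequence is monotonically non-increasing, the behavior the convergence argument of altCMTL relies on.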
4.2 Accelerated Projected Gradient Method
The accelerated projected gradient method (APG) has been applied to solve many machine learning
formulations [24]. We apply APG to solve the cCMTL formulation in Eq. (11); the resulting algorithm is
called apgCMTL. The key component of apgCMTL is the computation of the following proximal operator:

    min_{W_Z, M_Z}  ||W_Z − Ŵ_S||_F^2 + ||M_Z − M̂_S||_F^2,
    s.t.  tr(M_Z) = k,  M_Z ⪯ I,  M_Z ∈ S^m_+,    (21)

where the details of the construction of Ŵ_S and M̂_S can be found in [24]. The optimization
problem in Eq. (21) is solved in each iteration of apgCMTL, and hence its computation is critical
for the practical efficiency of apgCMTL. We show below that the optimal W_Z and M_Z of Eq. (21)
can be computed efficiently.

Computation of W_Z. The optimal W_Z for Eq. (21) can be obtained by solving:

    min_{W_Z}  ||W_Z − Ŵ_S||_F^2.    (22)

Clearly the optimal W_Z for Eq. (22) is equal to Ŵ_S.

Computation of M_Z. The optimal M_Z for Eq. (21) can be obtained by solving:

    min_{M_Z}  ||M_Z − M̂_S||_F^2,  s.t.  tr(M_Z) = k,  M_Z ⪯ I,  M_Z ∈ S^m_+,    (23)

where M̂_S is not guaranteed to be positive semidefinite. Our analysis shows that the optimization
problem in Eq. (23) admits an analytical solution via solving a simple convex projection problem.
The main result and the pseudo-code of apgCMTL are provided in the supplemental material.
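The convex projection behind Eq. (23) can be sketched as follows: symmetrize the input, eigendecompose it, and project the eigenvalues onto the capped simplex {x : Σ x_i = k, 0 ≤ x_i ≤ 1}. This is one standard way to realize the analytical solution (the authors' exact routine is in the supplemental material); the eigenvalue projection itself is solved by bisection on a shift t:

```python
import numpy as np

def project_spectral(M_hat, k, tol=1e-12):
    """Euclidean projection onto {M symmetric : tr(M) = k, 0 <= M <= I}.

    The problem separates in the eigenbasis of the symmetrized input, leaving
    a projection of the eigenvalues onto the capped simplex
    {x : sum(x) = k, 0 <= x_i <= 1}, where x_i = clip(d_i - t, 0, 1)."""
    S = 0.5 * (M_hat + M_hat.T)          # input need not be symmetric or PSD
    d, Q = np.linalg.eigh(S)

    def x(t):
        return np.clip(d - t, 0.0, 1.0)

    lo, hi = d.min() - 1.0, d.max()      # x(lo).sum() >= k >= x(hi).sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if x(mid).sum() < k else (mid, hi)
    return Q @ np.diag(x(0.5 * (lo + hi))) @ Q.T

rng = np.random.default_rng(4)
m, k = 6, 2
M_hat = rng.standard_normal((m, m))      # arbitrary, possibly indefinite input
M_proj = project_spectral(M_hat, k)
```

Since the feasible set is a unitarily invariant spectral set, projecting the eigenvalues and keeping the eigenvectors of the symmetrized input yields the nearest feasible matrix.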
4.3 Direct Gradient Descent Method
In the direct gradient descent method (graCMTL), as used in [17], the cCMTL problem in Eq. (11) is reformulated as an optimization problem with one single variable $W$, given by:
$$\min_{W}\ L(W) + c \cdot g_{\mathrm{CMTL}}(W), \qquad (24)$$
where $g_{\mathrm{CMTL}}(W)$ is a functional of $W$ defined in Eq. (15).

Given the intermediate solution $W_{k-1}$ from the $(k-1)$-th iteration of graCMTL, we compute the gradient of $g_{\mathrm{CMTL}}(W)$ and then apply the general gradient descent scheme [25] to obtain $W_k$. Note that at each iterative step in the line search, we need to solve an optimization problem of the form of Eq. (20). The gradient of $g_{\mathrm{CMTL}}(\cdot)$ at $W_{k-1}$ is given by [26, 27]: $\nabla_W g_{\mathrm{CMTL}}(W_{k-1}) = 2 W_{k-1} (\eta I + \hat{M})^{-1}$, where $\hat{M}$ is obtained by solving Eq. (20) at $W = W_{k-1}$. The pseudo-code of graCMTL is provided in the supplemental material.
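The gradient step above can be sketched with $\hat{M}$ supplied by the inner solve of Eq. (20), abstracted here as an argument. The layout convention $W \in \mathbb{R}^{d \times m}$ is an assumption; because the marginal-function results [26, 27] let us treat $\hat{M}$ as fixed at its minimizer, the gradient is that of $\mathrm{tr}(W(\eta I + M)^{-1}W^T)$ with $M$ held constant:

```python
import numpy as np

def grad_g_cmtl(W, M_hat, eta):
    # Gradient of g(W) = tr(W (eta*I + M)^{-1} W^T) with M held fixed
    # at its minimizer M_hat. Since (eta*I + M_hat)^{-1} is symmetric,
    # the gradient is 2 W (eta*I + M_hat)^{-1}.
    m = M_hat.shape[0]
    return 2.0 * W @ np.linalg.inv(eta * np.eye(m) + M_hat)
```

A finite-difference check against the trace objective confirms the formula entrywise.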
Figure 1: The correlation matrices of the ground truth model, and the models learnt from RidgeSTL, RegMTL, and cCMTL. Darker color indicates higher correlation. In the ground truth there are 100 tasks clustered into 5 groups. Each task has 200 dimensions. 95 training samples and 5 testing samples are used in each task. The test errors (in terms of nMSE) for RidgeSTL, RegMTL, and cCMTL are 0.8077, 0.6830, 0.0354, respectively.
5 Experiments
In this section, we empirically evaluate the effectiveness and the efficiency of the proposed algorithms on synthetic and real-world data sets. The normalized mean square error (nMSE) and the
averaged mean square error (aMSE) are used as the performance measure [23]. Note that in this
paper we have not developed new MTL formulations; instead our main focus is on the theoretical
understanding of the inherent relationship between ASO and CMTL. Thus, an extensive comparative study of various MTL algorithms is out of the scope of this paper. As an illustration, in the
following experiments we only compare cCMTL with two baseline techniques: ridge regression
STL (RidgeSTL) and regularized MTL (RegMTL) [28].
Simulation Study We apply the proposed cCMTL formulation in Eq. (11) on a synthetic data
set (with a pre-defined cluster structure). We use 5-fold cross-validation to determine the regularization parameters for all methods. We construct the synthetic data set following a procedure similar
to the one in [17]: the constructed synthetic data set consists of 5 clusters, where each cluster includes 20 (regression) tasks and each task is represented by a weight vector of length d = 300.
Details of the construction are provided in the supplemental material. We apply RidgeSTL, RegMTL, and cCMTL on the constructed synthetic data. The correlation coefficient matrices of the obtained weight vectors are presented in Figure 1. From the result we can observe that (1) cCMTL is able to capture the cluster structure among tasks and achieves a small test error; (2) RegMTL is better than RidgeSTL in terms of test error, although it introduces unnecessary correlation among tasks, possibly due to the assumption that all tasks are related; and (3) in cCMTL we also notice some "noisy" correlation, which may be due to the spectral relaxation.
Table 1: Performance comparison on the School data in terms of nMSE and aMSE. Smaller nMSE and aMSE indicate better performance. All regularization parameters are tuned using 5-fold cross validation. The mean and standard deviation are calculated based on 10 random repetitions.

Measure | Ratio | RidgeSTL        | RegMTL          | cCMTL
nMSE    | 10%   | 1.3954 ± 0.0596 | 1.0988 ± 0.0178 | 1.0850 ± 0.0206
        | 15%   | 1.1370 ± 0.0146 | 1.0636 ± 0.0170 | 0.9708 ± 0.0145
        | 20%   | 1.0290 ± 0.0309 | 1.0349 ± 0.0091 | 0.8864 ± 0.0094
        | 25%   | 0.8649 ± 0.0123 | 1.0139 ± 0.0057 | 0.8243 ± 0.0031
        | 30%   | 0.8367 ± 0.0102 | 1.0042 ± 0.0066 | 0.8006 ± 0.0081
aMSE    | 10%   | 0.3664 ± 0.0160 | 0.2865 ± 0.0054 | 0.2831 ± 0.0050
        | 15%   | 0.2972 ± 0.0034 | 0.2771 ± 0.0045 | 0.2525 ± 0.0048
        | 20%   | 0.2717 ± 0.0083 | 0.2709 ± 0.0027 | 0.2322 ± 0.0022
        | 25%   | 0.2261 ± 0.0033 | 0.2650 ± 0.0027 | 0.2154 ± 0.0020
        | 30%   | 0.2196 ± 0.0035 | 0.2632 ± 0.0028 | 0.2101 ± 0.0016
Effectiveness Comparison. Next, we empirically evaluate the effectiveness of the cCMTL formulation in comparison with RidgeSTL and RegMTL using real-world benchmark datasets including the School data¹ and the Sarcos data². The regularization parameters for all algorithms are determined via 5-fold cross validation; the reported experimental results are averaged over 10 random repetitions. The School data consists of the exam scores of 15362 students from 139 secondary schools, where each student is described by 27 attributes. We vary the training ratio in the set 5 × {1, 2, . . . , 6}% and record the respective performance. The experimental results are presented in Table 1. We can observe that cCMTL performs the best among all settings. Experimental results on the Sarcos dataset are available in the supplemental material.

¹ http://www.cs.ucl.ac.uk/staff/A.Argyriou/code/
² http://gaussianprocess.org/gpml/data/

Figure 2: Sensitivity study of altCMTL, apgCMTL, graCMTL in terms of the computation cost (in seconds) with respect to feature dimensionality (left), sample size (middle), and task number (right).
Efficiency Comparison. We compare the efficiency of the three algorithms, altCMTL, apgCMTL, and graCMTL, for solving the cCMTL formulation in Eq. (11). For the following experiments, we set both regularization parameters of cCMTL to 1 and k = 2. We observe a similar trend in other settings. Specifically, we study how the feature dimensionality, the sample size, and the task number affect the required computation cost (in seconds) for convergence. The experimental setup is as follows: we terminate apgCMTL when the change of objective values in two successive steps is smaller than $10^{-5}$ and record the obtained objective value; we then use that value as the stopping criterion in graCMTL and altCMTL, that is, we stop graCMTL or altCMTL when it attains an objective value equal to or smaller than the one attained by apgCMTL. We use the Yahoo Arts data for the first two experiments. Because the task number in the Yahoo data is very small, we construct a synthetic data set for the third experiment.
In the first experiment, we vary the feature dimensionality in the set [500 : 500 : 2500] with the
sample size fixed at 4000 and the task numbers fixed at 17. The experimental result is presented
in the left plot of Figure 2. In the second experiment, we vary the sample size in the set [3000 :
1000 : 9000] with the dimensionality fixed at 500 and the task number fixed at 17. The experimental
result is presented in the middle plot of Figure 2. From the first two experiments, we observe that
larger feature dimensionality or larger sample size will lead to higher computation cost. In the third
experiment, we vary the task number in the set [10 : 10 : 190] with the feature dimensionality fixed
at 600 and the sample size fixed at 2000. The employed synthetic data set is constructed as follows:
for each task, we generate the entries of the data matrix $X_i$ from $N(0, 1)$ and the entries of the weight vector $w_i$ from $N(0, 1)$; the response vector $y_i$ is computed as $y_i = X_i w_i + \delta$, where $\delta \sim N(0, 0.01)$ represents the noise vector. The experimental result is presented in the right plot of Figure 2. We can observe that altCMTL is more efficient than the other two algorithms.
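The synthetic-data construction above can be sketched directly; the function name is illustrative, and the paper's N(0, 0.01) noise is read here as variance 0.01 (standard deviation 0.1):

```python
import numpy as np

def make_synthetic_tasks(num_tasks, n=2000, d=600, noise_std=0.1, seed=0):
    # Entries of each X_i and w_i are drawn from N(0, 1); the responses
    # are y_i = X_i w_i + noise, with noise drawn from N(0, noise_std^2).
    rng = np.random.RandomState(seed)
    tasks = []
    for _ in range(num_tasks):
        X = rng.randn(n, d)
        w = rng.randn(d)
        y = X @ w + noise_std * rng.randn(n)
        tasks.append((X, y))
    return tasks
```

With the defaults matching the third experiment (d = 600, n = 2000 per task), varying `num_tasks` over [10 : 10 : 190] reproduces the sweep described in the text.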
6 Conclusion
In this paper we establish the equivalence relationship between two multi-task learning techniques:
alternating structure optimization (ASO) and clustered multi-task learning (CMTL). We further establish the equivalence relationship between our proposed convex relaxation of CMTL and an existing convex relaxation of ASO. In addition, we propose three algorithms for solving the convex
CMTL formulation and demonstrate their effectiveness and efficiency on benchmark datasets. The
proposed algorithms involve the computation of SVD. In the case of a very large task number, the
SVD computation will be expensive. We seek to further improve the efficiency of the algorithms by
employing approximation methods. In addition, we plan to apply the proposed algorithms to other
real world applications involving multiple (clustered) tasks.
Acknowledgments
This work was supported in part by NSF IIS-0812551, IIS-0953662, MCB-1026710, CCF-1025177,
and NIH R01 LM010730.
References
[1] T. Evgeniou, M. Pontil, and O. Toubia. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Marketing Science, 26(6):805–818, 2007.
[2] R.K. Ando. Applying alternating structure optimization to word sense disambiguation. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 77–84, 2006.
[3] A. Torralba, K.P. Murphy, and W.T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In Computer Vision and Pattern Recognition, 2004, IEEE Conference on, volume 2, pages 762–769, 2004.
[4] J. Baxter. A model of inductive bias learning. J. Artif. Intell. Res., 12:149–198, 2000.
[5] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853, 2005.
[6] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Lecture Notes in Computer Science, pages 567–580, 2003.
[7] S. Bickel, J. Bogojeska, T. Lengauer, and T. Scheffer. Multi-task learning for HIV therapy screening. In Proceedings of the 25th International Conference on Machine Learning, pages 56–63. ACM, 2008.
[8] T. Evgeniou, C.A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615, 2006.
[9] A. Argyriou, C.A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. Advances in Neural Information Processing Systems, 20:25–32, 2008.
[10] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[11] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on EMNLP, pages 120–128, 2006.
[12] A. Quattoni, M. Collins, and T. Darrell. Learning visual representations using images with captions. In Computer Vision and Pattern Recognition, 2007. IEEE Conference on, pages 1–8. IEEE, 2007.
[13] J. Chen, L. Tang, J. Liu, and J. Ye. A convex formulation for learning shared structures from multiple tasks. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 137–144. ACM, 2009.
[14] S. Thrun and J. O'Sullivan. Clustering learning tasks and the selective cross-task transfer of knowledge. Learning to Learn, pages 181–209, 1998.
[15] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. The Journal of Machine Learning Research, 4:83–99, 2003.
[16] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with Dirichlet process priors. The Journal of Machine Learning Research, 8:35–63, 2007.
[17] L. Jacob, F. Bach, and J.P. Vert. Clustered multi-task learning: A convex formulation. Arxiv preprint arXiv:0809.2085, 2008.
[18] F. Wang, X. Wang, and T. Li. Semi-supervised multi-task learning with task regularizations. In Data Mining, 2009. ICDM '09. Ninth IEEE International Conference on, pages 562–568. IEEE, 2009.
[19] C. Ding and X. He. K-means clustering via principal component analysis. In Proceedings of the Twenty-first International Conference on Machine Learning, page 29. ACM, 2004.
[20] H. Zha, X. He, C. Ding, M. Gu, and H. Simon. Spectral relaxation for k-means clustering. Advances in Neural Information Processing Systems, 2:1057–1064, 2002.
[21] K. Fan. On a theorem of Weyl concerning eigenvalues of linear transformations I. Proceedings of the National Academy of Sciences of the United States of America, 35(11):652, 1949.
[22] J. Nocedal and S.J. Wright. Numerical Optimization. Springer Verlag, 1999.
[23] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[24] Y. Nesterov. Gradient methods for minimizing composite objective function. ReCALL, 76(2007076), 2007.
[25] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[26] J. Gauvin and F. Dubeau. Differential properties of the marginal function in mathematical programming. Optimality and Stability in Mathematical Programming, pages 101–119, 1982.
[27] M. Wu, B. Schölkopf, and G. Bakır. A direct method for building sparse kernel learning algorithms. The Journal of Machine Learning Research, 7:603–624, 2006.
[28] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117. ACM, 2004.
Selecting Receptive Fields in Deep Networks
Adam Coates
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]

Andrew Y. Ng
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Recent deep learning and unsupervised feature learning systems that learn from
unlabeled data have achieved high performance in benchmarks by using extremely
large architectures with many features (hidden units) at each layer. Unfortunately,
for such large architectures the number of parameters can grow quadratically in the
width of the network, thus necessitating hand-coded ?local receptive fields? that
limit the number of connections from lower level features to higher ones (e.g.,
based on spatial locality). In this paper we propose a fast method to choose these
connections that may be incorporated into a wide variety of unsupervised training
methods. Specifically, we choose local receptive fields that group together those
low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive
fields (such as improved scalability, and reduced data requirements) when we do
not know how to specify such receptive fields by hand or where our unsupervised
training algorithm has no obvious generalization to a topographic setting. We
produce results showing how this method allows us to use even simple unsupervised training algorithms to train successful multi-layered networks that achieve
state-of-the-art results on CIFAR and STL datasets: 82.0% and 60.1% accuracy,
respectively.
1 Introduction
Much recent research has focused on training deep, multi-layered networks of feature extractors applied to challenging visual tasks like object recognition. An important practical concern in building
such networks is to specify how the features in each layer connect to the features in the layers beneath. Traditionally, the number of parameters in networks for visual tasks is reduced by restricting
higher level units to receive inputs only from a ?receptive field? of lower-level inputs. For instance,
in the first layer of a network used for object recognition it is common to connect each feature extractor to a small rectangular area within a larger image instead of connecting every feature to the entire
image [14, 15]. This trick dramatically reduces the number of parameters that must be trained and
is a key element of several state-of-the-art systems [4, 19, 6]. In this paper, we propose a method to
automatically choose such receptive fields in situations where we do not know how to specify them
by hand, a situation that, as we will explain, is commonly encountered in deep networks.
There are now many results in the literature indicating that large networks with thousands of unique
feature extractors are top competitors in applications and benchmarks (e.g., [4, 6, 9, 19]). A major obstacle to scaling up these representations further is the blowup in the number of network
parameters: for n input features, a complete representation with n features requires a matrix of
n^2 weights: one weight for every feature and input. This blowup leads to a number of practical
problems: (i) it becomes difficult to represent, and even more difficult to update, the entire weight
matrix during learning, (ii) feature extraction becomes extremely slow, and (iii) many algorithms
and techniques (like whitening and local contrast normalization) are difficult to generalize to large,
1
unstructured input domains. As mentioned above, we can solve this problem by limiting the "fan in" to each feature by connecting each feature extractor to a small receptive field of inputs. In this
work, we will propose a method that chooses these receptive fields automatically during unsupervised training of deep networks. The scheme can operate without prior knowledge of the underlying
data and is applicable to virtually any unsupervised feature learning or pre-training pipeline. In our
experiments, we will show that when this method is combined with a recently proposed learning
system, we can construct highly scalable architectures that achieve accuracy on CIFAR-10 and STL
datasets beyond the best previously published.
It may not be clear yet why it is necessary to have an automated way to choose receptive fields since,
after all, it is already common practice to pick receptive fields simply based on prior knowledge.
However, this type of solution is insufficient for large, deep representations. For instance, in local
receptive field architectures for image data, we typically train a bank of linear filters that apply only
to a small image patch. These filters are then convolved with the input image to yield the first
layer of features. As an example, if we train 100 5-by-5 pixel filters and convolve them with a
32-by-32 pixel input, then we will get a 28-by-28-by-100 array of features. Each 2D grid of 28-by-28 feature responses for a single filter is frequently called a "map" [14, 4]. Though there are still
different maps are related. Thus when we train a second layer of features we must typically resort to
connecting each feature to every input map or to a random subset of maps [12, 4] (though we may
still take advantage of the remaining spatial organization within each map). At even higher layers of
deep networks, this problem becomes extreme: our array of responses will have very small spatial
resolution (e.g., 1-by-1) yet will have a large number of maps and thus we can no longer make use of
spatial receptive fields. This problem is exacerbated further when we try to use very large numbers
of maps which are often necessary to achieve top performance [4, 5].
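The map arithmetic above is just the "valid" convolution rule: an H-by-W input and an F-by-F filter give (H − F + 1)-by-(W − F + 1) responses per filter. A tiny helper (hypothetical, for illustration only) reproduces the 28-by-28-by-100 example:

```python
def conv_output_shape(in_h, in_w, f_h, f_w, n_filters):
    # 'Valid' convolution: each filter produces one response at every
    # position where it fits entirely inside the image.
    return (in_h - f_h + 1, in_w - f_w + 1, n_filters)

# 100 filters of size 5x5 convolved with a 32x32 input yield a
# 28-by-28-by-100 array of feature responses.
shape = conv_output_shape(32, 32, 5, 5, 100)
```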
In this work we propose a way to address the problem of choosing receptive fields that is not only
a flexible addition to unsupervised learning and pre-training pipelines, but that can scale up to the
extremely large networks used in state-of-the-art systems. In our method we select local receptive
fields that group together (pre-trained) lower-level features according to a pairwise similarity metric
between features. Each receptive field is constructed using a greedy selection scheme so that it
contains features that are similar according to the similarity metric. Depending on the choice of
metric, we can cause our system to choose receptive fields that are similar to those that might be
learned implicitly by popular learning algorithms like ICA [11]. Given the learned receptive fields
(groups of features) we can subsequently apply an unsupervised learning method independently
over each receptive field. Thus, this method frees us to use any unsupervised learning algorithm
to train the weights of the next layer. Using our method in conjunction with the pipeline proposed
by [6], we demonstrate the ability to train multi-layered networks using only vector quantization as
our unsupervised learning module. All of our results are achieved without supervised fine-tuning
(i.e., backpropagation), and thus rely heavily on the success of the unsupervised learning procedure.
Nevertheless, we attain the best known performances on the CIFAR-10 and STL datasets.
We will now discuss some additional work related to our approach in Section 2. Details of our
method are given in Section 3 followed by our experimental results in Section 4.
2 Related Work
While much work has focused on different representations for deep networks, an orthogonal line of
work has investigated the effect of network structure on performance of these systems. Much of this
line of inquiry has sought to identify the best choices of network parameters such as size, activation
function, pooling method and so on [12, 5, 3, 16, 19]. Through these investigations a handful of
key factors have been identified that strongly influence performance (such as the type of pooling,
activation function, and number of features). These works, however, do not address the finer-grained
questions of how to choose the internal structure of deep networks directly.
Other authors have tackled the problem of architecture selection more generally. One approach is
to search for the best architecture. For instance, Saxe et al. [18] propose using randomly initialized
networks (forgoing the expense of training) to search for a high-performing structure. Pinto et
al. [17], on the other hand, use a screening procedure to choose from amongst large numbers of
randomly composed networks, collecting the best performing networks.
2
More powerful modeling and optimization techniques have also been used for learning the structure of deep networks in-situ. For instance, Adams et al. [1] use a non-parametric Bayesian prior
to jointly infer the depth and number of hidden units at each layer of a deep belief network during
training. Zhang and Chan [21] use an L1 penalization scheme to zero out many of the connections in
an otherwise bipartite structure. Unfortunately, these methods require optimizations that are as complex or expensive as the algorithms they augment, thus making it difficult to achieve computational
gains from any architectural knowledge discovered by these systems.
In this work, the receptive fields will be built by analyzing the relationships between feature responses rather than relying on prior knowledge of their organization. A popular alternative solution
is to impose topographic organization on the feature outputs during training. In general, these learning algorithms train a set of features (usually linear filters) such that features nearby in a pre-specified
topography share certain characteristics. The Topographic ICA algorithm [10], for instance, uses a
probabilistic model that implies that nearby features in the topography have correlated variances
(i.e., energies). This statistical measure of similarity is motivated by empirical observations of
neurons and has been used in other analytical models [20]. Similar methods can be obtained by
imposing group sparsity constraints so that features within a group tend to be on or off at the same
time [7, 8]. These methods have many advantages but require us to specify a topography first, then
solve a large-scale optimization problem in order to organize our features according to the given
topographic layout. This will typically involve many epochs of training and repeated feature evaluations in order to succeed. In this work, we perform this procedure in reverse: our features are
pre-trained using whatever method we like, then we will extract a useful grouping of the features
post-hoc. This approach has the advantage that it can be scaled to large distributed clusters and is
very generic, allowing us to potentially use different types of grouping criteria and learning strategies in the future with few changes. In that respect, part of the novelty in our approach is to convert
existing notions of topography and statistical dependence in deep networks into a highly scalable
"wrapper method" that can be re-used with other algorithms.
3 Algorithm Details
In this section we will describe our approach to selecting the connections between high-level features and their lower-level inputs (i.e., how to "learn" the receptive field structure of the high-level features) from an arbitrary set of data based on a particular pairwise similarity metric: square correlation of feature responses.¹ We will then explain how our method integrates with a typical learning pipeline and, in particular, how to couple our algorithm with the feature learning system proposed in [6], which we adopt since it has been shown previously to perform well on image recognition tasks.
In what follows, we assume that we are given a dataset $X$ of feature vectors $x^{(i)}$, $i \in \{1, \ldots, m\}$, with elements $x_j^{(i)}$. These vectors may be raw features (e.g., pixel values) but will usually be features generated by lower layers of a deep network.
3.1 Similarity of Features
In order to group features together, we must first define a similarity metric between features. Ideally,
we should group together features that are closely related (e.g., because they respond to similar
patterns or tend to appear together). By putting such features in the same receptive field, we allow
their relationship to be modeled more finely by higher level learning algorithms. Meanwhile, it also
makes sense to model seemingly independent subsets of features separately, and thus we would like
such features to end up in different receptive fields.
A number of criteria might be used to quantify this type of relationship between features. One
popular choice is ?square correlation? of feature responses, which partly underpins the Topographic
ICA [10] algorithm. The idea is that if our dataset X consists of linearly uncorrelated features (as
can be obtained by applying a whitening procedure), then a measure of the higher-order dependence
between two features can be obtained by looking at the correlation
of their energies (squared responses). In particular, if we have $E[x] = 0$ and $E[xx^T] = I$, then we will define the similarity
¹ Though we use this metric throughout, and propose some extensions, it can be replaced by many other choices such as the mutual information between two features.
between features xj and xk as the correlation between the squared responses:

    S[x_j, x_k] = corr(x_j^2, x_k^2) = (E[x_j^2 x_k^2] - 1) / sqrt(E[x_j^4 - 1] E[x_k^4 - 1]).

This metric is easy to compute by first whitening our input dataset with ZCA2 whitening [2], then
computing the pairwise similarities between all of the features:

    S_{j,k} ≡ S_X[x_j, x_k] ≡ Σ_i (x_j^{(i)2} x_k^{(i)2} - 1) / sqrt( Σ_i (x_j^{(i)4} - 1) · Σ_i (x_k^{(i)4} - 1) ).    (1)
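Equation 1 can be implemented directly. Below is a minimal NumPy sketch (our own illustration, not the authors' code); the function names and the small `eps` regularizer added to the eigenvalues are assumptions of this sketch.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (examples x features) so E[x] = 0 and E[xx^T] ~ I."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / X.shape[0]
    d, V = np.linalg.eigh(cov)
    P = V @ np.diag(1.0 / np.sqrt(d + eps)) @ V.T   # P = V D^{-1/2} V^T
    return X @ P

def square_correlation(X):
    """Pairwise similarity matrix S from Eq. (1): correlation of squared responses."""
    Xw = zca_whiten(X)
    E2 = Xw ** 2
    num = E2.T @ E2 / Xw.shape[0] - 1.0             # E[x_j^2 x_k^2] - 1
    var = (Xw ** 4).mean(axis=0) - 1.0              # E[x_j^4] - 1
    return num / np.sqrt(np.outer(var, var))
```

For whitened data the resulting matrix is symmetric with a diagonal of exactly 1.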
This computation is completely practical for fewer than 5000 input features. For fewer than 10000
features it is feasible but somewhat arduous: we must not only hold a 10000x10000 matrix in memory but we must also whiten our 10000-feature dataset, requiring a singular value or eigenvalue
decomposition. We will explain how this expense can be avoided in Section 3.3, after we describe
our receptive field learning procedure.
3.2 Selecting Local Receptive Fields
We now assume that we have available to us the matrix of pairwise similarities between features Sj,k
computed as above. Our goal is to construct "receptive fields": sets of features Rn, n = 1, . . . , N,
whose responses will become the inputs to one or more higher-level features. We would like for
each Rn to contain pairs of features with large values of Sj,k. We might achieve this using various
agglomerative or spectral clustering methods, but we have found that a simple greedy procedure
works well: we choose one feature as a seed, and then group it with its nearest neighbors according
to the similarities Sj,k. In detail, we first select N rows, j1, . . . , jN, of the matrix S at random
(corresponding to a random choice of features xjn to be the seed of each group). We then construct
a receptive field Rn that contains the features xk corresponding to the top T values of Sjn,k. We
typically use T = 200, though our results are not too sensitive to this parameter. Upon completion,
we have N (possibly overlapping) receptive fields Rn that can be used during training of the next
layer of features.
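The greedy selection described above can be sketched in a few lines (an illustrative re-implementation, not the authors' code; the argument names are ours):

```python
import numpy as np

def select_receptive_fields(S, N=32, T=200, seed=0):
    """Pick N random seed features; group each with its T most similar features.

    S : (F, F) pairwise similarity matrix (e.g., square correlation).
    Returns a list of N index arrays of length T (fields may overlap).
    """
    rng = np.random.default_rng(seed)
    F = S.shape[0]
    seeds = rng.choice(F, size=N, replace=False)
    fields = []
    for j in seeds:
        # top-T entries of row j: the seed's nearest neighbors under S
        top = np.argsort(S[j])[::-1][:T]
        fields.append(top)
    return fields
```

Because seeds are drawn at random and fields may overlap, different runs give different but similarly useful groupings.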
3.3 Approximate Similarity
Computing the similarity matrix Sj,k using square correlation is practical for fairly large numbers
of features using the obvious procedure given above. However, if we want to learn receptive fields
over huge numbers of features (as arise, for instance, when we use hundreds or thousands of maps),
we may often be unable to compute S directly. For instance, as explained above, if we use square
correlation as our similarity criterion then we must perform whitening over a large number of features.
Note, however, that the greedy grouping scheme we use requires only N rows of the matrix. Thus,
provided we can compute Sj,k for a single pair of features, we can avoid storing the entire matrix
S. To avoid performing the whitening step for all of the input features, we can instead perform
pair-wise whitening between features. Specifically, to compute the squared correlation of xj and
xk , we whiten the jth and kth columns of X together (independently of all other columns), then
compute the square correlation between the whitened values x̂j and x̂k. Though this procedure is
not equivalent to performing full whitening, it appears to yield effective estimates for the squared
correlation between two features in practice. For instance, for a given "seed", the receptive field
chosen using this approximation typically overlaps with the "true" receptive field (computed with
full whitening) by 70% or more. More importantly, our final results (Section 4) are unchanged
compared to the exact procedure.
Compared to the "brute force" computation of the similarity matrix, the approximation described
above is very fast and easy to distribute across a cluster of machines. Specifically, the 2x2 ZCA
whitening transform for a pair of features can be computed analytically, and thus we can express
the pair-wise square correlations analytically as a function of the original inputs without having to

2 If E[xx^T] = Σ = V D V^T, ZCA whitening uses the transform P = V D^{-1/2} V^T to compute the
whitened vector x̂ as x̂ = P x.
numerically perform the whitening on all pairs of features. If we assume that all of the input features
of x(i) are zero-mean and unit variance, then we have:

    x̂_j^{(i)} = (1/2) ( (β_jk + α_jk) x_j^{(i)} + (β_jk - α_jk) x_k^{(i)} )
    x̂_k^{(i)} = (1/2) ( (β_jk - α_jk) x_j^{(i)} + (β_jk + α_jk) x_k^{(i)} )

where α_jk = (1 - ρ_jk)^{-1/2}, β_jk = (1 + ρ_jk)^{-1/2}, and ρ_jk is the correlation between x_j and x_k.
Substituting x̂(i) for x(i) in Equation 1 and expanding yields an expression for the similarity Sj,k
in terms of the pair-wise moments of each feature (up to fourth order). We can typically implement
these computations in a single pass over the dataset that accumulates the needed statistics and then
selects the receptive fields based on the results. Many alternative methods (e.g., Topographic ICA)
would require some form of distributed optimization algorithm to achieve a similar result, which
requires many feed-forward and feed-back passes over the dataset. In contrast, the above method is
typically less expensive than a single feed-forward pass (to compute the feature values x(i) ) and is
thus very fast compared to other conceivable solutions.
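Under the stated assumptions (zero-mean, unit-variance features), the pair-wise approximation can be sketched as follows. This is our own illustration: the coefficient names mirror the α/β definitions above, and the `eps` guard (to avoid division by zero for perfectly correlated pairs) is an addition of this sketch.

```python
import numpy as np

def pairwise_similarity(xj, xk, eps=1e-8):
    """Approximate square correlation S_{j,k} via 2x2 pair-wise ZCA whitening.

    Assumes xj, xk are zero-mean, unit-variance 1-D arrays of feature responses.
    """
    rho = np.mean(xj * xk)                    # correlation of the pair
    a = (1.0 - rho + eps) ** -0.5             # alpha_jk
    b = (1.0 + rho + eps) ** -0.5             # beta_jk
    wj = 0.5 * ((b + a) * xj + (b - a) * xk)  # pair-wise whitened responses
    wk = 0.5 * ((b - a) * xj + (b + a) * xk)
    num = np.mean(wj ** 2 * wk ** 2) - 1.0
    den = np.sqrt((np.mean(wj ** 4) - 1.0) * (np.mean(wk ** 4) - 1.0))
    return num / den
```

Features that share a common "energy" (amplitude) score high under this measure even when they are linearly uncorrelated, which is exactly the dependence the method is after.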
3.4 Learning Architecture
We have adopted the architecture of [6], which has previously been applied with success to image
recognition problems. In this section we will briefly review this system as it is used in conjunction
with our receptive field learning approach, but it should be noted that our basic method is equally
applicable to many other choices of processing pipeline and unsupervised learning method.
The architecture proposed by [6] works by constructing a feature representation of a small image
patch (say, a 6-by-6 pixel region) and then extracting these features from many overlapping patches
within a larger image (much like a convolutional neural net).
Let X ∈ R^{m×108} be a dataset composed of a large number of 3-channel (RGB), 6-by-6 pixel image
patches extracted from random locations in unlabeled training images and let x(i) ∈ R^{108} be the
vector of RGB pixel values representing the ith patch. Then the system in [6] applies the following
procedure to learn a new representation of an image patch:
procedure to learn a new representation of an image patch:
1. Normalize each example x(i) by subtracting out the mean and dividing by the norm. Apply
a ZCA whitening transform to x(i) to yield x̂(i).
2. Apply an unsupervised learning algorithm (e.g., K-means or sparse coding) to obtain a
(normalized) set of linear filters (a "dictionary"), D.
3. Define a mapping from the whitened input vectors x̂(i) to output features given the dictionary D. We use a soft threshold function that computes each feature f_j^{(i)} as
f_j^{(i)} = max{0, D(j)^T x̂(i) - t} for a fixed threshold t.
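The three steps can be sketched end-to-end in NumPy. This is an illustrative version, not the code of [6]: the spherical K-means loop, the iteration count, and the threshold value t = 0.25 are simplifications of our own.

```python
import numpy as np

def normalize_patches(X, eps=1e-8):
    """Step 1a: subtract each patch's mean and divide by its norm."""
    X = X - X.mean(axis=1, keepdims=True)
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

def zca(X, eps=1e-5):
    """Step 1b: ZCA whitening of the (examples x features) matrix."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / X.shape[0]
    d, V = np.linalg.eigh(cov)
    return X @ V @ np.diag(1.0 / np.sqrt(d + eps)) @ V.T

def kmeans_dictionary(X, K, iters=10, seed=0):
    """Step 2: learn K normalized filters with a simple spherical K-means."""
    rng = np.random.default_rng(seed)
    D = X[rng.choice(X.shape[0], K, replace=False)]
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)        # nearest centroid by dot product
        for k in range(K):
            members = X[assign == k]
            if len(members):
                D[k] = members.mean(axis=0)
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
    return D

def encode(X, D, t=0.25):
    """Step 3: soft-threshold features f_j = max(0, D_j^T x - t)."""
    return np.maximum(0.0, X @ D.T - t)
```

Swapping in sparse coding for `kmeans_dictionary` leaves the rest of the pipeline unchanged.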
The computed feature values for each example, f (i) , become the new representation for the patch
x(i) . We can now apply the learned feature extractor produced by this method to a larger image, say,
a 32-by-32 pixel RGB color image. This large image can be represented generally as a long vector
with 32 × 32 × 3 = 3072 elements. To compute its feature representation we simply extract features
from every overlapping patch within the image (using a stride of 1 pixel between patches) and then
concatenate all of the features into a single vector, yielding a (usually large) new representation of
the entire image.
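As a quick sanity check on the geometry (a hypothetical helper, not from the paper): a 32-by-32 image with 6-by-6 patches at stride 1 yields 27 × 27 = 729 patch locations, each a 108-dimensional vector.

```python
import numpy as np

def extract_patches(img, w=6, stride=1):
    """All overlapping w-by-w patches of an (H, W, C) image, flattened per patch."""
    H, W, C = img.shape
    out = []
    for r in range(0, H - w + 1, stride):
        for c in range(0, W - w + 1, stride):
            out.append(img[r:r + w, c:c + w, :].reshape(-1))
    return np.array(out)
```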
Clearly we can modify this procedure to use choices of receptive fields other than 6-by-6 patches
of images. Concretely, given the 32-by-32 pixel image, we could break it up into arbitrary choices
of overlapping sets Rn where each Rn includes a subset of the RGB values of the whole image.
Then we apply the procedure outlined above to each set of features Rn independently, followed by
concatenating all of the extracted features. In general, if X is now any training set (not necessarily
image patches), we can define XRn as the training set X reduced to include only the features in one
receptive field, Rn (that is, we simply discard all of the columns of X that do not correspond to
features in Rn ). We may then apply the feature learning and extraction methods above to each XRn
separately, just as we would for the hand-chosen patch receptive fields used in previous work.
3.5 Network Details
The above components, conceptually, allow us to lump together arbitrary types and quantities of data
into our unlabeled training set and then automatically partition them into receptive fields in order to
learn higher-level features. The automated receptive field selection can choose receptive fields that
span multiple feature maps, but the receptive fields will often span only small spatial areas (since
features extracted from locations far apart tend to appear nearly independent). Thus, we will also
exploit spatial knowledge to enable us to use large numbers of maps rather than trying to treat the
entire input as unstructured data. Note that this is mainly to reduce the expense of feature extraction
and to allow us to use spatial pooling (which introduces some invariance between layers of features);
the receptive field selection method itself can be applied to hundreds of thousands of inputs. We now
detail the network structure used for our experiments that incorporates this structure.
First, there is little point in applying the receptive field learning method to the raw pixel layer. Thus,
we use 6-by-6 pixel receptive fields with a stride (step) of 1 pixel between them for the first layer
of features. If the first layer contains K1 maps (i.e., K1 filters), then a 32-by-32 pixel color image
takes on a 27-by-27-by-K1 representation after the first layer of (convolutional) feature extraction.
Second, depending on the unsupervised learning module, it can be difficult to learn features that are
invariant to image transformations like translation. This is handled traditionally by incorporating
"pooling" layers [3, 14]. Here we use average pooling over adjacent, disjoint 3-by-3 spatial blocks.
Thus, applied to the 27-by-27-by-K1 representation from layer 1, this yields a 9-by-9-by-K1 pooled
representation.
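The 3-by-3 disjoint average pooling can be done with a single reshape (a sketch with names of our own choosing):

```python
import numpy as np

def average_pool(maps, p=3):
    """Average-pool (H, W, K) responses over disjoint p-by-p spatial blocks."""
    H, W, K = maps.shape
    h, w = H // p, W // p
    # Block the spatial axes into (h, p, w, p) and average within each block.
    return maps[:h * p, :w * p, :].reshape(h, p, w, p, K).mean(axis=(1, 3))
```

Applied to a 27-by-27-by-K1 stack this yields the 9-by-9-by-K1 pooled representation used above.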
After extracting the 9-by-9-by-K1 pooled representation from the first two layers, we apply our receptive field selection method. We could certainly apply the algorithm to the entire high-dimensional
representation. As explained above, it is useful to retain spatial structure so that we can perform
spatial pooling and convolutional feature extraction. Thus, rather than applying our algorithm to the
entire input, we apply the receptive field learning to 2-by-2 spatial regions within the 9-by-9-by-K1
pooled representation. Thus the receptive field learning algorithm must find receptive fields to cover
2 × 2 × K1 inputs. The next layer of feature learning then operates on each receptive field within
the 2-by-2 spatial regions separately. This is similar to the structure commonly employed by prior
work [4, 12], but here we are able to choose receptive fields that span several feature maps in a
deliberate way while also exploiting knowledge of the spatial structure.
In our experiments we will benchmark our system on image recognition datasets using K1 = 1600
first layer maps and K2 = 3200 second layer maps learned from N = 32 receptive fields. When we
use three layers, we apply an additional 2-by-2 average pooling stage to the layer 2 outputs (with
stride of 1) and then train K3 = 3200 third layer maps (again with N = 32 receptive fields). To
construct a final feature representation for classification, the outputs of the first and second layers of
trained features are average-pooled over quadrants as is done by [6]. Thus, our first layer of features
results in 1600 × 4 = 6400 values in the final feature vector, and our second layer of features results
in 3200 × 4 = 12800 values. When using a third layer, we use average pooling over the entire image
to yield 3200 additional feature values. The features for all layers are then concatenated into a single
long vector and used to train a linear classifier (L2-SVM).
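The bookkeeping for the final classifier input can be verified directly (sizes taken from the text above):

```python
# Final feature vector assembled for the linear SVM
K1, K2, K3 = 1600, 3200, 3200
layer1 = K1 * 4   # quadrant pooling over first-layer maps  -> 6400
layer2 = K2 * 4   # quadrant pooling over second-layer maps -> 12800
layer3 = K3       # whole-image pooling over third-layer maps -> 3200
total = layer1 + layer2 + layer3
print(total)  # 22400
```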
4 Experimental Results
We have applied our method to several benchmark visual recognition problems: the CIFAR-10 and
STL datasets. In addition to training on the full CIFAR training set, we also provide results of our
method when we use only 400 training examples per class to compare with other single-layer results
in [6].
The CIFAR-10 examples are all 32-by-32 pixel color images. For the STL dataset, we downsample
the (96 pixel) images to 32 pixels. We use the pipeline detailed in Section 3.4, with vector quantization (VQ) as the unsupervised learning module to train up to 3 layers. For each set of experiments
we provide test results for 1 to 3 layers of features, where the receptive fields for the 2nd and 3rd layers of features are learned using the method of Section 3.2 and square-correlation for the similarity
metric.
For comparison, we also provide test results in each case using several alternative receptive field
choices. In particular, we have also tested architectures where we use a single receptive field (N = 1)
where R1 contains all of the inputs, and random receptive fields (N = 32) where Rn is filled according to the same algorithm as in Section 3.2, but where the matrix S is set to random values. The first
method corresponds to the "completely connected", brute-force case described in the introduction,
while the second is the "randomly connected" case. Note that in these cases we use the same spatial
organization outlined in Section 3.5. For instance, the completely-connected layers are connected to
all the maps within a 2-by-2 spatial window. Finally, we will also provide test results using a larger
1st layer representation (K1 = 4800 maps) to verify that the performance gains we achieve are not
merely the result of passing more projections of the data to the supervised classification stage.
4.1 CIFAR-10

4.1.1 Learned 2nd-layer Receptive Fields and Features
Before we look at classification results, we first inspect the learned features and their receptive fields
from the second layer (i.e., the features that take the pooled first-layer responses as their input).
Figure 1 shows two typical examples of receptive fields chosen by our method when using square correlation as the similarity metric. In both of the examples, the receptive field incorporates filters
with similar orientation tuning but varying phase, frequency and, sometimes, varying color. The
position of the filters within each window indicates its location in the 2-by-2 region considered by
the learning algorithm. As we might expect, the filters in each group are visibly similar to those
placed together by topographic methods like TICA that use related criteria.
Figure 1: Two examples of receptive fields chosen from 2-by-2-by-1600 image representations.
Each box shows the low-level filter and its position (ignoring pooling) in the 2-by-2 area considered
by the algorithm. Only the most strongly dependent features from the T = 200 total features are
shown. (Best viewed in color.)
Figure 2: Most inhibitory (left) and excitatory (right) filters for two 2nd-layer features. (Best viewed
in color.)
We also visualize some of the higher-level features constructed by the vector quantization algorithm
when applied to these two receptive fields. The filters obtained from VQ assign weights to each of
the lower level features in the receptive field. Those with a high positive weight are "excitatory"
inputs (tending to lead to a high response when these input features are active) and those with a
large negative weight are "inhibitory" inputs (tending to result in low filter responses). The 5 most
inhibitory and excitatory inputs for two learned features are shown in Figure 2 (one from each
receptive field in Figure 1). For instance, the two most excitatory filters of feature (a) tend to select
for long, narrow vertical bars, inhibiting responses of wide bars.
4.1.2 Classification Results
We have tested our method on the task of image recognition using the CIFAR training and testing
labels. Table 1 details our results using the full CIFAR dataset with various settings. We first note
the comparison of our 2nd layer results with the alternative of a single large 1st layer using an
equivalent number of maps (4800) and see that, indeed, our 2nd layer created with learned receptive
fields performs better (81.2% vs. 80.6%). We also see that the random and single receptive field
choices work poorly, barely matching the smaller single-layer network. This appears to confirm our
belief that grouping together similar features is necessary to allow our unsupervised learning module
(VQ) to identify useful higher-level structure in the data. Finally, with a third layer of features, we
achieve the best result to date on the full CIFAR dataset with 82.0% accuracy.
Table 1: Results on CIFAR-10 (full)

    Architecture            Accuracy (%)
    1 Layer                 78.3%
    1 Layer (4800 maps)     80.6%
    2 Layers (Single RF)    77.4%
    2 Layers (Random RF)    77.6%
    2 Layers (Learned RF)   81.2%
    3 Layers (Learned RF)   82.0%
    VQ (6000 maps) [6]      81.5%
    Conv. DBN [13]          78.9%
    Deep NN [4]             80.49%
Table 2: Results on CIFAR-10 (400 ex. per class)

    Architecture                 Accuracy (%)
    1 Layer                      64.6% (±0.8%)
    1 Layer (4800 maps)          63.7% (±0.7%)
    2 Layers (Single RF)         65.8% (±0.3%)
    2 Layers (Random RF)         65.8% (±0.9%)
    2 Layers (Learned RF)        69.2% (±0.7%)
    3 Layers (Learned RF)        70.7% (±0.7%)
    Sparse coding (1 layer) [6]  66.4% (±0.8%)
    VQ (1 layer) [6]             64.4% (±1.0%)
It is difficult to assess the strength of feature learning methods on the full CIFAR dataset because
the performance may be attributed to the success of the supervised SVM training and not the unsupervised feature training. For this reason we have also performed classification using 400 labeled
examples per class.3 Our results for this scenario are in Table 2. There we see that our 2-layer
architecture significantly outperforms our 1-layer system as well as the two 1-layer architectures developed in [6]. As with the full CIFAR dataset, we note that it was not possible to achieve equivalent
performance by merely expanding the first layer or by using either of the alternative receptive field
structures (which, again, make minimal gains over a single layer).
4.2 STL-10
Finally, we also tested our algorithm on the STL-10 dataset [5]. Compared to CIFAR, STL
provides many fewer labeled training examples (allowing 100 labeled instances per class for
each training fold). Instead of relying on labeled data, one tries to learn from the provided
unlabeled dataset, which contains images from a distribution that is similar to the labeled set
but broader. We used the same architecture for this dataset as for CIFAR, but rather than train our
features each time on the labeled training fold (which is too small), we use 20000 examples taken
from the unlabeled dataset. Our results are reported in Table 3.

Table 3: Classification Results on STL-10

    Architecture                 Accuracy (%)
    1 Layer                      54.5% (±0.8%)
    1 Layer (4800 maps)          53.8% (±1.6%)
    2 Layers (Single RF)         55.0% (±0.8%)
    2 Layers (Random RF)         54.4% (±1.2%)
    2 Layers (Learned RF)        58.9% (±1.1%)
    3 Layers (Learned RF)        60.1% (±1.0%)
    Sparse coding (1 layer) [6]  59.0% (±0.8%)
    VQ (1 layer) [6]             54.9% (±0.4%)
Here we see increasing performance with higher levels of features once more, achieving state-of-the-art performance with our 3-layered model. This is especially notable since the higher level features
have been trained purely from unlabeled data. We note, one more time, that none of the alternative
architectures (which roughly represent common practice for training deep networks) makes significant gains over the single layer system.
5 Conclusions
We have proposed a method for selecting local receptive fields in deep networks. Inspired by
the grouping behavior of topographic learning methods, our algorithm selects qualitatively similar
groups of features directly using arbitrary choices of similarity metric, while also being compatible
with any unsupervised learning algorithm we wish to use. For one metric in particular (square correlation) we have employed our algorithm to choose receptive fields within multi-layered networks
that lead to successful image representations for classification while still using only vector quantization for unsupervised learning, a relatively simple but highly scalable learning module. Among
our results, we have achieved the best published accuracy on CIFAR-10 and STL datasets. These
performances are strengthened by the fact that they did not require the use of any supervised backpropagation algorithms. We expect that the method proposed here is a useful new tool for managing
extremely large, higher-level feature representations where more traditional spatio-temporal local
receptive fields are unhelpful or impossible to employ successfully.
3 Our networks are still trained unsupervised from the entire training set.
References
[1] R. Adams, H. Wallach, and Z. Ghahramani. Learning the structure of deep sparse graphical
models. In International Conference on AI and Statistics, 2010.
[2] A. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters.
Vision Research, 37, 1997.
[3] Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning mid-level features for recognition. In
Computer Vision and Pattern Recognition, 2010.
[4] D. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance
neural networks for visual object classification. Pre-print, 2011.
http://arxiv.org/abs/1102.0183.
[5] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature
learning. In International Conference on AI and Statistics, 2011.
[6] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and
vector quantization. In International Conference on Machine Learning, 2011.
[7] P. Garrigues and B. Olshausen. Group sparse coding with a laplacian scale mixture prior. In
Advances in Neural Information Processing Systems, 2010.
[8] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network with
local receptive fields, 2010.
[9] F. Huang and Y. LeCun. Large-scale learning with SVM and convolutional nets for generic
object categorization. In Computer Vision and Pattern Recognition, 2006.
[10] A. Hyvarinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural
Computation, 13(7):1527-1558, 2001.
[11] A. Hyvarinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
[12] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, 2009.
[13] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10. Unpublished manuscript,
2010.
[14] Y. LeCun, F. Huang, and L. Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004.
[15] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for
scalable unsupervised learning of hierarchical representations. In International Conference on
Machine Learning, 2009.
[16] V. Nair and G. E. Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In
International Conference on Machine Learning, 2010.
[17] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach
to discovering good forms of biologically inspired visual representation. PLoS Comput Biol,
2009.
[18] A. Saxe, P. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and
unsupervised feature learning. In International Conference on Machine Learning, 2011.
[19] D. Scherer, A. Müller, and S. Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks,
2010.
[20] E. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically derived normalization model. Advances in Neural Information Processing Systems,
1998.
[21] K. Zhang and L. Chan. ICA with sparse connections. Intelligent Data Engineering and Automated Learning, 2006.
Maximum Margin Multi-Instance Learning
Hua Wang
Computer Science and Engineering
University of Texas at Arlington
[email protected]
Heng Huang
Computer Science and Engineering
University of Texas at Arlington
[email protected]
Farhad Kamangar
Computer Science and Engineering
University of Texas at Arlington
[email protected]
Feiping Nie
Computer Science and Engineering
University of Texas at Arlington
[email protected]
Chris Ding
Computer Science and Engineering
University of Texas at Arlington
[email protected]
Abstract
Multi-instance learning (MIL) considers input as bags of instances, in which labels are assigned to the bags. MIL is useful in many real-world applications. For
example, in image categorization semantic meanings (labels) of an image mostly
arise from its regions (instances) instead of the entire image (bag). Existing MIL
methods typically build their models using the Bag-to-Bag (B2B) distance, which
are often computationally expensive and may not truly reflect the semantic similarities. To tackle this, in this paper we approach MIL problems from a new
perspective using the Class-to-Bag (C2B) distance, which directly assesses the
relationships between the classes and the bags. Taking into account the two major challenges in MIL, high heterogeneity on data and weak label association, we
propose a novel Maximum Margin Multi-Instance Learning (M3 I) approach to
parameterize the C2B distance by introducing the class specific distance metrics
and the locally adaptive significance coefficients. We apply our new approach to
the automatic image categorization tasks on three (one single-label and two multi-label) benchmark data sets. Extensive experiments have demonstrated promising
results that validate the proposed method.
1 Introduction
Traditional image categorization methods usually consider an image as one indiscrete entity, which,
however, neglects an important fact that the semantic meanings (labels) of an image mostly arise
from its constituent regions, but not the entire image. For example, the labels "person" and "car"
associated with the query image in Figure 1 are only characterized by the regions in two bounding
boxes, respectively, rather than the whole image. Therefore, modeling the relationships between labels and regions (instead of the entire image) could potentially reduce the noise in the corresponding
feature space, and the learned semantic models could be more accurate.
In recent years, image representation techniques using semi-local, or patch-based, features, such as
SIFT, have demonstrated some of the best performance in image retrieval and object recognition
applications. These algorithms choose a set of patches in an image, and for each patch compute
Figure 1: Diagram of the proposed M3 I approach. Our task is to learn class specific distance metrics $M_k$ and significance coefficients $w_k^j$ from the training data, with which we compute the C2B distances from the classes to a query image $X$ for classification.
a fixed-length feature vector. This gives a set of vectors per image, where the size of the set can
vary from image to image. Armed with these patch-based features, image categorization is recently
formulated as a multi-instance learning (MIL) task [1-6]. Under the framework of MIL, an image
is viewed as a bag, which contains a number of instances corresponding to the regions in the image.
If any of these instances is related to a semantic concept, the image will be associated with the
corresponding label. The goal of MIL is to construct a learner to classify unseen image bags.
1.1 Learning Class-to-Bag (C2B) distance for multi-instance data
In MIL data objects are represented as bags of instances, therefore the distance between the objects
is a set-to-set distance. Compared to traditional single-instance data that use vector distance such as
Euclidean distance, estimating the Bag-to-Bag (B2B) distance in MIL is more challenging [7, 8]. In
addition, the B2B distances often do not truly reflect the semantic similarities [9]. For example, two
images containing one common object may also have other visually incompatible objects, which
makes these two images less similar in terms of the B2B distance. Therefore, instead of measuring
the similarities between bags, in this paper we approach MIL from a new perspective using the
Class-to-Bag (C2B) distance, which assesses the relationships between the classes and the bags.
Measuring the distance between images (bags) and classes was first introduced in [9] for object
recognition, which used the Bag-to-Class (B2C) distance instead of the C2B distance. Given a
triplet constraint (i, p, n) that image i is more relevant to class p than it is to class n, the C2B
distance formulates this as $D_{pi} < D_{ni}$, while the B2C distance formulates this as $D_{ip} < D_{in}$. It
seems these two formulations are similar, however, they are different when learning parameterized
distance, the main goal of this paper. To be more specific, for the C2B distance we only need to
parameterize training instances, which are available during the training phase. In contrast, for the
B2C distance, parameterizing instances in query images has to be involved, which is not always
feasible because we typically do not know them beforehand. This difference will become more
clear shortly when we mathematically define the C2B distance.
1.2 Challenges and opportunities of MIL
Multi-instance data are different from traditional single-instance data, which bring new opportunities
to improve the classification performance, though together with more challenges. We first explore
these challenges, as well as to find opportunities to enhance the C2B distance introduced above.
Learning class specific distance metrics. Due to the well-known semantic gap between low-level
visual features and high-level semantic concepts [10], choosing an appropriate distance metric plays
an important role in establishing an effective image categorization system, as well as other general
MIL models. Existing metric learning methods [5,6] for multi-instance data only learned one global
metric for an entire data set. However, multi-instance data by nature are highly heterogeneous, thus
a homogeneous distance metric may not suffice to characterize different classes of objects in a same
data set. For example, in Figure 1 the shape and color characterizations of a person are definitely
different from those of a car. To this end, we consider to learn multiple distance metrics, one for each
class, for a multi-instance data set to capture the correlations among the features within each object
category. The metrics are learned simultaneously by forming a maximum margin optimization problem with the constraints that the C2B distances from the correct classes of a training object to it
should be less than the distances from other classes to it by a margin.
(a) SCs w.r.t. class "person" (left) and "horse" (right).
(b) SCs w.r.t. class "person" (left) and "car" (right).
Figure 2: The learned SCs of the instances in a same image when they serve as training samples for different classes. For example, in Figure 2(a) the SC of the horse instance in class "person" is 0.175, whereas its SC in class "horse" is 2.861. As a result, the horse instance contributes a lot in the C2B distance from class "horse" to a query image, while having much less impact in the C2B distance from class "person" to a query image.
Learning locally adaptive C2B distance. Different from the classification problems for traditional
single-instance data, in MIL the classes are weakly associated to the bags, i.e., a label is assigned
to a bag as long as one of its instance belongs to the class. As a result, although a bag is associated
with a class, some, or even most, of its instances may not be truly related to the class. For example,
in the query image in Figure 1, the instance in the left bounding box does not contribute to the label
"person". Intuitively, the instances in a super-bag of a class should not contribute equally in predicting labels for a query image. Instead, they should be properly weighted. With this recognition, we
formulate another maximum margin optimization problem to learn multiple weights for a training
instance, one for each of its labeled classes. The resulted weight reflects the relative importance of
the training instance with respect to a class, which we call the Significance Coefficient (SC). Ideally,
the SC of an instance with respect to its true belonging class should be large, whereas its SC with
respect to other classes should be small. In Figure 2, we show the learned SCs for the instances in
some images when they serve as training samples for their labeled classes. Because the image in
Figure 2(a) has two labels, this image, thereby its two instances, serves as a training sample for both
class "person" (left) and "horse" (right). Although the learned SC of the horse instance is very low when it is in the super-bag of "person" (0.175), as in the left panel of Figure 2(a), its SC (2.861) is relatively high when it is in the super-bag of "horse", its true belonging class, as in the right panel of
Figure 2(a). The same observations can also be seen in the rest examples, which are perfectly in
accordance with our expectations.
With the above two enhancements to C2B distance, the class specific metrics and SCs, the two difficulties in MIL are addressed. Because these two components of the proposed approach are learned
from two maximum margin optimization problems, we call the proposed approach the Maximum Margin Multi-Instance Learning (M3 I) approach, which is schematically illustrated in Figure 1.
2 Learning C2B distance for multi-instance data via M3 I approach
In this section, we first briefly formalize the MIL problem and the C2B distance for a multi-instance
data set, where we provide the notations used in this paper. Then we gradually develop the proposed
M3 I approach to incorporate the class specific distance metrics and the locally adaptive SCs into the
C2B distance, together with its learning algorithms.
Problem formalization of MIL. Given a multi-instance data set with $K$ classes and $N$ training bags, we denote the training set by $\mathcal{D} = \{(X_i, \mathbf{y}_i)\}_{i=1}^N$. Each $X_i = \{x_i^1, \dots, x_i^{n_i}\}$ is a bag of $n_i$ instances, where $x_i^j \in \mathbb{R}^p$ is a vector of $p$ dimensions. The class assignment indicator $\mathbf{y}_i \in \{0, 1\}^K$ is a binary vector, with $\mathbf{y}_i(k) = 1$ indicating that bag $X_i$ belongs to the $k$-th class and $\mathbf{y}_i(k) = 0$ otherwise. We write $Y = [\mathbf{y}_1, \dots, \mathbf{y}_N]^T$. If $\sum_{k=1}^K Y_{ik} = 1$, i.e., each bag belongs to exactly one class, the data set is a single-label data set; if $\sum_{k=1}^K Y_{ik} \geq 1$, i.e., each bag may be associated with more than one class label, the data set is a multi-label data set [11-14]. In the setting of MIL, we assume that (I) bag $X$ is assigned to the $k$-th class $\Leftrightarrow$ at least one instance of $X$ belongs to the $k$-th class; and (II) bag $X$ is not assigned to the $k$-th class $\Leftrightarrow$ no instance of $X$ belongs to the $k$-th class. Our task is to learn from $\mathcal{D}$ a classifier that is able to predict labels for a new query bag.
For convenience, we denote P (Xi ) as the classes that bag Xi belongs to (positive classes), and
N (Xi ) as the classes that Xi does not belong to (negative classes).
C2B distance in MIL. In order to compute the C2B distance, we represent every class as a super-bag, i.e., a set consisting of all the instances in every bag belonging to a class:

$$ S_k = \left\{ s_k^1, \dots, s_k^{m_k} \right\} = \left\{ x_i^j \mid Y_{ik} = 1 \right\}, \qquad (1) $$

where $s_k^j$ is an instance of $S_k$ that comes from one of the training bags belonging to the $k$-th class, and $m_k = \sum_{\{i \mid Y_{ik} = 1\}} n_i$ is the total number of the instances in $S_k$. Note that, in single-label data where each bag belongs to only one class, we have $S_k \cap S_l = \emptyset$ ($\forall\, k \neq l$) and $\sum_{k=1}^K m_k = \sum_{i=1}^N n_i$. In multi-label data where each bag (thereby each instance) may belong to more than one class [11-14], we have $S_k \cap S_l \neq \emptyset$ ($\exists\, k \neq l$) and $\sum_{k=1}^K m_k \geq \sum_{i=1}^N n_i$, i.e., different super-bags may overlap and one $x_i^j$ may appear in multiple super-bags.
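To make the construction concrete, the super-bag of Eq. (1) can be built in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the bags and label matrix below are made up:

```python
import numpy as np

def build_super_bags(bags, Y):
    """Super-bag S_k of Eq. (1): stack the instances of every bag i with Y[i, k] = 1."""
    N, K = Y.shape
    return [np.vstack([bags[i] for i in range(N) if Y[i, k] == 1]) for k in range(K)]

# Two bags of 2-D instances; bag 0 is labeled class 0, bag 1 is labeled both classes.
bags = [np.array([[0.0, 0.0], [1.0, 1.0]]), np.array([[2.0, 2.0]])]
Y = np.array([[1, 0],
              [1, 1]])
S = build_super_bags(bags, Y)
print([s.shape[0] for s in S])  # m_k per class: [3, 1]
```

Because bag 1 carries both labels, its instance appears in both super-bags, so $\sum_k m_k = 4$ exceeds the total instance count $\sum_i n_i = 3$, exactly the multi-label overlap noted above.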
The elementary distance from an instance in a super-bag to a bag is defined as the distance between this instance and its nearest neighbor instance in the bag:

$$ d\left(s_k^j, X_i\right) = \left\| s_k^j - \hat{s}_k^j \right\|^2, \qquad (2) $$

where $\hat{s}_k^j$ is the nearest neighbor instance of $s_k^j$ in $X_i$. Then we compute the C2B distance from a super-bag $S_k$ to a data bag $X_i$ as follows:

$$ D\left(S_k, X_i\right) = \sum_{j=1}^{m_k} d\left(s_k^j, X_i\right) = \sum_{j=1}^{m_k} \left\| s_k^j - \hat{s}_k^j \right\|^2. \qquad (3) $$
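The nearest-neighbor matching of Eq. (2) and the summation of Eq. (3) can be sketched as follows; this is a hedged illustration with made-up data, not the authors' implementation:

```python
import numpy as np

def c2b_distance(S_k, X):
    """Plain C2B distance of Eqs. (2)-(3): match each super-bag instance to its
    nearest neighbor in bag X and sum the squared Euclidean distances."""
    diff = S_k[:, None, :] - X[None, :, :]   # (m_k, n_i, p) pairwise differences
    sq = (diff ** 2).sum(axis=2)             # squared Euclidean distances
    return sq.min(axis=1).sum()              # nearest neighbor per instance, summed over j

S_k = np.array([[0.0, 0.0], [3.0, 4.0]])     # super-bag with m_k = 2 instances
X = np.array([[0.0, 1.0], [3.0, 4.0]])       # query bag with n_i = 2 instances
print(c2b_distance(S_k, X))                  # 1.0: one instance is 1 away, the other matches exactly
```

Note the asymmetry of the construction: the distance runs from a super-bag to a bag, not between two bags.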
2.1 Parameterized C2B distance of the M3 I approach
Because the C2B distance defined in Eq. (3) does not take into account the challenging properties of
multi-instance data as discussed in Section 1.2, we further develop it in the rest of this subsection.
Class specific distance metrics. The C2B distance defined in Eq. (3) is a Euclidean distance, which
is independent of the input data. In order to capture the second-order statistics of the input data that
could potentially improve the subsequent classification [5, 6], we consider using the Mahalanobis distance with an appropriate distance metric. Recognizing the high heterogeneity of multi-instance data, instead of learning a global distance metric as in existing works [5, 6], we propose to learn $K$ different class specific distance metrics $\{M_k\}_{k=1}^{K} \subset \mathbb{R}^{p \times p}$, one for each class. Note that,
using class specific distance metrics is only feasible with the distance between classes and bags
(either C2B or B2C distance), because we are only concerned with intra-class distances. In contrast,
traditional B2B distance needs to compute the distances between bags belonging to different classes
involving inter-class distance metrics, which inevitably complicates the problem.
Specifically, instead of using Eq. (3), we compute the C2B distance using the Mahalanobis distance as:

$$ D\left(S_k, X_i\right) = \sum_{j=1}^{m_k} \left( s_k^j - \hat{s}_k^j \right)^T M_k \left( s_k^j - \hat{s}_k^j \right). \qquad (4) $$
Locally adaptive C2B distance. Now we further develop the C2B distance defined in Eq. (4) to
address the labeling ambiguity in multi-instance scenarios. We propose a locally adaptive C2B
distance by weighting the instances in a super-bag upon their relevance to the corresponding class.
Due to the weak association between the instances and the bag labels, not every instance in a super-bag of a class truly characterizes the corresponding semantic concept. For example, in Figure 2(a) the region for the horse object is in the super-bag of the "person" class, because the entire image is labeled with both "person" and "horse". As a result, intuitively, we should give a smaller, or even no, weight to the horse instance when determining whether to assign the "person" label to a query
image; and give it a higher weight when deciding the "horse" label. To be more precise, let $w_k^j$ be the
weight associated with $s_k^j$; we wish to learn the C2B distance as follows:

$$ D\left(S_k, X_i\right) = \sum_{j=1}^{m_k} w_k^j \left( s_k^j - \hat{s}_k^j \right)^T M_k \left( s_k^j - \hat{s}_k^j \right). \qquad (5) $$
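A sketch of the fully parameterized distance of Eq. (5), reusing the Euclidean nearest-neighbor assignment of Eq. (2); with $M_k = I$ and unit weights it reduces to Eq. (3). All inputs are illustrative:

```python
import numpy as np

def c2b_param(S_k, w_k, M_k, X):
    """Parameterized C2B distance of Eq. (5)."""
    total = 0.0
    for s, w in zip(S_k, w_k):
        d = X - s                              # differences to every instance of bag X
        nn = np.argmin((d ** 2).sum(axis=1))   # Euclidean nearest neighbor, as in Eq. (2)
        v = d[nn]
        total += w * (v @ M_k @ v)             # significance-weighted Mahalanobis term
    return total

S_k = np.array([[0.0, 0.0], [3.0, 4.0]])
X = np.array([[0.0, 1.0], [3.0, 4.0]])
print(c2b_param(S_k, np.ones(2), np.eye(2), X))            # 1.0, identical to Eq. (3)
print(c2b_param(S_k, np.array([0.1, 1.0]), np.eye(2), X))  # 0.1: first instance down-weighted
```

The second call mirrors the horse/person example above: shrinking an instance's SC shrinks its contribution to that class's C2B distance.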
Because $w_k^j$ reflects the relative importance of instance $s_k^j$ when determining the label for the $k$-th class, we call it the "Significance Coefficient (SC)" of $s_k^j$.
2.2 Procedures to learn $M_k$ and $w_k^j$
Given the parameterized C2B distance defined in Eq. (5) for a multi-instance data set, our learning objectives are the two sets of variables $M_k$ and $w_k^j$. Motivated by metric learning from relative comparisons [15-17], we learn $M_k$ and $w_k^j$ by constraining that the C2B distances from the true belonging classes of bag $X_i$ to it are smaller than the distances from any other classes to it by a margin:

$$ \forall\, p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i): \quad D(S_n, X_i) - D(S_p, X_i) \geq 1 - \xi_{ipn}, \qquad (6) $$

where $\xi_{ipn}$ is a slack variable because the constraints usually cannot be completely satisfied in real-world data. Therefore, $\xi_{ipn}$ measures the deviation from the strict constraint for the triplet $(i, p, n)$. In the following, we formulate two maximum margin optimization problems to learn the two sets of target variables $M_k$ and $w_k^j$, one for each of them.
Optimizing $M_k$. First we fix $w_k^j$ to optimize $M_k$. To avoid over-fitting, as in the support vector machine (SVM), we minimize the overall C2B distances from $X_i$'s associated classes to itself and the total amount of slack. Specifically, we solve the following convex optimization problem:

$$ \min_{M_1, \dots, M_K} \ \sum_{i,\ p \in \mathcal{P}(X_i)} D(S_p, X_i) + C \sum_{i,\ p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i)} \xi_{ipn}, \qquad (7) $$

$$ \text{s.t.} \ \forall\, p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i): \ \xi_{ipn} \geq 0, \quad D(S_n, X_i) - D(S_p, X_i) \geq 1 - \xi_{ipn}; \qquad \forall\, k: M_k \succeq 0, $$
where $C$ is a trade-off parameter, acting the same as in SVM. The optimization problem in Eq. (7) is a
semi-definite programming (SDP) problem, which can be solved by standard SDP solvers. However,
standard SDP solvers are computationally expensive. Therefore, we use the gradient descent SDP
solver introduced in [18] to solve the problem.
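The solver of [18] is not reproduced here, but one step shared by gradient-based SDP solvers of this kind, projecting each $M_k$ back onto the positive semi-definite cone after a gradient update so that the constraint $M_k \succeq 0$ in Eq. (7) keeps holding, can be sketched as follows. This is a generic illustration, not the exact algorithm of [18]:

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone (enforces M_k >= 0 in Eq. (7))
    by zeroing out its negative eigenvalues."""
    M = (M + M.T) / 2.0                          # symmetrize against numerical drift
    vals, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

M = np.array([[2.0, 0.0],
              [0.0, -1.0]])                      # indefinite after a gradient step
P = project_psd(M)
print(np.linalg.eigvalsh(P))                     # all eigenvalues are now non-negative
```

This eigenvalue clipping is the Euclidean projection onto the PSD cone, so alternating it with gradient steps yields a standard projected-gradient scheme for problems like Eq. (7).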
Optimizing $w_k^j$. Then we fix $M_k$ to optimize $w_k^j$. Let $d_M\left(s_k^j, X_i\right) = \left(s_k^j - \hat{s}_k^j\right)^T M_k \left(s_k^j - \hat{s}_k^j\right)$, and denote $\mathbf{d}_i^k = \left[ d_M\left(s_k^1, X_i\right), \dots, d_M\left(s_k^{m_k}, X_i\right) \right]^T$. Let $\mathbf{w}_k = \left[ w_k^1, \dots, w_k^{m_k} \right]^T$; by the definition in Eq. (5) we rewrite Eq. (6) as follows:

$$ \mathbf{w}_n^T \mathbf{d}_i^n - \mathbf{w}_p^T \mathbf{d}_i^p \geq 1 - \xi_{ipn}, \quad \forall\, p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i). \qquad (8) $$

In order to make use of the standard large-margin classification framework and simplify our derivation, following [17] we expand our notations. Let $\mathbf{w} = \left[ \mathbf{w}_1^T, \dots, \mathbf{w}_K^T \right]^T$, which is the concatenation of the class-specific weight vectors $\mathbf{w}_k$. Thus, each class-specific weight vector $\mathbf{w}_k$ corresponds to a subrange of $\mathbf{w}$. Similarly, we expand the distance vectors and let $\mathbf{d}_{ipn}$ be a vector of the same length as $\mathbf{w}$, such that all its entries are 0 except the subranges corresponding to class $p$ and class $n$, which are set to $-\mathbf{d}_i^p$ and $\mathbf{d}_i^n$ respectively. It is straightforward to verify that $\mathbf{w}_n^T \mathbf{d}_i^n - \mathbf{w}_p^T \mathbf{d}_i^p = \mathbf{w}^T \mathbf{d}_{ipn}$. Thus Eq. (8) becomes:

$$ \mathbf{w}^T \mathbf{d}_{ipn} \geq 1 - \xi_{ipn}, \quad \forall\, p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i). \qquad (9) $$
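The notation expansion behind Eq. (9) is pure bookkeeping; the following sketch, with made-up class sizes and distance vectors, verifies that the expanded constraint agrees with the per-class form of Eq. (8):

```python
import numpy as np

m = [2, 3, 2]                                 # instances per super-bag, m_k
off = np.concatenate(([0], np.cumsum(m)))     # start of each class's subrange in w
d = [np.array([0.5, 1.0]),                    # per-class distance vectors d_i^k for one bag
     np.array([0.2, 0.4, 0.6]),
     np.array([1.5, 2.5])]

def expand(p, n):
    """d_ipn: zeros everywhere except -d_i^p in class p's subrange and d_i^n in class n's."""
    v = np.zeros(off[-1])
    v[off[p]:off[p + 1]] = -d[p]
    v[off[n]:off[n + 1]] = d[n]
    return v

w = np.array([1.0, 2.0, 0.5, 0.5, 0.5, 1.0, 1.0])   # concatenation [w_1; w_2; w_3]
p, n = 0, 2
lhs = w[off[n]:off[n + 1]] @ d[n] - w[off[p]:off[p + 1]] @ d[p]
print(np.isclose(lhs, w @ expand(p, n)))             # True
```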
Following the standard soft-margin SVM framework, we minimize the cumulative deviation over all triplet constraints $(i, p, n)$ and impose $\ell_2$-norm regularization on $\mathbf{w}$ as follows:

$$ \min_{\mathbf{w},\ \xi_{ipn}} \ \frac{1}{2} \left\| \mathbf{w} - \mathbf{w}^{(0)} \right\|^2 + C \sum_{i,\ p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i)} \xi_{ipn} \qquad (10) $$

$$ \text{s.t.} \ \forall\, i,\ p \in \mathcal{P}(X_i),\ n \in \mathcal{N}(X_i): \ \xi_{ipn} \geq 0, \quad \mathbf{w}^T \mathbf{d}_{ipn} \geq 1 - \xi_{ipn}; \qquad \forall\, j: \mathbf{w}(j) > 0, $$
where C controls the tradeoff between the loss and regularization terms. The positivity constraint
on the elements of w is due to the fact that our goal is to define a distance function which, by
definition, is a positive definite operator. In addition, we also enforce a prior weight vector $\mathbf{w}^{(0)}$ in the objective. In standard SVM, all the entries of $\mathbf{w}^{(0)}$ are set to 0 by default. In our objective, however, we set all its entries to 1, because we consider all the instances equally important when we have no prior training knowledge.
We solve Eq. (10) using the solver introduced in [17], which solves the dual problem by an accelerated iterative method. Upon solution, we obtain w, which can be decomposed into the expected
instance weights wkj for every instance with respect to its labeled classes.
2.3 Label prediction using C2B distance
Solving the optimization problems in Eq. (7) and Eq. (10) for an input multi-instance data set $\mathcal{D}$, we obtain the learned class specific distance metrics $M_k$ ($1 \leq k \leq K$) and the significance coefficients $w_k^j$ ($1 \leq k \leq K$, $1 \leq j \leq m_k$). Given a query bag $X$, upon the learned $M_k$ and $w_k^j$ we can compute the parameterized C2B distances $D(S_k, X)$ ($1 \leq k \leq K$) from all the classes to the query bag using Eq. (5). Sorting $D(S_k, X)$, we can easily assign labels to the query bag.
For single-label multi-instance data, in which each bag belongs to one and only one class, we assign $X$ to the class with the minimum C2B distance, i.e., $l(X) = \arg\min_k D(S_k, X)$.
For multi-label multi-instance data, in which one bag can be associated with more than one class label, we need a threshold to make predictions. For every class, we learn a threshold from the training data as $b_k = \sum_{i=1}^N Y_{ik} D(S_k, X_i) \big/ \sum_{i=1}^N Y_{ik}$, which is the average of the C2B distances from the $k$-th class to all its training bags. Then we determine the class membership for $X$ using the following rule: assign $X$ to the $k$-th class if $D(S_k, X) < b_k$, and not otherwise.
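Given precomputed C2B distances, both decision rules are straightforward; a sketch with made-up numbers:

```python
import numpy as np

# C2B distances D(S_k, X) from K = 3 classes to one query bag (made-up values).
D_query = np.array([2.1, 0.7, 1.9])

# Single-label rule: the class with minimum C2B distance.
print(int(np.argmin(D_query)))          # 1

# Multi-label rule: threshold b_k = average C2B distance from class k to its
# own training bags; assign class k whenever D(S_k, X) < b_k.
D_train = np.array([[2.0, 1.0, 3.0],    # rows: training bags, columns: classes
                    [2.4, 0.8, 2.6],
                    [1.6, 1.2, 2.2]])
Y = np.array([[1, 0, 1],                # bag-to-class memberships Y_ik
              [0, 1, 1],
              [1, 1, 0]])
b = (Y * D_train).sum(axis=0) / Y.sum(axis=0)   # per-class thresholds: 1.8, 1.0, 2.8
print((D_query < b).tolist())                   # [False, True, True]
```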
3 Related works
Learning B2C distance. Due to the unsatisfactory performance and high computational complexity
of machine vision models using B2B distance, a new perspective to compute B2C distance was
presented in [9]. This non-parametric model does not involve training process. Though simple, it
achieved promising results in object recognition. However, this method heavily relies on the large
number of local features in the training and testing set. To address this, Wang et al. [18] further
developed this method by introducing distance metrics, to achieve better results with a small amount
of training. However, as discussed earlier in Section 1.1, B2C distance is hard to parameterize in
many real-world applications. To tackle this, we propose to use the C2B distance for multi-instance data.
Learning distance metric for MIL. As demonstrated in literature [5, 6], learning a distance metric
from training data to maintain class information is beneficial for MIL. However, existing methods
[5, 6] learned only one global metric for a multi-instance data set, which is insufficient because the
objects in multi-instance data by nature are highly heterogeneous. Recognizing this, we propose to
learn multiple distance metrics, one for each class. [18] took the same perspective as us, though it does
not clearly formalize image classification as a MIL task.
Learning locally adaptive distance. Due to the weak label association in MIL, instead of considering all the instances equally important, we assign locally adaptive SCs to every instance in training
data. Locally adaptive distance was first introduced in [16, 17] for B2B distance. Compared to it,
the proposed C2B distance is more advantageous. First, C2B distance measures the relevance between a class and a bag, hence label prediction can be naturally made upon the resulted distance,
whereas an additional classification step [16] or transformation [17] is required when B2B distance
is used. Second, C2B distance directly assesses the relations between semantic concepts and image
regions, hence it could narrow the gap between high-level semantic concepts and low-level visual
features. Last, but not least, our C2B distance requires significantly less computation. Specifically,
the triplet constraints used in the C2B model are constructed between classes and bags, whose number is $O(NK^2)$, while those used in the B2B model [16, 17] are constructed between bags, whose number is $O(N^3)$. As $N$ (bag number) is typically much larger than $K$ (class number), our approach is much more computationally efficient. Indeed, a constraint selection step was involved in [16, 17].
Table 1: Performance comparison on Object Recognition data set.

Methods     Accuracy
DD          0.676 ± 0.074
DD-SVM      0.754 ± 0.054
MIMLBoost   0.793 ± 0.033
MIMLSVM     0.796 ± 0.042
B2C         0.672 ± 0.013
B2C-M       0.715 ± 0.032
C2B         0.797 ± 0.015
C2B-M       0.815 ± 0.026
C2B-SC      0.820 ± 0.031
M3 I        0.832 ± 0.029

Table 2: Performance comparison on Corel5K data set.

Methods     Hamming loss ↓   One-error ↓   Coverage ↓   Rank loss ↓   Avg. prec. ↑
MIMLBoost   0.282            0.584         5.974        0.281         0.467
MIMLSVM     0.271            0.581         5.993        0.289         0.472
DM          0.243            0.575         5.512        0.236         0.541
MildML      0.238            0.569         5.107        0.233         0.554
B2C         0.275            0.580         5.823        0.283         0.470
B2C-M       0.270            0.562         5.675        0.241         0.493
C2B         0.224            0.545         5.032        0.229         0.565
C2B-M       0.216            0.538         4.912        0.218         0.572
C2B-SC      0.211            0.527         4.903        0.213         0.580
M3 I        0.207            0.512         4.760        0.209         0.593
4 Experimental results
In this section, we experimentally evaluate the proposed M3 I approach in image categorization tasks
on three benchmark data sets: Object Recognition data set [2] which is a single-label image data set;
and Corel5K data set [19] and PASCAL2010 data set [20] which are multi-label data sets.
4.1 Classification on single-label image data
Because the proposed M3 I approach comprises two components, class specific metrics and significance coefficients, we implement four versions of our approach and evaluate their performances: (1) the simplest C2B distance, denoted as "C2B", computed by Eq. (3), in which no learning is involved; (2) C2B distance with class specific metrics, denoted as "C2B-M", computed by Eq. (4); (3) C2B distance with SCs, denoted as "C2B-SC", computed by Eq. (5) with $M_k = I$; and (4) the C2B distance computed by the proposed M3 I approach using Eq. (5). We compare our methods against the following established MIL algorithms including (A) Diversity Density (DD) method [1], (B) DD-SVM
method [2], (C) MIMLBoost method [3] and (D) MIMLSVM method [3]. We also compare our
method to the two related methods, i.e., (E) B2C method [9] and (F) B2C-M method [18]. These
two methods are not MIL methods, therefore we consider each instance as an image descriptor following [9, 18]. We implement these methods following the original papers. The parameters of DD
and DD-SVM are set according to the settings that resulted in the best performance [1, 2]. The number of boosting rounds for MIMLBoost is set to 25 and for MIMLSVM we set γ = 0.2, the same as in the experimental settings in [3]. For MIMLBoost and MIMLSVM, the top ranked class is regarded
as the single-label prediction as in [3].
The classification accuracy is employed to measure the performance of the compared methods. Standard 5-fold cross-validation is performed and the classification accuracies averaged over all the 20
categories by the compared methods are presented in Table 1, where the means and standard deviations of the results in the 5 trials are reported and the best performances are bolded. The results
in Table 1 show that the proposed M3 I method clearly outperforms all other compared methods,
which demonstrate the effectiveness of our method in single-label classification. Moreover, our M3 I
method is always better than its simplified versions, which confirms the usefulness of class specific
metrics and SCs in MIL.
4.2 Classification on multi-label image data
Multi-label data refers to data sets in which an image can be associated with more than one semantic
concept, which is more challenging but closer to real world applications than single-label data [21].
Thus, we evaluate the proposed method in multi-label image categorization tasks.
Experimental settings. We compare our approach to the following most recent MIML classification
methods. (1) MIMLBoost method [3] and (2) MIMLSVM method [3] are designed for MIML
classification, though they can also work with single-label multi-instance data as in the last subsection.
(3) Distance metric (DM) method [5] and (4) MildML method [6] learn a global distance metric
from multi-instance data to compute B2B distances, therefore an additional classification step is
Table 3: Classification performance comparison on PASCAL VOC 2010 data.

Methods     Hamming loss ↓   One-error ↓     Coverage ↓      Rank loss ↓     Average precision ↑
MIMLBoost   0.183 ± 0.020    0.346 ± 0.034   1.034 ± 0.075   0.189 ± 0.016   0.472 ± 0.023
MIMLSVM     0.180 ± 0.018    0.349 ± 0.029   1.064 ± 0.084   0.181 ± 0.014   0.479 ± 0.026
DM          0.146 ± 0.012    0.307 ± 0.024   0.942 ± 0.064   0.167 ± 0.013   0.501 ± 0.031
MildML      0.139 ± 0.011    0.308 ± 0.022   0.951 ± 0.058   0.162 ± 0.011   0.504 ± 0.029
B2C         0.180 ± 0.013    0.343 ± 0.020   1.052 ± 0.050   0.148 ± 0.023   0.469 ± 0.019
B2C-M       0.177 ± 0.010    0.332 ± 0.022   0.993 ± 0.049   0.177 ± 0.019   0.502 ± 0.023
C2B         0.176 ± 0.017    0.326 ± 0.027   0.979 ± 0.051   0.168 ± 0.020   0.513 ± 0.021
C2B-M       0.145 ± 0.014    0.301 ± 0.020   0.966 ± 0.046   0.160 ± 0.024   0.509 ± 0.026
C2B-SC      0.137 ± 0.010    0.297 ± 0.019   0.925 ± 0.035   0.150 ± 0.017   0.527 ± 0.016
M3 I        0.119 ± 0.009    0.275 ± 0.018   0.843 ± 0.013   0.141 ± 0.010   0.548 ± 0.032
required. Following [5], we use the citation-KNN [22] algorithm for classification, whose parameters are set as $R = 20$ and $C = 20$ as in [5]. We implement these methods following their original works. The Corel5K data set has already been split into a training set and a test set, thus we train the compared methods using the 4500 training images and classify the 500 test images. We run 5-fold cross-validation on the PASCAL VOC 2010 data set and report the "mean ± std" performance over the 5 trials.
Experimental results. Because the two data sets used in our experiments are multi-label data sets,
we measure the classification performances of the compared methods using five widely used multi-label evaluation metrics, as shown in Tables 2 and 3, where "↓" indicates "the smaller the better" while "↑" indicates "the bigger the better". Details of these evaluation metrics can be found in [5, 23].
From Table 2 and 3, we can see that the proposed M3 I method consistently outperforms the other
methods, sometimes very significantly. Moreover, it is always better than its simplified versions.
Finally, we study the locally adaptive SCs learned for the training instances. In Figure 2, we show
the SCs for several images in PASCAL VOC 2010 data set when they serve as training images.
From the figures we can see that, a same object has different SCs when it is in different super-bags.
For example, the instance of the person in the inner bounding box of the image in Figure 2(b) has
comparably higher SC than the car instance in the outer bounding box when considering the "person" class. In contrast, when it is in the super-bag of "car", its SC is lower than that of the car instance.
These observations are consistent with our intuitions and theoretical analysis, because the person
instance contribute considerably large in characterizing the ?person? concept, whereas it contributes
much less, or even possibly harmful, in characterizing the ?car? concept. The same observations
can also be seen on almost all the training images, which are not shown due to space limit. These
interesting results provide concrete evidence to support the proposed M3 I method's capability in
revealing the semantic insight of a multi-instance image data set.
5 Conclusions
In this paper, we proposed a novel Maximum Margin Multi-Instance Learning (M3 I) method, which,
instead of using the B2B distance as in many existing methods, approached MIL from a new perspective using the C2B distance to directly assess the relevance between classes and bags. Moreover, taking into account the two challenging properties of multi-instance data, high heterogeneity
and weak label association, we further developed the C2B distance by introducing class specific
distance metrics and locally adaptive SCs, which are learned by solving two convex maximum margin optimization problems. We applied the proposed M3 I method in image categorization tasks
on three benchmark data sets, one for single-label classification and two for multi-label classification. Encouraging experimental results by comparing our method to state-of-the-art MIL algorithms
demonstrated its effectiveness.
Acknowledgments
This research was supported by NSF-IIS-1117965, NSF-CCF-0830780, NSF-DMS-0915228, NSF-CCF-0917274.
References
[1] O. Maron and A.L. Ratan. Multiple-instance learning for natural scene classification. In ICML,
1998.
[2] Y. Chen and J.Z. Wang. Image categorization by learning and reasoning with regions. JMLR,
5:913-939, 2004.
[3] Z.H. Zhou and M.L. Zhang. Multi-instance multi-label learning with application to scene
classification. In NIPS, 2007.
[4] Z.J. Zha, X.S. Hua, T. Mei, J. Wang, G.J. Qi, and Z. Wang. Joint multi-label multi-instance
learning for image classification. In CVPR, 2008.
[5] R. Jin, S. Wang, and Z.H. Zhou. Learning a distance metric from multi-instance multi-label
data. In CVPR, 2009.
[6] M. Guillaumin, J. Verbeek, and C. Schmid. Multiple instance metric learning from automatically labeled bags of faces. In ECCV, 2010.
[7] H. Wang, F. Nie, and H. Huang. Learning instance specific distance for multi-instance classification. In AAAI, 2011.
[8] H. Wang, F. Nie, H. Huang, and Y. Yang. Learning frame relevance for video classification. In
ACM MM, 2011.
[9] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[10] A.W.M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image
retrieval at the end of the early years. IEEE TPAMI, 22(12):1349?1380, 2002.
[11] H. Wang, H. Huang, and C. Ding. Image annotation using multi-label correlated Green?s
function. In ICCV, 2009.
[12] H. Wang, H. Huang, and C. Ding. Multi-label feature transform for image classifications. In
ECCV, 2010.
[13] H. Wang, C. Ding, and H. Huang. Multi-label linear discriminant analysis. In ECCV, pages
126?139. Springer, 2010.
[14] H. Wang, H. Huang, and C. Ding. Image annotation using bi-relational graph of images and
semantic labels. In CVPR, 2011.
[15] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. In NIPS,
2003.
[16] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance
functions. In NIPS, 2007.
[17] A. Frome, Y. Singer, F. Sha, and J. Malik. Learning globally-consistent local distance functions
for shape-based image retrieval and classification. In ICCV, 2007.
[18] Z. Wang, Y. Hu, and L.T. Chia. Image-to-Class Distance Metric Learning for Image Classification. In ECCV, 2010.
[19] P. Duygulu, K. Barnard, J. De Freitas, and D. Forsyth. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In ECCV, 2002.
[20] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman.
The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results.
http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2010/.
[21] H. Wang, C. Ding, and H. Huang. Multi-label classification: Inconsistency and class balanced
k-nearest neighbor. In AAAI, 2010.
[22] J. Wang and J.D. Zucker. Solving the multiple-instance problem: A lazy learning approach. In
ICML, 2000.
[23] R.E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization.
Machine learning, 39(2):135?168, 2000.
9
Gaussian Process Training with Input Noise
Carl Edward Rasmussen
Department of Engineering
Cambridge University
Cambridge, CB2 1PZ
[email protected]
Andrew McHutchon
Department of Engineering
Cambridge University
Cambridge, CB2 1PZ
[email protected]
Abstract
In standard Gaussian Process regression input locations are assumed to be noise
free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local
linear expansion about each input point. This allows the input noise to be recast
as output noise proportional to the squared gradient of the GP posterior mean.
The input noise variances are inferred from the data as extra hyperparameters.
They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which
alternates between optimising the hyperparameters and calculating the posterior
gradient. Analytic predictive moments can then be found for Gaussian distributed
test points. We compare our model to others over a range of different regression
problems and show that it improves over current methods.
1 Introduction
Over the last decade the use of Gaussian Processes (GPs) as non-parametric regression models has
grown significantly. They have been successfully used to learn mappings between inputs and outputs
in a wide variety of tasks. However, many authors have highlighted a limitation in the way GPs
handle noisy measurements. Standard GP regression [1] makes two assumptions about the noise
in datasets: firstly that measurements of input points, x, are noise-free, and, secondly, that output
points, y, are corrupted by constant-variance Gaussian noise. For some datasets this makes intuitive
sense: for example, an application in Rasmussen and Williams (2006) [1] is that of modelling CO2
concentration in the atmosphere over the last forty years. One can viably assume that the date is
available noise-free and the CO2 sensors are affected by signal-independent sensor noise.
However, in many datasets, either or both of these assumptions are not valid and lead to poor modelling performance. In this paper we look at datasets where the input measurements, as well as the
output, are corrupted by noise. Unfortunately, in the GP framework, considering each input location
to be a distribution is intractable. If, as an approximation, we treat the input measurements as if they
were deterministic, and inflate the corresponding output variance to compensate, this leads to the
output noise variance varying across the input space, a feature often called heteroscedasticity. One
method for modelling datasets with input noise is, therefore, to hold the input measurements to be
deterministic and then use a heteroscedastic GP model. This approach has been strengthened by the
breadth of research published recently on extending GPs to heteroscedastic data.
However, referring the input noise to the output in this way results in heteroscedasticity with a very
particular structure. This structure can be exploited to improve upon current heteroscedastic GP
models for datasets with input noise. One can imagine that in regions where a process is changing
its output value rapidly, corrupted input measurements will have a much greater effect than in regions
Pre-conference version
where the output is almost constant. In other words, the effect of the input noise is related to the
gradient of the function mapping input to output. This is the intuition behind the model we propose
in this paper.
We fit a local linear model to the GP posterior mean about each training point. The input noise variance can then be referred to the output, proportional to the square of the posterior mean function's
gradient. This approach is particularly powerful in the case of time-series data where the output
at time t becomes the input at time t + 1. In this situation, input measurements are clearly not
noise-free: the noise on a particular measurement is the same whether it is considered an input or
output. By also assuming the inputs are noisy, our model is better able to fit datasets of this type.
Furthermore, we can estimate the noise variance on each input dimension, which is often very useful
for analysis.
Related work lies in the field of heteroscedastic GPs. A common approach to modelling changing
variance with a GP, as proposed by Goldberg et al. [2], is to make the noise variance a random
variable and attempt to estimate its form at the same time as estimating the posterior mean. Goldberg
et al. suggested using a second GP to model the noise level as a function of the input location.
Kersting et al. [3] improved upon Goldberg et al.'s Monte Carlo training method with a 'most likely'
training scheme and demonstrated its effectiveness; related work includes Yuan and Wahba [4], and
Le et al. [5], who proposed a scheme to find the variance via a maximum-a-posteriori estimate set
in the exponential family. Snelson and Ghahramani [6] suggest a different approach whereby the
importance of points in a pseudo-training set can be varied, allowing the posterior variance to vary
as well. Recently Wilson and Ghahramani broadened the scope still further and proposed Copula
and Wishart Process methods [7, 8].
Although all of these methods could be applied to datasets with input noise, they are designed for a
more general class of heteroscedastic problems and so none of them exploits the structure inherent in
input noise datasets. Our model also has a further advantage in that training is by marginal likelihood
maximisation rather than by an approximate inference method, or one such as maximum likelihood,
which is more susceptible to overfitting. Dallaire et al. [9] train on Gaussian distributed input points
by calculating the expected the covariance matrix. However, their method requires prior knowledge
of the noise variance, rather than inferring it as we do in this paper.
2 The Model
In this section we formally derive our model, which we refer to as NIGP (noisy input GP).
Let x and y be a pair of measurements from a process, where x is a D dimensional input to the
process and y is the corresponding scalar output. In standard GP regression we assume that y is a
noisy measurement of the actual output of the process $\tilde{y}$,
$$y = \tilde{y} + \epsilon_y \qquad (1)$$
where $\epsilon_y \sim \mathcal{N}(0, \sigma_y^2)$. In our model, we further assume that the inputs are also noisy measurements of the actual input $\tilde{x}$,
$$x = \tilde{x} + \epsilon_x \qquad (2)$$
where $\epsilon_x \sim \mathcal{N}(0, \Sigma_x)$. We assume that each input dimension is independently corrupted by noise, thus $\Sigma_x$ is diagonal. Under a model $f(\cdot)$, we can write the output as a function of the input in the following form,
$$y = f(\tilde{x} + \epsilon_x) + \epsilon_y \qquad (3)$$
For a GP model the posterior distribution based on equation 3 is intractable. We therefore consider a Taylor expansion about the latent state $\tilde{x}$,
$$f(\tilde{x} + \epsilon_x) = f(\tilde{x}) + \epsilon_x^T \frac{\partial f(\tilde{x})}{\partial \tilde{x}} + \ldots \simeq f(x) + \epsilon_x^T \frac{\partial f(x)}{\partial x} + \ldots \qquad (4)$$
We don't have access to the latent variable $\tilde{x}$ so we approximate it with the noisy measurements. Now the derivative of a Gaussian Process is another Gaussian Process [10]. Thus, the exact treatment would require the consideration of a distribution over Taylor expansions. Although the resulting distribution is not Gaussian, its first and second moments can be calculated analytically. However, these calculations carry a high computational load and previous experiments showed this exact treatment
provided no significant improvement over the much quicker approximate method we now describe.
Instead we take the derivative of the mean of the GP function, which we will denote $\partial\bar{f}$, a D-dimensional vector, for the derivative of one GP function value w.r.t. the D-dimensional input, and $\Delta_{\bar{f}}$, an N by D matrix, for the derivative of N function values. Differentiating the mean function corresponds to ignoring the uncertainty about the derivative. If we expand up to the first order terms we get a linear model for the input noise,
$$y = f(x) + \epsilon_x^T \partial\bar{f} + \epsilon_y \qquad (5)$$
The probability of an observation $y$ is therefore,
$$P(y \mid f) = \mathcal{N}\left(f,\; \sigma_y^2 + \partial\bar{f}^{\,T} \Sigma_x\, \partial\bar{f}\right) \qquad (6)$$
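As a quick numerical sanity check of this first-order propagation (our own illustration, not from the paper), one can compare the Monte Carlo variance of a smooth function of a noisy input against the linearised variance $(\partial f / \partial x)^2 \sigma_x^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order noise propagation behind eqs. (5)/(6): for smooth f and input
# noise with standard deviation s_x, Var[f(x + eps_x)] ~= (f'(x))^2 * s_x^2.
f = np.sin
x, s_x = 1.0, 0.05
samples = f(x + s_x * rng.standard_normal(200_000))
mc_var = samples.var()                # Monte Carlo estimate of the variance
lin_var = (np.cos(x) * s_x) ** 2      # linearised (squared-gradient) variance
print(mc_var, lin_var)                # the two agree to within a few percent
```

The agreement degrades as the input noise grows relative to the curvature of f, which is exactly the regime in which the first-order expansion above becomes a poorer approximation.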
We keep the usual Gaussian Process prior, P (f | X) = N (0, K(X, X)), where K(X, X) is the N
by N training data covariance matrix and X is an N by D matrix of input observations. Combining
these probabilities gives the predictive posterior mean and variance as,
$$\mathrm{E}[f_* \mid X, y, x_*] = k(x_*, X)\left(K(X, X) + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}\right)^{-1} y$$
$$\mathrm{V}[f_* \mid X, y, x_*] = k(x_*, x_*) - k(x_*, X)\left(K(X, X) + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}\right)^{-1} k(X, x_*) \qquad (7)$$
This is equivalent to treating the inputs as deterministic and adding a corrective term, $\mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}$, to the output noise. The notation "diag{.}" results in a diagonal matrix whose elements are the diagonal elements of its matrix argument. Note that if the posterior mean gradient is constant across the input space the heteroscedasticity is removed and our model is essentially identical to a standard GP.
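Equation 7 is straightforward to implement once the posterior-mean slopes at the training points are available. The sketch below is our own illustration under that assumption (it is not the authors' released code; `slopes` is taken as a given N x D matrix of posterior-mean gradients):

```python
import numpy as np

def se_kernel(A, B, lengthscales, sigma_f):
    """Squared exponential (ARD) kernel matrix between row sets A and B."""
    diff = A[:, None, :] - B[None, :, :]
    return sigma_f**2 * np.exp(-0.5 * np.sum((diff / lengthscales) ** 2, axis=2))

def nigp_posterior(X, y, x_star, slopes, Sigma_x, sigma_y, lengthscales, sigma_f):
    """Posterior mean/variance of equation (7): the input noise is referred
    to the output through the corrective term diag{slopes Sigma_x slopes^T}."""
    K = se_kernel(X, X, lengthscales, sigma_f)
    # per-point corrective output variance: slope_n^T Sigma_x slope_n
    corrective = np.diag(np.einsum('nd,de,ne->n', slopes, Sigma_x, slopes))
    A = K + sigma_y**2 * np.eye(len(X)) + corrective
    k_s = se_kernel(x_star, X, lengthscales, sigma_f)
    mean = k_s @ np.linalg.solve(A, y)
    # k(x*, x*) = sigma_f^2 for the SE kernel
    var = sigma_f**2 - np.sum(k_s * np.linalg.solve(A, k_s.T).T, axis=1)
    return mean, var
```

With all slopes zero the corrective term vanishes and this reduces exactly to standard GP regression, which matches the observation above about constant-gradient regions.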
An advantage of our approach can be seen in the case of multiple output dimensions. As the input
noise levels are the same for each of the output dimensions, our model can use data from all of the
outputs when learning the input noise variances. Not only does this give more information about the
noise variances without needing further input measurements but it also reduces over-fitting as the
learnt noise variances must agree with all E output dimensions.
For time-series datasets (where the model has to predict the next state given the current), each
dimension's input and output noise variance can be constrained to be the same since the noise level
on a measurement is independent of whether it is an input or output. This further constraint increases
the ability of the model to recover the actual noise variances. The model is thus ideally suited to the
common task of multivariate time series modelling.
3 Training
Our model introduces an extra D hyperparameters compared to the standard GP - one noise variance
hyperparameter per input dimension. A major advantage of our model is that these hyperparameters
can be trained alongside any others by maximisation of the marginal likelihood. This approach
automatically includes regularisation of the noise parameters and reduces the effect of over-fitting.
In order to calculate the marginal likelihood of the training data we need the posterior distribution,
and the slope of its mean, at each of the training points. However, evaluating the posterior mean
from equation 7 with x? ? X, results in an analytically unsolvable differential equation: f? is a
complicated function of ?f?, its own derivative. Therefore, we define a two-step approach: first we
evaluate a standard GP with the training data, using our initial hyperparameter settings and ignoring
the input noise. We then find the slope of the posterior mean of this GP at each of the training points
and use it to add in the corrective variance term, diag{?f? ?x ?Tf? }. This process is summarised in
figures 1a and 1b.
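The slope-finding step has a closed form for the squared exponential kernel, since the posterior mean is $\bar{f}(x_*) = k(x_*, X)\,\alpha$ with $\alpha$ the usual weight vector. A minimal sketch in our own notation (`alpha` is assumed precomputed; this is not the authors' implementation):

```python
import numpy as np

def posterior_mean_grad(X, alpha, x_star, lengthscales, sigma_f):
    """Gradient of an SE-kernel GP posterior mean at a single point x_star,
    where alpha = (K + noise terms)^-1 y has already been computed."""
    diff = x_star - X                                        # (N, D)
    k = sigma_f**2 * np.exp(-0.5 * np.sum((diff / lengthscales) ** 2, axis=1))
    # d k(x*, x_i) / d x* = -(x* - x_i) / ell^2 * k(x*, x_i)
    dk = -(diff / lengthscales**2) * k[:, None]              # (N, D)
    return dk.T @ alpha                                      # (D,)
```

Evaluating this at every training point stacks into the N x D slope matrix used for the corrective variance term, and the same expression supplies the chain-rule terms when differentiating the marginal likelihood through the slope calculation.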
The marginal likelihood of the GP with the corrected variance is then computed, along with its
derivatives with respect to the initial hyperparameters, which include the input noise variances. This
step involves chaining the derivatives of the marginal likelihood back through the slope calculation.
Gradient descent can then be used to improve the hyperparameters. Figure 1c shows the GP posterior
for the trained hyperparameters and shows how NIGP can reduce output noise level estimates by
taking input noise into account. Figure 1d shows the NIGP fit for the trained hyperparameters.
[Figure 1: four panels plotting Target against Input; panel titles: (a) initial hyperparameters and training data define a GP fit; (b) extra variance added proportional to squared slope; (c) standard GP with NIGP-trained hyperparameters; (d) the NIGP fit including variance from input noise.]
Figure 1: Training with NIGP. (a) A standard GP posterior distribution can be computed from an
initial set of hyperparameters and a training data set, shown by the blue crosses. The gradients of the
posterior mean at each training point can then be found analytically. (b) The NIGP method increases
the posterior variance by the square of the posterior mean slope multiplied by the current setting of
the input noise variance hyperparameter. The marginal likelihood of this fit is then calculated along
with its derivatives w.r.t. initial hyperparameter settings. Gradient descent is used to train the hyperparameters. (c) This plot shows the standard GP posterior using the newly trained hyperparameters.
Comparing to plot (a) shows that the output noise hyperparameter has been greatly reduced. (d) This
plot shows the NIGP fit - plot(c) with the input noise corrective variance term, diag{?f? ?x ?Tf? }.
Plot (d) is related to plot (c) in the same way that plot (b) is related to plot (a).
To improve the fit further we can iterate this procedure: we use the slopes of the current trained
NIGP, instead of a standard GP, to calculate the effect of the input noise, i.e. replace the fit in figure
1a with the fit from figure 1d and re-train.
4 Prediction
We turn now to the task of making predictions at noisy input locations with our model. To be true to
our model we must use the same process in making predictions as we did in training. We therefore
use the trained hyperparameters and the training data to define a GP posterior mean, which we
differentiate at each test point and each training point. The calculated gradients are then used to add
in the corrective variance terms. The posterior mean slope at the test points is only used to calculate
the variance over observations, where we increase the predictive variance by the noise variances.
There is an alternative option, however. If a single test point is considered to have a Gaussian
distribution and all the training points are certain then, although the GP posterior is unknown, its
mean and variance can be calculated exactly [11]. As our model estimates the input noise variance
$\Sigma_x$ during training, we can consider a test point to be Gaussian distributed: $x_*' \sim \mathcal{N}(x_*, \Sigma_x)$.
[11] then gives the mean and variance of the posterior distribution, for a squared exponential kernel
(equation 12), to be,
$$\bar{f}_* = q^T \left(K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}\right)^{-1} y \qquad (8)$$
where,
$$q_i = \sigma_f^2 \left|\Sigma_x \Lambda^{-1} + I\right|^{-\frac{1}{2}} \exp\left(-\tfrac{1}{2}(x_i - x_*)^T (\Sigma_x + \Lambda)^{-1} (x_i - x_*)\right) \qquad (9)$$
where $\Lambda$ is a diagonal matrix of the squared lengthscale hyperparameters.
$$V[f_*] = \sigma_f^2 - \mathrm{tr}\left(\left(K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}\right)^{-1} Q\right) + \beta^T Q \beta - \bar{f}_*^2 \qquad (10)$$
with $\beta = \left(K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}}\, \Sigma_x\, \Delta_{\bar{f}}^T\}\right)^{-1} y$, and,
$$Q_{ij} = \frac{k(x_i, x_*)\, k(x_j, x_*)}{\left|2\Sigma_x \Lambda^{-1} + I\right|^{\frac{1}{2}}} \exp\left((z - x_*)^T \left(\tfrac{1}{2}\Lambda + \Sigma_x\right)^{-1} \Sigma_x \Lambda^{-1} (z - x_*)\right) \qquad (11)$$
with $z = \frac{1}{2}(x_i + x_j)$. This method is computationally slower than using equation 7 and is vulnerable to worse results if the learnt input noise variance $\Sigma_x$ is very different from the true value. However,
it gives proper consideration to the uncertainty surrounding the test point and exactly computes the
moments of the correct posterior distribution. This often leads it to outperform predictions based on
equation 7.
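The predictive mean in equations 8 and 9 reduces to a weighted sum over "smoothed" kernel evaluations $q_i$. The sketch below is our own illustration (with $\beta$ assumed precomputed as the bracketed inverse applied to $y$; not the authors' code):

```python
import numpy as np

def stochastic_mean(X, beta, x_star, Sigma_x, lengthscales, sigma_f):
    """Predictive mean for a Gaussian test input x* ~ N(x_star, Sigma_x)
    under an SE kernel, following eqs. (8)-(9)."""
    Lam = np.diag(lengthscales ** 2)
    D = len(lengthscales)
    det_term = np.linalg.det(Sigma_x @ np.linalg.inv(Lam) + np.eye(D)) ** -0.5
    diff = X - x_star                                   # (N, D)
    S_inv = np.linalg.inv(Sigma_x + Lam)
    quad = np.einsum('nd,de,ne->n', diff, S_inv, diff)  # per-point quadratic form
    q = sigma_f**2 * det_term * np.exp(-0.5 * quad)
    return q @ beta
```

With `Sigma_x = 0` this collapses to the deterministic mean $k(x_*, X)\beta$, which makes a convenient consistency check between the two prediction modes.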
5 Results
We tested our model on a variety of functions and datasets, comparing its performance to standard GP regression as well as Kersting et al.'s 'most likely heteroscedastic GP' (MLHGP) model, a
state-of-the-art heteroscedastic GP model. We used the squared exponential kernel with Automatic
Relevance Determination,
$$k(x_i, x_j) = \sigma_f^2 \exp\left(-\tfrac{1}{2}(x_i - x_j)^T \Lambda^{-1} (x_i - x_j)\right) \qquad (12)$$
where $\Lambda$ is a diagonal matrix of the squared lengthscale hyperparameters and $\sigma_f^2$ is a signal variance hyperparameter. Code to run NIGP is available on the author's website.
[Figure 2 plots: three panels (Standard GP; Kersting et al.; This paper), each showing the posterior mean and two-standard-deviation bounds over the interval [-10, 10].]
Figure 2: Posterior distribution for a near-square wave with $\sigma_y = 0.05$, $\sigma_x = 0.3$, and 60 data points.
The solid line represents the predictive mean and the dashed lines are two standard deviations either
side. Also shown are the training points and the underlying function. The left image is for standard
GP regression, the middle uses Kersting et al.'s MLHGP algorithm, the right image shows our model.
While the predictive means are similar, both our model and MLHGP pinch in the variance around the
low noise areas. Our model correctly expands the variance around all steep areas whereas MLHGP
can only do so where high noise is observed (see areas around x= -6 and x = 1).
Figure 2 shows an example comparison between standard GP regression, Kersting et al.'s MLHGP,
and our model for a simple near-square wave function. This function was chosen as it has areas
of steep gradient and near flat gradient and thus suffers from the heteroscedastic problems we are
trying to solve. The posterior means are very similar for the three models, however the variances
are quite different. The standard GP model has to take into account the large noise seen around the
steep sloped areas by assuming large noise everywhere, which leads to the much larger error bars.
Our model can recover the actual noise levels by taking the input noise into account. Both our model
and MLHGP pinch the variance in around the flat regions of the function and expand it around the
steep areas. For the example shown in figure 2 the standard GP estimated an output noise standard
deviation of 0.16 (much too large) compared to our estimate of 0.052, which is very close to the
correct value of 0.050. Our model also learnt an input noise standard deviation of 0.305, very close
to the real value of 0.300. MLHGP does not produce a single estimate of noise levels.
Predictions for 1000 noisy measurements were made using each of the models and the log probability of the test set was calculated. The standard GP model had a log probability per data point of
0.419, MLHGP 0.740, and our model 0.885, a significant improvement. Part of the reason for our
improvement over MLHGP can be seen around x = 1: our model has near-symmetric 'horns' in the variance around the corners of the square wave, whereas MLHGP only has one 'horn'. This is because in our model, the amount of noise expected is proportional to the squared derivative of the mean, which is the same for both sides of the square wave. In Kersting et al.'s model the noise
squared, which is the same for both sides of the square wave. In Kersting et al.?s model the noise
is estimated from the training points themselves. In this example the training points around x = 1
happen to have low noise and so the learnt variance is smaller. The same problem can be seen around
x = ?6 where MLHGP has much too small variance. This illustrates an important aspect of our
model: the accuracy in plotting the varying effect of noise is only dependent on the accuracy of the
mean posterior function and not on an extra, learnt noise model. This means that our model typically
requires fewer data points to achieve the same accuracy as MLHGP on input noise datasets. To test
the models further, we trained them on a suite of six functions. The functions were again chosen
to have varying gradients across the input space. The training set consisted of twenty-five points in
the interval [-10, 10] and the test set one thousand points in the same interval. Trials were run for
different levels of input noise. For each trial, ten different initialisations of the hyperparameters were
tried. In order to remove initialisation effects the best initialisations for each model were chosen at
each step. The entire experiment was run on twenty different random seeds. For our model, NIGP,
we trained both a single model for all output dimensions, as well as separate models for each of the
outputs, to see what the effect of using the cross-dimension information was.
Figure 3 shows the results for this experiment. The figure shows that NIGP performs very well on
all the functions, always outperforming the standard GP when there is input noise and nearly always
MLHGP; wherever there is a significant difference our model is favoured. Training on all the outputs
at once only gives an improvement for some of the functions, which suggests that, for the others,
the input noise levels could be estimated from the individual functions alone. The predictions using
stochastic test-points, equations 8 and 10, generally outperformed the predictions made using deterministic test-points, equation 7. The RMSEs are quite similar to each other for most of the functions
as the posterior means are very similar, although where they do differ significantly, again, it is to
favour our model. These results show our model consistently calculates a more accurate predictive
posterior variance than either a standard GP or a state-of-the-art heteroscedastic GP model.
As previously mentioned, our model can be adapted to work more effectively with time-series data,
where the outputs become subsequent inputs. In this situation the input and output noise variance
will be the same. We therefore combine these two parameters into one. We tested NIGP on a timeseries dataset and compared the two modes (with separate input and output noise hyperparameters
and with combined) and also to standard GP regression (MLHGP was not available for multiple
input dimensions). The dataset is a simulated pendulum without friction and with added noise.
There are two variables: pendulum angle and angular velocity. The choice of time interval between
observations is important: for very small time intervals, and hence small changes in the angle, the
dynamics are approximately linear, as $\sin\theta \approx \theta$. As discussed before, our model will not bring
any benefit to linear dynamics, so in order to see the difference in performance a much longer time
interval was chosen. The range of initial angular velocities was chosen to allow the pendulum to
spin multiple times at the extremes, which adds extra non-linearity. Ten different initialisations
were tried, with the one achieving the highest training set marginal likelihood chosen, and the whole
experiment was repeated fifty times with different random seeds.
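A data set of this kind can be generated with a short rollout; the sketch below uses illustrative dynamics constants and time step (only the noise standard deviations, 0.2 and 0.4, are taken from the experiment described here):

```python
import numpy as np

def simulate_pendulum(theta0, omega0, dt=0.4, steps=50, g_over_l=9.81,
                      noise_std=(0.2, 0.4), seed=0):
    """Frictionless pendulum rollout with additive Gaussian observation noise.
    The noisy state at time t serves both as the output for step t-1 and as
    the input for step t, so input and output noise levels coincide.
    Constants are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    states = [(theta0, omega0)]
    for _ in range(steps):
        theta, omega = states[-1]
        omega = omega - g_over_l * np.sin(theta) * dt   # semi-implicit Euler
        theta = theta + omega * dt
        states.append((theta, omega))
    clean = np.asarray(states)                          # (steps + 1, 2)
    return clean + rng.standard_normal(clean.shape) * np.asarray(noise_std)
```

Consecutive rows of the returned array form the (noisy input, noisy output) training pairs, which is exactly the setting in which tying the input and output noise hyperparameters together is justified.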
The plots show the difference in log probability of the test set between four versions of NIGP and a
standard GP model trained on the same data. All four versions of our model perform better than the
[Figure 3 plots: top row, negative log predictive posterior; bottom row, normalised test set RMSE; both against input noise standard deviation, for the six test functions sin(x), near-square wave, exp(-0.2x)sin(x), tan(0.15x)sin(x), 0.2x^2 tanh(cos(x)), and 0.5 log(x^2(sin(2x)+2)+1). Legend: NIGP DTP all o/p, NIGP DTP indiv. o/p, NIGP STP indiv. o/p, NIGP STP all o/p, Kersting et al., Standard GP.]
Figure 3: Comparison of models for suite of 6 test functions. The solid line is our model with
?deterministic test-point? predictions, the solid line with triangles is our model with ?stochastic testpoint? predictions. Both these models were trained on all 6 functions at once, the respective dashed
lines were trained on the functions individually. The dash-dot line is a standard GP regression model
and the dotted line is MLHGP. RMSE has been normalised by the RMS value of the function. In both
plots lower values indicate better performance. The plots show our model has lower negative log
posterior predictive than standard GP on all the functions, particularly the exponentially decaying
sine wave and the multiplication between tan and sin.
standard GP. Once again the stochastic test point version outperforms the deterministic test points.
There was a slight improvement in RMSE using our model but the differences were within two
standard deviations of each other. There is also a slight improvement using the combined noise
levels although, again, the difference is contained within the error bars.
A better comparison between the two modes is to look at the input noise variance values recovered.
The real noise standard deviations used were 0.2 and 0.4 for the angle and angular velocity respectively. The model which learnt the variances separately found standard deviations of 0.3265 and
0.8026 averaged over the trials, whereas the combined model found 0.2429 and 0.8948. This is a
significant improvement on the first dimension. Both modes struggle to recover the correct noise
level on the second dimension and this is probably why the angular velocity prediction performance
shown in figure 4 is worse than the angle prediction performance.

Figure 4: The difference between four versions of NIGP and a standard GP model on a pendulum prediction task. DTP stands for deterministic test point and STP for stochastic test point. Comb. and sep. indicate whether the model combined the input and output noise parameters or treated them separately. The error bars indicate plus/minus two standard deviations.

Training with more data significantly improved the recovered noise value although the difference between the two NIGP modes
then shrank as there was sufficient information to correctly deduce the noise levels separately.
6 Conclusion
The correct way of training on input points corrupted by Gaussian noise is to consider every input
point as a Gaussian distribution. This model is intractable, however, and so approximations must
be made. In our model, we refer the input noise to the output by passing it through a local linear
expansion. This adds a term to the likelihood which is proportional to the squared posterior mean
gradient. Not only does this lead to tractable computations but it makes intuitive sense - input
noise has a larger effect in areas where the function is changing its output rapidly. The model,
although simple in its approach, has been shown to be very effective, outperforming Kersting et
al.'s model and a standard GP model in a variety of different regression tasks. It can make use of
multiple outputs and can recover a noise variance parameter for each input dimension, which is
often useful for analysis. In our approximate model, exact inference can be performed as the model
hyperparameters can be trained simultaneously by marginal likelihood maximisation.
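The corrective term can be sketched numerically. The snippet below is a minimal illustration (not the authors' code) of the idea: the input noise variance is referred to the output by scaling it with the squared gradient of the posterior mean function, here estimated by finite differences. The test function and the noise levels are arbitrary choices for the example.

```python
import numpy as np

def nigp_output_variance(f, x, sigma_y2, sigma_x2, eps=1e-5):
    """Corrected output noise variance at input x:
    sigma_y^2 + grad(f)(x)^T diag(sigma_x^2) grad(f)(x),
    with the gradient of the (posterior mean) function f
    estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for d in range(x.size):
        step = np.zeros_like(x)
        step[d] = eps
        grad[d] = (f(x + step) - f(x - step)) / (2 * eps)
    return sigma_y2 + np.sum(grad ** 2 * np.asarray(sigma_x2))

# Example: f(x) = sin(x1) + x2, so the gradient at the origin is (1, 1)
# and the corrected variance is 0.01 + 0.04 + 0.04 = 0.09.
f = lambda x: np.sin(x[0]) + x[1]
v = nigp_output_variance(f, np.array([0.0, 0.0]), sigma_y2=0.01, sigma_x2=[0.04, 0.04])
```

Note how the correction vanishes where the function is flat and grows where the output changes rapidly, matching the intuition stated above.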
A proper handling of time-series data would constrain the specific noise levels on each training point
to be the same for when they are considered inputs and outputs. This would be computationally very
expensive however. By allowing input noise and fixing the input and output noise variances to be
identical, our model is a computationally efficient alternative. Our results showed that NIGP gives a
substantial improvement over the often-used standard GP for modelling time-series data.
It is important to state that this model has been designed to tackle a particular situation, that of
constant-variance input noise, and would not perform so well on a general heteroscedastic problem. It could not be expected to improve over a standard GP on problems where noise levels are
proportional to the function or input value for example. We do not see this limitation as too restricting however, as we maintain that constant input noise situations (including those where this is
a sufficient approximation) are reasonably common. Throughout the paper we have taken particular
care to avoid functions or systems which are linear, or approximately linear, as in these cases our
model can be reduced to standard GP regression. However, for the problems for which NIGP has
been designed, such as the various non-linear problems we have presented in this paper, our model
outperforms current methods.
This paper considers a first order Taylor expansion of the posterior mean function. We would expect
this to be a good approximation for any function providing the input noise levels are not too large
(i.e. small perturbations around the point we linearised about). In practice, we could require that
the input noise level is not larger than the input characteristic length scale. A more accurate model
could use a second order Taylor series, which would still be analytic although computationally
the algorithm would then scale with D3 rather than the current D2 . Another refinement could be
achieved by doing a Taylor series for the full posterior distribution (not just its mean, as we have
done here), again at considerably higher computational cost. These are interesting areas for future
research, which we are actively pursuing.
References
[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine
Learning. MIT Press, 2006.
[2] Paul W. Goldberg, Christopher K. I. Williams, and Christopher M. Bishop. Regression with
input-dependent noise: A Gaussian Process treatment. NIPS-98, 1998.
[3] Kristian Kersting, Christian Plagemann, Patrick Pfaff, and Wolfram Burgard. Most likely
heteroscedastic Gaussian Process regression. ICML-07, 2007.
[4] Ming Yuan and Grace Wahba. Doubly penalized likelihood estimator in heteroscedastic regression. Statistics and Probability Letters, 69:11–20, 2004.
[5] Quoc V. Le, Alex J. Smola, and Stephane Canu. Heteroscedastic Gaussian Process regression. Proceedings of ICML-05, pages 489–496, 2005.
[6] Edward Snelson and Zoubin Ghahramani. Variable noise and dimensionality reduction for sparse Gaussian processes. Proceedings of UAI-06, 2006.
[7] A.G. Wilson and Z. Ghahramani. Copula processes. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 2460–2468. 2010.
[8] Andrew Wilson and Zoubin Ghahramani. Generalised Wishart Processes. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 736–744, Corvallis, Oregon, 2011. AUAI Press.
[9] P. Dallaire, C. Besse, and B. Chaib-draa. Learning Gaussian Process models from uncertain data. 16th International Conference on Neural Information Processing, 2008.
[10] E. Solak, R. Murray-Smith, W.E. Leithead, D.J. Leith, and C.E. Rasmussen. Derivative observations in Gaussian Process models of dynamic systems. NIPS-03, pages 1033–1040, 2003.
[11] Agathe Girard, Carl Edward Rasmussen, Joaquin Quiñonero Candela, and Roderick Murray-Smith. Gaussian Process priors with uncertain inputs: application to multiple-step ahead time series forecasting. Advances in Neural Information Processing Systems 16, 2003.
Efficient Inference in Fully Connected CRFs with
Gaussian Edge Potentials
Philipp Krähenbühl
Computer Science Department
Stanford University
[email protected]
Vladlen Koltun
Computer Science Department
Stanford University
[email protected]
Abstract
Most state-of-the-art techniques for multi-class image segmentation and labeling
use conditional random fields defined over pixels or image regions. While regionlevel models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we
consider fully connected CRF models defined on the complete set of pixels in an
image. The resulting graphs have billions of edges, making traditional inference
algorithms impractical. Our main contribution is a highly efficient approximate
inference algorithm for fully connected CRF models in which the pairwise edge
potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves
segmentation and labeling accuracy.
1 Introduction
Multi-class image segmentation and labeling is one of the most challenging and actively studied
problems in computer vision. The goal is to label every pixel in the image with one of several predetermined object categories, thus concurrently performing recognition and segmentation of multiple
object classes. A common approach is to pose this problem as maximum a posteriori (MAP) inference in a conditional random field (CRF) defined over pixels or image patches [8, 12, 18, 19, 9].
The CRF potentials incorporate smoothness terms that maximize label agreement between similar
pixels, and can integrate more elaborate terms that model contextual relationships between object
classes.
Basic CRF models are composed of unary potentials on individual pixels or image patches and pairwise potentials on neighboring pixels or patches [19, 23, 7, 5]. The resulting adjacency CRF structure is limited in its ability to model long-range connections within the image and generally results
in excessive smoothing of object boundaries. In order to improve segmentation and labeling accuracy, researchers have expanded the basic CRF framework to incorporate hierarchical connectivity
and higher-order potentials defined on image regions [8, 12, 9, 13]. However, the accuracy of these
approaches is necessarily restricted by the accuracy of unsupervised image segmentation, which is
used to compute the regions on which the model operates. This limits the ability of region-based
approaches to produce accurate label assignments around complex object boundaries, although significant progress has been made [9, 13, 14].
In this paper, we explore a different model structure for accurate semantic segmentation and labeling.
We use a fully connected CRF that establishes pairwise potentials on all pairs of pixels in the image.
Fully connected CRFs have been used for semantic image labeling in the past [18, 22, 6, 17], but the
complexity of inference in fully connected models has restricted their application to sets of hundreds
of image regions or fewer. The segmentation accuracy achieved by these approaches is again limited
by the unsupervised segmentation that produces the regions. In contrast, our model connects all pairs of individual pixels in the image, enabling greatly refined segmentation and labeling. The main challenge is the size of the model, which has tens of thousands of nodes and billions of edges even on low-resolution images.

[Figure 1 panels: (a) Image, (b) Unary classifiers, (c) Robust Pⁿ CRF, (d) Fully connected CRF, MCMC inference, 36 hrs, (e) Fully connected CRF, our approach, 0.2 seconds; labels shown include sky, tree, grass, bench, road.]

Figure 1: Pixel-level classification with a fully connected CRF. (a) Input image from the MSRC-21 dataset. (b) The response of unary classifiers used by our models. (c) Classification produced by the Robust Pⁿ CRF [9]. (d) Classification produced by MCMC inference [17] in a fully connected pixel-level CRF model; the algorithm was run for 36 hours and only partially converged for the bottom image. (e) Classification produced by our inference algorithm in the fully connected model in 0.2 seconds.
Our main contribution is a highly efficient inference algorithm for fully connected CRF models in
which the pairwise edge potentials are defined by a linear combination of Gaussian kernels in an arbitrary feature space. The algorithm is based on a mean field approximation to the CRF distribution.
This approximation is iteratively optimized through a series of message passing steps, each of which
updates a single variable by aggregating information from all other variables. We show that a mean
field update of all variables in a fully connected CRF can be performed using Gaussian filtering
in feature space. This allows us to reduce the computational complexity of message passing from
quadratic to linear in the number of variables by employing efficient approximate high-dimensional
filtering [16, 2, 1]. The resulting approximate inference algorithm is sublinear in the number of
edges in the model.
Figure 1 demonstrates the benefits of the presented algorithm on two images from the MSRC-21
dataset for multi-class image segmentation and labeling. Figure 1(d) shows the results of approximate MCMC inference in fully connected CRFs on these images [17]. The MCMC procedure was
run for 36 hours and only partially converged for the bottom image. We have also experimented with
graph cut inference in the fully connected models [11], but it did not converge within 72 hours. In
contrast, a single-threaded implementation of our algorithm produces a detailed pixel-level labeling
in 0.2 seconds, as shown in Figure 1(e). A quantitative evaluation on the MSRC-21 and the PASCAL VOC 2010 datasets is provided in Section 6. To the best of our knowledge, we are the first to
demonstrate efficient inference in fully connected CRF models at the pixel level.
2 The Fully Connected CRF Model
Consider a random field X defined over a set of variables {X1 , . . . , XN }. The domain of each
variable is a set of labels L = {l1 , l2 , . . . , lk }. Consider also a random field I defined over variables
{I1 , . . . , IN }. In our setting, I ranges over possible input images of size N and X ranges over
possible pixel-level image labelings. Ij is the color vector of pixel j and Xj is the label assigned to
pixel j.
A conditional random field (I, X) is characterized by a Gibbs distribution P(X|I) = (1/Z(I)) exp(−Σ_{c∈C_G} ψ_c(X_c|I)), where G = (V, E) is a graph on X and each clique c in a set of cliques C_G in G induces a potential ψ_c [15]. The Gibbs energy of a labeling x ∈ L^N is E(x|I) = Σ_{c∈C_G} ψ_c(x_c|I). The maximum a posteriori (MAP) labeling of the random field is x* = arg max_{x∈L^N} P(x|I). For notational convenience we will omit the conditioning in the rest of the paper and use ψ_c(x_c) to denote ψ_c(x_c|I).
In the fully connected pairwise CRF model, G is the complete graph on X and CG is the set of all
unary and pairwise cliques. The corresponding Gibbs energy is

    E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j),    (1)
where i and j range from 1 to N. The unary potential ψ_u(x_i) is computed independently for each
pixel by a classifier that produces a distribution over the label assignment xi given image features.
The unary potential used in our implementation incorporates shape, texture, location, and color
descriptors and is described in Section 5. Since the output of the unary classifier for each pixel
is produced independently from the outputs of the classifiers for other pixels, the MAP labeling
produced by the unary classifiers alone is generally noisy and inconsistent, as shown in Figure 1(b).
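As a concrete illustration of the energy in Equation 1, the Gibbs energy of a labeling can be evaluated directly on a toy problem. This is a naive O(N²) sketch for intuition only; the unary table and the pairwise function below are placeholders, not the potentials used in the paper.

```python
import numpy as np

def gibbs_energy(labels, unary, pairwise):
    """E(x) = sum_i psi_u(x_i) + sum_{i<j} psi_p(x_i, x_j)  (Equation 1).
    labels:   length-N sequence of label indices
    unary:    N x L array, unary[i, l] = psi_u(x_i = l)
    pairwise: function (i, j, l_i, l_j) -> psi_p value
    Quadratic in N; for illustration only."""
    n = len(labels)
    energy = sum(unary[i, labels[i]] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            energy += pairwise(i, j, labels[i], labels[j])
    return energy

# Example: three pixels, two labels, a Potts-style pairwise cost of 0.5
# for every pair of pixels that disagree.
unary = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
potts = lambda i, j, li, lj: 0.5 if li != lj else 0.0
```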
The pairwise potentials in our model have the form
    ψ_p(x_i, x_j) = μ(x_i, x_j) Σ_{m=1}^{K} w^(m) k^(m)(f_i, f_j) = μ(x_i, x_j) k(f_i, f_j),    (2)
where each k^(m) is a Gaussian kernel k^(m)(f_i, f_j) = exp(−½ (f_i − f_j)ᵀ Λ^(m) (f_i − f_j)), the vectors f_i and f_j are feature vectors for pixels i and j in an arbitrary feature space, w^(m) are linear combination weights, and μ is a label compatibility function. Each kernel k^(m) is characterized by a symmetric, positive-definite precision matrix Λ^(m), which defines its shape.
For multi-class image segmentation and labeling we use contrast-sensitive two-kernel potentials,
defined in terms of the color vectors Ii and Ij and positions pi and pj :
    k(f_i, f_j) = w^(1) exp(−|p_i − p_j|²/(2θ_α²) − |I_i − I_j|²/(2θ_β²)) + w^(2) exp(−|p_i − p_j|²/(2θ_γ²)).    (3)

The first term is the appearance kernel and the second is the smoothness kernel.
The appearance kernel is inspired by the observation that nearby pixels with similar color are likely
to be in the same class. The degrees of nearness and similarity are controlled by parameters θ_α and θ_β. The smoothness kernel removes small isolated regions [19]. The parameters are learned from
data, as described in Section 4.
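As a sketch, the two-kernel potential of Equation 3 can be written out directly. The parameter values below are arbitrary placeholders (the paper learns or validates them on held-out data), and the function is for illustration, not the paper's implementation.

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j,
                    w1=1.0, w2=1.0,
                    theta_alpha=60.0, theta_beta=20.0, theta_gamma=3.0):
    """k(f_i, f_j) from Equation 3: an appearance kernel over pixel
    positions p and colors I, plus a position-only smoothness kernel.
    All parameter values here are placeholder assumptions."""
    dp2 = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    dI2 = np.sum((np.asarray(I_i, float) - np.asarray(I_j, float)) ** 2)
    appearance = w1 * np.exp(-dp2 / (2 * theta_alpha ** 2)
                             - dI2 / (2 * theta_beta ** 2))
    smoothness = w2 * np.exp(-dp2 / (2 * theta_gamma ** 2))
    return appearance + smoothness
```

For two identical pixels the kernel attains its maximum w1 + w2, and it decays as either the spatial distance or the color difference grows, which is exactly the "nearby and similar" intuition described above.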
A simple label compatibility function μ is given by the Potts model, μ(x_i, x_j) = [x_i ≠ x_j]. It
introduces a penalty for nearby similar pixels that are assigned different labels. While this simple
model works well in practice, it is insensitive to compatibility between labels. For example, it
penalizes a pair of nearby pixels labeled ?sky? and ?bird? to the same extent as pixels labeled ?sky?
and ?cat?. We can instead learn a general symmetric compatibility function ?(xi , xj ) that takes
interactions between labels into account, as described in Section 4.
3 Efficient Inference in Fully Connected CRFs
Our algorithm is based on a mean field approximation to the CRF distribution. This approximation yields an iterative message passing algorithm for approximate inference. Our key observation
is that message passing in the presented model can be performed using Gaussian filtering in feature space. This enables us to utilize highly efficient approximations for high-dimensional filtering,
which reduce the complexity of message passing from quadratic to linear, resulting in an approximate inference algorithm for fully connected CRFs that is linear in the number of variables N and
sublinear in the number of edges in the model.
3.1 Mean Field Approximation
Instead of computing the exact distribution P(X), the mean field approximation computes a distribution Q(X) that minimizes the KL-divergence D(Q‖P) among all distributions Q that can be expressed as a product of independent marginals, Q(X) = Π_i Q_i(X_i) [10].
Minimizing the KL-divergence, while constraining Q(X) and Qi (Xi ) to be valid distributions,
yields the following iterative update equation:
    Q_i(x_i = l) = (1/Z_i) exp{ −ψ_u(x_i) − Σ_{l′∈L} μ(l, l′) Σ_{m=1}^{K} w^(m) Σ_{j≠i} k^(m)(f_i, f_j) Q_j(l′) }.    (4)
A detailed derivation of Equation 4 is given in the supplementary material. This update equation
leads to the following inference algorithm:
Algorithm 1 Mean field in fully connected CRFs
    Initialize Q                                               ▷ Q_i(x_i) ← (1/Z_i) exp{−ψ_u(x_i)}
    while not converged do                                     ▷ See Section 6 for convergence analysis
        Q̃_i^(m)(l) ← Σ_{j≠i} k^(m)(f_i, f_j) Q_j(l) for all m    ▷ Message passing from all X_j to all X_i
        Q̂_i(x_i) ← Σ_{l∈L} μ(x_i, l) Σ_m w^(m) Q̃_i^(m)(l)        ▷ Compatibility transform
        Q_i(x_i) ← exp{−ψ_u(x_i) − Q̂_i(x_i)}                     ▷ Local update
        normalize Q_i(x_i)
    end while
Each iteration of Algorithm 1 performs a message passing step, a compatibility transform, and a
local update. Both the compatibility transform and the local update run in linear time and are highly
efficient. The computational bottleneck is message passing. For each variable, this step requires
evaluating a sum over all other variables. A naive implementation thus has quadratic complexity in
the number of variables N . Next, we show how approximate high-dimensional filtering can be used
to reduce the computational cost of message passing to linear.
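Before any acceleration, Algorithm 1 can be transcribed naively with explicit N × N kernel matrices. This O(N²) sketch is for illustration only; it assumes precomputed kernel matrices with zero diagonal and a fixed iteration count, and it is not the paper's implementation.

```python
import numpy as np

def mean_field_naive(unary, K, mu, w, iters=10):
    """Naive O(N^2) mean field for a fully connected pairwise CRF.
    unary: N x L array of unary potentials psi_u(x_i = l)
    K:     list of N x N kernel matrices k^(m)(f_i, f_j), zero diagonal
    mu:    L x L label compatibility matrix
    w:     kernel weights w^(m)
    Returns the approximate marginals Q (N x L)."""
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)        # initialize Q
    for _ in range(iters):
        # message passing: Qtilde^(m) = K^(m) @ Q, combined with weights
        msg = sum(wm * (Km @ Q) for wm, Km in zip(w, K))
        # compatibility transform and local update
        Q = np.exp(-unary - msg @ mu.T)
        Q /= Q.sum(axis=1, keepdims=True)    # normalize
    return Q
```

The `Km @ Q` products are exactly the quadratic-cost message passing step that the filtering scheme of the next subsection replaces.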
3.2 Efficient Message Passing Using High-Dimensional Filtering
From a signal processing standpoint, the message passing step can be expressed as a convolution
with a Gaussian kernel G_{Λ^(m)} in feature space:

    Q̃_i^(m)(l) = Σ_{j∈V} k^(m)(f_i, f_j) Q_j(l) − Q_i(l) = [G_{Λ^(m)} ⊗ Q(l)](f_i) − Q_i(l),    (5)

where the convolution implements the message passing step. We subtract Q_i(l) from the convolved function because the convolution sums over all variables, while message passing does not sum over Q_i.
This convolution performs a low-pass filter, essentially band-limiting Q_i^(m)(l). By the sampling
theorem, this function can be reconstructed from a set of samples whose spacing is proportional
to the standard deviation of the filter [20]. We can thus perform the convolution by downsampling
Q(l), convolving the samples with G_{Λ^(m)}, and upsampling the result at the feature points [16].
Algorithm 2 Efficient message passing: Q_i^(m)(l) = Σ_{j∈V} k^(m)(f_i, f_j) Q_j(l)
    Q↓(l) ← downsample(Q(l))                                       ▷ Downsample
    ∀i ∈ V↓: Q↓_i^(m)(l) ← Σ_{j∈V↓} k^(m)(f↓_i, f↓_j) Q↓_j(l)       ▷ Convolution on samples f↓
    Q_i^(m)(l) ← upsample(Q↓^(m)(l))                                ▷ Upsample
A common approximation to the Gaussian kernel is a truncated Gaussian, where all values beyond
two standard deviations are set to zero. Since the spacing of the samples is proportional to the standard deviation, the support of the truncated kernel contains only a constant number of sample points.
Thus the convolution can be approximately computed at each sample by aggregating values from
only a constant number of neighboring samples. This implies that approximate message passing can
be performed in O(N ) time [16].
High-dimensional filtering algorithms that follow this approach can still have computational complexity exponential in d. However, a clever filtering scheme can reduce the complexity of the convolution operation to O(N d). We use the permutohedral lattice, a highly efficient convolution data
structure that tiles the feature space with simplices arranged along d+1 axes [1]. The permutohedral
lattice exploits the separability of unit variance Gaussian kernels. Thus we need to apply a whitening
transform f̃ = U f to the feature space in order to use it. The whitening transformation is found using the Cholesky decomposition of Λ^(m) into U Uᵀ. In the transformed space, the high-dimensional
convolution can be separated into a sequence of one-dimensional convolutions along the axes of the
lattice. The resulting approximate message passing procedure is highly efficient even with a fully
sequential implementation that does not make use of parallelism or the streaming capabilities of
graphics hardware, which can provide further acceleration if desired.
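The downsample, convolve, upsample idea behind Algorithm 2 can be illustrated in one dimension on a regular grid (a deliberate simplification; the paper uses the permutohedral lattice in d dimensions). The grid spacing, truncation radius, and nearest-cell snapping below are simplifying assumptions made only for this sketch.

```python
import numpy as np

def gaussian_filter_sampled(f, values, sigma, radius=2):
    """Approximate sum_j exp(-(f_i - f_j)^2 / (2 sigma^2)) * values_j by
    snapping points to a grid with spacing sigma (downsample), convolving
    a truncated Gaussian over grid cells, and reading back (upsample).
    1-D feature space only; a sketch of the idea behind Algorithm 2."""
    f = np.asarray(f, float)
    bins = np.round(f / sigma).astype(int)
    lo = bins.min() - radius
    grid = np.zeros(bins.max() - lo + radius + 1)
    np.add.at(grid, bins - lo, values)               # downsample: accumulate
    taps = np.exp(-0.5 * np.arange(-radius, radius + 1) ** 2)
    blurred = np.convolve(grid, taps, mode="same")   # convolve on samples
    return blurred[bins - lo]                        # upsample: read back

def gaussian_filter_exact(f, values, sigma):
    """Exact O(N^2) reference computation for comparison."""
    f = np.asarray(f, float)
    K = np.exp(-(f[:, None] - f[None, :]) ** 2 / (2 * sigma ** 2))
    return K @ values
```

Because each grid cell only gathers from a constant number of neighboring cells, the sampled version runs in O(N) rather than O(N²), which is the point made above.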
4 Learning
We learn the parameters of the model by piecewise training. First, the boosted unary classifiers are
trained using the JointBoost algorithm [21], using the features described in Section 5. Next we learn
the appearance kernel parameters w^(1), θ_α, and θ_β for the Potts model. w^(1) can be found efficiently by a combination of expectation maximization and high-dimensional filtering. Unfortunately, the kernel widths θ_α and θ_β cannot be computed effectively with this approach, since their gradient involves a sum of non-Gaussian kernels, which are not amenable to the same acceleration techniques. We found it to be more efficient to use grid search on a holdout validation set for all three kernel parameters w^(1), θ_α, and θ_β.
The smoothness kernel parameters w^(2) and θ_γ do not significantly affect classification accuracy, but yield a small visual improvement. We found w^(2) = θ_γ = 1 to work well in practice.
The compatibility parameters μ(a, b) = μ(b, a) are learned using L-BFGS to maximize the log-likelihood ℓ(μ : I, T) of the model for a validation set of images I with corresponding ground truth labelings T. L-BFGS requires the computation of the gradient of ℓ, which is intractable to estimate exactly, since it requires computing the gradient of the partition function Z. Instead, we use the mean field approximation described in Section 3 to estimate the gradient of Z. This leads to a simple approximation of the gradient for each training image:
    ∂ℓ(μ : I^(n), T^(n)) / ∂μ(a, b) ≈ − Σ_i T_i^(n)(a) Σ_{j≠i} k(f_i, f_j) T_j^(n)(b) + Σ_i Q_i(a) Σ_{j≠i} k(f_i, f_j) Q_j(b),    (6)
where (I^(n), T^(n)) is a single training image with its ground truth labeling and T^(n)(a) is a binary image in which the ith pixel T_i^(n)(a) has value 1 if the ground truth label at the ith pixel of T^(n) is a and 0 otherwise. A detailed derivation of Equation 6 is given in the supplementary material.
The sums Σ_{j≠i} k(f_i, f_j) T_j(b) and Σ_{j≠i} k(f_i, f_j) Q_j(b) are both computationally expensive to evaluate directly. As in Section 3.2, we use high-dimensional filtering to compute both sums efficiently.
The runtime of the final learning algorithm is linear in the number of variables N .
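The two sums in Equation 6 are matrix products against the kernel matrix, so the per-image gradient can be sketched naively as follows. This O(N²) version is for illustration only; the paper replaces the explicit kernel matrix products with high-dimensional filtering, and the helper name here is an assumption.

```python
import numpy as np

def compatibility_gradient(T, Q, K):
    """Approximate gradient of the log-likelihood w.r.t. mu(a, b)
    (Equation 6) for one image, for all label pairs at once.
    T: N x L one-hot ground truth, Q: N x L mean field marginals,
    K: N x N kernel matrix with zero diagonal.
    Returns an L x L matrix G with G[a, b] = d ell / d mu(a, b)."""
    return -(T.T @ (K @ T)) + Q.T @ (K @ Q)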
5
Implementation
The unary potentials used in our implementation are derived from TextonBoost [19, 13]. We use
the 17-dimensional filter bank suggested by Shotton et al. [19], and follow Ladick?y et al. [13] by
adding color, histogram of oriented gradients (HOG), and pixel location features. Our evaluation
on the MSRC-21 dataset uses this extended version of TextonBoost for the unary potentials. For
the VOC 2010 dataset we include the response of bounding box object detectors [4] for each object
class as 20 additional features. This increases the performance of the unary classifiers on the VOC
2010 from 13% to 22%. We gain an additional 5% by training a logistic regression classifier on the
responses of the boosted classifier.
For efficient high-dimensional filtering, we use a publicly available implementation of the permutohedral lattice [1]. We found a downsampling rate of one standard deviation to work best for
all our experiments. Sampling-based filtering algorithms underestimate the edge strength k(fi , fj )
for very similar feature points. Proper normalization can cancel out most of this error. The permutohedral lattice allows for two types of normalizations. A global normalization by the average
kernel strength k̄ = (1/N) ∑_{i,j} k(f_i, f_j) can correct for constant error. A pixelwise normalization by
k̄_i = ∑_j k(f_i, f_j) handles regional errors as well, but slightly violates the CRF symmetry assumption ψ_p(x_i, x_j) = ψ_p(x_j, x_i). We found the pixelwise normalization to work better in practice.
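The two normalization schemes can be illustrated with a dense approximate kernel matrix (a sketch under the assumption that the approximate kernel is materialized as a matrix; the actual implementation applies these corrections inside the permutohedral-lattice filter):

```python
import numpy as np

def normalized_filter(K_approx, x):
    """Apply an approximate kernel filter with the two normalizations
    discussed above. Names are illustrative, not from the paper.

    K_approx: (N, N) approximate kernel matrix (e.g. from a sampled filter
    that underestimates k(f_i, f_j) for similar points); x: (N, L) values.
    Returns (globally normalized, pixelwise normalized) outputs.
    """
    N = K_approx.shape[0]
    out = K_approx @ x
    k_bar = K_approx.sum() / N                 # average kernel strength
    k_i = K_approx.sum(axis=1, keepdims=True)  # per-point kernel strength
    return out / k_bar, out / k_i

# A uniform kernel recovers a constant signal exactly under both schemes.
K = np.ones((4, 4))
x = np.full((4, 2), 3.0)
g, p = normalized_filter(K, x)
print(np.allclose(g, 3.0), np.allclose(p, 3.0))  # True True
```

The pixelwise variant rescales each row independently, which is why it can absorb regional underestimation at the cost of breaking the symmetry of the pairwise potential.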
6 Evaluation
We evaluate the presented algorithm on two standard benchmarks for multi-class image segmentation and labeling. The first is the MSRC-21 dataset, which consists of 591 color images of size
320 × 213 with corresponding ground truth labelings of 21 object classes [19]. The second is the
PASCAL VOC 2010 dataset, which contains 1928 color images of size approximately 500 × 400,
with a total of 20 object classes and one background class [3]. The presented approach was evaluated alongside the adjacency (grid) CRF of Shotton et al. [19] and the Robust P^n CRF of Kohli et
al. [9], using publicly available reference implementations. To ensure a fair comparison, all models
used the unary potentials described in Section 5. All experiments were conducted on an Intel i7-930
processor clocked at 2.80GHz. Eight CPU cores were used for training; all other experiments were
performed on a single core. The inference algorithm was implemented in a single CPU thread.
Convergence. We first evaluate the convergence of the mean field approximation by analyzing
the KL-divergence between Q and P . Figure 2 shows the KL-divergence between Q and P over
successive iterations of the inference algorithm. The KL-divergence was estimated up to a constant
as described in the supplementary material. Results are shown for different standard deviations θα
and θβ of the kernels. The graphs were aligned at 20 iterations for visual comparison. The number of
iterations was set to 10 in all subsequent experiments.
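The fixed-iteration inference loop analyzed above can be sketched for a Potts compatibility with a dense kernel matrix (a naive O(N²) illustration; the actual algorithm evaluates the K @ Q message-passing step with high-dimensional filtering, and all names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field(unary, K, w=1.0, iters=10):
    """Fixed-iteration mean field updates for a fully connected CRF.

    unary: (N, L) unary energies; K: (N, N) pairwise kernel with zero
    diagonal; w: Potts weight. Returns the (N, L) marginals Q.
    """
    Q = softmax(-unary)                        # initialize from the unaries
    for _ in range(iters):
        message = K @ Q                        # sum_j k(f_i, f_j) Q_j(l)
        # Potts penalty: kernel mass assigned by neighbours to *other* labels.
        energy = unary + w * (message.sum(axis=1, keepdims=True) - message)
        Q = softmax(-energy)
    return Q

unary = np.array([[0.0, 2.0], [2.0, 0.0]])
Q = mean_field(unary, np.zeros((2, 2)))
print(np.allclose(Q.sum(axis=1), 1.0))  # True: each row is a distribution
```

With a zero kernel the loop reduces to the unary softmax, which makes the pairwise term easy to verify in isolation.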
MSRC-21 dataset. We use the standard split of the dataset into 45% training, 10% validation and
45% test images [19]. The unary potentials were learned on the training set, while the parameters of
all CRF models were learned using holdout validation. The total CRF training time was 40 minutes.
The learned label compatibility function performed on par with the Potts model on this dataset.
Figure 3 provides qualitative and quantitative results on the dataset. We report the standard measures
of multi-class segmentation accuracy: "global" denotes the overall percentage of correctly classified
image pixels and "average" is the unweighted average of per-category classification accuracy [19, 9].
The presented inference algorithm on the fully connected CRF significantly outperforms the other
models, evaluated against the standard ground truth data provided with the dataset.
The ground truth labelings provided with the MSRC-21 dataset are quite imprecise. In particular,
regions around object boundaries are often left unlabeled. This makes it difficult to quantitatively
evaluate the performance of algorithms that strive for pixel-level accuracy. Following Kohli et al. [9],
we manually produced accurate segmentations and labelings for a set of images from the MSRC-21
dataset. Each image was fully annotated at the pixel level, with careful labeling around complex
boundaries. This labeling was performed by hand for 94 representative images from the MSRC-21 dataset. Labeling a single image took 30 minutes on average. A number of images from this
"accurate ground truth" set are shown in Figure 3. Figure 3 reports segmentation accuracy against
this ground truth data alongside the evaluation against the standard ground truth. The results were
obtained using 5-fold cross validation, where 45 of the 94 images were used to train the CRF parameters. The unary potentials were learned on a separate training set that did not include the 94 accurately annotated images.

Figure 2: Convergence analysis. (a) KL-divergence of the mean field approximation during successive iterations of the inference algorithm, averaged across 94 images from the MSRC-21 dataset (one curve per kernel width, θα = θβ ∈ {10, 30, 50, 70, 90}). (b) Visualization of convergence on distributions for two class labels, Q(Xi = "bird") and Q(Xi = "sky"), over an image from the dataset after 0, 1, 2, and 10 iterations.

Figure 3: Qualitative and quantitative results on the MSRC-21 dataset.

                        Runtime | Standard ground truth | Accurate ground truth
                                | Global     Average    | Global        Average
  Unary classifiers        -    |  84.0       76.6      | 83.2 ± 1.5    80.6 ± 2.3
  Grid CRF                 1s   |  84.6       77.2      | 84.8 ± 1.5    82.4 ± 1.8
  Robust P^n CRF           30s  |  84.9       77.5      | 86.5 ± 1.0    83.1 ± 1.5
  Fully connected CRF      0.2s |  86.0       78.3      | 88.2 ± 0.7    84.7 ± 0.7
We also adopt the methodology proposed by Kohli et al. [9] for evaluating segmentation accuracy
around boundaries. Specifically, we count the relative number of misclassified pixels within a narrow band ("trimap") surrounding actual object boundaries, obtained from the accurate ground truth
images. As shown in Figure 4, our algorithm outperforms previous work across all trimap widths.
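A minimal sketch of the trimap measure, assuming integer label images and a band grown by repeated 4-neighbour dilation (the paper derives its trimaps from the accurate ground truth images; names here are illustrative):

```python
import numpy as np

def trimap_error(pred, gt, width=4):
    """Fraction of misclassified pixels within `width` pixels of a
    ground truth object boundary. pred, gt: (H, W) integer label images.
    Assumes gt contains at least one boundary (otherwise the band is empty).
    """
    # Boundary: pixels whose 4-neighbour has a different ground truth label.
    b = np.zeros_like(gt, dtype=bool)
    b[:-1, :] |= gt[:-1, :] != gt[1:, :]
    b[1:, :] |= gt[1:, :] != gt[:-1, :]
    b[:, :-1] |= gt[:, :-1] != gt[:, 1:]
    b[:, 1:] |= gt[:, 1:] != gt[:, :-1]
    band = b.copy()
    for _ in range(width - 1):  # dilate the boundary band to the given width
        grown = band.copy()
        grown[:-1, :] |= band[1:, :]
        grown[1:, :] |= band[:-1, :]
        grown[:, :-1] |= band[:, 1:]
        grown[:, 1:] |= band[:, :-1]
        band = grown
    return (pred[band] != gt[band]).mean()

gt = np.zeros((8, 8), dtype=int)
gt[:, 4:] = 1  # two-label image with a vertical boundary
print(trimap_error(gt, gt, width=2))  # 0.0
```

Sweeping `width` reproduces the kind of curve plotted in Figure 4(b): errors concentrated near boundaries inflate the measure at small widths.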
PASCAL VOC 2010. Due to the lack of a publicly available ground truth labeling for the test
set in the PASCAL VOC 2010, we use the training and validation data for all our experiments. We
randomly partitioned the images into 3 groups: 40% training, 15% validation, and 45% test set. Segmentation accuracy was measured using the standard VOC measure [3]. The unary potentials were
learned on the training set and yielded an average classification accuracy of 27.6%. The parameters
for the Potts potentials in the fully connected CRF model were learned on the validation set. The
fully connected model with Potts potentials yielded an average classification accuracy of 29.1%.
The label compatibility function, learned on the validation set, further increased the classification
accuracy to 30.2%. For comparison, the grid CRF achieves 28.3%. Training time was 2.5 hours and
inference time is 0.5 seconds. Qualitative results are provided in Figure 5.

Figure 4: Segmentation accuracy around object boundaries. (a) Visualization of the "trimap" measure. (b) Percent of misclassified pixels within trimaps of different widths.

Figure 5: Qualitative results on the PASCAL VOC 2010 dataset. Average segmentation accuracy was 30.2%.
Long-range connections. We have examined the value of long-range connections in our model by
varying the spatial and color ranges θα and θβ of the appearance kernel and analyzing the resulting
classification accuracy. For this experiment, w^(1) was held constant and w^(2) was set to 0. The
results are shown in Figure 6. Accuracy steadily increases as longer-range connections are added,
peaking at a spatial standard deviation of θα = 61 pixels and a color standard deviation of θβ = 11. At this
setting, more than 50% of the pairwise potential energy in the model was assigned to edges of length
35 pixels or higher. However, long-range connections can also propagate misleading information,
as shown in Figure 7.
Figure 6: Influence of long-range connections on classification accuracy. (a) Global classification accuracy on the 94 MSRC images with accurate ground truth, as a function of kernel parameters θα and θβ. (b) Results for one image across two slices in parameter space, shown as black lines in (a).
Discussion. We have presented a highly efficient approximate inference algorithm for fully connected CRF models. Our results demonstrate that dense pixel-level connectivity leads to significantly more accurate pixel-level classification performance. Our single-threaded implementation
processes benchmark images in a fraction of a second and the algorithm can be parallelized for
further performance gains.
Acknowledgements. We thank Daphne Koller for helpful discussions. Philipp Krähenbühl was
supported in part by a Stanford Graduate Fellowship.
Figure 7: Failure cases on images from the PASCAL VOC 2010 (left) and the MSRC-21 (right). Long-range connections propagated misleading information, eroding the bird wing in the left image and corrupting the legs of the cat on the right.
References
[1] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. Computer Graphics Forum, 29(2), 2010.
[2] A. Adams, N. Gelfand, J. Dolson, and M. Levoy. Gaussian kd-trees for fast high-dimensional filtering. ACM Transactions on Graphics, 28(3), 2009.
[3] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. IJCV, 88(2), 2010.
[4] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part models. In Proc. CVPR, 2010.
[5] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In Proc. ICCV, 2009.
[6] C. Galleguillos, A. Rabinovich, and S. Belongie. Object categorization using co-occurrence, location and appearance. In Proc. CVPR, 2008.
[7] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location prior. IJCV, 80(3), 2008.
[8] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In Proc. CVPR, 2004.
[9] P. Kohli, L. Ladický, and P. H. S. Torr. Robust higher order potentials for enforcing label consistency. IJCV, 82(3), 2009.
[10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[11] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? PAMI, 26(2), 2004.
[12] S. Kumar and M. Hebert. A hierarchical field framework for unified context-based classification. In Proc. ICCV, 2005.
[13] L. Ladický, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical CRFs for object class image segmentation. In Proc. ICCV, 2009.
[14] L. Ladický, C. Russell, P. Kohli, and P. H. S. Torr. Graph cut based inference with co-occurrence statistics. In Proc. ECCV, 2010.
[15] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[16] S. Paris and F. Durand. A fast approximation of the bilateral filter using a signal processing approach. IJCV, 81(1), 2009.
[17] N. Payet and S. Todorovic. (RF)^2 - random forest random field. In Proc. NIPS, 2010.
[18] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In Proc. ICCV, 2007.
[19] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi. TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 81(1), 2009.
[20] S. W. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, 1997.
[21] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing visual features for multiclass and multiview object detection. PAMI, 29(5), 2007.
[22] T. Toyoda and O. Hasegawa. Random field model for integration of local information and global information. PAMI, 30, 2008.
[23] J. J. Verbeek and B. Triggs. Scene segmentation with CRFs learned from partially labeled images. In Proc. NIPS, 2007.
Periodic Finite State Controllers for Efficient
POMDP and DEC-POMDP Planning
Jaakko Peltonen
Aalto University, Department of Information
and Computer Science, Helsinki Institute for
Information Technology HIIT,
P.O. Box 15400, FI-00076 Aalto, Finland
[email protected]
Joni Pajarinen
Aalto University, Department of
Information and Computer Science,
P.O. Box 15400, FI-00076 Aalto, Finland
[email protected]
Abstract
Applications such as robot control and wireless communication require planning
under uncertainty. Partially observable Markov decision processes (POMDPs)
plan policies for single agents under uncertainty and their decentralized versions
(DEC-POMDPs) find a policy for multiple agents. The policy in infinite-horizon
POMDP and DEC-POMDP problems has been represented as finite state controllers (FSCs). We introduce a novel class of periodic FSCs, composed of layers
connected only to the previous and next layer. Our periodic FSC method finds
a deterministic finite-horizon policy and converts it to an initial periodic infinitehorizon policy. This policy is optimized by a new infinite-horizon algorithm to
yield deterministic periodic policies, and by a new expectation maximization algorithm to yield stochastic periodic policies. Our method yields better results than
earlier planning methods and can compute larger solutions than with regular FSCs.
1 Introduction
Many machine learning applications involve planning under uncertainty. Such planning is necessary in medical diagnosis, control of robots and other agents, and in dynamic spectrum access for
wireless communication systems. The planning task can often be represented as a reinforcement
learning problem, where an action policy controls the behavior of an agent, and the quality of the
policy is optimized to maximize a reward function. Single agent policies can be optimized with
partially observable Markov decision processes (POMDPs) [1], when the world state is uncertain.
Decentralized POMDPs (DEC-POMDPs) [2] optimize policies for multiple agents that act without
direct communication, with separate observations and beliefs of the world state, to maximize a joint
reward function. POMDP and DEC-POMDP methods use various representations for the policies,
such as value functions [3], graphs [4, 5], or finite state controllers (FSCs) [6, 7, 8, 9, 10].
We present a novel efficient method for POMDP and DEC-POMDP planning. We focus on infinitehorizon problems, where policies must operate forever. We introduce a new policy representation:
periodic finite state controllers, which can be seen as an intelligent restriction which speeds up
optimization and can yield better solutions. A periodic FSC is composed of several layers (subsets
of states), and transitions are only allowed to states in the next layer, and from the final layer to the
first. Policies proceed through layers in a periodic fashion, and policy optimization determines the
probabilities of state transitions and action choices to maximize reward. Our work has four main
contributions. Firstly, we introduce an improved optimization method for standard finite-horizon
problems with FSC policies by compression. Secondly, we give a method to transform the finitehorizon FSC into an initial infinite-horizon periodic FSC. Thirdly, we introduce compression to
the periodic FSC. Fourthly, we introduce an expectation-maximization (EM) training algorithm for
planning with periodic FSCs. We show that the resulting method performs better than earlier DEC1
POMDP methods and POMDP methods with a restricted-size policy and that use of the periodic
FSCs enables computing larger solutions than with regular FSCs. Online execution has complexity
O(const) for deterministic FSCs and O(log(FSC layer width)) for stochastic FSCs.
We discuss existing POMDP and DEC-POMDP solution methods in Section 2 and formally define
the infinite-horizon (DEC-)POMDP. In Section 3 we introduce the novel concept of periodic FSCs.
We then describe the stages of our method: improving finite-horizon solutions, transforming them
to periodic infinite-horizon solutions, and improving the periodic solutions by a novel EM algorithm
for (DEC-)POMDPs (Section 3.2). In Section 4 we show the improved performance of the new
method on several planning problems, and we conclude in Section 5.
2 Background
Partially observable Markov decision processes (POMDPs) and decentralized POMDPs (DEC-POMDPs) are model families for decision making under uncertainty. POMDPs optimize policies for
a single agent with uncertainty of the environment state while DEC-POMDPs optimize policies for
several agents with uncertainty of the environment state and each other's states. Given the actions of
the agents, the environment evolves according to a Markov model. The agents' policies are optimized
to maximize the expected reward earned for actions into the future. In infinite-horizon planning the
expected reward is typically discounted to emphasize current and near-future actions. Computationally POMDPs and DEC-POMDPs are complex: even for finite-horizon problems, finding solutions
is in the worst case PSPACE-complete for POMDPs and NEXP-complete for DEC-POMDPs [11].
For infinite-horizon DEC-POMDP problems, state of the art methods [8, 12] store the policy as a
stochastic finite state controller (FSC) for each agent which keeps the policy size bounded. The
FSC parameters can be optimized by expectation maximization (EM) [12]. An advantage of EM is
that it can be adapted to for example continuous probability distributions [7] or to take advantage of
factored problems [10]. Alternatives to EM include formulating FSC optimization as a non-linear
constraint satisfaction (NLP) problem solvable by an NLP solver [8], or iteratively improving each
FSC by linear programming with other FSCs fixed [13]. Deterministic FSCs with a fixed size could
also be found by a best-first search [14]. If a DEC-POMDP problem has a specific goal state, then
a goal-directed [15] approach can achieve good results. The NLP and EM methods have yielded
the best results for the infinite-horizon DEC-POMDP problems. In a recent variant called mealy
NLP [16], the NLP based approach to DEC-POMDPs is adapted to FSC policies represented by
Mealy machines instead of traditional Moore machine representations. In POMDPs, Mealy machine
based controllers can achieve equal or better solutions than Moore controllers of the same size.
This paper recognizes the need to improve general POMDP and DEC-POMDP solutions. We introduce an approach where FSCs have a periodic layer structure, which turns out to yield good results.
2.1 Infinite-horizon DEC-POMDP: definition
The tuple ⟨{αi}, S, {Ai}, P, {Ωi}, O, R, b0, γ⟩ defines an infinite-horizon DEC-POMDP problem
for N agents αi, where S is the set of environment states and Ai and Ωi are the sets of possible
actions and observations for agent αi. A POMDP is the special case when there is only one agent.
P(s′|s, ~a) is the probability to move from state s to s′, given the actions of all agents (jointly denoted
~a = ⟨a1, . . . , aN⟩). The observation function O(~o|s′, ~a) is the probability that the agents observe
~o = ⟨o1, . . . , oN⟩, where oi is the observation of agent i, when actions ~a were taken and the environment transitioned to state s′. The initial state distribution is b0(s). R(s, ~a) is the real-valued reward
for executing actions ~a in state s. For brevity, we denote transition probabilities given the actions
by P_{s′s~a}, observation probabilities by P_{~o s′ ~a}, reward functions by R_{s~a}, and the set of all agents other
than i by −i. At each time step, agents perform actions, the environment state changes, and agents
receive observations. The goal is to find a joint policy π for the agents that maximizes the expected
discounted infinite-horizon reward E[∑_{t=0}^∞ γ^t R_{s(t)~a(t)} | π], where γ is the discount factor, s(t)
and ~a(t) are the state and action at time t, and E[·|π] denotes expected value under policy π. Here,
the policy is stored as a set of stochastic finite state controllers (FSCs), one for each agent. The
FSC of agent i is defined by the tuple ⟨Qi, ν_{qi}, π_{ai qi}, λ_{qi′ qi oi}⟩, where Qi is the set of FSC nodes qi,
ν_{qi} is the initial distribution P(qi) over nodes, π_{ai qi} is the probability P(ai|qi) to perform action
ai in node qi, and λ_{qi′ qi oi} is the probability P(qi′|qi, oi) to transition from node qi to node qi′ when
observing oi. The current FSC nodes of all agents are denoted ~q = ⟨q1, . . . , qN⟩. The policies are
optimized by optimizing the parameters ν_{qi}, π_{ai qi}, and λ_{qi′ qi oi}. Figure 1 (left) illustrates the setup.

Figure 1: Left: influence diagram for a DEC-POMDP with finite state controllers ~q, states s, joint
observations ~o, joint actions ~a and reward r (given by a reward function R(s, ~a)). A dotted line separates two time steps. Right: an example of the new periodic finite state controller, with three layers
and three nodes in each layer, and possible transitions shown as arrows. The controller controls one
of the agents. Which layer is active depends only on the current time; which node is active, and
which action is chosen, depend on transition probabilities and action probabilities of the controller.
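The generative process just described (sample an action from the FSC node, collect the reward, transition the state, observe, transition the FSC node) can be sketched as a Monte Carlo rollout, here for the single-agent POMDP special case. All array names are illustrative conventions, not from the paper:

```python
import numpy as np

def fsc_rollout_value(b0, P, O, R, nu, pi, lam, gamma=0.9,
                      horizon=60, episodes=100, seed=0):
    """Monte Carlo estimate of E[sum_t gamma^t R(s(t), a(t))] for a single
    agent whose policy is a stochastic finite state controller.

    b0[s]: initial belief; P[s, a, s']: state transitions; O[s', a, o]:
    observation probabilities; R[s, a]: rewards; nu[q]: initial FSC node
    distribution; pi[q, a]: action probabilities; lam[q, o, q']: node
    transition probabilities.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape
    total = 0.0
    for _ in range(episodes):
        s = rng.choice(S, p=b0)
        q = rng.choice(nu.shape[0], p=nu)
        for t in range(horizon):
            a = rng.choice(A, p=pi[q])           # sample action from the FSC
            total += gamma ** t * R[s, a]
            s = rng.choice(S, p=P[s, a])         # environment transition
            o = rng.choice(O.shape[2], p=O[s, a])
            q = rng.choice(lam.shape[2], p=lam[q, o])  # FSC node transition
    return total / episodes

# Degenerate sanity check: one state/action/node, reward 1 at every step.
b0, P, O, R = np.ones(1), np.ones((1, 1, 1)), np.ones((1, 1, 1)), np.ones((1, 1))
nu, pi, lam = np.ones(1), np.ones((1, 1)), np.ones((1, 1, 1))
v = fsc_rollout_value(b0, P, O, R, nu, pi, lam, episodes=3)
print(abs(v - (1 - 0.9**60) / 0.1) < 1e-9)  # True: geometric series
```

The DEC-POMDP case is the same loop with one FSC per agent and joint action/observation indices; the truncated horizon only approximates the infinite-horizon value up to a γ^horizon term.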
3 Periodic finite state controllers
State-of-the-art algorithms [6, 13, 8, 12, 16] for optimizing POMDP/DEC-POMDP policies with
restricted-size FSCs find a local optimum. A well-chosen FSC initialization could yield better solutions, but initializing (compact) FSCs is not straightforward: one reason is that dynamic programming is difficult to apply on generic FSCs. In [17] FSCs for POMDPs are built using dynamic
programming to add new nodes, but this yields large FSCs and cannot be applied on DEC-POMDPs
as it needs a piecewise linear convex value function. Also, general FSCs are irreducible, so a probability distribution over FSC nodes is not sparse over time even if a FSC starts from a single node.
This makes computations with large FSCs difficult and FSC based methods are limited by FSC size.
We introduce periodic FSCs, which allow the use of much larger controllers with a small complexity
increase, efficient FSC initialization, and new dynamic programming algorithms for FSCs.
A periodic FSC is composed of M layers of controller nodes. Nodes in each layer are connected
only to nodes in the next layer: the first layer is connected to the second, the second layer to the third
and so on, and the last layer is connected to the first. The width of a periodic FSC is the number of
controller nodes in a layer. Without loss of generality we assume all layers have the same number of
nodes. A single-layer periodic FSC equals an ordinary FSC. A periodic FSC has different action and
transition probabilities for each layer: π^{(m)}_{ai qi} is the layer m probability to take action ai when in node
qi, and λ^{(m)}_{qi′ qi oi} is the layer m probability to move from node qi to qi′ when observing oi. Each layer
connects only to the next one, so the policy cycles periodically through each layer: for t ≥ M we
have π^{(t)}_{ai qi} = π^{(t mod M)}_{ai qi} and λ^{(t)}_{qi′ qi oi} = λ^{(t mod M)}_{qi′ qi oi}, where "mod" denotes remainder. Figure 1 (right)
shows an example periodic FSC.
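The periodic indexing can be sketched as a thin wrapper that selects the layer t mod M (a minimal illustration with hypothetical parameter containers, not the paper's implementation):

```python
import numpy as np

class PeriodicFSC:
    """Minimal sketch of a periodic FSC with M layers.

    action_probs[m][q, a] and trans_probs[m][q, o, q'] hold the layer m
    parameters; at time t the layer t mod M is used, so the policy cycles
    through the layers periodically.
    """
    def __init__(self, action_probs, trans_probs):
        assert len(action_probs) == len(trans_probs)
        self.action_probs = action_probs
        self.trans_probs = trans_probs
        self.M = len(action_probs)

    def action_dist(self, t, q):
        return self.action_probs[t % self.M][q]

    def next_node_dist(self, t, q, o):
        return self.trans_probs[t % self.M][q, o]

# Two layers, two nodes, one observation: parameters repeat with period 2.
a0 = np.array([[1.0, 0.0], [0.0, 1.0]])
a1 = np.array([[0.5, 0.5], [0.5, 0.5]])
t0 = np.zeros((2, 1, 2)); t0[:, 0, :] = [[0.0, 1.0], [0.0, 1.0]]
t1 = np.zeros((2, 1, 2)); t1[:, 0, :] = [[1.0, 0.0], [1.0, 0.0]]
fsc = PeriodicFSC([a0, a1], [t0, t1])
print(np.allclose(fsc.action_dist(0, 0), fsc.action_dist(2, 0)))  # True
```

Online execution is cheap because selecting the active layer is a single modulo operation; only the node within the layer has to be tracked.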
We now introduce our method for solving (DEC-)POMDPs with periodic FSC policies. We show
that the periodic FSC structure allows efficient computation of deterministic controllers, show how
to optimize periodic stochastic FSCs, and show how a periodic deterministic controller can be used
as initialization to a stochastic controller. The algorithms are discussed in the context of DECPOMDPs, but can be directly applied to POMDPs.
3.1 Deterministic periodic finite state controllers
In a deterministic FSC, actions and node transitions are deterministic functions of the current node
and observation. To optimize deterministic periodic FSCs we first compute a non-periodic finitehorizon policy. The finite-horizon policy is transformed into a periodic infinite-horizon policy by
connecting the last layer to the first layer and the resulting deterministic policy can then be improved with a new algorithm (see Section 3.1.2). A periodic deterministic policy can also be used as
initialization for a stochastic FSC optimizer based on expectation maximization (see Section 3.2).
3.1.1 Deterministic finite-horizon controllers
We briefly discuss existing methods for deterministic finite-horizon controllers and introduce an
improved finite-horizon method, which we use as the initial solution for infinite-horizon controllers.
State-of-the-art point based finite-horizon DEC-POMDP methods [4, 5] optimize a policy graph,
with restricted width, for each agent. They compute a policy for a single belief, instead of all possible
beliefs. Beliefs over world states are sampled centrally using various action heuristics. Policy graphs
are built by dynamic programming from horizon T to the first time step. At each time step a policy
is computed for each policy graph node, by assuming that the nodes all agents are in are associated
with the same belief. In a POMDP, computing the deterministic policy for a policy graph node means
finding the best action, and the best connection (best next node) for each observation; this can be
done with a direct search. In a DEC-POMDP this approach would go through all combinations of
actions, observations and next nodes of all agents: the number of combinations grows exponentially
with the number of agents, so direct search works only for simple problems. A more efficient way
is to go through all action combinations, for each action combination sample random policies for all
agents, and then improve the policy of each agent in turn while holding the other agents' policies
fixed. This is not guaranteed to find the best policy for a belief, but has yielded good results in the
Point-Based Policy Generation (PBPG) algorithm [5].
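In the single-agent (POMDP) special case, the direct search described above is small enough to state concretely. The numpy sketch below is an assumption-laden illustration (array names and model layout are hypothetical): for one policy-graph node and a sampled belief $b(s)$ it picks the best action and, per observation, the best next-layer node.

```python
import numpy as np

def optimize_node(b, R, P, O, V_next, gamma=0.9):
    """Direct search for one policy-graph node given a sampled belief b.

    b: (S,) belief over states        R: (S, A) rewards
    P: (A, S, S') transition probs    O: (A, S', O) observation probs
    V_next: (Q, S') values of the next-layer nodes
    Returns (best_action, connections, value); connections[o] is the
    next-layer node chosen for observation o.
    """
    best = (None, None, -np.inf)
    for a in range(P.shape[0]):
        # joint weight of (s', o) pairs under belief b and action a
        w = (b @ P[a])[:, None] * O[a]            # shape (S', O)
        node_vals = V_next @ w                    # (Q, O): value of node q after obs o
        conn = node_vals.argmax(axis=0)           # best next node per observation
        val = b @ R[:, a] + gamma * node_vals.max(axis=0).sum()
        if val > best[2]:
            best = (a, conn, val)
    return best
```

In the DEC-POMDP case this same search is run for one agent at a time, holding the other agents' policies fixed, which avoids the exponential sweep over joint policies.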
We introduce a new algorithm which improves on [5]. PBPG used linear programming to find
policies for each agent and action-combination, but with a fixed joint action and fixed policies of
other agents we can use fast and simple direct search as follows. Initialize the value function V (s, ~q)
to zero. Construct an initial policy graph for each agent, starting from horizon t = T : (1) Project
the initial belief along a random trajectory to horizon t to yield a sampled belief b(s) over world
states. (2) Add, to the graph of each agent, a node to layer t. Find the best connections to the next
layer as follows. Sample random connections for each agent, then for each agent in turn optimize its
connection with connections of other agents fixed: for each action-combination ~a and observation
connect to the next-layer node that maximizes value, computed using b(s) and the next layer value
function; repeat this until convergence, using random restarts to escape local minima. The best
connections and action combination ~a become the policy for the current policy graph node. (3) Run
(1)-(2) until the graph layer has enough nodes. (4) Decrease t and run (1)-(3), until t = 0.
We use the above-described algorithm for initialization, and then use a new policy improvement
approach shown in Algorithm 1 that improves the policy value monotonically: (1) Here we do not
use a random trajectory for belief projection; instead we project the belief $b_t(s, \vec q\,)$ over world states $s$ and controller nodes $\vec q$ (agents are initially assumed to start from the first controller node) from time step $t = 0$ to horizon $T$, through the current policy graph; this yields distributions for the FSC nodes that match the current policy. (2) We start from the last layer and proceed towards the first. At each layer, we optimize each agent separately: for each graph node $q_i$ of agent $i$, for each action $a_i$ of the agent, and for each observation $o_i$ we optimize the (deterministic) connection to the next
layer. (3) If the optimized policy at the node (action and connections) is identical to the policy $\pi$ of another node in the layer, we sample a new belief over world states, and re-optimize the node for the new belief; if no new policy is found even after trying several sampled beliefs, we try several uniformly random beliefs for finding policies. We also redirect any connections from the previous policy graph layer to the current node to go instead to the node having policy $\pi$; this "compresses" the policy graph without changing its value (in POMDPs the redirection step is not necessary, it will happen naturally when the previous layer is re-optimized). The computational complexity of Algorithm 1 is $O(2M|Q|^{2N}|A|^N|O|^N|S|^2 + MN|Q|^N|O|^N|A|^N|S|^2 + CN|Q|^2|A|^N|O||S|)$.
Our finite-horizon method gets rid of the simplifying assumption that all FSCs are in the same node,
for a certain belief, made in [4, 5]. We only assume that for initialization steps, but not in actual
optimization. Our optimization monotonically improves the value of a fixed size policy graph and
converges to a local optimum. Here we applied the procedure to finite-horizon DEC-POMDPs; it
is adapted for improving deterministic infinite-horizon FSCs in Section 3.1.2. We also have two
simple improvements: (1) a speedup: [5] used linear programming to find policies for each agent
and action-combination in turn, but simple direct search is faster, and we use that; (2) improved
duplicate handling: [5] tried sampled beliefs to avoid duplicate nodes, we also try uniformly random
beliefs, and for DEC-POMDPs we redirect previous-layer connections to duplicate nodes. Unlike the recursion idea in [4] our projection approach is guaranteed to improve value at each graph node and find a local optimum.

Algorithm 1: Monotonic policy graph improvement algorithm

 1: Initialize $V_{T+1}(s, \vec q\,) = 0$
 2: Using the current policy, project $b_t(s, \vec q\,)$ for $1 \le t \le T$
 3: for time step $t = T$ to $0$ do
 4:   foreach agent $i$ do
 5:     foreach node $q$ of agent $i$ do
 6:       foreach $a_i$ do
 7:         $h^{\vec a, \vec o}_{\vec q\,'} = \sum_{s, s', \vec q, \vec a} P(\vec o, s' \mid s, \vec a)\, b_t(s, \vec q\,) \prod_{j \ne i} P_t(a_j \mid q_j) P_t(q_j' \mid q_j, o_j)\, V_{t+1}(s', \vec q\,')$
 8:         $\forall o_i:\ P_t^{a_i}(q_i' \mid q_i = q, o_i) = \operatorname{argmax}_{P_t^{a_i}(q' \mid q_i = q, o_i)} \sum_{\vec q\,', \{o_j\}_{j \ne i}} P_t^{a_i}(q_i' \mid q_i = q, o_i)\, h^{\vec a, \vec o}_{\vec q\,'}$
 9:         $a_i^* = \operatorname{argmax}_{a_i} \sum_{s, s', \vec q, \vec a, \vec o, \vec q\,'} \big[\, b_t(s, \vec q\,) R(s, \vec a) \prod_{j \ne i} P_t(a_j \mid q_j) + \gamma\, P_t^{a_i}(q_i' \mid q_i = q, o_i)\, h^{\vec a, \vec o}_{\vec q\,'} \big]$
10:       $P_t(a_i \ne a_i^* \mid q_i) = 0$, $P_t(a_i = a_i^* \mid q_i) = 1$, $P_t(q_i' \mid q_i, o_i) = P_t^{a_i^*}(q_i' \mid q_i, o_i)$
11:       if any node $p$ already has the same policy as $q$ then
12:         for each $q_i$ for which $P_{t-1}(q_i' = q \mid q_i, o_j) = 1$, redirect the link to $q_i' = p$
13:         sample a belief $b(s, q_j = q\ \forall j)$ and use it to compute a new policy by steps 7-13
14:   $V_t(s, \vec q\,) = R(s, \vec a) \prod_i P_t(a_i \mid q_i) + \gamma \prod_i P_t(q_i' \mid q_i, o_i)\, P(s', \vec o \mid s, \vec a)\, V_{t+1}(s', \vec q\,')$
3.1.2 Deterministic infinite-horizon controllers
To initialize an infinite-horizon problem, we transform a deterministic finite-horizon policy graph
(computed as in Section 3.1.1) into an infinite-horizon periodic controller by connecting the last
layer to the first. Assuming controllers start from policy graph node 1, we compute policies for the
other nodes in the first layer with beliefs sampled for time step M + 1, where M is the length of the
controller period. It remains to compute the (deterministic) connections from the last layer to the
first: approximately optimal connections are found using the beliefs at the last layer and the value
function projected from the last layer through the graph to the first layer. This approach can yield
efficient controllers on its own, but may not be suitable for problems with a long effective horizon.
To optimize controllers further, we give two changes to Algorithm 1 that enable optimization of
infinite-horizon policies: (1) To compute beliefs $\bar b_u(s, \vec q\,)$ over time steps $u$ by projecting the initial belief, first determine an effective projection horizon $T_{proj}$. Compute a QMDP policy [18] (an upper bound to the optimal DEC-POMDP policy) by dynamic programming. As the projection horizon, use the number of dynamic programming steps needed to gather enough value in the corresponding MDP. Compute the belief $b_t(s, \vec q\,)$ for each FSC layer $t$ (needed on line 2 of Algorithm 1) as a discounted sum of projected beliefs: $b_t(s, \vec q\,) = \frac{1}{C} \sum_{u \in \{t, t+M, t+2M, \dots;\ u \le T_{proj}\}} \gamma^u\, \bar b_u(s, \vec q\,)$. (2) Compute the value function $V_t(s, \vec q\,)$ for a policy graph layer by backing up (using line 14 of Algorithm 1) $M - 1$ steps from the previous periodic FSC layer to the current FSC layer, one layer at a time. The complexity of one iteration of the infinite-horizon approach is $O(2M|Q|^{2N}|A|^N|O|^N|S|^2 + M(M-1)N|Q|^N|O|^N|A|^N|S|^2 + MCN|Q|^2|A|^N|O||S|)$. There is no convergence guarantee due to the approximations, but the approximation error decreases exponentially with the period $M$.
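Change (1) can be illustrated with a single joint Markov chain over (node, state) pairs. In this sketch, `P_joint` is an assumed row-stochastic transition matrix induced by the current policy (a simplification: in general it is time-dependent), and the per-layer belief is the normalized discounted sum over time steps congruent to $t$ modulo $M$:

```python
import numpy as np

def layer_beliefs(b0, P_joint, M, gamma, T_proj):
    """Discount-weighted per-layer beliefs b_t, t = 0..M-1.

    b0: initial distribution over joint (node, state) indices.
    P_joint: row-stochastic joint transition matrix under the current policy.
    """
    beliefs = [b0]
    for _ in range(T_proj):
        beliefs.append(beliefs[-1] @ P_joint)      # project one step forward
    layers = []
    for t in range(M):
        # sum over u in {t, t+M, t+2M, ...} with u <= T_proj
        acc = sum(gamma**u * beliefs[u] for u in range(t, T_proj + 1, M))
        layers.append(acc / acc.sum())             # normalize (the 1/C factor)
    return layers
```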
3.2 Expectation maximization for stochastic infinite-horizon controllers
A stochastic FSC provides a solution of equal or larger value [6] compared to a deterministic FSC
with the same number of controller nodes. Many algorithms that optimize stochastic FSCs could be
adapted to use periodic FSCs; in this paper we adapt the expectation-maximization (EM) approach
[7, 12] to periodic FSCs. The adapted version retains the theoretical properties of regular EM, such
as monotonic convergence to a local optimum.
In the EM approach [7, 12] the optimization of policies is written as an inference problem: rewards
are scaled into probabilities and the policy, represented as a stochastic FSC, is optimized by EM
5
iteration to maximize the probability of getting rewards. We now introduce an EM algorithm for
(DEC-)POMDPs with periodic stochastic FSCs. We build on the EM method for DEC-POMDPs
with standard (non-periodic) FSCs by Kumar and Zilberstein [12]; see [7, 12] for more details of
non-periodic EM. First, the reward function is scaled into a probability $\hat R(r{=}1 \mid s, \vec a) = (R(s, \vec a) - R_{min})/(R_{max} - R_{min})$, where $R_{min}$ and $R_{max}$ are the minimum and maximum rewards possible, and $\hat R(r{=}1 \mid s, \vec a)$ is the conditional probability for the binary reward $r$ to be 1. The FSC parameters $\theta$ are optimized by maximizing the reward likelihood $\sum_{T=0}^{\infty} P(T)\, P(r{=}1 \mid T, \theta)$ with respect to $\theta$, where the horizon is infinite and $P(T) = (1 - \gamma)\gamma^T$. This is equivalent to maximizing expected
discounted reward in the DEC-POMDP. The EM approach improves the policy, i.e. the stochastic
periodic finite state controllers, in each iteration. We next describe the E-step and M-step formulas.
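The reward rescaling and the geometric horizon prior are one-liners; a sketch:

```python
import numpy as np

def scale_rewards(R):
    """Map rewards into [0, 1] so they can act as probabilities P(r=1|s,a)."""
    Rmin, Rmax = R.min(), R.max()
    return (R - Rmin) / (Rmax - Rmin)

def horizon_prior(T, gamma):
    """P(T) = (1 - gamma) * gamma**T, a geometric prior over horizons."""
    return (1.0 - gamma) * gamma**T
```

The prior sums to one over $T = 0, 1, 2, \dots$, which is what makes the scaled likelihood equivalent to expected discounted reward up to an affine transformation.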
In the E-step, alpha messages $\hat\alpha^{(m)}(\vec q, s)$ and beta messages $\hat\beta^{(m)}(\vec q, s)$ are computed for each layer of the periodic FSC. Intuitively, $\hat\alpha(\vec q, s)$ corresponds to the discount weighted average probability that the world is in state $s$ and the FSCs are in nodes $\vec q$, when following the policy defined by the current FSCs, and $\hat\beta(\vec q, s)$ is intuitively the expected discounted total scaled reward, when starting from state $s$ and FSC nodes $\vec q$. The alpha messages are computed by projecting an initial nodes-and-state distribution forward, while beta messages are computed by projecting reward probabilities backward. We compute separate $\hat\alpha^{(m)}(\vec q, s)$ and $\hat\beta^{(m)}(\vec q, s)$ for each layer $m$. We use a projection horizon $T = M T_M - 1$, where $M T_M$ is divisible by the number of layers $M$. This means that when we have accumulated enough probability mass in the E-step we still project a few steps in order to reach a valid $T$. For a periodic FSC the forward projection of the joint distribution over world and FSC states from time step $t$ to time step $t+1$ is $P_t(\vec q\,', s' \mid \vec q, s) = \sum_{\vec o, \vec a} P_{s's\vec a} P_{\vec o s'\vec a} \prod_i [\pi^{(t)}_{a_i q_i} \pi^{(t)}_{q_i' q_i o_i}]$. Each $\hat\alpha^{(m)}(\vec q, s)$ can be computed by projecting a single trajectory forward starting from the initial belief and then adding only messages belonging to layer $m$ to each $\hat\alpha^{(m)}(\vec q, s)$. In contrast, each $\hat\beta^{(m)}(\vec q, s)$ has to be projected separately backward, because we don't have a "starting point" similar to the alpha messages. Denoting such projections by $\beta^{(m)}_0(\vec q, s) = \sum_{\vec a} \hat R_{s\vec a} \prod_i \pi^{(m)}_{a_i q_i}$ and $\beta^{(m)}_t(\vec q, s) = \sum_{s', \vec q\,'} \beta^{(m)}_{t-1}(\vec q\,', s')\, P_t(\vec q\,', s' \mid \vec q, s)$, the equations for the messages become

$$\hat\alpha^{(m)}(\vec q, s) = \sum_{t_M = 0}^{T_M - 1} \gamma^{(m + t_M M)} (1 - \gamma)\, \alpha^{(m + t_M M)}(\vec q, s) \quad\text{and}\quad \hat\beta^{(m)}(\vec q, s) = \sum_{t=0}^{T} \gamma^t (1 - \gamma)\, \beta^{(m)}_t(\vec q, s). \qquad (1)$$
This means that the complexity of the E-step for periodic FSCs is M times the complexity of the
E-step for usual FSCs with a total number of nodes equal to the width of the periodic FSC. The
complexity increases linearly with the number of layers.
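For intuition, the forward projection above is an ordinary Markov transition on the joint (node, state) space. A single-agent sketch (array names are assumptions) assembles $P_t(q', s' \mid q, s)$ from the layer-$t$ FSC tables and the model:

```python
import numpy as np

def joint_transition(pi_a, pi_q, P, O):
    """P_t(q', s' | q, s) = sum_{a,o} P(s'|s,a) P(o|s',a) pi_a(a|q) pi_q(q'|q,o).

    pi_a: (Q, A) action probs        pi_q: (Q, O, Q') node-transition probs
    P: (A, S, S') state transitions  O: (A, S', O) observation probs
    Returns T with T[q, s, q', s'], a row-stochastic joint Markov kernel.
    """
    # indices: q node, a action, s state, t next state, o observation, n next node
    return np.einsum('qa,ast,ato,qon->qsnt', pi_a, P, O, pi_q)
```

Alpha messages are then repeated applications of this kernel to the initial joint distribution; beta messages apply its transpose to the scaled rewards.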
In the M-step we can update the parameters of each layer separately using the alpha and beta messages for that layer, as follows. EM maximizes the expected complete log-likelihood $Q(\theta, \theta^*) = \sum_T \sum_L P(r{=}1, L, T \mid \theta) \log P(r{=}1, L, T \mid \theta^*)$, where $L$ denotes all latent variables: actions, observations, world states, and FSC states, $\theta$ denotes the previous parameters, and $\theta^*$ denotes the new parameters. For periodic FSCs, $P(r{=}1, L, T \mid \theta)$ is

$$P(r{=}1, L, T \mid \theta) = P(T)\, [\hat R_{s\vec a}]_{t=T}\, \Big[ \prod_{t=1}^{T} P_{s's\vec a} P_{\vec o s'\vec a}\, \pi^{(t)}_{\vec q\,' \vec q \vec o_t}\, \pi^{(t)}_{\vec a \vec q} \Big]\, \pi^{(0)}_{\vec a \vec q}\, b_0(s) \qquad (2)$$

where we denoted $\pi^{(t)}_{\vec a \vec q} = \prod_i \pi^{(t)}_{a_i q_i}$ for $t = 1, \dots, T$, $\pi^{(0)}_{\vec a \vec q} = \prod_i \pi^{(0)}_{a_i q_i} \nu_{q_i}$, and $\pi^{(t)}_{\vec q\,' \vec q \vec o_t} = \prod_i \pi^{(t-1)}_{q_i' q_i o_i}$.
The log in the expected complete log-likelihood $Q(\theta, \theta^*)$ transforms the product of probabilities into a sum; we can divide the sums into smaller sums, where each sum contains only parameters from the same periodic FSC layer. Denoting $f_{s's\vec q\,'\vec o \vec a m} = P_{s's\vec a} P_{\vec o s'\vec a}\, \hat\beta^{(m+1)}(\vec q\,', s')$, the M-step periodic FSC parameter update rules can then be written as:

$$\nu^*_{q_i} = \frac{\nu_{q_i}}{C_i} \sum_{s, \vec q_{\ne i}} \hat\beta^{(0)}(\vec q, s)\, \nu_{\vec q_{\ne i}}\, b_0(s) \qquad (3)$$

$$\pi^{*(m)}_{a_i q_i} = \frac{\pi^{(m)}_{a_i q_i}}{C_{q_i}} \sum_{s, s', \vec q_{\ne i}, \vec q\,', \vec o, \vec a_{\ne i}} \hat\alpha^{(m)}(\vec q, s)\, \pi^{(m)}_{\vec a_{\ne i} \vec q_{\ne i}} \Big\{ \hat R_{s\vec a} + \frac{\gamma}{1-\gamma}\, \pi^{(m)}_{\vec q\,'_{\ne i} \vec q_{\ne i} \vec o_{\ne i}}\, \pi^{(m)}_{q_i' q_i o_i}\, f_{s's\vec q\,'\vec o \vec a m} \Big\} \qquad (4)$$

$$\pi^{*(m)}_{q_i' q_i o_i} = \frac{\pi^{(m)}_{q_i' q_i o_i}}{C_{q_i o_i}} \sum_{s, s', \vec q_{\ne i}, \vec q\,'_{\ne i}, \vec o_{\ne i}, \vec a} \hat\alpha^{(m)}(\vec q, s)\, \pi^{(m)}_{a_i q_i}\, \pi^{(m)}_{\vec a_{\ne i} \vec q_{\ne i}}\, \pi^{(m)}_{\vec q\,'_{\ne i} \vec q_{\ne i} \vec o_{\ne i}}\, f_{s's\vec q\,'\vec o \vec a m}. \qquad (5)$$
Note about initialization. Our initialization procedure (Sections 3.1.1 and 3.1.2) yields deterministic periodic controllers as initializations; a deterministic finite state controller is a stable point of
the EM algorithm, since for such a controller the M-step of the EM approach does not change the
probabilities. To allow EM to escape the stable point and find even better optima, we add noise to
the controllers in order to produce stochastic controllers that can be improved by EM.
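The noise step can be as simple as perturbing each one-hot row and renormalizing; a hypothetical sketch:

```python
import numpy as np

def add_noise(dist, eps=0.05, rng=None):
    """Perturb rows of a conditional probability table and renormalize,
    turning a deterministic (one-hot) controller into a stochastic one
    that EM can continue to improve."""
    rng = rng or np.random.default_rng(0)
    noisy = dist + eps * rng.random(dist.shape)
    return noisy / noisy.sum(axis=-1, keepdims=True)
```

For small `eps` the perturbed controller still prefers the same actions and transitions, so the EM iterations start near the deterministic solution but are no longer stuck at it.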
4 Experiments
Experiments were run for standard POMDP and DEC-POMDP benchmark problems [8, 15, 16, 10]
with a time limit of two hours. For both types of benchmarks we ran the proposed infinite-horizon
method for deterministic controllers (denoted "Peri") with nine improvement rounds as described in Section 3.1.2. For DEC-POMDP benchmarks we also ran the proposed periodic expectation maximization approach in Section 3.2 (denoted "PeriEM"), initialized by the finite-horizon approach in Section 3.1.1 with nine improvement rounds and the infinite-horizon transformation in Section 3.1.2, paragraph 1. For "PeriEM" a period of 10 was used. For "Peri" a period of 30 was used for
problems with discount factor 0.9, 60 for discount factor 0.95, and 100 for larger discount factors.
The main comparison methods EM [12] and Mealy NLP [16] (with removal of dominated actions
and unreachable state-observation pairs) were implemented using Matlab and the NEOS server was
utilized for solving the Mealy NLP non-linear programs. We used the best of parallel experiment
runs to choose the number of FSC nodes. EM was run for all problems and Mealy NLP for the
Hallway2, decentralized tiger, recycling robots, and wireless network problems. SARSOP [3] was
run for all POMDP problems and we also report results from literature [8, 15, 16].
Table 1 shows DEC-POMDP results for the decentralized tiger, recycling robots, meeting in a grid,
wireless network [10], co-operative box pushing, and stochastic mars rover problems. A discount
factor of 0.99 was used in the wireless network problem and 0.9 in the other DEC-POMDP benchmarks. Table 2 shows POMDP results for the benchmark problems Hallway2, Tag-avoid, Tag-avoid
repeat, and Aloha. A discount factor of 0.999 was used in the Aloha problem and 0.95 in the other
POMDP benchmarks. Methods whose 95% confidence intervals overlap with that of the best method
are shown in bold. The proposed method ?Peri? performed best in the DEC-POMDP problems and
better than other restricted policy size methods in the POMDP problems. ?PeriEM? also performed
well, outperforming EM.
5 Conclusions and discussion
We introduced a new class of finite state controllers, periodic finite state controllers (periodic FSCs),
and presented methods for initialization and policy improvement. In comparisons the resulting methods outperformed state-of-the-art DEC-POMDP and state-of-the-art restricted size POMDP methods and worked very well on POMDPs in general.
In our method the period length was based simply on the discount factor, which already performed
very well; even better results could be achieved, for example, by running solutions of different
periods in parallel. In addition to the expectation-maximization presented here, other optimization
algorithms for infinite-horizon problems could also be adapted to periodic FSCs: for example, the
non-linear programming approach [8] could be adapted to periodic FSCs. In brief, a separate value
function and separate FSC parameters would be used for each time slice in the periodic FSCs, and
the number of constraints would grow linearly with the number of time slices.
Acknowledgments
We thank Ari Hottinen for discussions on decision making in wireless networks. The authors belong
to the Adaptive Informatics Research Centre (CoE of the Academy of Finland). The work was
supported by Nokia, TEKES, Academy of Finland decision number 252845, and in part by the
PASCAL2 EU NoE, ICT 216886. This publication reflects the authors' views only.
Table 1: DEC-POMDP benchmarks. Most comparison results are from [8, 15, 16]; we ran EM and Mealy NLP on many of the tests (see Section 4). Note that "Goal-directed" is a special method that can only be applied to problems with goals. Entries are Algorithm (Size, Time): Value.

DecTiger (|S| = 2, |Ai| = 3, |Oi| = 2):
  Peri (10 x 30, 202s): 13.45
  PeriEM (7 x 10, 6540s): 9.42
  Goal-directed (11, 75s): 5.041
  NLP (19, 6173s): -1.088
  Mealy NLP (4, 29s): -1.49
  EM (6, 142s): -16.30

Recycling robots (|S| = 4, |Ai| = 3, |Oi| = 2):
  Mealy NLP (1, 0s): 31.93
  Peri (6 x 30, 77s): 31.84
  PeriEM (6 x 10, 272s): 31.80
  EM (2, 13s): 31.50

Meeting in a 2x2 grid (|S| = 16, |Ai| = 5, |Oi| = 2):
  Peri (5 x 30, 58s): 6.89
  PeriEM (5 x 10, 6019s): 6.82
  EM (8, 5086s): 6.80
  Mealy NLP (5, 116s): 6.13
  HPI+NLP (7, 16763s): 6.04
  NLP (5, 117s): 5.66
  Goal-directed (4, 4s): 5.64

Wireless network (|S| = 64, |Ai| = 2, |Oi| = 6):
  EM (3, 6886s): -175.40
  Peri (15 x 100, 6492s): -181.24
  PeriEM (2 x 10, 3557s): -218.90
  Mealy NLP (1, 9s): -296.50

Box pushing (|S| = 100, |Ai| = 4, |Oi| = 5):
  Goal-directed (5, 199s): 149.85
  Peri (15 x 30, 5675s): 148.65
  Mealy NLP (4, 774s): 143.14
  PeriEM (4 x 10, 7164s): 106.68
  HPI+NLP (10, 6545s): 95.63
  EM (6, 7201s): 43.33

Mars rovers (|S| = 256, |Ai| = 6, |Oi| = 8):
  Peri (10 x 30, 6088s): 24.13
  Goal-directed (6, 956s): 21.48
  Mealy NLP (3, 396s): 19.67
  PeriEM (3 x 10, 7132s): 18.13
  EM (3, 5096s): 17.75
  HPI+NLP (4, 111s): 9.29

Table 2: POMDP benchmarks. Most comparison method results are from [16]; we ran EM, SARSOP, and Mealy NLP on one test (see Section 4). Entries are Algorithm (Size, Time): Value.

Hallway2 (|S| = 93, |A| = 5, |O| = 17):
  Perseus (56, 10s): 0.35
  HSVI2 (114, 1.5s): 0.35
  PBPI (320, 3.1s): 0.35
  SARSOP (776, 7211s): 0.35
  HSVI (1571, 10010s): 0.35
  PBVI (95, 360s): 0.34
  Peri (160 x 60, 5252s): 0.34
  biased BPI (60, 790s): 0.32
  NLP fixed (18, 240s): 0.29
  NLP (13, 420s): 0.28
  EM (30, 7129s): 0.28
  Mealy NLP (1, 2s): 0.028

Tag-avoid (|S| = 870, |A| = 5, |O| = 30):
  PBPI (818, 1133s): -5.87
  SARSOP (13588, 7394s): -6.04
  Peri (160 x 60, 6394s): -6.15
  RTDP-BEL (2.5m, 493s): -6.16
  Perseus (280, 1670s): -6.17
  HSVI2 (415, 24s): -6.36
  Mealy NLP (2, 323s): -6.65
  biased BPI (17, 250s): -6.65
  BPI (940, 59772s): -9.18
  NLP (2, 5596s): -13.94
  EM (2, 30s): -20.00

Tag-avoid repeat (|S| = 870, |A| = 5, |O| = 30):
  SARSOP (15202, 7203s): -10.71
  Peri (160 x 60, 6316s): -11.02
  Mealy NLP (2, 319s): -11.44
  Perseus (163, 5656s): -12.35
  HSVI2 (8433, 5413s): -14.33
  NLP (1, 37s): -20.00
  EM (2, 72s): -20.00

Aloha (|S| = 90, |A| = 29, |O| = 3):
  SARSOP (82, 7201s): 1237.01
  Peri (160 x 100, 6793s): 1236.70
  Mealy NLP (7, 312s): 1221.72
  HSVI2 (5434, 5430s): 1217.95
  NLP (6, 1134s): 1211.67
  EM (3, 7200s): 1120.05
  Perseus (68, 5401s): 853.42
References
[1] R. D. Smallwood and E. J. Sondik. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, pages 1071-1088, 1973.
[2] S. Seuken and S. Zilberstein. Formal models and algorithms for decentralized decision making under uncertainty. Autonomous Agents and Multi-Agent Systems, 17(2):190-250, 2008.
[3] H. Kurniawati, D. Hsu, and W. S. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proc. Robotics: Science and Systems, 2008.
[4] S. Seuken and S. Zilberstein. Memory-bounded dynamic programming for DEC-POMDPs. In Proc. of 20th IJCAI, pages 2009-2016. Morgan Kaufmann, 2007.
[5] F. Wu, S. Zilberstein, and X. Chen. Point-based policy generation for decentralized POMDPs. In Proc. of 9th AAMAS, pages 1307-1314. IFAAMAS, 2010.
[6] P. Poupart and C. Boutilier. Bounded finite state controllers. Advances in Neural Information Processing Systems, 16:823-830, 2003.
[7] M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic inference for solving (PO)MDPs. Technical report, University of Edinburgh, 2006.
[8] C. Amato, D. Bernstein, and S. Zilberstein. Optimizing memory-bounded controllers for decentralized POMDPs. In Proc. of 23rd UAI, pages 1-8. AUAI Press, 2007.
[9] A. Kumar and S. Zilberstein. Point-based backup for decentralized POMDPs: Complexity and new algorithms. In Proc. of 9th AAMAS, pages 1315-1322. IFAAMAS, 2010.
[10] J. Pajarinen and J. Peltonen. Efficient planning for factored infinite-horizon DEC-POMDPs. In Proc. of 22nd IJCAI, pages 325-331. AAAI Press, July 2011.
[11] D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819-840, 2002.
[12] A. Kumar and S. Zilberstein. Anytime planning for decentralized POMDPs using expectation maximization. In Proc. of 26th UAI, 2010.
[13] D. S. Bernstein, E. A. Hansen, and S. Zilberstein. Bounded policy iteration for decentralized POMDPs. In Proc. of 19th IJCAI, pages 1287-1292. Morgan Kaufmann, 2005.
[14] D. Szer and F. Charpillet. An optimal best-first search algorithm for solving infinite horizon DEC-POMDPs. In Proc. of 16th ECML, pages 389-399, 2005.
[15] C. Amato and S. Zilberstein. Achieving goals in decentralized POMDPs. In Proc. of 8th AAMAS, volume 1, pages 593-600. IFAAMAS, 2009.
[16] C. Amato, B. Bonet, and S. Zilberstein. Finite-state controllers based on Mealy machines for centralized and decentralized POMDPs. In Proc. of 24th AAAI, 2010.
[17] S. Ji, R. Parr, H. Li, X. Liao, and L. Carin. Point-based policy iteration. In Proc. of 22nd AAAI, volume 22, page 1243, 2007.
[18] F. A. Oliehoek, M. T. J. Spaan, and N. Vlassis. Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research, 32(1):289-353, 2008.
Convergent Fitted Value Iteration
with Linear Function Approximation
Daniel J. Lizotte
David R. Cheriton School of Computer Science
University of Waterloo
Waterloo, ON N2L 3G1 Canada
[email protected]
Abstract
Fitted value iteration (FVI) with ordinary least squares regression is known to
diverge. We present a new method, "Expansion-Constrained Ordinary Least Squares" (ECOLS), that produces a linear approximation but also guarantees convergence when used with FVI. To ensure convergence, we constrain the least squares regression operator to be a non-expansion in the ∞-norm. We show that the space of function approximators that satisfy this constraint is more rich than the space of "averagers," we prove a minimax property of the ECOLS residual
error, and we give an efficient algorithm for computing the coefficients of ECOLS
based on constraint generation. We illustrate the algorithmic convergence of FVI
with ECOLS in a suite of experiments, and discuss its properties.
1
Introduction
Fitted value iteration (FVI), both in the model-based [4] and model-free [5, 15, 16, 17] settings, has
become a method of choice for various applied batch reinforcement learning problems. However, it
is known that depending on the function approximation scheme used, fitted value iteration can and
does diverge in some settings. This is particularly problematic?and easy to illustrate?when using
linear regression as the function approximator. The problem of divergence in FVI has been clearly
illustrated in several settings [2, 4, 8, 22]. Gordon [8] proved that the class of averagers?a very
smooth class of function approximators?can safely be used with FVI. Further interest in batch RL
methods then led to work that uses non-parametric function approximators with FVI to avoid divergence [5, 15, 16, 17]. This has left a gap in the ?middle ground? of function approximator choices
that guarantee convergence?we would like to have a function approximator that is more flexible than
the averagers but more easily interpreted than the non-parametric approximators. In many scientific
applications, linear regression is a natural choice because of its simplicity and interpretability when
used with a small set of scientifically meaningful state features. For example, in a medical setting,
one may want to base a value function on patient features that are hypothesized to impact a long-term
clinical outcome [19]. This enables scientists to interpret the parameters of an optimal learned value
function as evidence for or against the importance of these features. Thus for this work, we restrict
our attention to linear function approximation, and ensure algorithmic convergence to a fixed point
regardless of the generative model of the data. This is in contrast to previous work that explores
how properties of the underlying MDP and properties of the function approximation space jointly
influence convergence of the algorithm [1, 14, 6].
Our aim is to develop a variant of linear regression that, when used in a fitted value iteration algorithm, guarantees convergence of the algorithm to a fixed point. The contributions of this paper
are three-fold: 1) We develop and describe the "Expansion-Constrained Ordinary Least Squares"
(ECOLS) approximator. Our approach is to constrain the regression operator to be a non-expansion
in the ∞-norm. We show that the space of function approximators that satisfy this property is more
1
rich than the space of averagers [8], and we prove a minimax property on the residual error of the
approximator. 2) We give an efficient algorithm for computing the coefficients of ECOLS based
on quadratic programming with constraint generation. 3) We verify the algorithmic convergence
of fitted value iteration with ECOLS in a suite of experiments and discuss its performance. Finally, we discuss future directions of research and comment on the general problem of learning an
interpretable value function and policy from fitted value iteration.
2 Background
Consider a finite MDP with states S = {1, ..., n}, actions A = {1, ..., |A|}, state transition matrices
P^{(a)} ∈ R^{n×n} for each action, a deterministic¹ reward vector r ∈ R^n, and a discount factor γ < 1.
Let M_{i,:} (M_{:,i}) denote the ith row (column) of a matrix M. The "Bellman optimality" operator or
"Dynamic Programming" operator T is given by

(T v)_i = r_i + max_a [ γ P^{(a)}_{i,:} v ].    (1)

The fixed point of T is the optimal value function v* which satisfies the Bellman equation, T v* = v*
[3]. From v* we can recover a policy π*_i = argmax_a [ r_i + γ P^{(a)}_{i,:} v* ] that has v* as its value function.
An analogous operator K can be defined for the state-action value function Q ∈ R^{n×|A|}:

(KQ)_{i,j} = r_i + γ P^{(j)}_{i,:} max_a Q_{:,a}    (2)
The fixed point of K is the optimal state-action value Q* which satisfies KQ* = Q*. The value
iteration algorithm proceeds by starting with an initial v or Q, and applying T or K repeatedly until
convergence, which is guaranteed because both T and K are contraction mappings in the infinity
norm [8], as we discuss further below. The above operators assume knowledge of the transition
model P^{(a)} and rewards r. However K in particular is easily adapted to the case of a batch of n
tuples of the form (s_i, a_i, r_i, s'_i) obtained by interaction with the system [5, 15, 16, 17]. In this case,
Q is only evaluated at states in our data set, and in MDPs with continuous state, the number of tuples
n is analogous from a computational point of view to the size of our state space.
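As a concrete illustration (our own sketch, not code from the paper), the following applies the Bellman optimality operator of Eq. (1) repeatedly on a small made-up MDP. Because T is an ∞-norm contraction for γ < 1, the iterates converge to the fixed point v*.

```python
# Sketch (illustrative MDP invented here): tabular value iteration applying
# the Bellman optimality operator T of Eq. (1) on a 3-state, 2-action MDP.
GAMMA = 0.9

# P[a][i][j] = transition probability under action a; r[i] = reward.
P = [
    [[0.0, 1.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 0
    [[0.5, 0.5, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],  # action 1
]
r = [0.0, 1.0, 2.0]

def bellman(v):
    """(T v)_i = r_i + max_a gamma * P^(a)_{i,:} v  -- Eq. (1)."""
    n = len(v)
    return [r[i] + GAMMA * max(sum(Pa[i][j] * v[j] for j in range(n)) for Pa in P)
            for i in range(n)]

v = [0.0, 0.0, 0.0]
for _ in range(500):
    v = bellman(v)

# At convergence v is (numerically) a fixed point of T: T v = v.
residual = max(abs(a - b) for a, b in zip(bellman(v), v))
print(residual)  # tiny
```

Because T is a γ-contraction in the max-norm, the residual shrinks by a factor of at least γ per iteration, regardless of the starting point.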
Fitted value iteration [5, 15, 16, 17] (FVI) interleaves either T or K above with a function approximation operator M. For example, in the model-based case the composed operator (M ∘ T)
is applied repeatedly to an initial guess v^0. FVI has become increasingly popular, especially in the
field of "batch-mode Reinforcement Learning" [13, 7], where a policy is learned from a fixed batch
of data that was collected by a prior agent. This has particular significance in scientific and medical
applications, where ethics concerns prevent the use of current RL methods to interact directly with
a trial subject. In these settings, data gathered from controlled trials can still be used to learn good
policies [11, 19]. Convergence of FVI depends on properties of M, particularly on whether M is a
non-expansion in the ∞-norm, as we discuss below. The main advantage of fitted value iteration is
that the computation of (M ∘ T) can be much lower than n in cases where the approximator M only
requires computation of elements of (T v)_i for a small subset of the state space. If M generalizes
well, this enables learning in large finite or continuous state spaces. Another advantage is that M
can be chosen to represent the value function in a meaningful way, i.e. in a way that meaningfully
relates state variables to expected performance. For example, if M were linear regression and a
particular state feature had a positive coefficient in the learned value function, we know that larger
values of that state feature are preferable. Linear models are of importance because of their ease of
interpretation, but unfortunately, ordinary least squares (OLS) function approximation can cause the
successive iterations of FVI to fail to converge. We now examine properties of the approximation
operator M that control the algorithmic convergence of FVI.
3 Non-Expansions and Operator Norms
We say M is a linear operator if My + My' = M(y + y') ∀y, y' ∈ R^p and M0 = 0. Any linear
operator can be represented by a p × p matrix of real numbers.
¹ A noisy reward signal does not alter the analyses that follow, nor does dependence of the reward on action.
By definition, an operator M is a γ-contraction in the q-norm if

∃γ ≤ 1 s.t. ||My − My'||_q ≤ γ ||y − y'||_q ∀y, y' ∈ R^p    (3)

If the condition holds only for γ = 1 then M is called a non-expansion in the q-norm. It is well-known [3, 5, 21] that the operators T and K are γ-contractions in the ∞-norm.
The operator norm of M induced by the q-norm can be defined in several ways, including

||M||_{op(q)} = sup_{y ∈ R^p, y ≠ 0} ||My||_q / ||y||_q.    (4)

Lemma 1. A linear operator M is a γ-contraction in the q-norm if and only if ||M||_{op(q)} ≤ γ.
Proof. If M is linear and is a γ-contraction, we have

||M(y − y')||_q ≤ γ ||y − y'||_q ∀y, y' ∈ R^p.    (5)

By choosing y' = 0, it follows that M satisfies

||Mz||_q ≤ γ ||z||_q ∀z ∈ R^p.    (6)

Using the definition of ||·||_{op(q)}, we have that the following conditions are equivalent:

||Mz||_q ≤ γ ||z||_q ∀z ∈ R^p    (7)
||Mz||_q / ||z||_q ≤ γ ∀z ∈ R^p, z ≠ 0    (8)
sup_{z ∈ R^p, z ≠ 0} ||Mz||_q / ||z||_q ≤ γ    (9)
||M||_{op(q)} ≤ γ.    (10)

Conversely, any M that satisfies (10) satisfies (5) because we can always write y − y' = z.
Lemma 1 implies that a linear operator M is a non-expansion in the ∞-norm only if

||M||_{op(∞)} ≤ 1    (11)

which is equivalent [18] to:

max_i Σ_j |m_ij| ≤ 1    (12)

Corollary 1. The set of all linear operators that satisfy (12) is exactly the set of linear operators
that are non-expansions in the ∞-norm.
One subset of operators on R^p that are guaranteed to be non-expansions in the ∞-norm are the
averagers², as defined by Gordon [8].
Corollary 2. The set of all linear operators that satisfy (12) is larger than the set of averagers.
Proof. For M to be an averager, it must satisfy

m_ij ≥ 0 ∀i, j    (13)
max_i Σ_j m_ij ≤ 1.    (14)

These constraints are stricter than (12), because they impose an additional non-negativity constraint
on the elements of M.
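A quick numeric check (our own sketch) of Corollaries 1 and 2: ||M||_{op(∞)} is the maximum absolute row sum of M, and a matrix with negative entries can satisfy (12), and hence be a non-expansion, without being an averager.

```python
# Sketch: op(inf)-norm as the maximum absolute row sum (Eq. (12)), and an
# example matrix (made up here) that is a non-expansion but not an averager.
def op_inf(M):
    return max(sum(abs(x) for x in row) for row in M)

def is_averager(M):
    # Eqs. (13)-(14): non-negative entries, row sums at most 1.
    return (all(x >= 0 for row in M for x in row)
            and max(sum(row) for row in M) <= 1.0 + 1e-12)

# Negative entries, but every absolute row sum is at most 1:
M = [[0.5, -0.4],
     [-0.3, 0.6]]
print(op_inf(M))       # 0.9 -> non-expansion in the inf-norm
print(is_averager(M))  # False -> not an averager

def apply(M, y):
    return [sum(m * yj for m, yj in zip(row, y)) for row in M]

# Spot-check the non-expansion property on a pair of vectors:
y1, y2 = [3.0, -1.0], [-2.0, 4.0]
d = max(abs(a - b) for a, b in zip(y1, y2))
dM = max(abs(a - b) for a, b in zip(apply(M, y1), apply(M, y2)))
print(dM <= d)  # True
```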
We have shown that restricting M to be a non-expansion is equivalent to imposing the constraint
||M||_{op(∞)} ≤ 1. It is well-known [8] that if such an M is used as a function approximator in
fitted value iteration, the algorithm is guaranteed to converge from any starting point because the
composition M ∘ T is a γ-contraction in the ∞-norm.
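This convergence claim can be illustrated numerically (our own sketch, with a made-up single-action MDP and a hand-picked averager M): iterating v ← M(T v) converges because M ∘ T inherits the γ-contraction property of T.

```python
# Sketch: fitted value iteration v <- M(T v) where M is an averager (rows
# non-negative, summing to at most 1). The MDP and M below are invented for
# illustration; because M is an inf-norm non-expansion and T is a
# gamma-contraction, the composed iteration converges.
GAMMA = 0.9
P = [[0.0, 1.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]  # single action
r = [0.0, 1.0, 2.0]

# A hand-picked averager: each state's value is smoothed with its neighbours.
M = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]

def T(v):
    return [r[i] + GAMMA * sum(P[i][j] * v[j] for j in range(3)) for i in range(3)]

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

v = [0.0, 0.0, 0.0]
for _ in range(500):
    v = apply(M, T(v))

# v is (numerically) the unique fixed point of the composition M o T.
residual = max(abs(a - b) for a, b in zip(apply(M, T(v)), v))
print(residual)  # tiny
```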
² The original definition of an averager was an operator of the form y ↦ Ay + b for a constant vector b. For
this work we assume b = 0.
4 Expansion-Constrained Ordinary Least Squares
We now describe our Expansion-Constrained Ordinary Least Squares function approximation
method, and show how we enforce that it is a non-expansion in the ∞-norm.
Suppose X is an n × p design matrix with n > p and rank(X) = p, and suppose y is a vector of
regression targets. The usual OLS estimate β̂ for the model y ≈ Xβ is given by

β̂ = argmin_β ||Xβ − y||^2    (15)
  = (X^T X)^{−1} X^T y.    (16)

The predictions made by the model at the points in X, i.e., the estimates of y, are given by

ŷ = Xβ̂ = X(X^T X)^{−1} X^T y = Hy    (17)

where H is the "hat" matrix because it "puts the hat" on y. The ith element of ŷ is a linear combination of the elements of y, with weights given by the ith row of H. These weights sum to one, and
may be positive or negative. Note that H is a projection of y onto the column space of X, and has 1
as an eigenvalue with multiplicity rank(X), and 0 as an eigenvalue with multiplicity (n − rank(X)).
It is known [18] that for a linear operator M, ||M||_{op(2)} is given by the largest singular value of M.
It follows that ||H||_{op(2)} ≤ 1 and, by Lemma 1, H is a non-expansion in the 2-norm. However,
depending on the data X, we may not have ||H||_{op(∞)} ≤ 1, in which case H will not be a non-expansion in the ∞-norm. The ∞-norm expansion property of H is problematic when using linear
function approximation for fitted value iteration, as we described earlier.
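A minimal numeric illustration of this point (our own example): with the single-column design matrix X = (1, 2, 0)^T, the hat matrix is an orthogonal projection, so a 2-norm non-expansion, yet it expands some vectors in the ∞-norm.

```python
# Sketch: for X = (1, 2, 0)^T, the hat matrix is H = X X^T / (X^T X) = X X^T / 5.
X = [1.0, 2.0, 0.0]
s = sum(x * x for x in X)                    # X^T X = 5
H = [[xi * xj / s for xj in X] for xi in X]  # [[0.2,0.4,0],[0.4,0.8,0],[0,0,0]]

# Maximum absolute row sum = ||H||_op(inf); here it exceeds 1 (row 2 sums to 1.2).
op_inf = max(sum(abs(h) for h in row) for row in H)
print(op_inf)  # about 1.2 > 1: H is NOT an inf-norm non-expansion

# A vector that H expands in the inf-norm (signs matching row 2 of H):
z = [1.0, 1.0, 0.0]
Hz = [sum(h * zj for h, zj in zip(row, z)) for row in H]
print(max(abs(val) for val in Hz))  # about 1.2, although ||z||_inf = 1
```

By contrast, H is idempotent (H·H = H) and symmetric, so its eigenvalues are 0 or 1 and ||H||_{op(2)} = 1, consistent with the projection argument above.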
If one wants to use linear regression safely within a value-iteration algorithm, it is natural to consider
constraining the least-squares problem so that the resulting hat matrix is an ∞-norm non-expansion.
Consider the following optimization problem:

Ŵ = argmin_W ||XWX^T y − y||^2    (18)
s.t. ||XWX^T||_{op(∞)} ≤ 1, W ∈ R^{p×p}, W = W^T.

The symmetric matrix W is of size p × p, so we have a quadratic objective with a convex norm
constraint on XWX^T, resulting in a hat matrix Ĥ = XŴX^T. If the problem were unconstrained,
we would have Ŵ = (X^T X)^{−1}, Ĥ = H, and ŴX^T y = β̂, the original OLS parameter
estimate.
The matrix Ĥ is a non-expansion by construction. However, unlike the OLS hat matrix H =
X(X^T X)^{−1} X^T, the matrix Ĥ depends on the targets y. That is, given a different set of regression
targets, we would compute a different Ĥ. We should therefore more properly write this non-linear
operator as Ĥ_y. Because of the non-linearity, the operator Ĥ_y resulting from the minimization in
(18) can in fact be an expansion in the ∞-norm despite the constraints.
We now show how we might remove the dependence on y from (18) so that the resulting operator is
a linear non-expansion in the op(∞)-norm. Consider the following optimization problem:

Ŵ = argmin_W max_z ||XWX^T z − z||^2    (19)
s.t. ||XWX^T||_{op(∞)} ≤ 1, ||z||_2 = c, W ∈ R^{p×p}, W = W^T, z ∈ R^n

Intuitively, the resulting Ŵ defines a linear operator of the form XŴX^T that minimizes the squared
error between its approximation ẑ and the worst-case (bounded) targets z.³ The resulting Ŵ does
not depend on the regression targets y, so the corresponding Ĥ is a linear operator. The constraint
||XWX^T||_{op(∞)} ≤ 1 is effectively a regularizer on the coefficients of the hat matrix which will
tend to shrink the fitted values XŴX^T y toward zero.
Minimization (19) gives us a linear operator, but, as we now show, Ŵ is not unique; there are in fact
an uncountable number of Ŵ that minimize (19).
³ The c is a mathematical convenience; if ||z||_2 were unbounded then the max would be unbounded and the
problem ill-posed.
Theorem 1. Suppose W′ is feasible for (19) and is positive semi-definite. Then W′ satisfies

max_{z, ||z||_2 < c} ||XW′X^T z − z||_2 = min_W max_{z, ||z||_2 < c} ||XWX^T z − z||_2    (20)

for all c.
Proof. We begin by re-formulating (19), which contains a non-concave maximization, as a convex
minimization problem with convex constraints.
Lemma 2. Let X, W, c, and H be defined as above. Then

max_{z, ||z||_2 = c} ||XWX^T z − z||_2 = c ||XWX^T − I||_{op(2)}.

Proof. max_{z ∈ R^n, ||z||_2 = c} ||XWX^T z − Iz||_2 = max_{z ∈ R^n, ||z||_2 ≤ 1} ||(XWX^T − I) c z||_2
= c max_{z ∈ R^n, ||z||_2 ≠ 0} ||(XWX^T − I) z||_2 / ||z||_2 = c ||XWX^T − I||_{op(2)}.
Using Lemma 2, we can rewrite (19) as

Ŵ = argmin_W ||XWX^T − I||_{op(2)}    (21)
s.t. ||XWX^T||_{op(∞)} ≤ 1, W ∈ R^{p×p}, W = W^T

which is independent of z and independent of the positive constant c. This objective is convex in
W, as are the constraints. We now prove a lower bound on (21) and prove that W′ meets the lower
bound.
Lemma 3. For all n × p design matrices X s.t. n > p and all symmetric W, ||XWX^T − I||_{op(2)} ≥ 1.
Proof. Recall that ||XWX^T − I||_{op(2)} is given by the largest singular value of XWX^T − I. By
symmetry of W, write XWX^T = UDU^T where D is a diagonal matrix whose diagonal entries d_ii
are the eigenvalues of XWX^T and U is an orthonormal matrix. We therefore have

XWX^T − I = UDU^T − I = UDU^T − UIU^T = U(D − I)U^T    (22)

Therefore ||XWX^T − I||_{op(2)} = max_i |d_ii − 1|, which is the largest singular value of XWX^T − I.
Furthermore, we know that rank(XWX^T) ≤ p and that therefore at least n − p of the d_ii are zero.
Therefore max_i |d_ii − 1| ≥ 1, implying ||XWX^T − I||_{op(2)} ≥ 1.
Lemma 4. For any symmetric positive definite matrix W′ that satisfies the constraints in (19) and
any n × p design matrix X s.t. n > p, we have ||XW′X^T − I||_{op(2)} = 1.
Proof. Let H′ = XW′X^T and write H′ − I = U′(D′ − I)U′^T where U′ is orthogonal and D′ is a
diagonal matrix whose diagonal entries d′_ii are the eigenvalues of H′. We know H′ is positive semi-definite because W′ is assumed to be positive semi-definite; therefore d′_ii ≥ 0. From the constraints
in (19), we have ||H′||_{op(∞)} ≤ 1, and by symmetry of H′ we have ||H′||_{op(∞)} = ||H′||_{op(1)}. It is
known [18] that for any M, ||M||_{op(2)} ≤ √(||M||_{op(∞)} ||M||_{op(1)}), which gives ||H′||_{op(2)} ≤ 1 and
therefore |d′_ii| ≤ 1 for all i ∈ 1..n. Combining these results gives 0 ≤ d′_ii ≤ 1 ∀i. Recall that
||XW′X^T − I||_{op(2)} = max_i |d′_ii − 1|. Because rank(XW′X^T) ≤ p, we know that there exists an
i such that d′_ii = 0, and because we have shown that 0 ≤ d′_ii ≤ 1, it
follows that max_i |d′_ii − 1| = 1, and therefore ||XW′X^T − I||_{op(2)} = 1.
Lemma 4 shows that the objective value at any feasible, symmetric positive-definite W′ matches the
lower bound proved in Lemma 3, and that therefore any such W′ satisfies the theorem statement.
Theorem 1 shows that the optimum of (19) is not unique. We therefore solve the following optimization problem, which has a unique solution, shows good empirical performance, and yet still provides
the minimax property guaranteed by Theorem 1 when the optimal matrix is positive semi-definite.⁴

Ŵ = argmin_W max_z ||XWX^T z − Hz||^2    (23)
s.t. ||XWX^T||_{op(∞)} ≤ 1, ||z||_2 = c, W ∈ R^{p×p}, W = W^T, z ∈ R^n

Intuitively, this objective searches for a Ŵ such that linear approximation using XŴX^T is as close
as possible to the OLS approximation, for the worst-case regression targets, according to the 2-norm.
5 Computational Formulation
By an argument identical to that of Lemma 2, we can re-formulate (23) as a convex optimization
problem with convex constraints, giving

Ŵ = argmin_W ||XWX^T − H||_{op(2)}    (24)
s.t. ||XWX^T||_{op(∞)} ≤ 1, W ∈ R^{p×p}, W = W^T.
Though convex, objective (24) has no simple closed form, and we found that standard solvers have
difficulty for larger problems [9]. However, ||XWX^T − H||_{op(2)} is upper bounded by the Frobenius
norm ||M||_F = (Σ_{i,j} m_ij^2)^{1/2}. Therefore, we minimize the quadratic objective ||XWX^T − H||_F
subject to the same convex constraints, which is easier to solve than (21). Note that Theorem 1
applies to the solution of this modified objective when the resulting Ŵ is positive semi-definite.
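The bound ||A||_{op(2)} ≤ ||A||_F that justifies this surrogate can be checked numerically; the sketch below (our own code, with an arbitrary example matrix) estimates the op(2)-norm with power iteration on A^T A.

```python
# Sketch: verify ||A||_op(2) <= ||A||_F for an example matrix. The op(2)-norm
# (largest singular value) is estimated by power iteration on A^T A.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def op2_norm(A, iters=200):
    """Largest singular value of A via power iteration on A^T A."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    # Rayleigh quotient of the unit vector v gives the top eigenvalue of
    # A^T A, i.e. sigma_max^2; take the square root.
    return sum(x * y for x, y in zip(v, matvec(At, matvec(A, v)))) ** 0.5

def fro_norm(A):
    return sum(x * x for row in A for x in row) ** 0.5

A = [[1.0, -0.5, 0.2],
     [0.3, 0.8, -0.1]]
print(op2_norm(A) <= fro_norm(A) + 1e-9)  # True
```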
Expanding ||XWX^T − H||_F^2 gives Tr(XWX^T XWX^T − 2 XWX^T H), up to an additive
constant that does not depend on W. Let M(:) be the length p·n vector consisting of the stacked
columns of the matrix M. After some algebraic manipulations, we can re-write the objective as
W(:)^T Σ W(:) − 2 ψ^T W(:), where Σ = Σ_{i=1}^n Σ_{j=1}^n σ^{(ij)} σ^{(ij)T} with σ^{(ij)} = (X_{i,:}^T X_{j,:})(:),
and ψ = (X^T X)(:). This objective can then be fed into any standard QP solver. The constraint
||XWX^T||_{op(∞)} ≤ 1 can be expressed as the set of constraints Σ_{j=1}^n |X_{i,:} W X_{j,:}^T| < 1,
i = 1..n, or as a set of n·2^n linear constraints Σ_{j=1}^n k_j X_{i,:} W X_{j,:}^T < 1, i = 1..n,
k ∈ {+1, −1}^n. Each of these linear constraints involves a vector k with entries {+1, −1} multiplied by a
row of XWX^T. If the entries in k match the signs of the row of XWX^T, then their inner product is
equal to the sum of the absolute values of the row, which must be constrained. If they do not match,
the result is smaller. By constraining all n·2^n patterns of signs, we constrain the sum of the absolute
values of the entries in the row. Explicitly enforcing all of these constraints is intractable, so we
employ a constraint-generation approach [20]. We solve a sequence of quadratic programs, adding
the most violated linear constraint after each step. The most violated constraint is given by a row
i* = argmax_{i ∈ 1..n} Σ_{j=1}^n |X_{i,:} W X_{j,:}^T| and a vector k* = sign(X_{i*,:} W X^T). The resulting
constraint on W(:) can be written as k*^T L W(:) ≤ 1 where L_{j,:} = σ^{(i* j)T}, j = 1..n. This
formulation allows us to use a general QP solver to compute Ŵ.
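The constraint-selection step can be sketched as follows (hypothetical helper code, not the authors' implementation): given the current W, find the row i* of XWX^T with the largest absolute row sum and the sign vector k* that converts it into a linear constraint.

```python
# Sketch of one constraint-generation step: identify the most violated
# op(inf) constraint for the current iterate W. X and W below are made-up
# example values.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def most_violated(X, W):
    Xt = [list(c) for c in zip(*X)]
    H = matmul(matmul(X, W), Xt)                    # XWX^T
    sums = [sum(abs(h) for h in row) for row in H]  # absolute row sums
    i_star = max(range(len(sums)), key=lambda i: sums[i])
    k_star = [1 if h >= 0 else -1 for h in H[i_star]]
    return i_star, k_star, sums[i_star]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
W = [[0.9, -0.2], [-0.2, 0.6]]
i_star, k_star, viol = most_violated(X, W)
print(i_star, k_star, viol)
# The linear constraint built from row i* and signs k* is added to the QP
# only when viol > 1, i.e. when the op(inf) constraint is actually violated.
```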
Note that batch fitted value iteration performs many regressions where the targets y change from
iteration to iteration, but the design matrix X is fixed. Therefore we only need to solve the ECOLS
optimization problem once for any given application of FVI, meaning the additional computational
cost of ECOLS over OLS is not a major drawback.
6 Experimental results
In order to illustrate the behavior of ECOLS in different settings, we present four different empirical
evaluations: one regression problem and three RL problems. In each of the RL settings, ECOLS
with FVI converges, and the learned value function defines a good greedy policy.
⁴ One could in principle include a semi-definite constraint in the problem formulation, at an increased computational cost. (The problem is not a standard semi-definite program because the objective is not linear in the
elements of W.) We have not imposed this constraint in our experiments and we have always found that the
resulting Ŵ is positive semi-definite. We conjecture that Ŵ is always positive semi-definite.
[Figure 1 plots the n = 25 data points and the fitted curves for OLS, ECOLS with the Frobenius
norm, ECOLS with the op(2)-norm, and the averager-constrained ECOLS, along with the function
coefficients:]

         β       β̂      β̂_F    β̂_op(2)  β̂_avg
1        1       0.95    0.16    0.77    -2.21
x       -3      -2.92   -1.80   -2.02    -0.97
x^2     -3      -3.00   -1.71   -1.88    -1.09
x^3      1       1.00    0.58    0.64     0.37
rms   6.69       6.68   13.60   13.44    16.52

Figure 1: Example of OLS, ECOLS with ||XWX^T − H||_F, ECOLS with ||XWX^T − H||_{op(2)}
Regression The first is a simple regression setting, where we examine the behavior of ECOLS
compared to OLS. To give a simple, pictorial rendition of the difference between OLS, ECOLS
using the Frobenius norm, ECOLS using the op(2)-norm, and an averager, we generated a dataset of
n = 25 tuples (x, y) as follows: x ∼ U(−2, 4), y = 1 − 3x − 3x^2 + x^3 + ε, ε ∼ N(0, 4). The
design matrix X had rows X_{i,:} = [1, x_i, x_i^2, x_i^3]. The ECOLS regression optimizing the Frobenius
norm using CPLEX [12] took 0.36 seconds, whereas optimizing the op(2)-norm using the cvx
package [10] took 8.97 seconds on a 2 GHz Intel Core 2 Duo.
Figure 1 shows the regression curves produced by OLS and the two versions of ECOLS, along with
the learned coefficients and root mean squared error of the predictions on the data. Neither of the
ECOLS curves fits the data as well as OLS, as one would expect. Generally, their curves are smoother
than the OLS fit, and predictions are on the whole shrunk toward zero. We also ran ECOLS with
an additional positivity constraint on XŴX^T, effectively forcing the result to be an averager as
described in Sect. 3. The result is smoother than either of the ECOLS regressors, with a higher RMS
prediction error. Note the small difference between ECOLS using the Frobenius norm (dark black
line) and using the op(2)-norm (dashed line). This is encouraging, as we have found that in larger
datasets optimizing the op(2)-norm is much slower and less reliable.
Two-state example Our second example is a classic on-policy fitted value iteration problem that is
known to diverge using OLS. It is perhaps the simplest example of FVI diverging, due to Tsitsiklis
and Van Roy [22]. This is a deterministic on-policy example, or equivalently for our purposes, a
problem with |A| = 1. There are three states {1, 2, 3} with features X = (1, 2, 0)^T, one action with
P_{1,2} = 1, P_{2,2} = 1 − ε, P_{2,3} = ε, P_{3,3} = 1 and P_{i,j} = 0 elsewhere. The reward is R = [0, 0, 0]^T
and the value function is v* = [0, 0, 0]^T. For γ > 5/(6 − 4ε), FVI with OLS diverges for any
starting point other than v*. FVI with ECOLS always converges to v*. If we change the reward
to R = [1, 1, 0]^T and set γ = 0.95, ε = 0.1, we have v* = [7.55, 6.90, 0]. FVI with OLS of
course still diverges, whereas FVI with ECOLS converges to v̂ = [4.41, 8.82, 0]. In this case, the
approximation space is poor, and no linear method based on the features in X can hope to perform
well. Nonetheless, ECOLS converges to a v̂ of at least the appropriate magnitude.
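For this example the OLS projection step can be written in closed form, so the divergence is easy to reproduce (our own sketch, using the P, R = 0, γ, and ε above):

```python
# Sketch: the Tsitsiklis-Van Roy example in closed form. With features
# X = (1, 2, 0)^T and v = X*theta, projecting T v back onto the feature space
# with OLS multiplies theta by gamma*(6 - 4*eps)/5 each iteration, so FVI
# with OLS diverges whenever gamma > 5/(6 - 4*eps).
GAMMA, EPS = 0.95, 0.1

def fvi_ols_step(theta):
    # T v = gamma * (v_2, (1-eps)*v_2 + eps*v_3, v_3), with v = (theta, 2*theta, 0)
    # and zero reward; OLS fit: theta' = X^T (T v) / (X^T X), X^T X = 5.
    v2, v3 = 2.0 * theta, 0.0
    tv = [GAMMA * v2, GAMMA * ((1 - EPS) * v2 + EPS * v3), GAMMA * v3]
    return (1.0 * tv[0] + 2.0 * tv[1] + 0.0 * tv[2]) / 5.0

theta = 1.0
for _ in range(100):
    theta = fvi_ols_step(theta)
print(theta)  # grows without bound: each step multiplies theta by 1.064
```

With γ = 0.95 and ε = 0.1 the per-step multiplier is 0.95 · 5.6/5 = 1.064 > 1, matching the divergence condition γ > 5/(6 − 4ε) quoted above.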
Grid world Our third example is an off-policy value iteration problem which is known to diverge
with OLS, due to Boyan and Moore [4]. In this example, there are effectively 441 discrete states, laid
out in a 21 × 21 grid, and assigned an (x, y) feature in [0, 1]^2 according to their position in the grid.
There are four actions which deterministically move the agent up, down, left, or right by a distance
of 0.05 in the feature space, and the reward is -0.5 everywhere except the corner state (1, 1), where
it is 0. The discount γ is set to 1.0 so the optimal value function is v*(x, y) = −20 + 10x + 10y.
Boyan and Moore define "lucky" convergence of FVI as the case where the policy induced by
the learned value function is optimal, even if the learned value function itself does not accurately
represent v*. They found that with OLS and a design matrix X_{i,:} = [1, x_i, y_i], they achieve lucky
convergence. We replicated their result using FVI on 255 randomly sampled states plus the goal
state, and found that OLS converged⁵ to β̂ = [−515.89, 9.99, 9.99] after 10455 iterations. This
value function induces a policy that attempts to increase x and y, which is optimal. ECOLS on the
other hand converged to β̂ = [−1.09, 0.030, 0.07] after 31 iterations, which also induces an optimal
policy. In terms of learning correct value function coefficients, the OLS estimate gets 2 of the 3
almost exactly correct. In terms of estimating the value of states, OLS achieves an RMSE over all
states of 10413.73, whereas ECOLS achieves an RMSE of 208.41.
In the same work, Boyan and Moore apply OLS with quadratic features X_{i,:} =
[1, x, y, x^2, y^2, xy], and find that FVI diverges. We found that ECOLS converges, with coefficients
[−0.80, −2.67, −2.78, 2.73, 2.91, 0.06]. This is not "lucky", as the induced policy is only optimal
for states in the upper-right half of the state space.
Left-or-right world Our fourth and last example is an off-policy value iteration problem with
stochastic dynamics where OLS causes non-divergent but non-convergent behavior. To investigate
properties of their tree-based Fitted Q-Iteration (FQI) methods, Ernst, Geurts, and Wehenkel define
the "left-or-right" problem [5], an MDP with S = [0, 10], and stochastic dynamics given by s_{t+1} =
s_t + a + ε, where ε ∼ N(0, 1). Rewards are 0 for s ∈ [0, 10], 100 for s > 10, and 50 for s < 0. All
states outside [0, 10] are terminal. The discount factor γ is 0.75. In their formulation they use A ∈
{−2, 2}, which gives an optimal policy that is approximately π*(s) = {2 if s > 2.5, −2 otherwise}.
We examine a simpler scenario by choosing A ∈ {−4, 4}, so that π*(s) = 4, i.e., it is optimal to
always go right. Based on prior data [5], the optimal Q functions for this type of problem appear
to be smooth and non-linear, possibly with inflection points. Thus we use polynomial features⁶
X_{i,:} = [1, x, x^2, x^3] where x = s/5 − 1. As is common in FQI, we fit separate regressions to learn
Q(·, 4) and Q(·, −4) at each iteration. We used 300 episodes worth of data generated by the uniform
random policy for learning.
In this setting, OLS does not diverge, but neither does it converge: the parameter vector of each
Q function moves chaotically within some bounded region of R^4. The optimal policy induced by
the Q-functions is determined solely by zeroes of Q(·, 4) − Q(·, −4), and in our experiments this
function had at most one zero. Over 500 iterations of FQI with OLS, the cutpoint ranged from −7.77
to 14.04, resulting in policies ranging from "always go right" to "always go left." FQI with ECOLS
converged to a near-optimal policy π̂(s) = {4 if s > 1.81, −4 otherwise}. We determined by Monte
Carlo rollouts that, averaged over a uniform initial state, the value of π̂ is 59.59, whereas the value
of the optimal policy π* is 60.70. While the performance of the learned policy is very good, the
estimate of the average value using the learned Qs, 28.75, is lower due to the shrinkage induced by
ECOLS in the predicted state-action values.
7 Concluding Remarks
Divergence of FVI with OLS has been a long-standing problem in the RL literature. In this paper, we introduced ECOLS, which provides guaranteed convergence of FVI. We proved theoretical
properties that show that, in the minimax sense, ECOLS is optimal among possible linear approximations that guarantee such convergence. Our test problems confirm the convergence properties
of ECOLS and also illustrate some of its properties. In particular, the empirical results illustrate
the regularization effect of the op(∞)-norm constraint that tends to "shrink" predicted values toward zero. This is a further contribution of our paper: our theoretical and empirical results indicate
that this shrinkage is a necessary cost of guaranteeing convergence of FVI using linear models with
a fixed set of features. This has important implications for the deployment of FVI with ECOLS.
In some applications where accurate estimates of policy performance are required, this shrinkage
may be problematic; addressing this problem is an interesting avenue for future research. In other
applications where the goal is to identify a good, intuitively represented value function and policy,
ECOLS is a useful new tool.
Acknowledgements We acknowledge support from Natural Sciences and Engineering Research
Council of Canada (NSERC) and the National Institutes of Health (NIH) grants R01 MH080015
and P50 DA10075.
⁵ Convergence criterion was ||β^{iter+1} − β^{iter}|| ≤ 10^{−5}. All starts were from β = 0.
⁶ The re-scaling of s is for numerical stability.
References
[1] A. Antos, R. Munos, and Cs. Szepesvári. Fitted Q-iteration in continuous action-space MDPs.
In Advances in Neural Information Processing Systems 20, pages 9-16. MIT Press, 2008.
[2] L. Baird. Residual Algorithms: Reinforcement Learning with Function Approximation. In
A. Prieditis and S. Russell, editors, Proceedings of the 25th International Conference on Machine Learning, pages 30-37. Morgan Kaufmann, 1995.
[3] D. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2007.
[4] J. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating
the value function. In Advances in Neural Information Processing Systems, pages 369-376,
1995.
[5] D. Ernst, P. Geurts, and L. Wehenkel. Tree-Based Batch Mode Reinforcement Learning. Journal of Machine Learning Research, 6:503-556, 2005.
[6] A. M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In American Control
Conference, pages 725-730, 2009.
[7] R. Fonteneau. Contributions to Batch Mode Reinforcement Learning. PhD thesis, University
of Liege, 2011.
[8] G. J. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie
Mellon University, 1999.
[9] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version
1.21. http://cvxr.com/cvx, Apr. 2011.
[10] M. C. Grant. Disciplined convex programming and the cvx modeling framework. Information
Systems Journal, 2006.
[11] A. Guez, R. D. Vincent, M. Avoli, and J. Pineau. Adaptive treatment of epilepsy via batch-mode reinforcement learning. In D. Fox and C. P. Gomes, editors, Innovative Applications of
Artificial Intelligence, pages 1671-1678, 2008.
[12] IBM. IBM ILOG CPLEX Optimization Studio V12.2, 2011.
[13] S. Kalyanakrishnan and P. Stone. Batch reinforcement learning in a complex domain. In
Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent
Systems, AAMAS 07, 2007.
[14] R. Munos and Cs. Szepesvári. Finite time bounds for fitted value iteration. Journal of Machine
Learning Research, 9:815-857, 2008.
[15] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2):161-178, 2002.
[16] M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In ECML 2005, pages 317-328. Springer, 2005.
[17] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487-516, 1997.
[18] G. A. F. Seber. A Matrix Handbook for Statisticians. Wiley, 2007.
[19] S. M. Shortreed, E. Laber, D. J. Lizotte, T. S. Stroup, J. Pineau, and S. A. Murphy. Informing sequential clinical decision-making through reinforcement learning: an empirical study.
Machine Learning, 2010.
[20] S. Siddiqi, B. Boots, and G. Gordon. A Constraint Generation Approach to Learning Stable
Linear Dynamical Systems. In Advances in Neural Information Processing Systems 20, pages
1329-1336. MIT Press, 2008.
[21] Cs. Szepesvári. Algorithms for Reinforcement Learning. Morgan and Claypool, 2010.
[22] J. N. Tsitsiklis and B. van Roy. An analysis of temporal-difference learning with function
approximation. IEEE Transactions on Automatic Control, 42(5):674-690, 1997.
Group Anomaly Detection using Flexible Genre Models
Liang Xiong
Machine Learning Department,
Carnegie Mellon University
[email protected]
Barnabás Póczos
Robotics Institute,
Carnegie Mellon University
[email protected]
Jeff Schneider
Robotics Institute,
Carnegie Mellon University
[email protected]
Abstract
An important task in exploring and analyzing real-world data sets is to detect
unusual and interesting phenomena. In this paper, we study the group anomaly
detection problem. Unlike traditional anomaly detection research that focuses on
data points, our goal is to discover anomalous aggregated behaviors of groups of
points. For this purpose, we propose the Flexible Genre Model (FGM). FGM is
designed to characterize data groups at both the point level and the group level so
as to detect various types of group anomalies. We evaluate the effectiveness of
FGM on both synthetic and real data sets including images and turbulence data,
and show that it is superior to existing approaches in detecting group anomalies.
1 Introduction
Anomaly detection is a crucial problem in processing large-scale data sets when our goal is to
find rare or unusual events. These events can either be outliers that should be ignored or novel
observations that could lead to new discoveries. See [1] for a recent survey of this field. Traditional
research often focuses on individual data points. In this paper, however, we are interested in finding
group anomalies, where a set of points together exhibit unusual behavior. For example, consider
text data where each article is considered to be a set (group) of words (points). While the phrases
?machine learning? or ?gummy bears? will not surprise anyone on their own, an article containing
both of them might be interesting.
We consider two types of group anomalies. A point-based group anomaly is a group of individually
anomalous points. A distribution-based anomaly is a group where the points are relatively normal,
but as a whole they are unusual. Most existing work on group anomaly detection focuses on point-based anomalies. A common way to detect point-based anomalies is to first identify anomalous
points and then find their aggregations using scanning or segmentation methods [2, 3, 4]. This
paradigm clearly does not work well for distribution-based anomalies, where the individual points
are normal. To handle distribution-based anomalies, we can design features for groups and then treat
them as points [5, 6]. However, this approach relies on feature engineering that is domain specific
and can be difficult. Our contribution is to propose a new method (FGM) for detecting both types of
group anomalies in an integral way.
Group anomalies exist in many real-world problems. In astronomical studies, modern telescope
pipelines1 produce descriptions for a vast number of celestial objects. Having these data, we want
to pick out scientifically valuable objects like planetary nebulae, or special clusters of galaxies that
could shed light on the development of the universe [7]. In physics, researchers often simulate the
motion of particles or fluid. In these systems, a single particle is seldom interesting, but a group of
particles can exhibit interesting motion patterns like the interweaving of vortices. Other examples
are abundant in the fields of computer vision, text processing, time series and spatial data analysis.
1. For example, the Sloan Digital Sky Survey (SDSS), http://www.sdss.org
We take a generative approach to address this problem. If we have a model to generate normal
data, then we can mark the groups that have small probabilities under this model as anomalies.
Here we make the "bag-of-points" assumption, i.e., points in the same group are unordered and
exchangeable. Under this assumption, mixture models are often used to generate the data due to
De Finetti's theorem [8]. The most famous class of mixture models for modeling group data is
the family of topic models [9, 10]. In topic models, distributions of points in different groups are
mixtures of components ("topics"), which are shared among all the groups.
Our proposed method is closely related to the class of topic models, but it is designed specifically for
the purpose of detecting group anomalies. We use two levels of concepts/latent variables to describe
a group. At the group level, a flexible structure based on "genres" is used to characterize the topic
distributions so that complex normal behaviors are allowed and can be recognized. At the point level,
each group has its own topics to accommodate and capture the variations of points' distributions
(while global topic information is still shared among groups). We call this model the Flexible Genre
Model (FGM). Given a group of points, we can examine whether or not it conforms to the normal
behavior defined by the learned genres and topics. We will also propose scoring functions that can
detect both point-based and distribution-based group anomalies. Exact inference and learning for
FGM is intractable, so we resort to approximate methods. Inference for the FGM model will be
done by Gibbs sampling [11], which is efficient and simple to implement due to the application of
conjugate distributions. Single-sample Monte Carlo EM [12] is used to learn parameters based on
samples produced by the Gibbs sampler.
We demonstrate the effectiveness of the FGM on synthetic and on real-world data sets including
scene images and turbulence data. Empirical results show that FGM is superior to existing approaches in finding group anomalies.
The paper is structured as follows. In Section 2 we review related work and discuss the limitations
with existing algorithms and why a new method is needed for group anomaly detection. Section 3 introduces our proposed model. The parameter learning of our model and inference on it are explained
in Section 4. Section 5 describes how to use our method for group anomaly detection. Experimental
results are shown in Section 6. We finish that paper by drawing conclusions (Section 7).
2 Background and Related Work
In this section, we provide background about topic models and explain the limitation of existing
methods in detecting group anomalies. For intuition, we introduce the problem in the context of
detecting anomalous images, rare galaxy clusters, and unusual motion in a dynamic fluid simulation.
We consider a data set with M pre-defined groups G1 , . . . , GM (e.g. spatial clusters of galaxies, patches in an image, or fluid motions in a local region). Group Gm contains Nm points
(galaxies, image patches, simulation grid points). The features of these points are denoted by
x_m = {x_{m,n} ∈ R^f}_{n=1,...,N_m}, where f is the dimensionality of the points' features. These would
be spectral features of each galaxy, SIFT features of each image patch, or velocities at each grid
point of a simulation. We assume that points in the same group are unordered and exchangeable.
Having these data, we ask whether the distribution of features x_m in group G_m looks anomalous.
Topic models such as Latent Dirichlet Allocation (LDA) [10] are widely used to model data having
this kind of group structure. The original LDA model was proposed for text processing. It represents
the distribution of points (words) in a group (document) as a mixture of K global topics β_1, ..., β_K, each of which is a distribution (i.e., β_i ∈ S_f, where S_f is the f-dimensional probability simplex). Let M(θ) be the multinomial distribution parameterized by θ ∈ S_K and Dir(α) be the Dirichlet distribution with parameter α ∈ R_+^K. LDA generates the mth group by first drawing its topic distribution θ_m from the prior distribution Dir(α). Then for each point x_mn in the mth group it draws one of the K topics from M(θ_m) (i.e., z_mn ~ M(θ_m)) and then generates the point according to this topic (x_mn ~ M(β_{z_mn})).
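The LDA generative process just described is easy to sketch in code. The toy example below (made-up dimensions, numpy draws standing in for any particular implementation; this is not the authors' code) samples one group:

```python
import numpy as np

rng = np.random.default_rng(0)

K, V = 3, 10                # number of topics and vocabulary size (toy values)
alpha = np.ones(K)          # Dirichlet prior over topic distributions
beta = rng.dirichlet(np.ones(V), size=K)   # K global topics, each a distribution over V words

def generate_group(n_points):
    """Generate one group (document) of n_points words under LDA."""
    theta = rng.dirichlet(alpha)                 # theta_m ~ Dir(alpha)
    z = rng.choice(K, size=n_points, p=theta)    # z_mn ~ M(theta_m)
    x = np.array([rng.choice(V, p=beta[k]) for k in z])  # x_mn ~ M(beta_{z_mn})
    return theta, z, x

theta, z, x = generate_group(100)
```

The point to notice is that every group shares the same K topics; only the per-group mixing vector theta changes, which is exactly the limitation discussed below.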
In our examples, the topics can represent galaxy types (e.g. "blue", "red", or "emissive", with
K = 3), image features (e.g. edge detectors representing various orientations), or common motion patterns in the fluid (fast left, slow right, etc). Each point in the group has its own topic. We
consider points that have multidimensional continuous feature vectors. In this case, topics can be
modeled by Gaussian distributions, and each point is generated from one of the K Gaussian topics.
At a higher level, a group is characterized by the distribution of topics θ_m, i.e., the proportion of
different types in the group Gm . The concepts of topic and topic distribution help us define group
anomalies: a point-based anomaly contains points that do not belong to any of the normal topics
and a distribution-based anomaly has a topic distribution θ_m that is uncommon.
Although topic models are very useful in estimating the topics and topic distributions in groups, the
existing methods are incapable of detecting group anomalies comprehensively. In order to detect
anomalies, the model should be flexible enough to enable complex normal behaviors. For example,
it should be able to model complex and multi-modal distributions of the topic distribution θ. LDA,
however, only uses a single Dirichlet distribution to generate topic distributions, and cannot effectively define what normal and abnormal distributions should be. It also uses the same K topics for
every group, which makes groups indistinguishable when looking at their topics. In addition, these
shared topics are not adapted to each group either.
The Mixture of Gaussian Mixture Model (MGMM) [13] firstly uses topic modeling for group
anomaly detection. It allows groups to select their topic distributions from a dictionary of multinomials, which is learned from data to define what is normal. [14] employed the same idea but
did not apply their model to anomaly detection. The problem of using multinomials is that it does
not consider the uncertainty of topic distributions. The Theme Model (ThM) [15] lets a mixture
of Dirichlets generate the topic distributions and then uses the memberships in this mixture to do
clustering on groups. This idea is useful for modeling group-level behaviors but fails to capture
anomalous point-level behaviors. The topics are still shared globally in the same way as in LDA. In
contrast, [16] proposed to use different topics for different groups in order to account for the burstiness of the words (points). These adaptive topics are useful in recognizing point-level anomalies,
but cannot be used to detect anomalous behavior at the group level. For the group anomaly detection
problem we propose a new method, the Flexible Genre Model, and demonstrate that it is able to cope
with the issues mentioned above and performs better than the existing state-of-the-art algorithms.
3 Model Specification
The flexible genre model (FGM) extends LDA such that the generating processes of topics and topic
distributions can model more complex distributions. To achieve this goal, two key components are
added. (i) To model the behavior of topic distributions, we use several "genres", each of which is a typical distribution of topic distributions. (ii) We use "topic generators" to generate adaptive topics
for different groups. We will also use them to learn how the normal topics have been generated. The
generative process of FGM is presented in Algorithm 1. A graphical representation of FGM is given
in Figure 1.
Algorithm 1 Generative process of FGM
for Groups m = 1 to M do
• Draw a genre {1, . . . , T} ∋ y_m ~ M(π).
• Draw a topic distribution according to the genre y_m: S_K ∋ θ_m ~ Dir(α_{y_m}).
• Draw K topics {β_{m,k} ~ P(β_{m,k}|ω_k)}_{k=1,...,K}.
for Points n = 1 to N_m do
• Draw a topic membership {1, . . . , K} ∋ z_{m,n} ~ M(θ_m). [Topic β_{m,z_{m,n}} will be active.]
• Generate a point x_{m,n} ~ P(x_{m,n}|β_{m,z_{m,n}}).
end for
end for
We assume there are T genres and K topics. M(π) denotes the global distribution of genres. Each genre is a Dirichlet distribution for generating the topic distributions, and α = {α_t}_{t=1,...,T} is the set of genre parameters. Each group has K topics β_m = {β_{m,k}}_{k=1,...,K}. The "topic generators", Ω = {ω_k}_{k=1,...,K} with distributions {P(β|ω_k)}_{k=1,...,K}, are the global distributions for generating the corresponding topics. Having the topic distribution θ_m and the topics {β_{m,k}}, points are generated as in LDA.
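Algorithm 1 can be sketched in a few lines. In this toy version (not the authors' code) the GIW topic generator is simplified to "perturb a global topic mean, keep a fixed spherical covariance" so the sketch stays self-contained; all dimensions and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

T, K, f = 2, 3, 2                       # genres, topics, feature dim (toy values)
pi = np.array([0.5, 0.5])               # global genre distribution M(pi)
alpha = np.array([[5.0, 5.0, 5.0],      # genre 0: balanced topic mixtures
                  [9.0, 0.5, 0.5]])     # genre 1: dominated by topic 0
mu0 = np.array([[-2., 0.], [2., 0.], [0., 2.]])   # global topic-generator means

def generate_group(n_points):
    """One pass of the generative process: genre -> topic distribution ->
    group-adapted topics -> points (GIW draw simplified, see lead-in)."""
    y = rng.choice(T, p=pi)                          # y_m ~ M(pi)
    theta = rng.dirichlet(alpha[y])                  # theta_m ~ Dir(alpha_y)
    means = mu0 + 0.1 * rng.standard_normal((K, f))  # per-group (adapted) topics
    z = rng.choice(K, size=n_points, p=theta)        # z_mn ~ M(theta_m)
    x = means[z] + 0.2 * rng.standard_normal((n_points, f))
    return y, theta, z, x

y, theta, z, x = generate_group(200)
```

The contrast with the LDA sketch earlier is that each group now carries a genre label and its own (slightly perturbed) copy of the topics.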
By comparing FGM to LDA, the advantages of FGM become evident. (i) In FGM, each group has a latent genre attribute y_m, which determines what the topic distribution in this group should look like (Dir(α_{y_m})), and (ii) each group has its own topics {β_{m,k}}_{k=1}^K, but they are still tied through the
Figure 1: The Flexible Genre Model (FGM).
global distributions P(β|Ω). Thus, the topics can be adapted to local group data, but the information is still shared globally. Moreover, the topic generators P(β|Ω) determine what the topics {β_{m,k}} should look like. In turn, if a group uses unusual topics to generate its points, it can be identified.
To handle real-valued multidimensional data, we set the point-generating distributions (i.e., the topics) to be Gaussians, P(x_{m,n}|β_{m,k}) = N(x_{m,n}|β_{m,k}), where β_{m,k} = {μ_{m,k}, Σ_{m,k}} contains the mean and covariance parameters. For computational convenience, the topic generators are Gaussian-Inverse-Wishart (GIW) distributions, which are conjugate to the Gaussian topics. Hence ω_k = {μ_0, κ_0, ν_0, Σ_0} (a mean, a scaling factor, degrees of freedom, and a scale matrix) parameterizes the GIW distribution [17] (see the supplementary materials for more details). Let Θ = {π, α, Ω} denote the model parameters. We can write the complete likelihood of the data and latent variables in group G_m under FGM as follows:
P(G_m, y_m, θ_m, β_m|Θ) = M(y_m|π) Dir(θ_m|α_{y_m}) ∏_k GIW(β_{m,k}|ω_k) ∏_n M(z_{mn}|θ_m) N(x_{mn}|β_{m,z_{mn}}).
By integrating out θ_m, β_m and summing out y_m, z, we get the marginal likelihood of G_m:

P(G_m|Θ) = ∑_t π_t ∫_{θ_m, β_m} Dir(θ_m|α_t) ∏_k GIW(β_{m,k}|ω_k) ∏_n ∑_k θ_{mk} N(x_{mn}|β_{m,k}) dβ_m dθ_m.

Finally, the data set's likelihood is just the product of all groups' likelihoods.
4 Inference and Learning
To learn FGM, we update the parameters Θ to maximize the likelihood of the data. The inferred latent states, including the topic distributions θ_m, the topics β_m, and the topic and genre memberships z_m, y_m, can be used for detecting anomalies and exploring the data. Nonetheless, inference and learning in FGM are intractable, so we train FGM using an approximate method described below.
4.1 Inference
The approximate inference of the latent variables can be done using Gibbs sampling [11]. In Gibbs
sampling, we iteratively update one variable at a time by drawing samples from its conditional
distribution while all the other variables are held fixed. Thanks to the use of conjugate distributions,
Gibbs sampling in FGM is simple and easy to implement. The sampling distributions of the latent
variables in group m are given below. We use P(·|-) to denote the distribution of one variable conditioned on all the others. For the genre membership y_m we have:

P(y_m = t|-) ∝ P(θ_m|α_t) P(y_m = t|π) = π_t Dir(θ_m|α_t).
For the topic distribution θ_m:

P(θ_m|-) ∝ P(z_m|θ_m) P(θ_m|α, y_m) = M(z_m|θ_m) Dir(θ_m|α_{y_m}) = Dir(α_{y_m} + n_m),

where n_m denotes the histogram of the K values in the vector z_m. The last equation follows from the Dirichlet-multinomial conjugacy. For β_{m,k}, the kth topic in group m, one can find that:
P(β_{m,k}|-) ∝ P(x_m^{(k)}|β_{m,k}) P(β_{m,k}|ω_k) = N(x_m^{(k)}|β_{m,k}) GIW(β_{m,k}|ω_k) = GIW(β_{m,k}|ω_k'),
where x_m^{(k)} denotes the points in group G_m from topic k, i.e., those with z_{m,n} = k. The last equation follows from the Gaussian-Inverse-Wishart-Gaussian conjugacy. ω_k' is the parameter of the posterior GIW distribution given x_m^{(k)}; its exact form can be found in the supplementary material. For z_{mn}, the topic membership of point n in group m:

P(z_{mn} = k|-) ∝ P(x_{mn}|z_{mn} = k, β_m) P(z_{mn} = k|θ_m) = θ_{m,k} N(x_{mn}|β_{m,k}).

4.2 Learning
Learning the parameters of FGM helps us identify the groups' and points' normal behaviors. Each of the genres α = {α_t}_{t=1,...,T} captures one typical distribution of topic distributions as θ ~ Dir(α_t). The topic generators Ω = {ω_k}_{k=1,...,K} determine what the normal topics {β_{m,k}} should look like. We use single-sample Monte Carlo EM [12] to learn the parameters from the samples provided by the Gibbs sampler. Given sampled latent variables, we update the parameters to their maximum likelihood estimates (MLE): we learn α from y and θ; Ω from β; and π from y.
π can easily be estimated from the histogram of the y's. α_t is learned by the MLE of a Dirichlet distribution given the multinomials {θ_m | y_m = t, m = 1, . . . , M} (i.e., the topic distributions having genre t), which can be solved using the Newton-Raphson method [18]. The kth topic-generator's parameter ω_k = {μ_{0k}, κ_{0k}, ν_{0k}, Σ_{0k}} is the MLE of a GIW distribution given the parameters {β_{m,k} = (μ_{m,k}, Σ_{m,k})}_{m=1,...,M} (the kth topics of all groups). We have derived an efficient solution for this MLE problem; the details can be found in the supplementary material.
The overall learning algorithm works by repeating the following procedure until convergence: (1)
do Gibbs sampling to infer the states of the latent variables; (2) update the model parameters using
the estimations above. To select appropriate values for the parameters T and K (the number of
genres and topics), we can apply the Bayesian information criterion (BIC) [19], or use the values
that maximize the likelihood of a held-out validation set.
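The alternation above can be sketched with the two conjugate Gibbs updates that keep the E-step cheap, here only for θ_m and y_m, followed by the MLE update of π. The per-group topic-count histograms n are faked, and the α and Ω updates are omitted; everything is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

def dirichlet_logpdf(theta, a):
    """log Dir(theta | a)."""
    return (lgamma(a.sum()) - sum(lgamma(ai) for ai in a)
            + float(((a - 1) * np.log(theta)).sum()))

# Toy setup: T=2 genres over K=3 topics, 8 groups with fake topic histograms n[m].
alpha = np.array([[5., 5., 5.], [9., .5, .5]])
pi = np.array([0.5, 0.5])
n = rng.poisson(30, size=(8, 3)).astype(float)

def gibbs_sweep(y, theta):
    for m in range(len(n)):
        # theta_m | -  ~  Dir(alpha_{y_m} + n_m)   (Dirichlet-multinomial conjugacy)
        theta[m] = rng.dirichlet(alpha[y[m]] + n[m])
        # y_m | -  proportional to  pi_t * Dir(theta_m | alpha_t)
        logp = np.log(pi) + [dirichlet_logpdf(theta[m], a) for a in alpha]
        p = np.exp(logp - logp.max()); p /= p.sum()
        y[m] = rng.choice(2, p=p)
    return y, theta

y = rng.choice(2, size=8)
theta = rng.dirichlet(np.ones(3), size=8)
for _ in range(5):                              # E-step: a few Gibbs sweeps
    y, theta = gibbs_sweep(y, theta)
    pi = (np.bincount(y, minlength=2) + 1e-9) / (len(y) + 2e-9)  # M-step for pi
```

Because both conditionals are closed-form, one sweep costs only a handful of Dirichlet draws per group, which is why the conjugate design matters.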
5 Scoring Criteria
The learned FGM model can easily be used for anomaly detection on test data. Given a test group,
we first infer its latent variables, including the topics and the topic distribution. Then we treat these latent states as the group's characteristics and examine whether they are compatible with the normal behaviors defined by the FGM parameters.
Point-based group anomalies can be detected by examining the topics of the groups. If a group contains anomalous points with rare feature values x_{mn}, then the topics {β_{m,k}}_{k=1}^K that generate these points will deviate from the normal behavior defined by the topic generators Ω. Let P(β_m|Ω) = ∏_{k=1}^K GIW(β_{m,k}|ω_k). The point-based anomaly score (PB score) of group G_m is

E_{β_m}[-ln P(β_m|Ω)] = -∫ P(β_m|Θ, G_m) ln P(β_m|Ω) dβ_m.

The posterior P(β_m|Θ, G_m) can again be approximated using Gibbs sampling, and the expectation can be computed by Monte Carlo integration.
Distribution-based group anomalies can be detected by examining the topic distributions. The genres {α_t}_{t=1,...,T} capture the typical distributions of topic distributions. If a group's topic distribution θ_m is unlikely to be generated from any of these genres, we call it anomalous. Let P(θ_m|Θ) = ∑_{t=1}^T π_t Dir(θ_m|α_t). The distribution-based anomaly score (DB score) of group G_m is defined as

E_{θ_m}[-ln P(θ_m|Θ)] = -∫ P(θ_m|Θ, G_m) ln P(θ_m|Θ) dθ_m.   (1)
Again, this expectation can be approximated using Gibbs sampling and Monte Carlo integration.
Using a combination of the point-based and distribution-based scores, we can detect both point-based and distribution-based group anomalies.
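In practice the DB score of Eq. (1) reduces to averaging -ln P(θ|Θ) over posterior samples of θ_m. A sketch (hypothetical genre parameters; posterior Gibbs samples replaced by draws from fixed Dirichlets so the example is self-contained):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

def dirichlet_logpdf(theta, a):
    """log Dir(theta | a)."""
    return (lgamma(a.sum()) - sum(lgamma(ai) for ai in a)
            + float(((a - 1) * np.log(theta)).sum()))

def db_score(theta_samples, pi, alpha):
    """Monte Carlo estimate of E[-ln P(theta|Theta)] over samples of theta_m,
    where P(theta|Theta) is the mixture of genre Dirichlets."""
    scores = []
    for theta in theta_samples:
        lp = np.array([np.log(pi[t]) + dirichlet_logpdf(theta, alpha[t])
                       for t in range(len(pi))])
        m = lp.max()
        scores.append(-(m + np.log(np.exp(lp - m).sum())))  # -log-sum-exp
    return float(np.mean(scores))

pi = np.array([0.5, 0.5])
alpha = np.array([[30., 30., 30.], [80., 8., 8.]])        # two made-up genres
normal_samples = rng.dirichlet([33, 33, 33], size=50)     # near genre 0
anomalous_samples = rng.dirichlet([8, 80, 8], size=50)    # matches no genre
score_normal = db_score(normal_samples, pi, alpha)
score_anomalous = db_score(anomalous_samples, pi, alpha)
```

A group whose topic distribution sits far from every genre receives a much larger score, which is the detection signal used in the experiments.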
6 Experiments
In this section we provide empirical results produced by FGM on synthetic and real data. We show
that FGM outperforms several state-of-the-art competitors in the group anomaly detection task.
6.1 Synthetic Data
In the first experiment, we compare FGM with the Mixture of Gaussian Mixture Model
(MGMM) [13] and with an adaptation of the Theme Model (ThM) [15] on synthetic data sets. The
original ThM handles only discrete data and was proposed for clustering. To handle continuous data
and detect anomalies, we modified it by using Gaussian topics and applied the distribution-based
anomaly scoring function (1). To detect both distribution-based and point-based anomalies, we can
use the data's likelihood under ThM as the scoring function.
Using the synthetic data sets described below, we can demonstrate the behavior of the different
models and scoring functions. We generated the data using 2-dimensional GMMs as in [13]. Here
each group has a GMM to generate its points. All GMMs share three Gaussian components with
covariance 0.2 · I_2 and centered at the points (-1.7, -1), (1.7, -1), and (0, 2), respectively. A group's mixing weights are randomly chosen from w1 = [0.33, 0.33, 0.33] or w2 = [0.84, 0.08, 0.08]. Thus, a group is normal if its points are sampled from these three Gaussians and their mixing weights are close to either w1 or w2. To test the detectors, we injected both point-based and distribution-based anomalies. Point-based anomalies were groups of points sampled from N((0, 0), I_2). Distribution-based anomalies were generated by GMMs consisting of normal Gaussian components but with mixing weights [0.33, 0.64, 0.03] and [0.08, 0.84, 0.08], which were different from w1 and w2. We generated M = 50 groups, each of which had N_m ~ Poisson(100) points. One point-based
anomalous group and two distribution-based anomalous groups were injected into the data set.
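This synthetic generator is straightforward to reproduce; a sketch following the description above (the minus signs on the first two centers are assumed from context, and w1 is normalized to sum exactly to one):

```python
import numpy as np

rng = np.random.default_rng(4)

centers = np.array([[-1.7, -1.0], [1.7, -1.0], [0.0, 2.0]])  # shared Gaussian topics
w1, w2 = np.full(3, 1 / 3), np.array([0.84, 0.08, 0.08])     # normal mixing weights

def make_group(weights):
    """One group: Poisson(100) points from the 3-component GMM."""
    n = rng.poisson(100)
    z = rng.choice(3, size=n, p=weights)
    return centers[z] + np.sqrt(0.2) * rng.standard_normal((n, 2))

groups = [make_group((w1, w2)[rng.integers(2)]) for _ in range(47)]
groups.append(rng.standard_normal((rng.poisson(100), 2)))    # point-based anomaly
groups.append(make_group(np.array([0.33, 0.64, 0.03])))      # distribution-based
groups.append(make_group(np.array([0.08, 0.84, 0.08])))      # distribution-based
```

Note how the point-based anomaly deliberately lands between the three normal topics, which is what fools purely distribution-based detectors in the discussion below.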
The detection results of MGMM, ThM, and FGM are shown in Fig. 2. We show 12 out of the
50 groups. Normal groups are surrounded by black solid boxes, point-based anomalies have green
dashed boxes, and distribution-based anomalies have red/magenta dashed boxes. Points are colored by the anomaly scores of the groups (darker color means more anomalous). An ideal detector
would make the dashed boxes' points dark and the solid boxes' points light gray. We can see that all the
Figure 2: Detection results on synthetic data (panels: MGMM, ThM, FGM, ThM-Likelihood).
models can find the distribution-based anomalies since they are able to learn the topic distributions.
However, MGMM and ThM miss the point-based anomaly. The explanation is simple; the anomalous points are distributed in the middle of the topics, thus the inferred topic distribution is around
[0.33, 0.33, 0.33], which is exactly w1 . As a result, MGMM and ThM infer this group to be normal,
although it is not. This example shows one possible problem of scoring groups based on topic distributions only. In contrast, using the sum of the point-based and distribution-based scores, FGM
found all of the group anomalies thanks to its ability to characterize groups both at the point-level
and the group-level. We also show the result of scoring the groups by the ThM likelihood. Only
point anomalies are found. This is because the data likelihood under ThM is dominated by the
anomalousness of points, thus a few eccentric points will overshadow group-level behaviors.
Figures 3(a)-3(c) show the density estimates given by MGMM, ThM, and FGM, respectively, for
the point-based anomalous group. We can see that FGM gives a better estimation due to its adaptive
topics, while MGMM and ThM are limited to using their global topics. Figure 3(d) shows the learned genres visualized as the distribution ∑_{t=1}^T π_t Dir(θ|α_t) on the topic simplex. This distribution summarizes the normal topic distributions in this data set. Observe that the two peaks in the probability simplex are indeed very close to w1 and w2.
Figure 3: (a), (b), (c) show the density of the point-based anomaly estimated by MGMM, ThM, and FGM, respectively. In MGMM and ThM, topics must be shared globally, so they perform badly. (d) The genres in the synthetic data set learned by FGM.
6.2 Image Data
In this experiment we test the performance of our method on detecting anomalous scene images. We
use the data set from [15]. We selected the first 100 images from the categories "mountain", "coast", and "inside city". These 300 images are randomly divided: 80% are used for training and the rest
for testing. We created anomalies by stitching random normal test images from different categories.
For example, an anomaly may be a picture that is half mountain and half city street. These anomalies are challenging since they have the same local patches as the normal images. We mixed the
anomalies with normal test images and asked the detectors to find them. Some examples are shown
in Fig. 4(a). The images are represented as in [15]: we treat each of them as a group of local points.
On each image we randomly sample 100 patches, on each patch extract the 128-dimensional SIFT
feature [20], and then reduce its dimension to 2 using PCA. Points near the stitching boundaries are
discarded to avoid boundary artifacts.
We compare FGM with several other methods. We implemented a simple detector based on Gaussian
mixture models (GMM); it is able to detect point-based anomalies. This method fits a GMM to all
data points, calculates the points? scores as their likelihood under this GMM, and finally scores
a group by averaging these numbers. To be able to detect distribution-based anomalies, we also
implemented two other competitors. The first one, called LDA-KNN, uses LDA to estimate the topic
distributions of the groups and treats these topic distributions (vector parameters of multinomials)
as the groups' features. Then, a k-nearest-neighbor (KNN) based point detector [21] is used to score the groups' features. The second method uses symmetrized Kullback-Leibler (KL) divergences
between densities (DD). For each group, DD uses a GMM to estimate the distribution of its points.
Then KL divergences between these GMMs are estimated using Monte Carlo method, and then the
KNN-based detector is used to find anomalous GMMs (i.e., groups).
For all algorithms we used K = 8 topics and T = 6 genres as it was suggested by BIC searches. We
set ?0 = ?0 = 200 for FGM. The performance is measured by the area under the ROC curve (AUC)
of retrieving the anomalies from the test set. In the supplementary material we also show results
using the average precision performance measure. The performances from 30 random runs are
shown in Figure 4(b). GMM cannot detect the group anomalies that do not have anomalous points.
The performance of LDA-KNN was also close to the 50% random baseline. A possible reason is
that the KNN detector did not perform well in the K = 8 dimensional space. MGMM, ThM, and
FGM show improvements over the random baseline, and FGM achieves significantly better results
than the others: the paired t-test gives a p-value of 1.6 × 10^-5 for FGM vs. ThM. We can also see that
the DD method performs poorly possibly due to many error-prone steps including fitting the GMMs
and estimating divergences using Monte Carlo method.
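For reference, the AUC used above can be computed directly from anomaly scores via the rank statistic "probability that a random anomaly out-scores a random normal group" (a minimal illustration, not tied to any particular library):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve: probability that a randomly chosen anomaly
    receives a higher score than a randomly chosen normal group (ties 1/2)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

With this convention an AUC of 0.5 is the random baseline quoted in the text, and 1.0 means the anomalies are perfectly ranked above the normal groups.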
6.3 Turbulence Data
We present an exploratory study of detecting group anomalies on turbulence data from the JHU Turbulence Database Cluster (TDC) [22]. TDC simulates fluid motion through time on a 3-dimensional grid, and here we perform our experiment on a continuous 128^3 sub-grid. In each time step and each
2. http://turbulence.pha.jhu.edu
Figure 4: Detection of stitched images. (a) Image samples. Green boxes (first row) contain natural
images, and yellow boxes (second row) contain stitched anomalies. (b) The detection AUCs.
vertex of the grid, TDC records the 3-dimensional velocity of the fluid. We consider the vertices in a
local cubic region as a group, and the goal is to find groups of vertices whose velocity distributions
(i.e., moving patterns) are unusual and potentially interesting. The following steps were used to extract the groups: (1) We chose the {(8i, 8j, 8k)}_{i,j,k} grid points as the centers of our groups. Around these centers, the points in 7^3-sized cubes formed our groups. (2) The feature of a point in the cube was its velocity relative to the velocity at its cube's center point. After these pre-processing steps, we had M = 4,096 groups, each of which had 342 3-dimensional feature vectors.
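The group-extraction steps can be sketched as follows (a random stand-in velocity field; the offsets keep every 7^3 cube in bounds, and unlike the paper's pipeline the trivially-zero center feature is kept here, giving 343 rather than 342 features per group):

```python
import numpy as np

rng = np.random.default_rng(5)
vel = rng.standard_normal((128, 128, 128, 3)).astype(np.float32)  # stand-in field

def extract_groups(vel, stride=8, half=3):
    """Cut local cubes of side 2*half+1 around every stride-th vertex and use
    velocities relative to the cube's center vertex as the point features."""
    out = []
    n = vel.shape[0]
    for i in range(half, n - half, stride):
        for j in range(half, n - half, stride):
            for k in range(half, n - half, stride):
                cube = vel[i-half:i+half+1, j-half:j+half+1, k-half:k+half+1]
                out.append(cube.reshape(-1, 3) - vel[i, j, k])  # relative velocity
    return out

groups = extract_groups(vel)
```

With a 128^3 grid and stride 8 this yields 16^3 = 4,096 groups, matching the count in the text.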
We applied MGMM, ThM, and FGM to find anomalies in this group data. T = 4 genres and K = 6
topics were used for all methods. We do not have ground truth for anomalies in this data set. However, we can compute the "vorticity score" [23] for each vertex, which indicates the tendency of the fluid to "spin". Vortices and especially their interactions are uncommon and of great interest in
the field of fluid dynamics. This vorticity can be considered a hand-crafted anomaly score based
on expert knowledge of this fluid data. We do not want an anomaly detector to match this score
perfectly because there are other "non-vortex" anomalous events it should find as well. However,
we do think higher correlation with this score indicates better anomaly detection performance.
Figure 5 visualizes the anomaly scores of FGM and the vorticity. We can see that these pictures are
highly correlated, which implies that FGM was able to find interesting turbulence activities based on
velocity only and without using the definition of vorticity or any other expert knowledge. Correlation
values between vorticity and the MGMM, ThM, and FGM scores from 20 random runs are displayed
in Fig. 5(c), showing that FGM is better at finding regions with high vorticity.
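The evaluation idea above, correlating a detector's anomaly scores with a hand-crafted reference score, can be illustrated with stand-in data (all values here are invented, not taken from the turbulence experiment):

```python
import numpy as np

# Stand-ins for per-vertex vorticity and a detector's anomaly scores.
rng = np.random.default_rng(1)
vorticity = rng.random(1000)
score = vorticity + 0.5 * rng.standard_normal(1000)  # noisy agreement

r = np.corrcoef(score, vorticity)[0, 1]
print(r > 0.0)  # True: the detector's scores agree with the vorticity
```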
[Figure 5(c) axes: correlation with vorticity for MGMM, ThM, FGM-DB]
Figure 5: Detection results for the turbulence data. (a) & (b) FGM-DB anomaly score and vorticity
visualized on one slice of the cube. (c) Correlations of the anomaly scores with the vorticity.
Conclusion
We presented the generative Flexible Genre Model (FGM) for the group anomaly detection problem.
Compared to traditional topic models, FGM is able to characterize groups' behaviors at multiple
levels. This detailed characterization makes FGM an ideal tool for detecting different types of group
anomalies. Empirical results show that FGM achieves better performance than existing approaches.
In the future, we will examine other possibilities as well. For model selection, we can extend FGM
by using nonparametric Bayesian techniques such as hierarchical Dirichlet processes [24]. It would
also be interesting to study structured groups in which the exchangeability assumption is not valid.
References
[1] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41(3), 2009.
[2] Geoffrey G. Hazel. Multivariate Gaussian MRF for multispectral scene segmentation and anomaly detection. IEEE Trans. Geoscience and Remote Sensing, 38(3):1199–1211, 2000.
[3] Kaustav Das, Jeff Schneider, and Daniel Neill. Anomaly pattern detection in categorical datasets. In Knowledge Discovery and Data Mining (KDD), 2008.
[4] Kaustav Das, Jeff Schneider, and Daniel Neill. Detecting anomalous groups in categorical datasets. Technical Report 09-104, CMU-ML, 2009.
[5] Philip K. Chan and Matthew V. Mahoney. Modeling multiple time series for anomaly detection. In IEEE International Conference on Data Mining, 2005.
[6] Eamonn Keogh, Jessica Lin, and Ada Fu. HOT SAX: Efficiently finding the most unusual time series subsequence. In IEEE International Conference on Data Mining, 2005.
[7] G. Mark Voit. Tracing cosmic evolution with clusters of galaxies. Reviews of Modern Physics, 77(1):207–258, 2005.
[8] B. de Finetti. Funzione caratteristica di un fenomeno aleatorio. Atti della R. Accademia Nazionale dei Lincei, Serie 6, Memorie, Classe di Scienze Fisiche, Matematiche e Naturali, 4, 1931.
[9] Thomas Hofmann. Unsupervised learning with probabilistic latent semantic analysis. Machine Learning Journal, 2001.
[10] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[11] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, 6:721–741, 1984.
[12] Gilles Celeux, Didier Chauveau, and Jean Diebolt. Stochastic versions of the EM algorithm: An experimental study in the mixture case. J. of Statistical Computation and Simulation, 55, 1996.
[13] Liang Xiong, Barnabás Póczos, and Jeff Schneider. Hierarchical probabilistic models for group anomaly detection. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[14] Mikaela Keller and Samy Bengio. Theme-topic mixture model for document representation. In Learning Methods for Text Understanding and Mining, 2004.
[15] Li Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. IEEE Conf. CVPR, pages 524–531, 2005.
[16] Gabriel Doyle and Charles Elkan. Accounting for burstiness in topic models. In International Conference on Machine Learning, 2009.
[17] Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2003.
[18] Thomas P. Minka. Estimating a Dirichlet distribution. http://research.microsoft.com/en-us/um/people/minka/papers/dirichlet, 2009.
[19] Gideon E. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6(2):461–464, 1978.
[20] David G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[21] Manqi Zhao. Anomaly detection with score functions based on nearest neighbor graphs. In NIPS, 2009.
[22] E. Perlman, R. Burns, Y. Li, and C. Meneveau. Data exploration of turbulence simulations using a database cluster. In Supercomputing (SC), 2007.
[23] Charles Meneveau. Lagrangian dynamics and models of the velocity gradient tensor in turbulent flows. Annual Review of Fluid Mechanics, 43:219–245, 2011.
[24] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
A competitive modular connectionist architecture
Robert A. Jacobs and Michael I. Jordan
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
We describe a multi-network, or modular, connectionist architecture that
captures that fact that many tasks have structure at a level of granularity
intermediate to that assumed by local and global function approximation
schemes. The main innovation of the architecture is that it combines
associative and competitive learning in order to learn task decompositions.
A task decomposition is discovered by forcing the networks comprising the
architecture to compete to learn the training patterns. As a result of the
competition, different networks learn different training patterns and, thus,
learn to partition the input space. The performance of the architecture on
a "what" and "where" vision task and on a multi-payload robotics task
are presented.
1 INTRODUCTION
A dichotomy has arisen in recent years in the literature on nonlinear network learning rules between local approximation of functions and global approximation of
functions. Local approximation, as exemplified by lookup tables, nearest-neighbor
algorithms, and networks with units having local receptive fields, has the advantage
of requiring relatively few learning trials and tends to yield interpretable representations. Global approximation, as exemplified by polynomial regression and
fully-connected networks with sigmoidal units, has the advantage of requiring less
storage capacity than local approximators and may yield superior generalization.
In this paper, we report a multi-network, or modular, connectionist architecture
that captures the fact that many tasks have structure at a level of granularity
intermediate to that assumed by local and global approximation schemes. It does so
[Figure 1: expert networks 1 and 2 produce outputs y1 and y2; a gating network produces g1 and g2, and the combined output is y = g1 y1 + g2 y2]
Figure 1: A Modular Connectionist Architecture
by combining the desirable features of the approaches embodied by these disparate
approximation schemes. In particular, it uses different networks to learn training
patterns from different regions of the input space. Each network can itself be a
local or global approximator for a particular region of the space.
2 A MODULAR CONNECTIONIST ARCHITECTURE
The technical issues addressed by the modular architecture are twofold: (a) detecting that different training patterns belong to different tasks and (b) allocating
different networks to learn the different tasks. These issues are addressed in the
architecture by combining aspects of competitive learning and associative learning.
Specifically, task decompositions are encouraged by enforcing a competition among
the networks comprising the architecture. As a result of the competition, different networks learn different training patterns and, thus, learn to compute different
functions. The architecture was first presented in Jacobs, Jordan, Nowlan, and Hinton (1991), and combines earlier work on learning task decompositions in a modular
architecture by Jacobs, Jordan, and Barto (1991) with the mixture models view of
competitive learning advocated by Nowlan (1990) and Hinton and Nowlan (1990).
The architecture is also presented elsewhere in this volume by Nowlan and Hinton (1991).
The architecture, which is illustrated in Figure 1, consists of two types of networks:
expert networks and a gating network. The expert networks compete to learn the
training patterns and the gating network mediates this competition. Whereas the
expert networks have an arbitrary connectivity, the gating network is restricted to
have as many output units as there are expert networks, and the activations of these
output units must be nonnegative and sum to one. To meet these constraints, we
use the "softmax" activation function (Bridle, 1989); specifically, the activation of
the ith output unit of the gating network, denoted g_i, is

    g_i = \frac{e^{s_i}}{\sum_{j=1}^{n} e^{s_j}}    (1)
where s_i denotes the weighted sum of unit i's inputs and n denotes the number of
expert networks. The output of the entire architecture, denoted y, is
    y = \sum_{i=1}^{n} g_i y_i    (2)
where y_i denotes the output of the ith expert network. During training, the
weights of the expert and gating networks are adjusted simultaneously using the
backpropagation algorithm (le Cun, 1985; Parker, 1985; Rumelhart, Hinton, and
Williams, 1986; Werbos, 1974) so as to maximize the function

    \ln L = \ln \sum_{i=1}^{n} g_i \, e^{-\|y^* - y_i\|^2 / (2\sigma_i^2)}    (3)

where y* denotes the target vector and σ_i denotes a scaling parameter associated
with the ith expert network.
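As a hedged numerical sketch of Equations 1-3 (the gating inputs, expert outputs, target, and scaling parameters below are invented for illustration, not taken from the paper):

```python
import numpy as np

def softmax(s):
    # Equation 1: nonnegative gating activations that sum to one.
    e = np.exp(s - s.max())        # subtracting the max improves stability
    return e / e.sum()

s = np.array([2.0, 1.0, 0.1])      # weighted input sums of the gating units
g = softmax(s)                     # a priori probabilities g_i

y_experts = np.array([[1.0, 0.0],  # outputs y_i of three expert networks
                      [0.0, 1.0],
                      [0.5, 0.5]])
y = g @ y_experts                  # Equation 2: y = sum_i g_i y_i

y_star = np.array([0.9, 0.1])      # target vector y*
sigma2 = np.ones(3)                # scaling parameters sigma_i^2
lik = g * np.exp(-np.sum((y_star - y_experts) ** 2, axis=1) / (2 * sigma2))
ln_L = np.log(lik.sum())           # Equation 3: log likelihood to maximize
```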
This architecture is best understood if it is given a probabilistic interpretation as an
"associative gaussian mixture model" (see Duda and Hart (1973) and McLachlan
and Basford (1988) for a discussion of non-associative gaussian mixture models).
Under this interpretation, the training patterns are assumed to be generated by
a number of different probabilistic rules. At each time step, a rule is selected
with probability gi and a training pattern is generated by the rule. Each rule is
characterized by a statistical model of the form y* = f_i(x) + ε_i, where f_i(x) is a fixed
nonlinear function of the input vector, denoted x, and ε_i is a random variable. If it
is assumed that ε_i is gaussian with covariance matrix σ_i²I, then the residual vector
y* - y_i is also gaussian and the cost function in Equation 3 is the log likelihood of
generating a particular target vector y*.
The goal of the architecture is to model the distribution of training patterns. This is
achieved by gradient ascent in the log likelihood function. To compute the gradient
consider first the partial derivative of the log likelihood with respect to the weighted
sum s_i at the ith output unit of the gating network. Using the chain rule and
Equation 1 we find that this derivative is given by:

    \frac{\partial \ln L}{\partial s_i} = g(i \mid x, y^*) - g_i    (4)

where g(i | x, y*) is the a posteriori probability that the ith expert network generates
the target vector:

    g(i \mid x, y^*) = \frac{g_i \, e^{-\|y^* - y_i\|^2 / (2\sigma_i^2)}}{\sum_{j=1}^{n} g_j \, e^{-\|y^* - y_j\|^2 / (2\sigma_j^2)}}    (5)
Thus the weights of the gating network are adjusted so that the network's outputs (the a priori probabilities g_i) move toward the a posteriori probabilities.

Consider now the gradient of the log likelihood with respect to the output of the
ith expert network. Differentiation of ln L with respect to y_i yields:

    \frac{\partial \ln L}{\partial y_i} = g(i \mid x, y^*) \, \frac{y^* - y_i}{\sigma_i^2}    (6)
These derivatives involve the error term y* - y_i weighted by the a posteriori probability associated with the ith expert network. Thus the weights of the network
are adjusted to correct the error between the output of the ith network and the
global target vector, but only in proportion to the a posteriori probability. For each
input vector, typically only one expert network has a large a posteriori probability.
Consequently, only one expert network tends to learn each training pattern. In
general, different expert networks learn different training patterns and, thus, learn
to compute different functions.
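A self-contained numerical sketch of Equations 4-6 (the gating probabilities, expert outputs, and target are invented values, chosen so the first expert is closest to the target):

```python
import numpy as np

g = np.array([0.5, 0.3, 0.2])                     # a priori gating outputs g_i
y_i = np.array([[1.0, 0.0],                       # expert outputs y_i
                [0.0, 1.0],
                [0.5, 0.5]])
y_star = np.array([0.9, 0.1])                     # target vector y*
sigma2 = np.ones(3)                               # sigma_i^2

lik = g * np.exp(-np.sum((y_star - y_i) ** 2, axis=1) / (2 * sigma2))
post = lik / lik.sum()           # Equation 5: a posteriori g(i | x, y*)

grad_s = post - g                # Equation 4: d ln L / d s_i
grad_y = post[:, None] * (y_star - y_i) / sigma2[:, None]   # Equation 6

# The posterior concentrates on the expert closest to the target, so that
# expert receives most of the error signal and tends to learn this pattern.
print(post.argmax())  # 0
```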
3 THE WHAT AND WHERE VISION TASKS
We applied the modular connectionist architecture to the object recognition task
("what" task) and spatial localization task ( "where" task) studied by Rueckl, Cave,
and Kosslyn (1989).1 At each time step of the simulation, one of nine objects is
placed at one of nine locations on a simulated retina. The "what" task is to identify
the object; the "where" task is to identify its location.
The modular architecture is shown in Figure 2. It consists of three expert networks
and a gating network. The expert networks receive the retinal image and a task
specifier indicating whether the architecture should perform the "what" task or
the "where" task at the current time step. The gating network receives the task
specifier. The first expert network contains 36 hidden units, the second expert
network contains 18 hidden units, and the third expert network doesn't contain any
hidden units (i.e., it is a single-layer network).
There are at least three ways that this modular architecture might successfully learn
the "what" and "where" tasks. One of the multi-layer expert networks could learn
to perform both tasks. Because this solution doesn't show any task decomposition,
we consider it to be unsatisfactory. A second possibility is that one of the multi-layer expert networks could learn the "what" task, and the other multi-layer expert
network could learn the "where" task. Although this solution exhibits task decomposition, a shortcoming of this solution is apparent when it is noted that, using the
retinal images designed by Rueckl et al. (1989), the "where" task is linearly separable. This means that the structure of the single-layer expert network most closely
matches the "where" task. Consequently, a third and possibly best solution would
be one in which one of the multi-layer expert networks learns the "what" task and
the single-layer expert network learns the "where" task. This solution would not
only show task decomposition but also the appropriate allocation of tasks to expert
networks. Simulation results show that the third possible solution is the one that
1 For a detailed presentation of the application of an earlier modular architecture to the
"what" and "where" tasks see Jacobs, Jordan, and Barto (1991).
[Figure 2: the retinal image and a task specifier ("what" or "where") feed three expert networks with 36, 18, and 0 hidden units and 9 output units each; the task specifier also feeds the gating network]
Figure 2: The Modular Architecture Applied to the What and Where Tasks
is always achieved. These results provide evidence that the modular architecture
is capable of allocating a different network to different tasks and of allocating a
network with an appropriate structure to each task.
4 THE MULTI-PAYLOAD ROBOTICS TASK
When designing a compensator for a nonlinear plant, control engineers frequently
find it impossible or impractical to design a continuous control law that is useful
in all the relevant regions of a plant's parameter space. Typically, the solution to
this problem is to use gain scheduling; if it is known how the dynamics of a plant
change with its operating conditions, then it may be possible to design a piecewise
controller that employs different control laws when the plant is operating under
different conditions. From our viewpoint, gain scheduling is an attractive solution
because it involves task decomposition. It circumvents the problem of determining
a fixed global model of the plant dynamics. Instead, the dynamics are approximated
using local models that vary with the plant's operating conditions.
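As a toy sketch of the gain-scheduling idea described above (the regimes, gain values, and the use of payload mass as the scheduling variable are all invented for illustration):

```python
# A piecewise controller that switches its feedback gain based on the
# plant's operating condition (here, the payload mass).
def scheduled_gain(load_mass):
    if load_mass < 1.0:       # light-payload regime
        return 2.0
    if load_mass < 5.0:       # medium
        return 5.0
    return 9.0                # heavy

def control(error, load_mass):
    # Each regime uses its own local proportional law.
    return scheduled_gain(load_mass) * error

print(control(0.5, 0.5), control(0.5, 10.0))  # 1.0 4.5
```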
Task decomposition is a useful strategy not only when the control law is designed,
but also when it is learned. We suggest that an ideal controller is one that, like gain
scheduled controllers, uses local models of the plant dynamics, and like learning
controllers, learns useful control laws despite uncertainties about the plant or environment. Because the modular connectionist architecture is capable of both task
decomposition and learning, it may be useful in achieving both of these desiderata.
[Figure 3: joint root mean square error (radians) vs. training epochs for the single network (SN) and the modular architecture (MA)]
Figure 3: Learning Curves for the Multi-Payload Robotics Task

We applied the modular architecture to the problem of learning a feedforward controller for a robotic arm in a multiple payload task.2 The task is to drive a simulated
two-joint robot arm with a variety of payloads, each of a different mass, along a
desired trajectory. The architecture is given the payload's identity (e.g., payload A
or payload B) but not its mass.
The modular architecture consisted of six expert networks and a gating network.
The expert networks received as input the state of the robot arm and the desired
acceleration. The gating network received the payload identity. We also trained a
single multi-layer network to perform this task. The learning curves for the two systems are shown in Figure 3. The horizontal axis gives the training time in epochs.
The vertical axis gives the joint root mean square error in radians. Clearly, the
modular architecture learned significantly faster than the single network. Furthermore, the modular architecture learned to perform the task by allocating different
expert networks to control the arm with payloads from different mass categories
(e.g., light, medium, or heavy payloads).
Acknowledgements
This research was supported by a postdoctoral fellowship provided to the first author
from the McDonnell-Pew Program in Cognitive Neuroscience, by funding provided
to the second author from the Siemens Corporation, and by NSF grant IRI-9013991
awarded to both authors.
2For a detailed presentation of the application of the modular architecture to the multiple payload robotics task see Jacobs and Jordan (1991).
References
Bridle, J. (1989) Probabilistic interpretation of feedforward classification network
outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulié & J. Hérault (Eds.), Neuro-computing: Algorithms, Architectures, and
Applications. New York: Springer-Verlag.
Duda, R.O. & Hart, P.E. (1973) Pattern Classification and Scene Analysis. New
York: John Wiley & Sons.
Hinton, G.E. & Nowlan, S.J. (1990) The bootstrap Widrow-Hoff rule as a cluster-formation algorithm. Neural Computation, 2, 355-362.
Jacobs, R.A. & Jordan, M.I. (1991) Learning piecewise control strategies in a modular connectionist architecture. Submitted to IEEE Transactions on Neural
Networks.
Jacobs, R.A., Jordan, M.I., & Barto, A.G. (1991) Task decomposition through
competition in a modular connectionist architecture: The what and where
vision tasks. Cognitive Science, in press.
Jacobs, R.A., Jordan, M.I., Nowlan, S.J., & Hinton, G.E. (1991) Adaptive mixtures
of local experts. Neural Computation, in press.
le Cun, Y. (1985) Une procédure d'apprentissage pour réseau à seuil asymétrique
[A learning procedure for asymmetric threshold network]. Proceedings of Cognitiva, 85, 599-604.
McLachlan, G.J. & Basford, K.E. (1988) Mixture Models: Inference and Applications to Clustering. New York: Marcel Dekker.
Nowlan, S.J. (1990) Maximum likelihood competitive learning. In D.S. Touretzky
(Ed.), Advances in Neural Information Processing Systems 2. San Mateo, CA:
Morgan Kaufmann Publishers.
Nowlan, S.J. & Hinton, G.E. (1991) Evaluation of an associative mixture architecture on a vowel recognition task. In R.P. Lippmann, J. Moody, & D .S.
Touretzky (Eds.), Advances in Neural Information Processing Systems 3. San
Mateo, CA: Morgan Kaufmann Publishers.
Parker, D.B. (1985) Learning logic. Technical Report TR-47, Massachusetts Institute of Technology, Cambridge, MA.
Rueckl, J .G., Cave, K.R., & Kosslyn, S.M. (1989) Why are "what" and "where"
processed by separate cortical visual systems? A computational investigation.
Journal of Cognitive Neuroscience, 1, 171-186.
Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986) Learning internal representations by error propagation. In D.E. Rumelhart, J.L. McClelland, & the
PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: The MIT
Press.
Werbos, P.J. (1974) Beyond Regression: New Tools for Prediction and Analysis in
the Behavioral Sciences. Ph.D. thesis, Harvard University, Cambridge, MA.
Budgeted Optimization with Concurrent
Stochastic-Duration Experiments
Javad Azimi, Alan Fern, Xiaoli Z. Fern
School of EECS, Oregon State University
{azimi, afern, xfern}@eecs.oregonstate.edu
Abstract
Budgeted optimization involves optimizing an unknown function that is costly to evaluate by requesting a limited number of function evaluations at intelligently selected inputs.
Typical problem formulations assume that experiments are selected one at a time with
a limited total number of experiments, which fail to capture important aspects of many
real-world problems. This paper defines a novel problem formulation with the following
important extensions: 1) allowing for concurrent experiments; 2) allowing for stochastic
experiment durations; and 3) placing constraints on both the total number of experiments
and the total experimental time. We develop both offline and online algorithms for selecting concurrent experiments in this new setting and provide experimental results on a
number of optimization benchmarks. The results show that our algorithms produce highly
effective schedules compared to natural baselines.
1 Introduction
We study the optimization of an unknown function f by requesting n experiments, each specifying an input
x and producing a noisy observation of f (x). In practice, the function f might be the performance of a device parameterized by x. We consider the setting where running experiments is costly (e.g. in terms of time),
which renders methods that rely on many function evaluations, such as stochastic search or empirical gradient methods, impractical. Bayesian optimization (BO) [8, 4] addresses this issue by leveraging Bayesian
modeling to maintain a posterior over the unknown function based on previous experiments. The posterior is
then used to intelligently select new experiments to trade-off exploring new parts of the experimental space
and exploiting promising parts.
Traditional BO follows a sequential approach where only one experiment is selected and run at a time.
However, it is often desirable to select more than one experiment at a time so that multiple experiments
can be run simultaneously to leverage parallel facilities. Recently, Azimi et al. (2010) proposed a batch BO
algorithm that selects a batch of k ≥ 1 experiments at a time. While this broadens the applicability of BO, it
is still limited to selecting a fixed number of experiments at each step. As such, prior work on BO, both batch
and sequential, completely ignores the problem of how to schedule experiments under fixed experimental
budget and time constraints. Furthermore, existing work assumes that the durations of experiments are
identical and deterministic, whereas in practice they are often stochastic.
Consider one of our motivating applications of optimizing the power output of nano-enhanced Microbial
Fuel Cells (MFCs). MFCs [3] use micro-organisms to generate electricity. Their performance depends
strongly on the surface properties of the anode [10]. Our problem involves optimizing nano-enhanced anodes, where various types of nano-structures, e.g. carbon nano-wire, are grown directly on the anode surface.
Because there is little understanding of how different nano-enhancements impact power output, optimizing
anode design is largely guess work. Our original goal was to develop BO algorithms for aiding this process.
However, many aspects of this domain complicate the application of BO. First, there is a fixed budget on
the number of experiments that can be run due to limited funds and a fixed time period for the project. Second, we can run multiple concurrent experiments, limited by the number of experimental apparatus. Third,
the time required to run each experiment is variable because each experiment requires the construction of a
nano-structure with specific properties. Nano-fabrication is highly unpredictable and the amount of time to
successfully produce a structure is quite variable. Clearly prior BO models fail to capture critical aspects of
the experimental process in this domain.
In this paper, we consider the following extensions. First, we have l available labs (which may correspond
to experimental stations at one location or to physically distinct laboratories), allowing up to l concurrent
experiments. Second, experiments have stochastic durations, independently and identically distributed according to a known density function pd . Finally, we are constrained by a budget of n total experiments and a
time horizon h by which point we must finish. The goal is to maximize the unknown function f by selecting
experiments and when to start them while satisfying the constraints.
We propose offline (Section 4) and online (Section 5) scheduling approaches for this problem, which aim
to balance two competing factors. First, a scheduler should ensure that all n experiments complete within
the horizon h, which encourages high concurrency. Second, we wish to select new experiments given as
many previously completed experiments as possible to make more intelligent experiment selections, which
encourages low concurrency. We introduce a novel measure of the second factor, cumulative prior experiments (CPE) (Section 3), which our approaches aim to optimize. Our experimental results indicate that these
approaches significantly outperform a set of baselines across a range of benchmark optimization problems.
2 Problem Setup
Let X ⊂ ℝ^d be a d-dimensional compact input space, where each dimension i is bounded in [a_i, b_i]. An element of X is called an experiment. An unknown real-valued function f : X → ℝ represents the expected value of the dependent variable after running an experiment. For example, f(x) might be the result of a wet-lab experiment described by x. Conducting an experiment x produces a noisy outcome y = f(x) + ε, where ε is a random noise term. Bayesian Optimization (BO) aims to find an experiment x ∈ X that approximately maximizes f by requesting a limited number of experiments and observing their outcomes.
We extend traditional BO algorithms and study the experiment scheduling problem. Assuming a known
density function pd for the experiment durations, the inputs to our problem include the total number of
available labs l, the total number of experiments n, and the time horizon h by which we must finish. The
goal is to design a policy ? for selecting when to start experiments and which ones to start to optimize f .
Specifically, the inputs to ? are the set of completed experiments and their outcomes, the set of currently
running experiments with their elapsed running time, the number of free labs, and the remaining time till the
horizon. Given this information, ? must select a set of experiments (possibly empty) to start that is no larger
than the number of free labs. Any run of the policy ends when either n experiments are completed or the
time horizon is reached, resulting in a set X of n or fewer completed experiments. The objective is to obtain
a policy with small regret, which is the expected difference between the optimal value of f and the value of
f for the predicted best experiment in X. In theory, the optimal policy can be found by solving a POMDP
with hidden state corresponding to the unknown function f . However, this POMDP is beyond the reach of
any existing solvers. Thus, we focus on defining and comparing several principled policies that work well
in practice, but without optimality guarantees. Note that this problem has not been studied in the literature
to the best of our knowledge.
3 Overview of General Approach
A policy for our problem must make two types of decisions: 1) scheduling when to start new experiments,
and 2) selecting the specific experiments to start. In this work, we factor the problem based on these decisions
and focus on approaches for scheduling experiments. We assume a black box function SelectBatch for
intelligently selecting the k ≥ 1 experiments based on both completed and currently running experiments.
The implementation of SelectBatch is described in Section 6.
Optimal scheduling to minimize regret appears to be computationally hard for non-trivial instances of SelectBatch. Further, we desire scheduling approaches that do not depend on the details of SelectBatch, but
work well for any reasonable implementation. Thus, rather than directly optimizing regret for a specific
SelectBatch, we consider the following surrogate criteria. First, we want to finish all n experiments within
the horizon h with high probability. Second, we would like to select each experiment based on as much
information as possible, measured by the number of previously completed experiments. These two goals are
at odds, since maximizing the completion probability requires maximizing concurrency of the experiments,
which minimizes the second criterion. Our offline and online scheduling approaches provide different ways
for managing this trade-off.
To quantify the second criterion, consider a complete execution E of a scheduler. For any experiment e in E, let prior_E(e) denote the number of experiments in E that completed before starting e. We define the cumulative prior experiments (CPE) of E as: CPE(E) = Σ_{e∈E} prior_E(e). Intuitively, a scheduler with a high expected CPE is desirable, since CPE measures the total amount of information SelectBatch uses to make its decisions.

[Figure 1: The correlation between CPE and regret for 30 different schedulers on two BO benchmarks (Cosines and Hydrogen); x-axis: CPE, y-axis: regret.]

CPE agrees with intuition when considering extreme policies. A poor scheduler that starts all n experiments at the same time (assuming enough labs) will have a minimum CPE of zero.
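In code, CPE is a one-pass count over an execution trace. A minimal sketch (the representation of an execution as (start, finish) pairs is ours, not the paper's):

```python
def cpe(execution):
    """Cumulative prior experiments: for each experiment, count how many
    others completed before it started, then sum over all experiments.
    `execution` is a list of (start_time, finish_time) pairs."""
    total = 0
    for start, _ in execution:
        total += sum(1 for _, finish in execution if finish <= start)
    return total

# The two extremes discussed in the text: starting all n experiments at
# once gives CPE 0, while a fully sequential run gives 0+1+...+(n-1).
batch = [(0, 1)] * 5                       # all five start together
sequential = [(i, i + 1) for i in range(5)]  # one after another
```

Running `cpe(batch)` yields 0 and `cpe(sequential)` yields 10 = 0+1+2+3+4, matching the extremes described above.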
Further, CPE is maximized by a scheduler that sequentially executes all experiments (assuming enough
time). However, in between these extremes, CPE fails to capture certain intuitive properties. For example,
CPE increases linearly in the number of prior experiments, while one might expect diminishing returns as
the number of prior experiments becomes large. Similarly, as the number of experiments started together
(the batch size) increases, we might also expect diminishing returns since SelectBatch must choose the
experiments based on the same prior experiments. Unfortunately, quantifying these intuitions in a general
way is still an open problem. Despite its potential shortcomings, we have found CPE to be a robust measure
in practice.
To empirically examine the utility of CPE, we conducted experiments on a number of BO benchmarks. For
each domain, we used 30 manually designed diverse schedulers, some started more experiments early on
than later, and vice-versa, while others included random and uniform schedules. We measured the average
regret achieved for each scheduler given the same inputs and the expected CPE of the executions. Figure 1
shows the results for two of the domains (other results are highly similar), where each point corresponds to
the average regret and CPE of a particular scheduler. We observe a clear and non-trivial correlation between
regret and CPE, which provides empirical evidence that CPE is a useful measure to optimize. Further, as we
will see in our experiments, the performance of our methods is also highly correlated with CPE.
4 Offline Scheduling
We now consider offline schedules, which assign start times to all n experiments before the experimental
process begins. Note that while the schedules are offline, the overall BO policy has online characteristics,
since the exact experiments to run are only specified when they need to be started by SelectBatch, based
on the most recent information. This offline scheduling approach is often convenient in real experimental
domains where it is useful to plan out a static equipment/personnel schedule for the duration of a project.
Below we first consider a restricted class of schedules, called staged schedules, for which we present a
solution that optimizes CPE. Next, we describe an approach for a more general class of schedules.
4.1 Staged Schedules
A staged schedule defines a consecutive sequence of N experimental stages, denoted by a sequence of tuples ⟨(n_i, d_i)⟩_{i=1}^{N}, where 0 < n_i ≤ l, Σ_i d_i ≤ h, and Σ_i n_i = n. Stage i begins by starting up n_i new experiments selected by SelectBatch using the most recent information, and ends after a duration of d_i, upon
which stage i + 1 starts. In some applications, staged schedules are preferable as they allow project planning
to focus on a relatively small number of time points (the beginning of each stage). While our approach tries
to ensure that experiments finish within their stage, experiments are never terminated and hence might run
longer than their specified duration. If, because of this, at the beginning of stage i there are not ni free labs,
the experiments will wait till labs free up.
We say that an execution E of a staged schedule S is safe if each experiment is completed within its specified
duration in S. We say that a staged schedule S is p-safe if with probability at least p an execution of S is safe
which provides a probabilistic guarantee that all n experiments complete within the horizon h. Further, it
ensures with probability p that the maximum number of concurrent experiments when executing S is max_i n_i
(since experiments from two stages will not overlap with probability p). As such, we are interested in finding
staged schedules that are p-safe for a user specified p, e.g. 95%. Meanwhile, we want to maximize CPE.
The CPE of any safe execution of S (slightly abusing notation) is: CPE(S) = Σ_{i=2}^{N} n_i · Σ_{j=1}^{i−1} n_j. Typical applications will use relatively high values of p, since otherwise experimental resources would be wasted, and
thus with high probability we expect the CPE of an execution of S to equal CPE(S).
Our goal is thus to maximize CPE(S) while ensuring p-safeness. It turns out that for any fixed number of
stages N , the schedules that maximize CPE(S) must be uniform. A staged schedule is defined to be uniform
if ∀i, j : |n_i − n_j| ≤ 1, i.e., the batch sizes across stages may differ by at most a single experiment.
Proposition 1. For any number of experiments n and labs l, let S_N be the set of corresponding N stage schedules, where N ≥ ⌈n/l⌉. For any S ∈ S_N, CPE(S) = max_{S′∈S_N} CPE(S′) if and only if S is uniform.
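Proposition 1 is easy to check numerically by enumerating all stage-size splits. A sketch using the closed form for CPE(S) above (helper names are ours):

```python
def cpe_staged(batch_sizes):
    """CPE(S) for a staged schedule: each stage of size n_i contributes
    n_i times the number of experiments completed in earlier stages
    (a safe execution is assumed)."""
    total, done = 0, 0
    for n_i in batch_sizes:
        total += n_i * done
        done += n_i
    return total

def all_splits(n, stages, cap):
    """All ways to split n experiments into `stages` batches of size 1..cap."""
    if stages == 1:
        return [[n]] if 0 < n <= cap else []
    return [[k] + rest
            for k in range(1, min(cap, n) + 1)
            for rest in all_splits(n - k, stages - 1, cap)]

# Proposition 1, checked for n = 10 experiments, 3 stages, l = 5 labs:
# the (near-)uniform split {3, 3, 4} maximizes CPE.
best = max(all_splits(10, 3, 5), key=cpe_staged)
```

Since Σ_i n_i is fixed, CPE(S) = ((Σ_i n_i)² − Σ_i n_i²)/2, so maximizing CPE is the same as minimizing Σ_i n_i², which the uniform split does.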
It is easy to verify that for a given n and l, an N stage uniform schedule achieves a strictly higher CPE than any N − 1 stage schedule.
with maximum number of stages allowed by the
p-safeness restriction. This motivates us to solve
the following problem: Find a p-safe uniform
schedule with maximum number of stages.
Algorithm 1 Algorithm for computing a p-safe uniform schedule with maximum number of stages.
Input: number of experiments (n), number of labs (l), horizon (h), safety probability (p)
Output: a p-safe uniform schedule with maximum number of stages
  N ← ⌈n/l⌉, S ← null
  loop
    S′ ← MaxProbUniform(N, n, l, h)
    if S′ is not p-safe then
      return S
    end if
    S ← S′, N ← N + 1
  end loop
Our approach, outlined in Algorithm 1, considers N stage schedules in order of increasing N, starting at the minimum possible number of stages N = ⌈n/l⌉ for running all experiments. For each value of N, the call to MaxProbUniform computes a uniform schedule S with the highest probability of a safe execution, among all N stage uniform schedules. If the resulting schedule is p-safe then we consider N + 1 stages. Otherwise, there is no uniform N stage schedule that is p-safe and we return a uniform N − 1 stage schedule, which was computed in the previous iteration.
It remains to describe the MaxProbUniform function, which computes a uniform N stage schedule S = ⟨(n_i, d_i)⟩_{i=1}^{N} that maximizes the probability of a safe execution. First, any N stage uniform schedule must have N′ = (n mod N) stages with n′ = ⌊n/N⌋ + 1 experiments and N − N′ stages with n′ − 1 experiments. Furthermore, the probability of a safe execution is invariant to the ordering of the stages, since we assume an i.i.d. distribution on the experiment durations. The MaxProbUniform problem is now reduced to computing the durations d_i of S that maximize the probability of safeness for each given n_i. For this we will assume that the distribution of the experiment duration p_d is log-concave, which allows us to characterize the solution using the following lemma.
Lemma 1. For any duration distribution p_d that is log-concave, if an N stage schedule S = ⟨(n_i, d_i)⟩_{i=1}^{N} is p-safe, then there is a p-safe N stage schedule S′ = ⟨(n_i, d′_i)⟩_{i=1}^{N} such that if n_i = n_j then d′_i = d′_j.
This lemma suggests that any stages with equal n_i's should have equal d_i's to maximize the probability of a safe execution. For a uniform schedule, n_i is either n′ or n′ − 1. Thus we only need to consider schedules with two durations, d′ for stages with n_i = n′ and d″ for stages with n_i = n′ − 1. Since all durations must sum to h, d′ and d″ are deterministically related by: d″ = (h − d′·N′) / (N − N′). Based on this, for any value of d′ the probability of the uniform schedule using durations d′ and d″ is as follows, where P_d is the CDF of p_d:

    P_d(d′)^{N′·n′} · P_d((h − d′·N′)/(N − N′))^{(N−N′)·(n′−1)}    (1)
We compute MaxProbUniform by maximizing Equation 1 with respect to d′ and using the corresponding duration for d″. Putting everything together we get the following result.
Theorem 1. For any log-concave p_d, computing MaxProbUniform by maximizing Equation 1 over d′, if a p-safe uniform schedule exists, Algorithm 1 returns a maximum-stage p-safe uniform schedule.
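Algorithm 1 together with a grid-search maximization of Equation 1 can be sketched as follows. This is our own sketch: it uses the normal duration model from the experiments section (μ = 1, σ² = 0.1) but ignores the one-sided truncation for brevity, and all function names are hypothetical:

```python
import math

def dur_cdf(x, mu=1.0, sigma=math.sqrt(0.1)):
    """CDF of the duration model (plain normal; the paper truncates to
    (0, inf), which only rescales this slightly)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def max_prob_uniform(N, n, h, grid=200):
    """Best probability of a safe execution over N-stage uniform schedules:
    grid-search Equation (1) over the stage duration d'."""
    N1 = n % N               # N' stages with n' experiments
    n1 = n // N + 1          # n' = floor(n/N) + 1
    if N1 == 0:              # perfectly uniform: n/N experiments per stage
        return dur_cdf(h / N) ** n
    best = 0.0
    for k in range(1, grid):
        d1 = h * k / grid                    # candidate d'
        d2 = (h - d1 * N1) / (N - N1)        # implied d''
        if d2 <= 0:
            continue
        p = (dur_cdf(d1) ** (N1 * n1)
             * dur_cdf(d2) ** ((N - N1) * (n1 - 1)))
        best = max(best, p)
    return best

def max_stage_uniform(n, l, h, p=0.95):
    """Algorithm 1: the largest N for which an N-stage uniform schedule
    is p-safe (None if even N = ceil(n/l) fails)."""
    N, found = math.ceil(n / l), None
    while N <= n and max_prob_uniform(N, n, h) >= p:
        found, N = N, N + 1
    return found
```

With the paper's settings (n = 20, l = 10, μ = 1, σ² = 0.1), this sketch returns 3 stages for horizon h = 6 and only 2 for the tighter h = 4, illustrating how shorter horizons force more concurrency.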
4.2 Independent Lab Schedules
We now consider a more general class of offline schedules and a heuristic algorithm for computing them.
This class allows the start times of different labs to be decoupled, desirable in settings where labs are run
by independent experimenters. Further, our online scheduling approach is based on repeatedly calling an
offline scheduler, which requires the flexibility to make schedules for labs in different stages of execution.
An independent lab (IL) schedule S specifies a number of labs k ≤ l and, for each lab i, a number of experiments m_i such that Σ_i m_i = n. Further, for each lab i a sequence of m_i durations D_i = ⟨d_i^1, …, d_i^{m_i}⟩ is given. The execution of S runs each lab independently, by having each lab start up experiments whenever they move to the next stage. Stage j of lab i ends after a duration of d_i^j, or after the experiment finishes when it runs longer than d_i^j (i.e. we do not terminate experiments). Each experiment is selected according to SelectBatch, given information about all completed and running experiments across all labs.
We say that an execution of an IL schedule is safe if all experiments finish within their specified durations,
which also yields a notion of p-safeness. We are again interested in computing p-safe schedules that maximize the CPE. Intuitively, CPE will be maximized if the amount of concurrency during an execution is
minimized, suggesting the use of as few labs as possible. This motivates the problem of finding a p-safe IL
schedule that use the minimum number of labs. Below we describe our heuristic approach to this problem.
Algorithm Description. Starting with k = 1, we compute a k-lab IL schedule with the goal of maximizing the probability of a safe execution. If this probability is less than p, we increment k, and otherwise output the schedule for k labs. To compute a schedule for each value of k, we first allocate the number of experiments m_i across the k labs as uniformly as possible. In particular, (n mod k) labs will have ⌊n/k⌋ + 1 experiments and k − (n mod k) labs will have ⌊n/k⌋ experiments. This choice is motivated by the intuition that the best way to maximize the probability of a safe execution is to distribute the work across labs as uniformly as possible. Given m_i for each lab, we assign all durations of lab i to be h/m_i, which can be shown to be optimal for log-concave p_d. In this way, for each value of k the schedule we compute has just two possible values of m_i, and labs with the same m_i have the same stage durations.
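The IL heuristic reduces to a loop over k, scoring each uniform allocation by its probability of a safe execution. A sketch under the same normal duration model as above (truncation ignored; names are ours):

```python
import math

def normal_cdf(x, mu=1.0, sigma=math.sqrt(0.1)):
    """CDF of the experiment-duration model used in the experiments."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def il_schedule(n, l, h, p=0.95):
    """Heuristic IL scheduler: the smallest k <= l (with its uniform
    experiment allocation) whose probability of a safe execution is >= p.
    A lab with m experiments gets m stages of duration h/m each."""
    for k in range(1, l + 1):
        big = n % k                      # labs holding one extra experiment
        m_hi, m_lo = n // k + 1, n // k
        prob = 1.0
        for labs, m in ((big, m_hi), (k - big, m_lo)):
            if labs and m:
                # every one of the m durations in such a lab must stay
                # below its stage length h/m, independently per experiment
                prob *= (normal_cdf(h / m) ** m) ** labs
        if prob >= p:
            return k, [m_hi] * big + [m_lo] * (k - big)
    return None
```

For the paper's n = 20, h = 6 setting this sketch settles on 7 labs with allocation (3, 3, 3, 3, 3, 3, 2): fewer labs make the per-stage windows too tight to be 0.95-safe.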
5 Online Scheduling Approaches
We now consider online scheduling, which selects the start time of experiments online. The flexibility of
the online approaches offers the potential to outperform offline schedules by adapting to specific stochastic
outcomes observed during experimental runs. Below we first describe two baseline online approaches,
followed by our main approach, policy switching, which aims to directly optimize CPE.
Online Fastest Completion Policy (OnFCP). This baseline policy simply tries to finish all of the n experiments as quickly as possible. As such, it keeps all l labs busy as long as there are experiments left to run.
Specifically whenever a lab (or labs) becomes free the policy immediately uses SelectBatch with the latest
information to select new experiments to start right away. This policy will achieve a low value of expected
CPE since it maximizes concurrency.
Online Minimum Eager Lab Policy (OnMEL). One problem with OnFCP is that it does not attempt to
use the full time horizon. The OnMEL policy simply restricts OnFCP to use only k labs, where k is the
minimum number of labs required to guarantee with probability at least p that all n experiments complete
within the horizon. Monte-Carlo simulation is used to estimate p for each k.
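The Monte-Carlo estimate used by OnMEL might look like the following sketch: simulate k labs that eagerly start the next experiment whenever one frees up, and count how often all n finish by h (all names are ours):

```python
import random

def completion_prob(k, n, h, sample_dur, trials=3000, seed=0):
    """Monte-Carlo estimate of P(all n experiments finish by h) with k labs
    run eagerly (a lab starts the next experiment as soon as it is free)."""
    rng = random.Random(seed)
    done = 0
    for _ in range(trials):
        free_at = [0.0] * k                 # time each lab next becomes free
        for _ in range(n):
            i = min(range(k), key=free_at.__getitem__)
            free_at[i] += sample_dur(rng)
        done += max(free_at) <= h
    return done / trials

def min_eager_labs(n, h, sample_dur, l=10, p=0.95):
    """OnMEL: the minimum k whose completion probability is at least p."""
    for k in range(1, l + 1):
        if completion_prob(k, n, h, sample_dur) >= p:
            return k
    return l

def truncated_normal(rng, mu=1.0, sigma=0.1 ** 0.5):
    """One-sided truncated normal on (0, inf), as used in the experiments."""
    while True:
        x = rng.gauss(mu, sigma)
        if x > 0:
            return x
```

As a sanity check, with deterministic unit durations and n = 20, h = 6, three labs can never finish (loads 7, 7, 6) while four labs always do (loads of 5), so the minimum is k = 4.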
Policy Switching (PS). Our policy switching approach decides the number of new experiments to start at
each decision epoch. Decision epochs are assumed to occur every δ units of time, where δ is a small
constant relative to the expected experiment durations. The motivation behind policy switching is to exploit
the availability of a policy generator that can produce multiple policies at any decision epoch, where at least
one of them is expected to be good. Given such a generator, the goal is to define a new (switching) policy that
performs as well or better than the best of the generated policies in any state. In our case, the objective is to
improve CPE, though other objectives can also be used. This is motivated by prior work on policy switching
[6] over a fixed policy library, and generalizes that work to handle arbitrary policy generators instead of static
policy libraries. Below we describe the general approach and then the specific policy generator that we use.
Let t denote the number of remaining decision epochs (stages-to-go), which is originally equal to ⌊h/δ⌋ and
decremented by one each epoch. We use s to denote the experimental state of the scheduling problem, which
encodes the number of completed experiments and ongoing experiments with their elapsed running time. We
assume access to a policy generator Π(s, t) which returns a set of base scheduling policies (possibly nonstationary) given inputs s and t. Prior work on policy switching [6] corresponds to the case where Π(s, t) returns a fixed set of policies regardless of s and t. Given Π(s, t), π̂(s, t, π) denotes the resulting switching policy based on s, t, and the base policy π selected in the previous epoch. The decision returned by π̂ is computed by first conducting N simulations of each policy returned by Π(s, t) along with π to estimate their CPEs. The base policy with the highest estimated CPE is then selected and its decision is returned by π̂. The need to compare to the previous policy π is due to the use of a dynamic policy generator, rather than a fixed library. The base policy passed into policy switching for the first decision epoch can be arbitrary.
Despite its simplicity, we can make guarantees about the quality of π̂ assuming a bound on the CPE estimation error. In particular, the CPE of the switching policy will not be much worse than the best of the policies produced by our generator given accurate simulations. We say that a CPE estimator is ε-accurate if it can estimate the CPE C_t^π(s) of any base policy π for any s and t within an accuracy bound of ε. Below we denote the expected CPE of π̂ for s, t, and π by C_t^π̂(s, π).
Theorem 2. Let Π(s, t) be a policy generator and π̂ be the switching policy computed with ε-accurate estimates. For any state s, stages-to-go t, and base policy π, C_t^π̂(s, π) ≥ max_{π′ ∈ Π(s,t) ∪ {π}} C_t^{π′}(s) − 2tε.
We use a simple policy generator Π(s, t) that makes multiple calls to the offline IL scheduler described
earlier. The intuition is to notice that the produced p-safe schedules are fairly pessimistic in terms of the
experiment runtimes. In reality many experiments will finish early and we can adaptively exploit such
situations. Specifically, rather than follow the fixed offline schedule we may choose to use fewer labs and
hence improve CPE. Similarly if experiments run too long, we will increase the number of labs.
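The generic switching step can be sketched as: simulate every candidate from the generator plus the previous base policy, and act according to the one with the best estimated CPE. The interfaces below (generator, simulator, and the toy policies in the usage example) are hypothetical, not the paper's code:

```python
import random

def policy_switch_step(state, t, prev_policy, generator, simulate_cpe,
                       n_sims=50, seed=0):
    """One decision epoch of policy switching: estimate the CPE of each
    candidate policy by repeated simulation, then return the best policy
    and the action it takes in the current state."""
    rng = random.Random(seed)
    candidates = list(generator(state, t)) + [prev_policy]
    def estimate(policy):
        return sum(simulate_cpe(policy, state, t, rng)
                   for _ in range(n_sims)) / n_sims
    best = max(candidates, key=estimate)
    return best, best(state, t)

# Toy usage: two constant policies whose simulated CPE differs, so the
# switching step must pick the higher-CPE one.
wait_policy = lambda s, t: "wait"
start_policy = lambda s, t: "start-new-lab"
gen = lambda s, t: [wait_policy, start_policy]
sim = lambda pol, s, t, rng: 120.0 if pol is start_policy else 80.0
chosen, action = policy_switch_step(None, 10, wait_policy, gen, sim)
```

Here `chosen` is the higher-CPE candidate and `action` is its decision; in the paper's instantiation the candidates are the π_(s,t,i) policies built around the offline IL scheduler.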
Table 1: Benchmark Functions

Cosines(2) [1]:     1 − (u² + v² − 0.3cos(3πu) − 0.3cos(3πv)), u = 1.6x − 0.5, v = 1.6y − 0.5
Rosenbrock(2) [1]:  10 − 100(y − x²)² − (1 − x)²
Hartman(3,6) [7]:   Σ_{i=1}^{4} α_i exp(−Σ_{j=1}^{d} A_{ij}(x_j − P_{ij})²), where α_{1×4}, A_{4×d}, P_{4×d} are constants
Michalewicz(5) [9]: Σ_{i=1}^{5} sin(x_i) · sin(i·x_i²/π)^{20}
Shekel(4) [7]:      Σ_{i=1}^{10} 1/(β_i + Σ_{j=1}^{4}(x_j − A_{ji})²), where β_{1×10}, A_{4×10} are constants
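For reference, some of the Table 1 benchmarks translate directly into code; these follow the maximization forms given in the table (function names are ours):

```python
import math

def cosines2(x, y):
    """Cosines(2) from Table 1 (peaks at 1.6 where u = v = 0)."""
    u, v = 1.6 * x - 0.5, 1.6 * y - 0.5
    return 1 - (u**2 + v**2
                - 0.3 * math.cos(3 * math.pi * u)
                - 0.3 * math.cos(3 * math.pi * v))

def rosenbrock2(x, y):
    """Rosenbrock(2) from Table 1 (maximized, value 10, at x = y = 1)."""
    return 10 - 100 * (y - x**2) ** 2 - (1 - x) ** 2

def michalewicz(xs):
    """Michalewicz(d) from Table 1, for a point xs of any dimension d."""
    return sum(math.sin(x) * math.sin((i + 1) * x**2 / math.pi) ** 20
               for i, x in enumerate(xs))
```

For example, `rosenbrock2(1.0, 1.0)` returns exactly 10, the optimum used when computing regret.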
We define Π(s, t) to return k + 1 policies, {π_(s,t,0), …, π_(s,t,k)}, where k is the number of experiments running in s. Policy π_(s,t,i) is defined so that it waits for i current experiments to finish, and then uses the offline IL scheduler to return a schedule. This amounts to adding a small lookahead to the offline IL scheduler where different amounts of waiting time are considered.¹ Note that the definition of these policies depends on s and t and hence they cannot be viewed as a fixed set of static policies as used by traditional policy switching. In the initial state s_0, π_(s_0,h,0) corresponds to the offline IL schedule, and hence the above theorem guarantees that we will not perform much worse than the offline IL, with the expectation of performing much better. Whenever policy switching selects a π_i with i > 0, no new experiments will be started and we wait for the next decision epoch. For i = 0, it will apply the offline IL scheduler to return a p-safe schedule to start immediately, which may require starting new labs to ensure a high probability of completing n experiments.
6 Experiments
Implementation of SelectBatch. Given the set of completed experiments O and on-going experiments A,
SelectBatch selects k new experiments. We implement SelectBatch based on a recent batch BO algorithm
[2], which greedily selects k experiments considering only O. We modify this greedy algorithm to also
consider A by forcing the selected batch to include the ongoing experiments plus k additional experiments.
SelectBatch makes selections based on a posterior over the unknown function f. We use a Gaussian Process with the RBF kernel and kernel width 0.01 · Σ_{i=1}^{d} l_i, where l_i is the input space length in dimension i.
Benchmark Functions. We evaluate our scheduling policies using 6 well-known synthetic benchmark
functions (shown in Tab. 1 with dimension inside the parenthesis) and two real-world benchmark functions
Hydrogen and FuelCell over [0, 1]2 [2]. The Hydrogen data is produced by a study on biosolar hydrogen
production [5], where the goal was to maximize hydrogen production of a particular bacteria by optimizing
PH and Nitrogen levels. The FuelCell data was collected in our motivating application mentioned in Sect. 1.
In both cases, the benchmark function was created by fitting regression models to the available data.
Evaluation. We consider a p-safeness guarantee of p = 0.95, and the number of available labs l is 10. For p_d(x), we use a one-sided truncated normal distribution with x ∈ (0, ∞), μ = 1, and σ² = 0.1, and we set the total number of experiments to n = 20. We consider three time horizons h of 6, 5, and 4.
Given l, n, and h, to evaluate a policy π on a function f (with a set of initial observed experiments), we execute π and obtain a set X of n or fewer completed experiments. We measure the regret of π as the difference between the optimal value of f (known for all eight functions) and the f value of the predicted best experiment in X.
Results. Table 2 shows the results of our proposed offline and online schedulers. As a reference point, we also include the result of the unconstrained sequential policy (i.e., selecting one experiment at a time) using SelectBatch, which can be viewed as an effective upper bound on the optimal performance of any constrained scheduler because it ignores the time horizon (h = ∞). The values in the table correspond to the regrets (smaller values are better) achieved by each policy, averaged across 100 independent runs with the same initial experiments (5 for 2-d and 3-d functions and 20 for the rest) for all policies in each run.
¹For simplicity our previous discussion of the IL scheduler did not consider states with ongoing experiments, which
will occur here. To handle this the scheduler first considers using already executing labs taking into account how long
they have been running. If more labs are required to ensure p-safeness new ones are added.
Table 2: The proposed policies' results (regret) for different horizons; the last row gives the average CPE of each policy.

Function   h=∞   OnFCP | h=4: OfStaged  OfIL  OnMEL   PS  | h=5: OfStaged  OfIL  OnMEL   PS  | h=6: OfStaged  OfIL  OnMEL   PS
Cosines    .142  .339  |      .181      .195  .275   .205 |      .181      .194  .274   .150 |      .167      .147  .270   .156
FuelCell   .160  .240  |      .182      .191  .258   .206 |      .167      .190  .239   .185 |      .154      .163  .230   .153
Hydro      .025  .115  |      .069      .070  .123   .059 |      .071      .069  .086   .042 |      .036      .035  .064   .025
Rosen      .008  .013  |      .010      .009  .013   .008 |      .009      .008  .011   .008 |      .007      .009  .010   .009
Hart(3)    .037  .095  |      .070      .069  .096   .067 |      .055      .064  .081   .045 |      .045      .050  .070   .038
Michal     .465  .545  |      .509      .508  .525   .502 |      .500      .510  .521   .494 |      .477      .460  .502   .480
Shekel     .427  .660  |      .630      .648  .688   .623 |      .635      .645  .682   .540 |      .530      .564  .576   .510
Hart(6)    .265  .348  |      .338      .340  .354   .347 |      .334      .330  .333   .297 |      .304      .266  .301   .262
CPE        190   55    |      100       100   66     100  |      100       100   91     118  |      133       137   120    138
We first note that the two offline algorithms (OfStaged and OfIL) perform similarly across all three horizon
settings. This suggests that there is limited benefit in these scenarios to using the more flexible IL schedules,
which were primarily introduced for use in the online scheduling context. Comparing with the two online
baselines (OnFCP and OnMEL), the offline algorithms perform significantly better. This may seem surprising at first because online policies should offer more flexibility than fixed offline schedules. However, the
offline schedules purposefully wait for experiments to complete before starting up new experiments, which
tends to improve the CPE values. To see this, the last row of Table 2 gives the average CPEs of each policy. Both OnFCP and OnMEL yield significantly lower CPEs compared to the offline algorithms, which
correlates with their significantly larger regrets.
Finally, policy switching consistently outperforms other policies (excluding h = ∞) on the medium horizon
setting and performs similarly in the other settings. This makes sense since the added flexibility of PS is not
as critical for long and short horizons. For short horizons, there is less opportunity for scheduling choices and
for longer horizons the scheduling problem is easier and hence the offline approaches are more competitive.
In addition, looking at Table 2, we see that PS achieves a significantly higher CPE than offline approaches in
the medium horizon, and is similar to them in the other horizons, again correlating with the regret. Further
examination of the schedules produced by PS indicates that although it begins with the same number of labs
as OfIL, PS often selects fewer labs in later steps if early experiments are completed sooner than expected,
which leads to higher CPE and consequently better performance. Note that the variances of the proposed
policies are very small, as shown in the supplementary materials.
7 Summary and Future Work
Motivated by real-world applications we introduced a novel setting for Bayesian optimization that incorporates a budget on the total time and number of experiments and allows for concurrent, stochastic-duration
experiments. We considered offline and online approaches for scheduling experiments in this setting, relying on a black box function to intelligently select specific experiments at their scheduled start times. These
approaches aimed to optimize a novel objective function, Cumulative Prior Experiments (CPE), which we
empirically demonstrate to strongly correlate with performance on the original optimization problem. Our
offline scheduling approaches significantly outperformed some natural baselines and our online approach of
policy switching was the best overall performer.
For further work we plan to consider alternatives to CPE, which, for example, incorporate factors such as
diminishing returns. We also plan to study further extensions to the experimental model for BO and also for
active learning, for example taking into account varying costs and duration distributions across labs and
experiments. In general, we believe that there is much opportunity for more tightly integrating scheduling
and planning algorithms into BO and active learning to more accurately model real-world conditions.
Acknowledgments
The authors acknowledge the support of the NSF under grant IIS-0905678.
Markov Models
Ran El-Yaniv and Dmitry Pidan
Department of Computer Science, Technion
Haifa, 32000 Israel
{rani,pidan}@cs.technion.ac.il
Abstract
Focusing on short term trend prediction in a financial context, we consider the
problem of selective prediction whereby the predictor can abstain from prediction
in order to improve performance. We examine two types of selective mechanisms
for HMM predictors. The first is a rejection in the spirit of Chow?s well-known
ambiguity principle. The second is a specialized mechanism for HMMs that identifies low quality HMM states and abstain from prediction in those states. We
call this model selective HMM (sHMM). In both approaches we can trade-off prediction coverage to gain better accuracy in a controlled manner. We compare
performance of the ambiguity-based rejection technique with that of the sHMM
approach. Our results indicate that both methods are effective, and that the sHMM
model is superior.
1 Introduction
Selective prediction is the study of predictive models that can automatically qualify their own predictions and output "don't know" when they are not sufficiently confident. Currently, manifestations
of selective prediction within machine learning mainly exist in the realm of inductive classification,
where this notion is often termed "classification with a reject option." In the study of a reject option,
which was initiated more than 40 years ago by Chow [5], the goal is to enhance accuracy (or reduce
?risk?) by compromising the coverage. For a classifier or predictor equipped with a rejection mechanism we can quantify its performance profile by evaluating its risk-coverage (RC) curve, giving
the functional relation between error and coverage. The RC curve represents a trade-off: the more
coverage we compromise, the more accurate we can expect to be, up to the point where we reject
everything and (trivially) never err. The essence of selective classification is to construct classifiers
achieving useful (and optimal) RC trade-offs, thus providing the user with control over the choice
of desired risk (with its associated coverage compromise).
Our longer term goal is to study selective prediction models for general sequential prediction tasks.
While this topic has only been sparsely considered in the literature, we believe that it has great potential in dealing with difficult problems. As a starting point, however, in this paper we focus on the
restricted objective of predicting next-day trends in financial sequences. While limited in scope, this
problem serves as a good representative of difficult sequential data [17]. A very convenient and quite
versatile modeling technique for analyzing sequences is the Hidden Markov Model (HMM). Therefore, the goal we set has been to introduce selection mechanisms for HMMs capable of achieving
useful risk-coverage trade-off in predicting next-day trends.
To this end we examined two approaches. The first is a straightforward application of Chow's
ambiguity principle implemented with HMMs. The second is a novel and specialized technique
utilizing the HMM state structure. In this approach we identify latent states whose prediction quality
is systematically inferior, and abstain from predictions while the underlying source is likely to be in
those states. We call this model selective HMM (sHMM). While this natural approach can work in
principle, if the HMM does not contain sufficiently many "fine-grained" states, whose probabilistic
volume (or "visit rate") is small, the resulting risk-coverage trade-off curve will be a coarse step
function that will prevent fine control and usability. One of our contributions is a solution to this
coarseness problem by introducing algorithms for refining sHMMs. The resulting refined sHMMs
give rise to smooth RC trade-off curves.
We present the results of a quite extensive empirical study showing the effectiveness of our methods,
which can increase the edge in predicting next-day trends. We also show the advantage of sHMMs
over the classical Chow approach.
2 Preliminaries
2.1 Hidden Markov Models in brief
A Hidden Markov Model (HMM) is a generative probabilistic state machine with latent states, in which state transitions and observation emissions represent first-order Markov processes. Given an observation sequence O = O_1, ..., O_T, hypothesized to be generated by such a model, we would like to "reverse engineer" the most likely (in a Bayesian sense) state machine giving rise to O, with associated latent state sequence S = S_1, ..., S_T. An HMM is defined as λ ≜ ⟨Q, M, π, A, B⟩, where Q is a set of states, M is the number of observation symbols, π is the initial state distribution, π_i ≜ P[S_1 = q_i], A = (a_ij) is the transition matrix, a_ij ≜ P[S_{t+1} = q_j | S_t = q_i], and B = (b_j(k)) is the observation emission matrix, b_j(k) ≜ P[O_t = v_k | S_t = q_j].

Given an HMM λ and an observation sequence O, an efficient algorithm for calculating P[O | λ] is the forward-backward procedure (see details in, e.g., Rabiner [16]). The estimation of the HMM parameters (training) is traditionally performed using a specialized expectation-maximization (EM) algorithm called the Baum-Welch algorithm [2]. For a large variety of problems it is also essential to identify the "most likely" state sequence associated with a given observation sequence. This is commonly accomplished using the Viterbi algorithm [22], which computes arg max_S P[S | O, λ]. Similarly, one can identify the most likely "individual" state, arg max_q P[S_t = q | O, λ], corresponding to time t.
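The most likely individual state is read off the posterior marginals γ_t(i) = P[S_t = q_i | O, λ] produced by the forward-backward procedure. As a minimal sketch (the two-state model and its parameters below are illustrative toy values of our own choosing, not taken from this paper), a scaled forward-backward pass in plain Python might look like:

```python
def state_posteriors(pi, A, B, obs):
    """Scaled forward-backward pass; returns gamma[t][i] = P(S_t = q_i | O)."""
    n, T = len(pi), len(obs)
    alpha, scale = [], []
    # Forward pass with per-step normalization (scaling) for numerical stability.
    a0 = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(a0); alpha.append([x / s for x in a0]); scale.append(s)
    for t in range(1, T):
        at = [sum(alpha[t - 1][j] * A[j][i] for j in range(n)) * B[i][obs[t]]
              for i in range(n)]
        s = sum(at); alpha.append([x / s for x in at]); scale.append(s)
    # Backward pass, scaled consistently with the forward pass.
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(n):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(n)) / scale[t + 1]
    gamma = []
    for t in range(T):
        g = [alpha[t][i] * beta[t][i] for i in range(n)]
        z = sum(g); gamma.append([x / z for x in g])
    return gamma

# Toy 2-state, 2-symbol model (illustrative numbers only).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
gamma = state_posteriors(pi, A, B, [0, 0, 1])
most_likely = [max(range(2), key=lambda i: g[i]) for g in gamma]  # arg max_q
```

Here `most_likely[t]` realizes arg max_q P[S_t = q | O, λ] for each t; the same γ values reappear in the visit-rate and risk estimates of Section 3.2.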
2.2 Selective Prediction and the RC Trade-off
To define the performance parameters in selective prediction we utilize the following definitions for
selective classifiers from [6, 7]. A selective (binary) classifier is represented as a pair of functions
hf, gi, where f is a binary classifier and g : X ? {0, 1} is a binary qualifier for f : whenever g(x) =
1, the prediction f (x) is accepted, and otherwise it is ignored. The performance of a selective
classifier is measured by its coverage and risk. Coverage is the expected volume of non-rejected
data instances, C , E [g(X)], (where expectation is w.r.t. the unknown underlyingdistribution)
and the risk is the error rate over non-rejected instances, R , E [I(f (X) 6= Y )g(X)] C, where Y
represents the true classification.
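On a finite labeled sample, these two quantities have straightforward empirical counterparts. A minimal sketch (the helper and the toy values are our own illustration, not code from the paper):

```python
def empirical_risk_coverage(preds, accepts, labels):
    """Empirical coverage C and risk R of a selective classifier (f, g):
    C is the accepted fraction; R is the error rate over accepted instances."""
    n = len(labels)
    covered = sum(accepts)
    if covered == 0:
        return 0.0, 0.0  # reject everything: zero coverage, (trivially) never err
    errors = sum(1 for p, g, y in zip(preds, accepts, labels) if g and p != y)
    return errors / covered, covered / n

# f's predictions, g's accept (1) / reject (0) decisions, true labels (toy values).
risk, cov = empirical_risk_coverage([1, 1, -1, 1], [1, 1, 1, 0], [1, -1, -1, 1])
# one error among the three accepted instances
```

Sweeping a rejection mechanism's threshold and plotting (cov, risk) pairs traces exactly the empirical RC curve discussed below.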
The purpose of a selective prediction model is to provide "sufficiently low" risk with "sufficiently
high" coverage. The functional relation between risk and coverage is called the risk-coverage (RC)
trade-off. Generally, the user of a selective model would like to bound one measure (either risk or
coverage) and then obtain the best model in terms of the other measure. The RC curve of a given
model characterizes this trade-off on a risk/coverage plane thus describing its full spectrum.
A selective predictor is useful if its RC curve is "non-trivial" in the sense that progressively smaller
risk can be obtained with progressively smaller coverage. Thus, when constructing a selective classification or a prediction model it is imperative to examine its RC curve. One can consider theoretical
bounds of the RC curve (as in [6]) or empirical ones, as we do here. An interpolated RC curve can be
obtained by selecting a number of coverage bounds at certain grid points of choice, and learning
(and testing) a selective model aiming at achieving the best possible risk for each coverage level.
Obviously, each such model should respect the corresponding coverage bound.
3 Selective Prediction with HMMs
3.1 Ambiguity Model
The first approach we consider is an implementation of the classical ambiguity idea. We construct an
HMM-based classifier, similar to the one used in [3], and endow it with a rejection mechanism in the
spirit of Chow [5]. This approach is limited to binary labeled observation sequences. The training
set, consisting of labeled sequences, is partitioned into its positive and negative instances, and two
HMMs, λ+ and λ−, are trained using those sets, respectively. Thus, λ+ is trained to identify
positively labeled sequences, and λ− negatively labeled sequences. Then, each new observation
sequence O is classified as sign(P[O | λ+] − P[O | λ−]).
For applying Chow's ambiguity idea using the model (λ+, λ−), we need to define a measure C(O)
of prediction confidence for any observation sequence O. A natural choice in this context is to
measure the log-likelihood difference between the positive and negative models, normalized by the
length of the sequence. Thus, we define C(O) ≜ |(1/T)(log P[O | λ+] − log P[O | λ−])|, where T is
the length of O. The greater C(O) is, the more confident we are in the classification of O. Now,
given the classification confidences of all sequences in the training data set, and given a required
lower bound on the coverage, an empirical threshold can be found such that a designated number of
instances with the smallest confidence measures will be rejected. If our data is non-stationary (e.g.
financial sequences), this threshold can be re-estimated at the arrival of every new data instance.
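The empirical threshold selection described above can be sketched as follows; the helper function and the confidence values are illustrative assumptions, not taken from the paper:

```python
def chow_threshold(confidences, coverage_bound):
    """Empirical confidence threshold in the spirit of Chow's reject option:
    reject the (1 - coverage_bound) fraction of instances with smallest C(O)."""
    s = sorted(confidences)
    k = int(round((1.0 - coverage_bound) * len(s)))  # number of rejections
    return s[k - 1] if k > 0 else float("-inf")

# Illustrative confidence values C(O) for ten training sequences.
conf = [0.05, 0.4, 0.15, 0.3, 0.2, 0.25, 0.1, 0.35, 0.02, 0.45]
th = chow_threshold(conf, coverage_bound=0.8)  # reject the two least confident
accepted = [c for c in conf if c > th]
```

For non-stationary data, re-running `chow_threshold` over a sliding window of recent confidences realizes the re-estimation at each new data instance mentioned above.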
3.2 State-Based Selectivity
We propose a different approach for implementing selective prediction with HMMs. The idea is to
designate an appropriate subset of the states as "rejective." The proposed approach is suitable for
prediction problems whose observation sequences are labeled. Specifically, for each observation,
Ot , we assume that there is a corresponding label lt . The goal is to predict lt at time t ? 1.
Each state is assigned risk and visit rate estimates. For each state q, its risk estimate is used as
a proxy to the probability of making erroneous predictions from q, and its visit rate quantifies the
probability of outputting any symbol from q. A subset of the highest risk states is selected so that
their total expected visit rate does not exceed the user specified rejection bound. These states are
called rejective and predictions from them are ignored. The following two definitions formulate
these notions. We associate with each state q a label L_q representing the HMM prediction while at
this state (see Section 3.4). Denote γ_t(i) ≜ P[S_t = q_i | O, λ], and note that γ_t(i) can be efficiently
calculated using the standard forward-backward procedure (see Rabiner [16]).
Definition 3.1 (empirical visit rate). Given an observation sequence, the empirical visit rate, v(i),
of a state q_i, is the fraction of time the HMM spends in state q_i, that is v(i) ≜ (1/T) Σ_{t=1}^{T} γ_t(i).
Definition 3.2 (empirical state risk). Given an observation sequence, the empirical risk, r(i), of a
state q_i, is the rate of erroneous visits to q_i, that is r(i) ≜ (1/(v(i)T)) Σ_{t=1, L_{q_i} ≠ l_t}^{T} γ_t(i).
Suppose we are required to meet a user-specified rejection bound 0 ≤ B ≤ 1. This means that we
are required to emit predictions (rather than "don't know"s) in at least a 1 − B fraction of the time. To
achieve this we apply the following greedy selection procedure of rejective states whereby highest
risk states are sequentially selected as long as their overall visit rate does not exceed B. We call the
resulting model Naive-sHMM. Formally, let qi1 , qi2 , . . . , qiN be an ordering of all states, such that
for each j < k, r(ij ) ? r(ik ). Then, the rejective state subset is,
?
?
X
K+1
?
?
X
K
RS , qi1 , . . . , qiK
(3.1)
v(ij ) > B .
v(ij ) ? B,
?
?
j=1
j=1
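The greedy selection of Eq. (3.1) can be sketched as below. The risk and visit-rate numbers are made up for illustration; note that with them the achieved rejection rate (0.05) falls well short of the bound B = 0.2, which is precisely the coarseness problem addressed in the next subsection:

```python
def rejective_states(risks, visits, B):
    """Greedy selection per Eq. (3.1): take states in decreasing risk order
    while their cumulative visit rate stays within the rejection bound B."""
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    chosen, total = set(), 0.0
    for i in order:
        if total + visits[i] > B:
            break  # Eq. (3.1) stops at the first state that would overflow B
        chosen.add(i)
        total += visits[i]
    return chosen, total

risks  = [0.6, 0.5, 0.2, 0.1, 0.05]   # illustrative r(i) values
visits = [0.05, 0.25, 0.3, 0.2, 0.2]  # illustrative v(i) values
RS, rate = rejective_states(risks, visits, B=0.2)
```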
3.3 Overcoming Coarseness
The above simple approach suffers from the following coarseness problem. If our model does not
include a large number of states, or includes states with very high visit rates (as it is often the case
in applications), the total visit rate of the rejective states might be far from the requested bound
B, entailing that selectivity cannot be fully exploited. For example, consider a model that has
three states such that r(q_1) > r(q_2) > r(q_3), v(q_1) = ε, and v(q_2) = B + ε for some small ε > 0. In this case only
the negligibly visited q_1 will be rejected. We propose two methods to overcome this coarseness
problem. These methods are presented in the two subsequent sections.
3.3.1 Randomized Linear Interpolation (RLI)
In the randomized linear interpolation (RLI) method, predictions from rejective states are always
rejected, but predictions from the non-rejective state with the highest risk rate are rejected with
appropriate probability, such that the total expected rejection rate equals the rejection bound B. Let
q be the non-rejective state with the highest risk rate. The probability to reject predictions emerging
from this state is taken to be p_q ≜ (1/v(q)) (B − Σ_{q' ∈ RS} v(q')). Clearly, with p_q thus defined, the
total expected rejection rate is precisely B, when expectation is taken over random choices.
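A one-line sketch of the RLI correction (illustrative numbers, continuing in the spirit of the coarseness example above): rejecting predictions from q with probability p_q makes the expected rejection rate hit the bound B exactly:

```python
def rli_reject_prob(visits, rejected, q, B):
    """RLI: rejection probability for the highest-risk non-rejective state q,
    chosen so the total expected rejection rate equals the bound B exactly."""
    covered = sum(visits[i] for i in rejected)
    return (B - covered) / visits[q]

visits = [0.05, 0.25, 0.3, 0.2, 0.2]   # illustrative visit rates
p_q = rli_reject_prob(visits, rejected={0}, q=1, B=0.2)
# expected rejection rate: 0.05 + p_q * 0.25 == 0.2
```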
3.3.2 Recursive Refinement (RR)
Given an initial HMM model, the idea in the recursive refinement approach is to construct an approximate HMM whose states have finer granularity of visit rates. This smaller granularity enables
a selection of rejective states whose total visit rate is closer to the required bound. The refinement is
achieved by replacing every highly visited state with a complete HMM.
The process starts with a root HMM, λ_0, trained in a standard way using the Baum-Welch algorithm.
In λ_0, states that have a visit rate greater than a certain bound are identified. For each such state q_i
(called a heavy state), a new HMM λ_i (called a refining HMM) is trained and combined with λ_0 as
follows: every transition from other states into q_i in λ_0 entails a transition into an (initial) state in λ_i in
accordance with the initial state distribution of λ_i; every self-transition to q_i in λ_0 results in a state
transition in λ_i according to its state transition matrix; finally, every transition from q_i to another
state entails a transition from a state in λ_i whose probability is the original transition probability from
q_i. States of λ_i are assigned the label of q_i. This refinement continues in a recursive manner and
terminates when all the heavy states have refinements. The non-refined states are called leaf states.
Figure 1: Recursively Refined HMM
Figure 1 depicts a recursively refined HMM having two refinement levels. In this model, states 1,2,4
are heavy (and refined) states, and states 3,5,6,7,8 are leaf (emitting) states. The model consisting
of states 3 and 4 refines state 1, the model consisting of states 5 and 6 refines state 2, etc.
An aggregate state of the complete hierarchical model corresponds to a set of inner HMM states,
each of which is a state on a path from the root through refining HMMs, to a leaf state. Only leaf
states actually emit symbols. Refined states are non-emitting and their role in this construction is to
preserve the structure (and transitions) of the HMMs they refine.
At every time instance t, the model is at some aggregate state. Transition to the next aggregate state
always starts at λ_0, and recursively progresses to the leaf states, as shown in the following example.
Suppose that the model in Figure 1 is at aggregate state {1,4,7} at time t. The aggregate state at time
t + 1 is calculated as follows. λ_0 is in state 1, so its next state (say 1 again) is chosen according to
the distribution {a_11, a_12}. We then consider the model that refines state 1, which was in state 4 at
time t. Here again the next state (say 3) is chosen according to the distribution {a_43, a_44}. State 3
is a leaf state that emits observations, and the aggregate state at time t + 1, is {1,3}. On the other
hand, if state 2 is chosen at the root, a new state (say 6) in its refining model is chosen according to
the initial distribution {π_5, π_6} (a transition into the heavy state from another state). The chosen state
6 is a leaf state so the new aggregate state becomes {2,6}.
Algorithm 1 TrainRefiningHMM
Input: HMM λ = ⟨{q_j}_{j=1}^{n}, M, π, A, B⟩, heavy state q_i, observation sequence O
1: Draw a random HMM λ_i = ⟨{q_j}_{j=n+1}^{n+N}, M, {π_j}_{j=n+1}^{n+N}, {a_{jk}}_{j,k=n+1}^{n+N}, {b_{jm}}_{j=n+1..n+N, m=1..M}⟩
2: For each 1 ≤ j ≤ n, j ≠ i, replace transition q_j → q_i with q_j → q_{n+1}, ..., q_j → q_{n+N}, and q_i → q_j with q_{n+1} → q_j, ..., q_{n+N} → q_j
3: Remove state q_i with the corresponding {b_{im}}_{m=1}^{M} from λ, and record it as a state refined by λ_i. Set L_{q_j} = L_{q_i} for each n+1 ≤ j ≤ n+N
4: while not converged do
5:   For each 1 ≤ j ≤ n, j ≠ i, update a_{j(n+k)} = a_{ji} π_{n+k} and a_{(n+k)j} = a_{ij}, for 1 ≤ k ≤ N
6:   For each n+1 ≤ j ≤ n+N, update π_j = π_i π_j
7:   For each n+1 ≤ j, k ≤ n+N, update a_{jk} = a_{ii} a_{jk}
8:   Re-estimate {π_j}_{j=n+1}^{n+N}, {a_{jk}}_{j,k=n+1}^{n+N}, {b_{jm}}_{j=n+1..n+N, m=1..M}, using Eq. (3.2)
9: end while
10: Perform steps 5-7
Output: HMM λ
Algorithm 1 is a pseudocode of the training algorithm for the refining HMM λ_i of a heavy state q_i.
This algorithm is an extension of the Baum-Welch algorithm [2]. In steps 1-3, a random λ_i is
generated and connected to the HMM λ in place of q_i. Steps 5-8 iteratively update the parameters
of λ_i until the Baum-Welch convergence criterion is met, and in step 10, λ is updated with the final
λ_i parameters. Finally, in step 3, q_i is stored as the state refined by λ_i, to preserve the hierarchical
structure of the resulting model (essential for the selection mechanism). The algorithm is applied to
heavy states until all states in the HMM have visit rates lower than a required bound.
    π_j = (1/Z) [ γ_1(j) + Σ_{t=1}^{T−1} Σ_{k=1, k≠i}^{n} ξ_t(k, j) ],

    a_{jk} = ( Σ_{t=1}^{T−1} ξ_t(j, k) ) / ( Σ_{l=n+1}^{n+N} Σ_{t=1}^{T−1} ξ_t(j, l) ),

    b_{jm} = ( Σ_{t=1, O_t = m}^{T} γ_t(j) ) / ( Σ_{t=1}^{T} γ_t(j) ).        (3.2)
In Eq. (3.2), re-estimation formulas for the parameters of the newly added states (step 8) are presented,
where ξ_t(j, k) = P[S_t = q_j, S_{t+1} = q_k | O, λ]. It is easy to see that, similarly to the original Baum-Welch
formulas, the constraints requiring the parameters to be valid distributions are preserved (Z is a normalization
factor in the π_j equation). The main difference from the original formulas is in the re-estimation of
π_j: in the refinement process, transitions from other states into the heavy state q_i also affect the initial
distribution of its refining states.
The most likely aggregate state at time t, given sequence O, is found in a top-down manner using
the hierarchical structure of the model. Starting with the root model, λ_0, the most likely individual
state in it, say q_i, is identified. If this state has no refinement, then we are done. Otherwise, the most
likely individual state in λ_i (the HMM that refines q_i), say q_j, is identified, and the aggregate state is
updated to be {qi , qj }. The process continues until the last discovered state has no refinement.
The above procedure requires calculation of the quantity γ_t(i) not only for the leaf states (where it is
calculated using a standard forward-backward procedure), but also for the refined states. For those
states, γ_t(i) = Σ_{j : q_j refines q_i} γ_t(j), calculated recursively over the hierarchical structure.
The rejection subset is found using Eq. (3.1), applied to the aggregate states of the refined model.
Visit and risk estimates for the aggregate state {q_{i_1}, ..., q_{i_k}} are calculated using γ_t(i_k) of the leaf state
q_{i_k} that identifies this aggregate state.
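The recursive computation of γ_t(i) over the hierarchy can be sketched as follows. The structure mirrors Figure 1 (states 3, 4 refine state 1; 5, 6 refine 2; and, by the same pattern, 7, 8 refine the heavy state 4); the leaf-state γ values are made up for illustration:

```python
def refined_gamma(gamma_leaf, refines, state):
    """gamma_t of a state: a leaf state carries its own forward-backward value;
    a refined (non-emitting) state sums gamma_t over its refining states,
    applied recursively down the hierarchy."""
    if state not in refines:  # leaf state: emits symbols, gamma known directly
        return gamma_leaf[state]
    return sum(refined_gamma(gamma_leaf, refines, s) for s in refines[state])

# Hierarchy of Figure 1 (heavy states 1, 2, 4 are refined).
refines = {1: [3, 4], 2: [5, 6], 4: [7, 8]}
gamma_leaf = {3: 0.2, 5: 0.1, 6: 0.15, 7: 0.3, 8: 0.25}  # made-up gamma_t values
g1 = refined_gamma(gamma_leaf, refines, 1)  # 0.2 + (0.3 + 0.25)
g2 = refined_gamma(gamma_leaf, refines, 2)  # 0.1 + 0.15
```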
The outcome of the RR procedure is a tree of HMMs whose main purpose is to redistribute visit rates
among states. This re-distribution is the key element that allows for achieving smooth RC curves.
Various other hierarchical HMM schemes have been proposed in the literature [4, 8, 10, 18, 20].
While some of these schemes may appear similar to ours at first glance, they do not address the visit
rate re-distribution objective. In fact, those models were developed to serve other purposes such as
better modeling of sequences that have special structure (e.g., sequences hypothesized to be emerged
from a hierarchical generative model).
3.4 State Labeling
It remains to address the assignment of labels to the states in our state-based selection models. Labels
can be assigned to states a-priori, and then a supervised EM method can be used for training (this
model is known as Class HMM), as in [15]. Alternatively, state labels can be calculated from the
statistics of the states, if an unsupervised training method is used. In our setting, we are following
the latter approach. For a state q_i and a given observation label l, we calculate the average number of
visits (at q_i) whose corresponding label is l, as E[S_t = q_i | l_t = l, O, λ] = Σ_{1 ≤ t ≤ T, l_t = l} γ_t(i). Thus,
L_{q_i} is chosen to be the l that maximizes this quantity.
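A sketch of this unsupervised state-labeling rule (the γ values and labels below are hypothetical):

```python
def state_label(gamma, labels, i):
    """L_{q_i}: the observation label l maximizing the sum of gamma_t(i)
    over time steps t whose label l_t equals l."""
    totals = {}
    for t, l in enumerate(labels):
        totals[l] = totals.get(l, 0.0) + gamma[t][i]
    return max(totals, key=totals.get)

# Hypothetical posteriors gamma[t][i] for a 2-state model over 4 time steps.
gamma = [[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.6, 0.4]]
labels = [1, -1, -1, 1]
L0 = state_label(gamma, labels, 0)  # state 0 is visited mostly under label +1
L1 = state_label(gamma, labels, 1)  # state 1 mostly under label -1
```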
4 Experimental Results
We compared empirically the four selection mechanisms presented in Section 3, namely, the ambiguity model and the Naive, RLI, and RR sHMMs. All methods were compared on a next-day trend
prediction of the S&P500 index. This problem is known to be very difficult, and recent experimental
work by Rao and Hong [17] assessed that although an HMM succeeds in achieving some positive edge,
the accuracy is near fifty-fifty (51.72%) when pure price data is used.
For our prediction task, we took as observation sequence the directions of the S&P500 price changes.
Specifically, the direction d_t at time t is d_t = sign(p_{t+1} − p_t), where p_t are close prices. The
state-based models were fed with the series of partial sequences o_t = d_{t−ℓ+1}, ..., d_t. For the
ambiguity model, the partial sequences d_{t−ℓ+1}, ..., d_t were used as a pool of observation sequences.
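This preprocessing can be sketched as follows (function names are ours, not from the paper):

```python
def direction_sequence(prices):
    """d_t = sign(p_{t+1} - p_t) for a series of close prices p_t:
    +1 for an up move, -1 for a down move, 0 for no change."""
    def sign(x):
        return (x > 0) - (x < 0)
    return [sign(b - a) for a, b in zip(prices, prices[1:])]

def partial_sequences(directions, ell):
    """Sliding windows o_t = (d_{t-ell+1}, ..., d_t) fed to the state-based models."""
    return [directions[t - ell + 1:t + 1] for t in range(ell - 1, len(directions))]
```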
In a preliminary small experiment we observed the advantage of the state-based approach over the
ambiguity model. In order to validate this, we tried to falsify this hypothesis by optimizing the
hyper-parameters of the ambiguity model in hindsight.
For the state-based models we used a 5-state HMM, and predictions were made using the label of
the most likely individual state. Such HMMs are hypothesized to be sufficiently expressive to model
a small number of basic market conditions such as strong/weak trends (up and down) and sideways
markets [17, 23]. We have not tried to optimize this basic architecture and better results with more
expressive models can be possibly achieved. For the ambiguity model we constructed two 8-state
HMMs, where the length of a single observation sequence (ℓ) is 5. This architecture was optimized
in hindsight among all possibilities of up to 10 states, and up to length 8, for a single observation
sequence. Every refining model in the RR procedure had the same structure, and the upper bound
on the visit rate was fixed at 0.1. For sHMMs, the hyper-parameter ℓ was arbitrarily set to 3 (there
is possibly room for further improvement by optimizing the model w.r.t. this hyper-parameter).
RC curves were computed for each technique by taking the linear grid of rejection rate bounds from
0 to 0.9 in steps of 0.1. For each bound, every model was trained and tested using 30-fold cross-validation, with each fold consisting of 10 random restarts. Test performance was measured by
mean error rate, taken over the 30 folds, and standard error of the mean (SEM) statistics were also
calculated to monitor statistical significance.
Since the price sequences we deal with are highly non-stationary, we employed a walk-forward
scheme in which the model is trained over the window of past Wp returns and then tested on the
subsequent window of Wf "future" returns. Then, we "walk forward" Wf steps (days) in the return
sequence (so that the next training segment ends where the last test segment ended) and the process
repeats until we consume the entire data sequence. In the experiments we set Wp = 2000 and
Wf = 50 (that is, in each step we learn to predict the next business quarter, day by day). The data
sequence in this experiment consisted of the 3000 S&P500 returns from 1/27/1999 to 12/31/2010.
With our walk forward procedure, the first 2000 points were only used for training the first model.
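The walk-forward scheme can be sketched with an illustrative helper (not the authors' code):

```python
def walk_forward_splits(n, wp, wf):
    """Walk-forward train/test index ranges: train on the past wp points, test
    on the next wf, then advance wf steps so each new training segment ends
    where the previous test segment ended (the paper uses wp=2000, wf=50)."""
    splits = []
    start = 0
    while start + wp + wf <= n:
        splits.append((range(start, start + wp),
                       range(start + wp, start + wp + wf)))
        start += wf
    return splits
```

On the 3000-point sequence used in the paper this yields 20 train/test pairs, the first training window covering points 0-1999 and the first test window points 2000-2049.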
[Figure 2a: RC curves (error rate, roughly 0.40 to 0.50, against the coverage bound from 1 down to 0.1) for the four selection mechanisms (1. Ambiguity, 2. Naive, 3. RLI, 4. RR) together with the V-STACKS and Spectral Algorithm baselines.]
(a) Error rate vs coverage bound

Bound   Amb.    Naive   RLI     RR
0.9     0.889   0.999   0.899   0.942
0.8     0.796   0.939   0.798   0.842
0.7     0.709   0.778   0.696   0.735
0.6     0.616   0.719   0.593   0.628
0.5     0.516   0.633   0.491   0.526
0.4     0.440   0.507   0.391   0.423
0.3     0.337   0.385   0.291   0.324
0.2     0.256   0.305   0.192   0.224
0.1     0.168   0.199   0.094   0.131

(b) Coverage rate vs coverage bound

Figure 2: S&P500 RC-curves
Figure 2a shows that all four methods exhibited meaningful RC-curves; namely, the error rates
decreased monotonically with decreasing coverage bounds. The RLI and RR models (curves 3 and 4,
respectively) outperformed the Naive one (curve 2), by better exploiting the allotted coverage bound,
as is evident from Table 2b. In addition, the RR model outperformed the RLI model, and moreover,
its effective coverage is higher for every required coverage bound. This validates the effectiveness
of the RR approach that implements a smarter selection process than the RLI model. Specifically,
when RR refines a state and the resulting sub-states have different risk rates, the selection procedure
will tend to reject riskier states first. Comparing the state-based models (curves 2-4) to the ambiguity
model (curve 1), we see that all the state-based models outperformed the ambiguity model through
the entire coverage range (despite the advantage we provided to the ambiguity model).
We also compared our models to two alternative HMM learning methods that were recently proposed: the spectral algorithm of Hsu et al. [13], and the V-STACKS algorithm of Siddiqi et al. [20].
As can be seen in Figure 2a, the selective techniques can also improve the accuracy obtained by
these methods (with full coverage).
Quantitatively very similar results were also obtained in a number of other experiments (not presented, due to lack of space) with continuous data (without discretization) of the S&P500 index and
of Gold, represented by its GLD exchange traded fund (ETF) replica.
[Figure 3: histograms of per-state train/test differences; panel (a) Visit shows differences concentrated roughly in [-0.2, 0.2], panel (b) Risk shows differences spread over roughly [-1, 1].]

Figure 3: Distributions of visit and risk train/test differences
Figure 3a depicts the distribution of differences between empirical visit rates, measured on the training set, and those rates on the test set. It is evident that this distribution is symmetric and concentrated around zero. This means that our empirical visit estimates are quite robust and useful.
Figure 3b depicts a similar distribution, but now for state risks. Unfortunately, here the distribution
is much less concentrated, which means that our naive empirical risk estimates are rather noisy.
While the distribution is symmetric about zero (and underestimates are often compensated by overestimates) it indicates that these noisy measurements are a major bottleneck in achieving better error
rates. Therefore, it would be very interesting to consider more sophisticated risk estimation methods.
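Taking the empirical visit rate of a state as its average posterior occupancy, the per-state differences behind Figure 3a can be computed as in this sketch (the names and the exact estimator are our assumptions):

```python
def visit_rates(gamma):
    """Empirical visit rate of each state: average posterior occupancy,
    (1/T) * sum_t gamma_t(i), over a sequence of length T."""
    T = len(gamma)
    n_states = len(gamma[0])
    return [sum(g[i] for g in gamma) / T for i in range(n_states)]

def visit_rate_differences(gamma_train, gamma_test):
    """Per-state train-minus-test visit-rate differences; pooling one value
    per state and fold yields a histogram like the one in Figure 3a."""
    train, test = visit_rates(gamma_train), visit_rates(gamma_test)
    return [a - b for a, b in zip(train, test)]
```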
5 Related Work
Selective classification was introduced by Chow [5], who took a Bayesian route to infer the optimal
rejection rule and analyze the risk-coverage trade-off under complete knowledge of the underlying
probabilistic source. Chow?s Bayes-optimal policy is to reject instances whenever none of the posteriori probabilities are sufficiently predominant. While this policy cannot be explicitly applied in
agnostic settings, it marked a general ambiguity-based approach for rejection strategies. There is
a substantial volume of research contributions on selective classification where the main theme is
the implementation of reject mechanisms for particular classifier learning algorithms like support
vector machines, see, e.g., [21]. Most of these mechanisms can be viewed as variations of the Chow
ambiguity-based policy. The general consensus is that selective classification can often provide
substantial error reductions and therefore rejection techniques have found good use in numerous
applications, see, e.g., [12]. Rejection mechanisms were also utilized in [14] as a post-processing
output verifier for HMM-based recognition systems. There have also been a few theoretical studies
providing worst-case high-probability bounds on the risk-coverage trade-off; see, e.g., [1, 6, 7, 9].
HMMs have been extensively studied and used both theoretically and in numerous application areas.
In particular, financial modeling with HMMs has been considered since their introduction by Baum
et al. While a complete survey is clearly beyond our scope here, we mention a few related results.
Hamilton [11] introduced a regime-switching model, in which the sequence is hypothesized to be
generated by a number of hidden sources, or regimes, whose switching process is modeled by a (first-order) Markov chain. Later, in [19] a hidden Markov model of neural network "experts" was used for
prediction of half-hour and daily price changes of the S&P500 index. Zhang [23] applied this model
for predicting S&P500 next day trends, employing mixture of Gaussians in the states. The latter two
works reported on prominent results in terms of cumulative profit. The recent experimental work by
Rao and Hong [17] evaluated HMMs for a next-day trend prediction task and measured performance
in terms of accuracy. They reported on a slight but consistent positive prediction edge.
In [3], an HMM-based classifier was proposed for "reliable trends," defined to be specialized 15-day
return sequences that end with either five consecutive positive or consecutive negative returns. A
classifier was constructed using two HMMs, one trained to identify upward (reliable) trends and the
other, for downward (reliable) trends. Non-reliable sequences are always rejected. Therefore, this
technique falls within selective prediction but the selection function has been manually predefined.
6 Concluding Remarks
The structure and modularity of HMMs make them particularly convenient for incorporating selective prediction mechanisms. Indeed, the proposed state-based method can result in a smooth and
monotonically decreasing risk-coverage trade-off curve that allows for some control on the desired
level of selectivity. We focused on selective prediction of trends in financial sequences. For these
difficult prediction tasks our models can provide non-trivial prediction improvements. We expect
that the relative advantage of these selective prediction techniques will be higher in easier tasks, or
even in the same task by utilizing more elaborate HMM modeling, perhaps including other sources
of specialized information including prices of other correlated indices.
We believe that a major bottleneck in attaining smaller test errors is the noisy risk estimates we obtain
for the hidden states (see Figure 3b). This noise is partly due to the noisy nature of our prediction
problem, but may also be attributed to the simplistic approach we took in estimating empirical risk.
A challenging problem would be to incorporate more robust estimates in our mechanism, which
is likely to enable better risk-coverage trade-offs. Finally, it will be very interesting to examine
selective prediction mechanisms in the more general context of Bayesian networks and other types
of graphical models.
Acknowledgements
This work was supported in part by the IST Programme of the European Community, under the
PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors'
views.
References
[1] P. L. Bartlett and M. H. Wegkamp. Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9:1823-1840, 2008.
[2] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164-171, 1970.
[3] M. Bicego, E. Grosso, and E. Otranto. A Hidden Markov Model approach to classify and predict the sign of financial local trends. SSPR, 5342:852-861, 2008.
[4] M. Brand. Coupled Hidden Markov Models for modeling interacting processes. Technical Report 405, MIT Media Lab, 1997.
[5] C. Chow. On optimum recognition error and reject tradeoff. IEEE-IT, 16:41-46, 1970.
[6] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. JMLR, 11:1605-1641, May 2010.
[7] R. El-Yaniv and Y. Wiener. Agnostic selective classification. In NIPS, 2011.
[8] S. Fine, Y. Singer, and N. Tishby. The Hierarchical Hidden Markov Model: Analysis and Applications. Machine Learning, 32(1):41-62, 1998.
[9] Y. Freund, Y. Mansour, and R. E. Schapire. Generalization bounds for averaged classifiers. Annals of Statistics, 32(4):1698-1722, 2004.
[10] Z. Ghahramani and M. I. Jordan. Factorial Hidden Markov Models. Machine Learning, 29(2-3):245-273, 1997.
[11] J. Hamilton. Analysis of time series subject to changes in regime. Journal of Econometrics, 45(1-2):39-70, 1990.
[12] B. Hanczar and E. R. Dougherty. Classification with reject option in gene expression data. Bioinformatics, 24:1889-1895, 2008.
[13] D. Hsu, S. Kakade, and T. Zhang. A spectral algorithm for learning Hidden Markov Models. In COLT, 2009.
[14] A. L. Koerich. Rejection strategies for handwritten word recognition. In IWFHR, 2004.
[15] A. Krogh. Hidden Markov Models for labeled sequences. In Proceedings of the 12th IAPR ICPR'94, pages 140-144, 1994.
[16] L. R. Rabiner. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), February 1989.
[17] S. Rao and J. Hong. Analysis of Hidden Markov Models and Support Vector Machines in financial applications. Technical Report UCB/EECS-2010-63, Electrical Engineering and Computer Sciences, University of California at Berkeley, 2010.
[18] L. K. Saul and M. I. Jordan. Mixed memory Markov models: Decomposing complex stochastic processes as mixtures of simpler ones. Machine Learning, 37:75-87, 1999.
[19] S. Shi and A. S. Weigend. Taking time seriously: Hidden Markov Experts applied to financial engineering. In IEEE/IAFE, pages 244-252. IEEE, 1997.
[20] S. Siddiqi, G. Gordon, and A. Moore. Fast State Discovery for HMM Model Selection and Learning. In AI-STATS, 2007.
[21] F. Tortorella. Reducing the classification cost of support vector classifiers through an ROC-based reject rule. Pattern Anal. Appl., 7:128-143, 2004.
[22] A. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE-IT, 13(2):260-269, 1967.
[23] Y. Zhang. Prediction of financial time series with Hidden Markov Models. Master's thesis, The School of Computing Science, Simon Fraser University, Canada, 2004.
Predicting Dynamic Difficulty
Olana Missura and Thomas Gärtner
University of Bonn and Fraunhofer IAIS
Schloß Birlinghoven
52757 Sankt Augustin, Germany
{olana.missura,thomas.gaertner}@uni-bonn.de
Abstract
Motivated by applications in electronic games as well as teaching systems, we
investigate the problem of dynamic difficulty adjustment. The task here is to repeatedly find a game difficulty setting that is neither "too easy" and bores the
player, nor "too difficult" and overburdens the player. The contributions of this paper are (i) the formulation of difficulty adjustment as an online learning problem
on partially ordered sets, (ii) an exponential update algorithm for dynamic difficulty adjustment, (iii) a bound on the number of wrong difficulty settings relative
to the best static setting chosen in hindsight, and (iv) an empirical investigation of
the algorithm when playing against adversaries.
1 Introduction
While difficulty adjustment is common practice in many traditional games (consider, for instance,
the handicap in golf or the handicap stones in go), the case for dynamic difficulty adjustment in
electronic games has been made only recently [7]. Still, there are already many different, more
or less successful, heuristic approaches for implementing it. In this paper, we formalise dynamic
difficulty adjustment as a game between a master and a player in which the master tries to predict the
most appropriate difficulty setting. As the player is typically a human with changing performance
depending on many hidden factors as well as luck, no assumptions about the player can be made.
The difficulty adjustment game is played on a partially ordered set which reflects the "more difficult
than"-relation on the set of difficulty settings. To the best of our knowledge, in this paper, we provide
the first thorough theoretical treatment of dynamic difficulty adjustment as a prediction problem.
The contributions of this paper are: We formalise the learning problem of dynamic difficulty adjustment (in Section 2), propose a novel learning algorithm for this problem (in Section 4), and give
a bound on the number of proposed difficulty settings that were not just right (in Section 5). The
bound limits the number of mistakes the algorithm can make relative to the best static difficulty setting chosen in hindsight. For the bound to hold, no assumptions whatsoever need to be made on the
behaviour of the player. Last but not least we empirically study the behaviour of the algorithm under
various circumstances (in Section 6). In particular, we investigate the performance of the algorithm
"against" statistically distributed players by simulating the players as well as "against" adversaries
by asking humans to try to trick the algorithm in a simplified setting. Implementing our algorithm
into a real game and testing it on real human players is left to future work.
2 Formalisation
To be able to theoretically investigate dynamic difficulty adjustment, we view it as a game between
a master and a player, played on a partially ordered set modelling the "more difficult than"-relation.
The game is played in turns where each turn has the following elements:
1. the game master chooses a difficulty setting,
2. the player plays one ?round? of the game in this setting, and
3. the game master experiences whether the setting was "too difficult", "just right", or "too easy" for the player.
The master aims at making as few mistakes as possible, that is, at choosing a difficulty setting
that is "just right" as often as possible. In this paper, we aim at developing an algorithm for the
master with theoretical guarantees on the number of mistakes in the worst case while not making
any assumptions about the player.
To simplify our analysis, we make the following, rather natural assumptions:
- the set of difficulty settings is finite and
- in every round, the (hidden) difficulty settings respect the partial order, that is,
  - no state that "is more difficult than" a state which is "too difficult" can be "just right" or "too easy" and
  - no state that "is more difficult than" a state which is "just right" can be "too easy".
Even with these natural assumptions, in the worst case, no algorithm for the master will be able to
make even a single correct prediction. As we cannot make any assumptions about the player, we
will be interested in comparing our algorithm theoretically and empirically with the best statically
chosen difficulty setting, as is commonly the case in online learning [3].
3 Related Work
As of today there exist a few commercial games with a well designed dynamic difficulty adjustment
mechanism, but all of them employ heuristics and as such suffer from the typical disadvantages
(not being easily transferable to other games, requiring extensive testing, etc.). What we would like
to have instead of heuristics is a universal mechanism for dynamic difficulty adjustment: An online
algorithm that takes as an input (game-specific) ways to modify difficulty and the current player's
in-game history (actions, performance, reactions, ...) and produces as an output an appropriate
difficulty modification.
Both artificial intelligence researchers and the game developers community display an interest in the
problem of automatic difficulty scaling. Different approaches can be seen in the work of R. Hunicke
and V. Chapman [10], R. Herbrich and T. Graepel [9], Danzi et al. [7], and others. Since the perceived
difficulty and the preferred difficulty are subjective parameters, the dynamic difficulty adjustment
algorithm should be able to choose the ?right? difficulty level in a comparatively short time for any
particular player. Existing work in player modeling in computer games [14, 13, 5, 12] demonstrates
the power of utilising the player models to create the games or in-game situations of high interest
and satisfaction for the players.
As can be seen from these examples the problem of dynamic difficulty adjustment in video games
was attacked from different angles, but a unifying and theoretically sound approach is still missing. To the best of our knowledge this work contains the first theoretical formalization of dynamic
difficulty adjustment as a learning problem.
Under the assumptions described in Section 2, we can view the partially ordered set as a directed
acyclic graph, at each round labelled by three colours (say, red for "too difficult", green for "just right", and blue for "too easy") such that

- for every directed path in the graph between two equally labelled vertices, all vertices on that path have the same colour and
- there is no directed path from a green vertex to a red vertex and none from a blue vertex to either a red or a green vertex.
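These two rules can be checked mechanically; the sketch below is our own helper (not from the paper), with adjacency given as successor lists and edges oriented in the direction the rules refer to:

```python
def _reachable(adj, s):
    """Vertices reachable from s by a non-empty directed path (iterative DFS)."""
    seen, stack = set(), [s]
    while stack:
        u = stack.pop()
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def labelling_is_valid(adj, colour):
    """Check a colour labelling of the difficulty DAG against the two rules:
    no green-to-red path, no blue-to-red or blue-to-green path, and every
    vertex on a path between two equally coloured vertices shares their colour."""
    verts = set(adj) | {w for vs in adj.values() for w in vs}
    reach = {v: _reachable(adj, v) for v in verts}
    for u in verts:
        for v in reach[u]:
            cu, cv = colour[u], colour[v]
            if cu == 'green' and cv == 'red':
                return False
            if cu == 'blue' and cv in ('red', 'green'):
                return False
            if cu == cv:
                # vertices strictly between u and v on some directed path:
                # descendants of u that can still reach v
                between = {x for x in reach[u] if v in reach[x]}
                if any(colour[x] != cu for x in between):
                    return False
    return True
```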
The colours are allowed to change in each round as long as they obey the above rules. The master,
i.e., the learning algorithm, does not see the colours but must point at a green vertex as often as
possible. If he points at a red vertex, he receives the feedback −1; if he points at a blue vertex, he
receives the feedback +1.
This setting is related to learning directed cuts with membership queries. For learning directed cuts,
i.e., monotone subsets, Gärtner and Garriga [8] provided algorithms and bounds for the case in
which the labelling does not change over time. They then showed that the intersection between a
monotone and an antimonotone subset in not learnable. This negative result is not applicable in our
case, as the feedback we receive is more powerful. They furthermore showed that directed cuts are
not learnable with traditional membership queries if the labelling is allowed to change over time.
This negative result also does not apply to our case as the aim of the master is "only" to point at a
green vertex as often as possible and as we are interested in a comparison with the best static vertex
chosen in hindsight.
If we ignore the structure inherent in the difficulty settings, we will be in a standard multi-armed
bandit setting [2]: There are K arms, to which an unknown adversary assigns loss values on each
iteration (0 to the "just right" arms, 1 to all the others). The goal of the algorithm is to choose an
arm on each iteration to minimize its overall loss. The difficulty of the learning problem comes from
the fact that only the loss of the chosen arm is revealed to the algorithm. This setting was studied
extensively in the last years, see [11, 6, 4, 1] and others. The standard performance measure is the
so-called "regret": the difference between the loss acquired by the learning algorithm and the loss of the best
static arm chosen in hindsight. The best known to-date algorithm that does not use any additional
information is the Improved Bandit Strategy (called IMPROVEDPI in the following) [3]. The upper
bound on its regret is of the order √(KT ln(T)), where T is the number of iterations. IMPROVEDPI
will be the second baseline, after the best static setting in hindsight (BSIH), in our experiments.
4 Algorithm
In this section we give an exponential update algorithm for predicting a vertex that corresponds to a
"just right" difficulty setting in a finite partially ordered set (K, ⪰) of difficulty settings. The partial
order is such that for i, j ∈ K we write i ⪰ j if difficulty setting i is "more difficult than" difficulty
setting j. The learning rate of the algorithm is denoted by η. The response o_t that the master algorithm
can observe is +1 if the chosen difficulty setting was "too easy", 0 if it was "just right", and −1 if it
was "too difficult". The algorithm maintains a belief w of each vertex being "just right" and updates
this belief if the observed response implies that the setting was "too easy" or "too difficult".
Algorithm 1 PARTIALLY-ORDERED-SET MASTER (POSM) for Difficulty Adjustment
Require: parameter η ∈ (0, 1), K difficulty settings K, partial order ⪰ on K, and a sequence of
observations o_1, o_2, ...
 1: ∀k ∈ K: let w_1(k) = 1
 2: for each turn t = 1, 2, ... do
 3:   ∀k ∈ K: let A_t(k) = Σ_{x ∈ K: x ⪰ k} w_t(x)
 4:   ∀k ∈ K: let B_t(k) = Σ_{x ∈ K: x ⪯ k} w_t(x)
 5:   PREDICT k_t = argmax_{k ∈ K} min{B_t(k), A_t(k)}
 6:   OBSERVE o_t ∈ {−1, 0, +1}
 7:   if o_t = +1 then
 8:     ∀k ∈ K: let w_{t+1}(k) = η · w_t(k) if k ⪯ k_t, and w_{t+1}(k) = w_t(k) otherwise
 9:   end if
10:   if o_t = −1 then
11:     ∀k ∈ K: let w_{t+1}(k) = η · w_t(k) if k ⪰ k_t, and w_{t+1}(k) = w_t(k) otherwise
12:   end if
13: end for
The main idea of Algorithm 1 is that in each round we want to make sure we can update as much
belief as possible. The significance of this will be clearer when looking at the theory in the next
section. To ensure it, we compute for each setting k the belief "above" k as well as "below" k.
That is, A_t in line 3 of the algorithm collects the belief of all settings that are known to be "more
difficult" and B_t in line 4 of the algorithm collects the belief of all settings that are known to be "less
difficult" than k. If we observe that the proposed setting was "too easy", that is, we should "increase
the difficulty", in line 8 we update the belief of the proposed setting as well as all settings easier than
the proposed. If we observe that the proposed setting was "too difficult", that is, we should "decrease
the difficulty", in line 11 we update the belief of the proposed setting as well as all settings more
difficult than the proposed. The amount of belief that is updated for each mistake is thus equal to
B_t(k_t) or A_t(k_t). To gain the most information independent of the observation and thus to achieve
the best performance, we choose the k that gives us the best worst-case update min{B_t(k), A_t(k)}
in line 5 of the algorithm.
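As an illustration, the prediction and update steps of Algorithm 1 can be sketched for the special case of a chain (a totally ordered set of difficulty settings, indexed 0..K−1 with larger index meaning more difficult). The function names and list-based weight representation are ours, not from the paper.

```python
def posm_predict(w):
    """Pick the setting maximizing the worst-case updatable belief min{A, B}."""
    best_k, best_val = 0, -1.0
    for k in range(len(w)):
        B = sum(w[: k + 1])  # belief on settings no more difficult than k
        A = sum(w[k:])       # belief on settings no less difficult than k
        val = min(A, B)
        if val > best_val:
            best_k, best_val = k, val
    return best_k

def posm_update(w, k, o, beta=0.5):
    """Multiply by beta the belief of settings implicated by observation o:
    o = +1 ("too easy") penalizes settings <= k; o = -1 ("too difficult")
    penalizes settings >= k; o = 0 ("just right") leaves w unchanged."""
    if o == +1:
        return [beta * wi if i <= k else wi for i, wi in enumerate(w)]
    if o == -1:
        return [beta * wi if i >= k else wi for i, wi in enumerate(w)]
    return list(w)
```

For uniform initial beliefs on a chain the prediction is the middle setting, and a "too easy" response shifts subsequent predictions toward harder settings.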
5 Theory
We will now show a bound on the number of inappropriate difficulty settings that are proposed,
relative to the number of mistakes the best static difficulty setting makes. We denote the number of
mistakes of POSM until time T by m and the minimum number of times a statically chosen difficulty
setting would have made a mistake until time T by M. We denote furthermore the total amount of
belief on the partially ordered set by W_t = Σ_{k∈K} w_t(k).
The analysis of the algorithm relies on the notion of a path cover of K, i.e., a set of paths covering
K. A path is a subset of K that is totally ordered. A set of paths is covering K if the union of the
paths is equal to K. Any path cover can be chosen, but the minimum path cover of K achieves the
tightest bound. It can be found in time polynomial in |K| and its size is equal to the size of the
largest antichain in (K, ⪯). We denote the chosen set of paths by C.
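For intuition about |C|, here is a small sketch (our own construction, not from the paper) of a path cover for the p × q grid poset under the componentwise order: each row (or column) is totally ordered, so min(p, q) chains suffice, matching the size of the largest antichain (an anti-diagonal).

```python
def grid_chain_cover(p, q):
    """Cover the p x q grid poset {(i, j)} with min(p, q) chains.

    Under the componentwise order, each row {(i, 0), ..., (i, q-1)} is a
    chain (i fixed, j increasing), and likewise each column.
    """
    if p <= q:
        return [[(i, j) for j in range(q)] for i in range(p)]  # p row-chains
    return [[(i, j) for i in range(p)] for j in range(q)]      # q column-chains
```

For the 7 × 7 grid used in the experiments below, this gives |C| = 7, while a single chain trivially has |C| = 1.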
With this terminology, we are now ready to state the main result of our paper:
Theorem 1. For the number of mistakes of POSM, it holds that:

m ≤ ⌈ (ln|K| + M·ln(1/β)) / ln( 2|C| / (2|C| − 1 + β) ) ⌉ .
For all c ∈ C we denote the amount of belief on every chain by W_t^c = Σ_{x∈c} w_t(x), the belief
"above" k on c by A_t^c(k) = Σ_{x∈c: x⪰k} w_t(x), and the belief "below" k on c by B_t^c(k) =
Σ_{x∈c: x⪯k} w_t(x). Furthermore, we denote the "heaviest" chain by c_t = argmax_{c∈C} W_t^c.
Unless stated otherwise, the following statements hold for all t.
Observation 1.1. To relate the amount of belief updated by POSM to the amount of belief on each
chain, observe that

max_{k∈K} min{A_t(k), B_t(k)} = max_{c∈C} max_{k∈c} min{A_t(k), B_t(k)}
    ≥ max_{c∈C} max_{k∈c} min{A_t^c(k), B_t^c(k)}
    ≥ max_{k∈c_t} min{A_t^{c_t}(k), B_t^{c_t}(k)} .

Observation 1.2. As c_t is the "heaviest" among all chains and Σ_{c∈C} W_t^c ≥ W_t, it holds that
W_t^{c_t} ≥ W_t/|C|.
We will next show that for every chain, there is a difficulty setting for which it holds that: If we
proposed that setting and made a mistake, we would be able to update at least half of the total
weight of that chain.
Proposition 1.1. For all c ∈ C it holds that

max_{k∈c} min{A_t^c(k), B_t^c(k)} ≥ W_t^c / 2 .

Proof. We choose

i = argmax_{k∈c} { B_t^c(k) | B_t^c(k) < W_t^c/2 }

and

j = argmin_{k∈c} { B_t^c(k) | B_t^c(k) ≥ W_t^c/2 } .

This way, we obtain i, j ∈ c for which B_t^c(i) < W_t^c/2 ≤ B_t^c(j) and which are consecutive, that
is, ∄k ∈ c : i ≺ k ≺ j. Such i, j exist and are unique as ∀x ∈ K : w_t(x) > 0. We then have
B_t^c(i) + A_t^c(j) = W_t^c and thus also A_t^c(j) > W_t^c/2. This immediately implies

W_t^c/2 ≤ min{A_t^c(j), B_t^c(j)} ≤ max_{k∈c} min{A_t^c(k), B_t^c(k)} .
Observation 1.3. We use the previous proposition to show that in each iteration in which POSM
proposes an inappropriate difficulty setting, we update at least a constant fraction of the total weight
of the partially ordered set:

max_{k∈K} min{A_t(k), B_t(k)} ≥ max_{k∈c_t} min{A_t^{c_t}(k), B_t^{c_t}(k)} ≥ W_t^{c_t}/2 ≥ W_t/(2|C|) .
Proof (of Theorem 1). From the previous observations it follows that at each mistake we update at
least a fraction of 1/(2|C|) of the total weight and have at most a fraction of (2|C| − 1)/(2|C|) which
is not updated. This implies

W_{t+1} ≤ (β/(2|C|))·W_t + ((2|C| − 1)/(2|C|))·W_t = ((2|C| − 1 + β)/(2|C|))·W_t .

Applying this bound recursively, we obtain for time T

W_T ≤ W_0·((2|C| − 1 + β)/(2|C|))^m = |K|·((2|C| − 1 + β)/(2|C|))^m .

As we only update the weight of a difficulty setting if the response implied that the algorithm made
a mistake, β^M is a lower bound on the weight of one difficulty setting and hence also W_T ≥ β^M.
Solving

β^M ≤ |K|·((2|C| − 1 + β)/(2|C|))^m

for m proves the theorem.
Note that this bound is similar to the bound for the full information setting [3] despite much weaker
information being available in our case. The influence of |C| is the new ingredient that changes the
behaviour of this bound for different partially ordered sets.
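To get a feel for the bound of Theorem 1, it can be evaluated numerically; the helper below is our own, with the ceiling applied as in the theorem statement.

```python
import math

def posm_mistake_bound(num_settings, num_chains, beta, M):
    """Upper bound of Theorem 1 on the number m of mistakes of POSM,
    given |K| = num_settings, |C| = num_chains, learning rate beta,
    and M mistakes of the best static setting."""
    return math.ceil(
        (math.log(num_settings) + M * math.log(1.0 / beta))
        / math.log(2.0 * num_chains / (2.0 * num_chains - 1.0 + beta))
    )
```

For a single chain of 50 settings (|C| = 1) with β = 0.5 and a perfect static comparator (M = 0) the bound is 14, while a 7 × 7 grid (|C| = 7) yields a noticeably larger bound, reflecting the influence of |C|.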
6 Experiments
We performed two sets of experiments: simulating a game against a stochastic environment, as well
as using human players to provide our algorithm with a non-oblivious adversary. To evaluate the
performance of our algorithm we have chosen two baselines. The first one is the best static difficulty
setting in hindsight: it is the difficulty that a player would pick if she knew her skill level in advance
and had to choose the difficulty only once. The second one is the ImprovedPI algorithm [3].
In the following we denote the subset of the poset's vertices with the "just right" labels the zero-zone
(because in the corresponding loss vector their components are equal to zero). In both the stochastic and
the adversarial scenario we consider two different settings: a so-called "smooth" and a "non-smooth" one.
The settings' names describe the way the zero-zone changes with time. In the "non-smooth" setting
we don't place any restrictions on it apart from its size, while in the "smooth" setting the border of
the zero-zone is allowed to move only by one vertex at a time. These two settings represent two
extreme situations: one player changing her skills gradually with time is changing the zero-zone
"smoothly"; different players with different skills for each new challenge the game presents will
make the zero-zone "jump". In a more realistic scenario the zero-zone would change "smoothly"
most of the time, but sometimes it would perform jumps.
[Figure 1 (plots omitted): Stochastic adversary, "smooth" setting, on a single chain of 50 vertices. Panels: (a) Loss, (b) Regret; curves for ImprovedPI, POSM, and BSIH over 500 iterations.]
[Figure 2 (plots omitted): Stochastic adversary, "smooth" setting, on a grid of 7×7 vertices. Panels: (a) Loss, (b) Regret; curves for ImprovedPI, POSM, and BSIH over 500 iterations.]
6.1 Stochastic Adversary
In the first set of experiments we performed, the adversary is stochastic: on every iteration the
zero-zone changes with a pre-defined probability. In the "smooth" setting only one of the border vertices
of the zero-zone at a time can change its label. For the "non-smooth" setting we consider a truly evil
case of limiting the zero-zone to always containing only one vertex, and a case where the zero-zone
may contain up to 20% of all the vertices in the graph. Note that even relabeling of a single vertex
may break the consistency of the labeling with regard to the poset. The necessary repair procedure
may result in more than one vertex being relabeled at a time.
We consider two graphs that represent two different but typical game structures with regard to the
difficulty: a single chain and a 2-dimensional grid. A set of progressively more difficult challenges
such as can be found in a puzzle or a time-management game can be directly mapped onto a chain
of a length corresponding to the amount of challenges. A 2- (or more-) dimensional grid on the other
hand is more like a skill-based game, where depending on the choices players make, different game
states become available to them. In our experiments the chain contains 50 vertices, while the grid is
built on 7 × 7 vertices.
In all considered variations of the setting the game lasts for 500 iterations and is repeated 10 times.
The resulting mean and standard deviation values of loss and regret, respectively, are shown in
the following figures: the "smooth" setting in Figures 1(a), 1(b) and 2(a), 2(b); the "non-smooth"
setting in Figures 3(a), 3(b) and 4(a), 4(b). (For brevity we omit the plots with the results of other
"non-smooth" variations. They all show very similar behaviour.)
Note that in the "smooth" setting POSM is outperforming BSIH and, therefore, its regret is negative.
Furthermore, in the considerably more difficult "non-smooth" setting all algorithms perform badly
(as expected). Nevertheless, in the slightly easier case of a larger zero-zone, BSIH performs the best of
the three, and POSM's performance starts getting better.
While BSIH is a baseline that cannot be implemented, as it requires one to foresee the future, POSM is a
correct algorithm for dynamic difficulty adjustment. Therefore it is surprising that POSM performs
almost as well as BSIH or even better.
[Figure 3 (plots omitted): Stochastic adversary, "non-smooth" setting, exactly one vertex in the zero-zone, on a single chain of 50 vertices. Panels: (a) Loss, (b) Regret; curves for ImprovedPI, POSM, and BSIH over 500 iterations.]
[Figure 4 (plots omitted): Stochastic adversary, "non-smooth" setting, up to 20% of all vertices may be in the zero-zone, on a single chain of 50 vertices. Panels: (a) Loss, (b) Regret; curves for ImprovedPI, POSM, and BSIH over 500 iterations.]
6.2 Evil Adversary
While the experiments in our stochastic environment show encouraging results, of real interest to
us is the situation where the adversary is "evil", non-stochastic, and furthermore, non-oblivious. In
dynamic difficulty adjustment the algorithm will have to deal with people, who are learning and
changing in hard-to-predict ways. We limit our experiments to the case of a linear order on difficulty
settings, in other words, the chain. Even though it is a simplified scenario, this situation is rather
natural for games and it demonstrates the power of our algorithm.
To simulate this situation, we've decided to use people as adversaries. Just as in dynamic difficulty
adjustment players are not supposed to be aware of the mechanics, our methods and goals were not
disclosed to the testing persons. Instead they were presented with a modified game of cups: on
every iteration the casino is hiding a coin under one of the cups; after that the player can point at
two of the cups. If the coin is under one of these two, the player wins it. Behind the scenes the
cups represented the vertices on the chain and the players' choices were setting the lower and upper
borders of the zero-zone. If the algorithm's prediction was wrong, one of the two cups was decided
on randomly and the coin was placed under it. If the prediction was correct, no coin was awarded.
Unfortunately, using people in such experiments places severe limitations on the size of the game.
In a simplified setting such as this and without any extrinsic rewards they can only handle short chains
and short games before getting bored. In our case we restricted the length of the chain to 8 and the
length of each game to 15. It is possible to simulate a longer game by not resetting the weights of
the algorithm after each game is over, but at the current stage of work it wasn't done.
Again, we created the "smooth" and "non-smooth" setting by placing or removing restrictions on
how players were allowed to choose their cups. To each game either ImprovedPI or POSM was
assigned. The results for the "smooth" setting are in Figures 5(a), 5(b), and 5(c); for the
"non-smooth" in Figures 6(a), 6(b), and 6(c). Note that, due to the fact that this time different games were
played by ImprovedPI and POSM, we have two different plots for their corresponding loss values.
[Figure 5 (plots omitted): Evil adversary, "smooth" setting, a single chain of 8 vertices. Panels: (a) Loss in games vs ImprovedPI (with Best Static), (b) Loss in games vs POSM (with Best Static), (c) Regret of ImprovedPI and POSM, over 15 turns.]
[Figure 6 (plots omitted): Evil adversary, "non-smooth" setting, a single chain of 8 vertices. Panels: (a) Loss in games vs ImprovedPI (with Best Static), (b) Loss in games vs POSM (with Best Static), (c) Regret of ImprovedPI and POSM, over 15 turns.]
We can see that in the "smooth" setting, again, the performance of POSM is very close to that of BSIH.
In the more difficult "non-smooth" one the results are also encouraging. Note that the loss of BSIH
appears to be worse in games played by POSM. A plausible interpretation is that players had to
follow more difficult (less static) strategies to fool POSM to win their coins. Nevertheless, the regret
of POSM is small even in this case.
7 Conclusions
In this paper we formalised dynamic difficulty adjustment as a prediction problem on partially ordered
sets and proposed a novel online learning algorithm, POSM, for dynamic difficulty adjustment.
Using this formalisation, we were able to prove a bound on the performance of POSM relative to the
best static difficulty setting chosen in hindsight, BSIH. To validate our theoretical findings empirically,
we performed a set of experiments, comparing POSM and another state-of-the-art algorithm to
BSIH in two settings: (a) simulating the player by a stochastic process and (b) simulating the player
by humans who are encouraged to play as adversarially as possible. These experiments showed
that POSM very often performs almost as well as BSIH and, even more surprisingly, sometimes even
better. As this is also even better than the behaviour suggested by our mistake bound, there seems to
be a gap between the theoretical and empirical performance of our algorithm.
In future work we will on the one hand investigate this gap, aiming at providing better bounds by,
perhaps, making stronger but still realistic assumptions. On the other hand, we will implement
POSM in a range of computer games as well as teaching systems to observe its behaviour in real
application scenarios.
Acknowledgments
This work was supported in part by the German Science Foundation (DFG) in the Emmy Noether
program under grant "GA 1615/1-1". The authors thank Michael Kamp for proofreading.
References
[1] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for
bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory
(COLT), 2008.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. Gambling in a rigged casino: The
adversarial multi-armed bandit problem. In Annual IEEE Symposium on Foundations of
Computer Science, pages 322–331, 1995.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press,
2006.
[4] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction
with expert advice. Machine Learning, 66:321–352, 2007. doi:10.1007/s10994-006-5001-7.
[5] D. Charles and M. Black. Dynamic player modeling: A framework for player-centered digital
games. In Proc. of the International Conference on Computer Games: Artificial Intelligence,
Design and Education, pages 29–35, 2004.
[6] V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization
against an adaptive adversary. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium
on Discrete Algorithms, SODA '06, pages 937–943, New York, NY, USA, 2006. ACM.
[7] G. Danzi, A. H. P. Santana, A. W. B. Furtado, A. R. Gouveia, A. Leitão, and G. L. Ramalho.
Online adaptation of computer games agents: A reinforcement learning approach. II Workshop
de Jogos e Entretenimento Digital, pages 105–112, 2003.
[8] T. Gärtner and G. C. Garriga. The cost of learning directed cuts. In Proceedings of the 18th
European Conference on Machine Learning, 2007.
[9] R. Herbrich, T. Minka, and T. Graepel. TrueSkill™: A Bayesian skill rating system. In NIPS,
pages 569–576, 2006.
[10] R. Hunicke and V. Chapman. AI for dynamic difficulty adjustment in games. In Proceedings
of the Challenges in Game AI Workshop, Nineteenth National Conference on Artificial
Intelligence, 2004.
[11] H. McMahan and A. Blum. Online geometric optimization in the bandit setting against an
adaptive adversary. In J. Shawe-Taylor and Y. Singer, editors, Learning Theory, volume 3120
of Lecture Notes in Computer Science, pages 109–123. Springer Berlin / Heidelberg, 2004.
[12] O. Missura and T. Gärtner. Player modeling for intelligent difficulty adjustment. In Discovery
Science, pages 197–211. Springer, 2009.
[13] J. Togelius, R. Nardi, and S. Lucas. Making racing fun through player modeling and track
evolution. In SAB '06 Workshop on Adaptive Approaches for Optimizing Player Satisfaction in
Computer and Physical Games, pages 61–70, 2006.
[14] G. Yannakakis and M. Maragoudakis. Player modeling impact on player's entertainment in
computer games. Lecture Notes in Computer Science, 3538:74, 2005.
Learning with the Weighted Trace-norm under
Arbitrary Sampling Distributions
Rina Foygel
Department of Statistics
University of Chicago
[email protected]
Ohad Shamir
Microsoft Research New England
[email protected]
Ruslan Salakhutdinov
Department of Statistics
University of Toronto
[email protected]
Nathan Srebro
Toyota Technological Institute at Chicago
[email protected]
Abstract
We provide rigorous guarantees on learning with the weighted trace-norm under
arbitrary sampling distributions. We show that the standard weighted-trace norm
might fail when the sampling distribution is not a product distribution (i.e. when
row and column indexes are not selected independently), present a corrected variant for which we establish strong learning guarantees, and demonstrate that it
works better in practice. We provide guarantees when weighting by either the true
or empirical sampling distribution, and suggest that even if the true distribution is
known (or is uniform), weighting by the empirical distribution may be beneficial.
1 Introduction
One of the most common approaches to collaborative filtering and matrix completion is trace-norm
regularization [1, 2, 3, 4, 5]. In this approach we attempt to complete an unknown matrix, based on
a small subset of revealed entries, by finding a matrix with small trace-norm, which matches those
entries as best as possible.
This approach has repeatedly shown good performance in practice, and is theoretically well understood
for the case where revealed entries are sampled uniformly [6, 7, 8, 9, 10, 11]. Under such
uniform sampling, Θ(n log(n)) entries are sufficient for good completion of an n × n matrix,
i.e. a nearly constant number of entries per row. However, for arbitrary sampling distributions, the
worst-case sample complexity lies between a lower bound of Ω(n^{4/3}) [12] and an upper bound of
O(n^{3/2}) [13], i.e. requiring between n^{1/3} and n^{1/2} observations per row, and indicating it is not
appropriate for matrix completion in this setting.
Motivated by these issues, Salakhutdinov and Srebro [12] proposed to use a weighted variant of the
trace-norm, which takes the distribution of the entries into account, and showed experimentally that
this variant indeed leads to superior performance. However, although this recent paper established
that the weighted trace-norm corrects a specific situation where the standard trace-norm fails, no
general learning guarantees are provided, and it is not clear if indeed the weighted trace-norm always leads to the desired behavior. The only theoretical analysis of the weighted trace-norm that we
are aware of is a recent report by Negahban and Wainwright [10] that provides reconstruction guarantees for a low-rank matrix with i.i.d. noise, but only when the sampling distribution is a product
distribution, i.e. the rows index and column index of observed entries are selected independently. A
product distribution assumption does not seem realistic in many cases?e.g. for the Netflix data, it
would indicate that all users have the same (conditional) distribution over which movies they rate.
1
In this paper we rigorously study learning with a weighted trace-norm under an arbitrary sampling
distribution, and show that this situation is indeed more complicated, requiring a correction to the
weighting. We show that this correction is necessary, and present empirical results on the Netflix
and MovieLens dataset indicating that it is also helpful in practice. We also rigorously consider
weighting according to either the true sampling distribution (as in [10]) or the empirical frequencies,
as is actually done in practice, and present evidence that weighting by the empirical frequencies
might be advantageous. Our setting is also more general than that of [10]: we consider an arbitrary
loss and do not rely on i.i.d. noise, instead presenting results in an agnostic learning framework.
Setup and Notation. We consider an arbitrary unknown n × m target matrix Y, where a subset
of entries {Y_{i_t,j_t}}_{t=1}^{s} indexed by S = {(i_1, j_1), . . . , (i_s, j_s)} is revealed to us. Without loss of
generality, we assume n ≥ m. Throughout most of the paper, we assume S is drawn i.i.d. according
to some sampling distribution p(i, j) (with replacement). Based on this subset of entries, we would
like to fill in the missing entries and obtain a prediction matrix X̂_S ∈ R^{n×m}, with low expected loss

L_p(X̂_S) = E_{ij∼p}[ ℓ((X̂_S)_{ij}, Y_{ij}) ],

where ℓ(x, y) is some loss function. Note that we measure the
loss with respect to the same distribution p(i, j) from which the training set is drawn (this is also the
case in [12, 10, 13]).
The trace-norm of a matrix X ∈ R^{n×m}, written ‖X‖_tr, is defined as the sum of its singular values.
Given some distribution p(i, j) on [n] × [m], the weighted trace-norm of X is given by [12]

‖X‖_{tr(p_r,p_c)} = ‖ diag(p_r)^{1/2} · X · diag(p_c)^{1/2} ‖_tr ,

where p_r ∈ R^n and p_c ∈ R^m denote vectors of the row- and column-marginals respectively. Note
that the weighted trace-norm only depends on these marginals (but not their joint distribution) and
that if p_r and p_c are uniform, then ‖X‖_{tr(p_r,p_c)} = (1/√(nm))·‖X‖_tr. The weighted trace-norm does not
generally scale with n and m, and in particular, if X has rank r and entries bounded in [−1, 1], then
‖X‖_{tr(p_r,p_c)} ≤ √r regardless of which p(i, j) is used. This motivates us to define the class

W_r[p] = { X ∈ R^{n×m} : ‖X‖_{tr(p_r,p_c)} ≤ √r },
although we emphasize that our results do not directly depend on the rank, and W_r[p] certainly
includes full-rank matrices. We analyze here estimators of the form X̂_S = arg min{ L̂_S(X) : X ∈
W_r[p] }, where L̂_S(X) = (1/s)·Σ_{t=1}^{s} ℓ(X_{i_t,j_t}, Y_{i_t,j_t}) is the empirical error on the observed entries.
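The weighted trace-norm just defined can be computed directly from its definition; the short NumPy sketch below is our own illustration (the function name is ours).

```python
import numpy as np

def weighted_trace_norm(X, p_row, p_col):
    """||diag(p_row)^(1/2) . X . diag(p_col)^(1/2)||_tr,
    i.e. the sum of singular values of the rescaled matrix."""
    W = np.sqrt(p_row)[:, None] * X * np.sqrt(p_col)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()
```

With uniform marginals this reduces to ‖X‖_tr/√(nm); for instance, for X = I_3 with uniform p it evaluates to 1, and for a rank-1 sign matrix it stays at most √1 = 1, matching the √r bound above.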
Although we focus mostly on the standard inductive setting, where the samples are drawn i.i.d. and
the guarantee is on generalization for future samples drawn by the same distribution, our results can
also be stated in a transductive model, where a training set and a test set are created by splitting a
fixed subset of entries uniformly at random (as in [13]). The transductive setting is discussed, and
transductive variants of our Theorems are given, in Section 4.2 and in the Supplementary Materials.
2 Learning with the Standard Weighting
In this Section, we consider learning using the weighted trace-norm as suggested by Salakhutdinov
and Srebro [12], i.e. when the weighting is according to the sampling distribution p(i, j). Following
the approach of [6] and [11], we base our results on bounding the Rademacher complexity of Wr [p],
as a class of functions mapping index pairs to entry values. However, we modify the analysis for the
weighted trace-norm with non-uniform sampling.
For a class of matrices 𝒳 and a sample S = {(i_1, j_1), . . . , (i_s, j_s)} of indexes in [n] × [m], the
empirical Rademacher complexity of the class (with respect to S) is given by

R̂_S(𝒳) = E_{σ∈{±1}^s}[ sup_{X∈𝒳} (1/s)·Σ_{t=1}^{s} σ_t·X_{i_t,j_t} ],

where σ is a vector of signs drawn uniformly at random. Intuitively, R̂_S(𝒳) measures the extent to
which the class 𝒳 can "overfit" data, by finding a matrix X which correlates as strongly as possible
with a sample from a matrix of random noise. For a loss ℓ(x, y) that is Lipschitz in x, the Rademacher
complexity can be used to uniformly bound the deviations |L_p(X) − L̂_S(X)| for all X ∈ 𝒳, yielding
a learning guarantee on the empirical risk minimizer [14].
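For a finite class of matrices, this expectation over sign vectors can be estimated by Monte-Carlo; the sketch below is our own and is only meant to make the definition concrete.

```python
import numpy as np

def empirical_rademacher(matrices, S, reps=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity of a
    finite class of matrices w.r.t. a sample S of (i, j) index pairs."""
    rng = np.random.default_rng(seed)
    s = len(S)
    # values of each candidate matrix on the sampled entries: |class| x s
    vals = np.array([[X[i, j] for (i, j) in S] for X in matrices])
    total = 0.0
    for _ in range(reps):
        sigma = rng.choice([-1.0, 1.0], size=s)
        total += np.max(vals @ sigma) / s  # sup over the class for this sigma
    return total / reps
```

A class containing only the zero matrix has complexity exactly 0, while the two-element class {+1-matrix, −1-matrix} can always match the sign pattern's majority, giving a strictly positive estimate.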
2.1 Guarantees for Special Sampling Distributions
We begin by providing guarantees for an arbitrary, possibly unbounded, Lipschitz loss ℓ(x, y), but
only under sampling distributions which are either product distributions (i.e. p(i, j) = p_r(i)·p_c(j))
or have uniform marginals (i.e. p_r and p_c are uniform, but perhaps the rows and columns are not
independent). In Section 2.3 below, we will see why this severe restriction on p is needed.
Theorem 1. For an l-Lipschitz loss ℓ, fix any matrix Y, sample size s, and distribution
p, such that p is either a product distribution or has uniform marginals. Let X̂_S =
arg min{ L̂_S(X) : X ∈ W_r[p] }. Then, in expectation over the training sample S drawn i.i.d.
from the distribution p,

L_p(X̂_S) ≤ inf_{X∈W_r[p]} L_p(X) + O( l·√( r·n·log(n) / s ) ).   (1)
Here and elsewhere we state learning guarantees in expectation for simplicity. Since the guarantees are obtained by bounding the Rademacher complexity, one can also immediately obtain high-probability guarantees, with logarithmic dependence on the confidence parameter, via standard techniques (e.g. [14]).
Proof. We will show how to bound the expected Rademacher complexity $\mathbb{E}_S\big[\hat{R}_S(\mathcal{W}_r[p])\big]$, from which the desired result follows using standard arguments (Theorem 8 of [14]¹). Following [11] by including the weights, using the duality between spectral norm $\|\cdot\|_{sp}$ and trace-norm, we compute:
$$\mathbb{E}_S\big[\hat{R}_S(\mathcal{W}_r[p])\big] = \frac{\sqrt{r}}{s}\,\mathbb{E}_{S,\sigma}\left[\Big\|\sum_{t=1}^{s} \sigma_t \frac{e_{i_t,j_t}}{\sqrt{p^r(i_t)\,p^c(j_t)}}\Big\|_{sp}\right] = \frac{\sqrt{r}}{s}\,\mathbb{E}_{S,\sigma}\left[\Big\|\sum_{t=1}^{s} Q_t\Big\|_{sp}\right],$$
where $e_{i,j} = e_i e_j^T$ and $Q_t = \sigma_t \frac{e_{i_t,j_t}}{\sqrt{p^r(i_t)p^c(j_t)}} \in \mathbb{R}^{n \times m}$. Since the $Q_t$'s are i.i.d. zero-mean matrices, Theorem 6.1 of [15], combined with Remarks 6.4 and 6.5 there, establishes that $\mathbb{E}_{S,\sigma}\big\|\sum_{t=1}^{s} Q_t\big\|_{sp} = O\big(\bar\sigma\sqrt{\log(n)} + R\log(n)\big)$, where $R$ and $\bar\sigma$ are defined to satisfy $\|Q_t\|_{sp} \le R$ (almost surely) and $\bar\sigma^2 = \max\big\{\big\|\sum_t \mathbb{E}\big[Q_t^T Q_t\big]\big\|_{sp}, \big\|\sum_t \mathbb{E}\big[Q_t Q_t^T\big]\big\|_{sp}\big\}$. Calculating these bounds (see Supplementary Material), we get
$$R \le \sqrt{\frac{nm}{\min_{i,j}\{np^r(i) \cdot mp^c(j)\}}},$$
and
$$\bar\sigma \le \sqrt{s \cdot \max\Big\{\max_i \sum_j \frac{p(i,j)}{p^r(i)\,p^c(j)},\; \max_j \sum_i \frac{p(i,j)}{p^r(i)\,p^c(j)}\Big\}} \le \sqrt{\frac{sn}{\min_{i,j}\{np^r(i) \cdot mp^c(j)\}}}.$$
If $p$ has uniform row- and column-marginals, then for all $i, j$, $np^r(i) = mp^c(j) = 1$. This yields $\mathbb{E}_S\big[\hat{R}(\mathcal{W}_r[p])\big] \le O\big(\sqrt{rn\log(n)/s}\big)$, as desired. (Here we assume $s > n\log(n)$, since otherwise we need only establish that excess error is $O(l\sqrt{r})$, which holds trivially for any matrix in $\mathcal{W}_r[p]$.)

If $p$ does not have uniform marginals, but instead is a product distribution, then the quantity $R$ defined above is potentially unbounded, so we cannot apply the same simple argument. However, we can consider the "$p$-truncated" class of matrices
$$\mathcal{Z} = \Big\{ Z(X) = \Big[X_{ij}\,\mathbb{I}\Big\{p(i,j) \ge \frac{\log(n)}{s\sqrt{nm}}\Big\}\Big]_{ij} : X \in \mathcal{W}_r[p] \Big\}.$$
By a similar calculation of the expected spectral norms, we can now bound $\mathbb{E}_S\big[\hat{R}_S(\mathcal{Z})\big] \le O\big(\sqrt{rn\log(n)/s}\big)$. Applying Theorem 8 of [14], this bounds $L_p(Z(\hat{X}_S)) - \hat{L}_S(Z(\hat{X}_S))$ (in expectation). Since $Z(\hat{X}_S)_{ij} \ne (\hat{X}_S)_{ij}$ only on the extremely low-probability entries, we can also bound $L_p(\hat{X}_S) - L_p(Z(\hat{X}_S))$ and $\hat{L}_S(Z(\hat{X}_S)) - \hat{L}_S(\hat{X}_S)$. Combining these steps, we can bound $L_p(\hat{X}_S) - \hat{L}_S(\hat{X}_S)$. We similarly bound $\hat{L}_S(X^\star) - L_p(X^\star)$, where $X^\star = \arg\min_{X \in \mathcal{W}_r[p]} L_p(X)$. Since $\hat{L}_S(\hat{X}_S) \le \hat{L}_S(X^\star)$, this yields the desired bound on excess error. The details are given in the Supplementary Materials.

¹ Theorem 8 of [14] gives a learning guarantee holding with high probability, but their proof of this theorem (in particular, the last series of displayed equations) contains a guarantee in expectation, which we use here.
Examining the proof of Theorem 1, we see that we can generalize the result by including distributions $p$ with row- and column-marginals that are lower-bounded. More precisely, if $p$ satisfies $p^r(i) \ge \frac{1}{Cn}$ and $p^c(j) \ge \frac{1}{Cm}$ for all $i, j$, then the bound (1) holds, up to a factor of $C$. Note that this result does not require an upper bound on the row- and column-marginals, only a lower bound, i.e. it only requires that no marginals are too low. This is important to note since the examples where the unweighted trace-norm fails under a non-uniform distribution are situations where some marginals are very high (but none are too low) [12]. This suggests that the low-probability marginals could perhaps be "smoothed" to satisfy a lower bound, without removing the advantages of the weighted trace-norm. We will exploit this in Section 3 to give a guarantee that holds more generally for arbitrary $p$, when smoothing is applied.
2.2  Guarantees for bounded loss
In Theorem 1, we showed a strong bound on excess error, but only for a restricted class of distributions p. We now show that if the loss function ` is bounded, then we can give a non-trivial, but
weaker, learning guarantee that holds uniformly over all distributions p. Since we are in any case
discussing Lipschitz loss functions, requiring that the loss function be bounded essentially amounts
to requiring that the entries of the matrices involved be bounded. That is, we can view this as a
guarantee on learning matrices with bounded entries. In Section 2.3 below, we will show that this
boundedness assumption is unavoidable if we want to give a guarantee that holds for arbitrary p.
Theorem 2. For an $l$-Lipschitz loss $\ell$ bounded by $b$, fix any matrix $Y$, sample size $s$, and any distribution $p$. Let $\hat{X}_S = \arg\min\{\hat{L}_S(X) : X \in \mathcal{W}_r[p]\}$ for $r \ge 1$. Then, in expectation over the training sample $S$ drawn i.i.d. from the distribution $p$,
$$L_p(\hat{X}_S) \;\le\; \inf_{X \in \mathcal{W}_r[p]} L_p(X) + O\!\left((l + b) \cdot \sqrt[3]{\frac{rn\log(n)}{s}}\right). \qquad (2)$$
The proof is provided in the Supplementary Materials, and is again based on analyzing the expected Rademacher complexity, $\mathbb{E}_S\big[\hat{R}(\ell \circ \mathcal{W}_r[p])\big] \le O\big((l + b) \cdot \sqrt[3]{rn\log(n)/s}\big)$.
2.3  Problems with the standard weighting
In the previous Sections, we showed that for distributions p that are either product distributions or
have uniform marginals, we can prove a square-root bound on excess error, as shown in (1). For
arbitrary p, the only learning guarantee we obtain is a cube-root bound given in (2), for the special
case of bounded loss. We would like to know whether the square-root bound might hold uniformly
over all distributions p, and if not, whether the cube-root bound is the strongest result that we can
give for the bounded-loss setting, and whether any bound will hold uniformly over all p in the
unbounded-loss setting.
The examples below demonstrate that we cannot improve the results of Theorems 1 and 2 (up to log factors), by constructing degenerate examples using non-product distributions $p$ with non-uniform marginals. Specifically, in Example 1, we show that in the special case of bounded loss, the cube-root bound in (2) is the best possible bound (up to the log factor) that will hold for all $p$, by giving a construction for arbitrary $n = m$ and arbitrary $s \le nm$, such that with 1-bounded loss, excess error is $\Omega\big(\sqrt[3]{n/s}\big)$. In Example 2, we show that with unbounded (Lipschitz) loss, we cannot bound excess error better than a constant bound, by giving a construction for arbitrary $n = m$ and arbitrary $s \le nm$ in the unbounded-loss regime, where excess error is $\Omega(1)$. For both examples we fix $r = 1$. We note that both examples can be modified to fit the transductive setting, demonstrating that smoothing is necessary in the transductive setting as well.
Example 1. Let $\ell(x, y) = \min\{1, |x - y|\} \le 1$, let $a = (2s/n)^{2/3} < n$, and let matrix $Y$ and block-wise constant distribution $p$ be given by
$$Y = \begin{pmatrix} A & 0_{a \times \frac{n}{2}} \\ 0_{(n-a) \times \frac{n}{2}} & 0_{(n-a) \times \frac{n}{2}} \end{pmatrix}, \qquad (p(i,j)) = \begin{pmatrix} \frac{1}{2s} \cdot 1_{a \times \frac{n}{2}} & 0_{a \times \frac{n}{2}} \\ 0_{(n-a) \times \frac{n}{2}} & \frac{1 - \frac{an}{4s}}{(n-a)\frac{n}{2}} \cdot 1_{(n-a) \times \frac{n}{2}} \end{pmatrix},$$
where $A \in \{\pm 1\}^{a \times \frac{n}{2}}$ is any sign matrix. Clearly, $\|Y\|_{tr(p^r,p^c)} \le 1$, and so $\inf_{X \in \mathcal{W}_r[p]} L_p(X) = 0$. Now suppose we draw a sample $S$ of size $s$ from the matrix $Y$, according to the distribution $p$. We will show an ERM $\hat{Y}$ such that in expectation over $S$, $L_p(\hat{Y}) \ge \frac{1}{8}\sqrt[3]{\frac{n}{s}}$.

Consider $Y^S$ where $Y^S_{ij} = Y_{ij}\,\mathbb{I}\{ij \in S\}$, and note that $\|Y^S\|_{tr(p^r,p^c)} \le 1$. Since $\hat{L}_S(Y^S) = 0$, it is clearly an ERM. We also have $L_p(Y^S) = \frac{N}{2s}$, where $N$ is the number of $\pm 1$'s in $Y$ which are not observed in the sample. Since $\mathbb{E}[N] \ge \frac{an}{4}$, we see that $\mathbb{E}\big[L_p(Y^S)\big] \ge \frac{an}{8s} \ge \frac{1}{8}\sqrt[3]{\frac{n}{s}}$.
Example 2. Let $\ell(x, y) = |x - y|$. Let $Y = 0_{n \times n}$; trivially, $Y \in \mathcal{W}_r[p]$. Let $p(1,1) = \frac{1}{s}$, and $p(i,1) = p(1,j) = 0$ for all $i, j > 1$, yielding $p^r(1) = p^c(1) = \frac{1}{s}$. (The other entries of $p$ may be defined arbitrarily.) We will show an ERM $\hat{Y}$ such that, in expectation over $S$, $L_p(\hat{Y}) \ge 0.25$. Let $A$ be the matrix with $A_{11} = s$ and zeros elsewhere, and note that $\|A\|_{tr(p^r,p^c)} = 1$. With probability $\ge 0.25$, entry $(1,1)$ will not appear in $S$, in which case $\hat{Y} = A$ is an ERM, with $L_p(\hat{Y}) = 1$.
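A quick numeric check (ours, not from the paper) of the failure probability used in Example 2: with $s$ i.i.d. draws, entry $(1,1)$, which has probability $1/s$ per draw, is never observed with probability $(1 - 1/s)^s$, which is at least $0.25$ for every $s \ge 2$ and increases toward $1/e \approx 0.368$:

```python
def miss_probability(s):
    """Probability that an entry with per-draw probability 1/s never
    appears among s i.i.d. draws."""
    return (1 - 1 / s) ** s

# (1 - 1/s)^s is increasing in s: 0.25 at s = 2, approaching 1/e ~ 0.3679.
```

This is the only probabilistic fact the example needs; the rest of the construction is deterministic.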
The following table summarizes the learning guarantees that can be established for the (standard) weighted trace-norm. As we saw, these guarantees are tight up to log-factors.

                       | 1-Lipschitz, 1-bounded loss | 1-Lipschitz, unbounded loss
  p = product          |    sqrt(rn log(n)/s)        |    sqrt(rn log(n)/s)
  p^r, p^c = uniform   |    sqrt(rn log(n)/s)        |    sqrt(rn log(n)/s)
  p arbitrary          |    cbrt(rn log(n)/s)        |    1

3  Smoothing the weighted trace norm
Considering Theorem 1 and the degenerate examples in Section 2.3, it seems that in order to be able to generalize for non-product distributions, we need to enforce some sort of uniformity on the weights. The Rademacher complexity computations in the proof of Theorem 1 show that the problem lies not with large entries in the vectors $p^r$ and $p^c$ (i.e. if $p^r$ and/or $p^c$ are "spiky"), but with the small entries in these vectors. This suggests the possibility of "smoothing" any overly low row- or column-marginals, in order to improve learning guarantees.

In Section 3.1, we present such a smoothing, and provide guarantees for learning with a smoothed weighted trace-norm. The result suggests that there is no strong negative consequence to smoothing, but there might be a large advantage, if confronted with situations as in Examples 1 and 2. In Section 3.2 we check the smoothing correction to the weighted trace-norm on real data, and observe that indeed it can also be beneficial in practice.
3.1  Learning guarantee for arbitrary distributions
Fix a distribution $p$ and a constant $\alpha \in (0, 1)$, and let $\tilde{p}$ denote the smoothed marginals:
$$\tilde{p}^r(i) = \alpha \cdot p^r(i) + (1 - \alpha) \cdot \tfrac{1}{n}, \qquad \tilde{p}^c(j) = \alpha \cdot p^c(j) + (1 - \alpha) \cdot \tfrac{1}{m}. \qquad (3)$$
In the theoretical results below, we use $\alpha = \frac{1}{2}$, but up to a constant factor, the same results hold for any fixed choice of $\alpha \in (0, 1)$.
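As a small sketch (ours, not the paper's code), the smoothing (3) is a one-line convex combination with the uniform distribution; its point is the guaranteed lower bound $\tilde{p}^r(i) \ge (1-\alpha)/n$ (and $\tilde{p}^c(j) \ge (1-\alpha)/m$), which rules out the vanishing marginals behind Examples 1 and 2:

```python
import numpy as np

def smooth_marginals(p_row, p_col, alpha=0.5):
    """Smoothed marginals as in Eq. (3): a convex combination of the given
    marginals with the uniform distribution. Each smoothed row marginal is
    at least (1 - alpha)/n, and each column marginal at least (1 - alpha)/m."""
    n, m = len(p_row), len(p_col)
    pr = alpha * np.asarray(p_row) + (1 - alpha) / n
    pc = alpha * np.asarray(p_col) + (1 - alpha) / m
    return pr, pc
```

The smoothed vectors remain valid marginal distributions (they still sum to one), so no renormalization step is needed.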
Theorem 3. For an $l$-Lipschitz loss $\ell$, fix any matrix $Y$, sample size $s$, and any distribution $p$. Let $\hat{X}_S = \arg\min\{\hat{L}_S(X) : X \in \mathcal{W}_r[\tilde{p}]\}$. Then, in expectation over the training sample $S$ drawn i.i.d. from the distribution $p$,
$$L_p(\hat{X}_S) \;\le\; \inf_{X \in \mathcal{W}_r[\tilde{p}]} L_p(X) + O\!\left(l \cdot \sqrt{\frac{rn\log(n)}{s}}\right). \qquad (4)$$
Proof. We bound $\mathbb{E}_{S \sim p}\big[\hat{R}_S(\mathcal{W}_r[\tilde{p}])\big] \le O\big(\sqrt{rn\log(n)/s}\big)$, and then apply Theorem 8 of [14]. The proof of this Rademacher bound is essentially identical to the proof in Theorem 1, with the modified definition of $Q_t = \sigma_t \frac{e_{i_t,j_t}}{\sqrt{\tilde{p}^r(i_t)\tilde{p}^c(j_t)}}$. Then $\|Q_t\|_{sp} \le \max_{ij} \frac{1}{\sqrt{\tilde{p}^r(i)\tilde{p}^c(j)}} \le 2\sqrt{nm} = R$, and
$$\Big\|\mathbb{E}\Big[\sum_{t=1}^{s} Q_t Q_t^T\Big]\Big\|_{sp} = s \cdot \max_i \sum_j \frac{p(i,j)}{\tilde{p}^r(i)\tilde{p}^c(j)} \le s \cdot \max_i \sum_j \frac{p(i,j)}{\frac{1}{2}p^r(i) \cdot \frac{1}{2m}} \le 4sm.$$
Similarly, $\big\|\mathbb{E}\big[\sum_{t=1}^{s} Q_t^T Q_t\big]\big\|_{sp} \le 4sn$. Setting $\bar\sigma = \sqrt{4sn}$ and applying [15], we obtain the result.
Moving from Theorem 1 to Theorem 3, we are competing with a different class of matrices: $\inf_{X \in \mathcal{W}_r[p]} L_p(X)$ is replaced by $\inf_{X \in \mathcal{W}_r[\tilde{p}]} L_p(X)$. In most applications we can think of, this change is not significant. For example, we consider the low-rank matrix reconstruction problem, where the trace-norm bound is used as a surrogate for rank. In order for the (squared) weighted trace-norm to be a lower bound on the rank, we would need to assume $\big\|\mathrm{diag}(p^r)^{1/2}\, X\, \mathrm{diag}(p^c)^{1/2}\big\|_F^2 \le 1$ [11]. If we also assume that $\|(X^\star)_{(i)}\|^2 \le m$ and $\|(X^\star)^{(j)}\|^2 \le n$ for all rows $i$ and columns $j$ (i.e. the row and column magnitudes are not "spiky"), then $X^\star \in \mathcal{W}_r[\tilde{p}]$. Note that this condition is much weaker than placing a spikiness condition on $X^\star$ itself, e.g. requiring $|X^\star|_\infty \le 1$.
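For concreteness, the weighted trace-norm appearing throughout can be evaluated directly from its definition in [11, 12] as the trace-norm (sum of singular values) of the marginal-rescaled matrix. A short NumPy sketch (ours), assuming that definition:

```python
import numpy as np

def weighted_trace_norm(X, p_row, p_col):
    """||X||_tr(p^r, p^c) = || diag(p_row)^(1/2) X diag(p_col)^(1/2) ||_tr,
    i.e. the sum of singular values of the rescaled matrix."""
    Xw = np.sqrt(p_row)[:, None] * X * np.sqrt(p_col)[None, :]
    return np.linalg.norm(Xw, ord='nuc')  # nuclear (trace) norm
```

As a sanity check, the matrix $A$ from Example 2 (a single entry equal to $s$, with $p^r(1) = p^c(1) = 1/s$) has weighted trace-norm exactly 1, which is what makes it a member of $\mathcal{W}_1[p]$ there.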
3.2  Results on Netflix and MovieLens Datasets
We evaluated different models on two publicly-available collaborative filtering datasets: Netflix [16]
and MovieLens [17]. The Netflix dataset consists of 100,480,507 ratings from 480,189 users on
17,770 movies. Netflix also provides a qualification set containing 1,408,395 ratings, but due to the
sampling scheme, ratings from users with few ratings are overrepresented relative to the training
set. To avoid dealing with different training and test distributions, we also created our own validation and test sets, each containing 100,000 ratings set aside from the training set. The MovieLens
dataset contains 10,000,054 ratings from 71,567 users on 10,681 movies. We again set aside test
and validation sets of 100,000 ratings. Ratings were normalized to be zero-mean.
When dealing with large datasets the most practical way to fit trace-norm regularized models is via stochastic gradient descent [18, 3, 12]. For computational reasons, however, we consider rank-truncated trace-norm minimization, by optimizing within the restricted class $\{X : X \in \mathcal{W}_r[p],\, \mathrm{rank}(X) \le k\}$ for $k = 30$ and $k = 100$, and for various values of the smoothing parameter $\alpha$ (as in (3)). For each value of $\alpha$ and $k$, the regularization parameter was chosen by cross-validation.
The following table shows root mean squared error (RMSE) for the experiments. For both $k = 30$ and $k = 100$ the weighted trace-norm with smoothing ($\alpha = 0.9$) significantly outperforms the weighted trace-norm without smoothing ($\alpha = 1$), even on the differently-sampled Netflix qualification set. The proposed weighted trace-norm with smoothing outperforms max-norm regularization [19], and performs comparably to "geometric" smoothing [12]. On the Netflix qualification set, using $k = 30$, max-norm regularization and geometric smoothing achieve RMSE 0.9138 [19] and 0.9091 [12], compared to 0.9096 achieved by the weighted trace-norm with smoothing. We note that geometric smoothing was proposed by [12] as a heuristic without any theoretical or conceptual justification.
        |      Netflix, k=30      |      Netflix, k=100     | MovieLens, k=30 | MovieLens, k=100
  alpha |    Test    |    Qual    |    Test    |    Qual    |      Test       |      Test
   1    |   0.7604   |   0.9107   |   0.7404   |   0.9078   |     0.7852      |     0.7821
   0.9  |   0.7589   |   0.9096   |   0.7391   |   0.9068   |     0.7831      |     0.7798
   0.5  |   0.7601   |   0.9173   |   0.7419   |   0.9161   |     0.7836      |     0.7815
   0.3  |   0.7712   |   0.9198   |   0.7528   |   0.9207   |     0.7864      |     0.7871
   0    |   0.7887   |   0.9249   |   0.7659   |   0.9236   |     0.7997      |     0.7987

4  The empirically-weighted trace norm
In practice, the sampling distribution $p$ is not known exactly; it can only be estimated via the locations of the entries which are observed in the sample. Defining the empirical marginals
$$\hat{p}^r(i) = \frac{\#\{t : i_t = i\}}{s}, \qquad \hat{p}^c(j) = \frac{\#\{t : j_t = j\}}{s},$$
we would like to give a learning guarantee when $\hat{X}_S$ is estimated via regularization on the $\hat{p}$-weighted trace-norm, rather than the $p$-weighted trace-norm.
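In code, the empirical marginals are simple normalized counts over the observed index sample (an illustrative sketch of ours):

```python
import numpy as np

def empirical_marginals(sample, n, m):
    """Empirical row/column marginals of an observed index sample
    S = [(i_1, j_1), ..., (i_s, j_s)] for an n x m matrix."""
    s = len(sample)
    pr = np.zeros(n)
    pc = np.zeros(m)
    for i, j in sample:
        pr[i] += 1.0 / s
        pc[j] += 1.0 / s
    return pr, pc
```

These counts can then be smoothed exactly as in (3) before being used to define the weighted trace-norm ball.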
In Section 4.1, we give bounds on excess error when learning with smoothed empirical marginals,
which show that there is no theoretical disadvantage as compared to learning with the smoothed true
marginals. In fact, we provide evidence that suggests there might even be an advantage to using
the empirical marginals. To this end, in Section 4.2, we introduce the transductive learning setting,
and give a result based on the empirical marginals which implies a sample complexity bound that
1
is better by a factor of log /2 (n). In Section 4.3, we show that in low-rank matrix reconstruction
simulations, using empirical marginals indeed yields better reconstructions.
4.1  Guarantee for the standard (inductive) setting

We first show that when learning with the smoothed empirical marginals, defined as
$$\breve{p}^r(i) = \tfrac{1}{2}\hat{p}^r(i) + \tfrac{1}{2n}, \qquad \breve{p}^c(j) = \tfrac{1}{2}\hat{p}^c(j) + \tfrac{1}{2m},$$
we can obtain the same guarantee as for learning with the smoothed (true) marginals, given by $\tilde{p}$.
we can obtain the same guarantee as for learning with the smoothed (true) marginals, given by p?.
Theorem 4. For an $l$-Lipschitz loss $\ell$, fix any matrix $Y$, sample size $s$, and any distribution $p$. Let $\hat{X}_S = \arg\min\{\hat{L}_S(X) : X \in \mathcal{W}_r[\breve{p}]\}$. Then, in expectation over the training sample $S$ drawn i.i.d. from the distribution $p$,
$$L_p(\hat{X}_S) \;\le\; \inf_{X \in \mathcal{W}_r[\tilde{p}]} L_p(X) + O\!\left(l \cdot \sqrt{\frac{r\max\{n, m\}\log(n + m)}{s}}\right). \qquad (5)$$
Note that although we regularize using the (smoothed) empirically-weighted trace-norm, we still compare ourselves to the best possible matrix in the class defined by the (smoothed) true marginals. The proof of this Theorem (in the Supplementary Material) uses Theorem 3 and involves showing that when $s = \Omega(n\log(n))$, which is required for all Theorems so far to be meaningful, the true and empirical marginals are the same up to a constant factor. For this to be the case, such a sample size is even necessary. In fact, the $\log(n)$ factor in our analysis (e.g. in the proof of Theorem 1) arises from the bound on the expected spectral norm of a matrix, which, for a diagonal matrix, is just a bound on the deviation of empirical frequencies. Might it be possible, then, to avoid this logarithmic factor by using the empirical marginals? Although we could not establish such a result in the inductive setting, we now turn to the transductive setting, where we could indeed obtain a better guarantee.
4.2  Guarantee for the transductive setting
In the transductive model, we fix a set $\bar{S} \subseteq [n] \times [m]$ of size $2s$, and then randomly split $\bar{S}$ into a training set $S$ and a test set $T$ of equal size $s$. The goal is to obtain a good estimator for the entries in $T$ based on the values of the entries in $S$, as well as the locations (indexes) of all elements of $\bar{S}$. We will use the smoothed empirical marginals of $\bar{S}$ for the weighted trace-norm.

We now show that, for bounded loss, there may be a benefit to weighting with the smoothed empirical marginals: the sample size requirement can be lowered to $s = O(rn\log^{1/2}(n))$.

Theorem 5. For an $l$-Lipschitz loss $\ell$ bounded by $b$, fix any matrix $Y$ and sample size $s$. Let $\bar{S} \subseteq [n] \times [m]$ be a fixed subset of size $2s$, split uniformly at random into training and test sets $S$ and $T$, each of size $s$. Let $\breve{p}$ denote the smoothed empirical marginals of $\bar{S}$. Let $\hat{X}_S = \arg\min\{\hat{L}_S(X) : X \in \mathcal{W}_r[\breve{p}]\}$. Then in expectation over the splitting of $\bar{S}$ into $S$ and $T$,
$$\hat{L}_T(\hat{X}_S) \;\le\; \inf_{X \in \mathcal{W}_r[\breve{p}]} \hat{L}_T(X) + O\!\left(l \cdot \sqrt{\frac{rn\log^{1/2}(n)}{s}} + \frac{b}{\sqrt{s}}\right). \qquad (6)$$
This result (proved in the Supplementary Materials) is stated in the transductive setting, with a somewhat different sampling procedure and evaluation criteria, but we believe the main difference is in the use of the empirical weights. Although it is usually straightforward to convert a transductive guarantee to an inductive one, the situation here is more complicated, since the hypothesis class depends on the weighting, and hence on the sample $S$. Nevertheless, we believe such a conversion might be possible, establishing a similar guarantee for learning with the (smoothed) empirically weighted trace-norm also in the inductive setting. Furthermore, since the empirical marginals are close to the true marginals when $s = \Omega(n\log(n))$, it might be possible to obtain a learning guarantee for the true (non-empirical) weighting with a sample of size $s = O\big(n(r\log^{1/2}(n) + \log(n))\big)$.
Theorem 5 can be viewed as a transductive analog to Theorem 3 (with weights based on the combined sample $\bar{S}$). In the Supplementary Materials we give transductive analogs to Theorems 1 and 2. As mentioned in Section 2.3, our lower bound examples can also be stated in the transductive setting, and thus all our guarantees and lower bounds can also be obtained in this setting.
4.3  Simulations with empirical weights
In order to numerically investigate the possible advantage of empirical weighting, we performed simulations on low-rank matrix reconstruction under uniform sampling with the unweighted, and the smoothed empirically weighted, trace-norms. We choose to work with uniform sampling in order to emphasize the benefit of empirical weights, even in situations where one might not consider using any weights at all. In all the experiments, we attempt to reconstruct a possibly noisy, random rank-2 "signal" matrix $M$ with singular values $\frac{1}{\sqrt{2}}(n, n, 0, \ldots, 0)$, ensuring $\|M\|_F = n$. We measure error using the squared loss². Simulations were performed using MATLAB, with code adapted from the SoftImpute code developed by [20]. We performed two types of simulations:

Sample complexity comparison in the noiseless setting: We define $Y = M$, and compute $\hat{X}_S = \arg\min\{\|X\| : \hat{L}_S(X) = 0\}$, where $\|X\| = \|X\|_{tr}$ or $= \|X\|_{tr(\breve{p}^r, \breve{p}^c)}$, as appropriate. In Figure 1(a), we plot the average number of samples per row needed to get average squared error (over 100 repetitions) of at most 0.1, with both uniform weighting and empirical weighting.

Excess error comparison in the noiseless and noisy settings: We define $Y = M + \sigma N$, where the noise $N$ has i.i.d. standard normal entries. We compute $\hat{X}_S = \arg\min\{\|X\| : \hat{L}_S(X) \le \sigma^2\}$. In Figure 1(b), we plot the resulting average squared error (over 100 repetitions) over a range of sample sizes $s$ and noise levels $\sigma$, with both uniform weighting and empirical weighting. A larger plot including standard error bars is shown in the Supplementary Materials.
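As a sketch of the signal construction (ours; the paper's actual experiments used MATLAB code adapted from SoftImpute [20]), one can generate a random rank-2 matrix with the stated singular values and verify that $\|M\|_F = n$:

```python
import numpy as np

def rank2_signal(n, seed=0):
    """Random rank-2 matrix with singular values (n/sqrt(2), n/sqrt(2)),
    so that ||M||_F = sqrt(2 * (n/sqrt(2))**2) = n."""
    rng = np.random.default_rng(seed)
    # Random orthonormal left/right singular vector pairs via QR factorization.
    U, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    V, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    return (n / np.sqrt(2)) * (U @ V.T)
```

Since $U$ and $V$ have orthonormal columns, $UV^T$ already has both nonzero singular values equal to 1, so scaling by $n/\sqrt{2}$ gives exactly the singular values stated above.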
The results from both experiments show a significant benefit to using the empirical marginals.
[Figure 1: two plots on logarithmic axes comparing true-p and empirical-p weighting. Left: s/n = avg. # samples per row vs. matrix dimension n. Right: avg. squared error vs. sample size s, for noise levels sigma in {0, 0.2, 0.4}.]
Figure 1: (a) Left: Sample size needed to obtain avg. error 0.1, with respect to n. (b) Right: Excess error level over a range of sample sizes, for fixed n = 200. (Axes are on a logarithmic scale.)
5  Discussion
In this paper, we prove learning guarantees for the weighted trace-norm by analyzing expected
Rademacher complexities. We show that weighting with smoothed marginals eliminates degenerate
scenarios that can arise in the case of a non-product sampling distribution, and demonstrate in experiments on the Netflix and MovieLens datasets that this correction can be useful in applied settings.
We also give results for empirically-weighted trace-norm regularization, and see indications that
using the empirical distribution may be better than using the true distribution, even if it is available.
² Although Lipschitz in a bounded domain, it is probably possible to improve all our results (removing the square root) for the special case of the squared loss, possibly with an i.i.d. noise assumption, as in [10].
References
[1] M. Fazel. Matrix rank minimization with applications. PhD Thesis, Stanford University, 2002.
[2] N. Srebro, J. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. Advances in Neural Information Processing Systems, 17, 2004.
[3] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. Advances in Neural Information Processing Systems, 20, 2007.
[4] F. Bach. Consistency of trace-norm minimization. Journal of Machine Learning Research, 9:1019-1048, 2008.
[5] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053-2080, 2009.
[6] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. 18th Annual Conference on Learning Theory (COLT), pages 545-560, 2005.
[7] B. Recht. A simpler approach to matrix completion. arXiv:0910.0651, 2009.
[8] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057-2078, 2010.
[9] V. Koltchinskii, A. Tsybakov, and K. Lounici. Nuclear norm penalization and optimal rates for noisy low rank matrix completion. arXiv:1011.6256, 2010.
[10] S. Negahban and M. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. arXiv:1009.2118, 2010.
[11] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction. 24th Annual Conference on Learning Theory (COLT), 2011.
[12] R. Salakhutdinov and N. Srebro. Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm. Advances in Neural Information Processing Systems, 23, 2010.
[13] O. Shamir and S. Shalev-Shwartz. Collaborative filtering with the trace norm: Learning, bounding, and transducing. 24th Annual Conference on Learning Theory (COLT), 2011.
[14] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[15] J.A. Tropp. User-friendly tail bounds for sums of random matrices. arXiv:1004.4389, 2010.
[16] J. Bennett and S. Lanning. The Netflix Prize. In Proceedings of KDD Cup and Workshop, volume 2007, page 35. Citeseer, 2007.
[17] MovieLens Dataset. Available at http://www.grouplens.org/node/73. 2006.
[18] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. ACM Int. Conference on Knowledge Discovery and Data Mining (KDD'08), pages 426-434, 2008.
[19] J. Lee, B. Recht, R. Salakhutdinov, N. Srebro, and J. Tropp. Practical Large-Scale Optimization for Max-Norm Regularization. Advances in Neural Information Processing Systems, 23, 2010.
[20] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11:2287-2322, 2010.
3,649 | 4,304 | Optimistic Optimization of a Deterministic Function
without the Knowledge of its Smoothness
Rémi Munos
SequeL project, INRIA Lille ? Nord Europe, France
[email protected]
Abstract
We consider a global optimization problem of a deterministic function f in a semimetric space, given a finite budget of n evaluations. The function f is assumed to
be locally smooth (around one of its global maxima) with respect to a semi-metric
?. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm,
DOO, that requires the knowledge of ?. We report a finite-sample performance
bound in terms of a measure of the quantity of near-optimal states. We then define
a second algorithm, SOO, which does not require the knowledge of the semimetric ? under which f is smooth, and whose performance is almost as good as
DOO optimally-fitted.
1 Introduction
We consider the problem of finding a good approximation of the maximum of a function $f : \mathcal{X} \to \mathbb{R}$
using a finite budget of evaluations of the function. More precisely, we want to design a sequential
exploration strategy of the search space X , i.e. a sequence x1 , x2 , . . . , xn of states of X , where each
$x_t$ may depend on previously observed values $f(x_1), \ldots, f(x_{t-1})$, such that at round n (computational budget), the algorithm A returns a state x(n) with highest possible value. The performance
of the algorithm is evaluated by the loss
rn = sup f (x) ? f (x(n)).
(1)
x?X
Here the performance criterion is the accuracy of the recommendation made after n evaluations to
the function (which may be thought of as calls to a black-box model). This
P criterion is different from
usual bandit settings where the cumulative regret (n supx?X f (x) ? nt=1 f (x(t))) measures how
well the algorithm succeeds in selecting states with good values while exploring the search space.
The loss criterion (1) is closer to the simple regret defined in the bandit setting [BMS09, ABM10].
Since the literature on global optimization is huge, we only mention the works that are closely
related to our contribution. The approach followed here can be seen as an optimistic sampling
strategy where, at each round, we explore the space where the function could be the largest, given
the knowledge of previous evaluations. A large body of algorithmic work has been developed using
branch-and-bound techniques [Neu90, Han92, Kea96, HT96, Pin96, Flo99, SS00], such as Lipschitz
optimization where the function is assumed to be globally Lipschitz. Our first contribution with
respect to (w.r.t.) this literature is to considerably weaken the Lipschitz assumption usually made
and consider only a locally one-sided Lipschitz assumption around the maximum of f . In addition,
we do not require the space to be a metric space but only to be equipped with a semi-metric.
The optimistic strategy has been recently intensively studied in the bandit literature, such as in the
UCB algorithm [ACBF02] and the many extensions to tree search [KS06, CM07] (with application
to computer-go [GWMT06]), planning [HM08, BM10, BMSB11], and Gaussian process optimization [SKKS10]. The case of a Lipschitz (or relaxed) assumption in metric spaces is considered in
[Kle04, AOS07] and more recently in [KSU08, BMSS08, BMSS11], and in the case of unknown
Lipschitz constant, see [BSY11, Sli11] (where they assume a bound on the Hessian or another related parameter).
Compared to this literature, our contribution is the design and analysis of two algorithms: (1) A first algorithm, Deterministic Optimistic Optimization (DOO), that requires the knowledge of the semi-metric ℓ for which f is locally smooth around its maximum. A loss bound is provided (in terms of the near-optimality dimension of f under ℓ) in a more general setting than previously considered. (2) A second algorithm, Simultaneous Optimistic Optimization (SOO), that does not require the knowledge of ℓ. We show that SOO performs almost as well as DOO optimally-fitted.
2 Assumptions about the hierarchical partition and the function

Our optimization algorithms will be implemented by resorting to a hierarchical partitioning of the space X, which is given to the algorithms. More precisely, we consider a set of partitions of X at all scales h ≥ 0: for any integer h, X is partitioned into a set of K^h sets X_{h,i} (called cells), where 0 ≤ i ≤ K^h − 1. This partitioning may be represented by a K-ary tree structure where each cell X_{h,i} corresponds to a node (h, i) of the tree (indexed by its depth h and index i), and such that each node (h, i) possesses K children nodes {(h+1, i_k)}_{1≤k≤K}. In addition, the cells of the children {X_{h+1,i_k}, 1 ≤ k ≤ K} form a partition of the parent's cell X_{h,i}. The root of the tree corresponds to the whole domain X (cell X_{0,0}). To each cell X_{h,i} is assigned a specific state x_{h,i} ∈ X_{h,i} where f may be evaluated.
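As an illustration, such a partition can be sketched for X = [0, 1]^D with axis-aligned cells and K = 2^D. The representation below (a cell as a tuple of per-coordinate intervals, the cell center as the representative state x_{h,i}) is our own illustrative choice, not something prescribed by the paper:

```python
# Hypothetical representation of the hierarchical partition: a cell is a
# tuple of (lo, hi) intervals, one per coordinate; expanding a cell splits
# every coordinate in half, producing its K = 2^D children cells.

def center(cell):
    """Representative state x_{h,i}: the center of the cell."""
    return tuple((lo + hi) / 2.0 for lo, hi in cell)

def children(cell):
    """The K = 2^D children cells obtained by halving each coordinate."""
    parts = [[]]
    for lo, hi in cell:
        mid = (lo + hi) / 2.0
        parts = [p + [iv] for p in parts for iv in ((lo, mid), (mid, hi))]
    return [tuple(p) for p in parts]

root = ((0.0, 1.0), (0.0, 1.0))   # the cell X_{0,0} for D = 2, so K = 4
kids = children(root)             # the four cells of depth h = 1
```

Note that the children cells form an exact partition of the parent, as required above.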
We now state four assumptions: Assumption 1 is about the semi-metric ℓ, Assumption 2 is about the smoothness of the function w.r.t. ℓ, and Assumptions 3 and 4 are about the shape of the hierarchical partition w.r.t. ℓ.

Assumption 1 (Semi-metric). We assume that ℓ : X × X → ℝ⁺ is such that for all x, y ∈ X, we have ℓ(x, y) = ℓ(y, x) and ℓ(x, y) = 0 if and only if x = y.

Note that we do not require that ℓ satisfies the triangle inequality (in which case ℓ would be a metric). An example of a metric space is the Euclidean space ℝ^d with the metric ℓ(x, y) = ‖x − y‖ (Euclidean norm). Now consider ℝ^d with ℓ(x, y) = ‖x − y‖^α, for some α > 0. When α ≤ 1, then ℓ is also a metric, but whenever α > 1 then ℓ does not satisfy the triangle inequality anymore, and is thus a semi-metric only.

Assumption 2 (Local smoothness of f). There exists at least one global optimizer x* ∈ X of f (i.e., f(x*) = sup_{x∈X} f(x)) and for all x ∈ X,

f(x*) − f(x) ≤ ℓ(x, x*).   (2)

This condition guarantees that f does not decrease too fast around (at least) one global optimum x* (this is a sort of locally one-sided Lipschitz assumption).
Now we state the assumptions about the hierarchical partitions.

Assumption 3 (Bounded diameters). There exists a decreasing sequence δ(h) > 0, such that for any depth h ≥ 0, for any cell X_{h,i} of depth h, we have sup_{x∈X_{h,i}} ℓ(x_{h,i}, x) ≤ δ(h).

Assumption 4 (Well-shaped cells). There exists ν > 0 such that for any depth h ≥ 0, any cell X_{h,i} contains an ℓ-ball of radius νδ(h) centered in x_{h,i}.
3 When the semi-metric ℓ is known

In this section, we consider the setting where Assumptions 1-4 hold for a specific semi-metric ℓ, and the semi-metric ℓ is known to the algorithm.
3.1 The DOO Algorithm

The Deterministic Optimistic Optimization (DOO) algorithm described in Figure 1 uses explicitly the knowledge of ℓ (through the use of δ(h)). DOO builds incrementally a tree T_t for t = 1 ... n, by

Initialization: T_1 = {(0, 0)} (root node)
for t = 1 to n do
Select the leaf (h, j) ∈ L_t with maximum b-value b_{h,j} := f(x_{h,j}) + δ(h).
Expand this node: add to T_t the K children of (h, j)
end for
Return x(n) = arg max_{(h,i)∈T_n} f(x_{h,i})

Figure 1: Deterministic optimistic optimization (DOO) algorithm.
selecting at each round t a leaf of the current tree T_t to expand. Expanding a leaf means adding its K children to the current tree (this corresponds to splitting the cell X_{h,j} into K sub-cells). We start with the root node T_1 = {(0, 0)}. We write L_t for the leaves of T_t (the set of nodes whose children are not in T_t), which are the set of nodes that can be expanded at round t.

This algorithm is called optimistic because it expands at each round a cell that may contain the optimum of f, based on the information about (i) the previously observed evaluations of f, and (ii) the knowledge of the local smoothness property (2) of f (since ℓ is known). The algorithm computes the b-values b_{h,j} := f(x_{h,j}) + δ(h) of all nodes (h, j) of the current tree T_t and selects the leaf with highest b-value to expand next. It returns the state x(n) with highest evaluation.
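As a concrete sketch, here is a minimal one-dimensional DOO loop following Figure 1. The interval partition with K = 2, the cache, the test function, and the choice δ(h) = 2^{-h} are illustrative assumptions of ours, not part of the algorithm's specification:

```python
def doo(f, delta, n, K=2, lo=0.0, hi=1.0):
    """Sketch of DOO on X = [lo, hi] with a K-ary interval partition.

    delta(h) must upper-bound the l-diameter of depth-h cells
    (Assumption 3).  A leaf is (h, a, b): depth and interval bounds;
    its representative state x_{h,i} is the interval midpoint, and its
    b-value is f(midpoint) + delta(h).  Runs n node expansions and
    returns the evaluated point with the highest f value.
    """
    leaves = [(0, lo, hi)]
    evals = {}                          # f cached at cell midpoints
    def val(a, b):
        m = (a + b) / 2.0
        if m not in evals:
            evals[m] = f(m)
        return evals[m]
    for _ in range(n):
        # select the leaf with the largest b-value b_{h,j} = f(x_{h,j}) + delta(h)
        h, a, b = max(leaves, key=lambda leaf: val(leaf[1], leaf[2]) + delta(leaf[0]))
        leaves.remove((h, a, b))
        w = (b - a) / K                 # expand: split into K sub-cells
        leaves += [(h + 1, a + k * w, a + (k + 1) * w) for k in range(K)]
    return max(evals, key=evals.get)

# f(x) = 1 - |x - 0.3| has its maximum at x* = 0.3 and is 1-Lipschitz,
# so delta(h) = 2^{-h} is a valid diameter bound under l(x, y) = |x - y|
x_best = doo(lambda x: 1.0 - abs(x - 0.3), lambda h: 2.0 ** (-h), n=60)
```

Since the b-value of any leaf containing x* always upper-bounds f*, the search quickly concentrates around 0.3.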
3.2 Analysis of DOO

Note that Assumption 2 implies that the b-value of any cell containing x* upper bounds f*, i.e., for any cell X_{h,i} such that x* ∈ X_{h,i},

b_{h,i} = f(x_{h,i}) + δ(h) ≥ f(x_{h,i}) + ℓ(x_{h,i}, x*) ≥ f*.

As a consequence, a node (h, i) such that f(x_{h,i}) + δ(h) < f* will never be expanded (since at any time t, the b-value of such a node will be dominated by the b-value of the leaf containing x*). We deduce that DOO only expands nodes of the set I := ∪_{h≥0} I_h, where

I_h := {nodes (h, i) such that f(x_{h,i}) + δ(h) ≥ f*}.
In order to derive a loss bound we now define a measure of the quantity of near-optimal states, called the near-optimality dimension. This measure is closely related to similar measures introduced in [KSU08, BMSS08]. For any ε > 0, let us write X_ε := {x ∈ X, f(x) ≥ f* − ε} for the set of ε-optimal states.

Definition 1 (Near-optimality dimension). The near-optimality dimension is the smallest d ≥ 0 such that there exists C > 0 such that for any ε > 0, the maximal number of disjoint ℓ-balls of radius νε and center in X_ε is less than Cε^{−d}.

Note that d is not an intrinsic property of f: it characterizes both f and ℓ (since we use ℓ-balls in the packing of near-optimal states), and also depends on ν. We now bound the number of nodes in I_h.

Lemma 1. We have |I_h| ≤ Cδ(h)^{−d}.

Proof. From Assumption 4, each cell (h, i) contains a ball of radius νδ(h) centered in x_{h,i}, thus if |I_h| = |{x_{h,i} ∈ X_{δ(h)}}| exceeded Cδ(h)^{−d}, this would mean that there exist more than Cδ(h)^{−d} disjoint ℓ-balls of radius νδ(h) with center in X_{δ(h)}, which contradicts the definition of d.
We now provide our loss bound for DOO.

Theorem 1. Let us write h(n) for the smallest integer h such that C Σ_{l=0}^{h} δ(l)^{−d} ≥ n. Then the loss of DOO is bounded as r_n ≤ δ(h(n)).

Proof. Let (h_max, j) be the deepest node that has been expanded by the algorithm up to round n. We know that DOO only expands nodes in the set I. Now, among all node expansion strategies of the set of expandable nodes I, the uniform strategy is the one which minimizes the depth of the resulting tree. From the definition of h(n) and from Lemma 1, we have

Σ_{l=0}^{h(n)−1} |I_l| ≤ C Σ_{l=0}^{h(n)−1} δ(l)^{−d} < n,

thus the maximum depth of the uniform strategy is at least h(n), and we deduce that h_max ≥ h(n). Now since node (h_max, j) has been expanded, we have (h_max, j) ∈ I, thus

f(x(n)) ≥ f(x_{h_max,j}) ≥ f* − δ(h_max) ≥ f* − δ(h(n)).
Remark 1. This bound is in terms of the number of expanded nodes n. The actual number of function evaluations is Kn (since each expansion generates K children that need to be evaluated).

Now, let us make the bound more explicit when the diameter δ(h) of the cells decreases exponentially fast with their depth (this case is rather general, as illustrated in the examples described next, as well as in the discussion in [BMSS11]).

Corollary 1. Assume that δ(h) = cγ^h for some constants c > 0 and γ < 1. If the near-optimality dimension of f is d > 0, then the loss decreases polynomially fast: r_n ≤ c^{(d+1)/d} (1 − γ^d)^{−1/d} C^{1/d} n^{−1/d}. Now, if d = 0, then the loss decreases exponentially fast: r_n ≤ cγ^{(n/C)−1}.

Proof. From Theorem 1, whenever d > 0 we have n ≤ C Σ_{l=0}^{h(n)} δ(l)^{−d} = cC (γ^{−d(h(n)+1)} − 1)/(γ^{−d} − 1), thus γ^{−dh(n)} ≥ (n/(cC))(1 − γ^d), from which we deduce that r_n ≤ δ(h(n)) ≤ cγ^{h(n)} ≤ c^{(d+1)/d} (1 − γ^d)^{−1/d} C^{1/d} n^{−1/d}. Now, if d = 0 then n ≤ C Σ_{l=0}^{h(n)} δ(l)^{−d} = C(h(n) + 1), and we deduce that the loss is bounded as r_n ≤ δ(h(n)) = cγ^{(n/C)−1}.
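The closed-form bound of Corollary 1 is easy to evaluate numerically. The helper below simply restates it (constant conventions as in the corollary; this is the bound, not the algorithm):

```python
def doo_loss_bound(n, c, gamma, d, C):
    """Corollary 1's bound on the loss r_n of DOO when delta(h) = c*gamma^h.

    d > 0 gives the polynomial rate O(n^{-1/d}); d = 0 gives the
    exponential rate c * gamma^{n/C - 1}.
    """
    if d == 0:
        return c * gamma ** (n / C - 1.0)
    return (c ** ((d + 1.0) / d)
            * (1.0 - gamma ** d) ** (-1.0 / d)
            * C ** (1.0 / d)
            * n ** (-1.0 / d))
```

For example, with c = C = 1 and γ = 1/2, the d = 1 case gives the rate 2/n, while the d = 0 case decreases geometrically in the budget.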
3.3 Examples

Example 1: Let X = [−1, 1]^D and f be the function f(x) = 1 − ‖x‖_∞^α, for some α ≥ 1. Consider a K = 2^D-ary tree of partitions with (hyper)-squares. Expanding a node means splitting the corresponding square into 2^D squares of half length. Let x_{h,i} be the center of X_{h,i}.

Consider the following choice of the semi-metric: ℓ(x, y) = ‖x − y‖_∞^β, with β ≤ α. We have δ(h) = 2^{−hβ} (recall that δ(h) is defined in terms of ℓ), and ν = 1. The optimum of f is x* = 0 and f satisfies the local smoothness property (2). Now let us compute its near-optimality dimension. For any ε > 0, X_ε is the L_∞-ball of radius ε^{1/α} centered in 0, which can be packed by (ε^{1/α}/ε^{1/β})^D L_∞-balls of diameter ε^{1/β} (since an ℓ-ball of diameter ε is an L_∞-ball of diameter ε^{1/β}). Thus the near-optimality dimension is d = D(1/β − 1/α) (and the constant C = 1). From Corollary 1 we deduce that (i) when α > β, then d > 0 and in this case, r_n = O(n^{−(1/D)·αβ/(α−β)}). And (ii) when α = β, then d = 0 and the loss decreases exponentially fast: r_n ≤ 2^{1−n}.

It is interesting to compare this result to a uniform sampling strategy (i.e., the function is evaluated at the set of points on a uniform grid), which would provide a loss of order n^{−α/D}. We observe that DOO is better than uniform whenever α < 2β and worse when α > 2β.
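This crossover between DOO and uniform sampling can be checked numerically from the two loss exponents; the helpers below are a small sketch based on d = D(1/β − 1/α), with function and argument names of our own choosing:

```python
def doo_exponent(alpha, beta, D):
    """Polynomial loss exponent e in r_n = O(n^{-e}) for DOO run with
    l(x, y) = ||x - y||_inf^beta on f(x) = 1 - ||x||_inf^alpha, beta <= alpha."""
    if beta == alpha:
        return float("inf")             # d = 0: exponential decrease
    d = D * (1.0 / beta - 1.0 / alpha)  # near-optimality dimension
    return 1.0 / d                      # equals alpha*beta / (D*(alpha - beta))

def uniform_exponent(alpha, D):
    """A uniform grid achieves a loss of order n^{-alpha/D}."""
    return alpha / D
```

Comparing the two exponents for a few values of α with β = 1 recovers the crossover at α = 2β stated above.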
This result provides some indication on how to choose the semi-metric ℓ (thus β), which is a key ingredient of the DOO algorithm (since δ(h) = 2^{−hβ} appears in the b-values): β should be as close as possible to the true (but unknown) α (which can be seen as a local smoothness order of f around its maximum), but never larger than α (otherwise f does not satisfy the local smoothness property (2)).

Example 2: The previous analysis generalizes to any function which is locally equivalent to ‖x − x*‖^α, for some α > 0 (where ‖·‖ is any norm, e.g., Euclidean, L_∞, or L_1), around a global maximum x* (among a set of global optima assumed to be finite). That is, we assume that there exist constants c_1 > 0, c_2 > 0, η > 0, such that

f(x*) − f(x) ≤ c_1 ‖x − x*‖^α,   for all x ∈ X,
f(x*) − f(x) ≥ c_2 ‖x − x*‖^α,   for all ‖x − x*‖ ≤ η.
Let X = [0, 1]^D. Again, consider a K = 2^D-ary tree of partitions with (hyper)-squares. Let ℓ(x, y) = c‖x − y‖^β with c_1 ≤ c and β ≤ α (so that f satisfies (2)). For simplicity we do not make explicit all the constants, using the O notation for convenience (the actual constants depend on the choice of the norm ‖·‖). We have δ(h) = O(2^{−hβ}). Now, let us compute the near-optimality dimension. For any ε > 0, X_ε is included in a ball of radius (ε/c_2)^{1/α} centered in x*, which can be packed by O((ε^{1/α}/ε^{1/β})^D) ℓ-balls of diameter ε. Thus the near-optimality dimension is d = D(1/β − 1/α), and the results of the previous example apply (up to constants), i.e. for α > β, then d > 0 and r_n = O(n^{−(1/D)·αβ/(α−β)}). And when α = β, then d = 0 and one obtains the exponential rate r_n = O(2^{−β(n/C−1)}).

We deduce that the behavior of the algorithm depends on our knowledge of the local smoothness (i.e. α and c_1) of the function around its maximum. Indeed, if this smoothness information is available, then one should define the semi-metric ℓ (which impacts the algorithm through the definition of δ(h)) to match this smoothness (i.e. set β = α) and derive an exponential loss rate. Now if this information is unknown, then one should underestimate the true smoothness (i.e. choose β ≤ α) and suffer a loss r_n = O(n^{−(1/D)·αβ/(α−β)}), rather than overestimating it (β > α), since in this case (2) may not hold anymore and there is a risk that the algorithm converges to a local optimum (thus suffering a constant loss).
3.4 Comparison with previous works

Optimistic planning: The deterministic planning problem described in [HM08] considers an optimistic approach for selecting the first action of a sequence x that maximizes the sum of discounted rewards. We can easily cast their problem in our setting by considering the space X of the set of infinite sequences of actions. The metric ℓ(x, y) is γ^{h(x,y)}/(1 − γ), where h(x, y) is the length of the common initial actions between the sequences x and y, and γ is the discount factor. It is easy to show that the function f(x), defined as the discounted sum of rewards along the sequence x of actions, is Lipschitz w.r.t. ℓ and thus satisfies (2). Their algorithm is very close to DOO: it expands a node of the tree (finite sequence of actions) with highest upper-bound on the possible value. Their regret analysis makes use of a quantity of near-optimal sequences, from which they define κ ∈ [1, K] that can be seen as the branching factor of the set of nodes I that can be expanded. This measure is related to our near-optimality dimension by κ = γ^{−d}. Corollary 1 implies directly that the loss bound is r_n = O(n^{−(log 1/γ)/(log κ)}), which is the result reported in [HM08].
HOO and Zooming algorithms: The DOO algorithm can be seen as a deterministic version of the HOO algorithm of [BMSS11] and is also closely related to the Zooming algorithm of [KSU08]. Those works consider the case of noisy evaluations of the function (X-armed bandit setting), which is assumed to be weakly Lipschitz (slightly stronger than our Assumption 2). The bounds reported in those works are (for the case of exponentially decreasing diameters considered in their work and in our Corollary 1) on the cumulative regret, R_n = O(n^{(d+1)/(d+2)}), which translates into the loss considered here as r_n = O(n^{−1/(d+2)}), where d is the near-optimality dimension (or the closely defined zooming dimension). We conclude that a deterministic evaluation of the function enables us to obtain a much better polynomial rate O(n^{−1/d}) when d > 0, and even an exponential rate when d = 0 (Corollary 1).
In the next section, we address the problem of an unknown semi-metric ℓ, which is the main contribution of the paper.

4 When the semi-metric ℓ is unknown

We now consider the setting where Assumptions 1-4 hold for some semi-metric ℓ, but the semi-metric ℓ is unknown. The hierarchical partitioning of the space is still given, but since ℓ is unknown, one cannot use the diameters δ(h) of the cells to design upper bounds, as in DOO.

The question we wish to address is: if ℓ is unknown, is it possible to implement an optimistic algorithm with performance guarantees? We provide a positive answer to this question and in addition we show that we can be almost as good as an algorithm that would know ℓ, for the best possible ℓ satisfying Assumptions 1-4.
The maximum depth function t ↦ h_max(t) is a parameter of the algorithm.
Initialization: T_1 = {(0, 0)} (root node). Set t = 1.
while True do
Set v_max = −∞.
for h = 0 to min(depth(T_t), h_max(t)) do
Among all leaves (h, j) ∈ L_t of depth h, select (h, i) ∈ arg max_{(h,j)∈L_t} f(x_{h,j})
if f(x_{h,i}) ≥ v_max then
Expand this node: add to T_t the K children (h + 1, i_k)_{1≤k≤K}
Set v_max = f(x_{h,i}), Set t = t + 1
if t = n then Return x(n) = arg max_{(h,i)∈T_n} f(x_{h,i})
end if
end for
end while.

Figure 2: Simultaneous Optimistic Optimization (SOO) algorithm.
4.1 The SOO algorithm

The idea is to expand at each round simultaneously all the leaves (h, j) for which there exists a semi-metric ℓ such that the corresponding upper-bound f(x_{h,j}) + sup_{x∈X_{h,j}} ℓ(x_{h,j}, x) would be the highest. This is implemented by expanding at each round at most one leaf per depth, and a leaf is expanded only if it has the largest value among all leaves of the same or lower depths. The Simultaneous Optimistic Optimization (SOO) algorithm is described in Figure 2.

The SOO algorithm takes as parameter a function t ↦ h_max(t) which restricts the tree to a maximal depth of h_max(t) after t node expansions. Again, L_t refers to the set of leaves of T_t.
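A minimal one-dimensional sketch of Figure 2 follows. The ternary interval partition (K = 3, so the parent's midpoint stays a child center), the cache, the test function, and the depth schedule are illustrative choices of ours, not part of the paper:

```python
import math

def soo(f, n, hmax, K=3, lo=0.0, hi=1.0):
    """Sketch of SOO on X = [lo, hi] with a K-ary interval partition.

    No semi-metric is used: each sweep visits depths h = 0..min(depth, hmax(t))
    and expands the best leaf of depth h only if its value is at least vmax,
    the best value expanded so far in the sweep.  Stops after n expansions.
    """
    leaves = {0: [(lo, hi)]}           # leaves of the tree, grouped by depth
    evals = {}                         # f cached at cell midpoints
    def val(cell):
        m = (cell[0] + cell[1]) / 2.0
        if m not in evals:
            evals[m] = f(m)
        return evals[m]
    t = 1
    while t < n:
        vmax = -math.inf
        deepest = max(h for h, cells in leaves.items() if cells)
        for h in range(min(deepest, hmax(t)) + 1):
            if not leaves.get(h):
                continue
            best = max(leaves[h], key=val)
            if val(best) >= vmax:      # expand this leaf
                vmax = val(best)
                leaves[h].remove(best)
                a, b = best
                w = (b - a) / K
                leaves.setdefault(h + 1, []).extend(
                    (a + k * w, a + (k + 1) * w) for k in range(K))
                t += 1
                if t >= n:
                    break
    return max(evals, key=evals.get)

# maximum at x* = 0.5; the local smoothness |x - 0.5|^{1/2} is NOT given to SOO
x_best = soo(lambda x: 1.0 - math.sqrt(abs(x - 0.5)), n=100,
             hmax=lambda t: int(math.sqrt(t)))
```

Even though the square-root smoothness is never supplied, the sweep over all depths locates the maximum, as the analysis below makes precise.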
4.2 Analysis of SOO

All previously relevant quantities, such as the diameters δ(h), the sets I_h, and the near-optimality dimension d, depend on the unknown semi-metric ℓ (which is such that Assumptions 1-4 are satisfied). At time t, let us write h*_t for the depth of the deepest expanded node in the branch containing x* (an optimal branch). Let (h*_t + 1, i*) be an optimal node of depth h*_t + 1 (i.e., such that x* ∈ X_{h*_t+1,i*}). Since this node has not been expanded yet, any node (h*_t + 1, i) of depth h*_t + 1 that is later expanded, before (h*_t + 1, i*) is expanded, is δ(h*_t + 1)-optimal. Indeed, f(x_{h*_t+1,i}) ≥ f(x_{h*_t+1,i*}) ≥ f* − δ(h*_t + 1). We deduce that once an optimal node of depth h is expanded, it takes at most |I_{h+1}| node expansions at depth h + 1 before the optimal node of depth h + 1 is expanded. From that simple observation, we deduce the following lemma.

Lemma 2. For any depth 0 ≤ h ≤ h_max(t), whenever t ≥ (|I_0| + |I_1| + ··· + |I_h|) h_max(t), we have h*_t ≥ h.

Proof. We prove it by induction. For h = 0, we have h*_t ≥ 0 trivially. Assume that the proposition is true for all 0 ≤ h ≤ h_0 with h_0 < h_max(t). Let us prove that it is also true for h_0 + 1. Let t ≥ (|I_0| + |I_1| + ··· + |I_{h_0+1}|) h_max(t). Since t ≥ (|I_0| + |I_1| + ··· + |I_{h_0}|) h_max(t), we know that h*_t ≥ h_0. So, either h*_t ≥ h_0 + 1, in which case the proof is finished, or h*_t = h_0. In this latter case, consider the nodes of depth h_0 + 1 that are expanded. We have seen that as long as the optimal node of depth h_0 + 1 is not expanded, any node of depth h_0 + 1 that is expanded must be δ(h_0 + 1)-optimal, i.e., belongs to I_{h_0+1}. Since there are |I_{h_0+1}| of them, after |I_{h_0+1}| h_max(t) node expansions, the optimal one must be expanded, thus h*_t ≥ h_0 + 1.
Theorem 2. Let us write h(n) for the smallest integer h such that

C h_max(n) Σ_{l=0}^{h} δ(l)^{−d} ≥ n.   (3)

Then the loss is bounded as

r_n ≤ δ(min(h(n), h_max(n) + 1)).   (4)

Proof. From Lemma 1 and the definition of h(n) we have

Σ_{l=0}^{h(n)−1} |I_l| ≤ C h_max(n) Σ_{l=0}^{h(n)−1} δ(l)^{−d} < n,

thus from Lemma 2, when h(n) − 1 ≤ h_max(n) we have h*_n ≥ h(n) − 1. Now in the case h(n) − 1 > h_max(n), since the SOO algorithm does not expand nodes beyond depth h_max(n), we have h*_n = h_max(n). Thus in all cases, h*_n ≥ min(h(n) − 1, h_max(n)).

Let (h, j) be the deepest node in T_n that has been expanded by the algorithm up to round n. Thus h ≥ h*_n. Now, from the definition of the algorithm, we only expand a node when its value is larger than the value of all the leaves of equal or lower depths. Thus, since the node (h, j) has been expanded, its value is at least as high as that of the optimal node (h*_n + 1, i*) of depth h*_n + 1 (which has not been expanded, by definition of h*_n). Thus

f(x(n)) ≥ f(x_{h,j}) ≥ f(x_{h*_n+1,i*}) ≥ f* − δ(h*_n + 1) ≥ f* − δ(min(h(n), h_max(n) + 1)).
Remark 2. This result appears very surprising: although the semi-metric ℓ is not known, the performance is almost as good as for DOO (see Theorem 1), which uses the knowledge of ℓ. The main difference is that the maximal depth h_max(n) appears both as a multiplicative factor in the definition of h(n) in (3) and as a threshold in the loss bound (4). Those two appearances of h_max(n) define a tradeoff between deep (large h_max) versus broad (small h_max) types of exploration. We now illustrate the case of exponentially decreasing diameters.

Corollary 2. Assume that δ(h) = cγ^h for some c > 0 and γ < 1. Consider the two cases:

• The near-optimality dimension is d > 0. Let the depth function be h_max(t) = t^ε, for some ε > 0 arbitrarily small. Then, for n large enough (as a function of ε), the loss of SOO is bounded as:

r_n ≤ c^{(d+1)/d} (C/(1 − γ^d))^{1/d} n^{−(1−ε)/d}.

• The near-optimality dimension is d = 0. Let the depth function be h_max(t) = √t. Then the loss of SOO is bounded as:

r_n ≤ cγ^{√n · min(1/C, 1) − 1}.
Proof. From Theorem 1, when d > 0 we have

n ≤ C h_max(n) Σ_{l=0}^{h(n)} δ(l)^{−d} = cC h_max(n) (γ^{−d(h(n)+1)} − 1)/(γ^{−d} − 1),

thus for the choice h_max(n) = n^ε, we deduce γ^{−dh(n)} ≥ (n^{1−ε}/(cC))(1 − γ^d). Thus h(n) is logarithmic in n and for n large enough (as a function of ε), h(n) ≤ h_max(n) + 1, thus

r_n ≤ δ(min(h(n), h_max(n) + 1)) = δ(h(n)) ≤ cγ^{h(n)} ≤ c^{(d+1)/d} (C/(1 − γ^d))^{1/d} n^{−(1−ε)/d}.

Now, if d = 0 then n ≤ C h_max(n) Σ_{l=0}^{h(n)} δ(l)^{−d} = C h_max(n)(h(n) + 1), thus for the choice h_max(n) = √n we deduce that the loss decreases as:

r_n ≤ δ(min(h(n), h_max(n) + 1)) ≤ cγ^{√n · min(1/C, 1) − 1}.
Remark 3. The maximal depth function h_max(t) is still a parameter of the algorithm, which somehow influences its behavior (deep versus broad exploration of the tree). However, for a large class of problems (e.g. when d > 0), the choice of the order ε does not impact the asymptotic performance of the algorithm.

Remark 4. Since our algorithm does not depend on ℓ, our analysis is actually true for any semi-metric ℓ that satisfies Assumptions 1-4, thus Theorem 2 and Corollary 2 hold for the best possible choice of such an ℓ. In particular, we can think of problems for which there exists a semi-metric ℓ such that the corresponding near-optimality dimension d is 0. Instead of describing a general class of problems satisfying this property, we illustrate in the next subsection non-trivial optimization problems in X = ℝ^D where there exists ℓ such that d = 0.
4.3 Examples

Example 1: Consider the previous Example 1, where X = [−1, 1]^D and f is the function f(x) = 1 − ‖x‖_∞^α, where α ≥ 1 is unknown. We have seen that DOO with the metric ℓ(x, y) = ‖x − y‖_∞^β provides a polynomial loss r_n = O(n^{−(1/D)·αβ/(α−β)}) whenever β < α, and an exponential loss r_n ≤ 2^{1−n} when β = α. However, here α is unknown.

Now consider the SOO algorithm with the maximum depth function h_max(t) = √t. As mentioned before, SOO does not require ℓ, thus we can apply the analysis for any ℓ that satisfies Assumptions 1-4. So let us consider ℓ(x, y) = ‖x − y‖_∞^α. Then δ(h) = 2^{−hα}, ν = 1, and the near-optimality dimension of f under ℓ is d = 0 (and C = 1). We deduce that the loss of SOO is r_n ≤ 2^{(1−√n)α}. Thus SOO provides a stretched-exponential loss without requiring the knowledge of α.

Note that a uniform grid provides the loss n^{−α/D}, which is only polynomially decreasing (and subject to the curse of dimensionality). Thus, in this example SOO is always better than both Uniform and DOO, except if one knows α perfectly and uses DOO with β = α (in which case we obtain an exponential loss). The fact that SOO is not as good as DOO optimally fitted comes from the truncation of SOO at the maximal depth h_max(n) = √n (whereas DOO optimally fitted would explore the tree up to a depth linear in n).
Example 2: The same conclusion holds for Example 2, where we consider a function f defined on [0, 1]^D that is locally equivalent to ‖x − x*‖^α, for some unknown α > 0 (see the precise assumptions in Section 3.3). We have seen that DOO using ℓ(x, y) = c‖x − y‖^β with β < α has a loss r_n = O(n^{−(1/D)·αβ/(α−β)}), and when β = α, then d = 0 and the loss is r_n = O(2^{−α(n/C−1)}).

Now by using SOO (which does not require the knowledge of α) with h_max(t) = √t, we deduce the stretched-exponential loss r_n = O(2^{−√n·α/C}) (by using ℓ(x, y) = ‖x − y‖^α in the analysis, which gives δ(h) = 2^{−hα} and d = 0).
4.4 Comparison with the DIRECT algorithm

The DIRECT (DIviding RECTangles) algorithm [JPS93, FK04, Gab01] is a Lipschitz optimization algorithm for the case where the Lipschitz constant L of f is unknown. It uses an optimistic splitting technique similar to ours, where at each round it expands the set of nodes that have the highest upper bound (as defined in DOO) for at least some value of L. To the best of our knowledge, there is no finite-time analysis of this algorithm (only the consistency property lim_{n→∞} r_n = 0 is proven in [FK04]). Our approach generalizes DIRECT and we are able to derive finite-time loss bounds in a much broader setting where the function is only locally smooth and the space is semi-metric.

We are not aware of other finite-time analyses of global optimization algorithms that do not require the knowledge of the smoothness of the function.
5 Conclusions

We presented two algorithms: DOO requires the knowledge of the semi-metric ℓ under which the function f is locally smooth (according to Assumption 2). SOO does not require this knowledge and performs almost as well as DOO optimally-fitted (i.e. for the best choice of ℓ satisfying Assumptions 1-4). We reported finite-time loss bounds using the near-optimality dimension d, which relates the local smoothness of f around its maximum and the quantity of near-optimal states, measured by the semi-metric ℓ. We provided illustrative examples of the performance of SOO in Euclidean spaces where the local smoothness of f is unknown.

Possible future research directions include (i) deriving problem-dependent lower bounds, (ii) characterizing classes of functions f such that there exists a semi-metric ℓ for which f is locally smooth w.r.t. ℓ and whose corresponding near-optimality dimension is d = 0 (in order to have a stretched-exponentially decreasing loss), and (iii) extending the SOO algorithm to stochastic X-armed bandits (optimization of a noisy function) when the smoothness of f is unknown.

Acknowledgements: French ANR EXPLO-RA (ANR-08-COSI-004) and the European project COMPLACS (FP7, grant agreement no 231495).
References

[ABM10] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Conference on Learning Theory, 2010.
[ACBF02] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning Journal, 47(2-3):235-256, 2002.
[AOS07] P. Auer, R. Ortner, and Cs. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. In 20th Conference on Learning Theory, pages 454-468, 2007.
[BM10] S. Bubeck and R. Munos. Open loop optimistic planning. In Conference on Learning Theory, 2010.
[BMS09] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proc. of the 20th International Conference on Algorithmic Learning Theory, pages 23-37, 2009.
[BMSB11] L. Busoniu, R. Munos, B. De Schutter, and R. Babuska. Optimistic planning for sparsely stochastic systems. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2011.
[BMSS08] S. Bubeck, R. Munos, G. Stoltz, and Cs. Szepesvári. Online optimization of X-armed bandits. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 22, pages 201-208. MIT Press, 2008.
[BMSS11] S. Bubeck, R. Munos, G. Stoltz, and Cs. Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655-1695, 2011.
[BSY11] S. Bubeck, G. Stoltz, and J. Y. Yu. Lipschitz bandits without the Lipschitz constant. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory, 2011.
[CM07] P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In Uncertainty in Artificial Intelligence, 2007.
[FK04] D. E. Finkel and C. T. Kelley. Convergence analysis of the DIRECT algorithm. Technical report, North Carolina State University, 2004.
[Flo99] C. A. Floudas. Deterministic Global Optimization: Theory, Algorithms and Applications. Kluwer Academic Publishers, Dordrecht / Boston / London, 1999.
[Gab01] J. M. X. Gablonsky. Modifications of the DIRECT algorithm. PhD thesis, 2001.
[GWMT06] S. Gelly, Y. Wang, R. Munos, and O. Teytaud. Modification of UCT with patterns in Monte-Carlo go. Technical report, INRIA RR-6062, 2006.
[Han92] E. R. Hansen. Global Optimization Using Interval Analysis. Marcel Dekker, New York, 1992.
[HM08] J.-F. Hren and R. Munos. Optimistic planning of deterministic systems. In Recent Advances in Reinforcement Learning, European Workshop on Reinforcement Learning, Springer LNAI 5323, pages 151-164, 2008.
[HT96] R. Horst and H. Tuy. Global Optimization: Deterministic Approaches. Springer, Berlin / Heidelberg / New York, 3rd edition, 1996.
[JPS93] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1):157-181, 1993.
[Kea96] R. B. Kearfott. Rigorous Global Search: Continuous Problems. Kluwer Academic Publishers, Dordrecht / Boston / London, 1996.
[Kle04] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In 18th Advances in Neural Information Processing Systems, 2004.
[KS06] L. Kocsis and Cs. Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the 15th European Conference on Machine Learning, pages 282-293, 2006.
[KSU08] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[Neu90] A. Neumaier. Interval Methods for Systems of Equations. Cambridge University Press, 1990.
[Pin96] J. D. Pintér. Global Optimization in Action (Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications). Kluwer Academic Publishers, 1996.
[SKKS10] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning, pages 1015-1022, 2010.
[Sli11] A. Slivkins. Multi-armed bandits on implicit metric spaces. In Advances in Neural Information Processing Systems, 2011.
[SS00] R. G. Strongin and Ya. D. Sergeyev. Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms. Kluwer Academic Publishers, Dordrecht / Boston / London, 2000.
equation:1 previously:4 describing:1 know:3 fp7:1 end:4 generalizes:2 available:1 apply:2 observe:1 hierarchical:6 anymore:2 include:1 lipschitzian:1 gelly:1 build:1 question:2 quantity:5 strategy:7 usual:1 zooming:3 berlin:1 considers:1 trivial:1 induction:1 length:2 index:1 nord:1 design:4 implementation:1 packed:2 unknown:15 bianchi:1 upper:5 observation:1 finite:10 nearoptimality:2 stuckman:1 precise:1 rn:26 introduced:1 cast:1 slivkins:2 address:2 beyond:1 able:1 usually:1 pattern:1 max:3 soo:22 force:1 arm:1 finished:1 literature:4 deepest:3 acknowledgement:1 asymptotic:1 loss:33 interesting:1 proven:1 versus:2 ingredient:1 upfal:1 editor:2 truncation:1 characterizing:1 munos:11 dimension:16 xn:1 depth:33 cumulative:2 computes:1 horst:1 made:2 adaptive:1 vmax:3 reinforcement:3 polynomially:2 schutter:1 obtains:1 global:14 assumed:4 conclude:1 search:5 continuous:2 expanding:3 schuurmans:1 heidelberg:1 expansion:5 bottou:1 european:3 domain:1 main:2 whole:1 edition:1 child:7 suffering:1 x1:2 body:1 sub:1 explicit:2 xh:39 exponential:7 wish:1 hmax:38 theorem:6 specific:2 xt:2 er:1 exists:10 intrinsic:1 ih:8 workshop:1 sequential:2 adding:1 phd:1 budget:3 kx:11 boston:3 lt:5 remi:1 logarithmic:1 explore:2 appearance:1 bubeck:6 kxk:2 recommendation:1 springer:2 corresponds:3 satisfies:6 dh:2 acm:1 lipschitz:15 included:1 infinite:1 except:1 lemma:6 called:3 experimental:1 ya:1 succeeds:1 ucb:1 explo:1 busoniu:1 select:3 pint:1 coquelin:1 sergeyev:1 latter:1 srinivas:1 |
Spike and Slab Variational Inference for Multi-Task
and Multiple Kernel Learning
Michalis K. Titsias
University of Manchester
[email protected]
Miguel Lázaro-Gredilla
Univ. de Cantabria & Univ. Carlos III de Madrid
[email protected]
Abstract
We introduce a variational Bayesian inference algorithm which can be widely
applied to sparse linear models. The algorithm is based on the spike and slab prior
which, from a Bayesian perspective, is the golden standard for sparse inference.
We apply the method to a general multi-task and multiple kernel learning model
in which a common set of Gaussian process functions is linearly combined with
task-specific sparse weights, thus inducing relation between tasks. This model
unifies several sparse linear models, such as generalized linear models, sparse
factor analysis and matrix factorization with missing values, so that the variational
algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing
applications and collaborative filtering.
1 Introduction
Sparse inference has found numerous applications in statistics and machine learning [1, 2, 3]. It is a
generic idea that can be combined with popular models, such as linear regression, factor analysis and
more recently multi-task and multiple kernel learning models. In the regularization theory literature
sparse inference is tackled via $\ell_1$ regularization [2], which requires expensive cross-validation for
model selection. From a Bayesian perspective, the spike and slab prior [1, 4, 5], also called two-groups prior [6], is the golden standard for sparse linear models. However, the discrete nature of
the prior makes Bayesian inference a very challenging problem. Specifically, for M linear weights,
inference under a spike and slab prior distribution on those weights requires a combinatorial search
over $2^M$ possible models. The problems found when working with the spike and slab prior led
several researchers to consider soft-sparse or shrinkage priors such as the Laplace and other related
scale mixtures of normals [3, 7, 8, 9, 10]. However, such priors are not ideal since they assign zero
probability mass to events associated with weights having zero value.
In this paper, we introduce a simple and efficient variational inference algorithm based on the spike
and slab prior which can be widely applied to sparse linear models. The novel characteristic of this
algorithm is that the variational distribution over sparse weights has a factorial nature, i.e., it can be
written as a mixture of $2^M$ components where $M$ is the number of weights. Unlike the standard
mean field approximation which uses a unimodal variational distribution, our variational algorithm
can more precisely match the combinational nature of the posterior distribution over the weights.
We will show that the proposed variational approach is more accurate and robust to unfavorable
initializations than the standard mean field variational approximation.
We apply the variational method to a general multi-task and multiple kernel learning model that
expresses the correlation between tasks by letting them share a common set of Gaussian process
latent functions. Each task is modeled by linearly combining these latent functions with taskspecific weights which are given a spike and slab prior distribution. This model is a spike and
slab Bayesian reformulation of previous Gaussian process-based single-task multiple kernel learning
methods [11, 12, 13] and multi-task Gaussian processes (GPs) [14, 15, 16, 17]. Further, this model
unifies several sparse linear models, such as generalized linear models, factor analysis, probabilistic
PCA and matrix factorization with missing values. In the experiments, we apply the variational inference algorithms to all the above models and present results in multi-output regression, multi-class
classification, image denoising, image inpainting and collaborative filtering.
2 Spike and slab multi-task and multiple kernel learning
Section 2.1 discusses the spike and slab multi-task and multiple kernel learning (MTMKL) model
that linearly combines Gaussian process latent functions. Spike and slab factor analysis and probabilistic PCA is discussed in Section 2.2, while missing values are dealt with in Section 2.3.
2.1 The model
Let $D = \{X, Y\}$, with $X \in \mathbb{R}^{N \times D}$ and $Y \in \mathbb{R}^{N \times Q}$, be a dataset such that the $n$-th row of $X$ is an input vector $x_n$ and the $n$-th row of $Y$ is the set of $Q$ corresponding tasks or outputs. We use $y_q$ to refer to the $q$-th column of $Y$ and $y_{nq}$ to the $(n, q)$ entry. Outputs $Y$ are then assumed to be generated according to the following hierarchical Bayesian model:

$$y_{nq} \sim \mathcal{N}(y_{nq} \mid f_q(x_n), \sigma_q^2), \quad \forall n,q \quad (1a)$$
$$f_q(x) = \sum_{m=1}^{M} w_{qm} \phi_m(x) = w_q^{\top} \phi(x), \quad \forall q \quad (1b)$$
$$w_{qm} \sim \pi\, \mathcal{N}(w_{qm} \mid 0, \sigma_w^2) + (1 - \pi)\, \delta_0(w_{qm}), \quad \forall q,m \quad (1c)$$
$$\phi_m(x) \sim \mathcal{GP}(\mu_m(x), k_m(x_i, x_j)), \quad \forall m \quad (1d)$$

Here, each $\mu_m(x)$ is a mean function, $k_m(x_i, x_j)$ a covariance function, $w_q = [w_{q1}, \ldots, w_{qM}]^{\top}$, $\phi(x) = [\phi_1(x), \ldots, \phi_M(x)]^{\top}$ and $\delta_0(w_{qm})$ denotes the Dirac delta function centered at zero. Since each of the $Q$ tasks is a linear combination of the same set of latent functions $\{\phi_m(x)\}_{m=1}^{M}$ (where typically $M < Q$), correlation is induced in the outputs. Sharing a common set of features means that "knowledge transfer" between tasks can occur and latent functions are inferred more accurately, since data belonging to all tasks are used.
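To make the generative process concrete, the following is a small numerical sketch of sampling from (1a)-(1d). The squared-exponential kernels, the input grid, and all hyperparameter values are assumptions of ours for illustration, not choices from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 50, 6, 4                       # grid size, tasks, latent functions (assumed)
pi, sigma_w, sigma_noise = 0.3, 1.0, 0.1

x = np.linspace(-3.0, 3.0, N)

def se_kernel(x, lengthscale):
    # Squared-exponential covariance for one latent GP (1d), zero mean.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

# Draw M latent functions phi_m(x) on the grid, one lengthscale each.
Phi = np.column_stack([
    rng.multivariate_normal(np.zeros(N), se_kernel(x, l) + 1e-8 * np.eye(N))
    for l in (0.5, 1.0, 1.5, 2.0)
])                                       # N x M

# Spike and slab weights (1c): zero w.p. 1 - pi, Gaussian w.p. pi.
S = rng.random((Q, M)) < pi
W = S * rng.normal(0.0, sigma_w, (Q, M))

F = Phi @ W.T                            # task functions f_q(x) (1b)
Y = F + rng.normal(0.0, sigma_noise, (N, Q))  # noisy outputs (1a)
```

Tasks sharing nonzero weights on the same latent function end up correlated, which is exactly the "knowledge transfer" mechanism described above.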
Several linear models can be expressed as special cases of the above. For instance, a generalized
linear model is obtained when the GPs are Dirac delta measures (with zero covariance functions)
that deterministically assign each $\phi_m(x)$ to its mean function $\mu_m(x)$. However, the model in (1) has
a number of additional features not present in standard linear models. Firstly, the basis functions are
no longer deterministic, but they are instead drawn from different GPs, so an extra layer of flexibility
is added to the model. Thus, a posterior distribution over the basis functions of the generalized linear
model can be inferred from data. Secondly, a truly sparse prior, the spike and slab prior (1c), is
placed over the weights of the model. Specifically, with probability $1 - \pi$, each $w_{qm}$ is zero, and with probability $\pi$, it is drawn from a Gaussian. This contrasts with previous approaches [3, 7, 8, 9, 13] in which soft-sparse priors that assign zero probability mass to the weights being exactly zero were used. Hyperparameters $\pi$ and $\sigma_w^2$ are learnable in order to determine the amount of sparsity and the discrepancy of nonzero weights, respectively. Thirdly, the number of basis functions $M$ can be inferred from data, since the sparse prior on the weights allows basis functions to be "switched off" as necessary by setting the corresponding weights to zero.
Further, the model in (1) can be considered as a spike and slab Bayesian reformulation of previous multi-task [14, 15] and multiple kernel learning [11, 12] methods that learn the weights using maximum likelihood. By assuming the weights $w_q$ are given, each output function $y_q(x)$ is a GP with covariance function

$$\mathrm{Cov}[y_q(x_i), y_q(x_j)] = \sum_{m=1}^{M} w_{qm}^2\, k_m(x_i, x_j),$$
which clearly consists of a conic combination of kernel functions. Therefore, the proposed model
can be reinterpreted as multiple kernel learning in which the weights of each kernel are assigned
spike and slab priors in a full Bayesian formulation.
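The conic combination above is easy to verify numerically. The sketch below uses assumed RBF kernels and arbitrary weights; note that a zero weight simply removes its kernel from the sum, and the result is still a valid (positive semi-definite) covariance:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale):
    # k(x_i, x_j) = exp(-|x_i - x_j|^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def output_covariance(X, weights, lengthscales):
    # Cov[y_q(x_i), y_q(x_j)] = sum_m w_qm^2 k_m(x_i, x_j): a conic
    # combination of kernels, hence positive semi-definite.
    K = np.zeros((X.shape[0], X.shape[0]))
    for w, l in zip(weights, lengthscales):
        K += w ** 2 * rbf_kernel(X, X, l)
    return K

X = np.linspace(-1.0, 1.0, 5)[:, None]
K = output_covariance(X, weights=[0.5, 0.0, 1.2], lengthscales=[0.3, 1.0, 2.0])
eigvals = np.linalg.eigvalsh(K)          # all non-negative up to round-off
```

Setting a weight to zero (as the spike does) prunes its kernel, which is the mechanism behind the automatic model order selection discussed above.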
2.2 Sparse factor and principal component analysis

An interesting case arises when $\mu_m(x) = 0$ and $k_m(x_i, x_j) = \delta_{ij}$ $\forall m$, where $\delta_{ij}$ is the Kronecker delta. This says that each latent function is drawn from a white process so that it consists of independent values each following the standard normal distribution. We first define matrices $\Phi \in \mathbb{R}^{N \times M}$ and $W \in \mathbb{R}^{Q \times M}$, whose elements are, respectively, $\phi_{nm} = \phi_m(x_n)$ and $w_{qm}$. Then, the model in (1) reduces to

$$Y = \Phi W^{\top} + E, \quad (2a)$$
$$w_{qm} \sim \pi\, \mathcal{N}(w_{qm} \mid 0, \sigma_w^2) + (1 - \pi)\, \delta_0(w_{qm}), \quad \forall q,m \quad (2b)$$
$$\phi_{nm} \sim \mathcal{N}(\phi_{nm} \mid 0, 1), \quad \forall n,m \quad (2c)$$
$$\epsilon_{nq} \sim \mathcal{N}(\epsilon_{nq} \mid 0, \sigma_q^2), \quad \forall n,q \quad (2d)$$

where $E$ is an $N \times Q$ noise matrix with entries $\epsilon_{nq}$. The resulting model thus corresponds to sparse factor analysis or sparse probabilistic PCA (when the noise is homoscedastic, i.e., $\sigma_q^2$ is constant for all $q$). Observe that the sparse spike and slab prior is placed on the factor loadings $W$.
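A short sketch of sampling from the sparse factor analysis special case (2a)-(2d); the sizes and hyperparameter values below are arbitrary assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 200, 8, 4                        # assumed sizes, not from the paper
pi, sigma_w, sigma_noise = 0.25, 1.0, 0.1

# Spike and slab loadings (2b): each entry is zero w.p. 1 - pi, Gaussian w.p. pi.
S = rng.random((Q, M)) < pi
W = S * rng.normal(0.0, sigma_w, (Q, M))

Phi = rng.normal(size=(N, M))              # white latent values (2c)
E = rng.normal(0.0, sigma_noise, (N, Q))   # homoscedastic noise (2d)
Y = Phi @ W.T + E                          # observations (2a)
sparsity = 1.0 - S.mean()                  # empirical fraction of exact zeros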
2.3 Missing values
The method can easily handle missing values and thus be applied to problems involving matrix
completion and collaborative filtering. More precisely, in the presence of missing values we have
a binary matrix Z ? RN ?Q that indicates the observed elements in Y. Using Z the likelihood in
(1a) is modified according to ynq ? N (ynq |fq (xn ), ?q2 ), ?n,q s.t. [Z]nq = 1. In the experiments we
consider missing values in applications such as image inpainting and collaborative filtering.
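In practice the modified likelihood just sums Gaussian log-density terms over the observed entries. A minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def masked_gaussian_loglik(Y, F, Z, sigma):
    # Gaussian log-likelihood summed only over entries with [Z]_nq = 1;
    # missing entries simply drop out of the product of likelihood terms.
    resid = (Y - F)[Z == 1]
    n = resid.size
    return (-0.5 * n * np.log(2 * np.pi * sigma ** 2)
            - 0.5 * np.sum(resid ** 2) / sigma ** 2)

Y = np.array([[1.0, 2.0], [3.0, 4.0]])
F = np.zeros((2, 2))                     # predicted means
Z = np.array([[1, 0], [1, 1]])           # entry (0, 1) of Y is missing
ll = masked_gaussian_loglik(Y, F, Z, sigma=1.0)
```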
3 Efficient variational inference
The presence of the Dirac delta mass function makes the application of variational approximate inference algorithms in spike and slab Bayesian models troublesome. However, there exists a simple reparameterization of the spike and slab prior that is more amenable to approximate inference methods. Specifically, assume a Gaussian random variable $\tilde{w}_{qm} \sim \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)$ and a Bernoulli random variable $s_{qm} \sim \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}$. The product $s_{qm} \tilde{w}_{qm}$ forms a new random variable that follows the probability distribution in eq. (1c). This allows to reparameterize $w_{qm}$ according to $w_{qm} = s_{qm} \tilde{w}_{qm}$ and assign the above prior distributions on $s_{qm}$ and $\tilde{w}_{qm}$. Thus, the reparameterized spike and slab prior takes the form:

$$p(\tilde{w}_{qm}, s_{qm}) = \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}, \quad \forall q,m. \quad (3)$$
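The equivalence between the reparameterized pair and the original spike and slab prior (1c) can be checked by simulation; this sketch (ours, with arbitrary parameter values) verifies the atom at zero and the moments $E[w] = 0$ and $E[w^2] = \pi \sigma_w^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
pi, sigma_w = 0.3, 1.0
n = 200_000

# Reparameterized draw: w = s * w_tilde with s ~ Bernoulli(pi) and
# w_tilde ~ N(0, sigma_w^2).  Marginally, w follows (1c): an atom at
# zero with mass 1 - pi plus a Gaussian slab with mass pi.
s = rng.random(n) < pi
w_tilde = rng.normal(0.0, sigma_w, n)
w = s * w_tilde

zero_mass = np.mean(w == 0.0)            # should be close to 1 - pi
slab_var = w.var()                       # E[w^2] = pi * sigma_w^2
```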
Notice that the presence of $w_{qm}$ in the likelihood function in (1a) is now replaced by the product $s_{qm} \tilde{w}_{qm}$. After the above reparameterization, a standard mean field variational method that uses the factorized variational distribution over $\tilde{W} = \{\tilde{w}_q\}_{q=1}^{Q}$ and $S = \{s_q\}_{q=1}^{Q}$ takes the form $q(\tilde{W}, S) = \prod_{q=1}^{Q} q(\tilde{w}_q, s_q)$, where

$$q(\tilde{w}_q, s_q) = q(\tilde{w}_q)\, q(s_q) = \mathcal{N}(\tilde{w}_q \mid \mu_{w_q}, \Sigma_{w_q}) \prod_{m=1}^{M} \gamma_{qm}^{s_{qm}} (1 - \gamma_{qm})^{1 - s_{qm}} \quad (4)$$

and where $(\mu_{w_q}, \Sigma_{w_q}, \gamma_q)$ are variational parameters. Such an approach has been extensively used in [18] and also considered in [19]. However, the above variational distribution leads to a very inefficient approximation. This is because (4) is a unimodal distribution, and therefore has limited capacity when approximating the factorial true posterior distribution which can have exponentially many modes. To analyze the nature of the true posterior distribution, we consider the following two properties derived by assuming for simplicity a single output ($Q = 1$), so index $q$ is dropped.
Property 1: The true marginal posterior $p(\tilde{w} \mid Y)$ can be written as a mixture distribution having $2^M$ components. This is an obvious fact since $p(\tilde{w} \mid Y) = \sum_{s} p(\tilde{w} \mid s, Y)\, p(s \mid Y)$, where the summation involves all $2^M$ possible values of the binary vector $s$.

The second property characterizes the nature of each conditional $p(\tilde{w} \mid s, Y)$ in the above sum.

Property 2: Assume the conditional distribution $p(\tilde{w} \mid s, Y)$. We can write $s = s_1 \cup s_0$, where $s_1$ denotes the elements in $s$ with value one and $s_0$ the elements with value zero. Using the
correspondence between $s$ and $\tilde{w}$, we have $\tilde{w} = \tilde{w}_1 \cup \tilde{w}_0$. Then, $p(\tilde{w} \mid s, Y)$ factorizes as $p(\tilde{w} \mid s, Y) = p(\tilde{w}_1 \mid Y)\, \mathcal{N}(\tilde{w}_0 \mid 0, \sigma_w^2 I)$, which says that the posterior over $\tilde{w}_0$ given $s_0 = 0$ is equal to the prior over $\tilde{w}_0$. This property is obvious because $\tilde{w}_0$ and $s_0$ appear in the likelihood as an elementwise product $\tilde{w}_0 \circ s_0$; thus when $s_0 = 0$, $\tilde{w}_0$ becomes disconnected from the data.
The standard variational distribution in (4) ignores these properties and approximates the marginal $p(\tilde{w} \mid Y)$, which is a mixture with $2^M$ components, with a single Gaussian distribution. Next we present an alternative variational approximation that takes into account the above properties.
3.1 The proposed variational method
In the reparameterized spike and slab prior, each pair of variables $\{\tilde{w}_{qm}, s_{qm}\}$ is strongly correlated since their product is the underlying variable that interacts with the data. Thus, a sensible approximation must treat each pair $\{\tilde{w}_{qm}, s_{qm}\}$ as a unit so that $\{\tilde{w}_{qm}, s_{qm}\}$ are placed in the same factor of the variational distribution. The simplest factorization that achieves this is:

$$q(\tilde{w}_q, s_q) = \prod_{m=1}^{M} q(\tilde{w}_{qm}, s_{qm}). \quad (5)$$

This variational distribution yields a marginal $q(\tilde{w}_q)$ which has $2^M$ components. This can be seen by writing $q(\tilde{w}_q) = \prod_{m=1}^{M} [q(\tilde{w}_{qm}, s_{qm} = 1) + q(\tilde{w}_{qm}, s_{qm} = 0)]$ and then by multiplying out the terms a mixture of $2^M$ components is obtained. Therefore, Property 1 is satisfied by (5). It turns out that Property 2 is also satisfied. This can be shown by taking the stationary condition for the factor $q(\tilde{w}_{qm}, s_{qm})$ when maximizing the variational lower bound (on the true marginal likelihood):
$$\left\langle \log \frac{p(Y, \tilde{w}_{qm}, s_{qm}, \theta)\, p(\theta)\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}}{q(\tilde{w}_{qm}, s_{qm})\, q(\theta)} \right\rangle_{q(\tilde{w}_{qm}, s_{qm})\, q(\theta)}, \quad (6)$$

where $\theta$ are the remaining random variables in the model (i.e., excluding $\{\tilde{w}_{qm}, s_{qm}\}$) and $q(\theta)$ their variational distribution. The stationary condition for $q(\tilde{w}_{qm}, s_{qm})$ is

$$q(\tilde{w}_{qm}, s_{qm}) = \frac{1}{Z}\, e^{\langle \log p(Y, \tilde{w}_{qm}, s_{qm}, \theta) \rangle_{q(\theta)}}\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}, \quad (7)$$

where $Z$ is a normalizing constant that does not depend on $\{\tilde{w}_{qm}, s_{qm}\}$. Therefore, we have $q(\tilde{w}_{qm} \mid s_{qm} = 0) \propto q(\tilde{w}_{qm}, s_{qm} = 0) = \frac{C}{Z}\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)(1 - \pi)$, where $C = e^{\langle \log p(Y, \tilde{w}_{qm}, s_{qm} = 0, \theta) \rangle_{q(\theta)}}$ is a constant that does not depend on $\tilde{w}_{qm}$. From the last expression we obtain $q(\tilde{w}_{qm} \mid s_{qm} = 0) = \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)$, which implies that Property 2 is satisfied.
The above remarks regarding variational distribution (5) are general and can hold for many spike and slab probability models as long as the weights $\tilde{w}$ and binary variables $s$ interact inside the likelihood function according to $\tilde{w} \circ s$.
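To illustrate Property 1 under the paired factorization, the following sketch enumerates the $2^M$ Gaussian components of the marginal $q(\tilde{w})$ for a toy $M = 3$ case; the variational parameter values are arbitrary assumptions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
M = 3
gamma = rng.random(M)                    # gamma_m = q(s_m = 1), assumed values
mu1 = rng.normal(size=M)                 # mean of q(w_m | s_m = 1)
var1 = rng.random(M) + 0.1               # variance of q(w_m | s_m = 1)
sigma_w2 = 1.0                           # q(w_m | s_m = 0) = N(0, sigma_w^2)

# Multiplying out prod_m [q(w_m, s_m=1) + q(w_m, s_m=0)] gives one Gaussian
# component per binary configuration s in {0,1}^M: 2^M components in total.
components = []
for s in itertools.product([0, 1], repeat=M):
    weight = np.prod([g if b else 1.0 - g for g, b in zip(gamma, s)])
    mean = np.where(s, mu1, 0.0)
    var = np.where(s, var1, sigma_w2)
    components.append((weight, mean, var))

total_weight = sum(c[0] for c in components)   # mixture weights sum to 1
```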
3.2 Application to the multi-task and multiple kernel learning model
Here, we briefly discuss the variational method applied to the multi-task and multiple kernel model
described in Section 2.1 and refer to supplementary material for variational EM update equations.
The explicit form of the joint probability density function on the training data of model (1) is

$$p(Y, \tilde{W}, S, \Phi) = \mathcal{N}(Y \mid \Phi (\tilde{W} \circ S)^{\top}, \Sigma) \prod_{q,m} \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}} \prod_{m=1}^{M} \mathcal{N}(\phi_m \mid \mu_m, K_m),$$

where $\{\tilde{W}, S, \Phi\}$ is the whole set of random variables that need to be marginalized out to compute the marginal likelihood. The marginal likelihood is analytically intractable, so we lower bound it using the following variational distribution

$$q(\tilde{W}, S, \Phi) = \prod_{q=1}^{Q} \prod_{m=1}^{M} q(\tilde{w}_{qm}, s_{qm}) \prod_{m=1}^{M} q(\phi_m). \quad (8)$$

The stationary conditions of the lower bound result in analytical updates for all factors above. More precisely, $q(\phi_m)$ is an $N$-dimensional Gaussian distribution and each factor $q(\tilde{w}_{qm}, s_{qm})$ leads to a marginal $q(\tilde{w}_{qm})$ which is a mixture of two Gaussians where one component is $q(\tilde{w}_{qm} \mid s_{qm} = 0) = \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)$, as shown in the previous section. The optimization proceeds using an EM algorithm that at the E-step updates the factors in (8) and at the M-step updates hyperparameters $\{\{\sigma_q\}_{q=1}^{Q}, \sigma_w^2, \pi, \{\theta_m\}_{m=1}^{M}\}$, where $\theta_m$ parameterize the kernel matrix $K_m$. There is, however, one surprise in these updates. The GP hyperparameters $\theta_m$ are strongly dependent on the factor $q(\phi_m)$ of the corresponding GP latent vector, so updating $\theta_m$ while keeping the factor $q(\phi_m)$ fixed exhibits slow convergence. This problem is efficiently resolved by applying a Marginalized Variational step [20] which jointly updates the pair $(q(\phi_m), \theta_m)$. This more advanced update together with all remaining updates of the EM algorithm are discussed in detail in the supplementary material.
4 Assessing the accuracy of the approximation
In this section we compare the proposed variational inference method, in the following called
paired mean field (PMF), against the standard mean field (MF) approximation. For simplicity,
we consider a single-output linear regression problem where the data are generated according to: $y = (\tilde{w} \circ s)^{\top} x + \epsilon$. Moreover, to remove the effect of hyperparameter learning from the comparison, $(\sigma^2, \pi, \sigma_w^2)$ are fixed to known values. The objective of the comparison is to measure the accuracy when approximating the true posterior mean value for the parameter vector $w^{tr} = E[\tilde{w} \circ s]$, where the expectation is under the true posterior distribution. $w^{tr}$ is obtained by running a very long run of Gibbs sampling. PMF and MF provide alternative approximations $w^{PMF}$ and $w^{MF}$, and absolute errors between these approximations and $w^{tr}$ are used to measure accuracy. Since initialization is crucial for variational non-convex algorithms, the accuracy of PMF and MF is averaged over many random initializations of their respective variational distributions.
        soft-error              soft-bound                 extreme-error           extreme-bound
MF      0.917 [0.002, 1.930]    -628.9 [-554.6, -793.5]    1.880 [0.965, 2.561]    -895.0 [-618.9, -1483.3]
PMF     0.208 [0.002, 0.454]    -560.7 [-557.8, -564.1]    0.204 [0.002, 0.454]    -560.6 [-557.8, -564.0]

Table 1: Comparison of MF and PMF in Boston-housing data in terms of approximating the ground-truth. Average errors ($\sum_{m=1}^{13} |w_m^{tr} - w_m^{appr}|$) together with 95% confidence intervals (given by percentiles) are shown for soft and extreme initializations. Average values for the variational lower bound are also shown.
For the purpose of the comparison we also derived an efficient paired Gibbs sampler that follows exactly the same principle as PMF. This Gibbs sampler iteratively samples the pair $(\tilde{w}_m, s_m)$ from the conditional $p(\tilde{w}_m, s_m \mid \tilde{w}_{\setminus m}, s_{\setminus m}, y)$ and has been observed to mix much faster than the standard Gibbs sampler that samples $\tilde{w}$ and $s$ separately. More details about the paired Gibbs sampler are given in the supplementary material.
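A minimal sketch of such a paired (block) Gibbs sampler for single-output spike and slab linear regression, using the standard closed-form conditionals: $s_m$ is drawn from its conditional with $\tilde{w}_m$ integrated out, then $\tilde{w}_m \mid s_m$. This is our own illustrative implementation, not the authors' code:

```python
import numpy as np

def paired_gibbs(X, y, pi, sigma2, sigma_w2, n_iter=500, seed=0):
    # Jointly samples each pair (w_tilde_m, s_m) given all other pairs.
    rng = np.random.default_rng(seed)
    n, M = X.shape
    s = np.zeros(M, dtype=bool)
    w = np.zeros(M)
    xtx = (X ** 2).sum(0)
    incl = np.zeros(M)                       # running inclusion counts
    for _ in range(n_iter):
        for m in range(M):
            # Residual with feature m removed from the current fit.
            r = y - X @ (s * w) + X[:, m] * (s[m] * w[m])
            v = 1.0 / (xtx[m] / sigma2 + 1.0 / sigma_w2)
            mu = v * X[:, m] @ r / sigma2
            # Marginal conditional of s_m (w_tilde_m integrated out).
            log_odds = (np.log(pi / (1 - pi))
                        + 0.5 * np.log(v / sigma_w2)
                        + 0.5 * mu ** 2 / v)
            s[m] = rng.random() < 1.0 / (1.0 + np.exp(-log_odds))
            # Then w_tilde_m | s_m: posterior slab if on, prior if off.
            w[m] = rng.normal(mu, np.sqrt(v)) if s[m] else rng.normal(0.0, np.sqrt(sigma_w2))
            incl[m] += s[m]
    return incl / n_iter                     # posterior inclusion frequencies

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, 0.0, 0.0, -3.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
freq = paired_gibbs(X, y, pi=0.25, sigma2=0.01, sigma_w2=1.0)
```

With a strong signal the sampler identifies the active features almost immediately, which reflects the fast mixing claimed for the paired scheme.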
We considered the Boston-housing dataset which consists of 456 training examples and 13 inputs. Hyperparameters were fixed to values ($\sigma^2 = 0.1 \times \mathrm{var}(y)$, $\pi = 0.25$, $\sigma_w^2 = 1$), where $\mathrm{var}(y)$ denotes the variance of the data. We performed two types of experiments, each repeated 300 times. Each repetition of the first type uses a soft random initialization of each $q(s_m = 1) = \gamma_m$ from the range $(0, 1)$. The second type uses an extreme random initialization so that each $\gamma_m$ is initialized to either 0 or 1. For each run PMF and MF are initialized to the same variational parameters.
Table 1 reports average absolute errors and also average values of the variational lower bounds.
Clearly, PMF is more accurate than MF, achieves significantly higher values for the lower bound
and exhibits smaller variance under different initializations. Further, for the more difficult case
of extreme initializations the performance of MF becomes worse, while the performance of PMF
remains unchanged. This shows that optimization in PMF, although non-convex, is very robust to
unfavorable initializations. Similar experiments in other datasets have confirmed the above remarks.
5 Experiments
Toy multi-output regression dataset. To illustrate the capabilities of the proposed model, we first
apply it to a toy multi-output dataset with missing observations. Toy data is generated as follows:
Ten random latent functions are generated by sampling i.i.d. from zero-mean GPs with the following non-stationary covariance function

$$k(x_i, x_j) = \exp\!\left(\frac{-x_i^2 - x_j^2}{20}\right)\left(4\cos(0.5(x_i - x_j)) + \cos(2(x_i - x_j))\right),$$
at 201 evenly spaced points in the interval $x \in [-10, 10]$. Ten tasks are then generated by adding
Gaussian noise with standard deviation 0.2 to those random latent functions, and two additional tasks
consist only of Gaussian noise with standard deviations 0.1 and 0.4. Finally, for each of the 12 tasks,
we artificially simulate missing data by removing 41 contiguous observations, as shown in Figure
1. Missing data are not available to any learning algorithm, and will be used to test performance
only. Note that the above covariance function is rank-4, so ten out of the twelve tasks will be related,
though we do not know how, or which ones.
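The rank-4 claim can be checked numerically: building the covariance matrix from the formula above on the same 201-point grid and counting its non-negligible eigenvalues gives exactly four (a verification sketch of ours, not from the paper):

```python
import numpy as np

# Evaluate k(x_i, x_j) = exp((-x_i^2 - x_j^2)/20) (4 cos(0.5(x_i - x_j)) + cos(2(x_i - x_j)))
# on the 201-point grid used for the toy data.
x = np.linspace(-10.0, 10.0, 201)
XI, XJ = np.meshgrid(x, x, indexing="ij")
K = np.exp((-XI ** 2 - XJ ** 2) / 20.0) * (
    4.0 * np.cos(0.5 * (XI - XJ)) + np.cos(2.0 * (XI - XJ)))

# Each cos(a(x_i - x_j)) term splits into cos/sin products (rank 2),
# so the full kernel has rank 2 + 2 = 4.
eigvals = np.linalg.eigvalsh(K)
rank = int(np.sum(eigvals > 1e-8 * eigvals.max()))
```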
All tasks are then learned using both independent GPs with squared exponential (SE) covariance function $k_{SE}(x_i, x_j) = \exp(-(x_i - x_j)^2/(2\ell))$ and the proposed MTMKL with $M = 7$ latent functions, each of them also using the SE prior. Hyperparameter $\ell$, as well as noise levels, are learned independently for each latent function. Figure 1 shows the inferred posterior means.
Figure 1: Twelve related tasks and predictions according to independent GPs (blue, continuous line) and
MTMKL (red, dashed line). Missing data for each task is represented using green circles.
The mean square errors (MSE) between predictions and missing observations for each task are displayed in Table 2. MTMKL is able to infer how tasks are related and then exploit that information
to make much better predictions. After learning, only 4 out of the 7 available latent functions remain active, while the other ones are pruned by setting the corresponding weights to zero. This is in
correspondence with the generating covariance function, which only had 4 eigenfunctions, showing
how model order selection is automatic.
Method \ Task #     1     2     3     4     5     6     7     8     9    10    11    12
Independent GPs  6.51 11.70  7.52  2.49  1.53 18.25  0.41  7.43  2.73  1.81 19.93 93.80
MTMKL            1.97  4.57  7.71  1.94  1.98  2.09  0.41  1.96  1.90  1.57  1.20  2.83

Table 2: MSE performance of independent GPs vs. MTMKL on the missing observations for each task.
Inferred noise standard deviations for the noise-only tasks are 0.10 and 0.45, and the average for the
remaining tasks is 0.22, which agrees well with the stated actual values.
The flowers dataset. Though the proposed model has been designed as a tool for regression, it
can also be used approximately to solve classification problems by using output values to identify
class membership. In this section we will apply it to the challenging flower identification problem
posed in [21]. There are 2040 instances of flowers for training and 6149 for testing, mainly acquired
from the web, with varying scales, resolutions, etc., which are labeled into 102 categories. In [21],
four relevant features are identified: Color, histogram of gradient orientations and the scale invariant
feature transform, sampled on both the foreground region and its boundary. More information is
available at http://www.robots.ox.ac.uk/~vgg/data/flowers/.
For this type of dataset, state of the art performance has been achieved using a weighted linear
combination of kernels (one per feature) in a support vector machine (SVM) classifier. A different
set of weights is learned for each class. In [22] it is shown that these weights can be learned by
solving a convex optimization problem. I.e., the standard approach to tackle the flower classification
problem would correspond to solving 102 independent binary classification problems, each using a
linear combination of 4 kernels. We take a different approach: Since all the 102 binary classification
tasks are related, we learn all of them at once as a multi-task multiple-kernel problem, hoping that
knowledge transfer between them will enhance performance.
For each training instance, we set the corresponding output to +1 for the desired task, whereas the
output for the remaining tasks is set to -1. Then we consider both using 10 and 13 latent functions
per feature (i.e., M = 40 and M = 52). We measure performance in terms of the recognition
rate (RR), which is the average of break-even points (where precision equals recall) for each class;
average area under the curve (AUC); and the multi-class accuracy (MA) which is the rate of correctly
classified instances. As baseline, recall that a random classifier would yield a RR and AUC of 0.5
and a MA of 1/102 = 0.0098. Results are reported in Table 3.
Method           Latent function #    AUC on test set    RR on test set    MA on test set
MTMKL            M = 40               0.944              0.889             0.329
MTMKL            M = 52               0.952              0.893             0.400
MKL from [21]    M = 408              -                  0.728             -
MKL from [13]    M = 408              0.957              -                 -

Table 3: Performance of the different multiple kernel learning algorithms on the flowers dataset.
MTMKL significantly outperforms the state-of-the-art method in [21], yielding a performance in
line with [13], due to its ability to share information across tasks.
Image denoising and dictionary learning. Here we illustrate denoising on the 256 × 256 "house" image used in [19]. Three noise levels (standard deviations 15, 25 and 50) are considered. Following [19], we partition the noisy image in 62,001 overlapping 8 × 8 blocks and regard each block as a different task. MTMKL is then run using M = 64 "latent blocks", also known as "dictionary elements" (bigger dictionaries do not result in significant performance increase). For the covariance of the latent functions, we consider two possible choices: Either a white covariance function (as in [19]) or an exponential covariance of the form $k_{EXP}(x_i, x_j) = e^{-|x_i - x_j|/\ell}$, where $x$ are the pixel coordinates within each block. The first option is equivalent to placing an independent standard normal prior on each pixel of the dictionary. The second one, on the other hand, introduces correlation between neighboring pixels in the dictionary. Results are shown in Table 4. The exponential covariance clearly enhances performance and produces a more structured dictionary, as can be seen in Figure 3.(a). The peak signal-to-noise ratio (PSNR) obtained using the proposed approach is comparable to the state-of-the-art results obtained in [19].
Image inpainting and dictionary learning. We now address the inpainting problem in color images. Following [19], we consider a color image in which a random 80% of the RGB components
are missing. Using an analogous partitioning scheme as in the previous section we obtain 148,836
blocks of size 8 × 8 × 3, each of which is regarded as a different task. A dictionary size of M = 100
and a white covariance function (which is used in [19]) are selected. Note that we do not apply any
other preprocessing to data or any specific initialization as it is done in [19]. The PSNR of the image
Figure 2: Noisy "house" image with σ = 25 and restored version using exponential cov. function.

Noise std       σ = 15    σ = 25    σ = 50
Noisy image     24.66     20.22     14.20
White           33.98     30.98     26.14
Expon.          34.29     31.88     28.08

Table 4: PSNR (dB) for noisy and restored image using several noise levels and covariance functions.
after it is restored using MTMKL is 28.94 dB, see Figure 3.(b). This result is similar to the results
reported in [19] and close to the state-of-the-art result of 29.65 dB achieved in [23].
(a) House: Dict. for white and Exponential
(b) Castle: Missing values, restored and original
Figure 3: Dictionaries inferred from noisy (σ = 25) "house" image; and "castle" inpainting results.
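For reference, PSNR values such as those in Tables 4 follow the standard definition $10 \log_{10}(\mathrm{peak}^2/\mathrm{MSE})$; the helper below assumes 8-bit images with peak value 255 (a standard formula, not code from the paper):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 log10(peak^2 / MSE).
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((8, 8), 100.0)
noisy = clean + 16.0                     # constant error, so MSE = 256
val = psnr(clean, noisy)                 # 10*log10(255^2/256), about 24.05 dB
```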
Collaborative filtering. Finally, we performed an experiment on the 10M MovieLens data set that consists of 10 million ratings for 71,567 users and 10,681 films, with ratings ranging in {0.5, 1, . . . , 4.5, 5}. We followed the setup in [24] and used the ra and rb partitions provided with the database, which split the data into a training and a testing set, so that there are 10 ratings per user in the test set. We applied the sparse factor analysis model (i.e., sparse PCA but with heteroscedastic noise for the columns of the observation matrix Y, which corresponds to films) with M = 20 latent dimensions. The RMSE for the ra partition was 0.88 and for the rb partition 0.85, so 0.865 on average. This result is slightly better than the 0.8740 RMSE reported in [24] using GP-LVM.
6 Discussion
In this work we have proposed a spike and slab multi-task and multiple kernel learning model. A novel variational algorithm to perform inference in this model has been derived. The key contribution in this regard that explains the good performance of the algorithm is the choice of a joint distribution over $\tilde{w}_{qm}$ and $s_{qm}$ in the variational posterior, as opposed to the usual independence assumption. This has the effect of using exponentially many modes to approximate the posterior, thus rendering it more accurate and much more robust to poor initializations of the variational parameters. The relevance and wide applicability of the proposed model has been illustrated by using it on very diverse tasks: multi-output regression, multi-class classification, image denoising, image inpainting and collaborative filtering. Prior structure beliefs were introduced in image dictionaries, which is also a novel contribution to the best of our knowledge. Finally, an interesting topic for future research is to optimize the variational distribution proposed here with alternative approximate inference frameworks such as belief propagation or expectation propagation. This could allow extending current methodologies within such frameworks that assume unimodal approximations [25, 26].
Acknowledgments
We thank the reviewers for insightful comments. MKT was supported by EPSRC Grant No
EP/F005687/1 "Gaussian Processes for Systems Identification with Applications in Systems Biology". MLG gratefully acknowledges funding from CAM project CCG10-UC3M/TIC-5511 and
CONSOLIDER-INGENIO 2010 CSD2008-00010 (COMONSENS).
References
[1] T.J. Mitchell and J.J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023-1032, 1988.
[2] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994.
[3] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
[4] E.I. George and R.E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881-889, 1993.
[5] M. West. Bayesian factor regression models in the "large p, small n" paradigm. In Bayesian Statistics, pages 723-732. Oxford University Press, 2003.
[6] B. Efron. Microarrays, empirical Bayes and the two-groups model. Statistical Science, 23:1-22, 2008.
[7] C. Archambeau and F. Bach. Sparse probabilistic projections. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 73-80. 2009.
[8] F. Caron and A. Doucet. Sparse Bayesian nonparametric regression. In 25th International Conference on Machine Learning (ICML). ACM, 2008.
[9] M.W. Seeger and H. Nickisch. Compressed sensing and Bayesian experimental design. In ICML, pages 912-919, 2008.
[10] C.M. Carvalho, N.G. Polson, and J.G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97:465-480, 2010.
[11] T. Damoulas and M.A. Girolami. Probabilistic multi-class multi-kernel learning: on protein fold recognition and remote homology detection. Bioinformatics, 24:1264-1270, 2008.
[12] M. Christoudias, R. Urtasun, and T. Darrell. Bayesian localized multiple kernel learning. Technical report, EECS Department, University of California, Berkeley, Jul 2009.
[13] C. Archambeau and F. Bach. Multiple Gaussian process models. In NIPS 23 workshop on New Directions in Multiple Kernel Learning, 2010.
[14] Y.W. Teh, M. Seeger, and M.I. Jordan. Semiparametric latent factor models. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, volume 10, 2005.
[15] E.V. Bonilla, K.M.A. Chai, and C.K.I. Williams. Multi-task Gaussian process prediction. In Advances in Neural Information Processing Systems 20, 2008.
[16] P. Boyle and M. Frean. Dependent Gaussian processes. In Advances in Neural Information Processing Systems 17, pages 217-224. MIT Press, 2005.
[17] M. Alvarez and N.D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In Advances in Neural Information Processing Systems 20, pages 57-64, 2008.
[18] R. Yoshida and M. West. Bayesian learning in sparse graphical factor models via variational mean-field annealing. Journal of Machine Learning Research, 11:1771-1798, 2010.
[19] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2295-2303. 2009.
[20] M. Lázaro-Gredilla and M. Titsias. Variational heteroscedastic Gaussian process regression. In 28th International Conference on Machine Learning (ICML-11), pages 841-848, New York, NY, USA, June 2011. ACM.
[21] M.E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008.
[22] M. Varma and D. Ray. Learning the discriminative power invariance trade-off. In International Conference on Computer Vision, 2007.
[23] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Image Processing, 17, 2008.
[24] N.D. Lawrence and R. Urtasun. Non-linear matrix factorization with Gaussian processes. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 601-608, 2009.
[25] K. Sharp and M. Rattray. Dense message passing for sparse principal component analysis. In 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 725-732, 2010.
[26] J.M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. Network-based sparse Bayesian classification. Pattern Recognition, 44(4):886-900, 2011.
Similarity-based Learning via Data Driven
Embeddings
Prateek Jain
Microsoft Research India
Bangalore, INDIA
[email protected]
Purushottam Kar
Indian Institute of Technology
Kanpur, INDIA
[email protected]
Abstract
We consider the problem of classification using similarity/distance functions over
data. Specifically, we propose a framework for defining the goodness of a
(dis)similarity function with respect to a given learning task and propose algorithms that have guaranteed generalization properties when working with such
good functions. Our framework unifies and generalizes the frameworks proposed
by [1] and [2]. An attractive feature of our framework is its adaptability to data
- we do not promote a fixed notion of goodness but rather let data dictate it. We
show, by giving theoretical guarantees that the goodness criterion best suited to a
problem can itself be learned which makes our approach applicable to a variety of
domains and problems. We propose a landmarking-based approach to obtaining a
classifier from such learned goodness criteria. We then provide a novel diversity
based heuristic to perform task-driven selection of landmark points instead of random selection. We demonstrate the effectiveness of our goodness criteria learning
method as well as the landmark selection heuristic on a variety of similarity-based
learning datasets and benchmark UCI datasets on which our method consistently
outperforms existing approaches by a significant margin.
1 Introduction
Machine learning algorithms have found applications in diverse domains such as computer vision,
bio-informatics and speech recognition. Working in such heterogeneous domains often involves
handling data that is not presented as explicit features embedded into vector spaces. However in
many domains, for example co-authorship graphs, it is natural to devise similarity/distance functions
over pairs of points. While classical techniques like decision tree and linear perceptron cannot handle
such data, several modern machine learning algorithms such as support vector machine (SVM) can
be kernelized and are thereby capable of using kernels or similarity functions.
However, most of these algorithms require the similarity functions to be positive semi-definite
(PSD), which essentially implies that the similarity stems from an (implicit) embedding of the data
into a Hilbert space. Unfortunately in many domains, the most natural notion of similarity does not
satisfy this condition - moreover, verifying this condition is usually a non-trivial exercise. Take for
example the case of images on which the most natural notions of distance (Euclidean, Earth-mover)
[3] do not form PSD kernels. Co-authorship graphs give another such example.
Consequently, there have been efforts to develop algorithms that do not make assumptions about
the PSD-ness of the similarity functions used. One can discern three main approaches in this area.
The first approach tries to coerce a given similarity measure into a PSD one by either clipping or
shifting the spectrum of the kernel matrix [4, 5]. However, these approaches are mostly restricted to
transductive settings and are not applicable to large scale problems due to eigenvector computation
requirements. The second approach consists of algorithms that either adapt classical methods like
1
k-NN to handle non-PSD similarity/distance functions and consequently offer slow test times [5],
or are forced to solve non-convex formulations [6, 7].
The third approach, which has been investigated recently in a series of papers [1, 2, 8, 9], uses the
similarity function to embed the domain into a low dimensional Euclidean space. More specifically,
these algorithms choose landmark points in the domain which then give the embedding. Assuming
a certain ?goodness? property (that is formally defined) for the similarity function, these models
offer both generalization guarantees in terms of how well-suited the similarity function is to the
classification task as well as the ability to use fast algorithmic techniques such as linear SVM [10]
on the landmarked space. The model proposed by Balcan-Blum in [1] gives sufficient conditions for
a similarity function to be well suited to such a landmarking approach. Wang et al. in [2] on the other
hand provide goodness conditions for dissimilarity functions that enable landmarking algorithms.
Informally, a similarity (or distance) function can be said to be good if points of similar labels
are closer to each other than points of different labels in some sense. Both the models described
above restrict themselves to a fixed goodness criterion, which need not hold for the underlying data.
We observe that this might be too restrictive in many situations and present a framework that allows us to tune the goodness criterion itself to the classification problem at hand. Our framework
consequently unifies and generalizes those presented in [1] and [2]. We first prove generalization
bounds corresponding to landmarked embeddings under a fixed goodness criterion. We then provide a uniform-convergence bound that enables us to learn the best goodness criterion for a given
problem. We further generalize our framework by giving the ability to incorporate any Lipschitz
loss function into our goodness criterion which allows us to give guarantees for the use of various
algorithms such as C-SVM and logistic regression on the landmarked space.
Now similar to [1, 2], our framework requires random sampling of training points to create the
embedding space1 . However in practice, random sampling is inefficient and requires sampling of a
large number of points to form a useful embedding, thereby increasing training and test time. To
address this issue, [2] proposes a heuristic to select the points that are to be used as landmarks.
However their scheme is tied to their optimization algorithm and is computationally inefficient for
large scale data. In contrast, we propose a general heuristic for selecting informative landmarks
based on a novel notion of diversity which can then be applied to any instantiation of our model.
Finally, we apply our methods to a variety of benchmark datasets for similarity learning as well as
ones from the UCI repository. We empirically demonstrate that our learning model and landmark
selection heuristic consistently offers significant improvements over the existing approaches. In
particular, for small number of landmark points, which is a practically important scenario as it is
expensive to compute similarity function values at test time, our method provides, on an average,
accuracy boosts of up to 5% over existing methods. We also note that our methods can be applied on
top of any strategy used to learn the similarity measure (eg. MKL techniques [11]) or the distance
measure (eg. [12]) itself. Akin to [1], our techniques can also be extended to learn a combination of
(dis)similarity functions but we do not explore these extensions in this paper.
2 Methodology
Let D be a fixed but unknown distribution over the labeled input domain X and let ℓ : X → {−1, +1}
be a labeling over the domain. Given a (potentially non-PSD) similarity function² K :
X × X → ℝ, the goal is to learn a classifier ℓ̂ : X → {−1, +1} from a finite number of i.i.d.
samples from D that has bounded generalization error over D.
Now, learning a reasonable classifier seems unlikely if the given similarity function does not have
any inherent ?goodness? property. Intuitively, the goodness of a similarity function should be its
suitability to the classification task at hand. For PSD kernels, the notion of goodness is defined
in terms of the margin offered in the RKHS [13]. However, a more basic requirement is that the
similarity function should preserve affinities among similarly labeled points - that is to say, a good
similarity function should not, on an average, assign higher similarity values to dissimilarly labeled
points than to similarly labeled points. This intuitive notion of goodness turns out to be rather robust
1 Throughout the paper, we use the terms embedding space and landmarked space interchangeably.
2 Results described in this section hold for distance functions as well; we present results with respect to
similarity functions for sake of simplicity.
in the sense that all PSD kernels that offer a good margin in their respective RKHSs satisfy some
form of this goodness criterion as well [14].
Recently there has been some interest in studying different realizations of this general notion of
goodness and developing corresponding algorithms that allow for efficient learning with similarity/distance functions. Balcan-Blum in [1] present a goodness criteria in which a good similarity
function is considered to be one that, for most points, assigns a greater average similarity to similarly labeled points than to dissimilarly labeled points. More specifically, a similarity function is
(ε, γ)-good if there exists a weighing function w : X → ℝ such that at least a (1 − ε) probability
mass of examples x ∼ D satisfies:

E_{x′∼D} [w(x′) K(x, x′) | ℓ(x′) = ℓ(x)] ≥ E_{x′∼D} [w(x′) K(x, x′) | ℓ(x′) ≠ ℓ(x)] + γ.   (1)
where instead of average similarity, one considers an average weighted similarity to allow the definition to be more general.
Wang et al in [2] define a distance function d to be good if a large fraction of the domain is, on
an average, closer to similarly labeled points than to dissimilarly labeled points. They allow these
averages to be calculated based on some distribution distinct from D, one that may be more suited
to the learning problem. However it turns out that their definition is equivalent to one in which one
again assigns weights to domain elements, as done by [1], and the following holds
E_{x′,x″∼D×D} [w(x′) w(x″) sgn(d(x, x″) − d(x, x′)) | ℓ(x′) = ℓ(x), ℓ(x″) ≠ ℓ(x)] > γ   (2)
Assuming their respective goodness criteria, [1] and [2] provide efficient algorithms to learn classifiers with bounded generalization error. However these notions of goodness with a single fixed
criterion may be too restrictive in the sense that the data and the (dis)similarity function may not satisfy the underlying criterion. This is, for example, likely in situations with high intra-class variance.
Thus there is need to make the goodness criterion more flexible and data-dependent.
To this end, we unify and generalize both the above criteria to give a notion of goodness that is more
data dependent. Although the above goodness criteria (1) and (2) seem disparate at first, they can
be shown to be special cases of a generalized framework where an antisymmetric function is used
to compare intra and inter-class affinities. We use this observation to define our novel goodness
criterion using arbitrary bounded antisymmetric functions which we refer to as transfer functions.
This allows us to define a family of goodness criteria of which (1) and (2) form special cases ((1)
uses the identity function and (2) uses the sign function as transfer function). Moreover, the resulting
definition of a good similarity function is more flexible and data dependent. In the rest of the paper
we shall always assume that our similarity functions are normalized i.e. for the domain of interest
X, sup_{x,y∈X} K(x, y) ≤ 1.
Definition 1 (Good Similarity Function). A similarity function K : X × X → ℝ is said to be
an (ε, γ, B)-good similarity for a learning problem, where ε, γ, B > 0, if for some antisymmetric
transfer function f : ℝ → ℝ and some weighing function w : X × X → [−B, B], at least a (1 − ε)
probability mass of examples x ∼ D satisfies

E_{x′,x″∼D×D} [w(x′, x″) f(K(x, x′) − K(x, x″)) | ℓ(x′) = ℓ(x), ℓ(x″) ≠ ℓ(x)] ≥ C_f γ   (3)

where C_f = sup_{x,x′∈X} f(K(x, x′)) − inf_{x,x′∈X} f(K(x, x′)).
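On a finite labeled sample, one can estimate whether a given triple (K, f, w) meets this criterion by replacing the expectation over D × D with an average over all same/different-label pairs; a minimal sketch in which the toy points and the particular K, f, and w are all our illustrative choices, not the paper's:

```python
def goodness_fraction(points, labels, K, f, w, gamma, Cf):
    """Empirical version of Definition 1: for each sample point x, average
    w(x', x'') * f(K(x, x') - K(x, x'')) over all pairs with l(x') = l(x)
    and l(x'') != l(x), and report the fraction of points whose average
    clears the threshold Cf * gamma."""
    n, good = len(points), 0
    for i in range(n):
        vals = [w(points[j], points[k]) *
                f(K(points[i], points[j]) - K(points[i], points[k]))
                for j in range(n) for k in range(n)
                if labels[j] == labels[i] and labels[k] != labels[i]]
        if vals and sum(vals) / len(vals) >= Cf * gamma:
            good += 1
    return good / n

# Toy 1-D instance; every choice below (points, K, f, w) is illustrative.
pts  = [-1.2, -1.0, -0.8, 0.8, 1.0, 1.2]
labs = [-1, -1, -1, 1, 1, 1]
K = lambda a, b: 1.0 - min(abs(a - b), 1.0)   # normalized similarity, K <= 1
f = lambda t: t                                # identity transfer
w = lambda a, b: 1.0                           # constant weighing function
print(goodness_fraction(pts, labs, K, f, w, gamma=0.1, Cf=1.0))  # 1.0
```

With the identity transfer and constant weights this reduces to the Balcan-Blum style criterion of Equation (1); swapping in a sign transfer recovers the Wang et al. style criterion.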
As mentioned before, the above goodness criterion generalizes the previous notions of goodness3
and is adaptive to changes in data as it allows us, as shall be shown later, to learn the best possible
criterion for a given classification task by choosing the most appropriate transfer function from a
parameterized family of functions. We stress that the property of antisymmetry for the transfer
function is crucial to the definition in order to provide a uniform treatment to points of all classes as
will be evident in the proof4 of Theorem 2.
As in [1, 2], our goodness criterion lends itself to a simple learning algorithm which consists of
choosing a set of d random pairs of points from the domain P = {(x_i⁺, x_i⁻)}_{i=1}^d (which we refer to
as landmark pairs) and defining an embedding of the domain into a landmarked space using these
landmarks: Φ_L : X → ℝ^d, Φ_L(x) = ( f(K(x, x_i⁺) − K(x, x_i⁻)) )_{i=1}^d ∈ ℝ^d. The advantage of
performing this embedding is the guaranteed existence of a large margin classifier in the landmarked
space as shown below.

3 We refer the reader to the supplementary material (Section 2) for a discussion.
4 Due to lack of space we relegate all proofs to the supplementary material.
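The embedding step itself is just a d-dimensional feature map; a sketch (the concrete similarity, transfer function, and landmark pairs below are our illustrative choices):

```python
def embed(x, landmark_pairs, K, f):
    """Landmarked-space map: Phi_L(x) = (f(K(x, x_i+) - K(x, x_i-)))_{i=1..d}."""
    return [f(K(x, xp) - K(x, xm)) for xp, xm in landmark_pairs]

# Illustrative instantiation; K, f and the landmark pairs are our choices.
K = lambda a, b: 1.0 - min(abs(a - b), 1.0)
f = lambda t: t
pairs = [(-1.0, 1.0), (-0.8, 1.2)]   # (positive, negative) landmark pairs
coords = embed(-0.9, pairs, K, f)    # a point near the positive landmarks
print(all(c > 0 for c in coords))    # True: both coordinates come out positive
```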
Theorem 2. If K is an (ε, γ, B)-good similarity with respect to transfer function f and weight function w, then for any ε₁ > 0, with probability at least 1 − δ over the choice of d = (8/γ²) ln(2/δε₁)
positive and negative samples, {x_i⁺}_{i=1}^d ∼ D⁺ and {x_i⁻}_{i=1}^d ∼ D⁻ respectively, the classifier
h(x) = sgn[g(x)], where g(x) = (1/d) Σ_{i=1}^d w(x_i⁺, x_i⁻) f(K(x, x_i⁺) − K(x, x_i⁻)), has error no more
than ε + ε₁ at margin γ/2.
However, there are two hurdles to obtaining this large margin classifier. Firstly, the existence of this
classifier itself is predicated on the use of the correct transfer function, something which is unknown.
Secondly, even if an optimal transfer function is known, the above formulation cannot be converted
into an efficient learning algorithm for discovering the (unknown) weights since the formulation
seeks to minimize the number of misclassifications which is an intractable problem in general.
We overcome these two hurdles by proposing a nested learning problem. First of all we assume
that for some fixed loss function L, given any transfer function and any set of landmark pairs, it is
possible to obtain a large margin classifier in the corresponding landmarked space that minimizes L.
Having made this assumption, we address below the issue of learning the optimal transfer function
for a given learning task. However as we have noted before, this assumption is not valid for arbitrary
loss functions. This is why, subsequently in Section 2.2, we shall show it to be valid for a large class
of loss functions by incorporating surrogate loss functions into our goodness criterion.
2.1 Learning the transfer function
In this section we present results that allow us to learn a near optimal transfer function from a family
of transfer functions. We shall assume, for some fixed loss function L, the existence of an efficient
routine which we refer to as TRAIN that shall return, for any landmarked space indexed by a set of
landmark pairs P, a large margin classifier minimizing L. The routine TRAIN is allowed to make
use of additional training data to come up with this classifier.
An immediate algorithm for choosing the best transfer function is to simply search the set of possible transfer functions (in an algorithmically efficient manner) and choose the one offering lowest
training error. We show here that given enough landmark pairs, this simple technique, which we
refer to as FTUNE (see Algorithm 2) is guaranteed to return a near-best transfer function. For this
we prove a uniform convergence type guarantee on the space of transfer functions.
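The FTUNE search can be sketched as a loop over a candidate transfer family, scoring each candidate by the training error of a classifier learned on its landmarked space. The least-squares trainer below is only our stand-in for the abstract TRAIN routine, and the toy data and transfer family are our illustrative choices:

```python
def train_weights(X_emb, y, lr=0.01, iters=200):
    """Least-squares weights on the landmarked features; a stand-in
    for the abstract TRAIN routine assumed in Section 2.1."""
    d = len(X_emb[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for phi, label in zip(X_emb, y):
            err = sum(wi * p for wi, p in zip(w, phi)) - label
            for i, p in enumerate(phi):
                grad[i] += 2 * err * p
        w = [wi - lr * g / len(y) for wi, g in zip(w, grad)]
    return w

def ftune(transfer_family, pairs, K, X, y):
    """Return (training errors, transfer function) for the best member of
    the family, mimicking the FTUNE search over transfer functions."""
    best = None
    for f in transfer_family:
        emb = [[f(K(x, xp) - K(x, xm)) for xp, xm in pairs] for x in X]
        w = train_weights(emb, y)
        errs = sum(1 for phi, label in zip(emb, y)
                   if (sum(wi * p for wi, p in zip(w, phi)) >= 0) != (label > 0))
        if best is None or errs < best[0]:
            best = (errs, f)
    return best

K = lambda a, b: 1.0 - min(abs(a - b), 1.0)
X = [-1.2, -1.0, -0.8, 0.8, 1.0, 1.2]
y = [-1, -1, -1, 1, 1, 1]
pairs = [(1.0, -1.0)]                      # one (positive, negative) landmark pair
family = [lambda t: t,                     # identity transfer (Balcan-Blum style)
          lambda t: (t > 0) - (t < 0)]     # sign transfer (Wang et al. style)
errs, f_best = ftune(family, pairs, K, X, y)
print(errs)  # 0
```

On this separable toy problem both transfer functions reach zero training error, so the search keeps the first; on real data the family would be scored on held-out data and the candidates would typically be a parameterized family, e.g. tanh ramps of varying slope.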
Let F ⊆ [−1, 1]^ℝ be a class of antisymmetric functions and W = [−B, B]^{X×X} be a class of weight
functions. For two real valued functions f and g defined on X, let ‖f − g‖_∞ := sup_{x∈X} |f(x) − g(x)|.
Let B_∞(f, r) := {f′ ∈ F : ‖f − f′‖_∞ < r}. Let L be a C_L-Lipschitz loss function. Let P =
{(x_i⁺, x_i⁻)}_{i=1}^d be a set of (random) landmark pairs. For any f ∈ F, w ∈ W, define

G_{(f,w)}(x) = E_{x′,x″∼D×D} [w(x′, x″) f(K(x, x′) − K(x, x″)) | ℓ(x′) = ℓ(x), ℓ(x″) ≠ ℓ(x)]
g_{(f,w)}(x) = (1/d) Σ_{i=1}^d w(x_i⁺, x_i⁻) f(K(x, x_i⁺) − K(x, x_i⁻))
Theorem 5 (see Section 2.2) guarantees us that for any fixed f and any ε₁ > 0, if d is large enough
then E_x[L(g_{(f,w)}(x))] ≤ E_x[L(G_{(f,w)}(x))] + ε₁. We now show that a similar result holds even if
one is allowed to vary f. Before stating the result, we develop some notation.
For any transfer function f and an arbitrary choice of landmark pairs P, let w_{(g,f)} be the best
weighing function for this choice of transfer function and landmark pairs, i.e. let w_{(g,f)} =
arg min_{w∈[−B,B]^d} E_{x∼D}[L(g_{(f,w)}(x))]⁵. Similarly, let w_{(G,f)} be the best weighing function corresponding
to G, i.e. w_{(G,f)} = arg min_{w∈W} E_{x∼D}[L(G_{(f,w)}(x))]. Then we can ensure the following:
5 Note that the function g_{(f,w)}(x) is dictated by the choice of the set of landmark pairs P.
Theorem 3. Let F be a compact class of transfer functions with respect to the infinity norm and let
ε₁, δ > 0. Let N(F, r) be the size of the smallest ε-net over F with respect to the infinity norm at
scale r = ε₁/(4 C_L B). Then if one chooses d = (64 B² C_L²/ε₁²) ln(16 B N(F, r)/δε₁) random landmark pairs then
we have the following with probability greater than (1 − δ):

sup_{f∈F} [ E_{x∼D}[L(g_{(f,w_{(g,f)})}(x))] − E_{x∼D}[L(G_{(f,w_{(G,f)})}(x))] ] ≤ ε₁
This result tells us that in a large enough landmarked space, we shall, for each function f ∈ F,
recover close to the best classifier possible for that transfer function. Thus, if we iterate over the
set of transfer functions (or use some gradient-descent based optimization routine), we are bound to
select a transfer function that is capable of giving a classifier that is close to the best.
2.2 Working with surrogate loss functions
The formulation of a good similarity function suggests a simple learning algorithm that involves
the construction of an embedding of the domain into a landmarked space on which the existence
of a large margin classifier having low misclassification rate is guaranteed. However, in order to
exploit this guarantee we would have to learn the weights w(x_i⁺, x_i⁻) associated with this classifier
by minimizing the empirical misclassification rate on some training set.
Unfortunately, not only is this problem intractable but also hard to solve approximately [15, 16].
Thus what we require is for the landmarked space to admit a classifier that has low error with
respect to a loss function that can also be efficiently minimized on any training set. In such a
situation, minimizing the loss on a random training set would, with very high probability, give us
weights that give similar performance guarantees as the ones used in the goodness criterion.
With a similar objective in mind, [1] offers variants of its goodness criterion tailored to the hinge loss
function which can be efficiently optimized on large training sets (for example LIBSVM [17]). Here
we give a general notion of goodness that can be tailored to any arbitrary Lipschitz loss function.
Definition 4. A similarity function K : X × X → ℝ is said to be an (ε, B)-good similarity for
a learning problem with respect to a loss function L : ℝ → ℝ⁺, where ε > 0, if for some transfer
function f : ℝ → ℝ and some weighing function w : X × X → [−B, B], E_{x∼D}[L(G(x))] ≤ ε, where

    G(x) = E_{x′,x″ ∼ D×D}[ w(x′, x″) f( K(x, x′) − K(x, x″) ) | ℓ(x′) = ℓ(x), ℓ(x″) ≠ ℓ(x) ]

One can see that taking the loss function L(x) = 1_{x < C_f γ} gives us Equation 3, which defines a
good similarity under the 0–1 loss function. It turns out that we can, for any Lipschitz loss function,
give similar guarantees on the performance of the classifier in the landmarked space.
Theorem 5. If K is an (ε, B)-good similarity function with respect to a C_L-Lipschitz loss
function L, then for any ε₁ > 0, with probability at least 1 − δ over the choice of
d = (16 B² C_L² / ε₁²) ln( 4B / (δ ε₁) ) positive and negative samples from D⁺ and D⁻ respectively,
the expected loss of the classifier g(x) with respect to L satisfies E_x[L(g(x))] ≤ ε + ε₁, where

    g(x) = (1/d) Σ_{i=1}^{d} w(x_i⁺, x_i⁻) f( K(x, x_i⁺) − K(x, x_i⁻) )

If the loss function is hinge loss at margin γ then C_L = 1/γ. The 0–1 loss function and the loss
function L(x) = 1_{x < γ} (implicitly used in Definition 1 and Theorem 2) are not Lipschitz and hence
this proof technique does not apply to them.
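The classifier of Theorem 5 is straightforward to compute once the landmark pairs and weights are fixed. The sketch below (our own illustrative code, not the authors' implementation; all function names are assumptions) embeds a point through differences of similarities to the d landmark pairs and averages the weighted transfer-function outputs:

```python
# Sketch of the landmarked-space classifier from Theorem 5 (illustrative only).
# K is any similarity function, f a transfer function, w the learned weights.

def embed(x, pos_landmarks, neg_landmarks, K, f):
    """Map x to R^d via f(K(x, x_i+) - K(x, x_i-)) for each landmark pair."""
    return [f(K(x, xp) - K(x, xn)) for xp, xn in zip(pos_landmarks, neg_landmarks)]

def g(x, pos_landmarks, neg_landmarks, K, f, w):
    """g(x) = (1/d) * sum_i w_i * f(K(x, x_i+) - K(x, x_i-))."""
    d = len(pos_landmarks)
    phi = embed(x, pos_landmarks, neg_landmarks, K, f)
    return sum(wi * zi for wi, zi in zip(w, phi)) / d

# Toy 1-D example: similarity = negative squared distance, identity transfer.
K = lambda a, b: -(a - b) ** 2
f = lambda t: t
pos, neg, w = [1.0, 2.0], [-1.0, -2.0], [1.0, 1.0]
print(g(0.5, pos, neg, K, f, w) > 0)  # x = 0.5 sits nearer the positive landmarks
```

Classification then amounts to thresholding g(x) at zero, exactly as in the goodness criterion.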
2.3 Selecting informative landmarks
Recall that the generalization guarantees we described in the previous section rely on random selection of landmark pairs from a fixed distribution over the domain. However, in practice, a totally
random selection might require one to select a large number of landmarks, thereby leading to an
inefficient classifier in terms of training as well as test times. For typical domains such as computer
vision, similarity function computation is an expensive task and hence selection of a small number
of landmarks should lead to a significant improvement in the test times. For this reason, we propose a landmark pair selection heuristic which we call DSELECT (see Algorithm 1). The heuristic
Algorithm 1 DSELECT
Require: A training set T, landmarking size d.
Ensure: A set of d landmark pairs/singletons.
 1: L ← get-random-element(T), P_FTUNE ← ∅
 2: for j = 2 to d do
 3:   z ← arg min_{x ∈ T} Σ_{x′ ∈ L} K(x, x′)
 4:   L ← L ∪ {z}, T ← T \ {z}
 5: end for
 6: for j = 1 to d do
 7:   Sample z₁, z₂ randomly from L with replacement s.t. ℓ(z₁) = 1, ℓ(z₂) = −1
 8:   P_FTUNE ← P_FTUNE ∪ {(z₁, z₂)}
 9: end for
10: return L (for BBS), P_FTUNE (for FTUNE)

Algorithm 2 FTUNE
Require: A family of transfer functions F, a similarity function K and a loss function L.
Ensure: An optimal transfer function f* ∈ F.
 1: Select d landmark pairs P.
 2: for all f ∈ F do
 3:   w_f ← TRAIN(P, L), L_f ← L(w_f)
 4: end for
 5: f* ← arg min_{f ∈ F} L_f
 6: return (f*, w_{f*}).
generalizes naturally to multi-class problems and can also be applied to the classification model of
Balcan-Blum that uses landmark singletons instead of pairs.
At the core of our heuristic is a novel notion of diversity among landmarks. Assuming K is a
normalized similarity kernel, we call a set of points S ⊆ X diverse if the average inter-point
similarity is small, i.e. (1 / (|S|(|S| − 1))) Σ_{x,y ∈ S, x ≠ y} K(x, y) ≪ 1 (in case we are
working with a distance kernel we would require large inter-point distances). The key observation
behind DSELECT is that a non-diverse set of landmarks would cause all data points to receive
identical embeddings and linear separation would be impossible. Small inter-landmark similarity,
on the other hand, would imply that the landmarks are well-spread in the domain and can capture
novel patterns in the data.
Similar notions of diversity have been used in the past for ensemble classifiers [18] and k-NN
classifiers [5]. Here we use this notion to achieve a better embedding into the landmarked space.
Experimental results demonstrate that the heuristic offers significant performance improvements over
random landmark selection (see Figure 1). One can easily extend Algorithm 1 to multi-class problems
by selecting a fixed number of landmarks from each class.
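The greedy selection at the heart of DSELECT is easy to sketch. The code below is our own illustration (the `get-random-element` step becomes a seeded random pick, and the final opposite-label pairing step of Algorithm 1 is omitted); it repeatedly adds the point with the smallest total similarity to the landmarks chosen so far:

```python
import math
import random

def dselect(points, K, d, seed=0):
    """Greedily pick d landmarks minimizing total similarity to those already chosen.

    points : list of training points; K : normalized similarity function.
    Follows the diversity idea behind DSELECT (Algorithm 1), without the
    final step that pairs landmarks of opposite labels.
    """
    rng = random.Random(seed)
    remaining = list(points)
    landmarks = [remaining.pop(rng.randrange(len(remaining)))]
    while len(landmarks) < d and remaining:
        # arg min over x in T of sum_{x' in L} K(x, x')
        z = min(remaining, key=lambda x: sum(K(x, xp) for xp in landmarks))
        landmarks.append(z)
        remaining.remove(z)
    return landmarks

# Toy example: points on a line in three clusters, Gaussian similarity.
# Diverse selection should pick one landmark per cluster.
K = lambda a, b: math.exp(-(a - b) ** 2)
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
print(sorted(dselect(pts, K, 3)))
```

Whatever the random first pick, the remaining greedy choices land in the other two clusters, which is exactly the well-spread behavior the heuristic is after.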
3 Empirical results
In this section, we empirically study the performance of our proposed methods on a variety of benchmark datasets. We refer to the algorithmic formulation presented in [1] as BBS and its augmentation
using DSELECT as BBS+D. We refer to the formulation presented in [2] as DBOOST. We refer to
our transfer function learning based formulation as FTUNE and its augmentation using DSELECT
as FTUNE+D. In multi-class classification scenarios we will use a one-vs-all formulation which
presents us with an opportunity to further exploit the transfer function by learning separate transfer
function per class (i.e. per one-vs-all problem). We shall refer to our formulation using a single
(resp. multiple) transfer function as FTUNE+D-S (resp. FTUNE+D-M). We take the class of ramp
functions indexed by a slope parameter as our set of transfer functions. We use 6 different values
of the slope parameter {1, 5, 10, 50, 100, 1000}. Note that these functions (approximately) include
both the identity function (used by [1]) and the sign function (used by [2]).
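For concreteness, one natural parameterization of this ramp family (an assumption on our part; the paper does not spell out the exact functional form) is a line of the given slope clipped to [−1, 1], so that slope 1 behaves like the identity on that interval and very large slopes approach the sign function:

```python
def ramp(slope):
    """Ramp transfer function: linear with the given slope, clipped to [-1, 1].

    Slope 1 approximates the identity on [-1, 1] (as used by BBS [1]);
    a very large slope approximates the sign function (as used by DBOOST [2]).
    """
    return lambda t: max(-1.0, min(1.0, slope * t))

# The six slopes used in the experiments.
transfer_family = [ramp(s) for s in (1, 5, 10, 50, 100, 1000)]

f1, f1000 = transfer_family[0], transfer_family[-1]
print(f1(0.3), f1000(0.3), f1000(-0.002))  # 0.3 1.0 -1.0
```

Iterating FTUNE over this family then just means training one linear classifier per slope and keeping the slope with the lowest validation loss.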
Our goal in this section is two-fold: 1) to show that our FTUNE method is able to learn a more
suitable transfer function for the underlying data than the existing methods BBS and DBOOST and
2) to show that our diversity based heuristic for landmark selection performs better than random
selection. To this end, we perform experiments on a few benchmark datasets for learning with similarity (non-PSD) functions [5] as well as on a variety of standard UCI datasets where the similarity
function used is the Gaussian kernel function.
For our experiments, we implemented our methods FTUNE and FTUNE+D as well as BBS and
BBS+D using MATLAB while using LIBLINEAR [10] for SVM classification. For DBOOST, we
use the C++ code provided by the authors of [2]. On all the datasets we randomly selected a fixed
percentage of data for training, validation and testing. Except for DBOOST , we selected the SVM
penalty constant C from the set {1, 10, 100, 1000} using validation. For each method and dataset, we
report classification accuracies averaged over 20 runs. We compare accuracies obtained by different
methods using t-test at 95% significance level.
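The 95%-level comparisons can be reproduced from per-run accuracies with a standard two-sample t statistic. The sketch below uses Welch's unequal-variance form as one reasonable choice (the paper does not state which variant was used) on hypothetical run data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

# Two hypothetical sets of 20 per-run accuracies with clearly different means.
runs_a = [0.84 + 0.01 * (i % 3) for i in range(20)]
runs_b = [0.73 + 0.01 * (i % 3) for i in range(20)]
t, dof = welch_t(runs_a, runs_b)
print(round(t, 2), round(dof))  # |t| far above the ~2.02 two-tailed cutoff at 95%
```

Methods whose |t| falls below the cutoff are statistically tied, which is how a dataset can end up with more than one bold-faced method in the tables.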
(a) 30 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE+D-S
AmazonBinary     0.73(0.13)   0.77(0.10)   0.84(0.12)
AuralSonar       0.82(0.08)   0.81(0.08)   0.80(0.08)
Patrol           0.51(0.06)   0.34(0.11)   0.58(0.06)
Voting           0.95(0.03)   0.94(0.03)   0.94(0.04)
Protein          0.98(0.02)   1.00(0.01)   0.98(0.02)
Mirex07          0.12(0.01)   0.21(0.03)   0.28(0.03)
Amazon47         0.39(0.06)   0.07(0.04)   0.61(0.08)
FaceRec          0.20(0.04)   0.12(0.03)   0.63(0.04)

(b) 300 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE+D-S
AmazonBinary     0.78(0.11)   0.82(0.10)   0.88(0.07)
AuralSonar       0.88(0.06)   0.85(0.07)   0.85(0.07)
Patrol           0.79(0.05)   0.55(0.12)   0.79(0.07)
Voting           0.97(0.02)   0.97(0.01)   0.97(0.02)
Protein          0.98(0.02)   0.99(0.02)   0.98(0.02)
Mirex07          0.17(0.02)   0.31(0.04)   0.35(0.02)
Amazon47         0.40(0.13)   0.07(0.05)   0.66(0.07)
FaceRec          0.27(0.05)   0.19(0.03)   0.64(0.04)

Table 1: Accuracies for benchmark similarity learning datasets for embedding dimensionality = 30,
300. Bold numbers indicate the best performance at the 95% confidence level.
[Figure 1: four panels (AmazonBinary, FaceRec, Amazon47, Mirex07) plotting accuracy against the
number of landmarks (50–300) for FTUNE+D, FTUNE, BBS+D, BBS and DBOOST.]

Figure 1: Accuracy obtained by various methods on four different datasets as the number of landmarks used increases. Note that for small numbers of landmarks (30, 50) our diversity-based landmark
selection criterion increases accuracy for both BBS and our method FTUNE-S significantly.
3.1 Similarity learning datasets
First, we conduct experiments on a few similarity learning datasets [5]; these datasets provide a
(non-PSD) similarity matrix along with class labels. For each of the datasets, we randomly select
70% of the data for training, 10% for validation and the remaining for testing purposes. We then
apply our FTUNE-S, FTUNE+D-S, BBS+D methods along with BBS and DBOOST with varying
number of landmark pairs. Note that we do not apply our FTUNE-M method to these datasets as it
overfits heavily to these datasets as typically they are small in size.
We first compare the accuracy achieved by FTUNE+D-S with the existing methods. Table 1 compares the accuracies achieved by our FTUNE+D-S method with those of BBS and DBOOST over
different datasets when using landmark sets of sizes 30 and 300. Numbers in brackets denote standard deviation over different runs. Note that in both the tables FTUNE+D-S is one of the best
methods (at the 95% significance level) on all but one dataset. Furthermore, for datasets with a large
number of classes, such as Amazon47 and FaceRec, our method outperforms BBS and DBOOST by
at least 20%. Also, note that some of the datasets have multiple bold-faced methods, which
means that the two-sample t-test (at the 95% level) does not find their means to be significantly different.
Next, we evaluate the effectiveness of our landmark selection criteria for both BBS and our method.
Figure 1 shows the accuracies achieved by various methods on four different datasets with increasing
number of landmarks. Note that in all the datasets, our diversity-based landmark selection criterion
increases the classification accuracy by around 5–6% for small numbers of landmarks.
3.2 UCI benchmark datasets
We now compare our FTUNE method against existing methods on a variety of UCI datasets [19].
We ran experiments with FTUNE and FTUNE+D but the latter did not provide any advantage. So
for lack of space we drop it from our presentation and only show results for FTUNE-S (FTUNE with
a single transfer function) and FTUNE-M (FTUNE with one transfer function per class). Similar
to [2], we use the Gaussian kernel function as the similarity function for evaluating our method.
We set the "width" parameter in the Gaussian kernel to be the mean of all pair-wise training data
distances, a standard heuristic. For all the datasets, we randomly select 50% data for training, 20%
for validation and the remaining for testing. We report accuracy values averaged over 20 runs for
each method with varying number of landmark pairs.
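This width heuristic is a one-liner over the training set. The sketch below is our own; it assumes the common form K(x, x′) = exp(−‖x − x′‖² / 2σ²) with σ set to the mean pairwise distance, which the paper does not state explicitly:

```python
import math

def mean_pairwise_distance(points):
    """Mean Euclidean distance over all unordered training pairs."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dists.append(math.dist(points[i], points[j]))
    return sum(dists) / len(dists)

def gaussian_similarity(width):
    """K(x, x') = exp(-||x - x'||^2 / (2 * width^2)), with the assumed kernel form."""
    return lambda x, y: math.exp(-math.dist(x, y) ** 2 / (2.0 * width ** 2))

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 4.0)]
width = mean_pairwise_distance(train)
K = gaussian_similarity(width)
print(round(width, 3), round(K(train[0], train[1]), 3))
```

Setting the width to the data's own distance scale keeps the similarities away from the degenerate all-zero and all-one regimes without per-dataset tuning.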
(a) 30 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE-S      FTUNE-M
Cod-rna          0.93(0.01)   0.89(0.01)   0.93(0.01)   0.93(0.01)
Isolet           0.81(0.01)   0.67(0.01)   0.84(0.01)   0.83(0.01)
Letters          0.67(0.02)   0.58(0.01)   0.69(0.01)   0.68(0.02)
Magic            0.82(0.01)   0.81(0.01)   0.84(0.01)   0.84(0.01)
Pen-digits       0.94(0.01)   0.93(0.01)   0.97(0.01)   0.97(0.00)
Nursery          0.91(0.01)   0.91(0.01)   0.90(0.01)   0.90(0.00)
Faults           0.70(0.01)   0.68(0.02)   0.70(0.02)   0.71(0.02)
Mfeat-pixel      0.94(0.01)   0.91(0.01)   0.95(0.01)   0.94(0.01)
Mfeat-zernike    0.79(0.02)   0.72(0.02)   0.79(0.02)   0.79(0.02)
Opt-digits       0.92(0.01)   0.89(0.01)   0.94(0.01)   0.94(0.01)
Satellite        0.85(0.01)   0.86(0.01)   0.86(0.01)   0.87(0.01)
Segment          0.90(0.01)   0.93(0.01)   0.92(0.01)   0.92(0.01)

(b) 300 Landmarks
Dataset/Method   BBS          DBOOST       FTUNE-S      FTUNE-M
Cod-rna          0.94(0.00)   0.93(0.00)   0.94(0.00)   0.94(0.00)
Isolet           0.91(0.01)   0.89(0.01)   0.93(0.01)   0.93(0.00)
Letters          0.72(0.01)   0.84(0.01)   0.83(0.01)   0.83(0.01)
Magic            0.84(0.01)   0.84(0.00)   0.85(0.01)   0.85(0.01)
Pen-digits       0.96(0.00)   0.99(0.00)   0.99(0.00)   0.99(0.00)
Nursery          0.93(0.01)   0.97(0.00)   0.96(0.00)   0.97(0.00)
Faults           0.72(0.02)   0.74(0.02)   0.73(0.02)   0.73(0.02)
Mfeat-pixel      0.96(0.01)   0.97(0.01)   0.97(0.01)   0.97(0.01)
Mfeat-zernike    0.81(0.01)   0.79(0.01)   0.82(0.02)   0.82(0.01)
Opt-digits       0.95(0.01)   0.97(0.00)   0.98(0.00)   0.98(0.00)
Satellite        0.85(0.01)   0.90(0.01)   0.89(0.01)   0.89(0.01)
Segment          0.90(0.01)   0.96(0.01)   0.96(0.01)   0.96(0.01)

Table 2: Accuracies for the Gaussian kernel for embedding dimensionality = 30, 300. Bold numbers
indicate the best performance at the 95% confidence level. Note that both our methods, especially
FTUNE-S, perform significantly better than the existing methods.
[Figure 2: four panels (Isolet, Letters, Pen-digits, Opt-digits) plotting accuracy against the number
of landmarks (50–300) for FTUNE (Single), FTUNE (Multiple), BBS and DBOOST.]

Figure 2: Accuracy achieved by various methods on four different UCI repository datasets as the
number of landmarks used increases. Note that both FTUNE-S and FTUNE-M perform significantly
better than BBS and DBOOST for small numbers of landmarks (30, 50).
Table 2 compares the accuracies obtained by our FTUNE-S and FTUNE-M methods with those of
BBS and DBOOST when applied to different UCI benchmark datasets. Note that FTUNE-S is one
of the best on most of the datasets for both the landmarking sizes. Also, BBS performs reasonably
well for small landmarking sizes while DBOOST performs well for large landmarking sizes. In
contrast, our method consistently outperforms the existing methods in both the scenarios.
Next, we study accuracies obtained by our method for different landmarking sizes. Figure 2 shows
accuracies obtained by various methods as the number of landmarks selected increases. Note that
the accuracy curve of our method dominates the accuracy curves of all the other methods, i.e. our
method is consistently better than the existing methods for all the landmarking sizes considered.
3.3 Discussion
We note that since FTUNE selects its output by way of validation, it is susceptible to over-fitting on
small datasets but, at the same time, capable of giving performance boosts on large ones. We observe
a similar trend in our experiments: on smaller datasets (such as those in Table 1, with average dataset
size 660), FTUNE over-fits and performs worse than BBS and DBOOST. However, even in these
cases, DSELECT (intuitively) removes redundancies in the landmark points thus allowing FTUNE
to recover the best transfer function. In contrast, for larger datasets like those in Table 2 (average
size 13200), FTUNE is itself able to recover better transfer functions than the baseline methods
and hence both FTUNE-S and FTUNE-M perform significantly better than the baselines. Note that
DSELECT is not able to provide any advantage here: the dataset sizes being large, greedy
selection actually ends up hurting the accuracy.
Acknowledgments
We thank the authors of [2] for providing us with C++ code of their implementation. P. K. is
supported by Microsoft Corporation and Microsoft Research India under a Microsoft Research India
Ph.D. fellowship award. Most of this work was done while P. K. was visiting Microsoft Research
Labs India, Bangalore.
References
[1] Maria-Florina Balcan and Avrim Blum. On a Theory of Learning with Similarity Functions. In International Conference on Machine Learning, pages 73–80, 2006.
[2] Liwei Wang, Cheng Yang, and Jufu Feng. On Learning with Dissimilarity Functions. In International Conference on Machine Learning, pages 991–998, 2007.
[3] Piotr Indyk and Nitin Thaper. Fast Image Retrieval via Embeddings. In International Workshop on Statistical and Computational Theories of Vision, 2003.
[4] Elżbieta Pękalska and Robert P. W. Duin. On Combining Dissimilarity Representations. In Multiple Classifier Systems, pages 359–368, 2001.
[5] Yihua Chen, Eric K. Garcia, Maya R. Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based Classification: Concepts and Algorithms. Journal of Machine Learning Research, 10:747–776, 2009.
[6] Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J. Smola. Learning with non-positive Kernels. In International Conference on Machine Learning, 2004.
[7] Bernard Haasdonk. Feature Space Interpretation of SVMs with Indefinite Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4):482–492, 2005.
[8] Thore Graepel, Ralf Herbrich, Peter Bollmann-Sdorra, and Klaus Obermayer. Classification on Pairwise Proximity Data. In Neural Information Processing Systems, pages 438–444, 1998.
[9] Maria-Florina Balcan, Avrim Blum, and Nathan Srebro. Improved Guarantees for Learning via Similarity Functions. In 21st Annual Conference on Computational Learning Theory, pages 287–298, 2008.
[10] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[11] Manik Varma and Bodla Rakesh Babu. More Generality in Efficient Multiple Kernel Learning. In 26th Annual International Conference on Machine Learning, pages 1065–1072, 2009.
[12] Prateek Jain, Brian Kulis, Jason V. Davis, and Inderjit S. Dhillon. Metric and Kernel Learning using a Linear Transformation. To appear, Journal of Machine Learning Research (JMLR), 2011.
[13] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. Kernels as Features: On Kernels, Margins, and Low-dimensional Mappings. Machine Learning, 65(1):79–94, 2006.
[14] Nathan Srebro. How Good Is a Kernel When Used as a Similarity Measure? In 20th Annual Conference on Computational Learning Theory, pages 323–335, 2007.
[15] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.
[16] Sanjeev Arora, László Babai, Jacques Stern, and Z. Sweedyk. The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations. Journal of Computer and System Sciences, 54(2):317–331, April 1997.
[17] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27:1–27:27, 2011.
[18] Krithika Venkataramani and B. V. K. Vijaya Kumar. Designing classifiers for fusion-based biometric verification. In Plataniotis Boulgouris and Micheli-Tzankou, editors, Biometrics: Theory, Methods and Applications. Springer, 2009.
[19] A. Frank and Arthur Asuncion. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010. University of California, Irvine, School of Information and Computer Sciences.
Object Detection with Grammar Models
Ross B. Girshick
Dept. of Computer Science
University of Chicago
Chicago, IL 60637
[email protected]
Pedro F. Felzenszwalb
School of Engineering and
Dept. of Computer Science
Brown University
Providence, RI 02912
[email protected]
David McAllester
TTI-Chicago
Chicago, IL 60637
[email protected]
Abstract
Compositional models provide an elegant formalism for representing the visual
appearance of highly variable objects. While such models are appealing from a
theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model
for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for
partially visible objects. To train the model, we introduce a new discriminative
framework for learning structured prediction models from weakly-labeled data.
1 Introduction
The idea that images can be hierarchically parsed into objects and their parts has a long history in
computer vision, see for example [15]. Image parsing has also been of considerable recent interest
[11, 13, 21, 22, 24]. However, it has been difficult to demonstrate that sophisticated compositional
models lead to performance advantages on challenging metrics such as the PASCAL object detection
benchmark [9]. In this paper we achieve new levels of performance for person detection using a
grammar model that is richer than previous models used in high-performance systems. We also
introduce a general framework for learning discriminative models from weakly-labeled data.
Our models are based on the object detection grammar formalism in [11]. Objects are represented
in terms of other objects through compositional rules. Deformation rules allow for the parts of an
object to move relative to each other, leading to hierarchical deformable part models. Structural
variability provides choice between multiple part subtypes (effectively creating mixture models
throughout the compositional hierarchy) and also enables optional parts. In this formalism parts
may be reused both within an object category and across object categories.
Our baseline and departure point is the UoC-TTI object detector [10, 12]. This system represents a
class of objects with three different pictorial structure models. Although these models are learned
automatically, making semantic interpretation unclear, it seems that the three components for the
person class differ in how much of the person is taken to be visible: just the head and shoulders,
the head and shoulders together with the upper body, or the whole standing person. Each of the three
components has independently trained parts. For example, each component has a head part trained
independently from the head part of the other components.
Here we construct a single grammar model that allows more flexibility in describing the amount of
the person that is visible. The grammar model avoids dividing the training data between different
components and thus uses the training data more efficiently. The parts in the model, such as the
head part, are shared across different interpretations of the degree of visibility of the person. The
grammar model also includes subtype choice at the part level to accommodate greater appearance
1
variability across object instances. We use parts with subparts to benefit from high-resolution image
data, while also allowing for deformations. Unlike previous approaches, we explicitly model the
source of occlusion for partially visible objects.
Our approach differs from that of Jin and Geman [13] in that theirs focuses on whole scene interpretation with generative models, while we focus on discriminatively trained models of individual
objects. We also make Markovian restrictions not made in [13]. Our work is more similar to that of
Zhu et al. [21] who impose similar Markovian restrictions. However, our training method, image
features, and grammar design are substantially different.
The model presented here is designed to accurately capture the visible portion of a person. There
has been recent related work on occlusion modeling in pedestrian and person images [7, 18]. In
[7], Enzweiler et al. assume access to depth and motion information in order to estimate occlusion
boundaries. In [18], Wang et al. rely on the observation that the scores of individual filter cells (using
the Dalal and Triggs detector [5]) can reliably predict occlusion in the INRIA pedestrian data. This
does not hold for the harder PASCAL person data.
In addition to developing a grammar model for detecting people, we develop new training methods
which contribute to our boost in performance. Training data for vision is often assigned weak labels
such as bounding boxes or just the names of objects occurring in the image. In contrast, an image
analysis system will often produce strong predictions such as a segmentation or a pose. Existing
structured prediction methods, such as structural SVM [16, 17] and latent structural SVM [19], do
not directly support weak labels together with strong predictions. We develop the notion of a weak-label structural SVM which generalizes structural SVMs and latent-structural SVMs. The key idea
is to introduce a loss L(y, s) for making a strong prediction s when the weak training label is y.
A formalism for learning from weak labels was also developed in [2]. One important difference is
that [2] generalizes ranking SVMs.¹ Our framework also allows for softer relations between weak
labels and strong predictions.
2 Grammar models
Object detection grammars [11] represent objects recursively in terms of other objects. Let N be a
set of nonterminal symbols and T be a set of terminal symbols. We can think of the terminals as the
basic building blocks that can be found in an image. The nonterminals define abstract objects whose
appearance is defined in terms of expansions into terminals.
Let Ω be a set of possible locations for a symbol within an image. A placed symbol, Y(ω), specifies
a placement of Y ∈ N ∪ T at a location ω ∈ Ω. The structure of a grammar model is defined by a
set, R, of weighted productions of the form

X(ω_0) --s--> { Y_1(ω_1), . . . , Y_n(ω_n) },    (1)

where X ∈ N, Y_i ∈ N ∪ T, ω_i ∈ Ω and s ∈ ℝ is a score. We denote the score of r ∈ R by s(r).
We can expand a placed nonterminal to a bag of placed terminals by repeatedly applying productions. An expansion of X(ω) leads to a derivation tree T rooted at X(ω). The leaves of T are
labeled with placed terminals, and the internal nodes of T are labeled with placed nonterminals and
with the productions used to replace those symbols.
We define appearance models for the terminals using a function score(A, ω) that computes a score
for placing the terminal A at location ω. This score depends implicitly on the image data. We define
the score of a derivation tree T to be the sum of the scores of the productions used to generate T,
plus the score of placing the terminals associated with T's leaves in their respective locations:

score(T) = Σ_{r ∈ internal(T)} s(r) + Σ_{A(ω) ∈ leaves(T)} score(A, ω)    (2)
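The recursive scoring in equation (2) can be sketched directly. The following is an illustrative sketch, not the released system: `Node`, `appearance_score`, and the toy scores are assumptions introduced only to show how production scores at internal nodes and appearance scores at leaves combine.

```python
# Sketch of equation (2): score(T) is the sum of production scores s(r)
# over internal nodes plus appearance scores score(A, omega) over leaves.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    symbol: str                      # nonterminal or terminal name
    location: tuple                  # omega: placement in the feature map pyramid
    production_score: float = 0.0    # s(r) for the production applied at this node
    children: List["Node"] = field(default_factory=list)

def appearance_score(symbol: str, location: tuple) -> float:
    # Stand-in for score(A, omega) = F_A . phi(H, omega); here a fixed table.
    table = {("head", (0, 0)): 1.5, ("torso", (0, 1)): 0.7}
    return table.get((symbol, location), 0.0)

def score_tree(node: Node) -> float:
    if not node.children:                   # leaf: a placed terminal
        return appearance_score(node.symbol, node.location)
    # internal node: production score plus the scores of its expansion
    return node.production_score + sum(score_tree(c) for c in node.children)

# A tiny derivation: Q -> {head, torso} with production score 0.2
tree = Node("Q", (0, 0), production_score=0.2,
            children=[Node("head", (0, 0)), Node("torso", (0, 1))])
print(score_tree(tree))   # 0.2 + 1.5 + 0.7, i.e. about 2.4
```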
To generalize the models from [10] we let Ω be positions and scales within a feature map pyramid
H. We define the appearance models for terminals by associating a filter F_A with each terminal A.
¹[2] claims the ranking framework overcomes a loss in performance when the number of background examples is increased. In contrast, we don't use a ranking framework but always observed a performance improvement when increasing the number of background examples.
[Figure 1 shows the learned filters: two subtypes for each of the six person parts and for the occluder, together with example detections and derived filters for parts 1-6 (no occlusion), parts 1-4 with the occluder, and parts 1-2 with the occluder.]
Figure 1: Shallow grammar model. This figure illustrates a shallow version of our grammar model
(Section 2.1). This model has six person parts and an occlusion model ("occluder"), each of which
comes in one of two subtypes. A detection places one subtype of each visible part at a location
and scale in the image. If the derivation does not place all parts it must place the occluder. Parts
are allowed to move relative to each other, but their displacements are constrained by deformation
penalties.
Then score(A, ω) = F_A · φ(H, ω) is the dot product between the filter coefficients and the features
in a subwindow of the feature map pyramid, φ(H, ω). We use the variant of histogram of oriented
gradient (HOG [5]) features described in [10].
We consider models with productions specified by two kinds of schemas (a schema is a template for
generating productions). A structure schema specifies one production for each placement ω ∈ Ω,

X(ω) --s--> { Y_1(ω ⊕ δ_1), . . . , Y_n(ω ⊕ δ_n) }.    (3)

Here the δ_i specify constant displacements within the feature map pyramid. Structure schemas can
be used to define decompositions of objects into other objects.
Let Δ be the set of possible displacements within a single scale of a feature map pyramid. A
deformation schema specifies one production for each placement ω ∈ Ω and displacement δ ∈ Δ,

X(ω) --α·φ(δ)--> { Y(ω ⊕ δ) }.    (4)

Here φ(δ) is a feature vector and α is a vector of deformation parameters. Deformation schemas
can be used to define deformable models. We define φ(δ) = (dx, dy, dx², dy²) so that deformation
scores are quadratic functions of the displacements.
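The quadratic deformation score of equation (4) can be sketched in a few lines. This is a hedged illustration; the function names and the example parameter vector `alpha` are assumptions, not values from the trained model.

```python
# Sketch of a deformation schema's score: placing Y at omega ⊕ delta
# contributes alpha . phi(delta), with phi(delta) = (dx, dy, dx^2, dy^2).

def phi(delta):
    dx, dy = delta
    return (dx, dy, dx * dx, dy * dy)

def deformation_score(alpha, delta):
    return sum(a * f for a, f in zip(alpha, phi(delta)))

# Penalize only the squared displacement terms: the score is zero at the
# ideal placement and grows more negative as the part moves away.
alpha = (0.0, 0.0, -0.1, -0.1)
print(deformation_score(alpha, (0, 0)))   # no displacement, no penalty
print(deformation_score(alpha, (2, 1)))   # -0.1*4 + -0.1*1 = -0.5
```

Because φ(δ) contains both linear and squared terms, a learned α can also encode a nonzero preferred offset, not just a penalty centered at δ = 0.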
The parameters of our models are defined by a weight vector w with entries for the score of each
structure schema, the deformation parameters of each deformation schema, and the filter coefficients
associated with each terminal. Then score(T) = w · Φ(T), where Φ(T) is the sum of (sparse)
feature vectors associated with each placed terminal and production in T.
2.1 A grammar model for detecting people
Each component in the person model learned by the voc-release4 system [12] is tuned to detect
people under a prototypical visibility pattern. Based on this observation we designed, by hand, the
structure of a grammar that models visibility by using structural variability and optional parts. For
clarity, we begin by describing a shallow model (Figure 1) that places all filters at the same resolution
in the feature map pyramid. After explaining this model, we describe a deeper model that includes
deformable subparts at higher resolutions.
Fine-grained occlusion Our grammar model has a start symbol Q that can be expanded using one
of six possible structure schemas. These choices model different degrees of visibility ranging from
heavy occlusion (only the head and shoulders are visible) to no occlusion at all.
Beyond modeling fine-grained occlusion patterns when compared to the mixture models from [7]
and [12], our grammar model is also richer in a number of ways. In Section 5 we show that each of
the following modeling choices improves detection performance.
Occlusion model If a person is occluded, then there must be some cause of the occlusion: either
the edge of the image or an occluding object, such as a desk or dinner table. We use a nontrivial
model to capture the appearance of the stuff that occludes people.
Part subtypes The mixture model from [12] has two subtypes for each mixture component. The
subtypes are forced to be mirror images of each other and correspond roughly to left-facing versus
right-facing people. Our grammar model has two subtypes for each part, which are also forced to
be mirror images of each other. But in the case of our grammar model, the decision of which part
subtype to instantiate at detection time is independent for each part.
The shallow person grammar model is defined by the following grammar. The indices p (for part), t
(for subtype), and k have the following ranges: p ∈ {1, . . . , 6}, t ∈ {L, R} and k ∈ {1, . . . , 5}.

Q(ω) --s_k--> { Y_1(ω ⊕ δ_1), . . . , Y_k(ω ⊕ δ_k), O(ω ⊕ δ_{k+1}) }
Q(ω) --s_6--> { Y_1(ω ⊕ δ_1), . . . , Y_6(ω ⊕ δ_6) }
Y_p(ω) --0--> { Y_{p,t}(ω) }
Y_{p,t}(ω) --α_{p,t}·φ(δ)--> { A_{p,t}(ω ⊕ δ) }
O(ω) --0--> { O_t(ω) }
O_t(ω) --α_t·φ(δ)--> { A_t(ω ⊕ δ) }
The grammar has a start symbol Q with six alternate choices that derive people under varying degrees of visibility (occlusion). Each part has a corresponding nonterminal Y_p that is placed at some
ideal position relative to Q. Derivations with occlusion include the occlusion symbol O. A derivation
selects a subtype and displacement for each visible part. The parameters of the grammar (production
scores, deformation parameters and filters) are learned with the discriminative procedure described
in Section 4. Figure 1 illustrates the filters in the resulting model and some example detections.
Deeper model We extend the shallow model by adding deformable subparts at two scales: (1)
the same as, and (2) twice the resolution of the start symbol Q. When detecting large objects,
high-resolution subparts capture fine image details. However, when detecting small objects, high-resolution subparts cannot be used because they "fall off the bottom" of the feature map pyramid.
The model uses derivations with low-resolution subparts when detecting small objects.
We begin by replacing the productions from Y_{p,t} in the grammar above, and then adding new productions. Recall that p indexes the top-level parts and t indexes subtypes. In the following schemas,
the indices r (for resolution) and u (for subpart) have the ranges: r ∈ {H, L}, u ∈ {1, . . . , N_p},
where N_p is the number of subparts in a top-level part Y_p.

Y_{p,t}(ω) --α_{p,t}·φ(δ)--> { Z_{p,t}(ω ⊕ δ) }
Z_{p,t}(ω) --0--> { A_{p,t}(ω), W_{p,t,r,1}(ω ⊕ δ_{p,t,r,1}), . . . , W_{p,t,r,N_p}(ω ⊕ δ_{p,t,r,N_p}) }
W_{p,t,r,u}(ω) --α_{p,t,r,u}·φ(δ)--> { A_{p,t,r,u}(ω ⊕ δ) }
We note that as in [23] our model has hierarchical deformations. The part terminal A_{p,t} can move
relative to Q and the subpart terminal A_{p,t,r,u} can move relative to A_{p,t}.
The displacements δ_{p,t,H,u} place the symbols W_{p,t,H,u} one octave below Z_{p,t} in the feature map
pyramid. The displacements δ_{p,t,L,u} place the symbols W_{p,t,L,u} at the same scale as Z_{p,t}. We add
subparts to the first two top-level parts (p = 1 and 2), with the number of subparts set to N_1 = 3
and N_2 = 2. We find that adding additional subparts does not improve detection performance.
2.2 Inference and test time detection
Inference involves finding high scoring derivations. At test time, because images may contain multiple instances of an object class, we compute the maximum scoring derivation rooted at Q(ω), for
each ω ∈ Ω. This can be done efficiently using a standard dynamic programming algorithm [11].
We retain only those derivations that score above a threshold, which we set low enough to ensure
high recall. We use box(T ) to denote a detection window associated with a derivation T . Given a
set of candidate detections, we apply nonmaximal suppression to produce a final set of detections.
We define box(T ) by assigning a detection window size, in feature map coordinates, to each structure schema that can be applied to Q. This leads to detections with one of six possible aspect ratios,
depending on which production was used in the first step of the derivation. The absolute location
and size of a detection depends on the placement of Q. For the first five production schemas, the
ideal location of the occlusion part, O, is outside of box(T ).
3 Learning from weakly-labeled data
Here we define a general formalism for learning functions from weakly-labeled data. Let X be an
input space, Y be a label space, and S be an output space. We are interested in learning functions
f : X → S based on a set of training examples {(x_1, y_1), . . . , (x_n, y_n)} where (x_i, y_i) ∈ X × Y.
In contrast to the usual supervised learning setting, we do not assume that the label space and the
output space are the same. In particular there may be many output values that are compatible with
a label, and we can think of each example as being only weakly labeled. It will also be useful to
associate a subset of possible outputs, S(x) ⊆ S, with an example x. In this case f(x) ∈ S(x).
A connection between labels and outputs can be made using a loss function L : Y × S → ℝ. L(y, s)
associates a cost with the prediction s ∈ S on an example labeled y ∈ Y. Let D be a distribution
over X × Y. Then a natural goal is to find a function f with low expected loss E_D[L(y, f(x))].
A simple example of a weakly-labeled training problem comes from learning sliding window classifiers in the PASCAL object detection dataset. The training data specifies pixel-accurate bounding
boxes for the target objects while a sliding window classifier reports boxes with a fixed aspect ratio
and at a finite number of scales. The output space is, therefore, a subset of the label space.
As usual, we assume f is parameterized by a vector of model parameters w and generates predictions
by maximizing a linear function of a joint feature map Φ(x, s): f(x) = argmax_{s∈S(x)} w · Φ(x, s).
We can train w by minimizing a regularized risk on the training set. We define a weak-label structural SVM (WL-SSVM) by the following training equation,

E(w) = (1/2)||w||² + C Σ_{i=1}^{n} L′(w, x_i, y_i).    (5)

The surrogate training loss L′ is defined in terms of two different loss augmented predictions,

L′(w, x, y) = max_{s∈S(x)} [w · Φ(x, s) + L_margin(y, s)] − max_{s∈S(x)} [w · Φ(x, s) − L_output(y, s)],    (6)

where the first maximization is term (6a) and the second is term (6b).
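For a small, finite output set the two loss-augmented maximizations in equation (6) can be sketched directly. This is an illustrative sketch: `surrogate_loss`, the score dictionary standing in for w · Φ(x, s), and the toy zero-one loss are assumptions, not the paper's implementation.

```python
# Sketch of the WL-SSVM surrogate loss L'(w, x, y) from equation (6).
# `scores` maps each candidate output s in S(x) to its model score w.Phi(x,s).

def surrogate_loss(scores, y, L_margin, L_output):
    # term (6a): loss-augmented max that pushes high-loss outputs down
    a = max(scores[s] + L_margin(y, s) for s in scores)
    # term (6b): loss-suppressed max that pulls a low-loss prediction up
    b = max(scores[s] - L_output(y, s) for s in scores)
    return a - b

# Toy example: outputs are labels and the loss is zero-one.
L = lambda y, s: 0.0 if s == y else 1.0
scores = {"person": 0.4, "background": 0.6}
print(surrogate_loss(scores, "person", L, L))   # a ramp-style loss value
```

With L_margin = L_output = L, as the text notes, this reduces to a ramp loss; setting L_output to an indicator that forbids all but the labeled output recovers the structural SVM hinge loss.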
L_margin encourages high-loss outputs to "pop out" of (6a), so that their scores get pushed down.
L_output suppresses high-loss outputs in (6b), so the score of a low-loss prediction gets pulled up.
It is natural to take L_margin = L_output = L. In this case L′ becomes a type of ramp loss [4, 6, 14].
Alternatively, taking L_margin = L and L_output = 0 gives the ramp loss that has been shown to
be consistent in [14]. As we discuss below, the choice of L_output can have a significant effect on
the computational difficulty of the training problem. Several popular learning frameworks can be
derived as special cases of WL-SSVM. For the examples below, let I(a, b) = 0 when a = b, and
I(a, b) = ∞ when a ≠ b.
Structural SVM Let S = Y, L_margin = L and L_output(y, ŷ) = I(y, ŷ). Then L′(w, x, y) is the
hinge loss used in a structural SVM [17]. In this case L′ is convex in w because the maximization
in (6b) disappears. We note, however, that this choice of L_output may be problematic and lead to
inconsistent training problems. Consider the following situation. A training example (x, y) may
be compatible with a different label ŷ ≠ y, in the sense that L(y, ŷ) = 0. But even in this case a
structural SVM pushes the score w · Φ(x, y) to be above w · Φ(x, ŷ). This issue can be addressed
by relaxing L_output to include a maximization over labels in (6b).
Latent structural SVM Now let Z be a space of latent values, S = Y × Z, L_margin = L and
L_output(y, (ŷ, ẑ)) = I(y, ŷ). Then L′(w, x, y) is the hinge loss used in a latent structural SVM [19].
In this case L′ is not convex in w due to the maximization over latent values in (6b). As in the
previous example, this choice of L_output can be problematic because it "requires" that the training
labels be predicted exactly. This can be addressed by relaxing L_output, as in the previous example.
4 Training grammar models
Now we consider learning the parameters of an object detection grammar using the training data
in the PASCAL VOC datasets with the WL-SSVM framework. For two rectangles a and b let
overlap(a, b) = area(a ∩ b) / area(a ∪ b). We will use this measure of overlap in our loss functions.
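The overlap measure (intersection over union) is simple to compute for axis-aligned boxes. A minimal sketch, with boxes represented as `(x1, y1, x2, y2)` corner tuples (a representation chosen here for illustration):

```python
# overlap(a, b) = area(a ∩ b) / area(a ∪ b) for axis-aligned rectangles.

def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def overlap(a, b):
    # corners of the intersection rectangle (possibly empty)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = area((ix1, iy1, ix2, iy2))
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

print(overlap((0, 0, 2, 2), (1, 0, 3, 2)))  # intersection 2, union 6 -> 1/3
```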
For training, we augment our model's output space (the set of all derivation trees) with the background output ⊥. We define Φ(x, ⊥) to be the zero vector, as was done in [1]. Thus the score of a
background hypothesis is zero, independent of the model parameters w.
The training data specifies a bounding box for each instance of an object in a set of training images. We construct a set of weakly-labeled examples {(x_1, y_1), . . . , (x_n, y_n)} as follows. For each
training image I, and for each bounding box B in I, we define a foreground example (x, y), where
y = B, x specifies the image I, and the set of valid predictions S(x) includes:
1. Derivations T with overlap(box(T), B) ≥ 0.1 and overlap(box(T), B′) < 0.5 for all B′
in I such that B′ ≠ B.
2. The background output ⊥.
The overlap requirements in (1) ensure that we consider only predictions that are relevant for a
particular object instance, while avoiding interactions with other objects in the image.
We also define a very large set of background examples. For simplicity, we use images that do not
contain any bounding boxes. For each background image I, we define a different example (x, y) for
each position and scale ω within I. In this case y = ⊥, x specifies the image I, and S(x) includes
derivations T rooted at Q(ω) and the background output ⊥. The set of background examples is very
large because the number of positions and scales within each image is typically around 250K.
4.1 Loss functions
The PASCAL benchmark requires a correct detection to have at least 50% overlap with a ground-truth bounding box. We use this rule to define our loss functions. First, define L_{l,τ}(y, s) as follows:

L_{l,τ}(y, s) = l   if y = ⊥ and s ≠ ⊥
               0   if y = ⊥ and s = ⊥
               l   if y ≠ ⊥ and overlap(y, s) < τ
               0   if y ≠ ⊥ and overlap(y, s) ≥ τ    (7)
Following the PASCAL VOC protocol we use L_margin = L_{1,0.5}. For a foreground example this
pushes down the score of detections that don't overlap with the bounding box label by at least 50%.
Instead of using L_output = L_margin, we let L_output = L_{∞,0.7}. For a foreground example this
ensures that the maximizer of (6b) is a detection with high overlap with the bounding box label. For
a background example, the maximizer of (6b) is always ⊥. Later we discuss how this simplifies our
optimization algorithm. While our choice of L_output does not produce a convex objective, it does
tightly limit the range of outputs, making our optimization less prone to reaching bad local optima.
4.2 Optimization
Since L′ is not convex, the WL-SSVM objective (5) leads to a nonconvex optimization problem. We
follow [19], in which the CCCP procedure [20] was used to find a local optimum of a similar objective.
CCCP is an iterative algorithm that uses a decomposition of the objective into a sum of convex and
concave parts, E(w) = E_convex(w) + E_concave(w):

E_convex(w) = (1/2)||w||² + C Σ_{i=1}^{n} max_{s∈S(x_i)} [w · Φ(x_i, s) + L_margin(y_i, s)]    (8)

E_concave(w) = −C Σ_{i=1}^{n} max_{s∈S(x_i)} [w · Φ(x_i, s) − L_output(y_i, s)]    (9)
In each iteration, CCCP computes a linear upper bound to E_concave based on a current weight vector
w_t. The bound depends on subgradients of the summands in (9). For each summand, we take the
subgradient Φ(x_i, s_i(w_t)), where s_i(w) = argmax_{s∈S(x_i)} [w · Φ(x_i, s) − L_output(y_i, s)] is a loss
augmented prediction.
We note that computing s_i(w_t) for each training example can be costly. But from our definition of
L_output, we have that s_i(w) = ⊥ for a background example, independent of w. Therefore, for a
background example, Φ(x_i, s_i(w_t)) = 0.
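The background shortcut just described can be sketched as follows. All names here (`loss_suppressed_prediction`, the example dictionaries, the sparse-feature `dot`) are hypothetical illustrations of the argmax in the CCCP linearization step, not the paper's code.

```python
# Sketch of computing s_i(w_t): for background examples the definition of
# L_output forces s_i(w) = ⊥ (whose feature vector is zero), so no inference
# is needed; for foreground examples we maximize w.Phi(x,s) - L_output(y,s).
BACKGROUND = None

def dot(w, phi):
    # sparse dot product over feature dictionaries
    return sum(w[k] * v for k, v in phi.items())

def loss_suppressed_prediction(w, example, L_output):
    if example["label"] is BACKGROUND:
        return BACKGROUND                  # shortcut: skip expensive inference
    best, best_val = None, float("-inf")
    for s, phi in example["outputs"].items():   # candidate derivations
        val = dot(w, phi) - L_output(example["label"], s)
        if val > best_val:
            best, best_val = s, val
    return best

w = {"f": 1.0}
fg = {"label": "box", "outputs": {"good": {"f": 0.5}, "bad": {"f": 2.0}}}
L_out = lambda y, s: float("inf") if s == "bad" else 0.0  # forbid "bad"
print(loss_suppressed_prediction(w, fg, L_out))   # picks "good" despite its lower score
bg = {"label": BACKGROUND}
print(loss_suppressed_prediction(w, bg, L_out))   # BACKGROUND, no inference run
```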
Table 1: PASCAL 2010 results. UoC-TTI and our method compete in comp3. Poselets competes in
comp4 due to its use of detailed pose and visibility annotations and non-PASCAL images.

                 AP
Grammar          47.5
  +bbox          47.6
  +context       49.5
UoC-TTI [9]      44.4
  +bbox          45.2
  +context       47.5
Poselets [9]     48.5
Table 2: Training objective and model structure evaluation on PASCAL 2007.
                     AP
Grammar LSVM         45.3
Grammar WL-SSVM      46.7
Mixture LSVM         42.6
Mixture WL-SSVM      43.2
After computing s_i(w_t) and Φ(x_i, s_i(w_t)) for all examples (implicitly for background examples),
the weight vector is updated by minimizing a convex upper bound on the objective E(w):

w_{t+1} = argmin_w (1/2)||w||² + C Σ_{i=1}^{n} ( max_{s∈S(x_i)} [w · Φ(x_i, s) + L_margin(y_i, s)] − w · Φ(x_i, s_i(w_t)) ).    (10)
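One stochastic subgradient step on a single summand of equation (10) can be sketched as follows. This is a hedged, per-example sketch with toy data structures and a hypothetical step size; the actual system combines such steps with the data mining procedure described next.

```python
# One stochastic subgradient step for a single example in equation (10):
# find the margin-augmented maximizer s*, then step along
# Phi(x, s*) - Phi(x, s_i(w_t)) (plus the regularizer's w term).
def dot(w, phi):
    return {k: w.get(k, 0.0) * v for k, v in phi.items()} and sum(
        w.get(k, 0.0) * v for k, v in phi.items())

def sgd_step(w, outputs, y, L_margin, phi_anchor, C=1.0, eta=0.1):
    # loss-augmented argmax over candidate outputs (term inside the max)
    s_star = max(outputs, key=lambda s: dot(w, outputs[s]) + L_margin(y, s))
    grad = dict(w)                                    # d/dw of (1/2)||w||^2
    for k, v in outputs[s_star].items():              # + C * Phi(x, s*)
        grad[k] = grad.get(k, 0.0) + C * v
    for k, v in phi_anchor.items():                   # - C * Phi(x, s_i(w_t))
        grad[k] = grad.get(k, 0.0) - C * v
    return {k: w.get(k, 0.0) - eta * g for k, g in grad.items()}

w0 = {"f": 1.0}
outs = {"good": {"f": 0.5}, "bad": {"f": 2.0}}
L = lambda y, s: 0.0 if s == "good" else 1.0
w1 = sgd_step(w0, outs, "box", L, phi_anchor=outs["good"])
print(w1)   # "bad" wins the augmented max, so its features are pushed down
```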
The optimization subproblem defined by equation (10) is similar in form to a structural SVM optimization. Given the size and nature of our training dataset we opt to solve this subproblem using
stochastic subgradient descent and a modified form of the data mining procedure from [10]. As
in [10], we data mine over background images to collect support vectors for background examples.
However, unlike in the binary LSVM setting considered in [10], we also need to apply data mining
to foreground examples. This would be slow because it requires performing relatively expensive inference (more than 1 second per image) on thousands of images. Instead of applying data mining to
the foreground examples, each time we compute s_i(w_t) for a foreground example, we also compute
the top M scoring outputs s ∈ S(x_i) of w_t · Φ(x_i, s) + L_margin(y_i, s), and place the corresponding
feature vectors in the data mining cache. This is efficient since much of the required computation
is shared with computation already necessary for computing s_i(w_t). While this is only a heuristic approximation to true data mining, it leads to an improvement over training with binary LSVM
(see Section 5). In practice, we find that M = 1 is sufficient for improved performance and that
increasing M beyond 1 does not improve our results.
4.3 Initialization
Using CCCP requires an initial model or heuristic for selecting the initial outputs s_i(w_0). Inspired
by the methods in [10, 12], we train a single filter for fully visible people using a standard binary
SVM. To define the SVM's training data, we select vertically elongated examples. We apply the orientation clustering method in [12] to further divide these examples into two sets that approximately
correspond to left-facing versus right-facing orientations. Examples from one of these two sets are
then anisotropically rescaled so their HOG feature maps match the dimensions of the filter. These
form the positive examples. For negative examples, random patches are extracted from background
images. After training the initial filter, we slice it into subfilters (one 8 × 8 and five 3 × 8) that form
the building blocks of the grammar model. We mirror these six filters to get subtypes, and then add
subparts using the energy covering heuristic in [10, 12].
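The slicing step above can be sketched with plain list slicing. The overall filter height (23 rows, giving one 8×8 slice plus five 3×8 slices) is an assumption consistent with the stated slice sizes, not a value taken from the released code.

```python
# Sketch of slicing one tall person filter (rows x 8 columns of feature
# cells) into an 8x8 head/shoulder subfilter and five 3x8 subfilters.
def slice_filter(F):
    pieces = [F[0:8]]                         # top 8 rows: 8x8 subfilter
    for i in range(5):                        # five 3-row bands below it
        pieces.append(F[8 + 3 * i : 8 + 3 * (i + 1)])
    return pieces

F = [[r] * 8 for r in range(23)]              # 23x8 toy filter
pieces = slice_filter(F)
print([len(p) for p in pieces])               # [8, 3, 3, 3, 3, 3]
```

Mirroring each slice left-right would then produce the second subtype of each part, matching the initialization described above.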
5 Experimental results
We evaluated the performance of our person grammar and training framework on the PASCAL VOC
2007 and 2010 datasets [8, 9]. We used the standard PASCAL VOC comp3 test protocol, which
measures detection performance by average precision (AP) over different recall levels. Figure 2
shows some qualitative results, including failure cases.
PASCAL VOC 2010 Our results on the 2010 dataset are presented in Table 1 in the context of
two strong baselines. The first, UoC-TTI, won the person category in the comp3 track of the 2010
competition [9]. The 2010 entry of the UoC-TTI method extended [12] by adding an extra octave
to the HOG feature map pyramid, which allows the detector to find smaller objects. We report the
AP score of the UoC-TTI "raw" person detector, as well as the scores after applying the bounding
Figure 2: Example detections. Parts are blue. The occlusion part, if used, is dashed cyan. (a) Detections of fully visible people. (b) Examples where the occlusion part detects an occlusion boundary.
(c) Detections where there is no occlusion, but a partial person is appropriate. (d) Mistakes where
the model did not detect occlusion properly.
box prediction and context rescoring methods described in [10]. Comparing raw detector outputs
our grammar model significantly outperforms the mixture model: 47.5 vs. 44.4.
We also applied the two post-processing steps to the grammar model, and found that unlike with
the mixture model, the grammar model does not benefit from bounding box prediction. This is
likely because our fine-grained occlusion model reduces the number of near misses that are fixed
by bounding box prediction. To test context rescoring, we used the UoC-TTI detection data for the
other 19 object classes. Context rescoring boosts our final score to 49.5.
The second baseline is the poselets system described in [3]. Their system requires detailed pose and
visibility annotations, in contrast to our grammar model which was trained only with bounding box
labels. Prior to context rescoring, our model scores one point lower than the poselets model, and
after rescoring it scores one point higher.
Structure and training We evaluated several aspects of our model structure and training objective
on the PASCAL VOC 2007 dataset. In all of our experiments we set the regularization constant
to C = 0.006. In Table 2 we compare the WL-SSVM framework developed here with the binary
LSVM framework from [10]. WL-SSVM improves performance of the grammar model by 1.4 AP
points over binary LSVM training. WL-SSVM also improves results obtained using a mixture of
part-based models by 0.6 points. To investigate model structure, we evaluated the effect of part
subtypes and occlusion modeling. Removing subtypes reduces the score of the grammar model
from 46.7 to 45.5. Removing the occlusion part also decreases the score from 46.7 to 45.5. The
shallow model (no subparts) achieves a score of 40.6.
6 Discussion
Our results establish grammar-based methods as a high-performance approach to object detection
by demonstrating their effectiveness on the challenging task of detecting people in the PASCAL
VOC datasets. To do this, we carefully designed a flexible grammar model that can detect people
under a wide range of partial occlusion, pose, and appearance variability. Automatically learning
the structure of grammar models remains a significant challenge for future work. We hope that
our empirical success will provide motivation for pursuing this goal, and that the structure of our
handcrafted grammar will yield insights into the properties that an automatically learned grammar
might require. We also develop a structured training framework, weak-label structural SVM, that
naturally handles learning a model with strong outputs, such as derivation trees, from data with
weak labels, such as bounding boxes. Our training objective is nonconvex and we use a strong loss
function to avoid bad local optima. We plan to explore making this loss softer, in an effort to make
learning more robust to outliers.
Acknowledgments This research has been supported by NSF grant IIS-0746569.
References
[1] M. Blaschko and C. Lampert. Learning to localize objects with structured output regression. In ECCV,
2008.
[2] M. Blaschko, A. Vedaldi, and A. Zisserman. Simultaneous object detection and ranking with weak supervision. In NIPS, 2010.
[3] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations.
In ECCV, 2010.
[4] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML, 2006.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] C. Do, Q. Le, C. Teo, O. Chapelle, and A. Smola. Tighter bounds for structured estimation. In NIPS,
2008.
[7] M. Enzweiler, A. Eigenstetter, B. Schiele, and D. M. Gavrila. Multi-cue pedestrian classification with
partial occlusion handling. In CVPR, 2010.
[8] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object
Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object
Classes Challenge 2010 (VOC2010) Results. http://www.pascal-network.org/challenges/VOC/voc2010/workshop/index.html.
[10] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part based models. PAMI, 2009.
[11] P. Felzenszwalb and D. McAllester. Object detection grammars. University of Chicago, CS Dept., Tech.
Rep. 2010-02.
[12] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models,
release 4. http://people.cs.uchicago.edu/~pff/latent-release4/.
[13] Y. Jin and S. Geman. Context and hierarchy in a probabilistic image model. In CVPR, 2006.
[14] D. McAllester and J. Keshet. Generalization bounds and consistency for latent structural probit and ramp
loss. In NIPS, 2011.
[15] Y. Ohta, T. Kanade, and T. Sakai. An analysis system for scenes containing objects with substructures. In
ICPR, 1978.
[16] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables. JMLR, 2006.
[18] X. Wang, T. Han, and S. Yan. An HOG-LBP human detector with partial occlusion handling. In ICCV, 2009.
[19] C.-N. J. Yu and T. Joachims. Learning structural svms with latent variables. In ICML, 2009.
[20] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 2003.
[21] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille. Part and appearance sharing: Recursive
compositional models for multi-view multi-object detection. In CVPR, 2010.
[22] L. Zhu, Y. Chen, and A. Yuille. Unsupervised learning of probabilistic grammar-markov models for object
categories. PAMI, 2009.
[23] L. Zhu, Y. Chen, A. Yuille, and W. Freeman. Latent hierarchical structural learning for object detection.
In CVPR, 2010.
[24] S. Zhu and D. Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics
and Vision, 2006.
Statistical Tests for Optimization Efficiency
Levi Boyles, Anoop Korattikara, Deva Ramanan, Max Welling
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3425
{lboyles},{akoratti},{dramanan},{welling}@ics.uci.edu
Abstract
Learning problems, such as logistic regression, are typically formulated as pure
optimization problems defined on some loss function. We argue that this view
ignores the fact that the loss function depends on stochastically generated data
which in turn determines an intrinsic scale of precision for statistical estimation.
By considering the statistical properties of the update variables used during the
optimization (e.g. gradients), we can construct frequentist hypothesis tests to
determine the reliability of these updates. We utilize subsets of the data for computing updates, and use the hypothesis tests for determining when the batch-size
needs to be increased. This provides computational benefits and avoids overfitting
by stopping when the batch-size has become equal to the size of the full dataset.
Moreover, the proposed algorithms depend on a single interpretable parameter,
the probability for an update to be in the wrong direction, which is set to a single
value across all algorithms and datasets. In this paper, we illustrate these ideas
on three L1-regularized coordinate descent algorithms: L1-regularized L2-loss
SVMs, L1-regularized logistic regression, and the Lasso, but we emphasize that
the underlying methods are much more generally applicable.
1 Introduction

There is an increasing tendency to consider machine learning as a problem in optimization: define
a loss function, add constraints and/or regularizers and formulate it as a preferably convex program.
Then, solve this program using some of the impressive tools from the optimization literature. The
main purpose of this paper is to point out that this "reduction to optimization" ignores certain
important statistical features that are unique to statistical estimation. The most important feature
we will exploit is the fact that the statistical properties of an estimation problem determine an
intrinsic scale of precision (that is usually much larger than machine precision). This implies
immediately that optimizing parameter values beyond that scale is pointless and may even have an
adverse effect on generalization when the underlying model is complex. Besides a natural stopping
criterion, it also leads to much faster optimization before we reach that scale, by realizing that far
away from optimality we need much less precision to determine a parameter update than when
close to optimality. These observations can be incorporated into many off-the-shelf optimizers and
are often orthogonal to speed-up tricks in the optimization toolbox.
The intricate relationship between computation and estimation has been pointed out before in [1]
and [2] where asymptotic learning rates were provided. One of the important conclusions was
that a not so impressive optimization algorithm such as stochastic gradient descent (SGD) can be
nevertheless a very good learning algorithm because it can process more data per unit time. Also
in [3] (sec. 5.5) the intimate relationship between computation and model fitting is pointed out.
[4] gives bounds on the generalization risk for online algorithms, and [5] shows how additional
data can be used to reduce running time for a fixed target generalization error. Regret-minimizing
algorithms ([6], [7]) are another way to account for the interplay between learning and computation.
Hypothesis testing has been exploited for computational gains before in [8].
Our method exploits the fact that loss functions are random variables subject to uncertainty. In a
frequentist world we may ask how different the value of the loss would have been if we would have
sampled another dataset of the same size from a single shared underlying distribution. The role of
an optimization algorithm is then to propose parameter updates that will be accepted or rejected
on statistical grounds. The test we propose determines whether the direction of a parameter update
is correct with high probability. If we do not pass our tests when using all the available data-cases
then we stop learning (or alternatively we switch to sampling or bagging), because we have reached
the intrinsic scale of precision set by the statistical properties of the estimation problem.
However, we can use the same tests to speed up the optimization process itself, that is before we
reach the above stopping criterion. To see that, imagine one is faced with an infinite dataset. In batch
mode, using the whole (infinite) dataset, one would not take a single optimization step in finite time.
Thus, one should really be concerned with making as much progress as possible per computational
unit. Hence, one should only use a subset of the total available dataset. Importantly, the optimal
batch-size depends on where we are in the learning process: far away from convergence we only
need a rough idea of where to move which requires very few data-cases. On the other hand, the
closer we get to the true parameter value, the more resolution we need. Thus, the computationally
optimal batch-size is a function of the residual estimation error. Our algorithm adaptively grows a
subset of the data by requiring that we have just enough precision to confidently move in the correct
direction. Again, when we have exhausted all our data we stop learning.
Our algorithm heavily relies on the central limit tendencies of large sums of random variables.
Fortunately, many optimization algorithms are based on averages over data-cases. For instance,
gradient descent falls in this class, as the gradient is defined by an average (or sum). As in [11],
with large enough batch sizes we can use the Central Limit Theorem to claim that the average
gradients are normally distributed and estimate their variance without actually seeing more data
(this assumption is empirically verified in section 5.2). We have furthermore implemented methods
to avoid testing updates for parameters which are likely to fail their test. This ensures that we
approximately visit the features with their correct frequency (i.e. important features may require
more updates than unimportant ones).
In summary, the main contribution of this paper is to introduce a class of algorithms with the
following properties.

- They depend on a single interpretable parameter α, the probability to update parameters in the
  wrong direction. Moreover, the performance of the algorithms is relatively insensitive to the
  exact value we choose.
- They have a natural, inbuilt stopping criterion. The algorithms terminate when the probability to
  update the parameters in the wrong direction cannot be made smaller than α.
- They are applicable to a wide range of loss functions. The only requirement is that the updates
  depend on sums of random variables.
- They inherit the convergence guarantees of the optimization method under consideration. This
  follows because the algorithms will eventually consider all the data.
- They achieve very significant speedups in learning models from data. Throughout the learning
  process they determine the size of the data subset required to perform updates that move in the
  correct direction with probability at least 1 - α.
We emphasize that our framework is generally applicable. In this paper we show how these considerations can be applied to L1-regularized coordinate descent algorithms: L1-regularized L2-loss
SVMs, L1-regularized logistic regression, and the Lasso [9]. Coordinate descent algorithms are convenient because they do not require any tuning of hyper-parameters to be effective, and are still efficient
when training sparse models. Our methodology extends these algorithms to be competitive for dense
models and for N >> p. In section 2 we review the coordinate descent algorithms. Then, in section
3.2 we formulate our hypothesis testing framework, followed by a heuristic for predicting hypothesis
test failures in section 4. We report experimental results in section 5 and we end with conclusions.
2 Coordinate Descent

We consider L1-regularized learning problems where the loss is defined as a statistical average over
N datapoints:

    f(θ) = λ ||θ||_1 + (1/2N) Σ_{i=1}^N loss(θ, x_i, y_i),   where θ, x_i ∈ R^p    (1)

We will consider continuously-differentiable loss functions (squared hinge-loss, log-loss, and
squared-loss) that allow for the use of efficient coordinate-descent optimization algorithms, where
each parameter is updated θ_j^new ← θ_j + d_j with:

    d_j = argmin_d f(θ + d e_j),   f(θ + d e_j) = λ|θ_j + d| + L_j(d; θ) + const    (2)

where L_j(d; θ) = (1/2N) Σ_{i=1}^N loss(θ + d e_j, x_i, y_i) and e_j is the j-th standard basis vector. To solve
the above, we perform a second-order Taylor expansion of the partial loss L_j(d; θ):

    f(θ + d e_j) ≈ λ|θ_j + d| + L'_j(0; θ) d + (1/2) L''_j(0; θ) d^2 + const    (3)

[10] show that the minimum of the approximate objective (3) is obtained with:

           { -(L'_j(0,θ) + λ) / L''_j(0,θ)   if L'_j(0,θ) + λ ≤ L''_j(0,θ) θ_j
    d_j =  { -(L'_j(0,θ) - λ) / L''_j(0,θ)   if L'_j(0,θ) - λ ≥ L''_j(0,θ) θ_j    (4)
           { -θ_j                            otherwise
For quadratic loss functions, the approximation in (3) is exact. For general convex loss functions,
one can optimize (2) by repeatedly linearizing and applying the above update. We perform a single
update per parameter during the cyclic iteration over parameters. Notably, the partial derivatives
are functions of statistical averages computed over N training points. We show that one can use
frequentist hypothesis tests to elegantly manage the amount of data needed (N ) to reliably compute
these quantities.
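As a concrete illustration (not the authors' code; the function name and argument order are ours), the closed-form minimizer in (4) can be sketched as:

```python
def cd_update(L1, L2, theta_j, lam):
    """Closed-form minimizer d_j of the second-order approximation (3).

    L1: first partial derivative L'_j(0, theta)
    L2: second partial derivative L''_j(0, theta), assumed > 0
    theta_j: current value of the j-th parameter
    lam: L1-regularization strength lambda
    """
    if L1 + lam <= L2 * theta_j:
        return -(L1 + lam) / L2   # step past the kink from the right
    elif L1 - lam >= L2 * theta_j:
        return -(L1 - lam) / L2   # step past the kink from the left
    else:
        return -theta_j           # shrink the coordinate exactly to zero
```

With λ = 0 this reduces to a plain Newton step -L1/L2, and for small gradients it snaps the coordinate to zero, which is what produces sparse solutions.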
2.1 L1-regularized L2-loss SVM

Using a squared hinge-loss function in (1), we obtain an L1-regularized L2-loss SVM:

    loss_SVM = max(0, 1 - y_i θ^T x_i)^2    (5)

Appendix F of [10] derives the corresponding partial derivatives, where the second-order statistic is
approximate because the squared hinge-loss is not twice differentiable:

    L'_j(0, θ) = -(1/N) Σ_{i ∈ I(θ)} y_i x_ij b_i(θ),   L''_j(0, θ) = (1/N) Σ_{i ∈ I(θ)} x_ij^2    (6)

where b_i(θ) = 1 - y_i θ^T x_i and I(θ) = {i | b_i(θ) > 0}. We write x_ij for the j-th element of
datapoint x_i. In [10], each parameter is updated until convergence, using a line-search for each
update, whereas we simply check that the L'' term is not ill-formed rather than performing a line search.
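For illustration, the sums in (6) might be computed as follows (a hedged sketch in plain Python; the function name and interface are ours, not the authors'):

```python
def svm_partials(X, y, theta, j):
    """Partial derivatives (6) of the squared hinge-loss at coordinate j.

    X: list of feature rows, y: labels in {-1, +1}, theta: weight list.
    Returns (L1, L2) approximating L'_j(0, theta) and L''_j(0, theta).
    """
    N = len(X)
    L1 = L2 = 0.0
    for xi, yi in zip(X, y):
        b = 1.0 - yi * sum(t * x for t, x in zip(theta, xi))
        if b > 0:                 # point i is in the active set I(theta)
            L1 += -yi * xi[j] * b
            L2 += xi[j] ** 2
    return L1 / N, L2 / N
```

Only points with positive slack b_i contribute, which is what makes the shrinking heuristic of Sec. 4.1 possible.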
2.2 L1-regularized Logistic Regression

Using a log-loss function in (1), we obtain an L1-regularized logistic regression model:

    loss_log = log(1 + e^{-y_i θ^T x_i})    (7)

Appendix G of [10] derives the corresponding partial derivatives:

    L'_j(0, θ) = (1/2N) Σ_{i=1}^N -y_i x_ij / (1 + e^{y_i θ^T x_i}),   L''_j(0, θ) = (1/2N) Σ_{i=1}^N x_ij^2 e^{y_i θ^T x_i} / (1 + e^{y_i θ^T x_i})^2    (8)
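A small illustrative sketch of (8), again in plain Python with names of our own choosing:

```python
import math

def logistic_partials(X, y, theta, j):
    """Partial derivatives (8) for L1-regularized logistic regression.

    X: list of feature rows, y: labels in {-1, +1}, theta: weight list.
    Returns (L1, L2) approximating L'_j(0, theta) and L''_j(0, theta).
    """
    N = len(X)
    L1 = L2 = 0.0
    for xi, yi in zip(X, y):
        m = yi * sum(t * x for t, x in zip(theta, xi))  # margin y_i theta^T x_i
        e = math.exp(m)
        L1 += -yi * xi[j] / (1.0 + e)
        L2 += xi[j] ** 2 * e / (1.0 + e) ** 2
    return L1 / (2 * N), L2 / (2 * N)
```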
2.3 L1-regularized Linear Regression (Lasso)
Using a quadratic loss function in (1), we obtain an L1-regularized linear regression, or LASSO,
model:

    loss_quad = (y_i - θ^T x_i)^2    (9)

The corresponding partial derivatives [9] are:

    L'_j(0, θ) = -(1/N) Σ_{i=1}^N (y_i - θ^T x_i) x_ij,   L''_j(0, θ) = (1/N) Σ_{i=1}^N x_ij^2    (10)

Because the Taylor expansion is exact for quadratic loss functions, we can directly write the closed-form solution for parameter θ_j^new = S(β_j, λ) where

    β_j = (1/N) Σ_{i=1}^N x_ij (y_i - ŷ_i^(j))

              { β - λ   if β > 0 and λ < |β|
    S(β, λ) = { β + λ   if β < 0 and λ < |β|    (11)
              { 0       if λ ≥ |β|

where ŷ_i^(j) = Σ_{k≠j} x_ik θ_k is the prediction made with all parameters except θ_j, and S is a "soft-threshold" function that is zero on an interval of width 2λ about the origin, and shrinks the magnitude of
the input β by λ outside of this interval. We can use this expression as an estimator for β from a
dataset {x_i, y_i}. The above update rule assumes standardized data ((1/N) Σ_i x_ij = 0 and (1/N) Σ_i x_ij^2 = 1),
but it is straightforward to extend to the general case.
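The soft-threshold function S in (11) is simple to state in code (an illustrative sketch, not the authors' implementation):

```python
def soft_threshold(beta, lam):
    """Soft-threshold S(beta, lam) from (11): returns zero on the interval
    [-lam, lam] and otherwise shrinks |beta| by lam."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0
```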
3 Hypothesis Testing

Each update θ_j^new = θ_j + d_j is computed using a statistical average over a batch of N training
points. We wish to estimate the reliability of an update as a function of N. To do so, we model
the current θ vector as a fixed constant and the N training points as random variables drawn
from an underlying joint density p(x, y). This also makes the proposed updates d_j and θ_j^new
random variables, because they are functions of the training points. In the following we will make
an explicit distinction between random variables, e.g. θ_j^new, d_j, x_ij, y_i, and their instantiations,
θ̂_j^new, d̂_j, x̂_ij, ŷ_i. We would like to determine whether or not a particular update is statistically
justified. To this end, we use hypothesis tests: if there is high uncertainty in the direction of
the update, we say this update is not justified and the update is not performed. For example, if our
proposed update θ̂_j^new is positive, we want to ensure that P(θ_j^new < 0) is small.
3.1 Algorithm Overview

We propose a "growing batch" algorithm for handling very large or infinite datasets: first we select
a very small subsample of the data of size N_b << N, and optimize until the entire set of parameters
is failing its hypothesis tests (described in more detail below). We then query more data points
and include them in our batch, reducing the variance of our estimates and making it more likely that
they will pass their tests. We continue adding data to our batch until we are using the full dataset
of size N. Once all of the parameters are failing their hypothesis tests on the full batch of data,
we stop training. The reasoning behind this is, as argued in the introduction, that at this point we
do not have enough evidence for even determining the direction in which to update, which implies
that further optimization would result in overfitting. Thus, our algorithm behaves like a stochastic
online algorithm during early stages and like a batch algorithm during later stages, equipped with
a natural stopping condition.

In our experiments, we increase the batch size N_b by a factor of 10 once all parameters fail their
hypothesis tests for a given batch. Values in the range 2-100 also worked well; however, we chose
10 as it works very well for our implementation.
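The growing-batch schedule can be sketched as follows. Here `update_all_coords` stands in for one full sweep of coordinate updates with their hypothesis tests; it is our own abstraction, not the authors' interface:

```python
def growing_batch_descent(data, update_all_coords, N0=100, grow=10):
    """Illustrative sketch of the growing-batch schedule of Sec. 3.1.

    update_all_coords(batch) performs one sweep of coordinate updates and
    returns True if at least one hypothesis test passed on this batch.
    Returns the final batch size reached at the stopping condition.
    """
    N_total = len(data)
    Nb = min(N0, N_total)
    while True:
        if update_all_coords(data[:Nb]):
            continue                      # keep sweeping at this batch size
        if Nb == N_total:
            break                         # every test fails on all data: stop
        Nb = min(grow * Nb, N_total)      # grow the batch by a factor of 10
    return Nb
```

The loop terminates exactly when all coordinate tests fail on the full dataset, which is the paper's natural stopping criterion.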
3.2 Lasso

For quadratic loss functions with standardized variables, we can directly analyze the densities of
d_j and θ_j^new. We accept an update if the sign of d_j can be estimated with sufficient probability. Central
to our analysis is β_j (11), which is equivalent to θ_j^new in the unregularized case λ = 0. We rewrite
it as:

    β_j = (1/N) Σ_{i=1}^N z_ij(θ),   where z_ij(θ) = x_ij (y_i - ŷ_i^(j))    (12)

Because the z_ij(θ) are given by a fixed transformation of the iid training points, they themselves are iid.
As N → ∞, we can appeal to the Central Limit Theorem and model β_j as normally distributed:
β_j ~ N(μ_βj, σ_βj), where μ_βj = E[z_ij] ∀i and σ_βj^2 = (1/N) Var(z_ij) ∀i. Empirical
justification of the normality of these quantities is given in section 5.2. So, for any given β_j, we can
provide the estimates

    E[z_ij] ≈ z̄_j = (1/N) Σ_i ẑ_ij,   Var(z_ij) ≈ σ̂_zj^2 = (1/(N-1)) Σ_i (ẑ_ij - z̄_j)^2    (13)
Figure 1: (left) A Gaussian distribution and the distribution resulting from applying the transformation S,
with λ = .1. The interval that is "squashed" is shown by the dash-dotted blue lines. (middle) Q-Q plot
demonstrating the normality of the gradients on the L1-regularized L2-loss SVM, computed at various stages
of the algorithm (i.e. at different batch-sizes N_b and models θ). Straight lines provide evidence that the
empirical distribution is close to normality. (right) Plot showing the behavior of our algorithm with respect to
α, using logistic regression on the INRIA dataset. α = 0 corresponds to an algorithm which never updates, and
α = 0.5 corresponds to an algorithm which always updates (with no stopping criterion), so for these experiments
α was chosen in the range [.01, .49]. Error bars denote a single standard deviation.
which in turn provide estimates for μ_βj and σ_βj. We next apply the soft-threshold function S to β_j to
obtain θ_j^new, a random variable whose pdf is a Gaussian that has a section of width 2λ "squashed"
to zero into a single point of probability mass, with the remaining density shifted towards zero by a
magnitude λ. This is illustrated in Figure 1. Our criterion for accepting an update is that it moves
towards the true solution with high probability. Let d̂_j be the realization of the random variable
d_j = θ_j^new - θ_j, computed from the sample batch of N training points. If d̂_j > 0, then we want
P(d_j ≤ 0) to be small, and vice versa. Specifically, for d̂_j > 0, we want P(d_j ≤ 0) < α, where

                                    { Φ((θ_j - (μ_βj + λ)) / σ_βj)   if θ_j < 0
    P(d_j ≤ 0) = P(θ_j^new ≤ θ_j) = {                                              (14)
                                    { Φ((θ_j - (μ_βj - λ)) / σ_βj)   if θ_j ≥ 0

where Φ(·) denotes the cdf of the standard Normal. This distribution can be derived from its two
underlying Gaussians, one with mean μ_βj + λ and one with mean μ_βj - λ. Similarly, one can
define an analogous test P(d_j ≥ 0) < α for d̂_j < 0. These are the hypothesis test equations
for a single coordinate, so this test is performed once for each coordinate at its iteration of the
coordinate descent algorithm. If a coordinate update fails its test, then we assume that we do not
have enough evidence to perform an update on that coordinate, and do not update. Note that, since
we are potentially rejecting many updates, significant computation could go to "waste," as we
are computing updates without using them. Methods to address this follow in section 4.
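A hedged sketch of the acceptance test (14) in plain Python follows; the function names, and the choice to pass the per-point terms z directly, are ours:

```python
import math

def normal_cdf(x):
    """Standard Normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def soft_threshold(beta, lam):
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

def lasso_update_test(z, theta_j, lam, alpha):
    """Accept a proposed Lasso update on coordinate j only if it moves in
    the right direction with probability at least 1 - alpha (eq. 14).

    z: realizations z_ij = x_ij * (y_i - yhat_i^(j)) over the current batch.
    """
    N = len(z)
    mu = sum(z) / N                                   # estimate of mu_beta
    var = sum((zi - mu) ** 2 for zi in z) / (N - 1)
    sigma = math.sqrt(var / N)                        # estimate of sigma_beta
    d_hat = soft_threshold(mu, lam) - theta_j         # proposed update
    if d_hat == 0.0:
        return False                                  # nothing to test
    shift = lam if theta_j < 0 else -lam              # case split of eq. (14)
    p_leq = normal_cdf((theta_j - (mu + shift)) / sigma)
    p_wrong = p_leq if d_hat > 0 else 1.0 - p_leq     # P(direction is wrong)
    return p_wrong < alpha
```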
3.3 Gradient-Based Hypothesis Tests

For general convex loss functions, it is difficult to construct a pdf for d_j and θ_j^new. Instead, we
accept an update θ_j^new if the sign of the partial derivative ∂f(θ)/∂θ_j can be estimated with sufficient
reliability. Because f(θ) may be nondifferentiable, we define ∂_j f(θ) to be the set of 1D subgradients, or lower tangent planes, at θ along direction j. The minimal (in magnitude) subgradient g_j,
associated with the flattest lower tangent, is:

          { β_j - λ     if θ_j < 0
    g_j = { β_j + λ     if θ_j > 0     where β_j = L'_j(0, θ) = (1/N) Σ_{i=1}^N z_ij    (15)
          { S(β_j, λ)   otherwise

where z_ij(θ) = -2 y_i x_ij b_i(θ) for the squared hinge-loss and z_ij(θ) = -y_i x_ij / (1 + e^{y_i θ^T x_i})
for log-loss. Appealing to the same arguments as in Sec. 3.2, one can show that β_j ~ N(μ_βj, σ_βj) where μ_βj =
E[z_ij] ∀i and σ_βj^2 = (1/N) Var(z_ij) ∀i. Thus the pdf of the subgradient g is a Normal shifted by λ sign(θ_j)
in the case θ_j ≠ 0, or a Normal transformed by the function S(·, λ) in the case θ_j = 0.

To formulate our hypothesis test, we write ĝ_j for the realization of the random variable g_j, computed
from the batch of N training points. We want to take an update only if our update is in the correct
Figure 2: Plot comparing various algorithms for the L1-regularized L2-loss SVM on the INRIA dataset (left)
and the VOC dataset (middle), and for the L1-regularized logistic regression on INRIA (right) using α = 0.05.
"CD-Full" denotes our method using all applicable heuristic speedups, "CD-Hyp Test" does not use the
shrinking heuristic, while "vanilla CD" simply performs coordinate descent without any speedup methods.
"SGD" is stochastic gradient descent with an annealing schedule. Optimization of the hyper-parameters of the
annealing schedule (on train data) was not included in the total runtime. Note that our method achieves the
optimal precision faster than SGD and also stops learning approximately when overfitting sets in.
direction with high probability: for ĝ_j > 0, we want P(g_j ≤ 0) < α, where

                  { Φ((0 - (μ_βj - λ)) / σ_βj)   if θ_j ≤ 0
    P(g_j ≤ 0) =  {                                             (16)
                  { Φ((0 - (μ_βj + λ)) / σ_βj)   if θ_j > 0

We can likewise define a test P(g_j ≥ 0) < α which we use to accept updates given a negative
estimated gradient ĝ_j < 0.
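The corresponding gradient-based test (15)-(16) might look as follows. This is a hedged sketch: we assume the per-point terms z are scaled so that β_j is their mean (any fixed normalization constant from Sec. 2 is folded into z), and the names are ours:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gradient_update_test(z, theta_j, lam, alpha):
    """Sketch of the subgradient test (15)-(16) for general convex losses.

    z: per-point gradient terms, scaled so that beta_j is their mean.
    Returns True if the minimal subgradient's sign is reliable at level alpha.
    """
    N = len(z)
    mu = sum(z) / N
    var = sum((zi - mu) ** 2 for zi in z) / (N - 1)
    sigma = math.sqrt(var / N)
    # minimal-magnitude subgradient g_j, eq. (15)
    if theta_j < 0:
        g = mu - lam
    elif theta_j > 0:
        g = mu + lam
    else:
        g = mu - lam if mu > lam else (mu + lam if mu < -lam else 0.0)
    if g == 0.0:
        return False
    shift = -lam if theta_j <= 0 else lam             # case split of eq. (16)
    p_leq = normal_cdf((0.0 - (mu + shift)) / sigma)  # P(g_j <= 0)
    p_wrong = p_leq if g > 0 else 1.0 - p_leq
    return p_wrong < alpha
```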
4 Additional Speedups

It often occurs that many coordinates fail their respective hypothesis tests for several consecutive iterations, so predicting these consecutive failures and skipping computations on these
coordinates could save computation. We employ a simple heuristic based on a few observations
(where for simplified notation we drop the subscript j):

1. If the set of parameters that are updating remains constant between updates, then for a particular
   coordinate, the change in the gradient from one iteration to the next is roughly constant. This is
   an empirical observation.
2. When close to the solution, σ_β remains roughly constant.

We employ a heuristic which is a complicated instance of a simple idea: if the value a(0) of
a variable of interest is changing at a constant rate r, we can predict its value at time t with
a(t) = a(0) + rt. In our case, we wish to predict when the gradient will have moved to a point
where the associated hypothesis test will pass.

First, we consider the unregularized case (λ = 0), wherein g = β. We wish to detect when
the gradient will result in the hypothesis test passing; that is, we want to find the values μ_β ≈ β̂,
where β̂ is a realization of the random variable β, such that P(g ≤ 0) = α or P(g ≥ 0) = α. For
this purpose, we need to draw a distinction between an update which was taken and one which
was proposed but for which the hypothesis test failed. Let the set of accepted updates be indexed by
t, as in ĝ_t, and let the set of potential updates after an accepted update at time t be indexed by s,
as in ĝ_t(s). Thus the algorithm described in the previous section will compute ĝ_t(1), ..., ĝ_t(s*) until
the hypothesis test passes for ĝ_t(s*); we then set ĝ_{t+1}(0) = ĝ_t(s*), and perform an update to
θ using ĝ_{t+1}(0). Ideally, we would prefer not to compute ĝ_t(1), ..., ĝ_t(s* - 1) at all, and instead only
compute the gradient when we know the hypothesis test will pass, s* iterations after the last accept.

Given that we have some scheme for skipping k iterations, we estimate a "velocity" at which
μ̂_t(s) = β̂_t(s) changes: μ̇_e ≈ (μ̂_t(s) - μ̂_t(s-k-1)) / (k+1). If, for instance, μ̇_e > 0, we can compute the value
of μ̂ at which the hypothesis test will pass, assuming σ_β remains constant, by setting P(g ≤ 0 | μ_β =
μ_pass) = α, and subsequently we can approximate the number of iterations to skip (in practice we
cap k_skip at some maximum number of iterations, say 40):

    μ_pass = -σ_β Φ^{-1}(α),   k_skip ≈ (μ_pass - μ̂_t(s)) / μ̇_e    (17)
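The skip prediction of (17)-(18) can be sketched as below. This is an illustrative sketch under stated assumptions (constant velocity and constant σ_β); the function name and interface are ours:

```python
from statistics import NormalDist

def predict_skip(mu_prev, mu_now, k, sigma, lam, theta_j, alpha, max_skip=40):
    """Sketch of the skip heuristic of Sec. 4 (eqs. 17-18): predict how many
    iterations until the hypothesis test passes, assuming the gradient mean
    moves at a constant rate and sigma_beta stays fixed.

    mu_prev, mu_now: estimated means observed k+1 proposed updates apart.
    """
    rate = (mu_now - mu_prev) / (k + 1)            # estimated "velocity"
    if rate == 0.0:
        return 0                                   # cannot extrapolate
    inv = NormalDist().inv_cdf
    if rate > 0:
        mu_pass = -sigma * inv(alpha)              # eq. (17)
        mu_pass += lam if theta_j <= 0 else -lam   # regularized shift, eq. (18)
    else:
        mu_pass = -sigma * inv(1.0 - alpha)
        mu_pass += lam if theta_j < 0 else -lam
    k_skip = (mu_pass - mu_now) / rate
    return max(0, min(int(k_skip), max_skip))      # cap at max_skip iterations
```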
Figure 3: Comparison of our Lasso algorithm against SGD across various hyper-parameter settings for the
exponential annealing schedule. Our algorithm is marked by the horizontal lines, with α ∈ {0.05, 0.2, 0.4}.
Note that all algorithms have very similar precision scores in the interval [0.75, 0.76]. For values of
τ ∈ {0.8, 0.9, 0.96, 0.99, 0.999}, SGD gives a good score; however, picking η_0 > 1 had an adverse effect on
the optimization speed. Our method converged faster than SGD with the best annealing schedule.
The regularized case with θ_j > 0 is equivalent to the unregularized case where g = β + λ, and we
solve for the value of β that will allow the test to pass via P(g ≤ 0 | μ_β = μ_pass) = α:

    μ_pass = -σ_β Φ^{-1}(α) - λ    (18)

Similarly, the case with θ_j ≤ 0 is equivalent to the unregularized case where g = β - λ:
μ_pass = -σ_β Φ^{-1}(α) + λ. For the case where μ̇_e < 0, we solve for P(g ≥ 0 | μ_β = μ_pass) = α.
This gives μ_pass = -σ_β Φ^{-1}(1 - α) + λ if θ_j < 0 and μ_pass = -σ_β Φ^{-1}(1 - α) - λ otherwise.
A similar heuristic for the Lasso case can also be derived.
4.1 Shrinking Strategy

It is common in SVM algorithms to employ a "shrinking" strategy in which datapoints that do
not contribute to the loss are removed from future computations. Specifically, if a data point (x_i, y_i)
has the property that b_i = 1 - y_i θ^T x_i < δ_shrink < 0, for some threshold δ_shrink, then the data point is
removed from the current batch. Data points removed from earlier batches in the optimization
are still candidates for future batches. We employ this heuristic in our SVM implementation, and
Figure 2 shows the relative performance with and without this heuristic.
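A minimal sketch of the shrinking step (the default threshold value is our own illustrative stand-in for δ_shrink):

```python
def shrink_batch(batch, theta, delta=-0.1):
    """Sketch of the shrinking strategy of Sec. 4.1: drop points whose slack
    b_i = 1 - y_i theta^T x_i falls below the negative threshold delta, since
    such points currently contribute nothing to the squared hinge-loss."""
    kept = []
    for xi, yi in batch:
        b = 1.0 - yi * sum(t * x for t, x in zip(theta, xi))
        if b >= delta:
            kept.append((xi, yi))
    return kept
```

Dropped points remain candidates for later, larger batches, as described above.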
5 Experiments

5.1 Datasets

We provide experimental results for the task of visual object detection, building on recent successful
approaches that learn linear scanning-window classifiers defined on Histograms of Oriented Gradients (HOG) descriptors [12, 13]. We train and evaluate a pedestrian detector using the INRIA dataset
[12], where (N, p) = (5e6, 1100). We also train and evaluate a car detector using the 2007 PASCAL
VOC dataset [13], where (N, p) = (6e7, 1400). For both datasets, we measure performance using the
standard PASCAL evaluation protocol of average precision (with 50% overlap of predicted/ground
truth bounding boxes). On such large training sets, one would expect delicately-tuned stochastic
online algorithms (such as SGD) to outperform standard batch optimization (such as coordinate descent). We show that our algorithm exhibits the speed of the former with the reliability of the latter.
5.2 Normality Tests

In this section we empirically verify the normality claims on the INRIA dataset. Because the
negative examples in this data comprise many overlapping windows from images, we
might expect this non-iid property to damage the normality properties of our updates. For these
experiments, we focus on the gradients of the L1-regularized, L2-loss SVM computed during
various stages of the optimization process. Figure 1 shows quantile-quantile plots of the average
gradient, computed over different subsamples of the data of fixed size N_b, versus the standard
Normal. Experiments for smaller N (≈ 100) and random θ give similar curves. We conclude that
the presence of straight lines provides strong evidence for our assumption that the distribution of
gradients is in fact close to normally distributed.
5.3 Algorithm Comparisons

We compared our algorithm to the stochastic gradient method for L1-regularized log-linear models
in [14], adapted for the L1-regularized methods above. We use the following decay schedule for
all curves over time labeled "SGD": η = η_0 t_0 / (t_0 + t). In addition to this schedule, we also tested
against SGD using the regret-minimizing schedule of [6] on the INRIA dataset: η = η_0 sqrt(t_0) / sqrt(t_0 + t).
After spending a significant amount of time hand-optimizing the hyper-parameters η_0, t_0, we found
that the settings η_0 ≈ 1 for both rate schedules, and t_0 ≈ N/10 (standard SGD) and t_0 ≈ (N/10)^2
(regret-minimizing SGD), worked well on our datasets. We ran all our algorithms (Lasso,
logistic regression and SVM) with a value of α = 0.05 for both the INRIA and VOC datasets.
Figure 2 shows a comparison between our method and stochastic gradient descent on the INRIA and
VOC datasets. Our method including the shrinking strategy is faster for the SVM, while methods
without a data shrinking strategy, such as logistic regression, are still competitive (see Figure 2).
In comparing our methods to the coordinate descent upon which ours are based, we see that our
framework provides a considerable speedup over standard coordinate descent. We do this with a
method which eventually uses the entire batch of data, so the tricks that enable SGD to converge in
an L1-regularized problem are not necessary. In terms of performance, our models are equivalent
or close to published state-of-the-art results for linear models [13, 15].
We also performed a comparison against SGD with an exponential decay schedule η = η₀e^(−λt) on
the Lasso problem (see Fig. 3). Exponential decay schedules are known to work well in practice
[14], but do not give the theoretical convergence guarantees of other schedules. For a range of
values for η₀ and λ, we compare SGD against our algorithm with ε ∈ {0.05, 0.2, 0.4}. From these
experiments we conclude that changing ε from its standard value 0.05 all the way to 0.4 (recall
that ε < 0.5) has very little effect on accuracy and speed. This is in contrast to SGD, which required
hyper-parameter tuning to achieve comparable performance.
To further demonstrate the robustness of our method to ε, we performed 5 trials of logistic regression
on the INRIA dataset with a wide range of values of ε, with random initializations, shown in Figure
1. All choices of ε give a reasonable average precision, and the algorithm only begins to become
significantly slower with ε > 0.3.
6 Conclusions
We have introduced a new framework for optimization problems from a statistical, frequentist point
of view. Every phase of the learning process has its own optimal batch size. That is to say, we
need only a few data-cases early on in learning but many data-cases close to convergence. In fact, we
argue that when we are using all of our data and cannot determine with statistical confidence that
our update is in the correct direction, we should stop learning to avoid overfitting. These ideas are
absent in the usual frequentist (a.k.a. maximum likelihood) and learning-theory approaches which
formulate learning as the optimization of some loss function. A meaningful smallest length scale
based on statistical considerations is present in Bayesian analysis through the notion of a posterior
distribution. However, the most common inference technique in that domain, MCMC sampling,
does not make use of the fact that less precision is needed during the first phases of learning (a.k.a.
"burn-in") because any accept/reject rule requires all data-cases to be seen. Hence, our approach
can be thought of as a middle ground that borrows from both learning philosophies.
Our approach also leverages the fact that some features are more predictive than others, and may
deserve more attention during optimization. By predicting when updates will pass their statistical
tests, we can update each feature approximately with the correct frequency.
The proposed algorithms feature a single variable that needs to be set. However, the variable has a
clear meaning: the allowed probability that an update moves in the wrong direction. We have used
ε = 0.05 in all our experiments to showcase the robustness of the method.
Our method is not limited to L1 methods or linear models; our framework can be used on any
algorithm in which we take updates which are simple functions on averages over the data.
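The single tunable ε can be read off directly as the significance level of a one-sided test on the averaged update direction. The following is a minimal sketch of that idea, not the authors' code; it assumes the per-datapoint partial derivatives for one coordinate are approximately normal, as argued from the quantile-quantile plots earlier:

```python
import math

def confident_direction(grad_samples, eps=0.05):
    """Return +1 or -1 if the mean gradient direction passes a one-sided
    z-test at level eps (the allowed probability of a wrong-direction
    update), else 0, signaling that the batch should be grown instead.

    grad_samples: per-datapoint partial derivatives for one coordinate.
    """
    n = len(grad_samples)
    mean = sum(grad_samples) / n
    var = sum((g - mean) ** 2 for g in grad_samples) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean
    if se == 0.0:
        return 0 if mean == 0.0 else (1 if mean > 0.0 else -1)
    z = mean / se
    # One-sided p-value via the normal CDF, computed with erf.
    p_wrong = 0.5 * (1.0 + math.erf(-abs(z) / math.sqrt(2.0)))
    if p_wrong < eps:
        return 1 if mean > 0.0 else -1
    return 0  # not statistically confident in the update direction
```

A caller would take a step in the returned direction when the result is nonzero, and otherwise increase the subsample size, which is the "self-annealing" behavior described above.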
Relative to vanilla coordinate descent, our algorithms can handle dense datasets with N >> p. Relative to SGD, our method can be thought of as "self-annealing" in the sense that it increases its precision by adaptively increasing the dataset size. The advantages over SGD are therefore that we avoid
tuning the hyper-parameters of an annealing schedule and that we have an automated stopping criterion.
(Recent benchmarks [16] show that a properly tuned SGD solver is highly competitive for large-scale
problems [17].)
References
[1] L. Bottou and O. Bousquet. Learning using large datasets. In Mining Massive DataSets for Security, NATO ASI Workshop Series. IOS Press, Amsterdam, 2008.
[2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20:161-168, 2008.
[3] B. Yu. Embracing statistical challenges in the information technology age. Technometrics, American Statistical Association and the American Society for Quality, 49:237-248, 2007.
[4] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, 2004.
[5] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928-935. ACM, 2008.
[6] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. Twentieth International Conference on Machine Learning, 2003.
[7] P.L. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. Advances in Neural Information Processing Systems, 21, 2007.
[8] A. Korattikara, L. Boyles, M. Welling, J. Kim, and H. Park. Statistical optimization of non-negative matrix factorization. AISTATS, 2011.
[9] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302-332, 2007.
[10] R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874, 2008.
[11] N. Le Roux, P.A. Manzagol, and Y. Bengio. Topmoumoute online natural gradient algorithm. In Neural Information Processing Systems (NIPS), 2007.
[12] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, page 886, 2005.
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[14] Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 477-485, Suntec, Singapore, August 2009. Association for Computational Linguistics.
[15] Navneet Dalal. Finding People in Images and Video. PhD thesis, Institut National Polytechnique de Grenoble / INRIA Grenoble, July 2006.
[16] PASCAL large scale learning challenge. http://largescale.ml.tu-berlin.de/workshop/, 2008.
[17] A. Bordes, L. Bottou, and P. Gallinari. SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent. Journal of Machine Learning Research, 10:1737-1754, 2009.
Nonstandard Interpretations of Probabilistic
Programs for Efficient Inference
David Wingate
BCS / LIDS, MIT
[email protected]
Noah D. Goodman
Psychology, Stanford
[email protected]
Andreas Stuhlmüller
BCS, MIT
[email protected]
Jeffrey M. Siskind
ECE, Purdue
[email protected]
Abstract
Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages. Because the
program is in machine-readable format, a variety of techniques from compiler design and program analysis can be used to examine the structure of the distribution
represented by the probabilistic program. We show how nonstandard interpretations of probabilistic programs can be used to craft efficient inference algorithms:
information about the structure of a distribution (such as gradients or dependencies) is generated as a monad-like side computation while executing the program.
These interpretations can be easily coded using special-purpose objects and operator overloading. We implement two examples of nonstandard interpretations in
two different languages, and use them as building blocks to construct inference
algorithms: automatic differentiation, which enables gradient based methods, and
provenance tracking, which enables efficient construction of global proposals.
1
Introduction
Probabilistic programming simplifies the development of probabilistic models by allowing modelers
to specify a stochastic process using syntax that resembles modern programming languages. These
languages permit arbitrary mixing of deterministic and stochastic elements, resulting in tremendous
modeling flexibility. The resulting programs define probabilistic models that serve as prior distributions: running the (unconditional) program forward many times results in a distribution over execution traces, with each trace being a sample from the prior. Examples include BLOG [13], Bayesian
Logic Programs [10], IBAL [18], Church [6], Stochastic MATLAB [28], and HANSEI [11].
The primary challenge in developing such languages is scalable inference. Inference can be viewed
as reasoning about the posterior distribution over execution traces conditioned on a particular program output, and is difficult because of the flexibility these languages present: in principle, an
inference algorithm must behave reasonably for any program a user wishes to write. Sample-based
MCMC algorithms are the state-of-the-art method, due to their simplicity, universality, and compositionality. But in probabilistic modeling more generally, efficient inference algorithms are designed
by taking advantage of structure in distributions. How can we find structure in a distribution defined by a probabilistic program? A key observation is that some languages, such as Church and
Stochastic MATLAB, are defined in terms of an existing (non-probabilistic) language. Programs in
these languages may literally be executed in their native environments, suggesting that tools from
program analysis and programming language theory can be leveraged to find and exploit structure
in the program for inference, much as a compiler might find and exploit structure for performance.
Here, we show how nonstandard interpretations of probabilistic programs can help craft efficient
inference algorithms. Information about the structure of a distribution (such as gradients, dependencies or bounds) is generated as a monad-like side computation while executing the program. This
extra information can be used to, for example, construct good MH proposals, or search efficiently
for a local maximum. We focus on two such interpretations: automatic differentiation and provenance tracking, and show how they can be used as building blocks to construct efficient inference
algorithms. We implement nonstandard interpretations in two different languages (Church and
Stochastic MATLAB), and experimentally demonstrate that while they typically incur some additional execution overhead, they dramatically improve inference performance.
2 Background and Related Work
Alg. 1: A Gaussian-Gamma mixture
1: for i=1:1000
2:   if ( rand > 0.5 )
3:     X(i) = randn;
4:   else
5:     X(i) = gammarnd;
6:   end;
7: end;

We begin by outlining our setup, following [28]. We define an unconditioned probabilistic program to be a parameterless function f with an arbitrary mix of stochastic and deterministic elements (hereafter, we will use the terms function and program interchangeably). The function f may be written in any language, but our running example will be MATLAB. We allow the function to be arbitrarily complex inside, using any additional functions,
recursion, language constructs or external libraries it wishes. The only constraint is that the function must be self-contained, with no external side-effects which would impact the execution of the
function from one run to another.
The stochastic elements of f must come from a set of known, fixed elementary random primitives,
or ERPs. Complex distributions are constructed compositionally, using ERPs as building blocks. In
M ATLAB, ERPs may be functions such as rand (sample uniformly from [0,1]) or randn (sample
from a standard normal). Higher-order random primitives, such as nonparametric distributions, may
also be defined, but must be fixed ahead of time. Formally, let T be the set of ERP types. We assume
that each type t ∈ T is a parametric family of distributions p_t(x | θ_t), with parameters θ_t.
Now, consider what happens while executing f . As f is executed, it encounters a series of ERPs.
Alg. 1 shows an example of a simple f written in MATLAB with three syntactic ERPs: rand,
randn, and gammarnd. During execution, depending on the return value of each call to rand,
different paths will be taken through the program, and different ERPs will be encountered. We call
this path an execution trace. A total of 2000 random choices will be made when executing this f .
Let f_{k|x_1,...,x_{k-1}} be the kth ERP encountered while executing f, and let x_k be the value it returns.
Note that the parameters passed to the kth ERP may change depending on previous x_k's (indeed,
its type may also change, as well as the total number of ERPs). We denote by x all of the random
choices which are made by f, so f defines the probability distribution p(x). In our example, x ∈
R^2000. The probability p(x) is the product of the probabilities of each individual ERP choice:
p(x) = ∏_{k=1}^{K} p_{t_k}(x_k | θ_{t_k}, x_1, . . . , x_{k-1})    (1)
again noting explicitly that types and parameters may depend arbitrarily on previous random choices.
To simplify notation, we will omit the conditioning on the values of previous ERPs, but again wish
to emphasize that these dependencies are critical and cannot be ignored. By f_k, it should therefore
be understood that we mean f_{k|x_1,...,x_{k-1}}, and by p_{t_k}(x_k | θ_{t_k}) we mean p_{t_k}(x_k | θ_{t_k}, x_1, . . . , x_{k-1}).
Generative functions as described above are, of course, easy to write. A much harder problem, and
our goal in this paper, is to reason about the posterior conditional distribution p(x|y), where we
define y to be a subset of random choices which we condition on and (in an abuse of notation) x
to be the remaining random choices. For example, we may condition f on the X(i)'s, and reason
about the sequence of rand's most likely to generate the X(i)'s. For the rest of this paper, we
will drop y and simply refer to p(x), but it should be understood that the goal is always to perform
inference in conditional distributions.
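A toy version of this setup is straightforward to write down: run the program, record every ERP outcome into a trace, and accumulate the log of the product in Eq. 1. The following is our illustrative sketch, not Stochastic MATLAB; names like `TracingInterpreter` are ours, and a shifted Gaussian stands in for gammarnd to keep the density bookkeeping short:

```python
import math
import random

class TracingInterpreter:
    """Runs a probabilistic program while recording each ERP outcome x_k
    and accumulating log p(x), the log of the product in Eq. 1."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.trace = []   # the x_k's, in the order they are encountered
        self.log_p = 0.0

    def rand(self):
        x = self.rng.random()
        self.trace.append(x)
        # Uniform[0,1] has density 1, so it contributes log 1 = 0.
        return x

    def randn(self):
        x = self.rng.gauss(0.0, 1.0)
        self.trace.append(x)
        self.log_p += -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)
        return x

def mixture_program(interp, n=10):
    # Same control structure as Alg. 1; the branch taken on each
    # iteration determines which ERP is encountered next.
    X = []
    for _ in range(n):
        if interp.rand() > 0.5:
            X.append(interp.randn())
        else:
            X.append(interp.randn() + 5.0)
    return X

interp = TracingInterpreter(seed=1)
X = mixture_program(interp)
# Each iteration records one branch choice and one value draw,
# so the trace holds 2 * n random choices.
```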
2.1 Nonstandard Interpretations of Probabilistic Programs
With an outline of probabilistic programming in hand, we now turn to nonstandard interpretations.
The idea of nonstandard interpretations originated in model theory and mathematical logic, where it
was proposed that a set of axioms could be interpreted by different models. For example, differential
geometry can be considered a nonstandard interpretation of classical arithmetic.
In programming, a nonstandard interpretation replaces the domain of the variables in the program
with a new domain, and redefines the semantics of the operators in the program to be consistent
with the new domain. This allows reuse of program syntax while implementing new functionality.
For example, the expression "a * b" can be interpreted equally well if a and b are either scalars or
matrices, but the "*" operator takes on different meanings. Practically, many useful nonstandard
interpretations can be implemented with operator overloading: variables are redefined to be objects
with operators that implement special functionality, such as tracing, reference counting, or profiling.
For the purposes of inference in probabilistic programs, we will augment each random choice x_k
with additional side information s_k, and replace each x_k with the tuple ⟨x_k, s_k⟩. The native interpreter for the probabilistic program can then interpret the source code as a sequence of operations
on these augmented data types. For a recent example of this, we refer the reader to [24].
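As a concrete instance of the ⟨x_k, s_k⟩ idea, here is a minimal provenance-tracking value in Python: the side information is the set of ERP ids that influenced a value, propagated through overloaded arithmetic. This is our sketch, not the paper's implementation, and `Tagged` is a hypothetical name:

```python
class Tagged:
    """A value x paired with side information s: the set of ERP ids
    (provenance) that contributed to computing it."""
    def __init__(self, x, ids=frozenset()):
        self.x = x
        self.ids = frozenset(ids)

    def _lift(self, other):
        # Deterministic constants carry empty provenance.
        return other if isinstance(other, Tagged) else Tagged(other)

    def __add__(self, other):
        other = self._lift(other)
        return Tagged(self.x + other.x, self.ids | other.ids)

    def __mul__(self, other):
        other = self._lift(other)
        return Tagged(self.x * other.x, self.ids | other.ids)

    __radd__ = __add__
    __rmul__ = __mul__

# Two "random choices" with ERP ids 0 and 1, plus a deterministic constant.
a = Tagged(2.0, {0})
b = Tagged(3.0, {1})
c = a * b + 4.0
# c carries the value 10.0 and the provenance {0, 1}: the output
# depends on both random choices but not on the constant.
```

An inference engine can use such provenance sets to decide which random choices a proposal must update together.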
3 Automatic Differentiation
For probabilistic models with many continuous-valued random variables, the gradient of the likelihood ∇_x p(x) provides local information that can significantly improve the properties of Monte Carlo inference algorithms. For instance, Langevin Monte Carlo [20] and Hamiltonian MCMC [15]
use this gradient as part of a variable-augmentation technique (described below). We would like
to be able to use gradients in the probabilistic-program setting, but p(x) is represented implicitly
by the program. How can we compute its gradient? We use automatic differentiation (AD) [3, 7],
a nonstandard interpretation that automatically constructs ∇_x p(x). The automatic nature of AD is
critical because it relieves the programmer from hand-computing derivatives for each model; moreover, some probabilistic programs dynamically create or delete random variables, making simple
closed-form expressions for the gradient very difficult to find.
Unlike finite differencing, AD computes an exact derivative of a function f at a point (up to machine
precision). To do this, AD relies on the chain rule to decompose the derivative of f into derivatives
of its sub-functions: ultimately, known derivatives of elementary functions are composed together to
yield the derivative of the compound function. This composition can be computed as a nonstandard
interpretation of the underlying elementary functions.
The derivative computation as a composition of the derivatives of the elementary functions can be
performed in different orders. In forward mode AD [27], computation of the derivative proceeds by
propagating perturbations of the input toward the output. This can be done by a nonstandard interpretation that extends each real value to the first two terms of its Taylor expansion [26], overloading
each elementary function to operate on these real "polynomials". Because the derivatives of f at c
can be extracted from the coefficients of ε in f(c + ε), this allows computation of the gradient. In
reverse mode AD [25], computation of the derivative proceeds by propagating sensitivities of the
output toward the input. One way this can be done is by a nonstandard interpretation that extends
each real value into a "tape" that captures the trace of the real computation which led to that value
from the inputs, overloading each elementary function to incrementally construct these tapes. Such
a tape can be postprocessed, in a fashion analogous to backpropagation [21], to yield the gradient.
These two approaches have complementary computational tradeoffs: reverse mode (which we use in
our implementation) can compute the gradient of a function f : R^n → R with the same asymptotic
time complexity as computing f , but not the same asymptotic space complexity (due to its need
for saving the computation trace), while forward mode can compute the gradient with these same
asymptotic space complexity, but with a factor of O(n) slowdown (due to its need for constructing
the gradient out of partial derivatives along each independent variable).
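Forward mode is especially compact to sketch: extend each real to its first two Taylor coefficients (a "dual number" x + x'ε with ε² = 0) and overload the elementary operations. The following is our minimal illustration, not a full AD library; `Dual`, `dsin`, and `deriv` are names we introduce:

```python
import math

class Dual:
    """First two Taylor coefficients of a value: (val, dot), with eps^2 = 0."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule on the perturbation coefficient.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

    __radd__ = __add__
    __rmul__ = __mul__

def dsin(d):
    # Elementary functions carry their known derivatives.
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)

def deriv(f, x):
    """df/dx at x, obtained by seeding the input perturbation with 1."""
    return f(Dual(x, 1.0)).dot
```

For example, `deriv(lambda d: d * dsin(d), 1.5)` returns sin(1.5) + 1.5·cos(1.5), the exact derivative of x·sin(x) at 1.5, up to machine precision.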
There are implementations of AD for many languages, including Scheme (e.g., [17]), Fortran
(e.g., ADIFOR [2]), C (e.g., ADOL-C [8]), C++ (e.g., FADBAD++ [1]), MATLAB (e.g., INTLAB [22]),
and Maple (e.g., GRADIENT [14]). See www.autodiff.org. Additionally, overloading and AD
are well-established techniques that have been applied to machine learning, and even to application-specific programming languages for machine learning, e.g., LUSH [12] and DYNA [4]. In particular,
DYNA applies a nonstandard interpretation for ⊕ and ⊗ as a semiring (× and +, + and max, . . .) in
a memoizing Prolog to generalize Viterbi, forward/backward, inside/outside, etc., and uses AD to
derive the outside algorithm from the inside algorithm and support parameter estimation, but unlike
probabilistic programming, it does not model general stochastic processes and does not do general
inference over such. Our use of overloading and AD differs in that it facilitates inference in complicated models of general stochastic processes formulated as probabilistic programs. Probabilistic
programming provides a powerful and convenient framework for formulating complicated models
and, more importantly, separating such models from orthogonal inference mechanisms. Moreover,
overloading provides a convenient mechanism for implementing many such inference mechanisms
(e.g., Langevin MC, Hamiltonian MCMC, Provenance Tracking, as demonstrated below) in a probabilistic programming language.
3
(define (perlin-pt x y keypt power)
(* 255 (sum (map (lambda (p2 pow)
(let ((x0 (floor (* p2 x))) (y0 (floor (* p2 y))))
(* pow (2d-interp (keypt x0 y0) (keypt (+ 1 x0) y0) (keypt x0 (+ 1 y0)) (keypt (+ 1 x0) (+ 1 y0))))))
powers-of-2 power))))
(define (perlin xs ys power)
(let ([keypt (mem (lambda (x y) (/ 1 (+ 1 (exp (- (gaussian 0.0 2.0)))))))])
(map (lambda (x) (map (lambda (y) (perlin-pt x y keypt power)) xs)) ys)))
Figure 1: Code for the structured Perlin noise generator. 2d-interp is B-spline interpolation.
3.1 Hamiltonian MCMC
Alg. 2: Hamiltonian MCMC
1: repeat forever
2:   Gibbs step:
3:     Draw momentum m ~ N(0, σ²)
4:   Metropolis step:
5:     Start with current state (x, m)
6:     Simulate Hamiltonian dynamics to give (x*, m*)
7:     Accept w/ p = min[1, e^(−H(x*, m*) + H(x, m))]
8: end;

To illustrate the power of AD in probabilistic programming, we build on Hamiltonian MCMC (HMC), an efficient algorithm whose popularity has been somewhat limited by the necessity of computing gradients, a difficult task for complex models. Neal [15] introduces HMC as an inference method which "produces distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behavior of simple random-walk proposals."
HMC begins by augmenting the state space with "momentum variables" m. The distribution over
this augmented space is proportional to e^(−H(x,m)), where the Hamiltonian function H decomposes into the sum of
a potential energy term U(x) = −ln p(x) and a kinetic energy K(m), which is usually taken to
be Gaussian. Inference proceeds by alternating between a Gibbs step and a Metropolis step: fixing
the current state x, a new momentum m is sampled from the prior over m; then x and m are updated together by following a trajectory according to Hamiltonian dynamics. Discrete integration
of Hamiltonian dynamics requires the gradient of H, and must be done with a symplectic (i.e. volume preserving) integrator (following [15] we use the Leapfrog method). While this is a complex
computation, incorporating gradient information dramatically improves performance over vanilla
random-walk style MH moves (such as Gaussian drift kernels), and its statistical efficiency also
scales much better with dimensionality than simpler methods [15]. AD can also compute higher-order derivatives. For example, Hessian matrices can be used to construct blocked Metropolis moves
[9] or proposals based on Newton's method [19], or as part of Riemannian manifold methods [5].
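The leapfrog step that simulates the Hamiltonian dynamics in line 6 of Alg. 2 is short enough to show in full. This is a generic sketch (not Bher's code), for the scalar, unit-mass case H(x, m) = U(x) + m²/2, where `grad_U` is the gradient that AD supplies:

```python
def leapfrog(x, m, grad_U, step, n_steps):
    """Symplectic (leapfrog) integration of Hamiltonian dynamics for
    H(x, m) = U(x) + m**2 / 2, scalar case with unit mass."""
    m = m - 0.5 * step * grad_U(x)       # initial half-step on momentum
    for i in range(n_steps):
        x = x + step * m                 # full step on position
        if i < n_steps - 1:
            m = m - step * grad_U(x)     # full step on momentum
    m = m - 0.5 * step * grad_U(x)       # final half-step on momentum
    return x, -m  # negate momentum to make the proposal reversible

# For a standard normal target, U(x) = x**2 / 2 and grad_U(x) = x;
# leapfrog then nearly conserves H along the trajectory, which is
# what keeps the Metropolis acceptance rate in line 7 high.
```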
3.2 Experiments and Results
We implemented HMC by extending Bher [28], a lightweight implementation of the Church
language which provides simple, but universal, MH inference. We used an implementation of
AD based on [17] that uses hygienic operator overloading to do both forward and reverse mode AD
for Scheme (the target language of the Bher compiler).
The goal is to compute ∇_x p(x). By Eq. 1, p(x) is the product of the individual choices made by
each xi (though each probability can depend on previous choices, through the program evaluation).
To compute p(x), Bher executes the corresponding program, accumulating likelihoods. Each time
a continuous ERP is created or retrieved, we wrap it in a "tape" object which is used to track gradient
information; as the likelihood p(x) is computed, these tapes flow through the program and through
appropriately overloaded operators, resulting in a dependency graph for the real portion of the computation. The gradient is then computed in reverse mode, by ?back-propagating? along this graph.
We implement an HMC kernel by using this gradient in the leapfrog integrator. Since program states may contain a combination of discrete and continuous ERPs, we use an overall cycle kernel which alternates between a standard MH kernel for individual discrete random variables and the HMC kernel for all continuous random choices. To decrease burn-in time, we initialize the sampler by using annealed gradient ascent (again implemented using AD).
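The leapfrog-based HMC kernel described above can be sketched as follows. This is an illustrative Python version on a one-dimensional standard normal target; in the paper the `grad_logp` function would be supplied by the AD machinery, and the kernel would be cycled with a discrete MH kernel. The step size and step count below are arbitrary choices for the example.

```python
import math, random

random.seed(0)

def leapfrog(x, p, grad_logp, eps, n_steps):
    """Symplectic (volume-preserving) integrator for H(x,p) = -log p(x) + p^2/2."""
    p = p + 0.5 * eps * grad_logp(x)        # half step for momentum
    for i in range(n_steps):
        x = x + eps * p                     # full step for position
        if i < n_steps - 1:
            p = p + eps * grad_logp(x)      # full step for momentum
    p = p + 0.5 * eps * grad_logp(x)        # final half step
    return x, p

def hmc_step(x, logp, grad_logp, eps=0.1, n_steps=20):
    p0 = random.gauss(0.0, 1.0)             # resample momentum
    x_new, p_new = leapfrog(x, p0, grad_logp, eps, n_steps)
    h_old = -logp(x) + 0.5 * p0 * p0
    h_new = -logp(x_new) + 0.5 * p_new * p_new
    # Metropolis correction for discretization error of the integrator
    if math.log(random.random()) < h_old - h_new:
        return x_new
    return x

# Target: a 1D standard normal; here the gradient is hand-written,
# standing in for the AD-computed gradient of the program's likelihood.
logp = lambda x: -0.5 * x * x
grad_logp = lambda x: -x
x, samples = 3.0, []
for _ in range(2000):
    x = hmc_step(x, logp, grad_logp)
    samples.append(x)
print(abs(sum(samples) / len(samples)) < 0.5)  # True: sample mean near 0
```

Even starting far from the mode (x = 3), the gradient-guided trajectories move the chain into the typical set within a few steps, which is the distal-move behavior the experiments below measure.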
We ran two sets of experiments that illustrate two different benefits of HMC with AD: automated
gradients of complex code, and good statistical efficiency.
Structured Perlin noise generation. Our first experiment uses HMC to generate modified Perlin noise with soft symmetry structure. Perlin noise is a procedural texture used by computer graphics artists to add realism to natural textures such as clouds, grass or tree bark. We generate Perlin-like noise by layering octaves of random but smoothly varying functions. We condition the result
[Figure 2, right panel: y-axis "Distance to true expectation" vs. x-axis "Samples" (0-300), comparing MCMC and HMC convergence curves.]
Figure 2: On the left: samples from the structured Perlin noise generator. On the right: convergence
of expected mean for a draw from a 3D spherical Gaussian conditioned on lying on a line.
on approximate diagonal symmetry, forcing the resulting image to incorporate additional structure
without otherwise skewing the statistics of the image. Note that the MAP solution for this problem is
uninteresting, as it is a uniform image; it is the variations around the MAP that provide rich texture.
We generated 48x48 images; the model had roughly 1000 variables.
Fig. 2 shows the result via typical samples generated by HMC, where the approximate symmetry is clearly visible. A code snippet demonstrating the complexity of the calculations is shown in Fig. 1; this experiment illustrates how the automatic nature of the gradients is most helpful, as it would be time consuming to compute these gradients by hand, particularly since we are free to condition using any function of the image.
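A rough sketch of the two model ingredients, octave layering and a soft symmetry likelihood, is below. This is an illustrative Python version (the paper's model is written in CHURCH); the sinusoid-based octave generator and the Gaussian symmetry penalty are my simplifications, not the paper's exact functions.

```python
import math, random

random.seed(1)

def smooth_octave(n, freq):
    """One octave: a random-phase sinusoid grid, smooth everywhere."""
    ax, ay = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return [[math.sin(freq * i / n + ax) * math.sin(freq * j / n + ay)
             for j in range(n)] for i in range(n)]

def perlin_like(n, octaves=4):
    """Layer octaves: each higher octave is finer and contributes less."""
    img = [[0.0] * n for _ in range(n)]
    for o in range(octaves):
        layer = smooth_octave(n, 2.0 ** o)
        amp = 0.5 ** o
        for i in range(n):
            for j in range(n):
                img[i][j] += amp * layer[i][j]
    return img

def symmetry_loglik(img, sigma=0.1):
    """Soft constraint: the image should roughly equal its transpose."""
    n = len(img)
    sq = sum((img[i][j] - img[j][i]) ** 2 for i in range(n) for j in range(n))
    return -sq / (2 * sigma ** 2)

img = perlin_like(16)
print(symmetry_loglik(img) <= 0.0)  # True: a log-likelihood penalty term
```

Conditioning on this penalty (rather than maximizing it) is what keeps the samples textured: the MAP under the penalty alone would be a perfectly symmetric, uniform image.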
Complex conditioning. For our second example, we demonstrate the improved statistical efficiency of the samples generated by HMC versus BHER's standard MCMC algorithm. The goal is to sample points from a complex 3D distribution, defined by starting with a Gaussian prior, and sampling points that are noisily conditioned to be on a line running through R3. This creates complex interactions with the prior to yield a smooth, but strongly coupled, energy landscape.

Normal distribution noisily conditioned on a line (2D projection):
1: x ~ N(mu, Sigma)
2: k ~ Bernoulli(exp(-dist(line, x) / noise))
3: Condition on k = 1

Fig. 2 compares our HMC implementation with BHER's standard MCMC engine. The x-axis denotes samples, while the y-axis denotes the convergence of an estimator of certain marginal statistics of the samples. We see that this estimator converges much faster for HMC, implying that the samples which are generated are less autocorrelated, affirming that HMC is indeed making better distal moves. HMC is about 5x slower than MCMC for this experiment, but the overhead is justified by the significant improvement in the statistical quality of the samples.
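The unnormalized log density being sampled in this experiment can be written out directly; the sketch below is an illustrative Python version of the three-line model above (a standard-normal prior plus the Bernoulli conditioning term, which reduces to a -dist/noise penalty). The particular line, noise level, and test points are mine.

```python
import math

def dist_to_line(x, p0, d):
    """Distance from point x to the line p0 + t*d in R^3 (d is unit length)."""
    v = [x[i] - p0[i] for i in range(3)]
    t = sum(v[i] * d[i] for i in range(3))          # projection onto the line
    perp = [v[i] - t * d[i] for i in range(3)]       # perpendicular residual
    return math.sqrt(sum(c * c for c in perp))

def log_joint(x, p0, d, noise=0.5):
    log_prior = -0.5 * sum(c * c for c in x)         # N(0, I) prior, up to a constant
    # Conditioning on k = 1 where k ~ Bernoulli(exp(-dist/noise))
    # contributes log exp(-dist/noise) = -dist/noise:
    log_cond = -dist_to_line(x, p0, d) / noise
    return log_prior + log_cond

p0, d = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)             # the x-axis as the line
on_line = log_joint((1.0, 0.0, 0.0), p0, d)
off_line = log_joint((1.0, 1.0, 0.0), p0, d)
print(on_line > off_line)  # True: points near the line are more probable
```

The coupling between the prior and the distance term is what makes the energy landscape smooth but strongly correlated, the regime where HMC's gradient information pays off.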
4 Provenance Tracking for Fine-Grained Dynamic Dependency Analysis
One reason gradient-based inference algorithms are effective is that the chain rule of derivatives propagates information backwards from the data up to the proposal variables. But gradients, and the chain rule, are only defined for continuous variables. Is there a corresponding structure for discrete choices? We now introduce a new nonstandard interpretation based on provenance tracking (PT). In programming language theory, the provenance of a variable is the history of variables and computations that combined to form its value. We use this idea to track fine-grained dependency information between random values and intermediate computations as they combine to form a likelihood. We then use this provenance information to construct good global proposals for discrete variables as part of a novel factored multiple-try MCMC algorithm.
4.1 Defining and Implementing Provenance Tracking
Like AD, PT can be implemented with operator overloading. Because provenance information is much coarser than gradient information, the operators in PT objects have a particularly simple form; most program expressions can be covered by considering a few cases. Let X denote the set {x_i} of all (not necessarily random) variables in a program. Let R(x) ⊆ X define the provenance of a variable x. Given R(x), the provenance of expressions involving x can be computed by breaking down expressions into a sequence of unary operations, binary operations, and function applications. Constants have empty provenances.
Let x and y be expressions in the program (consisting of an arbitrary mix of variables, constants, functions and operators). For a binary operation x ∘ y, the provenance R(x ∘ y) of the result is defined to be R(x ∘ y) = R(x) ∪ R(y). Similarly, for a unary operation, the provenance R(∘x) = R(x). For assignments, x = y ⇒ R(x) = R(y). For a function, R(f(x, y, ...)) may be computed by examining the expressions within f; a worst-case approximation is R(f(x, y, ...)) = R(x) ∪ R(y) ∪ ···. A few special cases are also worth noting. Strictly speaking, the previous rules track a superset of provenance information because some functions and operations are constant for certain inputs. In the case of multiplication, x · 0 = 0, so R(x · 0) = {}. Accounting for this gives tighter provenances, implying, for example, that special considerations apply to sparse linear algebra.
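These rules can be implemented almost verbatim by overloading arithmetic on a wrapper class. The Python sketch below is illustrative (the paper's implementation overloads MATLAB operators); the class name and the string-valued variable labels are mine. Note the multiplication special case: a constant zero erases the other operand from the provenance.

```python
class PV:
    """Provenance-tracking value: a number plus the set of variables
    (here, string names) whose values it depends on."""
    def __init__(self, value, prov=()):
        self.value = value
        self.prov = frozenset(prov)

    @staticmethod
    def _lift(other):
        # Constants get an empty provenance.
        return other if isinstance(other, PV) else PV(other)

    def __add__(self, other):
        o = PV._lift(other)
        return PV(self.value + o.value, self.prov | o.prov)

    __radd__ = __add__

    def __mul__(self, other):
        o = PV._lift(other)
        # Special case from the text: x * 0 == 0 when the zero is a
        # constant, so the result cannot depend on x at all.
        if (self.value == 0 and not self.prov) or (o.value == 0 and not o.prov):
            return PV(0, ())
        return PV(self.value * o.value, self.prov | o.prov)

    __rmul__ = __mul__

x = PV(3.0, {"x"})
y = PV(4.0, {"y"})
z = x * 0 + y * 5     # the constant zero erases x from the provenance
print(z.value, sorted(z.prov))  # 20.0 ['y']
```

This is exactly the sparse-linear-algebra observation: multiplying a row of random values by a mostly-zero constant vector leaves only the nonzero positions in the result's provenance.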
In the case of probabilistic programming, recall that random variables (or ERPs) are represented as stochastic functions f_i that accept parameters θ_i. Whenever a random variable is conditioned, the output of the corresponding f_i is fixed; thus, while the likelihood of a particular output of f_i depends on θ_i, the specific output of f_i does not. For the purposes of inference, therefore, R(f_i(θ_i)) = {}.
4.2 Using Provenance Tracking as Part of Inference
Provenance information could be used in many ways. Here, we illustrate one use: to help construct good block proposals for MH inference. Our basic idea is to construct a good global proposal by starting with a random global proposal (which is unlikely to be good) and then inhibiting the bad parts. We do this by allowing each element of the likelihood to "vote" on which proposals seemed good. This can be considered a factored version of a multiple-try MCMC algorithm [16].
The algorithm is shown in Fig. 3. Let x^O be the starting state. In step (2), we propose a new state x^{O'}. This new state changes many ERPs at once, and is unlikely to be good (for the proof, we require that x^{O'}_i ≠ x^O_i for all i). In step (3), we accept or reject each element of the proposal based on a function α. Our choice of α (Fig. 3, left) uses PT, as we explain below. In step (4) we construct a new proposal x^M by "mixing" two states: we set the variables in the accepted set A to the values of x^{O'}_i, and we leave the variables in the rejected set R at their original values in x^O. In steps (5-6) we compute the forward probabilities. In steps (7-8) we sample one possible path backwards from x^M to x^O, with the relevant probabilities. Finally, in step (9) we accept or reject the overall proposal.
We use α(x^O, x^{O'}) to allow the likelihood to "vote" in a fine-grained way for which proposals seemed good and which seemed bad. To do this, we compute p(x^O) using PT to track how each x^O_i influences the overall likelihood p(x^O). Let D(i; x^O) denote the "descendants" of variable x^O_i, defined as all ERPs whose likelihood x^O_i impacted. We also use PT to compute p(x^{O'}), again tracking dependents D(i; x^{O'}), and let D(i) be the joint set of ERPs that x_i influences in either state x^O or x^{O'}. We then use D(i), p(x^O) and p(x^{O'}) to estimate the amount by which each constituent element x^{O'}_i in the proposal changed the likelihood. We assign "credit" to each i as if it were the only proposal; that is, we assume that if, for example, the likelihood went up, it was entirely due to the change in x^{O'}_i. Of course, the variables' effects are not truly independent; this is a fully-factored approximation to those effects. The final α is shown in Fig. 3 (left), where we define p(x_{D(i)}) to be the likelihood of only the subset of variables that x_i impacted.
Here, we prove that our algorithm is valid MCMC by following [16] and showing detailed balance. To do this, we must integrate over all possible rejected paths of the negative bits x^O_R and x^{M'}_R:

$$
p(x^O)\,P(x^M|x^O) = p(x^O)\int_{x^O_R}\int_{x^{M'}_R} Q^O_A Q^O_R P^M_A P^M_R\, Q^{M'}_R \min\left\{1,\; \frac{p(x^M)\,Q^{M'}_A P^{M'}_A P^{M'}_R}{p(x^O)\,Q^O_A P^M_A P^M_R}\right\}
$$
$$
= \int_{x^O_R}\int_{x^{M'}_R} Q^O_R\, Q^{M'}_R \min\left\{\, p(x^O)\,Q^O_A P^M_A P^M_R,\;\; p(x^M)\,Q^{M'}_A P^{M'}_A P^{M'}_R \,\right\}
$$
$$
= p(x^M)\int_{x^O_R}\int_{x^{M'}_R} Q^{M'}_A Q^{M'}_R P^{M'}_A P^{M'}_R\, Q^O_R \min\left\{1,\; \frac{p(x^O)\,Q^O_A P^M_A P^M_R}{p(x^M)\,Q^{M'}_A P^{M'}_A P^{M'}_R}\right\}
$$
$$
= p(x^M)\,P(x^O|x^M)
$$

where the subtlety to the equivalence is that the rejected bits x^O_R and x^{M'}_R have switched roles.
Alg. 3: Factored Multiple-Try MH
1: Begin in state x^O. Assume it is composed of individual ERPs x^O = x^O_1, ..., x^O_k.
2: Propose a new state for many ERPs. For i = 1, ..., k, propose x^{O'}_i ~ Q(x^{O'}|x^O) s.t. x^{O'}_i ≠ x^O_i.
3: Decide to accept or reject each element of x^{O'}. This test can depend arbitrarily on x^O and x^{O'}, but must decide for each ERP independently; let α_i(x^O, x^{O'}) be the probability of accepting x^{O'}_i. Let A be the set of indices of accepted proposals, and R the set of rejected ones.
4: Construct a new state, x^M = {x^{O'}_i : i ∈ A} ∪ {x^O_j : j ∈ R}. This new state mixes new values for the ERPs from the accepted set A and old values for the ERPs in the rejected set R.
5: Let P^M_A = ∏_{i∈A} α_i(x^O, x^{O'}) be the probability of accepting the ERPs in A, and let P^M_R = ∏_{j∈R} (1 − α_j(x^O, x^{O'})) be the probability of rejecting the ERPs in R.
6: Let Q^O_A = ∏_{i∈A} Q(x^{O'}_i | x^O) and Q^O_R = ∏_{j∈R} Q(x^{O'}_j | x^O).
7: Construct a new state x^{M'}. Propose new values for all of the rejected ERPs using x^M as the start state, but leave ERPs in the accepted set at their original value. For j ∈ R let x^{M'}_j ~ Q(·|x^M). Then, x^{M'} = {x^M_i : i ∈ A} ∪ {x^{M'}_j : j ∈ R}.
8: Let P^{M'}_A = ∏_{i∈A} α_i(x^M, x^{M'}), and let P^{M'}_R = ∏_{j∈R} (1 − α_j(x^M, x^{M'})).
9: Accept x^M with probability min{1, (p(x^M) Q^{M'}_A P^{M'}_A P^{M'}_R) / (p(x^O) Q^O_A P^M_A P^M_R)}.

Alg. 4: A PT-based Acceptance Test
The PT algorithm implements α_i(x, x').
1: Compute p(x), tracking D(x_i; x)
2: Compute p(x'), tracking D(x_i; x')
3: Let D(i) = D(x_i; x) ∪ D(x_i; x')
4: Let α_i(x, x') = min{1, (p(x'_i) p(x'_{D(i)})) / (p(x_i) p(x_{D(i)}))}

Figure 3: The factored multiple-try MH algorithm (top), the PT-based acceptance test (left) and an illustration of the process of mixing together different elementary proposals (right), showing how accepted and rejected individual ERPs from the start, proposed, and mixed states combine along the reverse path.
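The per-element acceptance test of Alg. 4 is easy to sketch in code. Below is an illustrative Python version that works in log space; the dictionaries mapping variable indices to log-likelihood contributions, and the example provenance sets, are mine (in the real system these would come from the PT interpretation of the program).

```python
import math

def alpha_i(i, lp_old, lp_new, D):
    """Alg. 4 acceptance probability for element i: compare the likelihood
    of x_i and of the ERPs it impacted (its descendants D[i]) in the old
    and proposed states. lp_old / lp_new map index -> log-likelihood term."""
    old = lp_old[i] + sum(lp_old[j] for j in D[i])
    new = lp_new[i] + sum(lp_new[j] for j in D[i])
    return min(1.0, math.exp(new - old))

# Two proposed variables; provenance says variable 0 impacts ERP 2 only,
# while variable 1 impacts nothing besides itself.
D = {0: {2}, 1: set()}
lp_old = {0: -1.0, 1: -1.0, 2: -5.0}
lp_new = {0: -1.2, 1: -4.0, 2: -0.5}   # var 0 improved its descendants

print(alpha_i(0, lp_old, lp_new, D))        # 1.0: its part of the likelihood rose
print(alpha_i(1, lp_old, lp_new, D) < 0.1)  # True: var 1 hurt, so it is inhibited
```

This is the "voting" step: element 0 is almost surely kept in the accepted set A, element 1 is almost surely pushed into the rejected set R, and Alg. 3 then mixes the two sets into the block proposal x^M.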
4.3 Experiments and Results
We implemented provenance tracking in Stochastic MATLAB [28] by leveraging MATLAB's object-oriented capabilities, which provide full operator overloading. We tested on four tasks: a Bayesian "mesh induction" task, a small QMR problem, probabilistic matrix factorization [23] and an integer-valued variant of PMF. We measured performance by examining likelihood as a function of wallclock time; an important property of the provenance tracking algorithm is that it can help mitigate constant factors affecting inference performance.
Bayesian mesh induction. The BMI task is simple: given a prior distribution over meshes and a target image, sample a mesh which, when rendered, looks like the target image. The prior is a Gaussian centered around a "mean mesh," which is a perfect sphere; Gaussian noise is added to each vertex to deform the mesh. The model is shown in Alg. 5. The rendering function is a custom OpenGL renderer implemented as a MEX function. No gradients are available for this renderer, but it is reasonably easy to augment it with provenance information recording vertices of the triangle that were responsible for each pixel. This allows us to make proposals to mesh vertices, while assigning credit based on pixel likelihoods.

Alg. 5: Bayesian Mesh Induction
1: function X = bmi( base_mesh )
2:   mesh = base_mesh + randn;
3:   img  = render( mesh );
4:   X = img + randn;
5: end;
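For readers unfamiliar with the MATLAB idiom of Alg. 5, here is an illustrative Python transliteration. The `render` stub below is my stand-in (a simple smoothing pass over a 1-D "mesh" of vertex values); the paper's renderer is a real OpenGL MEX function, and the provenance augmentation would record which triangle vertices produced each pixel.

```python
import random

random.seed(0)

def render(mesh):
    """Stand-in for the OpenGL renderer: a circular smoothing pass.
    A real renderer would also record, per pixel, which mesh vertices
    were responsible for it -- the provenance used for credit assignment."""
    n = len(mesh)
    return [(mesh[i - 1] + mesh[i]) / 2.0 for i in range(n)]

def bmi(base_mesh):
    # mesh = base_mesh + randn   (Gaussian prior around the mean mesh)
    mesh = [v + random.gauss(0.0, 1.0) for v in base_mesh]
    img = render(mesh)
    # X = img + randn            (per-pixel observation noise)
    X = [p + random.gauss(0.0, 1.0) for p in img]
    return X, mesh

base = [1.0] * 8                 # the "mean mesh" (a sphere in the paper)
image, mesh = bmi(base)
print(len(image) == len(base))   # True: one pixel per mesh element here
```

The point of the experiment is that single-variable proposals force a full re-render per likelihood evaluation, whereas provenance lets a single render score a global proposal element by element.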
Results for this task are shown in Fig. 4 ("Face"). Even though the renderer is quite fast, MCMC with simple proposals fails: after proposing a change to a single variable, it must re-render the image in order to compute the likelihood. In contrast, making large, global proposals is very effective. Fig. 4 (top) shows a sequence of images representing burn-in of the model as it starts from the initial condition and samples its way towards regions of high likelihood. A video demonstrating the results is available at http://www.mit.edu/~wingated/papers/index.html.
[Figure 4 plots: log likelihood vs. time (seconds) for the Face, QMR, PMF, and Integer PMF tasks, with the input and target images for the face task shown above.]
Figure 4: Top: Frames from the face task. Bottom: results on Face, QMR, PMF and Integer PMF.
QMR. The QMR model is a bipartite, binary model relating diseases (hidden) to symptoms (observed) using a log-linear noisy-or model. Base rates on diseases can be quite low, so "explaining away" can cause poor mixing. Here, MCMC with provenance tracking is effective: it finds high-likelihood solutions quickly, again outperforming naive MCMC.
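As a reminder of the likelihood structure, a log-linear noisy-or symptom probability can be sketched as below. This is an illustrative Python fragment, not the paper's QMR instance; the leak and weight values are arbitrary.

```python
import math

def p_symptom(leak, weights, diseases):
    """Log-linear noisy-or: each active disease independently fails to
    cause the symptom with probability exp(-w), plus a leak term."""
    activation = leak + sum(w for w, d in zip(weights, diseases) if d)
    return 1.0 - math.exp(-activation)

w = [2.0, 0.5]
p_active = p_symptom(0.01, w, [1, 0])   # disease 0 present
p_none = p_symptom(0.01, w, [0, 0])     # no diseases present
print(p_active > p_none)  # True: an active parent raises the symptom probability
```

Because each symptom's likelihood term depends only on the diseases that feed into it, provenance tracking recovers exactly the disease-to-symptom edges, which is what lets the factored proposal assign credit per disease despite explaining-away correlations in the posterior.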
Probabilistic Matrix Factorization. For the PMF task, we factored a matrix A ∈ R^{1000×1000} with 99% sparsity. PMF places a Gaussian prior over two matrices, U ∈ R^{1000×10} and V ∈ R^{1000×10}, for a total of 20,000 parameters. The model assumes that A_ij ~ N(U_i V_j^T, 1). In Fig. 4, we see that MCMC with provenance tracking is able to find regions of much higher likelihood much more quickly than naive MCMC. We also compared to an efficient hand-coded MCMC sampler which is capable of making, scoring and accepting/rejecting about 20,000 proposals per second. Interestingly, MCMC with provenance tracking is more efficient than the hand-coded sampler, presumably because of the economies of scale that come with making global proposals.
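The PMF likelihood over the observed entries can be written compactly; the sketch below is an illustrative pure-Python version on a tiny instance (the paper's instance is 1000×1000 with rank 10, and the observed-entry pattern here is mine). Note that each observed entry A_ij depends only on row U_i and row V_j, which is precisely the sparse provenance structure that the tracked multiplication rule exposes.

```python
import random

random.seed(0)

def pmf_loglik(U, V, obs, sigma2=1.0):
    """Sum of log N(A_ij | U_i . V_j, sigma2) terms over observed entries,
    up to an additive constant."""
    ll = 0.0
    for (i, j), a in obs.items():
        mean = sum(u * v for u, v in zip(U[i], V[j]))
        ll += -0.5 * (a - mean) ** 2 / sigma2
    return ll

# Tiny instance: 3x3 matrix, rank 2, three observed entries (99% of the
# full matrix is missing in the paper's setup; here most entries are too).
U = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]
V = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]
obs = {(0, 0): 1.0, (1, 2): -0.5, (2, 1): 2.0}
print(pmf_loglik(U, V, obs) <= 0.0)  # True: a sum of negative quadratic terms
```

A global proposal touching all of U and V can be scored with one pass over the observed entries, while per-variable proposals pay the scoring overhead 20,000 times per sweep, which is the "economy of scale" observed in Fig. 4.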
Integer Probabilistic Matrix Factorization. The Integer PMF task is like ordinary PMF, except that every entry in U and V is constrained to be an integer between 1 and 10. These constraints imply that no gradients exist. Empirically, this does not seem to matter for the efficiency of the algorithm relative to standard MCMC: in Fig. 4 we again see dramatic performance improvements over the baseline Stochastic MATLAB sampler and the hand-coded sampler.
5 Conclusions
We have shown how nonstandard interpretations of probabilistic programs can be used to extract structural information about a distribution, and how this information can be used as part of a variety of inference algorithms. The information can take the form of gradients, Hessians, fine-grained dependencies, or bounds. Empirically, we have implemented two such interpretations and demonstrated how this information can be used to find regions of high likelihood quickly, and how it can be used to generate samples with improved statistical properties versus random-walk style MCMC. There are other types of interpretations which could provide additional information. For example, interval arithmetic [22] could be used to provide bounds or as part of adaptive importance sampling. Each of these interpretations can be used alone or in concert with each other; one of the advantages of the probabilistic programming framework is the clean separation of models and inference algorithms, making it easy to explore combinations of inference algorithms for complex models. More generally, this work begins to illuminate the close connections between probabilistic inference and programming language theory. It is likely that other techniques from compiler design and program analysis could be fruitfully applied to inference problems in probabilistic programs.
Acknowledgments
DW was supported in part by AFOSR (FA9550-07-1-0075) and by Shell Oil, Inc. NDG was supported in part by ONR (N00014-09-1-0124) and a J. S. McDonnell Foundation Scholar Award.
JMS was supported in part by NSF (CCF-0438806), by NRL (N00173-10-1-G023), and by ARL
(W911NF-10-2-0060). All views expressed in this paper are the sole responsibility of the authors.
References
[1] C. Bendtsen and O. Stauning. FADBAD, a flexible C++ package for automatic differentiation. Technical Report IMM-REP-1996-17, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark, Aug. 1996.
[2] C. H. Bischof, A. Carle, G. F. Corliss, A. Griewank, and P. D. Hovland. ADIFOR: Generating derivative codes from Fortran programs. Scientific Programming, 1(1):11-29, 1992.
[3] G. Corliss, C. Faure, A. Griewank, L. Hascoët, and U. Naumann. Automatic Differentiation: From Simulation to Optimization. Springer-Verlag, New York, NY, 2001.
[4] J. Eisner, E. Goldlust, and N. A. Smith. Compiling comp ling: Weighted dynamic programming and the Dyna language. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT-EMNLP), pages 281-290, Vancouver, October 2005.
[5] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Statist. Soc. B, 73(2):123-214, 2011.
[6] N. Goodman, V. Mansinghka, D. Roy, K. Bonawitz, and J. Tenenbaum. Church: a language for generative models. In Uncertainty in Artificial Intelligence (UAI), 2008.
[7] A. Griewank. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Number 19 in Frontiers in Applied Mathematics. SIAM, 2000.
[8] A. Griewank, D. Juedes, and J. Utke. ADOL-C, a package for the automatic differentiation of algorithms written in C/C++. ACM Trans. Math. Software, 22(2):131-167, 1996.
[9] E. Herbst. Gradient and Hessian-based MCMC for DSGE models (job market paper), 2010.
[10] K. Kersting and L. D. Raedt. Bayesian logic programming: Theory and tool. In L. Getoor and B. Taskar, editors, An Introduction to Statistical Relational Learning. MIT Press, 2007.
[11] O. Kiselyov and C. Shan. Embedded probabilistic programming. In Domain-Specific Languages, pages 360-384, 2009.
[12] Y. LeCun and L. Bottou. Lush reference manual. Technical report, 2002. URL http://lush.sourceforge.net.
[13] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic models with unknown objects. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1352-1359, 2005.
[14] M. B. Monagan and W. M. Neuenschwander. GRADIENT: Algorithmic differentiation in Maple. In International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 68-76, July 1993.
[15] R. M. Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo (Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, Eds.), 2010.
[16] S. Pandolfi, F. Bartolucci, and N. Friel. A generalization of the multiple-try Metropolis algorithm for Bayesian estimation and model selection. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[17] B. A. Pearlmutter and J. M. Siskind. Lazy multivariate higher-order forward-mode AD. In Symposium on Principles of Programming Languages (POPL), pages 155-160, 2007. doi: 10.1145/1190215.1190242.
[18] A. Pfeffer. IBAL: A probabilistic rational programming language. In International Joint Conference on Artificial Intelligence (IJCAI), pages 733-740. Morgan Kaufmann Publ., 2001.
[19] Y. Qi and T. P. Minka. Hessian-based Markov chain Monte-Carlo algorithms (unpublished manuscript), 2002.
[20] P. J. Rossky, J. D. Doll, and H. L. Friedman. Brownian dynamics as smart Monte Carlo simulation. Journal of Chemical Physics, 69:4628-4633, 1978.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
[22] S. Rump. INTLAB - INTerval LABoratory. In Developments in Reliable Computing, pages 77-104. Kluwer Academic Publishers, Dordrecht, 1999.
[23] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Neural Information Processing Systems (NIPS), 2008.
[24] J. M. Siskind and B. A. Pearlmutter. First-class nonstandard interpretations by opening closures. In Symposium on Principles of Programming Languages (POPL), pages 71-76, 2007. doi: 10.1145/1190216.1190230.
[25] B. Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, Jan. 1980.
[26] B. Taylor. Methodus Incrementorum Directa et Inversa. London, 1715.
[27] R. E. Wengert. A simple automatic derivative evaluation program. Commun. ACM, 7(8):463-464, 1964.
[28] D. Wingate, A. Stuhlmueller, and N. D. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
3,655 | 431 | A Comparative Study of the Practical
Characteristics of Neural Network and
Conventional Pattern Classifiers
Richard P. Lippmann
Lincoln Laboratory, MIT
Lexington, MA 02173-9108
Kenney Ng
BBN Systems and Technologies
Cambridge, MA 02138
Abstract
Seven different pattern classifiers were implemented on a serial computer
and compared using artificial and speech recognition tasks. Two neural
network (radial basis function and high order polynomial GMDH network)
and five conventional classifiers (Gaussian mixture, linear tree, K nearest
neighbor, KD-tree, and condensed K nearest neighbor) were evaluated.
Classifiers were chosen to be representative of different approaches to pattern classification and to complement and extend those evaluated in a
previous study (Lee and Lippmann, 1989). This and the previous study
both demonstrate that classification error rates can be equivalent across
different classifiers when they are powerful enough to form minimum error decision regions, when they are properly tuned, and when sufficient
training data is available. Practical characteristics such as training time,
classification time, and memory requirements, however, can differ by orders of magnitude. These results suggest that the selection of a classifier
for a particular task should be guided not so much by small differences in
error rate, but by practical considerations concerning memory usage, computational resources, ease of implementation, and restrictions on training
and classification times.
1 INTRODUCTION
Few studies have compared practical characteristics of adaptive pattern classifiers
using real data. There has frequently been an over-emphasis on back-propagation
classifiers and artificial problems and a focus on classification error rate as the main
performance measure. No study has compared the practical trade-offs in training
time, classification time, memory requirements, and complexity provided by the
many alternative classifiers that have been developed (e.g. see Lippmann 1989).
The purpose of this study was to better understand and explore practical characteristics of classifiers not included in a previous study (Lee and Lippmann, 1989; Lee
1989). Seven different neural network and conventional pattern classifiers were evaluated. These included radial basis function (RBF), high order polynomial GMDH
network, Gaussian mixture, linear decision tree, K nearest neighbor (KNN), KD
tree, and condensed K nearest neighbor (CKNN) classifiers. All classifiers were
implemented on a serial computer (Sun 3-110 Workstation with FPA) and tested
using a digit recognition task (7 digits, 22 cepstral inputs, 16 talkers, 70 training
and 112 testing patterns per talker), a vowel recognition task (10 vowels, 2 formant
frequency inputs, 67 talkers, 338 training and 333 testing patterns), and two artificial tasks with two input dimensions that require either a single convex or two
disjoint decision regions. Tasks are as in (Lee and Lippmann, 1989) and details of
experiments are described in (Ng, 1990).
2 TUNING EXPERIMENTS
Internal parameters or weights of classifiers were determined using training data.
Global free parameters that provided low error rates were found experimentally
using cross-validation and the training data or by using test data. Global parameters
included an overall basis function width scale factor for the RBF classifier, order
of nodal polynomials for the GMDH network, and number of nearest neighbors for
the KNN, KD tree, and CKNN classifiers.
Experiments were also performed to match the complexity of each classifier to that
of the training data. Many classifiers exhibit a characteristic divergence between
training and testing error rates as a function of their complexity. Poor performance
results when a classifier is too simple to model the complexity of training data
and also when it is too complex and "over-fits" the training data. Cross-validation
and statistical techniques were used to determine the correct size of the linear tree
and GMDH classifiers where training and test set error rates diverged substantially.
An information theoretic measure (Predicted Square Error) was used to limit the
complexity of the GMDH classifier. This classifier was allowed to grow by adding
layers and widening layers to find the number of layers and the layer width which
minimized predicted square error. Nodes in the linear tree were pruned using 10-fold cross-validation and a simple statistical test to determine the minimum size tree
that provides good performance. Training and test set error rates did not diverge
for the RBF and Gaussian mixture classifiers. Test set performance was thus used
to determine the number of Gaussian centers for these classifiers.
A new multi-scale radial basis function classifier was developed. It has multiple
radial basis functions centered on each basis function center with widths that vary
over 1 1/2 orders of magnitude. Multi-scale RBF classifiers provided error rates
that were similar to those of more conventional RBF classifiers but eliminated the
need to search for a good value of the global basis function width scale factor.
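A multi-scale RBF layer of the kind described above is easy to sketch. The following is an illustrative reconstruction, not the authors' implementation: the Gaussian form, the number of scales, and the width factor (chosen so the widths span roughly 1 1/2 orders of magnitude) are all assumptions for the example.

```python
import math

def multiscale_rbf_features(x, centers, base_width, n_scales=4, factor=10 ** 0.5):
    """Gaussian basis responses with n_scales widths per center; with
    factor = sqrt(10) and n_scales = 4 the widths span a factor of
    10**1.5 ~ 31.6, i.e. about 1 1/2 orders of magnitude."""
    feats = []
    for c in centers:
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))  # squared distance to center
        for s in range(n_scales):
            w = base_width * factor ** s
            feats.append(math.exp(-d2 / (2.0 * w * w)))
    return feats

# Two centers, four widths each: eight features per input point.
feats = multiscale_rbf_features([0.0, 0.0], [[0.0, 0.0], [1.0, 1.0]], base_width=0.5)
```

Each center then contributes one feature per width, so no single global width scale factor has to be tuned.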
The CKNN classifier used in this study was also new. It was developed to reduce
memory requirements and dependency on training data order. In the more conventional CKNN classifier, training patterns are presented sequentially and classified
using a KNN rule. Patterns are stored as exemplars only if they are classified incorrectly. In the new CKNN classifier, this conventional CKNN training procedure
is repeated N times with different orderings of the training patterns. All exemplar
patterns stored using any ordering are combined into a new reduced set of training
patterns which is further pruned by using it as training data for a final pass of
conventional CKNN training. This approach typically required less memory than
a KNN or a conventional CKNN classifier. Other experiments described in (Chang
and Lippmann, 1990) demonstrate how genetic search algorithms can further reduce
KNN classifier memory requirements.
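The improved condensing procedure lends itself to a compact sketch. This is a schematic reconstruction rather than the authors' code; the 1-NN classification rule, the squared Euclidean distance, and the number of orderings are illustrative choices.

```python
import random

def condense_once(patterns, labels):
    """One conventional condensing pass: present patterns sequentially and keep
    a pattern as an exemplar only if the exemplars collected so far
    misclassify it (1-NN rule, squared Euclidean distance)."""
    exemplars, ex_labels = [], []
    for x, y in zip(patterns, labels):
        if exemplars:
            nearest = min(range(len(exemplars)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(exemplars[i], x)))
            if ex_labels[nearest] == y:
                continue  # correctly classified: do not store
        exemplars.append(list(x))
        ex_labels.append(y)
    return exemplars, ex_labels

def improved_cknn(patterns, labels, n_orderings=5, seed=0):
    """Repeat condensing under several random orderings, pool the exemplars,
    then prune the pool with one final condensing pass."""
    rng = random.Random(seed)
    idx = list(range(len(patterns)))
    pool = {}
    for _ in range(n_orderings):
        rng.shuffle(idx)
        ex, ey = condense_once([patterns[i] for i in idx],
                               [labels[i] for i in idx])
        for x, y in zip(ex, ey):
            pool[tuple(x)] = y
    pooled = [list(x) for x in pool]
    return condense_once(pooled, [pool[tuple(x)] for x in pooled])

# Toy data: two well-separated 1-D clusters; only a few exemplars survive.
xs = [[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]]
ys = [0, 0, 0, 1, 1, 1]
exemplars, exemplar_labels = improved_cknn(xs, ys)
```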
[Figure 1 appears here: two vowel-problem decision-region plots, panels (A) and (B), with F1 (Hz) on the horizontal axis and F2 (Hz) on the vertical axis.]
Figure 1: Decision Regions Created by (A) RBF and (B) GMDH Classifiers for the
Vowel Problem.
3 DECISION REGIONS
Classifiers differ not only in their structure and training but also in how decision
regions are formed. Decision regions formed by the RBF classifier for the vowel
problem are shown in Figure 1A. Boundaries are smooth spline-like curves that
can form arbitrarily complex regions. This improves generalization for many real
problems where data for different classes form one or more roughly ellipsoidal clusters. Decision regions for the high-order polynomial (GMDH) network classifier are
shown in Figure lB. Decision region boundaries are smooth and well behaved only
in regions of the input space that are densely sampled by the training data. Decision
boundaries are erratic in regions where there is little training data due to the high
polynomial order of the discriminant functions formed by the GMDH classifier. As
a result, the GMDH classifier generalizes poorly in regions with little training data.
Decision boundaries for the linear tree classifier are hyperplanes. This classifier may
also generalize poorly if data is in ellipsoidal clusters.
4 ERROR RATES
Figure 2 shows the classification (test set) error rates for all classifiers on the bullseye, disjoint, vowel, and digit problems. The solid line in each plot represents the
[Figure 2 appears here: four panels of per-classifier test-set error rates (%), one for each of the bullseye, disjoint, vowel, and digit problems.]
Figure 2: Test Data Error Rates for All Classifiers and All Problems.
mean test set error rate across all the classifiers for that problem. The shaded regions represent one
binomial standard deviation, σ, above and below. The binomial standard deviation was calculated as
σ = sqrt( ε(1 − ε) / N ), where ε is the estimated average problem test set error rate and N is the
number of test patterns for each problem. The shaded region gives a rough measure of the range of
expected statistical fluctuation if the error rates for different classifiers are identical. A more detailed
statistical analysis of the test set error rates for classifiers was performed using McNemar's
significance test. At a significance level of α = 0.01, the error rates of the different classifiers on the
bullseye, disjoint, and vowel problems do not differ significantly from each other.
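The shaded band is cheap to compute. A minimal sketch, with illustrative numbers rather than the paper's exact figures:

```python
import math

def binomial_std(error_rate, n_test):
    """One binomial standard deviation of an error-rate estimate based on
    n_test test patterns: sqrt(e * (1 - e) / N)."""
    return math.sqrt(error_rate * (1.0 - error_rate) / n_test)

# Illustrative: a 10% mean error rate estimated on 333 test patterns (the size
# of the vowel test set) gives a band of about +/- 1.6 percentage points.
sigma = binomial_std(0.10, 333)
band = (0.10 - sigma, 0.10 + sigma)
```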
Performance on the more difficult digit problem, however, did differ significantly
across classifiers. This problem has very little training data (10 training patterns
per class) and high dimensional inputs (an input dimension of 22). Some classifiers,
including the RBF and Gaussian mixture classifiers, were able to achieve very low
error rates on this problem and generalize well even in this high dimensional space
with little training data. Other classifiers, including the multi-scale RBF, KDtree, and CKNN classifiers, provided intermediate error rates. The GMDH network
classifier and the linear tree classifier provided high error rates.
The linear tree classifier performed poorly on the digit problem because there is
not enough training data to sample the input space densely enough for the training
algorithm to form decision boundaries that can generalize well. The poor performance of the GMDH network classifier is due, in part, to the inability of the GMDH
network classifier to extrapolate well to regions with little training data.
5 PERFORMANCE TRADE-OFFS
Although differences in the error rates of most classifiers are small, differences in
practical performance characteristics are often large. For example, on the vowel
problem, although both the Gaussian mixture and KD tree classifiers perform well,
the Gaussian mixture classifier requires 20 times less classification memory than the
KD tree classifier, but takes 10 times longer to train.
[Figure 3 appears here: scatter plots of training time (CPU seconds) versus classification memory (bytes) for each classifier, panels (A) vowel and (B) digit.]
Figure 3: Training Time Versus Classification Memory Usage For All Classifiers On
The (A) Vowel And (B) Digit Problems.
Figure 3 shows the relationship between training time (in CPU seconds measured on
a Sun 3/110 with FPA) and classification memory usage (in bytes) for the different
classifiers on the vowel and digit problems. On these problems, the KNN and KDtree classifiers train quickly, but require large amounts of memory. The Gaussian
mixture (GMIX) and linear tree (L-TREE) classifiers use little memory, but require
more training time. The RBF and CKNN classifiers have intermediate memory and
training time requirements. Due to the extra basis functions, the multiscale RBF
(RBF-MS) classifier requires more training time and memory than the conventional
RBF classifier. The GMDH classifier has intermediate memory requirements, but
takes the longest to train. On average, the GMDH classifier takes 10 times longer
to train than the RBF classifier, and 100 times longer than the KD tree classifier.
In general, classifiers that use little memory require long training times, while those
that train rapidly are not memory efficient.
Figure 4 shows the relationship between classification time (in CPU milliseconds
[Figure 4 appears here: scatter plots of classification time (CPU milliseconds per pattern) versus classification memory (bytes) for each classifier, panels (A) vowel and (B) digit.]
Figure 4: Classification Time Versus Classification Memory Usage For All Classifiers
On The (A) Vowel And (B) Digit Problems.
for one pattern) and classification memory usage (in bytes) for the different classifiers on the vowel and digit problems. At one extreme, the linear tree classifier
requires very little memory and classifies almost instantaneously. At the other, the
GMDH classifier takes the longest to classify and requires a large amount of memory. Gaussian mixture and RBF classifiers are intermediate. On the vowel problem,
the CKNN and the KD tree classifiers are faster than the conventional KNN classifier. On the digit problem, the CKNN classifier is faster than both the KD tree
and KNN classifiers because of the greatly reduced number of stored patterns (15
out of 70). The speed up in search provided by the KD tree is greatly reduced for
the digit problem due to the increase in input dimensionality. In general, the trend
is for classification time to be proportional to the amount of classification memory.
It is important to note, however, that trade-offs in performance characteristics depend on the particular problem and can vary for different implementations of the
classifiers.
6 SUMMARY
Seven different neural network and conventional pattern classifiers were compared
using artificial and speech recognition tasks. High order polynomial GMDH classifiers typically provided intermediate error rates and often required long training
times and large amounts of memory. In addition, the decision regions formed did
not generalize well to regions of the input space with little training data. Radial basis function classifiers generalized well in high dimensional spaces, and provided low
error rates with training times that were much less than those of back-propagation
classifiers (Lee and Lippmann, 1989). Gaussian mixture classifiers provided good
performance when the numbers and types of mixtures were selected carefully to
model class densities well. Linear tree classifiers were the most computationally efficient but
performed poorly with high dimensionality inputs and when the number
of training patterns was small. KD-tree classifiers reduced classification time by a
factor of four over conventional KNN classifiers for low 2-input dimension problems.
They provided little or no reduction in classification time for high 22-input dimension problems. Improved condensed KNN classifiers reduced memory requirements
over conventional KNN classifiers by a factor of two to fifteen for all problems,
without increasing the error rate significantly.
7 CONCLUSION
This and a previous study (Lee and Lippmann, 1989) explored the performance of 18
neural network, AI, and statistical pattern classifiers. Both studies demonstrated
the need to carefully select and tune global parameters and the need to match
classifier complexity to that of the training data using cross-validation and/or information theoretic approaches. Two new variants of existing classifiers (multi-scale
RBF and improved versions of the CKNN classifier) were developed as part of this
study. Classification error rates on speech problems in both studies were equivalent with most classifiers when classifiers were powerful enough to form minimum
error decision regions, when sufficient training data was available, and when classifiers were carefully tuned. Practical classifier characteristics including training
time, classification time, and memory usage, however, differed by orders of magnitude. These results suggest that the selection of a classifier for a particular task
should be guided not so much by small differences in error rate, but by practical
considerations concerning memory usage, ease of implementation, computational
resources, and restrictions on training and classification times. Researchers should
take time to understand the wide range of classifiers that are available and the
practical tradeoffs that these classifiers provide.
Acknowledgements
This work was sponsored by the Air Force Office of Scientific Research and the Department
of the Air Force.
References
Eric I. Chang and Richard P. Lippmann. Using Genetic Algorithms to Improve Pattern
Classification Performance. In Lippmann, R., Moody, J., Touretzky, D., (Eds.) Advances
in Neural Information Processing Systems 3, 1990.
Yuchun Lee. Classifiers: Adaptive modules in pattern recognition systems. Master's
Thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and
Computer Science, Cambridge, MA, May 1989.
Yuchun Lee and R. P. Lippmann. Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems. In D. Touretzky (Ed.)
Advances in Neural Information Processing Systems 2, 168-177, 1989.
R. P. Lippmann. Pattern Classification Using Neural Networks. IEEE Communications
Magazine, 27(11):47-64, 1989.
Kenney Ng. A Comparative Study of the Practical Characteristics of Neural Network and
Conventional Pattern Classifiers. Master's Thesis, Massachusetts Institute of Technology,
Department of Electrical Engineering and Computer Science, Cambridge, MA, May 1990.
Boosting with Maximum Adaptive Sampling
François Fleuret
Idiap Research Institute
[email protected]
Charles Dubout
Idiap Research Institute
[email protected]
Abstract
Classical Boosting algorithms, such as AdaBoost, build a strong classifier without
concern about the computational cost. Some applications, in particular in computer vision, may involve up to millions of training examples and features. In
such contexts, the training time may become prohibitive. Several methods exist
to accelerate training, typically either by sampling the features, or the examples,
used to train the weak learners. Even if those methods can precisely quantify the
speed improvement they deliver, they offer no guarantee of being more efficient
than any other, given the same amount of time.
This paper aims at shading some light on this problem, i.e. given a fixed amount
of time, for a particular problem, which strategy is optimal in order to reduce
the training loss the most. We apply this analysis to the design of new algorithms which estimate on the fly at every iteration the optimal trade-off between
the number of samples and the number of features to look at in order to maximize
the expected loss reduction. Experiments in object recognition with two standard
computer vision data-sets show that the adaptive methods we propose outperform
basic sampling and state-of-the-art bandit methods.
1 Introduction
Boosting is a simple and efficient machine learning algorithm which provides state-of-the-art performance on many tasks. It consists of building a strong classifier as a linear combination of weak-learners, by adding them one after another in a greedy manner. However, while textbook AdaBoost
repeatedly selects each of them using all the training examples and all the features for a predetermined number of rounds, one is not obligated to do so and can instead choose only to look at a
subset of examples and features.
For the sake of simplicity, we identify the space of weak learners and the feature space by considering all the thresholded versions of the latter. More sophisticated combinations of features can be
envisioned in our framework by expanding the feature space.
The computational cost of one iteration of Boosting is roughly proportional to the product of the
number of candidate weak learners Q and the number of samples T considered, and the performance increases with both. More samples allow a more accurate estimation of the weak-learners'
performance, and more candidate weak-learners increase the performance of the best one. Therefore, one wants at the same time to look at a large number of candidate weak-learners, in order to
find a good one, but also needs to look at a large number of training examples, to get an accurate
estimate of the weak-learner performances. As Boosting progresses, the candidate weak-learners
tend to behave more and more similarly, as their performance degrades. While a small number of
samples is initially sufficient to characterize the good weak-learners, it becomes more and more
difficult, and the optimal values for a fixed product QT move to larger T and smaller Q.
We focus in this paper on giving a clear mathematical formulation of the behavior described above.
Our main analytical results are Equations (13) and (17) in § 3. They give exact expressions of the
expected edge of the selected weak-learner, that is, the immediate loss reduction it provides in the
considered Boosting iteration, as a function of the number T of samples and the number Q of
weak-learners used in the optimization process. From this result we derive several algorithms
described in § 4, and estimate their performance compared to standard and state-of-the-art baselines
in § 5.
2 Related works
The most computationally intensive operation performed in Boosting is the optimization of the
weak-learners. In the simplest version of the procedure, one must estimate for each candidate
weak-learner a score dubbed "edge", which requires looping through every training example. Reducing this computational cost is crucial to cope with high-dimensional feature spaces or very large
training sets. This can be achieved through two main strategies: sampling the training examples, or
the feature space, since there is a direct relation between features and weak-learners.
Sampling the training set was introduced historically to deal with weak-learners which can not be
trained with weighted samples. This procedure consists of sampling examples from the training set
according to their Boosting weights, and of approximating a weighted average over the full set by
a non-weighted average over the sampled subset. See § 3.1 for formal details. Such a procedure
has been re-introduced recently for computational reasons [5, 8, 7], since the number of subsampled
examples controls the trade-off between statistical accuracy and computational cost.
Sampling the feature space is the central idea behind LazyBoost [6], and consists simply of replacing
the brute-force exhaustive search over the full feature set by an optimization over a subset produced
by sampling uniformly a predefined number of features. The natural redundancy of most of the
families of features makes such a procedure particularly efficient.
Recently developed methods rely on multi-arms bandit methods to balance properly the exploitation
of features known to be informative, and the exploration of new features [3, 4]. The idea behind
those methods is to associate a bandit arm to every feature, and to see the loss reduction as a reward.
Maximizing the overall reduction is achieved with a standard bandit strategy such as UCB [1], or
Exp3.P [2], see § 5.2 for details.
These techniques suffer from three important drawbacks. First they make the assumption that the
quality of a feature (the expected loss reduction of a weak-learner using it) is stationary. This goes
against the underpinning of Boosting, which is that at any iteration the performance of the learners
is relative to the sample weights, which evolves over the training (Exp3.P does not make such an
assumption explicitly, but still relies only on the history of past rewards). Second, without additional
knowledge about the feature space, the only structure they can exploit is the stationarity of individual
features. Hence, improvement over random selection can only be achieved by sampling again the
exact same features one has already seen in the past. We therefore only use those methods in a
context where features come from multiple families. This allows us to model the quality, and to bias
the sampling, at the level of families instead of individual features.
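A family-level bandit of this kind can be sketched with UCB1; the reward definition (here, whatever edge the chosen family yields at that round) and the exploration constant are illustrative assumptions, not the exact formulation of [3, 4].

```python
import math
import random

def ucb_pick(counts, means, t, c=math.sqrt(2)):
    """UCB1: play each arm once, then pick the arm maximizing its mean reward
    plus an exploration bonus that shrinks as the arm is pulled more often."""
    for k, n in enumerate(counts):
        if n == 0:
            return k
    return max(range(len(counts)),
               key=lambda k: means[k] + c * math.sqrt(math.log(t) / counts[k]))

def run_bandit(families, n_rounds, seed=0):
    """Each 'family' is a function returning a reward, e.g. the edge of the
    best weak-learner sampled from that family at this Boosting iteration."""
    rng = random.Random(seed)
    counts = [0] * len(families)
    means = [0.0] * len(families)
    for t in range(1, n_rounds + 1):
        k = ucb_pick(counts, means, t)
        r = families[k](rng)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]   # running mean update
    return counts

# Two synthetic families, the second consistently more rewarding.
weak_family = lambda rng: 0.2 * rng.random()
good_family = lambda rng: 0.5 + 0.2 * rng.random()
counts = run_bandit([weak_family, good_family], n_rounds=200)
```

Run on two synthetic families where one is consistently better, the pull counts concentrate on the better family while the other is still explored occasionally.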
Those approaches exploit information about features to bias the sampling, hence making it more
efficient, and reducing the number of weak-learners required to achieve the same loss reduction.
However, they do not explicitly aim at controlling the computational cost. In particular, there is no
notion of varying the number of samples used for the estimation of the weak-learners' performance.
3 Boosting with noisy maximization
We present in this section some analytical results to approximate a standard round of AdaBoost (or
most other Boosting algorithms) by sampling both the training examples and the features used to
build the weak-learners. Our main goal is to devise a way of selecting the optimal numbers of
weak-learners Q and samples T to look at, so that their product is upper-bounded by a given constant, and
that the expectation of the real performance of the selected weak-learner is maximal.
In § 3.1 we recall standard notation for Boosting, the concept of the edge of a weak-learner, and
how it can be approximated by a sampling of the training examples. In § 3.2 we formalize the
optimization of the learners and derive the expectation E[G*] of the true edge G* of the selected
weak-learner, and we illustrate these results in the Gaussian case in § 3.3.
1{condition} is equal to 1 if the condition is true, 0 otherwise
N number of training examples
F number of weak-learners
K number of families of weak-learners
T number of examples subsampled from the full training set
Q number of weak-learners sampled in the case of a single family of features
Q1, . . . , QK number of weak-learners sampled from each one of the K families
(xn, yn) ∈ X × {−1, 1} training examples
ωn ∈ R weight of the nth training example in the considered Boosting iteration
Gq true edge of the qth weak-learner
G* true edge of the selected weak-learner
e(Q, T) value of E[G*], as a function of Q and T
e(Q1, . . . , QK, T) value of E[G*], in the case of K families of features
Hq approximated edge of the qth weak-learner, estimated from the T subsampled examples
δq estimation error in the approximated edge, Hq − Gq
Table 1: Notations
As stated in the introduction, we will ignore the feature space itself, and only consider in the following sections the set of weak-learners built from it. Also, note that both the Boosting procedure
and our methods are presented in the context of binary classification, but can be easily extended to a
multi-class context using for example AdaBoost.MH, which we used in all our experiments.
3.1 Edge estimation with weighting-by-sampling
Given a training set
(xn, yn) ∈ X × {−1, 1}, n = 1, . . . , N   (1)
and a set H of weak-learners, the standard Boosting procedure consists of building a strong classifier
f(x) = Σ_i αi hi(x)   (2)

by choosing the terms αi ∈ R and hi ∈ H in a greedy manner to minimize a loss estimated over the
training samples.
At every iteration, choosing the optimal weak-learner boils down to finding the weak-learner with
the largest edge, which is the derivative of the loss reduction w.r.t. the weak-learner weight. The
higher this value, the more the loss can be reduced locally, and thus the better the weak-learner. The
edge is a linear function of the responses of the weak-learner over the samples
G(h) = Σ_{n=1}^{N} yn ωn h(xn),   (3)
where the ωn's depend on the current responses of f over the xn's. We consider without loss of
generality that they have been normalized such that Σ_{n=1}^{N} ωn = 1.
Given an arbitrary distribution μ over the sample indexes, with a non-zero mass over every index,
we can rewrite the edge as

G(h) = E_{N∼μ}[ (y_N ω_N / μ(N)) h(x_N) ]   (4)

which, for μ(n) = ωn, gives

G(h) = E_{N∼ω}[ y_N h(x_N) ]   (5)
The idea of weighting-by-sampling consists of replacing the expectation in that expression with an
approximation obtained by sampling. Let N1, . . . , NT be i.i.d. of distribution ω; we define the
approximated edge as

H(h) = (1/T) Σ_{t=1}^{T} y_{N_t} h(x_{N_t}),   (6)
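A numerical sanity check of the weighted edge and its sampled approximation; the threshold weak-learner and the uniform weights below are toy assumptions for the example, not the paper's setup.

```python
import random

def true_edge(h, xs, ys, ws):
    """Equation (3): weighted edge over the full training set (weights sum to 1)."""
    return sum(w * y * h(x) for x, y, w in zip(xs, ys, ws))

def sampled_edge(h, xs, ys, ws, T, rng):
    """Equation (6): unweighted average over T indices drawn with P(n) = w_n."""
    idx = rng.choices(range(len(xs)), weights=ws, k=T)
    return sum(ys[n] * h(xs[n]) for n in idx) / T

rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(1000)]
ys = [1 if x > 0.1 else -1 for x in xs]
ws = [1.0 / len(xs)] * len(xs)          # uniform Boosting weights for the toy case
stump = lambda x: 1 if x > 0.0 else -1  # toy threshold weak-learner

G = true_edge(stump, xs, ys, ws)
H = sampled_edge(stump, xs, ys, ws, T=500, rng=rng)
```

With T = 500 draws the approximation H typically falls within a few hundredths of the true edge G, consistent with the variance in Equation (7).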
[Figure 1 appears here: sketch of the densities P(H1 | G1), P(H2 | G2), P(H3 | G3) around the true edges G1, G2, G3, with the realized approximated edges H1, H2, H3 and the selected edge G*.]
Figure 1: To each of the Q weak-learners corresponds a real edge Gq computed over all the training
examples, and an approximated edge Hq computed from a subsampling of T training examples.
The approximated edge fluctuates around the true value, with a binomial distribution. The Boosting
algorithm selects the weak-learner with the highest approximated edge, which has a real edge G*.
On this figure, the largest approximated edge is H1, hence the real edge G* of the selected
weak-learner is equal to G1, which is less than G3.
which follows a binomial distribution centered on the true edge, with a variance decreasing with the
number of samples T . It is accurately modeled with
H(h) ∼ N( G, (1 + G)(1 − G) / T ).   (7)
3.2 Formalization of the noisy maximization
Let G1 , . . . , GQ be a series of independent, real-valued random variables standing for the true edges
of Q weak-learners sampled randomly. Let ε1, . . . , εQ be a series of independent, real-valued random variables standing for the noise in the estimation of the edges due to the sampling of only T training examples, and finally, for every q, let Hq = Gq + εq be the approximated edge.
We define G* as the true edge of the weak-learner which has the highest approximated edge:

    G* = G_{argmax_{1≤q≤Q} Hq}.    (8)
This quantity is random due to both the sampling of the weak-learners, and the sampling of the
training examples.
The quantity we want to optimize is e(Q, T) = E[G*], the expectation of the true edge of the selected learner, which increases with both Q and T. A higher Q increases the number of terms in the maximization of Equation (8), and a higher T reduces the variance of the εq s, ensuring that G* is close to maxq Gq . In practice, if the variance of the εq s is of the order of, or higher than, the variance of the Gq s, the maximization is close to a pure random selection, and looking at many weak-learners is useless.
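This degradation is easy to reproduce numerically. The sketch below (illustrative only; the constants are arbitrary) draws true edges Gq with standard deviation 0.1 and shows how E[G*] collapses towards 0, the value of a purely random pick, once the estimation noise reaches or exceeds the spread of the Gq s:

```python
import numpy as np

rng = np.random.default_rng(1)
runs, Q = 100_000, 50
G = 0.1 * rng.standard_normal((runs, Q))   # true edges: sd 0.1

results = []
for noise_sd in (0.01, 0.1, 1.0):          # sd of the estimation noise eps_q
    H = G + noise_sd * rng.standard_normal((runs, Q))
    e = G[np.arange(runs), H.argmax(axis=1)].mean()   # simulated E[G*]
    results.append(e)
    print(f"sd(eps) = {noise_sd:<4}: E[G*] = {e:.4f}")

print(f"noise-free selection would give {G.max(axis=1).mean():.4f}")
```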
We have:

    e(Q, T) = E[G*]                                                       (9)
            = E[ G_{argmax_{1≤q≤Q} Hq} ]                                  (10)
            = ∑_{q=1}^{Q} E[ Gq ∏_{u≠q} 1{Hq > Hu} ]                      (11)
            = ∑_{q=1}^{Q} E[ E[ Gq ∏_{u≠q} 1{Hq > Hu} | Hq ] ]            (12)
            = ∑_{q=1}^{Q} E[ E[Gq | Hq] ∏_{u≠q} E[ 1{Hq > Hu} | Hq ] ]    (13)
If the distributions of the Gq s and the εq s are Gaussians or mixtures of Gaussians, we can derive analytical expressions for both E[Gq | Hq] and E[ 1{Hq > Hu} | Hq ], and compute the value of e(Q, T) efficiently.
In the case of multiple families of weak-learners, it makes sense to model the distributions of the
edges Gq separately for each family, as they often have a more homogeneous behavior inside a
family than across families. We can easily adapt the framework developed in the previous sections
to that case, and we define e(Q1 , . . . , QK , T ), the expected edge of the selected weak-learner when
we sample T examples from the training set, and Qk weak-learners from the kth family.
3.3 Gaussian case
As an illustrative example, we consider here the case where the Gq s, the εq s, and hence also the Hq s all follow Gaussian distributions. We take Gq ∼ N(0, 1) and εq ∼ N(0, σ²), and obtain:

    e(Q, T) = Q E[ E[G1 | H1] ∏_{u≠1} E[ 1{H1 > Hu} | H1 ] ]              (14)
            = Q E[ (H1 / (σ² + 1)) Φ( H1 / √(σ² + 1) )^{Q-1} ]            (15)
            = (1 / √(σ² + 1)) E[ Q G1 Φ(G1)^{Q-1} ]                       (16)
            = (1 / √(σ² + 1)) E[ max_{1≤q≤Q} Gq ],                        (17)

where Φ stands for the cumulative distribution function of the unit Gaussian, and σ depends on T.
See Figure 2 for an illustration of the behavior of e(Q, T) for two different variances of the Gq s.
There is no reason to expect the distribution of the Gq s to be Gaussian, contrary to the εq s, as shown by Equation (7), but this is not a problem as it can always be approximated by a mixture, for which we can still derive analytical expressions, even if the Gq s or the εq s have different distributions for different qs.
4 Adaptive Boosting Algorithms
We propose here several new algorithms to sample features and training examples adaptively at each
Boosting step.
While all the formulation above deals with uniform sampling of weak learners, we actually sample
features, and optimize thresholds to build stumps. We observed that after a small number of Boosting
iterations, the Gaussian model of Equation (7) is sufficiently accurate.
4.1 Maximum Adaptive Sampling
At every Boosting step, our first algorithm MAS Naive models Gq with a Gaussian mixture model
fitted on the edges estimated at the previous iteration, computes from that density model the pair
(Q, T ) maximizing e(Q, T ), samples the corresponding number of examples and features, and keeps
the weak-learner with the highest approximated edge.
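A single MAS Naive selection of (Q, T) can be sketched as follows. This is a hypothetical reconstruction, not the authors' implementation: it fits a single Gaussian to the previously observed edges (the paper uses a Gaussian mixture), approximates the noise variance of Equation (7) by 1/T, and estimates e(Q, T) by simulation instead of the analytical expressions of § 3.2:

```python
import numpy as np

def choose_Q_T(prev_edges, budget, candidate_Q=(1, 10, 100, 1000), runs=5000):
    """Pick the (Q, T) pair of cost Q*T <= budget that maximizes the
    simulated expected true edge e(Q, T) of the selected weak-learner."""
    rng = np.random.default_rng(0)
    mu, sd = np.mean(prev_edges), np.std(prev_edges)
    best = None
    for Q in candidate_Q:
        T = budget // Q
        G = mu + sd * rng.standard_normal((runs, Q))         # model of the Gq s
        H = G + rng.standard_normal((runs, Q)) / np.sqrt(T)  # Eq. (7) with G ~ 0
        e = G[np.arange(runs), H.argmax(axis=1)].mean()      # simulated e(Q, T)
        if best is None or e > best[0]:
            best = (e, Q, T)
    return best[1], best[2]

# Widely spread edges: worth scanning many weak-learners despite the noise.
prev = np.random.default_rng(1).normal(0.0, 0.3, size=500)
Q, T = choose_Q_T(prev, budget=100_000)
print(f"chosen Q = {Q}, T = {T}")
```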
The algorithm MAS 1.Q takes into account the decomposition of the feature set into K families of
feature extractors. It models the distributions of the Gq s separately, estimating the distribution of
each on a small number of features and examples sampled at the beginning of each iteration, chosen
so as to account for 10% of the total cost. From these models, it optimizes Q, T and the index l of the family to sample from, to maximize e(Q·1{l=1}, . . . , Q·1{l=K}, T). Hence, in a given Boosting step, it does not mix weak-learners based on features from different families.
Finally MAS Q.1 similarly models the distributions of the Gq s, but it optimizes Q1 , . . . , QK , T
greedily, starting from Q1 = 0, . . . , QK = 0, and iteratively incrementing one of the Ql so as to
maximize e(Q1 , . . . , QK , T ).
Figure 2: Simulation of the expectation of G* in the case where both the Gq s and the εq s follow Gaussian distributions. Top: Gq ∼ N(0, 10^-2). Bottom: Gq ∼ N(0, 10^-4). In both simulations εq ∼ N(0, 1/T). Left: Expectation of G* vs. the number of sampled weak-learners Q and the number of samples T. Right: same value as a function of Q alone, for different fixed costs (product of the number of examples T and Q). As these graphs illustrate, the optimal value for Q is greater for larger variances of the Gq . In such a case the Gq are more spread out, and identifying the largest one can be done despite a large noise in the estimations, hence with a limited number of samples.
4.2 Laminating
The fourth algorithm we have developed tries to reduce the requirement for a density model of
the Gq . At every Boosting step it iteratively reduces the number of considered weak-learners, and
increases the number of samples.
More precisely: given a fixed Q0 and T0 , at every Boosting iteration, the Laminating first samples
Q0 weak-learners and T0 training examples. Then, it computes the approximated edges and keeps
the Q0 /2 best weak-learners. If more than one remains, it samples 2T0 examples, and re-iterates.
The cost of each iteration is constant, equal to Q0 T0 , and there are at most log2 (Q0 ) of them, leading
to an overall cost of O(log2 (Q0 )Q0 T0 ). In the experiments, we equalize the computational cost with
the MAS approaches parametrized by T, Q by forcing log2 (Q0 )Q0 T0 = T Q.
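The Laminating step is straightforward to implement. The sketch below is a hypothetical reconstruction, not the authors' code: example weights are taken uniform for simplicity (the paper draws examples from the Boosting weights ω), and the weak-learner responses are precomputed in a matrix:

```python
import numpy as np

def laminating_step(y, responses, Q0, T0, rng):
    """One Boosting step of the Laminating algorithm (sketch).

    y:         labels in {-1, +1}, shape (N,)
    responses: h_q(x_n) for every candidate weak-learner, shape (M, N)

    Iteratively halves the candidate set while doubling the subsample size,
    so each round costs roughly Q0 * T0."""
    N = y.shape[0]
    cand = rng.choice(responses.shape[0], size=Q0, replace=False)
    T = T0
    while len(cand) > 1:
        idx = rng.choice(N, size=T)                        # T examples, i.i.d.
        edges = (y[idx] * responses[cand][:, idx]).mean(axis=1)  # Equation (6)
        keep = np.argsort(edges)[::-1][: len(cand) // 2]   # best half survives
        cand = cand[keep]
        T *= 2                                             # double the sample
    return cand[0]

# Toy run: 64 weak-learners, all ~55% accurate, on 10,000 examples.
rng = np.random.default_rng(3)
y = rng.choice([-1, 1], size=10_000)
responses = np.where(rng.random((64, 10_000)) < 0.55, y, -y)
best = laminating_step(y, responses, Q0=64, T0=100, rng=rng)
print("selected weak-learner index:", best)
```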
5 Experiments
We demonstrate the validity of our approach for pattern recognition on two standard data-sets, using
multiple types of image features. We compare our algorithms both to different flavors of uniform
sampling and to state-of-the-art bandit based methods, all tuned to deal properly with multiple families of features.
5.1 Datasets and features
For the first set of experiments we use the well-known MNIST handwritten digits database [10], containing respectively 60,000/10,000 train/test grayscale images of size 28 × 28 pixels, divided into ten classes. We use features computed by multiple image descriptors, leading to a total of 16,451 features. Those descriptors can be broadly divided into two categories. (1) Image transforms: Identity,
Gradient image, Fourier and Haar transforms, Local Binary Patterns (LBP/iLBP). (2) Histograms:
sums of the intensities in random image patches, histograms of (oriented and non oriented) gradients
at different locations, Haar-like features.
For the second set of experiments we use the challenging CIFAR-10 data-set [9], a subset of the 80 million tiny images data-set. It contains respectively 50,000/10,000 train/test color images of size 32 × 32 pixels, also divided into 10 classes. We deem it challenging, as state-of-the-art results without using additional training data barely reach 65% accuracy. We use directly as features the
same image descriptors as described above for MNIST, plus additional versions of some of them
making use of color information.
5.2 Baselines
We first define three baselines extending LazyBoost in the context of multiple feature families. The
most naive strategy one could think of, that we call Uniform Naive, simply ignores the families, and
picks features uniformly at random. This strategy does not properly distribute the sampling among
the families, thus if one of them had a far greater cardinality than the others, all features would come
from it. We define Uniform 1.Q to pick one of the feature families at random and then sample the Q features from that single family, and Uniform Q.1 to pick uniformly at random Q families of features and then pick one feature uniformly in each family.
The second family of baselines we have tested biases its sampling at every Boosting iteration according to the observed edges in the previous iterations, and balances the exploitation of families of features known to perform well with the exploration of new families by using bandit algorithms [3, 4]. We use three such baselines (UCB, Exp3.P, ε-greedy), which differ only by the underlying bandit algorithm used.
We tune the meta-parameters of these techniques (namely the scale of the reward and the exploration-exploitation trade-off) by training them multiple times over a large range of parameters and keeping only the results of the run with the smallest final Boosting loss. Hence, the computational cost is around one order of magnitude higher than for our methods in the experiments.
MNIST
Nb. of stumps | Unif. Naive | Unif. 1.Q | Unif. Q.1 | UCB* | Exp3.P* | ε-greedy* | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
10 | -0.34 (0.01) | -0.33 (0.02) | -0.35 (0.02) | -0.33 (0.01) | -0.32 (0.01) | -0.34 (0.02) | -0.51 (0.02) | -0.50 (0.02) | -0.52 (0.01) | -0.43 (0.00)
100 | -0.80 (0.01) | -0.73 (0.03) | -0.81 (0.01) | -0.73 (0.01) | -0.73 (0.02) | -0.73 (0.03) | -1.00 (0.01) | -1.00 (0.01) | -1.03 (0.01) | -1.01 (0.01)
1,000 | -1.70 (0.01) | -1.45 (0.02) | -1.68 (0.01) | -1.64 (0.01) | -1.52 (0.02) | -1.60 (0.04) | -1.83 (0.01) | -1.80 (0.01) | -1.86 (0.00) | -1.99 (0.01)
10,000 | -5.32 (0.01) | -3.80 (0.02) | -5.04 (0.01) | -5.26 (0.01) | -5.35 (0.04) | -5.38 (0.09) | -5.35 (0.01) | -5.05 (0.02) | -5.30 (0.00) | -6.14 (0.01)

CIFAR-10
Nb. of stumps | Unif. Naive | Unif. 1.Q | Unif. Q.1 | UCB* | Exp3.P* | ε-greedy* | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
10 | -0.26 (0.00) | -0.25 (0.01) | -0.26 (0.00) | -0.25 (0.01) | -0.25 (0.01) | -0.26 (0.00) | -0.28 (0.00) | -0.28 (0.00) | -0.28 (0.01) | -0.28 (0.00)
100 | -0.33 (0.00) | -0.33 (0.01) | -0.34 (0.00) | -0.33 (0.00) | -0.33 (0.00) | -0.33 (0.00) | -0.35 (0.00) | -0.35 (0.00) | -0.37 (0.01) | -0.37 (0.00)
1,000 | -0.47 (0.00) | -0.46 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.47 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.49 (0.01) | -0.50 (0.00)
10,000 | -0.93 (0.00) | -0.85 (0.00) | -0.91 (0.00) | -0.90 (0.00) | -0.91 (0.00) | -0.91 (0.00) | -0.93 (0.00) | -0.88 (0.00) | -0.89 (0.01) | -0.90 (0.00)

Table 2: Mean and standard deviation of the Boosting loss (log10) on the two data-sets and for each method, estimated on ten randomized runs. Methods highlighted with a * require the tuning of meta-parameters which have been optimized by training fully multiple times.
5.3 Results and analysis
We report the results of the proposed algorithms against the baselines introduced in § 5.2 on the two data-sets of § 5.1, using the standard train/test cuts, in Tables 2 and 3. We ran each configuration ten times and report the mean and standard deviation of each. We set the maximum cost of all the algorithms to 10N, setting Q = 10 and T = N for the baselines, as this configuration leads to the best results after 10,000 Boosting rounds of AdaBoost.MH.
These results illustrate the efficiency of the proposed methods. For 10, 100 and 1,000 weak-learners, both the MAS and the Laminating algorithms perform far better than the baselines. Performance tends to become similar at 10,000 stumps, which is unusually large.
As stated in § 5.2, the meta-parameters of the bandit methods have been optimized by running the training fully ten times, with the corresponding computational effort.
MNIST
Nb. of stumps | Unif. Naive | Unif. 1.Q | Unif. Q.1 | UCB* | Exp3.P* | ε-greedy* | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
10 | 51.18 (4.22) | 54.37 (7.93) | 48.15 (3.66) | 52.86 (4.75) | 53.80 (4.53) | 51.37 (6.35) | 25.91 (2.04) | 25.94 (2.57) | 25.73 (1.33) | 35.70 (2.35)
100 | 8.95 (0.41) | 11.64 (1.06) | 8.69 (0.48) | 11.39 (0.53) | 11.58 (0.93) | 11.59 (1.12) | 4.87 (0.29) | 4.78 (0.16) | 4.54 (0.21) | 4.85 (0.16)
1,000 | 1.75 (0.06) | 2.37 (0.12) | 1.76 (0.08) | 1.80 (0.08) | 2.18 (0.14) | 1.83 (0.16) | 1.50 (0.06) | 1.59 (0.08) | 1.45 (0.04) | 1.34 (0.08)
10,000 | 0.94 (0.06) | 1.13 (0.03) | 0.94 (0.04) | 0.90 (0.05) | 0.84 (0.02) | 0.85 (0.07) | 0.92 (0.03) | 0.97 (0.05) | 0.94 (0.04) | 0.85 (0.04)

CIFAR-10
Nb. of stumps | Unif. Naive | Unif. 1.Q | Unif. Q.1 | UCB* | Exp3.P* | ε-greedy* | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
10 | 76.27 (0.97) | 78.57 (1.94) | 76.00 (1.60) | 77.04 (1.65) | 77.51 (1.50) | 77.13 (1.15) | 71.54 (0.69) | 71.13 (0.49) | 70.63 (0.34) | 71.54 (1.06)
100 | 56.94 (1.01) | 58.33 (1.30) | 54.48 (0.64) | 57.49 (0.46) | 58.47 (0.81) | 58.19 (0.83) | 53.94 (0.55) | 52.79 (0.09) | 50.15 (0.64) | 50.44 (0.68)
1,000 | 39.13 (0.61) | 39.97 (0.37) | 37.70 (0.38) | 38.13 (0.30) | 39.23 (0.31) | 38.36 (0.72) | 38.79 (0.28) | 38.31 (0.27) | 36.95 (0.25) | 36.39 (0.58)
10,000 | 31.83 (0.29) | 31.16 (0.29) | 30.56 (0.30) | 30.55 (0.24) | 30.39 (0.22) | 29.96 (0.45) | 32.07 (0.27) | 31.36 (0.13) | 32.51 (0.38) | 31.17 (0.22)

Table 3: Mean and standard deviation of the test error (in percent) on the two data-sets and for each method, estimated on ten randomized runs. Methods highlighted with a * require the tuning of meta-parameters which have been optimized by training fully multiple times.
On the MNIST data-set, when adding 10 or 100 weak-learners, our methods roughly divide the error rate by two, and still improve it by ≈ 30% with 1,000 stumps. The loss reduction follows the same pattern.
The CIFAR data-set is a very difficult pattern recognition problem. Still, our algorithms perform substantially better than the baselines for 10 and 100 weak-learners, gaining more than 10% in the test error rates, and behave similarly to the baselines for larger numbers of stumps.
As stated in § 1, the optimal values for a fixed product QT move to larger T and smaller Q. For instance, on the MNIST data-set with MAS Naive, averaging over ten randomized runs, for respectively 10, 100, 1,000, 10,000 stumps, T = 1,580, 13,030, 37,100, 43,600, and Q = 388, 73, 27, 19. We obtain similar and consistent results across settings.
The overhead of MAS algorithms compared to Uniform ones is small: in our experiments, taking into account the time spent computing features, it is approximately 0.2% for MAS Naive, 2% for MAS 1.Q and 8% for MAS Q.1. The Laminating algorithm has no overhead.
The poor behavior of bandit methods for small numbers of stumps may be related to the large variations of the sample weights during the first iterations of Boosting, which go against the underlying assumption of stationarity of the loss reduction.
6 Conclusion
We have improved Boosting by modeling the statistical behavior of the weak-learners' edges. This allowed us to maximize the loss reduction under strict control of the computational cost. Experiments demonstrate that the algorithms perform well on real-world pattern recognition tasks.
Extensions of the proposed methods could be investigated along two axes. The first one is to blur the
boundary between the MAS procedures and the Laminating, by deriving an analytical model of the
loss reduction for generalized sampling procedures: Instead of doubling the number of samples and
halving the number of weak-learners, we could adapt both set sizes optimally. The second is to add a
bandit-like component to our methods by adding a variance term related to the lack of samples, and
their obsolescence in the Boosting process. This would account for the degrading density estimation
when weak-learner families have not been sampled for a while, and induce an exploratory sampling
which may be missing in the current algorithms.
Acknowledgments
This work was supported by the European Community's 7th Framework Programme under grant agreement 247022 (MASH), and by the Swiss National Science Foundation under grant 200021-124822 (VELASH). We also would like to thank Dr. Robert B. Israel, Associate Professor Emeritus
at the University of British Columbia for his help on the derivation of the expectation of the true
edge of the weak-learner with the highest approximated edge (equations (9) to (13)).
References
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235-256, 2002.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2003.
[3] R. Busa-Fekete and B. Kegl. Accelerating AdaBoost using UCB. JMLR W&CP, Jan 2009.
[4] R. Busa-Fekete and B. Kegl. Fast Boosting using adversarial bandits. In ICML, 2010.
[5] N. Duffield, C. Lund, and M. Thorup. Priority sampling for estimation of arbitrary subset sums. J. ACM, 54, December 2007.
[6] G. Escudero, L. Màrquez, and G. Rigau. Boosting applied to word sense disambiguation. Machine Learning: ECML 2000, pages 129-141, 2000.
[7] F. Fleuret and D. Geman. Stationary features and cat detection. Journal of Machine Learning Research (JMLR), 9:2549-2578, 2008.
[8] Z. Kalal, J. Matas, and K. Mikolajczyk. Weighted sampling for large-scale Boosting. British Machine Vision Conference, 2008.
[9] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master's thesis, 2009. http://www.cs.toronto.edu/~kriz/cifar.html.
[10] Y. LeCun and C. Cortes. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
Michael I. Jordan
University of California, Berkeley
[email protected]
Fabian L. Wauthier
University of California, Berkeley
[email protected]
Abstract
Biased labelers are a systemic problem in crowdsourcing, and a comprehensive
toolbox for handling their responses is still being developed. A typical crowdsourcing application can be divided into three steps: data collection, data curation, and learning. At present these steps are often treated separately. We present
Bayesian Bias Mitigation for Crowdsourcing (BBMC), a Bayesian model to unify
all three. Most data curation methods account for the effects of labeler bias by
modeling all labels as coming from a single latent truth. Our model captures the
sources of bias by describing labelers as influenced by shared random effects.
This approach can account for more complex bias patterns that arise in ambiguous or hard labeling tasks and allows us to merge data curation and learning into
a single computation. Active learning integrates data collection with learning, but
is commonly considered infeasible with Gibbs sampling inference. We propose a
general approximation strategy for Markov chains to efficiently quantify the effect
of a perturbation on the stationary distribution and specialize this approach to active learning. Experiments show BBMC to outperform many common heuristics.
1 Introduction
Crowdsourcing is becoming an increasingly important methodology for collecting labeled data, as
demonstrated among others by Amazon Mechanical Turk, reCAPTCHA, Netflix, and the ESP game.
Motivated by the promise of a wealth of data that was previously impractical to gather, researchers
have focused in particular on Amazon Mechanical Turk as a platform for collecting label data [11,
12]. Unfortunately, the data collected from crowdsourcing services is often very dirty: Unhelpful
labelers may provide incorrect or biased responses that can have major, uncontrolled effects on
learning algorithms. Bias may be caused by personal preference, systematic misunderstanding of
the labeling task, lack of interest or varying levels of competence. Further, as soon as malicious
labelers try to exploit incentive schemes in the data collection cycle yet more forms of bias enter.
The typical crowdsourcing pipeline can be divided into three main steps: 1) Data collection. The
researcher farms the labeling tasks to a crowdsourcing service for annotation and possibly adds a
small set of gold standard labels. 2) Data curation. Since labels from the crowd are contaminated by
errors and bias, some filtering is applied to curate the data, possibly using the gold standard provided
by the researcher. 3) Learning. The final model is learned from the curated data.
At present these steps are often treated as separate. The data collection process is often viewed as
a black box which can only be minimally controlled. Although the potential for active learning to
make crowdsourcing much more cost effective and goal driven has been appreciated, research on
the topic is still in its infancy [4, 9, 17]. Similarly, data curation is in practice often still performed
as a preprocessing step, before feeding the data to a learning algorithm [6, 8, 10, 11, 12, 14]. We
believe that the lack of systematic solutions to these problems can make crowdsourcing brittle in
situations where labelers are arbitrarily biased or even malicious, such as when tasks are particularly
ambiguous/hard or when opinions or ratings are solicited.
Our goal in the current paper is to show how crowdsourcing can be leveraged more effectively by
treating the overall pipeline within a Bayesian framework. We present Bayesian Bias Mitigation for
Crowdsourcing (BBMC) as a way to achieve this. BBMC makes two main contributions.
The first is a flexible latent feature model that describes each labeler's idiosyncrasies through multiple shared factors and allows us to combine data curation and learning (steps 2 and 3 above)
into one inferential computation. Most of the literature accounts for the effects of labeler bias
by assuming a single, true latent labeling from which labelers report noisy observations of some
kind [2, 3, 4, 6, 8, 9, 10, 11, 15, 16, 17, 18]. This assumption is inappropriate when labels are solicited on subjective or ambiguous tasks (ratings, opinions, and preferences) or when learning must
proceed in the face of arbitrarily biased labelers. We believe that an unavoidable and necessary
extension of crowdsourcing allows multiple distinct (yet related) "true" labelings to co-exist, but that at any one time we may be interested in learning about only one of these "truths." Our BBMC
framework achieves this by modeling the sources of labeler bias through shared random effects.
Next, we want to perform active learning in this model to actively query labelers, thus integrating
step 1 with steps 2 and 3. Since our model requires Gibbs sampling for inference, a straightforward
application of active learning is infeasible: Each active learning step relies on many inferential
computations and would trigger a multitude of subordinate Gibbs samplers to be run within one
large Gibbs sampler. Our second contribution is a new methodology for solving this problem. The
basic idea is to approximate the stationary distribution of a perturbed Markov chain using that of
an unperturbed chain. We specialize this idea to active learning in our model and show that the
computations are efficient and that the resulting active learning strategy substantially outperforms
other active learning schemes.
The paper is organized as follows: We discuss related work in Section 2. In Section 3 we propose
the latent feature model for labelers and in Section 4 we discuss the inference procedure that combines data curation and learning. Then we present a general method to approximate the stationary
distribution of perturbed Markov chains and apply it to derive an efficient active learning criterion
in Section 5. In Section 6 we present comparative results and we draw conclusions in Section 7.
2 Related Work
Relevant work on active learning in multi-teacher settings has been reported in [4, 9, 17]. Sheng
et al. [9] use the multiset of current labels with a random forest label model to score which task to
next solicit a repeat label for. The quality of the labeler providing the new label does not enter the
selection process. In contrast, Donmez et al. [4] actively choose the labeler to query next using a
formulation based on interval estimation, utilizing repeated labelings of tasks. The task to label next
is chosen separately from the labeler. In contrast, our BBMC framework can perform meaningful
inferences even without repeated labelings of tasks and treats the choices of which labeler to query
on which task as a joint choice in a Bayesian framework. Yan et al. [17] account for the effects of
labeler bias through a coin flip observation model that filters a latent label assignment, which in turn
is modeled through a logistic regression. As in [4], the labeler is chosen separately from the task
by solving two optimization problems. In other work on data collection strategies, Wais et al. [14]
require each labeler to first pass a screening test before they are allowed to label any more data. In
a similar manner, reputation systems of various forms are used to weed out historically unreliable
labelers before collecting data.
Consensus voting among multiple labels is a commonly used data curation method [12, 14]. It
works well when low levels of bias or noise are expected but becomes unreliable when labelers vary
greatly in quality [9]. Earlier work on learning from variable-quality teachers was revisited by Smyth
et al. [10] who looked at estimating the unknown true label for a task from a set of labelers of varying
quality without external gold standard signal. They used an EM strategy to iteratively estimate the
true label and the quality of the labelers. The work was extended to a Bayesian formulation by
Raykar et al. [8] who assign latent variables to labelers capturing their mislabeling probabilities.
Ipeirotis et al. [6] pointed out that a biased labeler who systematically mislabels tasks is still more
useful than a labeler who reports labels at random. A method is proposed that separates low quality
labelers from high quality, but biased labelers. Dekel and Shamir [3] propose a two-step process.
First, they filter labelers by how far they disagree from an estimated true label and then retrain the
model on the cleaned data. They give a generalization analysis for anticipated performance. In a
similar vein, Dekel and Shamir [2] show that, under some assumptions, restricting each labeler's
influence on a learned model can control the effect of low quality or malicious labelers. Together
with [8, 16, 18], [2] and [3] are among the recent lines of research to combine data curation and
learning. Work has also focused on using gold standard labels to determine labeler quality. Going
beyond simply counting tasks on which labelers disagree with the gold standard, Snow et al. [11]
estimate labeler quality in a Bayesian setting by comparing to the gold standard.
Lastly, collaborative filtering has looked extensively at completing sparse matrices of ratings [13].
Given some gold standard labels, collaborative filtering methods could in principle also be used to
curate data represented by a sparse label matrix. However, collaborative filtering generally does not
combine this inference with the learning of a labeler-specific model for prediction (step 3). Also,
with the exception of [19], active learning has not been studied in the collaborative filtering setting.
3 Modeling Labeler Bias
In this section we specify a Bayesian latent feature model that accounts for labeler bias and allows
us to combine data curation and learning into a single inferential calculation. For ease of exposition
we will focus on binary classification, but our method can be generalized. Suppose we solicited
labels for n tasks from m labelers. In practical settings it is unlikely that a task is labeled by more
than 3?10 labelers [14]. Let task descriptions xi ? Rd , i = 1, . . . , n, be collected in the matrix
X. The label responses are recorded in the matrix Y so that yi,l ? {?1, 0, +1} denotes the label
given to task i by labeler l. The special label 0 denotes that a task was not labeled. A researcher is
interested in learning a model that can be used to predict labels for new tasks. When consensus is
lacking among labelers, our desideratum is to predict the labels that the researcher (or some other
expert) would have assigned, as opposed to labels from an arbitrary labeler in the crowd. In this
situation it makes sense to stratify the labelers in some way. To facilitate this, the researcher r
provides gold standard labels in column r of Y to a small subset of the tasks. Loosely speaking,
the gold standard allows our model to curate the data by softly combining labels from those labelers
whose responses will be useful in predicting r's remaining labels. It is important to note that our model
is entirely symmetric in the role of the researcher and labelers. If instead we were interested in
predicting labels for labeler l, we would treat column l as containing the gold standard labels. The
researcher r is just another labeler, the only distinction being that we wish to learn a model that
predicts r's labels. To simplify our presentation, we will accordingly refer to labelers in the crowd
and the researcher occasionally just as "labelers," indexed by l, and only use the distinguishing index
r when necessary. We account for each labeler l's idiosyncrasies by assigning a parameter θ_l ∈ R^d
to l and modeling labels y_{i,l}, i = 1, ..., n, through a probit model p(y_{i,l} | x_i, θ_l) = Φ(y_{i,l} x_i^⊤ θ_l),
where Φ(·) is the standard normal CDF. This section describes a joint Bayesian prior on parameters
θ_l that allows for parameter sharing; two labelers that share parameters have similar responses. In
the context of this model, the two-step process of data curation and learning a model that predicts
r's labels is reduced to posterior inference on θ_r given X and Y. Inference softly integrates labels
from relevant labelers, while at the same time allowing us to predict r's remaining labels.
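In code, the probit observation model is a one-liner on top of the standard normal CDF. A minimal sketch; the function names and toy numbers below are ours, not from any released implementation:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF, Phi(z)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def label_prob(y, x, theta):
    # p(y | x, theta) = Phi(y * x^T theta) for y in {-1, +1}
    return normal_cdf(y * float(np.dot(x, theta)))

x = np.array([1.0, -0.5])
theta = np.array([0.8, 0.3])
p_pos = label_prob(+1, x, theta)
p_neg = label_prob(-1, x, theta)
```

By the symmetry of Φ, the two label probabilities sum to one, so the model is a proper binary likelihood.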
3.1 Latent feature model
Labelers are not independent, so it makes sense to impose structure on the set of θ_l's. Specifically,
each vector θ_l is modeled as the sum of a set of latent factors that are shared across the population.
Let z_l be a latent binary vector for labeler l whose component z_{l,b} indicates whether the latent
factor γ_b ∈ R^d contributes to θ_l. In principle, our model allows for an infinite number of distinct
factors (i.e., z_l is infinitely long), as long as only a finite number of those factors is active (i.e.,
Σ_{b=1}^∞ z_{l,b} < ∞). Let γ = (γ_b)_{b=1}^∞ be the concatenation of the factors γ_b. Given a labeler's vector
z_l and factors γ we define the parameter θ_l = Σ_{b=1}^∞ z_{l,b} γ_b.
For multiple labelers we let the infinitely long matrix Z = (z_1, ..., z_m)^⊤ collect the vectors z_l and
define the index set of all observed labels L = {(i,l) : y_{i,l} ≠ 0}, so that the likelihood is

p(Y | X, γ, Z) = ∏_{(i,l)∈L} p(y_{i,l} | x_i, γ, z_l) = ∏_{(i,l)∈L} Φ(y_{i,l} x_i^⊤ θ_l).   (1)
To complete the model we need to specify priors for γ and Z. We define the prior distribution of
each γ_b to be a zero-mean Gaussian γ_b ∼ N(0, σ² I), and let Z be governed by an Indian Buffet
Process (IBP) Z ∼ IBP(α), parameterized by α [5]. The IBP is a stochastic process on infinite
binary matrices consisting of vectors z_l. A central property of the IBP is that with probability one, a
sampled matrix Z contains only a finite number of nonzero entries, thus satisfying our requirement
that Σ_{b=1}^∞ z_{l,b} < ∞. In the context of our model this means that when working with finite data,
with probability one only a finite set of features is active across all labelers. To simplify notation in
subsequent sections, we use this observation and collapse an infinite matrix Z and vector γ to finite-dimensional
equivalents. From now on, we think of Z as the finite matrix having all zero-columns
removed. Similarly, we think of γ as having all blocks γ_b corresponding to zero-columns in the
original matrix Z removed. With probability one, the number of columns K(Z) of Z is finite so we
may write θ_l = Σ_{b=1}^{K(Z)} z_{l,b} γ_b = Z_l^⊤ γ, with Z_l = z_l ⊗ I the Kronecker product of z_l and I.
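The collapsed identity θ_l = Σ_b z_{l,b} γ_b = Z_l^⊤ γ with Z_l = z_l ⊗ I is easy to check numerically. An illustrative sketch with made-up dimensions and our own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 3                        # K latent factors of dimension d
z_l = rng.integers(0, 2, size=K)   # binary feature vector for labeler l
gamma_b = rng.normal(size=(K, d))  # factors gamma_1..gamma_K as rows
gamma = gamma_b.reshape(-1)        # concatenated vector gamma

# Direct construction: sum of active factors
theta_sum = (z_l[:, None] * gamma_b).sum(axis=0)

# Equivalent form theta_l = Z_l^T gamma, with Z_l = z_l (kron) I
Z_l = np.kron(z_l.reshape(-1, 1), np.eye(d))  # shape (K*d, d)
theta_kron = Z_l.T @ gamma
```

Both constructions yield the same d-dimensional parameter vector, which is why the paper can switch freely between the sum and the Kronecker form.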
4 Inference: Data Curation and Learning
We noted before that our model combines data curation and learning in a single inferential computation. In this section we lay out the details of a Gibbs sampler for achieving this. Given a task j
which was not labeled by r (and possibly no other labeler), we need the predictive probability

p(y_{j,r} = +1 | X, Y) = ∫ p(y_{j,r} = +1 | x_j, θ_r) p(θ_r | X, Y) dθ_r.   (2)
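Given posterior samples θ_r^s, the integral in Eq. (2) is typically approximated by a Monte Carlo average over the samples. A sketch; the stand-in samples and names are ours:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF, Phi(z)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def predictive_prob(x_j, theta_samples):
    # p(y_{j,r} = +1 | X, Y) ~= (1/S) * sum_s Phi(x_j^T theta_r^s)
    return float(np.mean([normal_cdf(float(np.dot(x_j, th)))
                          for th in theta_samples]))

rng = np.random.default_rng(1)
theta_samples = rng.normal(size=(50, 2))  # stand-in posterior samples of theta_r
p = predictive_prob(np.array([0.5, -1.0]), theta_samples)
```

The rest of this section is devoted to producing the posterior samples this average needs.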
To approximate this probability we need to gather samples from the posterior p(θ_r | Y, X). Equivalently, since θ_r = Z_r^⊤ γ, we need samples from the posterior p(γ, z_r | Y, X). Because latent factors can be shared across multiple labelers, the posterior will softly absorb label information from
labelers whose latent factors tend to be similar to those of the researcher r. Thus, Bayesian inference on p(θ_r | Y, X) automatically combines data curation and learning by weighting label information
through an inferred sharing structure. Importantly, the posterior is informative even when no labeler
in the crowd labeled any of the tasks the researcher labeled.
4.1 Gibbs sampling
For Gibbs sampling in the probit model one commonly augments the likelihood in Eq. (1) with
intermediate random variables T = {t_{i,l} : y_{i,l} ≠ 0}. The generative model for the label y_{i,l} given
x_i, γ and z_l first samples t_{i,l} from a Gaussian N(θ_l^⊤ x_i, 1). Conditioned on t_{i,l}, the label is then
defined as y_{i,l} = 2·1[t_{i,l} > 0] − 1. Figure 1(a) summarizes the augmented graphical model by
letting θ denote the collection of θ_l variables. We are interested in sampling from p(γ, z_r | Y, X).
The Gibbs sampler for this lives in the joint space of T, γ, Z and samples iteratively from the three
conditional distributions p(T | X, γ, Z), p(γ | X, Z, T) and p(Z | γ, X, Y). The different steps are:
Sampling T given X, γ, Z: We independently sample elements of T given X, γ, Z from a truncated normal as

(t_{i,l} | X, γ, Z) ∼ N^{y_{i,l}}(t_{i,l} | γ^⊤ Z_l x_i, 1),   (3)

where we use N^{−1}(t | μ, 1) and N^{+1}(t | μ, 1) to indicate the density of the negative- and positive-orthant-truncated normal with mean μ and variance 1, respectively, evaluated at t.
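A simple way to realize the orthant-truncated draws N^{+1} and N^{−1} is rejection sampling. This is fine as an illustration; inverse-CDF samplers are preferable when the mean lies deep inside the rejected orthant. The function below is our own sketch:

```python
import numpy as np

def sample_truncated_normal(y, mean, rng):
    # Draw t ~ N(mean, 1) restricted to t > 0 when y == +1,
    # and to t < 0 when y == -1 (rejection sampling).
    while True:
        t = rng.normal(loc=mean, scale=1.0)
        if (y == +1 and t > 0.0) or (y == -1 and t < 0.0):
            return t

rng = np.random.default_rng(2)
samples = [sample_truncated_normal(+1, 0.3, rng) for _ in range(200)]
```

Every accepted draw lies in the orthant dictated by the observed label, which is exactly the constraint that makes the augmented variable t_{i,l} consistent with y_{i,l}.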
Sampling γ given X, Z, T: Straightforward calculations show that conditional sampling of γ
given X, Z, T follows a multivariate Gaussian

(γ | X, Z, T) ∼ N(γ | μ, Σ),   (4)

where

Σ^{−1} = I/σ² + Σ_{(i,l)∈L} Z_l x_i x_i^⊤ Z_l^⊤,    μ = Σ Σ_{(i,l)∈L} Z_l x_i t_{i,l}.   (5)
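Eqs. (4)-(5) are a standard Bayesian linear-model update. A sketch of the computation on toy data, treating each Z_l x_i as a precomputed design row; the dimensions and names are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
d_eff, sigma2 = 6, 1.0              # dimension of gamma (= K*d) and prior variance
# Each observation (i,l) contributes a design vector u = Z_l x_i;
# here we stand them in with random rows.
U = rng.normal(size=(20, d_eff))    # rows play the role of Z_l x_i, (i,l) in L
t = rng.normal(size=20)             # augmented variables t_{i,l}

Sigma_inv = np.eye(d_eff) / sigma2 + U.T @ U       # Eq. (5), precision
Sigma = np.linalg.inv(Sigma_inv)
mu = Sigma @ (U.T @ t)                              # Eq. (5), mean
gamma_sample = rng.multivariate_normal(mu, Sigma)   # Eq. (4), one draw
```

Because the precision is a prior term plus a sum of outer products, Σ is symmetric positive definite by construction, so the Gaussian draw is always well defined.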
[Figure 1 appears here.]
Figure 1: (a) A graphical model of the augmented latent feature model. Each node corresponds to
a collection of random variables in the model. (b) A schematic of our approximation scheme. The
top chain indicates an unperturbed Markov chain, the lower a perturbed Markov chain. Rather than
sampling from the lower chain directly (dashed arrows), we transform samples from the top chain
to approximate samples from the lower (wavy arrows).
Sampling Z given γ, X, Y: Finally, for inference on Z given γ, X, Y we may use techniques
outlined in [5]. We are interested in performing active learning in our model, so it is imperative
to keep the conditional sampling calculations as compact as possible. One simple way to achieve
this is to work with a finite-dimensional approximation to the IBP: We constrain Z to be an m × K
matrix, assigning each labeler at most K active latent features. This is not a substantial limitation;
in practice the truncated IBP often performs comparably, and for K → ∞ converges in distribution
to the full IBP [5]. Let m_{−l,b} = Σ_{l′≠l} z_{l′,b} be the number of labelers, excluding l, with feature b
active. Define θ_l(z_{l,b}) = z_{l,b} γ_b + Σ_{b′≠b} z_{l,b′} γ_{b′} as the parameter θ_l either specifically including
or excluding γ_b. Now if we let z_{−l,b} be the column b of Z, excluding element z_{l,b}, then updated
elements of Z can be sampled one by one as

p(z_{l,b} = 1 | z_{−l,b}) = (m_{−l,b} + α/K) / (n + α/K),   (6)

p(z_{l,b} | z_{−l,b}, γ, X, Y) ∝ p(z_{l,b} | z_{−l,b}) ∏_{i: y_{i,l}≠0} Φ(y_{i,l} x_i^⊤ θ_l(z_{l,b})).   (7)
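One update of Eqs. (6)-(7) multiplies the finite-IBP prior probability by the probit likelihood of labeler l's labels under θ_l with feature b switched on or off, then normalizes over z_{l,b} ∈ {0, 1}. A hedged sketch; all names and toy numbers are ours:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def sample_z_lb(m_minus, n, alpha, K, x_rows, y_vals, theta_on, theta_off, rng):
    # Prior term, Eq. (6): p(z_{l,b} = 1 | rest) = (m_minus + alpha/K) / (n + alpha/K)
    prior_on = (m_minus + alpha / K) / (n + alpha / K)
    # Likelihood over labeler l's observed labels with feature b on / off, Eq. (7)
    like_on = np.prod([normal_cdf(y * float(x @ theta_on))
                       for x, y in zip(x_rows, y_vals)])
    like_off = np.prod([normal_cdf(y * float(x @ theta_off))
                        for x, y in zip(x_rows, y_vals)])
    p1 = prior_on * like_on           # unnormalized p(z_{l,b} = 1 | ...)
    p0 = (1.0 - prior_on) * like_off  # unnormalized p(z_{l,b} = 0 | ...)
    p = p1 / (p1 + p0)
    return int(rng.random() < p), p

rng = np.random.default_rng(4)
x_rows = [np.array([1.0, 0.5]), np.array([-0.3, 1.2])]
y_vals = [+1, -1]
z_new, p_on = sample_z_lb(m_minus=2, n=10, alpha=1.0, K=5,
                          x_rows=x_rows, y_vals=y_vals,
                          theta_on=np.array([0.4, 0.1]),
                          theta_off=np.array([0.2, -0.1]), rng=rng)
```

Only labeler l's own labels enter the product, which is what keeps each z_{l,b} update cheap and makes per-selection active learning feasible.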
After reaching approximate stationarity, we collect samples (γ^s, Z^s), s = 1, ..., S, from the Gibbs
sampler as they are generated. We then compute samples from p(θ_r | Y, X) by writing θ_r^s = Z_r^{s⊤} γ^s.
5 Active Learning
The previous section outlined how, given a small set of gold standard labels from r, the remaining
labels can be predicted via posterior inference p(θ_r | Y, X). In this section we take an active learning
approach [1, 7] to incrementally add labels to Y so as to quickly learn about θ_r while reducing data
acquisition costs. Active learning allows us to guide the data collection process through model inferences, thus integrating the data collection, data curation and learning steps of the crowdsourcing
pipeline. We envision a unified system that automatically asks for more labels from those labelers
on those tasks that are most useful in inferring θ_r. This is in contrast to [9], where labelers cannot be
targeted with tasks. It is also unlike [4] since we can let labelers be arbitrarily unhelpful, and differs
from [17] which assumes a single latent truth.
A well-known active learning criterion popularized by Lindley [7] is to label that task next which
maximizes the prior-posterior reduction in entropy of an inferential quantity of interest. The original
formulation has been generalized beyond entropy to arbitrary utility functionals U(·) of the updated
posterior probability [1]. The functional U(·) is a model parameter that can depend on the type of
inferences we are interested in. In our particular setup, we wish to infer the parameter θ_r to predict
labels for the researcher r. Suppose we chose to solicit a label for task i0 from labeler l0, which produced label y_{i0,l0}. The utility of this observation is U(p(θ_r | y_{i0,l0})). The average utility of receiving
a label on task i0 from labeler l0 is I((i0,l0), p(θ_r)) = E(U(p(θ_r | y_{i0,l0}))), where the expectation
is taken with respect to the predictive label probabilities p(y_{i0,l0} | x_{i0}) = ∫ p(y_{i0,l0} | x_{i0}, θ_{l0}) p(θ_{l0}) dθ_{l0}.
Active learning chooses that pair (i0,l0) which maximizes I((i0,l0), p(θ_r)). If we want to choose
the next task for the researcher to label, we constrain l0 = r. To query the crowd we let l0 ≠ r. Similarly, we can constrain i0 to any particular value or subset of interest. For the following discussion
we let U(p(θ_r | y_{i0,l0})) = ||E_{p(θ_r)}(θ_r) − E_{p(θ_r | y_{i0,l0})}(θ_r)||_2 be the ℓ2 norm of the difference in means
of θ_r. Picking the task that shifts the posterior mean the most is similar in spirit to the common
criterion of maximizing the Kullback-Leibler divergence between the prior and posterior.
5.1 Active learning for MCMC inference
A straightforward application of active learning is impractical using Gibbs sampling, because to
score a single task-labeler pair (i0, l0) we would have to run two Gibbs samplers (one for each of
the two possible labels) in order to approximate the updated posterior distributions. Suppose we
started with k task-labeler pairs that active learning could choose from. Depending on the number
of selections we wish to perform, we would have to run on the order of k to k² Gibbs samplers within the
topmost Gibbs sampler of Section 4. Clearly, such a scoring approach is not practical. To solve
this problem, we propose a general purpose strategy to approximate the stationary distribution of
a perturbed Markov chain using that of an unperturbed Markov chain. The approximation allows
efficient active learning in our model that outperforms naïve scoring both in speed and quality.
The main idea can be summarized as follows. Suppose we have two Markov chains, p(θ_r^t | θ_r^{t−1})
and p̃(θ̃_r^t | θ̃_r^{t−1}), the latter of which is a slight perturbation of the former. Denote the stationary
distributions by p_∞(θ_r) and p̃_∞(θ̃_r), respectively. If we are given the stationary distribution p_∞(θ_r)
of the unperturbed chain, then we propose to approximate the perturbed stationary distribution by

p̃_∞(θ̃_r) ≈ ∫ p̃(θ̃_r | θ_r) p_∞(θ_r) dθ_r.   (8)

If p̃(θ̃^t | θ̃^{t−1}) = p(θ̃^t | θ̃^{t−1}) the approximation is exact. Our hope is that if the perturbation is
small enough the above approximation is good. To use this practically with MCMC, we first run the
unperturbed MCMC chain to approximate stationarity, and then use samples of p_∞(θ_r) to compute
approximate samples from p̃_∞(θ̃_r). Figure 1(b) shows this scheme visually.
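The approximation in Eq. (8) can be illustrated on a pair of Gaussian AR(1) chains whose stationary distributions are known in closed form. This is a toy example of ours, not the paper's model:

```python
import numpy as np

# Unperturbed chain:  theta_t = a * theta_{t-1} + b + eps,  eps ~ N(0, 1)
# Perturbed chain:    same a, slightly shifted drift b_tilde.
a, b, b_tilde = 0.5, 1.0, 1.1
mean_unpert = b / (1 - a)        # stationary mean of unperturbed chain (= 2.0)
mean_pert = b_tilde / (1 - a)    # stationary mean of perturbed chain (= 2.2)

rng = np.random.default_rng(5)
var_stat = 1.0 / (1 - a**2)      # stationary variance of the AR(1) chain
theta_s = rng.normal(mean_unpert, np.sqrt(var_stat), size=20000)

# Eq. (8): push unperturbed stationary samples through ONE perturbed transition
theta_tilde = a * theta_s + b_tilde + rng.normal(size=theta_s.size)
approx_mean = theta_tilde.mean()
```

With the drift perturbed from b = 1.0 to 1.1, one perturbed transition moves the mean from 2.0 to about 2.1, versus the true perturbed stationary mean of 2.2; the residual bias shrinks as the perturbation does, which is the regime the method relies on.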
To map this idea to our active learning setup we conceptually let the unperturbed chain p(θ_r^t | θ_r^{t−1})
be the chain on θ_r induced by the Gibbs sampler in Section 4. The perturbed chain p̃(θ̃_r^t | θ̃_r^{t−1})
represents the chain where we have added a new observation y_{i0,l0} to the measured data. If we have
S samples θ_r^s from p_∞(θ_r), then we approximate the perturbed distribution as

p̃_∞(θ̃_r) ≈ (1/S) Σ_{s=1}^S p̃(θ̃_r | θ_r^s),   (9)

and the active learning score as U(p(θ_r | y_{i0,l0})) ≈ U(p̃_∞(θ̃_r)). To further specialize this strategy
to our model we first rewrite the Gibbs sampler outlined in Section 4. We suppress mentions of
X and Y in the subsequent presentation. Instead of first sampling (T | γ^{t−1}, Z) from Eq. (3), and
then sampling (γ^t | T, Z) from Eq. (4), we combine them into one larger sampling step (γ^t | γ^{t−1}, Z).
Starting from a fixed γ^{t−1} and Z we sample from γ^t as

γ^t | γ^{t−1}, Z  =^d  ε_Σ + Σ ( Σ_{(i,l)∈L} Z_l x_i ( ε_1 + t_{i,l} | γ^{t−1}, Z ) ),   (10)
where ε_Σ is a zero-mean Gaussian with covariance Σ, and ε_1 a standard normal random variable. If it
were feasible, we could also absorb the intermediate sampling of Z into the notation and write down
a single induced Markov chain (θ_r^t | θ_r^{t−1}), as referred to in Eqs. (8) and (9). As this is not possible,
we will account for Z separately. We see that the effect of adding a new observation y_{i0,l0} is to perturb
the Markov chain in Eq. (10) by adding an element to L. Supposing we added this new observation
at time t − 1, let Σ_{(i0,l0)} be defined as Σ but with (i0,l0) added to L. Straightforward calculations
using the Sherman-Morrison-Woodbury identity on Σ_{(i0,l0)} give that, conditioned on γ^{t−1}, Z, we can
[Figure 2 appears here.]
Figure 2: Examples of easy and ambiguous labeling tasks. We asked labelers to determine if the
triangle is to the left or above the square.
write the first step of the perturbed Gibbs sampler as a function of the unperturbed Gibbs sampler.
If we let A_{i0,l0} = Σ Z_{l0} x_{i0} x_{i0}^⊤ Z_{l0}^⊤ / (1 + x_{i0}^⊤ Z_{l0}^⊤ Σ Z_{l0} x_{i0}) for compactness, then we yield

γ_{(i0,l0)}^t | γ^{t−1}, Z  =^d  (I − A_{i0,l0}) (γ^t | γ^{t−1}, Z) + Σ_{(i0,l0)} Z_{l0} x_{i0} ( ε_1 + t_{i0,l0} | γ^{t−1}, Z ).   (11)
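The matrix A_{i0,l0} in Eq. (11) is the Sherman-Morrison rank-one correction for adding the single observation (i0, l0) to L; the identity behind it checks out numerically. An illustrative check with our own toy matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 5
M = rng.normal(size=(d, d))
Sigma = np.linalg.inv(M @ M.T + np.eye(d))  # some SPD covariance
v = rng.normal(size=d)                       # v plays the role of Z_l0 x_i0

# Adding (i0, l0) to L adds v v^T to the precision; Sherman-Morrison
# gives the updated covariance without re-inverting from scratch.
Sigma_new_direct = np.linalg.inv(np.linalg.inv(Sigma) + np.outer(v, v))
Sigma_new_sm = Sigma - (Sigma @ np.outer(v, v) @ Sigma) / (1.0 + v @ Sigma @ v)
```

Reusing Σ this way is what makes scoring each candidate pair cheap compared to recomputing the full conditional of Eq. (5).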
To approximate the utility U(·) we now appeal to Eq. (9) and estimate the difference in means using
recent samples (γ^s, Z^s), s = 1, ..., S from the unperturbed sampler. In terms of Eqs. (10) and (11),

U(p(θ_r | y_{i0,l0})) = || E_{p(θ_r)}(θ_r) − E_{p(θ_r | y_{i0,l0})}(θ_r) ||_2   (12)

≈ || (1/(S−1)) Σ_{s=2}^S E( Z_r^{s−1⊤} ( (γ | γ^{s−1}, Z^{s−1}) − (γ_{(i0,l0)} | γ^{s−1}, Z^{s−1}) ) ) ||_2.   (13)
By simple cancellations and expectations of truncated normal variables we can reduce the above
expression to a sample average of elementary calculations. Note that the sample γ^s is a realization
of (γ | γ^{s−1}, Z^{s−1}). We have used this to approximate E(γ | γ^{s−1}, Z^{s−1}) ≈ γ^s. Thus, the sum
only runs over S − 1 terms. In principle the exact expectation could also be computed. The final
utility calculation is straightforward but too long to expand. Finally, we use samples from the Gibbs
sampler to approximate p(y_{i0,l0} | x_{i0}) and estimate I((i0,l0), p(θ_r)) for querying labeler l0 on task i0.
6 Experimental Results
We evaluated our active learning method on an ambiguous localization task which asked labelers on
Amazon Mechanical Turk to determine if a triangle was to the left or above a rectangle. Examples
are shown in Figure 2. Tasks such as these are important for learning computer vision models of
perception. Rotation, translation and scale, as well as aspect ratios, were pseudo-randomly sampled
in a way that produced ambiguous tasks. We expected labelers to use centroids, extreme points
and object sizes in different ways to solve the tasks, thus leading to structurally biased responses.
Additionally, our model will also have to deal with other forms of noise and bias. The gold standard
was to compare only the centroids of the two objects. For training we generated 1000 labeling tasks
and solicited 3 labels for each task. Tasks were solved by 75 labelers with moderate disagreement.
To emphasize our results, we retained only the subset of 523 tasks with disagreement. We provided
about 60 gold standard labels to BBMC and then performed inference and active learning on θ_r so as
to learn a predictive model emulating gold standard labels. We evaluated methods based on the log
likelihood and error rate on a held-out test set of 1101 datapoints.¹ All results shown in Table 1 were
averaged across 10 random restarts. We considered two scenarios. The first compares our model to
other methods when no active learning is performed. This will demonstrate the advantages of the
latent feature model presented in Sections 3 and 4. The second scenario compares performance of
our active learning scheme to various other methods. This will highlight the viability of our overall
scheme presented in Section 5 that ties data collection together with data curation and learning.
First we show performance without active learning. Here only about 60 gold standard labels and all
the labeler data is available for training. The results are shown in the top three rows of Table 1. Our
method, "BBMC," outperforms the other two methods by a large margin. The BBMC scores were
computed by running the Gibbs sampler of Section 4 with 2000 iterations burnin and then computing
¹ The test set was similarly constructed by selecting from 2000 tasks those on which three labelers disagreed.
Method      Final Loglik       Final Error
GOLD        −3716 ± 1695       0.0547 ± 0.0102
CONS        −421.1 ± 2.6       0.0935 ± 0.0031
BBMC        −219.1 ± 3.1       0.0309 ± 0.0033
GOLD-ACT    −1957 ± 696        0.0290 ± 0.0037
CONS-ACT    −396.1 ± 3.6       0.0906 ± 0.0024
RAND-ACT    −186.0 ± 2.2       0.0292 ± 0.0029
DIS-ACT     −198.3 ± 5.8       0.0392 ± 0.0052
MCMC-ACT    −196.1 ± 6.7       0.0492 ± 0.0050
BBMC-ACT    −160.8 ± 3.9       0.0188 ± 0.0018
Table 1: The top three rows give results without and the bottom six rows results with active learning.
a predictive model by averaging over the next 20000 iterations. The alternatives include "GOLD,"
which is a logistic regression trained only on gold standard labels, and "CONS," which evaluates
logistic regression trained on the overall majority consensus. Training on the gold standard only
often overfits, and training on the consensus systematically misleads.
Next, we evaluate our active learning method. As before, we seed the model with about 60 gold
standard labels. We repeatedly select a new task for which to receive a gold standard label from the
researcher. That is, for this experiment we constrained active learning to use l0 = r. Of course, in our
framework we could have just as easily queried labelers in the crowd. Following 2000 steps burnin
we performed active learning every 200 iterations for a total of 100 selections. The reported scores
were computed by estimating a predictive model from the last 200 iterations. The results are shown
in the lower six rows of Table 1. Our model with active learning, "BBMC-ACT," outperforms all
alternatives. The first alternative we compared against, "MCMC-ACT," does active learning with the
MCMC-based scoring method outlined in Section 5. In line with our utility U(·) this method scores
a task by running two Gibbs samplers within the overall Gibbs sampler and then approximates
the expected mean difference of θ_r. Due to time constraints, we could only afford to run each
subordinate chain for 10 steps. Even then, this method requires on the order of 10 × 83500 Gibbs
sampling iterations for 100 active learning steps. It takes about 11 hours to run the entire chain,
while BBMC only requires 2.5 hours. The MCMC method performs very poorly. This demonstrates
our point: Since the MCMC method computes a similar quantity as our approximation, it should
perform similarly given enough iterations in each subchain. However, 10 iterations is not nearly
enough time for the scoring chains to mix and also quite a small number to compute empirical
averages, leading to decreased performance. A more realistic alternative to our model is "DIS-ACT,"
which picks one of the tasks with most labeler disagreement to label next. Lastly, the baseline
alternatives include "GOLD-ACT" and "CONS-ACT" which pick a random task to label and then
learn logistic regressions on the gold standard or consensus labels respectively. Those results can
be directly compared against "RAND-ACT," which uses our model and inference procedure but
similarly selects tasks at random. In line with our earlier evaluation, we still outperform these two
methods when effectively no active learning is done.
7 Conclusions
We have presented Bayesian Bias Mitigation for Crowdsourcing (BBMC) as a framework to unify
the three main steps in the crowdsourcing pipeline: data collection, data curation and learning.
Our model captures labeler bias through a flexible latent feature model and conceives of the entire pipeline in terms of probabilistic inference. An important contribution is a general purpose
approximation strategy for Markov chains that allows us to efficiently perform active learning, despite relying on Gibbs sampling for inference. Our experiments show that BBMC is fast and greatly
outperforms a number of commonly used alternatives.
Acknowledgements
We would like to thank Purnamrita Sarkar for helpful discussions and Dave Golland for assistance
in developing the Amazon Mechanical Turk HITs.
References
[1] K. Chaloner and I. Verdinelli. Bayesian Experimental Design: A Review. Statistical Science,
10(3):273–304, 1995.
[2] O. Dekel and O. Shamir. Good Learners for Evil Teachers. In L. Bottou and M. Littman,
editors, Proceedings of the 26th International Conference on Machine Learning (ICML). Omnipress, 2009.
[3] O. Dekel and O. Shamir. Vox Populi: Collecting High-Quality Labels from a Crowd. In
Proceedings of the 22nd Annual Conference on Learning Theory (COLT), Montreal, Quebec,
Canada, 2009.
[4] P. Donmez, J. G. Carbonell, and J. Schneider. Efficiently Learning the Accuracy of Labeling Sources for Selective Sampling. In Proceedings of the 15th ACM SIGKDD, KDD, Paris,
France, 2009.
[5] T. L. Griffiths and Z. Ghahramani. Infinite Latent Feature Models and the Indian Buffet Process. Technical report, Gatsby Computational Neuroscience Unit, 2005.
[6] P. G. Ipeirotis, F. Provost, and J. Wang. Quality Management on Amazon Mechanical Turk. In
Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP, pages 64–67,
Washington DC, 2010.
[7] D. V. Lindley. On a Measure of the Information Provided by an Experiment. The Annals of
Mathematical Statistics, 27(4):986–1005, 1956.
[8] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning
from Crowds. Journal of Machine Learning Research, 11:1297–1322, April 2010.
[9] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get Another Label? Improving Data Quality and
Data Mining using Multiple, Noisy Labelers. In Proceeding of the 14th ACM SIGKDD, KDD,
Las Vegas, Nevada, 2008.
[10] P. Smyth, U. M. Fayyad, M. C. Burl, P. Perona, and P. Baldi. Inferring Ground Truth from
Subjective Labelling of Venus Images. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors,
Advances in Neural Information Processing Systems 7 (NIPS). MIT Press, 1994.
[11] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and Fast – But is it Good? Evaluating
Non-Expert Annotations for Natural Language Tasks. In Proceedings of EMNLP. Association
for Computational Linguistics, 2008.
[12] A. Sorokin and D. Forsyth. Utility Data Annotation with Amazon Mechanical Turk. In CVPR
Workshop on Internet Vision, Anchorage, Alaska, 2008.
[13] X. Su and T. M. Khoshgoftaar. A Survey of Collaborative Filtering Techniques. Advances in
Artificial Intelligence, 2009:4:2–4:2, January 2009.
[14] P. Wais, S. Lingamnei, D. Cook, J. Fennell, B. Goldenberg, D. Lubarov, D. Marin, and H. Simons. Towards Building a High-Quality Workforce with Mechanical Turk. In NIPS Workshop
on Computational Social Science and the Wisdom of Crowds, Whistler, BC, Canada, 2010.
[15] P. Welinder, S. Branson, S. Belongie, and P. Perona. The Multidimensional Wisdom of Crowds.
In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances
in Neural Information Processing Systems 23 (NIPS). MIT Press, 2010.
[16] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose Vote Should Count
More: Optimal Integration of Labels from Labelers of Unknown Expertise. In Y. Bengio,
D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural
Information Processing Systems 22 (NIPS). MIT Press, 2009.
[17] Y. Yan, R. Rosales, G. Fung, and J. G. Dy. Active Learning from Crowds. In L. Getoor and
T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning
(ICML), Bellevue, Washington, 2011.
[18] Y. Yan, R. Rosales, G. Fung, M. Schmidt, G. Hermosillo, L. Bogoni, L. Moy, and J. G. Dy.
Modeling Annotator Expertise: Learning When Everybody Knows a Bit of Something. In
Proceedings of AISTATS, volume 9, Chia Laguna, Sardinia, Italy, 2010.
[19] K. Yu, A. Schwaighofer, V. Tresp, X. Xu, and H. Kriegel. Probabilistic Memory-based Collaborative Filtering. IEEE Transactions on Knowledge and Data Engineering, 16(1):56–69,
January 2004.
subchain:1 social:1 transaction:1 functionals:1 approximate:14 compact:1 emphasize:1 absorb:2 kullback:1 unreliable:2 keep:1 active:43 belongie:1 xi:9 latent:19 reputation:1 table:4 additionally:1 learn:4 lr0:1 contributes:1 forest:1 improving:1 schuurmans:1 bottou:1 complex:1 aistats:1 pk:1 main:4 arrow:2 noise:2 arise:1 repeated:2 allowed:1 weed:1 xu:1 augmented:2 referred:1 retrain:1 scheffer:1 gatsby:1 structurally:1 inferring:2 wish:3 infancy:1 governed:1 weighting:1 down:1 specific:1 unperturbed:8 appeal:1 multitude:1 disagreed:1 workshop:3 restricting:1 adding:2 effectively:2 labelling:1 conditioned:2 margin:1 entropy:2 simply:1 infinitely:2 bogoni:2 schwaighofer:1 corresponds:1 truth:4 relies:1 acm:3 cdf:1 conditional:3 viewed:1 goal:2 presentation:2 targeted:1 exposition:1 identity:1 towards:1 shared:5 feasible:1 hard:2 typical:2 specifically:2 infinite:4 reducing:1 sampler:17 averaging:1 total:1 pas:1 verdinelli:1 experimental:2 la:1 vote:1 meaningful:1 burnin:2 exception:1 select:1 whistler:1 latter:1 indian:2 evaluate:1 mcmc:8 handling:1 |
3,658 | 4,312 | Generalizing from Several Related Classification
Tasks to a New Unlabeled Sample
Gyemin Lee, Clayton Scott
University of Michigan
{gyemin,clayscot}@umich.edu
Gilles Blanchard
Universität Potsdam
[email protected]
Abstract
We consider the problem of assigning class labels to an unlabeled test data set,
given several labeled training data sets drawn from similar distributions. This
problem arises in several applications where data distributions fluctuate because
of biological, technical, or other sources of variation. We develop a distribution-free, kernel-based approach to the problem. This approach involves identifying
an appropriate reproducing kernel Hilbert space and optimizing a regularized empirical risk over the space. We present generalization error analysis, describe universal kernels, and establish universal consistency of the proposed methodology.
Experimental results on flow cytometry data are presented.
1 Introduction
Is it possible to leverage the solution of one classification problem to solve another? This is a question that has received increasing attention in recent years from the machine learning community, and
has been studied in a variety of settings, including multi-task learning, covariate shift, and transfer
learning. In this work we study a new setting for this question, one that incorporates elements of the
three aforementioned settings, and is motivated by many practical applications.
To state the problem, let X be a feature space and Y a space of labels to predict; to simplify the
exposition, we will assume the setting of binary classification, $\mathcal{Y} = \{-1, 1\}$, although the methodology and results presented here are valid for general output spaces. For a given distribution $P_{XY}$,
we refer to the X marginal distribution PX as simply the marginal distribution, and the conditional
PXY (Y |X) as the posterior distribution.
There are $N$ similar but distinct distributions $P_{XY}^{(i)}$ on $\mathcal{X} \times \mathcal{Y}$, $i = 1, \ldots, N$. For each $i$, there is a training sample $S_i = (X_{ij}, Y_{ij})_{1 \le j \le n_i}$ of iid realizations of $P_{XY}^{(i)}$. There is also a test distribution $P_{XY}^T$ that is similar to but again distinct from the "training distributions" $P_{XY}^{(i)}$. Finally, there is a test sample $(X_j^T, Y_j^T)_{1 \le j \le n_T}$ of iid realizations of $P_{XY}^T$, but in this case the labels $Y_j^T$ are not observed.
The goal is to correctly predict these unobserved labels. Essentially, given a random sample from the marginal test distribution $P_X^T$, we would like to predict the corresponding labels. Thus, when we say that the training and test distributions are "similar," we mean that there is some pattern making
it possible to learn a mapping from marginal distributions to labels. We will refer to this learning
problem as learning marginal predictors. A concrete motivating application is given below.
This problem may be contrasted with other learning problems. In multi-task learning, only the
training distributions are of interest, and the goal is to use the similarity among distributions to
improve the training of individual classifiers [1, 2, 3]. In our context, we view these distributions
as "training tasks," and seek to generalize to a new distribution/task. In the covariate shift problem,
the marginal test distribution is different from the marginal training distribution(s), but the posterior
distribution is assumed to be the same [4]. In our case, both marginal and posterior test distributions
can differ from their training counterparts [5].
Finally, in transfer learning, it is typically assumed that at least a few labels are available for the
test data, and the training data sets are used to improve the performance of a standard classifier, for
example by learning a metric or embedding which is appropriate for all data sets [6, 7]. In our case,
no test labels are available, but we hope that through access to multiple training data sets, it is still
possible to obtain collective knowledge about the ?labeling process? that may be transferred to the
test distribution. Some authors have considered transductive transfer learning, which is similar to
the problem studied here in that no test labels are available. However, existing work has focused on
the case N = 1 and typically relies on the covariate shift assumption [8].
We propose a distribution-free, kernel-based approach to the problem of learning marginal predictors. Our methodology is shown to yield a consistent learning procedure, meaning that the generalization error tends to the best possible as the sample sizes N, {ni }, nT tend to infinity. We also offer
a proof-of-concept experimental study validating the proposed approach on flow cytometry data,
including comparisons to multi-task kernels and a simple pooling approach.
2 Motivating Application: Automatic Gating of Flow Cytometry Data
Flow cytometry is a high-throughput measurement platform that is an important clinical tool for the
diagnosis of many blood-related pathologies. This technology allows for quantitative analysis of
individual cells from a given population, derived for example from a blood sample from a patient.
We may think of a flow cytometry data set as a set of d-dimensional attribute vectors (Xj )1?j?n ,
where n is the number of cells analyzed, and d is the number of attributes recorded per cell. These
attributes pertain to various physical and chemical properties of the cell. Thus, a flow cytometry
data set is a random sample from a patient-specific distribution.
Now suppose a pathologist needs to analyze a new ("test") patient with data $(X_j^T)_{1 \le j \le n_T}$. Before
proceeding, the pathologist first needs the data set to be "purified" so that only cells of a certain
type are present. For example, lymphocytes are known to be relevant for the diagnosis of leukemia,
whereas non-lymphocytes may potentially confound the analysis. In other words, it is necessary to
determine the label $Y_j^T \in \{-1, 1\}$ associated to each cell, where $Y_j^T = 1$ indicates that the j-th cell
is of the desired type.
In clinical practice this is accomplished through a manual process known as "gating." The data are
visualized through a sequence of two-dimensional scatter plots, where at each stage a line segment
or polygon is manually drawn to eliminate a portion of the unwanted cells. Because of the variability
in flow cytometry data, this process is difficult to quantify in terms of a small subset of simple rules.
Instead, it requires domain-specific knowledge and iterative refinement. Modern clinical laboratories
routinely see dozens of cases per day, so it would be desirable to automate this process.
Since clinical laboratories maintain historical databases, we can assume access to a number (N ) of
historical patients that have already been expert-gated. Because of biological and technical variations in flow cytometry data, the distributions $P_{XY}^{(i)}$ of the historical patients will vary. For example,
Fig. 1 shows exemplary two-dimensional scatter plots for two different patients, where the shaded
cells correspond to lymphocytes. Nonetheless, there are certain general trends that are known to
hold for all flow cytometry measurements. For example, lymphocytes are known to exhibit low
levels of the "side-scatter" (SS) attribute, while expressing high levels of the attribute CD45 (see
column 2 of Fig. 1). More generally, virtually every cell type of interest has a known tendency
(e.g., high or low) for most measured attributes. Therefore, it is reasonable to assume that there is an
underlying distribution (on distributions) governing flow cytometry data sets, that produces roughly
similar distributions thereby making possible the automation of the gating process.
3 Formal Setting
Let $\mathcal{X}$ denote the observation space and $\mathcal{Y} = \{-1, 1\}$ the output space. Let $\mathcal{P}_{X \times Y}$ denote the set of probability distributions on $\mathcal{X} \times \mathcal{Y}$, $\mathcal{P}_X$ the set of probability distributions on $\mathcal{X}$, and $\mathcal{P}_{Y|X}$ the set of conditional probabilities of $Y$ given $X$ (also known as Markov transition kernels from $\mathcal{X}$ to
Figure 1: Two-dimensional projections of multi-dimensional flow cytometry data. Each row corresponds to a single patient. The distribution of cells differs from patient to patient. Lymphocytes, a
type of white blood cell, are marked dark (blue) and others are marked bright (green). These were
manually selected by a domain expert.
$\mathcal{Y}$) which we also call "posteriors" in this work. The disintegration theorem (see for instance [9], Theorem 6.4) tells us that (under suitable regularity properties, e.g., $\mathcal{X}$ is a Polish space) any element $P_{XY} \in \mathcal{P}_{X \times Y}$ can be written as a product $P_{XY} = P_X \otimes P_{Y|X}$, with $P_X \in \mathcal{P}_X$, $P_{Y|X} \in \mathcal{P}_{Y|X}$. The space $\mathcal{P}_{X \times Y}$ is endowed with the topology of weak convergence and the associated Borel $\sigma$-algebra.
It is assumed that there exists a distribution $\mu$ on $\mathcal{P}_{X \times Y}$, where $P_{XY}^{(1)}, \ldots, P_{XY}^{(N)}$ are i.i.d. realizations from $\mu$, and the sample $S_i$ is made of $n_i$ i.i.d. realizations of $(X, Y)$ following the distribution $P_{XY}^{(i)}$.
Now consider a test distribution $P_{XY}^T$ and test sample $S^T = (X_j^T, Y_j^T)_{1 \le j \le n_T}$, whose labels are not observed. A decision function is a function $f : \mathcal{P}_X \times \mathcal{X} \to \mathbb{R}$ that predicts $\hat{Y}_i = f(\hat{P}_X, X_i)$, where $\hat{P}_X$ is the associated empirical $X$ distribution. If $\ell : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}^+$ is a loss, then the average loss incurred on the test sample is $\frac{1}{n_T} \sum_{i=1}^{n_T} \ell(\hat{Y}_i^T, Y_i^T)$. Based on this, we define the average generalization error of a decision function over test samples of size $n_T$,
$$\mathcal{E}(f, n_T) := \mathbb{E}_{P_{XY}^T \sim \mu}\, \mathbb{E}_{S^T \sim (P_{XY}^T)^{\otimes n_T}} \left[ \frac{1}{n_T} \sum_{i=1}^{n_T} \ell(f(\hat{P}_X^T, X_i^T), Y_i^T) \right]. \qquad (1)$$
An important point of the analysis is that, at training time as well as at test time, the marginal distribution $P_X$ for a sample is only known through the sample itself, that is, through the empirical marginal $\hat{P}_X$. As is clear from equation (1), because of this the generalization error also depends on the test sample size $n_T$. As $n_T$ grows, $\hat{P}_X^T$ will converge to $P_X^T$. This motivates the following generalization error when we have an infinite test sample, where we then assume that the true marginal $P_X^T$ is observed:
$$\mathcal{E}(f, \infty) := \mathbb{E}_{P_{XY}^T \sim \mu}\, \mathbb{E}_{(X^T, Y^T) \sim P_{XY}^T} \left[ \ell(f(P_X^T, X^T), Y^T) \right]. \qquad (2)$$
To gain some insight into this risk, let us decompose $\mu$ into two parts, $\mu_X$ which generates the marginal distribution $P_X$, and $\mu_{Y|X}$ which, conditioned on $P_X$, generates the posterior $P_{Y|X}$. Denote $\tilde{X} = (P_X, X)$. We then have
$$\mathcal{E}(f, \infty) = \mathbb{E}_{P_X \sim \mu_X}\, \mathbb{E}_{P_{Y|X} \sim \mu_{Y|X}}\, \mathbb{E}_{X \sim P_X}\, \mathbb{E}_{Y|X \sim P_{Y|X}} \left[ \ell(f(\tilde{X}), Y) \right]$$
$$= \mathbb{E}_{P_X \sim \mu_X}\, \mathbb{E}_{X \sim P_X}\, \mathbb{E}_{P_{Y|X} \sim \mu_{Y|X}}\, \mathbb{E}_{Y|X \sim P_{Y|X}} \left[ \ell(f(\tilde{X}), Y) \right]$$
$$= \mathbb{E}_{(\tilde{X}, Y) \sim Q_\mu} \left[ \ell(f(\tilde{X}), Y) \right].$$
Here $Q_\mu$ is the distribution that generates $\tilde{X}$ by first drawing $P_X$ according to $\mu_X$, and then drawing $X$ according to $P_X$. Similarly, $Y$ is generated, conditioned on $\tilde{X}$, by first drawing $P_{Y|X}$ according to $\mu_{Y|X}$, and then drawing $Y$ from $P_{Y|X}$. From this last expression, we see that the risk is like a standard binary classification risk based on $(\tilde{X}, Y) \sim Q_\mu$. Thus, we can deduce several properties
that are known to hold for binary classification risks. For example, if the loss is the 0/1 loss, then $f^*(\tilde{X}) = 2\tilde{\eta}(\tilde{X}) - 1$ is an optimal predictor, where $\tilde{\eta}(\tilde{X}) = \mathbb{E}_{Y \sim Q_{\mu, Y|\tilde{X}}}\big[\mathbf{1}_{\{Y=1\}}\big]$. More generally,
$$\mathcal{E}(f, \infty) - \mathcal{E}(f^*, \infty) = \mathbb{E}_{\tilde{X} \sim Q_{\mu, \tilde{X}}} \left[ \mathbf{1}_{\{\operatorname{sign}(f(\tilde{X})) \ne \operatorname{sign}(f^*(\tilde{X}))\}}\, \big| 2\tilde{\eta}(\tilde{X}) - 1 \big| \right].$$
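The two-stage structure of $Q_\mu$ can be made concrete with a small simulation. The sketch below is illustrative and not from the paper: the particular choices of $\mu_X$ (a Gaussian with a uniformly drawn mean) and of the posterior (a threshold determined by the marginal, i.e., the Dirac-delta condition of Lemma 3.1 below) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_from_Q_mu(n_points):
    """Draw (X_tilde, Y) pairs from Q_mu via the two-stage process:
    first draw a marginal P_X, then draw X ~ P_X; here the posterior
    P_{Y|X} is a deterministic function of P_X."""
    # Stage 1: draw P_X ~ mu_X (hypothetical: Gaussian with random mean)
    mean = rng.uniform(-2.0, 2.0)
    # Stage 2: draw X ~ P_X
    x = rng.normal(mean, 1.0, size=n_points)
    # Posterior determined by P_X: label is the sign of (x - mean)
    y = np.where(x > mean, 1, -1)
    # The extended pattern is X_tilde = (P_X, X); we carry the mean
    # as a finite summary of P_X
    return mean, x, y

mean, x, y = draw_from_Q_mu(500)
print(mean, x.shape)
```

Repeated calls produce points whose labeling rule changes with the drawn marginal, which is exactly why a fixed classifier on $\mathcal{X}$ alone cannot be optimal here.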
Our goal is a learning rule that asymptotically predicts as well as the global minimizer of (2), for a general loss $\ell$. By the above observations, consistency with respect to a general $\ell$ (thought of as a surrogate) will imply consistency for the 0/1 loss, provided $\ell$ is classification calibrated [10]. Despite the similarity to standard binary classification in the infinite sample case, we emphasize that the learning task here is different, because the realizations $(\tilde{X}_{ij}, Y_{ij})$ are neither independent nor identically distributed.
Finally, we note that there is a condition where for $\mu$-almost all test distributions $P_{XY}^T$, the classifier $f^*(P_X^T, \cdot)$ (where $f^*$ is the global minimizer of (2)) coincides with the optimal Bayes classifier for $P_{XY}^T$, although no labels from this test distribution are observed. This condition is simply that the posterior $P_{Y|X}$ is ($\mu$-almost surely) a function of $P_X$. In other words, with the notation introduced above, $\mu_{Y|X}(P_X)$ is a Dirac delta for $\mu$-almost all $P_X$. Although we will not be assuming this condition throughout the paper, it is implicitly assumed in the motivating application presented in Section 2, where an expert labels the data points by just looking at their marginal distribution.
Lemma 3.1. For a fixed distribution $P_{XY}$, and a decision function $f : \mathcal{X} \to \mathbb{R}$, let us denote $\mathcal{R}(f, P_{XY}) = \mathbb{E}_{(X,Y) \sim P_{XY}}[\ell(f(X), Y)]$ and
$$\mathcal{R}^*(P_{XY}) := \min_{f : \mathcal{X} \to \mathbb{R}} \mathcal{R}(f, P_{XY}) = \min_{f : \mathcal{X} \to \mathbb{R}} \mathbb{E}_{(X,Y) \sim P_{XY}}[\ell(f(X), Y)]$$
the corresponding optimal (Bayes) risk for the loss function $\ell$. Assume that $\mu$ is a distribution on $\mathcal{P}_{X \times Y}$ such that $\mu$-a.s. it holds $P_{Y|X} = F(P_X)$ for some deterministic mapping $F$. Let $f^*$ be a minimizer of the risk (2). Then we have for $\mu$-almost all $P_{XY}$:
$$\mathcal{R}(f^*(P_X, \cdot), P_{XY}) = \mathcal{R}^*(P_{XY})$$
and
$$\mathcal{E}(f^*, \infty) = \mathbb{E}_{P_{XY} \sim \mu}[\mathcal{R}^*(P_{XY})].$$
Proof. Straightforward. Obviously for any $f : \mathcal{P}_X \times \mathcal{X} \to \mathbb{R}$, one has for all $P_{XY}$: $\mathcal{R}(f(P_X, \cdot), P_{XY}) \ge \mathcal{R}^*(P_{XY})$. For any fixed $P_X \in \mathcal{P}_X$, consider $P_{XY} := P_X \otimes F(P_X)$ and $g^*(P_X)$ a Bayes classifier for this joint distribution. Pose $f(P_X, x) := g^*(P_X)(x)$. Then $f$ coincides for $\mu$-almost all $P_{XY}$ with a Bayes classifier for $P_{XY}$, achieving equality in the above inequality. The second equality follows by taking expectation over $P_{XY} \sim \mu$.
4 Learning Algorithm
We consider an approach based on positive semi-definite kernels, or simply kernels for short. Background information on kernels, including the definition, normalized kernels, universal kernels, and
reproducing kernel Hilbert spaces (RKHSs), may be found in [11]. Several well-known learning
algorithms, such as support vector machines and kernel ridge regression, may be viewed as minimizers of a norm-regularized empirical risk over the RKHS of a kernel. A similar development
also exists for multi-task learning [3]. Inspired by this framework, we consider a general kernel
algorithm as follows.
Consider the loss function $\ell : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}^+$. Let $k$ be a kernel on $\mathcal{P}_X \times \mathcal{X}$, and let $\mathcal{H}_k$ be the associated RKHS. For the sample $S_i$ let $\hat{P}_X^{(i)}$ denote the corresponding empirical distribution of the $X_{ij}$s. Also consider the extended input space $\mathcal{P}_X \times \mathcal{X}$ and the extended data $\tilde{X}_{ij} = (\hat{P}_X^{(i)}, X_{ij})$. Note that $\hat{P}_X^{(i)}$ plays a role similar to the task index in multi-task learning. Now define
$$\hat{f}_\lambda = \arg\min_{f \in \mathcal{H}_k} \frac{1}{N} \sum_{i=1}^N \frac{1}{n_i} \sum_{j=1}^{n_i} \ell(f(\tilde{X}_{ij}), Y_{ij}) + \lambda \|f\|^2. \qquad (3)$$
For the hinge loss, by the representer theorem [12] this optimization problem reduces to a quadratic program equivalent to the dual of a kind of cost-sensitive SVM, and therefore can be solved using existing software packages. The final predictor has the form
$$\hat{f}_\lambda(\hat{P}_X, x) = \sum_{i=1}^N \sum_{j=1}^{n_i} \alpha_{ij} Y_{ij}\, k((\hat{P}_X^{(i)}, X_{ij}), (\hat{P}_X, x))$$
where the $\alpha_{ij}$ are nonnegative and mostly zero. See [11] for details.
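As a minimal illustration of fitting a decision function on the extended inputs $(\hat{P}_X^{(i)}, X_{ij})$, the sketch below substitutes a squared loss (kernel ridge regression) for the hinge loss so that the regularized problem (3) has a closed form; the toy tasks, bandwidths, and the use of the linear kernel on embeddings for $k_P$ are all hypothetical simplifications, not the paper's SVMlight setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_gram(A, B, sigma):
    """Gram matrix of a Gaussian kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mean_embedding_gram(samples_a, samples_b, sigma):
    """<Psi(P_a), Psi(P_b)> estimated from samples: the average of
    k'_X over all cross pairs (linear-K simplification of k_P)."""
    return np.array([[gaussian_gram(Sa, Sb, sigma).mean()
                      for Sb in samples_b] for Sa in samples_a])

# Hypothetical training tasks: two clusters, each with its own labeling
tasks = [rng.normal(m, 1.0, size=(30, 2)) for m in (-1.0, 1.0)]
X = np.vstack(tasks)
y = np.concatenate([np.sign(t[:, 0] - m) for t, m in zip(tasks, (-1.0, 1.0))])
task_of = np.repeat([0, 1], 30)

# Product kernel (4): k = k_P(P_i, P_j) * k_X(x_i, x_j)
KP = mean_embedding_gram(tasks, tasks, sigma=1.0)
K = KP[task_of][:, task_of] * gaussian_gram(X, X, 1.0)

lam = 0.1
alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)  # ridge solution
train_pred = np.sign(K @ alpha)
print((train_pred == y).mean())
```

The marginal-dependent factor $k_P$ lets the fitted function assign different labelings to the two tasks even where their supports overlap, which a kernel on $\mathcal{X}$ alone could not do.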
In the rest of the paper we will consider a kernel $k$ on $\mathcal{P}_X \times \mathcal{X}$ of the product form
$$k((P_1, x_1), (P_2, x_2)) = k_P(P_1, P_2)\, k_X(x_1, x_2), \qquad (4)$$
where $k_P$ is a kernel on $\mathcal{P}_X$ and $k_X$ a kernel on $\mathcal{X}$. Furthermore, we will consider kernels on $\mathcal{P}_X$ of a particular form. Let $k_X'$ denote a kernel on $\mathcal{X}$ (which might be different from $k_X$) that is measurable and bounded. We define the following mapping $\Psi : \mathcal{P}_X \to \mathcal{H}_{k_X'}$:
$$P_X \mapsto \Psi(P_X) := \int_{\mathcal{X}} k_X'(x, \cdot)\, dP_X(x). \qquad (5)$$
This mapping has been studied in the framework of "characteristic kernels" [13], and it has been proved that there are important links between universality of $k_X'$ and injectivity of $\Psi$ [14, 15].
Note that the mapping $\Psi$ is linear. Therefore, if we consider the kernel $k_P(P_X, P_X') = \langle \Psi(P_X), \Psi(P_X') \rangle$, it is a linear kernel on $\mathcal{P}_X$ and cannot be a universal kernel. For this reason, we introduce yet another kernel $K$ on $\mathcal{H}_{k_X'}$ and consider the kernel on $\mathcal{P}_X$ given by
$$k_P(P_X, P_X') = K(\Psi(P_X), \Psi(P_X')). \qquad (6)$$
Note that particular kernels inspired by the finite dimensional case are of the form
$$K(v, v') = F(\|v - v'\|), \qquad (7)$$
or
$$K(v, v') = G(\langle v, v' \rangle), \qquad (8)$$
where $F, G$ are real functions of a real variable such that they define a kernel. For example, $F(t) = \exp(-t^2/(2\sigma^2))$ yields a Gaussian-like kernel, while $G(t) = (1 + t)^d$ yields a polynomial-like kernel. Kernels of the above form on the space of probability distributions over a compact space $\mathcal{X}$ have been introduced and studied in [16]. Below we apply their results to deduce that $k$ is a universal kernel for certain choices of $k_X$, $k_X'$, and $K$.
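Kernels of the form (6)-(8) are computable from samples because inner products of empirical mean embeddings reduce to averages of $k_X'$ over pairs of points: $\langle \Psi(\hat{P}), \Psi(\hat{Q}) \rangle = \frac{1}{nm}\sum_{i,j} k_X'(x_i, z_j)$, and the RKHS distance in (7) expands into three such inner products. A sketch (sample sizes and bandwidths hypothetical):

```python
import numpy as np

def kx_prime(x, z, sigma=1.0):
    """Base Gaussian kernel k'_X between sample arrays (n,d) and (m,d)."""
    d2 = ((x[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def emb_dot(x, z):
    """<Psi(P_hat), Psi(Q_hat)>: mean of k'_X over all cross pairs."""
    return kx_prime(x, z).mean()

def kP_gaussian(x, z, sigma_P=1.0):
    """Gaussian-like kernel (7) on distributions, expanding the squared
    RKHS distance ||Psi(P)-Psi(Q)||^2 via inner products."""
    sq = emb_dot(x, x) - 2 * emb_dot(x, z) + emb_dot(z, z)
    return np.exp(-sq / (2 * sigma_P**2))

rng = np.random.default_rng(2)
P = rng.normal(0.0, 1.0, size=(200, 2))
Q = rng.normal(3.0, 1.0, size=(200, 2))

print(kP_gaussian(P, P), kP_gaussian(P, Q))  # self-similarity is exactly 1
```

Because $k_X'$ is characteristic, distinct distributions get distinct embeddings, so the similarity between the two shifted samples falls strictly below the self-similarity of 1.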
5 Learning Theoretic Study
Although the regularized estimation formula (3) defining $\hat{f}_\lambda$ is standard, the generalization error analysis is not, since the $\tilde{X}_{ij}$ are neither identically distributed nor independent. We begin with a generalization error bound that establishes uniform estimation error control over functions belonging to a ball of $\mathcal{H}_k$. We then discuss universal kernels, and finally deduce universal consistency of the algorithm. To simplify somewhat the analysis, we assume below that all training samples have the same size $n_i = n$. Also let $B_k(r)$ denote the closed ball of radius $r$, centered at the origin, in the RKHS of the kernel $k$. We consider the following assumptions on the loss and kernels:
(Loss) The loss function $\ell : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}^+$ is $L_\ell$-Lipschitz in its first variable and bounded by $B_\ell$.
(Kernels-A) The kernels $k_X$, $k_X'$ and $K$ are bounded respectively by constants $B_k^2$, $B_{k'}^2 \ge 1$, and $B_K^2$. In addition, the canonical feature map $\Phi_K : \mathcal{H}_{k_X'} \to \mathcal{H}_K$ associated to $K$ satisfies a Hölder condition of order $\alpha \in (0, 1]$ with constant $L_K$, on $B_{k_X'}(B_{k'})$:
$$\forall v, w \in B_{k_X'}(B_{k'}) : \quad \|\Phi_K(v) - \Phi_K(w)\| \le L_K \|v - w\|^\alpha. \qquad (9)$$
Sufficient conditions for (9) are described in [11]. As an example, the condition is shown to hold with $\alpha = 1$ when $K$ is the Gaussian-like kernel on $\mathcal{H}_{k_X'}$. The boundedness assumptions are also clearly satisfied for Gaussian kernels.
Theorem 5.1 (Uniform estimation error control). Assume conditions (Loss) and (Kernels-A) hold. If $P_{XY}^{(1)}, \ldots, P_{XY}^{(N)}$ are i.i.d. realizations from $\mu$, and for each $i = 1, \ldots, N$, the sample $S_i = (X_{ij}, Y_{ij})_{1 \le j \le n}$ is made of i.i.d. realizations from $P_{XY}^{(i)}$, then for any $R > 0$, with probability at least $1 - \delta$:
$$\sup_{f \in B_k(R)} \left| \frac{1}{N} \sum_{i=1}^N \frac{1}{n} \sum_{j=1}^n \ell(f(\tilde{X}_{ij}), Y_{ij}) - \mathcal{E}(f, \infty) \right| \le c \left( R B_k L_\ell \left( B_{k'} L_K \left( \frac{\log N + \log \delta^{-1}}{n} \right)^{\alpha/2} + B_K \frac{1}{\sqrt{N}} \right) + B_\ell \sqrt{\frac{\log \delta^{-1}}{N}} \right), \qquad (10)$$
where $c$ is a numerical constant, and $B_k(R)$ denotes the ball of radius $R$ of $\mathcal{H}_k$.
Proof sketch. The full proofs of this and other results are given in [11]. We give here a brief overview. We use the decomposition
$$\sup_{f \in B_k(R)} \left| \frac{1}{N} \sum_{i=1}^N \frac{1}{n_i} \sum_{j=1}^{n_i} \ell(f(\tilde{X}_{ij}), Y_{ij}) - \mathcal{E}(f, \infty) \right|$$
$$\le \sup_{f \in B_k(R)} \left| \frac{1}{N} \sum_{i=1}^N \frac{1}{n_i} \sum_{j=1}^{n_i} \left( \ell(f(\hat{P}_X^{(i)}, X_{ij}), Y_{ij}) - \ell(f(P_X^{(i)}, X_{ij}), Y_{ij}) \right) \right|$$
$$+ \sup_{f \in B_k(R)} \left| \frac{1}{N} \sum_{i=1}^N \frac{1}{n_i} \sum_{j=1}^{n_i} \ell(f(P_X^{(i)}, X_{ij}), Y_{ij}) - \mathcal{E}(f, \infty) \right| =: (I) + (II).$$
Bounding (I), using the Lipschitz property of the loss function, can be reduced to controlling
$$\left\| f(\hat{P}_X^{(i)}, \cdot) - f(P_X^{(i)}, \cdot) \right\|_\infty,$$
conditional to $P_X^{(i)}$, uniformly for $i = 1, \ldots, N$. This can be obtained using the reproducing property of the kernel $k$, the convergence of $\Psi(\hat{P}_X^{(i)})$ to $\Psi(P_X^{(i)})$ as a consequence of Hoeffding's inequality in a Hilbert space, and the other assumptions (boundedness/Hölder property) on the kernels.
Concerning the control of the term (II), it can be decomposed in turn into the convergence conditional to $(P_X^{(i)})$, and the convergence of the conditional generalization error. In both cases, a standard approach using the Azuma-McDiarmid inequality [17] followed by symmetrization and Rademacher complexity analysis on a kernel space [18, 19] can be applied. For the first part, the random variables are the $(X_{ij}, Y_{ij})$ (which are independent conditional to $(P_X^{(i)})$); for the second part, the i.i.d. variables are the $(P_X^{(i)})$ (the $(X_{ij}, Y_{ij})$ being integrated out).
To establish that $k$ is universal on $\mathcal{P}_X \times \mathcal{X}$, the following lemma is useful.
Lemma 5.2. Let $\Omega$, $\Omega'$ be two compact spaces and $k$, $k'$ be kernels on $\Omega$, $\Omega'$, respectively. If $k$, $k'$ are both universal, then the product kernel
$$k((x, x'), (y, y')) := k(x, y)\, k'(x', y')$$
is universal on $\Omega \times \Omega'$.
Several examples of universal kernels are known on Euclidean space. We also need universal kernels on $\mathcal{P}_X$. Fortunately, this was recently investigated [16]. Some additional assumptions on the kernels and feature space are required:
(Kernels-B) $k_X$, $k_X'$, $K$, and $\mathcal{X}$ satisfy the following: $\mathcal{X}$ is a compact metric space; $k_X$ is universal on $\mathcal{X}$; $k_X'$ is continuous and universal on $\mathcal{X}$; $K$ is universal on any compact subset of $\mathcal{H}_{k_X'}$.
Adapting the results of [16], we have the following.
Theorem 5.3 (Universal kernel). Assume condition (Kernels-B) holds. Then, for $k_P$ defined as in (6), the product kernel $k$ in (4) is universal on $\mathcal{P}_X \times \mathcal{X}$. Furthermore, the assumption on $K$ is fulfilled if $K$ is of the form (8), where $G$ is an analytical function with positive Taylor series coefficients, or if $K$ is the normalized kernel associated to such a kernel.
As an example, suppose that $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$. Let $k_X$ and $k_X'$ be Gaussian kernels on $\mathcal{X}$. Taking $G(t) = \exp(t)$, it follows that $K(P_X, P_X') = \exp(\langle \Psi(P_X), \Psi(P_X') \rangle_{\mathcal{H}_{k_X'}})$ is universal on $\mathcal{P}_X$. By similar reasoning as in the finite dimensional case, the Gaussian-like kernel $K(P_X, P_X') = \exp(-\frac{1}{2\sigma^2} \|\Psi(P_X) - \Psi(P_X')\|^2_{\mathcal{H}_{k_X'}})$ is also universal on $\mathcal{P}_X$. Thus the product kernel is universal.
Corollary 5.4 (Universal consistency). Assume the conditions (Loss), (Kernels-A), and (Kernels-B) are satisfied. Assume that $N, n$ grow to infinity in such a way that $N = O(n^\gamma)$ for some $\gamma > 0$. Then, if $\lambda_j$ is a sequence such that $\lambda_j \to 0$ and $\lambda_j \sqrt{j/\log j} \to \infty$, it holds that
$$\mathcal{E}(\hat{f}_{\lambda_{\min(N, n^\gamma)}}, \infty) \to \inf_{f : \mathcal{P}_X \times \mathcal{X} \to \mathbb{R}} \mathcal{E}(f, \infty)$$
in probability.
6 Experiments
We demonstrate the proposed methodology for flow cytometry data auto-gating, described above.
Peripheral blood samples were obtained from 35 normal patients, and lymphocytes were classified
by a domain expert. The corresponding flow cytometry data sets have sample sizes ranging from
10,000 to 100,000, and the proportion of lymphocytes in each data set ranges from 10 to 40%. We
took N = 10 of these data sets for training, and the remaining 25 for testing. To speed training time,
we subsampled the 10 training data sets to have 1000 data points (cells) each. Adopting the hinge
loss, we used the SVMlight [20] package to solve the quadratic program characterizing the solution.
The kernels $k_X$, $k_X'$, and $K$ are all taken to be Gaussian kernels with respective bandwidths $\sigma_X$, $\sigma_X'$, and $\sigma$. We set $\sigma_X$ such that $\sigma_X^2$ equals 10 times the average distance of a data point to its nearest neighbor within the same data set. The second bandwidth was defined similarly, while the third was set to 1. The regularization parameter $\lambda$ was set to 1.

Table 1: The misclassification rates (%) on training data sets and test data sets for different $k_P$. The proposed method adapts the decision function to the test data (through the marginal-dependent kernel), accounting for its improved performance.

kP                  Train   Test
Pooling (τ = 1)     1.41    2.32
MTL (τ = 0.01)      1.59    2.64
MTL (τ = 0.5)       1.34    2.36
Proposed            1.32    2.29

For comparison, we also considered three other options for $k_P$. These kernels have the form $k_P(P_1, P_2) = 1$ if $P_1 = P_2$, and $k_P(P_1, P_2) = \tau$ otherwise. When $\tau = 1$, the method is equivalent to pooling all of the training data together in one data set, and learning a single SVM classifier. This idea has been previously studied in the context of flow cytometry by [21]. When $0 < \tau < 1$, we obtain a kernel like what was used for multi-task learning (MTL) by [3]. Note that these kernels have the property that if $P_1$ is a training data set, and $P_2$ a test data set, then $P_1 \ne P_2$ and so $k_P(P_1, P_2)$ is simply a constant. This implies that the learning rules produced by these kernels do not adapt to the test distribution, unlike the proposed kernel. In the experiments, we take $\tau = 1$ (pooling), 0.01, and 0.5 (MTL).
The results are shown in Fig. 2 and summarized in Table 1. The middle column of the table reports
the average misclassification rate on the training data sets. Here we used those data points that
were not part of the 1000-element subsample used for training. The right column shows the average
misclassification rate on the test data sets.
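The bandwidth rule described above ($\sigma_X^2$ set to 10 times the average nearest-neighbor distance within a data set) can be sketched as follows; the brute-force nearest-neighbor search and the sample data are simplifications for illustration.

```python
import numpy as np

def bandwidth_from_nn(X, factor=10.0):
    """Set sigma^2 = factor * (average distance of each point to its
    nearest neighbor within the same data set), per the rule in the text."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    avg_nn = d.min(axis=1).mean()        # average nearest-neighbor distance
    return np.sqrt(factor * avg_nn)      # return sigma itself

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
sigma = bandwidth_from_nn(X)
print(sigma)
```

Tying the bandwidth to within-set nearest-neighbor spacing makes the kernel scale adapt automatically to the sampling density of each patient's data set.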
7 Discussion
Our approach to learning marginal predictors relies on the extended input pattern $\tilde{X} = (P_X, X)$. Thus, we study the natural algorithm of minimizing a regularized empirical loss over a reproducing
Figure 2: The misclassification rates (%) on training data sets and test data sets for different kP . The
last 25 data sets separated by dotted line are not used during training.
kernel Hilbert space associated with the extended input domain $\mathcal{P}_X \times \mathcal{X}$. We also establish universal consistency, using a novel generalization error analysis under the inherent non-iid sampling plan, and a construction of a universal kernel on $\mathcal{P}_X \times \mathcal{X}$. For the hinge loss, the algorithm may be implemented using standard techniques for SVMs. The algorithm is applied to flow cytometry auto-gating, and shown to improve upon kernels that do not adapt to the test distribution.
Several future directions exist. From an application perspective, the need for adaptive classifiers
arises in many applications, especially in biomedical applications involving biological and/or technical variation in patient data. For example, when electrocardiograms are used to monitor cardiac
patients, it is desirable to classify each heartbeat as irregular or not. However, irregularities in a test
patient's heartbeat will differ from irregularities of historical patients, hence the need to adapt to the
test distribution [22].
We can also ask how the methodology and analysis can be extended to the context where a small
number of labels are available for the test distribution, as is commonly assumed in transfer learning.
In this setting, two approaches are possible. The simplest one is to use the same optimization problem (3), wherein we include additionally the labeled examples of the test distribution. However, if
several test samples are to be treated in succession, and we want to avoid a full, resource-consuming
re-training using all the training samples each time, an interesting alternative is the following: learn
once a function f0 (PX , x) using the available training samples via (3); then, given a partially labeled
test sample, learn a decision function on this sample only via the usual kernel norm regularized empirical loss minimization method, but replacing the usual regularizer term $\|f\|^2$ by $\|f - f_0(P_X, \cdot)\|^2$ (note that $f_0(P_X, \cdot) \in \mathcal{H}_{k_X}$). In this sense, the marginal-adaptive decision function learned from the training samples would serve as a "prior" for learning on the test data.
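The $\|f - f_0\|^2$ regularizer suggested above can be implemented by the change of variable $g = f - f_0$: fit $g$ with the ordinary regularizer on residual targets $y - f_0(x)$, then predict with $f_0 + g$. The sketch below does this for kernel ridge regression with squared loss (the paper uses the hinge loss; the squared-loss variant and the toy prior are assumptions for brevity).

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_with_prior(X, y, f0_vals, lam=0.1):
    """Minimize sum_i (f(x_i)-y_i)^2 + lam*||f - f0||^2 by writing
    g = f - f0 and solving ordinary kernel ridge on residuals y - f0."""
    K = gaussian_gram(X, X)
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y - f0_vals)
    def g(Xnew):
        return gaussian_gram(Xnew, X) @ alpha
    return g  # residual part; full prediction is f0(x) + g(x)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])
f0_vals = 0.5 * np.sign(X[:, 0])   # hypothetical prior from training tasks
g = fit_with_prior(X, y, f0_vals)
pred = f0_vals + g(X)
print((np.sign(pred) == y).mean())
```

With few labeled test points, the solution stays close to the marginal-adaptive prior $f_0$; as labels accumulate, the residual term dominates and the fit moves toward the test sample.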
It would also be of interest to extend the proposed methodology to a multi-class setting. In this case,
the problem has an interesting interpretation in terms of "learning to cluster." Each training task may
be viewed as a data set that has been clustered by a teacher. Generalization then entails the ability
to learn the clustering process, so that clusters may be assigned to a new unlabeled data set.
Future work may consider other asymptotic regimes, e.g., where {ni }, nT do not tend to infinity,
or they tend to infinity much slower than N . It may also be of interest to develop implementations
for differentiable losses such as the logistic loss, allowing for estimation of posterior probabilities.
Finally, we would like to specify conditions on ?, the distribution-generating distribution, that are
favorable for generalization (beyond the simple condition discussed in Lemma 3.1).
Acknowledgments
G. Blanchard was supported by the European Community's 7th Framework Programme under
the PASCAL2 Network of Excellence (ICT-216886) and under the E.U. grant agreement 247022
(MASH Project). G. Lee and C. Scott were supported in part by NSF Grant No. 0953135.
References
[1] S. Thrun, "Is learning the n-th thing any easier than learning the first?," Advances in Neural Information Processing Systems, pp. 640-646, 1996.
[2] R. Caruana, "Multitask learning," Machine Learning, vol. 28, pp. 41-75, 1997.
[3] T. Evgeniou and M. Pontil, "Learning multiple tasks with kernel methods," J. Machine Learning Research, pp. 615-637, 2005.
[4] S. Bickel, M. Brückner, and T. Scheffer, "Discriminative learning under covariate shift," J. Machine Learning Research, pp. 2137-2155, 2009.
[5] J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning, The MIT Press, 2009.
[6] R. K. Ando and T. Zhang, "A high-performance semi-supervised learning method for text chunking," Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL 05), pp. 1-9, 2005.
[7] A. Rettinger, M. Zinkevich, and M. Bowling, "Boosting expert ensembles for rapid concept recall," Proceedings of the 21st National Conference on Artificial Intelligence (AAAI 06), vol. 1, pp. 464-469, 2006.
[8] A. Arnold, R. Nallapati, and W. W. Cohen, "A comparative study of methods for transductive transfer learning," Seventh IEEE International Conference on Data Mining Workshops, pp. 77-82, 2007.
[9] O. Kallenberg, Foundations of Modern Probability, Springer, 2002.
[10] P. Bartlett, M. Jordan, and J. McAuliffe, "Convexity, classification, and risk bounds," J. Amer. Stat. Assoc., vol. 101, no. 473, pp. 138-156, 2006.
[11] G. Blanchard, G. Lee, and C. Scott, "Supplemental material," NIPS 2011.
[12] I. Steinwart and A. Christmann, Support Vector Machines, Springer, 2008.
[13] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, "A kernel approach to comparing distributions," in Proceedings of the 22nd AAAI Conference on Artificial Intelligence, R. Holte and A. Howe, Eds., 2007, pp. 1637-1641.
[14] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, "A kernel method for the two-sample-problem," in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. Platt, and T. Hoffman, Eds., 2007, pp. 513-520.
[15] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet, "Hilbert space embeddings and metrics on probability measures," Journal of Machine Learning Research, vol. 11, pp. 1517-1561, 2010.
[16] A. Christmann and I. Steinwart, "Universal kernels on non-standard input spaces," in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., 2010, pp. 406-414.
[17] C. McDiarmid, "On the method of bounded differences," Surveys in Combinatorics, vol. 141, pp. 148-188, 1989.
[18] V. Koltchinskii, "Rademacher penalties and structural risk minimization," IEEE Transactions on Information Theory, vol. 47, no. 5, pp. 1902-1914, 2001.
[19] P. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," Journal of Machine Learning Research, vol. 3, pp. 463-482, 2002.
[20] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola, Eds., chapter 11, pp. 169-
184. MIT Press, Cambridge, MA, 1999.
[21] J. Toedling, P. Rhein, R. Ratei, L. Karawajew, and R. Spang, ?Automated in-silico detection of
cell populations in flow cytometry readouts and its application to leukemia disease monitoring,?
BMC Bioinformatics, vol. 7, pp. 282, 2006.
[22] J. Wiens, Machine Learning for Patient-Adaptive Ectopic Beat Classication, Masters Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of
Technology, 2010.
9
Adler Perotte
Nicholas Bartlett
No?emie Elhadad
Frank Wood
Columbia University, New York, NY 10027, USA
{ajp9009@dbmi,bartlett@stat,noemie@dbmi,fwood@stat}.columbia.edu
Abstract
We introduce hierarchically supervised latent Dirichlet allocation (HSLDA), a
model for hierarchically and multiply labeled bag-of-word data. Examples of such
data include web pages and their placement in directories, product descriptions
and associated categories from product hierarchies, and free-text clinical records
and their assigned diagnosis codes. Out-of-sample label prediction is the primary
goal of this work, but improved lower-dimensional representations of the bag-of-word data are also of interest. We demonstrate HSLDA on large-scale data
from clinical document labeling and retail product categorization tasks. We show
that leveraging the structure from hierarchical labels improves out-of-sample label
prediction substantially when compared to models that do not.
1 Introduction
There exist many sources of unstructured data that have been partially or completely categorized
by human editors. In this paper we focus on unstructured text data that has been, at least in part,
manually categorized. Examples include but are not limited to webpages and curated hierarchical
directories of the same [1], product descriptions and catalogs, and patient records and diagnosis
codes assigned to them for bookkeeping and insurance purposes. In this work we show how to
combine these two sources of information using a single model that allows one to categorize new
text documents automatically, suggest labels that might be inaccurate, compute improved similarities between documents for information retrieval purposes, and more. The models and techniques
that we develop in this paper are applicable to other data as well, namely, any unstructured representations of data that have been hierarchically classified (e.g., image catalogs with bag-of-feature
representations).
There are several challenges entailed in incorporating a hierarchy of labels into the model. Among
them, given a large set of potential labels (often thousands), each instance has only a small number
of labels associated to it. Furthermore, there are no naturally occurring negative labeling in the data,
and the absence of a label cannot always be interpreted as a negative labeling.
Our work operates within the framework of topic modeling. Our approach learns topic models of the
underlying data and labeling strategies in a joint model, while leveraging the hierarchical structure
of the labels. For the sake of simplicity, we focus on "is-a" hierarchies, but the model can be applied
to other structured label spaces. We extend supervised latent Dirichlet allocation (sLDA) [6] to
take advantage of hierarchical supervision. We propose an efficient way to incorporate hierarchical
information into the model. We hypothesize that the context of labels within the hierarchy provides
valuable information about labeling.
We demonstrate our model on large, real-world datasets in the clinical and web retail domains. We
observe that hierarchical information is valuable when incorporated into the learning and improves
our primary goal of multi-label classification. Our results show that a joint, hierarchical model
outperforms a classification with unstructured labels as well as a disjoint model, where the topic
model and the hierarchical classification are inferred independently of each other.
Figure 1: HSLDA graphical model
The remainder of this paper is as follows. Section 2 introduces hierarchically supervised LDA
(HSLDA), while Section 3 details a sampling approach to inference in HSLDA. Section 4 reviews
related work, and Section 5 shows results from applying HSLDA to health care and web retail data.
2 Model
HSLDA is a model for hierarchically, multiply-labeled, bag-of-word data. We will refer to individual
groups of bag-of-word data as documents. Let w_{n,d} ∈ Σ be the nth observation in the dth document. Let w_d = {w_{1,d}, . . . , w_{N_d,d}} be the set of N_d observations in document d. Let there be D such documents and let the size of the vocabulary be V = |Σ|. Let the set of labels be L = {l_1, l_2, . . . , l_{|L|}}.
Each label l ∈ L, except the root, has a parent pa(l) ∈ L also in the set of labels. We will for exposition purposes assume that this label set has hard "is-a" parent-child constraints (explained later),
although this assumption can be relaxed at the cost of more computationally complex inference.
Such a label hierarchy forms a multiply rooted tree. Without loss of generality we will consider a
tree with a single root r ∈ L. Each document has a variable y_{l,d} ∈ {−1, 1} for every label which indicates whether the label is applied to document d or not. In most cases y_{l,d} will be unobserved,
in some cases we will be able to fix its value because of constraints on the label hierarchy, and in the
relatively minor remainder its value will be observed. In the applications we consider, only positive
labels are observed.
The constraints imposed by an is-a label hierarchy are that if the lth label is applied to document
d, i.e., y_{l,d} = 1, then all labels in the label hierarchy up to the root are also applied to document d, i.e., y_{pa(l),d} = 1, y_{pa(pa(l)),d} = 1, . . . , y_{r,d} = 1. Conversely, if a label l′ is marked as not applying
to a document then no descendant of that label may be applied to the same. We assume that at least
one label is applied to every document. This is illustrated in Figure 1 where the root label is always
applied but only some of the descendant labelings are observed as having been applied (diagonal
hashing indicates that potentially some of the plated variables are observed).
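These closure rules are purely mechanical: a positive label forces all of its ancestors positive, and a negative label forces all of its descendants negative. The sketch below is an illustration only; the `parent` map and label names are hypothetical, not from the paper's implementation.

```python
def propagate_labels(observed, parent):
    """Close a set of is-a label observations.

    observed: dict mapping label -> +1/-1 for labels that were seen.
    parent:   dict mapping each non-root label to its parent label.
    Returns a dict with ancestors of positives forced to +1 and
    descendants of negatives forced to -1.
    """
    children = {}
    for child, par in parent.items():
        children.setdefault(par, []).append(child)

    closed = dict(observed)
    # Upward closure: a positive label implies all of its ancestors.
    for label, value in observed.items():
        if value == 1:
            node = label
            while node in parent:
                node = parent[node]
                closed[node] = 1
    # Downward closure: a negative label implies all of its descendants.
    stack = [l for l, v in observed.items() if v == -1]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            closed[child] = -1
            stack.append(child)
    return closed
```

For example, observing "Viral pneumonia" positive would force "Pneumonia" and the root positive, while marking it negative would force "Pneumonia due to adenovirus" negative.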
In HSLDA, documents are modeled using the LDA mixed-membership mixture model with global
topic estimation. Label responses are generated using a conditional hierarchy of probit regressors.
The HSLDA graphical model is given in Figure 1. In the model, K is the number of LDA "topics" (distributions over the elements of Σ), φ_k is a distribution over "words," θ_d is a document-specific distribution over topics, β is a global distribution over topics, Dir_K(·) is a K-dimensional Dirichlet distribution, N_K(·) is the K-dimensional Normal distribution, I_K is the K-dimensional identity
matrix, 1_d is the d-dimensional vector of all ones, and I(·) is an indicator function that takes the
value 1 if its argument is true and 0 otherwise. The following procedure describes how to generate
from the HSLDA generative model.
1. For each topic k = 1, . . . , K
   - Draw a distribution over words φ_k ~ Dir_V(γ 1_V)
2. For each label l ∈ L
   - Draw a label application coefficient η_l | µ, σ ~ N_K(µ 1_K, σ I_K)
3. Draw the global topic proportions β | α_0 ~ Dir_K(α_0 1_K)
4. For each document d = 1, . . . , D
   - Draw topic proportions θ_d | β, α ~ Dir_K(α β)
   - For n = 1, . . . , N_d
     - Draw topic assignment z_{n,d} | θ_d ~ Multinomial(θ_d)
     - Draw word w_{n,d} | z_{n,d}, φ_{1:K} ~ Multinomial(φ_{z_{n,d}})
   - Set y_{r,d} = 1
   - For each label l in a breadth-first traversal of L starting at the children of root r
     - Draw
         a_{l,d} | z̄_d, η_l, y_{pa(l),d} ~ N(z̄_d^T η_l, 1)                  if y_{pa(l),d} = 1
         a_{l,d} | z̄_d, η_l, y_{pa(l),d} ~ N(z̄_d^T η_l, 1) I(a_{l,d} < 0)   if y_{pa(l),d} = −1
     - Apply label l to document d according to a_{l,d}:
         y_{l,d} = 1 if a_{l,d} > 0, and y_{l,d} = −1 otherwise

Here z̄_d^T = [z̄_1, . . . , z̄_k, . . . , z̄_K] is the empirical topic distribution for document d, in which each entry is the percentage of the words in that document that come from topic k, z̄_k = N_d^{−1} Σ_{n=1}^{N_d} I(z_{n,d} = k).
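For concreteness, the generative process can be sketched end to end with NumPy. Every size and the two-label chain below are toy assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, D, N = 5, 20, 3, 30              # topics, vocab size, docs, words/doc (toy)
gamma_, alpha0, alpha_, mu, sigma = 1.0, 1.0, 1.0, -1.0, 1.0
parent = {"l1": "root", "l2": "l1"}    # toy label chain: root -> l1 -> l2

phi = rng.dirichlet(gamma_ * np.ones(V), size=K)        # step 1: topics
eta = {l: rng.normal(mu, np.sqrt(sigma), size=K)        # step 2: coefficients
       for l in ["root", "l1", "l2"]}
beta = rng.dirichlet(alpha0 * np.ones(K))               # step 3: global topics

docs = []
for d in range(D):                                      # step 4: documents
    theta = rng.dirichlet(alpha_ * beta)
    z = rng.choice(K, size=N, p=theta)
    w = np.array([rng.choice(V, p=phi[k]) for k in z])
    zbar = np.bincount(z, minlength=K) / N              # empirical topic dist.
    y = {"root": 1}
    for l in ["l1", "l2"]:                              # breadth-first order
        if y[parent[l]] == -1:
            y[l] = -1   # truncated auxiliary variable (a < 0) forces label off
        else:
            a = rng.normal(zbar @ eta[l], 1.0)
            y[l] = 1 if a > 0 else -1
    docs.append((w, y))
```

Note that the is-a constraint holds by construction: a child label can be on only if its parent is on.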
The second half of step 4 is a substantial part of our contribution to the general class of supervised
LDA models. Here, each document is labeled generatively using a hierarchy of conditionally dependent probit regressors [14]. For every label l ∈ L, both the empirical topic distribution for document d and whether or not its parent label was applied (i.e., I(y_{pa(l),d} = 1)) are used to determine whether or not label l is to be applied to document d as well. Note that label y_{l,d} can only be applied to
document d if its parent label pa(l) is also applied (these expressions are specific to is-a constraints
but can be modified to accommodate different constraints). The regression coefficients η_l are independent a priori; however, the hierarchical coupling in this model induces a posteriori dependence.
The net effect of this is that label predictors deeper in the label hierarchy are able to focus on finding
specific, conditional labeling features. We believe this to be a significant source of the empirical
label prediction improvement we observe experimentally. We test this hypothesis in Section 5.
Note that the choice of variables al,d and how they are distributed were driven at least in part by
posterior inference efficiency considerations. In particular, choosing probit-style auxiliary variable
distributions for the a_{l,d}'s yields conditional posterior distributions for both the auxiliary variables
(3) and the regression coefficients (2) which are analytic. This simplifies posterior inference substantially.
In the common case where no negative labels are observed (like the example applications we consider in Section 5), the model must be explicitly biased towards generating data that has negative
labels in order to keep it from learning to assign all labels to all documents. This is a common
problem in modeling unbalanced data. To see how this model can be biased in this way we draw the reader's attention to the µ parameter and, to a lesser extent, the σ parameter above. Because z̄_d is always positive, setting µ to a negative value results in a bias towards negative labelings, i.e., for large negative values of µ, all labels become a priori more likely to be negative (y_{l,d} = −1). We explore the ability of µ to bias out-of-sample label prediction performance in Section 5.
3 Inference
In this section we provide the conditional distributions required to draw samples from the HSLDA
posterior distribution using Gibbs sampling and Markov chain Monte Carlo. Note that, like in
collapsed Gibbs samplers for LDA [16], we have analytically marginalized out the parameters φ_{1:K} and θ_{1:D} in the following expressions. Let a be the set of all auxiliary variables, w the set of all words, η the set of all regression coefficients, and z\z_{n,d} the set z with element z_{n,d} removed. The conditional posterior distribution of the latent topic indicators is
    p(z_{n,d} = k | z\z_{n,d}, a, w, η, α, β, γ) ∝ (c^{k,−(n,d)}_{w_{n,d},(·)} + γ) / (c^{k,−(n,d)}_{(·),(·)} + V γ) · (c^{k,−(n,d)}_{(·),d} + α β_k) · ∏_{l ∈ L_d} exp(−(z̄_d^T η_l − a_{l,d})^2 / 2)        (1)

where c^{k,−(n,d)}_{v,d} is the number of words of type v in document d assigned to topic k, omitting the nth word of document d. The subscript (·) indicates a sum over the range of the replaced variable, i.e., c^{k,−(n,d)}_{w_{n,d},(·)} = Σ_d c^{k,−(n,d)}_{w_{n,d},d}. Here L_d is the set of labels which are observed for document d.
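Equation (1) maps directly onto a per-token update. The following is a minimal sketch assuming dense count arrays (`cwt` word-topic counts, `ct` topic totals, `cdt` document-topic counts) and equal-length documents; variable names are ours, not the authors'.

```python
import numpy as np

def sample_topic(n, d, w, z, cwt, ct, cdt, label_terms, alpha_, beta, gamma_, rng):
    """One collapsed Gibbs draw for z[n, d] following Eq. (1).

    cwt: (K, V) word-topic counts; ct: (K,) topic totals;
    cdt: (D, K) document-topic counts.
    label_terms: list of (eta_l, a_ld) pairs for labels observed on doc d.
    """
    k_old, v = z[n, d], w[n, d]
    # Remove the current assignment from all counts.
    cwt[k_old, v] -= 1; ct[k_old] -= 1; cdt[d, k_old] -= 1

    K, V = cwt.shape
    p = (cwt[:, v] + gamma_) / (ct + V * gamma_) * (cdt[d] + alpha_ * beta)
    Nd = cdt[d].sum() + 1        # document length, counting the held-out word
    for k in range(K):
        zbar = cdt[d].astype(float)
        zbar[k] += 1.0           # empirical topic distribution if z[n, d] = k
        zbar /= Nd
        for eta_l, a_ld in label_terms:
            p[k] *= np.exp(-0.5 * (zbar @ eta_l - a_ld) ** 2)

    p /= p.sum()
    k_new = int(rng.choice(K, p=p))
    # Add the new assignment back into the counts.
    cwt[k_new, v] += 1; ct[k_new] += 1; cdt[d, k_new] += 1
    z[n, d] = k_new
    return k_new
```

The label product runs only over the labels observed for document d, exactly as in the L_d product of Eq. (1).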
The conditional posterior distribution of the regression coefficients is given by

    p(η_l | z, a, σ) = N(µ̂_l, Σ̂)        (2)

where

    Σ̂^{−1} = I σ^{−1} + Z̄^T Z̄,        µ̂_l = Σ̂ (µ 1_K σ^{−1} + Z̄^T a_l).

Here Z̄ is a D × K matrix such that row d of Z̄ is z̄_d, and a_l = [a_{l,1}, a_{l,2}, . . . , a_{l,D}]^T. The simplicity of this conditional distribution follows from the choice of probit regression [4]; the specific
form of the update is a standard result from Bayesian normal linear regression [14]. It also is a standard probit regression result that the conditional posterior distribution of a_{l,d} is a truncated normal
distribution [4].
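Since Eq. (2) is a standard Bayesian linear regression posterior, each η_l update is a single multivariate Gaussian draw. A sketch under that reading (our notation; the prior is N_K(µ 1_K, σ I_K)):

```python
import numpy as np

def sample_eta(Zbar, a_l, mu, sigma, rng):
    """Draw eta_l ~ N(mu_hat, Sigma_hat) per Eq. (2).

    Zbar: (D, K) matrix whose row d is the empirical topic distribution zbar_d;
    a_l:  (D,) auxiliary variables for label l.
    """
    D, K = Zbar.shape
    prec = np.eye(K) / sigma + Zbar.T @ Zbar        # Sigma_hat^{-1}
    Sigma_hat = np.linalg.inv(prec)
    mu_hat = Sigma_hat @ (mu * np.ones(K) / sigma + Zbar.T @ a_l)
    return rng.multivariate_normal(mu_hat, Sigma_hat)
```

With no data (Z̄ = 0) the posterior collapses back onto the prior, as expected.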
    p(a_{l,d} | z, Y, η) ∝ exp(−(1/2)(a_{l,d} − η_l^T z̄_d)^2) I(a_{l,d} y_{l,d} > 0) I(a_{l,d} < 0),    if y_{pa(l),d} = −1
    p(a_{l,d} | z, Y, η) ∝ exp(−(1/2)(a_{l,d} − η_l^T z̄_d)^2) I(a_{l,d} y_{l,d} > 0),                   if y_{pa(l),d} = 1        (3)
Note that care must be taken to initialize the Gibbs sampler in a valid state.
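The truncated-normal draw in Eq. (3) can be done without rejection via the inverse-CDF method; a stdlib-only sketch (our construction, not the authors' code). When the parent label is off, the is-a constraints already force y_{l,d} = −1, so the sign of y_{l,d} determines the support in both cases.

```python
import random
from statistics import NormalDist

def sample_auxiliary(mean, y_ld, u=None):
    """Draw a_{l,d} ~ N(mean, 1) truncated so that sign(a_{l,d}) = y_ld,
    using the inverse-CDF method for Eq. (3)."""
    if u is None:
        u = random.random()
    u = min(max(u, 1e-12), 1.0 - 1e-12)    # keep inv_cdf away from 0 and 1
    dist = NormalDist(mean, 1.0)
    p0 = dist.cdf(0.0)                     # mass on the negative half-line
    if y_ld == 1:
        return dist.inv_cdf(p0 + u * (1.0 - p0))   # support (0, inf)
    return dist.inv_cdf(u * p0)                    # support (-inf, 0)
```

Mapping a uniform draw through the CDF restricted to the correct half-line avoids the unbounded rejection rates that plain resampling would suffer when the mean sits far from the truncation boundary.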
HSLDA employs a hierarchical Dirichlet prior over topic assignments (i.e., β is estimated from data rather than fixed a priori). This has been shown to improve the quality and stability of inferred topics [26]. Sampling β is done using the "direct assignment" method of Teh et al. [25]

    β | z, α_0, α ~ Dir(m_{(·),1} + α_0, m_{(·),2} + α_0, . . . , m_{(·),K} + α_0).        (4)

Here m_{d,k} are auxiliary variables that are required to sample the posterior distribution of β. Their conditional posterior distribution is sampled according to

    p(m_{d,k} = m | z, m_{−(d,k)}, β) = (Γ(α β_k) / Γ(α β_k + c^k_{(·),d})) s(c^k_{(·),d}, m) (α β_k)^m        (5)

where s(n, m) represents Stirling numbers of the first kind.

The hyperparameters γ, α_0, and α are sampled using Metropolis-Hastings.
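Given the auxiliary counts m, the draw in Eq. (4) is a single Dirichlet sample, and m_{d,k} itself can be drawn by the standard Chinese-restaurant (Antoniak) construction, which is equivalent in distribution to Eq. (5). A sketch with our helper names:

```python
import numpy as np

def sample_beta(m, alpha0, rng):
    """Eq. (4): beta ~ Dir(m_{(.),1} + alpha_0, ..., m_{(.),K} + alpha_0).
    m is a (D, K) array of auxiliary counts."""
    return rng.dirichlet(m.sum(axis=0) + alpha0)

def sample_m(c_dk, ab_k, rng):
    """Draw m_{d,k}: the number of 'tables' opened by c_dk customers under
    concentration ab_k = alpha * beta_k (equivalent to Eq. (5))."""
    return int(sum(rng.random() < ab_k / (ab_k + i) for i in range(int(c_dk))))
```

The table construction sidesteps computing Stirling numbers explicitly, which overflow quickly for long documents.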
4 Related Work
In this work we extend supervised latent Dirichlet allocation (sLDA) [6] to take advantage of hierarchical supervision. sLDA is latent Dirichlet allocation (LDA) [7] augmented with per-document "supervision," often taking the form of a single numerical or categorical label. It has been demonstrated
that the signal provided by such supervision can result in better, task-specific document models and
can also lead to good label prediction for out-of-sample data [6]. It also has been demonstrated
that sLDA has been shown to outperform both LASSO (L1 regularized least squares regression) and
LDA followed by least squares regression [6]. sLDA can be applied to data of the type we consider
in this paper; however, doing so requires ignoring the hierarchical dependencies amongst the labels.
In Section 5 we constrast HSLDA with sLDA applied in this way.
Other models that incorporate LDA and supervision include LabeledLDA [23] and DiscLDA [18].
Various applications of these models to computer vision and document networks have been explored [27, 9]. None of these models, however, leverage dependency structure in the label space.
In other work, researchers have classified documents into a hierarchy (a closely related task) with
naive Bayes classifiers and support vector machines. Most of this work has been demonstrated on
relatively small datasets, small label spaces, and has focused on single label classification without a
model of documents such as LDA [21, 11, 17, 8].
5 Experiments
We applied HSLDA to data from two domains: predicting medical diagnosis codes from hospital
discharge summaries and predicting product categories from Amazon.com product descriptions.
5.1 Data and Pre-Processing
5.1.1 Discharge Summaries and ICD-9 Codes
Discharge summaries are authored by clinicians to summarize patient hospitalization courses. The
summaries typically contain a record of patient complaints, findings and diagnoses, along with treatment and hospital course. For each hospitalization, trained medical coders review the information
in the discharge summary and assign a series of diagnoses codes. Coding follows the ICD-9-CM
controlled terminology, an international diagnostic classification for epidemiological, health management, and clinical purposes.1 The ICD-9 codes are organized in a rooted-tree structure, with
each edge representing an is-a relationship between parent and child, such that the parent diagnosis
subsumes the child diagnosis. For example, the code for ?Pneumonia due to adenovirus? is a child
of the code for ?Viral pneumonia,? where the former is a type of the latter. It is worth noting that the
coding can be noisy. Human coders sometimes disagree [3], tend to be more specific than sensitive
in their assignments [5], and sometimes make mistakes [13].
The task of automatic ICD-9 coding has been investigated in the clinical domain. Methods range
from manual rules to online learning [10, 15, 12]. Other work had leveraged larger datasets and
experimented with K-nearest neighbor, Naive Bayes, support vector machines, Bayesian Ridge Regression, as well as simple keyword mappings, all with promising results [19, 24, 22, 20].
Our dataset was gathered from the NewYork-Presbyterian Hospital clinical data warehouse. It consists of 6,000 discharge summaries and their associated ICD-9 codes (7,298 distinct codes overall),
representing all the discharges from the hospital in 2009. All included discharge summaries had
associated ICD-9 Codes. Summaries have 8.39 associated ICD-9 codes on average (std dev=5.01)
and contain an average of 536.57 terms after preprocessing (std dev=300.29). We split our dataset
into 5,000 discharge summaries for training and 1,000 for testing.
The text of the discharge summaries was tokenized with NLTK.2 A fixed vocabulary was formed
by taking the top 10,000 tokens with highest document frequency (exclusive of names, places and
other identifying numbers). The study was approved by the Institutional Review Board and follows
HIPAA (Health Insurance Portability and Accountability Act) privacy guidelines.
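The vocabulary construction described above (tokenize, then keep the top tokens by document frequency) can be sketched as follows; the regex tokenizer and example documents are stand-ins for the NLTK pipeline actually used.

```python
import re
from collections import Counter

def build_vocabulary(documents, max_size=10000):
    """Rank tokens by document frequency and keep the top max_size."""
    df = Counter()
    for text in documents:
        # Stand-in tokenizer; the paper used NLTK. A set gives document
        # frequency rather than raw term frequency.
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        df.update(tokens)
    return [tok for tok, _ in df.most_common(max_size)]

docs = ["Patient admitted with viral pneumonia.",
        "Patient discharged home.",
        "Follow-up for pneumonia."]
vocab = build_vocabulary(docs, max_size=4)
```

In practice one would also exclude names, places, and other identifying numbers at this step, as the authors did.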
5.1.2 Product Descriptions and Categorizations
Amazon.com, an online retail store, organizes its catalog of products in a multiply-rooted hierarchy and provides textual product descriptions for most products. Products can be discovered by
users through free-text search and product category exploration. Top-level product categories are
displayed on the front page of the website and lower level categories can be discovered by choosing
one of the top-level categories. Products can exist in multiple locations in the hierarchy.
In this experiment, we obtained Amazon.com product categorization data from the Stanford Network Analysis Platform (SNAP) dataset [2]. Product descriptions were obtained separately from
the Amazon.com website directly. We limited our dataset to the collection of DVDs in the product
catalog.
Our dataset contains 15,130 product descriptions for training and 1,000 for testing. The product
descriptions are shorter than the discharge summaries (91.89 terms on average, std dev=53.08).
1 http://www.cdc.gov/nchs/icd/icd9cm.htm
2 http://www.nltk.org
Overall, there are 2,691 unique categories. Products are assigned on average 9.01 categories (std
dev=4.91). The vocabulary consists of the most frequent 30,000 words omitting stopwords.
5.2 Comparison Models
We evaluated HSLDA along with two closely related models against the two datasets. The comparison models included sLDA with independent regressors (hierarchical constraints on labels ignored)
and HSLDA fit by first performing LDA then fitting tree-conditional regressions. These models were
chosen to highlight several aspects of HSLDA including performance in the absence of hierarchical
constraints, the effect of the combined inference, and regression performance attributable solely to
the hierarchical constraints.
sLDA with independent regressors is the most salient comparison model for our work. The distinguishing factor between HSLDA and sLDA is the additional structure imposed on the label space, a
distinction that we hypothesized would result in a difference in predictive performance.
There are two components to HSLDA, LDA and a hierarchically constrained response. The second
comparison model is HSLDA fit by performing LDA first followed by performing inference over the
hierarchically constrained label space. In this comparison model, the separate inference processes
do not allow the responses to influence the low dimensional structure inferred by LDA. Combined
inference has been shown to improve performance in sLDA [6]. This comparison model examines
not the structuring of the label space, but the benefit of combined inference over both the documents
and the label space.
For all three models, particular attention was given to the settings of the prior parameters for the
regression coefficients. These parameters implement an important form of regularization in HSLDA.
In the setting where there are no negative labels, a Gaussian prior over the regression parameters
with a negative mean implements a prior belief that missing labels are likely to be negative. Thus,
we evaluated model performance for all three models with a range of values for µ, the mean prior parameter for the regression coefficients (µ ∈ {−3, −2.8, −2.6, . . . , 1}).
The number of topics for all models was set to 50, and the prior distributions of p(γ), p(α_0), and p(α) were gamma distributed with a shape parameter of 1 and a scale parameter of 1000.

[Two ROC panels appear here: (a) Clinical data performance; (b) Retail product performance. Axes: Sensitivity vs. 1-Specificity.]
Figure 2: ROC curves for out-of-sample label prediction varying µ, the prior mean of the regression parameters. In both figures, solid is HSLDA, dashed is sLDA with independent regressors (hierarchical constraints on labels ignored), and dotted is HSLDA fit by running LDA first then running tree-conditional regressions.
5.3 Evaluation and Results
We evaluated our model, HSLDA, against the comparison models with a focus on predictive performance on held-out data. Prediction performance was measured with standard metrics: sensitivity
(true positive rate) and 1-specificity (false positive rate).
[A single ROC curve appears here, plotting Sensitivity against 1-Specificity for HSLDA, sLDA, and LDA + conditional regression.]
Figure 3: ROC curve for out-of-sample ICD-9 code prediction varying auxiliary variable threshold.
µ = −1.0 for all three models in this figure.
The gold standard for comparison was derived from the testing set in each dataset. To make the
comparison as fair as possible among models, ancestors of observed nodes in the label hierarchy
were ignored, observed nodes were considered positive and descendants of observed nodes were
considered to be negative. Note that this is different from our treatment of the observations during inference. Since the sLDA model does not enforce the hierarchical constraints, we establish a
more equal footing by considering only the observed labels as being positive, despite the fact that,
following the hierarchical constraints, ancestors must also be positive. Such a gold standard will
likely inflate the number of false positives because the labels applied to any particular document are
usually not as complete as they could be. ICD-9 codes, for instance, lack sensitivity and their use
as a gold standard could lead to correctly positive predictions being labeled as false positives [5].
However, given that the label space is often large (as in our examples) it is a moderate assumption
that erroneous false positives should not skew results significantly.
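The gold-standard construction described above can be sketched as follows. The hierarchy map and labels here are hypothetical, and the sketch covers only the three cases the text discusses: observed labels are positive, their descendants are negative, and ancestors of observed labels are simply left out (ignored during evaluation).

```python
def build_gold_standard(observed, children):
    """Sketch of the gold standard: observed labels map to +1, their
    (non-observed) descendants map to -1, and ancestors of observed
    labels are absent from the result, i.e. ignored."""
    gold = {label: 1 for label in observed}
    stack = list(observed)
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            if child not in observed:
                gold[child] = -1
            stack.append(child)
    return gold
```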
Predictive performance in HSLDA is evaluated by $p(y_{l,\hat{d}} \mid \mathbf{w}_{1:N_{\hat{d}},\hat{d}}, \mathbf{w}_{1:N_d,1:D}, y_{l \in \mathcal{L},1:D})$ for each test document $\hat{d}$. For efficiency, the expectation of this probability distribution was estimated in the following way. Expectations of $\bar{z}_{\hat{d}}$ and $\eta_l$ were estimated with samples from the posterior. Using these expectations, we performed Gibbs sampling over the hierarchy to acquire predictive samples for the documents in the test set. The true positive rate was calculated as the average expected labeling for gold standard positive labels. The false positive rate was calculated as the average expected labeling for gold standard negative labels.
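A minimal sketch of these two metrics as described: the true and false positive rates are the average expected labelings over the gold-standard positive and negative labels. Function and variable names are ours, not the authors'.

```python
import numpy as np

def tpr_fpr(expected, gold):
    """expected: {label: estimated P(y_l = 1)} from predictive samples;
    gold: {label: +1 or -1} from the gold standard.
    Returns (true positive rate, false positive rate)."""
    pos = [p for label, p in expected.items() if gold[label] == 1]
    neg = [p for label, p in expected.items() if gold[label] == -1]
    return float(np.mean(pos)), float(np.mean(neg))
```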
As sensitivity and specificity can always be traded off, we examined sensitivity for a range of values
for two different parameters: the prior means for the regression coefficients and the threshold for
the auxiliary variables. The goal in this analysis was to evaluate the performance of these models
subject to more or less stringent requirements for predicting positive labels. These two parameters
have important related functions in the model. The prior mean in combination with the auxiliary
variable threshold together encode the strength of the prior belief that unobserved labels are likely to
be negative. Effectively, the prior mean applies negative pressure to the predictions and the auxiliary
variable threshold determines the cutoff. For each model type, separate models were fit for each
value of the prior mean of the regression coefficients. This is a proper Bayesian sensitivity analysis.
In contrast, to evaluate predictive performance as a function of the auxiliary variable threshold, a
single model was fit for each model type and prediction was evaluated based on predictive samples
drawn subject to different auxiliary variable thresholds. These methods are significantly different
since the prior mean is varied prior to inference, and the auxiliary variable threshold is varied following inference.
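The post-inference thresholding of the auxiliary variables can be sketched as below; a predictive draw counts a label as positive when its auxiliary variable exceeds the threshold, and the threshold value is varied after inference. The array shapes and names are illustrative assumptions.

```python
import numpy as np

def predict_labels(aux_samples, tau):
    """aux_samples: (S, L) array of S posterior predictive draws of the
    auxiliary variables for L labels; a draw predicts y_l = 1 iff it
    exceeds the threshold tau. Returns the expected positive rate per
    label, averaged over draws."""
    return (aux_samples > tau).mean(axis=0)
```

Raising `tau` yields fewer positive predictions (more specific, less sensitive), which is what traces out the ROC curve in Figure 3.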
Figure 2(a) demonstrates the performance of the model on the clinical data as an ROC curve varying
μ. For instance, a hyperparameter setting of μ = −1.6 yields the following performance: the full
HSLDA model had a true positive rate of 0.57 and a false positive rate of 0.13, the sLDA model had
a true positive rate of 0.42 and a false positive rate of 0.07, and the HSLDA model where LDA and
the regressions were fit separately had a true positive rate of 0.39 and a false positive rate of 0.08.
These points are highlighted in Figure 2(a).
These results indicate that the full HSLDA model predicts more of the correct labels at a cost of
an increase in the number of false positives relative to the comparison models.
Figure 2(b) demonstrates the performance of the model on the retail product data as an ROC curve
also varying μ. For instance, a hyperparameter setting of μ = −2.2 yields the following performance: the full HSLDA model had a true positive rate of 0.85 and a false positive rate of 0.30, the
sLDA model had a true positive rate of 0.78 and a false positive rate of 0.14, and the HSLDA model
where LDA and the regressions were fit separately had a true positive rate of 0.77 and a false positive
rate of 0.16. These results follow a similar pattern to the clinical data. These points are highlighted
in Figure 2(b).
Figure 3 shows the predictive performance of HSLDA relative to the two comparison models on
the clinical dataset as a function of the auxiliary variable threshold. For low values of the auxiliary
variable threshold, the models predict labels in a more sensitive and less specific manner, creating
the points in the upper right corner of the ROC curve. As the auxiliary variable threshold is increased, the models predict in a less sensitive and more specific manner, creating the points in the
lower left hand corner of the ROC curve. HSLDA with full joint inference outperforms sLDA with
independent regressors as well as HSLDA with separately trained regression.
6 Discussion
The sLDA model family, of which HSLDA is a member, can be understood in two different ways.
One way is to see it as a family of topic models that improve on the topic modeling performance
of LDA via the inclusion of observed supervision. An alternative, complementary way is to see it
as a set of models that can predict labels for bag-of-word data. A large diversity of problems can
be expressed as label prediction problems for bag-of-word data. A surprisingly large amount of that
kind of data possess structured labels, either hierarchically constrained or otherwise. That HSLDA
directly addresses this kind of data is a large part of the motivation for this work. That it outperforms
more straightforward approaches should be of interest to practitioners.
Variational Bayes has been the predominant estimation approach applied to sLDA models. Hierarchical probit regression makes for tractable Markov chain Monte Carlo sLDA inference, a benefit
that should extend to other sLDA models should probit regression be used for response variable
prediction there too.
The results in Figures 2(a) and 2(b) suggest that in most cases it is better to do full joint estimation
of HSLDA. An alternative interpretation of the same results is that, if one is more sensitive to
the performance gains that result from exploiting the structure of the labels, then one can, in an
engineering sense, get nearly as much gain in label prediction performance by first fitting LDA
and then fitting a hierarchical probit regression. There are applied settings in which this could be
advantageous.
Extensions to this work include unbounded topic cardinality variants and relaxations to different
kinds of label structure. Unbounded topic cardinality variants pose interesting inference challenges.
Utilizing different kinds of label structure is possible within this framework, but requires relaxing
some of the simplifications we made in this paper for expositional purposes.
References
[1] DMOZ open directory project. http://www.dmoz.org/, 2002.
[2] Stanford network analysis platform. http://snap.stanford.edu/, 2004.
[3] The computational medicine center's 2007 medical natural language processing challenge. http://www.computationalmedicine.org/challenge/previous, 2007.
[4] J. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669, 1993.
[5] E. Birman-Deych, A. D. Waterman, Y. Yan, D. S. Nilasena, M. J. Radford, and B. F. Gage. Accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Medical Care, 43(5):480–485, 2005.
[6] D. Blei and J. McAuliffe. Supervised topic models. Advances in Neural Information Processing, 20:121–128, 2008.
[7] D. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March 2003. ISSN 1532-4435.
[8] S. Chakrabarti, B. Dom, R. Agrawal, and P. Raghavan. Scalable feature selection, classification and signature generation for organizing large text databases into hierarchical topic taxonomies. The VLDB Journal, 7:163–178, August 1998. ISSN 1066-8888.
[9] J. Chang and D. M. Blei. Hierarchical relational models for document networks. Annals of Applied Statistics, 4:124–150, 2010. doi: 10.1214/09-AOAS309.
[10] K. Crammer, M. Dredze, K. Ganchev, P. P. Talukdar, and S. Carroll. Automatic code assignment to medical text. Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 129–136, 2007.
[11] S. Dumais and H. Chen. Hierarchical classification of web content. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '00, pages 256–263, New York, NY, USA, 2000. ACM.
[12] R. Farkas and G. Szarvas. Automatic construction of rule-based ICD-9-CM coding systems. BMC Bioinformatics, 9(Suppl 3):S10, 2008.
[13] M. Farzandipour, A. Sheikhtaheri, and F. Sadoughi. Effective factors on accuracy of principal diagnosis coding based on international classification of diseases, the 10th revision. International Journal of Information Management, 30:78–84, 2010.
[14] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2nd ed. edition, 2004.
[15] I. Goldstein, A. Arzumtsyan, and Ö. Uzuner. Three approaches to automatic assignment of ICD-9-CM codes to radiology reports. AMIA Annual Symposium Proceedings, 2007:279, 2007.
[16] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 101(suppl. 1):5228–5235, 2004.
[17] D. Koller and M. Sahami. Hierarchically classifying documents using very few words. Technical Report 1997-75, Stanford InfoLab, February 1997. Previous number = SIDL-WP-1997-0059.
[18] S. Lacoste-Julien, F. Sha, and M. I. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In Neural Information Processing Systems, pages 897–904.
[19] L. Larkey and B. Croft. Automatic assignment of ICD9 codes to discharge summaries. Technical report, University of Massachussets, 1995.
[20] L. V. Lita, S. Yu, S. Niculescu, and J. Bi. Large scale diagnostic code classification for medical patient records. In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP'08), 2008.
[21] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Building domain-specific search engines with machine learning techniques. In Proc. AAAI-99 Spring Symposium on Intelligent Agents in Cyberspace, 1999.
[22] S. Pakhomov, J. Buntrock, and C. Chute. Automating the assignment of diagnosis codes to patient encounters using example-based and machine learning techniques. Journal of the American Medical Informatics Association (JAMIA), 13(5):516–525, 2006.
[23] D. Ramage, D. Hall, R. Nallapati, and C. D. Manning. Labeled LDA: a supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 248–256, 2009.
[24] B. Ribeiro-Neto, A. Laender, and L. De Lima. An experimental study in automatically categorizing medical documents. Journal of the American Society for Information Science and Technology, 52(5):391–401, 2001.
[25] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[26] H. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1973–1981. 2009.
[27] C. Wang, D. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, pages 1903–1910, 2009.
Extracting Speaker-Specific Information with a
Regularized Siamese Deep Network
Ke Chen and Ahmad Salman
School of Computer Science, The University of Manchester
Manchester M13 9PL, United Kingdom
{chen,salmana}@cs.manchester.ac.uk
Abstract
Speech conveys different yet mixed information ranging from linguistic to
speaker-specific components, and each of them should be exclusively used in a
specific task. However, it is extremely difficult to extract a specific information
component given the fact that nearly all existing acoustic representations carry
all types of speech information. Thus, the use of the same representation in both
speech and speaker recognition hinders a system from producing better performance due to interference of irrelevant information. In this paper, we present a
deep neural architecture to extract speaker-specific information from MFCCs. As
a result, a multi-objective loss function is proposed for learning speaker-specific
characteristics and regularization via normalizing interference of non-speaker related information and avoiding information loss. With LDC benchmark corpora
and a Chinese speech corpus, we demonstrate that a resultant speaker-specific representation is insensitive to text/languages spoken and environmental mismatches
and hence outperforms MFCCs and other state-of-the-art techniques in speaker
recognition. We discuss relevant issues and relate our approach to previous work.
1 Introduction
It is well known that speech conveys various yet mixed information where there are linguistic information, a major component, and non-verbal information such as speaker-specific and emotional
components [1]. For human communication, all the information components in speech turn out to
be very useful and exclusively used for different tasks. For example, one often recognizes a speaker
regardless of what is spoken for speaker recognition, while it is effortless for him/her to understand
what is exactly spoken by different speakers for speech recognition. In general, however, there is no
effective way to automatically extract an information component of interest from speech signals so
that the same representation has to be used in different speech information tasks. The interference
of different yet entangled speech information components in most existing acoustic representations
hinders a speech or speaker recognition system from achieving better performance [1].
For speaker-specific information extraction, two main efforts have been made so far; one is the use
of data component analysis [2], e.g., PCA or ICA, and the other is the use of adaptive filtering
techniques [3]. However, the aforementioned techniques either fail to associate extracted data components with speaker-specific information as such information is non-predominant over speech or
obtain features overfitting to a specific corpus since it is unlikely that speaker-specific information
is statically resided in fixed frequency bands. Hence, the problem is still unsolved in general [4].
Recent studies suggested that learning deep architectures (DAs) provides a new way for tackling
complex AI problems [5]. In particular, representations learned by DAs greatly facilitate various
recognition tasks and constantly lead to the improved performance in machine perception [6]-[9]. On
the other hand, the Siamese architecture originally proposed in [10] uses supervised yet contrastive
Figure 1: Regularized Siamese deep network (RSDN) architecture.
learning to explore intrinsic similarity/dissimilarity underlying an unknown data space. Incorporated
by DAs, the Siamese architecture has been successfully applied to face recognition [11] and dimensionality reduction [12]. Inspired by the aforementioned work, we present a regularized Siamese
deep network (RSDN) to extract speaker-specific information from a spectral representation, Mel
Frequency Cepstral Coefficients (MFCCs), commonly used in both speech and speaker recognition.
A multi-objective loss function is proposed for learning speaker-specific characteristics, normalizing
interference of non-speaker related information and avoiding information loss. Our RSDN learning
adopts the famous two-phase deep learning strategy [5],[13]; i.e., greedy layer-wise unsupervised
learning for initializing its component deep neural networks followed by global supervised learning
based on the proposed loss function. With LDC benchmark corpora [14] and a Chinese corpus [15],
we demonstrate that a generic speaker-specific representation learned by our RSDN is insensitive
to text and languages spoken and, moreover, applicable to speech corpora unseen during learning.
Experimental results in speaker recognition suggest that a representation learned by the RSDN outperforms MFCCs and that by the CDBN [9] that learns a generic speech representation without
speaker-specific information extraction. To the best of our knowledge, the work presented in this paper is
the first attempt on speaker-specific information extraction with deep learning.
In the reminder of this paper, Sect. 2 describes our RSDN architecture and proposes a loss function.
Sect. 3 presents a two-phase learning algorithm to train the RSDN. Sect. 4 reports our experimental
methodology and results. The last section discusses relevant issues and relates our approach to
previous work in deep learning.
2 Model Description
In this section, we first describe our RSDN architecture and then propose a multi-objective loss
function used to train the RSDN for learning speaker-specific characteristics.
2.1 Architecture
As illustrated in Figure 1, our RSDN architecture consists of two subnets, and each subnet is a fully
connected multi-layered perceptron of 2K+1 layers, i.e., an input layer, 2K-1 hidden layers and a
visible layer at the top. If we stipulate that layer 0 is input layer, there are the same number of
neurons in layers k and 2K-k for k = 0, 1, ? ? ? , K. In particular, the Kth hidden layer is used as
code layer, and neurons in this layer are further divided into two subsets. As depicted in Figure 1,
those neurons in the box named CS and colored in red constitute one subset for encoding speaker-specific information and all remaining neurons in the code layer form the other subset expected to
2
accommodate non-speaker related information. The input to each subnet is an MFCC representation
of a frame after a short-term analysis, in which a speech segment is divided into a number of frames and the MFCC representation is achieved for each frame. As depicted in Figure 1, $\mathbf{x}_{it}$ is the MFCC feature vector of frame $t$ in $X_i$, input to subnet $i$ ($i = 1, 2$), where $X_i = \{\mathbf{x}_{it}\}_{t=1}^{T_B}$ collectively denotes the MFCC feature vectors for a speech segment of $T_B$ frames.
During learning, the two identical subnets are coupled at their coding layers via neurons in CS with an incompatibility measure defined on two speech segments of equal length, $X_1$ and $X_2$, input to two
subnets, which will be presented in 2.2. After learning, we achieve two identical subnets and hence
can use either of them to produce a new representation for a speech frame. For input x to a subnet,
only the bottom K layers of the subnet are used and the output of neurons in CS at the code layer or
layer K, denoted by CS(x), is its new representation, as illustrated by the dash box in Figure 1.
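The mirrored layer layout can be expressed as a one-liner. The encoder sizes below (a 39-dimensional input and the hidden widths) are illustrative assumptions, not the dimensions used in the paper.

```python
def subnet_sizes(encoder_sizes):
    """encoder_sizes: neuron counts for layers 0..K (input through the
    code layer). The decoder mirrors the encoder, so layers k and
    2K - k always have the same number of neurons."""
    return encoder_sizes + encoder_sizes[-2::-1]
```

For example, `subnet_sizes([39, 100, 50])` gives `[39, 100, 50, 100, 39]`: a 2K+1-layer perceptron with K = 2 and a 50-unit code layer.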
2.2 Loss Function
Let $CS(\mathbf{x}_{it})$ be the output of all neurons in CS of subnet $i$ ($i = 1, 2$) for input $\mathbf{x}_{it} \in X_i$, and let $CS(X_i) = \{CS(\mathbf{x}_{it})\}_{t=1}^{T_B}$, which pools the output of neurons in CS over the $T_B$ frames in $X_i$, as illustrated in Figure 1. As statistics of speech signals are more likely to capture speaker-specific information [5], we define the incompatibility measure based on the 1st- and 2nd-order statistics of the new representation to be learned as
$$D[CS(X_1), CS(X_2); \theta] = \|\mu^{(1)} - \mu^{(2)}\|_2^2 + \|\Sigma^{(1)} - \Sigma^{(2)}\|_F^2, \qquad (1)$$

where

$$\mu^{(i)} = \frac{1}{T_B} \sum_{t=1}^{T_B} CS(\mathbf{x}_{it}), \qquad \Sigma^{(i)} = \frac{1}{T_B - 1} \sum_{t=1}^{T_B} [CS(\mathbf{x}_{it}) - \mu^{(i)}][CS(\mathbf{x}_{it}) - \mu^{(i)}]^T, \qquad i = 1, 2.$$

In Eq. (1), $\|\cdot\|_2$ and $\|\cdot\|_F$ are the $L_2$ norm and the Frobenius norm, respectively. $\theta$ is a collective notation of all connection weights and biases in the RSDN. Intuitively, two speech segments belonging to different speakers lead to different statistics and hence their incompatibility score measured
by (1) should be large after learning. Otherwise their score is expected to be small.
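In NumPy, this incompatibility measure can be sketched as follows (a paraphrase of the definition, not the authors' code):

```python
import numpy as np

def incompatibility(cs1, cs2):
    """cs1, cs2: (T_B, |CS|) arrays of code-layer outputs for two
    speech segments. Returns the squared L2 distance between the
    segment means plus the squared Frobenius distance between the
    segment covariances."""
    mu1, mu2 = cs1.mean(axis=0), cs2.mean(axis=0)
    sig1 = np.cov(cs1, rowvar=False)  # 1/(T_B - 1) normalization, as above
    sig2 = np.cov(cs2, rowvar=False)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2))
```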
For a corpus of multiple speakers, we can construct a training set so that an example be in the form:
(X1 , X2 ; I) where I is the label defined as I = 1 if two speech segments, X1 and X2 , are spoken
by the same speaker or I = 0 otherwise. Using such training examples, we apply the energy-based
model principle [16] to define a loss function as
$$L(X_1, X_2; \theta) = \alpha\,[L_R(X_1; \theta) + L_R(X_2; \theta)] + (1 - \alpha)\,L_D(X_1, X_2; \theta), \qquad (2)$$

where

$$L_R(X_i; \theta) = \frac{1}{T_B} \sum_{t=1}^{T_B} \|\mathbf{x}_{it} - \hat{\mathbf{x}}_{it}\|_2^2 \;\; (i = 1, 2), \qquad L_D(X_1, X_2; \theta) = I D + (1 - I)\Big(e^{-\frac{D_m}{\lambda_m}} + e^{-\frac{D_S}{\lambda_S}}\Big).$$

Here $D_m = \|\mu^{(1)} - \mu^{(2)}\|_2^2$ and $D_S = \|\Sigma^{(1)} - \Sigma^{(2)}\|_F^2$. $\lambda_m$ and $\lambda_S$ are the tolerance bounds of incompatibility scores in terms of $D_m$ and $D_S$, which can be estimated from a training set. In $L_D(X_1, X_2; \theta)$, we drop the explicit parameters of $D[CS(X_1), CS(X_2); \theta]$ to simplify presentation.
Eq. (2) defines a multi-objective loss function where $\alpha$ ($0 < \alpha < 1$) is a parameter used to trade off between the two objectives $L_R(X_i; \theta)$ and $L_D(X_1, X_2; \theta)$. The motivation for the two objectives is as follows. By nature, both speaker-specific and non-speaker related information components are entangled over speech [1],[5]. When we tend to extract speaker-specific information, the interference of non-speaker related information is inevitable and appears in various forms. $L_D(X_1, X_2; \theta)$ measures errors responsible for wrong speaker-specific statistics on a representation learned by a Siamese DA in different situations. However, using $L_D(X_1, X_2; \theta)$ only to train a Siamese DA cannot cope with the enormous variations of non-speaker related information, in particular, linguistic information (a predominant information component in speech), which often leads to overfitting to a training corpus according to our observations. As a result, we use $L_R(X_i; \theta)$ to measure reconstruction errors to monitor information loss during speaker-specific information extraction. By
minimizing reconstruction errors in two subnets, the code layer leads to a speaker-specific representation with the output of neurons in CS while the remaining neurons are used to regularize various
interference by capturing some invariant properties underlying them for good generalization.
In summary, we anticipate that minimizing the multi-objective loss function defined in Eq. (2) will
enable our RSDN to extract speaker-specific information by encoding it through a generic speaker-specific representation.
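The multi-objective loss can be sketched directly from Eq. (2). The trade-off and tolerance values below (`alpha`, `lam_m`, `lam_s`) are placeholders, not the settings used in the paper.

```python
import numpy as np

def rsdn_loss(x1, x2, xhat1, xhat2, cs1, cs2, same,
              alpha=0.5, lam_m=1.0, lam_s=1.0):
    """x*, xhat*: (T_B, d) inputs and reconstructions for the two
    subnets; cs*: (T_B, |CS|) code-layer outputs; same: 1 if the two
    segments come from the same speaker, else 0."""
    # reconstruction terms L_R
    lr1 = np.mean(np.sum((x1 - xhat1) ** 2, axis=1))
    lr2 = np.mean(np.sum((x2 - xhat2) ** 2, axis=1))
    # 1st-/2nd-order statistics of the code outputs
    mu1, mu2 = cs1.mean(axis=0), cs2.mean(axis=0)
    sig1 = np.cov(cs1, rowvar=False)
    sig2 = np.cov(cs2, rowvar=False)
    dm = np.sum((mu1 - mu2) ** 2)
    ds = np.sum((sig1 - sig2) ** 2)
    # contrastive term L_D: pull same-speaker statistics together,
    # push different-speaker statistics apart up to the tolerances
    ld = same * (dm + ds) + (1 - same) * (np.exp(-dm / lam_m) + np.exp(-ds / lam_s))
    return float(alpha * (lr1 + lr2) + (1 - alpha) * ld)
```

With perfect reconstructions and identical code statistics, the loss is 0 for a same-speaker pair and reaches the maximum of the penalty term for a different-speaker pair, matching the intuition behind Eq. (2).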
3 Learning Algorithm
In this section, we apply the two-phase deep learning strategy [5],[13] to derive our learning algorithm, i.e., pre-training for initializing subnets and discriminative learning for learning a speakerspecific representation.
We first present the notation system used in our algorithm. Let h_kj(x_it) denote the output of the
jth neuron in layer k for k = 0, 1, …, K, …, 2K. h_k(x_it) = {h_kj(x_it)}_{j=1}^{|h_k|} is a collective notation
for the output of all neurons in layer k of subnet i (i = 1, 2), where |h_k| is the number of neurons
in layer k. By this notation, k = 0 refers to the input layer with h_0(x_it) = x_it, and k = 2K refers
to the top layer producing the reconstruction x̂_it. In the coding layer, i.e., layer K, CS(x_it) =
{h_Kj(x_it)}_{j=1}^{|CS|} is a simplified notation for the output of the neurons in CS. Let W_k^(i) and b_k^(i) denote the
connection weight matrix between layers k−1 and k and the bias vector of layer k in subnet i (i = 1, 2),
respectively, for k = 1, …, 2K. Then the output of layer k is h_k(x_it) = σ[u_k(x_it)] for k = 1, …, 2K−1,
where u_k(x_it) = W_k^(i) h_{k−1}(x_it) + b_k^(i) and σ(z) = {(1 + e^{−z_j})^{−1}}_{j=1}^{|z|}. Note that we use the
linear transfer function in the top layer, i.e., layer 2K, to reconstruct the original input.
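As a concrete illustration, the forward pass defined by this notation can be sketched in a few lines of NumPy. The layer sizes, random weights, and the |CS| = 100 split below are placeholders for illustration, not the trained model:

```python
import numpy as np

def sigmoid(z):
    # elementwise logistic transfer function sigma(z)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass through one 2K-layer subnet.

    weights/biases: lists of length 2K. The top layer uses a linear
    transfer function to reconstruct the input; all other layers are
    logistic. Returns the activations h_0, ..., h_2K (with h_0 = x).
    """
    hs = [x]
    n_layers = len(weights)
    for k in range(n_layers):
        u = weights[k] @ hs[-1] + biases[k]
        # linear output in the top layer, logistic elsewhere
        hs.append(u if k == n_layers - 1 else sigmoid(u))
    return hs

# toy example: K = 2, so 2K = 4 layers mapping 19-dim MFCCs back to 19 dims
rng = np.random.default_rng(0)
sizes = [19, 100, 200, 100, 19]   # layer 0 (input) .. layer 2K (reconstruction)
W = [rng.normal(scale=0.1, size=(sizes[k + 1], sizes[k])) for k in range(4)]
b = [np.zeros(sizes[k + 1]) for k in range(4)]
hs = forward(rng.normal(size=19), W, b)
code = hs[2]        # layer K: the code layer
cs = code[:100]     # first |CS| neurons form the speaker-specific part CS
```

Only the first |CS| entries of the layer-K code are treated as CS; the remaining code neurons play the regularizing role described above.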
3.1 Pre-training
For pre-training, we employ the denoising autoencoder [17] as a building block to initialize the biases
and connection weight matrices of a subnet. A denoising autoencoder is a three-layered perceptron
where the input, x̃, is a distorted version of the target output, x. For a training example, (x̃, x), the
output of the autoencoder is a restored version, x̂. Since the MFCCs fed to the first hidden layer and
the intermediate representations input to all other hidden layers are of continuous value, we always
distort the input, x, by adding Gaussian noise to form a distorted version, x̃. The restoration learning
is done by minimizing the MSE loss between x and x̂ with respect to the weight matrix and biases.
We apply the stochastic back-propagation (SBP) algorithm to train the denoising autoencoders, and
the greedy layer-wise learning procedure [5],[13] leads to initial weight matrices for the first K
hidden layers, as depicted in a dashed box in Figure 1, i.e., W_1, …, W_K of a subnet. Then, we set
W_{K+k} = W_{K−k+1}^T for k = 1, …, K to initialize W_{K+1}, …, W_{2K} of the subnet. Finally, the second
subnet is created by simply duplicating the pre-trained one.
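A minimal sketch of this pre-training phase, assuming tied-weight denoising autoencoders trained by plain stochastic gradient descent (the hyperparameters and the zero initialization of decoder biases are our simplifications, not the authors' settings):

```python
import numpy as np

def train_denoising_autoencoder(X, n_hidden, noise_std=0.1, lr=0.01,
                                epochs=5, seed=0):
    """One greedy layer: corrupt x with Gaussian noise, reconstruct the
    clean x, and minimize the MSE by SGD. Decoder weight is tied to W.T.
    Returns the encoder parameters (W, b). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_hidden, d))
    b = np.zeros(n_hidden)
    c = np.zeros(d)  # decoder bias
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x in X:
            x_noisy = x + rng.normal(scale=noise_std, size=d)
            h = sig(W @ x_noisy + b)
            x_hat = W.T @ h + c                  # linear reconstruction
            grad_out = 2.0 * (x_hat - x)         # d(MSE)/d(x_hat)
            du = (W @ grad_out) * h * (1.0 - h)  # backprop into encoder
            # tied weight collects encoder and decoder contributions
            W -= lr * (np.outer(du, x_noisy) + np.outer(h, grad_out))
            b -= lr * du
            c -= lr * grad_out
    return W, b

def pretrain_subnet(X, layer_sizes):
    """Greedy layer-wise pre-training of W_1..W_K, then the mirror rule
    W_{K+k} = W_{K-k+1}^T for the decoding half of the subnet."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    Ws, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_denoising_autoencoder(H, n_hidden)
        Ws.append(W)
        H = sig(H @ W.T + b)  # representation fed to the next layer
    Ws += [W.T.copy() for W in reversed(Ws)]
    return Ws

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 19))       # 20 toy MFCC frames
Ws = pretrain_subnet(X, [8, 6])     # K = 2 hidden layers
```

The second subnet is then just a copy of these matrices, as the text states.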
3.2 Discriminative Learning
For discriminative learning, we minimize the loss function in Eq. (2) based on the pre-trained subnets
for speaker-specific information extraction. Given that our loss function is defined on statistics of the T_B
frames in a speech segment, we cannot update parameters until we have the T_B outputs of the neurons in
CS at the code layer. Fortunately, the SBP algorithm perfectly meets this requirement: in the SBP
algorithm, we always set the batch size to the number of frames in a speech segment. To simplify the
presentation, we shall drop explicit parameters in our derivation if doing so causes no ambiguities.
In terms of the reconstruction loss, L_R(X_i; θ), we have the following gradients. For layer k = 2K,

$$\frac{\partial L_R}{\partial u_{2K}(x_{it})} = 2(\hat{x}_{it} - x_{it}), \quad i = 1, 2. \qquad (3)$$

For all hidden layers, k = 2K−1, …, 1, applying the chain rule and (3) leads to

$$\frac{\partial L_R}{\partial u_k(x_{it})} = \Big\{h_{kj}(x_{it})[1-h_{kj}(x_{it})]\,\frac{\partial L_R}{\partial h_{kj}(x_{it})}\Big\}_{j=1}^{|h_k|}, \qquad \frac{\partial L_R}{\partial h_k(x_{it})} = \big(W_{k+1}^{(i)}\big)^T\,\frac{\partial L_R}{\partial u_{k+1}(x_{it})}. \qquad (4)$$
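These gradients amount to standard backpropagation through a subnet with a linear top layer and logistic hidden layers. The sketch below implements them and checks one weight gradient against a finite difference; the shapes and weights are arbitrary toy values, not the trained model:

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def reconstruction_gradients(x, Ws, bs):
    """Backpropagate L_R = ||x_hat - x||^2 through a 2K-layer subnet:
    the top layer is linear, so dL_R/du_2K = 2(x_hat - x); hidden layers
    use the logistic derivative h(1 - h). Returns (activations, dL_R/du_k)."""
    # forward pass
    hs = [x]
    for k, (W, b) in enumerate(zip(Ws, bs)):
        u = W @ hs[-1] + b
        hs.append(u if k == len(Ws) - 1 else sig(u))
    # backward pass
    d_u = [None] * len(Ws)
    d_u[-1] = 2.0 * (hs[-1] - x)                       # top layer
    for k in range(len(Ws) - 2, -1, -1):
        d_h = Ws[k + 1].T @ d_u[k + 1]                 # chain rule, right part
        d_u[k] = hs[k + 1] * (1 - hs[k + 1]) * d_h     # chain rule, left part
    return hs, d_u

# finite-difference check on one weight entry
rng = np.random.default_rng(1)
sizes = [5, 7, 4, 7, 5]
Ws = [rng.normal(scale=0.3, size=(sizes[k + 1], sizes[k])) for k in range(4)]
bs = [np.zeros(sizes[k + 1]) for k in range(4)]
x = rng.normal(size=5)

hs, d_u = reconstruction_gradients(x, Ws, bs)
analytic = np.outer(d_u[0], hs[0])[0, 0]   # dL_R/dW_1[0,0]
eps = 1e-6
Ws[0][0, 0] += eps
L_plus = np.sum((reconstruction_gradients(x, Ws, bs)[0][-1] - x) ** 2)
Ws[0][0, 0] -= 2 * eps
L_minus = np.sum((reconstruction_gradients(x, Ws, bs)[0][-1] - x) ** 2)
numeric = (L_plus - L_minus) / (2 * eps)
```

The analytic and numeric gradients should agree to several decimal places.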
As the contrastive loss, L_D(X_1, X_2; θ), is defined on the neurons in CS at the code layers of the two subnets, its
gradients are determined only by parameters related to the K hidden layers in the two subnets, as depicted
by the dashed boxes in Figure 1. For layer k = K and subnet i = 1, 2, after a derivation (see the appendix for
details), we obtain

$$\frac{\partial L_D}{\partial u_K(x_{it})} = \Big(\big\{[I - \lambda_m^{-1}(1-I)\,e^{-\Delta_D/\lambda_m}]\,\alpha_j(x_{it})\big\}_{j=1}^{|CS|},\ \{0\}_{j=|CS|+1}^{|h_K|}\Big) + \Big(\big\{[I - \lambda_S^{-1}(1-I)\,e^{-\Delta_S/\lambda_S}]\,\beta_j(x_{it})\big\}_{j=1}^{|CS|},\ \{0\}_{j=|CS|+1}^{|h_K|}\Big). \qquad (5)$$

Here, \alpha_j(x_{it}) = p_j^{(i)}\,[CS(x_{it})]_j\,(1-[CS(x_{it})]_j) and \beta_j(x_{it}) = q_j(x_{it})\,[CS(x_{it})]_j\,(1-[CS(x_{it})]_j),
where p^{(i)} = \frac{2}{T_B}\,\mathrm{sign}(1.5-i)\,(\mu^{(1)}-\mu^{(2)}), q(x_{it}) = \frac{4}{T_B-1}\,\mathrm{sign}(1.5-i)\,(\Sigma^{(1)}-\Sigma^{(2)})\,[CS(x_{it})-\mu^{(i)}],
and [CS(x_{it})]_j is the output of the jth neuron in CS for input x_{it}. For layers k = K−1, …, 1, we have

$$\frac{\partial L_D}{\partial u_k(x_{it})} = \Big\{h_{kj}(x_{it})[1-h_{kj}(x_{it})]\,\frac{\partial L_D}{\partial h_{kj}(x_{it})}\Big\}_{j=1}^{|h_k|}, \qquad \frac{\partial L_D}{\partial h_k(x_{it})} = \big(W_{k+1}^{(i)}\big)^T\,\frac{\partial L_D}{\partial u_{k+1}(x_{it})}. \qquad (6)$$
Given a training example, $\big(\{x_{1t}\}_{t=1}^{T_B}, \{x_{2t}\}_{t=1}^{T_B}; I\big)$, we use the gradients obtained from Eqs. (3)-(6) to
update all the parameters in the RSDN. For layers k = K+1, …, 2K, their parameters are updated by

$$W_k^{(i)} \leftarrow W_k^{(i)} - \frac{\eta}{T_B}\sum_{t=1}^{T_B}\sum_{r=1}^{2} \frac{\partial L_R}{\partial u_k(x_{rt})}\,[h_{k-1}(x_{rt})]^T, \qquad b_k^{(i)} \leftarrow b_k^{(i)} - \frac{\eta}{T_B}\sum_{t=1}^{T_B}\sum_{r=1}^{2} \frac{\partial L_R}{\partial u_k(x_{rt})}. \qquad (7)$$

For layers k = 1, …, K, their weight matrices and biases are updated with

$$W_k^{(i)} \leftarrow W_k^{(i)} - \frac{\eta}{T_B}\sum_{t=1}^{T_B}\sum_{r=1}^{2}\Big[\lambda\,\frac{\partial L_R}{\partial u_k(x_{rt})} + (1-\lambda)\,\frac{\partial L_D}{\partial u_k(x_{rt})}\Big]\,[h_{k-1}(x_{rt})]^T, \qquad (8a)$$

$$b_k^{(i)} \leftarrow b_k^{(i)} - \frac{\eta}{T_B}\sum_{t=1}^{T_B}\sum_{r=1}^{2}\Big[\lambda\,\frac{\partial L_R}{\partial u_k(x_{rt})} + (1-\lambda)\,\frac{\partial L_D}{\partial u_k(x_{rt})}\Big]. \qquad (8b)$$

In Eqs. (7) and (8), η is a learning rate. Here we emphasize that using the sum of the gradients caused by
the two subnets in the update rules guarantees that the two subnets are always kept identical during learning.
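The update rules can be sketched as follows for a single layer, with `mix=False` corresponding to Eq. (7) (layers above the code layer, reconstruction loss only) and `mix=True` to Eq. (8) with loss-mixing weight `lam`; the accumulation over frames and subnets is assumed to have produced the gradient lists:

```python
import numpy as np

def sgd_step(W, b, grads_R, grads_D, prevs, eta, lam, T_B, mix=True):
    """One layer's update following Eqs. (7)/(8).

    grads_R / grads_D: lists of dL_R/du_k and dL_D/du_k vectors, one per
    (frame t, subnet r) pair; prevs: the matching h_{k-1} activations.
    Illustrative sketch only -- not the authors' implementation.
    """
    dW = np.zeros_like(W)
    db = np.zeros_like(b)
    for gR, gD, h in zip(grads_R, grads_D, prevs):
        g = lam * gR + (1.0 - lam) * gD if mix else gR
        dW += np.outer(g, h)
        db += g
    return W - (eta / T_B) * dW, b - (eta / T_B) * db

# toy check: with lam = 1 the mixed rule reduces to the pure-L_R rule
rng = np.random.default_rng(2)
W, b = rng.normal(size=(4, 3)), np.zeros(4)
gR = [rng.normal(size=4) for _ in range(6)]   # T_B = 3 frames x 2 subnets
gD = [rng.normal(size=4) for _ in range(6)]
hp = [rng.normal(size=3) for _ in range(6)]
W_mix, b_mix = sgd_step(W, b, gR, gD, hp, eta=0.01, lam=1.0, T_B=3)
W_rec, b_rec = sgd_step(W, b, gR, gD, hp, eta=0.01, lam=1.0, T_B=3, mix=False)
```

Sharing one update across both subnets is what keeps the two subnets identical, as emphasized above.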
4 Experiment

In this section, we describe our experimental methodology and report experimental results in visualization of vowel distributions, speaker comparison and speaker segmentation.
We employ two LDC benchmark corpora [14], KING and TIMIT, and a Chinese speech corpus [15],
CHN, in our experiments. KING, including wide-band and narrow-band sets, consists of 51 speakers
whose utterances were recorded in 10 sessions. By convention, its narrow-band set is called NKING,
while KING itself often refers to its wide-band set. There are 630 speakers in TIMIT and 59
speakers, recorded in three sessions, in CHN. All corpora were collected especially for evaluating
speaker recognition systems. The same feature extraction procedure is applied to all three corpora;
i.e., after a short-term analysis suggested in [18], including silence removal with an energy-based
method, pre-emphasis with the filter H(z) = 1 − 0.95 z^{-1}, and Hamming windowing with a
size of 20 ms and a 10 ms shift, we extract 19-order MFCCs [1] for each frame.
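The framing front-end just described (pre-emphasis, 20 ms Hamming windows, 10 ms shift) can be sketched as below; the mel filterbank and DCT steps of MFCC extraction are omitted, and the 8 kHz sampling rate and test tone are only illustrative:

```python
import numpy as np

def frame_signal(signal, fs, win_ms=20, shift_ms=10, preemph=0.95):
    """Short-term analysis: pre-emphasis with H(z) = 1 - 0.95 z^-1, then
    Hamming-windowed frames of win_ms with a shift_ms hop. Returns an
    (n_frames, win) array ready for spectral analysis."""
    emph = np.append(signal[0], signal[1:] - preemph * signal[:-1])
    win = int(fs * win_ms / 1000)
    hop = int(fs * shift_ms / 1000)
    n_frames = 1 + (len(emph) - win) // hop
    frames = np.stack([emph[i * hop: i * hop + win] for i in range(n_frames)])
    return frames * np.hamming(win)

fs = 8000                                            # narrow-band speech
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)     # 1 s test tone
F = frame_signal(x, fs)                              # 20 ms / 10 ms framing
```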
For the RSDN learning, we use utterances of all 49 speakers recorded in sessions 1 and 2 in KING.
Furthermore, we distort all the utterances with an additive white noise channel with an SNR of 10 dB
and a Rayleigh fading channel with a 5 Hz Doppler shift [19] to simulate channel effects. Thus
our training set consists of clean utterances and their corrupted versions. We randomly divide all
utterances into speech segments of a length T_B (1 sec ≤ T_B ≤ 2 sec) and then exhaustively combine
them to form training examples as described in Sect. 2.2. With a validation set of all the utterances
recorded in session 3 in KING, we select a structure of K = 4 (100, 100, 100 and 200 neurons in
layers 1-4, and |CS| = 100 in the code layer, i.e., layer 4) from candidate models with 2 < K < 5 and
50-1000 neurons in a hidden layer. The parameters used in our learning are as follows: Gaussian noise of
N(0, 0.1σ) in the denoising autoencoder, λ = 0.2, λ_m = 100 and λ_S = 2.5 in the loss function defined
in Eq. (2), and learning rates η = 0.01 and 0.001 for pre-training and discriminative learning, respectively. After
learning, the RSDN is used to yield a 100-dimensional representation, CS, from the 19-order MFCCs.
For any speaker recognition task, speaker modeling (SM) is inevitable. In our experiments, we use
the 1st- and 2nd-order statistics of a speech segment based on a representation, SM = {μ, Σ}, for
SM. Furthermore, we employ a speaker distance metric: d(SM_1, SM_2) = tr[(Σ_1^{-1} + Σ_2^{-1})(μ_1 − μ_2)(μ_1 − μ_2)^T],
where SM_i = {μ_i, Σ_i} (i = 1, 2) are two speaker models (SMs). This distance
metric is derived from the divergence metric for two normal distributions [20] by dropping the term
concerning only covariance matrices, based on our observation that covariance matrices often vary
considerably for short segments and the original divergence metric often leads to poor performance
for various representations including MFCCs and ours. In contrast, the one defined above is stable
irrespective of utterance lengths and results in good performance for different representations.
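A direct implementation of this speaker model and distance metric (the toy Gaussian segments below are only for illustration):

```python
import numpy as np

def speaker_model(frames):
    """SM = {mu, Sigma}: 1st- and 2nd-order statistics of a segment's frames."""
    return frames.mean(axis=0), np.cov(frames, rowvar=False)

def speaker_distance(sm1, sm2):
    """d(SM1, SM2) = tr[(Sigma1^-1 + Sigma2^-1)(mu1 - mu2)(mu1 - mu2)^T],
    i.e. the divergence-based metric above with the covariance-only term
    dropped."""
    mu1, S1 = sm1
    mu2, S2 = sm2
    diff = mu1 - mu2
    return float(np.trace((np.linalg.inv(S1) + np.linalg.inv(S2))
                          @ np.outer(diff, diff)))

# two segments from the same "speaker" vs. one with a shifted mean
rng = np.random.default_rng(3)
a = rng.normal(loc=0.0, size=(200, 5))
b = rng.normal(loc=0.0, size=(200, 5))
c = rng.normal(loc=3.0, size=(200, 5))
d_same = speaker_distance(speaker_model(a), speaker_model(b))
d_diff = speaker_distance(speaker_model(a), speaker_model(c))
```

Because the metric is a quadratic form in the mean difference, it is zero for identical models and grows with the mean separation.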
[Figure 2 shows three scatter plots; the panel labels mark phonetically correlated vowel groups: /aa/, /iy/, /aw/, /ay/; /ae/, /aw/, /iy/, /ix/; /iy/, /ih/, /eh/, /ix/; /ae/, /aa/, /aw/, /ay/.]
Figure 2: Visualization of all 20 vowels. (a) CS representation. (b) C̄S representation. (c) MFCCs.
4.1 Visualization
Vowels have been recognized to be a main carrier of speaker-specific information [1],[4],[18],[20].
TIMIT [14] provides phonetic transcription of all 10 utterances containing all 20 vowels in English
for every speaker. As all the vowels may appear in 10 different utterances, up to 200 vowel segments
in length of 0.1-0.5 sec are available for a speaker, which enables us to investigate vowel distributions
in a representation space for different speakers. Here, we merely visualize mean feature vectors of
up to 200 segments for a speaker in terms of a specific representation with the t-SNE method [21],
which is likely to reflect intrinsic manifolds, by projecting them onto a two-dimensional plane.
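The points being projected are per-segment mean feature vectors; a minimal sketch of that preprocessing step (the segment boundaries and dimensions are made up, and the subsequent 2-D projection, e.g. with t-SNE, is not shown):

```python
import numpy as np

def segment_means(frames, boundaries):
    """Mean feature vector per vowel segment, given (start, end) frame
    indices from a phonetic transcription. These per-segment means are
    the points later projected to 2-D for visualization."""
    return np.stack([frames[s:e].mean(axis=0) for s, e in boundaries])

rng = np.random.default_rng(4)
frames = rng.normal(size=(50, 100))   # 50 frames of a 100-dim CS code
segs = [(0, 10), (10, 25), (25, 50)]  # three toy vowel segments
M = segment_means(frames, segs)       # one mean vector per segment
```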
In the code layer of our RSDN, the output of neurons 1-100 forms a speaker-specific representation, CS,
and that of the remaining 100 neurons becomes a non-speaker related representation, dubbed C̄S. For a
noticeable effect, we randomly choose only five speakers (four females and one male) and visualize
their vowel distributions in Figure 2 in terms of the CS, C̄S and MFCC representations, respectively,
where a marker/color corresponds to a speaker. It is evident from Figure 2(a) that, by using the CS
representation, most vowels spoken by a speaker are tightly grouped together while vowels spoken
by different speakers are well separated. For the C̄S representation, close inspection of Figure
2(b) reveals that the same vowels spoken by different speakers are, to a great extent, co-located.
Moreover, most of phonetically correlated vowels, as circled and labeled, are closely located in
dense regions independent of speakers and genders. For comparison, we also visualize the same by
using their original MFCCs in Figure 2(c) and observe that most of phonetically correlated vowels
are also co-located, as circled and labeled, whilst others scatter across the plane and their positions
are determined mainly by vowels but affected by speakers. In particular, most of the vowels spoken
by the male, marked by ? and colored green, are grouped tightly but isolated from those of all the
females. Thus, the visualization in Figure 2 demonstrates how our RSDN learning works and could lend
evidence to justify why MFCCs can be used in both speech and speaker recognition [1].
4.2 Speaker Comparison
Speaker comparison (SC) is an essential process involved in any speaker recognition task: by comparing two speaker models to collect evidence for decision-making, it provides a direct way to
evaluate representations and speaker modeling without addressing decision-making issues [22]. In our
SC experiments, we employ NKING [14], a narrow-band corpus with many variabilities. During data
collection, there was a "great divide" between sessions 1-5 and 6-10; both the recording device and the environments changed, which alters the spectral features of 26 speakers and leads to a 10 dB SNR reduction
on average. As suggested in [18], we conduct two experiments: within-divide, where SMs built
on utterances in session 1 are compared to SMs built on those in sessions 2-5, and cross-divide, where
SMs built on utterances in session 1 are compared with those in sessions 6-10. As short utterances
pose a greater challenge for speaker recognition [4],[18],[20], utterances are partitioned into short
segments of a certain length, and SMs built on segments of the same length are always used for SC.
For a thorough evaluation, we apply the SM technique in question to our representation, MFCCs,
and a representation (i.e., the better one of those yielded by two layers) learned by the CDBN [9]
on all 10 sessions in NKING, and name them SM-RSDN, SM-MFCC and SM-CDBN hereinafter. In
addition, we also compare them to GMMs trained on MFCCs (GMM-MFCC), a state-of-the-art SM
technique that provides the baseline performance [4],[20], where for each speaker a GMM-based
SM consisting of 32 Gaussian components is trained on his/her utterances of 60 sec in sessions 1-2
with the EM algorithm [18]. For the CDBN learning [9] and the GMM training [18], we strictly
follow their suggested parameter settings in our experiments (see [9],[18] for details).
[Figure 3 shows six DET curves (Miss Probability vs. False Alarm Probability, each axis from 0.1 to 1), every panel comparing SM-MFCC, GMM-MFCC, SM-CDBN and SM-RSDN.]
Figure 3: Performance of speaker comparison (DET) in the within-divide (upper row) and the cross-divide (lower row) experiments for different segment lengths. (a) 1 sec. (b) 3 sec. (c) 5 sec.
Table 1: Performance (mean±std)% of speaker segmentation on TIMIT and CHN audio streams.

                TIMIT Audio Stream                    CHN Audio Stream
Index   BIC-MFCC  Dist-MFCC  Dist-RSDN     BIC-MFCC  Dist-MFCC  Dist-RSDN
FAR      26±09     22±11      18±11         46±04     27±11      24±11
MDR      26±14     22±12      18±10         46±10     27±17      24±17
F1       67±12     74±11      79±09         44±08     68±17      72±17
We use Detection Error Trade-off (DET) curves as the performance index in SC. From Figure 3, it is
evident that SM-RSDN outperforms SM-MFCC, SM-CDBN and GMM-MFCC, a baseline system
trained on much longer utterances, as it always yields a smaller operating region (i.e., the region of all possible
errors) in all the settings. In contrast, SM-MFCC performs better in the within-divide settings while
SM-CDBN is always inferior to the baseline system. Relevant issues will be discussed later on.
4.3 Speaker Segmentation
Speaker segmentation (SS) is a task of detecting speaker change points in an audio stream to split
it into acoustically homogeneous segments so that every segment contains only one speaker [23].
Following the same protocol used in previous work [23], we utilize utterances in TIMIT and CHN
corpora to simulate audio conversations. As a result, we randomly select 250 speakers from TIMIT
to create 25 audio streams where the duration of speakers ranges from 1.6 to 7.0 sec and 50 speakers
from CHN to create 15 audio streams where the duration of speakers is from 3.0 to 8.3 sec. In the
absence of prior knowledge, the distance-based and the BIC techniques are two main approaches
to SS [23]. In our simulations, we apply the distance-based method [23] to our representation and
MFCCs, dubbed Dist-RSDN and Dist-MFCC, where the same parameters, including sliding window
of 1.5 sec and tolerance level of 0.5 sec, are used. In addition, we also apply the BIC method [23] to
MFCCs (BIC-MFCC). Note that the BIC method is inapplicable to our representation: it uses
only covariance information, and the high dimensionality of our representation together with the use of a small
sliding window in the BIC results in unstable performance, as pointed out earlier in this section.
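A simplified sketch of a distance-based detector in this spirit, scoring each candidate point by the mean/covariance distance between two adjacent 1.5 s windows (the peak-picking rule and the synthetic two-speaker stream are our simplifications, not the method of [23]):

```python
import numpy as np

def detect_changes(frames, fs_frames=100, win_s=1.5):
    """Distance-based change detection: slide two adjacent windows over
    the stream, score each point by the distance between the windows'
    {mean, covariance} models, and keep local maxima above the mean score."""
    win = int(win_s * fs_frames)

    def model(X):
        return X.mean(axis=0), np.cov(X, rowvar=False)

    def dist(m1, m2):
        d = m1[0] - m2[0]
        return float(np.trace((np.linalg.inv(m1[1]) + np.linalg.inv(m2[1]))
                              @ np.outer(d, d)))

    scores = np.array([dist(model(frames[t - win:t]), model(frames[t:t + win]))
                       for t in range(win, len(frames) - win)])
    thr = scores.mean()
    peaks = [t for t in range(1, len(scores) - 1)
             if scores[t] > thr
             and scores[t] >= scores[t - 1] and scores[t] >= scores[t + 1]]
    return [p + win for p in peaks], scores

# synthetic two-speaker stream: mean shift at frame 300
rng = np.random.default_rng(5)
stream = np.vstack([rng.normal(0, 1, size=(300, 4)),
                    rng.normal(2, 1, size=(300, 4))])
changes, scores = detect_changes(stream)
```

On this toy stream the score curve peaks near the true boundary at frame 300.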
For evaluation, we use three common indexes [23], i.e., False Alarm Rate (FAR), Miss Detection
Rate (MDR) and F1 measure defined based on both precision and recall rates. Moreover, we only
report results where FAR equals MDR to avoid addressing decision-making issues [23]. Table 1 tabulates
the SS performance: the results of our representation (Dist-RSDN) are superior to those of MFCCs
regardless of the SS method and the corpus used for creating the audio streams in our simulations.
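For reference, FAR, MDR and F1 can be computed from detected and reference change points as sketched below; the greedy matching and tolerance handling are illustrative, and [23] defines the evaluation protocol in full:

```python
def segmentation_scores(detected, reference, tol=50):
    """FAR, MDR and F1 for speaker-change detection, with a tolerance (in
    frames) for matching a detected point to a reference point."""
    ref = list(reference)
    hits = 0
    for d in detected:
        for r in ref:
            if abs(d - r) <= tol:
                hits += 1
                ref.remove(r)   # each reference point may be matched once
                break
    false_alarms = len(detected) - hits
    misses = len(reference) - hits
    far = false_alarms / max(len(detected), 1)   # false alarm rate
    mdr = misses / max(len(reference), 1)        # miss detection rate
    precision = hits / max(len(detected), 1)
    recall = hits / max(len(reference), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return far, mdr, f1

# two true changes; one detection is spurious
far, mdr, f1 = segmentation_scores([100, 310, 700], [100, 300], tol=50)
```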
In summary, visualization of vowels and results in SC and SS suggest that our RSDN successfully
extracts speaker-specific information; its resultant representation can be generalized to unseen corpora during learning and is insensitive to text and languages spoken and environmental changes.
5 Discussion
As pointed out earlier, speech carries different yet mixed information and speaker-specific information is minor in comparison to predominant linguistic information. Our empirical studies suggest
that our success in extracting speaker-specific information is attributed to both unsupervised pretraining and supervised discriminative learning with a contrastive loss. In particular, the use of data
regularization in discriminative learning and distorted data in two learning phases plays a critical
role in capturing intrinsic speaker-specific characteristics and variations caused by miscellaneous
mismatches. Our results not reported here, due to limited space, indicate that without the pre-training in Sect. 3.1, a randomly initialized RSDN leads to unstable performance, often considerably
worse than that obtained with the pre-training. Without discriminative learning, a DA working
on unsupervised learning only, e.g., the CDBN [9], tends to yield a new representation that redistributes different information but does not highlight minor speaker-specific information; indeed, the CDBN trained on all 10 sessions in NKING leads to a representation that fails to yield
satisfactory SC performance on the same corpus yet works well for various audio classification tasks
[9]. If we do not use the regularization term, L_R(X_i; θ), in the loss function in Eq. (2), our RSDN
boils down to a standard Siamese architecture [10]. Our results not reported here show that such
an architecture learns a representation often overfitting to the training corpus due to interference
of predominant non-speaker related information, which is not a problem in predominant information extraction. The previous work in face recognition [11] could lend an evidence to support our
argument where a Siamese DA without regularization successfully captures predominant identity
characteristics from facial images since, we believe, facial expression and other non-identity information are minor in that situation. While the use of distorted data in pre-training is in the same spirit as
self-taught learning [24], our results including those not reported here reveal that the use of distorted
data in pre-training but not in discriminative learning yields results worse than the baseline performance in the cross-divide SC experiment. Hence, sufficient training data reflecting mismatches are
also required in discriminative learning for speaker-specific information extraction.
Our RSDN architecture resembles the one proposed in [12] for dimensionality reduction of handwritten digits via learning a nonlinear embedding. However, ours is distinguished from theirs by the
use of different building blocks in our DAs, loss functions and motivations. The DA in [12] uses
the RBM [13] as a building block to construct a deep belief subnet in their Siamese DA and the
NCA [25] as their contrastive loss function to minimize the intra-class variability. However, the
NCA does not meet our requirements as there are so many examples in one class. Instead we propose a contrastive loss to minimize both intra- and inter-class variabilities simultaneously. On the
other hand, intrinsic topological structures of a handwritten digit convey predominant information
given the fact that without using the NCA loss a deep belief autoencoder already yields a good representation [7],[12],[13],[26]. Thus, the use of the NCA in [12] simply reinforces the topological
invariance by minimizing other variabilities with a small amount of labeled data [12]. In our work,
however, speaker-specific information is non-predominant in speech and hence a large amount of labeled data reflecting miscellaneous variabilities are required during discriminative learning despite
the pre-training. Finally, our code layer yields an overcomplete representation to facilitate nonpredominant information extraction. In contrast, a parsimonious representation seems more suitable
for extracting predominant information since dimensionality reduction is likely to discover ?principal? components that often associate with predominant information, as are evident in [11],[12].
To conclude, we propose a deep neural architecture for speaker-specific information extraction and
demonstrate that its resultant speaker-specific representation outperforms the state-of-the-art techniques. It should also be stated that our work presented here is limited to speech corpora available
at present. In our ongoing work, we are employing richer training data towards learning a universal speaker-specific representation. In a broader sense, our work presented in this paper suggests
that speech information component analysis (ICA) becomes critical in various speech information
processing tasks; the use of proper speech ICA techniques would result in task-specific speech representations to improve their performance radically. Our work demonstrates that speech ICA is
feasible via learning. Moreover, deep learning could be a promising methodology for speech ICA.
Acknowledgments
Authors would like to thank H. Lee for providing their CDBN code [9] and L. Wang for offering
their SIAT Chinese speech corpus [15] to us; both of which were used in our experiments.
References
[1] Huang, X., Acero, A. & Hon, H. (2001) Spoken Language Processing. New York: Prentice Hall.
[2] Jang, G., Lee, T. & Oh, Y. (2001) Learning statistically efficient feature for speaker recognition. Proc.
ICASSP, pp. I427-I440, IEEE Press.
[3] Mammone, R., Zhang, X. & Ramachandran, R. (1996) Robust speaker recognition: a feature-based approach. IEEE Signal Processing Magazine, 13(1): 58-71.
[4] Reynold, D. & Campbell, W. (2008) Text-independent speaker recognition. In J. Benesty, M. Sondhi and
Y. Huang (Eds.), Handbook of Speech Processing, pp. 763-781, Berlin: Springer.
[5] Bengio, Y. (2009) Learning deep architectures for AI. Foundation and Trends in Machine Learning 2(1):
1-127.
[6] Hinton, G. (2007) Learning multiple layers of representation. Trends in Cognitive Science 11(10): 428-434.
[7] Larochelle, H., Bengio, Y., Louradour, J. & Lamblin, P. (2009) Exploring strategies for training deep neural
networks. Journal of Machine Learning Research 10(1): 1-40.
[8] Boureau, Y., Bach, F., LeCun, Y. & Ponce, J. (2010) Learning mid-level features for recognition. Proc.
CVPR, IEEE Press.
[9] Lee, H., Largman, Y., Pham, P. & Ng, A. (2009) Unsupervised feature learning for audio classification using
convolutional deep belief networks. In Advances in Neural Information Processing Systems 22, Cambridge,
MA: MIT Press.
[10] Bromley, J., Guyon, I., LeCun, Y., Sackinger, E. & Shah, R. (1994) Signature verification using a Siamese
time delay neural network. In Advances in Neural Information Processing Systems 5, Morgan Kaufmann.
[11] Chopra, S., Hadsell, R. & LeCun, Y. (2005) Learning a similarity metric discriminatively, with application
to face verification. In Proc. CVPR, IEEE Press.
[12] Salakhutdinov, R. & Hinton, G. (2007) Learning a non-linear embedding by preserving class neighborhood
structure. In Proc. AISTATS, Cambridge, MA: MIT Press.
[13] Hinton, G., Osindero, S. & Teh, Y. (2006) A fast learning algorithm for deep belief nets. Neural Computation 18(7): 1527-1554.
[14] Linguistic Data Consortium (LDC). [online] www.ldc.upenn.edu
[15] Wang, L. (2008) A Chinese speech corpus for speaker recognition. Tech. Report, SIAT-CAS, China.
[16] LeCun, Y., Chopra, S. Hadsell, R., Ranzato, M. & Huang, F. (2007) Energy-based models. In Predicting
Structured Outputs, pp. 191-246, Cambridge, MA: MIT Press.
[17] Vincent, P., Bengio, Y. & Manzagol, P. (2008) Extracting and composing robust features with denoising
autoencoders. Proc. ICML, pp. 1096-1102, ACM Press.
[18] Reynolds, D. (1995) Speaker Identification and verification using Gaussian mixture speaker models.
Speech Communication 17(1): 91-108.
[19] Proakis, J. (2001) Digital Communications (4th Edition). New York: McGraw-Hill.
[20] Campbell, J. (1997) Speaker recognition: A tutorial. Proceedings of The IEEE 85(10): 1437-1462.
[21] van der Maaten, L. & Hinton, G. (2008) Visualizing data using t-SNE. Journal of Machine Learning
Research 9: 2579-2605.
[22] Campbell, W. & Karam, Z. (2009) Speaker comparison with inner product discriminant functions. In
Advances in Neural Information Processing Systems 22, Cambridge, MA: MIT Press.
[23] Kotti, M., Moschou, V. & Kotropoulos, C. (2008) Speaker segmentation and clustering. Signal Processing
88(8): 1091-1124.
[24] Raina, R., Battle, A., Lee, H., Packer, B. & Ng, A. (2007) Self-taught learning: transfer learning from
unlabeled data. Proc. ICML, ACM Press.
[25] Goldberger, J., Roweis, S., Hinton, G. & Salakhutdinov, R., (2005) Neighbourhood component analysis.
In Advances in Neural Information Processing Systems 17, Cambridge, MA: MIT Press.
[26] Hinton, G. & Salakhutdinov, R. (2006) Reducing the dimensionality of data with neural networks. Science
313: 504-507.
3,661 | 4,315 | Active dendrites:
adaptation to spike-based communication
Balázs B Ujfalussy1,2
Máté Lengyel1
[email protected]
[email protected]
1
Computational & Biological Learning Lab, Dept. of Engineering, University of Cambridge, UK
2
Computational Neuroscience Group, Dept. of Biophysics, MTA KFKI RMKI, Budapest, Hungary
Abstract
Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking
inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we
address the computational challenge faced by neurons that compute and represent
analogue quantities but communicate with digital spikes, and show that reliable
computation of even purely linear functions of inputs can require the interplay of
strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to
the joint statistics of presynaptic inputs. This approach suggests normative roles
for some puzzling forms of nonlinear dendritic dynamics and plasticity.
1
Introduction
The operation of neural circuits fundamentally depends on the capacity of neurons to perform complex, nonlinear mappings from their inputs to their outputs. Since the vast majority of synaptic inputs
impinge on the dendritic membrane, its morphology, and passive as well as active electrical properties
play important roles in determining the functional capabilities of a neuron. Indeed, both theoretical
and experimental studies suggest that active, nonlinear processing in dendritic trees can significantly
enhance the repertoire of single neuron operations [1, 2].
However, previous functional approaches to dendritic processing were limited because they studied
dendritic computations in a firing rate-based framework [3, 4], essentially requiring both the inputs
and the output of a cell to have stationary firing rates for hundreds of milliseconds. Thus, they
ignored the effects and consequences of temporal variations in neural activities at the time scale
of inter-spike intervals characteristic of in vivo states [5]. Conversely, studies of spiking network
dynamics [6, 7] have ignored the complex and highly nonlinear effects of the dendritic tree.
Here we develop a computational theory that aims at explaining some of the morphological and
electrophysiological properties of dendritic trees as adaptations towards spike-based communication. In line with the vast majority of theories about neural network computations, the starting point
of our theory is that each neuron needs to compute some function of the membrane potential (or,
equivalently, the instantaneous firing rate) of its presynaptic partners. However, as the postsynaptic
neuron does not have direct access to the presynaptic membrane potentials, only to the spikes emitted by its presynaptic partners based on those potentials, computing the required function becomes a
non-trivial inference problem. That is, neurons need to perform computations on their inputs in the
face of significant uncertainty as to what those inputs exactly are, and so as to what their required
output might be.
In section 2 we formalize the problem of inferring some required output based on incomplete
spiking-based information about inputs and derive an optimal online estimator for some simple
but tractable cases. In section 3 we show that the optimal estimator exhibits highly nonlinear behavior closely matching aspects of active dendritic processing, even when the function of inputs to be
computed is purely linear. We also present predictions about how the statistics of presynaptic inputs
should be matched by the clustering patterns of synaptic inputs onto active subunits of the dendritic
tree. In section 4 we discuss our findings and ways to test our predictions experimentally.
2 Estimation from correlated spike trains
2.1 The need for nonlinear dendritic operations
Ideally, the (subthreshold) dynamics of the somatic membrane potential, v(t), should implement
some nonlinear function, f(u(t)), of the presynaptic membrane potentials, u(t):¹

τ dv(t)/dt = f(u(t)) − v(t)   (1)
However, the presynaptic membrane potentials cannot be observed directly, only the presynaptic
spike trains s0:t that are stochastic functions of the presynaptic membrane potential trajectories.
Therefore, to minimise squared error, the postsynaptic membrane potential should represent the
mean of the posterior over possible output function values it should be computing based on the input
spike trains:
τ dv(t)/dt ≃ ∫ f(u(t)) P(u(t)|s0:t) du(t) − v(t)   (2)
Biophysically, to a first approximation, the somatic membrane potential of the postsynaptic neuron
can be described as some function(al), f̃, of the local dendritic membrane potentials, v^d(t):

τ dv(t)/dt = f̃(v^d(t)) − v(t)   (3)
This is interesting because Pfister et al. [11, 12] have recently suggested that short-term synaptic
plasticity arranges for each local dendritic postsynaptic potential, v_i^d, to (approximately) represent
the posterior mean of the corresponding presynaptic membrane potential:

v_i^d(t) ≃ ∫ u_i(t) P(u_i(t)|s_i,0:t) du_i   (4)
Thus, it would be tempting to say that in order to achieve the computational goal of Eq. 2, the way
the dendritic tree (together with the soma) should integrate these local potentials, as given by f̃,
should be directly determined by the function that needs to be computed: f̃ = f. However, it is easy
to see that in general this is going to be incorrect:

f( ∫ u(t) ∏_i P(u_i(t)|s_i,0:t) du(t) ) ≠ ∫ f(u(t)) P(u(t)|s0:t) du(t)   (5)
where the l.h.s. is what the neuron implements (eqs. 3-4) and the r.h.s. is what it should compute
(eq. 2). The equality does not hold in general when f is non-linear or P(u(t)|s0:t ) does not factorise.
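The nonlinear-f branch of this statement is easy to check numerically. The sketch below is our illustration, not the paper's: a correlated bivariate Gaussian stands in for the posterior, and a simple convex nonlinearity f(u) = (u1 + u2)² stands in for the output function; it compares f applied to the posterior mean against the posterior mean of f:

```python
import random

random.seed(0)
rho = 0.8            # assumed correlation of the stand-in posterior
n_samples = 50_000

def f(u1, u2):
    # an illustrative convex nonlinearity (not from the paper)
    return (u1 + u2) ** 2

# draw from a zero-mean bivariate Gaussian with correlation rho
samples = []
for _ in range(n_samples):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    samples.append((z1, rho * z1 + (1 - rho ** 2) ** 0.5 * z2))

mean_u1 = sum(s[0] for s in samples) / n_samples
mean_u2 = sum(s[1] for s in samples) / n_samples

f_of_mean = f(mean_u1, mean_u2)                      # ~ f(0, 0) = 0
mean_of_f = sum(f(*s) for s in samples) / n_samples  # ~ Var[u1 + u2] = 2 + 2*rho
```

For a linear f the two quantities would coincide; the gap here (≈0 versus ≈3.6) is the kind of error a neuron would make by applying its output function to separately formed input estimates.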
In the following, we are going to consider the case when the function, f(u), is a purely linear
combination of synaptic inputs, f(u) = Σ_i c_i u_i. Such linear transformations seem to suggest
linear dendritic operations and, in combination with a single global "somatic" nonlinearity, they are
often assumed in neural network models and descriptive models of neuronal signal processing [10].
However, as we will show below, estimation from the spike trains of multiple correlated presynaptic
neurons requires a non-linear integration of inputs even in this case.
¹ Dynamics of this form are assumed by many neural network models, though the variables u and v are
usually interpreted as instantaneous firing rates rather than membrane potentials [10]. However, just as in our
case (Eq. 8), the two are often taken to be related through a simple non-linear function which thus makes the
two frameworks essentially isomorphic.
2.2
The mOU-NP model
We assume that the hidden dynamics of presynaptic membrane potentials are described by a multivariate Ornstein–Uhlenbeck (mOU) process (discretised in time into δt → 0 time bins, thus formally
yielding an AR(1) process):

u_t = u_{t−δt} + (1/τ)(u_0 − u_{t−δt}) δt + q_t √δt,   q_t ~ N(0, Q) i.i.d.   (6)
    = (1 − δt/τ) u_{t−δt} + q_t √δt + (δt/τ) u_0   (7)
where we described all neurons with the same parameters: u_0, the resting potential, and τ, the
membrane time constant (cf. the AR(1) coefficient 1 − δt/τ in Eq. 7). Importantly, Q is the covariance matrix parametrising
the correlations between the subthreshold membrane potential fluctuations of presynaptic neurons.
Spiking is described by a nonlinear-Poisson (NP) process where the instantaneous firing rate, r, is
an exponential function of u with exponent β and "baseline rate" g:

r(u) = g e^{βu}   (8)

and the number of spikes emitted in a time bin, s, is Poisson with this rate:

P(s|u) = Poisson(s; δt · r(u))   (9)
The spiking process itself is independent, i.e., the likelihood is factorised across cells:

P(s|u) = ∏_i P(s_i|u_i)   (10)
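For concreteness, the generative model of Eqs. 6–10 can be simulated in a few lines. The sketch below is ours, with illustrative parameter values not taken from the paper; a Bernoulli draw per bin stands in for the Poisson count of Eq. 9 at small δt, and the rate is written relative to rest, which merely absorbs a constant into g:

```python
import math
import random

random.seed(1)

# -- illustrative parameters (not taken from the paper) --
dt, tau = 0.001, 0.020        # 1 ms bins, 20 ms membrane time constant
u0 = -60.0                    # resting potential (mV)
beta, g = 1.0, 5.0            # exponent (1/mV) and rate at rest (Hz)

# 2x2 noise covariance Q with positive cross-correlation, via its Cholesky factor
q11, q22, q12 = 100.0, 100.0, 50.0
L = [[math.sqrt(q11), 0.0],
     [q12 / math.sqrt(q11), math.sqrt(q22 - q12 ** 2 / q11)]]

n_steps = 5000
u = [u0, u0]
us, spikes = [], []
for _ in range(n_steps):
    z = (random.gauss(0, 1), random.gauss(0, 1))
    q = (L[0][0] * z[0], L[1][0] * z[0] + L[1][1] * z[1])     # q ~ N(0, Q)
    # Eq. 6: AR(1) discretisation of the mOU process
    u = [u[i] + (u0 - u[i]) * dt / tau + q[i] * math.sqrt(dt) for i in range(2)]
    # Eqs. 8-9: nonlinear-Poisson spiking; using (u - u0) in the exponent is
    # equivalent to Eq. 8 with the constant exp(beta*u0) absorbed into g
    r = [g * math.exp(beta * (u[i] - u0)) for i in range(2)]
    # for small dt, a Bernoulli draw approximates the Poisson count per bin
    s = [1 if random.random() < min(r[i] * dt, 1.0) else 0 for i in range(2)]
    us.append(list(u))
    spikes.append(s)
```

The resulting membrane potentials fluctuate around rest with correlated noise, and the two spike trains inherit that correlation through the shared component of q.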
2.3
Assumed density filtering in the mOU-NP model
Our goal is to derive the time evolution of the posterior distribution of the membrane potential,
P(ut |s0:t ), given a particular spiking pattern observed. Ultimately, we will need to compute some
function of u under this distribution. For linear computations (see above), the final quantity of
interest depends on Σ_i c_i u_i which in the limit (of many presynaptic cells) is going to be Gaussian-distributed, and as such only dependent on the first two moments of the posterior. This motivates
us to perform assumed density filtering, by which we substitute the true posterior with a moment-matched multivariate Gaussian in each time step, P(u_t|s0:t) ≃ N(u_t; μ_t, Σ_t).
After some algebra (see Appendix for details) we obtain the following equations for the time evolution of the mean and covariance of the posterior under the generative process defined by Eqs. 7-10:
μ̇ = (1/τ)(u_0 − μ) + β Σ (s(t) − λ)   (11)
Σ̇ = (2/τ)(Σ_OU − Σ) − β² Σ Λ Σ   (12)
where si (t) is the spike train of presynaptic neuron i represented as a sum of Dirac-delta functions,
λ (Λ) is a vector (diagonal matrix) whose elements λ_i = Λ_ii = g e^{βμ_i + β²Σ_ii/2} are the estimated firing
rates of the neurons, and Σ_OU = Qτ/2 is the prior covariance matrix of the presynaptic membrane
potentials in the absence of any observation.
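Eqs. 11–12 translate directly into an online filter. Below is our Euler discretisation for two presynaptic neurons (plain Python; all parameter values are illustrative, and μ enters the rate relative to rest so that g absorbs a constant, cf. Eq. 8). It reproduces the key qualitative effect described next: a spike from neuron 1 also shifts the estimate for the positively correlated neuron 2, in proportion to Σ12:

```python
import math

# -- our Euler discretisation of Eqs. 11-12; all parameters are illustrative --
dt, tau = 0.001, 0.020
u0, beta, g = -60.0, 1.0, 5.0
Q = [[100.0, 50.0], [50.0, 100.0]]                  # presynaptic noise covariance
Sigma_OU = [[Q[i][j] * tau / 2 for j in range(2)] for i in range(2)]

mu = [u0, u0]
Sigma = [row[:] for row in Sigma_OU]

def adf_step(mu, Sigma, s):
    """One Euler step of Eqs. 11-12; s is the spike-count vector for this bin."""
    # lambda_i = Lambda_ii = g * exp(beta*mu_i + beta^2*Sigma_ii/2); mu enters
    # relative to rest, which absorbs a constant into g (cf. Eq. 8)
    lam = [g * math.exp(beta * (mu[i] - u0) + beta ** 2 * Sigma[i][i] / 2)
           for i in range(2)]
    innov = [s[i] - lam[i] * dt for i in range(2)]
    new_mu = [mu[i] + (u0 - mu[i]) * dt / tau
              + beta * sum(Sigma[i][j] * innov[j] for j in range(2))
              for i in range(2)]
    # Sigma-dot = (2/tau)(Sigma_OU - Sigma) - beta^2 * Sigma * Lambda * Sigma
    SLS = [[sum(Sigma[i][k] * lam[k] * Sigma[k][j] for k in range(2))
            for j in range(2)] for i in range(2)]
    new_Sigma = [[Sigma[i][j]
                  + dt * (2 / tau * (Sigma_OU[i][j] - Sigma[i][j])
                          - beta ** 2 * SLS[i][j])
                  for j in range(2)] for i in range(2)]
    return new_mu, new_Sigma

for _ in range(50):                       # 50 ms of silence
    mu, Sigma = adf_step(mu, Sigma, [0, 0])
mu2_before = mu[1]
mu, Sigma = adf_step(mu, Sigma, [1, 0])   # a single spike from neuron 1
mu2_after = mu[1]
```

With the positive prior covariance chosen here, the spike from neuron 1 raises μ2 as well, by roughly β·Σ21 — the cross-neuron effect discussed below.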
2.4
Modelling correlated up and down states
The mOU-NP process is a convenient and analytically tractable way to model correlations between
presynaptic neurons but it obviously falls short of the dynamical complexity of cortical ensembles
in many respects. Following and expanding on [12], here we considered one extension that allowed
us to model coordinated changes between more hyper- and depolarised states across presynaptic
neurons, such as those brought about by cortical up and down states.
In this extension, the "resting" potential of each presynaptic neuron, u_0, could switch between two
different values, u_up and u_down, and followed first-order Markovian dynamics. Up and down states in
cortical neurons are not independent but occur synchronously [13]. To reproduce these correlations
Figure 1: Simulation of the optimal estimator in the case of two presynaptic spikes with different
time delays (Δt). A: The posterior means (Aa), variances, Σ_ii, and the covariance, Σ_12 (Ab). The
dynamics of the postsynaptic membrane potential, v (Ad), is described by Eq. 1, where f(u) =
u_1 + u_2 (Ac). B: The same as A on an extended time scale. C: The nonlinear summation of
two EPSPs, characterised by the ratio of the actual EPSP (cyan on Ad) and the linear sum of two
individual EPSPs (grey on Ad) is shown for different delays and correlations between the presynaptic
neurons. The summation is sublinear if the presynaptic neurons are positively correlated, whereas
negative correlations imply supralinear summation.
we introduced a global, binary state variable, x that influenced the Markovian dynamics of the
resting potential of individual neurons (see Appendix and Fig. 2A). Unfortunately, an analytical
solution to the optimal estimator was out of reach in this case, so we resorted to particle filtering
[14] to compute the output of the optimal estimator.
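For the switching model there is no closed-form filter, so the posterior is computed by particle filtering [14]. The sketch below is a minimal single-neuron bootstrap filter of our own (illustrative parameters; a Bernoulli spike likelihood approximates the Poisson count at small δt): particles carry a hidden up/down state and a membrane potential, are propagated through the switching mOU dynamics, reweighted by the observed spike train, and resampled. A short burst of spikes drives the inferred up-state probability from its 0.5 prior toward 1, qualitatively as in Fig. 2B:

```python
import math
import random

random.seed(2)

# -- illustrative parameters (not taken from the paper) --
dt, tau = 0.001, 0.020
u_up, u_down = -55.0, -65.0   # resting potentials of the two hidden states (mV)
p_switch = 0.001              # per-bin probability of flipping the hidden state
q_std = math.sqrt(400.0 * dt) # process noise per bin
beta, g = 0.3, 5.0            # measuring u relative to u_down inside the
                              # exponential absorbs a constant into g (cf. Eq. 8)

def rate(u):
    return g * math.exp(beta * (u - u_down))

n_part = 2000
# half the particles start in each state, at that state's resting potential
particles = [{"up": i % 2 == 0, "u": u_up if i % 2 == 0 else u_down}
             for i in range(n_part)]

def pf_step(particles, spike):
    # 1) propagate: maybe flip the hidden state, then one Euler step of the mOU dynamics
    for p in particles:
        if random.random() < p_switch:
            p["up"] = not p["up"]
        u0 = u_up if p["up"] else u_down
        p["u"] += (u0 - p["u"]) * dt / tau + random.gauss(0.0, q_std)
    # 2) weight by the Bernoulli approximation of the spike likelihood
    w = [min(rate(p["u"]) * dt, 0.999) for p in particles]
    w = [x if spike else 1.0 - x for x in w]
    # 3) multinomial resampling (copy so resampled particles stay independent)
    return [dict(p) for p in random.choices(particles, weights=w, k=len(particles))]

p_up_prior = sum(p["up"] for p in particles) / n_part
for spike in [0] * 20 + [1, 0, 1, 0, 1]:   # 20 silent ms, then a burst of 3 spikes
    particles = pf_step(particles, spike)
p_up_post = sum(p["up"] for p in particles) / n_part
```

The silent prefix slightly favours the down state; the burst then strongly favours up particles, whose higher firing rate makes each spike roughly 20 times more likely under this parameterisation.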
3 Nonlinear dendrites as near-optimal estimators
3.1 Correlated Ornstein–Uhlenbeck process
First, we analysed the estimation problem in case of mOU dynamics where we could derive an optimal estimator for the membrane potential. Postsynaptic dynamics needed to follow the linear sum
of presynaptic membrane potentials. Figure 1 shows the optimal postsynaptic response (Eqs. 11-12)
after observing a pair of spikes from two correlated presynaptic neurons with different time delays.
When one of the cells (black) emits a spike, this causes an instantaneous increase not only in the
membrane potential estimate of the neuron itself but also in those of all correlated neurons (red neuron in Fig. 1Aa and Ba). Consequently, the estimated firing rate, λ, of both cells increases. Albeit
indirectly, a spike also influences the uncertainty about the presynaptic membrane potentials ? quantified by the posterior covariance matrix. A spike itself does not change this covariance directly, but
since it increases estimated firing rates, the absence of even more spikes in the subsequent period
becomes more informative. This increased information rate following a spike decreases estimator
uncertainty about true membrane potential values for a short period (Fig. 1Ab and Bb). However, as
the estimated firing rate decreases back to its resting value nearly exponentially after the spike, the
estimated uncertainty also returns back to its steady state.
Importantly, the instantaneous increase of the posterior means in response to a spike is proportional
to the estimated uncertainty about the membrane potentials and to the estimator?s current belief
about the correlations between the neurons. As each spike influences not only the mean estimate
of the membrane potentials of other correlated neurons but also the uncertainty of these estimates,
the effect of a spike from one cell on the posterior mean depends on the spiking history of all other
correlated neurons (Fig. 1Ac-Ad).
In the example shown in Fig. 1, the postsynaptic dynamics is required to compute a purely linear
sum of two presynaptic membrane potentials, f (u) = u1 + u2 . However, depending on the prior
correlation between the two presynaptic neurons and the time delay between the two spikes, the
amplitude of the postsynaptic membrane potential change evoked by the pair of spikes can be either
larger or smaller than the linear sum of the individual excitatory postsynaptic potentials (EPSPs)
(Fig. 1Ad, C). EPSPs from independent neurons are additive, but if the presynaptic neurons are positively correlated then their spikes convey redundant information and they are integrated sublinearly.
Conversely, simultaneous spikes from negatively correlated presynaptic neurons are largely unexpected and induce supralinear summation. The deviation from the linear summation is proportional
to the magnitude of the correlation between the presynaptic neurons (Fig. 1C).
We compared the nonlinear integration of the inputs in the optimal estimator with experiments measuring synaptic integration in the dendritic tree of neurons. For a passive membrane, cable theory
[15] implies that inputs are integrated linearly only if they are on electronically separated dendritic
branches, but reduction of the driving force entails a sublinear interaction between co-localised inputs. Moreover, it has been found that active currents, the IA potassium current in particular, also
contribute to the sublinear integration within the dendritic tree [16, 17]. Our model predicts that
inputs that are integrated sublinearly are positively correlated (Fig. 1C).
In sum, we can already see that correlated inputs imply nonlinear integration in the postsynaptic
neuron, and that the form of nonlinearity needs to be matched to the degree and sign of correlations between inputs. However, the finding that supralinear interactions are only expected from
anticorrelated inputs defeats biological intuition. Another shortcoming of the mOU model is related
to the second-order effects of spikes on the posterior covariance. As the covariance matrix does not
change instantaneously after observing a presynaptic spike (Fig. 1B), two spikes arriving simultaneously are summed linearly (not shown). At the other extreme, two spikes separated by long delays
again do not influence each other. Therefore the nonlinearity of the integration of two spikes has a
non-monotonic shape, which again is unlike the monotonic dependence of the degree of nonlinearity
on interspike intervals found in experiments [18, 19]. In order to overcome these limitations, we extended the model to incorporate correlated changes in the activity levels of presynaptic neurons [13].
3.2
Correlated up and down states
While the statistics of presynaptic membrane potentials exhibit more complex temporal dependencies in the extended model (Fig. 2A), importantly, the task is still assumed to be the same simple
linear computation as before: f (u) = u1 + u2 .
However, the more complex P(u) distribution means that we need to sum over the possible values
of the hidden variables: P(u) = Σ_{u_0} P(u|u_0) P(u_0). The observation of a spike changes both
the conditional distributions, P(u|u0 ), and the probability of being in the up state, P(u0 = uup ),
by causing an upward shift in both. A second spike causes a further increase in the membrane
potential estimate, and, more importantly, in the probability of being in the up state for both neurons.
Since the probability of leaving the up state is low, the membrane potential estimate decays back
to its steady state more slowly if the probability of being in the up state is high (Fig. 2B). This
causes a supralinear increase in the membrane potential of the postsynaptic neuron which again
depends on the interspike interval, but this time supralinearity is predicted for positively correlated
presynaptic neurons (Fig. 2C,E). Note, that while in the mOU model, supralinear integration arises
due to dynamical changes in uncertainty (of membrane potential estimates), in the extended model
it is associated with a change in a hypothesis (about hidden up-down states).
This is qualitatively similar to what was found in pyramidal neurons in the neocortex [19] and in the
hippocampus [18, 20] that are able to switch from (sub)linear to supralinear integration of synaptic
inputs through the generation of dendritic spikes [21]. Specifically, in neocortical pyramidal neurons
Polsky et al. [19] found, that nearly synchronous inputs arriving to the same dendritic branch evoke
substantially larger postsynaptic responses than expected from the linear sum of the individual responses (Fig. 2D-E). While there is a good qualitative match between model and experiments, the
time scales of integration are off by a factor of 2. Neverthless, given that we did not perform exhaustive parameter fitting in our model, just simply set parameters to values that produced realistic
presynaptic membrane potential trajectories (cf. our Fig. 2A with [13]), we regard the match acceptable and are confident that with further fine tuning of parameters the match would also improve
quantitatively.
Figure 2: A: Example voltage traces and spikes from the modeled presynaptic neurons (black and
red) with correlated up and down states. The green line indicates the value of the global up-down
state variable. B: Inference in the model: the posterior probability of being in the up state (left)
and the posterior mean of Σ_i u_i after observing two spikes (grey) from different neurons with
Δt = 8 ms latency. C: Supralinear summation in the switching mOU-NP model. D: Supralinear
summation by dendritic spikes in a cortical pyramidal neuron. E: Peak amplitude of the response
(red) and the linear sum (black squares) is shown for different delays in experiments (left) and the
model (right). (D and left panel in E are reproduced from [19]).
3.3
Nonlinear dendritic trees are necessary for purely linear computations
In the previous sections we demonstrated that optimal inference based on correlated spike trains
requires nonlinear interaction within the postsynaptic neuron, and we showed that the dynamics of
the optimal estimator is qualitatively similar to the dynamics of the somatic membrane potential of
a postsynaptic neuron with nonlinear dendritic processing. In this section we will build a simplified
model of dendritic signal processing and compare its performance directly to several alternative
models (see below) on a purely linear task, for which the neuron needs to compute the sum of 10
presynaptic membrane potentials: f(u) = Σ_{i=1}^{10} u_i.
We model the dendritic estimator as a two-layer feed-forward network of simple units (Fig. 3A)
that has been proposed to closely mimic the repertoire of input-output transformations achievable
by active dendritic trees [22]. In this model, synaptic inputs impinge on units in the first layer,
corresponding to dendritic branches, where nonlinear integration of inputs arriving to a dendritic
branch is modeled by a sigmoidal input-output function, and the outputs of dendritic branch units
are in turn summed linearly in the single (somatic) unit of the second layer. We trained the model to
estimate f by changing the connection weights of the two layers corresponding to synaptic weights
(w_ji) and branch coupling strengths (c̃_j; see Appendix, Fig. 3A).
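A minimal version of this two-layer model is easy to state in code. The sketch below is our illustration (logistic branch nonlinearity, hand-picked weights, all values ours): it also shows how the same branch sigmoid yields sublinear summation when co-located inputs land on its saturating part and supralinear summation when they land below threshold:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dendritic_unit(x, w, c, theta):
    """Two-layer model: branch j computes sigmoid(sum_i w[j][i]*x[i] - theta[j]);
    the soma sums branch outputs with coupling strengths c[j]."""
    return sum(c[j] * sigmoid(sum(w[j][i] * x[i] for i in range(len(x))) - theta[j])
               for j in range(len(w)))

# one branch, two synapses of weight 1, somatic coupling 1
w, c = [[1.0, 1.0]], [1.0]

def response(x, theta):
    # response relative to baseline (no input)
    return dendritic_unit(x, w, c, theta) - dendritic_unit([0, 0], w, c, theta)

# operating above the sigmoid's foot (theta = 0): saturation -> sublinear summation
sub_pair = response([2, 2], [0.0])
sub_sum = response([2, 0], [0.0]) + response([0, 2], [0.0])

# operating below threshold (theta = 4): expansive region -> supralinear summation
supra_pair = response([2, 2], [4.0])
supra_sum = response([2, 0], [4.0]) + response([0, 2], [4.0])
```

Here `sub_pair < sub_sum` (saturating regime) while `supra_pair > supra_sum` (expansive regime), the two behaviours the dendritic estimator can mix across branches.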
We compared the performance of the dendritic estimator to four alternative models (Figure 3B):
1. The linear estimator, which is similar to the dendritic estimator except that the dendrites are
linear.
2. The independent estimator, in which the individual synapses are independently optimal estimators of the corresponding presynaptic membrane potentials (Eq. 4) [11, 12], and the cell
combines these estimates linearly. Note that the only difference between the independent estimator and the optimal estimator is the assumption implicit to the former that presynaptic cells
are independent.
3. The scaled independent estimator still combines the synaptic potentials linearly, but the weights
of each synapse are rescaled to partially correct for the wrong assumption of independence.
4. Finally, the optimal estimator is represented by the differential equations 11-12.
The performance of the different estimators was quantified by the estimation error normalised by
the variance of the signal, ⟨(Σ_i u_i − v̂_estimator)²⟩ / Var[Σ_i u_i]. Figure 3C shows the estimation error of the five different models in the case of 10 uniformly correlated presynaptic neurons. If the presynaptic neurons
Figure 3: Performance of 5 different estimators are compared in the task of estimating f(u) =
Σ_{i=1}^N u_i. A: Model of the dendritic estimator. B: Different estimators (see text for more details).
C: Estimation error, normalised with the variance of the signal. The number of presynaptic neurons
were N = 10. Error bars show standard deviations.
were independent, all three estimators that used dynamical synapses (v̂_ind, v̂_sind and v̂_opt) were optimal, whereas the linear estimator had substantially larger error. Interestingly, the performance of
the dendritic estimator (yellow) was nearly optimal even if the individual synapses were not optimal estimators for the corresponding presynaptic membrane potentials. In fact, adding depressing
synapses to the dendritic model degraded its performance because the sublinear effect introduced
by the saturation of the sigmoidal dendritic nonlinearity interfered with that implied by synaptic
depression. When the correlation increased between the presynaptic neurons, the performance of
the estimators assuming independence (black and orange) became severely suboptimal, whereas the
dendritic estimator (yellow) remained closer to optimal.
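The error measure of Fig. 3C is the mean squared estimation error divided by the variance of the signal; the sketch below (ours) makes the normalisation explicit — a perfect estimator scores 0, while always predicting the signal mean scores exactly 1:

```python
def normalized_error(target, estimate):
    """<(target - estimate)^2> / Var[target], the measure plotted in Fig. 3C."""
    n = len(target)
    mse = sum((t - e) ** 2 for t, e in zip(target, estimate)) / n
    mean_t = sum(target) / n
    var_t = sum((t - mean_t) ** 2 for t in target) / n
    return mse / var_t

signal = [1.0, 2.0, 3.0, 4.0]
```

On this toy signal, a constant offset of 1 gives an error of 1/Var = 0.8, and predicting the mean gives exactly 1.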
Finally, in order to investigate the synaptic mechanisms underlying the remarkably high performance
of the dendritic estimator, we trained a dendritic estimator on a task where the presynaptic neurons
formed two groups. Neurons from different groups were independent or negatively correlated with
each other, cor(u_i, u_k) = {−0.6, −0.3, 0}, while there were positive correlations between neurons from the same group, cor(u_i, u_j) = {0.3, 0.6, 0.9} (Fig. 4A). The postsynaptic neuron had
two dendritic branches, each of them receiving input from each presynaptic neurons initially. After
tuning synaptic weights and branch coupling strengths to minimize estimation error, and pruning
synapses with weights below threshold, the model achieved near-optimal performance as before
(Fig. 4C). More importantly, we found that the structure of the presynaptic correlations was reflected in the synaptic connection patterns on the dendritic branches: most neurons developed stable
synaptic weights only on one of the two dendritic branches, and synapses originating from neurons
within the same group tended to cluster on the same branch (Fig. 4B).
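The clustering result can be phrased as a simple rule of thumb: a synapse joins a branch only if its presynaptic neuron is positively correlated with the branch's existing inputs. The sketch below is our illustration of that rule (not the paper's error-minimisation procedure): it builds a two-group correlation matrix like Fig. 4A and recovers the group structure as branch assignments:

```python
def block_correlation(n_per_group, within, between):
    """Correlation matrix for two groups, as in Fig. 4A."""
    n = 2 * n_per_group
    def same(i, j):
        return (i < n_per_group) == (j < n_per_group)
    return [[1.0 if i == j else (within if same(i, j) else between)
             for j in range(n)] for i in range(n)]

def assign_branches(corr):
    """Greedy assignment: join a branch only if positively correlated with all
    of its current members; otherwise open a new branch."""
    branches = []
    for i in range(len(corr)):
        for b in branches:
            if all(corr[i][j] > 0 for j in b):
                b.append(i)
                break
        else:
            branches.append([i])
    return branches

corr = block_correlation(4, within=0.6, between=-0.3)
branches = assign_branches(corr)   # neurons 0-3 and 4-7 end up on separate branches
```

With two positively correlated groups that are negatively correlated with each other, the rule reproduces the one-group-per-branch pattern of Fig. 4B.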
4
Discussion
In the present paper we introduced a normative framework to describe single neuron computation
that sheds new light on nonlinear dendritic information processing. Following [12], we observe that
spike-based communication causes information loss in the nervous system, and neurons must infer
the variables relevant for the computation [23?25]. As a consequence of this spiking bottleneck,
signal processing in single neurons can be conceptually divided into two parts: the inference of
the relevant variables and the computation itself. When the presynaptic neurons are independent
then synapses with short term plasticity can optimally solve the inference problem [12] and nonlinear processing in the dendrites is only for computation. However, neurons in a population are
often tend to be correlated [5, 13] and so the postsynaptic neuron should combine spike trains from
such correlated neurons in order to find the optimal estimate of its output. We demonstrated that
the solution of this inference problem requires nonlinear interaction between synaptic inputs in the
Figure 4: Synaptic connectivity reflects the correlation structure of the input. A: The presynaptic
covariance matrix is block-diagonal, with two groups (neurons 1–4 and 5–8). Initially, each presynaptic neuron innervates both dendritic branches, and the weights, w, of the static synapses are then
tuned to minimize estimation error. B: Synaptic weights after training, and pruning the weakest
synapses. Columns corresponds to solutions of the error-minimization task with different presynaptic correlations and/or initial conditions, and rows are different synapses. The detailed connectivity
patterns differ across solutions, but neurons from the same group usually all innervate the same dendritic branch. Below: fraction of neurons in each solution innervating 0, 1 or 2 branches. The height
of the yellow (blue, green) bar indicates the proportion of presynaptic neurons innervating two (one,
zero, respectively) branches of the postsynaptic neuron. C: After training, the nonlinear dendritic
estimator performs close to optimal and much better than the linear neuron.
postsynaptic cell even if the computation itself is purely linear. Of course, actual neurons are usually
faced with both problems: they will need to compute nonlinear functions of correlated inputs and
thus their nonlinearities will serve both estimation and computation. In such cases our approach
allows dissecting the respective contributions of active dendritic processing towards estimation and
computation.
We demonstrated that the optimal estimator of the presynaptic membrane potentials can be closely
approximated by a nonlinear dendritic tree where the connectivity from the presynaptic cells to the
dendritic branches and the nonlinearities in the dendrites are tuned according to the dependency
structure of the input. Our theory predicts that independent neurons will innervate distant dendritic domains, whereas neurons that have correlated membrane potentials will impinge on nearby
dendritic locations, preferentially on the same dendritic branches, where synaptic integration in
nonlinear [19, 26]. More specifically, the theory predicts sublinear integration between positively
correlated neurons and supralinear integration through dendritic spiking between neurons with correlated changes in their activity levels. To directly test this prediction the membrane potentials of
several neurons need to be recorded under naturalistic in vivo conditions [5, 13] and then the subcellular topography of their connectivity with a common postsynaptic target needs to be determined.
Similar approaches have been used recently to characterize the connectivity between neurons with
different receptive field properties in vivo [27, 28].
Our model suggests that the postsynaptic neuron should store information about the dependency
structure of its presynaptic partners within its dendritic membrane. Online learning of this information based on the observed spiking patterns requires new, presumably non-associative forms of plasticity such as branch strength potentiation [29, 30] or activity-dependent structural plasticity [31].
Acknowledgments
We thank J-P Pfister for valuable insights and comments on earlier versions of the manuscript, and
P Dayan, B Gutkin, and Sz Káli for useful discussions. This work has been supported by the Hungarian Scientific Research Fund (OTKA, grant number: 84471, BU) and the Wellcome Trust (ML).
References
1. Koch, C. Biophysics of Computation (Oxford University Press, 1999).
2. Stuart, G., Spruston, N. & Hausser, M. Dendrites (Oxford University Press, 2007).
3. Poirazi, P. & Mel, B.W. Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29, 779–96 (2001).
4. Poirazi, P., Brannon, T. & Mel, B.W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–87 (2003).
5. Crochet, S., Poulet, J.F., Kremer, Y. & Petersen, C.C. Synaptic mechanisms underlying sparse coding of active touch. Neuron 69, 1160–75 (2011).
6. Maass, W. & Bishop, C. Pulsed Neural Networks (MIT Press, 1998).
7. Gerstner, W. & Kistler, W. Spiking Neuron Models (Cambridge University Press, 2002).
8. Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. Spikes (MIT Press, 1996).
9. Deneve, S. Bayesian spiking neurons I: inference. Neural Comput. 20, 91–117 (2008).
10. Dayan, P. & Abbott, L.F. Theoretical Neuroscience (The MIT Press, 2001).
11. Pfister, J., Dayan, P. & Lengyel, M. Know thy neighbour: a normative theory of synaptic depression. Adv. Neural Inf. Proc. Sys. 22, 1464–1472 (2009).
12. Pfister, J., Dayan, P. & Lengyel, M. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat. Neurosci. 13, 1271–1275 (2010).
13. Poulet, J.F. & Petersen, C.C. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 454, 881–5 (2008).
14. Doucet, A., De Freitas, N. & Gordon, N. Sequential Monte Carlo Methods in Practice (Springer, New York, 2001).
15. Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491–527 (1959).
16. Hoffman, D.A., Magee, J.C., Colbert, C.M. & Johnston, D. K+ channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. Nature 387, 869–75 (1997).
17. Cash, S. & Yuste, R. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron 22, 383–94 (1999).
18. Gasparini, S., Migliore, M. & Magee, J.C. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. J. Neurosci. 24, 11046–56 (2004).
19. Polsky, A., Mel, B.W. & Schiller, J. Computational subunits in thin dendrites of pyramidal cells. Nat. Neurosci. 7, 621–7 (2004).
20. Margulis, M. & Tang, C.M. Temporal integration can readily switch between sublinear and supralinear summation. J. Neurophysiol. 79, 2809–13 (1998).
21. Hausser, M., Spruston, N. & Stuart, G.J. Diversity and dynamics of dendritic signaling. Science 290, 739–44 (2000).
22. Poirazi, P., Brannon, T. & Mel, B.W. Pyramidal neuron as two-layer neural network. Neuron 37, 989–99 (2003).
23. Huys, Q.J., Zemel, R.S., Natarajan, R. & Dayan, P. Fast population coding. Neural Comput. 19, 404–41 (2007).
24. Natarajan, R., Huys, Q.J.M., Dayan, P. & Zemel, R.S. Encoding and decoding spikes for dynamic stimuli. Neural Computation 20, 2325–2360 (2008).
25. Gerwinn, S., Macke, J. & Bethge, M. Bayesian population decoding with spiking neurons. Frontiers in Computational Neuroscience 3 (2009).
26. Losonczy, A. & Magee, J.C. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 50, 291–307 (2006).
27. Bock, D.D. et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–82 (2011).
28. Ko, H. et al. Functional specificity of local synaptic connections in neocortical networks. Nature (2011).
29. Losonczy, A., Makara, J.K. & Magee, J.C. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature 452, 436–41 (2008).
30. Makara, J.K., Losonczy, A., Wen, Q. & Magee, J.C. Experience-dependent compartmentalized dendritic plasticity in rat hippocampal CA1 pyramidal neurons. Nat. Neurosci. 12, 1485–7 (2009).
31. Butz, M., Worgotter, F. & van Ooyen, A. Activity-dependent structural plasticity. Brain Res. Rev. 60, 287–305 (2009).
9
Non-Asymptotic Analysis of Stochastic
Approximation Algorithms for Machine Learning
Eric Moulines
LTCI
Telecom ParisTech, Paris, France
[email protected]
Francis Bach
INRIA - Sierra Project-team
École Normale Supérieure, Paris, France
[email protected]
Abstract
We consider the minimization of a convex objective function defined on a Hilbert space,
which is only available through unbiased estimates of its gradients. This problem includes standard machine learning algorithms such as kernel logistic regression and
least-squares regression, and is commonly referred to as a stochastic approximation
problem in the operations research community. We provide a non-asymptotic analysis of the convergence of two well-known algorithms, stochastic gradient descent
(a.k.a. Robbins-Monro algorithm) as well as a simple modification where iterates are
averaged (a.k.a. Polyak-Ruppert averaging). Our analysis suggests that a learning rate
proportional to the inverse of the number of iterations, while leading to the optimal convergence rate in the strongly convex case, is not robust to the lack of strong convexity or
the setting of the proportionality constant. This situation is remedied when using slower
decays together with averaging, robustly leading to the optimal rate of convergence. We
illustrate our theoretical results with simulations on synthetic and standard datasets.
1 Introduction
The minimization of an objective function which is only available through unbiased estimates of
the function values or its gradients is a key methodological problem in many disciplines. Its analysis has been attacked mainly in three communities: stochastic approximation [1, 2, 3, 4, 5, 6],
optimization [7, 8], and machine learning [9, 10, 11, 12, 13, 14, 15]. The main algorithms which
have emerged are stochastic gradient descent (a.k.a. Robbins-Monro algorithm), as well as a simple
modification where iterates are averaged (a.k.a. Polyak-Ruppert averaging).
Traditional results from stochastic approximation rely on strong convexity and asymptotic analysis,
but have made clear that a learning rate proportional to the inverse of the number of iterations, while
leading to the optimal convergence rate in the strongly convex case, is not robust to the wrong setting
of the proportionality constant. On the other hand, using slower decays together with averaging
robustly leads to optimal convergence behavior (both in terms of rates and constants) [4, 5].
The analysis from the convex optimization and machine learning literatures however has focused
on differences between strongly convex and non-strongly convex objectives, with learning rates and
roles of averaging being different in these two cases [11, 12, 13, 14, 15].
A key desirable behavior of an optimization method is to be adaptive to the hardness of the problem,
and thus one would like a single algorithm to work in all situations, favorable ones such as strongly
convex functions and unfavorable ones such as non-strongly convex functions. In this paper, we
unify the two types of analysis and show that (1) a learning rate proportional to the inverse of the
number of iterations is not suitable because it is not robust to the setting of the proportionality
constant and the lack of strong convexity, (2) the use of averaging with slower decays allows (close
to) optimal rates in all situations.
More precisely, we make the following contributions:
• We provide a direct non-asymptotic analysis of stochastic gradient descent in a machine learning context (observations of real random functions defined on a Hilbert space) that includes
kernel least-squares regression and logistic regression (see Section 2), with strong convexity
assumptions (Section 3) and without (Section 4).
• We provide a non-asymptotic analysis of Polyak-Ruppert averaging [4, 5], with and without
strong convexity (Sections 3.3 and 4.2). In particular, we show that slower decays of the
learning rate, together with averaging, are crucial to robustly obtain fast convergence rates.
• We illustrate our theoretical results through experiments on synthetic and non-synthetic examples in Section 5.
Notation. We consider a Hilbert space $\mathcal{H}$ with a scalar product $\langle \cdot, \cdot \rangle$. We denote by $\|\cdot\|$ the associated norm and use the same notation for the operator norm on bounded linear operators from $\mathcal{H}$ to $\mathcal{H}$, defined as $\|A\| = \sup_{\|x\| \le 1} \|Ax\|$ (if $\mathcal{H}$ is a Euclidean space, then $\|A\|$ is the largest singular value of $A$). We also use the notation "w.p.1" to mean "with probability one". We denote by $\mathbb{E}$ the expectation or conditional expectation with respect to the underlying probability space.
2 Problem set-up
We consider a sequence of convex differentiable random functions $(f_n)_{n \ge 1}$ from $\mathcal{H}$ to $\mathbb{R}$, and the following recursion, starting from $\theta_0 \in \mathcal{H}$:
$$\forall n \ge 1, \quad \theta_n = \theta_{n-1} - \gamma_n f_n'(\theta_{n-1}), \qquad (1)$$
where $(\gamma_n)_{n \ge 1}$ is a deterministic sequence of positive scalars, which we refer to as the learning rate sequence. The function $f_n$ is assumed to be differentiable (see, e.g., [16] for definitions and properties of differentiability for functions defined on Hilbert spaces), and its gradient is an unbiased estimate of the gradient of a certain function $f$ we wish to minimize:

(H1) Let $(\mathcal{F}_n)_{n \ge 0}$ be an increasing family of $\sigma$-fields. $\theta_0$ is $\mathcal{F}_0$-measurable, and for each $\theta \in \mathcal{H}$, the random variable $f_n'(\theta)$ is square-integrable, $\mathcal{F}_n$-measurable and
$$\forall \theta \in \mathcal{H}, \; \forall n \ge 1, \quad \mathbb{E}\big(f_n'(\theta) \mid \mathcal{F}_{n-1}\big) = f'(\theta), \quad \text{w.p.1.} \qquad (2)$$
For an introduction to martingales, $\sigma$-fields, and conditional expectations, see, e.g., [17]. Note that depending on whether $\mathcal{F}_0$ is a trivial $\sigma$-field or not, $\theta_0$ may be random or not. Moreover, we could restrict Eq. (2) to be satisfied only for $\theta_{n-1}$ and $\theta^*$ (which is a global minimizer of $f$).
Given only the noisy gradients $f_n'(\theta_{n-1})$, the goal of stochastic approximation is to minimize the function $f$ with respect to $\theta$. Our assumptions include two usual situations, but also include many others (e.g., potentially, active learning):
• Stochastic approximation: in the so-called Robbins-Monro setting, for all $\theta \in \mathcal{H}$ and $n \ge 1$, $f_n(\theta)$ may be expressed as $f_n(\theta) = f(\theta) + \langle \varepsilon_n, \theta \rangle$, where $(\varepsilon_n)_{n \ge 1}$ is a square-integrable martingale difference (i.e., such that $\mathbb{E}(\varepsilon_n \mid \mathcal{F}_{n-1}) = 0$), which corresponds to a noisy observation $f'(\theta_{n-1}) + \varepsilon_n$ of the gradient $f'(\theta_{n-1})$.
• Learning from i.i.d. observations: for all $\theta \in \mathcal{H}$ and $n \ge 1$, $f_n(\theta) = \ell(\theta, z_n)$ where $z_n$ is an i.i.d. sequence of observations in a measurable space $\mathcal{Z}$ and $\ell : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}$ is a loss function. Then $f(\theta)$ is the generalization error of the predictor defined by $\theta$. Classical examples are least-squares or logistic regression (linear or non-linear through kernel methods [18, 19]), where $f_n(\theta) = \frac{1}{2}(\langle x_n, \theta \rangle - y_n)^2$, or $f_n(\theta) = \log[1 + \exp(-y_n \langle x_n, \theta \rangle)]$, for $x_n \in \mathcal{H}$, and $y_n \in \mathbb{R}$ (or $\{-1, 1\}$ for logistic regression).
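To make the i.i.d.-observations setting concrete, here is a minimal Python sketch of the recursion in Eq. (1) for the logistic loss, with learning rate $\gamma_n = Cn^{-\alpha}$. The synthetic data, the seed, and the particular constants C and alpha are illustrative assumptions of ours, not settings prescribed by the paper.

```python
import numpy as np

def sgd_logistic(X, y, C=1.0, alpha=0.5):
    """Run theta_n = theta_{n-1} - gamma_n * grad f_n(theta_{n-1}) with
    gamma_n = C * n**(-alpha) on f_n(theta) = log(1 + exp(-y_n <x_n, theta>))."""
    n_samples, d = X.shape
    theta = np.zeros(d)
    for n in range(1, n_samples + 1):
        x_n, y_n = X[n - 1], y[n - 1]
        gamma_n = C * n ** (-alpha)
        # gradient of the logistic loss at theta for the observation (x_n, y_n)
        grad = -y_n * x_n / (1.0 + np.exp(y_n * x_n.dot(theta)))
        theta -= gamma_n * grad
    return theta

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0])  # ground-truth direction (illustrative)
X = rng.normal(size=(5000, 2))
y = np.where(X.dot(theta_star) + 0.1 * rng.normal(size=5000) > 0, 1.0, -1.0)
theta_hat = sgd_logistic(X, y)
```

On such near-separable data the single-pass estimate aligns with the direction of theta_star; only the direction is identifiable for the logistic loss without regularization.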
Throughout this paper, unless otherwise stated, we assume that each function $f_n$ is convex and smooth, following the traditional definition of smoothness from the optimization literature, i.e., Lipschitz-continuity of the gradients (see, e.g., [20]). However, we make two slightly different assumptions: (H2), where the function $\theta \mapsto \mathbb{E}(f_n'(\theta) \mid \mathcal{F}_{n-1})$ is Lipschitz-continuous in quadratic mean, and a strengthening of this assumption, (H2'), in which $\theta \mapsto f_n'(\theta)$ is almost surely Lipschitz-continuous.

(H2) For each $n \ge 1$, the function $f_n$ is almost surely convex, differentiable, and:
$$\forall n \ge 1, \; \forall \theta_1, \theta_2 \in \mathcal{H}, \quad \mathbb{E}\big(\|f_n'(\theta_1) - f_n'(\theta_2)\|^2 \mid \mathcal{F}_{n-1}\big) \le L^2 \|\theta_1 - \theta_2\|^2, \quad \text{w.p.1.} \qquad (3)$$

(H2') For each $n \ge 1$, the function $f_n$ is almost surely convex, differentiable with Lipschitz-continuous gradient $f_n'$, with constant $L$, that is:
$$\forall n \ge 1, \; \forall \theta_1, \theta_2 \in \mathcal{H}, \quad \|f_n'(\theta_1) - f_n'(\theta_2)\| \le L \|\theta_1 - \theta_2\|, \quad \text{w.p.1.} \qquad (4)$$
If $f_n$ is twice differentiable, this corresponds to having the operator norm of the Hessian operator of $f_n$ bounded by $L$. For least-squares or logistic regression, if we assume that $(\mathbb{E}\|x_n\|^4)^{1/4} \le R$ for all $n \in \mathbb{N}$, then we may take $L = R^2$ (or even $L = R^2/4$ for logistic regression) for assumption (H2), while for assumption (H2'), we need to have an almost sure bound $\|x_n\| \le R$.
3 Strongly convex objectives
In this section, following [21], we make the additional assumption of strong convexity of $f$, but not of all functions $f_n$ (see [20] for definitions and properties of such functions):
(H3) The function $f$ is strongly convex with respect to the norm $\|\cdot\|$, with convexity constant $\mu > 0$. That is, for all $\theta_1, \theta_2 \in \mathcal{H}$, $f(\theta_1) \ge f(\theta_2) + \langle f'(\theta_2), \theta_1 - \theta_2 \rangle + \frac{\mu}{2}\|\theta_1 - \theta_2\|^2$.
Note that (H3) simply needs to be satisfied for $\theta_2 = \theta^*$ being the unique global minimizer of $f$ (such that $f'(\theta^*) = 0$). In the context of machine learning (least-squares or logistic regression), assumption (H3) is satisfied as soon as $\frac{\mu}{2}\|\theta\|^2$ is used as an additional regularizer. For all strongly convex losses (e.g., least-squares), it is also satisfied as soon as the expectation $\mathbb{E}(x_n \otimes x_n)$ is invertible. Note that this implies that the problem is finite-dimensional; otherwise, the expectation is a compact covariance operator, and hence non-invertible (see, e.g., [22] for an introduction to covariance operators). For non-strongly convex losses such as the logistic loss, $f$ can never be strongly convex unless we restrict the domain of $\theta$ (which we do in Section 3.2). Alternatively to restricting the domain, replacing the logistic loss $u \mapsto \log(1 + e^{-u})$ by $u \mapsto \log(1 + e^{-u}) + \varepsilon u^2/2$, for some small $\varepsilon > 0$, makes it strongly convex in low-dimensional settings.

By strong convexity of $f$, if we assume (H3), then $f$ attains its global minimum at a unique vector $\theta^* \in \mathcal{H}$ such that $f'(\theta^*) = 0$. Moreover, we make the following assumption (in the context of stochastic approximation, it corresponds to $\mathbb{E}(\|\varepsilon_n\|^2 \mid \mathcal{F}_{n-1}) \le \sigma^2$):

(H4) There exists $\sigma^2 \in \mathbb{R}_+$ such that for all $n \ge 1$, $\mathbb{E}(\|f_n'(\theta^*)\|^2 \mid \mathcal{F}_{n-1}) \le \sigma^2$, w.p.1.
3.1 Stochastic gradient descent
Before stating our first theorem (see proof in [23]), we introduce the following family of functions $\varphi_\beta : \mathbb{R}_+ \setminus \{0\} \to \mathbb{R}$ given by:
$$\varphi_\beta(t) = \begin{cases} \dfrac{t^\beta - 1}{\beta} & \text{if } \beta \neq 0, \\[1mm] \log t & \text{if } \beta = 0. \end{cases}$$
The function $\beta \mapsto \varphi_\beta(t)$ is continuous for all $t > 0$. Moreover, for $\beta > 0$, $\varphi_\beta(t) < \frac{t^\beta}{\beta}$, while for $\beta < 0$, we have $\varphi_\beta(t) < -\frac{1}{\beta}$ (both with asymptotic equality when $t$ is large).
Theorem 1 (Stochastic gradient descent, strong convexity) Assume (H1, H2, H3, H4). Denote $\delta_n = \mathbb{E}\|\theta_n - \theta^*\|^2$, where $\theta_n \in \mathcal{H}$ is the $n$-th iterate of the recursion in Eq. (1), with $\gamma_n = Cn^{-\alpha}$. We have, for $\alpha \in [0, 1]$:
$$\delta_n \le \begin{cases} 2\exp\big(4L^2C^2\varphi_{1-2\alpha}(n)\big)\exp\Big(-\dfrac{\mu C}{4}\,n^{1-\alpha}\Big)\Big(\delta_0 + \dfrac{\sigma^2}{L^2}\Big) + \dfrac{4C\sigma^2}{\mu n^{\alpha}}, & \text{if } 0 \le \alpha < 1, \\[2mm] \dfrac{\exp(2L^2C^2)}{n^{\mu C}}\Big(\delta_0 + \dfrac{\sigma^2}{L^2}\Big) + 2\sigma^2C^2\,\dfrac{\varphi_{\mu C/2-1}(n)}{n^{\mu C/2}}, & \text{if } \alpha = 1. \end{cases} \qquad (5)$$
Sketch of proof. Under our assumptions, it can be shown that $(\delta_n)$ satisfies the following recursion:
$$\delta_n \le \big(1 - 2\mu\gamma_n + 2L^2\gamma_n^2\big)\,\delta_{n-1} + 2\sigma^2\gamma_n^2. \qquad (6)$$
Note that it also appears in [3, Eq. (2)] under different assumptions. Using this deterministic recursion, we then derive bounds using classical techniques from stochastic approximation [2], but in a non-asymptotic way, by deriving explicit upper bounds.
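The deterministic recursion in Eq. (6) can be iterated numerically. The sketch below uses illustrative constants mu = L = sigma = C = 1 and alpha = 1/2 (our choices, not the paper's experiments); the resulting upper bound on $\mathbb{E}\|\theta_n - \theta^*\|^2$ decays toward the asymptotic $O(n^{-\alpha})$ term of Theorem 1.

```python
def delta_recursion(mu=1.0, L=1.0, sigma=1.0, C=1.0, alpha=0.5,
                    delta0=1.0, n_steps=10000):
    """Iterate delta_n = (1 - 2*mu*gamma_n + 2*L**2*gamma_n**2) * delta_{n-1}
    + 2*sigma**2*gamma_n**2 with gamma_n = C * n**(-alpha); the sequence is an
    upper bound on E||theta_n - theta*||^2 under (H1, H2, H3, H4)."""
    history = [delta0]
    delta = delta0
    for n in range(1, n_steps + 1):
        gamma = C * n ** (-alpha)
        delta = (1 - 2 * mu * gamma + 2 * L ** 2 * gamma ** 2) * delta \
            + 2 * sigma ** 2 * gamma ** 2
        history.append(delta)
    return history

hist = delta_recursion()
```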
Related work. To the best of our knowledge, this non-asymptotic bound, which depends explicitly upon the parameters of the problem, is novel (see [1, Theorem 1, Electronic companion paper] for a simpler bound with no such explicit dependence). It shows in particular that there is convergence in quadratic mean for any $\alpha \in (0, 1]$. Previous results from the stochastic approximation literature have focused mainly on almost-sure convergence of the sequence of iterates. Almost-sure convergence requires that $\alpha > 1/2$, with counter-examples for $\alpha < 1/2$ (see, e.g., [2] and references therein).
Bound on function values. The bounds above imply a corresponding bound on function values. Indeed, under assumption (H2), it may be shown that $\mathbb{E}[f(\theta_n) - f(\theta^*)] \le \frac{L}{2}\delta_n$ (see proof in [23]).

Tightness for quadratic functions. Since the deterministic recursion in Eq. (6) is an equality for quadratic functions $f_n$, the result in Eq. (5) is optimal (up to constants). Moreover, our results are consistent with the asymptotic results from [6].

Forgetting initial conditions. Bounds depend on the initial condition $\delta_0 = \mathbb{E}\|\theta_0 - \theta^*\|^2$ and the variance $\sigma^2$ of the noise term. The initial condition is forgotten sub-exponentially fast for $\alpha \in (0, 1)$, but not for $\alpha = 1$. For $\alpha < 1$, the asymptotic term in the bound is $\frac{4C\sigma^2}{\mu n^{\alpha}}$.
Behavior for $\alpha = 1$. For $\alpha = 1$, we have $\frac{\varphi_{\mu C/2-1}(n)}{n^{\mu C/2}} \le \frac{1}{\mu C/2 - 1}\cdot\frac{1}{n}$ if $\mu C > 2$, $\frac{\varphi_{\mu C/2-1}(n)}{n^{\mu C/2}} = \frac{\log n}{n}$ if $\mu C = 2$, and $\frac{\varphi_{\mu C/2-1}(n)}{n^{\mu C/2}} \le \frac{1}{1 - \mu C/2}\cdot\frac{1}{n^{\mu C/2}}$ if $\mu C < 2$. Therefore, for $\alpha = 1$, the choice of $C$ is critical, as already noticed by [8]: too small $C$ leads to convergence at an arbitrarily slow rate of the form $n^{-\mu C/2}$, while too large $C$ leads to explosion due to the initial condition. This behavior is confirmed in simulations in Section 5.

Setting $C$ too large. There is a potentially catastrophic term when $C$ is chosen too large, i.e., $\exp\big(4L^2C^2\varphi_{1-2\alpha}(n)\big)$, which leads to an increasing bound when $n$ is small. Note that for $\alpha < 1$, this catastrophic term is in front of a sub-exponentially decaying factor, so its effect is mitigated once the term in $n^{1-\alpha}$ takes over $\varphi_{1-2\alpha}(n)$, and the transient term stops increasing. Moreover, the asymptotic term is not involved in it (which is also observed in simulations in Section 5).
Minimax rate. Note finally that the asymptotic convergence rate in $O(n^{-1})$ matches the optimal asymptotic minimax rate for stochastic approximation [24, 25]. Note that there is no explicit dependence on dimension; this dependence is implicit in the definition of the constants $\mu$ and $L$.
3.2 Bounded gradients
In some cases such as logistic regression, we also have a uniform upper bound on the gradients; i.e., we assume (note that in Theorem 2, this assumption replaces both (H2) and (H4)):
(H5) For each $n \ge 1$, almost surely, the function $f_n$ is convex, differentiable and has gradients uniformly bounded by $B$ on the ball of center $0$ and radius $D$, i.e., for all $\theta \in \mathcal{H}$ and all $n \ge 0$, $\|\theta\| \le D \Rightarrow \|f_n'(\theta)\| \le B$.
Note that no function may be strongly convex and Lipschitz-continuous (i.e., with uniformly bounded gradients) over the entire Hilbert space $\mathcal{H}$. Moreover, if (H2') is satisfied, then we may take $D = \|\theta^*\|$ and $B = LD$. The next theorem shows that with a slight modification of the recursion in Eq. (1), we get simpler bounds than the ones obtained in Theorem 1, obtaining a result which already appeared in a simplified form in [8] (see proof in [23]):
Theorem 2 (Stochastic gradient descent, strong convexity, bounded gradients) Assume (H1, H3, H5). Denote $\delta_n = \mathbb{E}\|\theta_n - \theta^*\|^2$, where $\theta_n \in \mathcal{H}$ is the $n$-th iterate of the following recursion:
$$\forall n \ge 1, \quad \theta_n = \Pi_D\big[\theta_{n-1} - \gamma_n f_n'(\theta_{n-1})\big], \qquad (7)$$
where $\Pi_D$ is the orthogonal projection operator onto the ball $\{\theta : \|\theta\| \le D\}$. Assume $\|\theta^*\| \le D$. If $\gamma_n = Cn^{-\alpha}$, we have, for $\alpha \in [0, 1]$:
$$\delta_n \le \begin{cases} \Big(\delta_0 + B^2C^2\varphi_{1-2\alpha}(n)\Big)\exp\Big(-\dfrac{\mu C}{2}\,n^{1-\alpha}\Big) + \dfrac{2B^2C^2}{\mu n^{\alpha}}, & \text{if } \alpha \in [0, 1), \\[2mm] \delta_0\,n^{-\mu C} + 2B^2C^2\,n^{-\mu C}\,\varphi_{\mu C-1}(n), & \text{if } \alpha = 1. \end{cases} \qquad (8)$$
The proof follows the same lines as for Theorem 1, but with the deterministic recursion $\delta_n \le (1 - 2\mu\gamma_n)\delta_{n-1} + B^2\gamma_n^2$. Note that we obtain the same asymptotic terms as for Theorem 1 (but $B$ replaces $\sigma$). Moreover, the bound is simpler (no explosive multiplicative factors), but it requires knowing $D$ in advance, while Theorem 1 does not. Note that because we have only assumed Lipschitz-continuity, we obtain a bound on function values of order $O(n^{-\alpha/2})$, which is sub-optimal. For bounds directly on function values, see [26].
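The projected recursion of Eq. (7) adds a single renormalization step to plain stochastic gradient descent. Below is a minimal sketch on an illustrative strongly convex quadratic (the objective, noise level and constants are our choices, not the paper's experiments).

```python
import numpy as np

def project_ball(theta, D):
    """Orthogonal projection onto the ball {theta : ||theta|| <= D}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= D else theta * (D / norm)

def projected_sgd(noisy_grad, d, D, C=2.0, alpha=1.0, n_steps=2000, seed=0):
    """theta_n = Pi_D[theta_{n-1} - gamma_n * g_n] with gamma_n = C * n**(-alpha),
    where g_n is an unbiased noisy gradient supplied by `noisy_grad`."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    for n in range(1, n_steps + 1):
        gamma = C * n ** (-alpha)
        theta = project_ball(theta - gamma * noisy_grad(theta, rng), D)
    return theta

# f(theta) = 0.5 * ||theta - theta_star||^2, so mu = 1 and mu * C = 2
theta_star = np.array([0.5, -0.5])
grad = lambda theta, rng: (theta - theta_star) + 0.1 * rng.normal(size=2)
theta_hat = projected_sgd(grad, d=2, D=1.0)
```

Since ||theta_star|| <= D here, the projection is eventually inactive and the iterate converges to theta_star.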
3.3 Polyak-Ruppert averaging
We now consider $\bar\theta_n = \frac{1}{n}\sum_{k=0}^{n-1}\theta_k$ and, following [4, 5], we make extra assumptions regarding the smoothness of each $f_n$ and the fourth-order moment of the driving noise:
(H6) For each $n \ge 1$, the function $f_n$ is almost surely twice differentiable with Lipschitz-continuous Hessian operator $f_n''$, with Lipschitz constant $M$. That is, for all $\theta_1, \theta_2 \in \mathcal{H}$ and for all $n \ge 1$, $\|f_n''(\theta_1) - f_n''(\theta_2)\| \le M\|\theta_1 - \theta_2\|$, where $\|\cdot\|$ is the operator norm.
Note that (H6) needs only to be satisfied for $\theta_2 = \theta^*$. For least-squares regression, we have $M = 0$, while for logistic regression, we have $M = R^3/4$.
(H7) There exists $\tau \in \mathbb{R}_+$ such that for each $n \ge 1$, $\mathbb{E}(\|f_n'(\theta^*)\|^4 \mid \mathcal{F}_{n-1}) \le \tau^4$ almost surely. Moreover, there exists a nonnegative self-adjoint operator $\Sigma$ such that for all $n$, $\mathbb{E}(f_n'(\theta^*) \otimes f_n'(\theta^*) \mid \mathcal{F}_{n-1}) \preccurlyeq \Sigma$ almost surely.
variance term, which will be independent of the learning rate sequence (?n ), as we now show:
Theorem 3 (Averaging, strong convexity) Assume (H1, H2?, H3, H4, H6, H7). Then, for ??n =
Pn?1
1
k=0 ?k and ? ? (0, 1), we have:
n
1/2
tr f ?? (?? )?1 ?f ?? (?? )?1
6?
M C? 2
?1?? (n)
1
? 2 1/2
?
?
Ek?n ? ? k
6
+
+
(1+(?C)1/2 )
1/2
1??/2
3/2
n
n
?C
n
2?
4LC 1/2 ?1?? (n)1/2
8A 1
? 2 1/2
+
+
+ L ?0 + 2
?
n
L
n?1/2 C
1/2
? 4
?E
k?
?
?
k
5M C 1/2 ?
0
2 3
2 2
+
A exp 24L4C 4 ?0 +
+
2?
C
?
+
8?
C
,
(9)
2n?
20C? 2
where A is a constant that depends only on ?, C, L and ?.
Sketch of proof. Following [4], we start from Eq. (1), write it as $f_n'(\theta_{n-1}) = \frac{1}{\gamma_n}(\theta_{n-1} - \theta_n)$, and notice that (a) $f_n'(\theta_{n-1}) \approx f_n'(\theta^*) + f''(\theta^*)(\theta_{n-1} - \theta^*)$, (b) $f_n'(\theta^*)$ has zero mean and behaves like an i.i.d. sequence, and (c) $\frac{1}{n}\sum_{k=1}^n \frac{1}{\gamma_k}(\theta_{k-1} - \theta_k)$ turns out to be negligible owing to a summation by parts and to the bound obtained in Theorem 1. This implies that $\bar\theta_n - \theta^*$ behaves like $-\frac{1}{n}\sum_{k=1}^n f''(\theta^*)^{-1} f_k'(\theta^*)$. Note that we obtain a bound on the root mean square error.
Forgetting initial conditions. There is no sub-exponential forgetting of initial conditions, but rather a decay at rate $O(n^{-2})$ (the last terms in Eq. (9)). This is a known problem which may slow down the convergence, a common practice being to start averaging after a certain number of iterations [2]. Moreover, the constant $A$ may be large when $LC$ is large, thus the catastrophic terms are more problematic than for stochastic gradient descent, because they do not appear in front of sub-exponentially decaying terms (see [23]). This suggests taking $CL$ small.

Asymptotically leading term. When $M > 0$ and $\alpha \ge 1/2$, the asymptotic term for $\mathbb{E}\|\bar\theta_n - \theta^*\|^2$ is independent of $(\gamma_n)$ and of order $O(n^{-1})$. Thus, averaging allows us to get from the slow rate $O(n^{-\alpha})$ to the optimal rate $O(n^{-1})$. The next two leading terms have order $O(n^{\alpha-2})$ and $O(n^{-2\alpha})$, suggesting the setting $\alpha = 2/3$ to make them equal. When $M = 0$ (quadratic functions), the leading term has rate $O(n^{-1})$ for all $\alpha \in (0, 1)$ (with then an additional contribution from the $O(n^{\alpha-2})$ term).

Case $\alpha = 1$. We get a simpler bound by directly averaging the bound in Theorem 1, which leads to an unchanged rate of $n^{-1}$, i.e., averaging is not key for $\alpha = 1$, and does not solve the robustness problem related to the choice of $C$ or the lack of strong convexity.

Leading term independent of $(\gamma_n)$. The term in $O(n^{-1})$ does not depend on $\gamma_n$. Moreover, as noticed in the stochastic approximation literature [4], in the context of learning from i.i.d. observations, this is exactly the Cramér-Rao bound (see, e.g., [27]), and thus the leading term is asymptotically optimal. Note that no explicit Hessian inversion has been performed to achieve this bound.

Relationship with prior work on online learning. There is no clear way of adding a bounded gradient assumption in the general case $\alpha \in (0, 1)$, because the proof relies on the recursion without projections, but for $\alpha = 1$, the rate of $O(n^{-1})$ (up to a logarithmic term) can be achieved in the more general framework of online learning, where averaging is key to deriving bounds for stochastic approximation from regret bounds. Moreover, bounds are obtained in high probability rather than simply in quadratic mean (see, e.g., [11, 12, 13, 14, 15]).
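The practical recipe behind Theorem 3, running stochastic gradient descent with a slowly decaying step such as $\gamma_n = Cn^{-2/3}$ and returning the average of the iterates, can be illustrated on a one-dimensional quadratic; the objective, noise level and constants below are illustrative choices of ours.

```python
import numpy as np

def sgd_with_averaging(C=1.0, alpha=2.0 / 3.0, sigma=1.0, n_steps=20000, seed=0):
    """SGD on f(theta) = 0.5 * theta**2 with noisy gradient theta + sigma * eps;
    returns the last iterate and the Polyak-Ruppert average of theta_0..theta_{n-1}."""
    rng = np.random.default_rng(seed)
    theta, running_sum = 5.0, 0.0
    for n in range(1, n_steps + 1):
        running_sum += theta  # accumulate theta_{n-1} before the update
        gamma = C * n ** (-alpha)
        theta -= gamma * (theta + sigma * rng.normal())
    return theta, running_sum / n_steps

theta_last, theta_avg = sgd_with_averaging()
```

The averaged iterate is typically much closer to the minimizer 0 than the last iterate, reflecting the $O(n^{-1})$ versus $O(n^{-\alpha})$ rates.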
4 Non-strongly convex objectives
In this section, we do not assume that the function f is strongly convex, but we replace (H3) by:
(H8) The function $f$ attains its global minimum at a certain $\theta^* \in \mathcal{H}$ (which may not be unique).
In the machine learning scenario, this essentially implies that the best predictor is in the function class we consider.¹ In the following theorem, since $\theta^*$ is not unique, we only derive a bound on function values. Not assuming strong convexity is essential in practice to make sure that algorithms
are robust and adaptive to the hardness of the learning or optimization problem (much like gradient
descent is).
4.1 Stochastic gradient descent
The following theorem is shown in a similar way to Theorem 1; we first derive a deterministic recursion, which we analyze with novel tools compared to the non-stochastic case (see details in [23]),
obtaining new convergence rates for non-averaged stochastic gradient descent:
Theorem 4 (Stochastic gradient descent, no strong convexity) Assume (H1, H2', H4, H8). Then, if $\gamma_n = Cn^{-\alpha}$, for $\alpha \in [1/2, 1]$, we have:
$$\mathbb{E}\big[f(\theta_n) - f(\theta^*)\big] \le \frac{1}{C}\Big(\delta_0 + \frac{\sigma^2}{L^2}\Big)\exp\big(4L^2C^2\varphi_{1-2\alpha}(n)\big)\,\frac{1 + 4L^{3/2}C^{3/2}}{\min\{\varphi_{1-\alpha}(n), \varphi_{\alpha/2}(n)\}}. \qquad (10)$$
When $\alpha = 1/2$, the bound goes to zero only when $LC < 1/4$, at rates which can be arbitrarily slow. For $\alpha \in (1/2, 2/3)$, we get convergence at rate $O(n^{-\alpha/2})$, while for $\alpha \in (2/3, 1)$, we get a convergence rate of $O(n^{\alpha-1})$. For $\alpha = 1$, the upper bound is of order $O((\log n)^{-1})$, which may be very slow (but still convergent). The rate of convergence changes at $\alpha = 2/3$, where we get our best rate $O(n^{-1/3})$, which does not match the minimax rate of $O(n^{-1/2})$ for stochastic approximation in the non-strongly convex case [25]. These rates for stochastic gradient descent without strong convexity assumptions are new, and we conjecture that they are asymptotically minimax optimal (for stochastic gradient descent, not for stochastic approximation). Nevertheless, the proof of this result falls out of the scope of this paper.
If we further assume that all gradients are bounded by $B$ (that is, we assume $D = \infty$ in (H5)), then we have the following theorem, which allows $\alpha \in (1/3, 1/2)$ with rate $O(n^{-3\alpha/2+1/2})$:

Theorem 5 (Stochastic gradient descent, no strong convexity, bounded gradients) Assume (H1, H2', H5, H8). Then, if $\gamma_n = Cn^{-\alpha}$, for $\alpha \in [1/3, 1]$, we have:
$$\mathbb{E}\big[f(\theta_n) - f(\theta^*)\big] \le 2\times\begin{cases} \big(\delta_0 + B^2C^2\varphi_{1-2\alpha}(n)\big)\,\dfrac{1 + 4L^{1/2}C^{1/2}}{C\,\min\{\varphi_{1-\alpha}(n), \varphi_{\alpha/2}(n)\}}, & \text{if } \alpha \in [1/2, 1], \\[2mm] \big(\delta_0 + B^2C^2\big)^{1/2}\,\dfrac{1 + 4L^{1/2}BC^{3/2}}{C\,(1-2\alpha)^{1/2}\,\varphi_{3\alpha/2-1/2}(n)}, & \text{if } \alpha \in [1/3, 1/2]. \end{cases} \qquad (11)$$
4.2 Polyak-Ruppert averaging
Averaging in the context of non-strongly convex functions has been studied before, in particular in
the optimization and machine learning literature, and the following theorems are similar in spirit to
earlier work [7, 8, 13, 14, 15]:
Theorem 6 (averaging, no strong convexity) Assume (H1, H2', H4, H8). Then, if γ_n = Cn^{-α}, for α ∈ [1/2, 1], we have

    E[f(θ̄_n) - f(θ*)] ≤ (2 exp(2L²C² φ_{1-2α}(n)) / (C n^{1-α})) [δ_0 + (2σ²/L)(1 + (2LC)^{1/2})] + (σ²C/(2n)) φ_{1-α}(n).    (12)
If α = 1/2, then we only have convergence under LC < 1/4 (as in Theorem 4), with a potentially slow rate, while for α > 1/2, we have a rate of O(n^{α-1}), with otherwise similar behavior to the strongly convex case with no bounded gradients. Here, averaging has allowed the rate to go from O(max{n^{α-1}, n^{-α/2}}) to O(n^{α-1}).
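The Polyak-Ruppert average θ̄_n = (1/n) Σ_{k=1}^{n} θ_k used throughout this section can be maintained online at no extra cost alongside the SGD recursion. A minimal sketch (our own helper, not code from the paper):

```python
def sgd_with_averaging(grad, theta0, C, alpha, n_steps):
    """SGD iterates theta_k = theta_{k-1} - gamma_k * grad(theta_{k-1})
    with gamma_k = C * k**(-alpha), plus the running Polyak-Ruppert
    average of theta_1, ..., theta_k, updated online."""
    theta = theta0
    theta_bar = theta0
    for k in range(1, n_steps + 1):
        gamma = C * k ** (-alpha)
        theta = theta - gamma * grad(theta)
        theta_bar += (theta - theta_bar) / k  # running mean of the iterates
    return theta, theta_bar

# deterministic sanity check on f(theta) = theta**2 / 2 (gradient: theta)
last, avg = sgd_with_averaging(lambda t: t, theta0=1.0, C=0.5, alpha=0.5,
                               n_steps=2000)
```

The incremental update θ̄_k = θ̄_{k-1} + (θ_k - θ̄_{k-1})/k avoids storing the whole trajectory, which matters for the high-dimensional experiments of Section 5.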
¹ For least-squares regression with kernels, where f_n(θ) = ½(y_n - ⟨θ, Φ(x_n)⟩)², with Φ(x_n) being the feature map associated with a reproducing kernel Hilbert space H with universal kernel [28], we need that x ↦ E(Y|X = x) is a function within the RKHS. Taking care of situations where this is not true is clearly of importance but out of the scope of this paper.
[Figure 1 appears here: two log-log panels titled "power 2" (left) and "power 4" (right), plotting log[f(θ_n) - f*] against log(n) for sgd and ave with α ∈ {1/3, 1/2, 2/3, 1}.]

Figure 1: Robustness to lack of strong convexity for different learning rates and stochastic gradient (sgd) and Polyak-Ruppert averaging (ave). From left to right: f(θ) = |θ|² and f(θ) = |θ|⁴ (between -1 and 1, affine outside of [-1, 1], continuously differentiable). See text for details.
[Figure 2 appears here: two log-log panels titled "α = 1/2" (left) and "α = 1" (right), plotting log[f(θ_n) - f*] against log(n) for sgd and ave with C ∈ {1/5, 1, 5}.]

Figure 2: Robustness to wrong constants for γ_n = Cn^{-α}. Left: α = 1/2, right: α = 1. See text for details. Best seen in color.
Theorem 7 (averaging, no strong convexity, bounded gradients) Assume (H1, H5, H8). If γ_n = Cn^{-α}, for α ∈ [0, 1], we have

    E[f(θ̄_n) - f(θ*)] ≤ (n^{α-1}/(2C)) (δ_0 + C²B² φ_{1-2α}(n)) + (B²/(2n)) φ_{1-α}(n).    (13)
With the bounded gradient assumption (and in fact without smoothness), we obtain the minimax asymptotic rate O(n^{-1/2}) up to logarithmic terms [25] for α = 1/2, and, for α < 1/2, the rate O(n^{-α}), while for α > 1/2, we get O(n^{α-1}). Here, averaging has also allowed us to increase the range of α which ensures convergence, to α ∈ (0, 1).
5 Experiments
Robustness to lack of strong convexity. Define f : R → R as |θ|^q for |θ| ≤ 1, extended into a continuously differentiable function, affine outside of [-1, 1]. For all q > 1, we have a convex function with Lipschitz-continuous gradient with constant L = q(q-1). It is strongly convex around the origin for q ∈ (1, 2], but its second derivative vanishes for q > 2. In Figure 1, we plot in log-log scale the average of f(θ_n) - f(θ*) over 100 replications of the stochastic approximation problem (with i.i.d. Gaussian noise of standard deviation 4 added to the gradient). For q = 2 (left plot), where we locally have a strongly convex case, all learning rates lead to good estimation with decay proportional to α (as shown in Theorem 1), while for the averaging case, all reach the exact same convergence rate (as shown in Theorem 3). However, for q = 4, where strong convexity does not hold (right plot), without averaging, α = 1 is still fastest but becomes the slowest after averaging; on the contrary, illustrating Section 4, slower decays (such as α = 1/2) lead to faster convergence when averaging is used. Note also the reduction in variability for the averaged iterations.
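This experimental setup can be sketched in a few lines. The function and helper names below are ours, `noise_sd=4.0` matches the noise standard deviation quoted above, and the choice C = 1/L in the usage line is only an illustrative setting:

```python
import random

def grad_f(theta, q):
    # gradient of f(theta) = |theta|**q on [-1, 1], extended affine outside
    if abs(theta) <= 1.0:
        return q * abs(theta) ** (q - 1) * (1.0 if theta >= 0 else -1.0)
    return q * (1.0 if theta > 0 else -1.0)

def run(q, C, alpha, n_steps, noise_sd=4.0, seed=0):
    rng = random.Random(seed)
    theta, theta_bar = 1.0, 1.0
    for k in range(1, n_steps + 1):
        g = grad_f(theta, q) + rng.gauss(0.0, noise_sd)  # noisy gradient
        theta -= C * k ** (-alpha) * g
        theta_bar += (theta - theta_bar) / k             # Polyak-Ruppert average
    return theta, theta_bar

# e.g. one replication of the q = 4 case with alpha = 1/2 and C = 1/L = 1/12:
theta_sgd, theta_ave = run(q=4, C=1 / 12, alpha=0.5, n_steps=5000)
```

Averaging the final error over many seeds, for several α, reproduces the qualitative picture of Figure 1.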
Robustness to wrong constants. We consider the function on the real line f, defined as f(θ) = ½|θ|², and consider standard i.i.d. Gaussian noise on the gradients. In Figure 2, we plot the average performance over 100 replications, for various values of C and α. Note that for α = 1/2 (left plot), the 3 curves for stochastic gradient descent end up being aligned and equally spaced, corroborating a rate proportional to C (see Theorem 1). Moreover, when averaging for α = 1/2, the error ends up
[Figure 3 appears here: two log-log panels, each titled "Selecting rate after n/10 iterations", plotting log[f(θ_n) - f*] against log(n) for sgd and ave with α ∈ {1/3, 1/2, 2/3, 1}.]

Figure 3: Comparison on non-strongly convex logistic regression problems. Left: synthetic example, right: "alpha" dataset. See text for details. Best seen in color.
being independent of C and α (see Theorem 3). Finally, when C is too large, there is an explosion (up to 10^5), hinting at the potential instability of having C too large. For α = 1 (right plot), if C is too small, convergence is very slow (and not at the rate n^{-1}), as already observed (see, e.g., [8, 6]).
Medium-scale experiments with linear logistic regression. We consider two situations where H = R^p: (a) the "alpha" dataset from the Pascal large scale learning challenge (http://largescale.ml.tu-berlin.de/), for which p = 500 and n = 50000, and (b) a synthetic example where p = 100, n = 100000; we generate the input data i.i.d. from a multivariate Gaussian distribution with mean zero and a covariance matrix sampled from a Wishart distribution with p degrees of freedom (thus with potentially bad condition number), and the output is obtained through a classification by a random hyperplane. For different values of α, we choose C in an adaptive way where we consider the lowest test error after n/10 iterations, and report results in Figure 3. In experiments reported in [23], we also consider C equal to 1/L, as suggested by our analysis to avoid large constants, for which the convergence speed is very slow, suggesting that our global bounds involving the Lipschitz constants may be locally far too pessimistic and that designing a truly adaptive sequence (γ_n) instead of a fixed one is a fruitful avenue for future research.
6 Conclusion
In this paper, we have provided a non-asymptotic analysis of stochastic gradient descent, as well as of its averaged version, for various learning rate sequences of the form γ_n = Cn^{-α} (see the summary of results in Table 1). Following earlier work from the optimization, machine learning and stochastic approximation literatures, our analysis highlights that α = 1 is not robust to the choice of C or to the actual difficulty of the problem (strongly convex or not). However, when using averaging with α ∈ (1/2, 1), we get, in both the strongly convex and non-strongly convex situations, close to optimal rates of convergence. Moreover, we highlight the fact that problems with bounded gradients have better behavior, i.e., logistic regression is easier to optimize than least-squares regression.

Our work can be extended in several ways: first, we have focused on results in quadratic mean and we expect that some of our results can be extended to results in high probability (in the line of [13, 3]). Second, we have focused on differentiable objectives, but an extension to objective functions with a differentiable stochastic part and a non-differentiable deterministic part (in the line of [14]) would allow an extension to sparse methods.
Acknowledgements. Francis Bach was partially supported by the European Research Council
(SIERRA Project). We thank Mark Schmidt and Nicolas Le Roux for helpful discussions.
α                  (0, 1/3)   (1/3, 1/2)   (1/2, 2/3)   (2/3, 1)
SGD     μ, L       α          α            α            α
SGD     μ, B       α          α            α            α
SGD     L          ×          ×            α/2          1-α
SGD     L, B       ×          (3α-1)/2     α/2          1-α
Aver.   μ, L       2α         2α           1            1
Aver.   L          ×          ×            1-α          1-α
Aver.   B          α          α            1-α          1-α

Table 1: Summary of results. For stochastic gradient descent (SGD) or Polyak-Ruppert averaging (Aver.), we provide the rates of convergence of the form n^{-β} corresponding to learning rate sequences γ_n = Cn^{-α}, where β is shown as a function of α and × denotes no convergence guarantee. For each method, we list the main assumptions (μ: strong convexity, L: bounded Hessian, B: bounded gradients).
References
[1] M. N. Broadie, D. M. Cicek, and A. Zeevi. General bounds and finite-time improvement for stochastic approximation algorithms. Technical report, Columbia University, 2009.
[2] H. J. Kushner and G. G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer-Verlag, second edition, 2003.
[3] O. Yu. Kul'chitskii and A. E. Mozgovoi. An estimate for the rate of convergence of recurrent robust identification algorithms. Kibernet. i Vychisl. Tekhn., 89:36-39, 1991.
[4] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[5] D. Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical Report 781, Cornell University Operations Research and Industrial Engineering, 1988.
[6] V. Fabian. On asymptotic normality in stochastic approximation. The Annals of Mathematical Statistics, 39(4):1327-1332, 1968.
[7] Y. Nesterov and J. P. Vial. Confidence level solutions for stochastic programming. Automatica, 44(6):1559-1568, 2008.
[8] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[9] L. Bottou and Y. Le Cun. On-line learning for very large data sets. Applied Stochastic Models in Business and Industry, 21(2):137-151, 2005.
[10] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems (NIPS), 20, 2008.
[11] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proc. ICML, 2008.
[12] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proc. ICML, 2007.
[13] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In Conference on Learning Theory (COLT), 2009.
[14] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 9:2543-2596, 2010.
[15] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899-2934, 2009.
[16] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, 2006.
[17] R. Durrett. Probability: Theory and Examples. Duxbury Press, third edition, 2004.
[18] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2001.
[19] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[21] K. Sridharan, N. Srebro, and S. Shalev-Shwartz. Fast rates for regularized objectives. Advances in Neural Information Processing Systems, 22, 2008.
[22] N. N. Vakhania, V. I. Tarieladze, and S. A. Chobanyan. Probability Distributions on Banach Spaces. Reidel, 1987.
[23] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. Technical Report 00608041, HAL, 2011.
[24] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley & Sons, 1983.
[25] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization, 2010. Tech. report, arXiv 1009.0571.
[26] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proc. COLT, 2011.
[27] G. Casella and R. L. Berger. Statistical Inference. Duxbury Press, 2001.
[28] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67-93, 2002.
Neural Reconstruction with Approximate
Message Passing (NeuRAMP)
Sundeep Rangan
Polytechnic Institute of New York University
[email protected]
Alyson K. Fletcher
University of California, Berkeley
[email protected]
Aniruddha Bhargava
University of Wisconsin Madison
[email protected]
Lav R. Varshney
IBM Thomas J. Watson Research Center
[email protected]
Abstract
Many functional descriptions of spiking neurons assume a cascade structure where
inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a
novel and systematic parameter estimation procedure for such models and applies
the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive
fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using
a recently-developed generalized approximate message passing (GAMP) method.
The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown
to be computationally efficient, provides a more exact modeling of the sparsity,
can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP
method can also exploit inherent structured sparsity in the linear weights. The
method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells.
1 Introduction
Fundamental to describing the behavior of neurons in response to sensory stimuli or to inputs from
other neurons is the need for succinct models that can be estimated and validated with limited data.
Towards this end, many functional models assume a cascade structure where an initial linear stage
combines inputs to produce a low-dimensional output for subsequent nonlinear stages. For example,
in the widely-used linear nonlinear Poisson (LNP) model for retinal ganglion cells (RGCs) [1,2], the
time-varying input stimulus vector is first linearly filtered and summed to produce a low (typically
one or two) dimensional output, which is then passed through a memoryless nonlinear function that
outputs the neuron?s instantaneous Poisson spike rate. An initial linear filtering stage also appears
in the well-known integrate-and-fire model [3]. The linear filtering stage in these models reduces
the dimensionality of the parameter estimation problem and provides a simple characterization of a
neuron?s receptive field or connectivity.
However, even with the dimensionality reduction from assuming such linear stages, parameter estimation may be difficult when the stimulus is high-dimensional or the filter lengths are large. Compressed sensing methods have been recently proposed [4] to reduce the dimensionality further. The
key insight is that although most experiments for mapping, say visual receptive fields, expose the
1
Linear
filtering
Stimulus
(eg. n pixel u1[t ]
image)
Gaussian noise
d [t ]
(u1 * w1)[t ]
Nonlinearity
Poisson
spike
process
?[t ]
z[t ]
Spike count
y[t ]
un [t ]
(un * wn )[t ]
Figure 1: Linear nonlinear Poisson (LNP) model for a neuron with n stimuli.
neural system under investigation to a large number of stimulus components, the overwhelming majority of the components do not affect the instantaneous spiking rate of any one particular neuron due
to anatomical sparsity [5, 6]. As a result, the linear weights that model the response to these stimulus components will be sparse; most of the coefficients will be zero. For the retina, the stimulus is
typically a large image, whereas the receptive field of any individual neuron is usually only a small
portion of that image. Similarly, for mapping cortical connectivity to determine the connectome,
each neuron is typically only connected to a small fraction of the neurons under test [7]. Due to the
sparsity of the weights, estimation can be performed via sparse reconstruction techniques similar to
those used in compressed sensing (CS) [8?10].
This paper presents a CS-based estimation of linear neuronal weights via recently-developed generalized approximate message passing (GAMP) methods from [11] and [12]. GAMP, which builds
upon earlier work in [13, 14], is a Gaussian approximation of loopy belief propagation. The benefits of the GAMP method for neural mapping are that it is computationally tractable with large
amounts of data, can incorporate very general graphical model descriptions of the neuron and provides
a method for simultaneously estimating the parameters in the linear and nonlinear stages. In contrast, methods such as the common spike-triggered average (STA) perform separate estimation of the
linear and nonlinear components. Following the simulation methodology in [4], we show that the
GAMP method offers significantly improved reconstruction of cortical wiring diagrams over other
state-of-the-art CS techniques.
We also validate the GAMP-based sparse estimation methodology in the problem of fitting LNP
models of salamander RGCs. LNP models have been widely-used in systems modeling of the retina,
and they have provided insights into how ganglion cells communicate to the lateral geniculate nucleus, and further upstream to the visual cortex [15]. Such understanding has also helped clarify the
computational purpose of cell connectivity in the retina. The filter shapes estimated by the GAMP
algorithm agree with other findings on RGC cells using STA methods, such as [16]. What is important here is that the filter coefficients can be estimated accurately with a much smaller number of
measurements. This feature suggests that GAMP-based sparse modeling may be useful in the future
for other neurons and more complex models.
2 Linear Nonlinear Poisson Model

2.1 Mathematical Model
We consider the following simple LNP model for the spiking output of a single neuron under n
stimulus components shown in Fig. 1, cf. [1, 2]. Inputs and outputs are measured in uniform time
intervals t = 0, 1, . . . , T - 1, and we let u_j[t] denote the jth stimulus input in the tth time interval,
j = 1, . . . , n. For example, if the stimulus is a sequence of images, n would be the number of pixels
in each image and uj [t] would be the value of the jth pixel over time. We let y[t] denote the number
of spikes in the tth time interval, and the general problem is to find a model that explains the relation
between the stimuli uj [t] and spike outputs y[t].
As the name suggests, the LNP model is a cascade of three stages: linear, nonlinear and Poisson. In
the first (linear) stage, the input stimulus is passed through a set of n linear filters and then summed
to produce the scalar output z[t] given by

    z[t] = Σ_{j=1}^{n} (w_j ∗ u_j)[t] = Σ_{j=1}^{n} Σ_{ℓ=0}^{L-1} w_j[ℓ] u_j[t - ℓ],    (1)
where w_j[·] is the linear filter applied to the jth stimulus component and (w_j ∗ u_j)[t] is the convolution of the filter with the input. We assume the filters have finite impulse response (FIR) with L taps, w_j[ℓ], ℓ = 0, 1, . . . , L - 1. In the second (nonlinear) stage of the LNP model, the scalar linear output z[t] passes through a memoryless nonlinear random function to produce a spike rate λ[t]. We assume a nonlinear mapping of the form
    λ[t] = f(v[t]) = log[1 + exp(π(v[t]; α))],    (2a)
    v[t] = z[t] + d[t],   d[t] ∼ N(0, σ_d²),    (2b)

where d[t] is Gaussian noise to account for randomness in the spike rate and π(v; α) is the p-th order polynomial

    π(v; α) = α_0 + α_1 v + · · · + α_p v^p.    (3)

The form of the function in (2a) ensures that the spike rate λ[t] is always positive. In the third and final stage of the LNP model, the number of spikes is modeled as a Poisson process with mean λ[t]. That is,

    Pr(y[t] = k | λ[t]) = e^{-λ[t]} λ[t]^k / k!,   k = 0, 1, 2, . . .    (4)

This LNP model is sometimes called a one-dimensional model since z[t] is a scalar.
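The three-stage cascade (1)-(4) is straightforward to simulate. The sketch below is our own illustration (helper names hypothetical, not from the paper), using a softplus of a polynomial for the nonlinearity and Knuth's method for the Poisson draws:

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's method; adequate for the small spike rates used here
    p, k, thresh = 1.0, 0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def simulate_lnp(u, w, a_coef, sigma_d, rng):
    """Simulate the LNP cascade of Fig. 1.
    u: n stimulus sequences u[j][t]; w: n FIR filters w[j][l];
    a_coef: polynomial coefficients (constant term first); sigma_d: noise std."""
    n, T, L = len(u), len(u[0]), len(w[0])
    y = []
    for t in range(T):
        # linear stage, eq (1): sum of filtered stimuli
        z = sum(w[j][l] * u[j][t - l]
                for j in range(n) for l in range(L) if t - l >= 0)
        # nonlinear stage, eq (2): softplus of a polynomial of the noisy input
        v = z + rng.gauss(0.0, sigma_d)
        poly = sum(c * v ** i for i, c in enumerate(a_coef))
        lam = math.log1p(math.exp(min(poly, 30.0)))  # clipped to avoid overflow
        # Poisson stage, eq (4)
        y.append(poisson_draw(lam, rng))
    return y
```

A simulator like this is what one would use to generate synthetic spike trains for testing the estimators of Section 2.2.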
2.2 Conventional Estimation Methods
The parameters in the neural model can be written as the vector θ = (w, α, σ_d²), where w is the nL-dimensional vector of the filter coefficients, the vector α contains the p + 1 polynomial coefficients in (3) and σ_d² is the noise variance. The basic problem is to estimate the parameters θ from the
input/output data uj [t] and y[t]. We briefly summarize three conventional methods: spike-triggered
average (STA), reverse correlation (RC) and maximum likelihood (ML), all described in several
texts including [1].
The STA and RC methods are based on simple linear regression. The vector z of linear filter outputs
z[t] in (1) can be written as z = Aw, where A is a known block Toeplitz matrix with the input
data u_j[t]. The STA and RC methods then both attempt to find a w such that the output z has high linear correlation with the measured spikes y. The RC method finds this solution with the least squares estimate

    ŵ_RC = (AᵀA + λ²I)^{-1} Aᵀy,    (5)

for some regularization parameter λ², and the STA is an approximation given by

    ŵ_STA = (1/T) Aᵀy.    (6)
The statistical properties of the estimates are discussed in [17, 18].
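For a single stimulus (n = 1), the estimators (5) and (6) reduce to a few lines of linear algebra. The sketch below (our own helpers, written in pure Python to stay self-contained) builds the lagged data matrix A and computes both estimates:

```python
def build_lagged_matrix(u, L):
    # A[t][l] = u[t - l] (zero-padded), so that (A w)[t] = (w * u)[t]
    return [[u[t - l] if t - l >= 0 else 0.0 for l in range(L)]
            for t in range(len(u))]

def solve(M, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def sta(A, y):
    # spike-triggered average, eq (6): (1/T) A^T y
    T, L = len(A), len(A[0])
    return [sum(A[t][l] * y[t] for t in range(T)) / T for l in range(L)]

def rc(A, y, lam2):
    # reverse correlation, eq (5): (A^T A + lam2 I)^(-1) A^T y
    T, L = len(A), len(A[0])
    M = [[sum(A[t][i] * A[t][j] for t in range(T)) + (lam2 if i == j else 0.0)
          for j in range(L)] for i in range(L)]
    b = [sum(A[t][i] * y[t] for t in range(T)) for i in range(L)]
    return solve(M, b)
```

With noiseless linear outputs, `rc` with zero regularization recovers the true filter exactly, while `sta` is only approximately correct unless the stimulus is white.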
Once the estimate ŵ = ŵ_STA or ŵ = ŵ_RC has been computed, one can compute an estimate ẑ = Aŵ for the linear output z and then use any scalar estimation method to find a nonlinear mapping from z[t] to λ[t] based on the outputs y[t].
A shortcoming of the STA and RC methods is that the filter coefficients w are selected to maximize
the linear correlation and may not work well when there is a strong nonlinearity. A maximum
likelihood (ML) estimate may overcome this problem by jointly optimizing over nonlinear and linear
parameters. To describe the ML estimate, first fix the parameters α and σ_d² in the nonlinear stage. Then, given the vector output z from the linear stage, the spike count components y[t] are independent:

    Pr(y | z, α, σ_d²) = Π_{t=0}^{T-1} Pr(y[t] | z[t], α, σ_d²),    (7)
where the component distributions are given by

    Pr(y[t] | z[t], α, σ_d²) = ∫_0^∞ Pr(y[t] | λ[t]) p(λ[t] | z[t], α, σ_d²) dλ[t],    (8)

and p(λ[t] | z[t], α, σ_d²) can be computed from the relation (2b), and Pr(y[t] | λ[t]) is the Poisson distribution (4). The ML estimate is then given by the solution to the optimization
    θ̂_ML := arg max_{(w, α, σ_d²)} Π_{t=0}^{T-1} Pr(y[t] | z[t], α, σ_d²),   z = Aw.    (9)
In this way, the ML estimate attempts to maximize the goodness of fit by simultaneously searching
over the linear and nonlinear parameters.
3 Estimation via Compressed Sensing

3.1 Bayesian Model with Group Sparsity
A difficulty in the above methods is that the number, Ln, of filter coefficients in w may be large and
require an excessive number of measurements to estimate accurately. As discussed above, the key
idea in this work is that most stimulus components have little effect on the spiking output. Most of
the filter coefficients wj [`] will be zero and exploiting this sparsity may be able to reduce the number
of measurements while maintaining the same estimation accuracy.
The sparse nature of the filter coefficients can be modeled with the following group sparsity structure: Let ξ_j be a binary random variable with ξ_j = 1 when stimulus j is in the receptive field of the neuron and ξ_j = 0 when it is not. We call the variables ξ_j the receptive field indicators, and model these indicators as i.i.d. Bernoulli variables with

    Pr(ξ_j = 1) = 1 - Pr(ξ_j = 0) = ρ,    (10)

where ρ ∈ [0, 1] is the average fraction of stimuli in the receptive field. We then assume that, given the vector ξ of receptive field indicators, the filter weight coefficients are independent with distribution

    p(w_j[ℓ] | ξ) = p(w_j[ℓ] | ξ_j) = { 0             if ξ_j = 0,
                                        N(0, σ_x²)    if ξ_j = 1.    (11)

That is, the linear weight coefficients are zero outside the receptive field and Gaussian within the receptive field. Since our algorithms are general, other distributions can also be used; we use the Gaussian for illustration. The distribution on w defined by (10) and (11) is often called a group sparse model, since the components of the vector w are zero in groups.
Estimation with this sparse structure leads naturally to a compressed sensing problem. Specifically,
we are estimating a sparse vector w through a noisy version y of a linear transform z = Aw, which
is precisely the problem of compressed sensing [8-10]. With a group structure, one can employ a variety of methods including the group Lasso [19-21] and group orthogonal matching pursuit [22].
However, these methods are designed for either AWGN or logistic outputs. In the neural model, the
spike count y[t] is a nonlinear, random function of the linear output z[t] described by the probability
distribution in (8).
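Sampling from the prior defined by (10)-(11) makes the group structure concrete. A minimal sketch (the helper name is ours; the fraction and weight standard deviation are passed as `rho` and `sigma_x`):

```python
import random

def sample_group_sparse_weights(n, L, rho, sigma_x, rng):
    """Draw filters from the group-sparse prior of (10)-(11): each stimulus j
    is in the receptive field with probability rho; if so, its L taps are
    i.i.d. N(0, sigma_x**2), and otherwise the whole group is zero."""
    w = []
    for _ in range(n):
        if rng.random() < rho:                       # indicator = 1
            w.append([rng.gauss(0.0, sigma_x) for _ in range(L)])
        else:                                        # indicator = 0
            w.append([0.0] * L)
    return w
```

Drawing from this prior is useful both for generating synthetic mapping problems and for checking that a reconstruction algorithm respects the group (rather than elementwise) sparsity pattern.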
3.2 GAMP-Based Sparse Estimation
To address the nonlinearities in the outputs, we use the generalized approximate message passing
(GAMP) algorithm [11] with extensions in [12]. The GAMP algorithm is a general approximate
inference method for graphical models with linear mixing. To place the neural estimation problem
in the GAMP framework, first fix the stimulus input vector u and the nonlinear output parameters α and σ_d². Then, the conditional joint distribution of the outputs y, linear filter weights w and receptive field indicators ξ factors as
"
# T?1
n
L?1
Y
Y
Y
p y, ?, w u, ?, ?d2
=
Pr(?j )
p wj [`] ?j
Pr y[t] z[t], ?, ?d2 ,
j=1
z
`=0
= Aw.
t=0
(12)
4
[Figure 2 diagram: the receptive field indicators ξ, filter weights w, data matrix A with input stimuli u_j[t], filter outputs z, and observed spike counts y, connected through the factors P(ξ_j), p(w_j[ℓ] | ξ_j) and p(y[t] | z[t], α, σ_d²).]

Figure 2: The neural estimation problem represented as a graphical model with linear mixing. Solid circles are unknown variables, dashed circles are observed variables (in this case, spike counts) and squares are factors in the probability distribution. The linear mixing component of the graph indicates the constraints that z = Aw.
Similar to standard graphical model estimation [23], GAMP is based on first representing the
distribution in (12) via a factor graph as shown in Fig. 2. In the factor graph, the solid circles
represent the components of the unknown vectors w, γ, etc., and the dashed circles the components
of the observed or measured variables y. Each square corresponds to one factor in the distribution
(12). What is new in the GAMP methodology is that the factor graph also contains a component
to indicate the linear constraints z = Aw, which would normally be represented by a set of
additional factor nodes.
Inference on graphical models is often performed by some variant of loopy belief propagation (BP).
Loopy BP attempts to reduce the joint estimation of all the variables to a sequence of lower dimensional estimation problems associated with each of the factors in the graph. Estimation at the factor
nodes is performed iteratively, where after each iteration, ?beliefs? of the variables are passed to the
factors to improve the estimates in the subsequent iterations. Details can be found in [23].
However, exact implementation of loopy BP is intractable for the neural estimation problem: The
linear constraints z = Aw create factor nodes that connect each of the variables z[t] to all the
variables w_j[ℓ] where u_j[t − ℓ] is non-zero. In the RGC experiments below, the pixel values u_j[t]
are non-zero 50% of the time, so each variable z[t] will be connected to, on average, half of the Ln
filter weight coefficients through these factor nodes. Since exact implementation of loopy BP grows
exponentially in the degree of the factor nodes, loopy BP would be infeasible for the neural problem,
even for moderate values of Ln.
The GAMP method reduces the complexity of loopy BP by exploiting the linear nature of the relations between the variables w and z. Specifically, it is shown that when each term z[t] is a linear
combination of a large number of terms w_j[ℓ], the belief messages across the factor node for the
linear constraints can be approximated as Gaussians and the factor node updates can be computed
with a central limit theorem approximation. Details are in [11] and [12].
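A small numerical check of why the Gaussian approximation is reasonable: even when the weights w_j follow a decidedly non-Gaussian (spike-and-slab) prior, a dense linear combination z = Σ_j a_j w_j of many small independent terms is nearly Gaussian. The sizes and prior below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
a = rng.normal(0.0, 1.0 / np.sqrt(n), n)   # one row of a dense mixing matrix A

def draw_w():
    # Spike-and-slab weights: each w_j is 0 with prob 0.9, else N(0, 1).
    gamma = rng.random(n) < 0.1
    return np.where(gamma, rng.normal(0.0, 1.0, n), 0.0)

# z = sum_j a_j w_j sums many small independent contributions, so its
# distribution over draws of w is close to Gaussian despite the prior.
samples = np.array([a @ draw_w() for _ in range(5000)])
target_std = np.sqrt(0.1 * np.sum(a ** 2))   # Var[w_j] = 0.1 under this prior
print(samples.mean(), samples.std(), target_std)
```

This is the central-limit structure that GAMP exploits to replace intractable BP messages with Gaussian ones.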
4 Receptive Fields of Salamander Retinal Ganglion Cells
The sparse LNP model with GAMP-based estimation was evaluated on data from recordings of
neural spike trains from salamander retinal ganglion cells exposed to random checkerboard images,
following the basic methods of [24].¹ In the experiment, spikes from individual neurons were measured over an approximately 1900 s period at a sampling interval of 10 ms. During the recordings,
the salamander was exposed to 80 × 60 pixel random black-white binary images that changed every
3 to 4 sampling intervals. The pixels of each image were i.i.d. with a 50-50 black-white probability.
We compared three methods for fitting an L = 30 tap one-dimensional LNP model for the RGC
neural responses: (i) truncated STA, (ii) approximate ML, and (iii) GAMP estimation with the
sparse LNP model. Methods (i) and (ii) do not exploit sparsity, while method (iii) does.
The truncated STA method was performed by first computing a linear filter estimate as in (6) for the
entire 80 × 60 image and then setting all coefficients outside an 11 × 11 pixel subarea around the
pixel with the largest estimated response to zero. The 11 × 11 size was chosen since it is sufficiently
large to contain these neurons' entire receptive fields. This truncation significantly improves the
STA estimate by removing spurious estimates that anatomically cannot have relation to the neural
¹Data from the Leonardo Laboratory at the Janelia Farm Research Campus.
[Figure 3 image: (a) Filter responses over time (delay 0–300 ms); (b) Spatial receptive field. Rows compare Non-sparse LNP w/ STA against Sparse LNP w/ GAMP at 400 s, 600 s and 1000 s of training.]
Figure 3: Estimated filter responses and visual receptive field for salamander RGCs using a non-sparse LNP model with STA estimation and a sparse LNP model with GAMP estimation.
responses; this provides a better comparison to test other methods. From the estimate ŵ_STA of the
linear filter coefficients, we compute an estimate ẑ = Aŵ of the linear filter output. The output
parameters α and σ_d² are then fit by numerical maximization of the likelihood P(y | ẑ, α, σ_d²) in (7).
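A minimal sketch of the truncated-STA procedure on a toy grid. The sizes here are illustrative, not the 80 × 60 / 11 × 11 values used in the experiment, and the spike counts are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W_, L, T = 8, 8, 5, 2000
u = rng.integers(0, 2, size=(T, H, W_)).astype(float) * 2 - 1   # +/-1 pixels
y = rng.poisson(1.0, size=T)                                    # spike counts

# Spike-triggered average: for each tap l, average the stimulus l bins
# before each time bin, weighted by the spike count in that bin.
sta = np.zeros((L, H, W_))
for l in range(L):
    sta[l] = (y[l:, None, None] * u[: T - l]).sum(axis=0) / y[l:].sum()

# Truncation: keep only a k x k subarea centred on the strongest pixel.
k = 3
energy = (sta ** 2).sum(axis=0)
r, c = np.unravel_index(energy.argmax(), energy.shape)
mask = np.zeros((H, W_), dtype=bool)
r0, c0 = max(0, r - k // 2), max(0, c - k // 2)
mask[r0 : r0 + k, c0 : c0 + k] = True
sta_trunc = sta * mask
print(mask.sum(), "pixels kept")
```

Zeroing everything outside the subarea plays the same role as in the text: discarding coefficients that cannot anatomically belong to the receptive field.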
We used a first-order polynomial, since higher orders did not improve the prediction. The
fact that only a linear polynomial was needed in the output is likely due to the fact that random
checkerboard images rarely align with the neuron's filters and therefore do not excite the neural
spiking into a nonlinear regime. An interesting future experiment would be to re-run the estimation
with swatches of natural images as in [25]. We believe that under such experimental conditions, the
advantages of the GAMP-based nonlinear estimation would be even larger.
The RC estimate (5) was also computed, but showed no appreciable difference from the STA estimate for this matrix A. As a result, we discuss only STA results below.
The GAMP-based sparse estimation used the STA estimate for initialization to select the 11 × 11
pixel subarea and the variances σ_x² in (11). As in the STA case, we used only a first-order linear
polynomial in (3). The linear coefficient α₁ was set to 1 since other scalings could be absorbed into
the filter weights w. The constant term α₀ was incorporated as another linear regression coefficient.
For a third algorithm, we approximately computed the ML estimate (9) by running the GAMP
algorithm, but with all the factors for the priors on the weights w removed.
To illustrate the qualitative differences between the estimates, Fig. 3 shows the estimated responses
for the STA and GAMP-based sparse LNP estimates for one neuron using three different lengths of
training data: 400, 600 and 1000 seconds of the total 1900 second training data. For brevity, the
approximate ML estimate is omitted, but is similar to the STA estimate. The estimated responses in
Fig. 3(a) are displayed as 11 × 11 = 121 curves, each curve representing the linear filter response
with L = 30 taps over the 30 × 10 = 300 ms response. Fig. 3(b) shows the estimated spatial receptive
fields plotted as the total magnitude of the 11 × 11 filters. One can immediately see that the GAMP-based sparse estimate is significantly less noisy than the STA estimate, as the smaller, unreliable
responses are zeroed out in the GAMP-based sparse LNP estimate.
The improved accuracy of the GAMP-estimation with the sparse LNP model was verified in the
cross validation, as shown in Fig. 4. In this plot, the length of the training data was varied from 200
to 1000 seconds, with the remaining portion of the 1900 second data used for cross-validation. At
each training length, each of the three methods (STA, GAMP-based sparse LNP and approximate
ML) were used to produce an estimate θ̂ = (ŵ, α̂, σ̂_d²). Fig. 4 shows, for each of these methods,
the cross-validation scores P(y | ẑ, α̂, σ̂_d²)^(1/T), which is the geometric mean of the likelihood in (7).
It can be seen that the GAMP-based sparse LNP estimate significantly outperforms the STA and
[Figure 4 image: cross-validation score (0.895–0.925) vs. train time (200–1000 sec) for Sparse LNP w/ GAMP, Non-sparse LNP w/ STA and Non-sparse LNP w/ approx ML.]
Figure 4: Prediction accuracy of sparse and non-sparse LNP estimates for data from salamander RGC cells. Based on cross-validation scores, the GAMP-based sparse LNP estimation provides a significantly better estimate for the same amount of training.
[Figure 5 image: missed detection probability p_MD vs. false alarm probability p_FA (10^-4 to 10^0, log scale) for RC, CoSaMP and GAMP.]
Figure 5: Comparison of reconstruction methods on cortical connectome mapping with multi-neuron excitation based on simulation model in [4]. In this case, connectivity from n = 500 potential pre-synaptic neurons is estimated from m = 300 measurements with 40 neurons excited in each measurement. In the simulation, only 6% of the n potential neurons are actually connected to the postsynaptic neuron under test.
approximate ML estimates that do not assume any sparse structure. Indeed, by the measure of the
cross-validation score, the sparse LNP estimate with GAMP after only 400 seconds of data was as
accurate as the STA estimate with 1000 seconds of data. Interestingly, the approximate ML estimate
is actually worse than the STA estimate, presumably since it overfits the model.
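The cross-validation score, the geometric mean of the per-bin likelihoods, can be computed as the exponential of the average log-likelihood. The sketch below assumes a Poisson output with a known per-bin rate, which is a simplification of the full output model in (7):

```python
import numpy as np
from math import lgamma

def cross_val_score(y, rate):
    """Geometric mean of per-bin Poisson likelihoods: P(y | rate)^(1/T)."""
    logp = y * np.log(rate) - rate - np.array([lgamma(int(k) + 1) for k in y])
    return float(np.exp(logp.mean()))

rng = np.random.default_rng(3)
rate = rng.uniform(0.1, 2.0, size=1000)
y = rng.poisson(rate)
print(cross_val_score(y, rate))
```

Working in log space and exponentiating the mean avoids the numerical underflow that a direct product of 1000 small probabilities would cause.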
5 Neural Mapping via Multi-Neuron Excitation
The GAMP methodology was also applied to neural mapping from multi-neuron excitation, originally proposed in [4]. A single post-synaptic neuron has connections to n potential pre-synaptic
neurons. The standard method to determine which of the n neurons are connected to the post-synaptic neuron is to excite one neuron at a time. This process is wasteful, since only a small
fraction of the neurons are typically connected. In the method of [4], multiple neurons are excited
in each measurement. Then, exploiting the sparsity in the connectivity, compressed sensing techniques can be used to recover the mapping from m < n measurements. Unfortunately, the output
stage of spiking neurons is often nonlinear and most CS methods cannot directly incorporate such
nonlinearities into the estimation. The GAMP methodology thus offers the possibility of improved
performance for reconstruction.
To validate the methodology, we compared the performance of GAMP to various reconstruction
methods following a simulation of mapping of cortical neurons with multi-neuron excitation in [4].
The simulation assumes an LNP model of Section 2.1, where the inputs u_j[t] are 1 or 0 depending
on whether the jth pre-synaptic input is excited in the tth measurement. The filters have a single tap (i.e.
L = 1) and are modeled with a Bernoulli-Weibull distribution: a weight has probability 0.06 of being
on (the neuron is connected) and probability 0.94 of being zero (the neuron is not connected). The output has a
strong nonlinearity including a thresholding and saturation, the levels of which must be estimated.
Connectivity detection amounts to determining which of the n pre-synaptic neurons have non-zero
weights.
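A rough sketch of this simulation setup. The detector shown is a simple reverse-correlation score, not the RC/CoSaMP/GAMP implementations compared in Fig. 5, and the saturating output used here is an assumed stand-in for the paper's nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 500, 300, 40   # potential neurons, measurements, excited per trial
rho = 0.06               # fraction of neurons actually connected

# 0/1 excitation design: exactly k neurons stimulated in each measurement.
A = np.zeros((m, n))
for t in range(m):
    A[t, rng.choice(n, size=k, replace=False)] = 1.0

connected = rng.random(n) < rho
w = np.where(connected, rng.weibull(2.0, n), 0.0)   # Bernoulli-Weibull weights

# Noisy, saturating output (illustrative nonlinearity).
z = A @ w
y = np.minimum(z + rng.normal(0.0, 0.5, m), z.mean() + 3.0)

# Simple reverse-correlation detector: score each neuron by correlation with y,
# then flag the top rho-fraction of scores as "connected".
scores = A.T @ (y - y.mean())
detected = scores > np.quantile(scores, 1 - rho)
print("true positives:", int((detected & connected).sum()))
```

With m < n measurements the detection problem is underdetermined, which is exactly where the sparsity prior and a nonlinearity-aware estimator such as GAMP pay off.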
Fig. 5 plots the missed detection vs. false alarm rate of the various detectors. It can be seen that the
GAMP-based connectivity detection significantly outperforms both non-sparse RC reconstruction
as well as a state-of-the-art greedy sparse method CoSaMP [26, 27].
6 Conclusions and Future Work
A general method for parameter estimation in neural models based on generalized approximate
message passing was presented. The GAMP methodology is computationally tractable for large
data sets, can exploit sparsity in the linear coefficients and can incorporate a wide range of nonlinear
modeling complexities in a systematic manner. Experimental validation of the GAMP-based estimation of a sparse LNP model for salamander RGC cells shows significantly improved prediction in
cross-validation over simple non-sparse estimation methods such as STA. Benefits over state-of-the-art sparse reconstruction methods are also apparent in simulated models of cortical mapping with
multi-neuron excitation.
Going forward, the generality offered by the GAMP model will enable accurate parameter estimation for other complex neural models. For example, the GAMP model can incorporate other prior
information such as a correlation between responses in neighboring pixels. Future work may also
include experiments with integrate-and-fire models [3]. An exciting future possibility for cortical
mapping is to decode memories, which are thought to be stored as the connectome [7, 28].
Throughout this paper, we have presented GAMP as an experimental data analysis method.
One might wonder, however, whether the brain itself might use compressive representations and
message-passing algorithms to make sense of the world. There have been several previous suggestions that visual and general cortical regions of the brain may use belief propagation-like algorithms [29, 30]. There have also been recent suggestions that the visual system uses compressive
representations [31]. As such, we assert the biological plausibility of the brain itself using the
algorithms presented herein for receptive field and memory decoding.
7 Acknowledgements
We thank D. B. Chklovskii and T. Hu for formulative discussions on the problem, A. Leonardo for
providing experimental data and further discussions, and B. Olshausen for discussions.
References
[1] Peter Dayan and L. F. Abbott. Theoretical Neuroscience. Computational and Mathematical
Modeling of Neural Systems. MIT Press, 2001.
[2] Odelia Schwartz, Jonathan W. Pillow, Nicole C. Rust, and Eero P. Simoncelli. Spike-triggered neural characterization. J. Vis., 6(4):13, July 2006.
[3] Liam Paninski, Jonathan W. Pillow, and Eero P. Simoncelli. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 16(12):2533–2561, December 2004.
[4] Tao Hu and Dmitri B. Chklovskii. Reconstruction of sparse circuits using multi-neuronal excitation (RESCUME). In Yoshua Bengio, Dale Schuurmans, John Lafferty, Chris Williams, and Aron Culotta, editors, Advances in Neural Information Processing Systems 22, pages 790–798. MIT Press, Cambridge, MA, 2009.
[5] James R. Anderson, Bryan W. Jones, Carl B. Watt, Margaret V. Shaw, Jia-Hui Yang, David DeMill, James S. Lauritzen, Yanhua Lin, Kevin D. Rapp, David Mastronarde, Pavel Koshevoy, Bradley Grimm, Tolga Tasdizen, Ross Whitaker, and Robert E. Marc. Exploring the retinal connectome. Mol. Vis., 17:355–379, February 2011.
[6] Elad Ganmor, Ronen Segev, and Elad Schneidman. The architecture of functional interaction networks in the retina. J. Neurosci., 31(8):3044–3054, February 2011.
[7] Lav R. Varshney, Per Jesper Sjöström, and Dmitri B. Chklovskii. Optimal information storage in noisy synapses under resource constraints. Neuron, 52(3):409–423, November 2006.
[8] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, February 2006.
[9] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, April 2006.
[10] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, December 2006.
[11] S. Rangan. Generalized approximate message passing for estimation with random linear mixing. arXiv:1010.5141 [cs.IT], October 2010.
[12] S. Rangan, A. K. Fletcher, V. K. Goyal, and P. Schniter. Hybrid approximate message passing with applications to group sparsity. arXiv, 2011.
[13] D. Guo and C.-C. Wang. Random sparse linear systems observed via arbitrary channels: A decoupling principle. In Proc. IEEE Int. Symp. Inform. Th., pages 946–950, Nice, France, June 2007.
[14] David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. PNAS, 106(45):18914–18919, September 2009.
[15] David H. Hubel. Eye, Brain, and Vision. W. H. Freeman, 2nd edition, 1995.
[16] Toshihiko Hosoya, Stephen A. Baccus, and Markus Meister. Dynamic predictive coding by the retina. Nature, 436(7047):71–77, July 2005.
[17] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[18] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network: Computation in Neural Systems, 14:437–464, 2003.
[19] S. Bakin. Adaptive regression and model selection in data mining problems. PhD thesis, Australian National University, Canberra, 1999.
[20] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Statist. Soc., 68:49–67, 2006.
[21] Lukas Meier, Sara van de Geer, and Peter Bühlmann. Model selection and estimation in regression with grouped variables. J. Royal Statist. Soc., 70:53–71, 2008.
[22] Aurélie C. Lozano, Grzegorz Świrszcz, and Naoki Abe. Group orthogonal matching pursuit for variable selection and prediction. In Proc. NIPS, Vancouver, Canada, December 2008.
[23] C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics.
Springer, New York, NY, 2006.
[24] Markus Meister, Jerome Pine, and Denis A. Baylor. Multi-neuronal signals from the retina: acquisition and analysis. J. Neurosci. Methods, 51(1):95–106, January 1994.
[25] Joaquin Rapela, Jerry M. Mendel, and Norberto M. Grzywacz. Estimating nonlinear receptive fields from natural images. J. Vis., 6(4):11, May 2006.
[26] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harm. Anal., 26(3):301–321, May 2009.
[27] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory, 55(5):2230–2249, May 2009.
[28] Dmitri B. Chklovskii, Bartlett W. Mel, and Karel Svoboda. Cortical rewiring and information storage. Nature, 431(7010):782–788, October 2004.
[29] Tai Sing Lee and David Mumford. Hierarchical Bayesian inference in the visual cortex. J. Opt. Soc. Am. A, 20(7):1434–1448, July 2003.
[30] Karl Friston. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci., 11(2):127–138, February 2010.
[31] Guy Isely, Christopher J. Hillar, and Friedrich T. Sommer. Decyphering subsampled data: Adaptive compressive sampling as a principle of brain communication. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 910–918. MIT Press, Cambridge, MA, 2010.
Why The Brain Separates Face Recognition From
Object Recognition
Joel Z Leibo, Jim Mutch and Tomaso Poggio
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge MA 02139
[email protected], [email protected], [email protected]
Abstract
Many studies have uncovered evidence that visual cortex contains specialized regions involved in processing faces but not other object classes. Recent electrophysiology studies of cells in several of these specialized regions revealed that at
least some of these regions are organized in a hierarchical manner with viewpoint-specific cells
[1]. A separate computational line of reasoning leads to the claim that some transformations of visual inputs that preserve viewed object identity are class-specific.
In particular, the 2D images evoked by a face undergoing a 3D rotation are not
produced by the same image transformation (2D) that would produce the images
evoked by an object of another class undergoing the same 3D rotation. However, within the class of faces, knowledge of the image transformation evoked
by 3D rotation can be reliably transferred from previously viewed faces to help
identify a novel face at a new viewpoint. We show, through computational simulations, that an architecture which applies this method of gaining invariance to
class-specific transformations is effective when restricted to faces and fails spectacularly when applied to other object classes. We argue here that in order to
accomplish viewpoint-invariant face identification from a single example view,
visual cortex must separate the circuitry involved in discounting 3D rotations of
faces from the generic circuitry involved in processing other objects. The resulting
model of the ventral stream of visual cortex is consistent with the recent physiology results showing the hierarchical organization of the face processing network.
1 Introduction
There is increasing evidence that visual cortex contains discrete patches involved in processing faces
but not other objects [2, 3, 4, 5, 6, 7]. Though progress has been made recently in characterizing
the properties of these brain areas, the computational-level reason the brain adopts this modular
architecture has remained unknown.
In this paper, we propose a new computational-level explanation for why visual cortex separates face
processing from object processing. Our argument does not require us to claim that faces are automatically processed in ways that are inapplicable to objects (e.g. gaze detection, gender detection)
or that cortical specialization for faces arises due to perceptual expertise [8], though the perspective
that emerges from our model is consistent with both of these claims.
We show that the task of identifying individual faces in an optimally viewpoint invariant way from
single training examples requires a separate neural circuitry specialized for faces. The crux of this
identification problem involves discounting transformations of the target individual's appearance.
Generic transformations, e.g., translation, scaling and 2D in-plane rotation, can be learned from any
Figure 1: Layout of face-selective regions in macaque visual cortex, adapted from [1] with permission.
object class and usefully applied to any other class [9]. Other transformations, which are class-specific, include changes in viewpoint and illumination. They depend on the object's 3D structure
and material properties, both of which vary between, but not within, certain object classes. Faces
are the prototypical example of such a class where the individual objects are similar to each other.
In this paper, we describe a method by which invariance to class-specific transformations can be
encoded and used for within-class identification. The resulting model of visual cortex must separate
the representations of different classes in order to achieve good performance.
This analysis is mainly computational but has implications for neuroscience and psychology. Section 2 of this paper describes the recently discovered hierarchical organization of the macaque face
processing network [1]. Sections 3 and 4 describe an extension to an existing hierarchical model
of object recognition to include invariances for class-specific transformations. The final section explains why the brain should have separate modules and relates the proposed computational model
to physiology and neuroimaging evidence that the brain does indeed separate face recognition from
object recognition.
2 The macaque face recognition hierarchy
In macaques, there are 6 discrete face-selective regions in the ventral visual pathway, one posterior
lateral face patch (PL), two middle face patches (lateral- ML and fundus- MF), and three anterior
face patches, the anterior fundus (AF), anterior lateral (AL), and anterior medial (AM) patches [5, 4].
At least some of these patches are organized into a feedforward hierarchy. Visual stimulation evokes
a change in the local field potential ∼20 ms earlier in ML/MF than in patch AM [1]. Consistent with
a hierarchical organization involving information passing from ML/MF to AM via AL, electrical
stimulation of ML elicited a response in AL and stimulation in AL elicited a response in AM [10].
The firing rates of cells in ML/MF are most strongly modulated by face viewpoint. Further along
the hierarchy, in patch AM, cells are highly selective for individual faces but tolerate substantial
changes in viewpoint [1].
The computational role of this recently discovered hierarchical organization is not yet established.
In this paper, we argue that such a system ? with view-tuned cells upstream from view-invariant
identity-selective cells ? is ideally suited to support face identification. In the subsequent sections,
we present a model of the ventral stream that is consistent with a large body of experimental results¹
and additionally predicts the existence of discrete face-selective patches organized in this manner.
¹See [11] for a review.
3 Hubel-Wiesel inspired hierarchical models of object recognition
At the end of the ventral visual pathway, cells in the most anterior parts of visual cortex respond
selectively to highly complex stimuli and also invariantly over several degrees of visual angle. Hierarchical models inspired by Hubel and Wiesel's work, H-W models, seek to achieve similar selectivity and invariance properties by subjecting visual inputs to successive tuning and pooling operations
[12, 13, 14, 15]. A major algorithmic claim made by these H-W models is that repeated application of this AND-like tuning operation is the source of the selective responses of cells at the end
of the ventral stream. Likewise, repeated application of OR-like pooling operations yield invariant
responses.
Hubel and Wiesel described complex cells as pooling the outputs of simple cells with the same optimal stimuli but receptive fields in different locations [16]. This pooling-over-position arrangement
yields complex cells with larger receptive fields. That is, the operation transforms a position sensitive
input to a (somewhat) translation invariant output. Similar pooling operations can also be employed
to gain tolerance to other image transformations, including those induced by changes in viewpoint
or illumination. Beyond V1, neurons can implement pooling just as they do within V1. Complex
cells could pool over any transformation, e.g. viewpoint, simply by connecting to (simple-like) cells
that are selective for the appearance of the same feature at different viewpoints.
The specific H-W model which we extended in this paper is commonly known as HMAX [14,
17]; analogous extensions could be done for many related models. In this model, simple (S) cells
compute a measure of their input's similarity to a stored optimal feature via a gaussian radial basis
function or a normalized dot product. Complex (C) cells pool over S cells by computing the max
response of all the S cells with which they are connected. These operations are typically repeated in
a hierarchical manner, with the output of one C layer feeding into the next S layer and so on.
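The S (tuning) and C (pooling) operations just described can be sketched in a few lines of numpy. This is a toy illustration with invented dimensions and random templates, not the model's actual implementation:

```python
import numpy as np

def s_layer(x, templates, sigma=1.0):
    """Tuning (AND-like): each S cell measures the gaussian similarity
    of the input vector x to one stored template."""
    d2 = ((templates - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def c_layer(s_responses, pool_groups):
    """Pooling (OR-like): each C cell takes the max over its group of
    S cells, gaining tolerance to whatever varies within the group."""
    return np.array([s_responses[g].max() for g in pool_groups])

rng = np.random.default_rng(0)
templates = rng.normal(size=(6, 4))           # 6 S cells over 4-d inputs
x = templates[2] + 0.01 * rng.normal(size=4)  # input close to template 2
s = s_layer(x, templates)
c = c_layer(s, [np.array([0, 1, 2]), np.array([3, 4, 5])])
# The C cell whose pool contains the matching template responds strongly.
```

Stacking these two operations, with the C output of one stage feeding the S cells of the next, gives the hierarchical arrangement described in the text.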
The max-pooling operation we employ can be viewed as an idealized mathematical description of
the operation obtained by a system that has accurately associated template images across transformations. These associations could be acquired by a learning rule that connects input patterns that
occur nearby in time to the same C unit. Numerous algorithms have been proposed to solve this
invariance-learning problem through temporal association [18, 19, 20, 21, 22]. There is also psychophysical and physiological evidence that visual cortex employs a temporal association strategy (see footnote 2)
[23, 24, 25, 26, 27].
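One family of such rules can be illustrated with a Földiák-style trace rule. The sketch below is a deliberately simplified, hypothetical version: the parameters, and the assumption that a single C unit stays active throughout a temporally contiguous frame sequence, are invented for illustration only:

```python
import numpy as np

def trace_rule(frames, lr=0.5, decay=0.5):
    """Connect S-like inputs to a single C unit using a temporal trace:
    the trace decays slowly, so temporally adjacent frames (assumed to be
    views of one object) reinforce the same set of weights."""
    w = np.zeros(frames.shape[1])
    trace = 0.0
    for x in frames:
        trace = decay * trace + (1.0 - decay)  # C unit kept active over the sequence
        w += lr * trace * x                    # Hebbian update gated by the trace
    return w

# Two "views of the same object" presented in succession: the C unit ends up
# wired to both, i.e. it pools over the transformation relating them.
view_a = np.array([1.0, 0.0, 0.0, 0.0])
view_b = np.array([0.0, 1.0, 0.0, 0.0])
w = trace_rule(np.stack([view_a, view_b]))
```

After the sequence, the unit has nonzero weight on both views, so its max-like response tolerates the transformation that related them in time.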
4 Invariance to class-specific transformations
H-W models can gain invariance to some transformations in a generic way. When the appearance of
an input image under the transformation depends only on information available in a single example
(e.g., translation, scaling, and in-plane rotation), then the model's response to any image undergoing
the transformation will remain constant no matter what templates were associated with one another
to build the model. For example, a face can be encoded invariantly to translation as a vector of
similarities to previously viewed template images of any other objects. The similarity "values" need
not be high as long as they remain consistent across positions [9]. We refer to transformations with
this property as generic, and note that they are the most common. Other transformations are class-specific, that is, they depend on information about the depicted object that is not available in a single
image. For example, the 2D image evoked by an object undergoing a change in viewpoint depends
on its 3D structure. Likewise, the images evoked by changes in illumination depend on the object?s
material properties. These class-specific properties can be learned from one or more exemplars of
the class and applied to other objects in the class (see also [28, 29]). For this to work, the object
class needs to consist of objects with similar 3D shape and material properties. Faces, as a class, are
consistent enough in both 3D structure and material properties for this to work. Other, more diverse
classes, such as "automobiles", are not.
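The generic case can be checked directly in code: encoding an input as its maximum similarity to each template over all positions yields a code that is unchanged under translation (here, cyclic shift) no matter what the templates are. The sketch below uses random templates, which never resemble the input, to make exactly that point; sizes are invented for illustration:

```python
import numpy as np

def shift_invariant_code(x, templates):
    """Encode x as, for each template, the maximum dot product over all
    cyclic shifts of that template (pooling over position)."""
    n = len(x)
    return np.array([max(np.dot(np.roll(w, t), x) for t in range(n))
                     for w in templates])

rng = np.random.default_rng(1)
templates = rng.normal(size=(5, 8))  # arbitrary templates that never "match" x
x = rng.normal(size=8)

code_original = shift_invariant_code(x, templates)
code_shifted = shift_invariant_code(np.roll(x, 3), templates)
# The two codes agree, whatever the templates were.
```

Shifting the input only permutes which template shift achieves the maximum, so each pooled value, and hence the whole code, is preserved. No such argument is available for 3D rotation, which is why that transformation is class-specific.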
Footnote 2: These temporal association algorithms and the evidence for their employment by visual cortex are interesting in their own right. In this paper we sidestep the issue of how visual cortex associates similar features under different transformations in order to focus on the implications of having the representation that results from applying these learning rules.
[Architecture diagram: Input -> S1 -> C1 -> S2 -> C2 -> S3 -> C3, with alternating tuning and pooling layers.]
Figure 2: Illustration of an extension to the HMAX model to incorporate class-specific invariance to
face viewpoint changes.
In our implementation of the HMAX model, the response of a C cell, associating templates w at each position t, is given by:

    r_w(x) = \max_t \exp\left( -\frac{1}{2\sigma} \sum_{j=1}^{n} (w_{t,j} - x_j)^2 \right)    (1)
The same template w_t is replicated at all positions, so this C response models the outcome of a
temporal association learning process that associated the patterns evoked by a template at each
position. This C response is invariant to translation. An analogous method can achieve viewpoint-tolerant responses. r_w(x) is invariant to viewpoint changes of the input face x, as long as the
3D structure of the face depicted in the template images w_t matches the 3D structure of the face
depicted in x. Since all human faces have a relatively similar 3D structure, r_w(x) will tolerate
substantial viewpoint changes within the domain of faces. It follows that templates derived from a
class of objects with the wrong 3D structure give rise to C cells that do not respond invariantly to
3D rotations.
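A minimal sketch of the gaussian form of equation (1), with the same template replicated at every (cyclic) position, confirms the translation-invariance claim above. The template pattern and sizes are invented for illustration:

```python
import numpy as np

def c_response(x, W, sigma=1.0):
    """Equation (1): r_w(x) = max_t exp(-(1/(2*sigma)) * sum_j (w_{t,j} - x_j)^2),
    where row t of W holds the template placed at position t."""
    d2 = ((W - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma)).max()

# Replicate one local pattern at every cyclic position of an 8-d input space.
pattern = np.array([2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
W = np.stack([np.roll(pattern, t) for t in range(8)])

x = np.roll(pattern, 5)                # the pattern, translated
r1 = c_response(x, W)
r2 = c_response(np.roll(x, 2), W)      # translate it further
# r1 == r2: the C response is unchanged by translation of the input.
```

Because some row of W always matches the translated pattern exactly, the max is attained at a distance of zero regardless of where the input sits, which is the invariance the text describes.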
Figures 3 and 4 show the performance of the extended HMAX model on viewpoint-invariant (figure 3)
and illumination-invariant (figure 4) within-category identification tasks. Both of these are one-shot
learning tasks. That is, a single view of a target object is encoded and a simple classifier (nearest
neighbors) must rank test images depicting the same object as being more similar to the encoded
target than to images of any other objects. Both targets and distractors were presented under varying
viewpoints and illuminations. This task models the common situation of encountering a new face or
object at one viewpoint and then being asked to recognize it again later from a different viewpoint.
The original HMAX model [14], represented here by the red curves (C2), shows a rapid decline
in performance due to changes in viewpoint and illumination. In contrast, the C3 features of the
extended HMAX model perform significantly better than C2. Additionally, the performance of the
C3 features is not strongly affected by viewpoint and illumination changes (see the plots along the
diagonal in figures 3I and 4I).
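The one-shot evaluation protocol just described can be sketched as follows. The encodings here are random stand-ins for C2/C3 responses (the perturbed vectors model an invariant feature), and the AUC helper is a straightforward pairwise implementation; all names and sizes are invented for illustration:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via pairwise comparisons: the fraction of (positive, negative)
    pairs ranked correctly, counting ties as 1/2."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

def correlation_score(a, b):
    """Correlation used as the similarity for nearest-neighbor ranking."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
target_encoding = rng.normal(size=16)            # one encoded view of the target
# Views of the same object under a transformation: mildly perturbed encodings,
# a stand-in for a transformation-tolerant feature such as C3.
same_object = [target_encoding + 0.1 * rng.normal(size=16) for _ in range(20)]
distractors = [rng.normal(size=16) for _ in range(20)]  # unrelated objects

pos = [correlation_score(target_encoding, v) for v in same_object]
neg = [correlation_score(target_encoding, v) for v in distractors]
result = auc(pos, neg)  # near 1.0 when the encoding tolerates the transformation
```

In the paper's experiments the same machinery is applied with real C2 or C3 responses; an encoding that degrades under the transformation (like C2 under viewpoint change) drives the AUC toward chance.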
Figure 3: Viewpoint invariance. Bottom panel (II): Example images from three classes of stimuli.
Class A consists of faces produced using FaceGen (Singular Inversions). Class B is a set of synthetic
objects produced using Blender (Stichting Blender Foundation). Each object in this class has a
central spike protruding from a sphere and two bumps always in the same location on top of the
sphere. Individual objects differ from one another by the direction in which another protrusion comes
off of the central spike and the location/direction of an additional protrusion. Class C is another
set of synthetic objects produced using Blender. Each object in this class has a central pyramid
on a flat plane and two walls on either side. Individual objects differ in the location and slant of
three additional bumps. For both faces and the synthetic classes, there is very little information
to disambiguate individuals from views of the backs of the objects. Top panel (I): Each column
shows the results of testing the model's viewpoint-invariant recognition performance on a different
class of stimuli (A,B or C). The S3/C3 templates were obtained from objects in class A in the top
row, class B in the middle row and class C in the bottom row. The abscissa of each plot shows
the maximum invariance range (maximum deviation from the frontal view in either direction) over
which targets and distractors were presented. The ordinate shows the AUC obtained for the task of
recognizing an individual novel object despite changes in viewpoint. The model was never tested
using the same images that were used to produce S3/C3 templates. A simple correlation-based
nearest-neighbor classifier must rank all images of the same object at different viewpoints as being
more similar to the frontal view than other objects. The red curves show the resulting AUC when
the input to the classifier consists of C2 responses and the blue curves show the AUC obtained
when the classifier?s input is the C3 responses only. Simulation details: These simulations used
2000 translation and scaling invariant C2 units tuned to patches of natural images. The choice of
natural image patches for S2/C2 templates had very little effect on the final results. Error bars (+/- one standard deviation) show the results of cross validation by randomly choosing a set of example
images to use for producing S3/C3 templates and testing on the rest of the images. The above
simulations used 710 S3 units (10 exemplar objects and 71 views) and 10 C3 units.
Figure 4: Illumination invariance. Same organization as in figure 3. Bottom panel (II): Example
images from three classes of stimuli. Each class consists of faces with different light reflectance
properties, modeling different materials. Class A was opaque and non-reflective like wood. Class
B was opaque but highly reflective like a shiny metal. Class C was translucent like glass. Each
image shows a face's appearance corresponding to a different location of the source of illumination
(the lamp). The face models were produced using FaceGen and modified with Blender. Top panel
(I): Columns show the results of testing illumination-invariant recognition performance on class A
(left), B (middle) and C (right). S3/C3 templates were obtained from objects in class A (top row),
B (middle row), and C (bottom row). The model was never tested using the same images that were
used to produce S3/C3 templates. As in figure 3, the abscissa of each plot shows the maximum
invariance range (maximum distance the light could move in either direction away from a neutral
position where the lamp is even with the middle of the head) over which targets and distractors
were presented. The ordinate shows the AUC obtained for the task of recognizing an individual
novel object despite changes in illumination. A correlation-based nearest-neighbor "classifier" must
rank all images of the same object under each illumination condition as being more similar to the
neutral view than other objects. The red curves show the resulting AUC when the input to the
classifier consists of C2 responses and the blue curves show the AUC obtained when the classifier's
input is the C3 responses only. Simulation details: These simulations used 80 translation and scaling
invariant C2 units tuned to patches of natural images. The choice of natural image patches for S2/C2
templates had very little effect on the final results. Error bars (+/- one standard deviation) show the
results of cross validation by randomly choosing a set of example images to use for producing
S3/C3 templates and testing on the rest of the images. The above simulations used 1200 S3 units
(80 exemplar faces and 15 illumination conditions) and 80 C3 units.
The C3 features are class-specific. Good performance on within-category identification is obtained
using templates derived from the same category (plots along the diagonal in figures 3I and 4I). When
C3 features from the wrong category are used in this way, performance suffers (off-diagonal plots).
In all these cases, the C2 features which encode nothing specifically useful for taking into account
the relevant transformation perform as well as or better than C3 features derived from objects of the
wrong class. It follows that if the brain is using an algorithm of this sort (an H-W architecture) to accomplish within-category identification, then it must separate the circuitry that produces invariance
for the transformations that objects of one class undergo from the circuitry producing invariance to
the transformations that other classes undergo.
5 Conclusion
Everyday visual tasks require reasonably good invariance to non-generic transformations like
changes in viewpoint and illumination (see footnote 3). We showed that a broad class of ventral stream models
that is well-supported by physiology data (H-W models) require class-specific modules in order to
accomplish these tasks.
The recently-discovered macaque face-processing hierarchy bears a strong resemblance to the architecture of our extended HMAX model. The responses of cells in an early part of the hierarchy
(patches ML and MF) are strongly dependent on viewpoint, while the cells in a downstream area
(patch AM) tolerate large changes in viewpoint. Identifying the S3 layer of our extended HMAX
model with the ML/MF cells and the C3 layer with the AM cells is an intriguing possibility. Another mapping from the model to the physiology could be to identify the outputs of simple classifiers
operating on C2, S3 or C3 layers with the responses of cells in ML/MF and AM.
Fundamentally, the 3D rotation of an object class with one 3D structure e.g., faces, is not the same
as the 3D rotation of another class of objects with a different 3D structure. Generic circuitry cannot
take into account both transformations at once. The same argument applies to all other non-generic
transformations as well. Since the brain must take these transformations into account in interpreting
the visual world, it follows that visual cortex must have a modular architecture. Object classes
that are important enough to require invariance to these transformations of novel exemplars must
be encoded by dedicated circuitry. Faces are clearly a sufficiently important category of objects to
warrant this dedication of resources. Analogous arguments apply to a few other categories; human
bodies all have a similar 3D structure and also need to be seen and recognized under a variety
of viewpoint and illumination conditions, likewise, reading is an important enough activity that it
makes sense to encode the visual transformations that words and letters undergo with dedicated
circuitry (changes in font, viewing angle, etc). We do not think it is coincidental that, just as for
faces, brain areas which are thought to be specialized for visual processing of the human body (the
extrastriate body area [32]) and reading (the visual word form area [33, 34]) are consistently found
in human fMRI experiments.
We have argued in favor of visual cortex implementing a modularity of content rather than process.
The computations performed in each dedicated processing region can remain quite similar to the
computations performed in other regions. Indeed, the connectivity within each region can be wired
up in the same way, through temporal association. The only difference across areas is the object
class (and the transformations) being encoded. In this view, visual cortex must be modular in order
to succeed in the tasks with which it is faced.
Acknowledgments
This report describes research done at the Center for Biological & Computational Learning, which
is in the McGovern Institute for Brain Research at MIT, as well as in the Dept. of Brain & Cognitive
Footnote 3: It is sometimes claimed that human vision is not viewpoint invariant [30]. It is certainly true that performance on psychophysical tasks requiring viewpoint invariance is worse than on tasks requiring translation invariance. This is fully consistent with our model. The 3D structure of faces does not vary wildly within the class, but there is certainly still some significant variation. It is this variability in 3D structure within the class that is the source of the model's imperfect performance. Many psychophysical experiments on viewpoint invariance were performed with synthetic "paperclip" objects defined entirely by their 3D structure. Our model predicts particularly weak performance on viewpoint-tolerance tasks with these stimuli and that is precisely what is observed [31].
Sciences, and which is affiliated with the Computer Sciences & Artificial Intelligence Laboratory
(CSAIL). This research was sponsored by grants from DARPA (IPTO and DSO), National Science
Foundation (NSF-0640097, NSF-0827427), AFSOR-THRL (FA8650-05-C-7262). Additional support was provided by: Adobe, Honda Research Institute USA, a King Abdullah University of Science and Technology grant to B. DeVore, NEC, Sony and especially by the Eugene McDermott Foundation.
References
[1] W. Freiwald and D. Tsao, "Functional Compartmentalization and Viewpoint Generalization Within the Macaque Face-Processing System," Science, vol. 330, no. 6005, p. 845, 2010.
[2] N. Kanwisher, J. McDermott, and M. Chun, "The fusiform face area: a module in human extrastriate cortex specialized for face perception," The Journal of Neuroscience, vol. 17, no. 11, p. 4302, 1997.
[3] K. Grill-Spector, N. Knouf, and N. Kanwisher, "The fusiform face area subserves face perception, not generic within-category identification," Nature Neuroscience, vol. 7, no. 5, pp. 555-562, 2004.
[4] D. Tsao, W. Freiwald, R. Tootell, and M. Livingstone, "A cortical region consisting entirely of face-selective cells," Science, vol. 311, no. 5761, p. 670, 2006.
[5] D. Tsao, W. Freiwald, T. Knutsen, J. Mandeville, and R. Tootell, "Faces and objects in macaque cerebral cortex," Nature Neuroscience, vol. 6, no. 9, pp. 989-995, 2003.
[6] R. Rajimehr, J. Young, and R. Tootell, "An anterior temporal face patch in human cortex, predicted by macaque maps," Proceedings of the National Academy of Sciences, vol. 106, no. 6, p. 1995, 2009.
[7] S. Ku, A. Tolias, N. Logothetis, and J. Goense, "fMRI of the Face-Processing Network in the Ventral Temporal Lobe of Awake and Anesthetized Macaques," Neuron, vol. 70, no. 2, pp. 352-362, 2011.
[8] M. Tarr and I. Gauthier, "FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise," Nature Neuroscience, vol. 3, pp. 764-770, 2000.
[9] J. Z. Leibo, J. Mutch, L. Rosasco, S. Ullman, and T. Poggio, "Learning Generic Invariances in Object Recognition: Translation and Scale," MIT-CSAIL-TR-2010-061, CBCL-294, 2010.
[10] S. Moeller, W. Freiwald, and D. Tsao, "Patches with links: a unified system for processing faces in the macaque temporal lobe," Science, vol. 320, no. 5881, p. 1355, 2008.
[11] T. Serre, M. Kouh, C. Cadieu, U. Knoblich, G. Kreiman, and T. Poggio, "A theory of object recognition: computations and circuits in the feedforward path of the ventral stream in primate visual cortex," CBCL Paper #259/AI Memo #2005-036, 2005.
[12] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, pp. 193-202, Apr. 1980.
[13] M. Riesenhuber and T. Poggio, "Hierarchical models of object recognition in cortex," Nature Neuroscience, vol. 2, pp. 1019-1025, Nov. 1999.
[14] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio, "Robust Object Recognition with Cortex-Like Mechanisms," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 3, pp. 411-426, 2007.
[15] B. W. Mel, "SEEMORE: Combining Color, Shape, and Texture Histogramming in a Neurally Inspired Approach to Visual Object Recognition," Neural Computation, vol. 9, pp. 777-804, May 1997.
[16] D. Hubel and T. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," The Journal of Physiology, vol. 160, no. 1, p. 106, 1962.
[17] J. Mutch and D. Lowe, "Object class recognition and localization using sparse features with limited receptive fields," International Journal of Computer Vision, vol. 80, no. 1, pp. 45-57, 2008.
[18] P. Földiák, "Learning invariance from transformation sequences," Neural Computation, vol. 3, no. 2, pp. 194-200, 1991.
[19] S. Stringer and E. Rolls, "Invariant object recognition in the visual system with novel views of 3D objects," Neural Computation, vol. 14, no. 11, pp. 2585-2596, 2002.
[20] L. Wiskott and T. Sejnowski, "Slow feature analysis: Unsupervised learning of invariances," Neural Computation, vol. 14, no. 4, pp. 715-770, 2002.
[21] T. Masquelier, T. Serre, S. Thorpe, and T. Poggio, "Learning complex cell invariance from natural videos: A plausibility proof," AI Technical Report #2007-060, CBCL Paper #269, 2007.
[22] M. Spratling, "Learning viewpoint invariant perceptual representations from cluttered images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 753-761, 2005.
[23] D. Cox, P. Meier, N. Oertelt, and J. J. DiCarlo, "'Breaking' position-invariant object recognition," Nature Neuroscience, vol. 8, no. 9, pp. 1145-1147, 2005.
[24] N. Li and J. J. DiCarlo, "Unsupervised natural experience rapidly alters invariant object representation in visual cortex," Science, vol. 321, pp. 1502-7, Sept. 2008.
[25] N. Li and J. J. DiCarlo, "Unsupervised Natural Visual Experience Rapidly Reshapes Size-Invariant Object Representation in Inferior Temporal Cortex," Neuron, vol. 67, no. 6, pp. 1062-1075, 2010.
[26] G. Wallis and H. H. Bülthoff, "Effects of temporal association on recognition memory," Proceedings of the National Academy of Sciences of the United States of America, vol. 98, pp. 4800-4, Apr. 2001.
[27] G. Wallis, B. Backus, M. Langer, G. Huebner, and H. Bülthoff, "Learning illumination- and orientation-invariant representations of objects through temporal association," Journal of Vision, vol. 9, no. 7, 2009.
[28] T. Vetter, A. Hurlbert, and T. Poggio, "View-based models of 3D object recognition: invariance to imaging transformations," Cerebral Cortex, vol. 5, no. 3, p. 261, 1995.
[29] E. Bart and S. Ullman, "Class-based feature matching across unrestricted transformations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 9, pp. 1618-1631, 2008.
[30] H. Bülthoff and S. Edelman, "Psychophysical support for a two-dimensional view interpolation theory of object recognition," Proceedings of the National Academy of Sciences, vol. 89, no. 1, p. 60, 1992.
[31] N. Logothetis, J. Pauls, H. Bülthoff, and T. Poggio, "View-dependent object recognition by monkeys," Current Biology, vol. 4, no. 5, pp. 401-414, 1994.
[32] P. Downing and Y. Jiang, "A cortical area selective for visual processing of the human body," Science, vol. 293, no. 5539, p. 2470, 2001.
[33] L. Cohen, S. Dehaene, and L. Naccache, "The visual word form area," Brain, vol. 123, no. 2, p. 291, 2000.
[34] C. Baker, J. Liu, L. Wald, K. Kwong, T. Benner, and N. Kanwisher, "Visual word processing and experiential origins of functional selectivity in human extrastriate cortex," Proceedings of the National Academy of Sciences, vol. 104, no. 21, p. 9087, 2007.
Novel-Category Recognition
Alessandro Bergamo, Lorenzo Torresani
Dartmouth College
Hanover, NH, U.S.A.
{aleb, lorenzo}@cs.dartmouth.edu
Andrew Fitzgibbon
Microsoft Research
Cambridge, United Kingdom
[email protected]
Abstract
We introduce PICODES: a very compact image descriptor which nevertheless allows high performance on object category recognition. In particular, we address
novel-category recognition: the task of defining indexing structures and image
representations which enable a large collection of images to be searched for an
object category that was not known when the index was built. Instead, the training images defining the category are supplied at query time. We explicitly learn
descriptors of a given length (from as small as 16 bytes per image) which have
good object-recognition performance. In contrast to previous work in the domain
of object recognition, we do not choose an arbitrary intermediate representation,
but explicitly learn short codes. In contrast to previous approaches to learn compact codes, we optimize explicitly for (an upper bound on) classification performance. Optimization directly for binary features is difficult and nonconvex, but
we present an alternation scheme and convex upper bound which demonstrate excellent performance in practice. PICODES of 256 bytes match the accuracy of the
current best known classifier for the Caltech256 benchmark, but they decrease the
database storage size by a factor of 100 and speed-up the training and testing of
novel classes by orders of magnitude.
1 Introduction
In this work we consider the problem of efficient object-class recognition in large image collections.
We are specifically interested in scenarios where the classes to be recognized are not known in
advance. The motivating application is "object-class search by example" where a user provides
at query time a small set of training images defining an arbitrary novel category and the system
must retrieve from a large database images belonging to this class. This application scenario poses
challenging requirements on the system design: the object classifier must be learned efficiently at
query time from few examples; recognition must have low computational cost with respect to the
database size; finally, compact image descriptors must be used to allow storage of large collections
in memory rather than on disk for additional efficiency.
Traditional object categorization methods do not meet these requirements, as they typically use nonlinear kernels on high-dimensional descriptors, which renders them computationally expensive to train and test, and causes them to occupy large amounts of storage. For example, the LP-β multiple kernel combiner [11] achieves state-of-the-art accuracy on several categorization benchmarks, but it requires over 23 Kbytes to represent each image and it uses 39 feature-specific nonlinear kernels. This recognition model is impractical for our application because it would require costly query-time kernel evaluations for each image in the database, since the training set varies with every new query and thus pre-calculation of kernel distances is not possible.
We propose to address these storage and efficiency requirements by learning a compact binary image representation, called PiCoDes¹, optimized to yield good categorization accuracy with linear
¹ Which we think of as "Picture Codes" or "Pico-Descriptors", or (with Greek pronunciation) π-codes.
Figure 1: Visualization of PiCoDes. The 128-bit PiCoDe (whose accuracy on Caltech256 is displayed in figure 3) is applied to the test data of ILSVRC2010. Six of the 128 bits are illustrated as follows: for bit c, all images are sorted by the non-binarized classifier outputs a_cᵀx and the 10 smallest and largest are presented on each row. Note that a_c is defined only up to sign, so the patterns to which the bits are specialized may appear in either the "positive" or "negative" columns.
(i.e., efficient) classifiers. The binary entries in our image descriptor are thresholded nonlinear projections of low-level visual features extracted from the image, such as descriptors encoding texture
or the appearance of local image patches. Each non-linear projection can be viewed as implementing
a nonlinear classifier using multiple kernels. The intuition is that we can then use these pre-learned
multiple kernel combiners as a classification basis to define recognition models for arbitrary novel
categories: the final classifier for a novel class is obtained by linearly combining the binary outputs
of the basis classifiers, which we can pre-compute for every image in the database, thus enabling
efficient novel object-class recognition even in large datasets.
The search for compact codes for images has been the subject of much recent work, which we
loosely divide into "designed" and "learned" codes. In the former category we include min-hash [6], VLAD [14], and attributes [10, 18, 17], which are fully-supervised classifiers trained to recognize certain visual properties in the image. A related idea is the representation of images in terms of distances to basis classes. This has been previously investigated as a way to define image similarities [30], to perform video search [12], or to enable natural scene recognition and retrieval [29]. Torresani et al. [27] define a compact image code as a bitvector, the entries of which are the outputs of a large set of weakly-trained basis classifiers ("classemes") evaluated on the image. Simple linear classifiers trained on classeme vectors produce near state-of-the-art categorization accuracy. Li et al. [19] use the localized outputs of object detectors as an image representation. The advantage of this representation is that it encodes spatial information; furthermore, object detectors are more robust to clutter and uninformative background than classifiers evaluated on the entire image. These prior methods work under the assumption that an "overcomplete" representation for classification can be obtained by pre-learning classifiers for a large number of basis classes, some of which will be related to those encountered at test-time. Such high-dimensional representations are then compressed down using quantization, dimensionality reduction, or feature selection methods.
The second strand of related work is the learning of compact codes for images [31, 26, 24, 15,
22, 8] where binary image codes are learned such that the Hamming distance between codewords
approximates a kernelized distance between image descriptors, most typically GIST. Autoencoder
learning [23], on the other hand, produces a compact code which has good image reconstruction
properties, but again is not specialized for category recognition.
All the above descriptors can produce very compact codes, but few (excepting [27, 19]) have been shown to be effective at category-level recognition beyond simplified problems such as Caltech-20 [2] or Caltech-101 [14, 16]. In contrast, we consider Caltech-256 a baseline competence, and also test compact codes on a large-scale class retrieval task using ImageNet [7].
The goal of this paper then is to learn a compact binary code (as short as 128 bits) which has
good object-category recognition accuracy. In contrast to previous learning approaches, our training
objective is a direct approximation to this goal; while in contrast to previous "designed" descriptors, we learn abstract categories (see figure 1) aimed at optimizing classification rather than an arbitrary predefined set of attributes or classemes, and thus achieve increased accuracy for a given code length.
2 Technical approach
We start by introducing the basic classifier architecture used by state-of-the-art category recognizers, which we want to leverage as effectively as possible to define our image descriptor. Given an image I, a bank of feature descriptors is computed (e.g. SIFT, PHOG, GIST), to yield a feature vector f_I ∈ R^F (the feature vector used in our implementation has dimensionality F = 17360 and is described in the experimental section). State-of-the-art recognizers use kernel matching between these descriptors to define powerful classifiers, nonlinear in f_I. For example, the LP-β classifier of Gehler and Nowozin [11], which has achieved the best results on several benchmarks to date, operates by combining the outputs of nonlinear SVMs trained on individual features. In our work we approximate these nonlinear classifiers by employing the lifting method of Vedaldi and Zisserman [28]: for the family of homogeneous additive kernels K, there exists a finite-dimensional feature map ψ : R^F → R^(F(2r+1)) such that the nonlinear kernel distance K(f_I, f_I') ≈ ⟨ψ(f_I), ψ(f_I')⟩, where r is a small positive integer (in our implementation set to 1). These explicit feature maps allow us to approximate a non-linear classifier, such as the LP-β kernel combiner, via an efficient linear projection. As described below, we use these nonlinear classifiers approximated via linear projections as the basis for learning our features. However, in our case F(2r+1) = 17360 × 3 = 52080. This dimensionality is too large in practice for our learning. Thus, we apply a linear dimensionality reduction x_I = Pψ(f_I), where the projection matrix P is obtained through PCA, so x_I ∈ R^n for n ≪ F(2r+1). As this procedure is performed identically for every image we consider, we will drop the dependence on I and simply refer to the "image" x.

Figure 2: The accuracy versus compactness trade-off. The benchmark is Caltech256, using 10 examples per class. The pink cross shows the multi-class categorization accuracy achieved by an LP-β classifier using kernel distance; the red triangle is the accuracy of an LP-β classifier that uses "lifted-up" features to approximate kernel distances; the blue line shows accuracy of a linear SVM trained on PCA projections of the lifted-up features, as a function of PCA dimension.
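To make the explicit-map idea concrete, the sketch below (ours, not the paper's code) implements the Vedaldi–Zisserman homogeneous kernel map for the scalar histogram-intersection kernel K(x, y) = min(x, y). The paper sets r = 1 for compactness; the sketch defaults to more spectral samples so the approximation quality can be checked numerically.

```python
import numpy as np

def intersection_feature_map(x, r=8, L=0.4):
    """Explicit feature map for the intersection kernel K(x, y) = min(x, y),
    following the Vedaldi-Zisserman homogeneous-kernel-map construction:
    the kernel signature's spectrum kappa is sampled at frequencies j*L,
    j = -r..r, mapping a scalar x > 0 to a (2r+1)-dimensional vector whose
    dot products approximate the kernel."""
    # Spectrum of the intersection-kernel signature: kappa(w) = (2/pi) / (1 + 4 w^2)
    kappa = lambda w: (2.0 / np.pi) / (1.0 + 4.0 * w**2)
    feats = [np.sqrt(x * L * kappa(0.0))]
    for j in range(1, r + 1):
        w = j * L
        amp = np.sqrt(2.0 * x * L * kappa(w))
        feats.append(amp * np.cos(w * np.log(x)))
        feats.append(amp * np.sin(w * np.log(x)))
    return np.array(feats)
```

Applying such a map coordinate-wise to a histogram and concatenating the results gives the (2r+1)-times larger lifted vector ψ(f) referred to in the text.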
A natural question to address is: how much accuracy do we lose due to the kernel approximation and the PCA projection? We answer this question in figure 2, where we compare the multi-class classification accuracies obtained on the Caltech256 data set by the following methods using our low-level descriptors f ∈ R^17360: an LP-β combiner based on exact non-linear kernel calculations; an LP-β combiner using explicit feature maps; and a linear SVM trained on the PCA projections x as a function of the PCA subspace dimensionality. We see from this figure that the explicit maps degrade the accuracy only slightly, which is consistent with the results reported in [28]. However, the linear SVM produces slightly inferior accuracy even when applied to the full 52,080-dimensional feature vectors. The key difference between the linear SVM and the LP-β classifier is that the former defines a classifier in the joint space of all 13 features, while the latter first trains a separate classifier for each feature and then learns a linear combination of them. The results in our figure suggest that the two-step procedure of LP-β provides a form of beneficial regularization, a fact first noted in [11].
For our feature learning algorithm, we chose to use a PCA subspace of dimensionality n = 6415 since, as suggested by the plot, this setting gives a good tradeoff in terms of compact dimensionality and good recognition accuracy.
Torresani et al. [27] have shown that an effective image descriptor for categorization can be built by collecting in a vector the thresholded outputs of a large set of nonlinear classifiers evaluated on the image. This "classeme" descriptor can produce recognition accuracies within 10% of the state of the art for novel classes, even with simple linear classification models. Using our formulation based on explicit feature maps, we can approximately express each classeme entry (which in [27] is implemented as an LP-β classifier) as the output of a linear classifier

    h(x; a_c) = 1[a_cᵀ x > 0]    (1)

where 1[·] is the 0-1 indicator function of its boolean argument and x is the PCA-projection of ψ(f), with a 1 appended to it to avoid dealing with an explicit bias coefficient, i.e., x = [Pψ(f); 1].
If following the approach of Torresani et al. [27], we would collect C training categories and learn the parameters a_c for each class from offline training data using some standard training objective such as hinge loss. We gather the parameters into an n × C matrix

    A = [a_1 | … | a_C].

Then, for image x, the "classeme" descriptor h(x) is computed as the concatenation of the outputs of the classifiers learned for the training categories:

    h(x; A) = [h(x; a_1); …; h(x; a_C)] ∈ {0, 1}^C    (2)

The PiCoDes descriptor is also of this form. However, the key difference with respect to [27] lies in our training procedure, and in the fact that the dimensionality C is no longer restricted to be the same as the number of training classes.
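The descriptor computation of eqs. 1–2 is a single thresholded matrix product; the minimal numpy sketch below (ours, with made-up sizes n and C — the real A is learned as described in section 2.1) evaluates it and packs the bits for storage.

```python
import numpy as np

# Hypothetical sizes: n-dimensional projected feature x (bias 1 appended), C basis classifiers.
rng = np.random.default_rng(0)
n, C = 64, 128
A = rng.standard_normal((n, C))            # columns a_c: projection directions (here random)

def picodes(x, A):
    """Binary descriptor h(x; A): bit c is 1[a_c^T x > 0] (eqs. 1-2)."""
    return (A.T @ x > 0).astype(np.uint8)

x = np.append(rng.standard_normal(n - 1), 1.0)   # PCA projection with a constant 1 appended
h = picodes(x, A)                                # C bits
packed = np.packbits(h)                          # C/8 bytes of storage per image
```

At 128 bits the packed descriptor occupies just 16 bytes, which is what makes in-memory storage of large collections feasible.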
To emphasize once more the contributions of this paper, let us review the shortcomings of existing
attribute- and classifier-based descriptors, which we overcome in this paper:
- Prior work used attributes learned disjointly from one another, which "just so happen" to work well as features for classification, without theoretical justification for their use in subsequent classification. Given that we want to use attributes as features for linear classification, we propose to formalize as learning objective that linear combinations of such attributes must yield good accuracy.
- Unlike the attribute or classeme approach, our method decouples the number of training classes from the target dimensionality of the binary descriptor. We can optimize our features for any arbitrary desired length, thus avoiding a suboptimal feature selection stage.
- Finally, we directly optimize the learning parameters with respect to binary features, while prior attribute systems binarized the features in a quantization stage after the learning.
We now introduce a framework to learn the A parameters directly on a classification objective.
2.1 Learning the basis classifiers
We assume that we are given a set of N training images, with each image coming from one of K training classes. We will continue to let C stand for the dimensionality (i.e., number of bits) of our code. Let D = {(x_i, y_i)}_{i=1}^N be the training set for learning the basis classifiers, where x_i is the i-th image example (represented by its n-dimensional PCA projection augmented with a constant entry set to 1) and y_i ∈ {−1, +1}^K is a vector encoding the category label out of K possible classes: y_ik = +1 iff the i-th example belongs to class k.
We then define our c-th basis classifier to be a boolean function of the form (1), a thresholded nonlinear projection of the original low-level features f, parameterized by a_c ∈ R^n. We then optimize these parameters so that linear combinations of these basis classifiers yield good categorization accuracy on D. The learning objective introduces auxiliary variables (w_k, b_k) for each training class, which parameterize the linear classifier for that training class, operating on the PiCoDes representation of the training examples, and the objective for A simply minimizes over these auxiliaries:

    E(A) = min_{w_{1..K}, b_{1..K}} E(A, w_{1..K}, b_{1..K})    (3)

Solving for A then amounts to simultaneous optimization over all variables of the following learning objective, which is a trade-off between a small classification error and a large margin when using the output bits of the basis classifiers as features in a one-versus-all linear SVM:

    E(A, w_{1..K}, b_{1..K}) = Σ_{k=1}^K { (1/2)‖w_k‖² + (λ/N) Σ_{i=1}^N ℓ[ y_ik (b_k + w_kᵀ h(x_i; A)) ] }    (4)

where ℓ[·] is the traditional hinge loss function. Expanding, we get

    E(A, w_{1..K}, b_{1..K}) = Σ_{k=1}^K { (1/2)‖w_k‖² + (λ/N) Σ_{i=1}^N ℓ[ y_ik (b_k + Σ_{c=1}^C w_kc 1[a_cᵀ x_i > 0]) ] }.
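A direct transcription of eq. 4 into code may help fix the notation; the sketch below is ours, with assumed array shapes, and evaluates the objective for given A, w, b.

```python
import numpy as np

hinge = lambda t: np.maximum(0.0, 1.0 - t)   # standard hinge loss l[.]

def objective(A, W, b, X, Y, lam):
    """Evaluate eq. 4 (illustrative sketch with assumed shapes):
    X: N x n inputs, Y: N x K labels in {-1,+1},
    A: n x C projections, W: C x K class weights, b: length-K biases."""
    H = (X @ A > 0).astype(float)            # N x C binary features h(x_i; A)
    margins = Y * (H @ W + b)                # y_ik * (b_k + w_k . h_i)
    return 0.5 * (W**2).sum() + (lam / X.shape[0]) * hinge(margins).sum()
```

With perfectly separated data the hinge term vanishes and only the regularizer Σ_k ½‖w_k‖² remains, which is the margin half of the trade-off described above.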
Note that the linear SVM and the basis classifiers are learned jointly using the method described
below.
2.2 Optimization
We propose to minimize this error function by block coordinate descent. We alternate between the
two following steps:
1. Learn classifiers.
We fix A and optimize the objective with respect to w and b jointly. This optimization is convex
and equivalent to traditional linear SVM learning.
2. Learn projectors.
Given the current values of w and b, we minimize the objective with respect to A by updating one basis classifier at a time. Let us consider the update of a_c with fixed parameters w_{1..K}, b, a_1, …, a_{c−1}, a_{c+1}, …, a_C. It can be seen (Appendix A) that in this case the objective becomes:

    E(a_c) = Σ_{i=1}^N v_i 1[z_i a_cᵀ x_i > 0] + const    (5)

where z_i ∈ {−1, +1} and v_i ∈ R⁺ are known values computed from the fixed parameters. Optimizing the objective in Eq. 5 is equivalent to learning a linear classifier minimizing the sum of weighted misclassifications, where v_i represents the cost of misclassifying example i. Unfortunately, this objective is not convex and it is difficult to optimize. Thus, we replace it with the following convex upper bound defined in terms of the hinge function ℓ:

    Ê(a_c) = Σ_{i=1}^N v_i ℓ(z_i a_cᵀ x_i)    (6)
This objective can be globally optimized using an LP solver or software for SVM training. We had
success with LIBLINEAR [9], which deals nicely with the large problem sizes we considered.
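Any solver for weighted hinge loss works for eq. 6; the paper uses LIBLINEAR, but a plain subgradient-descent sketch (ours, illustrative only, with assumed step size and iteration count) conveys the idea.

```python
import numpy as np

def upper_bound(a, X, z, v):
    """Convex surrogate of eq. 6: sum_i v_i * hinge(z_i * a^T x_i)."""
    return float((v * np.maximum(0.0, 1.0 - z * (X @ a))).sum())

def solve_projector(X, z, v, steps=500, lr=0.05):
    """Minimize the surrogate by plain subgradient descent. X: N x n examples,
    z: pseudo-labels in {-1,+1}, v: nonnegative per-example costs."""
    N, n = X.shape
    a = np.zeros(n)
    for _ in range(steps):
        margins = z * (X @ a)
        active = margins < 1.0                    # examples on the hinge's sloped part
        grad = -((v * z)[active, None] * X[active]).sum(axis=0)
        a -= lr * grad / N
    return a
```

A production implementation would hand (X, z, v) to an SVM package with instance weights, as the paper does with LIBLINEAR.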
We have also experimented with several other optimization methods, including stochastic gradient descent applied to a modified version of our objective where we replaced the binarization function h(x; a_c) = 1[a_cᵀ x > 0] with the sigmoid function σ(x; a_c) = 1/(1 + exp(−(2/T) a_cᵀ x)) to relax the problem. After learning, at test-time we replaced σ(x; a_c) back with h(x; a_c) to obtain binary descriptors. However, we found that these binary codes performed much worse than those directly learned via the coordinate descent procedure described above.
3 Experiments
We now describe experimental evaluations carried out over several data sets. In order to allow a fair
comparison, we reimplemented the "classeme descriptor" based on the same set of low-level features and settings described in [27], but using the explicit feature map framework to replace the expensive nonlinear kernel distance computations. The low-level features are: color GIST [21], spatial pyramid of histograms of oriented gradients (PHOG) [4], spatial pyramid of self-similarity descriptors [25], and a histogram of SIFT features [20] quantized using a dictionary of 5000 visual words. Each spatial pyramid level of each descriptor was treated as a separate feature, thus producing a total of 13 low-level features. Each of these features was lifted up to a higher-dimensional space using the explicit feature maps of Vedaldi and Zisserman [28]. We chose the mapping approximating the histogram intersection kernels for n = 1, which effectively mapped each low-level feature descriptor to a space 3 times larger than its original one. The resulting vectors ψ(f) have dimensionality 3 × F = 52,080. To learn our basis classifiers, we used 6415-dimensional PCA projections of these high-dimensional vectors.
We compared PiCoDes with binary classeme vectors. For both descriptors we used a training set of K = 2659 classes randomly sampled from the ImageNet dataset [7], with 30 images for each category, for a total of N = 2659 × 30 = 79,770 images. Each class in ImageNet is associated to a "synset", which is a set of words describing the category. Since we wanted to evaluate the
Figure 3: Multiclass categorization accuracy on Caltech256 using different binary codes (PiCoDes; classemes from ImageNet + RFE; LSH [13]; Spectral Hashing [31]; BRE [15]; original classemes [27] + RFE), as a function of the number of bits. PiCoDes outperform all the other compact codes. PiCoDes of 2048 bits match the accuracy of the state-of-the-art LP-β classifier.

Figure 4: Caltech256 classification accuracy for PiCoDes and classemes as a function of the number of training classes used to learn the descriptors.
learned descriptors on the Caltech256 and ILSVRC2010 [3] benchmarks, we selected 2659 ImageNet training classes such that the synsets of these classes do not contain any of the Caltech256 or ILSVRC2010 class labels, so as to avoid "pre-learning" the test classes during the feature-training stage, which could yield a biased evaluation.
We also present comparisons with binary codes trained to directly approximate the Euclidean distances between the vectors x, using the following previously proposed algorithms: locality sensitive hashing (LSH) [13], spectral hashing (SH) [31], and binary reconstructive embeddings (BRE) [15]. Since these descriptors in the past have been used predominantly with the k-NN classifier, we also tested this classification model, but obtained inferior results compared to a linear SVM. For this reason, here we report only results using the linear SVM model.
Multiclass recognition using PiCoDes. We first report in figure 3 the results showing multiclass classification accuracy achieved with binary codes on the Caltech256 data set. Since PiCoDes are optimized for categorization using linear models, we adopt simple linear "one-versus-all" SVMs as classifiers. For each Caltech256 category, the classifier was trained using 10 positive examples and a total of 2550 negative examples obtained by sampling 10 images from each of the other classes. We computed accuracies using 25 test examples per class, using 5-fold cross validation for the model selection. As usual, accuracy is computed as the average over the mean recognition rates per class.
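The evaluation metric can be stated precisely in a few lines; the helper below (ours, illustrative) computes accuracy as the average of the per-class recognition rates, so rare and frequent classes count equally.

```python
import numpy as np

def mean_per_class_accuracy(y_true, y_pred):
    """Average of per-class recognition rates, the standard Caltech256 metric."""
    classes = np.unique(y_true)
    rates = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(rates))
```

Note that this differs from plain accuracy whenever the test classes are unevenly sized.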
Figure 3 shows the results obtained with binary descriptors of varying dimensionality. While our approach can easily accommodate the case where the number of feature dimensions (C) is different from the number of feature-training categories (K), the classeme learning method can only produce descriptors of size K. Thus, the descriptor size is typically reduced through a subsequent feature
selection stage [27, 19]. In this figure we show accuracy obtained with classeme features selected
by multi-class recursive feature elimination (RFE) with SVM [5], which at each iteration retrains
the SVMs for all classes on the active features and then removes the m least-used active features
until reaching the desired compactness. We also report accuracy obtained with the original classeme
vectors of [27], which were learned with exact kernels on a different training set, consisting of
weakly-supervised images retrieved with text-based image search engines. From this figure we see
that PiCoDes greatly outperform all the other compact codes considered here (classemes, LSH, SH, BRE) for all descriptor sizes. In addition, perhaps surprisingly, PiCoDes of 2048 bits yield even higher accuracy than the state-of-the-art multiple kernel combiner LP-β [11] trained on our low-level features f (30.5% versus 29.7%). At the same time, our codes are 100 times smaller and reduce the training and testing time by two orders of magnitude compared to LP-β.
We have also investigated the influence of the parameter K, i.e., the number of training classes used to learn the descriptor. We learned different PiCoDes and classeme descriptors by varying K while keeping the number of training examples per class fixed to 30. Figure 4 shows the multiclass categorization accuracy on Caltech256 as a function of K. From this plot we see that PiCoDes
Figure 5: Precision of object-class search using codes of 256 bytes on Caltech256: for a varying number of training examples per class, we report the percentage of true positives in the top 25 retrieved from a dataset containing 6375 distractors and 25 relevant results.

Figure 6: Finding pictures of an object class in the ILSVRC2010 dataset, which includes 150K images for 1000 different classes, using 256-byte codes. PiCoDes enable accurate class retrieval from this large collection in less than a second.
profit more than classemes from a larger number of training classes, producing further improvement in generalization on novel classes.
An advantage of the classeme learning setup presented in [27] is the intrinsic parallelization that can be achieved during the learning of the C classeme classifiers (which are disjointly trained), enabling the use of more training data. We have considered this scenario and tried learning the classeme descriptors from ImageNet using 5 times more images than for PiCoDes, i.e., 150 images for each training category, for a total of N = 2659 × 150 = 398,850 examples. Despite the disparate training set sizes, we found that PiCoDes still outperformed classemes (22.41% versus 20.4% for 512 bits).
Retrieval of object classes on Caltech256. In figure 5 we present results corresponding to our motivating application of object-class search, using codes of 256 bytes. For each Caltech256 class, we trained a one-versus-all linear SVM using p positive examples and p × 255 negative examples, for varying values of p. We then used the learned classifier to find images of that category in a database containing 6400 Caltech256 test images, with 25 images per class. The retrieval accuracy is measured as precision at 25, which is the proportion of true positives (i.e., images of the query class) ranked in the top 25. Again, we see that our features yield consistently better ranking precision compared to classeme vectors learned on the same ImageNet training set, and produce an average improvement of about 28% over the original classeme features.
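For reference, precision at k is simply the fraction of relevant items among the k highest-scoring images; a minimal sketch (ours):

```python
import numpy as np

def precision_at_k(scores, relevant, k=25):
    """Fraction of true positives among the k top-scored database items.
    scores: per-image classifier outputs; relevant: boolean ground truth."""
    top = np.argsort(-scores)[:k]
    return float(np.mean(relevant[top]))
```

With 25 relevant images in the database, precision at 25 equals recall at 25, which is why a single number summarizes this retrieval setting.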
Object-class search in a large image collection. Finally, we present experiments on the 150K-image data set of the Large Scale Visual Recognition Challenge 2010 (ILSVRC2010) [3], which includes images of 1000 different categories, different from those used to train PiCoDes. Again, we evaluate our binary codes on the task of object-class retrieval. For each of the 1000 classes, we train a linear SVM using all examples of that class available in the ILSVRC2010 training set (this number varies from a minimum of 619 to a maximum of 3047, depending on the class) and 4995 negative examples obtained by sampling five images from each of the other classes. We test the classifiers on the ILSVRC2010 test set, which includes 150 images for each of the 1000 classes. Figure 6 shows a comparison between PiCoDes and classemes in terms of precision at k for varying k. Despite the very large number of distractors (149,850 for each query), search with our codes yields precisions exceeding 38%. Furthermore, the tiny size of our descriptor allows the entire data set to be easily kept in memory for efficient retrieval (the whole database size using our representation is only 36 MB): the average search time for a query class, including the learning time, is about 1 second on an Intel Xeon X5680 @ 3.33 GHz.
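The storage arithmetic is easy to check: on the order of 150,000 images × 256 bytes is a few tens of megabytes, consistent with the figure quoted above. The sketch below (ours, on a small synthetic database with a hypothetical classifier w) shows one way to score a novel-class linear classifier directly over packed codes, unpacking in chunks so memory stays bounded.

```python
import numpy as np

rng = np.random.default_rng(1)
N_db, C = 10_000, 2048                    # toy database; the paper's holds 150K images
db = rng.integers(0, 256, size=(N_db, C // 8), dtype=np.uint8)   # packed 256-byte codes

def rank_by_classifier(db_packed, w, b=0.0, top_k=10, chunk=4096):
    """Score every packed binary code with a linear classifier w . h + b and
    return the indices of the top_k matches; codes are unpacked chunk by chunk."""
    N = db_packed.shape[0]
    scores = np.empty(N, dtype=np.float32)
    for s in range(0, N, chunk):
        bits = np.unpackbits(db_packed[s:s + chunk], axis=1).astype(np.float32)
        scores[s:s + chunk] = bits @ w + b
    return np.argsort(-scores)[:top_k]
```

Because the per-image work is a single sparse-friendly dot product, a full scan of a 150K-image database remains fast even with classifier training included.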
4 Conclusion
We have described a new type of compact code, which is learned by directly minimizing a multiclass classification objective on a large set of offline training classes. This allows recognition of novel categories to be performed using extremely compact codes with state-of-the-art accuracy. Although there is much existing work on learning compact codes, we know of no other compact code which offers this performance on a category recognition task.
Our experiments have focussed on whole-image "Caltech-like" category recognition, while it is clear that subimage recognition is also an important application. However, we argue that for many image search tasks, whole-image performance is relevant, and for a very compact code, one could possibly encode several windows (dozens, say) in each image, while retaining a relatively compact representation.
Additional material, including software to extract PiCoDes from images, may be obtained from [1].
Acknowledgments
We are grateful to Chen Fang for programming help. This research was funded in part by Microsoft
and NSF CAREER award IIS-0952943.
A  Derivation of eq. 5
We present below the derivation of eq. 5. First, we rewrite our objective function, i.e., eq. 4, in expanded form:

    E(A, w_{1..K}, b_{1..K}) = Σ_{k=1}^K { (1/2)‖w_k‖² + (λ/N) Σ_{i=1}^N ℓ[ y_ik (b_k + Σ_{c=1}^C w_kc 1[a_cᵀ x_i > 0]) ] }.

Fixing the parameters w_{1..K}, b, a_1, …, a_{c−1}, a_{c+1}, …, a_C and minimizing the function above with respect to a_c is equivalent to minimizing the following objective:

    E′(a_c) = Σ_{k=1}^K Σ_{i=1}^N ℓ[ y_ik w_kc 1[a_cᵀ x_i > 0] + y_ik b_k + Σ_{c′≠c} y_ik w_kc′ 1[a_c′ᵀ x_i > 0] ].

Let us define α_ikc ≡ y_ik w_kc and β_ikc ≡ y_ik b_k + Σ_{c′≠c} y_ik w_kc′ 1[a_c′ᵀ x_i > 0]. Then, we can rewrite the objective as follows:

    E′(a_c) = Σ_{k=1}^K Σ_{i=1}^N ℓ[ α_ikc 1[a_cᵀ x_i > 0] + β_ikc ]
            = Σ_{i=1}^N { 1[a_cᵀ x_i > 0] Σ_{k=1}^K ℓ(α_ikc + β_ikc) + (1 − 1[a_cᵀ x_i > 0]) Σ_{k=1}^K ℓ(β_ikc) }
            = Σ_{i=1}^N 1[a_cᵀ x_i > 0] Σ_{k=1}^K [ ℓ(α_ikc + β_ikc) − ℓ(β_ikc) ] + const.

Finally, it can be seen that optimizing this objective is equivalent to minimizing

    E(a_c) = Σ_{i=1}^N v_i 1[z_i a_cᵀ x_i > 0]

where v_i = | Σ_{k=1}^K ℓ(α_ikc + β_ikc) − ℓ(β_ikc) | and z_i = sign( Σ_{k=1}^K ℓ(α_ikc + β_ikc) − ℓ(β_ikc) ). This yields eq. 5.
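The case analysis in this derivation can be sanity-checked numerically. The sketch below (ours, using the hinge for ℓ) evaluates, for random α and β and a nonzero score s = a_cᵀx_i, the per-example term of E′(a_c) on one side and the per-example term of eq. 5 plus its constant on the other.

```python
import numpy as np

hinge = lambda t: np.maximum(0.0, 1.0 - t)   # l(.)

def eq5_sides(alpha, beta, s):
    """lhs: sum_k l(alpha_k * 1[s>0] + beta_k), the per-example term of E'(a_c).
    rhs: v * 1[z*s > 0] + const, the per-example term of eq. 5."""
    delta = float((hinge(alpha + beta) - hinge(beta)).sum())
    v, z = abs(delta), np.sign(delta)
    lhs = float(hinge(alpha * (s > 0) + beta).sum())
    # The constant absorbs sum_k l(beta_k), plus delta itself when delta < 0.
    const = float(hinge(beta).sum()) + min(delta, 0.0)
    rhs = v * float(z * s > 0) + const
    return lhs, rhs
```

Both sides agree exactly for every sign combination of delta and s, which is the content of the reduction to a weighted-misclassification problem.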
References
[1] http://vlg.cs.dartmouth.edu/picodes.
[2] B. Babenko, S. Branson, and S. Belongie. Similarity metrics for categorization: From monolithic to category specific. In Intl. Conf. Computer Vision, pages 293–300, 2009.
[3] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge, 2010. http://www.imagenet.org/challenges/LSVRC/2010/.
[4] A. Bosch, A. Zisserman, and X. Muñoz. Representing shape with a spatial pyramid kernel. In Conf. Image and Video Retrieval (CIVR), pages 401–408, 2007.
[5] O. Chapelle and S. S. Keerthi. Multi-class feature selection with support vector machines. Proc. of the Am. Stat. Assoc., 2008.
[6] O. Chum, J. Philbin, and A. Zisserman. Near duplicate image detection: min-hash and tf-idf weighting. In British Machine Vision Conf., 2008.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[8] M. Douze, A. Ramisa, and C. Schmid. Combining attributes and Fisher vectors for efficient image retrieval. In Proc. Comp. Vision Pattern Recogn. (CVPR), 2011.
[9] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[10] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[11] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[12] A. G. Hauptmann, R. Yan, W.-H. Lin, M. G. Christel, and H. D. Wactlar. Can high-level concepts fill the semantic gap in video retrieval? A case study with broadcast news. IEEE Transactions on Multimedia, 9(5):958–966, 2007.
[13] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In STOC '98: Proceedings of the thirtieth annual ACM symposium on Theory of computing, New York, NY, USA, 1998. ACM Press.
[14] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In Proc. Comp. Vision Pattern Recogn. (CVPR), 2010.
[15] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In Advances in Neural Information Processing Systems (NIPS), 2009.
[16] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In Intl. Conf. Computer Vision, 2010.
[17] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In Intl. Conf. Computer Vision, 2009.
[18] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[19] L. Li, H. Su, E. Xing, and L. Fei-Fei. Object Bank: A high-level image representation for scene classification & semantic feature sparsification. In NIPS, 2010.
[20] D. Lowe. Distinctive image features from scale-invariant keypoints. Intl. Jrnl. of Computer Vision, 60(2):91–110, 2004.
[21] A. Oliva and A. Torralba. Building the gist of a scene: The role of global image features in recognition. Visual Perception, Progress in Brain Research, 155, 2006.
[22] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In Advances in Neural Information Processing Systems (NIPS), 2010.
[23] M. Ranzato, Y. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. In Advances in Neural Information Processing Systems (NIPS), 2007.
[24] R. Salakhutdinov and G. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50:969–978, 2009.
[25] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. In Proc. Comp. Vision Pattern Recogn. (CVPR), 2007.
[26] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Proc. Comp. Vision Pattern Recogn. (CVPR), 2008.
[27] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In ECCV, 2010.
[28] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. In CVPR, 2010.
[29] J. Vogel and B. Schiele. Semantic modeling of natural scenes for content-based image retrieval. Intl. Jrnl. of Computer Vision, 72(2):133–157, 2007.
[30] G. Wang, D. Hoiem, and D. Forsyth. Learning image similarity from Flickr using stochastic intersection kernel machines. In Intl. Conf. Computer Vision, 2009.
[31] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2009.
Rapidly Adapting Artificial Neural Networks for
Autonomous Navigation
Dean A. Pomerleau
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses
the problem of training artificial neural networks in real time to perform difficult
perception tasks. ALVINN ,is a back-propagation network that uses inputs from a
video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified
Chevy van. This paper describes training techniques which allow ALVINN to learn
in under 5 minutes to autonomously control the Navlab by watching a human driver's
response to new situations. Using these techniques, ALVINN has been trained
to drive in a variety of circumstances including single-lane paved and unpaved
roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road
environments, at speeds of up to 20 miles per hour.
1 INTRODUCTION
Previous trainable connectionist perception systems have often ignored important aspects of
the form and content of available sensor data. Because of the assumed impracticality of
training networks to perform realistic high level perception tasks, connectionist researchers
have frequently restricted their task domains to either toy problems (e.g. the T-C identification
problem [11] [6]) or fixed low level operations (e.g. edge detection [8]). While these restricted
domains can provide valuable insight into connectionist architectures and implementation
techniques, they frequently ignore the complexities associated with real world problems.
There are exceptions to this trend towards simplified tasks. Notable successes in high level
domains such as speech recognition [12], character recognition [5] and face recognition [2]
have been achieved using real sensor data. However, the results have come only in very
controlled environments, after careful preprocessing of the input to segment and label the
training exemplars. In addition, these successful connectionist perception systems have
ignored the fact that sensor data normally becomes available gradually and not as a monolithic
training set. In short, artificial neural networks previously have never been successfully trained
429
430
Pomerleau
[Figure 1 diagram: the previous architecture, with a road intensity feedback unit and output units ranging from sharp left through straight ahead to sharp right, alongside the current architecture with a 30x32-unit sensor retina, 5 hidden units, and 30 output units]
Figure 1: ALVINN's previous (left) and current (right) architectures
using sensor data in real time to perform a real world perception task.
The ALVINN (Autonomous Land Vehicle In a Neural Network) system remedies this shortcoming. ALVINN is a back-propagation network designed to drive the CMU Navlab, a
modified Chevy van. Using real time training techniques, the system quickly learns to autonomously control the Navlab by watching a human driver's reactions. ALVINN has been
trained to drive in a variety of circumstances including single-lane paved and unpaved roads,
multilane lined and unlined roads and obstacle ridden on- and off-road environments, at
speeds of up to 20 miles per hour. This paper will primarily focus on improvements and
extensions made to the ALVINN system since the presentation of this work at the 1988 NIPS
conference [9].
2 NETWORK ARCHITECTURE
The current architecture for an individual ALVINN driving network is significantly simpler
than the previous version (See Figure 1). The input layer now consists of a single 30x32 unit
"retina" onto which a sensor image from either the video camera or the laser rangefinder is
projected. Each of the 960 input units is fully connected to the hidden layer of 5 units, which
is in turn fully connected to the output layer. The 30 unit output layer is a linear representation
of the currently appropriate steering direction which may serve to keep the vehicle on the road
or to prevent it from colliding with nearby obstacles1. The centermost output unit represents
the "travel straight ahead" condition, while units to the left and right of center represent
successively sharper left and right turns.
The reductions in network complexity over previous versions have been made in response
to experience with ALVINN in actual driving situations. I have found that the distributed
nature of the internal representation allows a network of only 5 hidden units to accurately
drive in a variety of situations. I have also learned that multiple sensor inputs to a single
network are redundant and can be eliminated. For instance, when training a network on a
single-lane road, there is sufficient information in the video image alone for accurate driving.
Similarly, for obstacle avoidance, the laser rangefinder image is sufficient and the video image
1The task a particular driving network performs depends on the type of input sensor image and the
driving situation it has been trained to handle.
is superfluous. The road intensity feedback unit has been eliminated on similar grounds. In
the previous architecture, it provided the network with the relative intensity of the road vs.
the non-road in the previous image. This information was unnecessary for accurate road
following, and undefined in new ALVINN domains such as off-road driving.
To drive the Navlab, an image from the appropriate sensor is reduced to 30 x 32 pixels and
projected onto the input layer. After propagating activation through the network, the output
layer's activation profile is translated into a vehicle steering command. The steering direction
dictated by the network is taken to be the center of mass of the "hill" of activation surrounding
the output unit with the highest activation level. Using the center of mass of activation instead
of the most active output unit when determining the direction to steer permits finer steering
corrections, thus improving ALVINN's driving accuracy.
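The center-of-mass readout described above can be sketched as follows; the window half-width around the peak unit and the mapping of output units onto a [-1, 1] steering scale are illustrative assumptions, not details from the paper:

```python
def steering_from_outputs(activations):
    """Map the 30-unit output activation profile to a steering value.

    Returns the center of mass of the "hill" of activation surrounding
    the most active unit, scaled to -1.0 (sharp left) .. +1.0 (sharp
    right). The half-width of the window around the peak is an assumed
    value for illustration.
    """
    n = len(activations)
    peak = max(range(n), key=lambda i: activations[i])
    half_width = 4  # assumed extent of the activation "hill"
    lo, hi = max(0, peak - half_width), min(n - 1, peak + half_width)
    total = sum(activations[i] for i in range(lo, hi + 1))
    if total == 0:
        center = float(peak)
    else:
        center = sum(i * activations[i] for i in range(lo, hi + 1)) / total
    # The centermost unit, index (n - 1) / 2, means "travel straight ahead".
    return (center - (n - 1) / 2) / ((n - 1) / 2)
```

Because the center of mass falls between units, this readout yields finer-grained steering than simply picking the most active unit.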
3 TRAINING "ON-THE-FLY"
The most interesting recent improvement to ALVINN is the training technique. Originally,
ALVINN was trained with backpropagation using 1200 simulated scenes portraying roads
under a wide variety of weather and lighting conditions [9]. Once trained, the network was
able to drive the Navlab at up to 1.8 meters per second (3.5 mph) along a 400 meter path
through a wooded area of the CMU campus in weather which included snowy, rainy, sunny,
and cloudy situations.
Despite its apparent success, this training paradigm had serious shortcomings. It required
approximately 6 hours of Sun-4 CPU time to generate the synthetic road scenes, and then an
additional 45 minutes of Warp2 computation time to train the network. Furthermore, while
effective at training the network to drive on a single-lane road, extending the synthetic training
paradigm to deal with more complex driving situations like multilane and off-road driving
would have required prohibitively complex artificial scene generators.
I have developed a scheme called training "on-the-fly" to deal with these problems. Using
this technique, the network learns to imitate a person as he drives. The network is trained
with back-propagation using the latest video camera image as input and the person's current
steering direction as the desired output.
There are two potential problems associated with this simple training on-the-fly scheme. First,
since the person steers the vehicle down the center of the road during training, the network
will never be presented with situations where it must recover from misalignment errors. When
driving for itself, the network may occasionally stray from the road center, so it must be
prepared to recover by steering the vehicle back to the middle of the road. The second
problem is that naively training the network with only the current video image and steering
direction may cause it to overlearn recent inputs. If the person drives the Navlab down a
stretch of straight road near the end of training, the network will be presented with a long
sequence of similar images. This sustained lack of diversity in the training set will cause the
network to "forget" what it had learned about driving on curved roads and instead learn to
always steer straight ahead.
Both problems associated with training on-the-fly stem from the fact that back-propagation
requires training data which is representative of the full task to be learned. To provide the
necessary variety of exemplars while still training on real data, the simple training on-the-
2There was formerly a 100 MFLOP Warp systolic array supercomputer onboard the Navlab. It has
been replaced by 3 Sun-4s, further necessitating the streamlined architecture described in the previous
section.
Original Image
Shifted and Rotated Images
Figure 2: The single original video image is shifted and rotated to create multiple training
exemplars in which the vehicle appears to be at different locations relative to the road.
fly scheme described above must be modified. Instead of presenting the network with only
the current video image and steering direction, each original image is shifted and rotated in
software to create 14 additional images in which the vehicle appears to be situated differently
relative to the environment (See Figure 2). The sensor's position and orientation relative to
the ground plane are known, so precise transformations can be achieved using perspective
geometry. The correct steering direction as dictated by the driver for the original image is
altered for each of the transformed images to account for the altered vehicle placement3.
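As a rough sketch, the transform sampling (shift and rotation ranges taken from footnote 4) and the steering-label correction of footnote 3 might look like the following; the fixed look-ahead distance and the small-angle correction formula are assumptions for illustration, not the paper's actual geometry:

```python
import math
import random

def make_transformed_labels(driver_steering_deg, lookahead_m=10.0,
                            n_extra=14, seed=0):
    """Generate steering labels for shifted/rotated copies of one image.

    Shifts are drawn from +/-1.25 m and rotations from +/-6 degrees, the
    ranges reported in the paper. The correction model is a simplified
    assumption: steer toward the point the driver was aiming at, a fixed
    distance ahead, after undoing the artificial shift and rotation.
    """
    rng = random.Random(seed)
    labels = []
    for _ in range(n_extra):
        shift = rng.uniform(-1.25, 1.25)   # lateral displacement, meters
        rot = rng.uniform(-6.0, 6.0)       # heading change, degrees
        # Cancel the added rotation and steer back across the added
        # lateral offset over the look-ahead distance (small-angle model).
        correction = -rot - math.degrees(math.atan2(shift, lookahead_m))
        labels.append(((shift, rot), driver_steering_deg + correction))
    return labels
```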
Using transformed training patterns allows the network to learn how to recover from driving
errors. Also, overtraining on repetitive images is less of a problem, since the transformed
training exemplars add variety to the training set. As additional insurance against the effects
of repetitive exemplars, the training set diversity is further increased by maintaining a buffer
of previously encountered training patterns.
In practice, training on-the-fly works as follows. A live sensor image is digitized and reduced
to the low resolution image required by the network. This single original image is shifted and
rotated 14 times to create 14 additional training exemplars4. Fifteen old exemplars from the
current training set of 200 patterns are chosen and replaced by the 15 new exemplars. The 15
exemplars to be replaced in the training set are chosen on the basis of how closely they match
the steering direction of one of the new tokens. Exchanging a new token for an old token with
a similar steering direction helps maintain diversity in the training buffer during monotonous
stretches of road by preventing novel older patterns from being replaced by recent redundant
ones.
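The replacement heuristic can be sketched as follows; the flat list representation of the training buffer is an illustrative choice:

```python
def replace_exemplars(buffer, new_tokens):
    """Swap each new training token into the buffer in place of the old
    token whose steering label is closest to it.

    `buffer` and `new_tokens` are lists of (image, steering) pairs. This
    mirrors the heuristic described above: replacing a similar-steering
    old token keeps novel older patterns from being pushed out during
    monotonous stretches of road.
    """
    for new_img, new_steer in new_tokens:
        # Index of the old token with the most similar steering direction.
        idx = min(range(len(buffer)),
                  key=lambda i: abs(buffer[i][1] - new_steer))
        buffer[idx] = (new_img, new_steer)
    return buffer
```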
After this replacement process, one forward and one backward pass of the back-propagation
algorithm is performed on the 200 exemplars to update the network's weights. The entire
process is then repeated. The network requires approximately 50 iterations through this
digitize-replace-train cycle to learn to drive in the domains that have been tested. Running
3 A simple steering model is used when transforming the driver's original direction. It assumes the
"correct" steering direction is the one that will eliminate the additional vehicle translation and rotation
introduced by the transformation and bringing the vehicle to the point the person was originally steering
towards a fixed distance ahead of the vehicle.
4The shifts are chosen randomly from the range -1.25 to +1.25 meters and the rotations from the
range -6.0 to +6.0 degrees.
Figure 3: Video images taken on three of the test roads ALVINN has been trained to drive on.
They are, from left to right, a single-lane dirt access road, a single-lane paved bicycle path,
and a lined two-lane highway.
on a Sun-4, this takes about five minutes during which a person drives the Navlab at about 4
miles per hour over the training road.
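The digitize-replace-train cycle above can be summarized as a skeleton; `digitize`, `transform`, and `backprop_pass` are caller-supplied stand-ins, not interfaces from the paper:

```python
def train_on_the_fly(digitize, transform, backprop_pass, iterations=50,
                     buffer_size=200):
    """Skeleton of the digitize-replace-train cycle.

    digitize() -> (image, driver_steering); transform(image, steering)
    -> list of 14 extra (image, steering) pairs; backprop_pass(buffer)
    performs one forward/backward sweep over the buffer.
    """
    buffer = []
    for _ in range(iterations):
        img, steer = digitize()
        tokens = [(img, steer)] + transform(img, steer)  # 15 exemplars
        for tok in tokens:
            if len(buffer) < buffer_size:
                buffer.append(tok)
            else:
                # Replace the old token whose steering label is closest.
                idx = min(range(len(buffer)),
                          key=lambda i: abs(buffer[i][1] - tok[1]))
                buffer[idx] = tok
        backprop_pass(buffer)
    return buffer
```

About 50 passes through this cycle correspond to the roughly five minutes of training the paper reports on a Sun-4.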
4 RESULTS AND DISCUSSION
Once it has learned, the network can accurately traverse the length of road used for training
and also generalize to drive along parts of the road it has never encountered under a variety
of weather conditions. In addition, since determining the steering direction from the input
image merely involves a forward sweep through the network, the system is able to process 25
images per second, allowing it to drive at up to the Navlab's maximum speed of 20 miles per
hour5. This is over twice as fast as any other sensor-based autonomous system has driven the
Navlab [3] [7].
The training on-the-fly scheme gives ALVINN a flexibility which is novel among autonomous
navigation systems. It has allowed me to successfully train individual networks to drive in
a variety of situations, including a single-lane dirt access road, a single-lane paved bicycle
path, a two-lane suburban neighborhood street, and a lined two-lane highway (See Figure 3).
Using other sensor modalities as input, including laser range images and laser reflectance
images, individual ALVINN networks have been trained to follow roads in total darkness,
to avoid collisions in obstacle rich environments, and to follow alongside railroad tracks.
ALVINN networks have driven in each of these situations for up to 1/2 mile, until reaching a
dead end or a difficult intersection. The development of a system for each of these domains
using the "traditional approach" to autonomous navigation would require the programmer to
1) determine what features are important for the particular task, 2) program detectors (using
statistical or symbolic techniques) for finding these important features and 3) develop an
algorithm for determining which direction to steer from the location of the detected features.
In contrast, ALVINN is able to learn for each new domain what image features are important,
how to detect them and how to use their position to steer the vehicle. Analysis of the
hidden unit representations developed in different driving situations shows that the network
forms detectors for the image features which correlate with the correct steering direction.
When trained on multi-lane roads, the network develops hidden unit feature detectors for the
lines painted on the road, while in single-lane driving situations, the detectors developed are
5The Navlab has a hydraulic drive system which allows for very precise speed control, but which
prevents the vehicle from driving over 20 miles per hour.
sensitive to road edges and road-shaped regions of similar intensity in the image. For a more
detailed analysis of ALVINN's internal representations see [9] [10].
This ability to utilize arbitrary image features can be problematic. This was the case when
ALVINN was trained to drive on a poorly defined dirt road with a distinct ditch on its right side.
The network had no problem learning and then driving autonomously in one direction, but
when driving the other way, the network was erratic, swerving from one side of the road to the
other. After analyzing the network's hidden representation, the reason for its difficulty became
clear. Because of the poor distinction between the road and the non-road, the network had
developed only weak detectors for the road itself and instead relied heavily on the position of
the ditch to determine the direction to steer. When tested in the opposite direction, the network
was able to keep the vehicle on the road using its weak road detectors but was unstable because
the ditch it had learned to look for on the right side was now on the left. Individual ALVINN
networks have a tendency to rely on any image feature consistently correlated with the correct
steering direction. Therefore, it is important to expose them to a wide enough variety of
situations during training so as to minimize the effects of transient image features.
On the other hand, experience has shown that it is more efficient to train several domain
specific networks for circumstances like one-lane vs. two-lane driving, instead training a
single network for all situations. To prevent this network specificity from reducing ALVINN's
generality, I am currently implementing connectionist and non-connectionist techniques for
combining networks trained for different driving situations. Using a simple rule-based priority
system similar to the subsumption architecture [1], I have recently combined a road following
network and an obstacle avoidance network. The road following network uses video camera
input to follow a single-lane road. The obstacle avoidance network uses laser rangefinder
images as input. It is trained to swerve appropriately to prevent a collision when confronted
with obstacles and to drive straight when the terrain ahead is free of obstructions. The
arbitration rule gives priority to the road following network when determining the steering
direction, except when the obstacle avoidance network outputs a sharp steering command. In
this case, the urgency of avoiding an imminent collision takes precedence over road following
and the steering direction is determined by the obstacle avoidance network. Together, the
two networks and the arbitration rule comprise a system capable of staying on the road and
swerving to prevent collisions.
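The arbitration rule can be stated compactly; the [-1, 1] steering scale and the sharpness threshold are illustrative assumptions:

```python
def arbitrate(road_steering, obstacle_steering, sharp_threshold=0.5):
    """Rule-based priority arbitration between two driving networks.

    Road following normally determines the steering direction; when the
    obstacle-avoidance network emits a sharp command (magnitude above a
    threshold), avoiding the imminent collision takes precedence.
    """
    if abs(obstacle_steering) > sharp_threshold:
        return obstacle_steering
    return road_steering
```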
To facilitate other rule-based arbitration teChniques, I am currently adding to ALVINN a
non-connectionist module which maintains the vehicle's position on a map. Knowing its map
position will allow ALVINN to use arbitration rules such as "when on a stretch of two lane
highway, rely primarily on the two lane highway network". This symbolic mapping module
will also allow ALVINN to make high level, goal-oriented decisions such as which way to
tum at intersections and when to stop at a predetermined destination.
Finally, I am experimenting with connectionist techniques, such as the task decomposition
architecture [6] and the meta-pi architecture [4], for combining networks more seamlessly
than is possible with symbolic rules. These connectionist arbitration techniques will enable
ALVINN to combine outputs from networks trained to perform the same task using different
sensor modalities and to decide when a new expert must be trained to handle the current
situation.
Acknowledgements
The principal support for the Navlab has come from DARPA, under contracts DACA76-85-C-0019, DACA76-85-C-0003 and DACA76-85-C-0002. This research was also funded in part
by a grant from Fujitsu Corporation.
References
[1] Brooks, R.A. (1986) A robust layered control system for a mobile robot. IEEE Journal
of Robotics and Automation, vol. RA-2, no. 1, pp. 14-23, April 1986.
[2] Cottrell, G.W. (1990) Extracting features from faces using compression networks: Face,
identity, emotion and gender recognition using holons. In Connectionist Models: Proc.
of the 1990 Summer School, David Touretzky (Ed.), Morgan Kaufmann, San Mateo,
CA.
[3] Crisman, J.D. and Thorpe C.E. (1990) Color vision for road following. In Vision and
Navigation: The CMU Navlab Charles Thorpe (Ed.), Kluwer Academic Publishers,
Boston,MA.
[4] Hampshire, J.B., Waibel A.H. (1989) The meta-pi network: Building distributed knowledge representations for robust pattern recognition. Carnegie Mellon Technical Report
CMU-CS-89-166-R. August, 1989.
[5] LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., and
Jackel, L.D. (1989) Backpropagation applied to handwritten zip code recognition. Neural
Computation 1(4).
[6] Jacobs, R.A, Jordan, M.I., Barto, A.G. (1990) Task decomposition through competition
in a modular connectionist architecture: The what and where vision tasks. Univ. of
Massachusetts Computer and Information Science Technical Report 90-27, March 1990.
[7] Kluge, K. and Thorpe C.E. (1990) Explicit models for robot road following. In Vision
and Navigation: The CMU Navlab Charles Thorpe (Ed.), Kluwer Academic Publishers,
Boston,MA.
[8] Koch, C., Bair, W., Harris, J.G., Horiuchi, T., Hsu, A. and Luo, J. (1990) Real-time computer vision and robotics using analog VLSI circuits. In Advances in Neural Information
Processing Systems, 2, D.S. Touretzky (Ed.), Morgan Kaufmann, San Mateo, CA
[9] Pomerleau, D.A. (1989) ALVINN: An Autonomous Land Vehicle In a Neural Network,
Advances in Neural Information Processing Systems, }, D.S. Touretzky (Ed.), Morgan
Kaufmann, San Mateo, CA.
[10] Pomerleau, D.A. (1990) Neural network based autonomous navigation. In Vision and
Navigation: The CMU Navlab Charles Thorpe (Ed.), Kluwer Academic Publishers,
Boston,MA.
[11] Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986) Learning internal representations by error propagation. In D.E. Rumelhart and J.L. McClelland (Eds.) Parallel
Distributed Processing: Explorations in the Microstructures of Cognition. Vol. 1: Foundations. Bradford Books/MIT Press, Cambridge, MA.
[12] Waibel, A, Hanazawa, T., Hinton, G., Shikano, K., Lang, K. (1988) Phoneme recognition: Neural Networks vs. Hidden Markov Models. Proceedings from Int. Conf. on
Acoustics, Speech and Signal ProceSSing, New York, New York.
terrain:1 multilane:3 learn:5 nature:1 robust:2 ca:3 improving:1 alvinn:29 complex:2 domain:8 profile:1 repeated:1 allowed:1 representative:1 stray:1 position:5 explicit:1 learns:2 minute:3 down:2 specific:1 naively:1 adding:1 chevy:2 boston:3 forget:1 intersection:2 prevents:1 gender:1 harris:1 ma:4 goal:1 presentation:1 identity:1 careful:1 towards:2 replace:1 content:1 included:1 determined:1 except:1 reducing:1 hampshire:1 called:1 total:1 pas:1 bradford:1 tendency:1 exception:1 internal:3 support:1 arbitration:5 tested:2 avoiding:1 correlated:1 |
Probabilistic Modeling of Dependencies Among
Visual Short-Term Memory Representations
A. Emin Orhan
Robert A. Jacobs
Department of Brain & Cognitive Sciences
University of Rochester
Rochester, NY 14627
{eorhan,robbie}@bcs.rochester.edu
Abstract
Extensive evidence suggests that items are not encoded independently in visual
short-term memory (VSTM). However, previous research has not quantitatively
considered how the encoding of an item influences the encoding of other items.
Here, we model the dependencies among VSTM representations using a multivariate Gaussian distribution with a stimulus-dependent mean and covariance matrix.
We report the results of an experiment designed to determine the specific form of
the stimulus-dependence of the mean and the covariance matrix. We find that the
magnitude of the covariance between the representations of two items is a monotonically decreasing function of the difference between the items' feature values,
similar to a Gaussian process with a distance-dependent, stationary kernel function. We further show that this type of covariance function can be explained as a
natural consequence of encoding multiple stimuli in a population of neurons with
correlated responses.
1 Introduction
In each trial of a standard visual short-term memory (VSTM) experiment (e.g. [1,2]), subjects are
first presented with a display containing multiple items with simple features (e.g. colored squares)
for a brief duration and then, after a delay interval, their memory for the feature value of one of
the items is probed using either a recognition or a recall task. Let s = [s1 , s2 , . . . , sN ]T denote the
feature values of the N items in the display on a given trial. In this paper, our goal is to provide a
quantitative description of the content of a subject's visual memory for the display after the delay interval. That is, we want to characterize a subject's belief state about s.
We suggest that a subject's belief state can be expressed as a random variable ŝ = [ŝ1, ŝ2, . . . , ŝN]^T that depends on the actual stimuli s: ŝ = ŝ(s). Consequently, we seek a suitable joint probability model p(ŝ) that can adequately capture the content of a subject's memory of the display. We note
that most research on VSTM is concerned with characterizing how subjects encode a single item in
VSTM (for instance, the precision with which a single item can be encoded [1,2]) and, thus, does not
consider the joint encoding of multiple items. In particular, we are not aware of any previous work
attempting to experimentally probe and characterize exactly how the encoding of an item influences
the encoding of other items, i.e. the joint probability distribution p(ŝ1, ŝ2, . . . , ŝN).
A simple (perhaps simplistic) suggestion is to assume that the encoding of an item does not influence the encoding of other items, i.e. the feature values of different items are represented independently in VSTM. If so, the joint probability distribution factorizes as p(ŝ1, ŝ2, . . . , ŝN) = p(ŝ1) p(ŝ2) . . . p(ŝN). However, there is now extensive evidence against this simple model [3,4,5,6].
2 A Gaussian process model
We consider an alternative model for p(ŝ1, ŝ2, . . . , ŝN) that allows for dependencies among representations of different items in VSTM. We model p(ŝ1, ŝ2, . . . , ŝN) as an N-dimensional multivariate Gaussian distribution with mean m(s) and full covariance matrix Σ(s), both of which depend on the actual stimuli s appearing in a display. This model assumes that only pairwise (or second-order) correlations exist between the representations of different items. Although more complex models incorporating higher-order dependencies between the representations of items in VSTM can be considered, it would be difficult to experimentally determine the parameters of these models. Below we show how the parameters of the multivariate Gaussian model, m(s) and Σ(s), can be experimentally determined from standard VSTM tasks with minor modifications.
Importantly, we emphasize the dependence of m(s) and Σ(s) on the actual stimuli s. This is to allow
for the possibility that subjects might encode stimuli with different similarity relations differently.
For instance (and to foreshadow our experimental results), if the items in a display have similar feature values, one might reasonably expect there to be large dependencies among the representations
of these items. Conversely, the correlations among the representations of items might be smaller if
the items in a display are dissimilar. These two cases would imply different covariance matrices Σ, hence the dependence of Σ (and m) on s.
Determining the properties of the covariance matrix Σ(s) is, in a sense, similar to finding an appropriate kernel for a given dataset in the Gaussian process framework [7]. In Gaussian processes, one expresses the covariance matrix in the form Σij = k(si, sj) using a parametrized kernel function k. Then one can ask various questions about the kernel function: What kind of kernel function explains the given dataset best, a stationary kernel function that only depends on |si − sj| or a more general, non-stationary kernel? What parameter values of the chosen kernel (e.g. the scale length parameter for a squared exponential type kernel) explain the dataset best? We ask similar questions about our stimulus-dependent covariance matrix Σ(s): Does the covariance between VSTM representations of two stimuli depend only on the absolute difference between their feature values, |si − sj|, or is the relationship non-stationary and more complex? If the covariance function is stationary, what is its scale length (how quickly does the covariance dissipate with distance)? In Section 3, we address these questions experimentally.
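For concreteness, one standard stationary candidate is the squared-exponential kernel; this is an illustrative choice only (the function name and parameter values below are ours, not the paper's), and the experiments that follow ask which form the data actually support:

```python
import numpy as np

def stationary_cov(s, amplitude=1.0, length_scale=4.0):
    """Candidate stationary covariance: Sigma_ij = k(|s_i - s_j|), here a
    squared-exponential kernel of the kind used in Gaussian processes."""
    d = np.subtract.outer(s, s)
    return amplitude * np.exp(-d**2 / (2.0 * length_scale**2))

# Items 2 degrees apart covary more strongly than items 10 degrees apart.
Sigma = stationary_cov(np.array([0.0, 2.0, 10.0]))
```

The scale-length question above corresponds to asking which `length_scale` best matches the fall-off observed in the data.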
Why does providing an appropriate context improve memory?
Modeling subjects' VSTM representations of multiple items as a joint probability distribution allows
us to explain an intriguing finding by Jiang, Olson and Chun [3] in an elegant way. We first describe
the finding, and then show how to explain this result within our framework.
Jiang et al. [3] showed that relations between items in a display, as well as items' individual characteristics, are encoded in VSTM. In their Experiment 1, they briefly presented displays consisting of
colored squares to subjects. There were two test or probe conditions. In the single probe condition,
only one of the squares (called the target probe) reappeared, either with the same color as in the
original display, or with a different color. In the minimal color change condition, the target probe
(again with the same color or with a different color) reappeared together with distracter probes which
always had the same colors as in the original display. In both conditions, subjects decided whether
a color change occurred in the target probe. Jiang et al. [3] found that subjects' performances were
significantly better in the minimal color change condition than in the single probe condition. This
result suggests that the color for the target square was not encoded independently of the colors of
the distracter squares because if the target color was encoded independently then subjects would
have shown identical performances regardless of whether distractor squares were present (minimal
color change condition) or absent (single probe condition). In Experiment 2 of [3], a similar result
was obtained for location memory: location memory for a target was better in the minimal change
condition than in the single probe condition or in a maximal change condition where all distracters
were presented but at different locations than their original locations.
Figure 1: The sequence of events on a single trial of the experiment with N = 2 (fixation for 1000 ms, stimulus display for 100 ms, a 1000 ms delay, then the probe screen until response).

These results are easy to understand in terms of our joint probability model for item memories, p(ŝ). Intuitively, the single probe condition taps the marginal probability of the memory for the target item, p(ŝt), where t represents the index of the target item. In contrast, the minimal color change condition taps the conditional probability of the memory for the target given the memories for the distracters, p(ŝt | ŝ−t = s−t), where −t represents the indices of the distracter items, because the actual distracters s−t are shown during test. If the target probe has high probability under these distributions,
then the subject will be more likely to respond "no-change", whereas if it has low probability, then the subject will be more likely to respond "change". If the items are represented independently in VSTM, the marginal and conditional distributions are the same; i.e. p(ŝt) = p(ŝt | ŝ−t). Hence, the independent-representation assumption predicts that there should be no difference in subjects' performances in the single probe and minimal color change conditions. The significant differences in subjects' performances between these conditions observed in [3] provide evidence against the independence assumption.
It is also easy to understand why subjects performed better in the minimal color change condition than in the single probe condition. The conditional distribution p(ŝt | ŝ−t) is, in general, a lower-variance distribution than the marginal distribution p(ŝt). Although this is not exclusively true for the Gaussian distribution, it can analytically be proven in the Gaussian case. If p(ŝ) is modeled as an N-dimensional multivariate Gaussian distribution:

ŝ = [ŝt, ŝ−t]^T ∼ N([a, b]^T, [A, C; C^T, B])    (1)

(where the covariance matrix is written using Matlab notation), then the conditional distribution p(ŝt | ŝ−t) has mean a + CB^{−1}(ŝ−t − b) and variance A − CB^{−1}C^T, whereas the marginal distribution p(ŝt) has mean a and variance A, which is always greater than A − CB^{−1}C^T. [As an aside, note that when the distracter probes are different from the mean of the memories for the distracters, i.e. ŝ−t ≠ b, the conditional distribution p(ŝt | ŝ−t) is biased away from a, explaining the poorer performance in the maximal change condition than in the single probe condition.]
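The variance claim above is easy to check numerically; the following is a minimal sketch with made-up covariance values for one target and two distracters:

```python
import numpy as np

# Hypothetical joint Gaussian over (target, distracter1, distracter2) memories.
A = np.array([[1.0]])                      # marginal variance of the target
B = np.array([[1.0, 0.3],
              [0.3, 1.0]])                 # covariance among the distracters
C = np.array([[0.6, 0.4]])                 # target-distracter covariances

# Variance of p(s_t | s_-t); smaller than A whenever C is nonzero.
cond_var = A - C @ np.linalg.inv(B) @ C.T
```

With independent representations C would be zero and the conditional variance would equal the marginal variance A, so probing with distracters would confer no advantage.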
3 Experiments
We conducted two VSTM recall experiments to determine the properties of m(s) and Σ(s). The
experiments used position along a horizontal line as the relevant feature to be remembered.
Procedure: Each trial began with the display of a fixation cross at a random location within an
approximately 12° × 16° region of the screen for 1 second. Subjects were then presented with a
number of colored squares (N = 2 or N = 3 squares in separate experiments) on linearly spaced
dark and thin horizontal lines for 100 ms. After a delay interval of 1 second, a probe screen was
presented. Initially, the probe screen contained only the horizontal lines. Subjects were asked to
use the computer mouse to indicate their estimate of the horizontal location of each of the colored
squares presented on that trial. We note that this is a novelty of our experimental task, since in most
other VSTM tasks, only one of the items is probed and the subject is asked to report the content of
their memory associated with the probed item. Requiring subjects to indicate the feature values of
all presented items allows us to study the dependencies between the memories for different items.
Subjects were allowed to adjust their estimates as many times as they wished. When they were
satisfied with their estimates, they proceeded to the next trial by pressing the space bar. Figure 1
shows the sequence of events on a single trial of the experiment with N = 2.
To study the dependence of m(s) and Σ(s) on the horizontal locations of the squares s = [s1, s2, . . . , sN]^T, we used different values of s on different trials. We call each different s a particular "display configuration". To cover a range of possible display configurations, we selected uniformly-spaced points along the horizontal dimension, considered all possible combinations of these points (e.g. item 1 is at horizontal location s1 and item 2 is at location s2), and then added a small amount of jitter to each combination. In the experiment with two items, 6 points were selected along the horizontal dimension, and thus there were 36 (6×6) different display configurations. In the experiment with three items, 3 points were selected along the horizontal dimension, meaning that 27 (3×3×3) configurations were used.
Figure 2: (a) Results for subject RD. The actual display configurations s are represented by magenta dots, the estimated means based on the subject's responses are represented by black dots and the estimated covariances are represented by contours (with red contours representing Σ(s) for which the two dimensions were significantly correlated at the p < 0.05 level). (b) Results for all 4 subjects. The graph plots the mean correlation coefficients (and standard errors of the means) as a function of |s1 − s2|. Each color corresponds to a different subject.
Furthermore, since m(s) and Σ(s) cannot be reliably estimated from a single trial, we presented the same configuration s a number of times and collected the subject's response each time. We then estimated m(s) and Σ(s) for a particular configuration s by fitting an N-dimensional Gaussian distribution to the subject's responses for the corresponding s. We thus assume that when a particular configuration s is presented in different trials, the subject forms and makes use of (i.e. samples from) roughly the same VSTM representation p(ŝ) = N(m(s), Σ(s)) in reporting the contents of their
memory. In the experiment with N = 2, each of the 36 configurations was presented 24 times
(yielding a total of 864 trials) and in the experiment with N = 3, each of the 27 configurations was
presented 26 times (yielding a total of 702 trials), randomly interleaved. Subjects participating in
the same experiment (either two or three items) saw the same set of display configurations.
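The per-configuration fitting step can be sketched as follows (the `responses` array is hypothetical data standing in for one subject's repeated reports of a single two-item configuration s):

```python
import numpy as np

# Reported horizontal locations (degrees) across 5 repeats of one configuration.
responses = np.array([[1.2, 4.9],
                      [0.8, 5.3],
                      [1.1, 5.1],
                      [0.7, 4.6],
                      [1.3, 5.4]])

m_hat = responses.mean(axis=0)                # estimate of m(s)
Sigma_hat = np.cov(responses, rowvar=False)   # estimate of Sigma(s)
corr = Sigma_hat[0, 1] / np.sqrt(Sigma_hat[0, 0] * Sigma_hat[1, 1])
```

The off-diagonal of `Sigma_hat` (equivalently `corr`) is the quantity whose dependence on the items' separation is analyzed in the Results.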
Participants: 8 naive subjects participated in the experiments (4 in each experiment). All subjects
had normal or corrected-to-normal vision, and they were compensated at a rate of $10 per hour. For
both set sizes, subjects completed the experiment in two sessions.
Results: We first present the results for the experiment with N = 2. Figure 2a shows the results for a
representative subject (subject RD). In this graph, the actual display configurations s are represented
by magenta dots, the estimated means m(s) based on the subject's responses are represented by black dots and the estimated covariances Σ(s) are represented by contours (red contours represent Σ(s) for which the two dimensions were significantly (p < 0.05) correlated). For this particular subject, p(ŝ1, ŝ2) exhibited a significant correlation for 12 of 36 configurations. In all these cases,
correlations were positive, meaning that when the subject made an error in a given direction for one
of the items, s/he was likely to make an error in the same direction for the other item. This tendency
was strongest when items were at similar horizontal positions [e.g. distributions are more likely to
exhibit significant correlations for display configurations close to the main diagonal (s1 = s2 )].
Figure 2b shows results for all 4 subjects. This graph plots the correlation coefficients for subjects' position estimates as a function of the absolute differences in items' positions (|s1 − s2|). In this graph, configurations were divided into 6 equal-length bins according to their |s1 − s2| values, and the correlations shown are the mean correlation coefficients (and standard errors of the means) for each bin. Clearly, the correlations decrease with increasing |s1 − s2|. Correlations differed significantly across different bins (one-way ANOVA: p < .05 for all but one subject, as well as for combined data from all subjects). One might consider this graph as representing a stationary kernel function that specifies how the covariance between the memory representations of two items changes as a function of the distance |s1 − s2| between their feature values. However, as can be observed from
Figure 2a, the experimental kernel function that characterizes the dependencies between the VSTM
representations of different items is not perfectly stationary. Additional analyses (not detailed here)
indicate that subjects had a bias toward the center of the display. In other words, when an item
appeared on the left side of a display, subjects were likely to estimate its location as being to the
Figure 3: Subjects' mean correlation coefficients (and standard errors of the means) as a function of
the distance d(i, j) between items i and j. d(i, j) is measured either (a) in one dimension (considering only horizontal locations) or (b) in two dimensions (considering both horizontal and vertical
locations). Each color corresponds to a different subject.
right of its actual location. Conversely, items appearing on the right side of a display were estimated
as lying to the left of their actual locations. (This tendency can be observed in Figure 2a by noting
that the black dots in this figure are often closer to the main diagonal than the magenta crosses). This
bias is consistent with similar "regression-to-the-mean" type biases previously reported in visual
short-term memory for spatial frequency [5,8] and size [6].
Results for the experiment with three items were qualitatively similar. Figure 3 shows that, similar
to the results observed in the experiment with two items, the magnitude of the correlations between
subjects' position estimates decreases with Euclidean distance between items. In this figure, all si-sj pairs (recall that si is the horizontal location of item i) for all display configurations were divided
into 3 equal-length bins based on the Euclidean distance d(i, j) between items i and j where we
measured distance either in one dimension (considering only the horizontal locations of the items,
Figure 3a) or in two dimensions (considering both horizontal and vertical locations, Figure 3b).
Correlations differed significantly across different bins as indicated by one-way ANOVA (for both
distance measures: p < .01 for all subjects, as well as for combined data from all subjects). Overall
subjects exhibited a smaller number of significant s1 -s3 correlations than s1 -s2 or s2 -s3 correlations.
This is probably due to the fact that the s1 -s3 pair had a larger vertical distance than the other pairs.
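The distance-binning analysis behind Figures 2b and 3 can be sketched as follows (the correlation and distance arrays are synthetic stand-ins for the per-configuration estimates):

```python
import numpy as np

def binned_corr(corrs, dists, n_bins=6):
    """Mean correlation coefficient per equal-width distance bin,
    mirroring the analyses plotted in Figures 2b and 3."""
    edges = np.linspace(dists.min(), dists.max(), n_bins + 1)
    idx = np.clip(np.digitize(dists, edges) - 1, 0, n_bins - 1)
    return np.array([corrs[idx == b].mean() for b in range(n_bins)])

# Synthetic example: correlations that decay with inter-item distance.
dists = np.linspace(0.0, 16.0, 100)
corrs = np.exp(-dists / 5.0)
mean_by_bin = binned_corr(corrs, dists)
```

A one-way ANOVA across the resulting bins, as reported above, then tests whether the fall-off is statistically reliable.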
4 Explaining the covariances with correlated neural population responses
What could be the source of the specific form of covariances observed in our experiments? In this
section we argue that dependencies of the form we observed in our experiments would naturally arise
as a consequence of encoding multiple items in a population of neurons with correlated responses.
To show this, we first consider encoding multiple stimuli with an idealized, correlated neural population and analytically derive an expression for the Fisher information matrix (FIM) in this model.
This analytical expression for the FIM, in turn, predicts covariances of the type we observed in our
experiments. We then simulate a more detailed and realistic network of spiking neurons and consider encoding and decoding the features of multiple items in this network. We show that this more
realistic network also predicts covariances of the type we observed in our experiments. We emphasize that these predictions will be derived entirely from general properties of encoding and decoding
information in correlated neural populations and as such do not depend on any specific assumptions
about the properties of VSTM or how these properties might be implemented in neural populations.
Encoding multiple stimuli in a neural population with correlated responses
We first consider the problem of encoding N stimuli (s = [s1 , . . . , sN ]) in a correlated population
of K neurons with Gaussian noise:
p(r|s) = (1/√((2π)^K det Q(s))) exp[−(1/2)(r − f(s))^T Q^{−1}(s)(r − f(s))]    (2)
where r is a vector containing the firing rates of the neurons in the population, f (s) represents the
tuning functions of the neurons and Q represents the specific covariance structure chosen. More
specifically, we assume a "limited-range correlation structure" for Q that has been analytically studied several times in the literature [9]-[15]. In a neural population with limited-range correlations,
the covariance between the firing rates of the k-th and l-th neurons (the kl-th cell of the covariance
matrix) is assumed to be a monotonically decreasing function of the distance between their preferred
stimuli [11]:
Qkl(s) = a fk(s)^α fl(s)^α exp(−‖c(k) − c(l)‖/L)    (3)
where c(k) and c(l) are the tuning function centers of the neurons. There is extensive experimental
evidence for this type of correlation structure in the brain [16]-[19]. For instance, Zohary et al. [16]
showed that correlations between motion direction selective MT neurons decrease with the difference in their preferred directions. This "limited-range" assumption about the covariances between
the firing rates of neurons will be crucial in explaining our experimental results in terms of the FIM
of a correlated neural population encoding multiple stimuli.
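The covariance structure of equation (3) can be constructed directly; the sketch below uses Gaussian tuning curves and illustrative parameter values (α = 0.5 matches the value used later for the model fits; the helper function itself is ours, not the paper's):

```python
import numpy as np

def limited_range_Q(s, centers, a=1.0, alpha=0.5, L=1.0, g=50.0, sigma=9.0):
    """Eq. (3): Q_kl = a * f_k(s)^alpha * f_l(s)^alpha * exp(-|c_k - c_l| / L)."""
    f = g * np.exp(-(s - centers) ** 2 / sigma ** 2)   # Gaussian tuning curves
    dist = np.abs(np.subtract.outer(centers, centers))
    return a * np.outer(f ** alpha, f ** alpha) * np.exp(-dist / L)

centers = np.linspace(-12.0, 12.0, 50)
Q = limited_range_Q(0.0, centers)
# Correlations between neurons fall off with the distance between their
# preferred stimuli, the "limited-range" structure reported in [16]-[19].
```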
We are interested in deriving the FIM, J(s), for our correlated neural population encoding the stimuli s. The significance of the FIM is that the inverse of the FIM provides a lower bound on the
covariance matrix of any unbiased estimator of s and also expresses the asymptotic covariance matrix of the maximum-likelihood estimate of s in the limit of large K.¹ The ij-th cell of the FIM is defined as:

Jij(s) = −E[∂²/(∂si ∂sj) log p(r|s)]    (4)
Our derivation of J(s) closely follows that of Wilke and Eurich in [11]. To derive an analytical
expression for J(s), we make a number of assumptions: (i) all neurons encode the same feature
dimension (e.g. horizontal location in our experiment); (ii) indices of the neurons can be assigned
such that neurons with adjacent indices have the closest tuning function centers; (iii) the centers of
the tuning functions of neurons are linearly spaced with density η. The last two assumptions imply that the covariance between neurons with indices k and l can be expressed as Qkl = ρ^|k−l| a fk^α fl^α (we omitted the s-dependence of Q and f for brevity) with ρ = exp(−1/(Lη)), where L is a length
parameter determining the spatial extent of the correlations. With these assumptions, it can be shown
that (see Supplementary Material):
Jij(s) = [(1 + ρ²)/(a(1 − ρ²))] Σ_{k=1}^{K} hk^(i) hk^(j) − [2ρ/(a(1 − ρ²))] Σ_{k=1}^{K−1} hk^(i) h(k+1)^(j) + [2α²/(1 − ρ²)] Σ_{k=1}^{K} gk^(i) gk^(j) − [2α²ρ²/(1 − ρ²)] Σ_{k=1}^{K−1} gk^(i) g(k+1)^(j)    (5)

where hk^(i) = (1/fk^α) ∂fk/∂si and gk^(i) = (1/fk) ∂fk/∂si.
Although not necessary for our results (see Supplementary Material), for convenience, we further
assume that the neurons can be divided into N groups where in each group the tuning functions are
a function of the feature value of only one of the stimuli, i.e. fk (s) = fk (sn ) for neurons in group
n, so that the effects of other stimuli on the mean firing rates of neurons in group n are negligible.
A population of neurons satisfying this assumption, as well as the assumptions (i)-(iii) above, for
N = 2 is schematically illustrated in Figure 4a. We consider Gaussian tuning functions of the form:
fk(s) = g exp(−(s − ck)²/σ²), with ck linearly spaced between −12° and 12°, and g and σ² are assumed to be the same for all neurons. We take the inverse of J(s), which provides a lower bound
on the covariance matrix of any unbiased estimator of s, and calculate correlation coefficients based on J^{−1}(s) for each s. For N = 2, for instance, we do this by calculating [J^{−1}(s)]12 / √([J^{−1}(s)]11 [J^{−1}(s)]22). In Figure 4b, we plot this measure for all s1, s2 pairs between −10° and 10°. We see that the inverse of the FIM predicts correlations between the estimates of s1 and s2 and these correlations decrease with |s1 − s2|, just as we observed in our experiments (see Figure 4c). The best fits to experimental data were obtained with fairly broad tuning functions (see Figure 4 caption). For such broad tuning functions, the inverse of the FIM also predicts negative correlations when |s1 − s2| is very large, which does not seem to be as strong in our data.
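This inverse-FIM calculation can also be sketched numerically. The sketch below is a deliberately simplified stand-in for the derivation above, not the paper's exact computation: one chain of K neurons with odd/even indices assigned to the two stimuli, an AR(1)-style ρ^|k−l| correlation with α = 0.5, and only the mean-derivative term D^T Q^{−1} D of the Fisher information (the term involving derivatives of Q is dropped):

```python
import numpy as np

def fim_two_items(s1, s2, K=200, g=50.0, sigma=9.0, a=1.0, rho=0.5):
    """Mean-derivative part of the FIM, D^T Q^{-1} D, for two stimuli
    encoded by one chain of K neurons with limited-range correlations."""
    c = np.linspace(-12.0, 12.0, K)
    s_pref = np.where(np.arange(K) % 2 == 0, s1, s2)  # stimulus each neuron encodes
    f = g * np.exp(-(s_pref - c) ** 2 / sigma ** 2)   # Gaussian tuning curves
    df = -2.0 * (s_pref - c) / sigma ** 2 * f         # d f_k / d s_n
    D = np.zeros((K, 2))
    D[0::2, 0] = df[0::2]                             # derivatives w.r.t. s1
    D[1::2, 1] = df[1::2]                             # derivatives w.r.t. s2
    R = rho ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
    Q = a * np.sqrt(np.outer(f, f)) * R               # alpha = 0.5 covariance
    return D.T @ np.linalg.solve(Q, D)

def predicted_corr(s1, s2):
    Jinv = np.linalg.inv(fim_two_items(s1, s2))       # asymptotic covariance
    return Jinv[0, 1] / np.sqrt(Jinv[0, 0] * Jinv[1, 1])
```

Sweeping `predicted_corr` over pairs with increasing |s1 − s2| traces out the model's predicted fall-off of the estimator correlations under these assumptions.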
Intuitively, this result can be understood as follows. Consider the hypothetical neural population
shown in Figure 4a encoding the pair s1 , s2 . In this population, it is assumed that fk (s) = fk (s1 )
¹ J^{−1}(s) provides a lower bound on the covariance matrix of any unbiased estimator of s in the matrix sense (where A ⪰ B means A − B is positive semi-definite).
Figure 4: (a) A population of neurons satisfying all assumptions made in deriving the FIM. For
neurons in the upper row fk (s) = fk (s1 ), and for neurons in the lower row fk (s) = fk (s2 ). The
magnitude of correlations between two neurons is indicated by the thickness of the line connecting
them. (b) Correlation coefficients estimated from the inverse of the FIM for all stimuli pairs s1 , s2 .
(c) Mean correlation coefficients as a function of |s1 − s2| (red: model's prediction; black: collapsed data from all 4 subjects in the experiment with N = 2). Parameters: α = 0.5, g = 50, a = 1 (these were set to biologically plausible values); other parameters: K = 500, σ = 9.0, L = 0.0325 (the last two were chosen to provide a good fit to the experimental results).
for neurons in the upper row, and fk (s) = fk (s2 ) for neurons in the lower row. Suppose that in
the upper row, the k-th neuron has the best-matching tuning function for a given s1 . Therefore, on
average, the k-th neuron has the highest firing rate in response to s1 . However, since the responses
of the neurons are stochastic, on some trials, neurons to the left (right) of the k-th neuron will have
the highest firing rate in response to s1 . When this happens, neurons in the lower row with similar
preferences will be more likely to get activated, due to the limited-range correlations between the
neurons. This, in turn, will introduce correlations in an estimator of s based on r that are strongest
when the absolute difference between s1 and s2 is small.
Encoding and decoding multiple stimuli in a network of spiking neurons
There might be two concerns about the analytical argument given in the previous subsection. The
first is that we needed to make many assumptions in order to derive an analytic expression for J(s).
It is not clear if we would get similar results when one or more of these assumptions are violated.
Secondly, the interpretation of the off-diagonal terms (covariances) in J^{−1}(s) is somewhat different
from the interpretation of the diagonal terms (variances). Although the diagonal terms provide lower
bounds on the variances of any unbiased estimator of s, the off-diagonal terms do not necessarily
provide lower bounds on the covariances of the estimates, that is, there might be estimators with
lower covariances.
To address these concerns, we simulated a more detailed and realistic network of spiking neurons.
The network consisted of two layers. In the input layer, there were 169 Poisson neurons arranged
in a 13 × 13 grid with linearly spaced receptive field centers between −12° and 12° along both
horizontal and vertical directions. On a given trial, the firing rate of the k-th input neuron was
determined by the following equation:
rk = gin [exp(−‖x1 − c(k)‖/σin) + exp(−‖x2 − c(k)‖/σin)]    (6)
for the case of N = 2. Here ‖·‖ is the Euclidean norm, xi is the vertical and horizontal location of the i-th stimulus, c(k) is the receptive field center of the input neuron, gin is a gain parameter and σin is a scale parameter (both assumed to be the same for all input neurons).
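A minimal sketch of this input-layer computation (grid construction and parameter values follow the text; the helper function is ours):

```python
import numpy as np

def input_rates(stimuli, centers, g_in=120.0, sigma_in=2.0):
    """Eq. (6): each stimulus adds a bump decaying with the Euclidean
    distance between its location and the neuron's RF center."""
    r = np.zeros(len(centers))
    for x in stimuli:
        r += g_in * np.exp(-np.linalg.norm(centers - x, axis=1) / sigma_in)
    return r

# 13 x 13 grid of receptive field centers between -12 and 12 degrees.
xs = np.linspace(-12.0, 12.0, 13)
centers = np.array([(x, y) for y in xs for x in xs])
rates = input_rates([np.array([0.0, 0.0]), np.array([6.0, -4.0])], centers)
```

Neurons whose centers sit near a stimulus fire strongly; neurons far from both stimuli are essentially silent, which is what lets the decoder read the positions back out.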
The output layer consisted of simple leaky integrate-and-fire neurons. There were 169 of these
neurons arranged in a 13 × 13 grid with the receptive field center of each neuron matching the
receptive field center of the corresponding neuron in the input layer. We induced limited-range
correlations between the output neurons through receptive field overlap, although other ways of
introducing limited-range correlations can be considered such as through local lateral connections.
Each output neuron had a Gaussian connection weight profile centered at the corresponding input
Figure 5: (a) Results for the network model. The actual display configurations s are represented by magenta dots, the estimated means based on the model's responses are represented by black dots and the estimated covariances are represented by contours (with red contours representing Σ(s) for which the two dimensions were significantly correlated at the p < 0.05 level). (b) The mean correlation coefficients (and standard errors of the means) as a function of |s1 − s2| (red: model prediction; black: collapsed data from all 4 subjects in the experiment with N = 2). Model parameters: gin = 120, σin = 2, σout = 2. Parameters were chosen to provide a good fit to the experimental results.
neuron and with a standard deviation of σ_out. The output neurons had a threshold of -55 mV and a
reset potential of -70 mV. Each spike of an input neuron k instantaneously increased the voltage of
an output neuron l by 10 w_kl mV, where w_kl is the connection weight between the two neurons, and
the voltage decayed with a time constant of 10 ms. We implemented the network in Python using
the Brian neural network simulator [20].
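The output-layer dynamics above can be sketched as a single Euler step (Brian handles this integration internally); `lif_step` and the `dt` choice are ours, and the leak is taken toward the reset/rest value of -70 mV.

```python
import numpy as np

def lif_step(v, in_spikes, W, dt=0.1, tau=10.0, v_rest=-70.0,
             v_thresh=-55.0, v_reset=-70.0):
    """One Euler step (dt in ms) of the leaky integrate-and-fire output layer:
    exponential decay toward rest plus 10*w_kl mV per input spike, with
    threshold -55 mV and reset -70 mV as described above."""
    v = v + dt * (v_rest - v) / tau + 10.0 * (W.T @ in_spikes)
    fired = v >= v_thresh
    v[fired] = v_reset
    return v, fired
```

Here `W[k, l]` is the connection weight from input neuron k to output neuron l, and `in_spikes[k]` counts that input's spikes in the current step.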
We simulated this network with the same display configurations presented to our subjects in the
experiment with N = 2. Each of the 36 configurations was presented 96 times to the network,
yielding a total of 3456 trials. For each trial, the network was simulated for 100 ms and its estimates
of s1 and s2 were read out using a suboptimal decoding strategy. Specifically, to get an estimate
of s1 , we considered only the row of neurons in the output layer whose preferred vertical locations
were closest to the vertical location of the first stimulus and then we fit a Gaussian function (with
amplitude, peak location and width parameters) to the activity profile of this row of neurons and
considered the estimated peak location as the model's estimate of s1. We did the same for obtaining
an estimate of s2 . Figure 5 shows the results for the network model. Similar to our experimental
results, the spiking network model predicts correlations between the estimates of s1 and s2 and
these correlations decrease with |s1 − s2| (correlations differed significantly across different bins
as indicated by a one-way ANOVA: F(5, 30) = 22.9713, p < 10^{-8}; see Figure 5b). Interestingly,
the model was also able to replicate the biases toward the center of the screen observed in the
experimental data. This is due to the fact that output neurons near the center of the display tended
to have higher activity levels, since they have more connections with the input neurons compared to
the output neurons near the edges of the display.
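The read-out described above fits a three-parameter Gaussian to one row of output activity. As a lightweight stand-in, the sketch below (`decode_row` is our naming) fits a parabola to log-activity around the peak, which recovers the peak of a noiseless Gaussian profile exactly.

```python
import numpy as np

def decode_row(activity, preferred):
    """Estimate a stimulus location from one row of output-layer activity.
    A parabola fit to log-activity around the peak stands in for the paper's
    3-parameter Gaussian fit; for a clean Gaussian profile it is exact."""
    i = int(np.argmax(activity))
    i = min(max(i, 1), len(activity) - 2)          # keep a 3-point neighborhood
    y = np.log(activity[i - 1:i + 2] + 1e-12)
    denom = y[0] - 2.0 * y[1] + y[2]
    offset = 0.0 if denom == 0 else 0.5 * (y[0] - y[2]) / denom
    step = preferred[1] - preferred[0]
    return preferred[i] + offset * step            # sub-grid peak location
```

With noisy spike counts a full least-squares Gaussian fit, as in the paper, would be more robust; the parabola version only illustrates the idea.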
5 Discussion
Properties of correlations among the responses of neural populations have been studied extensively
from both theoretical and experimental perspectives. However, the implications of these correlations
for jointly encoding multiple items in memory are not known. Our results here suggest that one
consequence of limited-range neural correlations might be correlations in the estimates of the feature
values of different items that decrease with the absolute difference between their feature values. An
interesting question is whether our results generalize to other feature dimensions, such as orientation,
spatial frequency etc. Preliminary data from our lab suggest that covariances of the type reported
here for spatial location might also be observed in VSTM for orientation.
Acknowledgments: We thank R. Moreno-Bote for helpful discussions. This work was supported by a research
grant from the National Science Foundation (DRL-0817250).
References
[1] Bays, P.M. & Husain, M. (2008) Dynamic shifts of limited working memory resources in human vision.
Science 321:851-854.
[2] Zhang, P.H. & Luck, S.J. (2008) Discrete fixed-resolution representations in visual working memory. Nature
453:233-235.
[3] Jiang, Y., Olson, I.R. & Chun, M.M. (2000) Organization of visual short-term memory. Journal of Experimental Psychology: Learning, Memory and Cognition 26(3):683-702.
[4] Kahana, M.J. & Sekuler, R. (2002) Recognizing spatial patterns: a noisy exemplar approach. Vision Research 42:2177-2192.
[5] Huang, J. & Sekuler, R. (2010) Distortions in recall from visual memory: Two classes of attractors at work
Journal of Vision 10:1-27.
[6] Brady, T.F. & Alvarez, G.A. (in press) Hierarchical encoding in visual working memory: ensemble statistics
bias memory for individual items. Psychological Science.
[7] Rasmussen, C.E. & Williams, C.K.I (2006) Gaussian Processes for Machine Learning. MIT Press.
[8] Ma, W.J. & Wilken, P. (2004) A detection theory account of change detection. Journal of Vision 4:1120-1135.
[9] Abbott, L.F. & Dayan, P. (1999) The effect of correlated variability on the accuracy of a population code.
Neural Computation 11:91-101.
[10] Shamir, M. & Sompolinsky, H. (2004) Nonlinear population codes. Neural Computation 16:1105-1136.
[11] Wilke, S.D. & Eurich, C.W. (2001) Representational accuracy of stochastic neural populations. Neural
Computation 14:155-189.
[12] Berens, P., Ecker, A.S., Gerwinn, S., Tolias, A.S. & Bethge, M. (2011) Reassessing optimal neural population codes with neurometric functions. PNAS 108(11): 4423-4428.
[13] Snippe, H.P. & Koenderink, J.J. (1992) Information in channel-coded systems: correlated receivers. Biological Cybernetics 67: 183-190.
[14] Sompolinsky, H., Yoon, H., Kang, K. & Shamir, M. (2001) Population coding in neural systems with
correlated noise. Physical Review E 64: 051904.
[15] Josić, K., Shea-Brown, E., Doiron, B. & de la Rocha, J. (2009) Stimulus-dependent correlations and
population codes. Neural Computation 21:2774-2804.
[16] Zohary, E., Shadlen, M.N. & Newsome, W.T. (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370:140-143.
[17] Bair, W., Zohary, E. & Newsome, W.T. (2001) Correlated firing in macaque area MT: Time scales and
relationship to behavior. The Journal of Neuroscience 21(5): 1676-1697.
[18] Maynard, E.M., Hatsopoulos, N.G., Ojakangas, C.L., Acuna, B.D., Sanes, J.N., Norman, R.A. &
Donoghue, J.P. (1999) Neuronal interactions improve cortical population coding of movement direction. The
Journal of Neuroscience 19(18): 8083-8093.
[19] Smith, M.A. & Kohn, A. (2008) Spatial and temporal scales of neuronal correlation in primary visual
cortex. The Journal of Neuroscience 28(48): 12591-12603.
[20] Goodman, D. & Brette, R. (2008) Brian: a simulator for spiking neural networks in Python. Frontiers in
Neuroinformatics 2:5. doi: 10.3389/neuro.11.005.2008.
An Empirical Evaluation of Thompson Sampling
Lihong Li
Yahoo! Research
Santa Clara, CA
[email protected]
Olivier Chapelle
Yahoo! Research
Santa Clara, CA
[email protected]
Abstract
Thompson sampling is one of the oldest heuristics to address the exploration / exploitation trade-off, but it is surprisingly unpopular in the literature. We present
here some empirical results using Thompson sampling on simulated and real data,
and show that it is highly competitive. And since this heuristic is very easy to
implement, we argue that it should be part of the standard baselines to compare
against.
1 Introduction
Various algorithms have been proposed to solve exploration / exploitation or bandit problems. One
of the most popular is Upper Confidence Bound or UCB [7, 3], for which strong theoretical guarantees on the regret can be proved. Another representative is the Bayes-optimal approach of Gittins
[4] that directly maximizes expected cumulative payoffs with respect to a given prior distribution.
A lesser-known family of algorithms is the so-called probability matching. The idea of this heuristic
is old and dates back to [16]. This is the reason why this scheme is also referred to as Thompson
sampling.
The idea of Thompson sampling is to randomly draw each arm according to its probability of being
optimal. In contrast to a full Bayesian method like Gittins index, one can often implement Thompson
sampling efficiently. Recent results using Thompson sampling seem promising [5, 6, 14, 12]. The
reason why it is not very popular might be because of its lack of theoretical analysis. Only two
papers have tried to provide such analysis, but they were only able to prove asymptotic convergence
[6, 11].
In this work, we present some empirical results, first on a simulated problem and then on two real-world ones: display advertisement selection and news article recommendation. In all cases, despite
its simplicity, Thompson sampling achieves state-of-the-art results, and in some cases significantly
outperforms other alternatives like UCB. The findings suggest the necessity to include Thompson
sampling as part of the standard baselines to compare against, and to develop finite-time regret bound
for this empirically successful algorithm.
2 Algorithm
The contextual bandit setting is as follows. At each round we have a context x (optional) and a set
of actions A. After choosing an action a ? A, we observe a reward r. The goal is to find a policy
that selects actions such that the cumulative reward is as large as possible.
Thompson sampling is best understood in a Bayesian setting as follows. The set of past observations
D is made of triplets (x_i, a_i, r_i) and are modeled using a parametric likelihood function P(r|a, x, θ)
depending on some parameters θ. Given some prior distribution P(θ) on these parameters, the posterior distribution of these parameters is given by the Bayes rule, $P(\theta|D) \propto \prod_i P(r_i|a_i, x_i, \theta)\, P(\theta)$.
In the realizable case, the reward is a stochastic function of the action, context and the unknown,
true parameter θ*. Ideally, we would like to choose the action maximizing the expected reward,
max_a E(r|a, x, θ*).

Of course, θ* is unknown. If we are just interested in maximizing the immediate reward (exploitation), then one should choose the action that maximizes $E(r|a, x) = \int E(r|a, x, \theta)\, P(\theta|D)\, d\theta$.

But in an exploration / exploitation setting, the probability matching heuristic consists in randomly
selecting an action a according to its probability of being optimal. That is, action a is chosen with
probability

$$\int \mathbb{I}\left[ E(r|a, x, \theta) = \max_{a'} E(r|a', x, \theta) \right] P(\theta|D)\, d\theta,$$
where I is the indicator function. Note that the integral does not have to be computed explicitly: it
suffices to draw a random parameter θ at each round as explained in Algorithm 1. Implementation
of the algorithm is thus efficient and straightforward in most applications.
Algorithm 1 Thompson sampling
D = ∅
for t = 1, . . . , T do
  Receive context x_t
  Draw θ_t according to P(θ|D)
  Select a_t = argmax_a E_r(r|x_t, a, θ_t)
  Observe reward r_t
  D = D ∪ {(x_t, a_t, r_t)}
end for
In the standard K-armed Bernoulli bandit, each action corresponds to the choice of an arm. The
reward of the i-th arm follows a Bernoulli distribution with mean θ_i*. It is standard to model the mean
reward of each arm using a Beta distribution since it is the conjugate distribution of the binomial
distribution. The instantiation of Thompson sampling for the Bernoulli bandit is given in algorithm
2. It is straightforward to adapt the algorithm to the case where different arms use different Beta
distributions as their priors.
Algorithm 2 Thompson sampling for the Bernoulli bandit
Require: α, β prior parameters of a Beta distribution
S_i = 0, F_i = 0, ∀i. {Success and failure counters}
for t = 1, . . . , T do
  for i = 1, . . . , K do
    Draw θ_i according to Beta(S_i + α, F_i + β).
  end for
  Draw arm î = argmax_i θ_i and observe reward r
  if r = 1 then
    S_î = S_î + 1
  else
    F_î = F_î + 1
  end if
end for
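Algorithm 2 translates almost line for line into Python; the function name and the small simulation harness below are our own, run here on a hypothetical K = 3 bandit.

```python
import numpy as np

def thompson_bernoulli(true_means, T, alpha=1.0, beta=1.0, seed=0):
    """Algorithm 2: Thompson sampling for the K-armed Bernoulli bandit with a
    Beta(alpha, beta) prior on each arm's mean reward."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    S = np.zeros(K)   # success counters
    F = np.zeros(K)   # failure counters
    total = 0
    for _ in range(T):
        theta = rng.beta(S + alpha, F + beta)   # one posterior draw per arm
        a = int(np.argmax(theta))               # play the arm that looks best
        r = int(rng.random() < true_means[a])   # Bernoulli reward
        S[a] += r
        F[a] += 1 - r
        total += r
    return total, S, F

reward, S, F = thompson_bernoulli([0.5, 0.4, 0.4], T=5000)
```

Because exploration comes entirely from the posterior draws, no extra confidence parameter has to be tuned.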
3 Simulations
We present some simulation results with Thompson sampling for the Bernoulli bandit problem and
compare them to the UCB algorithm. The reward probability of each of the K arms is modeled
by a Beta distribution which is updated after an arm is selected (see algorithm 2). The initial prior
distribution is Beta(1,1).
There are various variants of the UCB algorithm, but they all have in common that the confidence
parameter should increase over time. Specifically, we chose the arm for which the following upper
confidence bound [8, page 278] is maximum:

$$\frac{k}{m} + \sqrt{\frac{2\,\frac{k}{m}\,\log(1/\delta)}{m}} + \frac{2\log(1/\delta)}{m}, \qquad \delta = \frac{1}{t}, \qquad (1)$$
where m is the number of times the arm has been selected and k its total reward. This is a tight
upper confidence bound derived from Chernoff's bound.
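Bound (1) is a one-liner to compute; the sketch below is a direct transcription of the formula, with the `m = 0` case handled (our choice) by returning an infinite score so every arm is pulled at least once.

```python
import math

def ucb_index(k, m, t):
    """Upper confidence bound (1) for an arm with total reward k over m pulls
    at round t, with delta = 1/t."""
    if m == 0:
        return float("inf")         # force an initial pull of every arm
    mean = k / m
    log_term = math.log(t)          # log(1/delta) with delta = 1/t
    return mean + math.sqrt(2.0 * mean * log_term / m) + 2.0 * log_term / m
```

At each round the UCB policy simply plays the arm whose `ucb_index` is largest.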
In this simulation, the best arm has a reward probability of 0.5 and the K − 1 other arms have a
probability of 0.5 ? ?. In order to speed up the computations, the parameters are only updated after
every 100 iterations. The regret as a function of T for various settings is plotted in figure 1. An
asymptotic lower bound has been established in [7] for the regret of a bandit algorithm:
$$R(T) \ge \log(T)\left[\sum_{i=1}^{K} \frac{p^* - p_i}{D(p_i \,\|\, p^*)} + o(1)\right], \qquad (2)$$
where p_i is the reward probability of the i-th arm, p* = max_i p_i and D is the Kullback-Leibler
divergence. This lower bound is logarithmic in T with a constant depending on the p_i values. The
plots in figure 1 show that the regrets are indeed logarithmic in T (the linear trend on the right hand
side) and it turns out that the observed constants (slope of the lines) are close to the optimal constants
given by the lower bound (2). Note that the offset of the red curve is irrelevant because of the o(1)
term in the lower bound (2). In fact, the red curves were shifted such that they pass through the
lower left-hand corner of the plot.
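The constant in bound (2) is easy to evaluate for the simulated Bernoulli bandits; the sketch below (our naming) computes it for the K = 10, ε = 0.1 setting of Figure 1.

```python
import math

def lai_robbins_constant(probs):
    """Constant multiplying log(T) in the asymptotic lower bound (2) for a
    Bernoulli bandit with the given arm means."""
    def kl(p, q):  # Bernoulli Kullback-Leibler divergence D(p || q)
        return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
    p_star = max(probs)
    return sum((p_star - p) / kl(p, p_star) for p in probs if p < p_star)

# K = 10: best arm 0.5, nine arms at 0.5 - epsilon with epsilon = 0.1
c = lai_robbins_constant([0.5] + [0.4] * 9)
```

The resulting slope is the one the red lines in Figure 1 follow (up to the o(1) offset).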
[Figure 1 about here: four panels (K = 10, ε = 0.1; K = 100, ε = 0.1; K = 10, ε = 0.02; K = 100, ε = 0.02) showing cumulative regret versus T for Thompson sampling, UCB, and the asymptotic lower bound.]

Figure 1: Cumulative regret for K ∈ {10, 100} and ε ∈ {0.02, 0.1}. The plots are averaged over
100 repetitions. The red line is the lower bound (2) shifted such that it goes through the origin.
As with any Bayesian algorithm, one can wonder about the robustness of Thompson sampling to
prior mismatch. The results in figure 1 include already some prior mismatch because the Beta prior
with parameters (1,1) has a large variance while the true probabilities were selected to be close to
[Figure 2 about here: regret versus T for Thompson sampling and optimistic Thompson sampling.]

Figure 2: Regret of optimistic Thompson sampling [11] in the same setting as the lower left plot of
figure 1.
0.5. We have also done some other simulations (not shown) where there is a mismatch in the prior
mean. In particular, when the reward probability of the best arm is 0.1 and the 9 others have a
probability of 0.08, Thompson sampling?with the same prior as before?is still better than UCB
and is still asymptotically optimal.
We can thus conclude that in these simulations, Thompson sampling is asymptotically optimal and
achieves a smaller regret than the popular UCB algorithm. It is important to note that for UCB,
the confidence bound (1) is tight; we have tried some other confidence bounds, including the one
originally proposed in [3], but they resulted in larger regrets.
Optimistic Thompson sampling The intuition behind UCB and Thompson sampling is that, for
the purpose of exploration, it is beneficial to boost the predictions of actions for which we are uncertain. But Thompson sampling modifies the predictions in both directions and there is apparently no
benefit in decreasing a prediction. This observation led to a recently proposed algorithm called Optimistic Bayesian sampling [11] in which the modified score is never smaller than the mean. More
precisely, in algorithm 1, $E_r(r|x_t, a, \theta_t)$ is replaced by $\max\big(E_r(r|x_t, a, \theta_t),\, E_{r,\theta|D}(r|x_t, a, \theta)\big)$.
Simulations in [12] showed some gains using this optimistic version of Thompson sampling. We
compared in figure 2 the two versions of Thompson sampling in the case K = 10 and ε = 0.02.
Optimistic Thompson sampling achieves a slightly better regret, but the gain is marginal. A possible explanation is that when the number of arms is large, it is likely that, in standard Thompson
sampling, the selected arm already has a boosted score.
Posterior reshaping Thompson sampling is a heuristic advocating to draw samples from the posterior, but one might consider changing that heuristic to draw samples from a modified distribution.
In particular, sharpening the posterior would have the effect of increasing exploitation while widening it would favor exploration. In our simulations, the posterior is a Beta distribution with parameters a and b, and we have tried to change it to parameters a/α, b/α. Doing so does not change the
posterior mean, but multiplies its variance by a factor close to α².
Figure 3 shows the average and distribution of regret for different values of α. Values of α smaller
than 1 decrease the amount of exploration and often result in lower regret. But the price to pay is a
higher variance: in some runs, the regret is very large. The average regret is asymptotically not as
good as with α = 1, but tends to be better in the non-asymptotic regime.
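Reshaping amounts to a one-line change in the sampling step; this sketch (our naming, with the Beta(1, 1) prior of the simulations assumed) divides both posterior parameters by α before drawing.

```python
import numpy as np

def reshaped_draw(S, F, alpha, rng):
    """Posterior reshaping: draw from Beta(a/alpha, b/alpha) instead of the
    Beta(a, b) posterior, where a = S + 1 and b = F + 1 under a Beta(1, 1)
    prior; alpha < 1 sharpens the posterior, alpha > 1 widens it."""
    a = (S + 1.0) / alpha
    b = (F + 1.0) / alpha
    return rng.beta(a, b)
```

The mean a/(a+b) is unchanged by the rescaling, so only the spread of the draws, and hence the amount of exploration, is affected.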
Impact of delay In a real world system, the feedback is typically not processed immediately
because of various runtime constraints. Instead it usually arrives in batches over a certain period of
time. We now try to quantify the impact of this delay by doing some simulations that mimic the
problem of news articles recommendation [9] that will be described in section 5.
[Figure 3 about here: left panel, average regret versus T for α ∈ {0.25, 0.5, 1, 2} together with the asymptotic lower bound; right panel, regret distributions at T = 10^7.]

Figure 3: Thompson sampling where the parameters of the Beta posterior distribution have been
divided by α. The setting is the same as in the lower left plot of figure 1 (1000 repetitions). Left:
average regret as a function of T. Right: distribution of the regret at T = 10^7. Since the outliers can
take extreme values, those above 6000 are compressed at the top of the figure.
Table 1: Influence of the delay: regret when the feedback is provided every δ steps.

δ        1        3        10       32       100      316      1000
UCB      24,145   24,695   25,662   28,148   37,141   77,687   226,220
TS       9,105    9,199    9,049    9,451    11,550   21,594   59,256
Ratio    2.65     2.68     2.84     2.98     3.22     3.60     3.82
We consider a dynamic set of 10 items. At a given time, with probability 10^{-3} one of the items retires
and is replaced by a new one. The true reward probability of a given item is drawn according to a
Beta(4,4) distribution. The feedback is received only every δ time units. Table 1 shows the average
regret (over 100 repetitions) of Thompson sampling and UCB at T = 10^6. An interesting quantity
in this simulation is the relative regret of UCB and Thompson sampling. It appears that Thompson
sampling is more robust than UCB when the delay is long. Thompson sampling alleviates the
influence of delayed feedback by randomizing over actions; on the other hand, UCB is deterministic
and suffers a larger regret in case of a sub-optimal choice.
4 Display Advertising
We now consider an online advertising application. Given a user visiting a publisher page, the
problem is to select the best advertisement for that user. A key element in this matching problem is
the click-through rate (CTR) estimation: what is the probability that a given ad will be clicked given
some context (user, page visited)? Indeed, in a cost-per-click (CPC) campaign, the advertiser only
pays when his ad gets clicked. This is the reason why it is important to select ads with high CTRs.
There is of course a fundamental exploration / exploitation dilemma here: in order to learn the CTR
of an ad, it needs to be displayed, leading to a potential loss of short-term revenue. More details on
on display advertising and the data used for modeling can be found in [1].
In this paper, we consider standard regularized logistic regression for predicting CTR. There are
several features representing the user, page, ad, as well as conjunctions of these features. Some
of the features include identifiers of the ad, advertiser, publisher and visited page. These features
are hashed [17] and each training sample ends up being represented as a sparse binary vector of
dimension 2^24.
In our model, the posterior distribution on the weights is approximated by a Gaussian distribution
with diagonal covariance matrix. As in the Laplace approximation, the mean of this distribution is
the mode of the posterior and the inverse variance of each weight is given by the curvature. The use
of this convenient approximation of the posterior is twofold. It first serves as a prior on the weights
to update the model when a new batch of training data becomes available, as described in algorithm
3. And it is also the distribution used in Thompson sampling.
Algorithm 3 Regularized logistic regression with batch updates
Require: Regularization parameter λ > 0.
m_i = 0, q_i = λ. {Each weight w_i has an independent prior N(m_i, q_i^{-1})}
for t = 1, . . . , T do
  Get a new batch of training data (x_j, y_j), j = 1, . . . , n.
  Find w as the minimizer of: $\frac{1}{2}\sum_{i=1}^{d} q_i (w_i - m_i)^2 + \sum_{j=1}^{n} \log(1 + \exp(-y_j\, w^\top x_j))$.
  m_i = w_i
  $q_i = q_i + \sum_{j=1}^{n} x_{ij}^2\, p_j (1 - p_j)$, $p_j = (1 + \exp(-w^\top x_j))^{-1}$ {Laplace approximation}
end for
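A minimal numpy sketch of one batch update of Algorithm 3, using 0/1 labels (equivalent to the ±1 convention above) and plain gradient descent in place of a proper convex solver; `laplace_update` and the learning-rate setting are illustrative, not the production implementation.

```python
import numpy as np

def laplace_update(m, q, X, y, iters=200, lr=0.1):
    """One batch update of Algorithm 3: find the MAP weights under the
    independent Gaussian priors N(m_i, 1/q_i), then update the per-weight
    precisions with the diagonal Laplace approximation."""
    w = m.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = q * (w - m) + X.T @ (p - y)        # gradient of the MAP objective
        w = w - lr * grad
    p = 1.0 / (1.0 + np.exp(-X @ w))
    q_new = q + (X ** 2).T @ (p * (1.0 - p))      # sum_j x_ij^2 p_j (1 - p_j)
    return w, q_new
```

The returned `(w, q_new)` become the prior for the next batch, so the model keeps a running Gaussian approximation of the posterior, which is exactly the distribution Thompson sampling draws from.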
Evaluating an explore / exploit policy is difficult because we typically do not know the reward of an
action that was not chosen. A possible solution, as we shall see in section 5, is to use a replayer in
which previous, randomized exploration data can be used to produce an unbiased offline estimator
of the new policy [10]. Unfortunately, their approach cannot be used in our case here because
it reduces the effective data size substantially when the number of arms K is large, yielding too
high variance in the evaluation results. [15] studies another promising approach using the idea of
importance weighting, but the method applies only when the policy is static, which is not the case
for online bandit algorithms that constantly adapt to its history.
For the sake of simplicity, therefore, we considered in this section a simulated environment. More
precisely, the context and the ads are real, but the clicks are simulated using a weight vector w*.
This weight vector could have been chosen arbitrarily, but it was in fact a perturbed version of some
weight vector learned from real clicks. The input feature vectors x are thus as in the real world setting, but the clicks are artificially generated with probability $P(y = 1|x) = (1 + \exp(-w_*^\top x))^{-1}$.
About 13,000 contexts, representing a small random subset of the total traffic, are presented every
hour to the policy which has to choose an ad among a set of eligible ads. The number of eligible ads
for each context depends on numerous constraints set by the advertiser and the publisher. It varies
between 5,910 and 1 with a mean of 1,364 and a median of 514 (over a set of 66,373 ads). Note that
in this experiment, the number of eligible ads is smaller than what we would observe in live traffic
because we restricted the set of advertisers.
The model is updated every hour as described in algorithm 3. A feature vector is constructed for
every (context, ad) pair and the policy decides which ad to show. A click for that ad is then generated
with probability $(1 + \exp(-w_*^\top x))^{-1}$. This labeled training sample is then used at the end of the
hour to update the model. The total number of clicks received during this one hour period is the
reward. But in order to eliminate unnecessary variance in the estimation, we instead computed the
expectation of that number since the click probabilities are known.
Several explore / exploit strategies are compared; they only differ in the way the ads are selected; all
the rest, including the model updates, is identical as described in algorithm 3. These strategies are:
Thompson sampling This is algorithm 1 where each weight is drawn independently according to
its Gaussian posterior approximation N(m_i, q_i^{-1}) (see algorithm 3). As in section 3, we
also consider a variant in which the standard deviations q_i^{-1/2} are first multiplied by a factor
α ∈ {0.25, 0.5}. This favors exploitation over exploration.
LinUCB This is an extension of the UCB algorithm to the parametric case [9]. It selects the
ad based on mean and standard deviation. It also has a factor α to control the exploration / exploitation
trade-off. More precisely, LinUCB selects the ad for which
$\sum_{i=1}^{d} m_i x_i + \alpha \sqrt{\sum_{i=1}^{d} q_i^{-1} x_i^2}$ is maximum.
Exploit-only Select the ad with the highest mean.
Random Select the ad uniformly at random.
Table 2: CTR regrets on the display advertising data.

Method       TS                   LinUCB               ε-greedy               Exploit   Random
Parameter    0.25   0.5    1      0.5    1      2      0.005   0.01   0.02    -         -
Regret (%)   4.45   3.72   3.81   4.99   4.22   4.14   5.05    4.98   5.22    5.00      31.95
[Figure 4 about here: CTR regret per hour over the test period for Thompson sampling, LinUCB, and Exploit-only.]

Figure 4: CTR regret over the 4 days test period for 3 algorithms: Thompson sampling with α = 0.5,
LinUCB with α = 2, Exploit-only. The regret in the first hour is large, around 0.3, because the
algorithms predict randomly (no initial model provided).
ε-greedy Mix between exploitation and random: with probability ε, select a random ad; otherwise,
select the one with the highest mean.
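The scoring rules above can be sketched in a few lines under the diagonal Gaussian posterior of Algorithm 3; `select_ad` and the per-call seeding are our own naming and simplifications.

```python
import numpy as np

def select_ad(X, m, q, method="ts", alpha=1.0, seed=0):
    """Score the eligible ads (rows of X) under the diagonal Gaussian
    posterior N(m_i, 1/q_i) and return the chosen index; a sketch of the
    selection rules compared in this section."""
    rng = np.random.default_rng(seed)
    if method == "ts":        # Thompson sampling: one weight draw per round,
        w = m + alpha * rng.standard_normal(len(m)) / np.sqrt(q)
        scores = X @ w        # with standard deviations scaled by alpha
    elif method == "linucb":  # mean score plus a scaled uncertainty bonus
        scores = X @ m + alpha * np.sqrt((X ** 2) @ (1.0 / q))
    else:                     # exploit-only: highest posterior-mean score
        scores = X @ m
    return int(np.argmax(scores))
```

Note the structural difference: Thompson sampling randomizes the weights once per round and then scores deterministically, whereas LinUCB adds a deterministic bonus per ad.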
Results A preliminary result concerns the quality of the variance prediction. The diagonal Gaussian approximation of the posterior does not seem to harm the variance predictions. In particular, they are very well calibrated: when constructing a 95% confidence interval for CTR, the true CTR is in this interval 95.1% of the time.
The regrets of the different explore / exploit strategies can be found in table 2. Thompson sampling achieves the best regret and, interestingly, the modified version with α = 0.5 gives slightly better results than the standard version (α = 1). This confirms the results of the previous section (figure 3), where α < 1 yielded better regrets in the non-asymptotic regime.
Exploit-only does pretty well, at least compared to random selection. This seems at first a bit surprising given that the system has no prior knowledge about the CTRs. A possible explanation is that the change in context induces some exploration, as noted in [13]. Also, the fact that exploit-only is so much better than random might explain why ε-greedy does not beat it: whenever this strategy chooses a random action, it suffers a large regret on average which is not compensated by its exploration benefit.
Finally, figure 4 shows the regret of three algorithms across time. As expected, the regret has a
decreasing trend over time.
5 News Article Recommendation
In this section, we consider another application of Thompson sampling in personalized news article
recommendation on Yahoo! front page [2, 9]. Each time a user visits the portal, a news article out
of a small pool of hand-picked candidates is recommended. The candidate pool is dynamic: old
articles may retire and new articles may be added in. The average size of the pool is around 20.
The goal is to choose the most interesting article to users, or formally, maximize the total number of
clicks on the recommended articles. In this case, we treat articles as arms, and define the payoff to
be 1 if the article is clicked on and 0 otherwise. Therefore, the average per-trial payoff of a policy is
its overall CTR.
[Figure 5 compares TS 0.5, TS 1, OTS 0.5, OTS 1, UCB 1, UCB 2, UCB 5, EG 0.05, EG 0.1, and Exploit; the y-axis is normalized CTR and the x-axis is the update delay in minutes.]
Figure 5: Normalized CTRs of various algorithms on the news article recommendation data with different update delays: {10, 30, 60} minutes. The normalization is with respect to a random baseline.
Each user was associated with a binary raw feature vector of over 1000 dimensions, which encodes information about the user such as age, gender, geographical location, behavioral targeting, etc. These features are typically sparse, so using them directly makes learning more difficult and computationally expensive. One can find a lower-dimensional feature subspace by, say, following previous practice [9]. Here, we adopted the simpler principal component analysis (PCA), which did not appear to affect the bandit algorithms much in our experience. In particular, we performed a PCA and projected the raw user features onto the first 20 principal components. Finally, a constant feature 1 is appended, so that the final user feature contains 21 components. The constant feature serves as the bias term in the CTR model described next.
We use logistic regression, as in Algorithm 3, to model article CTRs: given a user feature vector x ∈ ℝ^21, the probability of a click on article a is (1 + exp(−x^T wa))^{-1} for some weight vector wa ∈ ℝ^21 to be learned. The same training algorithm and exploration heuristics are applied as in the previous section. Note that we have a different weight vector for each article, which is affordable as the numbers of articles and features are both small. Furthermore, given the size of the data, we have not found article features to be helpful. Indeed, it is shown in our previous work [9, Figure 5] that article features are helpful in this domain only when data are highly sparse.
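The per-article click model is then just a logistic function of the 21 user features; as a small illustration:

```python
import numpy as np

def click_probability(x, w_a):
    """CTR of article a for user features x under the logistic model:
    p = 1 / (1 + exp(-x . w_a))."""
    return 1.0 / (1.0 + np.exp(-np.dot(x, w_a)))
```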
Given the small size of the candidate pool, we adopt the unbiased offline evaluation method of [10]
to compare various bandit algorithms. In particular, we collected randomized serving events for
a random fraction of user visits; in other words, these random users were recommended an article
chosen uniformly from the candidate pool. From 7 days in June 2009, over 34M randomized serving
events were obtained.
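The evaluation method of [10] can be sketched as a replay procedure: because logged articles were chosen uniformly at random, the events where a policy's choice matches the logged choice form an unbiased sample of that policy's CTR. The function below is illustrative, not the method's reference implementation:

```python
def replay_ctr(events, policy):
    """Estimate a policy's CTR from uniformly-randomized logged events.

    events: iterable of (candidates, logged_choice, click) triples, where
    logged_choice was drawn uniformly from candidates.
    policy: function mapping candidates -> chosen item.
    Only events where the policy agrees with the log are counted, which
    yields an unbiased CTR estimate under uniform logging.
    """
    matched = clicks = 0
    for candidates, logged_choice, click in events:
        if policy(candidates) == logged_choice:
            matched += 1
            clicks += click
    return clicks / matched if matched else float("nan")
```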
As in section 3, we varied the update delay to study how various algorithms degrade. Three values
were tried: 10, 30, and 60 minutes. Figure 5 summarizes the overall CTRs of four families of
algorithms together with the exploit-only baseline. As in the previous section, (optimistic) Thompson sampling appears competitive across all delays. While the deterministic UCB works well with a short delay, its performance drops significantly as the delay increases. In contrast, randomized algorithms are more robust to delay, and when there is a one-hour delay, (optimistic) Thompson sampling is significantly better than others (given the size of our data).
6 Conclusion
The extensive experimental evaluation carried out in this paper reveals that Thompson sampling is a
very effective heuristic for addressing the exploration / exploitation trade-off. In its simplest form,
it does not have any parameter to tune, but our results show that tweaking the posterior to reduce
exploration can be beneficial. In any case, Thompson sampling is very easy to implement and should
thus be considered as a standard baseline. Also, since it is a randomized algorithm, it is robust in the
case of delayed feedback.
Future work includes, of course, a theoretical analysis of its finite-time regret. The benefit of this
analysis would be twofold. First, it would hopefully contribute to make Thompson sampling as
popular as other algorithms for which regret bounds exist. Second, it could provide guidance on
tweaking the posterior in order to achieve a smaller regret.
References
[1] D. Agarwal, R. Agrawal, R. Khanna, and N. Kota. Estimating rates of rare events with multiple hierarchies through scalable log-linear models. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 213–222, 2010.
[2] Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrishnan, Scott Roy, and Joe Zachariah. Online models for content optimization. In Advances in Neural Information Processing Systems 21, pages 17–24, 2008.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[4] John C. Gittins. Multi-armed Bandit Allocation Indices. Wiley Interscience Series in Systems and Optimization. John Wiley & Sons Inc, 1989.
[5] Thore Graepel, Joaquin Quinonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale Bayesian click-through rate prediction for sponsored search advertising in Microsoft's Bing search engine. In Proceedings of the Twenty-Seventh International Conference on Machine Learning (ICML-10), pages 13–20, 2010.
[6] O.-C. Granmo. Solving two-armed Bernoulli bandit problems using a Bayesian learning automaton. International Journal of Intelligent Computing and Cybernetics, 3(2):207–234, 2010.
[7] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[8] J. Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6(1):273–306, 2005.
[9] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670, 2010.
[10] L. Li, W. Chu, J. Langford, and X. Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pages 297–306, 2011.
[11] Benedict C. May, Nathan Korda, Anthony Lee, and David S. Leslie. Optimistic Bayesian sampling in contextual-bandit problems. Technical Report 11:01, Statistics Group, Department of Mathematics, University of Bristol, 2011. Submitted to the Annals of Applied Probability.
[12] Benedict C. May and David S. Leslie. Simulation studies in optimistic Bayesian sampling in contextual-bandit problems. Technical Report 11:02, Statistics Group, Department of Mathematics, University of Bristol, 2011.
[13] J. Sarkar. One-armed bandit problems with covariates. The Annals of Statistics, 19(4):1978–2002, 1991.
[14] S. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26:639–658, 2010.
[15] Alexander L. Strehl, John Langford, Lihong Li, and Sham M. Kakade. Learning from logged implicit exploration data. In Advances in Neural Information Processing Systems 23 (NIPS-10), pages 2217–2225, 2011.
[16] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3–4):285–294, 1933.
[17] K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. Smola. Feature hashing for large scale multitask learning. In ICML, 2009.
Active learning of neural response functions
with Gaussian processes
Mijung Park
Electrical and Computer Engineering
The University of Texas at Austin
[email protected]
Greg Horwitz
Departments of Physiology and Biophysics
The University of Washington
[email protected]
Jonathan W. Pillow
Departments of Psychology and Neurobiology
The University of Texas at Austin
[email protected]
Abstract
A sizeable literature has focused on the problem of estimating a low-dimensional
feature space for a neuron's stimulus sensitivity. However, comparatively little
work has addressed the problem of estimating the nonlinear function from feature
space to spike rate. Here, we use a Gaussian process (GP) prior over the infinite-dimensional space of nonlinear functions to obtain Bayesian estimates of the "nonlinearity" in the linear-nonlinear-Poisson (LNP) encoding model. This approach
offers increased flexibility, robustness, and computational tractability compared
to traditional methods (e.g., parametric forms, histograms, cubic splines). We
then develop a framework for optimal experimental design under the GP-Poisson
model using uncertainty sampling. This involves adaptively selecting stimuli according to an information-theoretic criterion, with the goal of characterizing the
nonlinearity with as little experimental data as possible. Our framework relies on
a method for rapidly updating hyperparameters under a Gaussian approximation
to the posterior. We apply these methods to neural data from a color-tuned simple cell in macaque V1, characterizing its nonlinear response function in the 3D
space of cone contrasts. We find that it combines cone inputs in a highly nonlinear
manner. With simulated experiments, we show that optimal design substantially
reduces the amount of data required to estimate these nonlinear combination rules.
1 Introduction
One of the central problems in systems neuroscience is to understand how neural spike responses
convey information about environmental stimuli, which is often called the neural coding problem.
One approach to this problem is to build an explicit encoding model of the stimulus-conditional
response distribution p(r|x), where r is a (scalar) spike count elicited in response to a (vector) stimulus x. The popular linear-nonlinear-Poisson (LNP) model characterizes this encoding relationship
in terms of a cascade of stages: (1) linear dimensionality reduction using a bank of filters or receptive
fields; (2) a nonlinear function from filter outputs to spike rate; and (3) an inhomogeneous Poisson
spiking process [1].
While a sizable literature [2–10] has addressed the problem of estimating the linear front end to this
model, the nonlinear stage has received comparatively less attention. Most prior work has focused
on: simple parametric forms [6, 9, 11]; non-parametric methods that do not scale easily to high
Figure 1: Encoding model schematic. The nonlinear function f converts an input vector x to a scalar, which g then transforms to a non-negative spike rate λ = g(f(x)). The spike response r is a Poisson random variable with mean λ.
dimensions (e.g., histograms, splines) [7, 12]; or nonlinearities defined by a sum or product of 1D
nonlinear functions [10, 13].
In this paper, we use a Gaussian process (GP) to provide a flexible, computationally tractable model
of the multi-dimensional neural response nonlinearity f (x), where x is a vector in feature space.
Intuitively, a GP defines a probability distribution over the infinite-dimensional space of functions
by specifying a Gaussian distribution over its finite-dimensional marginals (i.e., the probability over
the function values at any finite collection of points), with hyperparameters that control the function's variability and smoothness [14]. Although exact inference under a model with GP prior and Poisson observations is analytically intractable, a variety of approximate and sampling-based inference methods have been developed [15, 16]. Our work builds on a substantial literature in neuroscience that has used GP-based models to decode spike trains [17–19], estimate spatial receptive fields [20, 21], infer continuous spike rates from spike trains [22–24], infer common inputs [25], and
extract low-dimensional latent variables from multi-neuron spiking activity [26, 27].
We focus on data from trial-based experiments where stimulus-response pairs (x, r) are sparse in the
space of possible stimuli. We use a fixed inverse link function g to transform f (x) to a non-negative
spike rate, which ensures the posterior over f is log-concave [6, 20]. This log-concavity justifies a
Gaussian approximation to the posterior, which we use to perform rapid empirical Bayes estimation
of hyperparameters [5, 28]. Our main contribution is an algorithm for optimal experimental design,
which allows f to be characterized quickly and accurately from limited data [29, 30]. The method
relies on uncertainty sampling [31], which involves selecting the stimulus x for which g(f (x)) is
maximally uncertain given the data collected in the experiment so far. We apply our methods to
the nonlinear color-tuning properties of macaque V1 neurons. We show that the GP-Poisson model
provides a flexible, tractable model for these responses, and that optimal design can substantially
reduce the number of stimuli required to characterize them.
2 GP-Poisson neural encoding model
2.1 Encoding model (likelihood)
We begin by defining a probabilistic encoding model for the neural response. Let ri be an observed
neural response (the spike count in some time interval T) at the i-th trial given the input stimulus xi. Here, we will assume that x is a D-dimensional vector in the moderately low-dimensional neural feature space to which the neuron is sensitive, the output of the "L" stage in the LNP model.
Under the encoding model (Fig. 1), an input vector xi passes through a nonlinear function f , whose
real-valued output is transformed to a positive spike rate through a (fixed) function g. The spike response is a Poisson random variable with mean g(f(x)), so the conditional probability of a stimulus-response pair is Poisson:
p(ri | xi, f) = (1/ri!) λi^{ri} e^{−λi},   λi = g(f(xi)).   (1)
For a complete dataset, the log-likelihood is:
L(f) = log p(r|X, f) = r^T log(g(f)) − 1^T g(f) + const,   (2)
where r = (r1, . . . , rN)^T is a vector of spike responses, 1 is a vector of ones, and f = (f(x1), . . . , f(xN))^T is shorthand for the vector defined by evaluating f at the points in X = {x1, . . . , xN}. Note that although f is an infinite-dimensional object in the space of functions, the
likelihood only depends on the value of f at the points in X.
In this paper, we fix the inverse-link function to g(f ) = log(1 + exp(f )), which has the nice
property that it grows linearly for large f and decays gracefully to zero for negative f . This allows
us to place a Gaussian prior on f without allocating probability mass to negative spike rates, and
obviates the need for constrained optimization of f (but see [22] for a highly efficient solution). Most
importantly, for any g that is simultaneously convex and log-concave1 , the log-likelihood L(f ) is
concave in f , meaning it is free of non-global local extrema [6,20]. Combining L with a log-concave
prior (as we do in the next section) ensures the log-posterior is also concave.
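As a minimal sketch (not the authors' code), the log-likelihood of eq. 2 with the softplus inverse link can be computed as:

```python
import numpy as np

def softplus(f):
    """Inverse link g(f) = log(1 + exp(f)), computed stably."""
    return np.logaddexp(0.0, f)

def poisson_loglik(f, r):
    """L(f) = r . log(g(f)) - 1 . g(f), dropping the constant log(r!) terms."""
    g = softplus(f)
    return float(r @ np.log(g) - g.sum())
```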
2.2 Gaussian Process prior
Gaussian processes (GPs) allow us to define a probability distribution over the infinite-dimensional
space of functions by specifying a Gaussian distribution over a function's finite-dimensional marginals (i.e., the probability over the function values at any finite collection of points). The hyperparameters defining this prior are a mean μ_f and a kernel function k(xi, xj) that specifies the
covariance between function values f (xi ) and f (xj ) for any pair of input points xi and xj . Thus,
the GP prior over the function values f is given by
p(f) = N(f | μ_f 1, K) = |2πK|^{−1/2} exp( −(1/2)(f − μ_f 1)^T K^{−1}(f − μ_f 1) )   (3)
where K is a covariance matrix whose i, j-th entry is Kij = k(xi, xj). Generally, the kernel
controls the prior smoothness of f by determining how quickly the correlation between nearby
function values falls off as a function of distance. (See [14] for a general treatment). Here, we use a
Gaussian kernel, since neural response nonlinearities are expected to be smooth in general:
k(xi, xj) = ρ exp( −||xi − xj||² / (2τ) ),   (4)
where hyperparameters ρ and τ control the marginal variance and smoothness scale, respectively. The GP therefore has three hyperparameters, θ = {μ_f, ρ, τ}, which set the prior mean and covariance matrix over f for any collection of points in X.
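The prior of eqs. 3–4 only requires building the covariance matrix K; a small sketch (illustrative, with ρ and τ as above):

```python
import numpy as np

def se_kernel(X1, X2, rho=1.0, tau=1.0):
    """Covariance matrix with K[i, j] = rho * exp(-||x1_i - x2_j||^2 / (2 tau))."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return rho * np.exp(-sq / (2.0 * tau))
```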
2.3 MAP inference for f
The maximum a posteriori (MAP) estimate can be obtained by numerically maximizing the posterior
for f . From Bayes rule, the log-posterior is simply the sum of the log-likelihood (eq. 2) and log-prior
(eq. 3) plus a constant:
log p(f|r, X, θ) = r^T log(g(f)) − 1^T g(f) − (1/2)(f − μ_f)^T K^{−1}(f − μ_f) + const.   (5)
As noted above, this posterior has a unique maximum fmap so long as g is convex and log-concave.
However, the solution vector fmap defined this way contains only the function values at the points
in the training set X. How do we find the MAP estimate of f at other points not in our training set?
The GP prior provides a simple analytic formula for the maximum of the joint marginal containing
the training data and any new point f ? = f (x? ), for a new stimulus x? . We have
p(f*, f | x*, r, X, θ) = p(f*|f, θ) p(f|r, X, θ) = N(f* | μ*, σ*²) p(f|r, X, θ)   (6)
where, from the GP prior, μ* = μ_f + k* K^{−1}(f − μ_f) and σ*² = k(x*, x*) − k* K^{−1} k*^T are the (f-dependent) mean and variance of f*, and row vector k* = (k(x1, x*), . . . , k(xN, x*)). This
factorization arises from the fact that f* is conditionally independent of the data given the value of the function at X. Clearly, this posterior marginal (eq. 6) is maximized when f* = μ* and f = fmap.² Thus, for any collection of novel points X*, the MAP estimate for f(X*) is given by the mean of the conditional distribution over f* given fmap:
p(f(X*) | X*, fmap, θ) = N( μ_f + K* K^{−1}(fmap − μ_f), K** − K* K^{−1} K*^T )   (7)
¹Such functions must grow monotonically at least linearly and at most exponentially [6]. Examples include exponential, half-rectified linear, and log(1 + exp(f))^p for p ≥ 1.
²Note that this is not necessarily identical to the marginal MAP estimate of f* | x*, r, X, θ, which requires maximizing (eq. 6) integrated with respect to f.
where K*_il = k(x*_i, x_l) and K**_ij = k(x*_i, x*_j).
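Eq. 7 maps directly to code. The sketch below takes any kernel function k(X1, X2) that returns the cross-covariance matrix; it is an illustration of the formula, not the authors' implementation:

```python
import numpy as np

def gp_predict(Xstar, X, f_map, mu_f, k):
    """Conditional mean and covariance of f(X*) given f(X) = f_map (eq. 7)."""
    K = k(X, X)
    Ks = k(Xstar, X)                      # K*
    Kss = k(Xstar, Xstar)                 # K**
    alpha = np.linalg.solve(K, f_map - mu_f)
    mean = mu_f + Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```

Evaluating at the training points themselves recovers fmap with zero conditional variance, as expected.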
In practice, the prior covariance matrix K is often ill-conditioned when datapoints in X are closely
spaced and the smoothing hyperparameter τ is large, making it impossible to numerically compute K^{−1}. When the number of points is not too large (N < 1000), we can address this by performing a
singular value decomposition (SVD) of K and keeping only the singular vectors with singular value
above some threshold. This results in a lower-dimensional numerical optimization problem, since
we only have to search the space spanned by the singular vectors of K. We discuss strategies for
scaling to larger datasets in the Discussion.
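The SVD-based regularization described above might look as follows (a sketch; the tolerance is an arbitrary choice):

```python
import numpy as np

def truncated_basis(K, tol=1e-8):
    """Return the singular vectors of K whose singular values exceed
    tol * max(s). Optimizing f = B @ z over the reduced coordinates z
    avoids inverting an ill-conditioned K."""
    U, s, _ = np.linalg.svd(K)
    keep = s > tol * s.max()
    return U[:, keep], s[keep]
```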
2.4 Efficient evidence optimization for θ
The hyperparameters θ = {μ_f, ρ, τ} that control the GP prior have a major influence on the shape of the inferred nonlinearity, particularly in high dimensions and when data is scarce. A theoretically attractive and computationally efficient approach for setting θ is to maximize the evidence p(r|X, θ),
also known as the marginal likelihood, a general approach known as empirical Bayes [5, 14, 28, 32].
Here we describe a method for rapid evidence maximization that we will exploit to design an active
learning algorithm in Section 3.
The evidence can be computed by integrating the product of the likelihood and prior with respect to
f, but can also be obtained by solving for the (often neglected) denominator term in Bayes' rule:
p(r|θ) = ∫ p(r|f) p(f|θ) df = p(r|f) p(f|θ) / p(f|r, θ),   (8)
where we have dropped conditioning on X for notational convenience. For the GP-Poisson model
here, this integral is not tractable analytically, but we can approximate it as follows. We begin with
a well-known Gaussian approximation to the posterior known as the Laplace approximation, which
comes from a 2nd-order Taylor expansion of the log-posterior around its maximum [28]:
p(f|r, θ) ≈ N(f|fmap, Λ),   Λ^{−1} = H + K^{−1},   (9)
where H = −∇²_f L(f) is the Hessian (second-derivative matrix) of the negative log-likelihood (eq. 2), evaluated at fmap, and K^{−1} is the inverse prior covariance (eq. 3). This approximation is reasonable given that the posterior is guaranteed to be unimodal and log-concave. Plugging it into the
denominator in (eq. 8) gives us a formula for evaluating approximate evidence,
p(r|θ) ≈ exp(L(f)) N(f|μ_f, K) / N(f|fmap, Λ),   (10)
which we evaluate at f = fmap , since the Laplace approximation is the most accurate there [20, 33].
The hyperparameters θ directly affect the prior mean and covariance (μ_f, K), as well as the posterior mean and covariance (fmap, Λ), all of which are essential for evaluating the evidence. Finding fmap and Λ given θ requires numerical optimization of log p(f|r, θ), which is computationally expensive to perform for each search step in θ. To overcome this difficulty, we decompose the posterior moments (fmap, Λ) into terms that depend on θ and terms that do not via a Gaussian approximation
to the likelihood. The logic here is that a Gaussian posterior and prior imply a likelihood function
proportional to a Gaussian, which in turn allows prior and posterior moments to be computed analytically for each θ. This trick is similar to that of the EP algorithm [34]: we divide a Gaussian component out of the Gaussian posterior and approximate the remainder as Gaussian. The resulting moments are H = Λ^{−1} − K^{−1} for the likelihood inverse-covariance (which is the Hessian of the negative log-likelihood from eq. 9), and m = H^{−1}(Λ^{−1} fmap − K^{−1} μ_f) for the likelihood mean, which
comes from the standard formula for the product of two Gaussians.
Our algorithm for evidence optimization proceeds as follows: (1) given the current hyperparameters
θ_i, numerically maximize the posterior and form the Laplace approximation N(fmap_i, Λ_i); (2) compute the Gaussian "potential" N(m_i, H_i) underlying the likelihood, given the current values of (fmap_i, Λ_i, θ_i), as described above; (3) find θ_{i+1} by maximizing the log-evidence, which is:

E(θ) = r^T log(g(fmap)) − 1^T g(fmap) − (1/2) log|K H_i + I| − (1/2)(fmap − μ_f)^T K^{−1}(fmap − μ_f),   (11)

where fmap and Λ are updated using H_i and m_i obtained in step (2), i.e. fmap = Λ(H_i m_i + K^{−1} μ_f) and Λ = (H_i + K^{−1})^{−1}. Note that this significantly expedites evidence optimization since we do not have to numerically optimize fmap for each θ.
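Steps (1)–(3) hinge on evaluating eq. 11 cheaply once the likelihood potential (m_i, H_i) is fixed. The sketch below is illustrative (softplus link, dense linear algebra), not the authors' implementation:

```python
import numpy as np

def log_evidence(K, mu_f, H, m, r):
    """Approximate log-evidence E(theta) of eq. 11, given a fixed Gaussian
    likelihood potential N(m, H). The posterior moments are recomputed in
    closed form for each candidate prior (mu_f, K)."""
    n = len(K)
    mu = mu_f * np.ones(n)
    Lam = np.linalg.inv(H + np.linalg.inv(K))            # posterior covariance
    f_map = Lam @ (H @ m + np.linalg.solve(K, mu))       # posterior mean
    gf = np.logaddexp(0.0, f_map)                        # g(f) = log(1 + e^f)
    d = f_map - mu
    return float(r @ np.log(gf) - gf.sum()
                 - 0.5 * np.linalg.slogdet(K @ H + np.eye(n))[1]
                 - 0.5 * d @ np.linalg.solve(K, d))
```

A hyperparameter search then only calls `log_evidence` repeatedly, with no inner numerical optimization over f.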
Figure 2: Comparison of random and optimal design in a simulated experiment with a 1D nonlinearity. The true nonlinear response function g(f (x)) is in gray, the posterior mean is in black solid, 95%
confidence interval is in black dotted, stimulus is in blue dots. A (top): Random design: responses
were measured with 20 (left) and 100 (right) additional stimuli, with stimuli sampled uniformly over
the interval shown on the x axis. A (bottom): Optimal design: responses were measured with same
numbers of additional stimuli selected by uncertainty sampling (see text). B: Mean square error as
a function of the number of stimulus-response pairs. The optimal design achieved half the error rate
of the random design experiment.
3 Optimal design: uncertainty sampling
So far, we have introduced an efficient algorithm for estimating the nonlinearity f and hyperparameters θ for an LNP encoding model under a GP prior. Here we introduce a method for adaptively
selecting stimuli during an experiment (often referred to as active learning or optimal experimental design) to minimize the amount of data required to estimate f [29]. The basic idea is that we
should select stimuli that maximize the expected information gained about the model parameters.
This information gain of course depends on the posterior distribution over the parameters given the
data collected so far. Uncertainty sampling [31] is an algorithm that is appropriate when the model
parameters and stimulus space are in a 1-1 correspondence. It involves selecting the stimulus x
for which the posterior over the parameter f(x) has highest entropy, which in the case of a Gaussian
posterior corresponds to the highest posterior variance.

Here we alter the algorithm slightly to select stimuli for which we are most uncertain about the spike
rate g(f(x)), not (as stated above) the stimuli where we are most uncertain about the underlying
function f(x). The rationale for this approach is that we are generally more interested in the neuron's spike rate as a function of the stimulus (which involves the inverse link function g) than in
the parameters we have used to define that function. Moreover, for any link function that maps R to
the positive reals R+, as required for Poisson models, we will have unavoidable uncertainty about
negative values of f, which will not be overcome by sampling small (integer) spike-count responses.
Our strategy therefore focuses on uncertainty in the expected spike rate rather than uncertainty in f.
Our method proceeds as follows. Given the data observed up to a certain time in the experiment,
we define a grid of (evenly-spaced) points {x*_j} as candidate next stimuli. For each point, we
compute the posterior uncertainty σ_j about the spike rate g(f(x*_j)) using the delta method, i.e.,
σ_j = g′(f(x*_j)) λ_j, where λ_j is the posterior standard deviation (square root of the posterior variance)
at f(x*_j) and g′ is the derivative of g with respect to its argument. The stimulus selected on trial
t+1, given all data observed up to time t, is drawn randomly from the set

x_{t+1} ∈ {x*_j | σ_j ≥ σ_i for all i},   (12)

that is, the set of all stimuli for which the uncertainty σ is maximal. To find {σ_j} at each candidate point,
we must first update Λ and f_map. After each trial, we update f_map by numerically optimizing the
posterior, then update the hyperparameters using (eq. 11), and then numerically re-compute f_map
and Λ given the new θ. The method is summarized in Algorithm 1, and runtimes are shown in Fig. 5.
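A minimal sketch of the selection rule (eq. 12), assuming the posterior mean and standard deviation of f have already been computed on the candidate grid; taking g′ = exp (an exponential inverse link) is our assumption for illustration:

```python
import numpy as np

def select_next_stimulus(f_mean, f_std, g_prime=np.exp, rng=None):
    """Uncertainty sampling on the spike rate: the delta-method SD of
    g(f(x*_j)) is sigma_j = g'(f_mean_j) * f_std_j. Return a random index
    among the maximizers, as in eq. (12)."""
    sigma = g_prime(f_mean) * f_std
    best = np.flatnonzero(sigma == sigma.max())   # all maximizers
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.choice(best))
```

The randomization only matters when several candidates tie for the maximal uncertainty.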
Algorithm 1 Optimal design for nonlinearity estimation under a GP-Poisson model
1. Given the current data D_t = {x_1, ..., x_t, r_1, ..., r_t}, the posterior mode f_map_t, and hyperparameters θ_t, compute the posterior mean and standard deviation (f*_map, λ*) at a grid of candidate stimulus locations {x*}.
2. Select the element of {x*} for which σ* = g′(f*_map) λ* is maximal.
3. Present the selected x_{t+1} and record the neural response r_{t+1}.
4. Find f_map_{t+1} | D_{t+1}, θ_t; update θ_{t+1} by maximizing evidence; then find f_map_{t+1} | D_{t+1}, θ_{t+1}.
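The control flow of Algorithm 1 can be sketched as a loop over trials; every callable below is a placeholder for the corresponding numerical routine in the text (posterior optimization, evidence maximization, stimulus presentation), not an implementation of it:

```python
def run_experiment(select, present, update_posterior, update_hyperparams,
                   n_trials, data=None):
    """Skeleton of Algorithm 1. `select` implements steps 1-2 (uncertainty
    sampling over candidate stimuli), `present` runs the trial (step 3),
    and the two update callables implement the refits in step 4."""
    data = [] if data is None else data
    for _ in range(n_trials):
        x_next = select(data)           # steps 1-2: pick most uncertain stimulus
        r = present(x_next)             # step 3: record the neural response
        data.append((x_next, r))
        update_posterior(data)          # step 4: refit f_map
        update_hyperparams(data)        # step 4: re-optimize theta by evidence
    return data
```

The structure makes explicit that both the posterior mode and the hyperparameters are refreshed between every pair of trials.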
4 Simulations
We tested our method in simulation using a 1-dimensional feature space, where it is easy to visualize
the nonlinearity and the uncertainty of our estimates (Fig. 2). The stimulus space was taken to be
the range [0, 100], the true f was a sinusoid, and spike responses were simulated as Poisson with
rate g(f(x)). We compared the estimate of g(f(x)) obtained using optimal design to the estimate
obtained with "random sampling", stimuli drawn uniformly from the stimulus range.

Fig. 2 shows the estimates of g(f(x)) after 20 and 100 trials using each method, along with the
marginal posterior standard deviation, which provides a ±2 SD Bayesian confidence interval for the
estimate. The optimal design method effectively decreased the high variance in the middle (near 50)
because it drew more samples where uncertainty about the spike rate was higher (due to the fact that
variance increases with mean for Poisson neurons). The estimates using random sampling (A, top)
were less accurate because random sampling drew more points in the tails, where the variance was
originally lower than at the center. We also examined the errors in each method as a function of the
number of data points. For each number of data points we drew 100 datasets and computed the
average error between the estimate and the true g(f(x)). As shown in (B), uncertainty sampling
achieved roughly half the error rate of random sampling after 20 datapoints.
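The simulated data can be reproduced in a few lines. The exact sinusoid and rate scale are not given in the text, so the parameters below (period 50, base rate 10, g = exp) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_true(x):
    # Assumed sinusoid on [0, 100]; the paper specifies only "a sinusoid".
    return np.sin(2 * np.pi * x / 50.0)

def g(f):
    # Assumed inverse link and rate scale (spikes per trial).
    return 10.0 * np.exp(f)

# "Random sampling": stimuli drawn uniformly over the stimulus range.
x_rand = rng.uniform(0.0, 100.0, size=20)
r_rand = rng.poisson(g(f_true(x_rand)))   # Poisson spike counts per trial
```

Swapping the uniform draw for the uncertainty-sampling rule yields the optimal-design condition of Fig. 2.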
5 Experiments
Figure 3: Raw experimental data: stimuli in 3D cone-contrast space (above) and recorded spike counts (below)
during the first 60 experimental trials. Several (3-6) stimulus staircases along different directions in color space
were randomly interleaved to avoid the effects of adaptation; a color direction is defined as the relative proportions of L, M, and S cone contrasts, with [0 0 0] corresponding to a neutral gray (zero-contrast) stimulus. In
each color direction, contrast was actively titrated with
the aim of evoking a response of 29 spikes/sec. This
sampling procedure permitted a broad survey of the stimulus space, with the objective that many stimuli evoked
a statistically reliable but non-saturating response. In all,
677 stimuli in 65 color directions were presented for this
neuron.
We recorded from a V1 neuron in an awake, fixating rhesus monkey while Gabor patterns with varying color and contrast were presented at the receptive field. The orientation and spatial frequency of the
Gabor were fixed at the neuron's preferred values, and the pattern drifted at 3 Hz for 667 ms per trial. Contrast
was varied using multiple interleaved staircases along different axes in color space, and spikes were
counted during a 557 ms window beginning 100 ms after stimulus onset. The staircase design
was used because the experiments were carried out prior to the formulation of the optimal design methods
described in this paper. However, we analyze them here as a "simulated optimal design experiment", in which we choose stimuli sequentially from the list of stimuli that were actually presented
during the experiment, in an order determined by our information-theoretic criterion. See the Fig. 3
caption for more details of the experimental recording.
Figure 4: One- and two-dimensional conditional "slices" through the 3D nonlinearity of a V1 simple
cell in cone-contrast space. A: 1D conditionals showing spike rate as a function of L, M, and S
cone contrast, respectively, with the other cone contrasts fixed to zero. Traces show the posterior mean
and ±2 SD credible interval given all datapoints (solid and dotted gray), and the posterior mean
given only 150 data points selected randomly (black) or by optimal design (red), carried out by
drawing a subset of the data points actually collected during the experiment. Note that even with
only 1/4 of the data, the optimal design estimate is nearly identical to the estimate obtained from all 677
datapoints. B: 2D conditionals on M and L (first row), S and L (second row), and M and S (third row)
cones, respectively, with the other cone contrast set to zero. The 2D conditionals using optimal design
sampling (middle column) with 150 data points are much closer to the 2D conditionals using all data
(right column) than those from a random sub-sampling of 150 points (left column).
We first used the entire dataset (677 stimulus-response pairs) to find the posterior maximum f_map,
with hyperparameters set by maximizing evidence (sequential optimization of f_map and θ via eq. 11
until convergence). Fig. 4 shows 1D and 2D conditional slices through the estimated 3D nonlinearity
g(f(x)), with contour plots constructed using the MAP estimate of f on a fine grid of points. For a
neuron with linear summation of cone contrasts followed by an output nonlinearity
(i.e., as assumed by the standard model of V1 simple cells), the contours would consist of straight lines. The
curvature observed in the contour plots (Fig. 4B) indicates that cone contrasts are summed together in a
highly nonlinear fashion, especially for the L and M cones (top).
We then performed a simulated optimal design experiment by selecting from the 677 stimulus-response pairs collected during the experiment and re-ordering them greedily according to the
uncertainty sampling algorithm described above. We compared the estimate obtained using only
1/4 of the data (150 points) with an estimate obtained if we had randomly sub-sampled 150 data
points from the dataset (Fig. 4). Using only 150 data points, the conditionals of the estimate using
uncertainty sampling were almost identical to those using all data (677 points).
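The "simulated optimal design experiment" amounts to greedily re-ordering a fixed pool of already-collected trials. A schematic version follows; the `score` callable stands in for the full posterior refit and spike-rate uncertainty computation described above, and is an assumption of this sketch:

```python
import numpy as np

def greedy_reorder(score, pool, n_select):
    """Re-order already-collected trials: at each step, score the remaining
    candidates (in the paper, by posterior uncertainty of the spike rate given
    the trials chosen so far) and move the best one into the selected list."""
    chosen, remaining = [], list(pool)
    for _ in range(min(n_select, len(remaining))):
        scores = score(chosen, remaining)   # placeholder for model refit + delta method
        j = int(np.argmax(scores))
        chosen.append(remaining.pop(j))
    return chosen
```

Because the pool is finite and fixed, this procedure can only re-order the experiment, never request stimuli that were not actually presented.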
Although our software implementation of the optimal design method was crude (using Matlab's
fminunc twice to find f_map and fmincon once to optimize the hyperparameters during each
inter-trial interval), the speed was more than adequate for the experimental data collected (Fig. 5A),
using a machine with an Intel 3.33 GHz XEON processor. The largest bottleneck by far was
computing the eigendecomposition of K at each search step for θ. We discuss briefly how to
improve the speed of our algorithm in the Discussion.
Lastly, we added a recursive filter h to the model (Fig. 1) to incorporate the effects of spike history
on the neuron's response, allowing us to account for the possible effects of adaptation on the spike
counts obtained. We computed the maximum a posteriori (MAP) estimate for h under a temporal
smoothing prior (Fig. 5). It shows that the neuron's response has a mild dependence on its recent
spike history, with a self-exciting effect of spikes within the last 25 s. We evaluated the performance
of the augmented model by holding out a random 10% of the data for cross-validation. Prediction
performance on test data was more accurate by an average of 0.2 spikes per trial in predicted spike
count, a 4 percent reduction in cross-validation error compared to the original model.

Figure 5: Comparison of run time and error of the optimal design method using simulated experiments
obtained by resampling experimental data. A: The run time for uncertainty sampling (including the posterior
update and the evidence optimization) as a function of the number of data points observed. (The grid
of "candidate" stimuli {x*} was the subset of stimuli in the experimental dataset not yet selected,
but the speed was not noticeably affected by scaling to much larger sets of candidate stimuli.) The
black dotted line shows the mean intertrial interval of 677 ms. B: The mean squared error between
the estimate obtained using each sampling method and that obtained using the full dataset. Note
that the error of uncertainty sampling with 150 points is even lower than that from random sampling
with 300 data points. C: Estimated response-history filter h, which describes how recent spiking
influences the neuron's spike rate. This neuron shows a self-excitatory influence on the time-scale of
25 s, with self-suppression on a longer scale of approximately 1 min.
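The history-augmented model conditions the rate on recent spiking. A sketch of the rate computation, with g = exp assumed and h taken to be a causal filter over past trial spike counts (the exact parameterization is our assumption):

```python
import numpy as np

def rate_with_history(f_x, spikes, h):
    """lambda_t = exp(f(x_t) + sum_j h[j] * r_{t-1-j}): h weights the most
    recent trials first. With h = 0 this reduces to the original LNP model."""
    f_x = np.asarray(f_x, float)
    r = np.asarray(spikes, float)
    h = np.asarray(h, float)
    lam = np.empty_like(f_x)
    for t in range(len(f_x)):
        past = r[max(0, t - len(h)):t][::-1]     # r_{t-1}, r_{t-2}, ...
        lam[t] = np.exp(f_x[t] + h[:len(past)] @ past)
    return lam
```

A positive early coefficient of h produces the self-exciting effect reported above; negative later coefficients produce the slower self-suppression.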
6 Discussion
We have developed an algorithm for optimal experimental design, which allows the nonlinearity in
a cascade neural encoding model to be characterized quickly and accurately from limited data. The
method relies on a fast method for updating the hyperparameters using a Gaussian factorization of
the Laplace approximation to the posterior, which removes the need to numerically recompute the
MAP estimate as we optimize the hyperparameters. We described a method for optimal experimental design, based on uncertainty sampling, to reduce the number of stimuli required to estimate such
response functions. We applied our method to the nonlinear color-tuning properties of macaque
V1 neurons and showed that the GP-Poisson model provides a flexible, tractable model for these
responses, and that optimal design can substantially reduce the number of stimuli required to characterize them. One additional virtue of the GP-Poisson model is that conditionals and marginals
of the high-dimensional nonlinearity are straightforward, making it easy to visualize their lower-dimensional slices and projections (as we have done in Fig. 4). We added a history term to the LNP
model in order to incorporate the effects of recent spike history on the spike rate (Fig. 5), which
provided a very slight improvement in prediction accuracy. We expect the ability to incorporate dependencies on spike history to be important for the success of optimal design experiments,
especially with neurons that exhibit strong spike-rate adaptation [30].
One potential criticism of our approach is that uncertainty sampling in unbounded spaces is known
to "run away from the data", repeatedly selecting stimuli that are far from previous measurements.
We wish to point out that in neural applications the stimulus space is always bounded (e.g., by the
gamut of the monitor), and in our case stimuli at the corners of the space are actually helpful for
initializing estimates of the range and smoothness of the function.
In future work, we will work to improve the speed of the algorithm for use in real-time neurophysiology experiments, using analytic first and second derivatives for evidence optimization and exploring
approximate methods for sparse GP inference [35]. We will examine kernel functions with a more
tractable matrix inverse [20], and test other information-theoretic data selection criteria for response
function estimation [36].
References
[1] E. P. Simoncelli, J. W. Pillow, L. Paninski, and O. Schwartz. The Cognitive Neurosciences, III, chapter 23, pages 327-338. MIT Press, Cambridge, MA, October 2004.
[2] R. R. de Ruyter van Steveninck and W. Bialek. Proc. R. Soc. Lond. B, 234:379-414, 1988.
[3] E. J. Chichilnisky. Network: Computation in Neural Systems, 12:199-213, 2001.
[4] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Network: Computation in Neural Systems, 12:289-316, 2001.
[5] M. Sahani and J. Linden. NIPS, 15, 2003.
[6] L. Paninski. Network: Computation in Neural Systems, 15:243-262, 2004.
[7] Tatyana Sharpee, Nicole C. Rust, and William Bialek. Neural Comput, 16(2):223-250, Feb 2004.
[8] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Journal of Vision, 6(4):484-507, 7 2006.
[9] J. W. Pillow and E. P. Simoncelli. Journal of Vision, 6(4):414-428, 4 2006.
[10] Misha B. Ahrens, Jennifer F. Linden, and Maneesh Sahani. J Neurosci, 28(8):1929-1942, Feb 2008.
[11] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Neuron, 46(6):945-956, Jun 2005.
[12] I. DiMatteo, C. Genovese, and R. Kass. Biometrika, 88:1055-1073, 2001.
[13] S. F. Martins, L. A. Sousa, and J. C. Martins. Image Processing, 2007 (ICIP 2007), IEEE International Conference on, volume 3, pages III-309. IEEE, 2007.
[14] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[15] Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, and Wei Wu. J Comput Neurosci, Aug 2009.
[16] Jarno Vanhatalo, Ville Pietiläinen, and Aki Vehtari. Statistics in Medicine, 29(15):1580-1607, July 2010.
[17] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. Journal of Neuroscience, 18:7411-7425, 1998.
[18] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Neural Computation, 18(1):80-118, 2006.
[19] Y. Ahmadian, J. W. Pillow, and L. Paninski. Neural Comput, 23(1):46-96, Jan 2011.
[20] K. R. Rad and L. Paninski. Network: Computation in Neural Systems, 21(3-4):142-168, 2010.
[21] Jakob H. Macke, Sebastian Gerwinn, Leonard E. White, Matthias Kaschube, and Matthias Bethge. Neuroimage, 56(2):570-581, May 2011.
[22] John P. Cunningham, Krishna V. Shenoy, and Maneesh Sahani. Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 192-199, New York, NY, USA, 2008. ACM.
[23] R. P. Adams, I. Murray, and D. J. C. MacKay. Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, 2009.
[24] Todd P. Coleman and Sridevi S. Sarma. Neural Computation, 22(8):2002-2030, 2010.
[25] J. E. Kulkarni and L. Paninski. Network: Computation in Neural Systems, 18(4):375-407, 2007.
[26] A. C. Smith and E. N. Brown. Neural Computation, 15(5):965-991, 2003.
[27] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Journal of Neurophysiology, 102(1):614, 2009.
[28] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[29] D. MacKay. Neural Computation, 4:589-603, 1992.
[30] J. Lewi, R. Butera, and L. Paninski. Neural Computation, 21(3):619-687, 2009.
[31] David D. Lewis and William A. Gale. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3-12. Springer-Verlag, 1994.
[32] G. Casella. American Statistician, pages 83-87, 1985.
[33] J. W. Pillow, Y. Ahmadian, and L. Paninski. Neural Comput, 23(1):1-45, Jan 2011.
[34] T. P. Minka. UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[35] E. Snelson and Z. Ghahramani. Advances in Neural Information Processing Systems, 18:1257, 2006.
[36] Andreas Krause, Ajit Singh, and Carlos Guestrin. J. Mach. Learn. Res., 9:235-284, June 2008.
Learning large-margin halfspaces
with more malicious noise
Rocco A. Servedio
Columbia University
[email protected]
Philip M. Long
Google
[email protected]
Abstract
We describe a simple algorithm that runs in time poly(n, 1/γ, 1/ε) and learns an
unknown n-dimensional γ-margin halfspace to accuracy 1 − ε in the presence of
malicious noise, when the noise rate is allowed to be as high as Θ(εγ√(log(1/γ))).
Previous efficient algorithms could only learn to accuracy 1 − ε in the presence of
malicious noise of rate at most Θ(εγ).
Our algorithm does not work by optimizing a convex loss function. We show that
no algorithm for learning γ-margin halfspaces that minimizes a convex proxy for
misclassification error can tolerate malicious noise at a rate greater than Θ(εγ);
this may partially explain why previous algorithms could not achieve the higher
noise tolerance of our new algorithm.
1 Introduction

Learning an unknown halfspace from labeled examples that satisfy a margin constraint (meaning that
no example may lie too close to the separating hyperplane) is one of the oldest and most intensively
studied problems in machine learning, with research going back at least five decades to early seminal
work on the Perceptron algorithm [5, 26, 27].

In this paper we study the problem of learning an unknown γ-margin halfspace in the model of
Probably Approximately Correct (PAC) learning with malicious noise at rate η. More precisely, in
this learning scenario the target function is an unknown origin-centered halfspace f(x) = sign(w · x)
over the domain R^n (we may assume w.l.o.g. that w is a unit vector). There is an unknown
distribution D over the unit ball B_n = {x ∈ R^n : ‖x‖_2 ≤ 1} which is guaranteed to put zero
probability mass on examples x that lie within Euclidean distance at most γ from the separating
hyperplane w · x = 0; in other words, every point x in the support of D satisfies |w · x| ≥ γ. The
learner has access to a noisy example oracle EX_η(f, D) which works as follows: when invoked,
with probability 1 − η the oracle draws x from D and outputs the labeled example (x, f(x)), and
with probability η the oracle outputs a "noisy" labeled example which may be an arbitrary element
(x′, y) of B_n × {−1, 1}. (It may be helpful to think of the noisy examples as being constructed by an
omniscient and malevolent adversary who has full knowledge of the state of the learning algorithm
and previous draws from the oracle. In particular, note that noisy examples need not satisfy the
margin constraint and can lie arbitrarily close to, or on, the hyperplane w · x = 0.) The goal of
the learner is to output a hypothesis h : R^n → {−1, 1} which has high accuracy with respect to
D: more precisely, with probability at least 1/2 (over the draws from D used to run the learner and
any internal randomness of the learner) the hypothesis h must satisfy Pr_{x∼D}[h(x) ≠ f(x)] ≤ ε.
(Because the success probability can be improved efficiently using standard repeat-and-test techniques
[19], we follow the common practice of excluding it from our analysis.) In
particular, we are interested in computationally efficient learning algorithms which have running
time poly(n, 1/γ, 1/ε).
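The noisy oracle EX_η(f, D) is easy to simulate. In the sketch below the adversary is an arbitrary callable, since the model places no restriction on it beyond returning a pair in B_n × {−1, 1}; the function names and interfaces are our own:

```python
import numpy as np

def malicious_oracle(w, draw_x, adversary, eta, rng):
    """With probability 1 - eta, return a clean example (x, sign(w.x)) with
    x ~ D (via draw_x); with probability eta, return whatever the adversary
    supplies: an arbitrary labeled point in the unit ball."""
    if rng.random() < eta:
        return adversary()          # arbitrary (x', y) in B_n x {-1, 1}
    x = draw_x()
    return x, int(np.sign(w @ x))
```

The adversary may inspect the learner's state before choosing its output; a simulation can model that by passing the current state into `adversary`.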
Introduced by Valiant in 1985 [30], the malicious noise model is a challenging one, as witnessed by
the fact that learning algorithms can typically only withstand relatively low levels of malicious noise.
Indeed, it is well known that for essentially all PAC learning problems it is information-theoretically
possible to learn to accuracy 1 − ε only if the malicious noise rate η is at most ε/(1 + ε) [20],
and most computationally efficient algorithms for learning even simple classes of functions can only
tolerate significantly lower malicious noise rates (see e.g. [1, 2, 8, 20, 24, 28]).

Interestingly, the original Perceptron algorithm [5, 26, 27] for learning a γ-margin halfspace can be
shown to have relatively high tolerance to malicious noise. Several researchers [14, 17] have established upper bounds on the number of mistakes that the Perceptron algorithm will make when run on
a sequence of examples that are linearly separable with a margin except for some limited number of
"noisy" data points. Servedio [28] observed that combining these upper bounds with Theorem 6.2
of Auer and Cesa-Bianchi [3] yields a straightforward "PAC version" of the online Perceptron algorithm that can learn γ-margin halfspaces to accuracy 1 − ε in the presence of malicious noise
provided that the malicious noise rate η is at most some value Θ(εγ). Servedio [28] also describes
a different PAC learning algorithm which uses a "smooth" booster together with a simple geometric
real-valued weak learner and achieves essentially the same result: it also learns a γ-margin halfspace
to accuracy 1 − ε in the presence of malicious noise at rate at most Θ(εγ). Both the boosting-based
algorithm of [28] and the Perceptron-based approach run in time poly(n, 1/γ, 1/ε).
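For intuition, the online Perceptron whose mistake bounds [14, 17] underlie this noise tolerance is only a few lines; this is the standard textbook version, not code from any of the cited works:

```python
import numpy as np

def perceptron(stream, dim):
    """Classic online Perceptron: update only on mistakes. On a gamma-margin
    sequence of unit-ball points its mistake bound is 1/gamma^2, and it
    degrades gracefully when a limited number of noisy points violate the
    margin."""
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:
        if y * (w @ x) <= 0:        # mistake (or zero margin): update
            w = w + y * x
            mistakes += 1
    return w, mistakes
```

The PAC conversion via [3] runs this online learner over draws from the noisy oracle and outputs one of its intermediate hypotheses.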
Our results. We give a simple new algorithm for learning γ-margin halfspaces in the presence of
malicious noise. Like the earlier approaches, our algorithm runs in time poly(n, 1/ε, 1/γ); however,
it goes beyond the Θ(εγ) malicious noise tolerance of previous approaches. Our first main result is:

Theorem 1 There is a poly(n, 1/ε, 1/γ)-time algorithm that can learn an unknown γ-margin halfspace to accuracy 1 − ε in the presence of malicious noise at any rate η ≤ cεγ√(log(1/γ)) whenever
γ < 1/7, where c > 0 is a universal constant.

While our Θ(√(log(1/γ))) improvement is not large, it is interesting to go beyond the "natural-looking" Θ(εγ) bound of Perceptron and other simple approaches. The algorithm of Theorem 1 is
not based on convex optimization, and this is not a coincidence: our second main result is, roughly
stated, the following.
Informal paraphrase of Theorem 2 Let A be any learning algorithm that chooses a hypothesis
vector v so as to minimize a convex proxy for the binary misclassification error. Then A cannot
learn γ-margin halfspaces to accuracy 1 − ε in the presence of malicious noise at rate η ≥ cεγ,
where c > 0 is a universal constant.
Our approach. The algorithm of Theorem 1 is a modification of a boosting-based approach to
learning halfspaces that is due to Balcan and Blum [7] (see also [6]). [7] considers a weak learner
which simply generates a random origin-centered halfspace sign(v · x) by taking v to be a uniform
random unit vector. The analysis of [7], which is for a noise-free setting, shows that such a random
halfspace has probability Ω(γ) of having accuracy at least 1/2 + Ω(γ) with respect to D. Given
this, any boosting algorithm can be used to get a PAC algorithm for learning γ-margin halfspaces to
accuracy 1 − ε.

Our algorithm is based on a modified weak learner which generates a collection of k = ⌈log(1/γ)⌉
independent random origin-centered halfspaces h_1 = sign(v_1 · x), ..., h_k = sign(v_k · x) and takes
the majority vote H = Maj(h_1, ..., h_k). The crux of our analysis is to show that if there is no noise,
then with probability at least (roughly) γ² the function H has accuracy at least 1/2 + Ω(γ√k) with
respect to D (see Section 2, in particular Lemma 1). By using this weak learner in conjunction with
a "smooth" boosting algorithm as in [28], we get the overall malicious-noise-tolerant PAC learning
algorithm of Theorem 1 (see Section 3).
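To make the weak learner concrete, here is a minimal sketch of one call to A_k: draw k uniformly random unit vectors and output their majority vote. This is our own illustration, not the authors' code, and the function name is invented; note that k should be odd so the majority is always well defined.

```python
import numpy as np

def weak_hypothesis_Ak(n, k, rng):
    """One call to A_k: k uniform random unit vectors in R^n, majority vote.
    Intended for odd k, so the vote never ties."""
    V = rng.standard_normal((k, n))                 # Gaussian directions, normalized,
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # are uniform on the unit sphere
    def H(x):
        return 1.0 if np.sign(V @ x).sum() > 0 else -1.0
    return H
```

Because A_k draws no examples, the hypothesis is generated before any (possibly corrupted) data is seen, which is exactly why malicious noise cannot affect its execution.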
For Theorem 2 we consider any algorithm that draws some number m of samples and minimizes
a convex proxy for misclassification error. If m is too small then well-known sample complexity
bounds imply that the algorithm cannot learn γ-margin halfspaces to high accuracy, so we may
assume that m is large; but together with the assumption that the noise rate is high, this means
that with overwhelmingly high probability the sample will contain many noisy examples. The heart
of our analysis deals with this situation; we describe a simple γ-margin data source and adversary
strategy which ensures that the convex proxy for misclassification error will achieve its minimum
on a hypothesis vector that has accuracy less than 1 − ε with respect to the underlying noiseless
distribution of examples. We also establish the same fact about algorithms that use a regularizer
from a class that includes the most popular regularizers based on p-norms.
Related work. As mentioned above, Servedio [28] gave a boosting-based algorithm that learns
γ-margin halfspaces with malicious noise at rates up to η = Θ(εγ). Khardon and Wachman [21]
empirically studied the noise tolerance of variants of the Perceptron algorithm. Klivans et al. [22]
showed that an algorithm that combines PCA-like techniques with smooth boosting can tolerate relatively high levels of malicious noise provided that the distribution D is sufficiently "nice" (uniform
over the unit sphere or isotropic log-concave). We note that γ-margin distributions are significantly
less restrictive and can be very far from having the "nice" properties required by [22].

We previously [23] showed that any boosting algorithm that works by stagewise minimization of a
convex "potential function" cannot tolerate random classification noise, a type of "benign" rather
than malicious noise which independently flips the label of each example with probability η.
A natural question is whether Theorem 2 follows from [23] by having the malicious noise simply
simulate random classification noise; the answer is no, essentially because the ordering of quantifiers
is reversed in the two results. The construction and analysis from [23] crucially relies on the fact
that in the setting of that paper, first the random misclassification noise rate η is chosen to take some
particular value in (0, 1/2), and then the margin parameter γ is selected in a way that depends on
η. In contrast, in this paper the situation is reversed: in our setting first the margin parameter γ is
selected, and then given this value we study how high a malicious noise rate η can be tolerated.
2 The basic weak learner for Theorem 1

Let f(x) = sign(w · x) be an unknown halfspace and D be an unknown distribution over the n-dimensional unit ball that has a γ margin with respect to f as described in Section 1. For odd k ≥ 1
we let A_k denote the algorithm that works as follows: A_k generates k independent uniform random
unit vectors v_1, ..., v_k in R^n and outputs the hypothesis H(x) = Maj(sign(v_1 · x), ..., sign(v_k · x)). Note that A_k does not use any examples (and thus malicious noise does not affect its execution).
As the main result of Section 2 we show that if k is not too large then algorithm A_k has a non-negligible chance of outputting a reasonably good weak hypothesis:
Lemma 1 For odd k ≤ 1/(16γ²), the hypothesis H generated by A_k has probability at least
Ω(γ√k / 2^k) of satisfying Pr_{x∼D}[H(x) ≠ f(x)] ≤ 1/2 − γ√k/(100π).
2.1 A useful tail bound

The following notation will be useful in analyzing algorithm A_k: let
vote(γ, k) := Pr[Σ_{i=1}^k X_i < k/2], where X_1, ..., X_k are i.i.d. Bernoulli (0/1) random variables with E[X_i] =
1/2 + γ for all i. Clearly vote(γ, k) is the lower tail of a Binomial distribution, but for our purposes we need an upper bound on vote(γ, k) when k is very small relative to 1/γ² and the value
of vote(γ, k) is close to, but crucially less than, 1/2. Standard Chernoff-type bounds [10] do not
seem to be useful here, so we give a simple self-contained proof of the bound we need (no attempt
has been made to optimize constant factors below).
Lemma 2 For 0 < γ < 1/2 and odd k ≤ 1/(16γ²) we have vote(γ, k) ≤ 1/2 − γ√k/50.
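For the small values of k that matter here, vote(γ, k) can be computed exactly, so the bound in Lemma 2 can be spot-checked numerically. The helper below is our own illustration (the function name is invented); it implements the definition from Section 2.1 directly.

```python
from math import comb

def vote(gamma, k):
    """Exact lower tail Pr[Binomial(k, 1/2 + gamma) < k/2]."""
    p = 0.5 + gamma
    # i ranges over all integers with i < k/2
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range((k + 1) // 2))
```

For example, with γ = 0.05 the lemma applies for odd k up to 1/(16γ²) = 25, and the computed values sit comfortably below 1/2 − γ√k/50.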
Proof: The lemma is easily verified for k = 1, 3, 5, 7 so we assume k ≥ 9 below. The
value vote(γ, k) equals Σ_{i<k/2} C(k, i) (1/2 − γ)^{k−i} (1/2 + γ)^i, which is easily seen to equal
(1/2^k) Σ_{i<k/2} C(k, i) (1 − 4γ²)^i (1 − 2γ)^{k−2i}. Since k is odd, (1/2^k) Σ_{i<k/2} C(k, i) equals 1/2, so it remains to show that

  (1/2^k) Σ_{i<k/2} C(k, i) [1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i}] ≥ γ√k/50.

Consider any integer i ∈ [0, k/2 − √k]. For such an i we have

  (1 − 2γ)^{k−2i} ≤ (1 − 2γ)^{2√k}
                  ≤ 1 − (2γ)(2√k) + C(2√k, 2)(2γ)²    (1)
                  ≤ 1 − 4γ√k + 8γ√k(γ√k)              (2)
                  ≤ 1 − 4γ√k + 2γ√k = 1 − 2γ√k,       (3)

where (1) is obtained by truncating the alternating binomial series expansion of (1 − 2γ)^{2√k} after
a positive term, (2) uses the upper bound C(ℓ, 2) ≤ ℓ²/2, and (3) uses γ√k ≤ 1/4, which follows
from the bound k ≤ 1/(16γ²). So we have (1 − 4γ²)^i (1 − 2γ)^{k−2i} ≤ 1 − 2γ√k and thus

  1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i} ≥ 2γ√k.

The sum Σ_{i ≤ k/2 − √k} C(k, i) is at least 0.01 · 2^k for all odd k ≥ 9
[13], so we obtain the claimed bound:

  (1/2^k) Σ_{i<k/2} C(k, i) [1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i}] ≥ (1/2^k) Σ_{i ≤ k/2 − √k} C(k, i) · 2γ√k ≥ γ√k/50.  ∎
2.2 Proof of Lemma 1

Throughout the following discussion it will be convenient to view angles between vectors as lying
in the range [−π, π), so acute angles are in the range (−π/2, π/2).

Recall that sign(w · x) is the unknown target halfspace (we assume w is a unit vector) and v_1, ..., v_k
are the random unit vectors generated by algorithm A_k. For j ∈ {1, ..., k} let G_j denote the
"good" event that the angle between v_j and w is acute, i.e. lies in the interval (−π/2, π/2), and
let G denote the event G_1 ∩ ··· ∩ G_k. Since the vectors v_j are selected independently we have
Pr[G] = Π_{j=1}^k Pr[G_j] = 2^{−k}.

The following claim shows that conditioned on G, any γ-margin point has a noticeably-better-than-1/2 chance of being classified correctly by H (note that the probability below is over the random
generation of H by A_k):
Claim 3 Fix x ∈ B_n to be any point such that |w · x| ≥ γ. Then we have Pr_H[H(x) ≠ f(x) | G] ≤
vote(γ/π, k) ≤ 1/2 − γ√k/(50π).

Proof: Without loss of generality we assume that x is a positive example (an entirely similar analysis
goes through for negative examples), so w · x ≥ γ. Let θ denote the angle from w to x in the plane
spanned by w and x; again without loss of generality we may assume that θ lies in [0, π/2] (the
case of negative angles is symmetric). In fact, since x is a positive example with margin γ, we have
that 0 ≤ θ ≤ π/2 − γ.

Fix any j ∈ {1, ..., k} and let us consider the random unit vector v_j. Let v′_j be the projection of
v_j onto the plane spanned by x and w. The distribution of v′_j/||v′_j|| is uniform on the unit circle
in that plane. We have that sign(v_j · x) ≠ f(x) if and only if the magnitude of the angle between
v′_j and x is at least π/2. Conditioned on G_j, the angle from v′_j to w is uniformly distributed over
the interval (−π/2, π/2). Since the angle from w to x is θ, the angle from v′_j to x is the sum of
the angle from v′_j to w and the angle from w to x, and therefore it is uniformly distributed over the
interval (−π/2 + θ, π/2 + θ). Recalling that θ ≥ 0, we have that sign(v_j · x) ≠ f(x) if and only
if the angle from v′_j to x lies in (π/2, π/2 + θ). Since the margin condition implies θ ≤ π/2 − γ as
noted above, we have Pr[sign(v_j · x) ≠ f(x) | G_j] ≤ (π/2 − γ)/π = 1/2 − γ/π.

Now recall that v_1, ..., v_k are chosen independently at random, and G = G_1 ∩ ··· ∩ G_k. Thus, after
conditioning on G, we have that v_1, ..., v_k are still independent and the events sign(v_1 · x) ≠
f(x), ..., sign(v_k · x) ≠ f(x) are independent. It follows that Pr_H[H(x) ≠ f(x) | G] ≤
vote(γ/π, k) ≤ 1/2 − γ√k/(50π), where we used Lemma 2 for the final inequality. ∎
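The key geometric fact in this proof, that conditioned on G_j a single random halfspace misclassifies a margin-γ positive point with probability arccos(γ)/π, which is at most 1/2 − γ/π, is easy to check by Monte Carlo. The snippet below is our own illustration, not part of the paper; by symmetry, reflecting v to −v whenever v · w < 0 produces exactly the conditional distribution given G_j.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma, trials = 4, 0.2, 200_000
w = np.zeros(n); w[0] = 1.0                  # target direction (unit vector)
x = np.zeros(n); x[0] = gamma                # positive point with w.x = gamma
x[1] = np.sqrt(1 - gamma**2)                 # ... padded to unit norm

V = rng.standard_normal((trials, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)
V = np.where((V @ w > 0)[:, None], V, -V)    # condition on the acute event G_j
err = np.mean(V @ x <= 0)                    # frequency of misclassifying x
# err should concentrate near arccos(gamma)/pi <= 1/2 - gamma/pi.
```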
Now all the ingredients are in place for us to prove Lemma 1. Since Claim 3 may be applied to
every x in the support of D, we have Pr_{x∼D,H}[H(x) ≠ f(x) | G] ≤ 1/2 − γ√k/(50π). Applying Fubini's
theorem we get that E_H[Pr_{x∼D}[H(x) ≠ f(x)] | G] ≤ 1/2 − γ√k/(50π). Applying Markov's inequality
to the nonnegative random variable Pr_{x∼D}[H(x) ≠ f(x)], we get

  Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] > (1 − γ√k/(50π))/2 | G ] ≤ 2(1/2 − γ√k/(50π)) / (1 − γ√k/(50π)),

which implies

  Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] ≤ (1 − γ√k/(50π))/2 | G ] ≥ Ω(γ√k).

Since Pr_H[G] = 2^{−k} we get

  Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] ≤ (1 − γ√k/(50π))/2 ] ≥ Ω(γ√k/2^k),

and, noting that (1 − γ√k/(50π))/2 = 1/2 − γ√k/(100π), Lemma 1 is proved. ∎
3 Proof of Theorem 1: smooth boosting the weak learner to tolerate malicious noise

Our overall algorithm for learning γ-margin halfspaces with malicious noise, which we call Algorithm B, combines a weak learner derived from Section 2 with a "smooth" boosting algorithm.
Recall that boosting algorithms [15, 25] work by repeatedly running a weak learner on a sequence of carefully crafted distributions over labeled examples. Given the initial distribution P
over labeled examples (x, y), a distribution P_i over labeled examples is said to be κ-smooth if
P_i[(x, y)] ≤ κ · P[(x, y)] for every (x, y) in the support of P. Several boosting algorithms are known
[9, 16, 28] that generate only 1/ε-smooth distributions when boosting to final accuracy 1 − ε. For
concreteness we will use the MadaBoost algorithm of [9], which generates a (1 − ε)-accurate final
hypothesis after O(1/(εγ²)) stages of calling the weak learner and runs in time poly(1/ε, 1/γ).
At a high level our analysis here is related to previous works [28, 22] that used smooth boosting to
tolerate malicious noise. The basic idea is that since a smooth booster does not increase the weight
of any example by more than a 1/ε factor, it cannot "amplify" the malicious noise rate by more
than this factor. In [28] the weak learner only achieved advantage O(γ), so as long as the malicious
noise rate was initially O(εγ), the "amplified" malicious noise rate of O(γ) could not completely
"overcome" the advantage and boosting could proceed successfully. Here we have a weak learner
that achieves a higher advantage, so boosting can proceed successfully in the presence of more
malicious noise. The rest of this section provides the details.
The weak learner W that B uses is a slight extension of algorithm A_k from Section 2 with k =
⌈log(1/γ)⌉. When invoked with distribution P_t over labeled examples, algorithm W

- makes ℓ (specified later) calls to algorithm A_{⌈log(1/γ)⌉}, generating candidate hypotheses H_1, ..., H_ℓ; and
- evaluates H_1, ..., H_ℓ using M (specified later) independent examples drawn from P_t and outputs the H_j that makes the fewest errors on these examples.

The overall algorithm B

- draws a multiset S of m examples (we will argue later that poly(n, 1/γ, 1/ε) many examples suffice) from EX_η(f, D);
- sets the initial distribution P over labeled examples to be uniform over S; and
- uses MadaBoost to boost to accuracy 1 − ε/4 with respect to P, using W as a weak learner.
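One possible rendering of the weak learner W in code is sketched below. This is our own illustration with invented names, not the paper's implementation: P_t is represented by a weight vector over the stored sample S, the M evaluation points are drawn from those weights, and the best of ℓ candidates from A_k is kept.

```python
import numpy as np

def weak_learner_W(X, y, weights, k, ell, M, rng):
    """Sketch of W: make `ell` calls to A_k, score each candidate on M points
    drawn from P_t (given by `weights` over the stored sample), keep the best."""
    idx = rng.choice(len(X), size=M, p=weights)       # M draws from P_t
    best_V, best_err = None, np.inf
    for _ in range(ell):
        V = rng.standard_normal((k, X.shape[1]))      # one call to A_k
        V /= np.linalg.norm(V, axis=1, keepdims=True)
        preds = np.sign(np.sign(X[idx] @ V.T).sum(axis=1))   # majority vote
        err = np.mean(preds != y[idx])
        if err < best_err:
            best_V, best_err = V, err
    return best_V, best_err
```

Selecting the best of ℓ independent candidates is what amplifies the Ω(γ²)-probability guarantee of a single call to A_k into a per-stage guarantee that holds with high probability.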
Recall that we are assuming η ≤ cεγ√(log(1/γ)); we will show that under this assumption, algorithm
B outputs a final hypothesis h that satisfies Pr_{x∼D}[h(x) = f(x)] ≥ 1 − ε with probability at least
1/2.

First, let S_N ⊆ S denote the noisy examples in S. A standard Chernoff bound [10] implies that
with probability at least 5/6 we have |S_N|/|S| ≤ 2η; we henceforth write η′ to denote |S_N|/|S|.
We will show below that with high probability, every time MadaBoost calls the weak learner W
with a distribution P_t, W generates a weak hypothesis (call it h_t) that has Pr_{(x,y)∼P_t}[h_t(x) = y] ≥
1/2 + Ω(γ√(log(1/γ))). MadaBoost's boosting guarantee then implies that the final hypothesis (call
it h) of Algorithm B satisfies Pr_{(x,y)∼P}[h(x) = y] ≥ 1 − ε/4. Since h is correct on (1 − ε/4) of the
points in the sample S and η′ ≤ 2η, h must be correct on at least 1 − ε/4 − 2η of the points in S \
S_N, which is a noise-free sample of poly(n, 1/γ, 1/ε) labeled examples generated according to D.
Since h belongs to a class of hypotheses with VC dimension at most poly(n, 1/γ, 1/ε) (because the
analysis of MadaBoost implies that h is a weighted vote over O(1/(εγ²)) many weak hypotheses,
and each weak hypothesis is a vote over O(log(1/γ)) n-dimensional halfspaces), by standard sample
complexity bounds [4, 31, 29], with probability 5/6, the accuracy of h with respect to D is at least
1 − ε/2 − 4η > 1 − ε, as desired.
Thus it remains to show that with high probability, each time W is called on a distribution P_t, it
indeed generates a weak hypothesis with advantage at least Ω(γ√(log(1/γ))). Recall the following:

Definition 1 The total variation distance between distributions P and Q over finite domain X is
d_TV(P, Q) := max_{E⊆X} (P[E] − Q[E]).

Suppose R is the uniform distribution over the noisy points S_N ⊆ S, and P′ is the uniform distribution over the remaining points S \ S_N (we may view P′ as the "clean" version of P). Then
the distribution P may be written as P = (1 − η′)P′ + η′R, and for any event E we have
P[E] − P′[E] ≤ η′R[E] ≤ η′, so d_TV(P, P′) ≤ η′.

Let P_t denote the distribution generated by MadaBoost during boosting stage t. The smoothness of
MadaBoost implies that P_t[S_N] ≤ 4η′/ε, so the noisy examples have total probability at most 4η′/ε
under P_t. Arguing as for the original distribution, we have that the clean version P′_t of P_t satisfies
d_TV(P′_t, P_t) ≤ 4η′/ε.
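The mixture bound d_TV(P, P′) ≤ η′ is easy to verify numerically on a small finite domain, since there 0.5·Σ|P − Q| equals max_E (P[E] − Q[E]). The snippet below is an illustration of ours, not part of the paper's argument.

```python
import numpy as np

# Mixing an eta' fraction of an arbitrary distribution R into P' moves
# total variation distance by at most eta'.
rng = np.random.default_rng(3)
eta_p = 0.1
P_clean = rng.dirichlet(np.ones(6))       # P', the clean distribution
R = rng.dirichlet(np.ones(6))             # the noisy component
P = (1 - eta_p) * P_clean + eta_p * R
tv = 0.5 * np.abs(P - P_clean).sum()      # d_TV(P, P') on a finite domain
```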
By Lemma 1, each call to algorithm A_{⌈log(1/γ)⌉} yields a hypothesis (call it g) that satisfies

  Pr_g[ error_{P′_t}(g) ≤ 1/2 − γ√(log(1/γ))/(100π) ] ≥ Ω(γ²),    (4)

where for any distribution Q we define error_Q(g) := Pr_{(x,y)∼Q}[g(x) ≠ y]. Recalling that η′ ≤ 2η
and η < cεγ√(log(1/γ)), for a suitably small absolute constant c > 0 we have that

  4η′/ε < γ√(log(1/γ))/(400π).    (5)

Then (4) and (5) imply that Pr_g[ error_{P_t}(g) ≤ 1/2 − 3γ√(log(1/γ))/(400π) ] ≥ Ω(γ²). This means
that by taking the parameters ℓ and M of the weak learner W to be poly(1/γ, log(1/ε)), we can ensure that with overall probability at least 2/3, at each stage t of boosting the weak hypothesis h_t that
W selects from its ℓ calls to A_{⌈log(1/γ)⌉} in that stage will satisfy error_{P_t}(h_t) ≤ 1/2 − γ√(log(1/γ))/(200π).
This concludes the proof of Theorem 1. ∎
4 Convex optimization algorithms have limited malicious noise tolerance

Given a sample S = {(x_1, y_1), ..., (x_m, y_m)} of labeled examples, the number of examples misclassified by the hypothesis sign(v · x) is a nonconvex function of v, and thus it can be difficult to
find a v that minimizes this error (see [12, 18] for theoretical results that support this intuition in
various settings). In an effort to bring the powerful tools of convex optimization to bear on various
halfspace learning problems, a widely used approach is to instead minimize some convex proxy for
misclassification error.

Definition 2 below defines the class of such algorithms analyzed in this section. This definition allows
algorithms to use regularization, but by setting the regularizer ψ to be the all-0 function it also covers
algorithms that do not.
Definition 2 A function φ : R → R⁺ is a convex misclassification proxy if φ is convex, nonincreasing, differentiable, and satisfies φ′(0) < 0. A function ψ : R^n → [0, ∞) is a componentwise
regularizer if ψ(v) = Σ_{i=1}^n ζ(v_i) for a convex, differentiable ζ : R → [0, ∞) for which ζ(0) = 0.

Given a sample of labeled examples S = {(x_1, y_1), ..., (x_m, y_m)} ∈ (R^n × {−1, 1})^m, the (φ, ψ)-loss of vector v on S is L_{φ,ψ,S}(v) := ψ(v) + Σ_{i=1}^m φ(y_i(v · x_i)). A (φ, ψ)-minimizer is any learning
algorithm that minimizes L_{φ,ψ,S}(v) whenever the minimum exists.
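For concreteness, the logistic loss φ(z) = log(1 + e^{−z}) with ψ(v) = λ||v||² is one familiar instance of a convex misclassification proxy with a componentwise regularizer, and minimizing L_{φ,ψ,S} by gradient descent gives a simple approximate (φ, ψ)-minimizer. The sketch below is our own illustration (the names and the choice of optimizer are assumptions, not from the paper):

```python
import numpy as np

def phi(z):                      # logistic loss: convex, nonincreasing, phi'(0) = -1/2 < 0
    return np.log1p(np.exp(-z))

def loss(v, X, y, lam):          # L_{phi,psi,S}(v) with psi(v) = lam * ||v||_2^2
    return lam * np.dot(v, v) + np.sum(phi(y * (X @ v)))

def minimize_loss(X, y, lam=0.01, steps=2000, lr=0.1):
    """Approximate (phi, psi)-minimizer via plain gradient descent (illustrative)."""
    v = np.zeros(X.shape[1])
    for _ in range(steps):
        s = y * (X @ v)
        grad = 2 * lam * v - X.T @ (y / (1 + np.exp(s)))   # gradient of the loss
        v -= lr * grad
    return v
```

On clean linearly separable data such a minimizer succeeds; the point of this section is that under malicious noise at rate Θ(εγ) this whole family of algorithms fails.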
Our main negative result shows that, for any sample size, algorithms that minimize a regularized
convex proxy for misclassification error will succeed with only exponentially small probability at a
malicious noise rate that is Θ(εγ), and therefore at any larger malicious noise rate.

Theorem 2 Fix φ to be any convex misclassification proxy and ψ to be any componentwise regularizer, and let algorithm A be a (φ, ψ)-minimizer. Fix ε ∈ (0, 1/8] to be any error parameter,
γ ∈ (0, 1/8] to be any margin parameter, and m ≥ 1 to be any sample size. Let the malicious noise
rate η be 16εγ.

Then there is an n, a target halfspace f(x) = sign(w · x) over R^n, a γ-margin distribution D for f
(supported on points x ∈ B_n that have |(w/||w||) · x| ≥ γ), and a malicious adversary with the following
property: if A is given m random examples drawn from EX_η(f, D) and outputs a vector v, then
the probability (over the draws from EX_η(f, D)) that v satisfies Pr_{x∼D}[sign(v · x) ≠ f(x)] ≤ ε is
at most e^{−c/γ}, where c > 0 is some universal constant.
Proof: The analysis has two cases based on whether or not the number of examples m exceeds
m_0 := 1/(32εγ²). (We emphasize that Case 2, in which n is taken to be just 2, is the case that is of
primary interest, since in Case 1 the algorithm does not have enough examples to reliably learn a
γ-margin halfspace even in a noiseless scenario.)

Case 1 (m ≤ m_0): Let n = ⌊1/γ²⌋ and let e^(i) ∈ R^n denote the unit vector with a 1 in the i-th
component. Then the set of examples E := {e^(1), ..., e^(n)} is shattered by the family F which consists of all 2^n halfspaces whose weight vectors are in {−γ, γ}^n, and any distribution whose support
is E is a γ-margin distribution for any such halfspace. The proof of the well-known information-theoretic lower bound of [11]¹ gives that for any learning algorithm that uses m examples (such as
A), there is a distribution D supported on E and a halfspace f ∈ F such that the output h of A
satisfies Pr[Pr_{x∼D}[h(x) ≠ f(x)] > ε] ≥ 1 − exp(−c/γ²), where the outer probability is over the
random examples drawn by A. This proves the theorem in Case 1.

Case 2 (m > m_0): We note that it is well known (see e.g. [31]) that O(1/(εγ²)) examples suffice to
learn γ-margin n-dimensional halfspaces for any n if there is no noise, so noisy examples will play
an important role in the construction in this case.
We take n = 2. The target halfspace is f(x) = sign(√(1 − γ²) x_1 + γx_2). The distribution D is very
simple and is supported on only two points: it puts weight 2ε on the point (γ/√(1 − γ²), 0), which is a
positive example for f, and weight 1 − 2ε on the point (0, 1), which is also a positive example for
f. When the malicious adversary is allowed to corrupt an example, with probability 1/2 it provides
the point (1, 0) and mislabels it as negative, and with probability 1/2 it provides the point (0, 1) and
mislabels it as negative.

Let S = ((x_1, y_1), ..., (x_m, y_m)) be a sample of m examples drawn from EX_η(f, D). We define

  p_{S,1} := |{t : x_t = (γ/√(1 − γ²), 0)}| / |S|,    p_{S,2} := |{t : x_t = (0, 1), y_t = 1}| / |S|,
  η_{S,1} := |{t : x_t = (1, 0)}| / |S|,    η_{S,2} := |{t : x_t = (0, 1), y_t = −1}| / |S|.

¹ In particular, see the last displayed equation in the proof of Lemma 3 of [11].
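A small simulation of the oracle EX_η(f, D) for this two-point construction (our own illustration; the sampling routine and its name are invented) makes it easy to check empirically that, for reasonable m, the sample typically satisfies 0 < p_{S,1} ≤ 3ε and η_{S,1} ≥ η/4, and that every drawn point has margin at least γ with respect to w.

```python
import numpy as np

def draw_sample(m, gamma, eps, eta, rng):
    """Sample m examples from EX_eta(f, D) for the Case-2 construction: each
    draw is malicious with probability eta (emitting (1,0) or (0,1) mislabeled
    as negative), and otherwise comes from the two-point distribution D."""
    g = gamma / np.sqrt(1 - gamma**2)
    X, y = np.zeros((m, 2)), np.zeros(m)
    for t in range(m):
        if rng.random() < eta:                  # malicious example
            X[t] = (1.0, 0.0) if rng.random() < 0.5 else (0.0, 1.0)
            y[t] = -1.0
        elif rng.random() < 2 * eps:            # clean: the margin point on the x1-axis
            X[t], y[t] = (g, 0.0), 1.0
        else:                                   # clean: the heavy point (0, 1)
            X[t], y[t] = (0.0, 1.0), 1.0
    return X, y
```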
Using standard Chernoff bounds (see e.g. [10]) and a union bound we get

  Pr[p_{S,1} = 0 or p_{S,2} = 0 or p_{S,1} > 3ε or η_{S,1} < η/4 or η_{S,2} < η/4]
    ≤ (1 − 2ε(1 − η))^m + (1 − (1 − 2ε)(1 − η))^m + exp(−ηm/12) + 2 exp(−ηm/24)
    ≤ 2(1 − ε)^m + exp(−ηm/12) + 2 exp(−ηm/24)    (since ε ≤ 1/4 and η ≤ 1/2)
    ≤ 2 exp(−1/(32γ²)) + exp(−1/(96γ)) + 2 exp(−1/(48γ)).

Since the theorem allows for an e^{−c/γ} success probability for A, it suffices to consider the case in
which p_{S,1} and p_{S,2} are both positive, p_{S,1} ≤ 3ε, and min{η_{S,1}, η_{S,2}} ≥ η/4. For v = (v_1, v_2) ∈
R² the value L_{φ,ψ,S}(v) is proportional to

  L(v_1, v_2) := p_{S,1} φ(γv_1/√(1 − γ²)) + p_{S,2} φ(v_2) + η_{S,1} φ(−v_1) + η_{S,2} φ(−v_2) + ψ(v)/|S|.

From the bounds stated above on p_{S,1}, p_{S,2}, η_{S,1} and η_{S,2} we may conclude that L_{φ,ψ,S}(v) does
achieve a minimum value. This is because for any z ∈ R the set {v : L_{φ,ψ,S}(v) ≤ z} is bounded,
and therefore so is its closure. Since L_{φ,ψ,S}(v) is bounded below by zero and is continuous, this
implies that it has a minimum. To see that for any z ∈ R the set {v : L_{φ,ψ,S}(v) ≤ z} is bounded,
observe that if either v_1 or v_2 is fixed and the other one is allowed to take on arbitrarily large
magnitude values (either positive or negative), this causes L_{φ,ψ,S}(v) to take on arbitrarily large
positive values (this is an easy consequence of the definition of L, the fact that φ is convex, nonnegative and nonincreasing, φ′(0) < 0, and the fact that p_{S,1}, p_{S,2}, η_{S,1}, η_{S,2} are all positive).
Taking the derivative with respect to v_1 yields

  ∂L/∂v_1 = p_{S,1} (γ/√(1 − γ²)) φ′(γv_1/√(1 − γ²)) − η_{S,1} φ′(−v_1) + ζ′(v_1)/|S|.    (7)

When v_1 = 0, the derivative (7) is (p_{S,1} γ/√(1 − γ²) − η_{S,1}) φ′(0) (recall that ζ is minimized at 0 and
thus ζ′(0) = 0). Recall that φ′(0) < 0 by assumption. If p_{S,1} γ/√(1 − γ²) < η_{S,1} then (7) is positive at
0, which means that L(v_1, v_2) is an increasing function of v_1 at v_1 = 0 for all v_2. Since L is convex,
this means that for each v_2 ∈ R the value v_1* that minimizes L(v_1*, v_2) is a negative
value v_1* < 0. So, if p_{S,1} γ/√(1 − γ²) < η_{S,1}, the linear classifier v output by A has v_1 < 0; hence it
misclassifies the point (γ/√(1 − γ²), 0), and thus has error rate at least 2ε with respect to D.

Combining the fact that γ ≤ 1/8 with the facts that p_{S,1} ≤ 3ε and η_{S,1} ≥ η/4, we get
p_{S,1} γ/√(1 − γ²) ≤ 1.01 p_{S,1} γ ≤ 3.03εγ < 4εγ = η/4 ≤ η_{S,1}, which completes the proof. ∎
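To see this conclusion concretely, the following sketch (ours, not the paper's: logistic loss stands in for an admissible φ, the weights are hypothetical empirical fractions satisfying p_{S,1} ≤ 3ε and η_{S,1} ≥ η/4, and ψ ≡ 0) minimizes the weighted convex loss for the Case 2 construction by gradient descent and recovers v_1 < 0, so the returned classifier misclassifies the weight-2ε margin point.

```python
import numpy as np

gamma, eps = 0.125, 0.125
g = gamma / np.sqrt(1 - gamma**2)
# Rows: clean margin point, clean heavy point, and the two mislabeled
# adversarial points. The fractions satisfy p1 * g < e1, as in the proof.
X = np.array([[g, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
wts = np.array([0.25, 0.45, 0.15, 0.15])     # p1, p2, e1, e2

v = np.zeros(2)
for _ in range(5000):                        # gradient descent on the weighted
    s = y * (X @ v)                          # logistic loss (psi = 0)
    grad = -X.T @ (wts * y / (1 + np.exp(s)))
    v -= 0.1 * grad
# v[0] ends up negative: the noisy mass at (1,0) outweighs the small margin
# coordinate of the clean point, exactly the mechanism in the proof.
```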
5 Conclusion

It would be interesting to further improve on the malicious noise tolerance of efficient algorithms
for PAC learning γ-margin halfspaces, or to establish computational hardness results for this problem. Another goal for future work is to develop an algorithm that matches the noise tolerance of
Theorem 1 but uses a single halfspace as its hypothesis representation.
References

[1] J. Aslam and S. Decatur. Specification and simulation of statistical query algorithms for efficiency and noise tolerance. Journal of Computer and System Sciences, 56:191-208, 1998.
[2] P. Auer. Learning nested differences in the presence of malicious noise. Theor. Comp. Sci., 185(1):159-175, 1997.
[3] P. Auer and N. Cesa-Bianchi. On-line learning with malicious noise and the closure algorithm. Annals of Mathematics and Artificial Intelligence, 23:83-99, 1998.
[4] E. B. Baum and D. Haussler. What size net gives valid generalization? Neural Comput., 1:151-160, 1989.
[5] H. Block. The Perceptron: a model for brain functioning. Reviews of Modern Physics, 34:123-135, 1962.
[6] A. Blum. Random Projection, Margins, Kernels, and Feature-Selection. In LNCS Volume 3940, pages 52-68, 2006.
[7] A. Blum and M.-F. Balcan. A discriminative model for semi-supervised learning. Journal of the ACM, 57(3), 2010.
[8] S. Decatur. Statistical queries and faulty PAC oracles. In Proc. 6th COLT, pages 262-268, 1993.
[9] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proc. 13th COLT, pages 180-189, 2000.
[10] D. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, Cambridge, 2009.
[11] A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. Information and Computation, 82(3):247-251, 1989.
[12] V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami. On agnostic learning of parities, monomials, and halfspaces. SIAM J. Comput., 39(2):606-645, 2009.
[13] W. Feller. Generalization of a probability limit theorem of Cramér. Trans. Am. Math. Soc., 54:361-372, 1943.
[14] Y. Freund and R. Schapire. Large margin classification using the Perceptron algorithm. In Proc. 11th COLT, pages 209-217, 1998.
[15] Y. Freund and R. Schapire. A short introduction to boosting. J. Japan. Soc. Artif. Intel., 14(5):771-780, 1999.
[16] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. JMLR, 4:101-117, 2003.
[17] C. Gentile and N. Littlestone. The robustness of the p-norm algorithms. In Proc. 12th COLT, pages 1-11, 1999.
[18] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. SIAM J. Comput., 39(2):742-765, 2009.
[19] D. Haussler, M. Kearns, N. Littlestone, and M. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95(2):129-161, 1991.
[20] M. Kearns and M. Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807-837, 1993.
[21] R. Khardon and G. Wachman. Noise tolerant variants of the perceptron algorithm. JMLR, 8:227-248, 2007.
[22] A. Klivans, P. Long, and R. Servedio. Learning Halfspaces with Malicious Noise. JMLR, 10:2715-2740, 2009.
[23] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning, 78(3):287-304, 2010.
[24] Y. Mansour and M. Parnas. Learning conjunctions with noise under product distributions. Information Processing Letters, 68(4):189-196, 1998.
[25] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In LNAI Advanced Lectures on Machine Learning, pages 118-183, 2003.
[26] A. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on Mathematical Theory of Automata, volume XII, pages 615-622, 1962.
[27] F. Rosenblatt. The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958.
[28] R. Servedio. Smooth boosting and learning with malicious noise. JMLR, 4:633-648, 2003.
[29] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[30] L. Valiant. Learning disjunctions of conjunctions. In Proc. 9th IJCAI, pages 560-566, 1985.
[31] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Multiple Instance Filtering
Kamil Wnuk
Stefano Soatto
University of California, Los Angeles
{kwnuk,soatto}@cs.ucla.edu
Abstract
We propose a robust filtering approach based on semi-supervised and multiple instance learning (MIL). We assume that the posterior density would
be unimodal if not for the effect of outliers that we do not wish to explicitly model. Therefore, we seek a point estimate at the outset, rather
than a generic approximation of the entire posterior. Our approach can
be thought of as a combination of standard finite-dimensional filtering (Extended Kalman Filter, or Unscented Filter) with multiple instance learning,
whereby the initial condition comes with a putative set of inlier measurements. We show how both the state (regression) and the inlier set (classification) can be estimated iteratively and causally by processing only the
current measurement. We illustrate our approach on visual tracking problems whereby the object of interest (target) moves and evolves as a result
of occlusions and deformations, and partial knowledge of the target is given
in the form of a bounding box (training set).
1
Introduction
Algorithms for filtering and prediction have a venerable history studded by quantum leaps by
Wiener, Kolmogorov, Mortensen, Zakai, Duncan among others. Many attempts to expand
finite-dimensional optimal filtering beyond the linear-Gaussian case failed,1 which explains
in part the resurgence of general-purpose approximation methods for the filtering equation,
such as weak-approximations (particle filters [6, 16]) as well as parametric ones (e.g., sum-of-Gaussians or interactive multiple models [5]). Unfortunately, in many applications of
interest, from visual tracking to robotic navigation, the posterior is not unimodal. This has
motivated practitioners to resort to general-purpose approximations of the entire posterior,
mostly using particle filtering. However, in many applications one has reason to believe that
the posterior would be unimodal if not for the effect of outlier measurements, and therefore
the interest is in a point estimate, for instance the mode, mean or median, rather than in the
entire posterior. So, we tackle the problem of filtering, where the data is partitioned into
two unknown subsets (inliers and outliers). Our goal is to devise finite-dimensional filtering
schemes that will approximate the dominant mode of the posterior distribution, without
explicitly modeling the outliers. There is a significant body of related work, summarized
below.
1.1
Prior related work
Our goal is naturally framed in the classical robust statistical inference setting, whereby
classification (inlier/outlier) is solved along with regression (filtering). We assume that an
initial condition is available, both for the regressor (state) as well as the inlier distribution.
1
Also due to the non-existence of invariant family of distributions for large classes of FokkerPlanck operators.
The latter can be thought of as training data in a semi-supervised setting. Robust filtering
has been approached from many perspectives: using a robust norm (typically H∞ or ℓ1)
for the prediction residual yields worst-case disturbance rejection [14, 9]; rejection sampling
schemes in the spirit of the M-estimator [11] "robustify" classical filters and their extensions.
These approaches work with few outliers, say 10-20%, but fail in vision applications where
one typically has 90% or more. Our approach relates to recent work in detection-based
tracking [3, 10] that use semi-supervised learning [4, 18, 13], as well as multiple-instance
learning [2] and latent-SVM models [8, 20].
In [3] an ensemble of pixel-level weak classifiers is combined on-line via boosting; this is
efficient but suffers from drift; [10] improves stability by using a static model trained on
the first frame as a prior for labeling new training samples used to update an online classifier. MILTrack [4] addressed the problem of selecting training data for model update so
as to maintain maximum discriminative power. This is related to our approach, except
that we have an explicit dynamical model, rather than a scanning window for detection.
Also, our discrimination criterion operates on a collection of parts/regions rather than a
single template. This allows more robustness to deformations and occlusions. We adopt an
incremental SVM with a fast approximation of a nonlinear kernel [21] rather than online
boosting. Our part based representation and explicit dynamics allow us to better handle
scale and shape changes without the need for a multi-scale image search [4, 13]. PROST [18]
proposed a cascade of optical flow, online random forest, and template matching. The P-N
tracker [13] combined a median flow tracker with an online random forest. New training
samples were collected when detections violated structural constraints based on estimated
object position. In an effort to control drift, new training data was not incorporated into
the model until the tracked object returned to a previously confirmed appearance with high
confidence. This meant that if object appearance never returned to the ?key frames,? the
online model would never be updated. In the aforementioned works objects are represented
as a bounding box. Several recent approaches have also used segmentation to improve the
reliability of tracking: [17] did not leverage temporal information beyond adjacent frames,
[22] required several annotated input frames with detailed segmentations, and [7] relied on
trackable points on both sides of the object boundary. In all methods above there was no
explicit temporal modeling beyond adjacent frames; therefore the schemes had poor predictive capabilities. Other approaches have used explicit temporal models together with
sparsity constraints to model appearance changes [15].
We propose a semi-supervised approach to filtering, with an explicit temporal model, that
assumes imperfect labeling, whereby portions of the image inside the bounding box are
"true positives" and others are outliers. This enables us to handle appearance changes, for
instance due to partial occlusions or changes of vantage point.
1.2
Formalization
We denote with x(t) ∈ R^n the state of the model at time t ∈ Z^+. It describes a discrete-time trajectory in a finite-dimensional (vector) space. This can be thought of as a realization of a stochastic process that evolves via some kind of ordinary difference equation x(t+1) = f(x(t)) + v(t), where v(t) ∼ p_v is a temporally independent and identically distributed (IID) process. We will assume that, possibly after whitening, the components of v(t) are independent.

We denote the set of measurements at time t with y(t) = {y_i(t)}_{i=1}^{m(t)}, y_i(t) ∈ R^k. We assume each can be represented by some fixed-dimensionality descriptor, Φ : R^k → R^l; y ↦ Φ(y). In classical filtering, the measurements are a known function of the state, y(t) = h(x(t)) + n(t), up to the measurement noise, n(t), that is a realization of a stochastic process that is often assumed to be temporally independent and identically distributed, and also independent of v(t). In our case, however, the components of the measurement process y_1(t), ..., y_{m(t)}(t) are divided into two groups: those that behave like standard measurements in a filtering process, and those that do not.
This distinction is made by an indicator variable α(t) ∈ {−1, 1}^{m(t)} of the same dimensionality as the number of measurements, whose values are unknown, and can change over time. For brevity of notation we denote the two sets of indexes as α(t)^+ = {i | α_i(t) = 1} and α(t)^− = {i | α_i(t) = −1}. For the first set we have that {y_i(t)}_{i∈α(t)^+} = h(x(t), t) + n(t), just like in classical filtering, except that the measurement model h(·, t) is time-varying in a way that includes singular perturbations: since the number of measurements changes over time, the function h : R^n × R → R^{m(t)}; (x, t) ↦ h(x, t) changes dimension over time. For the second group, unlike particle filtering, we do not care to model their states, and instead just discount them as outliers. The measurements are thus samples from a stochastic process that includes two independent sources of uncertainty: the measurement noise, n(t), and the selection process α(t).
Our goal is that of determining a point-estimate of the state x(t) given measurements up to time t. This will be some statistic (the mean, median, mode, etc.) of the conditional density p(x(t) | {y(k)}_{k=1}^t), where the process α(t) has to be marginalized.
In order to design a filter, we first consider the full forward model of how the various samples of the inlier measurements are generated. To this end, we assume that the inlier set is separable from the outlier set by a hyper-plane in some feature space, represented by the normal vector w(t) ∈ R^l. So, given the assignment of inliers and outliers α(t), we have that the new maximal-margin boundary can be obtained from w(t−1) by several iterations of a stochastic subgradient descent procedure [19], which for brevity we denote as w(t) = stochSubgradIters(w(t−1), y(t), α(t)) and describe in Sec. 2 and Sec. 2.2. Conversely, if we are given the hyperplane w(t), and state x(t), the measurements can be classified via α(t) = argmin_α E(y(t), w(t), x(t), α). The energy function E(y(t), w(t), x(t), α) depends on how one chooses to model the object and what side information is applied to constrain the selection of training data. In the implementation details we give examples of how appearance continuity can be used as a constraint in this step. Further, motion similarity and occlusion boundaries could also be used.
Finally, the forward (data-formation) model for a sample (realization) of the measurement process is given as follows: At time t = 0, we will assume that we have available an initial distribution p(x_0) together with an initial assignment of inliers and outliers α_0, so x(0) ∼ p(x_0); α(0) = α_0. Given α(0), we bootstrap our classifier by minimizing a standard support vector machine cost function:

    w(1) = \arg\min_w \Big( \frac{\lambda}{2} \|w\|^2 + \frac{1}{m(0)} \sum_{i=1}^{m(0)} \max\big(0,\, 1 - \alpha_i(0) \langle w, \Phi(y_i(0)) \rangle\big) \Big),

where λ ∈ R is the tradeoff between the importance of margin size versus loss. At all subsequent times t, each realization evolves according to:

    \begin{cases}
    x(t+1) = f(x(t)) + v(t), \\
    w(t+1) = \text{stochSubgradIters}(w(t), y(t), \alpha(t)), \\
    \alpha(t) = \arg\min_\alpha E(y(t), w(t), x(t), \alpha), \\
    \{y_i(t)\}_{i \in \alpha(t)^+} = h(x(t), t) + n(t).
    \end{cases}
    (1)
where the first two equations can be thought of as the "model equations" and the last two as the "measurement equations." The presence of α_0 makes this a semi-supervised learning problem, where α_0 is the "training set" for the process α(t). Note that it is possible for the model above to proceed in open-loop, when no inliers are present.
The model (1) can easily be extended to the case when the measurement equation is in implicit form, h(x(t), {y_i(t)}_{i∈α(t)^+}, t) = n(t), since all that matters is the innovation process e(t) = h({y_i(t)}_{i∈α(t)^+}, x̂(t), t). Additional extensions can be entertained where the dynamics f depends on the classifier w, so that x(t+1) = f(x(t), w(t)) + v(t), and similarly for the measurement equation h(x(t), w(t), t), although we will not consider them here.
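The alternation in the forward model (1) can be sketched in code. This is a loose illustration, not the authors' implementation: all callables are placeholders for the components described above, and the names are ours.

```python
import numpy as np

def simulate_forward_model(x0, f, h, update_classifier, select_inliers,
                           n_steps, noise_std=0.1, seed=0):
    """Loose sketch of the alternation in Eq. (1): propagate the state,
    generate measurements, select inlier labels given the current classifier
    and state, then refine the classifier with those labels.
    All callables are stand-ins for the components described in the text."""
    rng = np.random.default_rng(seed)
    x, w, alpha = np.asarray(x0, dtype=float), None, None
    trajectory = []
    for _ in range(n_steps):
        x = f(x) + noise_std * rng.standard_normal(x.shape)  # x(t+1) = f(x(t)) + v(t)
        y = h(x) + noise_std * rng.standard_normal(x.shape)  # {y_i(t)} = h(x(t), t) + n(t)
        alpha = select_inliers(y, w, x)                      # alpha(t) = argmin_a E(y, w, x, a)
        w = update_classifier(w, y, alpha)                   # stochSubgradIters stand-in
        trajectory.append(x.copy())
    return trajectory
```

With linear placeholders and no noise, the loop reduces to a deterministic state recursion, which makes the structure of the alternation easy to inspect.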
1.3
Application example: Visual tracking with shape and appearance changes
Objects of interest (e.g. humans, cars) move in ways that result in a deformation of their
projection onto the image plane, even when the object is rigid. Further changes of appearance occur due to motion relative to the light source and partial occlusions. Because
of the ambiguities in shape and appearance, one can fix one factor and model the other.
For instance, one can fix a bounding box (shape) and model change of appearance inside, including outliers (due to occlusion) and inliers (newly visible portions of the object). Alternatively, one can enforce constancy of the reflectance function, but then shape changes
as well as illumination must be modeled explicitly, which is complex [12].
Our approach tracks the motion of a bounding box, enclosing the data inliers. Call c(t) ∈ R^2 the center of this bounding box, v_c(t) ∈ R^2 the velocity of the center, d(t) ∈ R^2 the length of the sides of the bounding box, and v_d(t) ∈ R^2 its rate of change. Thus, we have x(t) = [c(t), v_c(t), d(t), v_d(t)]^T. As before, α(t) indicates a binary labeling of the measurement components, where α(t)^+ is the set of samples that correspond to the object of interest. We have tested different versions of our framework where the components are superpixels as well as trajectories of feature points. For reasons of space limitation, below we describe the case of superpixels, and report results for trajectories as supplementary material.
Consider a time-varying image I(t) : D ⊂ R^2 → R^+; (u, v) ↦ I(u, v, t). Superpixels {S_i} are just a partition of the domain, D = ∪_{i=1}^r S_i with S_i ∩ S_j = ∅ for i ≠ j; α(t) becomes a binary labeling of the superpixels, with α(t)^+ collecting the indices of elements on the object of interest, and α(t)^− on the background.

The measurement equation is obtained as the centroid and diameter of the restriction of the bounding box to the domain of the inlier super-pixels: If y(t) = I(t) ∈ R^{N×M} is an image, then h_1({I(u, v, t)}_{(u,v)∈S_i}) ∈ R^2 is the centroid of the superpixels {S_i}_{i∈α(t)^+} computed from I(t), and h_2({I(u, v, t)}_{(u,v)∈S_i}) ∈ R^2 is the diameter of the same region. This is in the form (1), with h constant (the time dependency is only through y(t) and α(t)). The
resulting model is:

    \begin{cases}
    x(t+1) = F x(t) + v(t), \\
    w(t+1) = \text{stochSubgradIters}(w(t), y(t), \alpha(t)), \\
    \alpha(t) = \arg\min_\alpha E(y(t), w(t), x(t), \alpha), \\
    h(\{y_i(t)\}_{i \in \alpha(t)^+}) = C x(t) + n(t),
    \end{cases}
    (2)

where F ∈ R^{8×8} is block-diagonal with each 4×4 block given by
\begin{pmatrix} I & I \\ 0 & I \end{pmatrix},
C ∈ R^{4×8} with
C = \begin{pmatrix} I & 0 & 0 & 0 \\ 0 & 0 & I & 0 \end{pmatrix},
and I is the 2×2 identity matrix. Similarly, v(t) ∼ N(0, Q) (IID), Q ∈ R^{8×8}, and n(t) ∼ N(0, R) (IID), R ∈ R^{4×4}.
2
Algorithm development
We focus our discussion in this section on the development of the discriminative appearance model at the heart of the inlier/outlier classification, w(t). For simplicity, pretend for now that each frame contains m observations. We assume an object is identified with a subset of the observations (inliers); at time t, we have {y_i(t)}_{i∈α(t)^+}. Also pretend that observations from all frames, Y = {y(t)}_{t=1}^{N_f}, were available simultaneously; N_f is the number of frames in the video sequence. If all frames were labeled (α(t) known ∀t), a maximum-margin classifier ŵ could be obtained by minimizing the objective (3) over all samples in all frames:

    \hat{w} = \arg\min_w \Big( \frac{\lambda}{2} \|w\|^2 + \frac{1}{m N_f} \sum_{t=1}^{N_f} \sum_{i=1}^{m} \ell(w, \Phi(y_i(t)), \alpha_i(t)) \Big).
    (3)
where λ ∈ R, and ℓ(w, Φ(y_i(t)), α_i(t)) is a loss that ensures data fit. We use the hinge loss ℓ(w, Φ(y_i(t)), α_i(t)) = max(0, 1 − α_i(t)⟨w, Φ(y_i(t))⟩), in which slack is implicit, so we can use an efficient sequential optimization in the primal form.
In reality an exact label assignment at every frame is not available, so we must infer the latent labeling α simultaneously while learning the hyperplane w. Continuing our hypothetical batch-processing scenario, pretend we have estimates of some state of the object throughout time, X̂ = {x̂(t)}_{t=1}^{N_f}. This allows us to identify a reduced subset of candidate inliers (in MIL terminology a positive bag), within which we assume all inliers are contained. The specification of a positive bag helps reduce the search space, since we can assume all samples outside of a positive bag are negative. This changes the SVM formulation to a mixed integer program similar to the mi-SVM [2], except that [2] assumed a positive/negative bag partition was given, whereas we use the estimated state and add a term to the decision boundary cost function to express the dependence between the labeling, α(t), and state estimate, x̂, at each time:
    \hat{w}, \hat{\alpha} = \arg\min_{w, \alpha} \Big( \frac{\lambda}{2} \|w\|^2 + \frac{1}{m N_f} \sum_{t=1}^{N_f} \Big( \sum_{i=1}^{m} \max\big(0,\, 1 - \alpha_i(t) \langle w, \Phi(y_i(t)) \rangle\big) + E(y(t), \alpha, \hat{x}(t)) \Big) \Big).
    (4)
Here E(y(t), α(t), x̂(t)) represents a general mechanism to enforce constraints on label assignment on a per-frame basis within a temporal sequence.2 A standard optimization procedure alternates between updating the decision boundary w, subject to an estimated labeling α̂, followed by relabeling the original data to satisfy the positive bag constraints generated from the state estimates, x̂, while keeping w fixed:
    \begin{cases}
    \hat{w} = \arg\min_w \frac{\lambda}{2} \|w\|^2 + \frac{1}{m N_f} \sum_{t=1}^{N_f} \sum_{i=1}^{m} \max\big(0,\, 1 - \hat{\alpha}_i(t) \langle w, \Phi(y_i(t)) \rangle\big), \\
    \hat{\alpha} = \arg\min_\alpha \frac{1}{m N_f} \sum_{t=1}^{N_f} \Big( \sum_{i=1}^{m} \max\big(0,\, 1 - \alpha_i(t) \langle \hat{w}, \Phi(y_i(t)) \rangle\big) + E(y(t), \alpha(t), \hat{x}(t)) \Big).
    \end{cases}
    (5)
In practice, annotation is available only in the first frame, and the data must be processed causally and sequentially. Recently, [19] proposed an efficient incremental scheme, PEGASOS, to solve the hinge loss objective in the primal form. This enables straightforward incremental training of w as new data becomes available. The algorithm operates on a training set consisting of tuples of labeled descriptors: T = {(Φ(y_i), α_i)}_{i=1}^{m}. In a nutshell, at each PEGASOS iteration we select a subset of training samples from the current training set, A_j ⊆ T, and update w according to w_{j+1} = w_j − η_j ∇_j. The subgradient of the hinge loss is given by ∇_j = λ w_j − (1/|A_j|) Σ_{i∈A_j} α_i Φ(y_i). To finalize the update and accelerate convergence, w_{j+1} is projected onto the set {w : ‖w‖ ≤ 1/√λ}, which [19] show is the space containing the optimal solution.
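A single PEGASOS iteration as described above can be written compactly. This is a generic sketch of the method of [19] on raw feature vectors, not the paper's implementation (which operates on the descriptors Φ(y_i)).

```python
import numpy as np

def pegasos_step(w, X_batch, y_batch, lam, j):
    """One PEGASOS iteration on a mini-batch of (feature, +/-1 label) pairs:
    a subgradient step on the regularized hinge loss, followed by projection
    onto the ball of radius 1/sqrt(lam) that contains the optimum [19]."""
    eta = 1.0 / (lam * j)                          # step size eta_j = 1/(lambda j)
    viol = y_batch * (X_batch @ w) < 1.0           # margin-violating samples
    grad_loss = (y_batch[viol, None] * X_batch[viol]).sum(axis=0) / len(y_batch)
    w_new = (1.0 - eta * lam) * w + eta * grad_loss
    radius = 1.0 / np.sqrt(lam)
    norm = np.linalg.norm(w_new)
    if norm > radius:                              # project onto {w : ||w|| <= 1/sqrt(lam)}
        w_new *= radius / norm
    return w_new
```

On separable data, a few hundred such steps suffice to find a hyperplane that classifies the batch correctly, which is why the scheme is cheap enough to run incrementally at tracking time.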
The second objective of Eq. (5) seeks a solution to the binary integer program of inlier selection given ŵ and x̂. Instead of tackling this NP-hard problem, we re-interpret it as a constraint enforcement step based on additional cues within a search area specified by our current state estimate. One example constraint for a superpixel-based object representation is to re-interpret the given objective as a graph cut problem, with pairwise terms enforcing appearance consistency. See supplementary material for details, as well as for experiments with other choices of constraints for tracks, rather than superpixels.
2.1 Initialization
At t = 0 we are given initial observations y(0) and a bounding box indicating the object of interest, {c(0) ± d(0)}. We initialize α(0) with positive indices corresponding to superpixels that have a majority of their area |y_i(0)| within the bounding box:

    \alpha_i(0) = \begin{cases}
    1 & \text{if } \frac{|\{c(0) \pm d(0)\} \cap y_i(0)|}{|y_i(0)|} > \epsilon_y, \\
    -1 & \text{otherwise.}
    \end{cases}
    (6)

The area threshold is ε_y = 0.7 throughout all experiments. This represents a bootstrap training set, T_1, from which we learn an initial classifier w(1) for distinguishing object appearance. Each element of the training set is a triplet (Φ(y_i(t)), α_i(t), τ_i = t), where the last element is the time at which the feature is added to the training set. We start by selecting all positive samples and a set number of negatives, n_f, sampled randomly from α(0)^−, giving T_1 = {(Φ(y_i(0)), α_i(0), 0)}_{i∈α(0)^+} ∪ {(Φ(y_j(0)), α_j(0), 0) | j ∈ α(0)^−_rand ⊂ α(0)^−, |α(0)^−_rand| = n_f}.
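The bootstrap labeling of Eq. (6) reduces to an area-overlap test. A minimal sketch, using axis-aligned rectangles (x0, y0, x1, y1) as stand-ins for superpixel regions since rectangle intersections are trivial to compute:

```python
import numpy as np

def initial_labels(boxes, init_box, eps_y=0.7):
    """Bootstrap labeling alpha(0) of Eq. (6): a region is a positive example
    if more than eps_y of its area falls inside the initial bounding box.
    `boxes` and `init_box` are (x0, y0, x1, y1) rectangles."""
    bx0, by0, bx1, by1 = init_box
    labels = []
    for x0, y0, x1, y1 in boxes:
        ix = max(0.0, min(x1, bx1) - max(x0, bx0))   # overlap extent in x
        iy = max(0.0, min(y1, by1) - max(y0, by0))   # overlap extent in y
        area = (x1 - x0) * (y1 - y0)
        labels.append(1 if area > 0 and ix * iy / area > eps_y else -1)
    return np.array(labels)
```

In the actual system the regions are superpixels rather than rectangles, so the intersection is computed over pixel supports, but the thresholding logic is the same.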
2
It represents the side information necessary to avoid zero information gain in the semisupervised inference procedure.
2.2
Prediction Step
At time t, given the current estimate of the object state and classification α(t), we add all positive samples and difficult negative samples lying outside of the estimated bounding box to the new training set T_{t+1|t}. We then propagate the object state with the model of motion dynamics, and finally update the decision boundary with the newly updated training set.
    \begin{cases}
    \hat{x}(t+1|t) = F \hat{x}(t|t) \\
    P(t+1|t) = F P(t|t) F^T + Q \\
    T_{t+1} = T_{t+1,\text{old}} \cup T_{t+1,\text{new}} \\
    T_{t+1,\text{old}} = \{ (\Phi(y_i), \alpha_i, \tau_i) \mid \alpha_i \langle \Phi(y_i), w(t) \rangle < 1,\; t - \tau_i \le \tau_{\max} \} \\
    T_{t+1,\text{new}} = \{ (\Phi(y_i(t)), \alpha_i(t), t) \mid \alpha_i(t) = 1 \} \;\cup \\
    \qquad \{ (\Phi(y_i(t)), -1, t) \mid \frac{|D \setminus \{\hat{c}(t|t) \pm \hat{d}(t|t)\} \,\cap\, y_i(t)|}{|y_i(t)|} \ge 1 - \epsilon_y,\; \langle \Phi(y_i(t)), w(t) \rangle > -1 \} \\
    w(t+1): \text{ for } j = n_T, \dots, N \text{ (update starting with } w_{n_T} = w(t)\text{)} \\
    \qquad \text{choose } A_j \subseteq T_{t+1} \\
    \qquad \eta_j = \frac{1}{\lambda j} \\
    \qquad w_{j+1} = (1 - \eta_j \lambda) w_j + \frac{\eta_j}{|A_j|} \sum_{i \in A_j} \alpha_i(t) \Phi(y_i(t)) \\
    \qquad w_{j+1} = \min\Big\{1, \frac{1/\sqrt{\lambda}}{\|w_{j+1}\|}\Big\} \, w_{j+1}
    \end{cases}
    (7)
It is typically not necessary to update w at every step, so training data can be collected over several frames during which w(t+1) = w(t), and the update above can be invoked either at some regular interval, on demand, or upon some form of model validation as in [13]. The parameter τ_max determines the memory of the classifier update procedure for difficult examples. If τ_max = 0, no memory is used and training data for model update consists only of observations from the current image. Such a memory of recent training samples is analogous to the training cache used in [8] for training the latentSVM model. During each classifier update we perform N − n_T iterations of the stochastic subgradient descent algorithm, starting from the current best estimate of the separating hyperplane w_{n_T} = w(t). The overall number of iterations N is set as N = 20/λ, where λ is a function of the bootstrap training set size, λ = 1/(10|T_1|). The number in the denominator is used as a parameter to set the relative importance of the margin size and the loss, but we fix it at 10 for our experiments. The number of iterations at a new time is then decided by n_T = max(1 − |T_t|/N, 0.75) · N in order to limit how much the hyperplane can change in a single update. These parameters can also be viewed as tuning the learning rates and forgetting factors of the classifier.
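The first half of the prediction step (7) is a standard Kalman-style propagation plus training-set maintenance. A minimal numpy sketch, with names and data layout of our choosing (each training sample is a (feature, label, timestamp) triple, as in the text):

```python
import numpy as np

def predict_step(x_est, P, F, Q, training_set, t, w, tau_max=25):
    """Sketch of the prediction step (7): propagate state and covariance with
    the linear dynamics, and keep only those stored training samples that are
    still margin violators and no older than tau_max frames."""
    x_pred = F @ x_est                        # x(t+1|t) = F x(t|t)
    P_pred = F @ P @ F.T + Q                  # P(t+1|t) = F P F^T + Q
    kept = [(phi, a, tau) for (phi, a, tau) in training_set
            if a * (phi @ w) < 1.0 and t - tau <= tau_max]
    return x_pred, P_pred, kept
```

The new positive and hard-negative samples from the current frame would then be appended to `kept` before the classifier update, which we omit here for brevity.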
2.3
Update Step
The innovation is in implicit form, with h({y_i(t+1)}_{i∈α(t+1)^+}) ∈ R^4 giving a tight bounding box around the selected foreground regions, in the same form as they appear in the state. In the update equations, r specifies the size of the search region around the predicted state within which we consider observations as candidates for foreground; β specifies the indices of candidate observations (positive bag).
    \begin{cases}
    r = \gamma_r \big( [I\;\; 0]\, \mathrm{diag}(C P(t+1|t) C^T) + [0\;\; I]\, \mathrm{diag}(C P(t+1|t) C^T) \big) \\
    \beta = \Big\{ i \;\Big|\; \frac{|\{c(t+1|t) \pm (d(t+1|t) + r)\} \,\cap\, y_i(t+1)|}{|y_i(t+1)|} > \epsilon_y \Big\} \\
    \hat{\alpha}(t+1) = \arg\min_{\alpha \in \{-1,1\}^m} E(w(t+1), \{y_i(t+1)\}_{i \in \beta}, \hat{x}(t+1|t), \alpha) \\
    e(t+1) = h(\{y_i(t+1)\}_{i \in \alpha(t+1)^+}) - C \hat{x}(t+1|t) \\
    L = P_{t+1|t} C^T (C P_{t+1|t} C^T + R)^{-1} \\
    \hat{x}(t+1|t+1) = \hat{x}(t+1|t) + L e(t+1) \\
    P(t+1|t+1) = (I - LC) P(t+1|t) (I - LC)^T + L R L^T
    \end{cases}
    (8)

Above, γ_r ∈ R is a factor (we fix it at 3) for scaling the region size based on filter covariance.
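The last four lines of (8) are a Kalman correction with the Joseph-form covariance update. A minimal sketch (generic, not the authors' code): `z` plays the role of the bounding box h({y_i(t+1)}_{i∈α(t+1)^+}) computed from the selected inlier regions.

```python
import numpy as np

def update_step(x_pred, P_pred, z, C, R):
    """Kalman-style correction of Eq. (8): innovation, gain, state update,
    and Joseph-form covariance update."""
    e = z - C @ x_pred                             # innovation e(t+1)
    S = C @ P_pred @ C.T + R                       # innovation covariance
    L = P_pred @ C.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + L @ e
    I_LC = np.eye(len(x_pred)) - L @ C
    P_new = I_LC @ P_pred @ I_LC.T + L @ R @ L.T   # Joseph form: stays PSD
    return x_new, P_new
```

The Joseph form (I − LC)P(I − LC)^T + LRL^T is used because it keeps the updated covariance symmetric positive semidefinite even under numerical round-off.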
Figure 1: Ski sequence: Left panel shows frame number, search area (black rectangle), filter
prediction (blue), observation (red), and updated filter estimate (green). The center panels overlay
the SVM scores for each region (solid blue = ?1, solid red = 1). Right panels show the regions
selected as inliers. This challenging sequence includes viewpoint and scale changes, deformation,
changing background. The algorithm performs well and successfully recovers from missed detection
(from frame 349 to 352 shown above).
Figure 2: P-N tracker [13] (above) and MILTrack [4] (below) initialized with the same bounding box
as our approach. Original implementations by the respective authors were used for this comparison.
The P-N tracker fails because of the absence of stable low-level tracks on the target and quickly
locks onto a patch of trees in the background. MILTrack survives longer but does not adapt scale
quickly enough, eventually drifting to become a detector of the tree line.
3
Experiments
To compare with [18, 4, 13], we first evaluate our discriminative model without maintaining any training data history (τ_max = 0), updating w every 6 frames, with training data collected between incremental updates. Even with τ_max = 0 we can track highly deforming objects (a skier) with significant scale changes through most of the 1496 frames (Fig. 1). We also recover from errors, due to the implicit memory in the decision boundary from incremental updating. For comparison, [4, 13] quickly drift and fail to recover (Fig. 2).
For a quantitative comparison we test our full algorithm against the state of the art on the PROST dataset [18], consisting of 4 videos with fast motion, occlusions, scale changes, translucency, and small background motions. In all experiments τ_max = 25, and all other parameters were fixed as described earlier and in supplementary material. Two evaluation metrics are reported: the mean center location error in pixels [4], and the percentage of correctly tracked frames as computed by the bounding box overlap criterion

    \frac{\mathrm{area}(ROI_D \cap ROI_{GT})}{\mathrm{area}(ROI_D \cup ROI_{GT})} > 0.5,
Figure 3: Convergence of the classifier: Samples from frames 113, 125, 733, and 1435 of the "liquor" sequence. The leftmost image shows the probabilities returned by the initial classifier trained using only the first frame, the second image shows the foreground probabilities returned from the current classifier, the third image shows the foreground selection made by the graph-cut step, and the final image shows the smoothed score used to select bounding box location.
where ROI_D is the detected region and ROI_GT is the ground truth region. The ground truth for the PROST dataset is reported using a constant-sized bounding box. Table 1 compares to [18, 4, 1, 13].
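The Pascal overlap criterion above is just intersection-over-union of two boxes. A minimal, self-contained implementation for axis-aligned boxes (x0, y0, x1, y1):

```python
def pascal_overlap(box_d, box_gt):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1):
    area(D intersect GT) / area(D union GT). A detection counts as correct
    under the Pascal criterion when this ratio exceeds 0.5."""
    ix = max(0.0, min(box_d[2], box_gt[2]) - max(box_d[0], box_gt[0]))
    iy = max(0.0, min(box_d[3], box_gt[3]) - max(box_d[1], box_gt[1]))
    inter = ix * iy
    area_d = (box_d[2] - box_d[0]) * (box_d[3] - box_d[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    union = area_d + area_gt - inter
    return inter / union if union > 0 else 0.0
```

Note that this measure penalizes a tracker whose box correctly shrinks below half the annotated area, which is exactly the effect discussed below for the liquor and box sequences.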
In the liquor sequence our method correctly shrinks the bounding box to the label, since the
rest of the bottle is not discriminative. Unfortunately, this is penalized in the Pascal score
since the area ratio drops below 0.5 of the initial bounding box despite perfect tracking. This
causes the score to drop to 18.9. If we modify the criterion to count as valid a detection
where > 99% of the detection area lies within the annotated ground truth region, the score
becomes 75.6%. If we allow for > 90% of the detected area to lie within the ground truth
box, the final pascal result for the liquor sequence becomes 79.1%. See Figure 3. The same
phenomenon occurs in the box sequence, where our approach adapts to tracking the label
at the bottom of the box. Note, this additional detection criteria has no effect on any other
scores. Additional results, including failure modes as well as successful tracking where other
approaches fail, are reported in the supplementary material, both for the case of superpixels
and tracks.
                 Overall   board             box               lemming           liquor
                 pascal    pascal  distance  pascal  distance  pascal  distance  pascal  distance
ours             74.7      92.1    13.7      42.9*   63.7      88.1    19.4      75.6*   42.5*
P-N [13]         37.15     12.9    139.5     36.9    99.3*     34.3    26.4*     64.5    17.4*
PROST [18]       80.4      75.0    39.0      90.6    13.0      70.5    25.1      85.4    21.5
MILTrack [4]     49.2      67.9    51.2      24.5    104.6     83.6    14.9      20.6    165.1
FragTrack [1]    66.0      67.9    90.1      61.4    57.4      54.9    82.8      79.9    30.7
Table 1: Comparison with recent methods on the PROST dataset. Best scores for each sequence
and metric are shown in bold. Our method and the P-N tracker [13] do not always detect the
object. Ground truthed frames in which no location was reported by the method of [13] were not
counted into the final distance score. The method of [13] missed 2 detections on the box sequence,
1 detection on the lemming sequence, and 80 on the liquor sequence. When our approach failed to
detect the object, we used the predicted bounding box from the state of the filter as our reported
result.
4
Discussion
We have proposed an approach to robust filtering embedding a multiple instance learning SVM within a filtering framework, and iteratively performing regression (filtering) and
classification (inlier selection) in hope of reaching an approximate estimate of the dominant mode of the posterior for the case where other modes are due to outlier processes in
the measurements. We emphasize that our approach comes with no provable properties or
guarantees, other than for the trivial case when the dynamics are linear, the inlier-outlier
sets are linearly separable, the noises are Gaussian, zero-mean, IID white and independent
with known covariance, and when the initial inlier set is known to include all inliers but is
not necessarily pure. In this case, the method proposed converges to the conditional mean
of the posterior p(x(t) | {y(k)}_{k=1}^t). However, we have provided empirical validation of our
approach on challenging visual tracking problems, where it exceeds the state of the art, and
illustrated some of its failure modes.
Acknowledgment: Research supported by AFOSR FA9550-09-1-0427, ONR N000141110863, and DARPA FA8650-11-1-7156.
References
[1] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral
histogram. In Proc. CVPR, 2006.
[2] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance
learning. In Proc. NIPS, 2003.
[3] S. Avidan. Ensemble tracking. PAMI, 29:261-271, 2007.
[4] B. Babenko, M.-H. Yang, and S. Belongie. Visual tracking with online multiple instance
learning. In Proc. CVPR, 2009.
[5] Y. Bar-Shalom and X.-R. Li. Estimation and tracking: principles, techniques and software.
YBS Press, 1998.
[6] A. Doucet, N. de Freitas, and N. Gordon. Sequential monte carlo methods in practice. Springer
Verlag, New York, 2001.
[7] J. Fan, X. Shen, and Y. Wu. Closed-loop adaptation for robust tracking. In Proc. ECCV,
2010.
[8] P. Felzenszwalb, D. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. In PAMI, 2010.
[9] L. El Ghaoui and G. Calafiore. Robust filtering for discrete-time systems with structured
uncertainty. In IEEE Transactions on Automatic Control, 2001.
[10] H. Grabner, C. Leistner, and H. Bischof. Semi-supervised on-line boosting for robust tracking.
In Proc. ECCV, 2008.
[11] P.J. Huber. Robust Statistics. Wiley, New York, 1981.
[12] J. Jackson, A. J. Yezzi, and S. Soatto. Dynamic shape and appearance modeling via moving
and deforming layers. IJCV, 79(1):71-84, August 2008.
[13] Z. Kalal, J. Matas, and K. Mikolajczyk. P-n learning: Bootstrapping binary classifiers by
structural constraints. In Proc. CVPR, 2010.
[14] H. Li and M. Fu. A linear matrix inequality approach to robust H∞ filtering. IEEE Transactions
on Signal Processing, 45(9):2338-2350, September 1997.
[15] H. Lim, V. Morariu, O. Camps, and M. Sznaier. Dynamic appearance modeling for human
tracking. In Proc. CVPR, 2006.
[16] J. Liu. Monte carlo strategies in scientific computing. SPringer Verlag, 2001.
[17] X. Ren and J. Malik. Tracking as repeated figure/ground segmentation. In Proc. CVPR, 2007.
[18] J. Santner, C. Leistner, A. Saffari, T. Pock, and H. Bischof. PROST Parallel Robust Online
Simple Tracking. In Proc. CVPR, 2010.
[19] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver
for svm. In Proc. ICML, 2007.
[20] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453-1484, September 2005.
[21] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. In Proc.
CVPR, 2010.
[22] Z. Yin and R. T. Collins. Shape constrained figure-ground segmentation and tracking. In Proc.
CVPR, 2009.
drifting:1 existence:1 original:2 assumes:1 include:1 lock:1 marginalized:1 hinge:3 maintaining:1 pretend:3 reflectance:1 giving:2 grabner:1 classical:4 move:2 objective:4 added:1 matas:1 occurs:1 malik:1 parametric:1 strategy:1 dependence:1 diagonal:1 september:2 gradient:1 distance:5 pnf:2 separating:1 vd:2 majority:1 collected:3 trivial:1 reason:2 enforcing:1 provable:1 kalman:1 length:1 index:4 modeled:1 ratio:1 minimizing:2 innovation:2 difficult:2 unfortunately:2 mostly:1 a1j:1 negative:4 resurgence:1 design:1 implementation:2 enclosing:1 ski:1 unknown:2 perform:1 observation:8 finite:4 descent:2 behave:1 extended:2 incorporated:1 frame:22 rn:3 y1:1 perturbation:1 smoothed:1 august:1 drift:3 bottle:1 required:1 specified:1 bischof:2 california:1 distinction:1 nip:1 beyond:3 bar:1 below:4 dynamical:1 sparsity:1 program:2 max:12 including:2 video:2 memory:4 green:1 power:1 overlap:1 disturbance:1 indicator:1 residual:1 mn:1 scheme:4 improve:1 temporally:2 prior:2 interdependent:1 determining:1 relative:2 afosr:1 loss:6 discriminatively:1 mixed:1 limitation:1 filtering:21 srebro:1 versus:1 validation:2 h2:1 principle:1 viewpoint:1 eccv:2 penalized:1 supported:1 last:2 keeping:1 side:4 allow:2 template:2 felzenszwalb:1 distributed:2 boundary:7 dimension:1 valid:1 quantum:1 mikolajczyk:1 forward:2 collection:1 made:2 projected:1 author:1 counted:1 transaction:2 sj:1 approximate:2 emphasize:1 doucet:1 robotic:1 sequentially:1 assumed:2 belongie:1 tuples:1 discriminative:4 shwartz:1 alternatively:1 search:5 latent:2 triplet:1 reality:1 table:2 learn:1 robust:12 forest:2 complex:1 necessarily:1 kamil:1 domain:2 diag:2 did:1 linearly:1 bounding:19 noise:3 repeated:1 body:1 fig:2 board:1 wiley:1 lc:2 formalization:1 fails:1 position:1 sub:1 wish:1 explicit:6 candidate:3 lie:2 jmlr:1 third:1 hw:5 rk:2 r8:1 r2:7 svm:7 sequential:2 importance:2 entertained:1 illumination:1 margin:5 demand:1 rejection:2 cx:1 yin:1 appearance:14 visual:5 failed:2 contained:1 tracking:19 
springer:2 truthed:1 truth:4 determines:1 conditional:2 goal:3 identity:1 viewed:1 sized:1 absence:1 change:17 hard:1 except:3 operates:2 hyperplane:4 deforming:2 indicating:1 select:2 support:2 latter:1 meant:1 brevity:2 collins:1 violated:1 evaluate:1 tested:1 phenomenon:1 |
Lower Bounds for Passive and Active Learning

Maxim Raginsky*
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign

Alexander Rakhlin
Department of Statistics
University of Pennsylvania
Abstract
We develop unified information-theoretic machinery for deriving lower bounds for passive and active learning schemes. Our bounds involve the so-called Alexander's capacity function. The supremum of this function has been recently rediscovered by Hanneke in the context of active learning under the name of "disagreement coefficient." For passive learning, our lower bounds match the upper bounds of Giné and Koltchinskii up to constants and generalize analogous results of Massart and Nédélec. For active learning, we provide the first known lower bounds based on the capacity function rather than the disagreement coefficient.
1 Introduction

Not all Vapnik–Chervonenkis classes are created equal. This was observed by Massart and Nédélec [24], who showed that, when it comes to binary classification rates on a sample of size n under a margin condition, some classes admit rates of the order 1/n while others only (log n)/n. The latter classes were called "rich" in [24]. As noted by Giné and Koltchinskii [15], the fine complexity notion that defines this "richness" is in fact embodied in Alexander's capacity function.¹ Somewhat surprisingly, the supremum of this function (called the disagreement coefficient by Hanneke [19]) plays a key role in risk bounds for active learning. The contribution of this paper is twofold. First, we prove lower bounds for passive learning based on Alexander's capacity function, matching the upper bounds of [15] up to constants. Second, we prove lower bounds for the number of label requests in active learning in terms of the capacity function. Our proof techniques are information-theoretic in nature and provide a unified tool to study active and passive learning within the same framework.
Active and passive learning. Let (X, A) be an arbitrary measurable space. Let (X, Y) be a random variable taking values in X × {0, 1} according to an unknown distribution P = μ ⊗ P_{Y|X}, where μ denotes the marginal distribution of X. Here, X is an instance (or a feature, a predictor variable) and Y is a binary response (or a label). Classical results in statistical learning assume availability of an i.i.d. sample {(X_i, Y_i)}_{i=1}^n from P. In this framework, the learner is passive and has no control over how this sample is chosen. The classical setting is well studied, and the following question has recently received attention: do we gain anything if data are obtained sequentially, and the learner is allowed to modify the design distribution μ of the predictor variable before receiving the next pair (X_i, Y_i)? That is, can the learner actively use the information obtained so far to facilitate faster learning?

Two paradigms often appear in the literature: (i) the design distribution is a Dirac delta function at some x_i that depends on (x^{i−1}, Y^{i−1}), or (ii) the design distribution is a restriction of the original distribution to some measurable set. There is rich literature on both approaches, and we only mention a few results here. The paradigm (i) is closely related to learning with membership queries [21], generalized binary search [25], and coding with noiseless feedback [6]. The goal is to actively
choose the next x_i so that the observed Y_i ~ P_{Y|X=x_i} is sufficiently "informative" for the classification task. In this paradigm, the sample no longer provides information about the distribution μ (see [7] for further discussion and references).

[* Affiliation until January 2012: Department of Electrical and Computer Engineering, Duke University.]
[¹ To be precise, the capacity function depends on the underlying probability distribution.]

The setting (ii) is often called selective sampling
[9, 13, 8], although the term active learning is also used. In this paradigm, the aim is to sequentially choose subsets D_i ⊂ X based on the observations prior to the i-th example, such that the label Y_i is requested only if X_i ∈ D_i. The sequence {X_i}_{i=1}^n is assumed to be i.i.d., and so, from the viewpoint of the learner, X_i is sampled from the conditional distribution μ(·|D_i).

In recent years, several interesting algorithms for active learning and selective sampling have appeared in the literature, most notably: the A2 algorithm of Balcan et al. [4], which explicitly maintains D_i as a "disagreement" set of a "version space"; the empirical risk minimization (ERM) based algorithm of Dasgupta et al. [11], which maintains the set D_i implicitly through synthetic and real examples; and the importance-weighted active learning algorithm of Beygelzimer et al. [5], which constructs the design distribution through careful reweighting in the feature space. An insightful analysis has been carried out by Hanneke [20, 19], who distilled the role of the so-called disagreement coefficient in governing the performance of several of these active learning algorithms. Finally, Koltchinskii [23] analyzed active learning procedures using localized Rademacher complexities and Alexander's capacity function, which we discuss next.
Alexander's capacity function. Let F denote a class of candidate classifiers, where a classifier is a measurable function f : X → {0, 1}. Suppose the VC dimension of F is finite: VC-dim(F) = d. The loss (or risk) of f is its probability of error, R_P(f) ≜ E_P[1{f(X)≠Y}] = P(f(X) ≠ Y). It is well known that the risk is globally minimized by the Bayes classifier f* = f*_P, defined by f*(x) ≜ 1{2η(x) ≥ 1}, where η(x) ≜ E[Y | X = x] is the regression function. Define the margin as h ≜ inf_{x∈X} |2η(x) − 1|. If h > 0, we say the problem satisfies Massart's noise condition. We define the excess risk of a classifier f by E_P(f) ≜ R_P(f) − R_P(f*), so that E_P(f) ≥ 0, with equality if and only if f = f* μ-a.s. Given ε ∈ (0, 1], define

    F_ε(f*) ≜ {f ∈ F : μ(f(X) ≠ f*(X)) ≤ ε},
    D_ε(f*) ≜ {x ∈ X : ∃ f ∈ F_ε(f*) s.t. f(x) ≠ f*(x)}.

The set F_ε consists of all classifiers f ∈ F that are ε-close to f* in the L₁(μ) sense, while the set D_ε consists of all points x ∈ X for which there exists a classifier f ∈ F_ε that disagrees with the Bayes classifier f* at x. Alexander's capacity function [15] is defined as

    τ(ε) ≜ μ(D_ε(f*))/ε,    (1)

that is, τ(ε) measures the relative size (in terms of μ) of the disagreement region D_ε compared to ε. Clearly, τ(ε) is always bounded above by 1/ε; however, in some cases τ(ε) ≤ τ₀ with τ₀ < ∞.
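For intuition, τ(ε) can be computed directly on a finite instance space. The following sketch is illustrative and not from the paper (the class of threshold classifiers under the uniform measure is chosen because its capacity is the constant τ₀ = 2):

```python
from fractions import Fraction

def capacity(classifiers, f_star, mu, eps):
    """tau(eps) = mu(D_eps(f*)) / eps on a finite instance space.

    classifiers: list of dicts x -> {0, 1}; mu: dict x -> probability.
    """
    def dist(f, g):  # L1(mu) distance mu(f(X) != g(X))
        return sum(mu[x] for x in mu if f[x] != g[x])
    # F_eps(f*): classifiers eps-close to the Bayes classifier f*
    f_eps = [f for f in classifiers if dist(f, f_star) <= eps]
    # D_eps(f*): points where some member of F_eps disagrees with f*
    d_eps = {x for f in f_eps for x in mu if f[x] != f_star[x]}
    return sum(mu[x] for x in d_eps) / eps

# Threshold classifiers 1{x < t} on {0, ..., 9} under the uniform measure
domain = range(10)
mu = {x: Fraction(1, 10) for x in domain}
thresholds = [{x: int(x < t) for x in domain} for t in range(11)]
f_star = thresholds[5]
tau_val = capacity(thresholds, f_star, mu, Fraction(1, 5))
print(tau_val)  # -> 2: for thresholds the disagreement region has mass 2*eps
```

Exact rational arithmetic (`Fraction`) avoids floating-point ties in the ε-closeness test.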
The function τ was originally introduced by Alexander [1, 2] in the context of exponential inequalities for empirical processes indexed by VC classes of functions, and Giné and Koltchinskii [15] generalized Alexander's results. In particular, they proved (see [15, p. 1213]) that, for a VC class of binary-valued functions with VC-dim(F) = d, the ERM solution f̂_n = argmin_{f∈F} (1/n) Σ_{i=1}^n 1{f(X_i)≠Y_i} under Massart's noise condition satisfies

    E_P(f̂_n) ≤ C [ (d/(nh)) log τ(d/(nh²)) + s/(nh) ]    (2)

with probability at least 1 − K s⁻¹ e^{−s/K} for some constants C, K and any s > 0. The upper bound (2) suggests the importance of Alexander's capacity function for passive learning, leaving open the question of necessity. Our first contribution is a lower bound which matches the upper bound (2) up to constant, showing that, in fact, dependence on the capacity is unavoidable.
Recently, Koltchinskii [23] made an important connection between Hanneke's disagreement coefficient and Alexander's capacity function. Under Massart's noise condition, Koltchinskii showed (see [23, Corollary 1]) that, for achieving an excess loss of ε with confidence 1 − δ, the number of queries issued by his active learning algorithm is bounded above by

    C (τ₀ log(1/ε) / h²) [ d log τ₀ + log(1/δ) + log log(1/ε) + log log(1/h) ],    (3)

where τ₀ = sup_{ε∈(0,1]} τ(ε) is Hanneke's disagreement coefficient. Similar bounds based on the disagreement coefficient have appeared in [19, 20, 11]. The second contribution of this paper is a lower bound on the expected number of queries based on Alexander's capacity τ(ε).
Comparison to known lower bounds. For passive learning, Massart and Nédélec [24] proved two lower bounds which, in fact, correspond to τ(ε) = 1/ε and τ(ε) = τ₀, the two endpoints on the complexity scale for the capacity function. Without the capacity function at hand, the authors emphasize that "rich" VC classes yield a larger lower bound. Our Theorem 1 below gives a unified construction for all possible complexities τ(ε).

In the PAC framework, the lower bound Ω(d/ε + (1/ε) log(1/δ)) goes back to [12]. It follows from our results that in the noisy version of the problem (h ≠ 1), the lower bound is in fact Ω((d/ε) log(1/ε) + (1/ε) log(1/δ)) for classes with τ(ε) = Θ(1/ε).

For active learning, Castro and Nowak [7] derived lower bounds, but without the disagreement coefficient and under a Tsybakov-type noise condition. This setting is outside the scope of this paper. Hanneke [19] proved a lower bound on the number of label requests specifically for the A2 algorithm in terms of the disagreement coefficient. In contrast, the lower bounds of Theorem 2 are valid for any algorithm and are in terms of Alexander's capacity function. Finally, a result by Kääriäinen [22] (strengthened by [5]) gives a lower bound of Ω(ν²/ε²), where ν = inf_{f∈F} E_P(f). A closer look at the construction of the lower bound reveals that it is achieved by considering a specific margin h = ε/ν. Such an analysis is somewhat unsatisfying, as we would like to keep h as a free parameter, not necessarily coupled with the desired accuracy ε. This point of view is put forth by Massart and Nédélec [24, p. 2329], who argue for a non-asymptotic analysis where all the parameters of the problem are made explicit. We also feel that this gives a better understanding of the problem.
2 Setup and main results

We suppose that the instance space X is a countably infinite set. Also, log(·) ≡ log_e(·) throughout.

Definition 1. Given a VC function class F and a margin parameter h ∈ [0, 1], let C(F, h) denote the class of all conditional probability distributions P_{Y|X} of Y ∈ {0, 1} given X ∈ X, such that: (a) the Bayes classifier f* ∈ F, and (b) the corresponding regression function satisfies the Massart condition with margin h > 0.

Let P(X) denote the space of all probability measures on X. We now introduce Alexander's capacity function (1) into the picture. Whenever we need to specify explicitly the dependence of τ(ε) on f* and μ, we will write τ(ε; f*, μ). We also denote by T the set of all admissible capacity functions τ : (0, 1] → R₊, i.e., τ ∈ T if and only if there exist some f* ∈ F and μ ∈ P(X) such that τ(ε) = τ(ε; f*, μ) for all ε ∈ (0, 1]. Without loss of generality, we assume τ(ε) ≥ 2.

Definition 2. Given some μ ∈ P(X) and a pair (F, h) as in Def. 1, we let P(μ, F, h) denote the set of all joint distributions of (X, Y) ∈ X × {0, 1} of the form μ ⊗ P_{Y|X}, such that P_{Y|X} ∈ C(F, h). Moreover, given an admissible function τ ∈ T and some ε ∈ (0, 1], we let P(μ, F, h, ε, τ) denote the subset of P(μ, F, h) such that τ(ε; f*, μ) = τ(ε).
Finally, we specify the type of learning schemes we will be dealing with.

Definition 3. An n-step learning scheme S consists of the following objects: n conditional probability distributions μ^(t)_{X_t|X^{t−1},Y^{t−1}}, t = 1, …, n, and a mapping ψ : X^n × {0, 1}^n → F.

This definition covers the passive case if we let

    μ^(t)_{X_t|X^{t−1},Y^{t−1}}(·|x^{t−1}, y^{t−1}) = μ(·),  ∀(x^{t−1}, y^{t−1}) ∈ X^{t−1} × {0, 1}^{t−1},

as well as the active case, in which μ^(t)_{X_t|X^{t−1},Y^{t−1}} is the user-controlled design distribution for the feature at time t given all currently available information. The learning process takes place sequentially as follows: at each time step t = 1, …, n, a random feature X_t is drawn according to μ^(t)(·|X^{t−1}, Y^{t−1}), and then a label Y_t is drawn given X_t. After the n samples {(X_t, Y_t)}_{t=1}^n are collected, the learner computes the candidate classifier f̂_n = ψ(X^n, Y^n).
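The sequential protocol just described can be sketched in code. This is an illustrative sketch, not from the paper: `design`, `label_dist`, and `fit` are hypothetical callables standing in for μ^(t), P_{Y|X}, and the mapping that produces f̂_n.

```python
import random

def run_scheme(n, design, label_dist, fit, rng=random.Random(0)):
    """Generic n-step learning scheme: at step t, draw X_t from the
    (possibly history-dependent) design distribution, then draw Y_t."""
    xs, ys = [], []
    for t in range(n):
        mu_t = design(t, xs, ys)                       # mu^(t)(. | x^{t-1}, y^{t-1})
        x = rng.choices(list(mu_t), weights=list(mu_t.values()))[0]
        y = 1 if rng.random() < label_dist(x) else 0   # Y_t ~ P_{Y|X=x}
        xs.append(x)
        ys.append(y)
    return fit(xs, ys)                                 # candidate classifier f_hat_n

# Passive case: the design ignores the history and always returns mu.
mu = {0: 0.5, 1: 0.5}
eta = {0: 0.9, 1: 0.1}
f_hat = run_scheme(
    200,
    design=lambda t, xs, ys: mu,
    label_dist=lambda x: eta[x],
    fit=lambda xs, ys: {x: int(sum(y for xi, y in zip(xs, ys) if xi == x)
                               > sum(1 for xi in xs if xi == x) / 2)
                        for x in mu},
)
print(f_hat)  # majority-vote estimate of the Bayes classifier 1{eta >= 1/2}
```

An active scheme would differ only in `design`, which could, say, restrict μ to a current disagreement set.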
To quantify the performance of such a scheme, we need the concept of an induced measure, which generalizes the set-up of [14]. Specifically, given some P = μ ⊗ P_{Y|X} ∈ P(μ, F, h), define the following probability measure on X^n × {0, 1}^n:

    P^S(x^n, y^n) = ∏_{t=1}^n P_{Y|X}(y_t|x_t) μ^(t)_{X_t|X^{t−1},Y^{t−1}}(x_t|x^{t−1}, y^{t−1}).

Definition 4. Let Q be a subset of P(μ, F, h). Given an accuracy parameter ε ∈ (0, 1) and a confidence parameter δ ∈ (0, 1), an n-step learning scheme S is said to (ε, δ)-learn Q if

    sup_{P∈Q} P^S( E_P(f̂_n) ≥ εh ) ≤ δ.    (4)

Remark 1. Leaving the precision as εh makes the exposition a bit cleaner in light of the fact that, under Massart's noise condition with margin h, E_P(f) ≥ h ‖f − f*_P‖_{L₁(μ)} = h μ(f(X) ≠ f*_P(X)) (cf. Massart and Nédélec [24, p. 2352]).
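The inequality in Remark 1 can be checked numerically via the standard identity E_P(f) = Σ_x μ(x) |2η(x) − 1| 1{f(x) ≠ f*(x)}. The regression function and measure below are illustrative, not from the paper:

```python
def excess_risk(f, f_star, eta, mu):
    """E_P(f) = sum_x mu(x) * |2*eta(x) - 1| * 1{f(x) != f*(x)}."""
    return sum(mu[x] * abs(2 * eta[x] - 1) for x in mu if f[x] != f_star[x])

mu = {0: 0.25, 1: 0.25, 2: 0.5}
eta = {0: 0.8, 1: 0.2, 2: 0.9}                    # margin h = min |2*eta - 1| = 0.6
f_star = {x: int(2 * eta[x] >= 1) for x in mu}    # Bayes classifier
f = {0: 0, 1: 0, 2: 1}                            # disagrees with f* only at x = 0
h = min(abs(2 * e - 1) for e in eta.values())
dist = sum(mu[x] for x in mu if f[x] != f_star[x])
print(excess_risk(f, f_star, eta, mu) >= h * dist)  # -> True
```

Equality holds exactly when f disagrees with f* only where the margin is attained, as in this example.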
With these preliminaries out of the way, we can state the main results of this paper:

Theorem 1 (Lower bounds for passive learning). Given any τ ∈ T, any sufficiently large d ∈ N and any ε ∈ (0, 1], there exist a probability measure μ ∈ P(X) and a VC class F with VC-dim(F) = d with the following properties:

(1) Fix any K > 1 and δ ∈ (0, 1/2). If there exists an n-step passive learning scheme that (ε/2, δ)-learns P(μ, F, h, ε, τ) for some h ∈ (0, 1 − K⁻¹], then

    n = Ω( (1 − δ) d log τ(ε) / (Kεh²) + log(1/δ) / (Kεh²) ).    (5)

(2) If there exists an n-step passive learning scheme that (ε/2, δ)-learns P(μ, F, 1, ε, τ), then

    n = Ω( (1 − δ) d / ε ).    (6)

Theorem 2 (Lower bounds for active learning). Given any τ ∈ T, any sufficiently large d ∈ N and any ε ∈ (0, 1], there exist a probability measure μ ∈ P(X) and a VC class F with VC-dim(F) = d with the following property: Fix any K > 1 and any δ ∈ (0, 1/2). If there exists an n-step active learning scheme that (ε/2, δ)-learns P(μ, F, h, ε, τ) for some h ∈ (0, 1 − K⁻¹], then

    n = Ω( (1 − δ) d log τ(ε) / (Kh²) + τ(ε) log(1/δ) / (Kh²) ).    (7)

Remark 2. The lower bound in (6) is well known and goes back to [12]. We mention it because it naturally arises from our construction. In fact, there is a smooth transition between (5) and (6), with the extra log τ(ε) factor disappearing as h approaches 1. As for the active learning lower bound, we conjecture that d log τ(ε) is, in fact, optimal, and the extra factor of τ₀ in d τ₀ log τ₀ log(1/ε) in (3) arises from the use of a passive learning algorithm as a black box.
The remainder of the paper is organized as follows: Section 3 describes the required information-theoretic tools, which are then used in Section 4 to prove Theorems 1 and 2. The proofs of a number of technical lemmas can be found in the Supplementary Material.
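Before moving on, the gap between the two capacity regimes in the passive bound (5) can be made concrete by plugging in numbers: an "easy" class with bounded capacity (τ(ε) = τ₀) versus a "rich" class with τ(ε) = 1/ε. The constants below are arbitrary and purely illustrative:

```python
import math

def passive_lower_bound(d, eps, h, delta, tau_eps, K=2.0):
    """Order of the right-hand side of (5); absolute constants dropped."""
    return ((1 - delta) * d * math.log(tau_eps)
            + math.log(1 / delta)) / (K * eps * h ** 2)

d, eps, h, delta = 20, 0.01, 0.5, 0.05
n_easy = passive_lower_bound(d, eps, h, delta, tau_eps=2.0)        # tau(eps) = tau_0 = 2
n_rich = passive_lower_bound(d, eps, h, delta, tau_eps=1.0 / eps)  # tau(eps) = 1/eps
print(n_rich / n_easy)  # the rich class pays an extra log(1/eps) factor
```

This mirrors the Massart–Nédélec dichotomy: 1/n versus (log n)/n rates correspond to bounded versus maximal capacity.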
3 Information-theoretic framework

Let P and Q be two probability distributions on a common measurable space W. Given a convex function φ : [0, ∞) → R such that φ(1) = 0, the φ-divergence² between P and Q [3, 10] is given by

    D_φ(P‖Q) ≜ ∫_W (dQ/dν) φ( (dP/dν) / (dQ/dν) ) dν,    (8)

where ν is an arbitrary σ-finite measure that dominates both P and Q.³ For the special case of W = {0, 1}, when P and Q are the distributions of a Bernoulli(p) and a Bernoulli(q) random variable, we will denote their φ-divergence by

    d_φ(p‖q) = q φ(p/q) + (1 − q) φ((1 − p)/(1 − q)).    (9)

[² We deviate from the standard term "f-divergence" since f is already reserved for a generic classifier.]
[³ For instance, one can always take ν = P + Q. It is easy to show that the value of D_φ(P‖Q) in (8) does not depend on the choice of the dominating measure.]
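The binary φ-divergence (9) is easy to implement. The following sketch (arguments illustrative) checks that φ(u) = u log u recovers the binary KL divergence and φ(u) = −log u recovers its reverse:

```python
import math

def d_phi(p, q, phi):
    """Binary phi-divergence (9): d_phi(p||q) = q*phi(p/q) + (1-q)*phi((1-p)/(1-q))."""
    return q * phi(p / q) + (1 - q) * phi((1 - p) / (1 - q))

kl = lambda u: u * math.log(u)   # phi(u) = u log u  ->  d(p||q)
rkl = lambda u: -math.log(u)     # phi(u) = -log u   ->  d(q||p)

p, q = 0.8, 0.3
print(d_phi(p, q, kl))                                   # binary KL divergence d(p||q)
print(abs(d_phi(p, q, rkl) - d_phi(q, p, kl)) < 1e-12)   # -> True: reverse KL = swapped KL
```
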
Two particular choices of φ are of interest: φ(u) = u log u, which gives the ordinary Kullback–Leibler (KL) divergence D(P‖Q), and φ(u) = −log u, which gives the reverse KL divergence D(Q‖P), which we will denote by D_re(P‖Q). We will write d(·‖·) for the binary KL divergence. Our approach makes fundamental use of the data processing inequality that holds for any φ-divergence [10]: if P and Q are two possible probability distributions for a random variable W ∈ W and if P_{Z|W} is a conditional probability distribution of some other random variable Z given W, then

    D_φ(P_Z ‖ Q_Z) ≤ D_φ(P‖Q),    (10)

where P_Z (resp., Q_Z) is the marginal distribution of Z when W has distribution P (resp., Q).
Consider now an arbitrary n-step learning scheme S. Let us fix a finite set {f₁, …, f_N} ⊂ F and assume that to each m ∈ [N] we can associate a probability measure P^m = μ ⊗ P^m_{Y|X} ∈ P(μ, F, h) with the Bayes classifier f*_{P^m} = f_m. For each m ∈ [N], let us define the induced measure

    P^{S,m}(x^n, y^n) ≜ ∏_{t=1}^n P^m_{Y|X}(y_t|x_t) μ^(t)_{X_t|X^{t−1},Y^{t−1}}(x_t|x^{t−1}, y^{t−1}).    (11)

Moreover, given any probability distribution π over [N], let P^{S,π}(m, x^n, y^n) ≜ π(m) P^{S,m}(x^n, y^n). In other words, P^{S,π} is the joint distribution of (M, X^n, Y^n) ∈ [N] × X^n × {0, 1}^n under which M ~ π and P(X^n, Y^n | M = m) = P^{S,m}(X^n, Y^n).
The first ingredient in our approach is standard [27, 14, 24]. Let {f₁, …, f_N} be an arbitrary 2ε-packing subset of F (that is, ‖f_i − f_j‖_{L₁(μ)} > 2ε for all i ≠ j). Suppose that S satisfies (4) on some Q that contains {P¹, …, P^N}. Now consider

    M̂ = M̂(X^n, Y^n) ≜ argmin_{1≤m≤N} ‖f̂_n − f_m‖_{L₁(μ)}.    (12)

Then the following lemma is easily proved using the triangle inequality:

Lemma 1. With the above definitions, P^{S,π}(M̂ ≠ M) ≤ δ.
The second ingredient of our approach is an application of the data processing inequality (10) with a judicious choice of φ. Let W ≜ (M, X^n, Y^n), let M be uniformly distributed over [N], π(m) = 1/N for all m ∈ [N], and let P be the induced measure P^{S,π}. Then we have the following lemma (see also [17, 16]):

Lemma 2. Consider any probability measure Q for W under which M is distributed according to π and independent of (X^n, Y^n). Let the divergence-generating function φ be such that the mapping p ↦ d_φ(p‖q) is nondecreasing on the interval [q, 1]. Then, assuming that δ ≤ 1 − 1/N,

    D_φ(P‖Q) ≥ (1/N) φ(N(1 − δ)) + (1 − 1/N) φ( Nδ/(N − 1) ).    (13)

Proof. Define the indicator random variable Z = 1{M̂ = M}. Then P(Z = 1) ≥ 1 − δ by Lemma 1. On the other hand, since Q can be factored as Q(m, x^n, y^n) = (1/N) Q_{X^n,Y^n}(x^n, y^n), we have

    Q(Z = 1) = Σ_{m=1}^N Q(M = m, M̂ = m) = (1/N) Σ_{m=1}^N Σ_{x^n,y^n} Q_{X^n,Y^n}(x^n, y^n) 1{M̂(x^n,y^n) = m} = 1/N.

Therefore,

    D_φ(P‖Q) ≥ D_φ(P_Z ‖ Q_Z) = d_φ( P(Z = 1) ‖ Q(Z = 1) ) ≥ d_φ(1 − δ ‖ 1/N),

where the first step is by the data processing inequality (10), the second is due to the fact that Z is binary, and the third is by the assumed monotonicity property of φ. Using (9), we arrive at (13).
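The right-hand side of (13) with φ(u) = u log u equals d(1 − δ ‖ 1/N) and dominates the familiar Fano-type relaxation (1 − δ) log N − log 2. A quick numerical sanity check (parameters illustrative, not part of the paper):

```python
import math

def phi_kl(u):
    return u * math.log(u) if u > 0 else 0.0

def rhs_13(N, delta):
    """Right-hand side of (13) for phi(u) = u log u, i.e. d(1 - delta || 1/N)."""
    return phi_kl(N * (1 - delta)) / N + (1 - 1 / N) * phi_kl(N * delta / (N - 1))

for N, delta in [(8, 0.3), (1024, 0.1)]:
    print(rhs_13(N, delta) >= (1 - delta) * math.log(N) - math.log(2))  # -> True both times
```
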
Next, we need to choose the divergence-generating function φ and the auxiliary distribution Q.

Choice of φ. Inspection of the right-hand side of (13) suggests that the usual Ω(log N) lower bounds [14, 27, 24] can be obtained if φ(u) behaves like u log u for large u. On the other hand, if φ(u) behaves like −log u for small u, then the lower bounds will be of the form Ω(log(1/δ)). These observations naturally lead to the respective choices φ(u) = u log u and φ(u) = −log u, corresponding to the KL divergence D(P‖Q) and the reverse KL divergence D_re(P‖Q) = D(Q‖P).

Choice of Q. One obvious choice of Q satisfying the conditions of the lemma is the product of the marginals P_M ≡ π and P_{X^n,Y^n} ≜ N⁻¹ Σ_{m=1}^N P^{S,m}: Q = P_M ⊗ P_{X^n,Y^n}. With this Q and φ(u) = u log u, the left-hand side of (13) is given by

    D(P‖Q) = D( P_{M,X^n,Y^n} ‖ P_M ⊗ P_{X^n,Y^n} ) = I(M; X^n, Y^n),    (14)

where I(M; X^n, Y^n) is the mutual information between M and (X^n, Y^n) with joint distribution P. On the other hand, it is not hard to show that the right-hand side of (13) can be lower-bounded by (1 − δ) log N − log 2. Combining with (14), we get

    I(M; X^n, Y^n) ≥ (1 − δ) log N − log 2,

which is (a commonly used variant of) the well-known Fano's inequality [14, Lemma 4.1], [18, p. 1250], [27, p. 1571]. The same steps, but with φ(u) = −log u, lead to the bound

    L(M; X^n, Y^n) ≥ (1 − 1/N) log(1/δ) − log 2 ≥ (1/2) log(1/δ) − log 2,

where L(M; X^n, Y^n) ≜ D_re( P_{M,X^n,Y^n} ‖ P_M ⊗ P_{X^n,Y^n} ) is the so-called lautum information between M and (X^n, Y^n) [26], and the second inequality holds whenever N ≥ 2.
However, it is often more convenient to choose Q as follows. Fix an arbitrary conditional distribution Q_{Y|X} of Y ∈ {0, 1} given X ∈ X. Given a learning scheme S, define the probability measure

    Q^S(x^n, y^n) ≜ ∏_{t=1}^n Q_{Y|X}(y_t|x_t) μ^(t)_{X_t|X^{t−1},Y^{t−1}}(x_t|x^{t−1}, y^{t−1}),    (15)

and let Q(m, x^n, y^n) = (1/N) Q^S(x^n, y^n) for all m ∈ [N].

Lemma 3. For each x^n ∈ X^n and y ∈ X, let N(y|x^n) ≜ |{1 ≤ t ≤ n : x_t = y}|. Then

    D(P‖Q) = (1/N) Σ_{m=1}^N Σ_{x∈X} D( P^m_{Y|X}(·|x) ‖ Q_{Y|X}(·|x) ) E_{P^{S,m}}[N(x|X^n)];    (16)

    D_re(P‖Q) = (1/N) Σ_{m=1}^N Σ_{x∈X} D_re( P^m_{Y|X}(·|x) ‖ Q_{Y|X}(·|x) ) E_Q[N(x|X^n)].    (17)

Moreover, if the scheme S is passive, then Eq. (17) becomes

    D_re(P‖Q) = n · E_X E_M [ D_re( P^M_{Y|X}(·|X) ‖ Q_{Y|X}(·|X) ) ],    (18)

and the same holds for D_re replaced by D.
4 Proofs of Theorems 1 and 2

Combinatorial preliminaries. Given k ∈ N, consider the k-dimensional Boolean cube {0, 1}^k = {σ = (σ₁, …, σ_k) : σ_i ∈ {0, 1}, i ∈ [k]}. For any two σ, σ′ ∈ {0, 1}^k, define their Hamming distance d_H(σ, σ′) ≜ Σ_{i=1}^k 1{σ_i ≠ σ′_i}. The Hamming weight of any σ ∈ {0, 1}^k is the number of its nonzero coordinates. For k > d, let {0, 1}^k_d denote the subset of {0, 1}^k consisting of all binary strings with Hamming weight d. We are interested in large separated and well-balanced subsets of {0, 1}^k_d. To that end, we will use the following lemma:

Lemma 4. Suppose that d is even and k > 2d. Then, for d sufficiently large, there exists a set M_{k,d} ⊂ {0, 1}^k_d with the following properties: (i) log |M_{k,d}| ≥ (d/4) log(k/(6d)); (ii) d_H(σ, σ′) > d for any two distinct σ, σ′ ∈ M_{k,d}; (iii) for any j ∈ [k],

    d/(2k) ≤ (1/|M_{k,d}|) Σ_{σ∈M_{k,d}} σ_j ≤ 3d/(2k).    (19)
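A set with the separation property of Lemma 4 can be built greedily, Gilbert–Varshamov style. The sketch below is illustrative (small parameters, and no attempt is made to certify the size guarantee (i) or the balance condition (19)):

```python
import itertools

def greedy_packing(k, d, min_dist):
    """Greedy stand-in for M_{k,d}: Hamming-weight-d strings on [k]
    with pairwise Hamming distance strictly greater than min_dist."""
    chosen = []
    for support in itertools.combinations(range(k), d):
        s = set(support)
        # two weight-d strings with supports A, B satisfy d_H = 2*(d - |A & B|)
        if all(2 * (d - len(s & set(t))) > min_dist for t in chosen):
            chosen.append(support)
    return chosen

M = greedy_packing(k=12, d=4, min_dist=4)   # separation d_H > d, as in (ii)
print(len(M))
```

Representing a weight-d string by its support makes the distance check a set intersection.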
Proof of Theorem 1. Without loss of generality, we take X = N. Let k = dτ(ε) (we increase τ if necessary to ensure that k ∈ N), and consider the probability measure μ that puts mass ε/d on each x = 1 through x = k and the remaining mass 1 − ετ(ε) on x = k + 1. (Recall that τ(ε) ≤ 1/ε.)

Let F be the class of indicator functions of all subsets of X with cardinality d. Then VC-dim(F) = d. We will focus on a particular subclass F′ of F. For each σ ∈ {0, 1}^k_d, define f_σ : X → {0, 1} by f_σ(x) = σ_x if x ∈ [k] and 0 otherwise, and take F′ = {f_σ : σ ∈ {0, 1}^k_d}. For p ∈ [0, 1], let β_p denote the probability distribution of a Bernoulli(p) random variable. Now, to each f_σ ∈ F′ let us associate the following conditional probability measure P^σ_{Y|X}:

    P^σ_{Y|X}(y|x) = [ β_{(1+h)/2}(y) σ_x + β_{(1−h)/2}(y)(1 − σ_x) ] 1{x∈[k]} + 1{y=0} 1{x∉[k]}.

It is easy to see that each P^σ_{Y|X} belongs to C(F, h). Moreover, for any two f_σ, f_σ′ ∈ F′ we have

    ‖f_σ − f_σ′‖_{L₁(μ)} = μ(f_σ(X) ≠ f_σ′(X)) = (ε/d) Σ_{i=1}^k 1{σ_i ≠ σ′_i} = (ε/d) d_H(σ, σ′).

Hence, for each choice of f* = f_σ* ∈ F′ we have F_ε(f_σ*) = {f_σ : d_H(σ, σ*) ≤ d}. This implies that D_ε(f_σ*) = [k], and therefore τ(ε; f_σ*, μ) = μ([k])/ε = τ(ε). We have thus established that, for each σ ∈ {0, 1}^k_d, the probability measure P^σ = μ ⊗ P^σ_{Y|X} is an element of P(μ, F, h, ε, τ).

Finally, let M_{k,d} ⊂ {0, 1}^k_d be the set described in Lemma 4, and let G ≜ {f_σ : σ ∈ M_{k,d}}. Then for any two distinct σ, σ′ ∈ M_{k,d} we have ‖f_σ − f_σ′‖_{L₁(μ)} = (ε/d) d_H(σ, σ′) > ε. Hence, G is an ε-packing of F′ in the L₁(μ)-norm.
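The construction can be exercised at a small scale to confirm that the capacity comes out exactly as designed. Sizes below are illustrative (d and τ far smaller than the theorem requires):

```python
from fractions import Fraction
import itertools

# Small instance of the construction: k = d*tau atoms of mass eps/d each.
d, tau = 2, 3
eps = Fraction(1, 6)                               # needs eps * tau <= 1
k = d * tau
mu = {x: eps / d for x in range(1, k + 1)}
mu[k + 1] = 1 - eps * tau                          # leftover mass, Bayes label 0

supports = [frozenset(s) for s in itertools.combinations(range(1, k + 1), d)]
f_star = supports[0]
# f_sigma is eps-close to f* iff (eps/d) * d_H(sigma, sigma*) <= eps, i.e. d_H <= d
f_eps = [s for s in supports if len(s ^ f_star) <= d]
d_eps = {x for s in f_eps for x in s ^ f_star}     # disagreement region
cap = sum(mu[x] for x in d_eps) / eps
print(cap == tau, d_eps == set(range(1, k + 1)))   # -> True True
```

As in the proof, the disagreement region is all of [k], so μ(D_ε)/ε = k(ε/d)/ε = τ.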
Now we are in a position to apply the lemmas of Section 3. Let {σ^(1), …, σ^(N)}, N = |M_{k,d}|, be a fixed enumeration of the elements of M_{k,d}. For each m ∈ [N], let us denote by P^m_{Y|X} the conditional probability measure P^{σ^(m)}_{Y|X}, by P^m the measure μ ⊗ P^m_{Y|X} on X × {0, 1}, and by f_m ∈ G the corresponding Bayes classifier. Now consider any n-step passive learning scheme that (ε/2, δ)-learns P(μ, F, h, ε, τ), and define the probability measure P on [N] × X^n × {0, 1}^n by P(m, x^n, y^n) = (1/N) P^{S,m}(x^n, y^n), where P^{S,m} is constructed according to (11). In addition, for every α ∈ (0, 1) define the auxiliary measure Q_α on [N] × X^n × {0, 1}^n by Q_α(m, x^n, y^n) = (1/N) Q^S_α(x^n, y^n), where Q^S_α is constructed according to (15) with

    Q^α_{Y|X}(y|x) ≜ β_α(y) 1{x∈[k]} + 1{y=0} 1{x∉[k]}.

Applying Lemma 2 with φ(u) = u log u, we can write

    D(P‖Q_α) ≥ (1 − δ) log N − log 2 ≥ ((1 − δ)d/4) log(k/(6d)) − log 2.    (20)

Next we apply Lemma 3. Defining p = (1+h)/2 and using the easily proved fact that

    D( P^m_{Y|X}(·|x) ‖ Q^α_{Y|X}(·|x) ) = [ d(p‖α) − d(1−p‖α) ] f_m(x) + d(1−p‖α) 1{x∈[k]},

we get

    D(P‖Q_α) = nε [ d(p‖α) + (τ(ε) − 1) d(1−p‖α) ].    (21)

Therefore, combining Eqs. (20) and (21) and using the fact that k = dτ(ε), we obtain

    n ≥ [ (1 − δ) d log(τ(ε)/6) − log 16 ] / ( 4ε [ d(p‖α) + (τ(ε) − 1) d(1−p‖α) ] ),  ∀α ∈ (0, 1).    (22)

This bound is valid for all h ∈ (0, 1], and the optimal choice of α for a given h can be calculated in closed form: α*(h) = (1−h)/2 + h/τ(ε). We now turn to the reverse KL divergence. First, suppose that h ≠ 1. Lemma 2 gives D_re(P‖Q_{1−p}) ≥ (1/2) log(1/δ) − log 2. On the other hand, using the fact that

    D_re( P^m_{Y|X}(·|x) ‖ Q^{1−p}_{Y|X}(·|x) ) = d(p‖1−p) f_m(x)    (23)

and applying Eq. (18), we can write

    D_re(P‖Q_{1−p}) = nε · d(p‖1−p) = nε · h log((1+h)/(1−h)).    (24)

We conclude that

    n ≥ [ (1/2) log(1/δ) − log 2 ] / ( εh log((1+h)/(1−h)) ).    (25)

For h = 1, we get the vacuous bound n ≥ 0.

Now we consider the two cases of Theorem 1.

(1) For a fixed K > 1, it follows from the inequality log u ≤ u − 1 that h log((1+h)/(1−h)) ≤ Kh² for all h ∈ (0, 1 − K⁻¹]. Choosing α = (1−h)/2 and using Eqs. (22) and (25), we obtain (5).

(2) For h = 1, we use (22) with the optimal setting α*(1) = 1/τ(ε), which gives (6). The transition between h = 1 and h ≠ 1 is smooth and determined by α*(h) = (1−h)/2 + h/τ(ε).
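The closed-form optimizer α*(h) = (1−h)/2 + h/τ(ε) can be verified against a brute-force grid search over the denominator of (22). The values of h and τ(ε) below are arbitrary illustrations:

```python
import math

def d_kl(p, q):
    """Binary KL divergence d(p||q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def denom(alpha, h, tau):
    """Denominator term of (22): d(p||alpha) + (tau - 1)*d(1-p||alpha), p = (1+h)/2."""
    p = (1 + h) / 2
    return d_kl(p, alpha) + (tau - 1) * d_kl(1 - p, alpha)

h, tau = 0.5, 8.0
alpha_star = (1 - h) / 2 + h / tau                 # closed-form optimizer from the proof
best = min((i / 1000 for i in range(1, 1000)), key=lambda a: denom(a, h, tau))
print(abs(best - alpha_star) < 2e-3)               # -> True
```
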
Proof of Theorem 2. We work with the same construction as in the proof of Theorem 1. First, let Q_{X^n,Y^n} ≜ (1/N) Σ_{m=1}^N P_{S,m}, and Q = ν ⊗ Q_{X^n,Y^n}, where ν is the uniform distribution on [N]. Then, by convexity,

$$D(P\|Q) \le \frac{1}{N^2}\sum_{m,m'=1}^{N}\mathbb{E}_P\left[\sum_{t=1}^{n}\log\frac{P^m_{Y|X}(Y_t|X_t)}{P^{m'}_{Y|X}(Y_t|X_t)}\right] \le n\max_{m,m'\in[N]}\max_{x\in[k]} D\big(P^m_{Y|X}(\cdot|x)\,\big\|\,P^{m'}_{Y|X}(\cdot|x)\big),$$

which is upper bounded by nh log((1+h)/(1−h)). Applying Lemma 2 with φ(u) = u log u, we therefore obtain

$$n \ge \frac{(1-\delta)\,d\,\log(k/(6d)) - \log 16}{4h\log\frac{1+h}{1-h}}. \qquad (26)$$

Next, consider the auxiliary measure Q_{1−π} with π = (1+h)/2. Then
$$\begin{aligned}
D_{\mathrm{re}}(P\|Q_{1-\pi}) &\overset{(a)}{=} \frac{1}{N}\sum_{m=1}^{N}\sum_{x=1}^{k} D_{\mathrm{re}}\big(P^m_{Y|X}(\cdot|x)\,\big\|\,Q^{1-\pi}_{Y|X}(\cdot|x)\big)\,\mathbb{E}_{Q_{1-\pi}}\!\left[N(x|X^n)\right] \\
&\overset{(b)}{=} \frac{d(\pi\|1-\pi)}{N}\sum_{m=1}^{N}\sum_{x=1}^{k} f_m(x)\,\mathbb{E}_{Q_{1-\pi}}\!\left[N(x|X^n)\right] \\
&= d(\pi\|1-\pi)\sum_{x=1}^{k}\left(\frac{1}{N}\sum_{m=1}^{N} f_m(x)\right)\mathbb{E}_{Q_{1-\pi}}\!\left[N(x|X^n)\right] \\
&\overset{(c)}{=} d(\pi\|1-\pi)\sum_{x=1}^{k}\left(\frac{1}{N}\sum_{m=1}^{N}\delta^{(m)}_x\right)\mathbb{E}_{Q_{1-\pi}}\!\left[N(x|X^n)\right] \\
&\overset{(d)}{\le} \frac{3}{2\tau(\xi)}\,h\log\frac{1+h}{1-h}\;\mathbb{E}_{Q_{1-\pi}}\!\left[\sum_{x=1}^{k} N(x|X^n)\right] \\
&\overset{(e)}{\le} \frac{3n}{2\tau(\xi)}\,h\log\frac{1+h}{1-h},
\end{aligned}$$

where (a) is by Lemma 3, (b) is by (23), (c) is by definition of {f_m}, (d) is by the balance condition (19) satisfied by M_{k,d}, and (e) is by the fact that Σ_{x=1}^k N(x|X^n) ≤ Σ_{x∈X} N(x|X^n) = n.
Applying Lemma 2 with φ(u) = −log u, we get

$$n \ge \frac{\tau(\xi)\left(\log(1/\delta) - \log 4\right)}{3h\log\frac{1+h}{1-h}}. \qquad (27)$$

Combining (26) and (27) and using the bound h log((1+h)/(1−h)) ≤ Kh² for h ∈ (0, 1 − K⁻¹], we get (7).
References
[1] K.S. Alexander. Rates of growth and sample moduli for weighted empirical processes indexed by sets. Probability Theory and Related Fields, 75(3):379–423, 1987.
[2] K.S. Alexander. The central limit theorem for weighted empirical processes indexed by sets. Journal of Multivariate Analysis, 22(2):313–339, 1987.
[3] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. Roy. Stat. Soc. Ser. B, 28:131–142, 1966.
[4] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 65–72, New York, NY, USA, 2006. ACM.
[5] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML. ACM, New York, NY, USA, 2009.
[6] M.V. Burnashev and K.S. Zigangirov. An interval estimation problem for controlled observations. Problemy Peredachi Informatsii, 10(3):51–61, 1974.
[7] R. M. Castro and R. D. Nowak. Minimax bounds for active learning. IEEE Trans. Inform. Theory, 54(5):2339–2353, 2008.
[8] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Linear classification and selective sampling under low noise conditions. Advances in Neural Information Processing Systems, 21, 2009.
[9] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[10] I. Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar., 2:299–318, 1967.
[11] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, volume 20, page 2, 2007.
[12] A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. Information and Computation, 82(3):247–261, 1989.
[13] Y. Freund, H.S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2):133–168, 1997.
[14] C. Gentile and D. P. Helmbold. Improved lower bounds for learning from noisy examples: an information-theoretic approach. Inform. Comput., 166:133–155, 2001.
[15] E. Giné and V. Koltchinskii. Concentration inequalities and asymptotic results for ratio type empirical processes. Ann. Statist., 34(3):1143–1216, 2006.
[16] A. Guntuboyina. Lower bounds for the minimax risk using f-divergences, and applications. IEEE Trans. Inf. Theory, 57(4):2386–2399, 2011.
[17] A. A. Gushchin. On Fano's lemma and similar inequalities for the minimax risk. Theory of Probability and Mathematical Statistics, pages 29–42, 2003.
[18] T. S. Han and S. Verdú. Generalizing the Fano inequality. IEEE Trans. Inf. Theory, 40(4):1247–1251, 1994.
[19] S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, page 360. ACM, 2007.
[20] S. Hanneke. Rates of convergence in active learning. Ann. Statist., 39(1):333–361, 2011.
[21] T. Hegedüs. Generalized teaching dimensions and the query complexity of learning. In COLT '95, pages 108–117, New York, NY, USA, 1995. ACM.
[22] M. Kääriäinen. Active learning in the non-realizable case. In ALT, pages 63–77, 2006.
[23] V. Koltchinskii. Rademacher complexities and bounding the excess risk of active learning. J. Machine Learn. Res., 11:2457–2485, 2010.
[24] P. Massart and É. Nédélec. Risk bounds for statistical learning. Ann. Statist., 34(5):2326–2366, 2006.
[25] R. D. Nowak. The geometry of generalized binary search. Preprint, October 2009.
[26] D. P. Palomar and S. Verdú. Lautum information. IEEE Trans. Inform. Theory, 54(3):964–975, March 2008.
[27] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Ann. Statist., 27(5):1564–1599, 1999.
Learning Anchor Planes for Classification
Ziming Zhang∗  Ľubor Ladický†  Philip H.S. Torr∗  Amir Saffari∗‡
∗ Department of Computing, Oxford Brookes University, Wheatley, Oxford, OX33 1HX, U.K.
† Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, U.K.
‡ Sony Computer Entertainment Europe, London, UK
{ziming.zhang, philiptorr}@brookes.ac.uk
[email protected]
[email protected]
Abstract
Local Coordinate Coding (LCC) [18] is a method for modeling functions of data
lying on non-linear manifolds. It provides a set of anchor points which form a local
coordinate system, such that each data point on the manifold can be approximated
by a linear combination of its anchor points, and the linear weights become the
local coordinate coding. In this paper we propose encoding data using orthogonal
anchor planes, rather than anchor points. Our method needs only a few orthogonal
anchor planes for coding, and it can linearize any (α, β, p)-Lipschitz smooth non-linear function with a fixed expected value of the upper-bound approximation error
on any high dimensional data. In practice, the orthogonal coordinate system can be
easily learned by minimizing this upper bound using singular value decomposition
(SVD). We apply our method to model the coordinates locally in linear SVMs
for classification tasks, and our experiment on MNIST shows that using only 50
anchor planes our method achieves 1.72% error rate, while LCC achieves 1.90%
error rate using 4096 anchor points.
1 Introduction
Local Coordinate Coding (LCC) [18] is a coding scheme that encodes the data locally so that any
non-linear (α, β, p)-Lipschitz smooth function (see Definition 1 in Section 2 for details) over the data
manifold can be approximated using linear functions. There are two components in this method: (1)
a set of anchor points which decide the local coordinates, and (2) the coding for each data based
on the local coordinates given the anchor points. Theoretically [18] suggests that under certain
assumptions, locality is more essential than sparsity for non-linear function approximation. LCC has
been successfully applied to many applications such like object recognition (e.g. locality-constraint
linear coding (LLC) [16]) in VOC 2009 challenge [7].
One big issue in LCC is that its classification performance is highly dependent on the number of
anchor points, as observed in Yu and Zhang [19], because these points should be "local enough"
to encode surrounding data on the data manifold accurately, which sometimes means that in real
applications the number of anchor points explodes to a surprisingly huge number. This has been
demonstrated in [18] where LCC has been tested on MNIST dataset, using from 512 to 4096 anchor
points learned from sparse coding, the error rate decreased from 2.64% to 1.90%. This situation
could become a serious problem when the distribution of the data points is sparse in the feature
space, i.e. there are many "holes" between data points (e.g. regions of feature space that are sparsely
populated by data). As a result of this, many redundant anchor points will be distributed in the holes
with little information. By using many anchor points, the computational complexity of the classifier
at both training and test time increases significantly, defeating the original purpose of using LCC.
So far several approaches have been proposed for problems closely related to anchor point learning
such as dictionary learning or codebook learning. For instance, Lee et al. [12] proposed learning the anchor points for sparse coding using the Lagrange dual. Mairal et al. [13] proposed an online dictionary learning algorithm using stochastic approximations. Wang et al. [16] proposed locality-constrained linear coding (LLC), which is a fast implementation of LCC, and an online incremental
codebook learning algorithm using coordinate descent method, whose performance is very close to
that using K-Means. However, none of these algorithms can deal with holes of sparse data as they
need many anchor points.
In this paper, we propose a method to approximate any non-linear (α, β, p)-Lipschitz smooth function using an orthogonal coordinate coding (OCC) scheme on a set of orthogonal basis vectors. Each basis vector v ∈ R^d defines a family of anchor planes, each of which can be considered as consisting of an infinite number of anchor points, and the nearest point on each anchor plane to a data point x ∈ R^d is used for coding, as illustrated in Figure 1. The data point x will be encoded based on the margin x^T v, where (·)^T denotes the matrix transpose operator, between x and an anchor plane
defined by v. The benefits of using anchor planes are:
• A few anchor planes can replace many anchor points while preserving similar locality of anchor points. This sparsity may lead to a better generalization since many anchor points will overfit the data easily. Therefore, it can deal with the hole problem in LCC.
• The learned orthogonal basis vectors can fit naturally into locally linear SVMs (such as [9,10,11,19,21]), which we describe below.
Theoretically we show that using OCC any (α, β, p)-Lipschitz smooth non-linear function can be
linearized with a fixed upper-bound approximation error. In practice by minimizing this upper
bound, the orthogonal basis vectors can be learned using singular value decomposition (SVD). In
our experiments, we integrate OCC into LL-SVM for classification.
Linear support vector machines have become popular for solving classification tasks due to their
fast and simple online application to large scale data sets. However, many problems are not linearly
separable. For these problems kernel-based SVMs are often used, but unlike their linear variant they
suffer from various drawbacks in terms of computational and memory efficiency. Their response
can be represented only as a function of the set of support vectors, which has been experimentally
shown to grow linearly with the size of the training set. A recent trend has grown to create a classifier
locally based on a set of linear SVMs [9,10,11,19,21]. For instance, in [20] SVMs are trained only
based on the N nearest neighbors of each data, and in [9] multiple kernel learning was applied
locally. In [10] Kecman and Brooks proved that the stability bounds for local SVMs are tighter than
the ones for traditional, global SVMs. Ladicky and Torr [11] proposed a novel locally linear SVM
classifier (LL-SVM) with smooth decision boundary and bounded curvature. They show how the
functions defining the classifier can be approximated using local codings and show how this model
can be optimized in an online fashion by performing stochastic gradient descent with the same
convergence guarantees as the standard gradient descent method for linear SVMs. Mathematically LL-SVM is formulated as follows:
$$\arg\min_{W,b}\ \frac{\lambda}{2}\|W\|^2 + \frac{1}{|S|}\sum_{k\in S}\xi_k \qquad (1)$$
$$\text{s.t. } \forall k\in S:\ \xi_k \ge 1 - y_k\left(\gamma_{x_k}^T W x_k + \gamma_{x_k}^T b\right),\ \xi_k \ge 0$$

where ∀k, x_k ∈ R^d is a training vector, y_k ∈ {−1, 1} is its label, γ_{x_k} ∈ R^N is its local coding, λ ≥ 0 is a pre-defined scalar, and W ∈ R^{N×d} and b ∈ R^N are the model parameters. As
demonstrated in our experiments, the choices of the local coding methods are very important for
LL-SVM, and an improper choice will hurt its performance.
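Once W and b are learned, evaluating the locally linear decision function amounts to a pair of inner products: the local coding γ_x mixes the per-coordinate linear models. A minimal sketch of this evaluation (our own illustration; the function name and the toy values are ours, and the coding γ_x is assumed to be given):

```python
import numpy as np

def llsvm_decision(x, gamma_x, W, b):
    """Locally linear SVM score: f(x) = gamma_x^T (W x) + gamma_x^T b."""
    return float(gamma_x @ (W @ x) + gamma_x @ b)

# Toy example: two local models over a two-dimensional input.
W = np.array([[1.0, 0.0],
              [0.0, 2.0]])
b = np.zeros(2)
x = np.array([1.0, 1.0])
gamma_x = np.array([0.5, 0.5])   # local coding weights for x
print(llsvm_decision(x, gamma_x, W, b))  # 0.5*1 + 0.5*2 = 1.5
```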
The rest of the paper is organized as follows. In Section 2 we first recall some definitions and lemmas
in LCC, then introduce OCC for non-linear function approximation and its property on the upper
bound of localization error as well as comparing OCC with LCC in terms of geometric interpretation
and optimization. In Section 3, we explain how to fit OCC into LL-SVM to model the coordinates
for classification. We show our experimental results and comparison in Section 4, and conclude the
paper in Section 5.
2 Anchor Plane Learning
In this section, we introduce our Orthogonal Coordinate Coding (OCC) based on some orthogonal
basis vectors. For clarification, we summarize some notations in Table 1 which are used in LCC and
OCC.
Table 1: Some notations used in LCC and OCC.
Notation: Definition
v ∈ R^d: A d-dimensional anchor point in LCC; a d-dimensional basis vector which defines a family of anchor planes in OCC.
C ⊂ R^d: A subset of d-dimensional space containing all the anchor points (∀v, v ∈ C) in LCC; a subset of d-dimensional space containing all the basis vectors in OCC.
C ∈ R^{d×|C|}: The anchor point (or basis vector) matrix with v ∈ C as columns.
γ_v(x) ∈ R: The local coding of a data point x ∈ R^d using the anchor point (or basis vector) v.
γ(x) ∈ R^d: The physical approximation vector of a data point x.
γ_x ∈ R^{|C|}: The coding vector of data point x containing all γ_v(x) in order: γ_x = [γ_v(x)]_{v∈C}.
γ: A map of x ∈ R^d to γ_x.
(γ, C): A coordinate coding.

2.1 Preliminary
We first recall some definitions and lemmas in LCC based on which we develop our method. Notice
that in the following sections, ‖·‖ denotes the ℓ₂-norm without explicit explanation.
Definition 1 (Lipschitz Smoothness [18]). A function f(x) on R^d is (α, β, p)-Lipschitz smooth with respect to a norm ‖·‖ if |f(x′) − f(x)| ≤ α‖x − x′‖ and |f(x′) − f(x) − ∇f(x)^T(x′ − x)| ≤ β‖x − x′‖^{1+p}, where we assume α, β > 0 and p ∈ (0, 1].
Definition 2 (Coordinate Coding [18]). A coordinate coding is a pair (γ, C), where C ⊂ R^d is a set of anchor points, and γ is a map of x ∈ R^d to [γ_v(x)]_{v∈C} ∈ R^{|C|} such that Σ_v γ_v(x) = 1. It induces the following physical approximation of x in R^d: γ(x) = Σ_{v∈C} γ_v(x) v. Moreover, for all x ∈ R^d, we define the corresponding coding norm as ‖x‖_γ = (Σ_{v∈C} γ_v(x)²)^{1/2}.
Lemma 1 (Linearization [18]). Let (γ, C) be an arbitrary coordinate coding on R^d. Let f be an (α, β, p)-Lipschitz smooth function. We have for all x ∈ R^d:

$$\left|f(x) - \sum_{v\in C}\gamma_v(x)f(v)\right| \le \alpha\|x-\gamma(x)\| + \beta\sum_{v\in C}|\gamma_v(x)|\,\|v-\gamma(x)\|^{1+p}. \qquad (2)$$
As explained in [18], a good coding scheme for non-linear function approximation should make x close to its physical approximation γ(x) (i.e. a smaller data reconstruction error ‖x − γ(x)‖) and should be localized (i.e. a smaller localization error Σ_{v∈C} |γ_v(x)| ‖v − γ(x)‖^{1+p}). This is the basic idea of LCC.
Definition 3 (Localization Measure [18]). Given α, β, p, and a coding (γ, C), we define

$$Q_{\alpha,\beta,p}(\gamma,C) = \mathbb{E}_x\left[\alpha\|x-\gamma(x)\| + \beta\sum_{v\in C}|\gamma_v(x)|\,\|v-\gamma(x)\|^{1+p}\right]. \qquad (3)$$

The localization measure is equivalent to the expectation of the upper bound of the approximation error.
2.2 Orthogonal Coordinate Coding
In the following sections, we will follow the notations in Table 1, and define our orthogonal coordinate coding (OCC) as below.
Definition 4 (Orthogonal Coordinate Coding). An orthogonal coordinate coding is a pair (γ, C), where C ⊂ R^d contains |C| orthogonal basis vectors, that is, ∀u, v ∈ C, if u ≠ v, then u^T v = 0, and the coding γ is a map of x ∈ R^d to [γ_v(x)]_{v∈C} ∈ R^{|C|} such that γ_v(x) ∝ x^T v/‖v‖² and Σ_{v∈C} |γ_v(x)| = 1.
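Definition 4 can be implemented directly: project x onto each basis vector, then normalize the coefficients onto the ℓ₁ ball. A small sketch (our own illustration; the helper name `occ_codes` is ours):

```python
import numpy as np

def occ_codes(x, C):
    """OCC coding of Definition 4. C is d x M with mutually orthogonal columns."""
    raw = (C.T @ x) / np.sum(C * C, axis=0)   # x^T v / ||v||^2 for each column v
    return raw / np.abs(raw).sum()            # enforce sum_v |gamma_v(x)| = 1

# Orthogonal (not orthonormal) basis in R^3, M = 2.
C = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
x = np.array([1.0, 1.0, 0.0])
g = occ_codes(x, C)
assert abs(np.abs(g).sum() - 1.0) < 1e-12   # codes live on the l1 unit ball
```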
Figure 1: Comparison of the geometric views on (a) LCC and (b) OCC, where the white and red dots denote
the data and anchor points, respectively. In LCC, the anchor points are distributed among the data space and
several nearest neighbors around the data are selected for data reconstruction, while in OCC the anchor points
are located on the anchor plane defined by the normal vector (i.e. coordinate, basis vector) v and only the
closest point to each data point on the anchor plane is selected for coding. The figures are borrowed from the
slides of [17], and best viewed in color.
Compared to Definition 2, there are two changes in OCC: (1) instead of anchor points we use a set
of orthogonal basis vectors, which defines a set of anchor planes, and (2) the coding for each data
point is defined on the ℓ₁-norm unit ball, which removes the scaling factors in both x and v. Notice
that since given the data matrix, the maximum number of orthogonal basis vectors which can be
used to represent all the data precisely is equal to the rank of the data matrix, the maximum value of
|C| is equal to the rank of the data matrix as well.
Figure 1 illustrates the geometric views on LCC and OCC respectively. Intuitively, in both methods
anchor points try to encode data locally. However, the ways of their arrangement are quite different.
In LCC anchor points are distributed among the whole data space such that each data can be covered
by certain anchor points in a local region, and their distribution cannot be described using regular
shapes. On the contrary, the anchor points in OCC are located on the anchor plane defined by
a basis vector. In fact, each anchor plane can be considered as infinite number of anchor points,
and for each data point only its closest point on each anchor plane is utilized for reconstruction.
Therefore, intuitively the number of anchor planes in OCC should be much fewer than the number
of anchor points in LCC.
Theorem 1 (Localization Error of OCC). Let (γ, C) be an orthogonal coordinate coding on R^d, where C ⊂ R^d with size |C| = M. Let f be an (α, β, p)-Lipschitz smooth function. Without loss of generality, assume ∀x ∈ R^d, ‖x‖ ≤ 1 and ∀v ∈ C, 1 ≤ ‖v‖ ≤ h (h ≥ 1). Then the localization error in Lemma 1 is bounded by:

$$\sum_{v\in C}|\gamma_v(x)|\,\|v-\gamma(x)\|^{1+p} \le \left[(1+M)h\right]^{1+p}. \qquad (4)$$
Proof. Let γ_v(x) = x^T v/(s_x‖v‖²), where s_x = Σ_{v∈C} |x^T v|/‖v‖², so that the physical approximation γ(x) = Σ_{u∈C} s_x γ_u(x) u is the orthogonal projection of x onto the span of C. Then

$$\sum_{v\in C}|\gamma_v(x)|\,\|v-\gamma(x)\|^{1+p} = \sum_{v\in C}|\gamma_v(x)|\left[\|v\|^2 - 2s_x\gamma_v(x)\|v\|^2 + s_x^2\sum_{u\in C}\gamma_u(x)^2\|u\|^2\right]^{\frac{1+p}{2}}$$
$$\le \sum_{v\in C}|\gamma_v(x)|\left[\|v\|^2 + 2s_x\|v\|^2|\gamma_v(x)| + s_x^2\sum_{u\in C}\gamma_u(x)^2\max_{u\in C}\|u\|^2\right]^{\frac{1+p}{2}}. \qquad (5)$$

Since ∀x ∈ R^d, ‖x‖ ≤ 1 and ∀v ∈ C, 1 ≤ ‖v‖ ≤ h (h ≥ 1), and Σ_{v∈C}|γ_v(x)| = 1, we have ∀v ∈ C, |γ_v(x)| ≤ 1, Σ_{v∈C} γ_v(x)² ≤ 1, and s_x = Σ_{v∈C} |x^T v|/‖v‖² ≤ Σ_{v∈C} ‖x‖‖v‖/‖v‖² ≤ M. Therefore

$$\sum_{v\in C}|\gamma_v(x)|\,\|v-\gamma(x)\|^{1+p} \le \sum_{v\in C}|\gamma_v(x)|\left[h^2 + 2Mh^2|\gamma_v(x)| + M^2h^2\right]^{\frac{1+p}{2}}$$
$$= h^{1+p}\sum_{v\in C}|\gamma_v(x)|\left[1 + 2M|\gamma_v(x)| + M^2\right]^{\frac{1+p}{2}} \le h^{1+p}\max_{v\in C}\left[1 + 2M|\gamma_v(x)| + M^2\right]^{\frac{1+p}{2}}$$
$$\le h^{1+p}\left[1 + 2M + M^2\right]^{\frac{1+p}{2}} = \left[(1+M)h\right]^{1+p}. \qquad (6)$$
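The bound (4) can be sanity-checked numerically: draw an orthogonal basis with lengths in [1, h], take a unit-norm x, and compare the localization error against [(1+M)h]^{1+p}. This is our own check (not the paper's code), with the physical approximation taken as the orthogonal projection of x onto the span of the basis:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, h, p = 8, 3, 2.0, 1.0

# Orthogonal basis vectors with norms in [1, h].
Q, _ = np.linalg.qr(rng.standard_normal((d, M)))
lengths = rng.uniform(1.0, h, size=M)
C = Q * lengths                       # column i has norm lengths[i]

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                # ||x|| = 1

raw = (C.T @ x) / lengths**2          # x^T v / ||v||^2 per basis vector
s_x = np.abs(raw).sum()
gamma = raw / s_x                     # OCC coding, sum_v |gamma_v| = 1
phys = C @ raw                        # orthogonal projection of x onto span(C)

loc_err = sum(abs(gamma[i]) * np.linalg.norm(C[:, i] - phys) ** (1 + p)
              for i in range(M))
assert loc_err <= ((1 + M) * h) ** (1 + p)   # [(1+M)h]^{1+p} = 64 here
```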
2.3 Learning Orthogonal Basis Vectors
Instead of optimizing Definition 3, LCC simplifies the localization error term by assuming γ(x) = x and p = 1. Mathematically LCC solves the following optimization problem:

$$\min_{(\gamma,C)} \sum_{x\in X}\left\{\frac{1}{2}\|x-\gamma(x)\|^2 + \mu\sum_{v\in C}|\gamma_v(x)|\,\|v-x\|^2\right\} + \lambda\sum_{v\in C}\|v\|^2 \qquad (7)$$
$$\text{s.t. } \forall x,\ \sum_{v\in C}\gamma_v(x) = 1.$$

They update C and γ via alternating optimization. The step of updating γ can be transformed into a canonical LASSO problem, and the step of updating C is a least squares problem.
For OCC, given an (α, β, p)-Lipschitz smooth function f and a set of data X ⊂ R^d, whose corresponding data matrix and its rank are denoted as X and D, respectively, we would like to learn an orthogonal coordinate coding (γ, C) where the number of basis vectors |C| = M ≤ D such that the localization measure of this coding is minimized. Since Theorem 1 proves that the localization error per data point is bounded by a constant given an OCC, in practice we only need to minimize the data reconstruction error in order to minimize the upper bound of the localization measure. That is, we need to solve the following problem:
$$\min_{(\gamma,C)} \sum_{x\in X}\|x - C\gamma_x\|^2 \qquad (8)$$
$$\text{s.t. } \forall u,v\in C,\ u\neq v \Rightarrow u^T v = 0; \quad |C| = M; \quad \forall x,\ \|\gamma_x\|_1 = 1.$$

This optimization problem is quite similar to sparse coding [12], except that there exists the orthogonality constraint on the basis vectors. In practice we relax this problem by removing the constraint ∀x, ‖γ_x‖₁ = 1.

(I) Solving for C. Eqn. 8 can be solved first using singular value decomposition (SVD). Let the SVD of X be X = VΛU, where the singular values on the diagonal of Λ are positive and in descending order. Then we set C = V_{d×M} Λ_{M×M}, where V_{d×M} denotes the sub-matrix of V containing the elements within rows from 1 to d and columns from 1 to M, and similarly for Λ_{M×M}. We need only to use a few top eigenvectors as our orthogonal basis vectors for coding, and the search space is far smaller than generating anchor points.
(II) Solving for γ_x. Since we have the orthogonal basis vectors in C, we can easily derive the formulation for calculating γ̂_x, the value of γ_x before normalization, that is, γ̂_x = (C^T C)^{−1} C^T x. Letting {v̂} and {σ_v} be the corresponding singular vectors and singular values, the orthogonality of the basis vectors gives γ̂_v(x) = v̂^T x/σ_v, which is a variant of the coding definition in Definition 4. Finally, we can calculate γ_x by normalizing γ̂_x as follows: γ_x = γ̂_x/‖γ̂_x‖₁.
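Steps (I) and (II) can be sketched with a standard SVD routine. This is our own illustration (function names are ours; rows of X are data points here, which transposes the paper's column convention, and orthogonality of the learned columns makes the inverse in step (II) diagonal):

```python
import numpy as np

def learn_occ_basis(X, M):
    """Step (I): top-M right singular directions of X, scaled by singular values."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # S is in descending order
    return Vt[:M].T * S[:M]                           # d x M; columns are orthogonal

def occ_encode(X, C):
    """Step (II): unnormalized codes (C^T C)^{-1} C^T x, then l1-normalized."""
    raw = (X @ C) / np.sum(C * C, axis=0)   # diagonal inverse, since columns are orthogonal
    return raw / np.abs(raw).sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
C = learn_occ_basis(X, M=4)
G = occ_encode(X, C)
assert np.allclose(C.T @ C, np.diag(np.diag(C.T @ C)))   # columns are orthogonal
assert np.allclose(np.abs(G).sum(axis=1), 1.0)           # codes on the l1 unit ball
```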
3 Modeling Classification Decision Boundary in SVM
Given a set of data {(x_i, y_i)} where y_i ∈ {−1, 1} is the label of x_i, the decision boundary for binary classification of a linear SVM is f(x) = w^T x + b, where w is the normal vector of the decision hyperplane (i.e. the coefficients) of the SVM and b is a bias term. Here, we assume that the decision boundary is an (α, β, p)-Lipschitz smooth function. Since in LCC each data point is encoded by some anchor points on the data manifold, LCC can model the decision boundary of an SVM directly using f(x) ≈ Σ_{v∈C} γ_v(x) f(v). Then, by taking γ_x as the input data of a linear SVM, the f(v)'s can be learned to approximate the decision boundary f.

However, OCC learns a set of orthogonal basis vectors, rather than anchor points, and the corresponding coding for the data. This makes OCC suitable for modeling the normal vectors of decision hyperplanes in SVMs locally with LL-SVM. Given data x and an orthogonal coordinate coding (γ, C), the decision boundary in LL-SVM can be formulated as follows¹:

$$f(x) = w(x)^T x + b = \sum_{v\in C}\gamma_v(x)\,w(v)^T x + b = \gamma_x^T W x + b \qquad (9)$$

where W ∈ R^{M×d} is a matrix which needs to be learned for the SVMs. In the view of kernel SVMs, we actually define another kernel K based on x and γ_x as shown below:

$$\forall i,j,\quad K(x_i, x_j) = \left\langle \gamma_{x_i} x_i^T,\ \gamma_{x_j} x_j^T\right\rangle \qquad (10)$$

where ⟨·,·⟩ denotes the Frobenius inner product. Notice that our kernel involves the latent semantic kernel [6], which is defined based on a set of orthogonal basis vectors.
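Because each argument of the Frobenius inner product in Eqn. (10) is a rank-one matrix, the kernel factorizes as K(x_i, x_j) = (γ_{x_i}^T γ_{x_j})(x_i^T x_j), so it never needs to be materialized as a matrix. A quick check of this factorization (our own sketch, with random vectors standing in for the data and codings):

```python
import numpy as np

rng = np.random.default_rng(2)
d, M = 5, 3
x_i, x_j = rng.standard_normal(d), rng.standard_normal(d)
g_i, g_j = rng.standard_normal(M), rng.standard_normal(M)

# Frobenius inner product of the rank-one matrices gamma * x^T ...
K_frob = np.sum(np.outer(g_i, x_i) * np.outer(g_j, x_j))
# ... equals a product of two ordinary inner products.
K_fact = (g_i @ g_j) * (x_i @ x_j)
assert np.isclose(K_frob, K_fact)
```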
4 Experiments
In our experiments, we test OCC with LL-SVM for classification on the benchmark datasets:
MNIST, USPS and LETTER. The features we used are the raw features such that we can compare
our results fairly with others.
MNIST contains 40000 training and 10000 test gray-scale images with resolution 28×28, which are
normalized directly into 784 dimensional vectors. The label of each image is one of the 10 digits
from 0 to 9. USPS contains 7291 training and 2007 test gray-scale images with resolution 16 x 16,
directly stored as 256 dimensional vectors, and the label of each image still corresponds to one of
the 10 digits from 0 to 9. LETTER contains 16000 training and 4000 testing images, each of which
is represented as a relatively short 16 dimensional vector, and the label of each image corresponds
to one of the 26 letters from A to Z.
We re-implemented LL-SVM based on the C++ code of LIBLINEAR [8]² and PEGASOS [14]³,
respectively, and performed multi-class classification using the one-vs-all strategy. This aims to
test the effect of either quadratic programming or stochastic gradient based SVM solver on both
accuracy and computational time. We denote these two ways of LL-SVM as LIB-LLSVM and PEG-LLSVM for short. We tried to learn our basis vectors in two ways: (1) SVD is applied directly to the
entire training data matrix, or (2) SVD is applied separately to the data matrix consisting of all the
positive training data. We denote these two types of OCC as G-OCC (i.e. Generic OCC) and C-OCC
(i.e. Class-specific OCC), respectively. Then the coding for each data is calculated as explained in
Section 2.3. Next, all the training raw features and their coding vectors are taken as the input to train
the model (W, b) of LL-SVM. For each test data x, we calculate its coding in the same way and
classify it based on its decision values, that is, y(x) = arg max_y (γ_x^T W_y x + b_y).
Figure 2 shows the comparison of classification error rates among G-OCC + LIB-LLSVM, G-OCC +
PEG-LLSVM, C-OCC + LIB-LLSVM, and C-OCC + PEG-LLSVM on MNIST (left), USPS (middle),
and LETTER (right), respectively, using different numbers of orthogonal basis vectors. With the
same OCC, LIB-LLSVM performs slightly better than PEG-LLSVM in terms of accuracy, and both
¹ Notice that Eqn. 9 is slightly different from the original formulation in [11] by ignoring the different bias term for each orthogonal basis vector.
² Using LIBLINEAR, we implemented LL-SVM based on Eqn. 9.
³ Using PEGASOS, we implemented LL-SVM based on the original formulation in [11].
Figure 2: Performance comparison among the 4 different combinations of OCC + LL-SVM on MNIST (left),
USPS (middle), and LETTER (right) using different numbers of orthogonal basis vectors. This figure is best
viewed in color.
behaves similarly with the increase of the number of orthogonal basis vectors. It seems that in
general C-OCC is better than G-OCC.
Table 2 summarizes our comparison results between our methods and some other SVM based approaches. The parameters of the RBF kernel used in the kernel SVMs are the same as [2]. Since
there are no results of LCC on USPS and LETTER or its code, we tested the published code of LLC
[16] on these two datasets so that we can have a rough idea of how well LCC works. The anchor
points are found using K-Means. From Table 2, we can see that applying linear SVM directly on
OCC works slightly better than on the raw features, and when OCC is working with LL-SVM, the
performance is boosted significantly while the numbers of anchor points that are needed in LL-SVM
are reduced. On MNIST we can see that our non-linear function approximation is better than LCC,
improved LCC, LLC, and LL-SVM, on USPS ours is better than both LLC and LL-SVM, but on
LETTER ours is worse than LLC (4096 anchor points) and LL-SVM (100 anchor points). The
reason for this is that, strictly speaking, LETTER is not a high-dimensional dataset (only 16 dimensions per data point), which limits the power of OCC. Compared with kernel-based SVMs, our method
can achieve comparable or even better results (e.g. on USPS). All of these results demonstrate that
OCC is quite suitable to model the non-linear normal vectors using linear SVMs for classification on
high dimensional data. In summary, our encoding scheme uses far fewer basis vectors
than LCC uses anchor points, while achieving better test accuracy, which translates to higher
performance both in terms of generalization and efficiency in computation.
We show our training and test time on these three datasets as well in Table 3 based on unoptimized
MATLAB code on a single thread of a 2.67 GHz CPU. For training, the time includes calculating
OCC and training LL-SVM. From this table, we can see that our methods are a little slower than the
original LL-SVM, but still much faster than kernel SVMs. The main reason for this is that OCC is
non-sparse while in [11] the coefficients are sparse. However, for calculating coefficients, OCC is
faster than [11], because there is no distance calculation or K nearest neighbor search involved in
OCC, just simple multiplication and normalization.
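The claim that coding is "just simple multiplication and normalization" can be made concrete with a small sketch. The exact coding formula is given in Section 2.3 (not reproduced in this excerpt), so the normalization below is an illustrative assumption; the SVD step matches the basis-learning procedure described above:

```python
import numpy as np

def learn_basis(X, m):
    """Learn m mutually orthogonal basis vectors from the n x d data matrix X via SVD
    (G-OCC uses all training data; C-OCC applies this to the positive class only)."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:m]                      # rows are orthonormal

def occ_code(x, U):
    """Code x against the orthogonal basis U: projections plus a normalization.
    No distance computation and no K-nearest-neighbor search, unlike LCC/LLC."""
    proj = U @ x                       # m inner products
    s = np.abs(proj).sum()
    return proj / s if s > 0 else proj
```

The contrast with LCC/LLC is that those schemes must first find the nearest anchor points of x, whereas here every data point touches every basis vector through a fixed matrix-vector product.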
5
Conclusion
In this paper, we propose orthogonal coordinate coding (OCC) to encode high dimensional data
based on a set of anchor planes defined by a set of orthogonal basis vectors. Theoretically we prove
that our OCC can guarantee a fixed upper bound of approximation error for any (α, β, p)-Lipschitz
smooth function, and we can easily learn the orthogonal basis vectors using SVD to minimize
the localization measure. Meanwhile, OCC can help locally linear SVM (LL-SVM) approximate
the kernel-based SVMs, and our experiments demonstrate that with a few orthogonal anchor
planes, LL-SVM can achieve comparable or better results than LCC and its variants (improved
LCC and LLC) with linear SVMs, and on USPS even better than kernel-based SVMs. In the future, we
would like to learn the orthogonal basis vectors using semi-definite programming to guarantee
orthogonality.
Acknowledgements. We thank J. Valentin, P. Sturgess and S. Sengupta for useful discussion
in this paper. This work was supported by the IST Programme of the European Community, under
Table 2: Classification error rate comparison (%) between our methods and others on MNIST, USPS, and
LETTER. The numbers of anchor planes in the brackets are the ones which return the best result on each
dataset. All kernel methods [13, 14, 15, 16, 17] use the RBF kernel. In general, LIB-LLSVM + C-OCC
performs best. ('-' = not reported.)

Methods                                             | MNIST      | USPS      | LETTER
Linear SVM + G-OCC (# basis vectors)                | 9.25 (100) | 7.82 (95) | 30.52 (15)
Linear SVM + C-OCC (# anchor planes)                | 7.42 (100) | 5.98 (95) | 14.95 (16)
LIB-LLSVM + G-OCC (# basis vectors)                 | 1.72 (50)  | 4.14 (20) | 6.85 (15)
PEG-LLSVM + G-OCC (# basis vectors)                 | 1.81 (40)  | 4.38 (50) | 9.83 (14)
LIB-LLSVM + C-OCC (# basis vectors)                 | 1.61 (90)  | 3.94 (80) | 7.35 (16)
PEG-LLSVM + C-OCC (# basis vectors)                 | 1.74 (90)  | 4.09 (80) | 8.30 (16)
Linear SVM (10 passes) [1]                          | 12.00      | 9.57      | 41.77
Linear SVM + LCC (512 anchor points) [18]           | 2.64       | -         | -
Linear SVM + LCC (4096 anchor points) [18]          | 1.90       | -         | -
Linear SVM + improved LCC (512 anchor points) [19]  | 1.95       | -         | -
Linear SVM + improved LCC (4096 anchor points) [19] | 1.64       | -         | -
Linear SVM + LLC (512 anchor points) [16]           | 3.69       | 5.78      | 9.02
Linear SVM + LLC (4096 anchor points) [16]          | 2.28       | 4.38      | 4.12
LibSVM [4]                                          | 1.36       | -         | -
LA-SVM (1 pass) [3]                                 | 1.42       | -         | -
LA-SVM (2 passes) [3]                               | 1.36       | -         | -
MCSVM [5]                                           | 1.44       | 4.24      | 2.42
SVMstruct [15]                                      | 1.40       | 4.38      | 2.40
LA-RANK (1 pass) [2]                                | 1.41       | 4.25      | 2.80
LL-SVM (100 anchor points, 10 passes) [11]          | 1.85       | 5.78      | 5.32
Table 3: Computational time comparison between our methods and others on MNIST, USPS, and LETTER.
The numbers in rows 7-14 are copied from [11]. The training times of our methods include the calculation of
OCC and training LL-SVM. All the numbers correspond to the methods shown in Table 2 with the same
parameters. Notice that for PEG-LLSVM, 10^6 random data points are used for training. ('-' = not reported.)

Methods                      | Training Time (s): MNIST, USPS, LETTER | Test Time (ms): MNIST, USPS, LETTER
LIB-LLSVM + G-OCC            | 113.38, 5.78, 4.14                     | 5.51x10^3, 19.23, 4.09
PEG-LLSVM + G-OCC            | 125.03, 14.50, 2.02                    | 302.28, 23.25, 3.33
LIB-LLSVM + C-OCC            | 224.09, 25.61, 1.66                    | 9.57x10^3, 547.60, 63.13
PEG-LLSVM + C-OCC            | 273.70, 23.31, 0.85                    | 503.18, 50.63, 28.94
Linear SVM (10 passes) [1]   | 1.5, 0.26, 0.18                        | 8.75x10^-3, -, -
LibSVM [4]                   | 1.75x10^4, -, -                        | 46, -, -
LA-SVM (1 pass) [3]          | 4.9x10^3, -, -                         | 40.6, -, -
LA-SVM (2 passes) [3]        | 1.22x10^4, -, -                        | 42.8, -, -
MCSVM [5]                    | 2.5x10^4, 60, 1.2x10^3                 | -, -, -
SVMstruct [15]               | 2.65x10^5, 6.3x10^3, 2.4x10^4          | -, -, -
LA-RANK (1 pass) [2]         | 3x10^4, 85, 940                        | -, -, -
LL-SVM (100, 10 passes) [11] | 81.7, 6.2, 4.2                         | 0.47, -, -
the PASCAL2 Network of Excellence, IST-2007-216886. P. H. S. Torr is in receipt of a Royal Society
Wolfson Research Merit Award.
References
[1] Bordes, A., Bottou, L. & Gallinari, P. (2009) Sgd-qn: Careful quasi-newton stochastic gradient
descent. Journal of Machine Learning Research (JMLR).
[2] Bordes, A., Bottou, L., Gallinari, P., & Weston, J. (2007) Solving multiclass support vector
machines with LaRank. In Proceedings of International Conference on Machine Learning (ICML).
[3] Bordes, A., Ertekin, S., Weston, J., & Bottou, L. (2005) Fast kernel classifiers with online and
active learning. Journal of Machine Learning Research (JMLR).
[4] Chang, C. & Lin, C. (2011) LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology, vol. 2, issue 3, pp. 27:1-27:27.
[5] Crammer, K. & Singer, Y. (2002) On the algorithmic implementation of multiclass kernel-based
vector machines. Journal of Machine Learning Research (JMLR).
[6] Cristianini, N., Shawe-Taylor, J. & Lodhi, H. (2002) Latent Semantic Kernels. Journal of Intelligent Information Systems, Vol. 18, No. 2-3, 127-152.
[7] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J. & Zisserman, A. The PASCAL
Visual Object Classes Challenge 2009 (VOC2009). http://www.pascal-network.org/
challenges/VOC/voc2009/workshop/index.html
[8] Fan, R., Chang, K., Hsieh, C., Wang, X. & Lin, C. (2008) LIBLINEAR: A Library for Large
Linear Classification. Journal of Machine Learning Research (JMLR), vol. 9, pp. 1871-1874.
[9] Gönen, M. & Alpaydin, E. (2008) Localized Multiple Kernel Learning. In Proceedings of International Conference on Machine Learning (ICML).
[10] Kecman, V. & Brooks, J.P. (2010) Locally Linear Support Vector Machines and Other Local
Models. In Proceedings of IEEE World Congress on Computational Intelligence (WCCI), pp. 2615-2620.
[11] Ladicky, L. & Torr, P.H.S. (2011) Locally Linear Support Vector Machines. In Proceedings of
International Conference on Machine Learning (ICML).
[12] Lee, H., Battle, A., Raina, R., & Ng, A.Y. (2007) Efficient Sparse Coding Algorithms. In
Advances in Neural Information Processing Systems (NIPS).
[13] Mairal, J., Bach, F., Ponce, J. & Sapiro, G. (2009) Online Dictionary Learning for Sparse
Coding. In Proceedings of International Conference on Machine Learning (ICML).
[14] Shalev-Shwartz, S., Singer, Y., & Srebro, N. (2007) Pegasos: Primal Estimated sub-GrAdient
SOlver for SVM. In Proceedings of International Conference on Machine Learning (ICML).
[15] Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005) Large margin methods for
structured and interdependent output variables. Journal of Machine Learning Research (JMLR).
[16] Wang, J., Yang, J., Yu, K., Lv, F., Huang, T., & Gong, Y. (2010) Locality-constrained Linear
Coding for Image Classification. In Proceedings of IEEE Conference on Computer Vision and
Pattern Recognition (CVPR).
[17] Yu, K. & Ng, A. (2010) ECCV-2010 Tutorial: Feature Learning for Image Classification.
http://ufldl.stanford.edu/eccv10-tutorial/.
[18] Yu, K., Zhang, T., & Gong, Y. (2009) Nonlinear Learning using Local Coordinate Coding. In
Advances in Neural Information Processing Systems (NIPS).
[19] Yu, K. & Zhang, T. (2010) Improved Local Coordinate Coding using Local Tangents. In Proceedings of International Conference on Machine Learning (ICML).
[20] Zhang, H., Berg, A., Maire, M. & Malik, J. (2006) SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. In Proceedings of IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), pp. 2126-2136.
Energetically Optimal Action Potentials
Martin Stemmler
BCCN and LMU Munich
Grosshadernerstr. 2,
Planegg, 82125 Germany
Biswa Sengupta, Simon Laughlin, Jeremy Niven
Department of Zoology,
University of Cambridge,
Downing Street, Cambridge CB2 3EJ, UK
Abstract
Most action potentials in the nervous system take on the form of strong, rapid, and
brief voltage deflections known as spikes, in stark contrast to other action potentials, such as in the heart, that are characterized by broad voltage plateaus. We
derive the shape of the neuronal action potential from first principles, by postulating that action potential generation is strongly constrained by the brain's need to
minimize energy expenditure. For a given height of an action potential, the least
energy is consumed when the underlying currents obey the bang-bang principle:
the currents giving rise to the spike should be intense, yet short-lived, yielding
spikes with sharp onsets and offsets. Energy optimality predicts features in the
biophysics that are not per se required for producing the characteristic neuronal
action potential: sodium currents should be extraordinarily powerful and inactivate with voltage; both potassium and sodium currents should have kinetics that
have a bell-shaped voltage-dependence; and the cooperative action of multiple
"gates" should start the flow of current.
1
The paradox
Nerve cells communicate with each other over long distances using spike-like action potentials,
which are brief electrical events traveling rapidly down axons and dendrites. Each action potential is
caused by an accelerating influx of sodium or calcium ions, depolarizing the cell membrane by forty
millivolts or more, followed by repolarization of the cell membrane caused by an efflux of potassium
ions. As different species of ions are swapped across the membrane during the action potential, ion
pumps shuttle the excess ions back and restore the ionic concentration gradients.
If we label each ionic species by α, the work ΔE done to restore the ionic concentration gradients
is

    ΔE = RT V Σ_α Δ[α]_in ln([α]_out / [α]_in),                    (1)

where R is the gas constant, T is the temperature in Kelvin, V is the cell volume, [α]_in|out is the
concentration of ion α inside or outside the cell, and Δ[α]_in is the concentration change inside the
cell, which is assumed to be small relative to the total concentration. The sum Σ_α z_α Δ[α] = 0,
where z_α is the charge on ion α, as no net charge accumulates during the action potential and no
net work is done by or on the electric field. Often, sodium (Na+) and potassium (K+) play the
dominant role in generating action potentials, in which case ΔE = Δ[Na]_in F V (E_Na - E_K), where
F is Faraday's constant, E_Na = (RT/F) ln([Na]_out/[Na]_in) is the reversal potential for Na+, at which
no net sodium current flows, and E_K = (RT/F) ln([K]_out/[K]_in). This estimate of the work done
does not include heat (due to loss through the membrane resistance) or the work done by the ion
channel proteins in changing their conformational state during the action potential.
Hence, the action potential's energetic cost to the cell is directly proportional to Δ[Na]_in; taking
into account that each Na+ ion carries one elementary charge, the cost is also proportional to the
charge Q_Na that accumulates inside the cell. A maximally efficient cell reduces the charge per spike
to a minimum. If a cell fires action potentials at an average rate f, the cell's Na/K pumps must
move Na+ and K+ ions in opposite directions, against their respective concentration gradients, to
counteract an average inward Na+ current of f Q_Na. Exhaustive measurements on myocytes in the
heart, which expend tremendous amounts of energy to keep the heart beating, indicate that Na/K
pumps expel ≈ 0.5 µA/cm2 of Na+ current at membrane potentials close to rest [1]. Most excitable
cells, even when spiking, spend most of their time close to resting potential, and yet standard models
for action potentials can easily lead to accumulating an ionic charge of up to 5 µC/cm2 [2]; most of
this accumulation occurs during a very brief time interval. If one were to take an isopotential nerve
cell with the same density of ion pumps as in the heart, then such a cell would not be able to produce
more than an action potential once every ten seconds on average. The brain should be effectively
silent.
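The arithmetic behind "once every ten seconds" follows directly from the two figures quoted above:

```python
# Sustainable firing rate if pump capacity matched heart muscle (values from the text).
pump_current = 0.5e-6     # A/cm^2: Na+ extruded by Na/K pumps near rest [1]
charge_per_spike = 5e-6   # C/cm^2: Na+ charge moved by a standard-model spike [2]

max_rate = pump_current / charge_per_spike   # spikes/s the pumps could counteract
interval = 1.0 / max_rate                    # seconds between spikes

print(max_rate, interval)   # 0.1 Hz, i.e. one action potential every 10 s
```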
Clearly, this conflicts with what is known about the average firing rates of neurons in the brainstem
or even the neocortex, which can sustain spiking up to at least 7 Hz [3]. Part of the discrepancy can
be resolved by noting that nerve cells are not isopotential and that action potential generation occurs
within a highly restricted area of the membrane. Even so, standard models of action potential generation waste extraordinary amounts of energy; recent evidence [4] points out that many mammalian
cortical neurons are much more efficient.
As nature places a premium on energy consumption, we will argue that one can predict both the
shape of the action potential and the underlying biophysics of the nonlinear, voltage-dependent
ionic conductances from the principle of minimal energy consumption. After reviewing the ionic
basis of action potentials, we first sketch how to compute the minimal energy cost for an arbitrary
spike shape, and then solve for the optimal action potential shape with a given height. Finally, we
show how minimal energy consumption explains all the dynamical features in the standard Hodgkin-Huxley (HH) model for neuronal dynamics that distinguish the brain's action potentials from other
highly nonlinear oscillations in physics and chemistry.
2
Ionic basis of the action potential
In an excitable cell, synaptic drive forces the membrane permeability to different ions to change
rapidly in time, producing the dynamics of the action potential. The current density I_α carried by
an ion species α is given by the Goldman-Hodgkin-Katz (GHK) current equation [5, 6, 2], which
assumes that ions are driven independently across the membrane under the influence of a constant
electric field. I_α depends upon the ion's membrane permeability, P_α, its concentrations on either
side of the membrane, [α]_out and [α]_in, and the voltage across the membrane, V, according to:

    I_α = P_α (z_α^2 V F^2 / RT) · ([α]_out - [α]_in exp(z_α V F/RT)) / (1 - exp(z_α V F/RT)),    (2)
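Equation (2) can be checked numerically. The sketch below uses SI units and illustrative mammalian Na+ concentrations (not values from this paper); it confirms two sanity properties: the Na+ current is inward (negative) near rest and vanishes at the reversal potential:

```python
import math

R, T, F = 8.314, 310.0, 96485.0     # J/(mol K), K, C/mol

def ghk_current(V, P, z, c_out, c_in):
    """GHK current density, Eq. (2):
    I = P z^2 V F^2/(RT) * (c_out - c_in exp(zVF/RT)) / (1 - exp(zVF/RT))."""
    u = z * V * F / (R * T)
    return (P * z**2 * V * F**2 / (R * T)
            * (c_out - c_in * math.exp(u)) / (1.0 - math.exp(u)))

# Illustrative Na+ concentrations (mol/m^3) and an arbitrary permeability.
Na_out, Na_in, P_Na = 145.0, 12.0, 1e-9
E_Na = R * T / F * math.log(Na_out / Na_in)             # reversal potential, ~ +66 mV

I_rest = ghk_current(-0.065, P_Na, +1, Na_out, Na_in)   # inward at rest: I_rest < 0
I_rev = ghk_current(E_Na, P_Na, +1, Na_out, Na_in)      # ~ 0 at V = E_Na
```

Note that V = 0 must be handled separately in a production implementation (the fraction is 0/0 there, with a finite limit).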
To produce the fast currents that generate APs, a subset of the membrane's ionic permeabilities P_α are
gated by voltage. Changes in the permeability P_α are not instantaneous; the voltage-gated permeability is scaled mathematically by gating variables m(t) and h(t) with their own time dependence.
After separating constant from time-dependent components in the permeability, the voltage-gated
permeability obeys

    P_α(t) = m(t)^r h(t)^s    such that    0 ≤ P_α(t) ≤ P̄_α,

where r and s are positive, and P̄_α is the peak permeability to ion α when all channels for ion α are
open. Gating is also referred to as activation, and the associated nonlinear permeabilities are called
active. There are also passive, voltage-insensitive permeabilities that maintain the resting potential
and depolarise the membrane to trigger action potentials.
The simplest possible kinetics for the gating variables are first order, involving only a single derivative in time. The steady state of each gating variable at a given voltage is determined by a Boltzmann
function, to which the gating variables evolve:

    τ_m dm/dt = (P̄_α)^(1/r) m_∞(V) - m(t)
    τ_h dh/dt = h_∞(V) - h(t),

with m_∞(V) = {1 + exp((V - V_m)/s_m)}^(-1) the Boltzmann function described by the slope s_m > 0
and the midpoint V_m; similarly, h_∞(V) = {1 + exp((V - V_h)/s_h)}^(-1), but with s_h < 0. Scaling
m_∞(V) by the rth root of the peak permeability P̄_α is a matter of mathematical convenience.

We will consider both voltage-independent and voltage-dependent time constants, either setting τ_j =
τ_{j,0} to be constant, where j ∈ {m(t), h(t)}, or imposing a bell-shaped voltage dependence τ_j(V) =
τ_{j,0} sech[s_j (V - V_j)].
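At a clamped voltage these kinetics are just exponential relaxation to the Boltzmann steady state; a forward-Euler sketch (arbitrary illustrative parameters, sign convention as printed above, and the peak-permeability scaling factor omitted) makes the convergence concrete:

```python
import math

def boltzmann(V, V_half, s):
    """Steady-state gating value, {1 + exp((V - V_half)/s)}^-1."""
    return 1.0 / (1.0 + math.exp((V - V_half) / s))

def relax_gate(V, V_half=-40.0, s=5.0, tau=1.0, dt=0.01, t_end=10.0, g0=0.0):
    """Integrate tau dg/dt = g_inf(V) - g with forward Euler (times in ms)."""
    g = g0
    for _ in range(int(t_end / dt)):
        g += (dt / tau) * (boltzmann(V, V_half, s) - g)
    return g

# After ~10 time constants, the gate has converged to its Boltzmann steady state.
```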
The synaptic, leak, and voltage-dependent currents drive the rate of change in the voltage across the
membrane:

    C dV/dt = I_syn + I_leak + Σ_α I_α,

where the synaptic permeability and leak permeability are held constant.
3
Resistive and capacitive components of the energy cost
By treating the action potential as the charging and discharging of the cell membrane capacitance,
the action potentials measured at the mossy fibre synapse in rats [4] or in mouse thalamocortical
neurons [7] were found to be highly energy-efficient: the nonlinear, active conductances inject only
slightly more current than is needed to charge a capacitor to the peak voltage of the action potential. The implicit assumption made here is that one can neglect the passive loss of current through
the membrane resistance, known as the leak. Any passive loss must be compensated by additional
charge, making this loss the primary target of the selection pressure that has shaped the dynamics
of action potentials. On the other hand, the membrane capacitance at the site of AP initiation is
generally modelled and experimentally confirmed [8] as being fairly constant around 1 ?F/cm2 ; in
contrast, the propagation, but not generation, of AP?s can be assisted by a reduction in the capacitance achieved by the myelin sheath that wraps some axons. As myelin would block the flow of
ions, we posit that the specific capacitance cannot yield to selection pressure to minimise the work
W = QNa (ENa ? EK ) needed for AP generation.
To address how the shape and dynamics of action potentials might have evolved to consume less
energy, we first fix the action potential's shape and solve for the minimum charge Q_Na ab initio,
without treating the cell membrane as a pure capacitor. Regardless of the action potential's particular time-course V(t), voltage-dependent ionic conductances must transfer Na+ and K+ charge to
elicit an action potential. Figure 1 shows a generic action potential and the associated ionic currents,
comparing the latter to the minimal currents required. The passive equivalent circuit for the neuron
consists of a resistor in parallel with a capacitor, driven by a synaptic current. To charge the membrane to the peak voltage, a neuron in a high-conductance state [9, 10] may well lose more charge
through the resistor than is stored on the capacitor. For neurons in a low-conductance state and
for rapid voltage deflections from the resting potential, membrane capacitance will be the primary
determinant of the charge.
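The size of this overlap cost is easy to reproduce. The sketch below integrates the classic Hodgkin-Huxley squid-axon model (a conductance-based stand-in for the GHK formulation used in this paper; the parameters are the standard textbook ones, not the optimized ones derived below) and compares the total Na+ influx against the capacitive minimum C·(V_peak - V_rest):

```python
import math

def hh_na_charge(i_stim=10.0, dt=0.01, t_end=30.0):
    """Euler-integrate the classic HH model (units: mV, ms, uA/cm^2, uF/cm^2) and
    return (Na+ charge influx, capacitive minimum C*(Vpeak - Vrest)) in nC/cm^2."""
    C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    v_rest, v_peak, q_na = V, V, 0.0
    for _ in range(int(t_end / dt)):
        am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
        bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
        ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
        bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
        i_na = g_na * m**3 * h * (V - e_na)   # negative = inward
        i_k = g_k * n**4 * (V - e_k)
        i_l = g_l * (V - e_l)
        q_na += -i_na * dt                    # uA*ms/cm^2 = nC/cm^2
        V += (dt / C) * (i_stim - i_na - i_k - i_l)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        v_peak = max(v_peak, V)
    return q_na, C * (v_peak - v_rest)

q_na, q_cap = hh_na_charge()
# In the standard model the Na+ influx exceeds the capacitive minimum several-fold,
# because inward and outward currents overlap in time (cf. Fig. 1b).
```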
4
The norm of spikes
How close can voltage-gated channels with realistic properties come to the minimal currents? What
time-course for the action potential leads to the smallest minimal currents?
To answer these questions, we must solve a constrained optimization problem on the solutions to the
nonlinear differential equations for the neuronal dynamics. To separate action potentials from mere
small-amplitude oscillations in the voltage, we need to introduce a metric. Smaller action potentials
consume less energy, provided the underlying currents are optimal, yet signalling between neurons
depends on the action potential's voltage deflection reaching a minimum amplitude. Given the
importance of the action potential's amplitude, we define an Lp norm on the voltage wave-form
V(t) to emphasize the maximal voltage deflection:

    ||V(t) - <V>||_p = { ∫_0^T ||V(t) - <V>||^p dt }^(1/p),
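On a sampled waveform this norm is a one-line computation; with a large exponent (p = 16 here) it tracks the peak deviation from the mean far more closely than the L2 norm does:

```python
def spike_norm(V, dt, p=16):
    """Discretized Lp norm of V(t) - <V> over the period covered by the samples V."""
    mean = sum(V) / len(V)
    return (sum(abs(v - mean) ** p for v in V) * dt) ** (1.0 / p)

# For a brief spike riding on a long baseline, spike_norm(V, dt, 16) lies close to
# the peak deviation, while spike_norm(V, dt, 2) averages the deflection away.
```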
[Figure 1 shows three panels: (a) a generic action potential, V [mV] versus t [ms], with the equivalent-circuit diagram (g_syn, g_Na, g_K, g_leak, and C in parallel); (b) active and minimal currents, current [µA/cm2] versus t [ms], comparing the active I_Na and I_K with the minimum I_Na and I_K; (c) the ratio Q_resistive/Q_capacitive of the minimum charge as a function of leak conductance [mS/cm2]. For a fixed action potential waveform V(t): Minimum I_Na(t) = -LV(t)·Θ(LV(t)) and Minimum I_K(t) = -LV(t)·Θ(-LV(t)), with LV(t) ≡ C dV/dt + I_leak[V(t)] + I_syn[V(t)].]
Figure 1: To generate an action potential with an arbitrary time-course V(t), the nonlinear, time-dependent permeabilities must deliver more charge than just to load the membrane capacitance: resistive losses must be compensated. (a) The action potential's time-course in a generic HH
model for a neuron, represented by the circuit diagram on the right. The peak of the action potential is ≈ 50 mV above the average potential. (b) The inward Na+ current, shown in green
going in the negative direction, rapidly depolarizes the potential V(t) and yields the upstroke of
the action potential. Concurrently, the K+ current activates, displayed as a positive deflection,
and leads to the downstroke in the potential V(t). Inward and outward currents overlap significantly in time. The dotted lines within the region bounded by the solid lines represent the minimal
Na+ current and the minimal K+ current needed to produce the V(t) spike waveform in (a). By
the law of current conservation, the sum of capacitive, resistive, and synaptic currents, denoted by
LV(t) ≡ C dV/dt + I_leak[V(t)] + I_syn[V(t)], must be balanced by the active currents. If the cell's
passive properties, namely its capacitance and (leak) resistance, and the synaptic conductance are
constant, we can deduce the minimal active currents needed to generate a specified V(t). The minimal currents, by definition, do not overlap in time. Taking into account passive current flow, restoring
the concentration gradients after the action potential requires 29 nJ/cm2. By contrast, if the active
currents were optimal, the cost would be 8.9 nJ/cm2. (c) To depolarize from the minimum to the
maximum of the AP, the synaptic and voltage-gated currents must deliver a charge Q_capacitive to charge
the membrane capacitance and a charge Q_resistive to compensate for the loss of current through leak
channels. For a large leak conductance in the cell membrane, Q_resistive can be larger than Q_capacitive.
where <V> is the average voltage. In the limit as p → ∞, the norm simply becomes the difference
between the action potential's peak voltage and the mean voltage, whereas a finite p ensures that
the norm is differentiable. In parameter space, we will focus our attention on the manifold of action
potentials with constant Lp norm with 2 ≤ p < ∞, which entails that the optimal action potential
will have a finite, though possibly narrow width. To be close to the supremum norm, yet still have a
norm that is well-behaved under differentiation, we decided to use p = 16.
5
Poincaré-Lindstedt perturbation of periodic dynamical orbits
Standard (secular) perturbation theory diverges for periodic orbits, so we apply the Poincaré-Lindstedt technique of expanding both in the period and the dynamics of the asymptotic orbit and
then derive a set of adjoint sensitivity equations for the differential-algebraic system. Solving once
for the adjoint functions, we can easily compute the parameter gradient of any functional on the
orbit, even for thousands of parameters.
We start with a set of ordinary differential equations ẋ = F(x; p) for the neuron's dynamics, an
asymptotically periodic orbit x*(t) that describes the action potential, and a functional G(x; p) on
the orbit, representing the energy consumption, for instance. The functional can be written as an
integral

    G(x*; p) = ∫_0^{ω(p)^-1} g(x*(t); p) dt,

over some source term g(x*(t); p). Assume that locally perturbing a parameter p → p + ε induces a
smooth change in the stable limit cycle, preserving its existence. Generally, a perturbation changes
not only the limit cycle's path in state space, but also the average speed with which this orbit is
traversed; as a consequence, the value of the functional depends on this change in speed, to lowest
order. For simplicity, consider a single, scalar parameter p. G(x*; p) is the solution to

    ω(p) ∂_τ [G(x*; p)] = g(x*; p),

where we have normalised time via τ = ω(p)t. Denoting partial derivatives by subscripts, we
expand p → p + ε to get the O(ε) equation

    d_τ [G_p(x*; p)] + ω_p g(x*; p) = g_x(x*; p) x_p + g_p(x*; p)

in a procedure known as the Poincaré-Lindstedt method. Hence,

    dG/dp = ∫_0^{ω^-1} (g_p + g_x x_p - ω_p g) dt,

where, once again by the Poincaré-Lindstedt method, x_p is the solution to

    ẋ_p = F_x(x*) x_p + F_p(x*) - ω_p F(x*).

Following the approach described by Cao, Li, Petzold, and Serban (2003), introduce a Lagrange
vector A^G(x) and consider the augmented objective function

    I(x*; p) = G(x*; p) - ∫_0^{ω^-1} A^G(x*) · (F(x*) - ẋ*) dt,

which is identical to G(x*; p) as F(x) - ẋ = 0. Then

    dI(x*; p)/dp = ∫_0^{ω^-1} (g_p + g_x x_p - ω_p g) dt - ∫_0^{ω^-1} A^G · (F_p + F_x x_p - ω_p F - ẋ_p) dt.

Integrating the A^G(x) · ẋ_p term by parts and using periodicity, we get

    dI(x*; p)/dp = ∫_0^{ω^-1} [ g_p - ω_p g - A^G · (F_p - ω_p F) ] dt - ∫_0^{ω^-1} [ -g_x + Ȧ^G + A^G F_x ] x_p dt.
Parameter                       | minimum   | maximum
peak permeability P̄_Na          | 0.24 fm/s | 0.15 µm/s
peak permeability P̄_K           | 6.6 fm/s  | 11 µm/s
midpoint voltage V_m or V_h     | -72 mV    | 70 mV
slope s_m or (-s_h)             | 3.33 mV   | 200 mV
time constant τ_m,0 or τ_h,0    | 5 µs      | 200 ms
gating exponent r or s          | 0.2       | 5.0

Table 1: Parameter limits.
We can let the second term vanish by making the vector A^G(x) obey
    \dot{A}^G(x) = -F_x^T(x; p) \, A^G(x) + g_x(x; p).
Label the homogeneous solution (obtained by setting g_x(x^*; p) = 0) as Z(x). It is known that
the term \nu_p is given by \nu_p = -\int_0^{\nu^{-1}} Z(x) \cdot F_p(x) \, dt, provided Z(x) is normalised to satisfy
Z(x) \cdot F(x) = 1. We can add any multiple of the homogeneous solution Z(x) to the inhomogeneous solution, so we can always make
    \int_0^{\nu^{-1}} A^G(x) \cdot F(x) \, dt = G
by taking
    A^G(x) \mapsto A^G(x) - Z(x) \left( \int_0^{\nu^{-1}} A^G(x) \cdot F(x) \, dt - \nu G \right).    (3)
This condition will make A^G(x) unique. Finally, with eq. (3) we get
    \frac{dI(x^*; p)}{dp} = \frac{dG(x^*; p)}{dp} = \int_0^{\nu^{-1}} \left( g_p - A^G \cdot F_p \right) dt.
The first term in the integral gives rise to the partial derivative \partial G(x^*; p) / \partial p. In many cases, this
term is either zero, can be made zero, or at least made independent of the dynamical variables.
The parameters for the neuron models are listed in Table 1 together with their minimum and maximum allowed values.
For each parameter in the neuron model, an auxiliary parameter on the entire real line is introduced,
and a mapping from the real line onto the finite range set by the biophysical limits is defined. Gradient descent on this auxiliary parameter space is performed by orthogonalizing the gradient dQ/dp
to the gradient dL/dp of the norm. To correct for drift off the constraint manifold of constant norm,
illustrated in Fig. 3, steps of gradient ascent or descent on the L_p norm are performed while keeping
Q constant. The step size during gradient descent is adjusted to assure that \Delta Q < 0 and that a
periodic solution x^* exists after adapting the parameters. The energy landscape is locally convex
(Fig. 3).
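The reparameterization and gradient-orthogonalization steps just described can be sketched as follows. This is a minimal illustration with hypothetical function names and made-up numbers; the actual optimizer, its step-size control, and the constraint-correction steps are not shown.

```python
import numpy as np

def to_bounded(u, lo, hi):
    # map an unconstrained auxiliary parameter u (any real number)
    # onto the open interval (lo, hi) with a logistic squashing function
    return lo + (hi - lo) / (1.0 + np.exp(-u))

def orthogonalize(grad_q, grad_l):
    # remove from grad_q its component along grad_l, so that a step along
    # the result leaves the constrained quantity unchanged to first order
    return grad_q - (grad_q @ grad_l) / (grad_l @ grad_l) * grad_l

u = np.array([-3.0, 0.0, 4.0])           # auxiliary parameters on the real line
tau = to_bounded(u, 5e-6, 0.2)           # e.g. time constants between 5 us and 200 ms
grad_q = np.array([1.0, -2.0, 0.5])      # gradient of the quantity being minimized
grad_l = np.array([0.3, 0.3, 0.3])       # gradient of the norm (the constraint)
step = orthogonalize(grad_q, grad_l)     # descent direction tangent to the manifold
```

After the projection, a small step along `step` changes the objective but not, to first order, the constrained norm.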
6 Predicting the Hodgkin-Huxley model
We start with a single-compartment Goldman-Hodgkin-Katz model neuron containing voltage-gated
Na+ and leak conductances (Figure 1). A tonic synaptic input to the model evokes repetitive firing
of action potentials. We seek those parameters that minimize the ionic load for an action potential of
constant norm: in other words, spikes whose height relative to the average voltage is fairly constant,
subject to a trade-off with the spike width. The ionic load is directly proportional to the work W
performed by the ion flux. All parameters governing the ion channels' voltage dependence and
kinetics, including their time constants, mid-points, slopes, and peak values, are subject to change.
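The ionic currents in such a Goldman-Hodgkin-Katz model follow the standard GHK current equation. The sketch below evaluates it for a Na+-like species with illustrative (not fitted) concentrations and permeability, handling the removable singularity at V = 0 by its limit; it is a generic textbook formula, not the authors' implementation.

```python
import math

F = 96485.33212   # Faraday constant, C/mol
R = 8.314462618   # gas constant, J/(mol K)

def ghk_current(V, P, z, c_in, c_out, T=293.15):
    # Goldman-Hodgkin-Katz current density (A/m^2) for a single ion species:
    # I = P z F xi (c_in - c_out exp(-xi)) / (1 - exp(-xi)), xi = z F V / (R T).
    # Concentrations in mol/m^3 (= mM), permeability P in m/s.
    xi = z * F * V / (R * T)
    if abs(xi) < 1e-9:                     # remove the 0/0 at V = 0 by its limit
        return P * z * F * (c_in - c_out)
    return P * z * F * xi * (c_in - c_out * math.exp(-xi)) / (1.0 - math.exp(-xi))

# illustrative Na+ numbers: 14 mM inside, 140 mM outside
c_in, c_out = 14.0, 140.0
E_rev = (R * 293.15 / F) * math.log(c_out / c_in)   # Nernst potential (~ +58 mV)
I_at_rev = ghk_current(E_rev, 1e-6, 1, c_in, c_out)
```

The current vanishes at the Nernst potential and reverses sign around it, as expected for a single-species GHK flux.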
The simplest model capable of generating an action potential must have two dynamical variables and
two time scales: one for the upstroke and another for the downstroke. If both Na+ and K+ currents
[Figure 2 appears here. Panels (a)-(c) each show the optimal action potential (V in mV versus t in ms),
the falling-phase currents I_K[V] and excess I_Na[V] (in \mu A/cm^2) with the peak resurgence marked,
and insets with the optimized time constants \tau_h and \tau_n. The minimal ionic loads are
Q = 239 nC/cm^2 for the transient Na-current model, Q = 169 nC/cm^2 for the voltage-dependent
(in)activation model, and Q = 156 nC/cm^2 for the cooperative gating model.]
Figure 2: Optimal spike shapes and currents for neuron models with different biophysical features.
During optimization, the spikes were constrained to have constant norm \| V(t) - \langle V \rangle \|_{16} = 92 mV,
which controls the height of the spike. Insets in the left column display the voltage-dependence of
the optimized time constants for sodium inactivation and potassium activation; sodium activation is
modeled as occurring instantaneously. (a) Model with voltage-dependent inactivation of Na+; time
constants for the first order permeability kinetics are voltage-independent (inset). Inactivation turns
off the Na+ current on the downstroke, but not completely: as the K+ current activates to repolarize
the membrane, the inward Na+ current reactivates and counteracts the K+ current; the peak of the
resurgent Na+ current is marked by a triangle. (b) Model with voltage-dependent time constants
for the first order kinetics of activation and inactivation. The voltage dependence minimizes the
resurgence of the Na+ current. (c) Power-law gating model with an inwardly rectifying potassium
current replacing the leak current. The power law dependence introduces an effective delay in the
onset of the K+ current, which further minimizes the overlap of Na+ and K+ currents in time.
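The constant-norm constraint on spike height can be illustrated numerically. The exact normalization used by the authors is not shown in this excerpt, so the sketch below assumes the mean-power form (mean |V - <V>|^p)^{1/p} on a stylized Gaussian-shaped spike; for p = 16 this is already close to the peak deviation from the mean voltage.

```python
import numpy as np

t = np.linspace(-2e-3, 2e-3, 4001)                 # 4 ms window, in seconds
V = -65e-3 + 100e-3 * np.exp(-(t / 2e-4) ** 2)     # stylized spike, in volts

def centered_p_norm(V, p):
    # (mean |V - <V>|^p)^(1/p); as p grows this approaches the peak
    # deviation from the mean voltage (the spike "height")
    d = np.abs(V - V.mean())
    return float(np.mean(d ** p) ** (1.0 / p))

n2 = centered_p_norm(V, 2)
n16 = centered_p_norm(V, 16)
peak = float(np.max(np.abs(V - V.mean())))
```

By the power-mean inequality the p-norm is nondecreasing in p and bounded by the peak deviation, which is why a large exponent like 16 effectively pins the spike height.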
[Figure 3 appears here: the energy per spike (V_E, in nJ/cm^2, ranging from about 16.3 to 16.5)
plotted over the surface of constant-norm spikes as a function of the potassium-activation
parameters V_K, s_K, and \tau_K, with insets (a)-(c) showing example spike waveforms V(t).]
Figure 3: The energy required for an action potential as a function of three parameters governing potassium activation: the midpoint voltage V_K, the slope s_K, and the (maximum) time constant \tau_K. The energy is
the minimum work required to restore the ionic concentration gradients, as given by Eq. (1). Note
that the energy within the constrained manifold of constant-norm spikes is locally convex.
are persistent, current flows in opposite directions at the same time, so that, even at the optimum, the
ionic load is 1200 nC/cm2 . On the other hand, no voltage-gated K+ channels are even required for
a spike, as long as Na+ channels activate on a fast time scale and inactivate on a slower time scale
and the leak is powerful enough to repolarize the neuron. Even so, the load is still 520 nC/cm2 .
While spikes require dynamics on two time scales, suppressing the overlap between inward and
outward currents calls for a third time scale. The resulting dynamics are higher-dimensional and
reduce the load to 239 nC/cm2.
Making the activation and inactivation time constants voltage-dependent permits ion channels to
latch to an open or closed state during the rising and falling phase of the spike, reducing the ionic
load to 189 nC/cm2 (Fig. 2). The minimal Na+ and K+ currents are separated in time, yet dynamics
that are linear in the activation variables cannot enforce a true delay between the offset of the Na+
current and the onset of the K+ current. If current flow depends on multiple gates that need to be
activated simultaneously, optimization can use the nonlinearity of multiplication to introduce a delay
in the rise of the K+ current that abolishes the overlap, and the ionic load drops to 156 nC/cm2 .
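The effective delay introduced by multiplicative (cooperative) gating can be seen with a short calculation: raising a first-order activation variable to a power postpones the time at which the combined gate reaches half of its final value. A toy sketch with an illustrative time constant:

```python
import math
import numpy as np

tau = 1e-3                               # gate time constant (s), illustrative
t = np.linspace(0.0, 6e-3, 6001)
n = 1.0 - np.exp(-t / tau)               # first-order gate opening from 0

def time_to_half(y, t):
    # first time at which the trace reaches 0.5
    return float(t[np.argmax(y >= 0.5)])

t_half_single = time_to_half(n, t)       # ~ ln(2) * tau for a single gate
t_half_coop = time_to_half(n ** 4, t)    # four gates must be open at once
```

The fourth power pushes the half-activation time from about 0.69 tau to about 1.84 tau without adding any new state variable, which is exactly the kind of delay that suppresses the Na+/K+ current overlap.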
Any number of kinetic schemes for the nonlinear permeabilities P? can give rise to the same spike
waveform V (t), including the simplest two-dimensional one. Yet only the full Hodgkin-Huxley
(HH) model, with its voltage-dependent kinetics that prevent the premature resurgence of inward
current and cooperative gating that delays the onset of the outward current, minimizes the energetic
cost. More complex models, in which voltage-dependent ion channels make transitions between
multiple closed, inactivated, and open states, instantiate the energy-conserving features of the HH
system at the molecular level. Furthermore, features that are eliminated during optimization, such as
a voltage-dependent inactivation of the outward potassium current, are also not part of the delayed
rectifier potassium current in the Hodgkin-Huxley framework.
| 4327 |@word determinant:1 rising:1 norm:14 open:3 cm2:18 squid:1 seek:1 pressure:2 solid:1 carry:1 reduction:1 denoting:1 suppressing:1 bilal:1 pna:3 current:66 comparing:1 clements:1 activation:8 yet:6 must:9 written:1 john:1 physiol:4 realistic:1 shape:7 treating:2 drop:1 aps:1 v:1 instantiate:1 nervous:1 signalling:1 short:1 hodgkinhuxley:1 denis:1 gx:6 org:1 downing:1 height:4 mathematical:1 direct:1 differential:3 ik:7 persistent:1 consists:1 resistive:4 upstroke:2 inside:3 introduce:3 rapid:3 p1:1 brain:3 goldman:3 becomes:1 provided:2 brett:1 underlying:3 bounded:1 circuit:2 ileak:3 inward:6 what:2 evolved:1 cm:2 lowest:1 resurgent:1 minimizes:3 ag:11 giant:1 differentiation:1 nj:4 every:1 charge:18 secular:1 scaled:1 uk:1 control:1 producing:2 kelvin:1 discharging:1 positive:2 limit:5 bccn:1 consequence:1 accumulates:2 subscript:1 firing:2 path:1 ap:4 might:1 ghk:1 range:1 obeys:1 decided:1 unique:1 restoring:1 timedependent:1 block:1 cb2:1 procedure:1 poincar:3 area:1 elicit:1 bell:2 significantly:1 adapting:1 word:1 integrating:1 protein:1 get:3 convenience:1 close:4 selection:2 cannot:2 onto:1 influence:1 accumulating:1 accumulation:1 equivalent:1 compensated:2 roth:1 regardless:1 attention:1 independently:1 convex:2 simplicity:1 pure:1 mossy:2 fx:2 target:1 play:1 trigger:1 homogeneous:2 assure:1 mammalian:2 predicts:1 cooperative:3 role:1 ft:1 electrical:2 hv:4 wang:1 thousand:1 region:1 ensures:1 cycle:2 trade:2 balanced:1 leak:10 efflux:1 dynamic:11 reviewing:1 solving:1 deliver:2 upon:1 efficiency:1 basis:3 completely:1 triangle:1 easily:2 resolved:1 represented:1 fiber:1 stemmler:1 k16:1 separated:1 heat:1 fast:3 effective:1 activate:1 kp:1 outside:1 extraordinarily:1 exhaustive:1 whose:1 spend:1 solve:3 larger:1 consume:2 gp:6 rudolph:1 differentiable:1 biophysical:2 net:3 clamp:1 maximal:1 cao:1 rapidly:3 gen:1 conserving:1 adjoint:2 kv:3 potassium:8 optimum:1 diverges:1 produce:3 generating:2 derive:2 measured:1 eq:2 strong:1 auxiliary:2 
indicate:1 come:1 direction:3 posit:1 waveform:3 inhomogeneous:1 correct:1 bean:1 brainstem:1 transient:1 explains:1 require:1 fix:1 elementary:1 mathematically:1 traversed:1 adjusted:1 kinetics:6 assisted:1 initio:1 around:1 exp:4 mapping:1 predict:1 matthew:1 alle:1 smallest:1 lose:1 label:2 instantaneously:1 offs:1 clearly:1 concurrently:1 activates:2 always:1 reaching:1 inactivation:7 ej:1 shuttle:1 voltage:47 focus:1 vk:3 contrast:3 dependent:12 entire:1 expand:1 going:1 germany:1 denoted:1 exponent:1 sengupta:1 constrained:4 fairly:2 field:2 once:3 shaped:3 eliminated:1 identical:1 broad:1 stuart:1 discrepancy:1 gsyn:1 dg:2 simultaneously:1 ve:1 delayed:1 phase:4 fire:1 maintain:1 ab:1 conductance:10 expenditure:1 highly:3 introduces:1 zoology:1 sh:3 yielding:1 activated:1 held:1 wager:1 integral:2 capable:1 partial:2 respective:1 intense:1 incomplete:1 permeability:18 orbit:7 minimal:12 instance:1 column:1 wyatt:1 depolarizes:1 ordinary:1 cost:6 subset:1 pump:5 entry:1 hof:1 delay:5 stored:1 answer:1 periodic:4 density:2 peak:13 sensitivity:1 physic:1 vm:3 off:3 michael:1 together:1 mouse:1 na:29 again:1 town:1 containing:1 possibly:1 ek:2 derivative:3 inject:1 stark:1 li:1 account:2 potential:70 jeremy:1 de:1 chemistry:1 waste:1 matter:2 satisfy:1 caused:2 mv:20 onset:4 depends:4 performed:3 root:1 closed:2 start:3 wave:1 parallel:1 simon:1 slope:4 depolarizing:1 bruce:1 rectifying:1 vivo:1 minimize:2 compartment:1 greg:1 characteristic:1 yield:2 landscape:1 modelled:1 krysta:1 ionic:16 mere:1 confirmed:1 drive:2 plateau:1 synaptic:8 definition:1 against:1 energy:21 dm:1 associated:2 di:3 gleak:1 amplitude:3 back:1 nerve:4 higher:1 dt:15 sustain:1 maximally:1 synapse:1 yb:2 done:4 though:1 strongly:1 furthermore:1 just:1 implicit:1 governing:2 traveling:1 lmu:1 sketch:1 hand:2 replacing:1 nonlinear:7 propagation:1 behaved:1 depolarize:1 effect:1 true:1 hence:2 illustrated:1 white:1 latch:1 during:9 width:2 shultz:1 steady:1 samuel:1 rat:1 m:17 hippocampal:1 
neocortical:2 temperature:1 passive:6 instantaneous:1 functional:5 spiking:3 haider:1 perturbing:1 insensitive:1 volume:1 counteracts:1 resting:3 katz:3 rth:1 measurement:2 cambridge:2 imposing:1 ena:3 similarly:1 nonlinearity:1 stable:1 entail:1 sech:1 surface:1 deduce:1 add:1 patrick:1 dominant:1 own:1 recent:1 driven:2 isyn:3 initiation:1 preserving:1 minimum:12 additional:1 forty:1 period:1 multiple:4 full:1 reduces:1 smooth:1 characterized:1 long:2 compensate:1 molecular:1 biophysics:2 involving:1 metric:1 repetitive:1 represent:1 achieved:1 cell:20 ion:23 whereas:1 interval:1 diagram:1 harrison:1 source:1 swapped:1 rest:1 ascent:1 hz:1 subject:2 flow:6 capacitor:4 call:1 axonal:1 noting:1 enough:1 destexhe:1 fm:2 opposite:2 silent:1 reduce:1 consumed:1 depolarise:1 minimise:1 accelerating:1 energetically:1 energetic:2 resistance:3 algebraic:1 action:62 generally:2 conformational:1 se:1 myelin:2 listed:1 amount:2 outward:4 neocortex:1 ten:1 locally:3 induces:1 mid:1 carter:1 simplest:3 reduced:1 generate:3 repolarization:1 dotted:1 per:3 serban:1 inactivate:2 falling:4 changing:1 prevent:1 millivolt:1 asymptotically:1 sum:2 fibre:2 deflection:5 counteract:1 powerful:2 communicate:1 hodgkin:6 place:1 evokes:1 oscillation:2 geiger:1 scaling:2 followed:1 distinguish:1 display:1 activity:1 constraint:1 huxley:4 influx:1 speed:2 myelinated:1 optimality:1 martin:1 department:1 munich:1 according:1 membrane:26 across:4 slightly:1 smaller:1 describes:1 lp:3 rev:2 making:3 dv:1 restricted:1 heart:4 ln:3 equation:5 rectification:1 jennifer:1 turn:1 mechanism:1 hh:4 needed:4 reversal:1 permit:1 apply:1 obey:2 generic:3 enforce:1 gate:2 slower:1 existence:1 capacitive:3 assumes:1 include:1 neglect:1 giving:1 move:1 capacitance:9 question:1 objective:1 spike:20 occurs:2 lex:1 concentration:9 dependence:8 rt:6 primary:2 gradient:11 dp:7 wrap:1 distance:1 separate:1 separating:1 street:1 consumption:4 manifold:3 argue:1 cellular:1 gna:1 modeled:1 nc:8 gk:1 negative:1 rise:4 
resurgence:5 lived:1 calcium:1 boltzmann:2 gated:7 mccormick:1 neuron:20 sm:3 finite:3 descent:3 gas:1 displayed:1 tonic:1 paradox:1 perturbation:3 sharp:1 arbitrary:2 drift:1 introduced:1 david:2 namely:1 required:5 specified:1 optimized:1 conflict:1 narrow:1 tremendous:1 address:1 able:1 dynamical:4 beating:1 yc:2 fp:5 green:1 including:2 charging:1 power:2 event:1 overlap:5 force:1 restore:3 predicting:1 sodium:9 representing:1 scheme:1 brief:3 carried:1 excitable:2 vh:2 evolve:1 multiplication:1 relative:2 law:3 ina:6 loss:6 asymptotic:1 par:1 generation:5 proportional:3 lv:6 xp:7 principle:3 dq:1 metabolic:1 course:4 periodicity:1 thalamocortical:1 keeping:1 alain:1 side:1 normalised:2 laughlin:1 taking:3 expend:1 midpoint:3 cortical:1 transition:1 made:3 premature:1 flux:1 excess:4 sj:1 emphasize:1 keep:1 supremum:1 active:8 arnd:1 assumed:1 conservation:1 sk:2 table:2 impedance:1 channel:9 nature:2 transfer:1 expanding:1 inactivated:1 dendrite:1 complex:1 electric:2 vj:1 pk:3 neurosci:2 paul:1 allowed:1 downstroke:3 neuronal:4 site:1 referred:1 augmented:1 fig:3 postulating:1 extraordinary:1 axon:3 henrik:1 resistor:2 vanish:1 third:1 down:1 load:8 specific:2 rectifier:1 inset:2 gating:9 offset:2 evidence:1 dl:1 exists:1 effectively:1 importance:1 orthogonalizing:1 occurring:1 biophys:1 simply:1 qna:4 repolarize:2 lagrange:1 scalar:1 faraday:1 dh:1 kinetic:1 marked:1 bang:2 ann:1 kimberly:1 luc:1 change:8 experimentally:1 determined:1 reducing:1 total:1 specie:3 called:1 isopotential:2 premium:1 ya:2 mark:1 latter:1 |
Multilinear Subspace Regression: An Orthogonal
Tensor Decomposition Approach
Qibin Zhao 1, Cesar F. Caiafa 2, Danilo P. Mandic 3, Liqing Zhang 4, Tonio Ball 5, Andreas
Schulze-Bonhage 5, and Andrzej Cichocki 1
1 Brain Science Institute, RIKEN, Japan
2 Instituto Argentino de Radioastronomia (IAR), CONICET, Argentina
3 Dept. of Electrical & Electronic Engineering, Imperial College, UK
4 Dept. of Computer Science & Engineering, Shanghai Jiao Tong University, China
5 BCCN, Albert-Ludwigs-University, Germany
[email protected]
Abstract
A multilinear subspace regression model based on so-called latent variable decomposition is introduced. Unlike standard regression methods which typically
employ matrix (2D) data representations followed by vector subspace transformations, the proposed approach uses tensor subspace transformations to model
common latent variables across both the independent and dependent data. The
proposed approach aims to maximize the correlation between the so derived latent variables and is shown to be suitable for the prediction of multidimensional
dependent data from multidimensional independent data, where for the estimation
of the latent variables we introduce an algorithm based on Multilinear Singular
Value Decomposition (MSVD) on a specially defined cross-covariance tensor. It
is next shown that in this way we are also able to unify the existing Partial Least
Squares (PLS) and N-way PLS regression algorithms within the same framework.
Simulations on benchmark synthetic data confirm the advantages of the proposed
approach, in terms of its predictive ability and robustness, especially for small
sample sizes. The potential of the proposed technique is further illustrated on a
real world task of the decoding of human intracranial electrocorticogram (ECoG)
from a simultaneously recorded scalp electroencephalograph (EEG).
1 Introduction
The recent progress in sensor technology has made possible a plethora of novel applications, which
typically require increasingly large amounts of multidimensional data, such as large-scale images,
3D video sequences, and neuroimaging data. To match the data dimensionality, tensors (also called
multiway arrays) have been proven to be a natural and efficient representation for such massive data.
In particular, tensor subspace learning methods have been shown to outperform their corresponding
vector subspace methods, especially for small sample size problems [1, 2]; these methods include
multilinear PCA [3], multilinear LDA [4, 5], multiway covariates regression [6] and tensor subspace
analysis [7]. These desirable properties have made tensor decomposition becoming a promising tool
in exploratory data analysis [8, 9, 10, 11].
The Partial Least Squares (PLS) is a well-established estimation, regression and classification framework that aims to predict a set of dependent variables (responses) Y from a large set of independent
variables (predictors) X, and has been proven to be particularly useful for highly collinear data [12].
Its optimization objective is to maximize pairwise covariance of a set of latent variables (also called
latent vectors, score vectors) by projecting both X and Y onto a new subspace. A popular way
1
to estimate the model parameters is the Non-linear Iterative Partial Least Squares (NIPALS) [13],
an iterative procedure similar to the power method; for an overview of PLS and its applications in
multivariate regression analysis, see [14, 15, 16]. As an extension of PLS to multiway data, the N-way PLS (NPLS) decomposes the independent and dependent data into rank-one tensors, subject to
maximum pairwise covariance of the latent vectors [17]. The widely reported sensitivity to noise of
PLS is attributed to redundant (irrelevant) latent variables, whose selection remains an open problem. The number of latent variables also dependents on the rank of independent data, resulting in
overfitting when the number of observations is smaller than the number of latent variables. Although
the standard PLS can also handle an N -way tensor dataset differently, e.g. applied on a mode-1 matricization of X and Y, this would make it difficult to interpret the loadings as the physical meaning
would be lost due to the unfolding.
To alleviate these issues, in this study, a new tensor subspace regression model, called the Higher-Order Partial Least Squares (HOPLS), is proposed to predict an M-th-order tensor Y from an N-th-order tensor X. It considers each data sample as a higher order tensor represented as a linear combination of tensor subspace bases. This way, the dimensionality of parameters estimated by HOPLS
is much smaller than the dimensionality of parameters estimated by PLS, thus making HOPLS particularly suited for small sample sizes. In addition, the latent variables and tensor subspace can be
optimized to ensure a maximum correlation between the latent variables of X and Y with a constraint imposed to ensure a special structure of the core tensor. This is achieved by a simultaneous
stepwise rank-(1, L2 , . . . , LN ) decompositions of X and rank-(1, K2 , . . . , KM ) decomposition of
Y [18], using multiway singular value decomposition (MSVD) [19].
2 Preliminaries
2.1 Notation and definitions
We denote N-th-order tensors (multi-way arrays) by underlined boldface capital letters, matrices
(two-way arrays) by boldface capital letters, and vectors by boldface lower-case letters; e.g., X, P
and t are examples of a tensor, a matrix and a vector, respectively.
The i-th entry of a vector x is denoted by x_i, element (i, j) of a matrix X by x_{ij}, and element
(i_1, i_2, ..., i_N) of an N-th-order tensor X \in R^{I_1 \times I_2 \times \cdots \times I_N} by x_{i_1 i_2 ... i_N} or (X)_{i_1 i_2 ... i_N}. Indices
typically range from 1 to their capital version, e.g., i_N = 1, ..., I_N. The n-th matrix in a sequence
is denoted by a superscript in parentheses, e.g., X^{(n)}. The mode-n matricization of a tensor X is
denoted by X_{(n)}.
The n-mode product of a tensor X \in R^{I_1 \times \cdots \times I_n \times \cdots \times I_N} and a matrix A \in R^{J_n \times I_n} is denoted by
Y = X \times_n A \in R^{I_1 \times \cdots \times I_{n-1} \times J_n \times I_{n+1} \times \cdots \times I_N} and is defined as:
    y_{i_1 i_2 ... i_{n-1} j_n i_{n+1} ... i_N} = \sum_{i_n} x_{i_1 i_2 ... i_n ... i_N} \, a_{j_n i_n}.    (1)
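As a concrete reference for definition (1), the n-mode product can be computed with a single tensor contraction. The sketch below (numpy, zero-based modes) checks it element-wise against the definition on a small random tensor:

```python
import numpy as np

def mode_n_product(X, A, n):
    # Y = X x_n A: contract mode n of X (size I_n) with the rows of A
    # (shape J_n x I_n); mode n of the result has size J_n.
    return np.moveaxis(np.tensordot(A, X, axes=(1, n)), 0, n)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
A = rng.standard_normal((6, 4))
Y = mode_n_product(X, A, 1)                # result is 3 x 6 x 5
Y_ref = np.einsum('iak,ja->ijk', X, A)     # definition (1), written out
```

The `moveaxis` call just puts the contracted mode back into position n, since `tensordot` places it first.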
The mode-n cross-covariance between an N-th-order tensor X \in R^{I_1 \times \cdots \times I_n \times \cdots \times I_N} and an M-th-order tensor Y \in R^{J_1 \times \cdots \times J_n \times \cdots \times J_M} with the same size I_n = J_n on the n-th mode, denoted by
COV_{n;n}(X, Y) \in R^{I_1 \times \cdots \times I_{n-1} \times I_{n+1} \times \cdots \times I_N \times J_1 \times \cdots \times J_{n-1} \times J_{n+1} \times \cdots \times J_M}, is defined as
    C = COV_{n;n}(X, Y) = <X, Y>_{n;n},    (2)
where the symbol < . , . >_{n;n} represents a multiplication between two tensors, and is defined as
    c_{i_1, ..., i_{n-1}, i_{n+1}, ..., i_N, j_1, ..., j_{n-1}, j_{n+1}, ..., j_M} = \sum_{i_n = 1}^{I_n} x_{i_1, ..., i_n, ..., i_N} \, y_{j_1, ..., i_n, ..., j_M}.    (3)
2.2 Partial Least Squares
The objective of the PLS method is to find a set of latent vectors that explains as much as possible the
covariance between X and Y, which can be achieved by performing the following decomposition:
    X = T P^T + E = \sum_{r=1}^{R} t_r p_r^T + E,
    Y = U C^T + F = \sum_{r=1}^{R} u_r c_r^T + F,    (4)
where T = [t_1, t_2, ..., t_R] \in R^{I \times R} is a matrix of R extracted orthogonal latent variables from X,
that is, T^T T = I, and U = [u_1, u_2, ..., u_R] \in R^{I \times R} are latent variables from Y that have maximum covariance with T column-wise. The matrices P and C represent loadings (vector subspace
bases) and E and F are residuals. A useful property is that the relation between T and U can be
approximated linearly by
    U \approx T D,    (5)
where D is an (R x R) diagonal matrix, and the scalars d_{rr} = u_r^T t_r / t_r^T t_r play the role of regression
coefficients.
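For reference, the NIPALS inner loop mentioned above can be sketched as follows. This is a minimal one-component illustration (not a complete PLS implementation): it extracts a pair of latent vectors with large covariance and shows that deflating X by t p^T leaves a residual orthogonal to t.

```python
import numpy as np

def nipals_component(X, Y, n_iter=500, tol=1e-12):
    # Extract one pair of latent vectors (t, u) with (approximately) maximal
    # covariance, plus the X-loading p and Y-weight c.
    u = Y[:, 0].copy()
    for _ in range(n_iter):
        w = X.T @ u
        w /= np.linalg.norm(w)
        t = X @ w
        c = Y.T @ t
        c /= np.linalg.norm(c)
        u_new = Y @ c
        if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
            u = u_new
            break
        u = u_new
    p = X.T @ t / (t @ t)            # X-loading used for deflation
    return t, u, p, c

rng = np.random.default_rng(2)
T_true = rng.standard_normal((50, 2))                  # shared latent factors
X = T_true @ rng.standard_normal((2, 10)) + 0.01 * rng.standard_normal((50, 10))
Y = T_true @ rng.standard_normal((2, 3)) + 0.01 * rng.standard_normal((50, 3))
t, u, p, c = nipals_component(X, Y)
X_deflated = X - np.outer(t, p)                        # rank-one deflation
```

Because p = X^T t / (t^T t), the deflated residual satisfies t^T (X - t p^T) = 0 exactly, which is what allows components to be extracted sequentially.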
3 Higher-order PLS (HOPLS)
[Figure 1 appears here: a schematic showing the raw data tensor decomposed into a sum of terms
built from latent variables and loadings, plus residuals.]
Figure 1: Schematic diagram of the HOPLS model: decomposing X as a sum of rank-(1, L_2, L_3)
tensors. Decomposition for Y follows a similar principle.
For an N-th-order independent tensor X \in R^{I_1 \times \cdots \times I_N} and an M-th-order dependent tensor Y \in
R^{J_1 \times \cdots \times J_M}, having the same size on the first mode^1, i.e., I_1 = J_1, similar to PLS, our objective is
to find the optimal subspace approximation of X and Y, in which the latent vectors of independent
and dependent variables have maximum pairwise covariance.
3.1 Proposed model
The new tensor subspace represented by the Tucker model can be obtained by approximating X
with a sum of rank-(1, L_2, ..., L_N) decompositions (see Fig. 1), while dependent data Y are approximated by a sum of rank-(1, K_2, ..., K_M) decompositions. From the relation between the
^1 The first mode is usually associated with the sample mode or time mode, and for each sample, we have
independent data represented by an (N-1)-th-order tensor and dependent data represented by an (M-1)-th-order tensor.
latent vectors in (5), upon replacing U by TD and integrating D into the core tensor, the operation
of HOPLS can be expressed as
    X = \sum_{r=1}^{R} G_r \times_1 t_r \times_2 P_r^{(1)} \times_3 \cdots \times_N P_r^{(N-1)} + E,
    Y = \sum_{r=1}^{R} D_r \times_1 t_r \times_2 Q_r^{(1)} \times_3 \cdots \times_M Q_r^{(M-1)} + F,    (6)
where R is the number of latent vectors, t_r \in R^{I_1} is the r-th latent vector, {P_r^{(n)}}_{n=1}^{N-1} \in
R^{I_{n+1} \times L_{n+1}} (I_{n+1} > L_{n+1}) and {Q_r^{(m)}}_{m=1}^{M-1} \in R^{J_{m+1} \times K_{m+1}} (J_{m+1} > K_{m+1}) are loading
matrices corresponding to the latent vector t_r on mode-n and mode-m respectively, and G_r \in
R^{1 \times L_2 \times \cdots \times L_N} and D_r \in R^{1 \times K_2 \times \cdots \times K_M} are core tensors. Note that the new tensor subspace for X
is spanned by R tensor bases represented by the Tucker model
    {\tilde{P}_r}_{r=1}^{R} = G_r \times_2 P_r^{(1)} \times_3 \cdots \times_N P_r^{(N-1)},    (7)
while the new subspace for Y is represented by the Tucker model
    {\tilde{Q}_r}_{r=1}^{R} = D_r \times_2 Q_r^{(1)} \times_3 \cdots \times_M Q_r^{(M-1)}.    (8)
The rank-(1, L_2, ..., L_N) decomposition in (6) is not unique; however, since MSVD generates both
an all-orthogonal core [19] and column-wise orthogonal factors, these can be applied to obtain the
unique components of the Tucker decomposition. This way, we ensure that G_r and D_r are all-orthogonal and P_r^{(n)}, Q_r^{(m)} are column-wise orthogonal, i.e., P_r^{(n)T} P_r^{(n)} = I \in R^{L_{n+1} \times L_{n+1}} and
Q_r^{(m)T} Q_r^{(m)} = I \in R^{K_{m+1} \times K_{m+1}}.
By defining a latent matrix T = [t_1, ..., t_R] \in R^{I_1 \times R}, mode-n loading matrix \bar{P}^{(n)} =
[P_1^{(n)}, ..., P_R^{(n)}] \in R^{I_{n+1} \times R L_{n+1}}, mode-m loading matrix \bar{Q}^{(m)} = [Q_1^{(m)}, ..., Q_R^{(m)}] \in
R^{J_{m+1} \times R K_{m+1}}, and core tensors G = blockdiag(G_1, ..., G_R) \in R^{R \times R L_2 \times \cdots \times R L_N} and
D = blockdiag(D_1, ..., D_R) \in R^{R \times R K_2 \times \cdots \times R K_M}, the HOPLS model in (6) can be rewritten as
    X = G \times_1 T \times_2 \bar{P}^{(1)} \times_3 \cdots \times_N \bar{P}^{(N-1)} + E,
    Y = D \times_1 T \times_2 \bar{Q}^{(1)} \times_3 \cdots \times_M \bar{Q}^{(M-1)} + F,    (9)
where E and F are residuals. The core tensors G and D have a special block-diagonal structure (see
Fig. 1) whose elements indicate the level of interactions between the corresponding latent vectors
and loading matrices.
Note that HOPLS simplifies into NPLS if we define \forall n: L_n = 1 and \forall m: K_m = 1. On the
other hand, for \forall n: L_n = rank_n(X) and \forall m: K_m = rank_m(Y)^2, HOPLS obtains the same
solution as the standard PLS performed on a mode-1 matricization of X and Y. This is obvious
from a matricized form of (6), given by
    X_{(1)} \approx \sum_r t_r \, G_{r(1)} \left( P_r^{(N-1)} \otimes \cdots \otimes P_r^{(1)} \right)^T,    (10)
where G_{r(1)} (P_r^{(N-1)} \otimes \cdots \otimes P_r^{(1)})^T can approximate arbitrarily well the p_r^T in (4) computed
from X_{(1)}.
3.2 Objective function and algorithm
The optimization of the subspace transformation yielding the common latent variables will be formulated as a problem of determining a set of loading matrices P_r^{(n)}, Q_r^{(m)}, r = 1, 2, ..., R that
maximize an objective function. Since the latent vectors can be optimized sequentially with the same
^2 rank_n(X) = rank X_{(n)}.
Algorithm 1 The Higher-order Partial Least Squares (HOPLS) Algorithm
Input: X \in R^{I_1 \times \cdots \times I_N}, Y \in R^{J_1 \times \cdots \times J_M} with I_1 = J_1;
    the number of latent vectors R and the numbers of loading vectors {L_n}_{n=2}^{N} and {K_m}_{m=2}^{M}.
Output: {P_r^{(n)}}; {Q_r^{(m)}}; {G_r}; {D_r}; t_r,
    for r = 1, ..., R; n = 1, ..., N-1; m = 1, ..., M-1.
Initialization: E_1 = X, F_1 = Y.
for r = 1 to R do
    if ||E_r|| > \varepsilon and ||F_r|| > \varepsilon then
        C_r <- <E_r, F_r>_{1;1};
        rank-(L_2, ..., L_N, K_2, ..., K_M) decomposition of C_r by HOOI [8] as
            C_r \approx [[H_r; P_r^{(1)}, ..., P_r^{(N-1)}, Q_r^{(1)}, ..., Q_r^{(M-1)}]];
        t_r <- the first leading left singular vector of the SVD of E_r \times_2 P_r^{(1)T} \times_3 \cdots \times_N P_r^{(N-1)T};
        G_r <- [[E_r; t_r^T, P_r^{(1)T}, ..., P_r^{(N-1)T}]];
        D_r <- [[F_r; t_r^T, Q_r^{(1)T}, ..., Q_r^{(M-1)T}]];
        Deflation:
            E_{r+1} <- E_r - [[G_r; t_r, P_r^{(1)}, ..., P_r^{(N-1)}]];
            F_{r+1} <- F_r - [[D_r; t_r, Q_r^{(1)}, ..., Q_r^{(M-1)}]];
    else
        break;
    end if
end for
Return all {P_r^{(n)}}; {Q_r^{(m)}}; {G_r}; {D_r}; t_r.
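A simplified numerical sketch of one pass of Algorithm 1 for third-order X and Y is given below. For brevity it replaces the HOOI step with a one-shot truncated HOSVD of C (truncated SVDs of the unfoldings, which HOOI would normally refine), so it is an approximation of the algorithm, not the authors' implementation.

```python
import numpy as np

def mode_n_product(X, A, n):
    # contract mode n of X with the rows of A (shape J x I_n)
    return np.moveaxis(np.tensordot(A, X, axes=(1, n)), 0, n)

def top_left_singvecs(M, k):
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k]

def hopls_component(E, F, L=(2, 2), K=(2, 2)):
    # One HOPLS component for E (I1 x I2 x I3) and F (I1 x J2 x J3).
    C = np.tensordot(E, F, axes=(0, 0))             # (I2, I3, J2, J3)
    P1 = top_left_singvecs(C.reshape(C.shape[0], -1), L[0])
    P2 = top_left_singvecs(np.moveaxis(C, 1, 0).reshape(C.shape[1], -1), L[1])
    Q1 = top_left_singvecs(np.moveaxis(C, 2, 0).reshape(C.shape[2], -1), K[0])
    Q2 = top_left_singvecs(np.moveaxis(C, 3, 0).reshape(C.shape[3], -1), K[1])
    # latent vector: leading left singular vector of the projected E
    Z = mode_n_product(mode_n_product(E, P1.T, 1), P2.T, 2)
    t = top_left_singvecs(Z.reshape(Z.shape[0], -1), 1)[:, 0]
    # core tensors, reconstructions, and deflation
    G = mode_n_product(Z, t[None, :], 0)            # 1 x L2 x L3
    W = mode_n_product(mode_n_product(F, Q1.T, 1), Q2.T, 2)
    D = mode_n_product(W, t[None, :], 0)            # 1 x K2 x K3
    E_hat = mode_n_product(mode_n_product(G, t[:, None], 0), P1, 1)
    E_hat = mode_n_product(E_hat, P2, 2)
    F_hat = mode_n_product(mode_n_product(D, t[:, None], 0), Q1, 1)
    F_hat = mode_n_product(F_hat, Q2, 2)
    return t, (P1, P2), (Q1, Q2), G, D, E - E_hat, F - F_hat

rng = np.random.default_rng(3)
E = rng.standard_normal((20, 5, 6))
F = rng.standard_normal((20, 4, 7))
t, Ps, Qs, G, D, E1, F1 = hopls_component(E, F)
```

Since t and the loadings have orthonormal columns, the subtracted term is an orthogonal projection of the data, so the deflated residuals always have smaller norm, mirroring the deflation step of Algorithm 1.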
criteria based on deflation^3, we shall simplify the problem to that of the first latent vector t_1 and two
groups of loading matrices P_1^{(n)} and Q_1^{(m)}. To simplify the notation, the subscript r is omitted in the following
equations. An objective function employed to determine the tensor bases, represented by P^{(n)} and
Q^{(m)}, can be defined as
    min_{P^{(n)}, Q^{(m)}} ||X - [[G; t, P^{(1)}, ..., P^{(N-1)}]]||^2 + ||Y - [[D; t, Q^{(1)}, ..., Q^{(M-1)}]]||^2
    s.t. P^{(n)T} P^{(n)} = I_{L_{n+1}}, Q^{(m)T} Q^{(m)} = I_{K_{m+1}},    (11)
and yields the common latent vector t that best approximates X and Y. The solution can be obtained
by maximizing the norm of the core tensors G and D simultaneously. Since t^T t = 1, we have
    ||G \times_1 D||^2 = ||[[<X, Y>_{1;1}; P^{(1)}, ..., P^{(N-1)}, Q^{(1)}, ..., Q^{(M-1)}]]||^2.    (12)
We now define a mode-1 cross-covariance tensor C = COV_{1;1}(X, Y) \in R^{I_2 \times \cdots \times I_N \times J_2 \times \cdots \times J_M}.
Using the property ||G \times_1 D||^2 \le ||G||^2 ||D||^2 and based on (11), (12), we have
    max_{P^{(n)}, Q^{(m)}} ||[[C; P^{(1)}, ..., P^{(N-1)}, Q^{(1)}, ..., Q^{(M-1)}]]||^2
    s.t. P^{(n)T} P^{(n)} = I_{L_{n+1}} and Q^{(m)T} Q^{(m)} = I_{K_{m+1}},    (13)
indicating that instead of decomposing X directly, we may opt to find a rank-(L_2, ..., L_N, K_2, ..., K_M) tensor decomposition of C. According to (11), for a given set of
loading matrices {P^{(n)}}, the latent vector t must explain the variance of X as much as possible, that is
    t = arg min_t ||X - [[G; t, P^{(1)}, ..., P^{(N-1)}]]||^2.    (14)
The HOPLS algorithm is outlined in Algorithm 1.
^3 As in the NPLS case, this deflation does not reduce the rank of the residuals.
3.3 Prediction
Predictions of new observations are performed using the matricized forms of the data tensors X
and Y. More specifically, for any new observation X_new, we can predict Y_new as
    \hat{T}_{new} = X_{new(1)} \left( \left( P^{(N-1)} \otimes \cdots \otimes P^{(1)} \right) G_{(1)}^T \right)^{+},
    \hat{Y}_{new(1)} = \hat{T}_{new} \, D_{(1)} \left( Q^{(M-1)} \otimes \cdots \otimes Q^{(1)} \right)^T,    (15)
where (.)^{+} denotes the Moore-Penrose pseudoinverse operation.
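A small numerical check of eq. (15): with randomly generated (hypothetical) loadings and core, a new sample built from known latent scores is exactly recovered by the pseudoinverse formula. Note one convention detail: with numpy's C-ordered reshape as the unfolding, the Kronecker factors appear in the reverse order of the Fortran-style unfolding assumed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
I2, I3, L2, L3, R, J2 = 6, 7, 2, 3, 2, 4

# hypothetical, randomly generated model parameters (not fitted to anything)
P1 = np.linalg.qr(rng.standard_normal((I2, L2)))[0]   # column-orthonormal
P2 = np.linalg.qr(rng.standard_normal((I3, L3)))[0]
G1 = rng.standard_normal((R, L2 * L3))                # mode-1 unfolded core
Q1m = np.linalg.qr(rng.standard_normal((J2, R)))[0]   # Y-side loading
D1 = np.eye(R)                                        # trivial Y-side core

# with C-ordered unfoldings, X_(1) = T G_(1) kron(P1, P2)^T
T_true = rng.standard_normal((5, R))                  # 5 new samples
X1_new = T_true @ G1 @ np.kron(P1, P2).T

M = np.kron(P1, P2) @ G1.T                            # (I2*I3) x R
T_hat = X1_new @ np.linalg.pinv(M.T)                  # first line of eq. (15)
Y1_hat = T_hat @ D1 @ Q1m.T                           # second line of eq. (15)
```

Because the Kronecker factor has orthonormal columns and the core is full rank, the pseudoinverse is an exact left inverse here and the latent scores are recovered without error.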
Figure 2: Performance comparison between HOPLS, NPLS and PLS, for a varying number of latent
vectors under the conditions of noise free (A) and SNR=10dB (B).
4 Experimental results
We perform two case studies, one on synthetic data which illustrates the benefits of HOPLS, and the
other on real-life electrophysiological data. To quantify the predictability, the index Q^2 was defined
as
    Q^2 = 1 - \sum_{i=1}^{I} (y_i - \hat{y}_i)^2 / \sum_{i=1}^{I} (y_i - \bar{y})^2,
where \hat{y}_i denotes the prediction of y_i using a model created with the i-th sample omitted.
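The index itself is straightforward to compute once the leave-one-out predictions \hat{y}_i are available; a minimal sketch:

```python
import numpy as np

def q_squared(y, y_pred):
    # predictive Q^2: 1 - residual sum of squares over the
    # total sum of squares about the mean
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

y = np.array([1.0, 2.0, 3.0, 4.0])
q_perfect = q_squared(y, y)                        # perfect prediction -> 1
q_mean = q_squared(y, np.full(4, y.mean()))        # mean-only prediction -> 0
```

Values below zero indicate a model predicting worse than the sample mean, which is why Q^2 is a stricter criterion than in-sample R^2.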
4.1 Simulations on synthetic datasets
A simulation study on synthetic datasets was undertaken to evaluate the HOPLS regression method
in terms of its predictive ability and effectiveness under different conditions related to a small number
of samples and noise levels. The HOPLS and NPLS were performed on tensor datasets whereas
Figure 3: The optimal performance after choosing an appropriate number of latent vectors. (A)
Noise free case. (B) For case with SNR=10dB.
PLS was performed on a mode-1 matricization of the corresponding datasets (i.e. X(1) and Y(1) ).
The tensor X was generated from a full-rank standard normal distribution and the tensor Y as a
linear combination of X. Noise was added to both independent and dependent datasets to evaluate
performance at different noise levels. To reduce random fluctuations, the results were averaged over
50 simulation trials with datasets generated repeatedly according to the same criteria.
We considered a 3rd-order tensor X and a 3rd-order tensor Y, for the case where the sample size was much smaller than the number of predictors, i.e., I_1 \ll I_2 \times I_3. Fig. 2 illustrates the predictive performances on the validation datasets for a varying number of latent vectors. Observe that
when the number of latent vectors was equal to the number of samples, both PLS and NPLS had
the tendency to be unstable, while HOPLS had no such problems. With an increasing number of
latent vectors, HOPLS exhibited enhanced performance while the performance of NPLS and PLS
deteriorated due to the noise introduced by excess latent vectors (see Fig. 2B). Fig. 3 illustrates the
optimal prediction performances obtained by selecting an appropriate number of latent vectors. The
HOPLS outperformed the NPLS and PLS at different noise levels and the superiority of HOPLS was
more pronounced in the presence of noise, indicating its enhanced robustness to noise.
Figure 4: Stability of the performance of HOPLS, NPLS and PLS for a varying number of latent
vectors, under the conditions of (A) SNR=5dB and (B) SNR=0dB.
Observe that PLS was sensitive to the number of latent vectors, indicating that the selection of latent
vectors is a crucial issue for obtaining an optimal model. Finding the optimal number of latent
vectors for unseen test data remains a challenging problem, implying that the stability of prediction
performance for a varying number of latent vectors is essential for alleviating the sensitivity of the
model. Fig. 4 illustrates the stable predictive performance of HOPLS for a varying number of latent
vectors, this behavior was more pronounced for higher noise levels.
4.2 Decoding ECoG from EEG
In the last decade, considerable progress has been made in decoding the movement kinematics (e.g.
trajectories or velocity) from neuronal signals recorded both invasively, such as spiking activity
[20] and electrocorticogram (ECoG) [21, 22], and noninvasively from scalp electroencephalography (EEG) [23]. To extract more information from brain activities, neuroimaging data fusion has
also been investigated, whereby mutimodal brain activities were recorded continuously and synchronously. In contrast to the task of decoding the behavioral data from brain activity, in this study,
our aim was to decode intracranial ECoG from scalp EEG. Assuming that both ECoG and EEG are
related to the same brain sources, we set out to extract the common latent components between EEG
and ECoG and examined whether ECoG can be decoded from the corresponding EEG by employing
our proposed HOPLS method.
ECoG (8 × 8 grid) and EEG (21 electrodes) were recorded simultaneously at a sample rate of 1024 Hz from a human subject during a relaxed state. After preprocessing by a common average reference (CAR) spatial filter, ECoG and EEG signals were transformed into a time-frequency representation
and downsampled to 8 Hz by the continuous complex Morlet wavelet transformation with frequency
range of 2-150Hz and 2-40Hz, respectively. To ease the computation burden, we employed a 4 second time window of EEG to predict the corresponding ECoG with the same window length. Thus,
our objective was to decode the ECoG dataset comprised in a 4th-order tensor Y (trial × channel × frequency × time) from an EEG dataset contained in a 4th-order tensor X (trial × channel × frequency × time).
According to the HOPLS model, the common latent vectors in T can be regarded as brain source components that establish a bridge between EEG and ECoG, while the loading tensors \tilde{P}_r and \tilde{Q}_r, r = 1, \ldots, R, can be regarded as a set of tensor bases, as shown in Fig. 5(A). These bases are computed from the training dataset and explain the relationship of spatio-temporal-frequency patterns between EEG and ECoG. The decoding model was calibrated from 30-second datasets and was applied to predict the subsequent 30-second datasets. The quality of prediction was evaluated by the total correlation coefficient between the predicted and actual time-frequency representation of ECoG, denoted by r_{vec(\hat{Y}), vec(Y)}.
Fig. 5(B) illustrates the prediction performance by using a different number of latent vectors, ranging
from 1 to 8 and compared with the standard PLS performed on a mode-1 matricization of tensors
X and Y. The optimal number of latent vectors for HOPLS and PLS were 4 and 1, respectively.
Conforming with analysis, HOPLS was more stable for a varying number of latent vectors and
outperformed the standard PLS in terms of its predictive ability.
Figure 5: (A) The basis of the tensor subspace computed from the spatial, temporal, and spectral
representation of EEG and ECoG. (B) The correlation coefficient r between predicted and actual
spatio-temporal-frequency representation of ECoG signals for a varying number of latent vectors.
5 Conclusion
We have introduced the Higher-order Partial Least Squares (HOPLS) framework for tensor subspace regression, whereby data samples are represented in tensor form, thus providing a natural generalization of the existing Partial Least Squares (PLS) and N-way PLS (NPLS) approaches. Compared
to the standard PLS, our proposed method has been shown to be more flexible and robust, especially
for small sample size cases. Simulation results have demonstrated the superiority and effectiveness of HOPLS over the existing algorithms for different noise levels. A challenging application of
decoding intracranial electrocorticogram (ECoG) from a simultaneously recorded scalp electroencephalography (EEG) (both from human brain) has been studied and the results have demonstrated
the large potential of HOPLS for multi-way correlated datasets.
Acknowledgments
The work was supported in part by the national natural science foundation of China under grant
number 90920014 and NSFC international cooperation program under grant number 61111140019.
References
[1] L. Wolf, H. Jhuang, and T. Hazan. Modeling appearances with low-rank SVM. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1-6. IEEE, 2007.
[2] H. Pirsiavash, D. Ramanan, and C. Fowlkes. Bilinear classifiers for visual recognition. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1482-1490. 2009.
[3] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos. MPCA: Multilinear principal component analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1):18-39, 2008.
[4] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H. J. Zhang. Multilinear discriminant analysis for face recognition. IEEE Transactions on Image Processing, 16(1):212-220, 2007.
[5] D. Tao, X. Li, X. Wu, and S. J. Maybank. General tensor discriminant analysis and Gabor features for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1700-1715, 2007.
[6] A. K. Smilde and H. A. L. Kiers. Multiway covariates regression models. Journal of Chemometrics, 13(1):31-48, 1999.
[7] X. He, D. Cai, and P. Niyogi. Tensor subspace analysis. Advances in Neural Information Processing Systems, 18:499, 2006.
[8] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[9] A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari. Nonnegative Matrix and Tensor Factorizations. John Wiley & Sons, 2009.
[10] E. Acar, D. M. Dunlavy, T. G. Kolda, and M. Mørup. Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 2010.
[11] R. Bro, R. A. Harshman, N. D. Sidiropoulos, and M. E. Lundy. Modeling multi-way data with linearly dependent loadings. Journal of Chemometrics, 23(7-8):324-340, 2009.
[12] S. Wold, M. Sjöström, and L. Eriksson. PLS-regression: A basic tool of chemometrics. Chemometrics and Intelligent Laboratory Systems, 58:109-130, 2001.
[13] H. Wold. Soft modeling by latent variables: The nonlinear iterative partial least squares approach. Perspectives in Probability and Statistics: Papers in Honour of M. S. Bartlett, pages 520-540, 1975.
[14] A. Krishnan, L. J. Williams, A. R. McIntosh, and H. Abdi. Partial least squares (PLS) methods for neuroimaging: A tutorial and review. NeuroImage, 56(2):455-475, 2011.
[15] H. Abdi. Partial least squares regression and projection on latent structure regression (PLS Regression). Wiley Interdisciplinary Reviews: Computational Statistics, 2(1):97-106, 2010.
[16] R. Rosipal and N. Krämer. Overview and recent advances in partial least squares. In Subspace, Latent Structure and Feature Selection, volume 3940 of Lecture Notes in Computer Science, pages 34-51. Springer, 2006.
[17] R. Bro. Multiway calibration. Multilinear PLS. Journal of Chemometrics, 10(1):47-61, 1996.
[18] L. De Lathauwer. Decompositions of a higher-order tensor in block terms - Part II: Definitions and uniqueness. SIAM J. Matrix Anal. Appl., 30(3):1033-1066, 2008.
[19] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253-1278, 2000.
[20] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz. Cortical control of a prosthetic arm for self-feeding. Nature, 453(7198):1098-1101, 2008.
[21] Z. C. Chao, Y. Nagasaka, and N. Fujii. Long-term asynchronous decoding of arm motion using electrocorticographic signals in monkeys. Frontiers in Neuroengineering, 3(3), 2010.
[22] T. Pistohl, T. Ball, A. Schulze-Bonhage, A. Aertsen, and C. Mehring. Prediction of arm movement trajectories from ECoG-recordings in humans. Journal of Neuroscience Methods, 167(1):105-114, 2008.
[23] T. J. Bradberry, R. J. Gentili, and J. L. Contreras-Vidal. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. The Journal of Neuroscience, 30(9):3432, 2010.
Practical Variational Inference for Neural Networks
Alex Graves
Department of Computer Science
University of Toronto, Canada
[email protected]
Abstract
Variational methods have been previously explored as a tractable approximation
to Bayesian inference for neural networks. However the approaches proposed so
far have only been applicable to a few simple network architectures. This paper
introduces an easy-to-implement stochastic variational method (or equivalently,
minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational
perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural
network applied to the TIMIT speech corpus.
1 Introduction
In the eighteen years since variational inference was first proposed for neural networks [10] it has not
seen widespread use. We believe this is largely due to the difficulty of deriving analytical solutions
to the required integrals over the variational posteriors. Such solutions are complicated for even
the simplest network architectures, such as radial basis networks [2] and single layer feedforward
networks with linear outputs [10, 1, 14], and are generally unavailable for more complex systems.
The approach taken here is to forget about analytical solutions and search instead for variational
distributions whose expectation values (and derivatives thereof) can be efficiently approximated with
numerical integration. While it may seem perverse to replace one intractable integral (over the true
posterior) with another (over the variational posterior), the point is that the variational posterior is far
easier to draw probable samples from, and correspondingly more amenable to numerical methods.
The result is a stochastic method for variational inference with a diagonal Gaussian posterior that can
be applied to any differentiable log-loss parametric model, which includes most neural networks.1
Variational inference can be reformulated as the optimisation of a Minimum Description length
(MDL; [21]) loss function; indeed it was in this form that variational inference was first considered
for neural networks. One advantage of the MDL interpretation is that it leads to a clear separation
between prediction accuracy and model complexity, which can help to both analyse and optimise the
network. Another benefit is that recasting inference as optimisation makes it to easier to implement
in existing, gradient-descent-based neural network software.
2 Neural Networks
For the purposes of this paper a neural network is a parametric model that assigns a conditional probability Pr(D|w) to some dataset D, given a set w = \{w_i\}_{i=1}^{W} of real-valued parameters, or weights. The elements (x, y) of D, each consisting of an input x and a target y, are assumed to be
1. An important exception is energy-based models such as restricted Boltzmann machines [24], whose log-loss is intractable.
drawn independently from a joint distribution p(x, y).2 The network loss L^N(w, D) is defined as the negative log probability of the data given the weights:

L^N(w, D) = -\ln \Pr(D|w) = -\sum_{(x,y) \in D} \ln \Pr(y|x, w)    (1)
The logarithm could be taken to any base, but to avoid confusion we will use the natural logarithm ln throughout. We assume that the partial derivatives of LN (w, D) with respect to the network weights can be efficiently calculated (using, for example, backpropagation or backpropagation
through time [22]).
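As a minimal illustration of Eq. (1), the loss is just a sum of negative log probabilities over the dataset. `predict_prob` below is a hypothetical stand-in for any model that returns Pr(y|x, w); the name and interface are not from the paper.

```python
import math

def network_loss(predict_prob, weights, dataset):
    """L^N(w, D) of Eq. (1): negative log probability of the targets.
    predict_prob(x, y, weights) must return Pr(y | x, w) in (0, 1]."""
    return -sum(math.log(predict_prob(x, y, weights)) for (x, y) in dataset)
```

For example, a model that assigns probability 0.5 to every target incurs ln 2 nats of loss per element.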
3 Variational Inference
Performing Bayesian inference on a neural network requires the posterior distribution of the network weights given the data. If the weights have a prior probability P(w|\alpha) that depends on some parameters \alpha, the posterior can be written \Pr(w|D, \alpha). Unfortunately, for most neural networks \Pr(w|D, \alpha) cannot be calculated analytically, or even efficiently sampled from. Variational inference addresses this problem by approximating \Pr(w|D, \alpha) with a more tractable distribution Q(w|\beta). The approximation is fitted by minimising the variational free energy F with respect to the parameters \beta, where

F = \left\langle -\ln \frac{\Pr(D|w) \, P(w|\alpha)}{Q(w|\beta)} \right\rangle_{w \sim Q(\beta)}    (2)

and, for some function g of a random variable x with distribution p(x), \langle g \rangle_{x \sim p} denotes the expectation of g over p. A fully Bayesian approach would infer the prior parameters \alpha from a hyperprior; however, in this paper they are found by simply minimising F with respect to \alpha as well as \beta.
4 Minimum Description Length
F can be reinterpreted as a minimum description length loss function [12] by rearranging Eq. (2) and substituting in from Eq. (1) to get

F = \left\langle L^N(w, D) \right\rangle_{w \sim Q(\beta)} + D_{KL}\left( Q(\beta) \,\|\, P(\alpha) \right),    (3)

where D_{KL}(Q(\beta) \| P(\alpha)) is the Kullback-Leibler divergence between Q(\beta) and P(\alpha). Shannon's source coding theorem [23] tells us that the first term on the right hand side of Eq. (3) is a lower bound on the expected amount of information (measured in nats, due to the use of natural logarithms) required to transmit the targets in D to a receiver who knows the inputs, using the outputs of a network whose weights are sampled from Q(\beta). Since this term decreases as the network's prediction accuracy increases, we identify it as the error loss L^E(\beta, D):

L^E(\beta, D) = \left\langle L^N(w, D) \right\rangle_{w \sim Q(\beta)}    (4)

Shannon's bound can almost be achieved in practice using arithmetic coding [26]. The second term on the right hand side of Eq. (3) is the expected number of nats required by a receiver who knows P(\alpha) to pick a sample from Q(\beta). Since this term measures the cost of "describing" the network weights to the receiver, we identify it as the complexity loss L^C(\alpha, \beta):

L^C(\alpha, \beta) = D_{KL}\left( Q(\beta) \,\|\, P(\alpha) \right)    (5)

L^C(\alpha, \beta) can be realised with bits-back coding [25, 10]. Although originally conceived as a thought experiment, bits-back coding has been used for an actual compression scheme [5]. Putting the terms together, F can be rephrased as an MDL loss function L(\alpha, \beta, D) that measures the total number of nats required to transmit the training targets using the network, given \alpha and \beta:

L(\alpha, \beta, D) = L^E(\beta, D) + L^C(\alpha, \beta)    (6)

The network is then trained on D by minimising L(\alpha, \beta, D) with respect to \alpha and \beta, just like
an ordinary neural network loss function. One advantage of using a transmission cost as a loss
2. Unsupervised learning can be treated as a special case where x = \emptyset.
function is that we can immediately determine whether the network has compressed the targets past
a reasonable benchmark (such as that given by an off-the-shelf compressor). If it has, we can be
fairly certain that the network is learning underlying patterns in the data and not simply memorising
the training set. We would therefore expect it to generalise well to new data. In practice we have
found that as long as significant compression is taking place, decreasing L(\alpha, \beta, D) on the training set does not increase L^E(\beta, D) on the test set, and it is therefore unnecessary to sacrifice any training
data for early stopping.
Two transmission costs were ignored in the above discussion. One is the cost of transmitting the
model with w unspecified (for example software that implements the network architecture, the training algorithm etc.). The other is the cost of transmitting the prior. If either of these are used to encode
a significant amount of information about D, the MDL principle will break down and the generalisation guarantees that come with compression will be lost. The easiest way to prevent this is to keep
both costs very small compared to D. In particular the prior should not contain too many parameters.
5
Choice of Distributions
We now derive the form of LE (?, D) and LC (?, ?) for various choices of Q(?) and P (?). We also
derive the gradients of LE (?, D) and LC (?, ?) with respect to ? and the optimal values of ? given
?. All continuous distributions are implicitly assumed to be quantised at some very fine resolution,
QW
and we will limit ourselves to diagonal posteriors of the form Q(?) = i=1 qi (?i ), meaning that
P
W
LC (?, ?) = i=1 DKL (qi (?i )||P (?)).
5.1 Delta Posterior
Perhaps the simplest nontrivial distribution for Q(\beta) is a delta distribution that assigns probability 1 to a particular set of weights w and 0 to all other weights. In this case \beta = w, L^E(\beta, D) = L^N(w, D) and L^C(\alpha, \beta) = L^C(\alpha, w) = -\ln P(w|\alpha) + C, where C is a constant that depends only on the discretisation of Q(\beta). Although C has no effect on the gradient used for training, it is usually large enough to ensure that the network cannot compress the data using the coding scheme described in the previous section.3 If the prior is uniform, and all realisable weight values are equally likely, then L^C(\alpha, \beta) is a constant and we recover ordinary maximum likelihood training.
If the prior is a Laplace distribution then \alpha = \{\mu, b\}, P(w|\alpha) = \prod_{i=1}^{W} \frac{1}{2b} \exp\left( -\frac{|w_i - \mu|}{b} \right) and

L^C(\alpha, w) = W \ln 2b + \frac{1}{b} \sum_{i=1}^{W} |w_i - \mu| + C \;\Longrightarrow\; \frac{\partial L^C(\alpha, w)}{\partial w_i} = \frac{\mathrm{sgn}(w_i - \mu)}{b}    (7)

If \mu = 0 and b is fixed, this is equivalent to ordinary L1 regularisation. However we can instead determine the optimal prior parameters \hat{\alpha} for w as follows: \hat{\mu} = \mathrm{median}(w) (the median weight value) and \hat{b} = \frac{1}{W} \sum_{i=1}^{W} |w_i - \hat{\mu}|.

If the prior is Gaussian then \alpha = \{\mu, \sigma^2\}, P(w|\alpha) = \prod_{i=1}^{W} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(w_i - \mu)^2}{2\sigma^2} \right) and

L^C(\alpha, w) = W \ln\left( \sqrt{2\pi\sigma^2} \right) + \frac{1}{2\sigma^2} \sum_{i=1}^{W} (w_i - \mu)^2 + C \;\Longrightarrow\; \frac{\partial L^C(\alpha, w)}{\partial w_i} = \frac{w_i - \mu}{\sigma^2}    (8)

With \mu = 0 and \sigma^2 fixed this is equivalent to L2 regularisation (also known as weight decay for neural networks). The optimal \hat{\alpha} given w are \hat{\mu} = \frac{1}{W} \sum_{i=1}^{W} w_i and \hat{\sigma}^2 = \frac{1}{W} \sum_{i=1}^{W} (w_i - \hat{\mu})^2.
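The closed-form optimal prior parameters for the two priors above (the median and mean absolute deviation for the Laplace case, the mean and variance for the Gaussian case) can be sketched directly. The helper names are illustrative, not from the paper.

```python
import numpy as np

def optimal_laplace_prior(w):
    """Optimal Laplace prior for a delta posterior: median and
    mean absolute deviation of the weights (Eq. 7 discussion)."""
    mu = np.median(w)
    b = np.mean(np.abs(w - mu))
    return mu, b

def optimal_gaussian_prior(w):
    """Optimal Gaussian prior for a delta posterior: mean and
    (biased) variance of the weights (Eq. 8 discussion)."""
    mu = np.mean(w)
    sigma2 = np.mean((w - mu) ** 2)
    return mu, sigma2
```

Setting the prior to these values minimises the complexity loss for the current weights, which is why they can be re-estimated after every weight update.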
5.2 Gaussian Posterior
A more interesting distribution for Q(\beta) is a diagonal Gaussian. In this case each weight requires a separate mean and variance, so \beta = \{\mu, \sigma^2\} with the mean vector \mu and variance vector \sigma^2 both
3. The floating point resolution of the computer architecture used to train the network could in principle be used to upper-bound the discretisation constant, and hence the compression; but in practice the bound would be prohibitively high.
the same size as w. For a general network architecture we cannot compute either L^E(\beta, D) or its derivatives exactly, so we resort to sampling. Applying Monte-Carlo integration to Eq. (4) gives

L^E(\beta, D) \approx \frac{1}{S} \sum_{k=1}^{S} L^N(w^k, D)    (9)

with w^k drawn independently from Q(\beta). A combination of the Gaussian characteristic function and integration by parts can be used to derive the following identities for the derivatives of multivariate Gaussian expectations [18]:

\nabla_{\mu} \langle V(a) \rangle_{a \sim N} = \langle \nabla_a V(a) \rangle_{a \sim N}, \qquad \nabla_{\Sigma} \langle V(a) \rangle_{a \sim N} = \frac{1}{2} \langle \nabla_a \nabla_a V(a) \rangle_{a \sim N}    (10)

where N is a multivariate Gaussian with mean vector \mu and covariance matrix \Sigma, and V is an arbitrary function of a. Differentiating Eq. (4) and applying these identities yields

\frac{\partial L^E(\beta, D)}{\partial \mu_i} = \left\langle \frac{\partial L^N(w, D)}{\partial w_i} \right\rangle_{w \sim Q(\beta)} \approx \frac{1}{S} \sum_{k=1}^{S} \frac{\partial L^N(w^k, D)}{\partial w_i}    (11)

\frac{\partial L^E(\beta, D)}{\partial \sigma_i^2} = \frac{1}{2} \left\langle \frac{\partial^2 L^N(w, D)}{\partial w_i^2} \right\rangle_{w \sim Q(\beta)} \approx \frac{1}{2} \left\langle \left( \frac{\partial L^N(w, D)}{\partial w_i} \right)^2 \right\rangle_{w \sim Q(\beta)} \approx \frac{1}{2S} \sum_{k=1}^{S} \left( \frac{\partial L^N(w^k, D)}{\partial w_i} \right)^2    (12)

where the first approximation in Eq. (12) comes from substituting the negative diagonal of the empirical Fisher information matrix for the diagonal of the Hessian. This approximation is exact if the conditional distribution Pr(D|w) matches the empirical distribution of D (i.e. if the network perfectly models the data); we would therefore expect it to improve as L^E(\beta, D) decreases. For simple networks whose second derivatives can be calculated efficiently the approximation is unnecessary and the diagonal Hessian can be sampled instead.
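The sampled derivatives of Eqs. (11) and (12) amount to averaging gradients, and half squared gradients, over weight samples drawn from the posterior. The sketch below assumes a hypothetical `grad_loss(w)` that returns dL^N/dw for one weight sample; it is an illustration of the estimator, not the paper's implementation.

```python
import numpy as np

def sampled_error_gradients(mu, sigma2, grad_loss, n_samples, rng):
    """Monte-Carlo estimates of dL^E/dmu (Eq. 11) and dL^E/dsigma^2 (Eq. 12)
    for a diagonal Gaussian posterior with means mu and variances sigma2."""
    g_mu = np.zeros_like(mu)
    g_sigma2 = np.zeros_like(sigma2)
    for _ in range(n_samples):
        # Draw one weight sample w ~ Q(beta)
        w = mu + np.sqrt(sigma2) * rng.standard_normal(mu.shape)
        g = grad_loss(w)
        g_mu += g                  # Eq. (11): average gradient
        g_sigma2 += 0.5 * g ** 2   # Eq. (12): half average squared gradient
    return g_mu / n_samples, g_sigma2 / n_samples
```

For a toy quadratic loss L^N = w^2/2 (so dL^N/dw = w) the estimates converge to mu and (mu^2 + sigma^2)/2 respectively as the sample count grows.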
A simplification of the above distribution is to consider the variances of Q(\beta) fixed and optimise only the means. Then the sampling used to calculate the derivatives in Eq. (11) is equivalent to adding zero-mean, fixed-variance Gaussian noise to the network weights during training. In particular, if the prior P(\alpha) is uniform and a single weight sample is taken for each element of D, then minimising L(\alpha, \beta, D) is identical to minimising L^N(w, D) with weight noise or synaptic noise [13]. Note that the quantisation of the uniform prior adds a large constant to L^C(\alpha, \beta), making it unfeasible to compress the data with our MDL coding scheme; in practice early stopping is required to prevent overfitting when training with weight noise.
If the prior is Gaussian then \alpha = \{\mu, \sigma^2\} and

L^C(\alpha, \beta) = \sum_{i=1}^{W} \left[ \ln \frac{\sigma}{\sigma_i} + \frac{1}{2\sigma^2} \left( (\mu_i - \mu)^2 + \sigma_i^2 - \sigma^2 \right) \right]    (13)

\frac{\partial L^C(\alpha, \beta)}{\partial \mu_i} = \frac{\mu_i - \mu}{\sigma^2}, \qquad \frac{\partial L^C(\alpha, \beta)}{\partial \sigma_i^2} = \frac{1}{2} \left( \frac{1}{\sigma^2} - \frac{1}{\sigma_i^2} \right)    (14)

The optimal prior parameters \hat{\alpha} given \beta are

\hat{\mu} = \frac{1}{W} \sum_{i=1}^{W} \mu_i, \qquad \hat{\sigma}^2 = \frac{1}{W} \sum_{i=1}^{W} \left[ \sigma_i^2 + (\mu_i - \hat{\mu})^2 \right]    (15)
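Eq. (13) is simply the sum of per-weight KL divergences from each posterior Gaussian to the scalar prior Gaussian, and Eq. (15) gives the minimising prior in closed form. Both are easy to sketch (illustrative names, NumPy assumed):

```python
import numpy as np

def complexity_loss(mu_i, sigma2_i, mu, sigma2):
    """L^C(alpha, beta) of Eq. (13): sum over weights of
    KL( N(mu_i, sigma2_i) || N(mu, sigma2) )."""
    return np.sum(0.5 * np.log(sigma2 / sigma2_i)
                  + ((mu_i - mu) ** 2 + sigma2_i - sigma2) / (2 * sigma2))

def optimal_prior(mu_i, sigma2_i):
    """Optimal Gaussian prior parameters of Eq. (15)."""
    mu = np.mean(mu_i)
    sigma2 = np.mean(sigma2_i + (mu_i - mu) ** 2)
    return mu, sigma2
```

When the posterior of every weight coincides with the prior, the complexity loss is zero, as expected of a KL divergence.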
If a Gaussian prior is used with the fixed-variance "weight noise" posterior described above, it is still possible to choose the optimal prior parameters for each \beta. This requires only a slight modification of standard weight-noise training, with the derivatives on the left of Eq. (14) added to the weight gradient and \alpha optimised after every weight update. But because the prior is no longer uniform the network is able to compress the data, making it feasible to dispense with early stopping.
The terms in the sum on the right hand side of Eq. (13) are the complexity costs of individual
network weights. These costs give valuable insight into the internal structure of the network, since
(with a limited budget of bits to spend) the network will assign more bits to more important weights.
Importance can be used, for example, to prune away spurious weights [15] or determine which
inputs are relevant [16].
6 Optimisation
If the derivatives of L^E(\beta, D) are stochastic, we require an optimiser that can tolerate noisy gradient estimates. Steepest descent with momentum [19] and RPROP [20] both work well in practice.
Although stochastic derivatives should in principle be estimated using the same weight samples for the entire dataset, it is in practice much more efficient to pick different weight samples for each (x, y) \in D. If both the prior and posterior are Gaussian this yields

\frac{\partial L(\alpha, \beta, D)}{\partial \mu_i} \approx \frac{\mu_i - \mu}{\sigma^2} + \sum_{(x,y) \in D} \frac{1}{S} \sum_{k=1}^{S} \frac{\partial L^N(w^k, x, y)}{\partial w_i}    (16)

\frac{\partial L(\alpha, \beta, D)}{\partial \sigma_i^2} \approx \frac{1}{2} \left( \frac{1}{\sigma^2} - \frac{1}{\sigma_i^2} \right) + \sum_{(x,y) \in D} \frac{1}{2S} \sum_{k=1}^{S} \left( \frac{\partial L^N(w^k, x, y)}{\partial w_i} \right)^2    (17)

where L^N(w^k, x, y) = -\ln \Pr(y|x, w^k) and a separate set of S weight samples \{w^k\}_{k=1}^{S} is drawn from Q(\beta) for each (x, y). For large datasets it is usually sufficient to set S = 1; however performance can in some cases be substantially improved by using more samples, at the cost of longer training times.
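A sketch combining Eqs. (16) and (17): the gradient of the full MDL loss is the prior (complexity) term plus per-example sampled error terms, with a fresh weight sample for every (x, y). Here `grad_example` is a hypothetical placeholder for dL^N(w, x, y)/dw, and S = 1 is assumed; this is illustrative, not the paper's code.

```python
import numpy as np

def mdl_gradients(mu, sigma2, prior_mu, prior_sigma2, data, grad_example, rng):
    """Stochastic gradients of L(alpha, beta, D) from Eqs. (16) and (17)."""
    # Complexity (prior) terms from Eq. (14)
    g_mu = (mu - prior_mu) / prior_sigma2
    g_sigma2 = 0.5 * (1.0 / prior_sigma2 - 1.0 / sigma2)
    for (x, y) in data:
        # One weight sample per example (S = 1)
        w = mu + np.sqrt(sigma2) * rng.standard_normal(mu.shape)
        g = grad_example(w, x, y)
        g_mu = g_mu + g
        g_sigma2 = g_sigma2 + 0.5 * g ** 2
    return g_mu, g_sigma2
```

With an empty dataset the returned gradients reduce to the prior terms alone, which makes the decomposition easy to check.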
If the data is divided into B equally-sized batches such that D = {b_j}_{j=1}^B, and an "online" optimiser is used, with the parameters updated after each batch gradient calculation, the following online loss function (and corresponding derivatives) should be employed:

$$L(\alpha, \beta, b_j) = \frac{1}{B} L^C(\alpha, \beta) + L^E(\beta, b_j) \qquad (18)$$
Note the 1/B factor for the complexity loss. This is because the weights (to which the complexity cost applies) are only transmitted once for the entire dataset, whereas the error cost must be
transmitted separately for each batch.
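This batch-level bookkeeping can be sketched in a few lines (the function name and the numbers are ours, purely illustrative):

```python
def online_loss(complexity_loss, batch_error_losses):
    """Per-batch loss of Eq. (18): each of the B batches carries 1/B of the
    complexity cost L^C (the weights are transmitted once for the whole
    dataset), plus its own error cost L^E(beta, b_j) in full."""
    B = len(batch_error_losses)
    return [complexity_loss / B + e for e in batch_error_losses]

# Summed over all batches, the per-batch losses recover L^C plus the total error cost.
losses = online_loss(12.0, [1.0, 2.0, 3.0])
```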
During training, the prior parameters α should be set to their optimal values after every update to β. For more complex priors where the optimal α cannot be found in closed form (such as mixture distributions), α and β can instead be optimised simultaneously with gradient descent [17, 10].
Ideally a trained network should be evaluated on some previously unseen input x₀ using the expected distribution ⟨Pr(·|x₀, w)⟩_{w∼Q(β)}. However the maximum a posteriori approximation Pr(·|x₀, w*), where w* is the mode of Q(β), appears to work well in practice (at least for diagonal Gaussian posteriors). This is equivalent to removing weight noise during testing.
7 Pruning
Removing weights from a neural network (a process usually referred to as pruning) has been repeatedly proposed as a means of reducing complexity and thereby improving generalisation [15, 7].
This would seem redundant for variational inference, which automatically limits the network complexity. However pruning can reduce the computational cost and memory demands of the network.
Furthermore we have found that if the network is retrained after pruning, the final performance can
be improved. A possible explanation is that pruning reduces the noise in the gradient estimates
(because the pruned weights are not sampled) without increasing network complexity.
Weights w that are more probable under Q(β) tend to give lower L^N(w, D), and pruning a weight is equivalent to fixing it to zero. These two facts suggest a pruning heuristic where a weight is removed if its probability density at zero is sufficiently high under Q(β). For a diagonal posterior we can define the relative probability of each w_i at zero as the density of q_i(β_i) at zero divided by the density of q_i(β_i) at its mode. We can then define a pruning heuristic by removing all weights whose relative probability at zero exceeds some threshold γ, with 0 ≤ γ ≤ 1. If q_i(β_i) is Gaussian this yields
this yields
?i
?2i
exp ? 2 > ? =? < ?
(19)
2?i
?i
5
"In wage negotiations the industry bargains as a unit with a single union."
Figure 1: Two representations of a TIMIT utterance. Note the lower resolution and greater decorrelation of the MFC coefficients (top) compared to the spectrogram (bottom).
?
where we have used the reparameterisation ? = ?2 ln ?, with ? ? 0. If ? = 0 no weights
are pruned. As ? grows the amount of pruning increases, and the probability of the pruned weight
vector under Q(?) (and therefore the likely network performance) decreases. A good rule of thumb
for how high ? can safely be set is the point at which the pruned weights become less probable than
an average weight sampled from qi (?i ). For a Gaussian this is
q
?
? = 2 ln 2 ? 0.83
(20)
If the network is retrained after pruning, the cost of transmitting which weights have been removed should in principle be added to L^C(α, β) (since this information could be used to overfit the training data). However the extra cost does not depend on the network parameters, and can therefore be ignored for the purposes of optimisation.
When a Gaussian prior is used its mean tends to be near zero. This implies that "cheaper" weights, where q_i(β_i) ≈ P(α), have high relative probability at zero and are thus more likely to be pruned.
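The heuristic of Eqs. (19)–(20) reduces to a one-line test per weight. The sketch below is a standalone toy function; the means and standard deviations are made-up values, not parameters from the experiments:

```python
import math

# Eq. (20): the 'safe' threshold lambda = sqrt(2 ln sqrt(2)) ~ 0.83,
# i.e. gamma = 1/sqrt(2) in the relative-density form of Eq. (19).
SAFE_LAMBDA = math.sqrt(2.0 * math.log(math.sqrt(2.0)))

def prune_mask(mus, sigmas, lam=SAFE_LAMBDA):
    """Pruning rule of Eq. (19) for a diagonal Gaussian posterior:
    weight i is removed when its relative density at zero exceeds gamma,
    equivalently when |mu_i| / sigma_i < lam with lam = sqrt(-2 ln gamma).
    Returns True for pruned (zeroed) weights."""
    return [abs(m) / s < lam for m, s in zip(mus, sigmas)]

# A near-zero mean with non-negligible standard deviation is 'cheap' and
# gets pruned; a confidently non-zero weight survives.
mask = prune_mask([0.001, 0.5, -0.05], [0.1, 0.1, 0.1])
```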
8 Experiments
We tested all the combinations of posterior and prior described in Section 5 on a hierarchical multidimensional recurrent neural network [9] trained to do phoneme recognition on the TIMIT speech
corpus [4]. We also assessed the pruning heuristic from Section 7 by applying it with various thresholds to a trained network and observing the impact on performance and network size.
TIMIT is a popular phoneme recognition benchmark. The core training and test sets (which we used
for our experiments) contain respectively 3696 and 192 phonetically transcribed utterances. We
defined a validation set by randomly selecting 184 sequences from the training set. The reduced set
of 39 phonemes [6] was used during both training and testing. The audio data was presented to the
network in the form of spectrogram images. One such image is contrasted with the mel-frequency
cepstrum representation used for most speech recognition systems in Fig. 1.
Hierarchical multidimensional recurrent neural networks containing Long Short-Term Memory [11]
hidden layers and a CTC output layer [8] have proven effective for offline handwriting recognition [9]. The same architecture is employed here, with a spectrogram in place of a handwriting
image, and phoneme labels in place of characters. Since the network scans through the spectrogram
in all directions, both vertical and horizontal correlations can be captured.
The network topology was identical for all experiments. It was the same as that of the handwriting recognition network in [9] except that the dimensions of the three subsampling windows used to progressively decrease resolution were now 2 × 4, 2 × 4 and 1 × 4, and the CTC layer now contained 40 output units (one for each phoneme, plus an extra for "blank"). This gave a total of 15 layers, 1306 units (not counting the inputs or bias), and 139,536 weights. All network parameters were trained with online steepest descent (weight updates after every sequence) using a learning rate of 10^−4 and a momentum of 0.9. For the networks with stochastic derivatives (i.e. those with Gaussian posteriors) a single weight sample was drawn for each sequence. Prefix search CTC decoding [8] was used to transcribe the test set, with probability threshold 0.995. When parameters in the posterior or prior were fixed, the best value was found empirically. All networks were initialised with random weights (or random weight means if the posterior was Gaussian), chosen from a Gaussian
Figure 2: Error curves for four networks during training (panels: adaptive weight noise, adaptive prior weight noise, weight noise, maximum likelihood). The green, blue and red curves correspond to the average per-sequence error loss L^E(β, D) on the training, test and validation sets respectively. Adaptive weight noise does not overfit, and normal weight noise overfits much more slowly than maximum likelihood. Adaptive weight noise led to longer training times and noisier error curves.
Table 1: Results for different priors and posteriors. All distribution parameters were learned by the network unless fixed values are specified. "Error" is the phoneme error rate on the core test set (total edit distance between the network transcriptions and the target transcriptions, multiplied by 100). "Epochs" is the number of passes through the training set after which the error was recorded. "Ratio" is the compression ratio of the training set transcription targets relative to a uniform code over the 39 phoneme labels (≈ 5.3 bits per phoneme); this could only be calculated for the networks with Gaussian priors and posteriors.
Name                         Posterior          Prior                    Error  Epochs  Ratio
Adaptive L1                  Delta              Laplace                  49.0   7       -
Adaptive L2                  Delta              Gauss                    35.1   421     -
Adaptive mean L2             Delta              Gauss σ² = 0.1           28.0   53      -
L2                           Delta              Gauss μ = 0, σ² = 0.1    27.4   59      -
Maximum likelihood           Delta              Uniform                  27.1   44      -
L1                           Delta              Laplace μ = 0, b = 1/12  26.0   545     -
Adaptive mean L1             Delta              Laplace b = 1/12         25.4   765     -
Weight noise                 Gauss σᵢ = 0.075   Uniform                  25.4   220     -
Adaptive prior weight noise  Gauss σᵢ = 0.075   Gauss                    24.7   260     0.542
Adaptive weight noise        Gauss              Gauss                    23.8   384     0.286
with mean 0, standard deviation 0.1. For the adaptive Gaussian posterior, the standard deviations of the weights were initialised to 0.075 then optimised during training; this ensured that the variances (which are the standard deviations squared) remained positive. The networks with Gaussian posteriors and priors did not require early stopping and were trained on all 3696 utterances in the training set; all other networks used the validation set for early stopping and hence were trained on 3512 utterances. These were also the only networks for which the transmission cost of the network weights could be measured (since it did not depend on the quantisation of the posterior or prior). The networks were evaluated on the test set using the parameters giving lowest L^E(β, D) on the training set (or validation set if present). All experiments were stopped after 100 training epochs with no improvement in either L(α, β, D), L^E(β, D) or the number of transcription errors on the training or validation set. The reason for such conservative stopping criteria was that the error curves of some of the networks were extremely noisy (see Fig. 2).
Table 1 shows the results for the different posteriors and priors. L2 regularisation was no better than unregularised maximum likelihood, while L1 gave a slight improvement; this is consistent with our previous experience of recurrent neural networks. The fully adaptive L1 and L2 networks performed very badly, apparently because the priors became excessively narrow (σ² ≈ 0.003 for L2 and b ≈ 0.002 for L1). L1 with fixed variance and adaptive mean was somewhat better than L1 with mean fixed at 0 (although the adaptive mean was very close to zero, settling around 0.0064). The networks with Gaussian posteriors outperformed those with delta posteriors, with the best score obtained using a fully adaptive posterior.
Table 2 shows the effect of pruning on the trained "adaptive weight noise" network from Table 1. The pruned networks were retrained using the same optimisation as before, with the error recorded before and after retraining. As well as being highly effective at removing weights, pruning led to improved performance following retraining in some cases. Notice the slow increase in initial error up to λ = 0.5 and sharp rise thereafter; this is consistent with the "safe" threshold of λ ≈ 0.83
Table 2: Effect of Network Pruning. "λ" is the threshold used for pruning. "Weights" is the number of weights left after pruning and "Percent" is the same figure expressed as a percentage of the original weights. "Initial Error" is the test error immediately after pruning and "Retrain Error" is the test error following "Retrain Epochs" of subsequent retraining. "Bits/weight" is the average bit cost (as defined in Eq. (13)) of the unpruned weights.
λ     Weights  Percent  Initial error  Retrain error  Retrain Epochs  Bits/weight
0     139,536  100%     23.8           23.8           0               0.53
0.01  107,974  77.4%    23.8           24.0           972             0.72
0.05  63,079   45.2%    23.9           23.5           35              1.15
0.1   52,984   37.9%    23.9           23.3           351             1.40
0.2   43,182   30.9%    23.9           23.7           740             1.82
0.5   31,120   22.3%    24.0           23.3           125             2.21
1     22,806   16.3%    24.5           24.1           403             3.19
2     16,029   11.5%    28.0           24.5           335             3.55
Figure 3: Weight costs in a 2D LSTM recurrent connection (axis labels: cells, input gates, H forget gates, V forget gates, output gates). Each dot corresponds to a weight; the lighter the colour the more bits the weight costs. The vertical axis shows the LSTM cell the weight comes from; the horizontal axis shows the LSTM unit the weight goes to. Note the low cost of the "V forget gates" (these mediate vertical correlations between frequency bands in the spectrogram, which are apparently less important to transcription than horizontal correlations between timesteps); the high cost of the "cells" (LSTM's main processing units); the bright horizontal and vertical bands (corresponding to units with "important" outputs and inputs respectively); and the bright diagonal through the cells (corresponding to self connections).
mentioned in Section 7. The lowest final phoneme error rate of 23.3 would until recently have been
the best recorded on TIMIT; however the application of deep belief networks has now improved the
benchmark to 20.5 [3].
Acknowledgements
I would like to thank Geoffrey Hinton, Christian Osendorfer, Justin Bayer and Thomas Rückstieß
for helpful discussions and suggestions. Alex Graves is a Junior Fellow of the Canadian Institute for
Advanced Research.
Figure 4: The "cell" weights from Fig. 3 pruned at different thresholds. Black dots are pruned weights, white dots are remaining weights. "Cheaper" weights tend to be removed first as λ grows.
References
[1] D. Barber and C. M. Bishop. Ensemble learning in Bayesian neural networks, pages 215-237. Springer-Verlag, Berlin, 1998.
[2] D. Barber and B. Schottky. Radial basis functions: A Bayesian treatment. In NIPS, 1997.
[3] G. E. Dahl, M. Ranzato, A.-r. Mohamed, and G. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 469-477. 2010.
[4] DARPA-ISTO. The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus (TIMIT), speech disc cd1-1.1 edition, 1990.
[5] B. J. Frey. Graphical models for machine learning and digital communication. MIT Press, Cambridge, MA, USA, 1998.
[6] K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989.
[7] C. L. Giles and C. W. Omlin. Pruning recurrent neural networks for improved generalization performance. IEEE Transactions on Neural Networks, 5:848-851, 1994.
[8] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, ICML 2006, Pittsburgh, USA, 2006.
[9] A. Graves and J. Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In NIPS, pages 545-552, 2008.
[10] G. E. Hinton and D. van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In COLT, pages 5-13, 1993.
[11] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997.
[12] A. Honkela and H. Valpola. Variational learning and bits-back coding: An information-theoretic view to Bayesian learning. IEEE Transactions on Neural Networks, 15:800-810, 2004.
[13] K.-C. Jim, C. Giles, and B. Horne. An analysis of noise in recurrent neural networks: convergence and generalization. IEEE Transactions on Neural Networks, 7(6):1424-1438, Nov 1996.
[14] N. D. Lawrence. Variational Inference in Probabilistic Models. PhD thesis, University of Cambridge, 2000.
[15] Y. Le Cun, J. Denker, and S. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 598-605. Morgan Kaufmann, San Mateo, CA, 1990.
[16] D. J. C. MacKay. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Neural Computation, 1995.
[17] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight sharing. Neural Computation, 4:173-193, 1992.
[18] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786-792, 2009.
[19] D. Plaut, S. Nowlan, and G. E. Hinton. Experiments on learning by back propagation. Technical Report CMU-CS-86-126, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1986.
[20] M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In International Symposium on Neural Networks, 1993.
[21] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465-471, 1978.
[22] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors, pages 696-699. MIT Press, Cambridge, MA, USA, 1988.
[23] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27, 1948.
[24] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory, pages 194-281. MIT Press, Cambridge, MA, USA, 1986.
[25] C. S. Wallace. Classification by minimum-message-length inference. In Proceedings of the international conference on Advances in computing and information, ICCI '90, pages 72-81, New York, NY, USA, 1990. Springer-Verlag New York, Inc.
[26] I. H. Witten, R. M. Neal, and J. G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30:520-540, June 1987.
Optimal Filtering in the Salamander Retina

Fred Rieke, W. Geoffrey Owen and William Bialek

Departments of Physics and Molecular and Cell Biology
University of California
Berkeley, California 94720
and
NEC Research Institute
4 Independence Way
Princeton, New Jersey 08540
Abstract
The dark-adapted visual system can count photons with a reliability limited by thermal noise in the rod photoreceptors - the processing circuitry between the rod cells and the brain is essentially noiseless and in fact may be close to optimal. Here we design an optimal signal processor which estimates the time-varying light intensity at the retina based on the rod signals. We show that the first stage of optimal signal processing involves passing the rod cell output through a linear filter with characteristics determined entirely by the rod signal and noise spectra. This filter is very general; in fact it is the first stage in any visual signal processing task at low photon flux. We identify the output of this first-stage filter with the intracellular voltage response of the bipolar cell, the first anatomical stage in retinal signal processing. From recent data on tiger salamander photoreceptors we extract the relevant spectra and make parameter-free, quantitative predictions of the bipolar cell response to a dim, diffuse flash. Agreement with experiment is essentially perfect. As far as we know this is the first successful predictive theory for neural dynamics.
1 Introduction
A number of biological sensory cells perform at a level which can be called optimal - their performance approaches limits set by the laws of physics [1]. In some cases
the behavioral performance of an organism, not just the performance of the sensory cells, also approaches fundamental limits. Such performance indicates that neural computation can reach a level of precision where the reliability of the computed output is limited by noise in the sensory input rather than by inefficiencies in the processing algorithm or noise in the processing hardware [2]. These observations suggest that we study algorithms for optimal signal processing. If we can make the notion of optimal processing precise we will have the elements of a predictive (and hence unequivocally testable) theory for what the nervous system should compute. This is in contrast to traditional modeling approaches which involve adjustment of free parameters to fit experimental data.

To further develop these ideas we consider the vertebrate retina. Since the classic experiments of Hecht, Shlaer and Pirenne we have known that the dark-adapted visual system can count small numbers of photons [3]. Recent experiments confirm Barlow's suggestion [4,5] that the reliability of behavioral decision making reaches limits imposed by dark noise in the photoreceptors due to thermal isomerization of the photopigment [6]. If dark-adapted visual performance is limited by thermal noise in the sensory cells then the subsequent layers of signal processing circuitry must be extremely reliable. Rather than trying to determine precise limits to reliability, we follow the approach introduced in [7] and use the notion of "optimal computation" to design the optimal processor of visual stimuli. These theoretical arguments result in parameter-free predictions for the dynamics of signal transfer from the rod photoreceptor to the bipolar cell, the first stage in visual signal processing. We compare these predictions directly with measurements on the intact retina of the tiger salamander Ambystoma tigrinum [8,9].
2 Design of the optimal processor
All of an organism's knowledge of the visual world derives from the currents I_n(t) flowing in the photoreceptor cells (labeled n). Visual signal processing consists of estimating various aspects of the visual scene from observation of these currents. Furthermore, to be of use to the organism these estimates must be carried out in real time. The general problem then is to formulate an optimal strategy for estimating some functional G[R(r, t)] of the time and position dependent photon arrival rate R(r, t) from real time observation of the currents I_n(t).

We can make considerable analytic progress towards solving this general problem using probabilistic methods [7,2]. Start by writing an expression for the probability of the functional G[R(r, t)] conditional on the currents I_n(t), P{G[R(r, t)]|I_n(t)}. Expanding for low signal-to-noise ratio (SNR) we find that the first term in the expansion of P{G|I} depends only on a filtered version of the rod currents,

$$P\{G[R(r,t)] \mid I_n(t)\} = \delta_G[F * I_n] + \text{higher order corrections}, \qquad (1)$$

where * denotes convolution; the filter F depends only on the signal and noise characteristics of the photoreceptors, as described below. Thus the estimation task divides naturally into two stages - a universal "pre-processing" stage and a task-dependent stage. The universal stage is independent both of the stimulus R(r, t) and of the particular functional G[R] we wish to estimate. Intuitively this separation makes sense; in conventional signal processing systems detector outputs are first
Figure 1: Schematic view of the photon arrival rate estimation problem. Photons arriving at a time-varying rate R(t) drive the rod current I(t); a reconstruction algorithm operating on the rod current produces the estimated rate R_est(t).
processed by a filter whose shape is motivated by general SNR considerations. Thus the view of retinal signal processing which emerges from this calculation is a preprocessing or "cleaning up" stage followed by more specialized processing stages. We emphasize that this separation is a mathematical fact, not a model we have imposed.

To fill in some of the details of the calculation we turn to the simplest example of the estimation tasks discussed above - estimation of the photon arrival rate itself (Fig. 1): Photons from a light source are incident on a small patch of retina at a time-varying rate R(t), resulting in a current I(t) in a particular rod cell. The theoretical problem is to determine the optimal strategy for estimating R(t) based on the currents I(t) in a small collection of rod cells. With an appropriate definition of "optimal" we can pose the estimation problem mathematically and look for analytic or numerical solutions. One approach is the conditional probability calculation discussed above [7]. Alternatively we can solve this problem using functional methods. Here we outline the functional calculation.
Start by writing the estimated rate as a filtered version of the rod currents:

$$R_{est}(t) = \int d\tau\, F_1(\tau)\, I(t-\tau) + \int\!\!\int d\tau\, d\tau'\, F_2(\tau,\tau')\, I(t-\tau)\, I(t-\tau') + \cdots \qquad (2)$$
In t.he low SNR limit. t.he rods respond linearly (t.hey count photons), and we expect.
that. t.he linear term dominates the series (2) . \Ve then solve analyt.ically for t.he
filt.er FdT) which minimizes \2 = (J dt IR(t) - Rest (t)12) - i.t. t.he filt.er which
satisfies 6\2j6Fdr) = o. The averages ( .. . ) are taken over t.he ensemble of stimuli
R(t). The result of this optimization is*

$$F_1(\tau) = \int \frac{d\omega}{2\pi}\, e^{-i\omega\tau}\, \frac{\langle \tilde{R}(\omega)\, \tilde{I}^*(\omega) \rangle}{\langle |\tilde{I}(\omega)|^2 \rangle} \qquad (3)$$
In the photon counting regime the rod currents are described as a sum of impulse responses I₀(t − t_μ) occurring at the photon arrival times t_μ, plus a noise term δI(t). Expanding for low SNR we find

$$F_1(\tau) = \int \frac{d\omega}{2\pi}\, e^{-i\omega\tau}\, S_R(\omega)\, \frac{\tilde{i}_0(\omega)}{S_I(\omega)} + \cdots \qquad (4)$$

where S_R(ω) is the spectral density of fluctuations in the photon arrival rate, ĩ₀(ω) is the Fourier transform of I₀(t), and S_I(ω) is the spectral density of current noise δI(t) in the rod.
The filter (4) naturally separates into two distinct stages: A "first" stage
$$F_{\mathrm{bip}}(\omega) = \tilde{I}_0^*(\omega)/S_{\delta I}(\omega) \qquad (5)$$
which depends only on the signal and noise properties of the rod cell, and a "second" stage $S_R(\omega)$ which contains our a priori knowledge of the stimulus. The first stage filter is the matched filter given the rod signal and noise characteristics; each frequency component in the output of this filter is weighted according to its input SNR.
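The ensemble averages in eq. (3) can be estimated directly from simulated data. Below is a minimal numerical sketch, not the experimental analysis: the exponential single-photon response, the white Gaussian stimulus and the noise scale are all illustrative assumptions. The filter is built in the frequency domain as $\langle \tilde{R}\tilde{I}^*\rangle/\langle|\tilde{I}|^2\rangle$ and compared against a naive inverse filter, which ignores the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 256, 200

# Illustrative single-photon current I0(t): an exponential impulse response.
i0 = np.exp(-np.arange(n) / 8.0)
i0_f = np.fft.fft(i0)

num = np.zeros(n, dtype=complex)   # accumulates <R~(w) I~*(w)>
den = np.zeros(n)                  # accumulates <|I~(w)|^2>
for _ in range(trials):
    r_f = np.fft.fft(rng.normal(size=n))                          # stimulus R(t)
    i_f = i0_f * r_f + np.fft.fft(rng.normal(scale=2.0, size=n))  # rod current
    num += r_f * np.conj(i_f)
    den += np.abs(i_f) ** 2

f1_f = num / den                   # frequency-domain filter, as in eq. (3)

# Compare against a naive inverse filter on a fresh trial.
r = rng.normal(size=n)
i_f = i0_f * np.fft.fft(r) + np.fft.fft(rng.normal(scale=2.0, size=n))
err_opt = np.mean((r - np.real(np.fft.ifft(f1_f * i_f))) ** 2)
err_naive = np.mean((r - np.real(np.fft.ifft(i_f / i0_f))) ** 2)
```

Because the optimal filter shrinks frequency bands with low input SNR, its reconstruction error is smaller than that of the inverse filter, which amplifies noise wherever $\tilde{I}_0(\omega)$ is small.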
Recall from the probabilistic argument above that optimal estimation of some arbitrary aspect of the scene, such as motion, also results in a separation into two processing stages. Specifically, estimation of any functional of light intensity involves only a filtered version of the rod currents. This filter is precisely the universal filter $F_{\mathrm{bip}}(\tau)$ defined in (5). This result makes intuitive sense since the first stage of filtering is simply "cleaning up" the rod signals prior to subsequent computation. Intuitively we expect that this filtering occurs at an early stage of visual processing. The first opportunity to filter the rod signals occurs in the transfer of signals between the rod and bipolar cells; we identify the transfer function between these cells with the first stage of our optimal filter. More precisely we identify the intracellular
voltage response of the bipolar cell with the output of the filter $F_{\mathrm{bip}}(\tau)$. In response to a dim flash of light at $t = 0$ the average bipolar cell voltage response should then be
$$V_{\mathrm{bip}}(t) \propto \int d\tau\, F_{\mathrm{bip}}(\tau)\, I_0(t - \tau). \qquad (6)$$
Nowhere in this prediction process do we insert any information about the bipolar response - the shape of our prediction is governed entirely by signal and noise properties of the rod cell and the theoretical principle of optimality.
3  Extracting the filter parameters and predicting the bipolar response
To complete our prediction of the dim flash bipolar response we extract the rod single photon current $I_0(t)$ and rod current noise spectrum $S_{\delta I}(\omega)$ from experimental data.
*We define the Fourier transform as $\tilde{I}(\omega) = \int dt\, e^{+i\omega t}\, I(t)$.
Optimal Filtering in the Salamander Retina
[Figure 2: plot comparing the predicted bipolar response, the measured bipolar response, and measured rod responses; normalized response vs. time (sec).]
Figure 2: Comparison of predicted dim flash bipolar voltage response (based entirely
on rod signal and noise characteristics) and measured bipolar voltage response. For
reference we show rod voltage responses from two different cells which show the typical
variations from cell to cell and thus indicate the variations we should expect in different
bipolar cells. The measured responses are averages of many presentations of a diffuse
flash occurring at t = 0 and resulting in the absorption of an average of about 5 photons
in the rod cell. The error bars are one standard deviation.
To compare our prediction directly with experiment we must obtain the rod characteristics under identical recording conditions as the bipolar measurement. This excludes suction pipette measurements which measure the currents directly, but affect the rod response dynamics [10, 11]. The bipolar voltage response is measured intracellularly in the eyecup preparation [8]; our approach is to use intracellular voltage recordings to characterize the rod network and thus convert voltages to currents, as in [12]. This approach to the problem may seem overly complicated - why did we formulate the theory in terms of currents and not voltages? It is important that we formulate our theory in terms of the individual rod signal and noise characteristics. The electrical coupling between rod cells in the retina causes the voltage noise in nearby rods to be correlated; each rod, however, independently injects current noise into the network.
The impedances connecting adjacent rod cells, the impedance of the rod cell itself and the spatial layout and connections between rods determine the relationship between currents and voltages in the network. The rods lie nearly on a square
lattice with lattice constant 20 &#956;m. Using this result we extract the impedances from two independent experiments [12]. Once we have the impedances we "decorrelate" the voltage noise to calculate the uncorrelated current noise. We also convert the measured single photon voltage response to the corresponding current $I_0(t)$. It is important to realize that the impedance characteristics of the rod network are experimentally determined, and are not in any sense free parameters!
After completing these calculations the elements of our bipolar prediction are obtained under identical conditions to the experimental bipolar response, and we can make a direct comparison between the two; there are no free parameters in this prediction. As shown in Fig. 2, the predicted bipolar response (6) is in excellent agreement with the measured response; all deviations are well within the error bars.
4  Concluding remarks
We began by posing a theoretical question: How can we best recover the photon arrival rate from observations of the rod signals? The answer, in the form of a linear filter which we apply to the rod current, divides into two stages - a stage which is matched to the rod signal and noise characteristics, and a stage which depends on the particular characteristics of the photon source we are observing. The first-stage filter in fact is the universal pre-processor for all visual processing tasks at low SNR. We identified this filter with the rod-bipolar transfer function, and based on this hypothesis predicted the bipolar response to a dim, diffuse flash. Our prediction agrees extremely well with experimental bipolar responses. We emphasize once more that this is not a "model" of the bipolar cell; in fact there is nothing in our theory about the physical properties of bipolar cells. Rather our approach results in parameter-free predictions of the computational properties of these cells from the general theoretical principle of optimal computation. As far as we know this is the first successful quantitative prediction from a theory of neural computation.
Thus far our results are limited to the dark-adapted regime; however the theoretical analysis presented here depends only on low SNR. This observation suggests a follow-up experiment to test the role of adaptation in the rod-bipolar transfer function. If the retina is first adapted to a constant background illumination and then shown dim flashes on top of the background we can use the analysis presented here to predict the adapted bipolar response from the adapted rod impulse response and noise. Such an experiment would answer a number of interesting questions about retinal processing: (1) Does the processing remain optimal at higher light levels? (2) Does the bipolar cell still function as the universal pre-processor? (3) Do the rod and bipolar cells adapt together in such a way that the optimal first-stage filter remains unchanged, or does the rod-bipolar transfer function also adapt?
Can these ideas be extended to other systems, particularly spiking cells? A number of other signal processing systems exhibit nearly optimal performance [2]. One example we are currently studying is the extraction of movement information from the array of photoreceptor voltages in the insect compound eye [13]. In related work, Atick and Redlich [14] have argued that the receptive field characteristics of retinal ganglion cells can be quantitatively predicted from a principle of optimal encoding (see also [15]). A more general question we are currently pursuing is the efficiency of the coding of sensory information in neural spike trains. Our preliminary results indicate that the information rate in a spike train can be as high as 80% of the maximum information rate possible given the noise characteristics of spike generation [16]. From these examples we believe that "optimal performance" provides a general theoretical framework which can be used to predict the significant computational dynamics of cells in many neural systems.
Acknowledgments
We thank R. Miller and W. Hare for sharing their data and ideas, D. Warland and R. de Ruyter van Steveninck for helping develop many of the methods we have used in this analysis, and J. Atick, J. Hopfield and D. Tank for many helpful discussions. W. B. thanks the Aspen Center for Physics for the environment which catalyzed these discussions. Work at Berkeley was supported by the National Institutes of Health through Grant No. EY 03785 to WGO, and by the National Science Foundation through a Presidential Young Investigator Award to WB, supplemented by funds from Cray Research, Sun Microsystems, and the NEC Research Institute, and through a Graduate Fellowship to FR.
References
1. W. Bialek. Ann. Rev. Biophys. Biophys. Chem., 16:455, 1987.
2. W. Bialek. In E. Jen, editor, 1989 Lectures in Complex Systems, SFI Studies in the Sciences of Complexity, volume 2, pages 513-595. Addison-Wesley, Reading, Mass., 1990.
3. S. Hecht, S. Shlaer, and M. Pirenne. J. Gen. Physiol., 25:819, 1942.
4. H. B. Barlow. J. Opt. Soc. Am., 46:634, 1956.
5. H. B. Barlow. Nature, 334:296, 1988.
6. A.-C. Aho, K. Donner, C. Hyden, L. O. Larsen, and T. Reuter. Nature, 334:348, 1988.
7. W. Bialek and W. Owen. Biophys. J., in press.
8. W. A. Hare and W. G. Owen. J. Physiol., 421:223, 1990.
9. M. Capovilla, W. A. Hare, and W. G. Owen. J. Physiol., 391:125, 1987.
10. D. Baylor, T. D. Lamb, and K.-W. Yau. J. Physiol., 288:613-634, 1979.
11. D. Baylor, G. Matthews, and K. Yau. J. Physiol., 309:591, 1980.
12. V. Torre and W. G. Owen. Biophys. J., 41:305-324, 1983.
13. W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. In D. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 36-43. Morgan Kaufmann, San Mateo, Ca., 1990.
14. J. J. Atick and N. Redlich. Neural Computation, 2:308, 1990.
15. W. Bialek, D. Ruderman, and A. Zee. In D. Touretzky, editor, Advances in Neural Information Processing Systems 3. Morgan Kaufmann, San Mateo, Ca., 1991.
16. F. Rieke, W. Yamada, K. Moortgat, E. R. Lewis, and W. Bialek. Proceedings of the 9th International Symposium on Hearing, 1991.
Variational Gaussian Process Dynamical Systems
Andreas C. Damianou*
Department of Computer Science
University of Sheffield, UK
[email protected]
Michalis K. Titsias
School of Computer Science
University of Manchester, UK
[email protected]
Neil D. Lawrence*
Department of Computer Science
University of Sheffield, UK
[email protected]
Abstract
High dimensional time series are endemic in applications of machine learning such as robotics
(sensor data), computational biology (gene expression data), vision (video sequences) and
graphics (motion capture data). Practical nonlinear probabilistic approaches to this data are
required. In this paper we introduce the variational Gaussian process dynamical system. Our
work builds on recent variational approximations for Gaussian process latent variable models
to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical
prior in the latent space. The approach also allows for the appropriate dimensionality of the
latent space to be automatically determined. We demonstrate the model on a human motion
capture data set and a series of high resolution video sequences.
1  Introduction
Nonlinear probabilistic modeling of high dimensional time series data is a key challenge for the machine learning community. A standard approach is to simultaneously apply a nonlinear dimensionality reduction to the
data whilst governing the latent space with a nonlinear temporal prior. The key difficulty for such approaches is
that analytic marginalization of the latent space is typically intractable. Markov chain Monte Carlo approaches
can also be problematic as latent trajectories are strongly correlated making efficient sampling a challenge. One
promising approach to these time series has been to extend the Gaussian process latent variable model [1, 2]
with a dynamical prior for the latent space and seek a maximum a posteriori (MAP) solution for the latent
points [3, 4, 5]. Ko and Fox [6] further extend these models for fully Bayesian filtering in a robotics setting. We
refer to this class of dynamical models based on the GP-LVM as Gaussian process dynamical systems (GPDS).
However, the use of a MAP approximation for training these models presents key problems. Firstly, since the
latent variables are not marginalised, the parameters of the dynamical prior cannot be optimized without the
risk of overfitting. Further, the dimensionality of the latent space cannot be determined by the model: adding
further dimensions always increases the likelihood of the data. In this paper we build on recent developments
in variational approximations for Gaussian processes [7, 8] to introduce a variational Gaussian process dynamical system (VGPDS) where latent variables are approximately marginalized through optimization of a rigorous
lower bound on the marginal likelihood. As well as providing a principled approach to handling uncertainty in
the latent space, this allows both the parameters of the latent dynamical process and the dimensionality of the
latent space to be determined. The approximation enables the application of our model to time series containing
millions of dimensions and thousands of time points. We illustrate this by modeling human motion capture data
and high dimensional video sequences.
*Also at the Sheffield Institute for Translational Neuroscience, University of Sheffield, UK.
2  The Model
Assume a multivariate time series dataset $\{y_n, t_n\}_{n=1}^N$, where $y_n \in \mathbb{R}^D$ is a data vector observed at time $t_n \in \mathbb{R}_+$. We are especially interested in cases where each $y_n$ is a high dimensional vector and, therefore, we assume that there exists a low dimensional manifold that governs the generation of the data. Specifically, a temporal latent function $x(t) \in \mathbb{R}^Q$ (with $Q \ll D$) governs an intermediate hidden layer when generating the data, and the $d$th feature from the data vector $y_n$ is then produced from $x_n = x(t_n)$ according to
$$y_{nd} = f_d(x_n) + \epsilon_{nd}, \quad \epsilon_{nd} \sim \mathcal{N}(0, \beta^{-1}), \qquad (1)$$
where $f_d(x)$ is a latent mapping from the low dimensional space to the $d$th dimension of the observation space and $\beta$ is the inverse variance of the white Gaussian noise. We do not want to make strong assumptions about the functional form of the latent functions $(x, f)$.&#185; Instead we would like to infer them in a fully Bayesian non-parametric fashion using Gaussian processes [9]. Therefore, we assume that $x$ is a multivariate Gaussian process indexed by time $t$ and $f$ is a different multivariate Gaussian process indexed by $x$, and we write
$$x_q(t) \sim \mathcal{GP}(0, k_x(t_i, t_j)), \quad q = 1, \ldots, Q, \qquad (2)$$
$$f_d(x) \sim \mathcal{GP}(0, k_f(x_i, x_j)), \quad d = 1, \ldots, D. \qquad (3)$$
Here, the individual components of the latent function $x$ are taken to be independent sample paths drawn from a Gaussian process with covariance function $k_x(t_i, t_j)$. Similarly, the components of $f$ are independent draws from a Gaussian process with covariance function $k_f(x_i, x_j)$. These covariance functions, parametrized by parameters $\theta_x$ and $\theta_f$ respectively, play very distinct roles in the model. More precisely, $k_x$ determines the properties of each temporal latent function $x_q(t)$. For instance, the use of an Ornstein-Uhlenbeck covariance function yields a Gauss-Markov process for $x_q(t)$, while the squared-exponential covariance function gives rise to very smooth and non-Markovian processes. In our experiments, we will focus on the squared exponential covariance function (RBF), the Mat&#233;rn 3/2 which is only once differentiable, and a periodic covariance function [9, 10] which can be used when data exhibit strong periodicity. These covariance functions take the form:
$$k_{x(\mathrm{rbf})}(t_i, t_j) = \sigma_{\mathrm{rbf}}^2\, e^{-\frac{(t_i - t_j)^2}{2\ell_t^2}}, \qquad k_{x(\mathrm{mat})}(t_i, t_j) = \sigma_{\mathrm{mat}}^2 \left(1 + \frac{\sqrt{3}\,|t_i - t_j|}{\ell_t}\right) e^{-\frac{\sqrt{3}\,|t_i - t_j|}{\ell_t}},$$
$$k_{x(\mathrm{per})}(t_i, t_j) = \sigma_{\mathrm{per}}^2\, e^{-\frac{1}{2}\frac{\sin^2\left(\frac{2\pi}{T}(t_i - t_j)\right)}{\ell_t}}. \qquad (4)$$
The covariance function $k_f$ determines the properties of the latent mapping $f$ that maps each low dimensional variable $x_n$ to the observed vector $y_n$. We wish this mapping to be non-linear but smooth, and thus a suitable choice is the squared exponential covariance function
$$k_f(x_i, x_j) = \sigma_{\mathrm{ard}}^2\, e^{-\frac{1}{2}\sum_{q=1}^{Q} w_q (x_{i,q} - x_{j,q})^2}, \qquad (5)$$
which assumes a different scale $w_q$ for each latent dimension. This, as in the variational Bayesian formulation of the GP-LVM [8], enables an automatic relevance determination procedure (ARD), i.e. it allows Bayesian training to "switch off" unnecessary dimensions by driving the values of the corresponding scales to zero.
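The covariance functions in eqs. (4)-(5) are straightforward to evaluate numerically. The sketch below is illustrative (the function names and default hyperparameters are our own, not from the paper); note how setting an ARD weight $w_q$ to zero makes the kernel ignore latent dimension $q$.

```python
import numpy as np

def k_rbf(t1, t2, var=1.0, ell=1.0):
    """Squared-exponential (RBF) covariance of eq. (4)."""
    d2 = (t1[:, None] - t2[None, :]) ** 2
    return var * np.exp(-d2 / (2.0 * ell ** 2))

def k_matern32(t1, t2, var=1.0, ell=1.0):
    """Matern 3/2 covariance: sample paths only once differentiable."""
    r = np.sqrt(3.0) * np.abs(t1[:, None] - t2[None, :]) / ell
    return var * (1.0 + r) * np.exp(-r)

def k_periodic(t1, t2, var=1.0, ell=1.0, period=1.0):
    """Periodic covariance of eq. (4) for strongly periodic data."""
    s2 = np.sin((2.0 * np.pi / period) * (t1[:, None] - t2[None, :])) ** 2
    return var * np.exp(-0.5 * s2 / ell)

def k_ard(X1, X2, var=1.0, w=None):
    """ARD squared-exponential of eq. (5); w_q -> 0 switches off dimension q."""
    w = np.ones(X1.shape[1]) if w is None else np.asarray(w, dtype=float)
    d2 = (((X1[:, None, :] - X2[None, :, :]) ** 2) * w).sum(-1)
    return var * np.exp(-0.5 * d2)
```

All four return symmetric positive semi-definite Gram matrices when called with identical inputs, which is what the model requires of a valid covariance function.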
The matrix $Y \in \mathbb{R}^{N \times D}$ will collectively denote all observed data so that its $n$th row corresponds to the data point $y_n$. Similarly, the matrix $F \in \mathbb{R}^{N \times D}$ will denote the mapping latent variables, i.e. $f_{nd} = f_d(x_n)$, associated with observations $Y$ from (1). Analogously, $X \in \mathbb{R}^{N \times Q}$ will store all low dimensional latent variables $x_{nq} = x_q(t_n)$. Further, we will refer to columns of these matrices by the vectors $y_d, f_d, x_q \in \mathbb{R}^N$. Given the latent variables we assume independence over the data features, and given time we assume independence over latent dimensions to give
$$p(Y, F, X|t) = p(Y|F)\, p(F|X)\, p(X|t) = \prod_{d=1}^{D} p(y_d|f_d)\, p(f_d|X) \prod_{q=1}^{Q} p(x_q|t), \qquad (6)$$
where $t \in \mathbb{R}^N$ and $p(y_d|f_d)$ is a Gaussian likelihood function term defined from (1). Further, $p(f_d|X)$ is a marginal GP prior such that
$$p(f_d|X) = \mathcal{N}(f_d|0, K_{NN}), \qquad (7)$$
&#185;To simplify our notation, we often write $x$ instead of $x(t)$ and $f$ instead of $f(x)$. Later we also use a similar convention for the covariance functions by often writing them as $k_f$ and $k_x$.
where $K_{NN} = k_f(X, X)$ is the covariance matrix defined by the covariance function $k_f$ and similarly $p(x_q|t)$ is the marginal GP prior associated with the temporal function $x_q(t)$,
$$p(x_q|t) = \mathcal{N}(x_q|0, K_t), \qquad (8)$$
where $K_t = k_x(t, t)$ is the covariance matrix obtained by evaluating the covariance function $k_x$ on the observed times $t$.
Bayesian inference using the above model poses a huge computational challenge as, for instance, marginalization of the variables $X$, that appear non-linearly inside the covariance matrix $K_{NN}$, is troublesome. Practical approaches that have been considered until now (e.g. [5, 3]) marginalise out only $F$ and seek a MAP solution for $X$. In the next section we describe how efficient variational approximations can be applied to marginalize $X$ by extending the framework of [8].
2.1  Variational Bayesian training
The key difficulty with the Bayesian approach is propagating the prior density $p(X|t)$ through the nonlinear mapping. This mapping gives the expressive power to the model, but simultaneously renders the associated marginal likelihood,
$$p(Y|t) = \int p(Y|F)\, p(F|X)\, p(X|t)\, dX\, dF, \qquad (9)$$
intractable. We now invoke the variational Bayesian methodology to approximate the integral. Following a standard procedure [11], we introduce a variational distribution $q(\Theta)$ and compute the Jensen's lower bound $\mathcal{F}_v$ on the logarithm of (9),
$$\mathcal{F}_v(q, \theta) = \int q(\Theta) \log \frac{p(Y|F)\, p(F|X)\, p(X|t)}{q(\Theta)}\, dX\, dF, \qquad (10)$$
where $\theta$ denotes the model's parameters. However, the above form of the lower bound is problematic because $X$ (in the GP term $p(F|X)$) appears non-linearly inside the covariance matrix $K_{NN}$ making the integration over $X$ difficult. As shown in [8], this intractability is removed by applying the "data augmentation" principle.
More precisely, we augment the joint probability model in (6) by including $M$ extra samples of the GP latent mapping $f$, known as inducing points, so that $u_m \in \mathbb{R}^D$ is such a sample. The inducing points are evaluated at a set of pseudo-inputs $\tilde{X} \in \mathbb{R}^{M \times Q}$. The augmented joint probability density takes the form
$$p(Y, F, U, X, \tilde{X}|t) = \prod_{d=1}^{D} p(y_d|f_d)\, p(f_d|u_d, X)\, p(u_d|\tilde{X})\, p(X|t), \qquad (11)$$
where $p(u_d|\tilde{X})$ is a zero-mean Gaussian with a covariance matrix $K_{MM}$ constructed using the same function as for the GP prior (7). By dropping $\tilde{X}$ from our expressions, we write the augmented GP prior analytically (see [9]) as
$$p(f_d|u_d, X) = \mathcal{N}\!\left(f_d \,\middle|\, K_{NM} K_{MM}^{-1} u_d,\; K_{NN} - K_{NM} K_{MM}^{-1} K_{MN}\right). \qquad (12)$$
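Equation (12) is the standard sparse-GP conditional and is cheap to evaluate once the kernel matrices are formed. The following sketch (function name, jitter level and test setup are our own illustrative choices) computes its mean and covariance with a Cholesky factorization rather than an explicit inverse.

```python
import numpy as np

def sparse_gp_conditional(Knn, Knm, Kmm, u, jitter=1e-8):
    """Mean and covariance of the conditional in eq. (12):
    N(f_d | Knm Kmm^{-1} u_d, Knn - Knm Kmm^{-1} Kmn)."""
    M = Kmm.shape[0]
    L = np.linalg.cholesky(Kmm + jitter * np.eye(M))
    A = np.linalg.solve(L, Knm.T)              # A = L^{-1} Kmn
    mean = A.T @ np.linalg.solve(L, u)         # Knm Kmm^{-1} u
    cov = Knn - A.T @ A                        # Knn - Knm Kmm^{-1} Kmn
    return mean, cov
```

When the inducing inputs coincide with the training inputs, $K_{NM} = K_{MM} = K_{NN}$ and the conditional collapses onto $u_d$ with vanishing covariance, which is a convenient sanity check.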
A key result in [8] is that a tractable lower bound (computed analogously to (10)) can be obtained through the variational density
$$q(\Theta) = q(F, U, X) = q(F|U, X)\, q(U)\, q(X) = \prod_{d=1}^{D} p(f_d|u_d, X)\, q(u_d)\, q(X), \qquad (13)$$
where $q(X) = \prod_{q=1}^{Q} \mathcal{N}(x_q|\mu_q, S_q)$ and $q(u_d)$ is an arbitrary variational distribution. Titsias and Lawrence [8] assume full independence for $q(X)$ and the variational covariances are diagonal matrices. Here, in contrast, the posterior over the latent variables will have strong correlations, so $S_q$ is taken to be a $N \times N$ full covariance matrix. Optimization of the variational lower bound provides an approximation to the true posterior $p(X|Y)$ by $q(X)$. In the augmented probability model, the "difficult" term $p(F|X)$ appearing in (10) is now replaced with (12) and, eventually, it cancels out with the first factor of the variational distribution (13) so that $F$ can be marginalised out analytically. Given the above and after breaking the logarithm in (10), we obtain the final form of the lower bound (see supplementary material for more details)
$$\mathcal{F}_v(q, \theta) = \hat{\mathcal{F}}_v - \mathrm{KL}\big(q(X)\,\|\,p(X|t)\big), \qquad (14)$$
with $\hat{\mathcal{F}}_v = \int q(X) \log p(Y|F)\, p(F|X)\, dX\, dF$. Both terms in (14) are now tractable. Note that the first of the above terms involves the data while the second one only involves the prior. All the information regarding data point correlations is captured in the KL term and the connection with the observations comes through the variational distribution. Therefore, the first term in (14) has the same analytical solution as the one derived in [8]. Equation (14) can be maximized by using gradient-based methods&#178;. However, not factorizing $q(X)$ across data points yields $O(N^2)$ variational parameters to optimize. This issue is addressed in the next section.
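Since both $q(x_q)$ and the prior $p(x_q|t)$ are multivariate Gaussians, the KL term in (14) has a closed form. A minimal sketch for a single latent dimension follows (the function name and jitter are illustrative assumptions, not part of the paper):

```python
import numpy as np

def kl_gauss(mu, S, Kt, jitter=1e-8):
    """KL( N(mu, S) || N(0, Kt) ): the penalty term of eq. (14)
    for a single latent dimension q."""
    N = len(mu)
    Lp = np.linalg.cholesky(Kt + jitter * np.eye(N))
    Lq = np.linalg.cholesky(S + jitter * np.eye(N))
    alpha = np.linalg.solve(Lp, mu)      # alpha @ alpha = mu^T Kt^{-1} mu
    W = np.linalg.solve(Lp, Lq)          # (W**2).sum() = tr(Kt^{-1} S)
    logdet_p = 2.0 * np.log(np.diag(Lp)).sum()
    logdet_q = 2.0 * np.log(np.diag(Lq)).sum()
    return 0.5 * ((W ** 2).sum() + alpha @ alpha - N + logdet_p - logdet_q)
```

The KL vanishes when the variational posterior matches the prior and grows as the posterior mean or covariance departs from it, which is exactly the regularizing role it plays in the bound.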
2.2  Reparametrization and Optimization
The optimization involves the model parameters $\theta = (\beta, \theta_f, \theta_x)$, the variational parameters $\{\mu_q, S_q\}_{q=1}^Q$ from $q(X)$ and the inducing points $\tilde{X}$.&#179; Optimization of the variational parameters appears challenging, due to their large number and the correlations between them. However, by reparametrizing our $O(N^2)$ variational parameters according to the framework described in [12] we can obtain a set of $O(N)$ less correlated variational parameters. Specifically, we first take the derivatives of the variational bound (14) w.r.t. $S_q$ and $\mu_q$ and set them to zero, to find the stationary points,
$$S_q = \left(K_t^{-1} + \Lambda_q\right)^{-1} \quad \text{and} \quad \mu_q = K_t \bar{\mu}_q, \qquad (15)$$
where $\Lambda_q = -2\frac{\partial \mathcal{F}_v(q,\theta)}{\partial S_q}$ is a $N \times N$ diagonal, positive matrix and $\bar{\mu}_q = \frac{\partial \mathcal{F}_v(q,\theta)}{\partial \mu_q}$ is a $N$-dimensional vector. The above stationary conditions tell us that, since $S_q$ depends on a diagonal matrix $\Lambda_q$, we can reparametrize it using only the $N$-dimensional diagonal of that matrix, denoted by $\lambda_q$. Then, we can optimise the $2(Q \times N)$ parameters $(\lambda_q, \bar{\mu}_q)$ and obtain the original parameters using (15).
2.3  Learning from Multiple Sequences
Our objective is to model multivariate time series. A given data set may consist of a group of independent observed sequences, each with a different length (e.g. in human motion capture data several walks from a subject). Let, for example, the dataset be a group of $S$ independent sequences $(Y^{(1)}, \ldots, Y^{(S)})$. We would like our model to capture the underlying commonality of these data. We handle this by allowing a different temporal latent function for each of the independent sequences, so that $X^{(s)}$ is the set of latent variables corresponding to the sequence $s$. These sets are a priori assumed to be independent since they correspond to separate sequences, i.e. $p(X^{(1)}, X^{(2)}, \ldots, X^{(S)}) = \prod_{s=1}^{S} p(X^{(s)})$, where we dropped the conditioning on time for simplicity. This factorisation leads to a block-diagonal structure for the time covariance matrix $K_t$, where each block corresponds to one sequence. In this setting, each block of observations $Y^{(s)}$ is generated from its corresponding $X^{(s)}$ according to $Y^{(s)} = F^{(s)} + \epsilon$, where the latent function which governs this mapping is shared across all sequences and $\epsilon$ is Gaussian noise.
3  Predictions
Our algorithm models the temporal evolution of a dynamical system. It should be capable of generating completely new sequences or reconstructing missing observations from partially observed data. For generating a novel sequence given training data the model requires a time vector $t_*$ as input and computes a density $p(Y_*|Y, t, t_*)$. For reconstruction of partially observed data the time-stamp information is additionally accompanied by a partially observed sequence $Y_*^{p} \in \mathbb{R}^{N_* \times D_p}$ from the whole $Y_* = (Y_*^{p}, Y_*^{m})$, where $p$ and $m$ are set indices indicating the present (i.e. observed) and missing dimensions of $Y_*$ respectively, so that $p \cup m = \{1, \ldots, D\}$. We reconstruct the missing dimensions by computing the Bayesian predictive distribution $p(Y_*^{m}|Y_*^{p}, Y, t_*, t)$. The predictive densities can also be used as estimators for tasks like generative Bayesian classification. Whilst time-stamp information is always provided, in the next section we drop its dependence to avoid notational clutter.
&#178;See supplementary material for more detailed derivation of (14) and for the equations for the gradients.
&#179;We will use the term "variational parameters" to refer only to the parameters of $q(X)$ although the inducing points are also variational parameters.
3.1  Predictions Given Only the Test Time Points
To approximate the predictive density, we will need to introduce the underlying latent function values $F_* \in \mathbb{R}^{N_* \times D}$ (the noise-free version of $Y_*$) and the latent variables $X_* \in \mathbb{R}^{N_* \times Q}$. We write the predictive density as
$$p(Y_*|Y) = \int p(Y_*, F_*, X_*|Y)\, dF_*\, dX_* = \int p(Y_*|F_*)\, p(F_*|X_*, Y)\, p(X_*|Y)\, dF_*\, dX_*. \qquad (16)$$
The term $p(F_*|X_*, Y)$ is approximated by the variational distribution
$$q(F_*|X_*) = \int \prod_{d=1}^{D} p(f_{*,d}|u_d, X_*)\, q(u_d)\, du_d = \prod_{d=1}^{D} q(f_{*,d}|X_*), \qquad (17)$$
where $q(f_{*,d}|X_*)$ is a Gaussian that can be computed analytically, since in our variational framework the optimal setting for $q(u_d)$ is also found to be a Gaussian (see suppl. material for complete forms). As for the term $p(X_*|Y)$ in eq. (16), it is approximated by a Gaussian variational distribution $q(X_*)$,
Q
Q Z
Q
Y
Y
Y
q(X? ) =
q(x?,q ) =
p(x?,q |xq )q(xq )dxq =
hp(x?,q |xq )iq(xq ) ,
(18)
q=1
q=1
q=1
where p(x_{*,q}|x_q) is a Gaussian found from the conditional GP prior (see [9]) and q(X) is also Gaussian. We
can, thus, work out analytically the mean and variance for (18), which turn out to be:

μ_{x_{*,q}} = K_{*N} b̄_q    (19)

var(x_{*,q}) = K_{**} − K_{*N} (K_t + Λ_q^{−1})^{−1} K_{N*}    (20)

where K_{*N} = k_x(t_*, t), K_{N*} = K_{*N}^⊤ and K_{**} = k_x(t_*, t_*). Notice that these equations have exactly the
same form as found in standard GP regression problems. Once we have analytic forms for the posteriors in (16),
the predictive density is approximated as

p(Y_*|Y) = ∫ p(Y_*|F_*) q(F_*|X_*) q(X_*) dF_* dX_* = ∫ p(Y_*|F_*) ⟨q(F_*|X_*)⟩_{q(X_*)} dF_*,    (21)
which is a non-Gaussian integral that cannot be computed analytically. However, following the same argument
as in [9, 13], we can calculate analytically its mean and covariance:

E(F_*) = B^⊤ Ψ_1^*    (22)

Cov(F_*) = B^⊤ (Ψ_2^* − Ψ_1^* (Ψ_1^*)^⊤) B + ψ_0^* I − Tr( (K_{MM}^{−1} − (K_{MM} + βΨ_2)^{−1}) Ψ_2^* ) I,    (23)

where B = β (K_{MM} + βΨ_2)^{−1} Ψ_1^⊤ Y, ψ_0^* = ⟨k_f(X_*, X_*)⟩, Ψ_1^* = ⟨K_{M*}⟩ and Ψ_2^* = ⟨K_{M*} K_{*M}⟩. All
expectations are taken w.r.t. q(X_*) and can be calculated analytically, while K_{M*} denotes the cross-covariance
matrix between the training inducing inputs X̃ and X_*. The Ψ quantities are calculated analytically (see suppl.
material). Finally, since Y_* is just a noisy version of F_*, the mean and covariance of (21) are computed as:
E(Y_*) = E(F_*) and Cov(Y_*) = Cov(F_*) + β^{−1} I_{N_*}.
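Equations (19)–(20) have the standard GP-regression form: the variational dynamics posterior is propagated to the test times t_* through the conditional GP prior. The sketch below illustrates this computation numerically for a single latent dimension; the kernel, time stamps, variational mean and the diagonal term Λ_q^{−1} are all stand-ins chosen for illustration, not values or code from the paper.

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    # Squared-exponential kernel k_x over 1-D time inputs.
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ell**2)

t = np.linspace(0.0, 1.0, 20)        # training time stamps
t_star = np.linspace(0.0, 1.2, 7)    # test time stamps (extrapolates slightly)
mu_bar = np.sin(2 * np.pi * t)       # stand-in for the variational mean of x_q

Kt = rbf(t, t)                        # K_t
KsN = rbf(t_star, t)                  # K_{*N}
Kss = rbf(t_star, t_star)             # K_{**}
Lam_inv = 0.1 * np.eye(len(t))        # stand-in for Lambda_q^{-1}

A = np.linalg.solve(Kt + Lam_inv, np.eye(len(t)))  # (K_t + Lambda_q^{-1})^{-1}
b_q = A @ mu_bar                      # stand-in for b_q in eq. (19)
mean = KsN @ b_q                      # mirrors eq. (19)
var = np.diag(Kss - KsN @ A @ KsN.T)  # diagonal of eq. (20)
```

Beyond the training range the cross-covariance K_{*N} decays to zero, so the variance reverts to the prior k_x(t_*, t_*), exactly as in standard GP regression.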
3.2 Predictions Given the Test Time Points and Partially Observed Outputs
The expression for the predictive density p(Y_*^m|Y_*^p, Y) is similar to (16),

p(Y_*^m|Y_*^p, Y) = ∫ p(Y_*^m|F_*^m) p(F_*^m|X_*, Y_*^p, Y) p(X_*|Y_*^p, Y) dF_*^m dX_*,    (24)
and is analytically intractable. To obtain an approximation, we firstly need to apply variational inference and
approximate p(X_*|Y_*^p, Y) with a Gaussian distribution. This requires the optimisation of a new variational
lower bound that accounts for the contribution of the partially observed data Y_*^p. This lower bound approximates
the true marginal likelihood p(Y_*^p, Y) and has exactly analogous form with the lower bound computed only on
the training data Y. Moreover, the variational optimisation requires the definition of the variational distribution
q(X_*, X) which needs to be optimised and is fully correlated across X and X_*. After the optimisation, the
approximation to the true posterior p(X_*|Y_*^p, Y) is given from the marginal q(X_*). A much faster but less
accurate method would be to decouple the test from the training latent variables by imposing the factorisation
q(X_*, X) = q(X)q(X_*). This is not used, however, in our current implementation.
4 Handling Very High Dimensional Datasets
Our variational framework avoids the typical cubic complexity of Gaussian processes, allowing relatively large
training sets (thousands of time points, N). Further, the model scales only linearly with the number of dimensions D. Specifically, the number of dimensions only matters when performing calculations involving the data
matrix Y. In the final form of the lower bound (and consequently in all of the derived quantities, such as gradients) this matrix only appears in the form Y Y^⊤, which can be precomputed. This means that, when N ≪ D,
we can calculate Y Y^⊤ only once and then substitute Y with the SVD (or Cholesky decomposition) of Y Y^⊤. In
this way, we can work with an N × N instead of an N × D matrix. Practically speaking, this allows us to work
with data sets involving millions of features. In our experiments we model directly the pixels of HD quality
video, exploiting this trick.
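The substitution described above can be sketched in a few lines; the sizes below are arbitrary, and the final check confirms that any term depending on Y only through Y Y^⊤ is unchanged by the substitution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 10_000                    # few time points, very many dimensions
Y = rng.standard_normal((N, D))

# The lower bound depends on Y only through the N x N Gram matrix Y Y^T,
# so compute it once ...
G = Y @ Y.T

# ... and replace Y by any N x N factor L with L L^T = Y Y^T
# (here a Cholesky factor; add jitter if G is only positive semi-definite).
L = np.linalg.cholesky(G + 1e-10 * np.eye(N))

# Every quantity of the form A (Y Y^T) B is unchanged under the substitution:
A = rng.standard_normal((3, N))
assert np.allclose(A @ G, A @ (L @ L.T))
```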
5 Experiments
We consider two different types of high dimensional time series, a human motion capture data set consisting
of different walks and high resolution video sequences. The experiments are intended to explore the various
properties of the model and to evaluate its performance in different tasks (prediction, reconstruction, generation
of data). Matlab source code for repeating the following experiments and links to the video files are available
on-line from http://staffwww.dcs.shef.ac.uk/people/N.Lawrence/vargplvm/.
5.1 Human Motion Capture Data
We followed [14, 15] in considering motion capture data of walks and runs taken from subject 35 in the CMU
motion capture database. We treated each motion as an independent sequence. The data set was constructed and
preprocessed as described in [15]. This results in 2,613 separate 59-dimensional frames split into 31 training
sequences with an average length of 84 frames each.
The model is jointly trained, as explained in section 2.3, on both walks and runs, i.e. the algorithm learns a
common latent space for these motions. At test time we investigate the ability of the model to reconstruct test
data from a previously unseen sequence given partial information for the test targets. This is tested once by
providing only the dimensions which correspond to the body of the subject and once by providing those that
correspond to the legs. We compare with results in [15], which used MAP approximations for the dynamical
models, and against nearest neighbour. We can also indirectly compare with the binary latent variable model
(BLV) of [14] which used a slightly different data preprocessing. We assess the performance using the cumulative error per joint in the scaled space defined in [14] and by the root mean square error in the angle space
suggested by [15]. Our model was initialized with nine latent dimensions. We performed two runs, once using
the Matérn covariance function for the dynamical prior and once using the RBF. From table 1 we see that the
variational Gaussian process dynamical system considerably outperforms the other approaches. The appropriate
latent space dimensionality for the data was automatically inferred by our models. The model which employed
an RBF covariance to govern the dynamics retained four dimensions, whereas the model that used the Matérn
kept only three. The other latent dimensions were completely switched off by the ARD parameters. The best
performance for the legs and the body reconstruction was achieved by the VGPDS model that used the Matérn
and the RBF covariance function respectively.
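The switch-off behaviour of the ARD parameters can be illustrated with a minimal ARD squared-exponential kernel (a generic sketch, not the paper's implementation): a weight w_q near zero makes the kernel, and hence the model, insensitive to latent dimension q.

```python
import numpy as np

def ard_rbf(x, y, w, var=1.0):
    # ARD squared-exponential: one precision weight w_q per latent dimension;
    # driving w_q to zero switches dimension q off.
    return var * np.exp(-0.5 * np.sum(w * (x - y) ** 2))

w = np.array([1.0, 0.5, 0.0])        # third latent dimension switched off
x = np.array([0.3, -1.0, 5.0])
y = np.array([0.3, -1.0, -5.0])      # differs only in the switched-off dim

# The kernel treats x and y as identical, so that dimension cannot influence
# the reconstruction:
assert np.isclose(ard_rbf(x, y, w), ard_rbf(x, x, w))
```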
5.2 Modeling Raw High Dimensional Video Sequences
For our second set of experiments we considered video sequences. Such sequences are typically preprocessed
before modeling to extract informative features and reduce the dimensionality of the problem. Here we work
directly with the raw pixel values to demonstrate the ability of the VGPDS to model data with a vast number of
features. This also allows us to directly sample video from the learned model.
Firstly, we used the model to reconstruct partially observed frames from test video sequences [Footnote 4]. For the first
video discussed here we gave as partial information approximately 50% of the pixels, while for the other two
we gave approximately 40% of the pixels on each frame. The mean squared error per pixel was measured to

[Footnote 4] "Missa" dataset: cipr.rpi.edu. "Ocean": cogfilms.com. "Dog": fitfurlife.com. See details in supplementary. The logo appearing in the "dog" images in the experiments that follow has been added with post-processing.
Table 1: Errors obtained for the motion capture dataset considering nearest neighbour in the angle space (NN) and in the
scaled space (NN sc.), GPLVM, BLV and VGPDS. CL / CB are the leg and body datasets as preprocessed in [14], L and B
the corresponding datasets from [15]. SC corresponds to the error in the scaled space, as in Taylor et al., while RA is the
error in the angle space. The best error per column is in bold.

Data               | CL    | CB    | L     | L     | B     | B
Error Type         | SC    | SC    | SC    | RA    | SC    | RA
-------------------|-------|-------|-------|-------|-------|------
BLV                | 11.7  | 8.8   | -     | -     | -     | -
NN sc.             | 22.2  | 20.5  | -     | -     | -     | -
GPLVM (Q = 3)      | -     | -     | 11.4  | 3.40  | 16.9  | 2.49
GPLVM (Q = 4)      | -     | -     | 9.7   | 3.38  | 20.7  | 2.72
GPLVM (Q = 5)      | -     | -     | 13.4  | 4.25  | 23.4  | 2.78
NN sc.             | -     | -     | 13.5  | 4.44  | 20.8  | 2.62
NN                 | -     | -     | 14.0  | 4.11  | 30.9  | 3.20
VGPDS (RBF)        | -     | -     | 8.19  | 3.57  | 10.73 | 1.90
VGPDS (Matérn 3/2) | -     | -     | 6.99  | 2.88  | 14.22 | 2.23
compare with the k-nearest neighbour (NN) method, for k ∈ (1, .., 5) (we only present the error achieved
for the best choice of k in each case). The datasets considered are the following: firstly, the "Missa" dataset,
a standard benchmark used in image processing. This is a 103,680-dimensional video, showing a woman
talking for 150 frames. The data is challenging as there are translations in the pixel space. We also considered
an HD video of dimensionality 9 × 10^5 that shows an artificially created scene of ocean waves, as well as
a 230,400-dimensional video showing a dog running for 60 frames. The latter is approximately periodic in
nature, containing several paces from the dog. For the first two videos we used the Matérn and RBF covariance
functions respectively to model the dynamics and interpolated to reconstruct blocks of frames chosen from the
whole sequence. For the "dog" dataset we constructed a compound kernel k_x = k_x(rbf) + k_x(periodic), where
the RBF term is employed to capture any divergence from the approximately periodic pattern. We then used
our model to reconstruct the last 7 frames, extrapolating beyond the original video. As can be seen in table
2, our method outperformed NN in all cases. The results are also demonstrated visually in figure 1 and the
reconstructed videos are available in the supplementary material.
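The nearest-neighbour baseline used for comparison can be sketched as follows: for each partially observed test frame, find the training frame closest in the observed pixels and copy its missing pixels. This is a generic 1-NN reconstruction in pixel space under our reading of the comparison; function and variable names are illustrative.

```python
import numpy as np

def nn_reconstruct(train_frames, test_obs, obs_idx, miss_idx):
    # Squared distances between each test frame (observed pixels only)
    # and every training frame, shape (n_test, n_train).
    d = ((train_frames[:, obs_idx][None, :, :] -
          test_obs[:, None, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    # Fill the missing pixels from the nearest training frame.
    return train_frames[nearest][:, miss_idx]

train = np.random.default_rng(1).random((60, 100))  # 60 frames, 100 "pixels"
obs_idx, miss_idx = np.arange(50), np.arange(50, 100)
test_obs = train[:3, obs_idx]                       # queries seen in training
recon = nn_reconstruct(train, test_obs, obs_idx, miss_idx)
```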
Table 2: The mean squared error per pixel for VGPDS and NN for the three datasets (measured only in the missing inputs).
The number of latent dimensions selected by our model is in parenthesis.

      | Missa         | Ocean        | Dog
VGPDS | 2.52 (Q = 12) | 9.36 (Q = 9) | 4.01 (Q = 6)
NN    | 2.63          | 9.53         | 4.15
As can be seen in figure 1, VGPDS predicts pixels which are smoothly connected with the observed part of the
image, whereas the NN method cannot fit the predicted pixels in the overall context.

As a second task, we used our generative model to create new samples and generate a new video sequence. This
is most effective for the "dog" video as the training examples were approximately periodic in nature. The model
was trained on 60 frames (time-stamps [t_1, t_60]) and we generated new frames which correspond to the next 40
time points in the future. The only input given for this generation of future frames was the time-stamp vector,
[t_61, t_100]. The results show a smooth transition from training to test and amongst the test video frames. The
resulting video of the dog continuing to run is sharp and high quality. This experiment demonstrates the ability
of the model to reconstruct massively high dimensional images without blurring. Frames from the result are
shown in figure 2. The full video is available in the supplementary material.
6 Discussion and Future Work
We have introduced a fully Bayesian approach for modeling dynamical systems through probabilistic nonlinear
dimensionality reduction. Marginalizing the latent space and reconstructing data using Gaussian processes
Figure 1: (a) and (c) demonstrate the reconstruction achieved by VGPDS and NN respectively for the most challenging
frame (b) of the "missa" video, i.e. when translation occurs. (d) shows another example of the reconstruction achieved by
VGPDS given the partially observed image. (e) (VGPDS) and (f) (NN) depict the reconstruction achieved for a frame of
the "ocean" dataset. Finally, we demonstrate the ability of the model to automatically select the latent dimensionality by
showing the initial lengthscales (fig: (g)) of the ARD covariance function and the values obtained after training (fig: (h)) on
the "dog" data set.
Figure 2: The last frame of the training video (a) is smoothly followed by the first frame (b) of the generated video. A
subsequent generated frame can be seen in (c).
results in a very generic model for capturing complex, non-linear correlations even in very high dimensional
data, without having to perform any data preprocessing or exhaustive search for defining the model's structure
and parameters.

Our method's effectiveness has been demonstrated in two tasks: firstly, in modeling human motion capture data
and, secondly, in reconstructing and generating raw, very high dimensional video sequences. A promising future
direction to follow would be to enhance our formulation with domain-specific knowledge encoded, for example,
in more sophisticated covariance functions or in the way that data are being preprocessed. Thus, we can obtain
application-oriented methods to be used for tasks in areas such as robotics, computer vision and finance.
Acknowledgments
Research was partially supported by the University of Sheffield Moody endowment fund and the Greek State
Scholarships Foundation (IKY). We also thank Colin Litster and "Fit Fur Life" for allowing us to use their video
files as datasets. Finally, we thank the reviewers for their insightful comments.
References
[1] N. D. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research, vol. 6, pp. 1783-1816, 2005.
[2] N. D. Lawrence, "Gaussian process latent variable models for visualisation of high dimensional data," in Advances in Neural Information Processing Systems, pp. 329-336, MIT Press, 2004.
[3] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models," in NIPS, pp. 1441-1448, MIT Press, 2006.
[4] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 283-298, Feb. 2008.
[5] N. D. Lawrence, "Hierarchical Gaussian process latent variable models," in Proceedings of the International Conference in Machine Learning, pp. 481-488, Omnipress, 2007.
[6] J. Ko and D. Fox, "GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models," Auton. Robots, vol. 27, pp. 75-90, July 2009.
[7] M. K. Titsias, "Variational learning of inducing variables in sparse Gaussian processes," in Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, vol. 5, pp. 567-574, JMLR W&CP, 2009.
[8] M. K. Titsias and N. D. Lawrence, "Bayesian Gaussian process latent variable model," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 844-851, JMLR W&CP 9, 2010.
[9] C. E. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[10] D. J. C. MacKay, "Introduction to Gaussian processes," in Neural Networks and Machine Learning (C. M. Bishop, ed.), NATO ASI Series, pp. 133-166, Kluwer Academic Press, 1998.
[11] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st ed. 2006, corr. 2nd printing ed., Oct. 2007.
[12] M. Opper and C. Archambeau, "The variational Gaussian approximation revisited," Neural Computation, vol. 21, no. 3, pp. 786-792, 2009.
[13] A. Girard, C. E. Rasmussen, J. Quiñonero-Candela, and R. Murray-Smith, "Gaussian process priors with uncertain inputs - application to multiple-step ahead time series forecasting," in Neural Information Processing Systems, 2003.
[14] G. W. Taylor, G. E. Hinton, and S. Roweis, "Modeling human motion using binary latent variables," in Advances in Neural Information Processing Systems, vol. 19, MIT Press, 2007.
[15] N. D. Lawrence, "Learning for larger datasets with the Gaussian process latent variable model," in Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 243-250, Omnipress, 2007.
Unsupervised learning models of primary cortical
receptive fields and receptive field plasticity
Andrew Saxe, Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Y. Ng
Department of Computer Science
Stanford University
{asaxe, mbhand, rmudur, bipins, ang}@cs.stanford.edu
Abstract
The efficient coding hypothesis holds that neural receptive fields are adapted to
the statistics of the environment, but is agnostic to the timescale of this adaptation,
which occurs on both evolutionary and developmental timescales. In this work we
focus on that component of adaptation which occurs during an organism's lifetime, and show that a number of unsupervised feature learning algorithms can
account for features of normal receptive field properties across multiple primary
sensory cortices. Furthermore, we show that the same algorithms account for
altered receptive field properties in response to experimentally altered environmental statistics. Based on these modeling results we propose these models as
phenomenological models of receptive field plasticity during an organism's lifetime. Finally, due to the success of the same models in multiple sensory areas, we
suggest that these algorithms may provide a constructive realization of the theory,
first proposed by Mountcastle [1], that a qualitatively similar learning algorithm
acts throughout primary sensory cortices.
1 Introduction
Over the last twenty years, researchers have used a number of unsupervised learning algorithms to
model a range of neural phenomena in early sensory processing. These models have succeeded in
replicating many features of simple cell receptive fields in primary visual cortex [2, 3], as well as
cochlear nerve fiber responses in the subcortical auditory system [4]. Though these algorithms do
not perfectly match the experimental data (see [5]), they continue to improve in recent work (e.g.
[6, 7]). However, each phenomenon has generally been fit by a different algorithm, and there has
been little comparison of an individual algorithm's breadth in simultaneously capturing different
types of data. In this paper we test whether a single learning algorithm can provide a reasonable
fit to data from three different primary sensory cortices. Further, we ask whether such algorithms
can account not only for typical data from normal environments but also for experimental data from
animals raised with drastically different environmental statistics.
Our motivation for exploring the breadth of each learning algorithm's applicability is partly biological. Recent reviews of the experimental literature regarding the functional consequences of plasticity have remarked on the surprising similarity in plasticity outcomes across sensory cortices [8, 9].
These empirical results raise the possibility that a single phenomenological model of plasticity (a
"learning algorithm" in our terminology) might account for receptive field properties independent of
modality. Finding such a model, if it exists, could yield broad insight into early sensory processing
strategies. As an initial step in this direction, we evaluate the match between current unsupervised
learning algorithms and receptive field properties in visual, auditory, and somatosensory cortex. We
find that many current algorithms achieve qualitatively similar matches to receptive field properties
in all three modalities, though differences between the models and experimental data remain.
In the second part of this paper, we examine the sensitivity of these algorithms to changes in their
input statistics. Most previous work that uses unsupervised learning algorithms to explain neural
1
receptive fields makes no claim about the relative contributions of adaptation on evolutionary as
compared to developmental timescales, but rather models the end point of these complex processes,
that is, the receptive field ultimately measured in the adult animal. In this work, we consider the alternative view that significant adaptation occurs during an organism's lifetime, i.e., that the learning
algorithm operates predominantly during development rather than over the course of evolution.
One implication of lifetime adaptation is that experimental manipulations of early sensory experience should result in altered receptive field properties. We therefore ask whether current unsupervised learning algorithms can reproduce appropriately altered receptive field properties in response
to experimentally altered inputs. Our results show that the same unsupervised learning algorithm can
model normal and altered receptive fields, yielding an account of sensory receptive fields focused
heavily on activity-dependent plasticity processes operating during an organism's lifetime.
2 Modeling approach
We use the same three-stage processing pipeline to model each modality; the first stage models peripheral end-receptors, namely rods and cones in the retina, hair cells in the cochlea, and mechanoreceptors in glabrous skin; the second stage crudely models subcortical processing as a whitening
transformation of the data; and the third stage models cortical receptive field plasticity mechanisms
as an unsupervised learning algorithm. We note that the first two stages cannot do justice to the
complexities of subcortical processing, and the simple approximation built into these stages limits
the quality of fit we can expect from the models.
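The second, whitening stage can be sketched as follows. This is a generic ZCA-flavoured PCA whitening routine written for concreteness; the `eps` regulariser and patch dimensions are illustrative, not the paper's settings.

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    # Crude subcortical model: decorrelate sensor responses and equalize
    # variance across components (rotation back gives the ZCA variant).
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return Xc @ W

X = np.random.default_rng(0).random((1000, 16))  # stand-in receptor patches
Xw = pca_whiten(X)
C = Xw.T @ Xw / Xw.shape[0]                      # should be close to identity
```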
We consider five unsupervised learning algorithms: independent component analysis [10], sparse
autoencoder neural networks [11], restricted Boltzmann machines (RBMs) [12], K-means [13], and
sparse coding [2]. These algorithms were chosen on two criteria. First, all of the algorithms share the
property of learning a sparse representation of the input, though they clearly differ in their details,
and have at least qualitatively been shown to yield Gabor-like filters when applied to naturalistic
visual input. Second, we selected algorithms to span a number of reasonable approaches and popular
formalisms, i.e., efficient coding ideas, backpropagation in artificial neural networks, probabilistic
generative models, and clustering methods. As we will show in the rest of the paper, in fact these
five algorithms turn out to yield very similar results, with no single algorithm being decisively better.
Each algorithm contains a number of parameters which control the learning process, which we fit to
the experimental data by performing extensive grid searches through the parameter space. To obtain
an estimate of the variability in our results, we trained multiple models at each parameter setting but
with different randomly drawn datasets and different initial weights. All error bars are the standard
error of the mean. The results reported in this paper are for the best-fitting parameter settings for
each algorithm per modality. We worried that we might overfit the experimental data due to the
large number of models we trained (≈ 60,000). As one check against this, we performed a cross-validation-like experiment by choosing the parameters of each algorithm to maximize the fit to one
modality, and then evaluating the performance of these parameters on the other two modalities. We
found that, though quantitatively the results are slightly worse as expected, qualitatively the results
follow the same patterns of which phenomena are well-fit (see supplementary material). Because
we have fit model parameters to experimental data, we cannot assess the efficiency of the resulting
code. Rather, our aim is to evaluate the single learning algorithm hypothesis, which is orthogonal to
the efficient coding hypothesis. A learning algorithm could potentially learn a non-efficient code, for
instance, but nonetheless describe the establishment of receptive fields seen in adult animals. Details
of the algorithms, parameters, and fitting methods can be found in the supplementary information.
Results from our grid searches are available at http://www.stanford.edu/?asaxe/rf_
plasticity.html.
3 Naturalistic experience and normal receptive field properties
In this section we focus on whether first-order, linear properties of neural responses can be captured
by current unsupervised learning algorithms applied to naturalistic visual, auditory, and somatosensory inputs. Such a linear description of neural responses has been broadly studied in all sensory
cortices [14, 15, 16, 17]. Though a more complete model would incorporate nonlinear components,
these more sophisticated nonlinear models often have as their first step a convolution with a linear
kernel (see [18] for an overview); and it is this kernel which we suggest might be learned over the
course of development, by a qualitatively similar learning algorithm across modalities.
2
Figure 1: Top left: K-means bases learned from natural images. Histograms: Black lines show
population statistics for K-means bases, gray bars show V1 simple cell data from Macaque. Far
right: Distribution of receptive field shapes; Red triangles are V1 simple cells from [5], blue circles
are K-means bases.
3.1 Primary visual cortex
A number of studies have shown that response properties in V1 can be successfully modeled using
a variety of unsupervised learning algorithms [2, 19, 3, 12, 10, 6, 7]. We replicate these findings for
the particular algorithms we employ and make the first detailed comparisons to experiment for the
sparse autoencoder, sparse RBM, and K-means algorithms.
Our natural image dataset consists of ten gray scale images of outdoor scenes [2]. Multiple nonoverlapping patches were sampled to form the first stage of our model, meant to approximate the
response of rods and cones. This raw data was then whitened using PCA whitening in the second stage of the model, corresponding to retinal ganglion or LGN responses.1 These inputs were
supplied to each of the five learning algorithms.
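The two-stage preprocessing pipeline described above (patch sampling, then PCA whitening standing in for retinal ganglion/LGN processing) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the patch dimensionality, the `eps` regularizer, and the random stand-in data in the demo are assumptions made for the example.

```python
import numpy as np

def pca_whiten(patches, eps=1e-5):
    """PCA-whiten flattened patches (n_samples x n_dims).

    Stands in for the retinal-ganglion/LGN stage of the pipeline:
    decorrelates the inputs and equalizes variance across components.
    """
    X = patches - patches.mean(axis=0)       # center each dimension
    cov = X.T @ X / X.shape[0]               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    # Project onto principal axes and rescale each to unit variance;
    # eps guards against division by near-zero eigenvalues.
    return (X @ eigvecs) / np.sqrt(eigvals + eps)

# Demo with random stand-in "patches" (real inputs would be image patches).
rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))        # e.g. 1000 flattened 8x8 patches
white = pca_whiten(patches)
cov_w = white.T @ white / white.shape[0]
print(np.allclose(cov_w, np.eye(64), atol=1e-2))   # covariance is ~identity
```

The whitened outputs would then be fed to each of the five learning algorithms.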
Fig. 1 shows example bases learned by the K-means algorithm. All five algorithms learn localized,
band-pass receptive field structures for a broad range of parameter settings, in qualitative agreement
with the spatial receptive fields of simple cells in primary visual cortex. To better quantify the match,
we compare five properties of model neuron receptive fields to data from macaque V1, namely the
spatial frequency bandwidth, orientation tuning bandwidth, length, aspect ratio, and peak spatial
frequency of the receptive fields. We compare population histograms of these metrics to those
measured in macaque V1 by [14, 15] as reported in [3]. Fig. 1 shows these histograms for the best-fitting K-means bases according to the average L1 distance between model and data histograms. For
all five algorithms, the histograms show general agreement with the distribution of parameters in
primary visual cortex except for the peak spatial frequency, consistent with the results of previous
studies for ICA and sparse coding [2, 3]. Additional plots for the other algorithms can be found in
the supplementary materials.
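The model-selection score used above (average L1 distance between model and data population histograms) is simple to state in code. This is a hedged sketch: normalizing each histogram to unit sum so that populations of different sizes are comparable is an assumption of the example, not a detail stated in the text.

```python
import numpy as np

def avg_l1_distance(model_hists, data_hists):
    """Average L1 distance between matched pairs of histograms.

    Each histogram is normalized to unit sum (an assumption of this
    sketch) so model and data populations of different sizes compare.
    """
    total = 0.0
    for m, d in zip(model_hists, data_hists):
        m = np.asarray(m, dtype=float)
        d = np.asarray(d, dtype=float)
        total += np.abs(m / m.sum() - d / d.sum()).sum()
    return total / len(model_hists)

# Toy example: two property histograms (e.g. bandwidth and aspect ratio).
model = [[2, 6, 2], [1, 1, 2]]
data = [[1, 3, 1], [1, 1, 2]]
print(avg_l1_distance(model, data))   # 0.0: identical once normalized
```

The parameter setting whose bases minimize this score against the experimental histograms is the one reported.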
Next, we compare the shape of simulated receptive fields to experimentally-derived receptive fields.
As had been done for the experimental data, we fit Gabor functions to our simulated receptive fields
and calculated the "normalized" receptive field sizes nx = σx f and ny = σy f, where σx is the standard deviation of the Gaussian envelope along the axis with sinusoidal modulation, σy is the standard deviation of the Gaussian envelope along the axis in which the filter is low-pass, and f is the frequency of the sinusoid. The parameters nx and ny measure the number of sinusoidal cycles that fit within an interval of length σx and σy respectively. Hence they capture the number of
excitatory and inhibitory lobes of significant power in each receptive field. The right panel of Fig. 1
shows the distribution of nx and ny for K-means compared to those reported experimentally [5].
The model bases lie within the experimentally derived values, though our models fail to exhibit as
much variability in shape as the experimentally-derived data. As had been noted for ICA and sparse
coding in [5], all five of our algorithms fail to capture low frequency bases near the origin. These
low frequency bases correspond to "blobs" with just a single excitatory region.
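The normalized-size metrics can be computed directly from fitted Gabor parameters. The sketch below assumes the Gabor parameterization given in the text (sinusoidal modulation along x, low-pass along y); the particular parameter values are arbitrary choices for illustration.

```python
import numpy as np

def gabor(x, y, sigma_x, sigma_y, f):
    """Gabor with sinusoidal modulation along x and a low-pass profile along y."""
    envelope = np.exp(-(x**2 / (2 * sigma_x**2) + y**2 / (2 * sigma_y**2)))
    return envelope * np.cos(2 * np.pi * f * x)

def normalized_size(sigma_x, sigma_y, f):
    """nx = sigma_x * f and ny = sigma_y * f: sinusoidal cycles per envelope
    standard deviation, i.e. a proxy for the number of significant lobes."""
    return sigma_x * f, sigma_y * f

# Arbitrary illustrative parameters for a fitted model receptive field.
nx, ny = normalized_size(sigma_x=2.0, sigma_y=3.0, f=0.25)
print(nx, ny)   # 0.5 0.75
```

A "blob" receptive field with a single excitatory region corresponds to nx and ny both near zero, i.e. a point near the origin of the shape plot.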
1 Taking the log of the image intensities before whitening, as in [3], yielded similar fits to V1 data.
Figure 2: Comparison to A1. Left: RBM bases. Second from left, top: Composite MTF in cat A1,
reproduced from [16]. Bottom: Composite MTF for RBM. Second from right, top: temporal MTF
in A1 (dashed gray) and for our model (black). Bottom: spectral MTF. Right, top: frequency sweep
preference. Bottom: Spectrum width vs center frequency for A1 neurons (red triangles) and model
neurons (blue circles).
3.2 Primary auditory cortex
In contrast to the large amount of work in the visual system, few efficient coding studies have
addressed response properties in primary auditory cortex (but see [20]). We base our comparison
on natural sound data consisting of a mixture of data from the Pittsburgh Natural Sounds database
and the TIMIT speech corpus. A mix of speech and natural sounds was reported to be necessary
to achieve a good match to auditory nerve fiber responses in previous sparse coding work [4]. We
transform the raw sound waveform into a representation of its frequency content over time meant
to approximate the response of the cochlea [21]. In particular, we pass the input sound signal to a
gammatone filterbank which approximates auditory nerve fiber responses [21]. The energy of the
filter responses is then summed within fixed time-bins at regular intervals, yielding a representation
similar to a spectrogram. We then whiten the data to model subcortical processing. Although there
is evidence for temporal whitening in the responses of afferents to auditory cortex, this is certainly
a very poor approximation of subcortical auditory processing [16]. After whitening, we applied
unsupervised learning models, yielding the bases shown in Fig. 2 for RBMs. These bases map from
our spectrogram input to the model neuron output, and hence represent the spectrotemporal receptive
field (STRF) of the model neurons.
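The cochlear front end described above (gammatone filterbank, then energy summed within fixed time bins) might be approximated as follows. This is an illustrative stand-in, not the authors' implementation: the 4th-order gammatone impulse response and the Glasberg & Moore ERB bandwidth formula are standard choices, but the filter order, impulse-response duration, bin width, and center frequencies here are assumptions.

```python
import numpy as np

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """Unnormalized 4th-order gammatone impulse response at center freq fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)      # Glasberg & Moore ERB (Hz)
    b = 1.019 * erb                               # bandwidth parameter
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def cochleagram(signal, fs, centers, bin_ms=10):
    """Filter with a gammatone bank, then sum energy in fixed time bins,
    yielding a spectrogram-like (n_channels x n_bins) representation."""
    bin_len = int(fs * bin_ms / 1000)
    rows = []
    for fc in centers:
        resp = np.convolve(signal, gammatone_ir(fc, fs), mode="same")
        energy = resp ** 2
        n_bins = len(energy) // bin_len
        rows.append(energy[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1))
    return np.array(rows)

# Demo: a 1 kHz tone lights up the matching channel.
fs = 16000
t = np.arange(fs) / fs                             # one second of audio
tone = np.sin(2 * np.pi * 1000.0 * t)
centers = [250, 500, 1000, 2000, 4000]
gram = cochleagram(tone, fs, centers)
print(gram.shape)                                  # (5, 100)
print(centers[int(np.argmax(gram.mean(axis=1)))])  # 1000
```

Whitening this representation, as described in the text, would then model subcortical processing before the unsupervised learning stage.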
We then compared properties of our model STRFs to those measured in cortex. First, based on
the experiments reported in O'Connor et al. [22], we analyze the relationship between spectrum
bandwidth and center frequency. O'Connor et al. found a nearly linear relationship between these,
which matches well with the scaling seen in our model bases (see Fig. 2 bottom right). Next we
compared model receptive fields to the composite cortical modulation transfer function reported in
[16]. The modulation transfer function (MTF) of a neuron is the amplitude of the 2D Fourier transform of its STRF. The STRF contains one spectral and one temporal axis, and hence its 2D Fourier
transform contains one spectral modulation and one temporal modulation axis. The composite MTF
is the average of the MTFs computed for each neuron, and for all five algorithms it has a characteristic inverted "V" shape evident in Fig. 2. Summing the composite MTF over time yields the
spectral MTF, which is low-pass for our models and well-matched to the spectral MTF reported in
cat A1[16]. Summing over the spectral dimension yields the temporal MTF, which is low-pass in
our models but band-pass in the experimental data. Finally, we investigate the preference of neurons
for upsweeps in frequency versus downsweeps, which can be cast in terms of the MTF by measuring
the energy in the left half compared to the right half. The difference in these energies normalized
by their sum is the spectrotemporal asymmetry, shown in Fig. 2 top right. All algorithms showed
qualitatively similar distributions of spectrotemporal asymmetry to that found in cat A1. Hence
the model bases are broadly consistent with receptive field properties measured in primary auditory
cortex such as a roughly linear scaling of center frequency with spectrum bandwidth; a low-pass
Figure 3: Left: Data collection pipeline. Center: Top two rows, sparse autoencoder bases. Bottom
two rows, first six PCA components. Right: Histograms of receptive field structure for the sparse
autoencoder algorithm. Black, model distribution. Gray, experimental data from [17]. (Best viewed
in color)
spectral MTF of appropriate slope; and a similar distribution of spectrotemporal asymmetry. The
models differ from experiment in their temporal structure, which is band-pass in the experimental
data but low-pass in our models.
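The MTF and spectrotemporal-asymmetry computations used in this section reduce to a 2D FFT and an energy comparison. The sketch below is an illustration under stated assumptions: restricting to positive spectral modulations (needed because a real STRF's full MTF is point-symmetric, which would force the asymmetry to zero over the whole plane) and the toy diagonal STRF are choices made for the example, not details from the paper.

```python
import numpy as np

def mtf(strf):
    """Modulation transfer function: magnitude of the 2D FFT of an STRF
    (axis 0 = spectral channels, axis 1 = time bins), shifted so that
    zero modulation sits at the center of the array."""
    return np.abs(np.fft.fftshift(np.fft.fft2(strf)))

def spectrotemporal_asymmetry(m):
    """(E_left - E_right) / (E_left + E_right) over temporal modulation.

    Restricted to positive spectral modulations: a real STRF's MTF is
    point-symmetric, so the full plane would always give zero.
    """
    ns, nt = m.shape
    upper = m[ns // 2 + 1:, :]             # positive spectral modulations only
    e_left = upper[:, : nt // 2].sum()
    e_right = upper[:, nt // 2 + 1:].sum()
    return (e_left - e_right) / (e_left + e_right)

# Toy STRF: an ideal upsweep (frequency channel rises linearly with time).
strf = np.zeros((16, 16))
for i in range(16):
    strf[i, i] = 1.0
m = mtf(strf)
# Averaging mtf() over a population of neurons would give the composite MTF.
print(round(spectrotemporal_asymmetry(m), 3))   # 1.0: all energy on one side
```

Summing `m` over its temporal axis gives the spectral MTF, and over its spectral axis the temporal MTF, matching the marginals discussed above.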
3.3 Primary somatosensory cortex
Finally, we test whether these learning algorithms can model somatosensory receptive fields on
the hand. To enable this comparison we collected a naturalistic somatosensory dataset meant to
capture the statistics of contact points on the hand during normal primate grasping behavior. A
variety of objects were dusted with fine white powder and then grasped by volunteers wearing blue
latex gloves. To match the natural statistics of primate grasps, we performed the same grip types
in the same proportions as observed ecologically in a study of semi-free ranging Macaca mulatta
[23]. Points of contact were indicated by the transfer of powder to the gloved hand, which was then
placed palm-up on a green background and imaged using a digital camera. The images were then
post-processed to yield an estimate of the pressure applied to the hand during the grasp (Fig. 3, left).
The dataset has a number of limitations: it contains no temporal information, but rather records all
areas of contact for the duration of the grip. Most significantly, it contains only 1248 individual
grasps due to the high effort required to collect such data (~4 minutes/sample), and hence is an
order of magnitude smaller than the datasets used for the vision and auditory analyses. Given these
limitations, we decided to compare our receptive fields to those found in area 3b of primary somatosensory cortex. Neurons in area 3b respond to light cutaneous stimulation of restricted regions
of glabrous skin [24], the same sort of contact that would transfer powder to the glove. Area 3b
neurons also receive a large proportion of inputs from slowly adapting mechanoreceptor afferents
with sustained responses to static skin indentation [25], making the lack of temporal information
less problematic.
Bases learned by the algorithms are shown in Fig. 3. These exhibit a number of qualitative features
that accord with the biology. As in area 3b, the model receptive fields are localized to a single digit
[24], and receptive field sizes are larger on the palm than on the fingers [25]. These qualitative
features are not shared by PCA bases, which typically span multiple fingers. As a more quantitative
assessment, we compared model receptive fields on the finger tips to those derived for area 3b neurons
in [17]. We computed the ratio between excitatory and inhibitory area for each basis, and plot a
population histogram of this ratio, shown for the sparse autoencoder algorithm in the right panel of
Fig. 3. Importantly, because this comparison is based on the ratio of the areas, it is not affected by the
unknown scale factor between the dimensions of our glove images and those of the macaque hand.
We also plot the ratio of the excitatory and inhibitory mass, where excitatory and inhibitory mass is
defined as the sum of the positive and negative coefficients in the receptive field, respectively. We
find good agreement for all the algorithms we tested.
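The two scale-free receptive-field summaries used for the area 3b comparison follow directly from their definitions. A minimal sketch, with a toy receptive field invented for illustration:

```python
import numpy as np

def exc_inh_ratios(rf):
    """Ratio of excitatory to inhibitory area (coefficient counts) and mass
    (coefficient sums). Both are dimensionless, so the unknown scale factor
    between the glove images and the macaque hand cancels out."""
    pos = rf[rf > 0]
    neg = rf[rf < 0]
    area_ratio = pos.size / neg.size
    mass_ratio = pos.sum() / -neg.sum()
    return area_ratio, mass_ratio

# Toy receptive field: an excitatory patch flanked by weaker inhibition.
rf = np.array([[0.5, 0.8, -0.1],
               [0.2, 0.9, -0.2],
               [-0.1, -0.3, -0.1]])
area, mass = exc_inh_ratios(rf)
print(area, round(mass, 3))   # 0.8 3.0
```

Population histograms of these two ratios are what Fig. 3 (right) compares against the area 3b data.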
Figure 4: Top row: Input image; Resulting goggle image, reproduced from [26]; Our simulated
goggle image. Bottom row: Natural image; Simulated goggle image; Bases learned by sparse
coding. Right: Orientation histogram for model neurons is biased towards goggle orientation (90°).
4 Adaptation to altered environmental statistics
Numerous studies in multiple sensory areas and species document plasticity of receptive field properties in response to various experimental manipulations during an organism's lifetime. In visual
cortex, for instance, orientation selectivity can be altered by rearing animals in unidirectionally oriented environments [26]. In auditory cortex, pulsed-tone rearing results in an expansion in the area
of auditory cortex tuned to the pulsed tone frequency [27]. And in somatosensory cortex, surgically
fusing digits 3 and 4 (the middle and ring fingers) of the hand to induce an artificial digital syndactyly
(webbed finger) condition results in receptive fields that span these digits [28]. In this section we
ask whether the same learning algorithms that explain features of normal receptive fields can also
explain these alterations in receptive field properties due to manipulations of sensory experience.
4.1 Goggle-rearing alters V1 orientation tuning
The preferred orientations of neurons in primary visual cortex can be strongly influenced by altering
visual inputs during development; Tanaka et al. fitted goggles that severely restricted orientation
information to kittens at postnatal week three, and documented a massive overrepresentation of the
goggle orientation subsequently in primary visual cortex [26]. Hsu and Dayan [29] have shown
that an unsupervised learning algorithm, the product-of-experts model (closely related to ICA), can
reproduce aspects of the goggle-rearing experiment. Here we follow their methods, extending the
analysis to the other four algorithms we consider.
To simulate the effect of the goggles on an input image, we compute the 2D Fourier transform of
the image and remove all energy except at the preferred orientation of the goggles. We slightly
blur the resulting image with a small Gaussian filter. Because the kittens receive some period of
natural experience, we trained the models on mixtures of patches from natural and altered images,
adding one parameter in addition to the algorithmic parameters. Fig. 4 shows resulting receptive
fields obtained using the sparse coding algorithm. After learning, the preferred orientations of the
bases were derived using the analysis described in Section 3.1. All five algorithms demonstrated an
overrepresentation of the goggle orientation, consistent with the experimental data.
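The goggle simulation (keep only Fourier energy near one orientation, then lightly blur) might look like the following. The orientation tolerance `tol_deg` and blur width `blur_sigma` are illustrative assumptions not specified here, and `theta` is measured in the frequency domain in this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def goggle_filter(image, theta, tol_deg=5.0, blur_sigma=0.5):
    """Remove all Fourier energy except near orientation theta (degrees),
    then lightly blur, approximating the goggle simulation in the text."""
    h, w = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))
    fy, fx = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    # Orientation of each frequency component, folded into [0, 180).
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0
    diff = np.abs(angle - theta)
    keep = np.minimum(diff, 180.0 - diff) < tol_deg
    keep[h // 2, w // 2] = True              # preserve the mean (DC term)
    filtered = np.fft.ifft2(np.fft.ifftshift(F * keep)).real
    return gaussian_filter(filtered, blur_sigma)

# Demo: a sum of two gratings; only the one matching the goggles survives.
n = 64
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
vert = np.cos(2 * np.pi * 4 * xx / n)        # frequency energy at angle 0
horiz = np.cos(2 * np.pi * 4 * yy / n)       # frequency energy at angle 90
out = goggle_filter(vert + horiz, theta=90.0)
print(round((out * horiz).mean(), 2), round(abs((out * vert).mean()), 2))
```

Mixing patches from such filtered images with natural patches, in some ratio treated as an extra parameter, reproduces the training regime described above.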
4.2 Pulsed-tone rearing alters A1 frequency tuning
Early sensory experience can also profoundly alter properties of neural receptive fields in primary
auditory cortex. Along similar lines to the results for V1 in Section 4.1, early exposure to a pulsed
tone can induce shifts in the preferred center frequency of A1 neurons. In particular, de Villers-Sidani et al. raised rats in an environment with a free field speaker emitting a tone with 40 Hz amplitude modulation that repeatedly cycled on for 250 ms then off for 500 ms [27]. Mapping the preferred
center frequencies of neurons in tone-exposed rats revealed a corresponding overrepresentation in
A1 around the pulsed-tone frequency.
We instantiated this experimental paradigm by adding a pulsed tone to the raw sound waveforms
of the natural sounds and speech before computing the gammatone responses. Example bases for
ICA are shown in the center panel of Fig. 5, many of which are tuned to the pulsed-tone frequency.
We computed the preferred frequency of each model receptive field by summing the square of each
patch along the temporal dimension. The right panel of Fig. 5 shows population histograms of the
Figure 5: Left: Example spectrograms before and after adding a 4kHz pulsed tone. Center: ICA
bases learned from pulsed tone data. Right: Population histograms of preferred frequency reveal a
strong preference for the pulsed-tone frequency of 4kHz.
preferred center frequencies for models trained on natural and pulsed-tone data for ICA and K-means. We find that all algorithms show an overrepresentation in the frequency band containing the
tone, in qualitative agreement with the results reported in [27]. Intuitively, this overrepresentation
is due to the fact that many bases are necessary to represent the temporal information present in
the pulsed-tone, that is, the phase of the amplitude modulation and the onset or offset time of the
stimulus.
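Computing a model neuron's preferred frequency as described (sum the squared basis weights over the temporal dimension, then take the peak spectral channel) is a one-liner. The channel-to-frequency mapping and the toy basis below are invented for illustration.

```python
import numpy as np

def preferred_frequency(basis, channel_freqs):
    """Peak of the basis's spectral energy profile: square the weights,
    sum over the temporal dimension, and return the winning channel."""
    spectral_profile = (basis ** 2).sum(axis=1)
    return channel_freqs[int(np.argmax(spectral_profile))]

# Toy basis (channels x time bins) with energy in the 4 kHz channel,
# mimicking a unit recruited to represent the pulsed tone.
channel_freqs = [500, 1000, 2000, 4000, 8000]
basis = np.zeros((5, 8))
basis[3, :] = np.cos(np.linspace(0.0, 2.0 * np.pi, 8))
print(preferred_frequency(basis, channel_freqs))   # 4000
```

Histogramming this quantity over all learned bases gives the preferred-frequency distributions shown in Fig. 5.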
4.3 Artificial digital syndactyly in S1
Allard et al. [28] surgically fused adjacent skin on digits 3 and 4 in adult owl monkeys to create an artificial syndactyly, or webbed finger, condition. After 14, 25, or 33 weeks, many receptive fields of neurons in area 3b of S1 were found to span digits 3 and 4, a qualitative change from the normally strict localization of receptive fields to a single digit. Additionally, at the tips of digits 3 and 4 where there is no immediately adjacent skin on the other digit, some neurons showed discontinuous double-digit receptive fields that responded to stimulation on either finger tip [28]. In contrast to the shifts in receptive field properties described in the preceding two sections, these striking changes are qualitatively different, and as such provide an important test for functional models of plasticity.
Figure 6: Bases trained on artificial syndactyly data. Top row: Sparse coding. Bottom row: K-means.
We modeled the syndactyly condition by fusing digits 3 and 4 of our gloves and collecting 782 additional grip samples according to the method in Section 3.3. Bases learned from this syndactyly
dataset are shown in Fig. 6. All models learned double-digit receptive fields that spanned digits
3 and 4, in qualitative agreement with the findings reported in [28]. Additionally, a small number
of bases contained discontinuous double-digit receptive fields consisting of two well-separated excitatory regions on the extreme finger tips (e.g., Fig. 6, top right). In contrast to the experimental
findings, model receptive fields spanning digits 3 and 4 also typically have a discontinuity along the
seam. We believe this reflects a limitation of our dataset; although digits 3 and 4 of our data collection glove are fused together and must move in concert, the seam between these digits remains inset
from the neighboring fingers, and hence grasps rarely transfer powder to this area. In the experiment,
the skin was sutured to make the seam flush with the neighboring fingers.
5 Discussion
Taken together, our results demonstrate that a number of unsupervised learning algorithms can account for certain normal and altered linear receptive field properties across multiple primary sensory
cortices. Each of the five algorithms we tested obtained broadly consistent fits to experimental data
in V1, A1 and S1. Although these fits were not perfect (notably, missing "blob" receptive fields in V1 and band-pass temporal structure in A1), they demonstrate the feasibility of applying a single
learning algorithm to experimental data from multiple modalities.
In no setting did one of our five algorithms yield qualitatively different results from any other. This
finding likely reflects the underlying similarities between the algorithms, which all attempt to find
a sparse representation of the input while preserving information about it. The relative robustness
of our results to the details of the algorithms offers one explanation of the empirical observation of
similar plasticity outcomes at a functional level despite potentially very different underlying mechanisms [8]. Even if the mechanisms differ, provided that they still incorporate some version of
sparsity, they can produce qualitatively very similar outcomes.
The success of these models in capturing the effects of experimental manipulations of sensory input
suggests that the adaptation of receptive field properties to natural statistics, as proposed by efficient
coding models, may occur significantly on developmental timescales. If so, this would allow the
extensive literature on plasticity to constrain further modeling efforts.
Furthermore, the ability of a single algorithm to capture responses in multiple sensory cortices shows
that, in principle, a qualitatively similar plasticity process could operate throughout primary sensory
cortices. Experimentally, such a possibility has been addressed most directly by cortical "rewiring"
experiments, where visual input is rerouted to either auditory or somatosensory cortex [30, 31, 32,
33, 34, 35]. In neonatal ferrets, visual input normally destined for lateral geniculate nucleus can
be redirected to the auditory thalamus, which then projects to primary auditory cortex. Roe et
al. [32] and Sharma et al. [34] found that rewired ferrets reared to adulthood had neurons in auditory
cortex responsive to oriented edges, with orientation tuning indistinguishable from that in normal
V1. Further, Von Melchner et al. [33] found that rewired auditory cortex can mediate behavior such
as discriminating between different grating stimuli and navigating toward a light source. Rewiring
experiments in hamster corroborate these results, and in addition show that rewiring visual input to
somatosensory cortex causes S1 to exhibit light-evoked responses similar to normal V1 [31, 35].
Differences between rewired and normal cortices do exist; for example, the period of the orientation
map is larger in rewired animals [34]. However, these experiments are consistent with the hypothesis
that sensory cortices share a common learning algorithm, and that it is through activity dependent
development that they specialize to a specific modality. Our results provide a possible explanation
of these experiments, as we have shown constructively that the exact same algorithm can produce
V1-, A1-, or S1-like receptive fields depending on the type of input data it receives.
Acknowledgements We give warm thanks to Andrew Maas, Cynthia Henderson, Daniel Hawthorne
and Conal Sathi for code and ideas. This work is supported by the DARPA Deep Learning program
under contract number FA8650-10-C-7020. Andrew Saxe is supported by an NDSEG and Stanford
Graduate Fellowship.
References
[1] V.B. Mountcastle. An organizing principle for cerebral function: The unit module and the distributed system, pages 7–50. MIT Press, Cambridge, MA, 1978.
[2] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–9, 1996.
[3] J.H. van Hateren and D.L. Ruderman. Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265(1412):2315–20, December 1998.
[4] E.C. Smith and M.S. Lewicki. Efficient auditory coding. Nature, 439(7079):978–82, 2006.
[5] D.L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. J. Neurophysiol., 88(1):455–63, July 2002.
[6] M. Rehn and F.T. Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. J. Comput. Neurosci., 22(2):135–46, April 2007.
[7] G. Puertas, J. Bornschein, and J. Lücke. The Maximal Causes of Natural Scenes are Edge Filters. In NIPS, 2010.
[8] D.E. Feldman. Synaptic mechanisms for plasticity in neocortex. Annu. Rev. Neurosci., 32:33–55, January 2009.
[9] K. Fox and R.O.L. Wong. A comparison of experience-dependent plasticity in the visual and somatosensory systems. Neuron, 48(3):465–77, November 2005.
[10] A.J. Bell and T.J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Res., 37(23):3327–38, December 1997.
[11] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. In ICML, 2008.
[12] H. Lee, C. Ekanadham, and A.Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.
[13] A. Coates, H. Lee, and A.Y. Ng. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. In AISTATS, 2011.
[14] R.L. De Valois, D.G. Albrecht, and L.G. Thorell. Spatial frequency selectivity of cells in macaque visual cortex. Vision Res., 22(5):545–59, January 1982.
[15] R.L. De Valois, E.W. Yund, and N. Hepler. The orientation and direction selectivity of cells in macaque visual cortex. Vision Res., 22(5):531–544, 1982.
[16] L.M. Miller, M.A. Escabí, H.L. Read, and C.E. Schreiner. Spectrotemporal receptive fields in the lemniscal auditory thalamus and cortex. J. Neurophysiol., 87(1):516–27, January 2002.
[17] J.J. DiCarlo, K.O. Johnson, and S.S. Hsiao. Structure of receptive fields in area 3b of primary somatosensory cortex in the alert monkey. J. Neurosci., 18(7):2626–45, April 1998.
[18] M. Carandini, J.B. Demb, V. Mante, D.J. Tolhurst, Y. Dan, B.A. Olshausen, J.L. Gallant, and N.C. Rust. Do we know what the early visual system does? J. Neurosci., 25(46):10577–97, November 2005.
[19] A. Hyvärinen, J. Hurri, and P.O. Hoyer. Natural Image Statistics. Springer, London, 2009.
[20] D.J. Klein, P. König, and K.P. Körding. Sparse Spectrotemporal Coding of Sounds. EURASIP J. Adv. Sig. Proc., 7:659–667, 2003.
[21] R.D. Patterson, K. Robinson, J. Holdsworth, D. McKeown, C. Zhang, and M. Allerhand. Complex sounds and auditory images. In Adv. Biosci., 1992.
[22] K.N. O'Connor, C.I. Petkov, and M.L. Sutter. Adaptive stimulus optimization for auditory cortical neurons. J. Neurophysiol., 94(6):4051–67, 2005.
[23] N.B.W. Macfarlane and M.S.A. Graziano. Diversity of grip in Macaca mulatta. Exp. Brain Res., 197(3):255–68, August 2009.
[24] M. Sur. Receptive fields of neurons in areas 3b and 1 of somatosensory cortex in monkeys. Brain Res., 198(2):465–471, October 1980.
[25] R.L. Paul, M.M. Merzenich, and H. Goodman. Representation of slowly and rapidly adapting cutaneous mechanoreceptors of the hand in Brodmann's areas 3 and 1 of Macaca mulatta. Brain Res., 36(2):229–49, January 1972.
[26] S. Tanaka, J. Ribot, K. Imamura, and T. Tani. Orientation-restricted continuous visual exposure induces marked reorganization of orientation maps in early life. NeuroImage, 30(2):462–77, April 2006.
[27] E. de Villers-Sidani, E.F. Chang, S. Bao, and M.M. Merzenich. Critical period window for spectral tuning defined in the primary auditory cortex (A1) in the rat. J. Neurosci., 27(1):180–9, 2007.
[28] T. Allard, S.A. Clark, W.M. Jenkins, and M.M. Merzenich. Reorganization of somatosensory area 3b representations in adult owl monkeys after digital syndactyly. J. Neurophysiol., 66(3):1048–58, September 1991.
[29] A.S. Hsu and P. Dayan. An unsupervised learning model of neural plasticity: Orientation selectivity in goggle-reared kittens. Vision Res., 47(22):2868–77, October 2007.
[30] M. Sur, P. Garraghty, and A. Roe. Experimentally induced visual projections into auditory thalamus and cortex. Science, 242(4884):1437–1441, December 1988.
[31] C. Métin and D.O. Frost. Visual responses of neurons in somatosensory cortex of hamsters with experimentally induced retinal projections to somatosensory thalamus. PNAS, 86(1):357–61, January 1989.
[32] A.W. Roe, S.L. Pallas, Y.H. Kwon, and M. Sur. Visual projections routed to the auditory pathway in ferrets: receptive fields of visual neurons in primary auditory cortex. J. Neurosci., 12(9):3651–64, September 1992.
[33] L. von Melchner, S.L. Pallas, and M. Sur. Visual behaviour mediated by retinal projections directed to the auditory pathway. Nature, 404(6780):871–876, 2000.
[34] J. Sharma, A. Angelucci, and M. Sur. Induction of visual orientation modules in auditory cortex. Nature, 404(April):841–847, 2000.
[35] D.O. Frost, D. Boire, G. Gingras, and M. Ptito. Surgically created neural pathways mediate visual pattern discrimination. PNAS, 97(20):11068–73, September 2000.
Multi-armed bandits on implicit metric spaces
Aleksandrs Slivkins
Microsoft Research Silicon Valley
Mountain View, CA 94043
slivkins at microsoft.com
Abstract
The multi-armed bandit (MAB) setting is a useful abstraction of many online
learning tasks which focuses on the trade-off between exploration and exploitation. In this setting, an online algorithm has a fixed set of alternatives ("arms"),
and in each round it selects one arm and then observes the corresponding reward.
While the case of small number of arms is by now well-understood, a lot of recent work has focused on multi-armed bandits with (infinitely) many arms, where
one needs to assume extra structure in order to make the problem tractable. In
particular, in the Lipschitz MAB problem there is an underlying similarity metric
space, known to the algorithm, such that any two arms that are close in this metric
space have similar payoffs. In this paper we consider the more realistic scenario
in which the metric space is implicit: it is defined by the available structure but
not revealed to the algorithm directly. Specifically, we assume that an algorithm
is given a tree-based classification of arms. For any given problem instance such
a classification implicitly defines a similarity metric space, but the numerical similarity information is not available to the algorithm. We provide an algorithm for
this setting, whose performance guarantees (almost) match the best known guarantees for the corresponding instance of the Lipschitz MAB problem.
1  Introduction
In a multi-armed bandit (MAB) problem, a player is presented with a sequence of trials. In each
round, the player chooses one alternative from a set of alternatives ("arms") based on the past history,
and receives the payoff associated with this alternative. The goal is to maximize the total payoff of
the chosen arms. The multi-armed bandit setting was introduced in the 1950s and has been studied
intensively since then in Operations Research, Economics and Computer Science, e.g. see [8] for
background. This setting is often used to model the tradeoffs between exploration and exploitation,
which is the principal issue in sequential decision-making under uncertainty.
One standard way to evaluate the performance of a multi-armed bandit algorithm is regret, defined
as the difference between the expected payoff of an optimal arm and that of the algorithm. By
now the multi-armed bandit problem with a small finite number of arms is quite well understood
(e.g. see [22, 3, 2]). However, if the set of arms is exponentially or infinitely large, the problem
becomes intractable, unless we make further assumptions about the problem instance. Essentially,
an MAB algorithm needs to find a needle in a haystack; for each algorithm there are inputs on which
it performs as badly as random guessing.
The bandit problems with large sets of arms have received a considerable attention, e.g. [1, 5, 23,
12, 21, 10, 24, 25, 11, 4, 16, 20, 7, 19]. The common theme in these works is to assume a certain
structure on payoff functions. Assumptions of this type are natural in many applications, and often
lead to efficient learning algorithms, e.g. see [18, 8] for a background.
1
In particular, the line of work [1, 17, 4, 20, 7, 19] considers the Lipschitz MAB problem, a broad
and natural bandit setting in which the structure is induced by a metric on the set of arms.1 In this
setting an algorithm is given a metric space (X, D), where X is the set of arms, which represents the
available similarity information (information on similarity between arms). Payoffs are stochastic:
the payoff from choosing arm x is an independent random sample with expectation μ(x). The metric
space is related to payoffs via the following Lipschitz condition:2
|μ(x) − μ(y)| ≤ D(x, y)   for all x, y ∈ X.   (1)
Performance guarantees consider regret R(t) as a function of time t, and focus on the asymptotic dependence of R(·) on a suitably defined "dimensionality" of the problem instance (X, D, μ). Various upper and lower bounds of the form R(t) = Õ(t^γ), γ < 1, have been proved.
We relax an important assumption in Lipschitz MAB that the available similarity information provides numerical values in the sense of (1).3 Specifically, following [21, 24, 25] we assume that an
algorithm is (only) given a taxonomy on arms: a tree-based classification modeled by a rooted tree
T whose leaf set is X. The idea is that any two arms in the same subtree are likely to have similar payoffs. Motivations include contextual advertising and web search with topical taxonomies,
e.g. [25, 6, 29, 27], Monte-Carlo planning [21, 24], and Computer Go [13, 14].
We call the above formulation the Taxonomy MAB problem; a problem instance is a triple (X, T, μ).
Crucially, in Taxonomy MAB no numerical similarity information is explicitly revealed. All prior
algorithms for Lipschitz MAB (and in particular, all algorithms in [20, 7]) are parameterized by
some numerical similarity information, and therefore do not directly apply to Taxonomy MAB.
One natural way to quantify the extent of similarity between arms in a given subtree is via the
maximum difference in expected payoffs. Specifically, for each internal node v we define the width of the corresponding subtree T(v) to be W(v) = sup_{x,y∈X(v)} |μ(x) − μ(y)|, where X(v) is the set of leaves in T(v). Note that the subtree widths are non-increasing from root to leaves. A standard notion of distance induced by subtree widths, henceforth called the implicit distance, is as follows: Dimp(x, y) is the width of the least common ancestor of leaves x, y. It is immediate that this is indeed a metric, and moreover that it satisfies (1). In fact, Dimp(x, y) is the smallest "width-based" distance that satisfies (1). If the widths are strictly decreasing, T can be reconstructed from Dimp.
Thus, an instance (X, T, μ) of Taxonomy MAB naturally induces an instance (X, Dimp, μ) of Lipschitz MAB which (assuming the widths are strictly decreasing) encodes all relevant information.
The crucial distinction is that in Taxonomy MAB the metric space (X, Dimp ) is implicit: the subtree
widths are not revealed to the algorithm. In particular, the algorithms in [20, 7] do not apply.
We view Lipschitz MAB as a performance benchmark for Taxonomy MAB. We are concerned with
the following question: can an algorithm for Taxonomy MAB perform as if it was given the implicit
metric space (X, Dimp )? More formally, we ask whether it is possible to obtain guarantees for
Taxonomy MAB that (almost) match the state-of-art for Lipschitz MAB.
We answer this question in the affirmative as long as the implicit metric space (X, Dimp ) has a small
doubling constant (see Section 2 for a milder condition). We provide an algorithm with guarantees
that are almost identical to those for the zooming algorithm in [20].4
Our algorithm proceeds by estimating subtree widths of near-optimal subtrees. Thus, we encounter
a two-pronged exploration-exploitation trade-off: samples from a given subtree reveal information
not only about payoffs but also about the width, whereas in Lipschitz MAB we only need to worry
about the payoffs. Dealing with this more complicated trade-off is the main new conceptual hurdle
(which leads to some technical complications such as the proof of Lemma 4.4). These complications
aside, our algorithm is similar to those in [17, 20] in that it maintains a partition of the space of arms into regions (in this case, subtrees) so that each region is treated as a "meta-arm", and this partition is adapted to the high-payoff regions.
1 This problem has been explicitly defined in [20]. Preceding work [1, 17, 9, 4] considered a few special cases such as a one-dimensional real interval with a metric defined by D(x, y) = |x − y|^α, α ∈ (0, 1].
2 The Lipschitz constant is c_Lip = 1 without loss of generality: else, one could take the metric c_Lip · D.
3 In the full version of [20] the setting is relaxed so that (1) needs to hold only if x is optimal, and the distances between non-optimal points do not need to be explicitly known; [7] provides a similar result.
4 The guarantees in [7] are similar but slightly different technically.
1.1  Preliminaries
The Taxonomy MAB problem and the implicit metric space (X, Dimp) are defined as in Section 1. We assume stochastic payoffs [2]: in each round t the algorithm chooses a point x = x_t ∈ X and observes an independent random sample from a payoff distribution Ppayoff(x) with support [0, 1] and expectation μ(x).5 The payoff function μ : X → [0, 1] is not revealed to the algorithm. The goal is to minimize regret with respect to the best expected arm:
R(T) ≜ μ*·T − E[ Σ_{t=1}^{T} μ(x_t) ] = E[ Σ_{t=1}^{T} Δ(x_t) ],   (2)
where μ* ≜ sup_{x∈X} μ(x) is the maximal expected payoff, and Δ(x) ≜ μ* − μ(x) is the "badness" of arm x. An arm x ∈ X is called optimal if μ(x) = μ*.
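For intuition, (2) is easy to evaluate on a fixed play sequence once μ is known. The sketch below (with toy payoff values of our own) computes the realized counterpart of the regret, with expectations replaced by the arms actually played:

```python
def regret(mu, plays):
    """Realized counterpart of eq. (2): sum over rounds of Delta(x_t) = mu* - mu(x_t)."""
    mu_star = max(mu.values())
    return sum(mu_star - mu[x] for x in plays)

# Hypothetical expected payoffs and a short play sequence.
mu = {"x1": 0.9, "x2": 0.5, "x3": 0.7}
plays = ["x2", "x1", "x3", "x1"]
# badness per round: 0.4 + 0.0 + 0.2 + 0.0
```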
We will assume that the number of arms is finite (but possibly very large). Extension to infinitely
many arms (which does not require new algorithmic ideas) is not included to simplify presentation.
Also, we will assume a known time horizon (total number of rounds), denoted Thor .
Our guarantees are in terms of the zooming dimension [20] of (X, Dimp, μ), a concept that takes into account both the dimensionality of the metric space and the "goodness" of the payoff function. Below we specialize the definition from [20] to Taxonomy MAB.
Definition 1.1 (zooming dimension). For X′ ⊂ X, define the covering number N^cov_δ(X′) as the smallest number of subtrees of width at most δ that cover X′. Let X_δ ≜ {x ∈ X : 0 < Δ(x) ≤ δ}. The zooming dimension of a problem instance I = (X, T, μ), with multiplier c, is
ZoomDim(I, c) ≜ inf{ d ≥ 0 : N^cov_{δ/8}(X_δ) ≤ c·δ^{−d} for all δ > 0 }.   (3)
In other words, we consider the covering property N^cov_{δ/8}(X_δ) ≤ c·δ^{−d}, and define the zooming dimension as the smallest d such that this covering property holds for all δ > 0. The zooming dimension essentially coincides with the covering dimension of (X, D)6 for the worst-case payoff function μ, but can be (much) smaller when μ is "benign". In particular, the zooming dimension would "ignore" a subtree with high covering dimension but significantly sub-optimal payoffs.
The doubling constant cDBL of a metric space is the smallest k such that any ball can be covered by
k balls of half the radius. (In our case, any subtree can be covered by k subtrees of half the width.)
Doubling constant has been a standard notion in theoretical computer science literature since [15];
since then, it has been used to characterize tractable problem instances for a variety of problems. It is known that cDBL = O(2^d) for any bounded subset S ⊂ R^d of linear dimension d, under any metric ℓ_p, p ≥ 1. Moreover, cDBL ≤ c·2^d if d is the covering dimension with multiplier c.
2  Statement of results
We will prove that our algorithm (TaxonomyZoom) satisfies the following regret bound: for each instance I of Taxonomy MAB, each c > 0 and each T ≤ Thor,
R(T) ≤ O(c·KI·log Thor)^{1/(2+d)} · T^{1−1/(2+d)},   where d = ZoomDim(I, c).   (4)
(4)
We will bound the factor KI below. For KI = 1 this is the guarantee for the zooming algorithm
in [20] for the corresponding instance (X, Dimp , ?) of Lipschitz MAB. Note that the definition of
zooming dimension allows a trade-off between c and d, and we obtain the optimal trade-off since (4)
holds for all values of c at once. Following the prior work on Lipschitz MAB, we identify the
exponent in (4) as the crucial parameter, as long as the multiplier c is sufficiently small.7
Our first (and crude) bound for KI is in terms of the doubling constant of (X, Dimp ).
Theorem 2.1 (Crude bound). Given an upper bound c′_DBL on the doubling constant of (X, Dimp), TaxonomyZoom achieves (4) with KI = f(c′_DBL)·log |X|, where f(n) = n·2^n.
5 Other than support and expectation, the "shape" of Ppayoff(x) is not essential for this paper.
6 Covering dimension is defined as in (3), replacing N^cov_{δ/8}(X_δ) with N^cov_δ(X).
7 One can reduce ZoomDim by making c huge, e.g. ZoomDim = 0 for c = |X|. However, this is not likely to lead to useful regret bounds. A similar trade-off (dimension vs. multiplier) is implicit in [7].
Our main result (which implies Theorem 2.1) uses a more efficient bound for KI .
Recall that in Taxonomy MAB subtree widths are not revealed, and the algorithm has to use sampling to estimate them. Informally, the taxonomy is useful for our purposes if and only if subtree
widths can be efficiently estimated using random sampling. We quantify this as a parameter called
quality, and bound KI in terms of this parameter.
We use simple random sampling: start at a tree node v and choose a branch uniformly at random at
each junction. Let P(u|v) be the probability that node u is reached starting from v. The probabilities
P(·|v) induce a distribution on X(v), the leaf set of subtree T(v). A sample from this distribution is called a random sample from T(v), with expected payoff μ(v) ≜ Σ_{x∈X(v)} μ(x)·P(x|v).
Definition 2.2. The quality of the taxonomy for a given problem instance is the largest number q ∈ (0, 1) with the following property: for each subtree T(v) containing an optimal arm there exist tree nodes u, u′ ∈ T(v) such that P(u|v) and P(u′|v) are at least q and
|μ(u) − μ(u′)| ≥ (1/2)·W(v).   (5)
One could use the pair u, u′ in Definition 2.2 to obtain reliable estimates for W(v). The definition focuses on the difficulty of obtaining such a pair via random sampling from T(v). The definition is flexible: it allows u and u′ to be at different depths (which is useful if node degrees are large and non-uniform), and the widths of other internal nodes in T(v) cannot adversely impact quality. The constant 1/2 in (5) is arbitrary; we fix it for convenience.
For a particularly simple example, consider a binary taxonomy such that for each subtree T(v) containing an optimal arm there exist grandchildren u, u′ of v that satisfy (5). For instance, such u, u′ exist if the width of each grandchild of v is at most (1/4)·W(v). Then quality ≥ 1/4.
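The sampling primitive behind this definition can be sketched as follows. This is our own illustrative code, not the paper's; in particular the encoding of the taxonomy as a children dictionary is an assumption made for the example.

```python
import random

def random_leaf(children, v, rng):
    """Random sample from T(v): descend, choosing a uniformly random child at each junction."""
    while children.get(v):
        v = rng.choice(children[v])
    return v

def reach_probability(children, v, u):
    """P(u | v): probability that node u is reached by a random descent started at v."""
    parent_of = {c: n for n, cs in children.items() for c in cs}
    p, node = 1.0, u
    while node != v:
        node_parent = parent_of[node]
        p /= len(children[node_parent])   # uniform branching at each junction
        node = node_parent
    return p

# Toy taxonomy: root r branches to a (two leaves) and b (one leaf).
children = {"r": ["a", "b"], "a": ["x1", "x2"], "b": ["x3"]}
```

Repeated calls to random_leaf give samples whose expected payoff is μ(v) as defined above; the pair (u, u′) from Definition 2.2 is then observed often enough when both reach probabilities are at least q.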
Theorem 2.3 (Main result). Assume a lower bound q ≤ quality(I) is known. Then TaxonomyZoom achieves (4) with KI = (deg/q)·log |X|, where deg is the degree of the taxonomy.
Theorem 2.1 follows because, letting cDBL be the doubling constant of (X, Dimp), all node degrees are at most cDBL and moreover quality ≥ 2^{−cDBL} (we omit the proof from this version).
Discussion. The guarantee in Theorem 2.3 is instance-dependent: it depends on deg/quality and ZoomDim, and is meaningful only if these quantities are small compared to the number of arms (informally, we will call such problem instances "benign"). Also, the algorithm needs to know a non-trivial lower bound on quality; very conservative estimates would not suffice. However, underestimating quality (and likewise overestimating Thor) is relatively inexpensive as long as the "influence" of these parameters on regret is eventually dominated by the T^{1−1/(2+d)} term.
For benign problem instances, the benefit of using the taxonomy is the vastly improved dependence
on the number of arms. Without a taxonomy or any other structure, regret of any algorithm for
stochastic MAB scales linearly in the number of (near-optimal) arms, for sufficiently large t. Specifically, let N_δ be the number of arms x such that δ/2 < Δ(x) ≤ δ. Then the worst-case regret (over all problem instances) cannot be better than R(t) = min(δ·t, Ω(N_δ/δ)).8
An alternative approach to MAB problems on trees (without knowing the "widths") are the "tree bandit algorithms" explored in [21, 24]. Here for each tree node v there is a separate, independent copy of UCB1 [2] or a UCB1-style index algorithm (call it A_v), so that the "arms" for A_v correspond to children u of v, and selecting a child u in a given round corresponds to playing A_u in this round. [21, 24] report successful empirical performance of such algorithms on some examples. However, regret bounds for these algorithms do not scale as well with the number of arms: even if the tree widths are given, then letting Δ_min ≜ min_{x∈X: Δ(x)>0} Δ(x), the regret bound is proportional to |X_δ|/Δ_min (where X_δ is as in Definition 1.1), whereas the regret bound in Theorem 2.3 is (essentially) in terms of the covering numbers N^cov_{δ/8}(X_δ).
8 This is implicit from the lower-bounding analysis in [22] and [3].
3  Main algorithm
Our algorithm TaxonomyZoom(Thor, q) is parameterized by the time horizon Thor and the quality parameter q ≤ quality. In each round the algorithm selects one of the tree nodes, say v, and plays a randomly sampled arm x from T(v). We say that a subtree T(u) is hit in this round if u ∈ T(v) and x ∈ T(u). For each tree node v and time t, let n_t(v) be the number of times the subtree T(v) has been hit by the algorithm before time t, and let ν_t(v) be the corresponding average reward. Note that E[ν_t(v) | n_t(v) > 0] = μ(v). Define the confidence radius of v at time t as
rad_t(v) ≜ √( 8 log(Thor·|X|) / (2 + n_t(v)) ).   (6)
The meaning of the confidence radius is that |ν_t(v) − μ(v)| ≤ rad_t(v) with high probability.
For each tree node v and time t, let us define the index of v at time t as
I_t(v) ≜ ν_t(v) + (1 + 2·k_A)·rad_t(v),   where k_A ≜ 4·√(2/q).   (7)
Here we posit ν_t(v) = 0 if n_t(v) = 0. Let us define the width estimate9
U_t(v) ≜ max_{u∈T(v), s≤t} ν_s(u) − rad_s(u),
L_t(v) ≜ min_{u∈T(v), s≤t} ν_s(u) + rad_s(u),
W_t(v) ≜ max(0, U_t(v) − L_t(v)).   (8)
Here U_t(v) is the best available lower confidence bound on max_{x∈X(v)} μ(x), and L_t(v) is the best available upper confidence bound on min_{x∈X(v)} μ(x). If both bounds hold then W_t(v) ≤ W(v).
Throughout the phase, some tree nodes are designated active. We maintain the following invariant:
W_t(v) < k_A·rad_t(v) for each active internal node v.   (9)
TaxonomyZoom(Thor, q) operates as follows. Initially the only active tree node is the root. In each
round, the algorithm performs the following three steps:
(S1) While Invariant (9) is violated by some v, de-activate v and activate all its children.
(S2) Select an active tree node v with the maximal index (7), breaking ties arbitrarily.
(S3) Play a randomly sampled arm from T (v).
Note that each arm is activated and deactivated at most once.
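The three steps above can be sketched in code. This is our own simplified rendering, not the paper's implementation: payoffs are modeled as Bernoulli draws, the active set is a plain list rather than a sorted linked list, and the width estimate (8) is recomputed by recursion over the subtree instead of the O(depth) incremental update described below.

```python
import math
import random

class Node:
    """Tree node with per-node statistics (O(1) storage each, as in Lemma 3.1)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.n = 0            # n_t(v): number of times T(v) was hit
        self.avg = 0.0        # nu_t(v): average observed payoff
        self.hi = -math.inf   # running max of avg - rad at this node
        self.lo = math.inf    # running min of avg + rad at this node

def rad(v, t_hor, num_arms):
    # Confidence radius, eq. (6).
    return math.sqrt(8 * math.log(t_hor * num_arms) / (2 + v.n))

def subtree_bounds(v):
    # U_t(v), L_t(v) of eq. (8): extrema of the per-node running bounds over T(v).
    hi, lo = v.hi, v.lo
    for c in v.children:
        chi, clo = subtree_bounds(c)
        hi, lo = max(hi, chi), min(lo, clo)
    return hi, lo

def play_round(active, mu, t_hor, num_arms, k_a, rng):
    # (S1) while invariant (9) is violated, de-activate v and activate its children.
    changed = True
    while changed:
        changed = False
        for v in list(active):
            hi, lo = subtree_bounds(v)
            w_est = max(0.0, hi - lo)                       # W_t(v), eq. (8)
            if v.children and w_est >= k_a * rad(v, t_hor, num_arms):
                active.remove(v)
                active.extend(v.children)
                changed = True
    # (S2) select the active node with the maximal index, eq. (7).
    v = max(active, key=lambda u: u.avg + (1 + 2 * k_a) * rad(u, t_hor, num_arms))
    # (S3) play a random arm from T(v); update statistics along the v -> x path.
    path = [v]
    while path[-1].children:
        path.append(rng.choice(path[-1].children))
    x = path[-1]
    reward = 1.0 if rng.random() < mu[x.name] else 0.0      # Bernoulli payoffs (toy model)
    for u in path:
        u.n += 1
        u.avg += (reward - u.avg) / u.n
        r = rad(u, t_hor, num_arms)
        u.hi = max(u.hi, u.avg - r)
        u.lo = min(u.lo, u.avg + r)
    return x.name
```

Keeping only the running extrema hi and lo per node is what makes the s ≤ t maximization in (8) cheap: each played node is updated once per round, and a subtree's bounds are the extrema of its nodes' running values.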
Implementation details. If an explicit representation of the taxonomy can be stored in memory, then the following simple implementation is possible. For each tree node v, we store several statistics: n_t, ν_t, U_t and L_t. Further, we maintain a linked list of active nodes, sorted by the index. Suppose in a given round t, a subtree v is chosen, and an arm x is played. We update the statistics by going up the x → v path in the tree (note that only the statistics on this path need to be updated). This update can be done in time O(depth(x)). Then one can check whether Invariant (9) holds for a given node in time O(1). So step (S1) of the algorithm can be implemented in time O(1 + N), where N is the number of nodes activated during this step. Finally, the linked list of active nodes can be updated in time O(1 + N). Then the selections in steps (S2) and (S3) are done in time O(1).
Lemma 3.1. TaxonomyZoom can be implemented with O(1) storage per each tree node, so that in
each round the time complexity is O(N + depth(x)), where N is the number of arms activated in
step (S1), and x is the arm chosen in step (S3).
Sometimes it may be feasible (and more space-efficient) to represent the taxonomy implicitly, so that
a tree node is expanded only if needed. Specifically, suppose the following interface is provided:
given a tree node v, return all its children and an arbitrary arm x ∈ T(v). Then TaxonomyZoom can be implemented so that it only stores the statistics for each node u such that P(u|v) ≥ q for some active node v (rather than for all tree nodes).10 The running times are as in Lemma 3.1.
9 Defining U_t, L_t in (8) via s ≤ t (rather than s = t) improves performance, but is not essential for the analysis.
10 The algorithm needs to be modified slightly; we leave the details to the full version.
4  Analysis: proof of Theorem 2.3
First, let us fix some notation. We will focus on regret up to a fixed time T ≤ Thor. In what follows, let d = ZoomDim(I, c) for some fixed c > 0. Recall the notation X_δ ≜ {x ∈ X : Δ(x) ≤ δ} from Definition 1.1. Here δ is the "distance scale"; we will be interested in δ ≥ δ_0, for
δ_0 ≜ (K/T)^{1/(d+2)},   where K ≜ O(c·deg·k_A²·log Thor).   (10)
We identify a certain high-probability behavior of the algorithm, and argue deterministically conditional on the event that this behavior actually holds.
Definition 4.1. An execution of TaxonomyZoom is called clean if for each time t ≤ T and all tree nodes v the following two properties hold:
(P1) |ν_t(v) − μ(v)| ≤ rad_t(v) as long as n_t(v) > 0.
(P2) If u ∈ T(v) then n_t(v)·P(u|v) ≥ 8 log T implies n_t(u) ≥ (1/2)·n_t(v)·P(u|v).
Note that in a clean execution the quantities in (8) satisfy the desired high-confidence bounds: U_t(v) ≤ max_{x∈X(v)} μ(x) and L_t(v) ≥ min_{x∈X(v)} μ(x), which implies W(v) ≥ W_t(v).
Lemma 4.2. An execution of TaxonomyZoom is clean with probability at least 1 − 2·Thor^{−2}.
Proof. For part (P1), fix a tree node v and let ζ_j be the payoff in the j-th round that v has been hit. Then { Σ_{j=1}^{n} (ζ_j − μ(v)) }_{n=1..T} is a martingale.11 Let ν̄_n ≜ (1/n)·Σ_{j=1}^{n} ζ_j be the n-th average. Then by the Azuma-Hoeffding inequality, for any n ≤ Thor we have:
Pr[ |ν̄_n − μ(v)| > r(n) ] ≤ (Thor·|X|)^{−2},   where r(n) ≜ √( 8 log(Thor·|X|) / (2 + n) ).   (11)
Note that rad_t(v) = r(n_t(v)). We obtain (P1) by taking the Union Bound for (11) over all nodes v and all n ≤ T. (This is the only place where we use the log |X| term in (6).)
Part (P2) is proved via a similar application of martingales and the Azuma-Hoeffding inequality.
From now on we will argue about a clean execution. Recall that by definition of W(·),
μ(v) ≤ μ(u) + W(v) for any tree node u ∈ T(v).   (12)
The crux of the proof of Theorem 2.3 is that at all times the maximal index is at least μ*.
Lemma 4.3. Consider a clean execution of TaxonomyZoom(Thor, q). Then the following holds: in any round t ≤ Thor, at any point in the execution such that the invariant (9) holds, there exists an active tree node v* such that I_t(v*) ≥ μ*.
Proof. Fix an optimal arm x* ∈ X. Note that in each round t, there exists an active tree node v*_t such that x* ∈ T(v*_t). (One can prove it by induction on t, using the (de)activation rule (S1) in TaxonomyZoom.) Fix round t and the corresponding tree node v* = v*_t.
By Definition 2.2, there exist v_0, v_1 ∈ T_q(v*) such that |μ(v_1) − μ(v_0)| ≥ W(v*)/2.
Assume that Δ ≜ W(v*) > 0, and define f(Δ) = 8³·log(Thor)·Δ^{−2}. Then for each tree node v,
rad_t(v) ≤ Δ/8 whenever n_t(v) ≥ f(Δ).   (13)
Now, for the sake of contradiction let us suppose that n_t(v*) ≥ ((1/4)·k_A)²·f(Δ). By (13), this is equivalent to Δ ≥ 2·k_A·rad_t(v*). Note that n_t(v*) ≥ (2/q)·f(Δ) by our assumption on k_A, so by property (P2) in the definition of a clean execution, for each node v_j, j ∈ {0, 1}, we have n_t(v_j) ≥ f(Δ), which implies rad_t(v_j) ≤ Δ/8. Therefore (8) gives a good estimate of W(v*), namely W_t(v*) ≥ Δ/4. It follows that W_t(v*) ≥ k_A·rad_t(v*), which violates Invariant (9).
We proved that W(v*) ≤ 2·k_A·rad_t(v*). Using (12), we have Δ(v*) ≤ W(v*) ≤ 2·k_A·rad_t(v*) and
I_t(v*) ≥ μ(v*) + 2·k_A·rad_t(v*) ≥ μ*,   (14)
where the first inequality in (14) holds by definition (7) and property (P1) of a clean execution.
11 To make ν̄_n well-defined for any n ≤ Thor, consider a hypothetical algorithm which coincides with TaxonomyZoom for the first Thor rounds and then proceeds so that each tree node is selected Thor times.
We use Lemma 4.3 to show that the algorithm does not activate too many tree nodes with large badness Δ(·), and that each such node is not played too often. For each tree node v, let N(v) be the number of times node v was selected in step (S2) of the algorithm. Call v positive if N(v) > 0. We partition all positive tree nodes and all deactivated tree nodes into sets
S_i = { positive tree nodes v : 2^{−i} < Δ(v) ≤ 2^{−i+1} },
S_i* = { deactivated tree nodes v : 2^{−i} < 4·W(v) ≤ 2^{−i+1} }.
Lemma 4.4. Consider a clean execution of algorithm TaxonomyZoom(Thor, q).
(a) For each tree node v we have N(v) ≤ O(k_A²·log Thor)·Δ^{−2}(v).
(b) If node v is de-activated at some point in the execution, then Δ(v) ≤ 4·W(v).
(c) For each i, |S_i*| ≤ 2·K_i, where K_i ≜ c·2^{(i+1)·d}.
(d) For each i, |S_i| ≤ O(deg·K_{i+1}).
Proof. For part (a), fix an arbitrary tree node v and let t be the last time v was selected in step (S2) of the algorithm. By Lemma 4.3, at that point in the execution there was a tree node v* such that I_t(v*) ≥ μ*. Then using the selection rule (step (S2)) and the definition of the index (7), we have
μ* ≤ I_t(v*) ≤ I_t(v) ≤ μ(v) + (2 + 2·k_A)·rad_t(v),
so Δ(v) ≤ (2 + 2·k_A)·rad_t(v).   (15)
Hence N(v) ≤ n_t(v) ≤ O(k_A²·log Thor)·Δ^{−2}(v).
For part (b), suppose tree node v was de-activated at time s. Let t be the last round in which v was selected. Then
W(v) ≥ W_s(v) ≥ k_A·rad_s(v) ≥ (1/3)·(2 + 2·k_A)·rad_t(v) ≥ (1/3)·Δ(v).   (16)
Indeed, the first inequality in (16) holds since we are in a clean execution, the second inequality in (16) holds because v was de-activated, the third inequality holds because n_s(v) = n_t(v) + 1, and the last inequality in (16) holds by (15).
For part (c), let us fix i and define Y_i = { x ∈ X : Δ(x) ≤ 2^{−i+1} }. By Definition 1.1, this set can be covered by K_i subtrees T(v_1), ..., T(v_{K_i}), each of width < 2^{−i}/4. Fix a deactivated tree node v ∈ S_i*. For each arm x ∈ X in subtree T(v) we have, by part (b),
Δ(x) ≤ Δ(v) + W(v) ≤ 4·W(v) ≤ 2^{−i+1},
so x ∈ Y_i and therefore is contained in some T(v_j). Note that v_j ∈ T(v) since W(v) > W(v_j). It follows that the subtrees T(v_1), ..., T(v_{K_i}) cover the leaf set of T(v).
Consider the graph G on the node set S_i* ∪ {v_1, ..., v_{K_i}}, where two nodes u, v are connected by a directed edge (u, v) if there is a path from u to v in the tree T. This is a directed forest of out-degree at least 2, whose leaf set is a subset of {v_1, ..., v_{K_i}}. Since in any directed tree of out-degree ≥ 2 the number of nodes is at most twice the number of leaves, G contains at most K_i internal nodes. Thus |S_i*| ≤ 2·K_i, proving part (c).
For part (d), let us fix i and consider a positive tree node u ∈ S_i. Since N(u) > 0, either u is active at time Thor, or it was deactivated in some round before Thor. In the former case, let v be the parent of u. In the latter case, let v = u. Then by part (b) we have 2^{−i} < Δ(u) ≤ Δ(v) + W(v) ≤ 4·W(v), so v ∈ S_j* for some j ≤ i + 1.
For each tree node v, define its family as the set which consists of v itself and all its children. We have proved that each positive node u ∈ S_i belongs to the family of some deactivated node v ∈ ∪_{j=1..i+1} S_j*. Since each family consists of at most 1 + deg nodes, it follows that
|S_i| ≤ (1 + deg)·Σ_{j=1..i+1} K_j ≤ O(deg·K_{i+1}).
Proof of Theorem 2.3: The theorem follows from Lemma 4.4(a-d). Let us assume a clean execution. (Recall that by Lemma 4.2 the failure probability is sufficiently small to be neglected.) Then:
Σ_{v∈S_i} N(v)·Δ(v) ≤ O(k_A²·log Thor)·Σ_{v∈S_i} Δ^{−1}(v) ≤ O(k_A²·log Thor)·|S_i|·2^i ≤ K·2^{(i+2)(1+d)},
where K is defined in (10). For any δ_0 = 2^{−i_0} we have
R(T) ≤ Σ_{tree nodes v} N(v)·Δ(v)
     = Σ_{v: Δ(v)<δ_0} N(v)·Δ(v) + Σ_{v: Δ(v)≥δ_0} N(v)·Δ(v)
     ≤ δ_0·T + Σ_{i≤i_0} Σ_{v∈S_i} N(v)·Δ(v) ≤ δ_0·T + Σ_{i≤i_0} K·2^{(i+2)(1+d)}
     ≤ δ_0·T + O(K)·(8/δ_0)^{1+d}.
We obtain the desired regret bound (4) by setting δ_0 as in (10).
5  (De)parameterizing the algorithm
Recall that TaxonomyZoom needs to be parameterized by Thor and q. The dependence on the parameters can be removed using a suitable version of the standard doubling trick: consider a "meta-algorithm" that proceeds in phases, so that in each phase i = 1, 2, 3, ... a fresh instance of TaxonomyZoom(2^i, q_i) is run for 2^i rounds, where q_i slowly decreases with i. For instance, if we take q_i = 2^{−εi} for some ε ∈ (0, 1), then this meta-algorithm has regret
R(T) ≤ O(c·deg·log T)^{1/(2+d)} · T^{1−(1−ε)/(2+d)}   for all T ≥ quality^{−1/ε},   (17)
where d = ZoomDim(I, c), for any given c > 0.
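The phase structure is simple to sketch. In our own abstraction below, the run_phase callback stands in for running one fresh TaxonomyZoom instance to completion and returning the reward it collected:

```python
def meta_algorithm(run_phase, epsilon, num_phases):
    """Doubling trick: phase i runs a fresh instance with horizon 2**i for 2**i rounds,
    with quality parameter q_i = 2**(-epsilon * i) slowly decreasing across phases."""
    total_reward = 0.0
    for i in range(1, num_phases + 1):
        t_hor = 2 ** i                    # horizon (and length) of phase i
        q_i = 2.0 ** (-epsilon * i)       # quality parameter for phase i
        total_reward += run_phase(t_hor, q_i)
    return total_reward
```

Since phase lengths double, the final phase dominates the total running time, which is how per-phase regret bounds translate into the anytime bound (17).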
While the doubling trick is very useful in theory of online decision problems, its practical importance
is questionable, as running a fresh algorithm instance in each phase seems unnecessarily wasteful.
We conjecture that in practice one could run a single instance of the algorithm while gradually
increasing Thor and decreasing q. However, providing provable guarantees for this modified algorithm seems beyond the current techniques. In particular, extending a much simpler analysis of the
zooming algorithm [20] to arbitrary time horizon remains a challenge.12
Further, we conjecture that TaxonomyZoom will typically work in practice even if the parameters are
misspecified, i.e. even if Thor is too low and q is too optimistic. Indeed, recall that our algorithm
is index-based, in the style of UCB1 [2]. The only place where the parameters are invoked is in
the definition of the index (7), namely in the constant in front of the exploration term. It has been
observed in [28, 29] that in a related MAB setting, reducing this constant to 1 from the theoretically
mandated Θ(log T)-type term actually improves algorithms' performance in simulations.
6 Conclusions
In this paper, we have extended previous multi-armed bandit learning algorithms to settings with large numbers
of available strategies. Whereas the most effective previous approaches rely on explicitly knowing
the distance between available strategies, we consider the case where the distances are implicit in a
hierarchy of available strategies. We have provided a learning algorithm for this setting, and shown
that its performance almost matches the best known guarantees for the Lipschitz MAB problem.
Further, we have shown how our approach results in stronger provable guarantees than alternative
algorithms such as tree bandit algorithms [21, 24].
We conjecture that the dependence on quality (or some version thereof) is necessary for the worst-case regret bounds, even if ZoomDim is low. It is an open question whether there are non-trivial
families of problem instances with low quality for which one could achieve low regret.
Our results suggest some natural extensions. Most interestingly, a number of applications recently
posed as MAB problems over large sets of arms, including learning to rank online advertisements
or web documents (e.g. [26, 29]), naturally involve choosing among arms (e.g. ads) that can be classified according to any of a number of hierarchies (e.g. by class of product sold, geographic location,
etc.). In particular, such different hierarchies may be of different usefulness. Selecting among, or
combining from, a set of available hierarchical representations of arms poses interesting challenges.
More generally, we would like to generalize Theorem 2.3 to other structures that implicitly define
a metric space on arms (in the sense of (1)). One specific target would be directed acyclic graphs.
While our algorithm is well-defined for this setting, the theoretical analysis does not apply.
¹² However, [7] obtains similar guarantees for arbitrary time horizon, with a different algorithm.
References
[1] Rajeev Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33(6):1926–1951, 1995.
[2] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002. Preliminary version in 15th ICML, 1998.
[3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002. Preliminary version in 36th IEEE FOCS, 1995.
[4] Peter Auer, Ronald Ortner, and Csaba Szepesvári. Improved Rates for the Stochastic Continuum-Armed Bandit Problem. In 20th COLT, pages 454–468, 2007.
[5] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. J. of Computer and System Sciences, 74(1):97–114, February 2008. Preliminary version in 36th ACM STOC, 2004.
[6] Andrei Broder, Marcus Fontoura, Vanja Josifovski, and Lance Riedel. A semantic approach to contextual advertising. In 30th SIGIR, pages 559–566, 2007.
[7] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. Online Optimization in X-Armed Bandits. J. of Machine Learning Research (JMLR), 12:1587–1627, 2011. Preliminary version in NIPS 2008.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge Univ. Press, 2006.
[9] Eric Cope. Regret and convergence bounds for immediate-reward reinforcement learning with continuous action spaces. IEEE Trans. on Automatic Control, 54(6):1243–1253, 2009. A manuscript from 2004.
[10] Varsha Dani and Thomas P. Hayes. Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In 17th ACM-SIAM SODA, pages 937–943, 2006.
[11] Varsha Dani, Thomas P. Hayes, and Sham Kakade. The Price of Bandit Information for Online Optimization. In 20th NIPS, 2007.
[12] Abraham Flaxman, Adam Kalai, and H. Brendan McMahan. Online Convex Optimization in the Bandit Setting: Gradient Descent without a Gradient. In 16th ACM-SIAM SODA, pages 385–394, 2005.
[13] Sylvain Gelly and David Silver. Combining online and offline knowledge in UCT. In 24th ICML, 2007.
[14] Sylvain Gelly and David Silver. Achieving master level play in 9x9 computer go. In 23rd AAAI, 2008.
[15] Anupam Gupta, Robert Krauthgamer, and James R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In 44th IEEE FOCS, pages 534–543, 2003.
[16] Sham M. Kakade, Adam T. Kalai, and Katrina Ligett. Playing Games with Approximation Algorithms. In 39th ACM STOC, 2007.
[17] Robert Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In 18th NIPS, 2004.
[18] Robert Kleinberg. Online Decision Problems with Large Strategy Sets. PhD thesis, MIT, 2005.
[19] Robert Kleinberg and Aleksandrs Slivkins. Sharp Dichotomies for Regret Minimization in Metric Spaces. In 21st ACM-SIAM SODA, 2010.
[20] Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-Armed Bandits in Metric Spaces. In 40th ACM STOC, pages 681–690, 2008.
[21] Levente Kocsis and Csaba Szepesvári. Bandit Based Monte-Carlo Planning. In 17th ECML, pages 282–293, 2006.
[22] T. L. Lai and Herbert Robbins. Asymptotically efficient Adaptive Allocation Rules. Advances in Applied Mathematics, 6:4–22, 1985.
[23] H. Brendan McMahan and Avrim Blum. Online Geometric Optimization in the Bandit Setting Against an Adaptive Adversary. In 17th COLT, pages 109–123, 2004.
[24] Rémi Munos and Pierre-Arnaud Coquelin. Bandit algorithms for tree search. In 23rd UAI, 2007.
[25] Sandeep Pandey, Deepak Agarwal, Deepayan Chakrabarti, and Vanja Josifovski. Bandits for Taxonomies: A Model-based Approach. In SDM, 2007.
[26] Sandeep Pandey, Deepayan Chakrabarti, and Deepak Agarwal. Multi-armed Bandit Problems with Dependent Arms. In 24th ICML, 2007.
[27] Paul N. Bennett, Krysta Marie Svore, and Susan T. Dumais. Classification-enhanced ranking. In 19th WWW, pages 111–120, 2010.
[28] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In 25th ICML, pages 784–791, 2008.
[29] Aleksandrs Slivkins, Filip Radlinski, and Sreenivas Gollapudi. Learning optimally diverse rankings over large document collections. In 27th ICML, pages 983–990, 2010.
Comparative Analysis of Viterbi Training and
Maximum Likelihood Estimation for HMMs
Armen Allahverdyan∗
Yerevan Physics Institute
Yerevan, Armenia
[email protected]
Aram Galstyan
USC Information Sciences Institute
Marina del Rey, CA, USA
[email protected]
Abstract
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with a
more conventional Maximum Likelihood (ML) approach to parameter estimation
in Hidden Markov Models. While ML estimator works by (locally) maximizing
the likelihood of the observed data, VT seeks to maximize the probability of the
most likely hidden state sequence. We develop an analytical framework based on
a generating function formalism and illustrate it on an exactly solvable model of
HMM with one unambiguous symbol. For this particular model the ML objective
function is continuously degenerate. VT objective, in contrast, is shown to have
only finite degeneracy. Furthermore, VT converges faster and results in sparser
(simpler) models, thus realizing an automatic Occam's razor for HMM learning.
For more general scenario VT can be worse compared to ML but still capable of
correctly recovering most of the parameters.
1 Introduction
Hidden Markov Models (HMM) provide one of the simplest examples of structured data observed
through a noisy channel. The inference problems of HMM naturally divide into two classes [20, 9]:
i) recovering the hidden sequence of states given the observed sequence, and ii) estimating the model
parameters (transition probabilities of the hidden Markov chain and/or conditional probabilities of
observations) from the observed sequence. The first class of problems is usually solved via the maximum a posteriori (MAP) method and its computational implementation known as Viterbi algorithm
[20, 9]. For the parameter estimation problem, the prevailing method is maximum likelihood (ML)
estimation, which finds the parameters by maximizing the likelihood of the observed data. Since
global optimization is generally intractable, in practice it is implemented through an expectation-maximization (EM) procedure known as the Baum-Welch algorithm [20, 9].
An alternative approach to parameter learning is Viterbi Training (VT), also known in the literature
as segmental K-means, Baum-Viterbi algorithm, classification EM, hard EM, etc. Instead of maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely
hidden state sequence. Maximizing VT objective function is hard [8], so in practice it is implemented via an EM-style iterations between calculating the MAP sequence and adjusting the model
parameters based on the sequence statistics. It is known that VT lacks some of the desired features
of ML estimation such as consistency [17], and in fact, can produce biased estimates [9]. However,
it has been shown to perform well in practice, which explains its widespread use in applications
such as speech recognition [16], unsupervised dependency parsing [24], grammar induction [6], ion
channel modeling [19]. It is generally assumed that VT is more robust and faster but usually less
accurate, although for certain tasks it outperforms conventional EM [24].
∗ Currently at: Laboratoire de Physique Statistique et Systèmes Complexes, ISMANS, Le Mans, France.
The current understanding of when and under what circumstances one method should be preferred
over the other is not well established. For HMMs with continuous observations, Ref. [18] established
an upper bound on the difference between the ML and VT objective functions, and showed that both
approaches produce asymptotically similar estimates when the dimensionality of the observation
space is very large. Note, however, that this asymptotic limit is not very interesting as it makes
the structure imposed by the Markovian process irrelevant. A similar attempt to compare both approaches on discrete models (for stochastic context free grammars) was presented in [23]. However,
the established bound was very loose.
Our goal here is to understand, both qualitatively and quantitatively, the difference between the two
estimation methods. We develop an analytical approach based on generating functions for examining the asymptotic properties of both approaches. Previously, a similar approach was used for
calculating entropy rate of a hidden Markov process [1]. Here we provide a non-trivial extension of
the methods that allows to perform comparative asymptotic analysis of ML and VT estimation. It is
shown that both estimation methods correspond to certain free-energy minimization problem at different temperatures. Furthermore, we demonstrate the approach on a particular class of HMM with
one unambiguous symbol and obtain a closed?form solution to the estimation problem. This class
of HMMs is sufficiently rich so as to include models where not all parameters can be determined
from the observations, i.e., the model is not identifiable [7, 14, 9].
We find that for the considered model VT is a better option if the ML objective is degenerate (i.e., not
all parameters can be obtained from observations). Namely, not only VT recovers the identifiable
parameters but it also provides a simple (in the sense that non-identifiable parameters are set to
zero) and optimal (in the sense of the MAP performance) solution. Hence, VT realizes an automatic
Occam's razor for the HMM learning. In addition, we show that the VT algorithm for this model
converges faster than the conventional EM approach. Whenever the ML objective is not degenerate,
VT leads generally to inferior results that, nevertheless, may be partially correct in the sense of
recovering certain (not all) parameters.
2 Hidden Markov Process
Let S = {S0, S1, S2, ...} be a discrete-time, stationary, Markov process with conditional probability

    Pr[S_{k+l} = s_k | S_{k−1+l} = s_{k−1}] = p(s_k|s_{k−1}),    (1)

where l is an integer. Each realization s_k of the random variable S_k takes values 1, ..., L. We assume
that S is mixing: it has a unique stationary distribution p_st(s), Σ_{r=1}^{L} p(s|r) p_st(r) = p_st(s), that is
established from any initial probability in the long time limit.
Let random variables X_i, with realizations x_i = 1, ..., M, be noisy observations of S_i: the (time-invariant) conditional probability of observing X_i = x_i given the realization S_i = s_i of the Markov
process is π(x_i|s_i). Defining x ≡ (x_N, ..., x_1), s ≡ (s_N, ..., s_0), the joint probability of S, X reads

    P(s, x) = T_{s_N s_{N−1}}(x_N) ... T_{s_1 s_0}(x_1) p_st(s_0),    (2)

where the L × L transfer matrix T(x) with matrix elements T_{s_i s_{i−1}}(x) is defined as

    T_{s_i s_{i−1}}(x) = π(x|s_i) p(s_i|s_{i−1}).    (3)

X = {X_1, X_2, ...} is called a hidden Markov process. Generally, it is not Markov, but it inherits
stationarity and mixing from S [9]. The probabilities for X can be represented as follows:

    P(x) = Σ_{s s_0} [T(x)]_{s s_0} p_st(s_0),    T(x) ≡ T(x_N) T(x_{N−1}) ... T(x_1),    (4)

where T(x) is a product of transfer matrices.
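As a numerical sanity check of Eq. (4) (a minimal sketch; L, M and all parameter values below are arbitrary illustrative choices, not taken from the text), the transfer-matrix product can be compared against the brute-force sum over hidden paths:

```python
import numpy as np
from itertools import product

# Random HMM: columns of P_trans are p(.|s'), columns of Pi are pi(.|s).
L, M = 3, 2
rng = np.random.default_rng(0)
P_trans = rng.random((L, L)); P_trans /= P_trans.sum(axis=0)
Pi = rng.random((M, L)); Pi /= Pi.sum(axis=0)
# stationary distribution: Perron eigenvector of P_trans
w, v = np.linalg.eig(P_trans)
p_st = np.real(v[:, np.argmax(np.real(w))]); p_st /= p_st.sum()
T = [Pi[x][:, None] * P_trans for x in range(M)]   # T(x)_{s s'} = pi(x|s) p(s|s'), Eq. (3)

def prob_transfer(x_seq):
    """P(x) via the product T(x_N)...T(x_1) of Eq. (4); x_seq = (x_1,...,x_N)."""
    m = np.eye(L)
    for x in x_seq:
        m = T[x] @ m
    return float(np.ones(L) @ m @ p_st)

def prob_bruteforce(x_seq):
    """P(x) by summing the joint P(s, x) of Eq. (2) over all hidden paths."""
    N, total = len(x_seq), 0.0
    for s in product(range(L), repeat=len(x_seq) + 1):
        p = p_st[s[0]]
        for k in range(N):
            p *= Pi[x_seq[k], s[k + 1]] * P_trans[s[k + 1], s[k]]
        total += p
    return total
```

The two computations agree, and the transfer-matrix probabilities of all sequences of a fixed length sum to one.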
3 Parameter Estimation
3.1 Maximum Likelihood Estimation
The unknown parameters of an HMM are the transition probabilities p(s|s') of the Markov process
and the observation probabilities π(x|s); see (2). They have to be estimated from the observed
sequence x. This is standardly done via the maximum-likelihood approach: one starts with some
trial values p̂(s|s'), π̂(x|s) of the parameters and calculates the (log)-likelihood ln P̂(x), where P̂
means the probability (4) calculated at the trial values of the parameters. Next, one maximizes
ln P̂(x) over p̂(s|s') and π̂(x|s) for the given observed sequence x (in practice this is done via the
Baum-Welch algorithm [20, 9]). The rationale of this approach is as follows. Provided that the
length N of the observed sequence is long, and recalling that X is mixing (due to the analogous
feature of S), we get probability-one convergence (law of large numbers) [9]:

    ln P̂(x) → Σ_y P(y) ln P̂(y),    (5)

where the average is taken over the true probability P(...) that generated x. Since the relative
entropy is non-negative, Σ_x P(x) ln[P(x)/P̂(x)] ≥ 0, the global maximum of Σ_x P(x) ln P̂(x)
as a function of p̂(s|s') and π̂(x|s) is reached for p̂(s|s') = p(s|s') and π̂(x|s) = π(x|s). This
argument is silent on how unique this global maximum is and how difficult to reach it.
3.2 Viterbi Training
An alternative approach to the parameter learning employs the maximum a posteriori (MAP) estimation and proceeds as follows: instead of maximizing the likelihood of observed data (5), one tries to
maximize the probability of the most likely sequence [20, 9]. Given the joint probability P̂(s, x) at
trial values of the parameters, and given the observed sequence x, one estimates the generating state sequence s via maximizing the a posteriori probability

    P̂(s|x) = P̂(s, x)/P̂(x)    (6)

over s. Since P̂(x) does not depend on s, one can maximize ln P̂(s, x). If the number of observations is sufficiently large, N → ∞, one can substitute max_s ln P̂(s, x) by its average over P(...)
[see (5)] and instead maximize (over the model parameters)

    Σ_x P(x) max_s ln P̂(s, x).    (7)

To relate (7) to the free energy concept (see e.g. [2, 4]), we define an auxiliary (Gibbsian) probability

    ρ̂_β(s|x) = P̂^β(s, x) / [Σ_{s'} P̂^β(s', x)],    (8)

where β > 0 is a parameter. As a function of s (and for a fixed x), ρ̂_{β→∞}(s|x) concentrates on
those s that maximize ln P̂(s, x):

    ρ̂_{β→∞}(s|x) → (1/𝒩) Σ_j δ[s, s̃^{[j]}(x)],    (9)

where δ(s, s') is the Kronecker delta, the s̃^{[j]}(x) are equivalent outcomes of the maximization, and 𝒩
is the number of such outcomes. Further, define

    F_β ≡ −(1/β) Σ_x P(x) ln Σ_s P̂^β(s, x).    (10)

Within statistical mechanics, Eqs. (8) and (10) refer to, respectively, the Gibbs distribution and free
energy of a physical system with Hamiltonian H = −ln P̂(s, x) coupled to a thermal bath at
inverse temperature β = 1/T [2, 4]. It is then clear that ML and Viterbi Training correspond
to minimizing the free energy Eq. (10) at β = 1 and β = ∞, respectively. Note that
β² ∂_β F_β = −Σ_x P(x) Σ_s ρ̂_β(s|x) ln ρ̂_β(s|x) ≥ 0, which yields F_1 ≤ F_∞.
3.3 Local Optimization
As we mentioned, global maximization of neither objective is feasible in the general case. Instead,
in practice this maximization is locally implemented via an EM-type algorithm [20, 9]: for a given
observed sequence x, and for some initial values of the parameters, one calculates the expected value
of the objective function under the trial parameters (E-step), and adjusts the parameters to maximize
this expectation (M-step). The resulting estimates of the parameters are now employed as new trial
parameters and the previous step is repeated. This recursion continues till convergence.
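A schematic implementation of this Viterbi-Training (hard-EM) recursion for a discrete HMM is sketched below. This is an illustrative sketch of the generic procedure only, not the authors' exact algorithm; it uses row-stochastic matrices and small additive smoothing (alpha) to avoid zero counts:

```python
import numpy as np

def viterbi_path(x, logA, logB, logp0):
    """MAP hidden path (E-step): logA[i,j] = log p(j|i), logB[j,o] = log pi(o|j)."""
    N, L = len(x), logA.shape[0]
    delta = logp0 + logB[:, x[0]]
    back = np.zeros((N, L), dtype=int)
    for t in range(1, N):
        scores = delta[:, None] + logA            # scores[i, j]: come from i, go to j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(L)] + logB[:, x[t]]
    path = [int(np.argmax(delta))]
    for t in range(N - 1, 0, -1):                 # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

def viterbi_train(x, L, M, iters=20, seed=0, alpha=1e-3):
    """Hard EM: alternate Viterbi decoding with re-estimation from path counts."""
    rng = np.random.default_rng(seed)
    A = rng.random((L, L)); A /= A.sum(1, keepdims=True)
    B = rng.random((L, M)); B /= B.sum(1, keepdims=True)
    p0 = np.full(L, 1.0 / L)
    for _ in range(iters):
        s = viterbi_path(x, np.log(A), np.log(B), np.log(p0))   # E-step
        A = np.full((L, L), alpha); B = np.full((L, M), alpha)  # M-step from counts
        for t in range(len(x) - 1):
            A[s[t], s[t + 1]] += 1
        for t in range(len(x)):
            B[s[t], x[t]] += 1
        A /= A.sum(1, keepdims=True); B /= B.sum(1, keepdims=True)
    return A, B
```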
For our purposes, this procedure can be understood as calculating certain statistics of the hidden sequence averaged over the Gibbs distribution Eq. (8). Indeed, let us introduce
f_η(s) ≡ e^{−η Σ_{i=1}^{N} δ(s_{i+1}, a) δ(s_i, b)} and define

    β F_β(η) ≡ −Σ_x P(x) ln Σ_s P̂^β(s, x) f_η(s).    (11)

Then, for instance, the (iterative) Viterbi estimate of the transition probabilities is given as follows:

    P̃(S_{k+1} = a, S_k = b) = ∂_η β[F_β(η)]|_{η→0}.    (12)

Conditional probabilities for observations are calculated similarly via a different indicator function.
4 Generating Function
Note from (4) that both P(x) and P̂(x) are obtained as matrix products. For a large number of
multipliers the behavior of such products is governed by the multiplicative law of large numbers.
We now recall its formulation from [10]: for N → ∞ and x generated by the mixing process X
there is a probability-one convergence:

    (1/N) ln ||T(x)|| → (1/N) Σ_y P(y) ln λ[T(y)],    (13)

where ||...|| is a matrix norm in the linear space of L × L matrices, and λ[T(x)] is the maximal
eigenvalue of T(x). Note that (13) does not depend on the specific norm chosen, because all norms
in the finite-dimensional linear space are equivalent; they differ by a multiplicative factor that disappears for N → ∞ [10]. Eqs. (4, 13) also imply Σ_x λ[T(x)] → 1. Altogether, we calculate (5) via
its probability-one limit

    (1/N) Σ_x P(x) ln P̂(x) → (1/N) Σ_x λ[T(x)] ln λ[T̂(x)].    (14)

Note that the multiplicative law of large numbers is normally formulated for the maximal singular
value. Its reformulation in terms of the maximal eigenvalue needs additional arguments [1].
Introducing the generating function

    Φ^N(n, N) = Σ_x λ[T(x)] λ^n[T̂(x)],    (15)

where n is a non-negative number, and where Φ^N(n, N) means Φ(n, N) raised to the power N, one represents (14) as

    (1/N) Σ_x λ[T(x)] ln λ[T̂(x)] = ∂_n Φ(n, N)|_{n=0},    (16)

where we took into account Φ(0, N) = 1, as follows from (15).
The behavior of Φ^N(n, N) is better understood after expressing it via the zeta-function ζ(z, n) [1]:

    ζ(z, n) = exp[ −Σ_{m=1}^{∞} (z^m/m) Φ^m(n, m) ],    (17)

where Φ^m(n, m) ≥ 0 is given by (15). Since for large N one has Φ^N(n, N) → Λ^N(n) [this is the content
of (13)], the zeta-function ζ(z, n) has a zero at z = 1/Λ(n):

    ζ(1/Λ(n), n) = 0.    (18)

Indeed, for z close to (but smaller than) 1/Λ(n), the series Σ_{m=1}^{∞} (z^m/m) Φ^m(n, m) → Σ_{m=1}^{∞} [zΛ(n)]^m/m
almost diverges, and one has ζ(z, n) ≈ 1 − zΛ(n). Recalling that Λ(0) = 1 and taking n → 0 in
0 = (d/dn) ζ(1/Λ(n), n), we get from (16)

    (1/N) Σ_x λ[T(x)] ln λ[T̂(x)] = ∂_n ζ(1, 0) / ∂_z ζ(1, 0).    (19)
For calculating −βF_β in (10) we have, instead of (19),

    −βF_β/N = ∂_n ζ^[β](1, 0) / ∂_z ζ^[β](1, 0),    (20)

where ζ^[β](z, n) employs T̂^β_{s_i s_{i−1}}(x) = π̂^β(x|s_i) p̂^β(s_i|s_{i−1}) instead of T̂_{s_i s_{i−1}}(x) in (19).
Though in this paper we restricted ourselves to the limit N → ∞, we stress that knowledge of
the generating function Φ^N(n, N) allows one to analyze the learning algorithms for any finite N.
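The probability-one convergence in (13) can be illustrated numerically (a minimal sketch with arbitrary illustrative parameters): the per-symbol log-probability (1/N) ln P(x), computed stably by rescaling the transfer-matrix product at each step, takes nearly the same value on independent long realizations of X:

```python
import numpy as np

# Random 2-state, 2-symbol HMM; columns of P_trans are p(.|s'), of Pi are pi(.|s).
L, M = 2, 2
rng0 = np.random.default_rng(1)
P_trans = rng0.random((L, L)); P_trans /= P_trans.sum(axis=0)
Pi = rng0.random((M, L)); Pi /= Pi.sum(axis=0)
T = [Pi[x][:, None] * P_trans for x in range(M)]   # transfer matrices, Eq. (3)
w, v = np.linalg.eig(P_trans)
p_st = np.real(v[:, np.argmax(np.real(w))]); p_st /= p_st.sum()

def log_prob_rate(N, seed):
    """Sample x_1..x_N from the HMM and return (1/N) ln P(x),
    accumulating the log of a rescaling factor to avoid underflow."""
    rng = np.random.default_rng(seed)
    s = rng.choice(L, p=p_st)
    vec, logp = p_st.copy(), 0.0
    for _ in range(N):
        s = rng.choice(L, p=P_trans[:, s])   # hidden step
        x = rng.choice(M, p=Pi[:, s])        # observation
        vec = T[x] @ vec                     # one more transfer-matrix factor
        c = vec.sum()
        logp += np.log(c)
        vec /= c
    return logp / N
```

Two independent realizations of length a few thousand give nearly identical rates, as the law of large numbers predicts.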
Figure 1: The hidden Markov process (21, 22) for ε = 0. Gray circles and arrows indicate the
realizations and transitions of the internal Markov process; see (21). The light circles and black
arrows indicate the realizations of the observed process.
5 Hidden Markov Model with One Unambiguous Symbol
5.1 Definition
Given an L-state Markov process S, the observed process X has two states 1 and 2; see Fig. 1. All
internal states besides one are observed as 2, while the internal state 1 produces, respectively, 1 and
2 with probabilities 1 − ε and ε. For L = 3 we obtain from (1) π(1|1) = 1 − π(2|1) = 1 − ε,
π(1|2) = π(1|3) = 0, π(2|2) = π(2|3) = 1. Hence 1 is unambiguous: if it is observed,
the unobserved process S was certainly in 1; see Fig. 1. The simplest example of such an HMM exists
already for L = 2; see [12] for analytical features of entropy for this case. We, however, describe
in detail the L = 3 situation, since this case will be seen to be generic (in contrast to L = 2) and
it allows straightforward generalizations to L > 3. The transition matrix (1) of a general L = 3
Markov process reads

    P ≡ {p(s|s')}_{s,s'=1}^{3} =
        ( p0  q1  r1 )
        ( p1  q0  r2 )    p0 = 1 − p1 − p2,   q0 = 1 − q1 − q2,   r0 = 1 − r1 − r2,    (21)
        ( p2  q2  r0 )

where, e.g., q1 = p(1|2) is the transition probability 2 → 1; see Fig. 1. The corresponding transfer
matrices read from (3)

    T(1) = (1 − ε) ( p0  q1  r1 ; 0  0  0 ; 0  0  0 ),    T(2) = P − T(1).    (22)

Eq. (22) makes straightforward the reconstruction of the transfer matrices for L ≥ 4. It should also
be obvious that for all L only the first row of T(1) consists of non-zero elements.
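The structure of the transfer matrices (22) is easy to verify numerically (a minimal sketch; the parameter values below are arbitrary illustrative choices): T(1) + T(2) must reproduce P, the columns of P must sum to one, and only the first row of T(1) is non-zero:

```python
import numpy as np

def transfer_matrices(p, q, r, eps):
    """p, q, r are the outgoing-probability triples (p0,p1,p2), (q0,q1,q2),
    (r0,r1,r2) of states 1, 2, 3, arranged as in Eq. (21)."""
    P = np.array([[p[0], q[1], r[1]],
                  [p[1], q[0], r[2]],
                  [p[2], q[2], r[0]]])
    T1 = np.zeros((3, 3))
    T1[0] = (1 - eps) * P[0]      # only the first row of T(1) survives
    T2 = P - T1
    return P, T1, T2

# arbitrary illustrative parameters (each triple sums to 1)
P, T1, T2 = transfer_matrices((0.2, 0.5, 0.3), (0.1, 0.6, 0.3), (0.4, 0.3, 0.3), eps=0.1)
```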
For ε = 0 we get from (22) the simplest example of an aggregated HMM, where several Markov
states are mapped into one observed state. This model plays a special role for the HMM theory,
since it was employed in the pioneering study of the non-identifiability problem [7].
5.2
Solution of the Model
For this model ?(z, n) can be calculated exactly, because T (1) has only one non-zero row. Using
the method outlined in the supplementary material (see also [1, 3]) we get
X?
?(z, n) = 1 ? z(t0 t?n0 + ?0 ??0n ) +
[? ??n t?nk?2 tk?2 ? t?nk?1 tk?1 ]z k
(23)
k=2
where ? and ?? are the largest eigenvalues of T (2) and T?(2), respectively
XL
tk ? h1|T (1)T (2)k |1i =
??k ?? ,
?=1
?? ? h1|T (1)|R? ihL? |1i,
h1| ? (1, 0, . . . , 0).
(24)
(25)
Here |R? i and hL? | are, respectively right and left eigenvalues of T (2), while ?1 , . . . , ?L (?L ? ? )
are the eigenvalues of T (2). Eqs. (24, 25) obviously extend to hatted quantities.
5
We get from (23, 19):

    ζ(1, n) = (1 − λ̂^n λ) (1 − Σ_{k=0}^{∞} t̂_k^n t_k),    (26)

    ∂_n ζ(1, 0) / ∂_z ζ(1, 0) = Σ_{k=0}^{∞} t_k ln[t̂_k] / Σ_{k=0}^{∞} (k + 1) t_k.    (27)
Note that for ε = 0, the t_k are return probabilities to the state 1 of the L-state Markov process. For
ε > 0 this interpretation does not hold, but t_k still has the meaning of a probability, as Σ_{k=0}^{∞} t_k = 1.
Turning to equations (19, 27) for the free energy, we note that as a function of trial values it depends
on the following 2L parameters:
(?
?1 , . . . , ??L , ??1 , . . . , ??L ).
(28)
As a function of the true values, the free energy depends on the same 2L parameters (28) [without
hats], though concrete dependencies are different. For the studied class of HMM there are at most
L(L ? 1) + 1 unknown parameters: L(L ? 1) transition probabilities of the unobserved Markov
chain, and one parameter coming from observations. We checked numerically that the Jacobian
of the transformation from the unknown parameters to the parameters (28) has rank 2L ? 1. Any
2L ? 1 parameters among (28) can be taken as independent ones.
For L > 2 the number of effective independent parameters that affect the free energy is smaller than
the number of parameters. So if the number of unknown parameters is larger than 2L ? 1, neither
of them can be found explicitly. One can only determine the values of the effective parameters.
6 The Simplest Non-Trivial Scenario

The following example allows the full analytical treatment, but is generic in the sense that it contains all the key features of the more general situation given above (21). Assume that L = 3 and

q_0 = \hat q_0 = r_0 = \hat r_0 = 0, \qquad \epsilon = \hat\epsilon = 0.   (29)
Note the following explicit expressions:

t_0 = p_0, \quad t_1 = p_1 q_1 + p_2 r_1, \quad t_2 = p_1 r_1 q_2 + p_2 q_1 r_2,   (30)

\lambda \equiv \lambda_3 = \sqrt{q_2 r_2}, \quad \lambda_2 = -\lambda, \quad \lambda_1 = 0,   (31)

\xi_3 - \xi_2 = t_1/\lambda, \quad \xi_3 + \xi_2 = t_2/\lambda^2.   (32)

Eqs. (30-32), with the obvious changes s_i \to \hat s_i for every symbol s_i, hold for \hat t_k, \hat\lambda_k and \hat\xi_k. Note the following consequence of \sum_{k=0}^{2} p_k = \sum_{k=0}^{2} q_k = \sum_{k=0}^{2} r_k = 1:

\lambda^2 (1 - t_0) = 1 - t_0 - t_1 - t_2.   (33)

6.1 Optimization of F_1
Eqs. (27) and (30-32) imply

\sum_{k=0}^{\infty} (k+1)\, t_k = \frac{\Theta}{1 - \lambda^2}, \qquad \Theta \equiv 1 - \lambda^2 + t_2 + (1 - t_0)(1 + \lambda^2) > 0,   (34)

-\frac{\Theta F_1}{N} = t_1 \ln \hat t_1 + t_2 \ln \hat t_2 + (1 - \lambda^2)\, t_0 \ln \hat t_0 + (1 - t_0)\, \lambda^2 \ln \hat\lambda^2.   (35)

The free energy F_1 depends on three independent parameters \hat t_0, \hat t_1, \hat t_2 [recall (33)]. Hence, minimizing F_1 we get \hat t_i = t_i (i = 0, 1, 2), but we do not obtain a definite solution for the unknown parameters: any four numbers \hat p_1, \hat p_2, \hat q_1, \hat r_1 satisfying the three equations t_0 = 1 - \hat p_1 - \hat p_2, t_1 = \hat p_1 \hat q_1 + \hat p_2 \hat r_1, t_2 = \hat p_1 \hat r_1 (1 - \hat q_1) + \hat p_2 \hat q_1 (1 - \hat r_1) minimize F_1.
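As a quick numerical sanity check (our Python sketch, not part of the original derivation), one can draw random parameters for the L = 3 model of (29), form t_0, t_1, t_2 from (30), and verify the constraint (33) with \lambda^2 = q_2 r_2 from (31):

```python
import random

def effective_params(p0, p1, q1, r1):
    """Effective parameters t0, t1, t2 of Eq. (30) for the L = 3 model of Eq. (29)."""
    p2, q2, r2 = 1 - p0 - p1, 1 - q1, 1 - r1
    t0 = p0
    t1 = p1 * q1 + p2 * r1
    t2 = p1 * r1 * q2 + p2 * q1 * r2
    return t0, t1, t2

def identity_33_residual(p0, p1, q1, r1):
    """Residual of Eq. (33): lam2*(1 - t0) - (1 - t0 - t1 - t2), with lam2 = q2*r2 from Eq. (31)."""
    t0, t1, t2 = effective_params(p0, p1, q1, r1)
    lam2 = (1 - q1) * (1 - r1)
    return lam2 * (1 - t0) - (1 - t0 - t1 - t2)

random.seed(0)
for _ in range(1000):
    p0 = random.uniform(0, 1)
    p1 = random.uniform(0, 1 - p0)
    q1, r1 = random.uniform(0, 1), random.uniform(0, 1)
    assert abs(identity_33_residual(p0, p1, q1, r1)) < 1e-12
print("Eq. (33) verified on 1000 random parameter draws")
```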
6.2 Optimization of F_\infty
In deriving (35) we used no particular feature of \{\hat p_k\}_{k=0}^{2}, \{\hat q_k\}_{k=1}^{2}, \{\hat r_k\}_{k=1}^{2}. Hence, as seen from (20), the free energy at \beta > 0 is recovered from (35) by equating its LHS to -\Theta\beta F_\beta/N and by taking in its RHS: \hat t_0 \to \hat p_0^{\,\beta}, \hat\lambda^2 \to \hat q_2^{\,\beta} \hat r_2^{\,\beta}, \hat t_1 \to \hat p_1^{\,\beta} \hat q_1^{\,\beta} + \hat p_2^{\,\beta} \hat r_1^{\,\beta}, \hat t_2 \to \hat p_1^{\,\beta} \hat r_1^{\,\beta} \hat q_2^{\,\beta} + \hat p_2^{\,\beta} \hat q_1^{\,\beta} \hat r_2^{\,\beta}. The zero-temperature free energy reads from (35):

-\frac{\Theta F_\infty}{N} = (1 - \lambda^2)\, t_0 \ln \hat t_0 + (1 - t_0)\, \lambda^2 \ln \hat\lambda^2 + t_1 \ln \max[\hat p_1 \hat q_1, \hat p_2 \hat r_1] + t_2 \ln \max[\hat p_2 \hat q_1 \hat r_2, \hat p_1 \hat r_1 \hat q_2].   (36)
We now minimize F_\infty over the trial parameters \hat p_1, \hat p_2, \hat q_1, \hat r_1. This is not what is done by the VT algorithm; see the discussion after (12). But at any rate both procedures aim to minimize the same target. The VT recursion for this model will be studied in Section 6.3; it leads to the same result. Minimizing F_\infty over the trial parameters produces four distinct solutions:

\{\hat\sigma_i\}_{i=1}^{4} = \{\hat p_1 = 0,\ \hat p_2 = 0,\ \hat q_1 = 0,\ \hat r_1 = 0\}.   (37)

For each of these four solutions, \hat t_i = t_i (i = 0, 1, 2) and F_1 = F_\infty. The easiest way to get these results is to minimize F_\infty under the conditions \hat t_i = t_i (for i = 0, 1, 2), obtain F_1 = F_\infty, and then conclude that, due to the inequality F_1 \le F_\infty, the conditional minimization led to the global minimization. The logic of (37) is that the unambiguous state tends to get detached from the ambiguous ones, since the probabilities nullified in (37) refer to transitions from or to the unambiguous state.
Note that although minimizing either F_\infty or F_1 produces the correct values of the independent variables t_0, t_1, t_2, in the present situation minimizing F_\infty is preferable, because it leads to the four-fold degenerate set of solutions (37) instead of a continuously degenerate set. For instance, if the solution with \hat p_1 = 0 is chosen, we get for the other parameters

\hat p_2 = 1 - t_0, \qquad \hat q_1 = \frac{t_2}{1 - t_0 - t_1}, \qquad \hat r_1 = \frac{t_1}{1 - t_0}.   (38)
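That this branch of solutions indeed reproduces the effective parameters can be checked by substituting (38) back into the hatted version of (30); a small Python sketch (ours, for illustration):

```python
def effective_params(p1, p2, q1, r1):
    """t0, t1, t2 of Eq. (30), with p0 = 1 - p1 - p2, q2 = 1 - q1, r2 = 1 - r1."""
    p0, q2, r2 = 1 - p1 - p2, 1 - q1, 1 - r1
    return (p0,
            p1 * q1 + p2 * r1,
            p1 * r1 * q2 + p2 * q1 * r2)

def solution_38(t0, t1, t2):
    """The p1_hat = 0 branch of the degenerate minimizers, Eq. (38)."""
    return 0.0, 1 - t0, t2 / (1 - t0 - t1), t1 / (1 - t0)

# True parameters (chosen arbitrarily) and their effective parameters t0, t1, t2.
t = effective_params(p1=0.3, p2=0.5, q1=0.4, r1=0.7)
# Plugging the hatted solution (38) back into (30) must reproduce t0, t1, t2.
t_hat = effective_params(*solution_38(*t))
assert all(abs(a - b) < 1e-12 for a, b in zip(t, t_hat))
print("solution (38) reproduces (t0, t1, t2)")
```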
Furthermore, a more elaborate analysis reveals that for each fixed set of correct parameters only one among the four solutions (37) provides the best value for the quality of the MAP reconstruction, i.e., for the overlap between the original and MAP-decoded sequences.

Finally, we note that minimizing F_\infty allows one to get the correct values t_0, t_1, t_2 of the independent variables \hat t_0, \hat t_1 and \hat t_2 only if their number is less than the number of unknown parameters. This is not a drawback, since once the number of unknown parameters is sufficiently small [less than four for the present case (29)], their exact values are obtained by minimizing F_1. Even then, the minimization of F_\infty can provide partially correct answers. Assume in (36) that the parameter \hat r_1 is known, \hat r_1 = r_1. Now F_\infty has three local minima, given by \hat p_1 = 0, \hat p_2 = 0 and \hat q_1 = 0; cf. (37). The minimum with \hat p_2 = 0 is the global one, and it allows one to obtain the exact values of the two effective parameters: \hat t_0 = 1 - \hat p_1 = t_0 and \hat t_1 = \hat p_1 \hat q_1 = t_1. These effective parameters are recovered because they do not depend on the known parameter \hat r_1 = r_1. The two other minima have greater values of F_\infty, and they allow one to recover only one effective parameter: \hat t_0 = 1 - \hat p_1 = t_0.

If in addition to \hat r_1 also \hat q_1 is known, the two local minima of F_\infty (\hat p_1 = 0 and \hat p_2 = 0) allow one to recover \hat t_0 = t_0 only. In contrast, if \hat p_1 = p_1 (or \hat p_2 = p_2) is known exactly, there are three local minima again (\hat p_2 = 0, \hat q_1 = 0, \hat r_1 = 0), but now none of the effective parameters is equal to its true value: \hat t_i \ne t_i (i = 0, 1, 2).
6.3 Viterbi EM
Recall the description of the VT algorithm given after (12). For calculating \tilde P(S_{k+1} = a, S_k = b) via (11, 12) we modify the transfer-matrix element in (15, 17) as \hat T_{ab}(k) \to \hat T_{ab}^{\,\beta}(k), which produces from (11, 12) for the MAP estimates of the transition probabilities

\tilde p_1 = \frac{t_1 \hat\omega_1 + t_2 \hat\omega_2}{t_1 + t_2 + t_0 (1 - \lambda^2)}, \qquad \tilde p_2 = 1 - t_0 - \tilde p_1,   (39)

\tilde q_1 = \frac{t_1 \hat\omega_1 + t_2 (1 - \hat\omega_2)}{t_1 \hat\omega_1 + t_2 + (1 - t_0)\lambda^2}, \qquad \tilde q_2 = 1 - \tilde q_1,   (40)

\tilde r_1 = \frac{t_1 (1 - \hat\omega_1) + t_2 \hat\omega_2}{t_2 + t_1 (1 - \hat\omega_1) + (1 - t_0)\lambda^2}, \qquad \tilde r_2 = 1 - \tilde r_1,   (41)

where

\hat\omega_1 \equiv \frac{\hat p_1^{\,\beta} \hat q_1^{\,\beta}}{\hat p_1^{\,\beta} \hat q_1^{\,\beta} + \hat p_2^{\,\beta} \hat r_1^{\,\beta}}, \qquad \hat\omega_2 \equiv \frac{\hat p_1^{\,\beta} \hat r_1^{\,\beta} \hat q_2^{\,\beta}}{\hat p_1^{\,\beta} \hat r_1^{\,\beta} \hat q_2^{\,\beta} + \hat p_2^{\,\beta} \hat q_1^{\,\beta} \hat r_2^{\,\beta}}.

The \beta \to \infty limit of \hat\omega_1 and \hat\omega_2 is obvious: each of them is equal to 0 or 1, depending on the ratios \hat p_1 \hat q_1 / (\hat p_2 \hat r_1) and \hat p_1 \hat r_1 \hat q_2 / (\hat p_2 \hat r_2 \hat q_1).
The EM approach amounts to starting with some trial values \hat p_1, \hat p_2, \hat q_1, \hat r_1 and using \tilde p_1, \tilde p_2, \tilde q_1, \tilde r_1 as the new trial parameters (and so on). We see from (39-41) that the algorithm converges in just one step: (39-41) are equal to the parameters given by one of the four solutions (37), where the selected solution depends on the initial trial parameters in (39-41), thereby recovering the correct effective parameters (30-32); e.g., cf. (38) with (39, 41) under \hat\omega_1 = \hat\omega_2 = 0. Hence VT converges in one step, in contrast to the Baum-Welch algorithm (which uses EM to locally minimize F_1) and which, for the present model, obviously does not converge in one step. There is possibly a deeper point in the one-step convergence that can explain why in practice VT converges faster than the Baum-Welch algorithm [9, 21]: recall that, e.g., the Newton method for local optimization works precisely in one step for quadratic functions, but more generally there is a class of functions on which it performs faster than (say) the steepest descent method. Further research should show whether our situation is similar: VT works in just one step for this exactly solvable HMM model, which belongs to a class of models on which VT generally performs faster than ML.
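The one-step convergence can also be seen numerically. In the \beta \to \infty limit the quantities \hat\omega_1, \hat\omega_2 harden to 0 or 1, and a single application of (39-41) lands on one of the four solutions (37). The Python sketch below (ours; it uses the update equations as reconstructed in (39-41)) takes \hat\omega_1 = \hat\omega_2 = 0, i.e., initial trial parameters for which \hat p_2 \hat r_1 dominates \hat p_1 \hat q_1 and \hat p_2 \hat r_2 \hat q_1 dominates \hat p_1 \hat r_1 \hat q_2, and recovers the \hat p_1 = 0 solution of (38):

```python
def vt_update(t0, t1, t2, lam2, w1, w2):
    """One VT/EM update, Eqs. (39)-(41), with hardened responsibilities w1, w2 in {0, 1}."""
    p1 = (t1 * w1 + t2 * w2) / (t1 + t2 + t0 * (1 - lam2))
    p2 = 1 - t0 - p1
    q1 = (t1 * w1 + t2 * (1 - w2)) / (t1 * w1 + t2 + (1 - t0) * lam2)
    r1 = (t1 * (1 - w1) + t2 * w2) / (t2 + t1 * (1 - w1) + (1 - t0) * lam2)
    return p1, p2, q1, r1

# Effective parameters of a concrete model (p0=0.2, p1=0.3, p2=0.5, q1=0.4, r1=0.7).
t0, t1, t2 = 0.2, 0.47, 0.186
lam2 = (1 - t0 - t1 - t2) / (1 - t0)      # lambda^2 from Eq. (33)

p1, p2, q1, r1 = vt_update(t0, t1, t2, lam2, w1=0.0, w2=0.0)
# A single step already reproduces the p1_hat = 0 solution of Eq. (38).
assert abs(p1) < 1e-12 and abs(p2 - (1 - t0)) < 1e-12
assert abs(q1 - t2 / (1 - t0 - t1)) < 1e-12
assert abs(r1 - t1 / (1 - t0)) < 1e-12
print("one VT step lands on the p1_hat = 0 solution of (38)")
```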
We conclude this section by noting that the solvable case (29) is generic: its key results extend to the general situation defined above (21). We checked this fact numerically for several values of L. In particular, the minimization of F_\infty nullifies as many trial parameters as necessary to express the remaining parameters via the independent effective parameters t_0, t_1, \ldots. Hence for L = 3 and \epsilon = 0, two such trial parameters are nullified; cf. the discussion around (28). If the true error probability \epsilon \ne 0, the trial value \hat\epsilon is among the nullified parameters. Again, there is a discrete degeneracy in the solutions provided by minimizing F_\infty.
7 Summary
We presented a method for analyzing two basic techniques for parameter estimation in HMMs, and illustrated it on a specific class of HMMs with one unambiguous symbol. The virtue of this class of models is that it is exactly solvable, hence the sought quantities can be obtained in a closed form via generating functions. This is a rare occasion, because characteristics of HMMs such as the likelihood or the entropy are notoriously difficult to calculate explicitly [1]. An important feature of the example considered here is that the set of unknown parameters is not completely identifiable in the maximum-likelihood sense [7, 14]. This corresponds to a zero eigenvalue of the Hessian of the ML (maximum-likelihood) objective function. In practice, one can have a weaker degeneracy of the objective function, resulting in very small values for the Hessian eigenvalues. This scenario occurs often in various models of physics and computational biology [11]. Hence, it is a drawback that the theory of HMM learning was developed assuming complete identifiability [5].

One of our main results is that, in contrast to the ML approach, which produces continuously degenerate solutions, VT results in a finitely degenerate solution that is sparse, i.e., some [non-identifiable] parameters are set to zero; furthermore, VT converges faster. Note that sparsity might be a desired feature in many practical applications. For instance, imposing sparsity on conventional EM-type learning has been shown to produce better results in part-of-speech tagging applications [25]. Whereas [25] had to impose sparsity via an additional penalty term in the objective function, in our case sparsity is a natural outcome of maximizing the likelihood of the best sequence. While our results were obtained on a class of exactly solvable models, it is plausible that they hold more generally.

The fact that VT provides simpler and more definite solutions, among all choices of the parameters compatible with the observed data, can be viewed as a type of Occam's razor for parameter learning. Note finally that the statistical-mechanics intuition behind these results is that the a posteriori likelihood is the (negative) zero-temperature free energy of a certain physical system. Minimizing this free energy makes physical sense: this is the premise of the second law of thermodynamics, which ensures relaxation towards a more equilibrium state. In that zero-temperature equilibrium state certain types of motion are frozen, which means nullifying the corresponding transition probabilities. In this way the second law relates to Occam's razor. Other connections of this type are discussed in [15].
Acknowledgments
This research was supported in part by the US ARO MURI grant No. W911NF0610094 and US
DTRA grant HDTRA1-10-1-0086.
References
[1] A. E. Allahverdyan, Entropy of Hidden Markov Processes via Cycle Expansion, J. Stat. Phys. 133, 535
(2008).
[2] A.E. Allahverdyan and A. Galstyan, On Maximum a Posteriori Estimation of Hidden Markov Processes,
Proc. of UAI, (2009).
[3] R. Artuso, E. Aurell and P. Cvitanovic, Recycling of strange sets, Nonlinearity 3, 325 (1990).
[4] P. Baldi and S. Brunak, Bioinformatics, MIT Press, Cambridge, USA (2001).
[5] L. E. Baum and T. Petrie, Statistical inference for probabilistic functions of finite state Markov chains,
Ann. Math. Stat. 37, 1554 (1966).
[6] J.M. Benedi, J.A. Sanchez, Estimation of stochastic context-free grammars and their use as language
models, Comp. Speech and Lang. 19, pp. 249-274 (2005).
[7] D. Blackwell and L. Koopmans, On the identifiability problem for functions of finite Markov chains, Ann.
Math. Statist. 28, 1011 (1957).
[8] S. B. Cohen and N. A. Smith, Viterbi training for PCFGs: Hardness results and competitiveness of
uniform initialization, Procs. of ACL (2010).
[9] Y. Ephraim and N. Merhav, Hidden Markov processes, IEEE Trans. Inf. Th., 48, 1518-1569, (2002).
[10] L.Y. Goldsheid and G.A. Margulis, Lyapunov indices of a product of random matrices, Russ. Math. Surveys 44, 11 (1989).
[11] R. N. Gutenkunst et al., Universally Sloppy Parameter Sensitivities in Systems Biology Models, PLoS
Computational Biology, 3, 1871 (2007).
[12] G. Han and B. Marcus, Analyticity of entropy rate of hidden Markov chains, IEEE Trans. Inf. Th., 52,
5251 (2006).
[13] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1985).
[14] H. Ito, S. Amari, and K. Kobayashi, Identifiability of Hidden Markov Information Sources, IEEE Trans.
Inf. Th., 38, 324 (1992).
[15] D. Janzing, On causally asymmetric versions of Occam's Razor and their relation to thermodynamics, arXiv:0708.3411 (2007).
[16] B. H. Juang and L. R. Rabiner, The segmental k-means algorithm for estimating parameters of hidden
Markov models, IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-38, no.9, pp.1639-1641,
(1990).
[17] B. G. Leroux, Maximum-Likelihood Estimation for Hidden Markov Models, Stochastic Processes and
Their Applications, 40, 127 (1992).
[18] N. Merhav and Y. Ephraim, Maximum likelihood hidden Markov modeling using a dominant sequence of
states, IEEE Transactions on Signal Processing, vol.39, no.9, pp.2111-2115 (1991).
[19] F. Qin, Restoration of single-channel currents using the segmental k-means method based on hidden Markov modeling, Biophys. J. 86, 1488-1501 (2004).
[20] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc.
IEEE, 77, 257 (1989).
[21] L. J. Rodriguez and I. Torres, Comparative Study of the Baum-Welch and Viterbi Training Algorithms,
Pattern Recognition and Image Analysis, Lecture Notes in Computer Science, 2652/2003, 847 (2003).
[22] D. Ruelle, Statistical Mechanics, Thermodynamic Formalism, (Reading, MA: Addison-Wesley, 1978).
[23] J. Sanchez, J. Benedi, F. Casacuberta, Comparison between the inside-outside algorithm and the Viterbi
algorithm for stochastic context-free grammars, in Adv. in Struct. and Synt. Pattern Recognition (1996).
[24] V. I. Spitkovsky, H. Alshawi, D. Jurafsky, and C. D. Manning, Viterbi Training Improves Unsupervised
Dependency Parsing, in Proc. of the 14th Conference on Computational Natural Language Learning
(2010).
[25] A. Vaswani, A. Pauls, and D. Chiang, Efficient optimization of an MDL-inspired objective function for
unsupervised part-of-speech tagging, in Proc. ACL (2010).
Jiquan Ngiam, Pang Wei Koh, Zhenghao Chen, Sonia Bhaskar, Andrew Y. Ng
Computer Science Department, Stanford University
{jngiam,pangwei,zhenghao,sbhaskar,ang}@cs.stanford.edu
Abstract
Unsupervised feature learning has been shown to be effective at learning representations that perform well on image, video and audio classification. However,
many existing feature learning algorithms are hard to use and require extensive
hyperparameter tuning. In this work, we present sparse filtering, a simple new
algorithm which is efficient and only has one hyperparameter, the number of features to learn. In contrast to most other feature learning methods, sparse filtering
does not explicitly attempt to construct a model of the data distribution. Instead, it
optimizes a simple cost function, the sparsity of \ell_2-normalized features, which
can easily be implemented in a few lines of MATLAB code. Sparse filtering scales
gracefully to handle high-dimensional inputs, and can also be used to learn meaningful features in additional layers with greedy layer-wise stacking. We evaluate
sparse filtering on natural images, object classification (STL-10), and phone classification (TIMIT), and show that our method works well on a range of different
modalities.
1 Introduction
Unsupervised feature learning has recently emerged as a viable alternative to manually designing
feature representations. In many audio [1, 2], image [3, 4], and video [5] tasks, learned features
have matched or outperformed features specifically designed for such tasks. However, many current
feature learning algorithms are hard to use because they require a good deal of hyperparameter tuning. For example, the sparse RBM [6, 7] has up to half a dozen hyperparameters and an intractable
objective function, making it hard to tune and monitor convergence.
In this work, we present sparse filtering, a new feature learning algorithm which is easy to implement
and essentially hyperparameter-free. Sparse filtering is efficient and scales gracefully to handle
large input dimensions. In contrast, it is typically computationally expensive to run straightforward
implementations of many other feature learning algorithms on large inputs.
Sparse filtering works by optimizing exclusively for sparsity in the feature distribution. A key idea
in our method is avoiding explicit modeling of the data distribution; this gives rise to a simple
formulation and permits efficient learning. As a result, our method can be implemented in a few
lines of MATLAB code1 and works well with an off-the-shelf function minimizer such as L-BFGS.
Moreover, the hyperparameter-free approach means that sparse filtering works well on a range of
data modalities without the need for specific tuning on each modality. This allows us to easily learn
feature representations that are well-suited for a variety of tasks, including object classification and
phone classification.
Table 1. Comparison of tunable hyperparameters in various feature learning algorithms.

Algorithm                       Tunable hyperparameters
Our Method (Sparse Filtering)   # features
ICA                             # features
Sparse Coding                   # features, sparsity penalty, mini-batch size
Sparse Autoencoders             # features, target activation, weight decay, sparsity penalty
Sparse RBMs                     # features, target activation, weight decay, sparsity penalty, learning rate, momentum

2 Unsupervised feature learning
Traditionally, feature learning methods have largely sought to learn models that provide good approximations of the true data distribution; these include denoising autoencoders [8], restricted Boltzmann machines (RBMs) [6, 7], (some versions of) independent component analysis (ICA) [9, 10],
and sparse coding [11], among others.
These feature learning approaches have been successfully used to learn good feature representations
for a wide variety of tasks [1, 2, 3, 4, 5]. However, they are also often challenging to implement,
requiring the tuning of various hyperparameters; see Table 1 for a comparison of tunable hyperparameters in several popular feature learning algorithms. Good settings for these hyperparameters
can vary widely from task to task, and can sometimes result in a drawn-out development process.
Though ICA has only one tunable hyperparameter, it scales poorly to large sets of features or large
inputs.2
In this work, our goal is to develop a simple and efficient feature learning algorithm that requires
minimal tuning. To this end, we only focus on a few key properties of our features ? population
sparsity, lifetime sparsity, and high dispersal ? without explicitly modeling the data distribution.
While learning a model for the data distribution is desirable, it can complicate learning algorithms:
for example, sparse RBMs need to approximate the log-partition function's gradient in order to optimize for the data likelihood, while sparse coding needs to run relatively expensive inference at each
iteration to find the coefficients of the active bases. The relative weightage of a data reconstruction
term versus a sparsity-inducing term is also often a hyperparameter that needs to be tuned.
3 Feature distributions
The feature learning methods discussed in the previous section can all be viewed as generating
particular feature distributions. For instance, sparse coding represents each example using a few
non-zero coefficients (features). A feature distribution oriented approach can provide insights into
designing new algorithms based on optimizing for desirable properties of the feature distribution.
For clarity, let us consider a feature distribution matrix over a finite dataset, where each row is a feature, each column is an example, and each entry f_j^{(i)} is the activity of feature j on example i. We assume that the features are generated through some deterministic function of the examples.
We consider the following as desirable properties of the feature distribution:
Sparse features per example (Population Sparsity). Each example should be represented by only
a few active (non-zero) features. Concretely, for each column (one example) in our feature matrix,
f (i) , we want a small number of active elements. For example, an image can be represented by a
description of the objects in it, and while there are many possible objects that can appear, only a few
are typically present at a single time. This notion is known as population sparsity [13, 14] and is
considered a principle adopted by the early visual cortex as an efficient means of coding.
1 We have included a complete MATLAB implementation of sparse filtering in the supplementary material.
2 ICA is unable to learn overcomplete feature representations unless one resorts to extremely expensive approximate orthogonalization algorithms [12]. Even when learning complete feature representations, it still requires an expensive orthogonalization step at every iteration.
Sparse features across examples (Lifetime Sparsity). Features should be discriminative and allow
us to distinguish examples; thus, each feature should only be active for a few examples. This means
that each row in the feature matrix should have few non-zero elements. This property is known as
lifetime sparsity [13, 14].
Uniform activity distribution (High Dispersal). For each row, the distribution should have similar statistics to every other row; no one row should have significantly more "activity" than the other
rows. Concretely, we consider the mean squared activations of each feature obtained by averaging
the squared values in the feature matrix across the columns (examples). This value should be roughly
the same for all features, implying that all features have similar contributions. While high dispersal
is not strictly necessary for good feature representations, we found that enforcing high dispersal
prevents degenerate situations in which the same features are always active [14]. For overcomplete representations, high dispersal translates to having fewer "inactive" features. As an example, principal component analysis (PCA) codes do not generally satisfy high dispersal, since the codes that correspond to the largest eigenvalues are almost always active.
These properties of feature distributions have been explored in the neuroscience literature [9, 13,
14, 15]. For instance, [14] showed that population sparsity and lifetime sparsity are not necessarily
correlated. We note that the characterization of neural codes have conventionally been expressed as
properties of the feature distribution, rather than as a way of modeling the data distribution.
Many feature learning algorithms include these objectives. For example, the sparse RBM [6] works
by constraining the expected activation of a feature (over its lifetime) to be close to a target value.
ICA [9, 10] has constraints (e.g., each basis has unit norm) that normalize each feature, and further
optimizes for the lifetime sparsity of the features it learns. Sparse autoencoders [16] also explicitly
optimize for lifetime sparsity.
On the other hand, clustering-based methods such as k-means [17] can be seen as enforcing an
extreme form of population sparsity where each cluster centroid corresponds to a feature and only
one feature is allowed to be active per example. "Triangle" activation functions, which essentially serve to ensure population sparsity, have also been shown to obtain good classification results [17].
Sparse coding [11] is also typically seen as enforcing population sparsity.
In this work, we use the feature distribution view to derive a simple feature learning algorithm that
solely optimizes for population sparsity while enforcing high dispersal. In our experiments, we
found that realizing these two properties was sufficient to allow us to learn overcomplete representations; we also argue later that these two properties are jointly sufficient to ensure lifetime sparsity.
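These properties can be measured directly on a feature matrix. The sketch below (our illustration, not from the original text) computes, for a feature matrix with features as rows and examples as columns, the average population sparsity, the average lifetime sparsity, and a simple dispersal statistic:

```python
import numpy as np

def feature_distribution_stats(F, eps=1e-6):
    """F: (features x examples) matrix. Returns (population_sparsity, lifetime_sparsity, spread).

    population_sparsity: mean fraction of inactive features per example (column);
    lifetime_sparsity:   mean fraction of examples on which each feature (row) is inactive;
    spread:              std/mean of the per-feature mean squared activation
                         (0 means perfectly uniform activity, i.e. high dispersal).
    """
    inactive = np.abs(F) < eps
    population_sparsity = inactive.mean(axis=0).mean()
    lifetime_sparsity = inactive.mean(axis=1).mean()
    msa = (F ** 2).mean(axis=1)          # mean squared activation of each feature
    spread = msa.std() / (msa.mean() + eps)
    return population_sparsity, lifetime_sparsity, spread

rng = np.random.default_rng(0)
dense = rng.normal(size=(64, 200))
sparse = dense * (rng.random((64, 200)) < 0.05)   # zero out roughly 95% of the entries

for name, F in [("dense", dense), ("sparse", sparse)]:
    ps, ls, sp = feature_distribution_stats(F)
    print(f"{name}: population sparsity {ps:.2f}, lifetime sparsity {ls:.2f}, activity spread {sp:.2f}")
```

On the sparse matrix both sparsity measures are high, while the dense Gaussian matrix scores near zero on both; the spread statistic is small for both, since neither construction privileges particular features.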
4 Sparse filtering
In this section, we will show how the sparse filtering objective captures the aforementioned principles. Consider learning a function that computes linear features for every example. Concretely, let f_j^{(i)} represent the j-th feature value (rows) for the i-th example (columns), where f_j^{(i)} = w_j^T x^{(i)}. Our method simply involves first normalizing the feature distribution matrix by rows, then by columns, and finally summing up the absolute value of all entries.
Specifically, we first normalize each feature to be equally active by dividing each feature by its \ell_2-norm across all examples: \tilde f_j = f_j / \|f_j\|_2. We then normalize these features per example, so that they lie on the unit \ell_2-ball, by computing \hat f^{(i)} = \tilde f^{(i)} / \|\tilde f^{(i)}\|_2. The normalized features are optimized for sparseness using the \ell_1 penalty. For a dataset of M examples, this gives us the sparse filtering objective (Eqn. 1):

\text{minimize} \quad \sum_{i=1}^{M} \big\| \hat f^{(i)} \big\|_1 = \sum_{i=1}^{M} \Big\| \frac{\tilde f^{(i)}}{\|\tilde f^{(i)}\|_2} \Big\|_1.   (1)
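The objective is indeed only a few lines. Below is a direct numpy transcription of Eqn. (1) (our sketch, not the authors' released MATLAB code; the small eps terms guard against division by zero, and any smoothing needed for gradient-based optimizers is omitted):

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Eqn. (1). W: (n_features, n_inputs) weights; X: (n_inputs, n_examples) data.

    Rows of the feature matrix F are features, columns are examples.
    """
    F = W @ X                                                    # f_j^(i) = w_j^T x^(i)
    Ft = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)    # row-normalize: f_tilde
    Fh = Ft / (np.linalg.norm(Ft, axis=0, keepdims=True) + eps)  # column-normalize: f_hat
    return np.abs(Fh).sum()                                      # sum of l1 norms over examples

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16))     # 32 features learned on 16-dimensional inputs
X = rng.normal(size=(16, 100))    # 100 examples
print("objective value:", sparse_filtering_objective(W, X))
```

In the full algorithm this scalar would be minimized over W with an off-the-shelf optimizer such as L-BFGS; the paper's own implementation is in MATLAB and may differ in details (e.g., a smooth absolute-value nonlinearity applied to the linear features).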
4.1 Optimizing for population sparsity
The term \|\hat f^{(i)}\|_1 = \big\| \tilde f^{(i)} / \|\tilde f^{(i)}\|_2 \big\|_1 measures the population sparsity of the features on the i-th example. Since the normalized features \hat f^{(i)} are constrained to lie on the unit \ell_2-ball, this objective is minimized when the features are sparse (Fig. 1-Left), which corresponds to being close to the axes.
Conversely, an example which has similar values for every feature would incur a high penalty.
Figure 1: Left: Sparse filtering showing two features ($f_1$, $f_2$) and two examples (red and green). Each example is first projected onto the $\ell_2$-ball and then optimized for sparseness. The $\ell_2$-ball is shown together with level sets of the $\ell_1$-norm. Notice that the sparseness of the features (in the $\ell_1$ sense) is maximized when the examples are on the axes. Right: Competition between features due to normalization. We show one example where only $f_1$ is increased. Notice that even though only $f_1$ is increased, the normalized value of the second feature, $\hat{f}_2$, decreases.
One property of normalizing features is that it implicitly introduces competition between features. Notice that if only one component of $f^{(i)}$ is increased, all the other components $\hat{f}_j^{(i)}$ will decrease because of the normalization (Fig. 1-Right). Similarly, if only one component of $f^{(i)}$ is decreased, all other components will increase. Since we are minimizing $\|\hat{f}^{(i)}\|_1$, the objective encourages the normalized features, $\hat{f}^{(i)}$, to be sparse and mostly close to zero. Putting this together with the normalization, this means that some features in $f^{(i)}$ have to be large while most of them are small (close to zero). Therefore, the objective optimizes for population sparsity.
The formulation above is closely related to the Treves-Rolls [14, 18] measure of population/lifetime sparsity: $s^{(i)} = \left[\sum_j \hat{f}_j^{(i)} / F\right]^2 \Big/ \left[\sum_j \big(\hat{f}_j^{(i)}\big)^2 / F\right]$, where $F$ is the total number of features. This measure is commonly used to characterize the sparsity of neuron activations in the brain. In particular, our proposed formulation can be viewed as a re-scaling of the square-root of this measure.
4.2 Optimizing for high dispersal

Recall that for high dispersal we want every feature to be equally active. Specifically, we want the mean squared activation of each feature to be roughly equal. In our formulation of sparse filtering, we first normalize each feature so that they are equally active by dividing each feature by its norm across the examples: $\tilde{f}_j = f_j / \|f_j\|_2$. This has the same effect as constraining each feature to have the same expected squared value, $\mathbb{E}_{x^{(i)} \sim \mathcal{D}}\big[(f_j^{(i)})^2\big] = 1$, thus enforcing high dispersal.
4.3 Optimizing for lifetime sparsity

We found that optimizing for population sparsity and enforcing high dispersal led to lifetime sparsity in our features. To understand how lifetime sparsity is achieved, first notice that a feature distribution which is population sparse must have many non-active (zero) entries in the feature distribution matrix. Since these features are highly dispersed, these zero entries (and also the non-zero entries) are approximately evenly distributed among all the features. Therefore, every feature must have a significant number of zero entries and be lifetime sparse. This implies that optimizing for population sparsity and high dispersal is sufficient to define a good feature distribution.
4.4 Deep sparse filtering

Since the sparse filtering objective is agnostic about the method which generates the feature matrix, one is relatively free to choose the feedforward network that computes the features. It is thus possible to use more complex non-linear functions (e.g., $f_j^{(i)} = \log\big(1 + (w_j^T x^{(i)})^2\big)$), or even multi-layered networks, when computing the features. In this way, sparse filtering presents itself as a natural framework for training deep networks.
Training a deep network with sparse filtering can be achieved using the canonical greedy layer-wise approach [7, 19]. In particular, after training a single layer of features with sparse filtering, one can compute the normalized features $\hat{f}^{(i)}$ and then use these as input to sparse filtering for learning another layer of features. In practice, we find that greedy layer-wise training with sparse filtering learns meaningful representations on the next layer (Sec. 5.2).
5 Experiments
(i)
In our experiments, we adopted the soft-absolute function fj
=
q
+ (wjT x(i) )2 ? |wjT x(i) |
as our activation function, setting = 10?8 , and used an off-the-shelf L-BFGS [20] package to
optimize the sparse filtering objective until convergence.
5.1 Timing and scaling up
Figure 2: Timing comparisons between sparse coding, ICA, sparse autoencoders and sparse filtering
over different input sizes.
In this section, we examine the efficiency of the sparse filtering algorithm by comparing it against
ICA, sparse coding, and sparse autoencoders. We compared the convergence of each algorithm by
measuring the relative change in function value over each iteration of the algorithm, stopping when
this change dropped below a preset threshold. We performed experiments using 10,000 color image
patches with varying image sizes to evaluate the efficiency and scalability of the methods. For each
image size, we learned a complete set of features (i.e., equal to the number of input dimensions).
We implemented sparse autoencoders as described in Coates et al. [17]. For sparse coding, we used
code from [2], as it is fairly optimized and easy to modify.
For smaller image dimensions of sizes 8 × 8 (192-dimensional inputs, since our images have 3 color channels) and 16 × 16 (768-dimensional inputs), we found that the algorithms generally performed similarly in terms of efficiency. However, with 32 × 32 image patches (3072-dimensional inputs), sparse coding, sparse autoencoders and ICA were significantly slower to converge than sparse filtering (Fig. 2). For ICA, each iteration of the algorithm (FastICA [12]) requires orthogonalizing the bases learned; since the cost of orthogonalization is cubic in the number of features, the algorithm can be very slow when the number of features is large. For sparse coding, as the number of features increased, it took significantly longer to solve the $\ell_1$-regularized least squares problem for finding the coefficients.
We obtained an overall speedup of at least 4x over sparse coding and ICA when learning features from 32 × 32 image patches. In contrast to ICA, optimizing the sparse filtering objective does not require the expensive cubic-time whitening step. For the larger input dimensions, sparse coding and sparse autoencoders did not converge in a reasonable time (< 3 hours).
5.2 Natural images
In this section, we applied sparse filtering to learn features from 200,000 randomly sampled patches (16 × 16) from natural images [9]. The only preprocessing done before feature learning was to subtract the mean of each image patch from itself (i.e., removing the DC component).
The first layer of features learned by sparse filtering corresponded to Gabor-like edge detectors, similar to those learned by standard sparse feature learning methods [6, 9, 10, 11, 16]. More interestingly, when we learned a second layer of features using greedy layer-wise stacking on the features produced by the first layer, it discovered meaningful features that pool the first layer features (Fig. 3). We highlight that the second layer of features were learned using the same algorithm without any tuning or preprocessing of the data. While recent work by [21, 22] has also been able to learn meaningful second layer features, our method is simpler to implement, fast to run, and does not require time-consuming tuning of hyperparameters.

Figure 3: Learned pooling units in a second layer using sparse filtering. We show the most strongly connected first layer units for each second layer unit; each column corresponds to a second layer unit.

5.3 STL-10 object classification
Table 2. Classification accuracy on STL-10.

    Method                      Accuracy
    Raw Pixels [17]             31.8% ± 0.63%
    ICA (Complete)              48.0% ± 1.47%
    K-means (Triangle) [17]     51.5% ± 1.73%
    Random Weight Baseline      50.2% ± 1.08%
    Our Method                  53.5% ± 0.53%

Figure 4: A subset of the learned filters from 10 × 10 patches extracted from the STL dataset.
We also evaluated the performance of our model on an object classification task. We used the STL-10 dataset [17], which consists of an unsupervised training set of 100,000 images, a supervised training set of 10 training folds, each with 500 training instances, and a test set of 8,000 test instances. Each instance is a 96 × 96 RGB image from 1 of 10 object categories.

To obtain features from the large image, we followed the protocol of [17]: features were extracted densely from all locations in each image and later pooled into quadrants. Supervised training was carried out by training a linear SVM on this representation of the training set, where C and the receptive field size were chosen by hold-out cross-validation. We obtain a test set accuracy of 53.5% ± 0.53% with features learned from 10 × 10 patches. For a fair comparison, the number of features learnt was also set to be consistent with the number of features used by [17].

In accordance with the recommended STL-10 testing protocol [17], we performed supervised training on each of the 10 supervised training folds and reported the mean accuracy on the full test set along with the standard deviation across the 10 training folds (Table 2).
In order to show the effects of feature learning, we include a comparison to a random weight baseline
of our method. For the baseline, we keep the basic architecture (e.g., the divisive normalization),
but fill the entries of the weight matrix W by sampling random values. Random weight baselines
have been shown to perform remarkably well on a variety of tasks [23], and provide a means of
distinguishing the effect of our divisive normalization scheme versus the effect of feature learning.
5.4 Phone classification (TIMIT)
Table 3. Test accuracy for phone classification using features learned from MFCCs.

    Method                                 Accuracy
    ICA                + SVM (Linear)      57.3%
    MFCC               + SVM (Linear)      67.2%
    Sparse Coding      + SVM (Linear)      76.8%
    Our Method         + SVM (Linear)      75.7%
    MFCC               + SVM (RBF)         80.4%
    MFCC+ICA           + SVM (RBF)         78.3%
    MFCC+Sparse Coding + SVM (RBF)         80.1%
    MFCC+Our Method    + SVM (RBF)         80.5%
    HMM [24]                               78.6%
    Large Margin GMM (LMGMM) [25]          78.9%
    CRF [26]                               79.2%
    MFCC+CDBN [2]                          80.3%
    Hierarchical LMGMM [27]                81.3%
To evaluate the model's ability to work with a range of data modalities, we further evaluated the
ability of our models to do 39-way phone classification on the TIMIT dataset [28]. As with [2,
29], our dataset comprised 132,833 training phones and 6,831 testing phones. Following standard
approaches [2, 24, 25, 26, 27, 29], we first extracted 13 mel-frequency cepstral coefficients (MFCCs)
and augmented them with the first and second order derivatives. Using sparse filtering, we learned
256 features from contiguous groups of 11 MFCC frames. For comparison, we also learned sets
of 256 features in a similar way using sparse coding [11, 30] and ICA [12]. A fixed-length feature
vector was formed from each example using the protocol described in [29].3
To evaluate the relative performances of the different feature sets (MFCC, ICA, sparse coding and
sparse filtering), we used a linear SVM, choosing the regularization coefficient C by cross-validation
on the development set. We found that the features learned using sparse filtering outperformed
MFCC features alone and ICA features; they were also competitive with sparse coding and faster to
compute. Using an RBF kernel [31] gave performances competitive with state-of-the-art methods
when MFCCs were combined with learned sparse filtering features (Table 3). In contrast, concatenating ICA and sparse coding features with MFCCs resulted in decreased performance when
compared to MFCCs alone.
While methods such as the HMM and LMGMM were included in Table 3 to provide context, we
note that these methods use pipelines that are more complex than straightforwardly applying a SVM.
Indeed, these pipelines are built on top of feature representations that can be derived from a variety
of sources, including sparse filtering. We thus see these methods as being complementary to sparse
filtering.
³ We found that splitting the features into positive and negative components improved performance slightly.

6 Discussion

6.1 Connections to divisive normalization

Our formulation of population sparsity in sparse filtering is closely related to divisive normalization [32], a process in early visual processing in which a neuron's response is divided by a (weighted) sum of the responses of neighboring neurons. Divisive normalization has previously been found [33,
34] to be useful as part of a multi-stage object classification pipeline. However, it was introduced as
a processing stage [33], rather than a part of unsupervised feature learning (pretraining). Conversely,
sparse filtering uses divisive normalization as an integral component of the feature learning process
to introduce competition between features, resulting in population sparse representations.
6.2 Connections to ICA and sparse coding
The sparse filtering objective can be viewed as a normalized version of the ICA objective. In ICA [12], the objective is to minimize the response of linear filters (e.g., $\|Wx\|_1$), subject to the constraint that the filters are orthogonal to each other. The orthogonality constraint results in a set of diverse filters. In sparse filtering, we replace the objective with a normalized sparsity penalty, where the response of the filters is divided by the norm of all the filters ($\|Wx\|_1 / \|Wx\|_2$). This introduces competition between the filters and thus removes the need for orthogonalization.
Similarly, one can apply the normalization idea to the sparse coding framework. In particular, sparse filtering resembles the $\ell_1/\ell_2$ sparsity penalty that has been used in non-negative matrix factorization [35]. Thus, instead of the usual $\ell_1$ penalty that is used in conjunction with sparse coding (i.e., $\|s\|_1$), one can instead use a normalized penalty (i.e., $\|s\|_1 / \|s\|_2$). This normalized penalty is scale invariant and can be more robust to variations in the data.
References

[1] G. E. Dahl, M. Ranzato, A. Mohamed, and G. E. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In NIPS, 2010.
[2] H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS, 2009.
[3] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
[4] M. A. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In CVPR, 2007.
[5] Q. V. Le, W. Y. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis. In CVPR, 2011.
[6] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.
[7] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[8] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.
[9] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings: Biological Sciences, 265(1394):359–366, 1998.
[10] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Res., 37(23):3327–3338, December 1997.
[11] B. Olshausen and D. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Nature, 1997.
[12] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision (Computational Imaging and Vision). Springer, 2nd printing edition, 2009.
[13] D. J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559–601, July 1994.
[14] B. Willmore and D. J. Tolhurst. Characterizing the sparseness of neural codes. Network, 12(3):255–270, January 2001.
[15] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4:819–825, 2001.
[16] M. A. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In NIPS, 2006.
[17] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011.
[18] A. Treves and E. Rolls. What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Systems, 2:371–397, 1991.
[19] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2006.
[20] M. Schmidt. minFunc. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html, 2005.
[21] M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
[22] U. Köster and A. Hyvärinen. A two-layer model of natural stimuli estimated with score matching. Neural Computation, 22(9):2308–2333, 2010.
[23] A. Saxe, M. Bhand, Z. Chen, P. W. Koh, B. Suresh, and A. Y. Ng. On random weights and unsupervised feature learning. In ICML, 2011.
[24] S. Petrov, A. Pauls, and D. Klein. Learning structured models for phone recognition. In Proc. of EMNLP-CoNLL, 2007.
[25] F. Sha and L. K. Saul. Large margin Gaussian mixture modeling for phonetic classification and recognition. In ICASSP. IEEE, 2006.
[26] D. Yu, L. Deng, and A. Acero. Hidden conditional random field with distribution constraints for phone classification. In Interspeech, 2009.
[27] H. A. Chang and J. R. Glass. Hierarchical large-margin Gaussian mixture models for phonetic classification. In Automatic Speech Recognition & Understanding (ASRU), IEEE Workshop on, pages 272–277. IEEE, 2007.
[28] W. E. Fisher, G. R. Doddington, and K. M. Goudie-Marshall. The DARPA speech recognition research database: specifications and status. 1986.
[29] P. Clarkson and P. J. Moreno. On the use of support vector machines for phonetic classification. Acoustics, Speech, and Signal Processing, IEEE International Conference on, 2:585–588, 1999.
[30] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In ICML, 2009.
[31] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[32] M. Wainwright, O. Schwartz, and E. Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearity and adaptation in cortical neurons, 2001.
[33] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[34] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Comput Biol, 4(1):e27+, January 2008.
[35] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. JMLR, 5:1457–1469, 2004.
Sparse Estimation with Structured Dictionaries
David P. Wipf*
Visual Computing Group
Microsoft Research Asia
[email protected]
Abstract
In the vast majority of recent work on sparse estimation algorithms, performance
has been evaluated using ideal or quasi-ideal dictionaries (e.g., random Gaussian
or Fourier) characterized by unit $\ell_2$ norm, incoherent columns or features. But in
reality, these types of dictionaries represent only a subset of the dictionaries that
are actually used in practice (largely restricted to idealized compressive sensing
applications). In contrast, herein sparse estimation is considered in the context
of structured dictionaries possibly exhibiting high coherence between arbitrary
groups of columns and/or rows. Sparse penalized regression models are analyzed
with the purpose of finding, to the extent possible, regimes of dictionary-invariant performance. In particular, a Type II Bayesian estimator with a dictionary-dependent sparsity penalty is shown to have a number of desirable invariance
properties leading to provable advantages over more conventional penalties such
as the $\ell_1$ norm, especially in areas where existing theoretical recovery guarantees
no longer hold. This can translate into improved performance in applications such
as model selection with correlated features, source localization, and compressive
sensing with constrained measurement directions.
1 Introduction
We begin with the generative model
$$Y = \Phi X_0 + E, \qquad (1)$$

where $\Phi \in \mathbb{R}^{n \times m}$ is a dictionary of basis vectors or features, $X_0 \in \mathbb{R}^{m \times t}$ is a matrix of unknown coefficients we would like to estimate, $Y \in \mathbb{R}^{n \times t}$ is an observed signal matrix, and $E$ is a noise matrix with iid elements distributed as $\mathcal{N}(0, \lambda)$. The objective is to estimate the unknown generative $X_0$ under the assumption that it is row-sparse, meaning that many rows of $X_0$ have zero norm. The problem is compounded considerably by the additional assumption that $m > n$, meaning the dictionary $\Phi$ is overcomplete. When $t = 1$, this then reduces to the canonical sparse estimation of a coefficient vector with mostly zero-valued entries or minimal $\ell_0$ norm [7]. In contrast, estimation of $X_0$ with $t > 1$ represents the more general simultaneous sparse approximation problem [6, 15] relevant to numerous applications such as compressive sensing and multi-task learning [9, 16], manifold learning [13], array processing [10], and functional brain imaging [1]. We will consider both scenarios herein but will primarily adopt the more general notation of the $t > 1$ case.
One possibility for estimating X0 involves solving
$$\min_X \; \|Y - \Phi X\|_F^2 + \lambda\, d(X), \quad \lambda > 0, \qquad d(X) \triangleq \sum_{i=1}^{m} I\big[\|x_{i\cdot}\| > 0\big], \qquad (2)$$

where the indicator function $I[\|x\| > 0]$ equals one if $\|x\| > 0$ and equals zero otherwise ($\|x\|$ is an arbitrary vector norm). $d(X)$ penalizes the number of rows in $X$ that are not equal to zero;
* Draft version for NIPS 2011 pre-proceedings.
for nonzero rows there is no additional penalty for large magnitudes. Moreover, it reduces to the $\ell_0$ norm when $t = 1$, i.e., $d(x) = \|x\|_0$, a count of the nonzero elements in the vector $x$. Note that to facilitate later analysis, we define $x_{\cdot i}$ as the $i$-th column of matrix $X$, while $x_{i\cdot}$ represents the $i$-th row. For theoretical inquiries or low-noise environments, it is often convenient to consider the limit as $\lambda \to 0$, in which case (2) reduces to

$$\min_X \; d(X), \quad \text{s.t. } \Phi X_0 = \Phi X. \qquad (3)$$
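The row-sparsity penalty $d(X)$ is straightforward to compute directly. The helper below is hypothetical, for illustration only:

```python
import numpy as np

def d(X, tol=0.0):
    """Number of rows of X with nonzero norm; equals the l0 norm when t = 1.

    Accepts either an (m, t) coefficient matrix or a length-m vector.
    """
    X = np.asarray(X, dtype=float).reshape(len(X), -1)
    return int(np.count_nonzero(np.linalg.norm(X, axis=1) > tol))
```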
Unfortunately, solving either (2) or (3) involves a combinatorial search and is therefore not tractable in practice. Instead, a family of more convenient sparse penalized regression cost functions is reviewed in Section 2. In particular, we discuss conventional Type I sparsity penalties, such as the $\ell_1$ norm and the $\ell_{1,2}$ mixed norm, and a Type II empirical Bayesian alternative characterized by dictionary dependency. When the dictionary $\Phi$ is incoherent, meaning the columns are roughly orthogonal to one another, then certain Type I selections are well-known to produce good approximations of $X_0$ via efficient implementations. However, as discussed in Section 3, more structured dictionary types can pose difficulties. In Section 4 we analyze the underlying cost functions of Type I and Type II, and demonstrate that the latter maintains several properties that suggest it will be robust to highly structured dictionaries. Brief empirical comparisons are presented in Section 5.
2 Estimation via Sparse Penalized Regression
Directly solving either (2) or (3) is intractable, so a variety of approximate methods have been proposed. Many of these can be viewed simply as regression with a sparsity penalty convenient for optimization purposes. The general regression problem we consider here involves solving

$$\min_X \; \|Y - \Phi X\|_F^2 + \lambda\, g(X), \qquad (4)$$
where g is some penalty function of the row norms. Type I methods use a separable penalty of the
form
X
g (I) (X) =
h (kxi? k2 ) ,
(5)
i
where h is a non-decreasing, typically concave function.1 Common examples include h(z) =
z p , p ? (0, 1] [11] and h(z) = log(z + ?), ? ? 0 [4]. The parameters p and ? are often heuristically
selected on an application-specific basis. In contrast, Type II methods, with origins as empirical
Bayesian estimators, implicitly utilize a more complicated penalty function that can only be expressed in a variational form [18]. Herein we will consider the selection
    g^(II)(X) ≜ min_{Γ⪰0} Tr[X^T Γ^{-1} X] + t log|λI + ΦΓΦ^T|,  λ ≥ 0,    (6)
where Γ is a diagonal matrix of non-negative variational parameters [14, 18]. While less transparent than Type I, it has been shown that (6) is a concave non-decreasing function of each row norm of X, hence it promotes row sparsity as well. Moreover, the dictionary-dependency of this penalty appears to be the source of some desirable invariance properties as discussed in Section 4. Analogous to (3), for analytical purposes all of these methods can be reduced as λ → 0 to solving
    min_X g(X)  s.t.  ΦX0 = ΦX.    (7)
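Both penalty families above are straightforward to evaluate on a candidate solution. The sketch below (our own illustration; the specific h choices and the ε = 0.01 value are arbitrary demo settings, not prescribed by the text) shows that the ℓ1,2 selection h(z) = z and the concave log selection both score a row-sparse matrix lower than a dense one of identical Frobenius energy, with the concave choice doing so far more aggressively:

```python
import numpy as np

def type1_penalty(X, h):
    # Separable Type I penalty (5): apply h to each row's l2 norm and sum.
    return sum(h(np.linalg.norm(row)) for row in X)

# Two h choices mentioned in the text; eps = 0.01 is an arbitrary demo value.
h_l12 = lambda z: z                    # h(z) = z  ->  the l1,2 mixed norm
h_log = lambda z: np.log(z + 0.01)     # h(z) = log(z + eps)

# A row-sparse X and a dense X with the same Frobenius energy (both 5.0):
X_sparse = np.array([[3.0, 4.0], [0.0, 0.0], [0.0, 0.0]])
X_dense = np.full((3, 2), 5.0 / np.sqrt(6.0))

print(type1_penalty(X_sparse, h_l12), type1_penalty(X_dense, h_l12))
print(type1_penalty(X_sparse, h_log), type1_penalty(X_dense, h_log))
```

The widening gap under h_log is exactly the stronger sparsity promotion that concave h is meant to provide.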
3  Structured Dictionaries
It is now well-established that when the dictionary Φ is constructed with appropriate randomness, e.g., iid Gaussian entries, then for certain choices of g, in particular the convex selection g(X) = Σ_i ‖x_{i·}‖_2 (which represents a generalization of the ℓ1 vector norm to row-sparse matrices), we can expect to recover X0 exactly in the noiseless case or to close approximation otherwise. This assumes that d(X0) is sufficiently small relative to some function of the dictionary coherence or a related measure. However, with highly structured dictionaries these types of performance guarantees completely break down.
¹ Other row norms, such as the ℓ∞, have been considered as well but are less prevalent.
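One concrete way to see this breakdown is through mutual coherence (a standard diagnostic, used here as our own illustration rather than a quantity the paper analyzes): right-multiplying an iid Gaussian dictionary by a column-mixing matrix S sharply raises the coherence that Type I guarantees depend on. A minimal sketch:

```python
import numpy as np

def mutual_coherence(Phi):
    # Max absolute inner product between distinct l2-normalized columns.
    P = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    G = np.abs(P.T @ P)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
Phi_iid = rng.standard_normal((50, 100))   # incoherent with high probability

# Right-multiplying by S mixes adjacent column pairs, inducing coherence:
S = np.eye(100)
S[np.arange(0, 100, 2), np.arange(1, 100, 2)] = 2.0
Phi_struct = Phi_iid @ S

print(mutual_coherence(Phi_iid), mutual_coherence(Phi_struct))
```

Even this mild S pushes pairs of columns toward near-collinearity, which is the regime where the recovery conditions cited above fail.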
2
At the most basic level, one attempt to standardize structured dictionaries is by utilizing some form of column normalization as a pre-processing step. Most commonly, each column is scaled such that it has unit ℓ2 norm. This helps ensure that no one column is implicitly favored over another during the estimation process. However, suppose our observation matrix is generated via Y = ΦX0, where Φ = Φ̃D + αab^T, Φ̃ is some well-behaved, incoherent dictionary, D is a diagonal matrix, and αab^T represents a rank-one adjustment. If we apply column normalization to remove the effect of D, the resulting scale factors will be dominated by the rank-one term when α is large. But if we do not column normalize, then D can completely bias the estimation results.
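A quick numeric illustration of this point (the dimensions and α = 25 are arbitrary demo values of our choosing): when α is large, the ℓ2 column norms, and hence the normalization factors, track the rank-one term αab^T rather than the diagonal scalings D that normalization was meant to remove.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, alpha = 30, 60, 25.0
Phi_tilde = rng.standard_normal((n, m))          # well-behaved base dictionary
d = rng.uniform(0.1, 2.0, m)                     # diagonal of D
a = rng.standard_normal(n)
b = rng.standard_normal(m)
Phi = Phi_tilde * d + alpha * np.outer(a, b)     # Phi = Phi_tilde D + alpha a b^T

col_norms = np.linalg.norm(Phi, axis=0)
rank_one_part = alpha * np.linalg.norm(a) * np.abs(b)
corr = np.corrcoef(col_norms, rank_one_part)[0, 1]
print(corr)  # high correlation: the scale factors mostly undo the rank-one term
```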
In general, if our given dictionary is effectively WΦ̃D, with W an arbitrary invertible matrix that scales and correlates rows, and D diagonal, the combined effect can be severely disruptive. As an example from neuroimaging, the MEG/EEG source localization problem involves estimating sparse neural current sources within the brain using sensors placed near the surface of the scalp. The effective dictionary or forward model is characterized by highly correlated rows (because the sensors are physically constrained to be near one another) and columns with drastically different scales (since deep brain sources produce much weaker signals at the surface than superficial ones). More problematic is the situation where Φ = Φ̃S, since an unrestricted matrix S can introduce arbitrary coherence structure between individual or groups of columns in Φ, meaning the structure of Φ is now arbitrary regardless of how well-behaved the original Φ̃.
4  Analysis
We will now analyze the properties of both Type I and Type II cost functions when coherent or highly structured dictionaries are present. Ideally, we would like to arrive at algorithms that are invariant, to the extent possible, to dictionary transformations that would otherwise disrupt the estimation efficacy. For simplicity, we will primarily consider the noiseless case, although we surmise that much of the underlying intuition carries over into the noisy domain. This strategy mirrors the progression in the literature of previous sparse estimation theory related to the ℓ1 norm [3, 7, 8]. All proofs have been deferred to the Appendix, with some details omitted for brevity.
4.1  Invariance to W and D
We will first consider the case where the observation matrix is produced via Y = ΦX0 = WΦ̃DX0. Later, in Sections 4.2 and 4.3, we will address the more challenging situation where Φ = Φ̃S.

Lemma 1. Let W denote an arbitrary full-rank n × n matrix and D an arbitrary full-rank m × m diagonal matrix. Then with λ → 0, the Type II optimization problem

    min_X g^(II)(X)  s.t.  WΦ̃DX0 = WΦ̃DX    (8)

is invariant to W and D in the sense that X* is a global (or local) minimum of (8) if and only if DX* is a global (or local) minimum when we optimize g^(II)(X) subject to the constraint Φ̃(DX0) = Φ̃X (with the penalty now defined with respect to Φ̃).
Therefore, while switching between Φ = WΦ̃D and Φ = Φ̃ may influence the initialization and possibly the update rules of a particular Type II algorithm, it does not fundamentally alter the underlying cost function. In contrast, Type I methods do not satisfy this invariance. Invariance is preserved with a W factor in isolation; likewise, inclusion of a D factor alone, paired with column normalization, leads to invariance. However, inclusion of both W and D together can be highly disruptive.
Note that for improving Type I performance, it is not sufficient to apply some row-decorrelating and normalizing Ŵ^{-1} to Φ and then column normalize with some D̂^{-1}. This is because the application of D̂^{-1} will disrupt the effects of Ŵ^{-1}. But one possibility to compensate for dictionary structure is to jointly learn a Ŵ^{-1} and D̂^{-1} that produce a Φ satisfying: (i) ΦΦ^T = CI (meaning the rows have a constant ℓ2 norm and are uncorrelated), and (ii) ‖φ_i‖_2 = 1 for all i. Up to irrelevant scale factors, a unique such transformation will always exist. In Section 5 we empirically demonstrate that this can be a highly effective strategy for improving the performance of Type I methods. However, as a final point, we should mention that the invariance Type II exhibits towards W and D (or any corrected form of Type I) will no longer strictly hold once noise is added.
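The text does not spell out how Ŵ^{-1} and D̂^{-1} are learned. One simple alternating scheme (an assumption on our part, not necessarily the authors' algorithm) repeatedly normalizes columns and whitens rows; empirically it settles toward a dictionary satisfying both properties up to scale:

```python
import numpy as np

def learn_invariance_transform(Phi, iters=200):
    """Alternate unit-l2 column normalization with row whitening so the
    result has uncorrelated, equal-norm rows and (approximately) equal-norm
    columns, up to an irrelevant global scale. Illustrative sketch only."""
    P = Phi.copy()
    for _ in range(iters):
        P = P / np.linalg.norm(P, axis=0, keepdims=True)   # unit l2 columns
        U, s, _ = np.linalg.svd(P @ P.T)
        P = U @ np.diag(s ** -0.5) @ U.T @ P               # whiten: P P^T = I
    return P
```

Ending on the whitening step makes PP^T exactly the identity, so the column norms are only approximately equal afterwards, consistent with the "up to irrelevant scale factors" caveat above.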
4.2  Invariance to S: The t > 1 Case (Simultaneous Sparse Approximation)

We now turn to the potentially more problematic scenario with Φ = Φ̃S. We will assume that S
We will assume that S
is arbitrary with the only restriction being that the resulting ? satisfies spark[?] = n + 1, where
matrix spark quantifies the smallest number of linearly dependent columns [7]. Consequently, the
spark condition is equivalent to saying that each n ? n sub-matrix of ? is full rank. This relatively
weak assumption is adopted for simplicity; in many cases it can be relaxed.
Lemma 2. Let Φ be an arbitrary dictionary with spark[Φ] = n + 1 and X0 a coefficient matrix with d(X0) < n. Then there exists a constant β > 0 such that the optimization problem (7), with g(X) = g^(II)(X) and λ → 0, has no local minima and a unique, global solution at X0 if (x0)_{i·}(x0)_{j·}^T ≤ β for all i ≠ j (i.e., the nonzero rows of X0 are below some correlation threshold). Also, if we enforce exactly zero row-wise correlations, meaning β = 0, then a minimizing solution X* will satisfy ‖x*_{i·}‖_2 = ‖(x0)_{i·}‖_2 for all i (i.e., a matching row-sparsity support), even for d(X0) ≥ n. This solution will be unique whenever ΦX0X0^TΦ^T = ΦΓΦ^T has a unique solution for some non-negative, diagonal Γ.²
Corollary 1. There will always exist dictionaries Φ and coefficients X0, consistent with the conditions from Lemma 2, such that the optimization problem (7) with any possible g(X) of the form g^(I)(X) = Σ_i h(‖x_{i·}‖_2) will have minimizing solutions not equal to X0 (with or without column normalization).
In general, Lemma 2 suggests that for estimation purposes uncorrelated rows in X0 can potentially
compensate for troublesome dictionary structure, and together with Corollary 1 it also describes a
potential advantage of Type II over Type I. Of course this result only stipulates sufficient conditions
for recovery that are certainly not necessary, i.e., effective sparse recovery is possible even with
correlated rows (more on this below). We also emphasize that the final property of Lemma 2 implies
that the row norms of X0 (and therefore the row-sparsity support) can still be recovered even up
to the extreme case of d(X0 ) = m > n. While this may seem surprising at first, especially since
even brute force minimization of (3) can not achieve a similar feat, it is important to keep in mind
that (3) is blind to the correlation structure of X0 . Although Type II does not explicitly require any
such structure, it is able to outperform (3) by implicitly leveraging this structure when the situation
happens to be favorable. While space prevents a full treatment, in the context of MEG/EEG source
estimation, we have successfully localized 500 nonzero sources (rows) using a 100 × 1000 dictionary.
However, what about the situation where strong correlations do exist between the nonzero rows of
X0 ? A couple things are worth mentioning in this regard. First, Lemma 2 can be strengthened
considerably via the expanded optimization problem min_{X,B} g^(II)(X) s.t. ΦX0 = ΦXB, which achieves a result similar to Lemma 2 but with a weaker correlation condition (although the row-norm recovery property is lost). Secondly, in the case of perfect correlation between rows (the
hardest case), the problem reduces to an equivalent one with t = 1, i.e., it exactly reduces to the
canonical sparse recovery problem. We address this situation next.
4.3  Invariance to S: The t = 1 Case (Standard Sparse Approximation)

This section considers the t = 1 case, meaning Y = y and X0 = x0 are now vectors. For convenience, we define X(S, P) as the set of all coefficient vectors in R^m with support (or nonzero coefficient locations) specified by the index set S ⊆ {1, . . . , m} and sign pattern given by P ∈ {−1, +1}^{|S|} (here the |·| operator denotes the cardinality of a set).
Lemma 3. Let Φ be an arbitrary dictionary with spark[Φ] = n + 1. Then for any X(S, P) with |S| < n, there exists a non-empty subset X̄ ⊆ X(S, P) (with nonzero Lebesgue measure), such that if x0 ∈ X̄, the Type II minimization problem

    min_x g^(II)(x)  s.t.  Φx0 = Φx,  λ → 0    (9)

will have a unique minimum and it will be located at x0.
² See Appendix for more details about this condition. In most situations, it will hold if m < n(n + 1)/2, and likely for many instances with m even greater than this.
This Lemma can be obtained with a slight modification of results in [18]. In other words, no matter how poorly structured a particular dictionary is with regard to a given sparsity profile, there
will always be sparse coefficients we are guaranteed to recover (provided we utilize a convergent
algorithm). In contrast, an equivalent claim can not be made for Type I:
Lemma 4. Given an arbitrary Type I penalty g^(I)(x) = Σ_i h(|x_i|), with h a fixed, non-decreasing function, there will always exist a dictionary Φ (with or without normalized columns) and set X(S, P) such that for any x0 ∈ X(S, P), the problem

    min_x g^(I)(x)  s.t.  Φx0 = Φx    (10)

will not have a unique minimum located at x0.
This can happen because the global minimum does not equal x0 and/or because of the presence of local minima. Of course this does not necessarily imply that a particular Type I algorithm will fail. For example, even with multiple minima, an appropriate optimization strategy could conceivably still locate an optimum that coincides with x0. While it is difficult to analyze all possible algorithms, we can address one influential variety based on iterative reweighted ℓ1 minimization [4, 18]. Here the idea is that if h is concave and differentiable, then a convergent means of minimizing (10) is to utilize a first-order Taylor series approximation of g^(I)(x) at some point x̂. This leads to an iterative procedure where at each step we must first compute ĥ_i ≜ dh(z)/dz|_{z=|x̂_i|} and then minimize Σ_i ĥ_i |x_i| subject to Φx0 = Φx to update x̂. This method produces a sparse estimate at each iteration and is guaranteed to converge to a local minimum (or stationary point) of (10). However, this solution may be suboptimal in the following sense:
solution may be suboptimal in the following sense:
Corollary 2. Given an arbitrary g (I) (x) as in Lemma 4, there will always exist a ? and X (S, P),
such that for any x0 ? X (S, P), iterative reweighted ?1 minimization will not converge to x0 when
initialized at the minimum ?1 norm solution.
Note that this failure does not result from a convergence pathology. Rather, the presence of minima
different from x0 explicitly disrupts the algorithm.
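For concreteness, the reweighted ℓ1 iteration described above can be sketched by solving each weighted subproblem as a linear program (our own illustrative implementation; the h(z) = log(z + ε) weighting is used as an example choice, and `eps` and the iteration count are arbitrary knobs):

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(Phi, y, eps=0.1, iters=5):
    """Iterative reweighted l1: each step minimizes sum_i w_i|x_i| subject
    to Phi x = y via a linear program, then updates w_i = 1/(|x_i| + eps),
    the weight implied by h(z) = log(z + eps)."""
    n, m = Phi.shape
    w = np.ones(m)                        # first pass = plain l1 (basis pursuit)
    A_eq = np.hstack([Phi, -Phi])         # split x = p - q with p, q >= 0
    for _ in range(iters):
        c = np.concatenate([w, w])        # objective w·(p+q) = w·|x|
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        x = res.x[:m] - res.x[m:]
        w = 1.0 / (np.abs(x) + eps)
    return x
```

On incoherent dictionaries this typically recovers x0 exactly; the corollary's point is that on adversarially structured Φ the same iteration gets trapped at a spurious minimum.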
In general, with highly structured dictionaries deviating from the ideal, the global minimum of convex penalties often does not correspond with x0 as theoretical equivalence results break down. This in turn suggests the use of concave penalty functions to seek possible improvement. However, as illustrated by the following result, even the simplest of sparse recovery problems, that of estimating some x0 with only one nonzero element using a dictionary with a 1D null-space, can leave Type I with problematic local minima under (strictly) concave penalties. For this purpose we define φ_* as an arbitrary column of Φ and Φ̄_* as all columns of Φ excluding φ_*.
Lemma 5. Let h denote a concave, non-decreasing function with h′_max ≜ lim_{z→0} dh(z)/dz and h′_min ≜ lim_{z→∞} dh(z)/dz. Also, let Φ be a dictionary with unit ℓ2 norm columns and spark[Φ] = m = n + 1 (i.e., a 1D null-space), and let x0 satisfy ‖x0‖_0 = 1 with associated φ_*. Then the Type I problem (10) can have multiple local minima if

    h′_max / h′_min > ‖Φ̄_*^{-1} φ_*‖_1.    (11)
This result has a very clear interpretation related to how dictionary coherence can potentially disrupt even the most rudimentary of estimation tasks. The righthand side of (11) is bounded from below by 1, which is approached whenever one or more columns in Φ̄_* are similar to φ_* (i.e., coherent). Thus, even the slightest amount of curvature (or strict concavity) in h can lead to the inequality being satisfied when highly coherent columns are present. While obviously with h(z) = z this will not be an issue (consistent with the well-known convexity of the ℓ1 problem), for many popular non-convex penalties this gradient ratio may be large relative to the righthand side, indicating that local minima are always possible. For example, with the h(z) = log(z + ε) selection from [4], h′_min → 0 for all ε while h′_max = 1/ε. We note that Type II has provably no local minima in this regime (this follows as a special case of Lemma 3). Of course the point here is not that Type I algorithms are incapable of solving simple problems with ‖x0‖_0 = 1 (and any iterative reweighted ℓ1 scheme will succeed on the first step anyway). Rather, Lemma 5 merely demonstrates how highly structured dictionaries can begin to have negative effects on Type I, potentially more so than with Type II, even on trivial tasks. The next section will empirically explore this conjecture.
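The right-hand side of (11) is cheap to compute. The sketch below uses a toy Φ̄_* = I (our construction, not one from the paper) to show it approaching 1 as φ_* aligns with an existing column:

```python
import numpy as np

def lemma5_rhs(Phi, j):
    # ||Phi_bar^{-1} phi_j||_1, where Phi_bar holds the n remaining
    # columns and is square since m = n + 1 here.
    phi = Phi[:, j]
    Phi_bar = np.delete(Phi, j, axis=1)
    return np.linalg.norm(np.linalg.solve(Phi_bar, phi), 1)

I3 = np.eye(3)
coh = np.array([1.0, 0.05, 0.05]); coh /= np.linalg.norm(coh)  # near a column
spread = np.ones(3) / np.sqrt(3.0)                             # far from all
Phi_coh = np.column_stack([coh, I3])
Phi_spr = np.column_stack([spread, I3])
print(lemma5_rhs(Phi_coh, 0), lemma5_rhs(Phi_spr, 0))
```

With the coherent column the bound sits just above 1, so any strictly concave h (finite h′_max/h′_min > 1, or infinite for the log penalty) can trigger the local-minima condition.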
5  Empirical Results
We now present two simulation examples illustrating the potential benefits of Type II with highly structured dictionaries. In the first experiment, the dictionary represents an MEG leadfield, which at a high level can be viewed as a mapping from the electromagnetic (EM) activity within m brain voxels to n sensors placed near the scalp surface. Computed using Maxwell's equations and a spherical-shell head model [12], the resulting Φ is characterized by highly correlated rows, because the small scalp surface requires that sensors be placed close together, and vastly different column norms, since the EM field strength drops off rapidly for deep brain sources. These effects are well represented by a dictionary such as Φ = WΦ̃D as discussed previously. Figure 1 (left) displays trial-averaged results comparing Type I algorithms with Type II using such an MEG leadfield dictionary.

Data generation proceeded as follows: We produce Φ by choosing 50 random sensor locations and 100 random voxels within the brain volume. We then create a coefficient matrix X0 with t = 5 columns and d(X0) an experiment-dependent parameter. Nonzero rows of X0 are drawn iid from a unit Gaussian distribution. The observation matrix is then computed as Y = ΦX0. We run each algorithm and attempt to estimate X0, calculating the probability of success averaged over 200 trials as d(X0) is varied from 10 to 50.

We compared Type II, implemented via a simple iterative reweighted ℓ2 approach, with two different Type I schemes. The first is a homotopy continuation method using the Type I penalty g^(I)(X) = Σ_i log(‖x_{i·}‖²_2 + ε), where ε is gradually reduced to zero during the estimation process [5]. We have often found this to be the near-optimal Type I approach on a variety of empirical tests. Secondly, we used the standard mixed-norm penalty g^(I)(X) = ‖X‖_{1,2} = Σ_i ‖x_{i·}‖_2, which leads to a convex minimization problem that generalizes basis pursuit (or the lasso) to the t > 1 domain [6, 10].
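The data-generation protocol above is easy to reproduce. The sketch below substitutes a Gaussian matrix for the actual MEG leadfield (computing the real Φ requires the spherical head model of [12], which we do not attempt here):

```python
import numpy as np

def gen_trial(n=50, m=100, t=5, d0=10, rng=None):
    """One synthetic trial mirroring the paper's setup: random dictionary,
    row-sparse X0 with d(X0) = d0 iid Gaussian nonzero rows, Y = Phi X0."""
    rng = np.random.default_rng(rng)
    Phi = rng.standard_normal((n, m))
    support = rng.choice(m, size=d0, replace=False)
    X0 = np.zeros((m, t))
    X0[support] = rng.standard_normal((d0, t))
    Y = Phi @ X0
    return Phi, X0, Y, support
```

Sweeping d0 from 10 to 50 over repeated trials, and scoring an algorithm's output by whether the estimated row support matches `support`, reproduces the success-probability curves of Figure 1 (left) for whichever solvers are plugged in.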
While Type II displays invariance to W- and D-like transformations, Type I methods do not. Consequently, we examined two dictionary-standardization methods for Type I. First, we utilized basic ℓ2 column normalization, without which Type I will have difficulty with the vastly different column scalings of Φ. Secondly, we developed an algorithm to learn a transformed dictionary ÛΦD̂, with Û arbitrary and D̂ diagonal, such that the combined dictionary has uncorrelated, unit ℓ2 norm rows and unit ℓ2 norm columns (as discussed in Section 4.1). Figure 1 (left) contains results from all of these variants, where it is clear that some compensation for the dictionary structure is essential for good recovery performance. We also note that Type II still outperforms Type I in all cases, suggesting that even after transformation of the latter, there is still residual structure in the MEG leadfield being exploited by Type II. This is a very reasonable assumption given that Φ will typically have strong column-wise correlations as well, which are more effectively modeled by right multiplication by some S. As a final point, the Type II success probability does not go to zero even when d(X0) = 50, implying that in some cases it is able to find a number of nonzeros equal to the number of rows in Φ. This is possible because even with only t = 5 columns, the nonzero rows of X0 display somewhat limited sample correlation, and so exact support recovery is still possible. With t > 5 these sample correlations can be reduced further, allowing consistent support recovery when d(X0) > n (not shown).
To further test the ability of Type II to handle structure imposed by some Φ̃S, we performed a second experiment with explicitly controlled correlations among groups of columns. For each trial we generated a 50 × 100 Gaussian iid dictionary Φ̃. Correlations were then introduced using a block-diagonal S with 4 × 4 blocks created with iid entries drawn from a uniform distribution (between 0 and 1). The resulting Φ = Φ̃S was then scaled to have unit ℓ2 norm columns. We then generated a random x0 vector (t = 1 case) using iid Gaussian nonzero entries with d(x0) varied from 10 to 25 (with t = 1, we cannot expect to recover as many nonzeros as when t = 5). Signal vectors are computed as y = Φx0 or, for purposes of direct comparison with a canonical iid dictionary, y = Φ̃x0. We evaluated Type II and the Type I iterative reweighted ℓ1 minimization method from [4], which is guaranteed to do as well or better than standard ℓ1 norm minimization. Trial-averaged results using both Φ and Φ̃ are shown in Figure 1 (right), where it is clear that while Type II performance is essentially unchanged, Type I performance degrades substantially.
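The coherent dictionary of this second experiment can be constructed in a few lines (a sketch following the description above; the default sizes match the text):

```python
import numpy as np
from scipy.linalg import block_diag

def coherent_dictionary(n=50, m=100, block=4, rng=None):
    """Phi = Phi_tilde @ S with block-diagonal S (uniform [0,1) blocks),
    then columns scaled to unit l2 norm, as described in the experiment."""
    rng = np.random.default_rng(rng)
    Phi_tilde = rng.standard_normal((n, m))
    blocks = [rng.uniform(0.0, 1.0, (block, block)) for _ in range(m // block)]
    S = block_diag(*blocks)
    Phi = Phi_tilde @ S
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
```

Because every column inside a block is a positive combination of the same four Φ̃ columns, within-block correlations are strong while cross-block correlations stay at the incoherent Gaussian baseline, which is precisely the clustered-column structure plotted in Figure 1 (right).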
[Figure 1: two panels plotting probability of success against row-sparsity. Left-panel curves: Type II; Type I, homotopy, norm.; Type I, homotopy, invariant; Type I, basis pursuit, norm.; Type I, basis pursuit, invariant (row-sparsity 10-50). Right-panel curves: Type II, iid Φ; Type II, coherent Φ; Type I, iid Φ; Type I, coherent Φ (row-sparsity 10-25).]
Figure 1: Left: Probability of success recovering coefficient matrices with varying degrees of row-sparsity using an MEG leadfield as the dictionary. Two Type I methods were compared, a homotopy continuation method from [5] and a version of basis pursuit extended to the simultaneous sparse approximation problem by minimizing the ℓ1,2 mixed norm [6, 10]. Type I methods were compared using standard ℓ2 column normalization and a learned invariance transformation. Right: Probability of success recovering sparse vectors using a Gaussian iid dictionary Φ̃ and a coherent dictionary Φ with clustered columns. The Type I method was the iterative reweighted ℓ1 algorithm from [4].
6  Conclusion
When we are free to choose the basis vectors of an overcomplete signal dictionary, the sparse estimation problem is supported by strong analytical and empirical foundations. However, there are many
applications where physical restrictions or other factors impose rigid constraints on the dictionary
structure such that the assumptions of theoretical recovery guarantees are violated. Examples include model selection problems with correlated features, source localization, and compressive sensing with constrained measurement directions. This can have significant consequences depending
on how the estimated coefficients will ultimately be utilized. For example, in the source localization problem, correlated dictionary columns may correspond with drastically different regions (e.g.,
brain areas), so recovering the exact sparsity profile can be important. Ideally we would like our
recovery algorithms to display invariance, to the extent possible, to the actual structure of the dictionary. With typical Type I sparsity penalties this can be a difficult undertaking; however, with the
natural dictionary dependence of the Type II penalty, to some extent it appears this structure can be
accounted for, leading to more consistent performance across dictionary types.
Appendix
Here we provide brief proofs of several results from the paper. Some details have been omitted for
space considerations.
Proof of Lemma 1: First we address invariance with respect to W. Obviously the equality constraint is unaltered by a full-rank W, so it only remains to check that the dictionary-dependent penalty g^(II) is invariant. However, by standard determinant relationships, log|WΦ̃DΓDΦ̃^TW^T| = log|W| + log|Φ̃DΓDΦ̃^T| + log|W^T| = log|Φ̃DΓDΦ̃^T| + C, where C is an irrelevant constant for optimization purposes, so this point is established. With respect to D, we re-parameterize the problem by defining X̃ ≜ DX and Γ̃ ≜ DΓD. It is then readily apparent that the penalty (6) satisfies

    g^(II)(X) ≡ min_{Γ⪰0} Tr[X^T Γ^{-1} X] + log|Φ̃DΓDΦ̃^T| = min_{Γ̃⪰0} Tr[X̃^T Γ̃^{-1} X̃] + log|Φ̃Γ̃Φ̃^T|.    (12)

So we are effectively solving min_{X̃} g̃^(II)(X̃) s.t. Φ̃X̃0 = Φ̃X̃, with X̃0 ≜ DX0 and g̃^(II) denoting the penalty defined with respect to Φ̃.
Proof of Lemma 2 and Corollary 1: Minimizing the Type II cost function can be accomplished equivalently by minimizing

    L(Γ) ≜ Tr[t^{-1} ΦX0X0^T Φ^T (ΦΓΦ^T)^{-1}] + log|ΦΓΦ^T|,    (13)
over the non-negative diagonal matrix Γ (this follows from a duality principle in Type II models [18]). L(Γ) includes an observed covariance t^{-1}ΦX0X0^TΦ^T and a parameterized model covariance ΦΓΦ^T, and is globally minimized with Γ* = t^{-1} diag[X0X0^T] [17]. Moreover, if ΦΓ*Φ^T is sufficiently close to t^{-1}ΦX0X0^TΦ^T, meaning the off-diagonal elements of X0X0^T are not too large, then it can be shown by differentiating along the direction between any arbitrary point Γ′ and Γ* that no local minima exist, leading to the first part of Lemma 2.

Regarding the second part, we now allow d(X0) to be arbitrary but require that X0X0^T be diagonal (zero correlations). Using similar arguments as above, it is easily shown that any minimizing solution Γ* must satisfy ΦΓ*Φ^T = t^{-1}ΦX0X0^TΦ^T. This equality can be viewed as n(n + 1)/2 linear equations (equal to the number of unique elements in an n × n covariance matrix) in m unknowns, namely the diagonal elements of Γ*. Therefore, if n(n + 1)/2 > m this system of equations will typically be overdetermined (e.g., if suitable randomness is present to avoid adversarial conditions) with a unique solution. Moreover, because of the requirement that Γ be non-negative, it is likely that a unique solution will exist in many cases where m is even greater than n(n + 1)/2 [2].
Finally, we address Corollary 1. First, consider the case where t = 1, so X0 = x0. To satisfy the now degenerate correlation condition, we must have d(x0) = 1. Even in this simple regime it can be demonstrated that a unique minimum at x0 is possible iff h(z) = z, based on Lemma 5 (below) and a complementary result in [17]. So the only Type I possibility is h(z) = z. A simple counterexample with t = 2 serves to rule this selection out. Consider a dictionary Φ and two coefficient matrices given by

    Φ = [ 1  −1   0   0
          0   0   ε  −ε
          ε   ε   1   1 ],

    X(1) = [ 1   1       X(2) = [ 0   1
             1  −1                0  −1
             0   0                ε   0
             0   0 ],             ε   0 ].    (14)

It is easily verified that ΦX(1) = ΦX(2) and that X(1) = X0, the maximally row-sparse solution. Computing the Type I cost for each with h(z) = z gives g^(I)(X(1)) = 2√2 and g^(I)(X(2)) = 2(1 + ε). Thus, if we allow ε to be small, g^(I)(X(2)) < g^(I)(X(1)), so X(1) = X0 cannot be the minimizing solution. Note that ℓ2 column normalization will not change this conclusion since all columns of Φ have equal norm already.
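These identities are easy to verify numerically. The matrices below are one concrete instance consistent with all of the stated properties (equal column norms, and [1, 1, −ε, −ε]^T spanning the null-space of Φ):

```python
import numpy as np

eps = 0.01
Phi = np.array([[1.0, -1.0, 0.0, 0.0],
                [0.0, 0.0, eps, -eps],
                [eps, eps, 1.0, 1.0]])
X1 = np.array([[1.0, 1.0], [1.0, -1.0], [0.0, 0.0], [0.0, 0.0]])
X2 = np.array([[0.0, 1.0], [0.0, -1.0], [eps, 0.0], [eps, 0.0]])

row_norm_sum = lambda X: np.linalg.norm(X, axis=1).sum()  # Type I with h(z) = z
print(np.allclose(Phi @ X1, Phi @ X2))    # same observations
print(row_norm_sum(X1), row_norm_sum(X2)) # 2*sqrt(2) vs 2*(1 + eps)
```

Since X2 − X1 is a rank-one multiple of the null vector, both candidates explain the data exactly, yet the denser X2 wins under the ℓ1,2 penalty.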
Proof of Lemma 4 and Corollary 2: For brevity, we will assume that h is concave and differentiable, as is typical of most sparsity penalties used in practice (the more general case follows with some additional effort). This of course includes h(z) = z, which is both concave and convex, and leads to the ℓ1 norm penalty. These results will now be demonstrated using a simple counterexample similar to the one above. Assume we have the dictionary Φ from (14), and that S = {1, 2} and P = {+1, +1}, which implies that any x0 ∈ X(S, P) can be expressed as x0 = [α1, α2, 0, 0]^T, for some α1, α2 > 0. We will now show that with any member from this set, there will not be a unique minimum of the Type I cost at x0 for any possible concave, differentiable h.

First assume α1 ≥ α2. Consider the alternative feasible solution x(2) = [(α1 − α2), 0, εα2, εα2]^T. To check if this is a local minimum, we can evaluate the gradient of the penalty function g^(I)(x) along the feasible region near x(2). Given v = [1, 1, −ε, −ε]^T ∈ Null(Φ), this can be accomplished by computing ∂g^(I)(x(2) + τv)/∂τ, which collects the terms h′(|α1 − α2 + τ|), h′(|τ|), and 2εh′(|εα2 − ετ|), with signs determined by the absolute values. In the limit as τ → 0 (from the right or the left), the outward one-sided derivative is always positive for ε < 0.5 by the concavity of h. Therefore, x(2) must be a local minimum. By symmetry an equivalent argument can be made when α2 ≥ α1. (In the special case where α1 = α2, there will actually exist two maximally sparse solutions, the generating x0 and x(2).) It is also straightforward to verify analytically that iterative reweighted ℓ1 minimization will fail on this example when initialized at the minimum ℓ1 norm solution. It will always become trapped at x(2) after the first iteration, assuming α1 ≥ α2, or at a symmetric local minimum otherwise.
Proof of Lemma 5: This result can be shown by examining properties of various gradients along
the feasible region, not unlike some of the analysis above, and then bounding the resultant quantity.
We defer these details to a later publication.
References

[1] S. Baillet, J.C. Mosher, and R.M. Leahy, "Electromagnetic brain mapping," IEEE Signal Processing Magazine, pp. 14-30, Nov. 2001.
[2] A.M. Bruckstein, M. Elad, and M. Zibulevsky, "A non-negative and sparse enough solution of an underdetermined linear system of equations is unique," IEEE Trans. Information Theory, vol. 54, no. 11, pp. 4813-4820, Nov. 2008.
[3] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[4] E. Candès, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877-905, 2008.
[5] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," Proc. Int. Conf. Acoustics, Speech, and Signal Proc., 2008.
[6] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Signal Processing, vol. 53, no. 7, pp. 2477-2488, April 2005.
[7] D.L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197-2202, March 2003.
[8] J.J. Fuchs, "On sparse representations in arbitrary redundant bases," IEEE Trans. Information Theory, vol. 50, no. 6, pp. 1341-1344, June 2004.
[9] S. Ji, D. Dunson, and L. Carin, "Multi-task compressive sensing," IEEE Trans. Signal Processing, vol. 57, no. 1, pp. 92-106, Jan. 2009.
[10] D.M. Malioutov, M. Çetin, and A.S. Willsky, "Sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Trans. Signal Processing, vol. 53, no. 8, pp. 3010-3022, August 2005.
[11] B.D. Rao, K. Engan, S.F. Cotter, J. Palmer, and K. Kreutz-Delgado, "Subset selection in noise based on diversity measure minimization," IEEE Trans. Signal Processing, vol. 51, no. 3, pp. 760-770, March 2003.
[12] J. Sarvas, "Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem," Phys. Med. Biol., vol. 32, pp. 11-22, 1987.
[13] J.G. Silva, J.S. Marques, and J.M. Lemos, "Selecting landmark points for sparse manifold learning," Advances in Neural Information Processing Systems 18, pp. 1241-1248, 2006.
[14] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[15] J.A. Tropp, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, vol. 86, pp. 589-602, April 2006.
[16] M.B. Wakin, M.F. Duarte, S. Sarvotham, D. Baron, and R.G. Baraniuk, "Recovery of jointly sparse signals from a few random projections," Advances in Neural Information Processing Systems 18, pp. 1433-1440, 2006.
[17] D.P. Wipf, Bayesian Methods for Finding Sparse Representations, PhD Thesis, University of California, San Diego, 2006.
[18] D.P. Wipf and S. Nagarajan, "Iterative reweighted ℓ1 and ℓ2 methods for finding sparse solutions," J. Selected Topics in Signal Processing (Special Issue on Compressive Sensing), vol. 4, no. 2, April 2010.
fourier:2 argument:2 min:13 separable:1 expanded:1 relatively:1 conjecture:1 structured:12 influential:1 march:2 describes:1 across:1 em:2 modification:1 happens:1 conceivably:1 restricted:1 invariant:6 gradually:1 equation:4 previously:1 remains:1 discus:1 count:1 turn:2 fail:2 mind:1 tractable:1 serf:1 adopted:1 generalizes:1 pursuit:4 apply:2 progression:1 appropriate:2 enforce:1 alternative:2 original:1 assumes:1 denotes:1 include:2 ensure:1 wakin:2 calculating:1 k1:1 especially:2 unchanged:1 objective:1 added:1 already:1 quantity:1 strategy:3 degrades:1 dependence:1 diagonal:11 exhibit:1 minx:1 gradient:3 majority:1 landmark:1 topic:1 manifold:2 extent:4 considers:1 trivial:1 provable:1 willsky:1 assuming:1 meg:6 index:1 modeled:1 relationship:1 ratio:1 minimizing:7 disruptive:2 equivalently:1 difficult:2 mostly:1 unfortunately:1 neuroimaging:1 potentially:4 dunson:1 negative:6 implementation:1 anal:1 unknown:3 allowing:1 observation:3 compensation:1 marque:1 situation:6 extended:1 excluding:1 head:1 defining:1 locate:1 rn:1 varied:2 arbitrary:17 august:1 davidwipf:1 david:1 introduced:1 namely:1 specified:1 california:1 coherent:6 learned:1 herein:3 established:2 nip:1 trans:6 address:5 able:2 below:4 pattern:1 regime:3 sparsity:13 max:3 suitable:1 difficulty:2 fromp:1 force:1 natural:1 indicator:1 residual:1 scheme:2 brief:2 imply:1 numerous:1 created:1 incoherent:3 literature:1 voxels:2 multiplication:1 relative:2 expect:2 mixed:3 generation:1 localized:1 foundation:1 degree:1 sufficient:2 consistent:4 standardization:1 principle:2 uncorrelated:3 row:33 course:4 penalized:3 accounted:1 placed:3 supported:1 free:1 drastically:2 bias:1 weaker:2 side:2 allow:2 differentiating:1 sparse:35 distributed:1 regard:2 benefit:1 concavity:2 forward:1 commonly:1 made:2 san:1 correlate:1 transaction:1 approximate:1 emphasize:1 nov:2 implicitly:3 feat:1 keep:1 global:5 bruckstein:1 kreutz:2 xi:4 disrupt:3 search:1 iterative:8 slightest:1 quantifies:1 reality:1 reviewed:1 
learn:2 robust:2 superficial:1 symmetry:1 eeg:2 improving:2 necessarily:1 domain:2 diag:1 linearly:1 bounding:1 noise:4 profile:2 complementary:1 strengthened:1 sub:1 limz:2 down:2 specific:1 sensing:7 normalizing:1 intractable:1 exists:2 essential:1 biomagnetic:1 effectively:3 ci:1 mirror:1 phd:1 magnitude:1 kx:1 yin:1 simply:1 likely:2 explore:1 visual:1 prevents:1 kxk:3 expressed:2 adjustment:1 satisfies:2 dh:3 shell:1 succeed:1 sarvotham:1 viewed:3 consequently:2 donoho:1 towards:1 feasible:3 change:1 typical:2 corrected:1 lemma:20 etin:1 invariance:13 duality:1 e:2 indicating:1 support:5 latter:1 brevity:2 relevance:1 violated:1 evaluate:1 biol:1 correlated:6 |
3,684 | 4,336 | Dimensionality Reduction
Using the Sparse Linear Model
Todd Zickler
Harvard SEAS
Cambridge, MA 02138
[email protected]
Ioannis Gkioulekas
Harvard SEAS
Cambridge, MA 02138
[email protected]
Abstract
We propose an approach for linear unsupervised dimensionality reduction, based
on the sparse linear model that has been used to probabilistically interpret sparse
coding. We formulate an optimization problem for learning a linear projection
from the original signal domain to a lower-dimensional one in a way that approximately preserves, in expectation, pairwise inner products in the sparse domain. We
derive solutions to the problem, present nonlinear extensions, and discuss relations
to compressed sensing. Our experiments using facial images, texture patches, and
images of object categories suggest that the approach can improve our ability to
recover meaningful structure in many classes of signals.
1 Introduction
Dimensionality reduction methods are important for data analysis and processing, with their use
motivated mainly from two considerations: (1) the impracticality of working with high-dimensional
spaces along with the deterioration of performance due to the curse of dimensionality; and (2) the
realization that many classes of signals reside on manifolds of much lower dimension than that
of their ambient space. Linear methods in particular are a useful sub-class, for both the reasons
mentioned above, and their potential utility in resource-constrained applications like low-power
sensing [1, 2]. Principal component analysis (PCA) [3], locality preserving projections (LPP) [4],
and neighborhood preserving embedding (NPE) [5] are some common approaches. They seek to
reveal underlying structure using the global geometry, local distances, and local linear structure,
respectively, of the signals in their original domain; and have been extended in many ways [6-8].1
On the other hand, it is commonly observed that geometric relations between signals in their original domain are only weakly linked to useful underlying structure. To deal with this, various feature
transforms have been proposed to map signals to different (typically higher-dimensional) domains,
with the hope that geometric relations in these alternative domains will reveal additional structure,
for example by distinguishing image variations due to changes in pose, illumination, object class,
and so on. These ideas have been incorporated into methods for dimensionality reduction by first
mapping the input signals to an alternative (higher-dimensional) domain and then performing dimensionality reduction there, for example by treating signals as tensors instead of vectors [9, 10] or
using kernels [11]. In the latter case, however, it can be difficult to design a kernel that is beneficial
for a particular signal class, and ad hoc selections are not always appropriate.
In this paper, we also address dimensionality reduction through an intermediate higher-dimensional
space: we consider the case in which input signals are samples from an underlying dictionary model.
This generative model naturally suggests using the hidden covariate vectors as intermediate features,
and learning a linear projection (of the original domain) to approximately preserve the Euclidean geometry of these vectors. Throughout the paper, we emphasize a particular instance of this model that
is related to sparse coding, motivated by studies suggesting that data-adaptive sparse representations
1 Other linear methods, most notably linear discriminant analysis (LDA), exploit class labels to learn projections. In this paper, we focus on the unsupervised setting.
are appropriate for signals such as natural images and facial images [12, 13], and enable state-of-the-art performance for denoising, deblurring, and classification tasks [14-19].
Formally, we assume our input signal to be well-represented by a sparse linear model [20], previously used for probabilistic sparse coding. Based on this generative model, we formulate learning
a linear projection as an optimization problem with the objective of preservation, in expectation, of
pairwise inner products between sparse codes, without having to explicitly obtain the sparse representation for each new sample. We study the solutions of this optimization problem, and we discuss
how they are related to techniques proposed for compressed sensing. We discuss applicability of our
results to general dictionary models, and nonlinear extensions. Finally, by applying our method to
the visualization, clustering, and classification of facial images, texture patches, and general images,
we show experimentally that it improves our ability to uncover useful structure. Omitted proofs and
additional results can be found in the accompanying supplementary material.
2 The sparse linear model
We use ℝ^N to denote the ambient space of the input signals, and assume that each signal x ∈ ℝ^N is
generated as the sum of a noise term ε ∈ ℝ^N and a linear combination of the columns, or atoms, of
an N × K dictionary matrix D = [d_1, . . . , d_K], with the coefficients arranged as a vector a ∈ ℝ^K,

    x = Da + \varepsilon.    (1)

We assume the noise to be white Gaussian, ε ~ N(0_{N×1}, σ² I_{N×N}). We are interested in the
sparse linear model [20], according to which the elements of a are a priori independent from ε and
are identically and independently drawn from a Laplace distribution,

    p(a) = \prod_{i=1}^{K} p(a_i), \qquad p(a_i) = \frac{1}{2\tau} \exp\left( -\frac{|a_i|}{\tau} \right).    (2)
In the context of this model, D is usually overcomplete (K > N), and in practice often learned in
an unsupervised manner from training data. Several efficient algorithms exist for dictionary learning [21-23], and we assume in our analysis that a dictionary D adapted to the signals of interest is
given.

Our adoption of the sparse linear model is motivated by significant empirical evidence that it is
accurate for certain signals of interest, such as natural and facial images [12, 13], as well as the
fact that it enables high performance for such diverse tasks as denoising and inpainting [14, 24],
deblurring [15], and classification and clustering [13, 16-19]. Typically, the model (1) with an
appropriate dictionary D is employed as a means for feature extraction, in which input signals x in
ℝ^N are mapped to higher-dimensional feature vectors a ∈ ℝ^K. When inferring features a (termed
sparse codes) through maximum-a-posteriori (MAP) estimation, they are solutions to

    \min_{a} \; \frac{1}{2\sigma^2} \| x - Da \|_2^2 + \frac{1}{\tau} \| a \|_1.    (3)

This problem, known as the lasso [25], is a convex relaxation of the more general problem of sparse
coding [26] (in the rest of the paper we use both terms interchangeably). A number of efficient
algorithms for computing a exist, with both MAP [21, 27] and fully Bayesian [20] procedures.
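As a concrete illustration (not code from the paper), MAP inference of the sparse codes can be carried out with any lasso solver. The sketch below uses a plain ISTA (proximal gradient) iteration; the random unit-norm dictionary, signal, and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_codes_ista(x, D, sigma=0.3, tau=1.0, n_iter=1000):
    """MAP sparse codes for the model x = D a + noise, via ISTA.

    Minimizes (1 / (2 sigma^2)) ||x - D a||_2^2 + (1 / tau) ||a||_1,
    which is a standard lasso with weight lam = sigma^2 / tau.
    """
    lam = sigma**2 / tau
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the data term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of (1/2)||x - D a||^2
        a = soft_threshold(a - step * grad, step * lam)
    return a

# Toy check: a 3-sparse vector, random overcomplete dictionary with unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(128); a_true[[3, 40, 99]] = [1.0, -2.0, 1.5]
x = D @ a_true
a_hat = sparse_codes_ista(x, D)
print(np.count_nonzero(np.abs(a_hat) > 0.1))  # only a few coefficients survive
```

The recovered codes are slightly shrunk relative to a_true (the usual lasso bias), but the support is identified.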
3 Preserving inner products
Linear dimensionality reduction from ℝ^N to ℝ^M, M < N, is completely specified by a projection
matrix L that maps each x ∈ ℝ^N to y = Lx, y ∈ ℝ^M, and different algorithms for linear dimensionality reduction correspond to different methods for finding this matrix. Typically, we are
interested in projections that reveal useful structure in a given set of input signals.

As mentioned in the introduction, structure is often better revealed in a higher-dimensional space
of features, say a ∈ ℝ^K. When a suitable feature transform can be found, this structure may exist
as simple Euclidean geometry and be encoded in pairwise Euclidean distances or inner products
between feature vectors. This is used, for example, in support vector machines and nearest-neighbor
classifiers based on Euclidean distance, as well as k-means and spectral clustering based on pairwise inner products. For the problem of dimensionality reduction, this motivates learning a projection matrix L such that, for any two input samples, the inner product between their resulting
low-dimensional representations is close to that of their corresponding high-dimensional features.
More formally, for two samples x_k, k = 1, 2 with corresponding low-dimensional representations
y_k = L x_k and feature vectors a_k, we define Δ_p = y_1^T y_2 − a_1^T a_2 as a quantity whose magnitude
we want on average to be small. Assuming that an accurate probabilistic generative model for the
samples x and features a is available, we propose learning L by solving the optimization problem
(E denoting expectation with respect to subscripted variables)

    \min_{L \in \mathbb{R}^{M \times N}} E_{x_1, x_2, a_1, a_2} \left[ \Delta_p^2 \right].    (4)

Solving (4) may in general be a hard optimization problem, depending on the model used for a_k and
x_k. Here we solve it for the case of the sparse linear model of Section 2, under which the feature
vectors are the sparse codes. Using (1) and denoting S = L^T L, (4) becomes

    \min_{L \in \mathbb{R}^{M \times N}} E_{a_1, a_2, \varepsilon_1, \varepsilon_2} \left[ \left( a_1^T \left( D^T S D - I \right) a_2 + \varepsilon_1^T S D a_2 + \varepsilon_2^T S D a_1 + \varepsilon_1^T S \varepsilon_2 \right)^2 \right].    (5)

Assuming that x_1 and x_2 are drawn independently, we prove that (5) is equivalent to the problem

    \min_{L \in \mathbb{R}^{M \times N}} 4\tau^4 \left\| D^T S D - I \right\|_F^2 + 4\tau^2\sigma^2 \left\| S D \right\|_F^2 + \sigma^4 \left\| S \right\|_F^2,    (6)

where ‖·‖_F is the Frobenius norm, which has the closed-form solution (up to an arbitrary rotation):

    L = \mathrm{diag}\left( f(\lambda_M) \right) V_M^T.    (7)

Here, λ_M = (λ_1, . . . , λ_M) is an M × 1 vector composed of the M largest eigenvalues of the N × N
matrix D D^T, and V_M is the N × M matrix with the corresponding eigenvectors as columns. The
function f(·) is applied element-wise to the vector λ_M such that

    f(\lambda_i) = \sqrt{ \frac{4\tau^4 \lambda_i}{\sigma^4 + 4\tau^2\sigma^2 \lambda_i + 4\tau^4 \lambda_i^2} },    (8)

and diag(f(λ_M)) is the M × M diagonal matrix formed from f(λ_M). This solution assumes that
D D^T has full rank N, which in practice is almost always true as D is overcomplete.
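As an illustrative sketch (not from the paper), the closed-form projection of equations (7)-(8) amounts to a single symmetric eigendecomposition; the random dictionary and the values of σ and τ below are arbitrary stand-ins for learned ones.

```python
import numpy as np

def sparse_model_projection(D, M, sigma=0.1, tau=1.0):
    """Closed-form L = diag(f(lambda)) V^T of equations (7)-(8).

    lambda, V: the M largest eigenpairs of the N x N matrix D D^T.
    """
    lam, V = np.linalg.eigh(D @ D.T)             # ascending eigenvalues
    lam, V = lam[::-1][:M], V[:, ::-1][:, :M]    # keep the M largest
    f = np.sqrt(4 * tau**4 * lam /
                (sigma**4 + 4 * tau**2 * sigma**2 * lam + 4 * tau**4 * lam**2))
    return np.diag(f) @ V.T                      # M x N projection matrix

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 96))                # overcomplete: K > N
L = sparse_model_projection(D, M=8)
print(L.shape)                                   # (8, 32)
```

In the noiseless limit σ = 0, f(λ) reduces to λ^{-1/2} and L becomes the whitening transform of equation (9).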
Through comparison with (5), we observe that (6) is a trade-off between bringing inner products
of sparse codes and their projections close (first term), and suppressing noise (second and third
terms). Their relative influence is controlled by the variance of ε and a, through the constants σ
and τ respectively. It is interesting to compare their roles in (3) and (6): as σ increases relative to
τ, data fitting in (3) becomes less important, and (7) emphasizes noise suppression. As τ increases,
ℓ1-regularization in (3) is weighted less, and the first term in (6) more. In the extreme case of σ = 0,
the data term in (3) becomes a hard constraint, whereas (6) and (7) simplify, respectively, to

    \min_{L \in \mathbb{R}^{M \times N}} \left\| D^T S D - I \right\|_F^2, \quad \text{and} \quad L = \mathrm{diag}(\lambda_M)^{-1/2} V_M^T.    (9)
Interestingly, in this noiseless case, an ambiguity arises in the solution of (9), as a minimizer is
obtained for any subset of M eigenpairs and not necessarily the M largest ones.

The solution to (7) is similar (and in the noiseless case identical) to the whitening transform of
the atoms of D. When the atoms are centered at the origin, this essentially means that solving (4)
for the sparse linear model amounts to performing PCA on dictionary atoms learned from training
samples instead of the training samples themselves. The above result can also be interpreted in the
setting of [28]: dimensionality reduction in the case of the sparse linear model with the objective
of (4) corresponds to kernel PCA using the kernel D D^T, modulo centering and normalization.
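A quick numeric check of this whitening interpretation (illustrative, with a random dictionary): in the noiseless case with M = N, the solution of (9) maps the atoms so that their second-moment matrix is the identity.

```python
import numpy as np

rng = np.random.default_rng(4)
D = rng.standard_normal((16, 48))               # N = 16, K = 48 random atoms
lam, V = np.linalg.eigh(D @ D.T)                # eigenpairs of D D^T
L = np.diag(1.0 / np.sqrt(lam)) @ V.T           # noiseless solution (9) with M = N
W = (L @ D) @ (L @ D).T                         # second moment of projected atoms
print(np.allclose(W, np.eye(16)))               # True: the atoms are whitened
```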
3.1 Other dictionary models
Even though we have presented our results using the sparse linear model described in Section 2,
it is important to realize that our analysis is not limited to this model. The assumptions required
for deriving (5) are that signals are generated by a linear dictionary model such as (1), where the
coefficients of each of the noise and code vectors are independent and identically distributed according to some zero-mean distribution, with the two vectors also independent from each other.
The above assumptions apply for several other popular dictionary models. Examples include the
models used implicitly by ridge and bridge regression [29] and elastic-net [30], where the Laplace
prior on the code coefficients is replaced by a Gaussian, and priors of the form \exp(-\beta \|a\|_q^q) and
\exp(-\beta_1 \|a\|_1 - \beta_2 \|a\|_2^2), respectively. In the context of sparse coding, other sparsity-inducing priors
that have been proposed in the literature, such as Student's t-distribution [31], also fall into the same
framework. We choose to emphasize the sparse linear model, however, due to the apparent structure
present in dictionaries learned using this model, and its empirical success in diverse applications.
It is possible to derive similar results for a more general model. Specifically, we make the same assumptions as above, except that we only require that elements of a be zero-mean and not necessarily
identically distributed, and similarly for ε. Then, we prove that (4) becomes

    \min_{L \in \mathbb{R}^{M \times N}} \left\| \left( D^T S D - I \right) \circ \sqrt{W_1} \right\|_F^2 + \left\| (S D) \circ \sqrt{W_2} \right\|_F^2 + \left\| S \circ \sqrt{W_3} \right\|_F^2,    (10)

where ∘ denotes the Hadamard product and (\sqrt{W})_{ij} = \sqrt{(W)_{ij}}. The elements of the weight
matrices W_1, W_2 and W_3 in (10), of sizes K × K, N × K, and N × N respectively, are

    (W_1)_{ij} = E\left[ a_{1i}^2 a_{2j}^2 \right], \quad (W_2)_{ij} = E\left[ \varepsilon_{1i}^2 a_{2j}^2 \right] + E\left[ \varepsilon_{2i}^2 a_{1j}^2 \right], \quad (W_3)_{ij} = E\left[ \varepsilon_{1i}^2 \varepsilon_{2j}^2 \right].    (11)
Problem (10) can still be solved efficiently, see for example [32].
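For independent samples with independent zero-mean coordinates, the expectations in (11) factor into products of per-coordinate second moments. A small sketch of assembling the weight matrices from given variance vectors (the function name and toy values are illustrative):

```python
import numpy as np

def weight_matrices(var_a, var_eps):
    """Weight matrices of equation (11) for independent zero-mean coordinates.

    var_a   : length-K vector of code variances  E[a_i^2]
    var_eps : length-N vector of noise variances E[eps_i^2]
    Expectations over two independent samples factor, e.g.
    (W1)_ij = E[a_{1i}^2] E[a_{2j}^2] = var_a[i] * var_a[j].
    """
    W1 = np.outer(var_a, var_a)           # K x K
    W2 = 2.0 * np.outer(var_eps, var_a)   # N x K (the two symmetric terms of (11))
    W3 = np.outer(var_eps, var_eps)       # N x N
    return W1, W2, W3

# In the i.i.d. sparse linear model, every code coordinate has variance 2 tau^2
# (Laplace) and every noise coordinate sigma^2, and (10) collapses back to (6).
tau, sigma, K, N = 1.0, 0.1, 6, 4
W1, W2, W3 = weight_matrices(np.full(K, 2 * tau**2), np.full(N, sigma**2))
print(W1.shape, W2.shape, W3.shape)  # (6, 6) (4, 6) (4, 4)
```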
3.2 Extension to the nonlinear case
We consider a nonlinear extension of the above analysis through the use of kernels. We denote by
φ : ℝ^N → H a mapping from the signal domain to a reproducing kernel Hilbert space H associated
with a kernel function k : ℝ^N × ℝ^N → ℝ [33]. Using a set D = {d̃_i ∈ H, i = 1, . . . , K} as
dictionary, we extend the sparse linear model of Section 2 by replacing (1) for each x ∈ ℝ^N with

    \varphi(x) = \mathcal{D} a + \tilde{\varepsilon},    (12)

where \mathcal{D} a \equiv \sum_{i=1}^{K} a_i \tilde{d}_i. For a ∈ ℝ^K we make the same assumptions as in the sparse linear model.
The term ε̃ denotes a Gaussian process over the domain ℝ^N whose sample paths are functions in H
and with covariance operator C_ε̃ = σ² I, where I is the identity operator on H [33, 34].

This nonlinear extension of the sparse linear model is valid only in finite dimensional spaces H.
In the infinite dimensional case, constructing a Gaussian process with both sample paths in H and
identity covariance operator is not possible, as that would imply that the identity operator in H
has finite Hilbert-Schmidt norm [33, 34]. Related problems arise in the construction of cylindrical
Gaussian measures on infinite dimensional spaces [35]. We define ε̃ this way to obtain a probabilistic
model for which MAP inference of a corresponds to the kernel extension of the lasso (3) [36],

    \min_{a \in \mathbb{R}^K} \frac{1}{2\sigma^2} \left\| \varphi(x) - \mathcal{D} a \right\|_H^2 + \frac{1}{\tau} \| a \|_1,    (13)

where ‖·‖_H is the norm in H defined through k. In the supplementary material, we discuss an alternative to (12) that resolves these problems by requiring that all φ(x) be in the subspace spanned
by the atoms of D. Our results can be extended to this alternative, however in the following we
adopt (12) and limit ourselves to finite dimensional spaces H, unless mentioned otherwise.
In the kernel case, the equivalent of the projection matrix L (transposed) is a compact, linear operator
V : H → ℝ^M that maps an element x ∈ ℝ^N to y = Vφ(x) ∈ ℝ^M. We denote by V* : ℝ^M → H
the adjoint of V, and by S : H → H the self-adjoint positive semi-definite linear operator of rank
M from their synthesis, S = V*V. If we consider optimizing over S, we prove that (4) reduces to

    \min_{\mathcal{S}} \; 4\tau^4 \sum_{i=1}^{K} \sum_{j=1}^{K} \left( \langle \tilde{d}_i, \mathcal{S} \tilde{d}_j \rangle_H - \delta_{ij} \right)^2 + 4\tau^2\sigma^2 \sum_{i=1}^{K} \langle \mathcal{S} \tilde{d}_i, \mathcal{S} \tilde{d}_i \rangle_H + \sigma^4 \left\| \mathcal{S} \right\|_{HS}^2,    (14)

where ‖·‖_HS is the Hilbert-Schmidt norm. Assuming that K_DD has full rank (which is almost
always true in practice due to the very large dimension of the Hilbert spaces used) we extend the
representer theorem of [37] to prove that all solutions of (14) can be written in the form

    \mathcal{S} = (\mathcal{D} B) \otimes (\mathcal{D} B),    (15)

where ⊗ denotes the tensor product between all pairs of elements of its operands, and B is a K × M
matrix. Then, denoting Q = B B^T, problem (14) becomes

    \min_{B \in \mathbb{R}^{K \times M}} 4\tau^4 \left\| K_{DD} Q K_{DD} - I \right\|_F^2 + 4\tau^2\sigma^2 \left\| K_{DD} Q K_{DD}^{1/2} \right\|_F^2 + \sigma^4 \left\| K_{DD}^{1/2} Q K_{DD}^{1/2} \right\|_F^2,    (16)
Figure 1: Two-dimensional projection of CMU PIE dataset, colored by identity. Shown at high resolution and
at their respective projections are identity-averaged faces across the dataset for various illuminations, poses,
and expressions. Insets show projections of samples from only two distinct identities. (Best viewed in color.)
where K_DD(i, j) = ⟨d̃_i, d̃_j⟩_H, i, j = 1, . . . , K. We can replace L̃ = B^T K_DD^{1/2} to turn (16) into an
equivalent problem over L̃ of the form (6), with K_DD^{1/2} instead of D, and thus use (8) to obtain

    B = V_M \, \mathrm{diag}\left( g(\lambda_M) \right),    (17)

where, similar to the linear case, λ_M and V_M are the M largest eigenpairs of the matrix K_DD, and

    g(\lambda_i) = \frac{1}{\sqrt{\lambda_i}} f(\lambda_i) = \sqrt{ \frac{4\tau^4}{\sigma^4 + 4\tau^2\sigma^2 \lambda_i + 4\tau^4 \lambda_i^2} }.    (18)
Using the derived solution, a vector x ∈ ℝ^N is mapped to y = B^T K_D(x), where K_D(x) =
[⟨d̃_1, φ(x)⟩_H, . . . , ⟨d̃_K, φ(x)⟩_H]^T. As in the linear case, this is similar to the result of applying
kernel PCA on the dictionary D instead of the training samples. Note that, in the noiseless case,
σ = 0, the above analysis is also valid for infinite dimensional spaces H. Expression (17) simplifies
to B = V_M diag(λ_M)^{-1} where, as in the linear case, any subset of M eigenvalues may be selected.
Even though in the infinite dimensional case selecting the M largest eigenvalues cannot be justified
probabilistically, it is a reasonable heuristic given the analysis in the finite dimensional case.
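As an illustrative sketch (not the paper's code), the kernel-domain solution (17)-(18) needs only the K × K Gram matrix of the atoms; here a Gaussian kernel and random atom pre-images stand in for a learned dictionary.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), computed for all pairs of rows.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_projection(K_DD, M, sigma=0.1, tau=1.0):
    """B = V_M diag(g(lambda_M)) from equations (17)-(18)."""
    lam, V = np.linalg.eigh(K_DD)
    lam, V = lam[::-1][:M], V[:, ::-1][:, :M]    # the M largest eigenpairs
    g = np.sqrt(4 * tau**4 /
                (sigma**4 + 4 * tau**2 * sigma**2 * lam + 4 * tau**4 * lam**2))
    return V @ np.diag(g)                        # K x M

rng = np.random.default_rng(2)
atoms = rng.standard_normal((20, 5))             # pre-images of K = 20 atoms
B = kernel_projection(gaussian_kernel(atoms, atoms), M=4)
x = rng.standard_normal(5)
y = B.T @ gaussian_kernel(atoms, x[None, :]).ravel()   # project a new sample
print(y.shape)                                   # (4,)
```

Projecting a new sample costs K kernel evaluations, matching the complexity discussion in Section 3.3.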
3.3 Computational considerations
It is interesting to compare the proposed method in the nonlinear case with kernel PCA, in terms of
computational and memory requirements. If we require dictionary atoms to have pre-images in ℝ^N,
that is D = {φ(d_i), d_i ∈ ℝ^N, i = 1, . . . , K} [36], then the proposed algorithm requires calculating
and decomposing the K × K kernel matrix K_DD when learning V, and performing K kernel
evaluations for projecting a new sample x. For kernel PCA, on the other hand, the S × S matrix K_XX
and S kernel evaluations are needed respectively, where X = {φ(x_i), x_i ∈ ℝ^N, i = 1, . . . , S} are
the representations of the training samples in H, with S ≫ K. If the pre-image constraint is
dropped and the usual alternating procedure [21] is used for learning D, then the representer theorem
of [38] implies that D = X F, where F is an S × K matrix. In this case, the proposed method also
requires calculating K_XX during learning and S kernel evaluations for out-of-sample projections,
but only the eigendecomposition of the K × K matrix F^T K_XX^2 F is required.
On the other hand, we have assumed so far, in both the linear and nonlinear cases, that a dictionary
is given. When this is not true, we need to take into account the cost of learning a dictionary,
which greatly outweighs the computational savings described above, despite advances in dictionary
learning algorithms [21, 22]. In the kernel case, whereas imposing the pre-image constraint has
the advantages we mentioned, it also makes dictionary learning a harder nonlinear optimization
problem, due to the need for evaluation of kernel derivatives. In the linear case, the computational
savings from applying (linear) PCA to the dictionary instead of the training samples are usually
negligible, and therefore the difference in required computation becomes even more severe.
Figure 2: Classification accuracy results. From left to right: CMU PIE (varying value of M ); CMU PIE
(varying number of training samples); brodatz texture patches; Caltech-101. (Best viewed in color.)
4 Experimental validation
In order to evaluate our proposed method, we compare it with other unsupervised dimensionality
reduction methods on visualization, clustering, and classification tasks. We use facial images in the
linear case, and texture patches and images of object categories in the kernel case.
Facial images: We use the CMU PIE [39] benchmark dataset of faces under pose, illumination and
expression changes, and specifically the subset used in [8].2 We visualize the dataset by projecting
all face samples to M = 2 dimensions using LPP and the proposed method, as shown in Figure 1.
Also shown are identity-averaged faces over the dataset, for various illumination, pose, and expression combinations, at the location of their projection. We observe that our method recovers a very
clear geometric structure, with changes in illumination corresponding to an ellipsoid, changes in
pose to moving towards its interior, and changes in expression accounting for the density on the
horizontal axis. We separately show the projections of samples from two distinct individuals, and
see that different identities are mapped to parallel, shifted ellipsoids, easily separated by a nearest-neighbor classifier. On the other hand, such structure is not apparent when using LPP. A larger
version of Figure 1, and the corresponding figure for PCA, are provided in the supplementary material.
To assess how well identity structure is recovered for increasing values of the target dimension
M , we also perform face recognition experiments. We compare against three baseline methods,
PCA, NPE, and LPP, linear extensions (spectral regression "SRLPP" [7], spatially smooth LPP
"SmoothLPP" [8]), and random projections (see Section 5). We produce 20 random splits into
training and testing sets, learn a dictionary and projection matrices from the training set, and use the
obtained low-dimensional representations with a k-nearest neighbor classifier (k = 4) to classify the
test samples, as is common in the literature. In Figure 2, we show the average recognition accuracy
for the various methods as the number of projections is varied, when using 100 training samples
for each of the 68 individuals in the dataset. Also, we compare the proposed method with the best
performing alternative, when the number of training samples per individual is varied from 40 to 120.
We observe that the proposed method outperforms all other by a wide margin, in many cases even
when trained with fewer samples. However, it can only be used when there are enough training
samples to learn a dictionary, a limitation that does not apply to the other methods. For this reason,
we do not experiment with cases of 5-20 samples per individual, as commonly done in the literature.
Texture patches: We perform classification experiments on texture patches, using the Brodatz
dataset [40], and specifically classes 4, 5, 8, 12, 17, 84, and 92 from the 2-texture images. We
extract 12 × 12 patches and use those from the training images to learn dictionaries and projections
for the Gaussian kernel.3 We classify the low-dimensional representations using an one-versus-all
linear SVM. In Figure 2, we compare the classification accuracy of the proposed method ("ker.dict")
with the kernel variants of PCA and LPP ("KPCA" and "KLPP" respectively), for varying M. KLPP
and the proposed method both outperform KPCA. Our method achieves much higher accuracy at
small values of M , and KLPP is better for large values; otherwise they perform similarly.
This dataset provides an illustrative example for the discussion in Section 3.3. For 20000 training
samples, KPCA and KLPP require storing and processing a 20000 × 20000 kernel matrix, as opposed
to 512 × 512 for our method. On the other hand, training a dictionary with K = 512 for this dataset
takes approximately 2 hours, on an 8 core machine and using a C++ implementation of the learning
algorithm, as opposed to the few minutes required for the eigendecompositions in KPCA and KLPP.
2 Images are pre-normalized to unit length. We use the algorithm of [21] to learn dictionaries, with K equal
to the number of pixels N = 1024, due to the limited amount of training data, and the remaining model
parameters set to 0.05 as in [19].
3 Following [36], we set the kernel parameter to 8, and use their method for dictionary learning with
K = 512 and sparsity penalty 0.30, but with a conjugate gradient optimizer for the dictionary update step.
Method                       Accuracy   NMI      Rand Index
KPCA (k-means)               0.6217     0.6380   0.4279
KLPP (spectral clustering)   0.6900     0.6788   0.5143
ker.dict (k-means)           0.7233     0.7188   0.5275

Table 1: Clustering results on Caltech-101.
Images of object categories: We use the Caltech-101 [41] object recognition dataset, with the
average of the 39 kernels used in [42]. Firstly, we use 30 training samples from each class to learn a
dictionary4 and projections using KPCA, KLPP, and the proposed method. In Figure 2, we plot the
classification accuracy achieved using a linear SVM for each method and varying M . We see that
the proposed method and KPCA perform similarly and outperform KLPP. Our algorithm performs
consistently well in both the datasets we experiment with in the kernel case.
We also perform unsupervised clustering experiments, where we randomly select 30 samples from
each of the 20 classes used in [43] to learn projections with the three methods, over a range of
values for M between 10 and 150. We combine each with three clustering algorithms, k-means,
spectral clustering [44], and affinity propagation [43] (using negative Euclidean distances of the
low-dimensional representations as similarities). In Table 1, we report for each method the best
overall result in terms of accuracy, normalized mutual information, and rand index [45], along with
the clustering algorithm for which these are achieved. We observe that the low-dimensional representations from the proposed method produce the best quality clusterings, for all three measures.
5 Discussion and future directions
As we remarked in Section 3, the proposed method uses available training samples to learn D and
ignores them afterwards, relying exclusively on the assumed generative model and the correlation
information in D. To see how this approach could fail, consider the degenerate case when D is the
identity matrix, that is the signal and sparse domains coincide. Then, to discover structure we need
to directly examine the training samples. Better use of the training samples within our framework
can be made by adopting a richer probabilistic model, using available data to train it, naturally
with appropriate regularization to avoid overfitting, and then minimizing (4) for the learned model.
For example, we can use the more general model of Section 3.1, and assume that each ai follows a Laplace distribution with its own scale parameter. Doing so agrees with empirical observations that, when D is
learned, the average magnitude of coefficients ai varies significantly with i. An orthogonal approach
is to forgo adopting a generative model, and learn a projection matrix directly from training samples
using an appropriate empirical loss function. One possibility is minimizing ‖A^T A − X^T L^T L X‖_F²,
where the columns of X and A are the training samples and corresponding sparse code estimates,
which is an instance of multidimensional scaling [46] (as modified to achieve linear induction).
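This empirical-loss alternative can be prototyped directly. The sketch below is our own illustration (function name, initialization, and step size are assumptions, not from the paper): it minimizes ‖A^T A − X^T L^T L X‖_F² over the projection matrix L by plain gradient descent, using the identity ∂f/∂L = 4 L X G X^T with G = X^T L^T L X − A^T A:

```python
import numpy as np

def fit_projection_mds(X, A, M, steps=500, lr=1e-2, seed=0):
    """Gradient descent on f(L) = ||A^T A - X^T L^T L X||_F^2.

    X: (n, N) matrix of training samples (columns); A: (K, N) matrix of
    their sparse codes (columns). Returns an (M, n) projection matrix L.
    Illustrative sketch only, with hand-tuned constants."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    L = 0.1 * rng.standard_normal((M, n))
    C = A.T @ A                            # target Gram matrix of the codes
    for _ in range(steps):
        G = X.T @ (L.T @ (L @ X)) - C      # Gram-matrix residual
        L -= lr * 4.0 * (L @ X) @ G @ X.T  # gradient step on L
    return L
```

In practice one would add a step-size schedule or hand the smooth objective to an off-the-shelf optimizer; the point is only that it is differentiable in L and amenable to first-order methods.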
For the sparse linear model case, objective function (4) is related to the Restricted Isometry Property
(RIP) [47], used in the compressed sensing literature as a condition enabling reconstruction of a
sparse vector a ∈ R^K from linear measurements y ∈ R^M when M < K. The RIP is a worst-case condition, requiring approximate preservation, in the low-dimensional domain, of pairwise
Euclidean distances of all a, and therefore stronger than the expectation condition (4). Verifying
the RIP for an arbitrary matrix is a hard problem, but it is known to hold for the equivalent dictionary D̃ = LD with high probability, if L is drawn from certain random distributions, and M is of the order of only O(k log(K/k)) [48]. Despite this property, our experiments demonstrate that a
learned matrix L is in practice more useful than random projections (see left of Figure 2). The formal guarantees that preservation of Euclidean geometry of sparse codes is possible with few linear
projections are unique for the sparse linear model, thus further justifying our choice to emphasize
this model throughout the paper.
Another quantity used in compressed sensing is the mutual coherence of D̃ [49], and its approximate
minimization has been proposed as a way for learning L for signal reconstruction [50, 51]. One of
the optimization problems arrived at in this context [51] is the same as problem (9) we derived in
the noiseless case, the solution of which as we mentioned in Section 3 is not unique. This ambiguity
has been addressed heuristically by weighting the objective function with appropriate multiplicative
terms, so that it becomes ‖Λ − Λ V^T L^T L V Λ‖_F², where Λ and V are eigenpairs of DD^T [51]. This
4. We use a kernel extension of the algorithm of [21] without pre-image constraints. We select K = 300 and regularization parameter 0.1 from a range of values, to achieve about 10% non-zero coefficients in the sparse codes and small reconstruction error for the training samples. Using K = 150 or 600 affected accuracy by less than 1.5%.
problem admits as only minimizer the one corresponding to the M largest eigenvalues. Our analysis
addresses the above issue naturally by incorporating noise, thus providing formal justification for
the heuristic. Also, the closed-form solution of (9) is not shown in [51], though its existence is
mentioned, and the (weighted) problem is instead solved through an iterative procedure.
In Section 3, we motivated preserving inner products in the sparse domain by considering existing algorithms that employ sparse codes. As our understanding of sparse coding continues to improve [52], there is motivation for considering other structure in R^K. Possibilities include preservation of linear subspace structure (as determined by the support of the sparse codes) or local group relations
in the sparse domain. Extending our analysis to also incorporate supervision is another important
future direction.
Linear dimensionality reduction has traditionally been used for data preprocessing and visualization, but we are also beginning to see its utility for low-power sensors. A sensor can be designed to
record linear projections of an input signal, instead of the signal itself, with projections implemented
through a low-power physical process like optical filtering. In these cases, methods like the ones
proposed in this paper can be used to obtain a small number of informative projections, thereby
reducing the power and size of the sensor while maintaining its effectiveness for tasks like recognition. An example for visual sensing is described in [2], where a heuristically-modified version
of our linear approach is employed to select projections for face detection. Rigorously extending
our analysis to this domain will require accounting for noise and constraints on the projections (for
example non-negativity, limited resolution) induced by fabrication processes. We view this as a
research direction worth pursuing.
Acknowledgments
This research was supported by NSF award IIS-0926148, ONR award N000140911022, and the US
Army Research Laboratory and the US Army Research Office under contract/grant number 54262CI.
References
[1] M.A. Davenport, P.T. Boufounos, M.B. Wakin, and R.G. Baraniuk. Signal processing with compressive
measurements. IEEE JSTSP, 2010.
[2] S.J. Koppal, I. Gkioulekas, T. Zickler, and G.L. Barrows. Wide-angle micro sensors for vision on a tight
budget. CVPR, 2011.
[3] I. Jolliffe. Principal component analysis. Wiley, 1986.
[4] X. He and P. Niyogi. Locality Preserving Projections. NIPS, 2003.
[5] X. He, D. Cai, S. Yan, and H.J. Zhang. Neighborhood preserving embedding. ICCV, 2005.
[6] D. Cai, X. He, J. Han, and H.J. Zhang. Orthogonal laplacianfaces for face recognition. IEEE IP, 2006.
[7] D. Cai, X. He, and J. Han. Spectral regression for efficient regularized subspace learning. ICCV, 2007.
[8] D. Cai, X. He, Y. Hu, J. Han, and T. Huang. Learning a spatially smooth subspace for face recognition.
CVPR, 2007.
[9] X. He, D. Cai, and P. Niyogi. Tensor subspace analysis. NIPS, 2006.
[10] J. Ye, R. Janardan, and Q. Li. Two-dimensional linear discriminant analysis. NIPS, 2004.
[11] B. Scholkopf, A. Smola, and K.R. Muller. Nonlinear component analysis as a kernel eigenvalue problem.
Neural computation, 1998.
[12] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by
V1? Vision Research, 1997.
[13] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, and Y. Ma. Robust face recognition via sparse representation. PAMI, 2008.
[14] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE IP, 2006.
[15] J.F. Cai, H. Ji, C. Liu, and Z. Shen. Blind motion deblurring from a single image using sparse approximation. CVPR, 2009.
[16] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007.
[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. NIPS, 2008.
[18] I. Ramirez, P. Sprechmann, and G. Sapiro. Classification and clustering via dictionary learning with
structured incoherence and shared features. CVPR, 2010.
[19] J. Yang, K. Yu, and T. Huang. Supervised translation-invariant sparse coding. CVPR, 2010.
[20] M.W. Seeger. Bayesian inference and optimal design for the sparse linear model. JMLR, 2008.
[21] H. Lee, A. Battle, R. Raina, and A.Y. Ng. Efficient sparse coding algorithms. NIPS, 2007.
[22] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding.
JMLR, 2010.
[23] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-Parametric Bayesian Dictionary
Learning for Sparse Image Representations. NIPS, 2009.
[24] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration.
ICCV, 2009.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. JRSS-B, 1996.
[26] A.M. Bruckstein, D.L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse
modeling of signals and images. SIAM review, 2009.
[27] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of statistics, 2004.
[28] J. Ham, D.D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds.
ICML, 2004.
[29] W.J. Fu. Penalized regressions: the bridge versus the lasso. JCGS, 1998.
[30] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. JRSS-B, 2005.
[31] S. Ji, Y. Xue, and L. Carin. Bayesian compressive sensing. IEEE SP, 2008.
[32] N. Srebro and T. Jaakkola. Weighted low-rank approximations. ICML, 2003.
[33] A. Berlinet and C. Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics.
Kluwer, 2004.
[34] V.I. Bogachev. Gaussian measures. AMS, 1998.
[35] J. Kuelbs, FM Larkin, and J.A. Williamson. Weak probability distributions on reproducing kernel hilbert
spaces. Rocky Mountain J. Math, 1972.
[36] S. Gao, I. Tsang, and L.T. Chia. Kernel Sparse Representation for Image Classification and Face Recognition. ECCV, 2010.
[37] J. Abernethy, F. Bach, T. Evgeniou, and J.P. Vert. A new approach to collaborative filtering: Operator
estimation with spectral regularization. JMLR, 2009.
[38] B. Scholkopf, R. Herbrich, and A. Smola. A generalized representer theorem. COLT, 2001.
[39] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression (PIE) database. IEEE
ICAFGR, 2002.
[40] T. Randen and J.H. Husoy. Filtering for texture classification: A comparative study. PAMI, 2002.
[41] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an
incremental bayesian approach tested on 101 object categories. CVPR Workshops, 2004.
[42] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. ICCV, 2009.
[43] D. Dueck and B.J. Frey. Non-metric affinity propagation for unsupervised image categorization. ICCV,
2007.
[44] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 2000.
[45] N.X. Vinh, J. Epps, and J. Bailey. Information theoretic measures for clusterings comparison: Variants,
properties, normalization and correction for chance. JMLR, 2010.
[46] T.F. Cox and M.A.A. Cox. Multidimensional Scaling. Chapman & Hall, 2000.
[47] E.J. Candès and T. Tao. Decoding by linear programming. IEEE IT, 2005.
[48] H. Rauhut, K. Schnass, and P. Vandergheynst. Compressed sensing and redundant dictionaries. IEEE IT,
2008.
[49] D.L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. IEEE IT, 2001.
[50] M. Elad. Optimized projections for compressed sensing. IEEE SP, 2007.
[51] J.M. Duarte-Carvajalino and G. Sapiro. Learning to sense sparse signals: Simultaneous sensing matrix
and sparsifying dictionary optimization. IEEE IP, 2009.
[52] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. NIPS, 2009.
Large-Scale Sparse Principal Component Analysis
with Application to Text Data
Youwei Zhang
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Laurent El Ghaoui
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
Sparse PCA provides a linear combination of small number of features that maximizes variance across data. Although Sparse PCA has apparent advantages compared to PCA, such as better interpretability, it is generally thought to be computationally much more expensive. In this paper, we demonstrate the surprising fact
that sparse PCA can be easier than PCA in practice, and that it can be reliably
applied to very large data sets. This comes from a rigorous feature elimination
pre-processing result, coupled with the favorable fact that features in real-life data
typically have exponentially decreasing variances, which allows for many features
to be eliminated. We introduce a fast block coordinate ascent algorithm with much
better computational complexity than the existing first-order ones. We provide experimental results obtained on text corpora involving millions of documents and
hundreds of thousands of features. These results illustrate how Sparse PCA can
help organize a large corpus of text data in a user-interpretable way, providing an
attractive alternative approach to topic models.
1 Introduction
The sparse Principal Component Analysis (Sparse PCA) problem is a variant of the classical PCA
problem, which accomplishes a trade-off between the explained variance along a normalized vector,
and the number of non-zero components of that vector.
Sparse PCA not only brings better interpretation [1], but also provides statistical regularization [2]
when the number of samples is less than the number of features. Various researchers have proposed
different formulations and algorithms for this problem, ranging from ad-hoc methods such as factor
rotation techniques [3] and simple thresholding [4], to greedy algorithms [5, 6]. Other algorithms
include SCoTLASS by [7], SPCA by [8], the regularized SVD method by [9] and the generalized
power method by [10]. These algorithms are based on non-convex formulations, and may only
converge to a local optimum. The ℓ1-norm based semidefinite relaxation DSPCA, as introduced
in [1], does guarantee global convergence and as such, is an attractive alternative to local methods.
In fact, it has been shown in [1, 2, 11] that simple ad-hoc methods, and the greedy, SCoTLASS
and SPCA algorithms, often underperform DSPCA. However, the first-order algorithm for solving DSPCA, as developed in [1], has a computational complexity of O(n⁴√(log n)), with n the number of
features, which is too high for many large-scale data sets. At first glance, this complexity estimate
indicates that solving sparse PCA is much more expensive than PCA, since we can compute one
principal component with a complexity of O(n²).
In this paper we show that solving DSPCA is in fact computationally easier than PCA, and hence can
be applied to very large-scale data sets. To achieve that, we first view DSPCA as an approximation
to a harder, cardinality-constrained optimization problem. Based on that formulation, we describe
a safe feature elimination method for that problem, which leads to an often important reduction in
problem size, prior to solving the problem. Then we develop a block coordinate ascent algorithm,
with a computational complexity of O(n³) to solve DSPCA, which is much faster than the first-order algorithm proposed in [1]. Finally, we observe that real data sets typically allow for a dramatic reduction in problem size as afforded by our safe feature elimination result. Now the comparison between sparse PCA and PCA becomes O(n̂³) vs. O(n²) with n̂ ≪ n, which can make sparse PCA surprisingly easier than PCA.
In Section 2, we review the ℓ1-norm based DSPCA formulation, relate it to an approximation to the ℓ0-norm based formulation, and highlight the safe feature elimination mechanism as a powerful
pre-processing technique. We use Section 3 to present our fast block coordinate ascent algorithm.
Finally, in Section 4, we demonstrate the efficiency of our approach on two large data sets, each one
containing more than 100,000 features.
Notation. R(Y) denotes the range of matrix Y, and Y† its pseudo-inverse. The notation log refers to the extended-value function, with log x = −∞ if x ≤ 0.
2 Safe Feature Elimination
Primal problem. Given an n × n positive-semidefinite matrix Σ, the "sparse PCA" problem introduced in [1] is:

    φ = max_Z  Tr ΣZ − λ‖Z‖_1 : Z ⪰ 0, Tr Z = 1        (1)

where λ ≥ 0 is a parameter encouraging sparsity. Without loss of generality we may assume that Σ ≻ 0.
Problem (1) is in fact a relaxation of a PCA problem with a penalty on the cardinality of the variable:

    ψ = max_x  x^T Σ x − λ‖x‖_0 : ‖x‖_2 = 1        (2)

where ‖x‖_0 denotes the cardinality (number of non-zero elements) of x. This can be seen by first writing problem (2) as:

    max_Z  Tr ΣZ − λ√(‖Z‖_0) : Z ⪰ 0, Tr Z = 1, Rank(Z) = 1

where ‖Z‖_0 is the cardinality (number of non-zero elements) of Z. Since ‖Z‖_1 ≤ √(‖Z‖_0) ‖Z‖_F = √(‖Z‖_0), we obtain the relaxation

    max_Z  Tr ΣZ − λ‖Z‖_1 : Z ⪰ 0, Tr Z = 1, Rank(Z) = 1.

Further dropping the rank constraint leads to problem (1).
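The inequality used in this derivation, ‖Z‖_1 ≤ √(‖Z‖_0) for a rank-one Z = xx^T with ‖x‖_2 = 1, is easy to check numerically. The following toy sanity check is our own illustration, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
x[rng.choice(8, size=5, replace=False)] = 0.0  # sparsify x
x /= np.linalg.norm(x)                         # unit 2-norm
Z = np.outer(x, x)                             # rank-1, PSD, Tr Z = 1

l1 = np.abs(Z).sum()         # ||Z||_1 = (sum_i |x_i|)^2
card = np.count_nonzero(Z)   # ||Z||_0 = (number of nonzeros of x)^2
assert abs(np.trace(Z) - 1.0) < 1e-12
assert l1 <= np.sqrt(card) + 1e-12  # ||Z||_1 <= sqrt(||Z||_0)
```

The bound follows from Cauchy-Schwarz: (Σ|x_i|)² ≤ ‖x‖_0 Σ x_i² = ‖x‖_0.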
By viewing problem (1) as a convex approximation to the non-convex problem (2), we can leverage the safe feature elimination theorem first presented in [6, 12] for problem (2):

Theorem 2.1 Let Σ = A^T A, where A = (a_1, …, a_n) ∈ R^{m×n}. We have

    ψ = max_{‖ξ‖_2 = 1}  ∑_{i=1}^{n} ((a_i^T ξ)² − λ)_+ .

An optimal non-zero pattern corresponds to indices i with λ < (a_i^T ξ)² at optimum.

We observe that the i-th feature is absent at optimum if (a_i^T ξ)² ≤ λ for every ξ with ‖ξ‖_2 = 1. Hence, we can safely remove feature i ∈ {1, …, n} if

    Σ_ii = a_i^T a_i < λ.        (3)
A few remarks are in order. First, if we are interested in solving problem (1) as a relaxation to problem (2), we first calculate and rank all the feature variances, which takes O(nm) and O(n log(n))
respectively. Then we can safely eliminate any feature with variance less than λ. Second, the elimination criterion above is conservative. However, when looking for extremely sparse solutions, applying this safe feature elimination test with a large λ can dramatically reduce problem size and
lead to huge computational savings, as will be demonstrated empirically in Section 4. Third, in
practice, when PCA is performed on large data sets, some similar variance-based criteria is routinely employed to bring problem sizes down to a manageable level. This purely heuristic practice
has a rigorous interpretation in the context of sparse PCA, as the above theorem states explicitly the
features that can be safely discarded.
3 Block Coordinate Ascent Algorithm
The first-order algorithm developed in [1] to solve problem (1) has a computational complexity of O(n⁴√(log n)). With a theoretical convergence rate of O(1/ε), the DSPCA algorithm does not converge fast in practice. In this section, we develop a block coordinate ascent algorithm with better dependence on problem size (O(n³)), that in practice converges much faster.
Failure of a direct method. We seek to apply a "row-by-row" algorithm by which we update each row/column pair, one at a time. This algorithm appeared in the specific context of sparse covariance estimation in [13], and was extended to a large class of SDPs in [14]. Precisely, it applies to problems of the form

    min_X  f(X) − β log det X : L ≤ X ≤ U, X ≻ 0,        (4)

where X = X^T is an n × n matrix variable, L, U impose component-wise bounds on X, f is convex, and β > 0.
However, if we try to update the row/columns of Z in problem (1), the trace constraint will imply that
we never modify the diagonal elements of Z. Indeed at each step, we update only one diagonal element, and it is entirely fixed given all the other diagonal elements. The row-by-row algorithm does
not directly work in that case, nor in general for SDPs with equality constraints. The authors in [14]
propose an augmented Lagrangian method to deal with such constraints, with a complication due to
the choice of appropriate penalty parameters. In our case, we can apply a technique resembling the
augmented Lagrangian technique, without this added complication. This is due to the homogeneous
nature of the objective function and of the conic constraint. Thanks to the feature elimination result (Thm. 2.1), we can always assume without loss of generality that λ < σ_min² := min_{1≤i≤n} Σ_ii.
Direct augmented Lagrangian technique. We can express problem (1) as

    ½φ² = max_X  Tr ΣX − λ‖X‖_1 − ½(Tr X)² : X ⪰ 0.        (5)

This expression results from the change of variable X = ηZ, with Tr Z = 1 and η ≥ 0. Optimizing over η ≥ 0, and exploiting φ > 0 (which comes from our assumption that λ < σ_min²), leads to the result, with the optimal scaling factor η equal to φ. An optimal solution Z* to (1) can be obtained from an optimal solution X* to the above, via Z* = X*/φ. (In fact, we have Z* = X*/Tr(X*).)
To apply the row-by-row method to the above problem, we need to consider a variant of it with a strictly convex objective. That is, we address the problem

    max_X  Tr ΣX − λ‖X‖_1 − ½(Tr X)² + β log det X : X ≻ 0,        (6)

where β > 0 is a penalty parameter. SDP theory ensures that if β = ε/n, then a solution to the above problem is ε-suboptimal for the original problem [15].

Optimizing over one row/column. Without loss of generality, we consider the problem of updating the last row/column of the matrix variable X. Partition the latter and the covariance matrix Σ as

    X = [ Y    y ]        Σ = [ S    s ]
        [ y^T  x ],           [ s^T  σ ],
where Y, S ∈ R^{(n−1)×(n−1)}, y, s ∈ R^{n−1}, and x, σ ∈ R. We are considering the problem above, where Y is fixed and (y, x) ∈ R^n is the variable. We use the notation t := Tr Y.

The conic constraint X ⪰ 0 translates as y^T Y† y ≤ x, y ∈ R(Y), where R(Y) is the range of the matrix Y. We obtain the sub-problem

    φ := max_{x,y}  2(y^T s − λ‖y‖_1) + (σ − λ)x − ½(t + x)² + β log(x − y^T Y† y) : y ∈ R(Y).        (7)
Simplifying the sub-problem. We can simplify the above problem, in particular avoiding the step of forming the pseudo-inverse of Y, by taking the dual of problem (7).

Using the conjugate relation, valid for every α > 0:

    log α + 1 = min_{z>0}  zα − log z,

and with f(x) := (σ − λ)x − ½(t + x)², we obtain

    φ + β = max_{x, y ∈ R(Y)}  2(y^T s − λ‖y‖_1) + f(x) + β min_{z>0} [z(x − y^T Y† y) − log z]
          = min_{z>0} max_{y ∈ R(Y)}  2(y^T s − λ‖y‖_1 − (βz/2) y^T Y† y) + max_x (f(x) + βzx) − β log z
          = min_{z>0}  h(z) + 2g(z)

where, for z > 0, we define

    h(z) := −β log z + max_x (f(x) + βzx)
          = −β log z + max_x ((σ − λ + βz)x − ½(t + x)²)
          = −½t² − β log z + max_x ((σ − λ − t + βz)x − ½x²)
          = −½t² − β log z + ½(σ − λ − t + βz)²

with the following relationship at optimum:

    x = σ − λ − t + βz.        (8)

In addition,

    g(z) := max_{y ∈ R(Y)}  y^T s − λ‖y‖_1 − (βz/2) (y^T Y† y)
          = max_{y ∈ R(Y)}  y^T s + min_{v : ‖v‖_∞ ≤ λ} y^T v − (βz/2) (y^T Y† y)
          = min_{v : ‖v‖_∞ ≤ λ} max_{y ∈ R(Y)}  (y^T (s + v) − (βz/2) (y^T Y† y))
          = min_{u : ‖u − s‖_∞ ≤ λ} max_{y ∈ R(Y)}  (y^T u − (βz/2) (y^T Y† y))
          = min_{u : ‖u − s‖_∞ ≤ λ}  (1/(2βz)) u^T Y u,

with the following relationship at optimum:

    y = (1/(βz)) Y u.        (9)
Putting all this together, we obtain the dual of problem (7): with φ′ := φ + β + ½t² and c := σ − λ − t, we have

    φ′ = min_{u,z}  (1/(βz)) u^T Y u − β log z + ½(c + βz)² : z > 0, ‖u − s‖_∞ ≤ λ.

Since β is small, we can avoid large numbers in the above with the change of variable τ = βz:

    φ′ − β log β = min_{u,τ}  (1/τ) u^T Y u − β log τ + ½(c + τ)² : τ > 0, ‖u − s‖_∞ ≤ λ.        (10)
Solving the sub-problem. Problem (10) can be further decomposed into two stages.

First, we solve the box-constrained QP

    R² := min_u  uᵀY u  :  ‖u − s‖∞ ≤ λ,        (11)

using a simple coordinate descent algorithm to exploit sparsity of Y. Without loss of generality, we consider the problem of updating the first coordinate of u. Partition u, Y and s as

    u = [ η ; û ],        Y = [ y₁  ŷᵀ ;  ŷ  Ŷ ],        s = [ s₁ ; ŝ ],

where Ŷ ∈ R^{(n−2)×(n−2)}, û, ŷ, ŝ ∈ R^{n−2}, y₁, s₁ ∈ R are all fixed, while η ∈ R is the variable. We obtain the subproblem

    min_η  y₁η² + (2ŷᵀû)η  :  |η − s₁| ≤ λ,        (12)

for which we can solve for η analytically using the formula given below:

    η = −ŷᵀû/y₁    if |s₁ + ŷᵀû/y₁| ≤ λ and y₁ > 0,
    η = s₁ − λ     if −ŷᵀû/y₁ < s₁ − λ, y₁ > 0, or if ŷᵀû > 0, y₁ = 0,        (13)
    η = s₁ + λ     if −ŷᵀû/y₁ > s₁ + λ, y₁ > 0, or if ŷᵀû ≤ 0, y₁ = 0.
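As an illustration, the closed-form update (13) amounts to clipping the unconstrained minimizer of the one-dimensional convex quadratic onto the box |η − s₁| ≤ λ. A minimal Python sketch (function and variable names are ours; `yhat_u` stands for the inner product ŷᵀû):

```python
def update_eta(y1, yhat_u, s1, lam):
    """Minimize y1*eta**2 + 2*yhat_u*eta subject to |eta - s1| <= lam,
    following the case analysis of formula (13)."""
    if y1 > 0:
        eta_star = -yhat_u / y1          # unconstrained minimizer
        if abs(s1 + yhat_u / y1) <= lam:
            return eta_star              # interior case
        # otherwise the minimizer is clipped to the nearest box boundary
        return s1 - lam if eta_star < s1 - lam else s1 + lam
    # y1 == 0: the objective is linear in eta with slope 2*yhat_u
    return s1 - lam if yhat_u > 0 else s1 + lam
```

Since the objective is a convex quadratic in η whenever y₁ > 0, clipping its unconstrained minimizer to [s₁ − λ, s₁ + λ] reproduces exactly the three cases of (13).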
Next, we set τ by solving the one-dimensional problem:

    min_{τ>0}  R²/τ − β log τ + (1/2)(c + τ)².

The above can be reduced to a bisection problem over τ, or to solving a polynomial equation of degree 3.
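The one-dimensional objective is strictly convex on τ > 0, so its derivative −R²/τ² − β/τ + c + τ is increasing and a sign bisection converges; setting the derivative to zero and multiplying through by τ² gives the degree-3 equation τ³ + cτ² − βτ − R² = 0 mentioned above. A minimal sketch of the bisection variant (names are ours):

```python
def solve_tau(R2, beta, c, tol=1e-12):
    """Minimize R2/tau - beta*log(tau) + 0.5*(c + tau)**2 over tau > 0
    by bisection on its (increasing) derivative."""
    dphi = lambda t: -R2 / t**2 - beta / t + c + t
    lo, hi = 1e-12, 1.0
    while dphi(hi) < 0:            # expand until the derivative changes sign
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```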
Obtaining the primal variables. Once the above problem is solved, we can obtain the primal variables y, x as follows. Using formula (9) with βz = τ, we set y = (1/τ) Y u. For the diagonal element x, we use formula (8): x = c + τ = σ − λ − t + τ.
Algorithm summary. We summarize the above derivations in Algorithm 1. Notation: for any symmetric matrix A ∈ R^{n×n}, let A\i\j denote the matrix produced by removing row i and column j. Let A_j denote column j (or row j) of A with the diagonal element A_jj removed.
Convergence and complexity. Our algorithm solves DSPCA by first casting it to problem (6), which is in the general form (4). Therefore, the convergence result from [14] readily applies, and hence every limit point that our block coordinate ascent algorithm converges to is the global optimizer. The simple coordinate descent algorithm solving problem (11) only involves a vector product and can easily exploit sparsity in Y. Updating each column/row takes O(n²) and there are n such columns/rows in total. Therefore, our algorithm has a computational complexity of O(Kn³), where K is the number of sweeps through columns. In practice, K is fixed at a number independent of problem size (typically K = 5). Hence our algorithm has better dependence on the problem size compared to the O(n⁴√(log n)) required by the first-order algorithm developed in [1].
Fig. 1 shows that our algorithm converges much faster than the first-order algorithm. On the left, both algorithms are run on a covariance matrix Σ = FᵀF with F Gaussian. On the right, the covariance matrix comes from a "spiked model" similar to that in [2], with Σ = uuᵀ + V Vᵀ/m, where u ∈ Rⁿ is the true sparse leading eigenvector, with Card(u) = 0.1n, V ∈ R^{n×m} is a noise matrix with V_ij ∼ N(0, 1), and m is the number of observations.
4 Numerical Examples
In this section, we analyze two publicly available large data sets, the NYTimes news articles data
and the PubMed abstracts data, available from the UCI Machine Learning Repository [16]. Both
Algorithm 1 Block Coordinate Ascent Algorithm
Input: The covariance matrix Σ, and a parameter λ > 0.
1: Set X^(0) = I
2: repeat
3:   for j = 1 to n do
4:     Let X^(j-1) denote the current iterate. Solve the box-constrained quadratic program

           R² := min_u  uᵀ X^(j-1)\j\j u  :  ‖u − Σ_j‖∞ ≤ λ

       using the coordinate descent algorithm.
5:     Solve the one-dimensional problem

           min_{τ>0}  R²/τ − β log τ + (1/2)(Σ_jj − λ − Tr X^(j-1)\j\j + τ)²

       using a bisection method, or by solving a polynomial equation of degree 3.
6:     First set X^(j)\j\j = X^(j-1)\j\j, and then set both X^(j)'s column j and row j using

           X^(j)_j = (1/τ) X^(j-1)\j\j u,        X^(j)_jj = Σ_jj − λ − Tr X^(j-1)\j\j + τ

7:   end for
8:   Set X^(0) = X^(n)
9: until convergence
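For concreteness, the following is a compact (and deliberately unoptimized) NumPy sketch of Algorithm 1. The inner box-QP is solved with a fixed number of coordinate-descent passes and the one-dimensional problem with a derivative bisection; all names and the fixed sweep count are ours, and a practical implementation would exploit sparsity in the inner products.

```python
import numpy as np

def sparse_pca_bca(Sigma, lam, beta=1e-3, sweeps=5, inner=50):
    """Block coordinate ascent sketch of Algorithm 1."""
    n = Sigma.shape[0]
    X = np.eye(n)
    for _ in range(sweeps):
        for j in range(n):
            idx = np.arange(n) != j
            Y = X[np.ix_(idx, idx)]          # current iterate, row/column j removed
            sj = Sigma[idx, j]               # Sigma_j: column j without the diagonal
            c = Sigma[j, j] - lam - np.trace(Y)
            # step 4: box-constrained QP  min u^T Y u,  ||u - sj||_inf <= lam
            u = sj.copy()
            for _ in range(inner):
                for k in range(n - 1):
                    g = Y[k] @ u - Y[k, k] * u[k]       # yhat^T uhat
                    if Y[k, k] > 1e-12:
                        eta = -g / Y[k, k]
                    else:                               # linear case of (13)
                        eta = sj[k] - lam if g > 0 else sj[k] + lam
                    u[k] = np.clip(eta, sj[k] - lam, sj[k] + lam)
            R2 = u @ Y @ u
            # step 5: min R2/tau - beta*log(tau) + 0.5*(c + tau)^2 over tau > 0
            dphi = lambda t: -R2 / t**2 - beta / t + c + t
            lo, hi = 1e-12, 1.0
            while dphi(hi) < 0:
                hi *= 2.0
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if dphi(mid) < 0 else (lo, mid)
            tau = 0.5 * (lo + hi)
            # step 6: write back column/row j and the diagonal element
            X[idx, j] = X[j, idx] = Y @ u / tau
            X[j, j] = c + tau
    return X
```

Each update keeps X positive semidefinite: at the optimal τ the new diagonal element satisfies x − yᵀY†y = β/τ > 0, with y = (1/τ)Yu lying in the range of Y.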
Figure 1: Speed comparisons between Block Coordinate Ascent and First-Order. Both panels plot CPU time (seconds, log scale) against problem size for the two methods.
text collections record word occurrences in the form of bag-of-words. The NYTimes text collection contains 300,000 articles and has a dictionary of 102,660 unique words, resulting in a file of size 1 GB. The even larger PubMed data set has 8,200,000 abstracts with 141,043 unique words in them, giving a file of size 7.8 GB. These data matrices are so large that we cannot even load them into memory all at once, which makes even the use of classical PCA difficult. However, with the pre-processing technique presented in Section 2 and the block coordinate ascent algorithm developed in Section 3, we are able to perform sparse PCA analysis of these data, also thanks to the fact that variances of words decrease drastically when we rank them, as shown in Fig. 2. Note that the feature elimination result only requires the computation of each feature's variance, and that this task is easy to parallelize.
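As a toy illustration of this screening step (a sketch under the assumption, stated above, that the test needs only per-feature variances; the exact elimination test lives in Section 2 of the paper, and the threshold form shown here is ours):

```python
import numpy as np

def variance_screen(counts, lam):
    """Keep only features whose empirical variance exceeds lam.
    Sketch of the variance-based safe elimination described above."""
    variances = counts.var(axis=0)            # one cheap pass per feature
    return np.flatnonzero(variances > lam), variances

# tiny bag-of-words example: 3 documents, 3 words
docs = np.array([[3., 0., 1.],
                 [2., 0., 1.],
                 [0., 5., 1.]])
keep, var = variance_screen(docs, lam=0.5)    # word 2 is constant -> dropped
```

Because each column's variance is computed independently, this step parallelizes trivially across features, which is what makes it usable on matrices too large to hold in memory.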
By doing sparse PCA analysis of these text data, we hope to find interpretable principal components that can be used to summarize and explore the large corpora. Therefore, we set the target cardinality for each principal component to be 5. As we run our algorithm with a coarse range of λ to search for
Figure 2: Sorted variances of 102,660 words in NYTimes (left) and 141,043 words in PubMed (right). Both panels plot word variance (log scale) against word index.
a solution with the given cardinality, we might end up accepting a solution with cardinality close, but not necessarily equal, to 5, and stop there to save computational time.

The top 5 sparse principal components are shown in Table 1 for NYTimes and in Table 2 for PubMed. Clearly the first principal component for NYTimes is about business, the second one about sports, the third about the U.S., the fourth about politics and the fifth about education. Bear in mind that the NYTimes data from the UCI Machine Learning Repository "have no class labels, and for copyright reasons no filenames or other document-level metadata" [16]. The sparse principal components still unambiguously identify and perfectly correspond to the topics used by The New York Times itself to classify articles on its own website.
Table 1: Words associated with the top 5 sparse principal components in NYTimes

1st PC (6 words): million, percent, business, company, market, companies
2nd PC (5 words): point, play, team, season, game
3rd PC (5 words): official, government, united states, us, attack
4th PC (4 words): president, campaign, bush, administration
5th PC (4 words): school, program, children, student
After the pre-processing steps, it takes our algorithm around 20 seconds to search over a range of λ and find one sparse principal component with the target cardinality (for the NYTimes data, in our current implementation on a MacBook laptop with a 2.4 GHz Intel Core 2 Duo processor and 2 GB of memory).
Table 2: Words associated with the top 5 sparse principal components in PubMed

1st PC (5 words): patient, cell, treatment, protein, disease
2nd PC (5 words): effect, level, activity, concentration, rat
3rd PC (5 words): human, expression, receptor, binding, carcinoma
4th PC (5 words): tumor, mice, cancer, maligant, child
5th PC (4 words): year, infection, age, children
A surprising finding is that the safe feature elimination test, combined with the fact that word variances decrease rapidly, enables our block coordinate ascent algorithm to work on covariance matrices of order at most n = 500, instead of the full order (n = 102,660) covariance matrix for NYTimes, so as to find a solution with cardinality of around 5. In the case of PubMed, our algorithm only needs to work on covariance matrices of order at most n = 1000, instead of the full order (n = 141,043) covariance matrix. Thus, at values of the penalty parameter λ that a target cardinality of 5 commands, we observe a dramatic reduction in problem sizes, about 150 to 200 times smaller than the original sizes respectively. This motivates our conclusion that sparse PCA is, in a sense, easier than PCA itself.
5 Conclusion

The safe feature elimination result, coupled with a fast block coordinate ascent algorithm, allows us to solve sparse PCA problems for very large scale, real-life data sets. The overall method works especially well when the target cardinality of the result is small, which is often the case in applications where interpretability by a human is key. The algorithm we proposed has better computational complexity than, and in practice converges much faster than, the first-order algorithm developed in [1]. Our experiments on text data also show that sparse PCA can be a promising approach towards summarizing and organizing a large text corpus.
References

[1] A. d'Aspremont, L. El Ghaoui, M. Jordan, and G. Lanckriet. A direct formulation of sparse PCA using semidefinite programming. SIAM Review, 49(3), 2007.
[2] A. A. Amini and M. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5B):2877-2921, 2009.
[3] I. T. Jolliffe. Rotation of principal components: choice of normalization constraints. Journal of Applied Statistics, 22:29-35, 1995.
[4] J. Cadima and I. T. Jolliffe. Loadings and correlations in the interpretation of principal components. Journal of Applied Statistics, 22:203-214, 1995.
[5] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: exact and greedy algorithms. Advances in Neural Information Processing Systems, 18, 2006.
[6] A. d'Aspremont, F. Bach, and L. El Ghaoui. Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9:1269-1294, 2008.
[7] I. T. Jolliffe, N. T. Trendafilov, and M. Uddin. A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12:531-547, 2003.
[8] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational & Graphical Statistics, 15(2):265-286, 2006.
[9] Haipeng Shen and Jianhua Z. Huang. Sparse principal component analysis via regularized low rank matrix approximation. J. Multivar. Anal., 99:1015-1034, July 2008.
[10] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal component analysis. arXiv:0811.4724, 2008.
[11] Y. Zhang, A. d'Aspremont, and L. El Ghaoui. Sparse PCA: Convex relaxations, algorithms and applications. In M. Anjos and J. B. Lasserre, editors, Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms, Software and Applications. Springer, 2011. To appear.
[12] L. El Ghaoui. On the quality of a semidefinite programming bound for sparse principal component analysis. arXiv:math/060144, February 2006.
[13] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485-516, March 2008.
[14] Zaiwen Wen, Donald Goldfarb, Shiqian Ma, and Katya Scheinberg. Row by row methods for semidefinite programming. Technical report, Dept. of IEOR, Columbia University, 2009.
[15] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[16] A. Frank and A. Asuncion. UCI Machine Learning Repository, 2010.
Rapid Deformable Object Detection using Dual-Tree
Branch-and-Bound
Iasonas Kokkinos
Center for Visual Computing
Ecole Centrale de Paris
[email protected]
Abstract
In this work we use Branch-and-Bound (BB) to efficiently detect objects with deformable part models. Instead of evaluating the classifier score exhaustively over
image locations and scales, we use BB to focus on promising image locations.
The core problem is to compute bounds that accommodate part deformations; for
this we adapt the Dual Trees data structure [7] to our problem.
We evaluate our approach using Mixture-of-Deformable Part Models [4]. We obtain exactly the same results but are 10-20 times faster on average. We also develop a multiple-object detection variation of the system, where hypotheses for 20
categories are inserted in a common priority queue. For the problem of finding the
strongest category in an image this results in a 100-fold speedup.
1 Introduction
Deformable Part Models (DPMs) deliver state-of-the-art object detection results [4] on challenging
benchmarks when trained discriminatively, and have become a standard in object recognition research. At the heart of these models lies the optimization of a merit function (the classifier score) with respect to the part displacements and the global object pose. In this work we take the classifier
for granted, using the models of [4], and focus on the optimization problem.
The most common detection algorithm used in conjunction with DPMs relies on Generalized Distance Transforms (GDTs) [5], whose complexity is linear in the image size. Despite its amazing
efficiency this algorithm still needs to first evaluate the score everywhere before picking its maxima.
In this work we use Branch-and-Bound in conjunction with part-based models. For this we exploit
the Dual Tree (DT) data structure [7], developed originally to accelerate operations related to Kernel
Density Estimation (KDE). We use DTs to provide the bounds required by Branch-and-Bound.
Our method is fairly generic; it applies to any star-shaped graphical model involving continuous
variables, and pairwise potentials expressed as separable, decreasing binary potential kernels. We
evaluate our technique using the mixture-of-deformable part models of [4]. Our algorithm delivers
exactly the same results, but is 15-30 times faster. We also develop a multiple-object detection
variation of the system, where all object hypotheses are inserted in the same priority queue. If our
task is to find the best (or k-best) object hypotheses in an image this can result in a 100-fold speedup.
2 Previous Work on Efficient Detection
Cascaded object detection [20] has led to a proliferation of vision applications, but far less work
exists to deal with part-based models. The combinatorics of matching have been extensively studied
for rigid objects [8], while [17] used A* for detecting object instances. For categories, recent works
[1, 10, 11, 19, 6, 18, 15] have focused on reducing the high-dimensional pose search space during
detection by initially simplifying the cost function being optimized, mostly using ideas similar to
A* and coarse-to-fine processing. In the recent work of [4], thresholds pre-computed on the training
set are used to prune computation and result in substantial speedups compared to GDTs.
Branch-and-bound (BB) prioritizes the search of promising image areas, as indicated by an upper
bound on the classifier's score. A most influential paper has been the Efficient Subwindow Search
(ESS) technique of [12], where an upper bound of a bag-of-words classifier score delivers the bounds
required by BB. Later [16] combined Graph-Cuts with BB for object segmentation, while in [13] a
general cascade system was devised for efficient detection with a nonlinear classifier.
Our work is positioned with respect to these works as follows: unlike existing BB works [16, 12, 15],
we use the DPM cost and thereby accommodate parts in a rigorous energy minimization framework.
And unlike the pruning-based works [1, 6, 4, 18], we do not make any approximations or assumptions about when it is legitimate to stop computation; our method is exact.
We obtain the bound required by BB from Dual Trees. To the best of our knowledge, Dual Trees
have been minimally used in object detection; we are only aware of the work in [9], which used
DTs to efficiently generate particles for Nonparametric Belief Propagation. Here we show that DTs
can be used for part-based detection, which is related conceptually, but entirely different technically.
3 Preliminaries
We first describe the cost function used in DPMs, then outline the limitations of GDT-based detection, and finally present the concepts of Dual Trees relevant to our setting. Due to lack of space we
refer to [2, 4] for further details on DPMs and to [7, 14] for Dual Trees.
3.1 Merit function for DPMs
We consider a star-shaped graphical model consisting of a set of P + 1 nodes {n₀, ..., n_P}; n₀ is called the root and the part nodes n₁, ..., n_P are connected to the root. Each node p has a unary observation potential U_p(x), indicating the fidelity of the image at x to the node; e.g. in [2] U_p(x) is the inner product of a HOG feature at x with a discriminant w_p for p.
The location x_p = (h_p, v_p) of part p is constrained with respect to the root location x₀ = (h₀, v₀) in terms of a quadratic binary potential B_p(x_p, x₀) of the form:

    B_p(x_p, x₀) = −(x_p − x₀ − m_p)ᵀ I_p (x_p − x₀ − m_p) = −(h_p − h₀ − μ_p)² H_p − (v_p − v₀ − ν_p)² V_p,

where I_p = diag(H_p, V_p) is a diagonal precision matrix and m_p = (μ_p, ν_p) is the nominal difference of root-part locations. We will freely alternate between the vector x and its horizontal/vertical h/v coordinates. Moreover we consider μ₀ = 0, ν₀ = 0 and H₀, V₀ large enough so that B₀(x_p, x₀) will be zero for x_p = x₀ and practically infinite elsewhere.
If the root is at x₀, the merit for part p being at x_p is given by m_p(x_p, x₀) = U_p(x_p) + B_p(x_p, x₀); summing over p gives the score Σ_p m_p(x_p, x₀) of a root-and-parts configuration X = (x₀, ..., x_P). The detector score at point x is obtained by maximizing over those X with x₀ = x; this amounts to computing:

    S(x) := Σ_{p=0}^{P} max_{x_p} m_p(x_p, x) = Σ_{p=0}^{P} max_{x_p} U_p(x_p) − (h_p − h − μ_p)² H_p − (v_p − v − ν_p)² V_p.        (1)
A GDT can be used to maximize each summand in Eq. 1 jointly for all values of x₀ in time O(N), where N is the number of possible locations. This is dramatically faster than the naive O(N²) computation. For a P-part model, complexity decreases from O(N²P) to O(NP).
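For reference, the O(N) maximization of one summand can be done with the classic lower-envelope algorithm of Felzenszwalb and Huttenlocher; a 1-D Python sketch follows (our own illustration; the 2-D case applies it along rows and then columns, since the quadratic kernel is separable):

```python
def gdt_1d(U, H):
    """Compute m[x] = max_p U[p] - H*(p - x)**2 for all x in O(N),
    via the lower envelope of parabolas (Felzenszwalb-Huttenlocher),
    applied to f = -U/H to turn the max into a min."""
    n = len(U)
    f = [-u / H for u in U]
    v = [0] * n                 # positions of parabolas in the envelope
    z = [0.0] * (n + 1)         # breakpoints between envelope pieces
    k = 0
    z[0], z[1] = -1e18, 1e18
    for p in range(1, n):
        while True:
            q = v[k]
            s = ((f[p] + p * p) - (f[q] + q * q)) / (2 * p - 2 * q)
            if s <= z[k]:
                k -= 1          # parabola q never attains the envelope
            else:
                break
        k += 1
        v[k] = p
        z[k], z[k + 1] = s, 1e18
    out, k = [0.0] * n, 0
    for x in range(n):
        while z[k + 1] < x:
            k += 1
        out[x] = U[v[k]] - H * (v[k] - x) ** 2
    return out
```

Each source position contributes one parabola; only the parabolas on the envelope survive, so both passes are linear in N.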
Still, the N factor can make things slow for large images. If we know that a certain threshold will be used for detection, e.g. −1 for a classifier trained with SVMs, the GDT-based approach turns out to be wasteful, as it treats all image locations equally, even those where we can quickly realize that the classifier score cannot exceed this threshold.
This is illustrated in Fig. 1: in (a) we show the part-root configuration that gives the maximum
score, and in (b) the score of a bicycle model from [4] over the whole image domain. Our approach
(a) Input & detection result. (b) Detector score S(x). (c) BB for arg max_x S(x). (d) BB for S(x) ≥ −1.
Figure 1: Motivation for the Branch-and-Bound (BB) approach: standard part-based models evaluate a classifier's score S(x) over the whole image domain. Typically only a tiny portion of the image domain should be positive; in (b) we draw a black contour around {x : S(x) > −1} for an SVM-based classifier. BB ignores large intervals with low S(x) by upper bounding their values, and postponing their "exploration" in favor of more promising ones. In (c) we show as heat maps the upper bounds of the intervals visited by BB until the strongest location was explored, and in (d) of the intervals visited until all locations x with S(x) > −1 were explored.
speeds up detection by upper bounding the score of the detector within intervals of x while using
low-cost operations. This allows us to use a prioritized search strategy that can refine these bounds
on promising intervals, while postponing the exploration of less promising intervals.
This is demonstrated in Fig. 1(c,d) where we show as heat maps the upper bounds of the intervals
visited by BB: parts of the image where the heat maps are more fine grained correspond to image
locations that seemed promising. If our goal is to maximize S(x), BB discards a huge amount of computation, as shown in (c); even with a more conservative criterion, i.e. finding all x : S(x) > −1 (d), a large part of the image domain is effectively ignored and the algorithm obtains refined bounds only around "interesting" image locations.
3.2 Dual Trees: Data Structures for Set-Set interactions
The main technical challenge is to efficiently compute upper bounds for a model involving deformable parts; our main contribution consists in realizing that this can be accomplished with the
Dual Tree data structure of [7]. We now give a high-level description of Dual Trees, leaving concrete aspects for their adaptation to the detection problem; we assume the reader is familiar with
KD-trees.
Dual Trees were developed to efficiently evaluate expressions of the form:

    P(x_j) = Σ_{i=1}^{N} w_i K(x_j, x_i),    x_i ∈ X_S, i = 1, ..., N,    x_j ∈ X_D, j = 1, ..., M,        (2)
where K(·, ·) is a separable, decreasing kernel, e.g. a Gaussian with diagonal covariance. We refer to X_S as "source" terms, and to X_D as "domain" terms, the idea being that the source points X_S generate a "field" P, which we want to evaluate at the domain locations X_D.
Naively performing the computation in Eq. 2 considers all source-domain interactions and takes N M operations. The Dual Tree algorithm efficiently computes this sum by using two KD-trees, one (S) for the source locations X_S and another (D) for the domain locations X_D. This allows for substantial speedups when computing Eq. 2 for all domain points, as illustrated in Fig. 2: if a "chunk" of source points cannot affect a "chunk" of domain points, we skip computing their domain-source point interactions.
4 DPM optimization using Dual-Tree Branch and Bound
Branch and Bound (BB) is a maximization algorithm for non-parametric, non-convex or even non-differentiable functions. BB searches for the interval containing the function's maximum using a prioritized search strategy; the priority of an interval is determined by the function's upper bound within it. Starting from an interval containing the whole function domain, BB increasingly narrows down to the solution: at each step an interval of solutions is popped from a priority queue, split into sub-intervals (Branch), and a new upper bound for those intervals is computed (Bound). These intervals are then inserted in the priority queue and the process repeats until a singleton interval is popped. If the bound is tight for singletons, the first singleton will be the function's global maximum.
Figure 2: Left: Dual Trees efficiently deal with the interaction of "source" (red) and "domain" points (blue), using easily computable bounds. For instance, points lying in square 6 cannot have a large effect on points in square A, therefore we do not need to go to a finer level of resolution to exactly estimate their interactions. Right: illustration of the terms involved in the geometric bound computations of Eq. 10.
Coming to our case, the DPM criterion developed in Sec. 3.1 is a sum of scores of the form:

    s_p(x₀) = max_{x_p} m_p(x_p, x₀) = max_{(h_p,v_p)} U_p(h_p, v_p) − (h_p − h₀ − μ_p)² H_p − (v_p − v₀ − ν_p)² V_p.        (3)

Using Dual Tree terminology, the "source points" correspond to part locations x_p, i.e. X_S^p = {x_p}, and the "domain points" to object locations x₀, i.e. X_D = {x₀}. Dual Trees allow us to efficiently derive bounds for s_p(x₀), x₀ ∈ X_D, the scores that a set of object locations can have due to a set of part-p locations. Once these are formed, we add over parts to bound the score S(x₀) = Σ_p s_p(x₀), x₀ ∈ X_D. This provides the bound needed by Branch-and-Bound (BB).
We now present our approach through a series of intermediate problems. These may be amenable to simpler solutions, but the more complex solutions discussed finally lead to our algorithm.
4.1 Maximization for One Domain Point
We first introduce notation: we index the source/domain points in X_S/X_D using i/j respectively. We denote by w_i^p = U_p(x_i) the unary potential of part p at location x_i. We shift the unary scores by the nominal offsets m_p, which gives new source locations: x_i ← x_i − m_p, i.e. (h_i, v_i) ← (h_i − μ_p, v_i − ν_p). Finally, we drop p from m_p, H_p and V_p unless necessary. We can now write Eq. 3 as:

    m(h₀, v₀) = max_{i∈S_p} w_i − H(h_i − h₀)² − V(v_i − v₀)².        (4)
i?Sp
To evaluate Eq. 4 at (h0 , v0 ) we use prioritized search over intervals of i ? Sp , starting from Sp
and gradually narrowing down to the best i. To prioritize intervals we use a KD-tree for the source
points xi ? XSp to quickly compute bounds of Eq. 4. In specific, if Sn is the set of children of the
n-th node of the KD-tree for Sp , consider the subproblem:
mn (h0 , v0 ) = max
i?Sn
wi ? H(hi ? h0 )2 ? V (vi ? v0 )2 = max
i?Sn
wi + Gi ,
(5)
.
where Gi = ?H(hi ? h0 )2 ? V (vi ? v0 )2 stands for the geometric part of Eq. 5. We know that for
all points (hi , vi ) within Sn we have hi ? [ln , rn ] and vi ? [bn , tn ], where l, r, b, t are the left, right,
bottom, top axes defining n?s bounding box, Bn . We can then bound Gi within Sn as follows:
    Ḡ_n = −H max(⌈l − h₀⌉, ⌈h₀ − r⌉)² − V max(⌈b − v₀⌉, ⌈v₀ − t⌉)²        (6)
    G̲_n = −H max(|l − h₀|, |h₀ − r|)² − V max(|b − v₀|, |v₀ − t|)²,        (7)

where ⌈a⌉ := max(a, 0), and G̲_n ≤ G_i ≤ Ḡ_n for all i ∈ S_n. The upper bound Ḡ_n is zero inside B_n and uses the boundaries of B_n that lie closest to (h₀, v₀) when (h₀, v₀) is outside B_n. The lower bound uses the distance from (h₀, v₀) to the furthest point within B_n.
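In code, both geometric bounds reduce to nearest-point and furthest-point distances to the bounding box; a small sketch (function and variable names are ours):

```python
def geometric_bounds(h0, v0, box, H, V):
    """Lower/upper bounds on G_i = -H*(h_i-h0)**2 - V*(v_i-v0)**2 over
    all source points inside box = (l, r, b, t), as in Eqs. (6)-(7):
    the upper bound uses the nearest point of the box (zero inside),
    the lower bound the furthest point."""
    l, r, b, t = box
    near_h = max(l - h0, h0 - r, 0.0)        # 0 when h0 lies in [l, r]
    near_v = max(b - v0, v0 - t, 0.0)
    far_h = max(abs(l - h0), abs(h0 - r))
    far_v = max(abs(b - v0), abs(v0 - t))
    return (-H * far_h**2 - V * far_v**2,    # lower bound
            -H * near_h**2 - V * near_v**2)  # upper bound
```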
Regarding the w_i term in Eq. 5, for both bounds we can use the value w_j, j = arg max_{i∈S_n} w_i. This is clearly suited for the upper bound. For the lower bound, since G_i ≥ G̲_n for all i ∈ S_n, we have max_{i∈S_n} (w_i + G_i) ≥ w_j + G_j ≥ w_j + G̲_n. So w_j + G̲_n provides a proper lower bound for max_{i∈S_n} (w_i + G_i). Summing up, we bound Eq. 5 as: w_j + G̲_n ≤ m_n(h₀, v₀) ≤ w_j + Ḡ_n.
Figure 3: Supporter pruning: source nodes {m, n, o} are among the possible supporters of domain-node l. Their upper and lower bounds (shown as numbers to the right of each node) are used to prune them. Here, the upper bound for n (3) is smaller than the maximal lower bound among supporters (4, from o): this implies the upper bound of n's children's contributions to l's children (shown here for l₁) will not surpass the lower bound of o's children. We can thus safely remove n from the supporters.
We can use the upper bound in a prioritized search for the maximum of m(h0, v0), as described in
Table 1. Starting with the root of the KD-tree we expand its children nodes, estimate their priorities
(upper bounds), and insert them in a priority queue. The search stops when the first leaf node is
popped; this provides the maximizer, as its upper and lower bounds coincide and all other elements
waiting in the queue have smaller upper bounds. The lower bound is useful in Sec. 4.2.
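As a self-contained illustration of this single-point prioritized search, the following toy sketch (our own data layout and names such as `build` and `best_source_point`; not the paper's code) builds a small KD-tree over (h, v, w) source points, caches each node's bounding box and maximal weight, and pops nodes in order of decreasing upper bound until a leaf surfaces:

```python
import heapq
import itertools
import random

H, V = 1.0, 1.0  # quadratic "spring" constants (illustrative values)

def build(points, depth=0):
    """KD-tree node: leaves hold one (h, v, w) point; internal nodes cache
    their bounding box and the max weight of the points below them."""
    if len(points) == 1:
        return {'pt': points[0]}
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    node = {'left': build(points[:mid], depth + 1),
            'right': build(points[mid:], depth + 1)}
    hs = [p[0] for p in points]
    vs = [p[1] for p in points]
    node['box'] = (min(hs), max(hs), min(vs), max(vs))
    node['wmax'] = max(p[2] for p in points)
    return node

def score(pt, h0, v0):
    h, v, w = pt
    return w - H * (h - h0) ** 2 - V * (v - v0) ** 2

def upper(node, h0, v0):
    """Upper bound on the best score under this node (exact at a leaf)."""
    if 'pt' in node:
        return score(node['pt'], h0, v0)
    l, r, b, t = node['box']
    dh = max(l - h0, h0 - r, 0.0)
    dv = max(b - v0, v0 - t, 0.0)
    return node['wmax'] - H * dh * dh - V * dv * dv

def best_source_point(root, h0, v0):
    """Prioritized search: pop the node with the largest upper bound;
    the first popped leaf is the maximizer."""
    tie = itertools.count()  # tie-breaker so dict nodes are never compared
    heap = [(-upper(root, h0, v0), next(tie), root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        if 'pt' in node:
            return node['pt']
        for child in (node['left'], node['right']):
            heapq.heappush(heap, (-upper(child, h0, v0), next(tie), child))
```

Because the first popped leaf has a value no smaller than every remaining upper bound, it coincides with the brute-force maximizer.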
4.2 Maximization for All Domain Points
Having described how KD-trees provide bounds in the single domain point case, we now describe
how Dual Trees can speed up this operation when treating multiple domain points simultaneously.
Specifically, we consider the following maximization problem:
x* = argmax_{x ∈ X_D} m(x) = argmax_{j ∈ D} max_{i ∈ S} w_i − H(h_i − h_j)² − V(v_i − v_j)²,    (8)
where X_D/D is the set of domain points/indices and S are the source indices. The previous algorithm could deliver x* by computing m(x) repeatedly for each x ∈ X_D and picking the maximizer.
But this will repeat similar checks for neighboring domain points, which can instead be done jointly.
For this, as in the original Dual Tree work, we build a second KD-tree for the domain points ("Domain tree", as opposed to "Source tree"). The nodes in the Domain tree ("domain-nodes") correspond
to intervals of domain points that are processed jointly. This saves repetitions of similar bounding
operations, and quickly discards large domain areas with poor bounds.
For the bounding operations, as in Sec. 4.1 we consider the effect of source points contained in a
node Sn of the Source tree. The difference is that now we bound the maximum of this quantity over
domain points contained in a domain-node Dl. Specifically, we consider the quantity:
m_{l,n} = max_{j ∈ Dl} max_{i ∈ Sn} w_i − H(h_i − h_j)² − V(v_i − v_j)²    (9)
Bounding G_{i,j} = −H(h_i − h_j)² − V(v_i − v_j)² involves two 2D intervals, one for the domain-node
l and one for the source-node n. If the interval for node n is centered at (h_n, v_n) and has dimensions
d_{h,n}, d_{v,n}, we use d̄_h = ½(d_{h,l} + d_{h,n}), d̄_v = ½(d_{v,l} + d_{v,n}) and write:
Ḡ_{l,n} = −H max(⌈h_n − h_l − d̄_h⌉, ⌈h_l − h_n − d̄_h⌉)² − V max(⌈v_n − v_l − d̄_v⌉, ⌈v_l − v_n − d̄_v⌉)²
G̲_{l,n} = −H max(h_n − h_l + d̄_h, h_l − h_n + d̄_h)² − V max(v_n − v_l + d̄_v, v_l − v_n + d̄_v)²
We illustrate these bounds in Fig. 2. The upper bound is zero if the boxes overlap, or else equals the
(scaled) distance of their closest points. The lower bound uses the furthest points of the two boxes.
As in Sec. 4.1, we use w̄_n = max_{i ∈ Sn} w_i for the first term in Eq. 9, and bound m_{l,n} as follows:
G̲_{l,n} + w̄_n ≤ m_{l,n} ≤ Ḡ_{l,n} + w̄_n.    (10)
This expression bounds the maximal value m(x) that a point x in domain-node l can have using
contributions from points in source-node n. Our initial goal was to find the maximum using all
possible source point contributions. We now describe a recursive approach to limit the set of source-nodes considered, in a manner inspired by the "multi-recursion" approach of [7].
For this, we associate every domain-node l with a set S_l of "supporter" source-nodes that can yield
the maximal contribution to points in l. We start by associating the root node of the Domain tree
with the root node of the Source tree, which means that all domain-source point interactions are
originally considered.
We then recursively increase the "resolution" of the Domain tree in parallel with the "resolution" of
the Source tree. More specifically, to determine the supporters for a child m of domain-node l we
consider only the children of the source-nodes in S_l; formally, denoting by pa and ch the parent and
child operations respectively, we have S_m ⊆ ∪_{n ∈ S_pa(m)} ch(n).
Our goal is to reduce computation by keeping S_m small. This is achieved by pruning based on both
the lower and upper bounds derived above. The main observation is that when we go from parents
to children we decrease the number of source/domain points; this tightens the bounds, i.e. makes
the upper bounds less optimistic and the lower bounds more optimistic. Denoting the maximal
lower bound for contributions to parent node l by G̲_l = max_{n ∈ S_l} G̲_{l,n}, this means that G̲_k ≥ G̲_l if
pa(k) = l. On the flip side, Ḡ_{l,n} ≥ Ḡ_{k,q} if pa(k) = l, pa(q) = n. This means that if for source-node n at the parent level Ḡ_{l,n} < G̲_l, then at the children level the children of n will contribute something
worse than G̲_m, the lower bound on l's child's score. We therefore do not need to keep n among S_l: its
children's contribution will be certainly worse than the best contribution from other nodes' children.
Based on this observation we can reduce the set of supporters, while guaranteeing optimality.
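The pruning rule itself is a one-liner: drop every supporter whose upper bound cannot reach the best lower bound among the current supporters. A sketch follows (illustrative code, not from the paper), checked against the (upper, lower) values shown in Fig. 3:

```python
def prune_supporters(bounds):
    """bounds: list of (node_id, upper_bound, lower_bound) triples.
    Keep only source-nodes whose upper bound can still beat the best
    lower bound among the current supporters."""
    max_lb = max(lb for _, ub, lb in bounds)
    return [node for node, ub, lb in bounds if ub >= max_lb]
```

With the Fig. 3 values m:(7,0), n:(3,0), o:(8,4), the maximal lower bound is 4 (from o), so n (upper bound 3) is pruned while m and o survive.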
Pseudocode summarizing this algorithm is provided in Table 1. The bounds in Eq. 10 are used in a
prioritized search algorithm for the maximum of m(x) over x. The algorithm uses a priority queue
for Domain tree nodes, initialized with the root of the Domain tree (i.e. the whole range of possible
locations x). At each iteration we pop a Domain tree node from the queue, compute upper bounds
and supporters for its children, which are then pushed in the priority queue. The first leaf node that
is popped contains the best domain location: its upper bound equals its lower bound, and all other
nodes in the priority queue have smaller upper bounds, therefore cannot result in a better solution.
4.3 Maximization over All Domain Points and Multiple Parts: Branch and Bound for DPMs
The algorithm we described in the previous subsection is essentially a Branch-and-Bound (BB)
algorithm for the maximization of a merit function
x* = argmax_{x0} m(x0) = argmax_{(h0,v0)} max_{i ∈ Sp} w_i − H(h_i − h0)² − V(v_i − v0)²    (11)
corresponding to a DPM with a single-part (p). To see this, recall that at each step BB pops a
domain of the function being maximized from the priority queue, splits it into subdomains (Branch),
and computes a new upper bound for the subdomains (Bound). In our case Branching amounts
to considering the two descendants of the domain node being popped, while Bounding amounts to
taking the maximum of the upper bounds of the domain node supporters.
The single-part DPM optimization problem is rather trivial, but adapting the technique to the multi-part case is now easy. For this, we rewrite Eq. 1 in a convenient form as:
m(h0, v0) = Σ_{p=0}^{P} max_{i ∈ S} w_{p,i} − H_p(h_i^p − h0)² − V_p(v_i^p − v0)²    (12)
using the conventions we used in Eq. 4. Namely, we only consider using points in S for object parts,
and subtract μ_p from h_i, v_i to yield simple quadratic forms; since μ_p is part-dependent, we now
have a p superscript for h_i, v_i. Further, we have in general different H, V variables for different
parts, so we brought back the p subscript for these. Finally, w_{p,i} depends on p, since the same image
point will give different unary potentials for different object parts.
From this form we realize that computing the upper bound of m(x) within a range of values of
x, as required by Branch-and-Bound, is as easy as it was for the single terms in the previous section. Specifically, we have m(x) = Σ_{p=0}^{P} m_p(x), where m_p are the individual part contributions;
since max_x Σ_{p=0}^{P} m_p(x) ≤ Σ_{p=0}^{P} max_x m_p(x), we can separately upper bound the individual part
contributions, and sum them up to get an overall upper bound.
Pseudocode describing the maximization algorithm is provided in Table 1. Note that each part has its
own KD-tree (SourceT[p]): we build a separate Source-tree per part using the part-specific coordinates
(h^p, v^p) and weights w_{p,i}. Each part's contribution to the score is computed using the supporters it
lends to the node; the total bound is obtained by summing the individual part bounds.
Single Domain Point
IN: ST, x {Source Tree, location x}
OUT: arg max_{xi ∈ ST} m(x, xi)
Push(S, ST.root);
while 1 do
  Pop(S, popped);
  if popped.UB = popped.LB then
    return popped;
  end if
  for side = [Left, Right] do
    child = popped.side;
    child.UB = BoundU(x, child);
    child.LB = BoundL(x, child);
    Push(S, child);
  end for
end while

Multiple Domain Points, Multiple Parts
IN: ST[P], DT {P Source Trees / Domain Tree}
OUT: arg max_{x ∈ DT} Σ_p max_{i ∈ ST[p]} m(x, x_p, i)
Seed = DT.root;
for p = 1 to P do
  Seed.supporters[p] = ST[p].Root;
end for
Push(S, Seed);
while 1 do
  Pop(S, popped);
  if popped.UB = popped.LB then
    return popped;
  end if
  for side = [Left, Right] do
    child = popped.side;
    UB = 0;
    for part = 1 to P do
      supp = Descend(popped.supp[part]);
      UP, s = Bound(child, supp, DT, ST[part]);
      child.supp[part] = s;
      UB = UB + UP;
    end for
    child.UB = UB;
    Push(S, child);
  end for
end while

Multiple Domain Points
IN: ST, DT {Source/Domain Tree}
OUT: arg max_{x ∈ DT} max_{i ∈ ST} m(x, xi)
Seed = DT.root;
Seed.supporters = ST.Root;
Push(S, Seed);
while 1 do
  Pop(S, popped);
  if popped.UB = popped.LB then
    return popped;
  end if
  for side = [Left, Right] do
    child = popped.side;
    supp = Descend(popped.supp);
    UB, supc = Bound(child, supp, DT, ST);
    child.UB = UB;
    child.supc = supc;
    Push(S, child);
  end for
end while

Bounding Routine
IN: child, supporters, DT, ST
OUT: supch, MaxLB {chosen supporters, max lower bound}
for n ∈ supporters do
  UB[n] = BoundU(DT.node[child], ST.node[n]);
  LB[n] = BoundL(DT.node[child], ST.node[n]);
end for
MaxLB = max(LB);
supch = supporters(find(UB > MaxLB));
Return supch, MaxLB;
Table 1: Pseudocode for the algorithms presented in Section 4.
5 Results - Application to Deformable Object Detection
To estimate the merit of BB we first compare with the mixtures-of-DPMs developed and distributed
by [3]. We directly extend the Branch-and-Bound technique that we developed for a single DPM to
deal with multiple scales and mixtures ("ORs") of DPMs [4, 21], by inserting all object hypotheses
into the same queue. To detect multiple instances of objects at multiple scales we continue BB after
getting the best scoring object hypothesis. As termination criterion we choose to stop when we pop
an interval whose upper bound is below a fixed threshold.
Our technique delivers essentially the same results as [4]. One minuscule difference is that BB
uses floating point arithmetic for the part locations, while in GDT they are necessarily processed at
integer resolution; other than that the results are identical. We therefore do not provide any detection
performance curves, but only timing results.
Coming to time efficiency, in Fig. 4(a) we compare the results of the original DPM mixture model
and our implementation. We use 2000 images from the Pascal dataset and a mix of models for
different object classes (gains vary per category). We consider the standard detection scenario where
we want to detect all objects in an image having score above a certain threshold. We show how
[Figure 4 plots: speedup vs. image rank on log-log axes; panel legends: (a) thresholds t = −0.4, −0.6, −0.8, −1.0; (b) M = 1, 5, 10, 20 objects; (c) k = 1, 2, 5, 10; (d) front-end speedup.]
Figure 4: (a) Single-object speedup of Branch and Bound compared to GDTs on images from the Pascal
dataset, (b,c) Multi-object speedup. (d) Speedup due to the front-end computation of the unary potentials.
Please see text for details.
the threshold affects the speedup we obtain; for a conservative threshold the speedup is typically
tenfold, but as we become more aggressive it doubles.
As a second application, we consider the problem of identifying the "dominant" object present in
the image, i.e. the category that gives the largest score. Typically simpler models, like bag-of-words
classifiers, are applied to this problem, based on the understanding that part-based models can be
time-consuming, therefore applying a large set of models to an image would be impractical.
Our claim is that Branch-and-Bound allows us to pursue a different approach, where in fact having
more object categories can increase the speed of detection, if we leave the unary potential computation aside. Specifically, our approach can be directly extended to the multiple-object detection
setting; as long as the scores computed by different object categories are commensurate, they can all
be inserted in the same priority queue. In our experiments we observed that we can get a response
faster by introducing more models. The reason for this is that including in our object repertoire a
model giving a large score helps BB stop; otherwise BB keeps searching for another object.
In plots (b), (c) of Fig. 4 we show systematic results on the Pascal dataset. We compare the time that
would be required by GDT to perform detection of all multiple objects considered in Pascal, to that
of a model simultaneously exploring all models. In (b) we show how finding the first-best result is
accelerated as the number of objects (M) increases; while in (c) we show how increasing the "k" in
"k-best" affects the speedup. For small values of k the gains become more pronounced. Of course if
we use a fixed threshold the speedup would not change, when compared to plot (a), since essentially
the objects do not "interact" in any way (we do not use non-maximum suppression). But as we turn to
the best-first problem, the speedup becomes dramatic, ranging in the order of up to a hundred times.
We note that the timings refer to the "message passing" part implemented with GDT and not the
computation of unary potentials, which is common for both models, and is currently the bottleneck.
Even though it is tangential to our contribution in this paper, we mention that, as shown in plot (d),
we compute unary potentials approximately five times faster than the single-threaded convolution
provided by [3] by exploiting Matlab's optimized matrix multiplication routines.
6 Conclusions
In this work we have introduced Dual-Tree Branch-and-Bound for efficient part-based detection.
We have used Dual Trees to compute upper bounds on the cost function of a part-based model and
thereby derived a Branch-and-Bound algorithm for detection. Our algorithm is exact and makes no
approximations, delivering identical results with the DPMs used in [4], but in typically 10-15 times less
time. Further, we have shown that the flexibility of prioritized search allows us to consider new
tasks, such as multiple-object detection, which yielded further speedups. The main challenge for
future work will be to reduce the unary term computation cost; we intend to use BB for this task too.
7 Acknowledgements
We are grateful to the authors of [3, 12, 9] for making their code available, and to the reviewers for
constructive feedback. This work was funded by grant ANR-10-JCJC -0205.
8
References
[1] Y. Chen, L. Zhu, C. Lin, A. L. Yuille, and H. Zhang. Rapid inference on a novel and/or graph for object
detection, segmentation and parsing. In NIPS, 2007.
[2] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In CVPR, 2008.
[3] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models,
release 4. http://www.cs.brown.edu/~pff/latent-release4/.
[4] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part
models. In CVPR, 2010.
[5] P. F. Felzenszwalb and D. P. Huttenlocher. Distance transforms of sampled functions. Technical report,
Cornell CS, 2004.
[6] V. Ferrari, M. J. Marin-Jimenez, and A. Zisserman. Progressive search space reduction for human pose
estimation. In CVPR, 2008.
[7] A. G. Gray and A. W. Moore. Nonparametric density estimation: Toward computational tractability. In
SIAM International Conference on Data Mining, 2003.
[8] E. Grimson. Object Recognition by Computer. MIT Press, 1991.
[9] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products
of gaussian mixtures. In NIPS, 2003.
[10] I. Kokkinos and A. Yuille. HOP: Hierarchical Object Parsing. In CVPR, 2009.
[11] I. Kokkinos and A. L. Yuille. Inference and learning with hierarchical shape models. International Journal
of Computer Vision, 93(2):201?225, 2011.
[12] C. Lampert, M. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization by efficient
subwindow search. In CVPR, 2008.
[13] C. H. Lampert. An efficient divide-and-conquer cascade for nonlinear object detection. In CVPR, 2010.
[14] D. Lee, A. G. Gray, and A. W. Moore. Dual-tree fast gauss transforms. In NIPS, 2005.
[15] A. Lehmann, B. Leibe, and L. V. Gool. Fast PRISM: Branch and Bound Hough Transform for Object
Class Detection. International Journal of Computer Vision, 94(2):175?197, 2011.
[16] V. Lempitsky, A. Blake, and C. Rother. Image segmentation by branch-and-mincut. In ECCV, 2008.
[17] P. Moreels, M. Maire, and P. Perona. Recognition by probabilistic hypothesis construction. In ECCV,
page 55, 2004.
[18] M. Pedersoli, A. Vedaldi, and J. González. A coarse-to-fine approach for fast deformable object detection.
In CVPR, 2011.
[19] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010.
[20] P. Viola and M. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. In CVPR,
2001.
[21] S. C. Zhu and D. Mumford. Quest for a Stochastic Grammar of Images. Foundations and Trends in
Computer Graphics and Vision, 2(4):259?362, 2007.
Ran El-Yaniv and Yair Wiener
Computer Science Department
Technion - Israel Institute of Technology
{rani,wyair}@{cs,tx}.technion.ac.il
Abstract
For a learning problem whose associated excess loss class is (β, B)-Bernstein, we
show that it is theoretically possible to track the same classification performance
of the best (unknown) hypothesis in our class, provided that we are free to abstain
from prediction in some region of our choice. The (probabilistic) volume of this
rejected region of the domain is shown to be diminishing at rate O(BΘ(√(1/m))^β),
where Θ is Hanneke's disagreement coefficient. The strategy achieving this performance has computational barriers because it requires empirical error minimization
in an agnostic setting. Nevertheless, we heuristically approximate this strategy
and develop a novel selective classification algorithm using constrained SVMs.
We show empirically that the resulting algorithm consistently outperforms the traditional rejection mechanism based on distance from decision boundary.
1 Introduction
Is it possible to achieve the same test performance as the best classifier in hindsight? The answer
to this question is "probably not." However, when changing the rules of the standard game it is
possible. Indeed, consider a game where our classifier is allowed to abstain from prediction, without
penalty, in some region of our choice. For this case, and assuming a noise free "realizable" setting,
it was shown in [1] that there is a "perfect classifier." This means that after observing only a finite
labeled training sample, the learning algorithm outputs a classifier that, with certainty, will never
err on any test point. To achieve this, the classifier must refuse to classify in some region of the
domain. Perhaps surprisingly, it was shown that the volume of this rejection region is bounded, and
in fact, this volume diminishes with increasing training set sizes (under certain conditions). An open
question, posed in [1], is what would be an analogous notion of perfection in an agnostic, noisy
setting. Is it possible to achieve any kind of perfection in a real world scenario?
The setting under consideration, where classifiers can abstain from prediction, is called classification
with a reject option [2, 3], or selective classification [1]. Focusing on this model, in this paper
we present a blend of theoretical and practical results. We first show that the concept of "perfect
classification" that was introduced for the realizable case in [1] can be extended to the agnostic
setting. While pure perfection is impossible to accomplish in a noisy environment, a more realistic
objective is to perform as well as the best hypothesis in the class within a region of our choice.
We call this type of learning "weakly optimal" selective classification and show that a novel strategy
accomplishes this type of learning with diminishing rejection rate under certain Bernstein type conditions (a stronger notion of optimality is mentioned later as well). This strategy relies on empirical
risk minimization, which is computationally difficult. In the practical part of the paper we present
a heuristic approximation algorithm, which relies on constrained SVMs, and mimics the optimal
behavior. We conclude with numerical examples that examine the empirical performance of the new
algorithm and compare its performance with that of the widely used selective classification method
for rejection, based on distance from decision boundary.
2 Selective classification and other definitions
Consider a standard agnostic binary classification setting where X is some feature space, and H
is our hypothesis class of binary classifiers, h : X → {±1}. Given a finite training sample of m
labeled examples, Sm = {(x_i, y_i)}_{i=1}^m, assumed to be sampled i.i.d. from some unknown underlying
distribution P(X, Y) over X × {±1}, our goal is to select the best possible classifier from H. For
any h ∈ H, its true error, R(h), and its empirical error, R̂(h), are

R(h) ≜ Pr_{(X,Y)∼P} {h(X) ≠ Y},        R̂(h) ≜ (1/m) Σ_{i=1}^{m} I(h(x_i) ≠ y_i).

Let ĥ ≜ arg inf_{h∈H} R̂(h) be the empirical risk minimizer (ERM), and h* ≜ arg inf_{h∈H} R(h) the
true risk minimizer.
In selective classification [1], given Sm we need to select a binary selective classifier defined to be a
pair (h, g), with h ∈ H being a standard binary classifier, and g : X → {0, 1} a selection function
defining the sub-region of activity of h in X. For any x ∈ X,

(h, g)(x) ≜ reject,  if g(x) = 0;
            h(x),    if g(x) = 1.        (1)
Selective classification performance is characterized in terms of two quantities: coverage and risk.
The coverage of (h, g) is

Φ(h, g) ≜ E[g(X)].

For a bounded loss function ℓ : Y × Y → [0, 1], the risk of (h, g) is defined as the average loss on
the accepted samples,

R(h, g) ≜ E[ℓ(h(X), Y) · g(X)] / Φ(h, g).
As pointed out in [1], the trade-off between risk and coverage is the main characteristic of a selective
classifier. This trade-off is termed there the "risk-coverage curve" (RC curve).¹
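For concreteness, the empirical analogues of coverage and selective risk under the 0/1 loss can be computed as follows (an illustrative sketch, not code from the paper; h and g are arbitrary callables standing in for the classifier and the selection function):

```python
def coverage(g, xs):
    """Empirical coverage: fraction of the sample on which g accepts."""
    return sum(g(x) for x in xs) / len(xs)

def selective_risk(h, g, samples):
    """Empirical risk of (h, g) under the 0/1 loss, averaged over the
    accepted points only (0 by convention when nothing is accepted)."""
    accepted = [(x, y) for x, y in samples if g(x)]
    if not accepted:
        return 0.0
    return sum(h(x) != y for x, y in accepted) / len(accepted)
```

Sweeping the selection function g (e.g. by a confidence threshold) and plotting selective_risk against coverage traces out an empirical RC curve.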
Let G ⊆ H. The disagreement set [4, 1] w.r.t. G is defined as

DIS(G) ≜ {x ∈ X : ∃h1, h2 ∈ G  s.t.  h1(x) ≠ h2(x)}.

For any hypothesis class H, target hypothesis h ∈ H, distribution P, sample Sm, and real r > 0,
define

V(h, r) = {h' ∈ H : R(h') ≤ R(h) + r}   and   V̂(h, r) = {h' ∈ H : R̂(h') ≤ R̂(h) + r}.    (2)

Finally, for any h ∈ H we define a ball in H of radius r around h [5]. Specifically, with respect to
class H, marginal distribution P over X, h ∈ H, and real r > 0, define

B(h, r) ≜ {h' ∈ H : Pr_{X∼P} {h'(X) ≠ h(X)} ≤ r}.
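Over a finite hypothesis class and a finite sample, both the empirical class V̂(ĥ, r) and the disagreement set DIS(G) restricted to the sample can be computed directly. A small sketch (illustrative Python with hypothetical helper names, shown here for 1-D threshold classifiers):

```python
def empirical_error(h, sample):
    """R-hat(h): fraction of labeled pairs (x, y) that h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def low_error_set(H, sample, r):
    """V-hat(h-hat, r): hypotheses whose empirical error is within r
    of the empirical risk minimizer's error."""
    best = min(empirical_error(h, sample) for h in H)
    return [h for h in H if empirical_error(h, sample) <= best + r]

def disagreement_set(G, xs):
    """DIS(G) restricted to the points xs: points where some pair of
    hypotheses in G disagrees."""
    return [x for x in xs if len({h(x) for h in G}) > 1]
```

On a toy 1-D threshold class, growing r enlarges the low-error set and hence the disagreement region, which is exactly the region a selective classifier built on V̂ would reject.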
3 Perfect and weakly optimal selective classifiers
The concept of perfect classification was introduced in [1] within a realizable selective classification
setting. Perfect classification is an extreme case of selective classification where a selective classifier
(h, g) achieves R(h, g) = 0 with certainty; that is, the classifier never errs on its region of activity.
Obviously, the classifier must compromise a sufficiently large part of the domain X in order to achieve
this outstanding performance. Surprisingly, it was shown in [1] that non-trivial perfect classification
exists in the sense that under certain conditions (e.g., finite hypothesis class) the rejected region
diminishes at rate O(1/m), where m is the size of the training set.
In agnostic environments, as we consider here, such perfect classification appears to be out of reach.
In general, in the worst case no hypothesis can achieve zero error over any nonempty subset of the
¹ Some authors refer to an equivalent variant of this curve as "Accuracy-Rejection Curve" or ARC.
domain. We consider here the following weaker, but still extremely desirable, behavior, which we
call "weakly optimal selective classification." Let h* ∈ H be the true risk minimizer of our problem.
Let (h, g) be a selective classifier selected after observing the training set Sm. We say that (h, g) is
a weakly optimal selective classifier if, for any 0 < δ < 1, with probability of at least 1 − δ over
random choices of Sm, R(h, g) ≤ R(h*, g). That is, with high probability our classifier is at least
as good as the true risk minimizer over its region of activity. We call this classifier "weakly optimal"
because a stronger requirement would be that the classifier should achieve the best possible error
among all hypotheses in H restricted to the region of activity defined by g.
4 A learning strategy
We now present a strategy that will be shown later to achieve non-trivial weakly optimal selective classification under certain conditions. We call it a "strategy" rather than an "algorithm" because it does not include implementation details.

Let's begin with some motivation. Using standard concentration inequalities one can show that the training error of the true risk minimizer, h*, cannot be "too far" from the training error of the empirical risk minimizer, ĥ. Therefore, we can guarantee, with high probability, that the class of all hypotheses with "sufficiently low" empirical error includes the true risk minimizer h*. Selecting only the subset of the domain on which all hypotheses in that class agree is then sufficient to guarantee weak optimality. Strategy 1 formulates this idea. In the next section we analyze this strategy and show that it achieves this optimality with non-trivial (bounded) coverage.
Strategy 1 Learning strategy for weakly optimal selective classifiers
Input: Sm, m, δ, d
Output: a selective classifier (h, g) such that R(h, g) = R(h*, g) w.p. 1 − δ
1: Set ĥ = ERM(H, Sm), i.e., ĥ is any empirical risk minimizer from H
2: Set G = V̂(ĥ, 4√2 · √((d(ln(2me/d)) + ln(8/δ)) / m))   (see Eq. (2))
3: Construct g such that g(x) = 1 ⟺ x ∈ {X \ DIS(G)}
4: h = ĥ
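For intuition, the four steps of Strategy 1 can be sketched end-to-end over a small finite pool standing in for H (the threshold class below is purely illustrative; for this tiny sample the radius in step 2 is large, so G contains the whole pool):

```python
import math

def step2_radius(m, delta, d):
    # Radius used in step 2: 4*sqrt(2)*sqrt((d*ln(2*m*e/d) + ln(8/delta)) / m).
    return 4 * math.sqrt(2) * math.sqrt(
        (d * math.log(2 * m * math.e / d) + math.log(8 / delta)) / m)

def strategy1(hyps, sample, delta, d):
    """Sketch of Strategy 1 over a finite pool `hyps` standing in for H."""
    def emp_risk(h):
        return sum(h(x) != y for x, y in sample) / len(sample)
    h_hat = min(hyps, key=emp_risk)                              # step 1: ERM
    r = step2_radius(len(sample), delta, d)
    G = [h for h in hyps if emp_risk(h) <= emp_risk(h_hat) + r]  # step 2: V-hat
    def g(x):                                                    # steps 3-4
        return len({h(x) for h in G}) == 1   # accept iff x lies outside DIS(G)
    return h_hat, g

hyps = [lambda x, t=t: 1 if x > t else -1 for t in (0.0, 0.5, 1.0)]
sample = [(0.25, -1), (0.75, 1), (1.5, 1)]
h_sel, g = strategy1(hyps, sample, delta=0.05, d=1)
```

The returned g accepts only where every surviving hypothesis votes the same way, which is precisely the disagreement-set construction of step 3.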
5 Analysis
We begin with a few definitions. Consider an instance of a binary learning problem with hypothesis class H, an underlying distribution P over X × Y, and a loss function ℓ. Let h* = arg inf_{h∈H} {E ℓ(h(X), Y)} be the true risk minimizer. The associated excess loss class [6] is defined as

F ≜ {ℓ(h(x), y) − ℓ(h*(x), y) : h ∈ H}.

Class F is said to be a (β, B)-Bernstein class with respect to P (where 0 < β ≤ 1 and B ≥ 1), if every f ∈ F satisfies

E f² ≤ B (E f)^β.

Bernstein classes arise in many natural situations; see discussions in [7, 8]. For example, if the probability P(X, Y) satisfies Tsybakov's noise conditions then the excess loss class is a Bernstein class [8, 9]. In the following sequence of lemmas and theorems we assume a binary hypothesis class H with VC-dimension d, an underlying distribution P over X × {±1}, and ℓ is the 0/1 loss function. Also, F denotes the associated excess loss class. Our results can be extended to losses other than 0/1 by techniques similar to those used in [10].
Lemma 5.1. If F is a (β, B)-Bernstein class with respect to P, then for any r > 0,

V(h*, r) ⊆ B(h*, B r^β).
Proof. If h ∈ V(h*, r) then, by definition,

E {I(h(X) ≠ Y)} ≤ E {I(h*(X) ≠ Y)} + r.

Using the linearity of expectation we have

E {I(h(X) ≠ Y) − I(h*(X) ≠ Y)} ≤ r.   (3)

Since F is a (β, B)-Bernstein class,

E {I(h(X) ≠ h*(X))} = E {|I(h(X) ≠ Y) − I(h*(X) ≠ Y)|}
  = E {(ℓ(h(X), Y) − ℓ(h*(X), Y))²} = E f² ≤ B (E f)^β
  = B (E {I(h(X) ≠ Y) − I(h*(X) ≠ Y)})^β.

By (3), for any r > 0, E {I(h(X) ≠ h*(X))} ≤ B r^β. Therefore, by definition, h ∈ B(h*, B r^β).
Throughout this section we denote

σ(m, δ, d) ≜ 2 √( 2 (d ln(2me/d) + ln(2/δ)) / m ).
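Since σ(m, δ, d) drives every bound in this section, it helps to see its magnitude numerically; the following is a direct transcription of the definition above (Python, standard library only):

```python
import math

def sigma(m, delta, d):
    """Uniform-convergence slack sigma(m, delta, d) used throughout Section 5."""
    return 2 * math.sqrt(2 * (d * math.log(2 * m * math.e / d)
                              + math.log(2 / delta)) / m)

# The slack decays roughly like sqrt(d * log(m) / m) as the sample grows.
s_small = sigma(10 ** 3, 0.05, 5)
s_large = sigma(10 ** 4, 0.05, 5)
```

Plugging representative (m, δ, d) triples into this function makes it easy to check when the coverage guarantees below become non-vacuous.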
Theorem 5.2 ([11]). For any 0 < δ < 1, with probability of at least 1 − δ over the choice of Sm from P^m, any hypothesis h ∈ H satisfies

R(h) ≤ R̂(h) + σ(m, δ, d).

Similarly, R̂(h) ≤ R(h) + σ(m, δ, d) under the same conditions.
Lemma 5.3. For any r > 0, and 0 < δ < 1, with probability of at least 1 − δ,

V̂(ĥ, r) ⊆ V(h*, 2σ(m, δ/2, d) + r).

Proof. If h ∈ V̂(ĥ, r), then, by definition, R̂(h) ≤ R̂(ĥ) + r. Since ĥ minimizes the empirical error, we have R̂(ĥ) ≤ R̂(h*). Using Theorem 5.2 twice, and applying the union bound, we know that w.p. of at least 1 − δ,

R(h) ≤ R̂(h) + σ(m, δ/2, d)
R̂(h*) ≤ R(h*) + σ(m, δ/2, d).

Therefore, R(h) ≤ R(h*) + 2σ(m, δ/2, d) + r, and h ∈ V(h*, 2σ(m, δ/2, d) + r).
For any G ⊆ H, and distribution P, we define ΔG ≜ Pr{DIS(G)}. Hanneke introduced a complexity measure for active learning problems termed the disagreement coefficient [5]. The disagreement coefficient of h with respect to H under distribution P is

θ_h ≜ sup_{r>ε} ΔB(h, r) / r,   (4)

where ε = 0. The disagreement coefficient of the hypothesis class H with respect to P is defined as

θ ≜ lim sup_{k→∞} θ_{h(k)},

where {h(k)} is any sequence of h(k) ∈ H with R(h(k)) monotonically decreasing.
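For concreteness, the disagreement coefficient can be computed in closed form for simple classes. A minimal sketch, assuming 1-D threshold classifiers under a Uniform[0, 1] marginal (a standard textbook case, not taken from this paper): the ball B(h_t, r) contains the thresholds within distance r of t, its disagreement region is an interval of mass at most 2r, and so θ = 2 for interior thresholds.

```python
def ball_disagreement_mass(t, r):
    """Delta B(h_t, r) for 1-D thresholds under Uniform[0, 1]: the ball contains
    thresholds s with |s - t| <= r, and its DIS region is (t - r, t + r) clipped
    to [0, 1]."""
    return min(1.0, t + r) - max(0.0, t - r)

def disagreement_coefficient(t, radii):
    # theta_h = sup_{r > 0} Delta B(h, r) / r, approximated over a grid of radii.
    return max(ball_disagreement_mass(t, r) / r for r in radii)

theta = disagreement_coefficient(0.5, [10 ** -k for k in range(1, 6)])
```

The supremum in Eq. (4) is approximated here by a finite grid of radii; for this class the ratio is exactly 2 at every small radius.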
Theorem 5.4. Assume that H has disagreement coefficient θ and that F is a (β, B)-Bernstein class w.r.t. P. Then, for any r > 0 and 0 < δ < 1, with probability of at least 1 − δ,

ΔV̂(ĥ, r) ≤ Bθ (2σ(m, δ/2, d) + r)^β.
Proof. Applying Lemmas 5.3 and 5.1 we get that with probability of at least 1 − δ,

V̂(ĥ, r) ⊆ B(h*, B (2σ(m, δ/2, d) + r)^β).

Therefore,

ΔV̂(ĥ, r) ≤ ΔB(h*, B (2σ(m, δ/2, d) + r)^β).

By the definition of the disagreement coefficient, for any r' > 0, ΔB(h*, r') ≤ θ r'.
Theorem 5.5. Assume that H has disagreement coefficient θ and that F is a (β, B)-Bernstein class w.r.t. P. Let (h, g) be the selective classifier chosen by Strategy 1. Then, with probability of at least 1 − δ,

Φ(h, g) ≥ 1 − Bθ (4σ(m, δ/4, d))^β   and   R(h, g) = R(h*, g).
Proof. Applying Theorem 5.2 we get that with probability of at least 1 − δ/4,

R̂(h*) ≤ R(h*) + σ(m, δ/4, d).

Since h* minimizes the true error, we get that R(h*) ≤ R(ĥ). Applying Theorem 5.2 again, we know that with probability of at least 1 − δ/4, R(ĥ) ≤ R̂(ĥ) + σ(m, δ/4, d). Applying the union bound we have that with probability of at least 1 − δ/2, R̂(h*) ≤ R̂(ĥ) + 2σ(m, δ/4, d). Hence, with probability of at least 1 − δ/2, h* ∈ V̂(ĥ, 2σ(m, δ/4, d)) = G. We note that the selection function g(x) equals one only for x ∈ X \ DIS(G). Therefore, for any x ∈ X for which g(x) = 1, all the hypotheses in G agree, and in particular h* and ĥ agree. Thus,

R(ĥ, g) = E{I(ĥ(X) ≠ Y) · g(X)} / E{g(X)} = E{I(h*(X) ≠ Y) · g(X)} / E{g(X)} = R(h*, g).

Applying Theorem 5.4 and the union bound we therefore know that with probability of at least 1 − δ,

Φ(ĥ, g) = E{g(X)} = 1 − ΔG ≥ 1 − Bθ (4σ(m, δ/4, d))^β.
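The coverage guarantee of Theorem 5.5 is easy to evaluate numerically; the sketch below (σ restated so the snippet is self-contained, with illustrative constants B, θ, β) shows that the bound is vacuous for small samples but approaches full coverage as m grows:

```python
import math

def sigma(m, delta, d):
    # Restated from the definition earlier in Section 5.
    return 2 * math.sqrt(2 * (d * math.log(2 * m * math.e / d)
                              + math.log(2 / delta)) / m)

def coverage_lower_bound(m, delta, d, B, theta, beta):
    """Coverage guarantee of Theorem 5.5: 1 - B*theta*(4*sigma(m, delta/4, d))**beta.
    Can be vacuous (negative) when m is small."""
    return 1.0 - B * theta * (4 * sigma(m, delta / 4, d)) ** beta

small_m = coverage_lower_bound(10 ** 3, 0.05, 5, B=1.0, theta=2.0, beta=1.0)
large_m = coverage_lower_bound(10 ** 7, 0.05, 5, B=1.0, theta=2.0, beta=1.0)
```

The chosen constants are hypothetical; the qualitative behavior (bound improving monotonically in m) is what the theorem asserts.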
Hanneke introduced, in his original work [5], an alternative definition of the disagreement coefficient θ, for which the supremum in (4) is taken with respect to any fixed ε > 0. Using this alternative definition it is possible to show that fast coverage rates are achievable not only for finite disagreement coefficients (Theorem 5.5), but also when the disagreement coefficient grows slowly with respect to 1/ε (as shown by Wang [12], under sufficient smoothness conditions). This extension will be discussed in the full version of this paper.
6 A disbelief principle and the risk-coverage trade-off
Theorem 5.5 tells us that the strategy presented in Section 4 not only outputs a weakly optimal
selective classifier, but this classifier also has guaranteed coverage (under some conditions). As
emphasized in [1], in practical applications it is desirable to allow for some control on the trade-off
between risk and coverage; in other words, we would like to be able to develop the entire risk-coverage curve for the classifier at hand and select the cutoff point along this curve ourselves, in
accordance with other practical considerations we may have. How can this be achieved?
The following lemma facilitates a construction of a risk-coverage trade-off curve. The result is an
alternative characterization of the selection function g, of the weakly optimal selective classifier
chosen by Strategy 1. This result allows for calculating the value of g(x), for any individual test
point x ? X , without actually constructing g for the entire domain X .
Lemma 6.1. Let (h, g) be a selective classifier chosen by Strategy 1 after observing the training sample Sm. Let ĥ be the empirical risk minimizer over Sm. Let x be any point in X and

h̃x ≜ argmin_{h∈H} { R̂(h) | h(x) = −sign(ĥ(x)) },

an empirical risk minimizer forced to label x the opposite of ĥ(x). Then

g(x) = 0 ⟺ R̂(h̃x) − R̂(ĥ) ≤ 2σ(m, δ/4, d).
Proof. According to the definition of V̂ (see Eq. (2)),

R̂(h̃x) − R̂(ĥ) ≤ 2σ(m, δ/4, d) ⟺ h̃x ∈ V̂(ĥ, 2σ(m, δ/4, d)).

Thus, ĥ, h̃x ∈ V̂. However, by construction, ĥ(x) = −h̃x(x), so x ∈ DIS(V̂) and g(x) = 0.
Lemma 6.1 tells us that in order to decide if point x should be rejected we need to measure the empirical error R̂(h̃x) of a special empirical risk minimizer, h̃x, which is constrained to label x the opposite of ĥ(x). If this error is sufficiently close to R̂(ĥ), our classifier cannot be too sure about the label of x and we must reject it. This result strongly motivates the following definition of a "disbelief index" for each individual point.

Definition 6.2 (disbelief index). For any x ∈ X, define its disbelief index w.r.t. Sm and H,

D(x) ≜ D(x, Sm) ≜ R̂(h̃x) − R̂(ĥ).
Observe that D(x) is large whenever our model is sensitive to the label of x, in the sense that when we are forced to bend our best model to fit the opposite label of x, our model substantially deteriorates, giving rise to a large disbelief index. This large D(x) can be interpreted as our disbelief in the possibility that x can be labeled so differently. In this case we should definitely predict the label of x using our unforced model. Conversely, if D(x) is small, our model is indifferent to the label of x and, in this sense, is not committed to its label. In this case we should abstain from prediction at x.
This "disbelief principle" facilitates an exploration of the risk-coverage trade-off curve for our classifier. Given a pool of test points we can rank these test points according to their disbelief index, and points with low index should be rejected first. Thus, this ranking provides the means for constructing a risk-coverage trade-off curve.
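The ranking-to-curve step described above can be sketched directly (hypothetical helper name; pure Python): reject points in order of increasing disbelief and record the selective risk among the points kept at each cutoff.

```python
def rc_curve(disbelief, errors):
    """Risk-coverage pairs obtained by rejecting points in order of increasing
    disbelief index: the most-believed points are kept first.
    `errors` holds the 0/1 error of the unforced classifier on each test point."""
    order = sorted(range(len(disbelief)), key=lambda i: -disbelief[i])
    coverages, risks, err_sum = [], [], 0
    for k, i in enumerate(order, start=1):
        err_sum += errors[i]
        coverages.append(k / len(order))
        risks.append(err_sum / k)        # selective risk among the k kept points
    return coverages, risks

cov, risk = rc_curve([0.30, 0.01, 0.20, 0.05], [0, 1, 0, 1])
```

Plotting risk against coverage yields one RC curve per confidence measure, which is exactly how the comparisons in Section 8 are produced.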
A similar technique of using an ERM oracle that can enforce an arbitrary number of example-based
constraints was used in [13, 14] in the context of active learning. As in our disbelief index, the
difference between the empirical risk (or importance weighted empirical risk [14]) of two ERM
oracles (with different constraints) is used to estimate prediction confidence.
7 Implementation
At this point in the paper we switch from theory to practice, aiming at implementing rejection methods inspired by the disbelief principle and seeing how well they work on real world (well, ..., UCI) problems. Attempting to implement a learning algorithm driven by the disbelief index, we face a major bottleneck because the calculation of the index requires the identification of ERM hypotheses. To handle this computationally difficult problem, we "approximate" the ERM as follows. Focusing on SVMs, we use a high C value (10^5 in our experiments) to penalize training errors more heavily than small margins. In this way the solution to the optimization problem tends to get closer to the ERM.
Another problem we face is that the disbelief index is a noisy statistic that depends strongly on the sample Sm. To overcome this noise we use robust statistics. First we generate 11 different samples (Sm^1, Sm^2, ..., Sm^11) using bootstrap sampling. For each sample we calculate the disbelief index for all test points, and for each point we take the median of these measurements as the final index.
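The bootstrap-and-median smoothing just described is a few lines of standard-library Python; a minimal sketch, where `disbelief_index(sample, x)` stands for any routine implementing Definition 6.2:

```python
import random
import statistics

def robust_disbelief(train, test_points, disbelief_index, n_boot=11, seed=0):
    """Median-of-bootstraps smoothing of the (noisy) disbelief statistic."""
    rng = random.Random(seed)
    scores = {x: [] for x in test_points}
    for _ in range(n_boot):
        # Resample the training set with replacement (one bootstrap sample).
        boot = [train[rng.randrange(len(train))] for _ in range(len(train))]
        for x in test_points:
            scores[x].append(disbelief_index(boot, x))
    return {x: statistics.median(vals) for x, vals in scores.items()}

# Dummy index (fraction of sample below x) just to exercise the plumbing.
smoothed = robust_disbelief([1, 2, 3, 4, 5], [0, 10],
                            lambda s, x: sum(p < x for p in s) / len(s))
```

The dummy index is illustrative only; in the paper's setup the callable would wrap the constrained-SVM estimate described below.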
We note that for any finite training sample the disbelief index is a discrete variable. It is often the
case that several test points share the same disbelief index. In those cases we can use any confidence
measure as a tie breaker. In our experiments we use distance from decision boundary to break ties.
In order to estimate R̂(h̃x) we have to restrict the SVM optimizer to consider only hypotheses that classify the point x in a specific way. To accomplish this we use a weighted SVM for unbalanced data. We add the point x as another training point with a weight 10 times larger than the weight of all training points combined. Thus, the penalty for misclassifying x is very large and the optimizer finds a solution that doesn't violate the constraint.
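The same heavy-weight trick can be sketched without an SVM, using an exhaustive weighted ERM over a small finite pool (the threshold class below is purely illustrative; with an SVM the weighted retraining would play the role of `weighted_erm`):

```python
def weighted_erm(hyps, sample, weights):
    """Minimizer of the weighted training error over a finite pool."""
    def wrisk(h):
        return sum(w for (x, y), w in zip(sample, weights) if h(x) != y)
    return min(hyps, key=wrisk)

def disbelief_index(hyps, sample, x):
    """Definition 6.2 via the heavy-weight trick: append x with the flipped label,
    weighted more than all real training points combined, and retrain."""
    def emp_risk(h):
        return sum(h(xi) != yi for xi, yi in sample) / len(sample)
    h_hat = min(hyps, key=emp_risk)
    big = 10 * len(sample)               # dominates the total weight of real points
    h_forced = weighted_erm(hyps, sample + [(x, -h_hat(x))],
                            [1.0] * len(sample) + [big])
    return emp_risk(h_forced) - emp_risk(h_hat)

hyps = [lambda x, t=t: 1 if x > t else -1 for t in (0.0, 0.5, 1.0)]
sample = [(0.25, -1), (0.75, 1), (1.5, 1)]
d_mid = disbelief_index(hyps, sample, 0.75)
```

Because flipping the label of 0.75 forces the pool's ERM onto a strictly worse hypothesis, `d_mid` comes out positive, matching the intuition behind Definition 6.2.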
8 Empirical results
Focusing on SVMs with a linear kernel we compared the RC (Risk-Coverage) curves achieved by
the proposed method with those achieved by SVM with rejection based on distance from decision
boundary. This latter approach is very common in practical applications of selective classification.
For implementation we used LIBSVM [15].
Before presenting these results we wish to emphasize that the proposed method leads to rejection regions fundamentally different from those obtained by the traditional distance-based technique. In Figure 1 we depict those regions for a training sample of 150 points sampled from a mixture of two identical normal distributions (centered at different locations). The height map reflects the "confidence regions" of each technique according to its own confidence measure.
(a)
(b)
Figure 1: Confidence height maps using (a) the disbelief index; (b) distance from decision boundary.
We tested our algorithm on standard medical diagnosis problems from the UCI repository, including
all datasets used in [16]. We transformed nominal features to numerical ones in a standard way using
binary indicator attributes. We also normalized each attribute independently so that its dynamic
range is [0, 1]. No other preprocessing was employed.
In each iteration we chose, uniformly at random, non-overlapping training (100 samples) and test (200 samples) sets for each dataset. An SVM was trained on the entire training set, and test samples were sorted according to confidence (either distance from decision boundary or the disbelief index).
Figure 2 depicts the RC curves of our technique (red solid line) and rejection based on distance from
decision boundary (green dashed line) for linear kernel on all 6 datasets. All results are averaged
over 500 iterations (error bars show standard error).
[Figure 2: six RC-curve panels, one per dataset (Hypo, Pima, Hepatitis, Haberman, BUPA, Breast), each plotting test error against coverage c.]
Figure 2: RC curves for SVM with linear kernel. Our method in solid red, and rejection based on distance from decision boundary in dashed green. The horizontal axis (c) represents coverage.
With the exception of the Hepatitis dataset, on which both methods were statistically indistinguishable, on all other datasets the proposed method exhibits a significant advantage over the traditional approach. We would like to highlight the performance of the proposed method on the Pima dataset. While the traditional approach cannot achieve an error below 8% for any rejection rate, with our approach the test error decreases monotonically to zero as the rejection rate grows. Furthermore, a clear advantage for our method over a large range of rejection rates is evident in the Haberman dataset.²
2. The Haberman dataset contains survival data of patients who had undergone surgery for breast cancer. With an estimated 207,090 new cases of breast cancer in the United States during 2010 [17], an improvement of 1% affects the lives of more than 2,000 women.
For the sake of fairness, we note that the running time of our algorithm (as presented here) is substantially longer than the traditional technique. The performance of our algorithm can be substantially
improved when many unlabeled samples are available. Details will be provided in the full paper.
9 Related work
The literature on theoretical studies of selective classification is rather sparse. El-Yaniv and Wiener [1] studied the performance of a simple selective learning strategy for the realizable case. Given a hypothesis class H and a sample Sm, their method abstains from prediction if the hypotheses in the version space do not all agree on the target sample. They were able to show that their selective classifier achieves perfect classification with meaningful coverage under some conditions. Our work can be viewed as an extension of the above algorithm to the agnostic case.
Freund et al. [18] studied another simple ensemble method for binary classification. Given a hypothesis class H, the method outputs a weighted average of all the hypotheses in H, where the weight of each hypothesis depends exponentially on its individual training error. Their algorithm abstains from prediction whenever the weighted average of all individual predictions is close to zero. They were able to bound the probability of misclassification by 2R(h*) + ε(m) and, under some conditions, they proved a bound of 5R(h*) + ε(F, m) on the rejection rate. Our algorithm can be viewed as an extreme variation of the Freund et al. method. We include in our "ensemble" only hypotheses with sufficiently low empirical error, and we abstain if the weighted average of all predictions is not definitive (≠ ±1). Our risk and coverage bounds are asymptotically tighter.
Excess risk bounds were developed by Herbei and Wegkamp [19] for a model where each rejection incurs a cost 0 ≤ d ≤ 1/2. Their bound applies to any empirical risk minimizer over a hypothesis class of ternary hypotheses (whose output is in {±1, reject}). See also various extensions [20, 21].
A rejection mechanism for SVMs based on distance from decision boundary is perhaps the most
widely known and used rejection technique. It is routinely used in medical applications [22, 23, 24].
A few papers have proposed alternative techniques for rejection in the case of SVMs. These include taking the reject area into account during optimization [25], training two SVM classifiers with asymmetric cost [26], and using a hinge loss [20]. Grandvalet et al. [16] proposed an efficient implementation of SVM with a reject option using a double hinge loss. They empirically compared their results with
two other selective classifiers: the one proposed by Bartlett and Wegkamp [20] and the traditional
rejection based on distance from decision boundary. In their experiments there was no statistically
significant advantage to either method compared to the traditional approach for high rejection rates.
10 Conclusion
We presented and analyzed a learning strategy for selective classification that achieves weak optimality. We showed that the coverage rate directly depends on the disagreement coefficient, thus linking active learning and selective classification. Recently it has been shown that, for the noise-free case, active learning can be reduced to selective classification [27]. We conjecture that such a reduction also holds in noisy settings. Exact implementation of our strategy, or exact computation of the disbelief index, may be too difficult to achieve, or even to obtain with approximation guarantees. We presented one algorithm that heuristically approximates the required behavior, and there is certainly room for other, perhaps better, methods and variants. Our empirical examination of the proposed algorithm indicates that it can provide a significant and consistent advantage over the traditional rejection technique with SVMs. This advantage can be of great value, especially in medical diagnosis applications and other mission-critical classification tasks. The algorithm itself can be implemented using off-the-shelf packages.
Acknowledgments
This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication reflects only the authors' views.
References
[1] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. JMLR, 11:1605–1641, 2010.
[2] C.K. Chow. An optimum character recognition system using decision function. IEEE Trans. Computer, 6(4):247–254, 1957.
[3] C.K. Chow. On optimum recognition error and reject trade-off. IEEE Trans. on Information Theory, 16:41–46, 1970.
[4] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages 353–360, 2007.
[5] S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.
[6] P.L. Bartlett, S. Mendelson, and P. Philips. Local complexities for empirical risk minimization. In COLT: Proceedings of the Workshop on Computational Learning Theory. Morgan Kaufmann Publishers, 2004.
[7] V. Koltchinskii. 2004 IMS medallion lecture: Local Rademacher complexities and oracle inequalities in risk minimization. Annals of Statistics, 34:2593–2656, 2006.
[8] P.L. Bartlett and S. Mendelson. Discussion of "2004 IMS medallion lecture: Local Rademacher complexities and oracle inequalities in risk minimization" by V. Koltchinskii. Annals of Statistics, 34:2657–2663, 2006.
[9] A.B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135–166, 2004.
[10] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 49–56. ACM, 2009.
[11] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. In Advanced Lectures on Machine Learning, volume 3176 of Lecture Notes in Computer Science, pages 169–207. Springer, 2003.
[12] L. Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. JMLR, pages 2269–2292, 2011.
[13] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
[14] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems 23, 2010.
[15] C.C. Chang and C.J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[16] Y. Grandvalet, A. Rakotomamonjy, J. Keshet, and S. Canu. Support vector machines with a reject option. In NIPS, pages 537–544. MIT Press, 2008.
[17] American Cancer Society. Cancer facts and figures. 2010.
[18] Y. Freund, Y. Mansour, and R.E. Schapire. Generalization bounds for averaged classifiers. Annals of Statistics, 32(4):1698–1722, 2004.
[19] R. Herbei and M.H. Wegkamp. Classification with reject option. The Canadian Journal of Statistics, 34(4):709–721, 2006.
[20] P.L. Bartlett and M.H. Wegkamp. Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9:1823–1840, 2008.
[21] M.H. Wegkamp. Lasso type classifiers with a reject option. Electronic Journal of Statistics, 1:155–168, 2007.
[22] S. Mukherjee, P. Tamayo, D. Slonim, A. Verri, T. Golub, J.P. Mesirov, and T. Poggio. Support vector machine classification of microarray data. Technical report, AI Memo 1677, Massachusetts Institute of Technology, 1998.
[23] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, pages 389–422, 2002.
[24] S. Mukherjee. Chapter 9: Classifying microarray data using support vector machines. Kluwer Academic Publishers, 2003.
[25] G. Fumera and F. Roli. Support vector machines with embedded reject option. In Pattern Recognition with Support Vector Machines: First International Workshop, pages 811–919, 2002.
[26] R. Sousa, B. Mora, and J.S. Cardoso. An ordinal data method for the classification with reject option. In ICMLA, pages 746–750. IEEE Computer Society, 2009.
[27] R. El-Yaniv and Y. Wiener. Active learning via perfect selective classification. Accepted to JMLR, 2011.
the intersection of sets in a connectionist model
Janet Wiles, Michael S. Humphreys, John D. Bain and Simon Dennis
Departments of Psychology and Computer Science
University of Queensland QLD 4072 Australia
email: [email protected]
Abstract
For lack of alternative models, search and decision processes have provided the
dominant paradigm for human memory access using two or more cues, despite
evidence against search as an access process (Humphreys, Wiles & Bain, 1990).
We present an alternative process to search, based on calculating the intersection
of sets of targets activated by two or more cues. Two methods of computing
the intersection are presented, one using information about the possible targets,
the other constraining the cue-target strengths in the memory matrix. Analysis
using orthogonal vectors to represent the cues and targets demonstrates the
competence of both processes, and simulations using sparse distributed
representations demonstrate the performance of the latter process for tasks
involving 2 and 3 cues.
1 INTRODUCTION
Consider a task in which a subject is asked to name a word that rhymes with oast. The
subject answers "most", (or post, host, toast, boast, ...). Now the subject is asked to find
a word that means a mythical being that rhymes with oast. She or he pauses slightly and
replies "ghost".
The difference between the first and second questions is that the first requires the use of
one cue to access memory. The second question requires the use of two cues - either
combining them before the access process, or combining the targets they access. There
are many experimental paradigms in psychology in which a subject uses two or more
cues to perform a task (Rubin & Wallace, 1989). One default assumption underlying
many explanations for the effective use of two cues relies on a search process through
memory.
Models of human memory based on associative access (using connectionist models) have
provided an alternative paradigm to search processes for memory access using a single cue
(Anderson, Silverstein, Ritz & Jones, 1977; McClelland & Rumelhart, 1986), and for
two cues which have been studied together (Humphreys, Bain & Pike 1989). In some
respects, properties of these models correspond very closely to the characteristics of
human memory (Rumelhart, 1989). In addition to the evidence against search processes
for memory access using a single cue, there is also experimental evidence against
sequential search in some tasks requiring the combination of two cues, such as cued recall
with an extra-list cue, cued recall with a part-word cue, lexical access and semantic access
(Humphreys, Wiles & Bain, 1990). Furthermore, in some of these tasks it appears that
the two cues have never jointly occurred with the target. In such a situation, the tensor
product employed by Humphreys et al. to bind the two cues to the target cannot be
employed, nor can the co-occurrences of the two cues be encoded into the hidden layer of a
three-layer network. In this paper we present the computational foundation for an
alternative process to search and decision, based on parallel (or direct) access for the
intersection of sets of targets that are retrieved in response to cues that have not been
studied together.
Definition of an intersection in the cue-target paradigm: Given a set of cue-target pairs,
and two (or more) access cues, then the intersection specified by the access cues is defined
to be the set of targets which are associated with both cues. If the cue-target strengths are
not binary, then they are constrained to lie between 0 and 1, and targets in the intersection
are weighted by the product of the cue-target strengths. A complementary definition for a
union process could be the set of targets associated with anyone or more of the access
cues, weighted by the sum of the target strengths.
In the models that are described below, we assume that the access cues and targets are
represented as vectors, the cue-target associations are represented in a memory matrix and
the set of targets retrieved in response to one or more cues is represented as a linear
combination, or blend, of target vectors associated with that cue or cues. Note that under
this definition, if there is more than one target in the intersection, then a second stage is
required to select a unique target to output from the retrieved linear combination. We do
not address this second stage in this paper.
A task requiring intersection: In the rhyming task described above, the rhyme and
semantic cues have extremely low separate probabilities of accessing the target, ghost,
but a very high joint probability. In this study we do not distinguish between the
representation of the semantic and part-word cues, although it would be required for a
more detailed model. Instead, we focus on the task of retrieving a target weakly associated
with two cues. We simulate this condition in a simple task using two cues, C1 and C2,
and three targets, T1, T2 and T3. Each cue is strongly associated with one target, and
weakly associated with a second target, as follows (strengths of association are shown
above the arrows):

C1 --0.9--> T1        C1 --0.1--> T2        C2 --0.1--> T2        C2 --0.9--> T3
The intersection of the targets retrieved to the two cues, C1 and C2, is the target, T2,
with a strength of 0.01. Note that in this example, a model based on vector addition
would be insufficient to select target, T2, which is weakly associated with both cues, in
preference to either target, T1 or T3, which are strongly associated with one cue each.
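The failure of pure addition here is easy to check numerically. The following sketch (ours, not from the paper; the helper names are illustrative) scores each target under additive and multiplicative combination of the cue-target strengths above:

```python
# Cue-target strengths from the worked example (C1 and C2 are the cues).
c1 = {"T1": 0.9, "T2": 0.1}
c2 = {"T2": 0.1, "T3": 0.9}
targets = ["T1", "T2", "T3"]

def additive_score(t):
    """Vector-addition analogue: sum of strengths across cues."""
    return c1.get(t, 0.0) + c2.get(t, 0.0)

def intersection_score(t):
    """Intersection: product of strengths across cues."""
    return c1.get(t, 0.0) * c2.get(t, 0.0)

best_add = max(targets, key=additive_score)       # T1 (tied with T3, broken by order)
best_mult = max(targets, key=intersection_score)  # T2, with strength 0.01
```

Addition favours the strongly-but-singly associated targets, while the product singles out T2 with the 0.01 strength quoted above.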
2 IMPLEMENTATIONS OF INTERSECTION PROCESSES
2.1 LOCAL REPRESENTATIONS
Given a local representation for two sets of targets, their intersection can be computed by
multiplying the activations elicited by each cue. This method extends to sparse
representations with some noise from cross product terms, and has been used by Dolan
and Dyer (1989) in their tensor model, and Touretzky and Hinton (1989) in the
Distributed Connectionist Production System (for further discussion see Wiles,
Humphreys, Bain & Dennis, 1990). However, multiplying activation strengths does not
extend to fully distributed representations, since multiplication depends on the basis
representation (i.e., the target patterns themselves) and the cross-product terms do not
necessarily cancel. One strong implication of this for implementing an intersection
process, is that the choice of patterns is not critical in a linear process (such as vector
addition) but can be critical in a non-linear process (which is necessary for computing
intersections). An intersection process requires more information about the target patterns
themselves.
It is interesting to note that the inner product of the target sets (equivalent to the match
process in Humphreys et al.'s (1989) Matrix model) can be used to determine whether or
not the intersection of targets is empty, if the target vectors are orthogonal, although it
cannot be used to find the particular vectors which are in the intersection.
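For instance, with one-unit-per-target (local) codes the multiplicative intersection is just an element-wise product of the two retrieved activation vectors. The sketch below is ours, reusing the strengths from the worked example:

```python
import numpy as np

# Local code: one unit per target (T1, T2, T3).
T = np.eye(3)

# Activation patterns retrieved to each cue.
act_c1 = 0.9 * T[0] + 0.1 * T[1]   # C1 activates T1 strongly, T2 weakly
act_c2 = 0.1 * T[1] + 0.9 * T[2]   # C2 activates T2 weakly, T3 strongly

# Element-wise multiplication implements the intersection for local codes:
# only the T2 unit is nonzero, with strength 0.1 * 0.1 = 0.01.
inter = act_c1 * act_c2
```

With fully distributed patterns the same element-wise product picks up basis-dependent cross-product terms, which is exactly the failure mode discussed above.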
2.2 USING INFORMATION ABOUT TARGET VECTORS
A local representation enables multiplication of activation strengths because there is
implicit knowledge about the allowable target vectors in the local representation itself.
The first method we describe for computing the intersection of fully distributed vectors
uses information about the targets, explicitly represented in an auto-associative memory,
to filter out cross-product terms: In separate operations, each cue is used to access the
memory matrix and retrieve a composite target vector (the linear combination of
associated targets). A temporary matrix is formed from the outer product of these two
composite vectors. This matrix will contain product terms between all the targets in the
intersection set as well as noise in the form of cross-product terms. The cross-product
terms can be filtered from the temporary matrix by using it as a retrieval cue for accessing
a three-dimensional auto-associator (a tensor of rank 3) over all the targets in the original
memory. If the target vectors are orthonormal, then this process will produce a vector
which contains no noise from cross-product terms, and is the linear combination of all
targets associated with both cues (see Box 1).
Box 1. Creating a temporary matrix from the product of the target vectors, then filtering
out the noise terms: Let the cues and targets be represented by vectors which are mutually
orthonormal (i.e., Ci.Ci = Ti.Ti = 1, Ci.Cj = Ti.Tj = 0 for i ≠ j, i, j = 1, 2, 3). The memory
matrix, M, is formed from cue-target pairs, weighted by their respective strengths, as
follows:

M = 0.9 C1 T1' + 0.1 C1 T2' + 0.1 C2 T2' + 0.9 C2 T3'

where Ti' represents the transpose of Ti, and Cj Ti' is the outer product of Cj and Ti.
In addition, let Z be a three-dimensional auto-associative memory (or tensor of rank 3)
created over three orthogonal representations of each target (i.e., Ti is a column vector, Ti'
is a row vector which is the transpose of Ti, and Ti'' is the vector in a third direction
orthogonal to both, where i = 1, 2, 3), as follows:
Z = Σi Ti Ti' Ti''
Let a two-dimensional temporary matrix, X, be formed by taking the outer product of
target vectors retrieved to the access cues, as follows:
X = (C1 M) (C2 M)'
  = (0.9 T1 + 0.1 T2) (0.1 T2 + 0.9 T3)'
  = 0.09 T1 T2' + 0.81 T1 T3' + 0.01 T2 T2' + 0.09 T2 T3'
Using the matrix X to access the auto-associator Z will produce a vector from which all
the cross-product terms have been filtered, as follows:

X Z = (0.09 T1 T2' + 0.81 T1 T3' + 0.01 T2 T2' + 0.09 T2 T3') (Σi Ti Ti' Ti'')
    = 0.01 T2''

since all other terms cancel.
This vector is the required intersection of the linear combination of target vectors
associated with both the input cues, C1 and C2, weighted by the product of the strengths
of associations from the cues to the targets.
A major advantage of the above process is that only matrix (or tensor) operations are used,
which simplifies both the implementation and the analysis. The behaviour of the system
can be analysed either at the level of behaviours of patterns, or using a coordinate system
based on individual units, since in a linear system these two levels of description are
isomorphic. In addition, the auto-associative target matrix could be created incrementally
when the target vectors are first learnt by the system using the matrix memory. The
disadvantages include the requirement for dynamic creation and short term storage of the
two dimensional product-of-targets matrix, and the formation and much longer term
storage of the three dimensional auto-associative matrix. It is possible, however, that an
auto-associator may be part of the output process.
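The whole Box 1 computation reduces to a few tensor contractions. The sketch below (ours, not the paper's implementation) reproduces it with one-hot, hence orthonormal, cue and target vectors and confirms that only the weighted intersection 0.01 T2 survives the filtering:

```python
import numpy as np

C1, C2 = np.eye(2)        # orthonormal cue vectors
T1, T2, T3 = np.eye(3)    # orthonormal target vectors

# Memory matrix of weighted cue-target pairs (Box 1).
M = (0.9 * np.outer(C1, T1) + 0.1 * np.outer(C1, T2) +
     0.1 * np.outer(C2, T2) + 0.9 * np.outer(C2, T3))

# Rank-3 auto-associator over the targets: Z = sum_i Ti Ti' Ti''.
# (Here the three orthogonal "directions" all reuse the same basis copy of Ti.)
Z = sum(np.einsum('i,j,k->ijk', Ti, Ti, Ti) for Ti in (T1, T2, T3))

# Temporary matrix from the two retrieved composites, filtered through Z.
X = np.outer(C1 @ M, C2 @ M)
result = np.einsum('ij,ijk->k', X, Z)   # cross-product terms cancel here
```

The contraction keeps only the diagonal entries of X, so the output equals 0.01 times the T2 pattern, matching the derivation in Box 1.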
2.3 ADDITIVE APPROXIMATIONS TO MULTIPLICATIVE PROCESSES
An alternative approach to using the target auto-associator for computing the intersection,
is to incorporate a non-linearity at the time of memory storage, rather than memory
access. The aim of this transform would be to change the cue-target strengths so that
linear addition of vectors could be used for computing the intersection. An operation that
is equivalent to multiplication is the addition of logarithms. If the logarithm of each cue-target strength was calculated and stored at the time of association, then an additive access
process would retrieve the intersection of the inputs. More generally, it may be possible
to use an operation that preserves the same order relations (in terms of strengths) as
multiplication. It is always possible to find a restricted range of association strengths
such that the sum of a number of weak cue-target associations will produce a stronger
target activation than the sum of a smaller number of strong cue-target associations. For
example, by scaling the target strengths to the range [(n-1)/n, 1] where n is the number
of simultaneously available cues, vector addition can be made to approximate
multiplication of target strengths.
This method has the advantage of extending naturally to non-orthogonal vectors, and to
the combination of three or more cues, with performance limits determined solely by
cross-talk between the vectors. Time taken is proportional to the number of cues, and
noise is proportional to the product of the set sizes and cross-correlation between the
vectors.
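The log-strength trick is easy to verify: storing log strengths and summing at access time ranks targets exactly as multiplying the raw strengths would. A small check (ours, using the numbers from the earlier worked example):

```python
import math

strengths = {                       # cue -> {target: strength}
    "C1": {"T1": 0.9, "T2": 0.1},
    "C2": {"T2": 0.1, "T3": 0.9},
}
targets = ["T1", "T2", "T3"]

def product_score(t):
    """Multiplicative intersection over raw strengths."""
    s = 1.0
    for cue_set in strengths.values():
        s *= cue_set.get(t, 0.0)
    return s

def summed_log_score(t):
    """Additive access over stored log-strengths; absent pairs contribute -inf."""
    total = 0.0
    for cue_set in strengths.values():
        w = cue_set.get(t, 0.0)
        total += math.log(w) if w > 0 else float("-inf")
    return total

best_by_product = max(targets, key=product_score)
best_by_log_sum = max(targets, key=summed_log_score)   # same winner
```

Both rankings select T2, the only target associated with both cues, since the order relations of products are preserved under summed logarithms.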
3 SIMULATIONS OF THE ADDITIVE PROCESS
Two simulations of the additive process using scaled target strengths were performed to
demonstrate the feasibility of the method producing a target weakly associated with two
cues, in preference to targets with much higher probabilities of being produced in
response to a single cue. As a work-around for the problem of how (and when) to
decompose the composite output vector, the target with the strongest correlation with the
composite output was selected as the winner. To simulate the addition of some noise,
non-orthogonal vectors were used.
The first simulation involved two cues, C 1 and C2, and three targets, T1, T2 and T3,
represented as randomly generated 100 dimensional vectors, 20% 1s, the remainder 0s.
Cue C1 was strongly associated with target T1 and weakly associated with target T2, cue
C2 was strongly associated with target T3 and weakly associated with target T2. A trial
consisted of generating random cue and target vectors, forming a memory matrix from
their outer products (multiplied by 0.9 for strong associates and 0.6 for weak associates;
note that these strengths have been scaled to the range, [0,1]), and then pre-multiplying
the memory matrix by the appropriate cue (i.e., either C1 or C2 or C1 + C2).
The memory matrix, M, was formed as shown in Box 1. Retrieval to a cue, C1, was as
follows: C1 M = 0.9 (C1.C1) T1' + 0.6 (C1.C1) T2' + 0.6 (C1.C2) T2' + 0.9 (C1.C2) T3'. In
this case, the cross-product terms, C1.C2, do not cancel since the vectors are not
orthogonal, although their expected contribution to the output is small (expected
correlation 0.04). The winning target vector was the one that had the strongest
correlation (smallest normalized dot product) with the resulting output vector. The results
are shown in Table 1.
Table 1: Number of times each target was retrieved in 100 trials.
         t1    t2    t3
c1       92     8     0
c2        0     9    91
c1+c2     9    80    11
Over 100 trials, the results show that when either cue C1 or C2 was presented alone, the
target with which it was most strongly paired was retrieved in over 90% of cases. Target
T2 had very low probabilities of recall given either C1 or C2 (8% and 9% respectively),
however, it was very likely to be recalled if both cues were presented (80%).
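The trial procedure can be reproduced in a few lines. The sketch below (ours) replaces the paper's random 20%-dense vectors with one-hot orthonormal vectors so the outcome is deterministic, keeping the simulation's association strengths of 0.9 (strong) and 0.6 (weak):

```python
import numpy as np

C1, C2 = np.eye(2)
T1, T2, T3 = np.eye(3)
targets = [T1, T2, T3]

# Memory matrix with strong (0.9) and weak (0.6) cue-target associations.
M = (0.9 * np.outer(C1, T1) + 0.6 * np.outer(C1, T2) +
     0.6 * np.outer(C2, T2) + 0.9 * np.outer(C2, T3))

def winner(cue):
    """Index of the target best correlated with the composite output."""
    out = cue @ M
    scores = [out @ t / (np.linalg.norm(out) * np.linalg.norm(t)) for t in targets]
    return int(np.argmax(scores))

# C1 alone retrieves T1, C2 alone retrieves T3, but the combined cue
# retrieves T2, since 0.6 + 0.6 exceeds 0.9.
w_c1, w_c2, w_both = winner(C1), winner(C2), winner(C1 + C2)
```

With the paper's non-orthogonal sparse vectors the same procedure is stochastic, which is why the table reports frequencies over 100 trials rather than a single deterministic outcome.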
The first simulation demonstrated the multi-cue paradigm with the simple two-cue and
three-target case. In a second simulation, the system was tested for robustness in a
similar case involving three cues, C1 to C3, and four targets, T1 to T4. The results
show that T4 had low probabilities of recall given either C1, C2 or C3 (13%, 22% and
18% respectively), medium probabilities of recall given a combination of two cues (36%,
31 % and 28%), and was most likely to be recalled if all three cues were presented (44%).
For this task, when three cues are presented concurrently, in the ideal intersection only T4
should be produced. The results show that it is produced more often than the other targets
(44% compared with 22%, 18% and 16%), each of which is strongly associated with two
out of the three cues, but there is considerably more noise than in the two-cue case. (See
Wiles, Humphreys, Bain & Dennis, 1990, for further details.)
4 DISCUSSION
The simulation results demonstrated the effect of the initial scaling of the cue-target
strengths, and non-linear competition between the target outputs. It is important to note
the difference between the association strengths from cues to targets and the cued recall
probability of each target. In memory research, the association strengths have been
traditionally identified with the probability of recall. However, in a connectionist model
the association strengths are related to the weights in the network and the cued recall
probability is the probability of recall of a given target to a given cue.
This paper builds on the idea that direct access is the default access method for human
memory, and that all access processes are cue based. The immediate response from
memory is a blend of patterns, which provide a useful intermediate stage. Other processes
may act on the blend of patterns before a single target is selected for output in a
successive stage. One such process that may act on the intermediate representation is an
intersection process that operates over blends of targets. Such a process would provide an
alternative to search as a computational technique in psychological paradigms that use
two or more cues. We don't claim that we have described the way to implement such a
process - much more is required to investigate these issues. The two methods presented
here have served to demonstrate that direct access intersection is a viable neural network
technique. This demonstration means that more processing can be performed in the
network dynamics, rather than by the control structures that surround memory.
Acknowledgements
Our thanks to Anthony Bloesch, Michael Jordan, Julie Stewart, Michael Strasser and
Roland Sussex for discussions and comments. This work was supported by grants from
the Australian Research Council, a National Research Fellowship to J. Wiles and an
Australian Postgraduate Research Award to S. Dennis.
References
Anderson, J.A., Silverstein, J.W., Ritz, S.A. and Jones, R.S. Distinctive features,
categorical perception, and probability learning: Some applications of a neural model.
Psychological Review, 84, 413-451, 1977.
Dolan, C. and Dyer, M.G. Parallel retrieval and application of conceptual knowledge.
Proceedings of the 1988 Connectionist Models Summer School, San Mateo, Ca:
Morgan Kaufmann, 273-280, 1989.
Humphreys, M.S., Bain, J.D. and Pike, R. Different ways to cue a coherent memory
system: A theory for episodic, semantic and procedural tasks. Psychological Review,
96:2, 208-233, 1989.
Humphreys, M.S., Wiles, J. and Bain, J.D. Direct Access: Cues with separate histories.
Paper presented at Attention and Performance 14, Ann Arbor, Michigan, July, 1990.
McClelland, J.L. and Rumelhart, D.E. A distributed model of memory. In McClelland,
J.L. and Rumelhart, D.E. (eds.) Parallel Distributed Processing: Explorations in the
microstructure of cognition, 170-215, MIT Press, Cambridge, MA, 1986.
Rubin, D.C. and Wallace, W.T. Rhyme and reason: Analysis of dual retrieval cues.
Journal of Experimental Psychology: Learning, Memory and Cognition, 15:4, 698-709, 1989.
Rumelhart, D.S. The architecture of mind: A connectionist approach. In Posner, M.1.
(ed.) Foundations of Cognitive Science, 133-159, MIT Press, Cambridge, MA, 1989.
Touretzky, D.S. and Hinton, G.E. A distributed connectionist production system.
Cognitive Science, 12, 423-466, 1988.
Wiles, J., Humphreys, M.S., Bain, J.D. and Dennis, S. Control processes and cue
combinations in a connectionist model of human memory. Department of Computer
Science Technical Report, #186, University of Queensland, October 1990, 40pp.
Active Classification based on Value of Classifier
Daphne Koller
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]
Tianshi Gao
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Modern classification tasks usually involve many class labels and can be informed
by a broad range of features. Many of these tasks are tackled by constructing a
set of classifiers, which are then applied at test time and then pieced together in a
fixed procedure determined in advance or at training time. We present an active
classification process at the test time, where each classifier in a large ensemble
is viewed as a potential observation that might inform our classification process.
Observations are then selected dynamically based on previous observations, using
a value-theoretic computation that balances an estimate of the expected classification gain from each observation as well as its computational cost. The expected
classification gain is computed using a probabilistic model that uses the outcome
from previous observations. This active classification process is applied at test
time for each individual test instance, resulting in an efficient instance-specific decision path. We demonstrate the benefit of the active scheme on various real-world
datasets, and show that it can achieve comparable or even higher classification accuracy at a fraction of the computational costs of traditional methods.
1 Introduction
As the scope of machine learning applications has increased, the complexity of the classification
tasks that are commonly tackled has grown dramatically. On one dimension, many classification
problems involve hundreds or even thousands of possible classes [8]. On another dimension, researchers have spent considerable effort developing new feature sets for particular applications, or
new types of kernels. For example, in an image labeling task, we have the option of using GIST
feature [26], SIFT feature [23], spatial HOG feature [33], Object Bank [21] and more. The benefits
of combining information from different types of features can be very significant [12, 33].
To solve a complex classification problem, many researchers have resorted to ensemble methods, in
which multiple classifiers are combined to achieve an accurate classification decision. For example,
the Viola-Jones classifier [32] uses a cascade of classifiers, each of which focuses on different spatial
and appearance patterns. Boosting [10] constructs a committee of weak classifiers, each of which
focuses on different input distributions. Multiclass classification problems are very often reduced
to a set of simpler (often binary) decisions, including one-vs-one [11], one-vs-all, error-correcting
output codes [9, 1], or tree-based approaches [27, 13, 3]. Intuitively, different classifiers provide
different ?expertise? in making certain distinctions that can inform the classification task. However,
as we discuss in Section 2, most of these methods use a fixed procedure determined at training time
to apply the classifiers without adapting to each individual test instance.
In this paper, we take an active and adaptive approach to combine multiple classifiers/features at
test time, based on the idea of value of information [16, 17, 24, 22]. At training time, we construct
a rich family of classifiers, which may vary in the features that they use or the set of distinctions
that they make (i.e., the subset of classes that they try to distinguish). Each of these classifiers is
trained on all of the relevant training data. At test time, we dynamically select an instance-specific
subset of classifiers. We view each of our pre-trained classifiers as a possible observation we can make
about an instance; each one adds a potential value towards our ability to classify the instance, but
also has a cost. Starting from an empty set of observations, at each stage, we use a myopic value-of-information computation to select the next classifier to apply to the instance in a way that attempts to
increase the accuracy of our classification state (e.g., decrease the uncertainty about the class label)
at a low computational cost. This process stops when one of the suitable criteria is met (e.g., if
we are sufficiently confident about the prediction). We provide an efficient probabilistic method for
estimating the uncertainty of the class variable and about the expected gain from each classifier. We
show that this approach provides a natural trajectory, in which simple, cheap classifiers are applied
initially, and used to provide guidance on which of our more expensive classifiers is likely to be
more informative. In particular, we show that we can get comparable (or even better) performance
to a method that uses a large range of expensive classifiers, at a fraction of the computational cost.
2 Related Work
Our classification model is based on multiple classifiers, so it resembles ensemble methods like
boosting [10], random forests [4] and output-coding based multiclass classification [9, 1, 29, 14].
However, these methods use a static decision process, where all classifiers have to be evaluated
before any decision can be made. Moreover, they often consider a homogeneous set of classifiers,
but we consider a variety of heterogeneous classifiers with different features and function forms.
Some existing methods can make classification decisions based on partial observations. One example is a cascade of classifiers [32, 28], where an instance goes through a chain of classifiers and the
decision can be made at any point if the classifier response passes some threshold. Another type of
method focuses on designing the stopping criteria. Schwing et al. [30] proposed a stopping criterion
for random forests such that decisions can be made based on a subset of the trees. However, these
methods have a fixed evaluation sequence for any instance, so there is no adaptive selection of which
classifiers to use based on what we have already observed.
Instance-specific decision paths based on previous observations can be found in decision tree style
models, e.g., DAGSVM [27] and tree-based methods [15, 13, 3]. Instead of making hard decisions
based on individual observations like these methods, we use a probabilistic model to fuse information from multiple observations and only make decisions when it is sufficiently confident.
When observations are associated with different features, our method also performs feature selection. Instead of selecting a fixed set of features in the learning stage [34], we actively select instance-specific features in the test stage. Furthermore, our method also considers computational properties
of the observations. Our selection criterion trades off between the statistical gain and the computational cost of the classifier, resulting in a computationally efficient cheap-to-expensive evaluation
process. Similar ideas are hard-coded by Vedaldi et al. [31] without adaptive decisions about when to
switch to which classifier with what cost. Angelova et al. [2] performed feature selection to achieve
certain accuracy under some computational budget, but the selection is at training time without adaptation to individual test instances. Chai et al. [5] considered test-time feature value acquisition with
a strong assumption that observations are conditionally independent given the class variable.
Finally, our work is inspired by decision-making under uncertainty based on value of information [16, 17, 24, 22]. For classification, Krause and Guestrin [19] used it to compute a conditional
plan for asking the expert, trying to optimize classification accuracy while requiring as little expert
interaction as possible. In machine learning, Cohn et al. [7] used active learning to select training
instances to reduce the labeling cost and speedup the learning, while our work focuses on inference.
3 Model
We denote the instance and label pair as (X, Y ). Furthermore, we assume that we have been provided a set of trained classifiers H, where each hi ∈ H : X → R can be any real-valued classifier
(function) from existing methods. For example, for multiclass classification, hi can be one-vs-all
classifiers, one-vs-one classifiers and weak learners from the boosting algorithms. Note that the hi's
do not have to be homogeneous meaning that they can have different function forms, e.g., linear
or nonlinear, and more importantly they can be trained on different types of features with various
computational costs. Given an instance x, our goal is to infer Y by sequentially selecting one hi to
evaluate at a time, based on what has already been observed, until we are sufficiently confident about
Y or some other stopping criterion is met, e.g., the computational constraint. The key in this process
is how valuable we think a classifier hi is, so we introduce the value of a classifier as follows.
Value of Classifier. Let O be the set of classifiers that have already been evaluated (empty at the
beginning). Denote the random variable M_i = h_i(X) as the response/margin of the i-th classifier in H, and denote the random vector of the observed classifiers as M_O = [M_{o_1}, M_{o_2}, . . . , M_{o_|O|}]^T for o_i ∈ O. Given the actual observed values m_O of M_O, we have a posterior P(Y|m_O) over Y. For now, suppose we are given a reward R : P → ℝ which takes in a distribution P and returns a real value indicating how preferable P is. Furthermore, we use C(h_i|O) to denote the computational cost of evaluating classifier h_i conditioned on the set of evaluated classifiers O. This is because if h_i shares the same feature with some o_i ∈ O, we do not need to compute the feature again. With some chosen reward R and a computational model C(h_i|O), we define the value of an
unobserved classifier as follows.
Definition 1. The value of classifier V(h_i|m_O) for a classifier h_i, given the observed classifier responses m_O, is the combination of the expected reward of the state informed by h_i and the computational cost of h_i. Formally,

    V(h_i|m_O) = ∫ P(m_i|m_O) R(P(Y|m_i, m_O)) dm_i − (1/λ) C(h_i|O)
               = E_{m_i ~ P(M_i|m_O)}[R(P(Y|m_i, m_O))] − (1/λ) C(h_i|O)    (1)
The value of classifier has two parts, corresponding to the statistical and computational properties of the classifier respectively. The first part V_R(h_i|m_O) = E[R(P(Y|m_i, m_O))] is the expected reward of P(Y|m_i, m_O), where the expectation is with respect to the posterior of M_i given m_O. The second part V_C(h_i|m_O) = −(1/λ) C(h_i|O) is a computational penalty incurred by evaluating the classifier h_i. The constant λ controls the tradeoff between the reward and the cost.
Given the definition of the value of classifier, at each step of our sequential evaluations, our goal is to pick the h_i with the highest value:

    h* = argmax_{h_i ∈ H\O} V(h_i|m_O) = argmax_{h_i ∈ H\O} V_R(h_i|m_O) + V_C(h_i|m_O)    (2)
We introduce the building blocks of the value of classifier, i.e., the reward, the cost and the probabilistic model in the following, and then explain how to compute it.
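To make Eqs. (1)-(2) concrete, here is a minimal Python sketch (not the implementation used for the experiments; the sampled margins and `posterior_fn` are illustrative stand-ins for the probabilistic model of Section 4):

```python
import math

def neg_entropy(p):
    # Negative entropy of a class posterior p (a list of probabilities);
    # this is the residual-entropy reward of Eq. (3).
    return sum(q * math.log(q) for q in p if q > 0.0)

def value_of_classifier(margins, posterior_fn, cost, lam):
    # Monte-Carlo estimate of Eq. (1): average reward of the posterior
    # updated with each sampled margin, minus the scaled evaluation cost.
    exp_reward = sum(neg_entropy(posterior_fn(m)) for m in margins)
    exp_reward /= len(margins)
    return exp_reward - cost / lam

def select_classifier(candidates, lam):
    # Greedy step of Eq. (2): pick the unobserved classifier with highest value.
    return max(
        candidates,
        key=lambda c: value_of_classifier(c["margins"], c["posterior_fn"], c["cost"], lam),
    )
```

In the actual method, `margins` would be samples from P(M_i|m_O) and `posterior_fn(m)` would return P(Y|m, m_O).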
Reward Definition. We propose two ways to define the reward R : P → ℝ.
Residual Entropy. From the information-theoretical point of view, we want to reduce the uncertainty
of the class variable Y by observing classifier responses. Therefore, a natural way to define the
reward is to consider the negative residual entropy, that is, the lower the entropy, the higher the reward. Formally, given some posterior distribution P(Y|m_O), we define

    R(P(Y|m_O)) = −H(Y|m_O) = Σ_y P(y|m_O) log P(y|m_O)    (3)
The value of classifier under this reward definition is closely related to information gain. Specifically,

    V_R(h_i|m_O) = E_{m_i ~ P(M_i|m_O)}[−H(Y|m_i, m_O) + H(Y|m_O) − H(Y|m_O)]
                 = I(Y; M_i|m_O) − H(Y|m_O)    (4)

Since H(Y|m_O) is a constant w.r.t. h_i, we have

    h* = argmax_{h_i ∈ H\O} V_R(h_i|m_O) + V_C(h_i|m_O) = argmax_{h_i ∈ H\O} I(Y; M_i|m_O) + V_C(h_i|m_O)    (5)
Therefore, at each step, we want to pick the classifier with the highest mutual information with the
class variable Y given the observed classifier responses m_O, subject to a computational constraint.
Classification Loss. From the classification loss point of view, we want to minimize the expected loss when choosing classifiers to evaluate. Therefore, given a loss function ℓ(y, y') specifying the penalty of classifying an instance of class y as y', we can define the reward as the negative of the minimum expected loss:

    R(P(Y|m_O)) = −min_{y'} Σ_y P(y|m_O) ℓ(y, y') = −min_{y'} E_{y ~ P(Y|m_O)}[ℓ(y, y')]    (6)

To gain some intuition about this definition, consider a 0-1 loss function, i.e., ℓ(y, y') = 1{y ≠ y'}; then R(P(Y|m_O)) = −1 + max_{y'} P(y'|m_O). To maximize R, we want the peak of P(Y|m_O) to be as high as possible. In our experiments, these two reward definitions give similar results.
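A direct implementation of the minimum-expected-loss reward of Eq. (6), as a quick check (a sketch of ours, not code from the paper):

```python
def min_expected_loss_reward(p, loss):
    # Eq. (6): negative of the minimum expected loss over predicted labels y'.
    K = len(p)
    return -min(sum(p[y] * loss(y, yp) for y in range(K)) for yp in range(K))

def zero_one(y, yp):
    # 0-1 loss: penalty 1 for a wrong prediction, 0 otherwise.
    return 0.0 if y == yp else 1.0
```

With the 0-1 loss this reduces to −1 + max_{y'} P(y'|m_O), as noted above.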
Classification Cost. The cost of evaluating a classifier h on an instance x can be broken down into two parts. The first part is the cost of computing the feature φ : X → ℝ^n on which h is built, and the second is the cost of computing the function value of h given the input φ(x). If h shares the same feature as some evaluated classifiers in O, then C(h|O) consists only of the cost of evaluating the function h; otherwise it also includes the cost of computing the feature input φ. Note that computing φ is usually much more expensive than evaluating the function value of h.
Probabilistic Model. Given a test instance x, we construct an instance-specific joint distribution over Y and the selected observations M_O. Our probabilistic model is a mixture model, where each component corresponds to a class Y = y, and we use a uniform prior P(Y). Starting from an empty O, we model P(M_i, Y) as a mixture of Gaussian distributions. At each step, given the selected M_O, we model the new joint distribution P(M_i, M_O, Y) = P(M_i|M_O, Y) P(M_O, Y) by modeling the new P(M_i|M_O, Y = y) as a linear Gaussian, i.e., P(M_i|M_O, Y = y) = N(θ_y^T M_O, σ_y²). As we show in Section 5, this choice of probabilistic model works well empirically. We discuss how to learn the distribution and do inference in the next section.
4 Learning and Inference
Learning P(M_i|m_O, y). Given the subset {(x^(j), y^(j) = y)}_{j=1}^{N_y} of the training set corresponding to the instances from class y, we denote m_i^(j) = h_i(x^(j)); our goal is then to learn P(M_i|m_O, y) from {(m^(j), y^(j) = y)}_{j=1}^{N_y}. If O = ∅, then P(M_i|m_O, y) reduces to the marginal distribution P(M_i|y) = N(μ_y, σ_y²), and based on maximum likelihood estimation we have μ_y = (1/N_y) Σ_j m_i^(j) and σ_y² = (1/N_y) Σ_j (m_i^(j) − μ_y)². If O ≠ ∅, we assume that P(M_i|m_O, y) is a linear Gaussian, i.e., μ_y = θ_y^T m_O. Note that we also append a constant 1 to m_O as the bias term. Since we know m_O at test time, we estimate θ_y and σ_y² by maximizing the local likelihood with a Gaussian prior on θ_y. Specifically, for each training instance j from class y, let w_j = exp(−‖m_O − m_O^(j)‖² / σ), where σ is a bandwidth parameter; then the regularized local log likelihood is

    L(θ_y, σ_y; m_O) = −γ ‖θ_y‖₂² + Σ_{j=1}^{N_y} w_j log N(m_i^(j); θ_y^T m_O^(j), σ_y²),    (7)

where we overload the notation N(x; μ_y, σ_y²) to mean the value of a Gaussian PDF with mean μ_y and variance σ_y² evaluated at x. Note that maximizing (7) is equivalent to locally weighted regression [6] with ℓ₂ regularization. Maximizing (7) results in:
    θ̂_y = argmin_{θ_y} γ ‖θ_y‖₂² + Σ_{j=1}^{N_y} w_j ‖m_i^(j) − θ_y^T m_O^(j)‖₂² = (M̃_O^T W M̃_O + γI)^{-1} M̃_O^T W M̃_i    (8)
where M̃_O is a matrix whose j-th row is m_O^(j)T, W is a diagonal matrix whose diagonal entries are the w_j's, M̃_i is a column vector whose j-th element is m_i^(j), and I is an identity matrix. It is worth noting that (M̃_O^T W M̃_O + γI)^{-1} M̃_O^T W in (8) does not depend on i, so it can be computed once and shared for different classifiers h_i. Finally, the estimated σ_y² is
    σ̂_y² = (1 / Σ_{j=1}^{N_y} w_j) Σ_{j=1}^{N_y} w_j ‖m_i^(j) − θ̂_y^T m_O^(j)‖²    (9)
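The estimator of Eqs. (7)-(9) can be sketched with NumPy as follows (a minimal illustration of ours, not the authors' code; `sigma` is the bandwidth and `gamma` the ridge strength from above):

```python
import numpy as np

def fit_local_linear_gaussian(M_O, m_i, m_o_test, sigma=1.0, gamma=1e-3):
    """Locally weighted ridge regression of Eqs. (7)-(9) for one class y.

    M_O:      (N_y, t) observed-classifier responses of the training instances
    m_i:      (N_y,)   responses of the candidate classifier h_i
    m_o_test: (t,)     responses observed on the test instance
    Returns (theta, var): the linear-Gaussian parameters of P(M_i | m_O, y).
    """
    w = np.exp(-np.sum((M_O - m_o_test) ** 2, axis=1) / sigma)  # kernel weights w_j
    X = np.hstack([M_O, np.ones((len(m_i), 1))])                # append the bias term
    A = X.T @ (w[:, None] * X) + gamma * np.eye(X.shape[1])
    theta = np.linalg.solve(A, X.T @ (w * m_i))                 # Eq. (8)
    var = np.sum(w * (m_i - X @ theta) ** 2) / np.sum(w)        # Eq. (9)
    return theta, var
```

The predicted mean for the test instance is then `theta @ np.append(m_o_test, 1.0)`.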
Computing V(h_i|m_O). Given the learned distribution, we can easily compute the two CPDs in (1), i.e., P(M_i|m_O) and P(Y|m_i, m_O). P(M_i|m_O) can be obtained as P(M_i|m_O) = Σ_y P(M_i|m_O, y) P(y|m_O), where P(Y|m_O) is the posterior over Y given the observations m_O, which is tracked over iterations. Specifically, P(Y|m_i, m_O) ∝ P(m_i, m_O|Y) P(Y) = P(m_i|m_O, Y) P(m_O|Y) P(Y), where all terms are available by caching previous computations. Finally, to compute V(h_i|m_O), the computational part V_C(h_i|m_O) is just a lookup in a cost table, and the expected reward part V_R(h_i|m_O) can be rewritten as:

    V_R(h_i|m_O) = Σ_y P(y|m_O) E_{m_i ~ P(M_i|m_O,y)}[R(P(Y|m_i, m_O))]    (10)

Therefore, each component E_{m_i ~ P(M_i|m_O,y)}[R(P(Y|m_i, m_O))] is the expectation of a function of a scalar Gaussian variable. We use Gaussian quadrature [18]¹ to approximate each component expectation, and then take the weighted average to get V_R(h_i|m_O).
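For a scalar Gaussian, the quadrature step can be sketched with NumPy's probabilists' Gauss-Hermite rule (our sketch; the paper only cites [18] for the technique):

```python
import numpy as np

def gaussian_expectation(f, mu, var, n_points=5):
    # Approximate E_{m ~ N(mu, var)}[f(m)] with Gauss-Hermite quadrature,
    # as used for each component expectation in Eq. (10).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    vals = [w * f(mu + np.sqrt(var) * x) for x, w in zip(nodes, weights)]
    return sum(vals) / np.sqrt(2.0 * np.pi)  # the weights sum to sqrt(2*pi)
```

A 5-point rule is exact for polynomial integrands up to degree 9, which is consistent with the footnote that 3 or 5 points suffice in practice.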
Dynamic Inference. Given the building blocks introduced before, one can execute the classification
process in |H| steps, where at each step, the values of all the remaining classifiers are computed.
However, this will incur a large scheduling cost. This is due to the fact that usually |H| is large. For
example, in multiclass classification, if we include all one-vs-one classifiers into H, |H| is quadratic
in the number of classes. Since we are maintaining a belief over Y as observations are accumulated,
we can use it to make the inference process more adaptive resulting in small scheduling cost.
Early Stopping. Based on the posterior P(Y|m_O), we can make dynamic and adaptive decisions about whether to continue observing new classifiers or stop the process. We propose two stopping criteria; we stop the inference process whenever either of them is met, and use the posterior over Y at that point to make the classification decision. The first criterion is based on the information-theoretic point of view. Given the current posterior estimate P(Y|m_i, m_O) and the previous posterior estimate P(Y|m_O), the relative entropy (KL divergence) between them is D(P(Y|m_O) ‖ P(Y|m_i, m_O)). We stop the inference procedure when this divergence is below some threshold t. The second criterion is based on the classification point of view. We consider the gap between the probability of the current best class and that of the runner-up. Specifically, we define the margin given a posterior P(Y|m_O) as δ_m(P(Y|m_O)) = P(y*|m_O) − max_{y ≠ y*} P(y|m_O), where y* = argmax_y P(y|m_O). If δ_m(P(Y|m_O)) ≥ t_δ, then the inference stops.
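The two criteria can be combined into a single check (a minimal sketch; `kl_thresh` and `margin_thresh` correspond to the thresholds t and t_δ):

```python
import math

def should_stop(prev_post, new_post, kl_thresh=1e-3, margin_thresh=0.9):
    # Stop when the posterior barely moved (small KL divergence) or when the
    # best class already dominates the runner-up by a large margin.
    kl = sum(p * math.log(p / q) for p, q in zip(prev_post, new_post) if p > 0.0)
    ranked = sorted(new_post, reverse=True)
    return kl < kl_thresh or (ranked[0] - ranked[1]) >= margin_thresh
```

`prev_post` and `new_post` are the class posteriors before and after the latest classifier evaluation.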
Dynamic Pruning of Class Space. In many cases, a class is mainly confused with a small number of
other classes (the confusion matrix is often close to sparse). This implies that after observing a few
classifiers, the posterior P (Y |mO ) is very likely to be dominated by a few modes leaving the rest
with very small probability. For those classes y with very small P (y|mO ), their contributions to the
value of classifier (10) are negligible. Therefore, when computing (10), we ignore the components
whose P (y|mO ) is below some small threshold (equivalent to setting the contribution from this
component to 0). Furthermore, when P (y|mO ) falls below some very small threshold for a class y,
we will not estimate the likelihood related to y, i.e., P (Mi |mO , y), but use a small constant.
Dynamic Classifier Space. To avoid computing the values of all the remaining classifiers, we can dynamically restrict the search space of classifiers to those having high expected mutual information with Y with respect to the current posterior P(Y|m_O). Specifically, during training, for each classifier h_i we can compute the mutual information I(M_i; B_y) between its response M_i and a class y, where B_y is a binary variable indicating whether an instance is from class y or not. Given our current posterior P(Y|m_O), we tried two ways to rank the unobserved classifiers. First, we simply select the top L classifiers with the highest I(M_i; B_ŷ), where ŷ is the most probable class based on the current posterior. Since we can sort classifiers in the training stage, this step takes constant time. Another way is that for each classifier, we can compute a weighted mutual information score, i.e., Σ_y P(y|m_O) I(M_i; B_y), and we restrict the classifier space to those with the top L scores. Note that computing the scores is very efficient, since it is just an inner product between two vectors, where the I(M_i; B_y)'s have been computed and cached before testing. Our experiments showed that these two scores have similar performances, and we used the first method to report the results.
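The weighted ranking rule can be sketched as follows, with a hypothetical table `mi_table` mapping each unobserved classifier to its precomputed I(M_i; B_y) values (the name is illustrative, not from the paper):

```python
def rank_candidates(posterior, mi_table, top_L=5):
    # score(h_i) = sum_y P(y|m_O) * I(M_i; B_y); keep the top-L classifiers.
    scores = {h: sum(p * mi for p, mi in zip(posterior, mis))
              for h, mis in mi_table.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_L]
```

Each score is a single inner product, so restricting the search space costs O(K) per candidate classifier.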
Analysis of Time Complexity. At each iteration t, the scheduling overhead includes selecting the top L candidate observations, and for each candidate i, learning P(M_i|m_O, y) and computing

¹We found that 3 or 5 points provide an accurate approximation.
[Figure 1 plots test classification accuracy against the number of evaluated classifiers on each dataset, comparing selection by value of classifier, random selection, one-vs-all, DAGSVM, one-vs-one, and the tree-based method.]
Figure 1: (Best viewed magnified and in color) Performance comparisons on UCI datasets. From left to right are the results on satimage, pendigits, vowel and letter (in log-scale) datasets. Note
that the error bars for pendigits and letter datasets are very small (around 0.5% on average).
V(h_i|m_O). First, selecting the top L candidate observations takes constant time, since we can sort the observations based on I(M_i; B_y) before the test process. Second, estimating P(M_i|m_O, y) requires computing (8) and (9) for different y's. Given our dynamic pruning of class space, suppose there are only N_{t,Y} promising classes to consider instead of the total number of classes K. Since (M̃_O^T W M̃_O + γI)^{-1} M̃_O^T W in (8) does not depend on i, we compute it for each promising class, which takes O(tN_y² + t²N_y + t³) floating point operations, and share it for different i's. After computing this shared component, for each pair of i and a promising class, computing (8) and (9) both take O(tN_y). Finally, computing (10) takes O(N_{t,Y}). Putting everything together, the overall cost at iteration t is O(N_{t,Y}(tN_y² + t²N_y + t³) + L N_{t,Y} tN_y + L N_{t,Y}²). The key to a low cost is to effectively prune the class space (small N_{t,Y}) and reach a decision quickly (small t).
5 Experimental Results
We performed experiments on a collection of four UCI datasets [25] and on a scene recognition
dataset [20]. All tasks are multiclass classification problems. The first set of experiments focuses on
a single feature type and aims to show that (i) our probabilistic model is able to combine multiple
binary classifiers to achieve comparable or higher classification accuracy than traditional methods;
(ii) our active evaluation strategy successfully selects a significantly smaller number of classifiers. The second set of experiments considers multiple features with varying computational complexities. This experiment shows the real power of our active scheme. Specifically, it dynamically selects an instance-specific subset of features, resulting in classification accuracy higher than using all the features, but with a significant reduction in the computational cost.
Basic Setup. Given a feature φ, our set of classifiers H_φ consists of all one-vs-one classifiers, all one-vs-all classifiers, and all node classifiers from a tree-based method [13], where a node classifier can be trained to distinguish two arbitrary clusters of classes. Therefore, for a K-class problem, the number of classifiers given a single feature is |H_φ| = (K−1)K/2 + K + N_{φ,tree}, where N_{φ,tree} is the number of nodes in the tree model. If there are multiple features {φ_i}_{i=1}^F, our pool of classifiers is H = ∪_{i=1}^F H_{φ_i}. The form of all classifiers is linear SVM for the first set of experiments and nonlinear SVM with various kernels for the second set of experiments. During training, in addition to learning the classifiers, we also need to compute the response m_i^(j) of each classifier h_i ∈ H for each training instance x^(j). In order to make the training distribution of the classifier responses better match the test distribution, when evaluating classifier h_i on x^(j), we do not want h_i to be trained on x^(j). To achieve this, we use a procedure similar to cross validation. Specifically, we split the training set into 10 folds, and for each fold, instances from this fold are tested using the classifiers trained on the other 9 folds. After this procedure, each training instance x^(j) will be evaluated by all the h_i's. Note that the classifiers used in the test stage are trained on the entire training set. Although for different training instances x^(j) and x^(k) from different folds and a test instance x, m_i^(j), m_i^(k) and m_i are obtained using different h_i's, our experimental results confirmed that their empirical distributions are close enough to achieve good performance.
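The fold-based response generation described above can be sketched as follows (our own illustration; `train_fn` stands in for training any classifier h_i on a subset of the data):

```python
def out_of_fold_responses(X, y, train_fn, n_folds=10):
    # Each training instance is scored by a classifier trained on the other
    # folds, so training-time responses mimic the test-time distribution.
    n = len(X)
    responses = [None] * n
    for f in range(n_folds):
        held = [j for j in range(n) if j % n_folds == f]
        rest = [j for j in range(n) if j % n_folds != f]
        h = train_fn([X[j] for j in rest], [y[j] for j in rest])
        for j in held:
            responses[j] = h(X[j])
    return responses
```

At test time, the classifiers themselves would be retrained on the entire training set, as the paper notes.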
Standard Multiclass Problems from UCI Repository. The first set of experiments are done on
four standard multiclass problems from the UCI machine learning repository [25]: vowel (speech
recognition, 11 classes), letter (optical character recognition, 26 classes), satimage (pixel-based classification/segmentation on satellite images, 6 classes) and pendigits (hand written digits recognition,
10 classes). We used the same training/test split as specified in the UCI repository. For each dataset,
there is only one type of feature, so it will be computed at the first step no matter which classifier
is selected. After that, all classifiers have the same complexity, so the results will be independent
of the λ parameter in the definition of value of classifier (1). For the baselines, we have one-vs-one
with max win, one-vs-all, DAGSVM [27] and a tree-based method [13]. These methods vary both
in terms of what set of classifiers they use and how those classifiers are evaluated and combined.
To evaluate the effectiveness of our classifier selection scheme, we introduce another baseline that
selects classifiers randomly, for which we repeated the experiments 10 times and the average and
one standard deviation are reported. We compare different methods in terms of both the classification accuracy and the number of evaluated classifiers. For our algorithm and the random selection
baseline, we show the accuracy over iterations as well. Since in our framework the number of iterations (classifiers) needed varies over instances due to early stopping, the maximum number of
iterations shown is defined as the mean plus one standard deviation of the number of classifier
evaluations of all test instances. In addition, for the tree-based method, the number of evaluated
classifiers is the mean over all test instances.
Figure 1 shows a set of results. As can be seen, our method can achieve comparable or higher
accuracy than traditional methods. In fact, we achieved the best accuracy on three datasets and
the gains over the runner-up methods are 0.2%, 5.2%, 8.2% for satimage, vowel, and letter datasets
respectively. We think the statistical gain might come from two facts: (i) we are performing instance-specific "feature selection" to consider only the most informative classifiers; (ii) another layer of
probabilistic model is used to combine the classifiers instead of the uniform voting of classifiers used
by many traditional methods. In terms of the number of evaluated classifiers, our active scheme is
very effective: the mean number of classifier evaluations for 6-class, 10-class, 11-class and 26-class
problems are 4.50, 3.22, 6.15 and 7.72, respectively. Although the tree-based method can also use a small number of classifiers, it sometimes suffers from a significant drop in accuracy, as on the vowel and letter
datasets. Furthermore, compared to the random selection scheme, our method can effectively select
more informative classifiers resulting in faster convergence to a certain classification accuracy.
The performance gain of our method is not free. To maintain a belief over the class variable Y
and to dynamically select classifiers with high value, we have introduced additional computational
costs, i.e., estimating conditional distributions and computing the value of classifiers. For example,
this additional cost is around 10ms for satimage; however, evaluating a linear classifier only takes
less than 1ms due to very low feature dimension, so the actual running time of the active scheme
is higher than one-vs-one. Therefore, our method will have a real computational advantage only
if the cost of evaluating the classifiers is higher than the cost of our probabilistic inference. We
demonstrate such benefit of our method in the context of multiple high dimensional features below.
Scene Recognition. We test our active classification on a benchmark scene recognition dataset
Scene15 [20]. It has 15 scene classes and 4485 images in total. Following the protocol used in
[20, 21], 100 images per class are randomly sampled for training and the remaining 2985 for test.
model                     | accuracy | feature cost (# of features) | classifier cost | scheduling cost | total running time
all features              | 86.40%   | 52.645s (184)                | 0.426s          | 0               | 53.071s
best feature OB [21]      | 83.38%   | 6.20s                        | 0.024s          | 0               | 6.224s
fastest feature GIST [26] | 72.70%   | 0.399s                       | 0.0002s         | 0               | 0.3992s
ours λ = 25               | 86.26%   | 1.718s (5.62)                | 0.010s          | 0.141s          | 1.869s (28.4x)
ours λ = 100              | 86.77%   | 6.573s (4.71)                | 0.014s          | 0.116s          | 6.703s (7.9x)
ours λ = 600              | 88.11%   | 19.821s (4.46)               | 0.031s          | 0.094s          | 19.946s (2.7x)

Table 1: Detailed performance comparisons on the Scene15 dataset with various feature types. For our methods, we show the speedup factors with respect to using all the features in a static way.
We consider various types of features, since as shown in [33], the classification accuracy can be
significantly improved by combining multiple features but at a high computational cost. Our feature
set includes 7 features from [33], including GIST, spatial HOG, dense SIFT, Local Binary Pattern,
self-similarity, texton histogram, geometry specific histograms (please refer to [33] for details), and
another recently proposed high-level image feature Object Bank [21]. The basic idea of Object
Bank is to use the responses of various object detectors as the feature. The current release of the
code from the authors selected 177 object detectors, each of which outputs a feature vector φ_i with dimension 252. These individual vectors are concatenated together to form the final feature vector φ = [φ_1; φ_2; . . . ; φ_177] ∈ ℝ^44,604. Instead of treating φ as an undecomposable single feature vector, we can think of it as a collection of 177 different features {φ_i}_{i=1}^177. Therefore, our feature
pool consists of 184 features in total. Their computational costs vary from 0.035 to 13.796 seconds,
with the accuracy from 54% to 83%. One traditional way to combine these features is through
multiple kernel learning. Specifically, we take the average of individual kernels constructed based
on individual features, and train a one-vs-all SVM using the joint average kernel. Surprisingly, this
simple average kernel performs comparably with learning the weights to combine them [12].
For our active classification, we will not compute all features at the beginning of the evaluation
process, but will only compute a component φ_i when a classifier h based on it is selected. We will cache all evaluated φ_i's, so different classifiers sharing the same φ_i will not induce repeated computation of the common φ_i. We decompose the computational costs per instance into three
parts: (1) the feature cost, which is the time spent on computing the features; (2) the classifier cost,
which is the time spent on evaluating the function value of the classifiers; (3) the scheduling cost,
which is the time spent on selecting the classifiers using our method. To demonstrate the trade-off
between the accuracy and computational cost in the definition of value of classifier, we run multiple
experiments with various λ's.
The results are shown in Table 1. We also report comparisons to the best individual features in terms of either accuracy or speed (the reported accuracy is the best of one-vs-one and one-vs-all). As can be seen, combining all features using the traditional method indeed improves the accuracy significantly over those individual features, but at an expensive computational cost. However, using active classification, to achieve similar accuracy as the baseline of all features we can get a 28.4x speedup (λ = 25). Note that at this configuration, our method is faster than the state-of-the-art individual feature [21], and is also 2.8% better in accuracy. Furthermore, if we put more emphasis on the accuracy, we can get the best accuracy of 88.11% when λ = 600.

Figure 2: Classification accuracy versus running time for the baseline, active classification, and various individual features (GIST, LBP, spatial HOG, Object Bank, dense SIFT).

To further test the effectiveness of our active selection scheme, we compare with another baseline that sequentially adds one feature at a time from a filtered
pool of features. Specifically, we first rank the individual features based on their classification accuracy, and only consider the top 80 features (using 80 features achieves essentially the same accuracy
as using 184 features). Given this selected pool, we arrange the features in order of increasing computational complexity, and then train a classifier based on the top N features for all values of N from
1 to 80. As shown in Figure 2, our active scheme is one order of magnitude faster than the baseline
given the same level of accuracy.
6 Conclusion and Future Work
In this paper, we presented an active classification process based on the value of classifier. We applied this active scheme in the context of multiclass classification, and achieved comparable and
even higher classification accuracy with significant computational savings compared to traditional
static methods. One interesting future direction is to estimate the value of features instead of individual classifiers. This is particularly important when computing the feature is much more expensive
than evaluating the function value of classifiers, which is often the case. Once a feature has been
computed, a set of classifiers that are built on it will be cheap to evaluate. Therefore, predicting the
value of the feature (equivalent to the joint value of multiple classifiers sharing the same feature) can
potentially lead to more computationally efficient classification process.
Acknowledgment. This work was supported by the NSF under grant No. RI-0917151, the Office of
Naval Research MURI grant N00014-10-10933, and the Boeing company. We thank Pawan Kumar
and the reviewers for helpful feedback.
References
[1] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: a unifying approach for
margin classifiers. J. Mach. Learn. Res., 1:113–141, 2001.
[2] A. Angelova, L. Matthies, D. Helmick, and P. Perona. Fast terrain classification using variable-length
representation for autonomous navigation. CVPR, 2007.
[3] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multiclass task. In NIPS, 2010.
[4] L. Breiman. Random forests. In Machine Learning, pages 5?32, 2001.
[5] X. Chai, L. Deng, and Q. Yang. Test-cost sensitive naive bayes classification. In ICDM, 2004.
[6] W. S. Cleveland and S. J. Devlin. Locally weighted regression: An approach to regression analysis by
local fitting. Journal of the American Statistical Association, 83:596–610, 1988.
[7] D.A. Cohn, Zoubin Ghahramani, and M.I. Jordan. Active learning with statistical models. CoRR,
cs.AI/9603104, 1996.
[8] J. Deng, A.C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell
us? In ECCV10, pages V: 71–84, 2010.
[9] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. J.
of A. I. Res., 2:263–286, 1995.
[10] Y. Freund. Boosting a weak learning algorithm by majority. In Computational Learning Theory, 1995.
[11] Jerome H. Friedman. Another approach to polychotomous classification. Technical report, Department
of Statistics, Stanford University, 1996.
[12] P.V. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[13] G. Griffin and P. Perona. Learning and using taxonomies for fast visual categorization. In CVPR, 2008.
[14] V. Guruswami and A. Sahai. Multiclass learning, boosting, and error-correcting codes. In Proc. of the
Twelfth Annual Conf. on Computational Learning Theory, 1999.
[15] T. Hastie, R. Tibshirani, and J. H. Friedman. The elements of statistical learning: data mining, inference,
and prediction. 2009.
[16] R. A. Howard. Information value theory. IEEE Trans. on Systems Science and Cybernetics, 1966.
[17] R. A. Howard. Decision analysis: Practice and promise. Management Science, 1988.
[18] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press,
2009.
[19] A. Krause and C. Guestrin. Optimal value of information in graphical models. Journal of Artificial
Intelligence Research (JAIR), 35:557–591, 2009.
[20] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing
natural scene categories. In CVPR, 2006.
[21] L.-J. Li, H. Su, E.P. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene
classification and semantic feature sparsification. In NIPS, 2010.
[22] D. V. Lindley. On a Measure of the Information Provided by an Experiment. The Annals of Mathematical
Statistics, 27(4):986–1005, 1956.
[23] D.G. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
[24] V.S. Mookerjee and M.V. Mannino. Sequential decision models for expert system optimization. IEEE
Trans. on Knowledge & Data Engineering, (5):675.
[25] D.J. Newman, S. Hettich, C.L. Blake, and C.J. Merz. Uci repository of machine learning databases, 1998.
[26] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the
spatial envelope. IJCV, 2001.
[27] J.C. Platt, N. Cristianini, and J. Shawe-taylor. Large margin dags for multiclass classification. In NIPS,
2000.
[28] M.J. Saberian and N. Vasconcelos. Boosting classifier cascades. In NIPS, 2010.
[29] Robert E. Schapire. Using output codes to boost multiclass learning problems. In ICML, 1997.
[30] A. G. Schwing, C. Zach, Y. Zheng, and M. Pollefeys. Adaptive random forest - how many 'experts' to
ask before making a decision? In CVPR, 2011.
[31] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV,
2009.
[32] P. Viola and M. Jones. Robust Real-time Object Detection. IJCV, 2002.
[33] J.X. Xiao, J. Hays, K.A. Ehinger, A. Oliva, and A.B. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[34] Y. Yang and J. O. Pedersen. A comparative study on feature selection in text categorization. In ICML,
pages 412–420, 1997.
graphical:2 maintaining:1 unifying:1 concatenated:1 ghahramani:1 bakiri:1 already:3 strategy:1 traditional:7 diagonal:2 win:1 thank:1 majority:1 considers:2 code:5 length:1 balance:1 setup:1 robert:1 potentially:1 hog:3 taxonomy:1 negative:2 append:1 boeing:1 observation:20 datasets:8 benchmark:1 howard:2 viola:2 rn:1 arbitrary:1 introduced:2 pair:2 kl:1 specified:1 distinction:2 learned:1 boost:1 nip:4 trans:2 able:1 bar:1 beyond:1 usually:3 pattern:2 below:4 built:2 including:2 max:1 belief:2 power:1 suitable:1 natural:3 regularized:1 predicting:1 residual:2 scheme:9 mo1:1 naive:1 schmid:1 text:1 prior:2 relative:1 loss:6 interesting:1 versus:1 validation:1 incurred:1 xiao:1 principle:1 bank:5 classifying:2 share:3 nowozin:1 row:1 surprisingly:1 supported:1 free:1 bias:1 fall:1 sparse:1 benefit:3 feedback:1 dimension:4 world:1 evaluating:10 rich:1 author:1 commonly:1 adaptive:6 made:3 collection:2 approximate:1 pruning:2 ignore:1 active:20 angelova:2 sequentially:2 terrain:1 search:1 table:3 promising:3 learn:3 robust:1 ca:2 forest:4 complex:1 constructing:1 protocol:1 dense:2 repeated:2 quadrature:1 ehinger:1 ny:6 vr:7 zach:1 candidate:3 down:1 specific:6 sift:3 svm:3 sequential:2 effectively:2 adding:1 corr:1 magnitude:1 budget:1 conditioned:1 margin:4 gap:1 entropy:4 simply:1 appearance:1 likely:2 gao:1 visual:1 scalar:1 corresponds:1 weston:1 conditional:2 viewed:2 goal:3 identity:1 towards:1 satimage:5 shared:2 considerable:1 hard:2 determined:2 specifically:9 reducing:1 schwing:2 total:4 experimental:2 merz:1 indicating:2 select:7 formally:2 berg:1 overload:1 evaluate:4 tested:1 |
Bayesian Partitioning of Large-Scale Distance Data
David Adametz
Volker Roth
Department of Computer Science & Mathematics
University of Basel
Basel, Switzerland
{david.adametz,volker.roth}@unibas.ch
Abstract
A Bayesian approach to partitioning distance matrices is presented. It is inspired
by the Translation-invariant Wishart-Dirichlet process (TIWD) in [1] and shares
a number of advantageous properties like the fully probabilistic nature of the inference model, automatic selection of the number of clusters and applicability in
semi-supervised settings. In addition, our method (which we call fastTIWD) overcomes the main shortcoming of the original TIWD, namely its high computational
costs. The fastTIWD reduces the workload in each iteration of a Gibbs sampler
from O(n^3) in the TIWD to O(n^2). Our experiments show that the cost reduction
does not compromise the quality of the inferred partitions. With this new method
it is now possible to 'mine' large relational datasets with a probabilistic model,
thereby automatically detecting new and potentially interesting clusters.
1 Introduction
In cluster analysis we are concerned with identifying subsets of n objects that share some similarity
and therefore potentially belong to the same sub-population. Many practical applications leave us
without direct access to vectorial representations and instead only supply pairwise distance measures
collected in a matrix D. This poses a serious challenge, because great parts of geometric information
are hereby lost that could otherwise help to discover hidden structures. One approach to deal with
this is to encode geometric invariances in the probabilistic model, as proposed in [1]. The most
important properties that distinguish this Translation-invariant Wishart-Dirichlet Process (TIWD)
from other approaches working on pairwise data are its fully probabilistic model, automatic selection
of the number of clusters, and its applicability in semi-supervised settings in which not all classes
are known in advance. Its main drawback, however, is the high computational cost of order O(n3 )
per sweep of a Gibbs sampler, limiting its applicability to relatively small data sets.
In this work we present an alternative method which shares all the positive properties of the TIWD
while reducing the computational workload to O(n^2) per Gibbs sweep. In analogy to [1] we call this
new approach fastTIWD. The main idea is to solve the problem of missing geometric information by
a normalisation procedure, which chooses one particular geometric embedding of the distance data
and allows us to use a simple probabilistic model for inferring the unknown underlying partition.
The construction we use is guaranteed to give the optimal such geometric embedding if the true
partition was known. Of course, this is only a hypothetical precondition, but we show that even rough
prior estimates of the true partition significantly outperform ?naive? embedding strategies. Using a
simple hierarchical clustering model to produce such prior estimates leads to clusterings being at
least of the same quality as those obtained by the original TIWD. The algorithmic contribution
here is an efficient algorithm for performing this normalisation procedure in O(n^2) time, which makes the whole pipeline from distance matrix to inferred partition an O(n^2) process (assuming a constant number of Gibbs sweeps). Detailed complexity analysis shows not only a worst-case complexity reduction from O(n^3) to O(n^2), but also a drastic speed improvement. We demonstrate
this performance gain for a dataset containing ≈ 350 clusters, which can now be analysed in 6 hours instead of ≈ 50 days with the original TIWD.
It should be noted that both the TIWD and our fastTIWD model expect (squared) Euclidean distances as input. While this might be seen as a severe limitation, we argue that (i) a 'zoo' of Mercer
kernels has been published in the last decade, e.g. kernels on graphs, sequences, probability distributions etc. All these kernels allow the construction of squared Euclidean distances; (ii) efficient
preprocessing methods like randomised versions of kernel PCA have been proposed, which can be
used to transform an initial matrix into one of squared Euclidean type; (iii) one might even use an
arbitrary distance matrix hoping that the resulting model mismatch can be tolerated.
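Points (i)–(iii) all rest on the standard identity relating inner products and squared Euclidean distances. The following sketch is our own illustration (the function name is ours, not from the paper):

```python
import numpy as np

def gram_to_sq_dist(S):
    """Squared Euclidean distances induced by a Gram/kernel matrix S,
    via D_ij = S_ii + S_jj - 2 S_ij."""
    diag = np.diag(S)
    return diag[:, None] + diag[None, :] - 2.0 * S

# Sanity check against explicit feature vectors.
X = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
D = gram_to_sq_dist(X @ X.T)
print(D[0, 1])  # ||(0,0)-(3,4)||^2 -> 25.0
```

Any Mercer kernel matrix can be run through this conversion before applying the partitioning model.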
In the next section we introduce a probabilistic model for partitioning inner product matrices, which
is generalised in section 3 to distance matrices using a preprocessing step that breaks the geometric symmetry inherent in distance representations. Experiments in section 4 demonstrate the high
quality of clusterings found by our method and its superior computational efficiency over the TIWD.
2 A Wishart Model for Partitioning Inner Product Matrices
Suppose there is a matrix X ∈ R^{n×d} representing n objects in R^d that belong to one of k sub-populations. For identifying the underlying cluster structure, we formulate a generative model by assuming the columns x_i ∈ R^n, i = 1 . . . d, are i.i.d. according to a normal distribution with zero mean and covariance Σ ∈ R^{n×n}, i.e. x_i ∼ N(0_n, Σ), or in matrix notation: X ∼ N(0_{n×d}, Σ ⊗ I). Then, S = (1/d) X X^T ∈ R^{n×n} is central Wishart distributed, S ∼ W_d(Σ). For convenience we define the generalised central Wishart distribution, which also allows rank-deficient S and/or Σ, as

    p(S | W, d) ∝ det(S)^{(d−n−1)/2} det(W)^{d/2} exp( −(d/2) tr(W S) ),    (1)

where det(·) is the product of non-zero eigenvalues and W denotes the (generalised) inverse of Σ. The likelihood as a function of W is

    L(W) = det(W)^{d/2} exp( −(d/2) tr(W S) ).    (2)
Consider now the case where we observe S without direct access to X. Then, an orthogonal transformation X → OX cannot be retrieved anymore, but it is reasonable to assume such rotations are irrelevant for finding the partition. Following the Bayesian inference principle, we complement the likelihood with a prior over Σ. Since by assumption there is an underlying joint normal distribution, a zero entry in W encodes conditional independence between two objects, which means that block diagonal W matrices define a suitable partitioning model in which the joint normal is decomposed into independent cluster-wise normals. Note that the inverse of a block diagonal matrix is also block diagonal, so we can formulate the prior in terms of W, which is easier to parametrise. For this purpose we adapt the method in [2] using a Multinomial-Dirichlet process model [3, 4, 5] to define a flexible prior distribution over block matrices without specifying the exact number of blocks. We only briefly sketch this construction and refer the reader to [1, 2] for further details. Let B_n be the set of partitions of the index set [n]. A partition B ∈ B_n can be represented in matrix form as B(i, j) = 1 if y(i) = y(j) and B(i, j) = 0 otherwise, with y being a function that maps [n] to some label set L. Alternatively, B may be represented as a set of disjoint non-empty subsets called 'blocks' b. A partition process is a series of distributions P_n on the set B_n in which P_n is the marginal of P_{n+1}. Using a multinomial model for the labels and a Dirichlet prior with rate parameter ξ on the mixing proportions, we may integrate out the latter and derive a Dirichlet-Multinomial prior over labels. Finally, after using a 'label forgetting' transformation, the prior over B is:

    p(B | ξ, k) = [ k! / (k − k_B)! ] · [ Γ(ξ) ∏_{b∈B} Γ(n_b + ξ/k) ] / ( [Γ(ξ/k)]^{k_B} Γ(n + ξ) ).    (3)
In this setting, k is the number of blocks in the population (k can be infinite, which leads to the Ewens Process [6], a.k.a. Chinese Restaurant Process), n_b is the number of objects in block b and k_B ≤ k is the total number of blocks in B. The prior is exchangeable, meaning rows and columns can be (jointly) permuted arbitrarily, and therefore partition matrices can always be brought to block diagonal form. To specify the variances of the normal distributions, the models in [1, 2] use two global parameters, α, β, for the within- and between-class scatter. This model can be easily extended to include block-wise scatter parameters, but for the sake of simplicity we will stay with the simple parametrisation here. The final block diagonal covariance matrix used in (2) has the form

    Σ = W^{−1} = α(I_n + θB), with θ := β/α.    (4)
Inference by way of Gibbs sampling. Multiplying the Wishart likelihood (2), the prior over partitions (3) and suitable priors over α, θ gives the joint posterior. Inference for B, α and θ can then be carried out via a Gibbs sampler. Each Gibbs sweep can be efficiently implemented since both trace and determinant in (2) can be computed analytically, see [1]:

    tr(W S) = ∑_{b∈B} [ α^{−1} tr(S_bb) − θ/(α(1 + n_b θ)) S̄_bb ] = α^{−1} [ tr(S) − ∑_{b∈B} θ/(1 + n_b θ) S̄_bb ],    (5)

where S_bb denotes the block submatrix corresponding to the b-th diagonal block in B, and S̄_bb = 1_b^T S_bb 1_b. Here 1_b is the indicator function mapping block b to a {0, 1}^n vector, whose elements are 1 if a sample is contained in b, and 0 otherwise. For the determinant one derives

    det(W) = α^{−n} ∏_{b∈B} (1 + θ n_b)^{−1}.    (6)
The conditional likelihood for α is Inv-Gamma(r, s) with shape parameter r = n·d/2 − 1 and scale s = (d/2) [ tr(S) − ∑_{b∈B} θ/(1 + n_b θ) S̄_bb ]. Using the prior α ∼ Inv-Gamma(r_0·d/2, s_0·d/2), the posterior is of the same functional form, and we can integrate out α analytically:

    P_n(B | S) ∝ P_n(B | ξ, k) · det(W)_{(α=1)}^{d/2} · [ (d/2) tr(W S)_{(α=1)} + s_0 ]^{−(n+r_0)d/2},    (7)

where det(W)_{(α=1)} = ∏_{b∈B} (1 + θ n_b)^{−1} and tr(W S)_{(α=1)} = tr(S) − ∑_{b∈B} θ/(1 + n_b θ) S̄_bb. Note that the (usually unknown) degree of freedom d has the formal role of an annealing parameter, and it can indeed be used to 'cool' the Markov chain by increasing d, if desired, until a partition is 'frozen'.
Complexity analysis. In one sweep of the Gibbs sampler, we have to iteratively compute the membership probability of one object indexed by i to the k_B currently existing blocks in partition B (plus one new block), given the assignments for the n − 1 remaining ones, denoted by the superscript (−i) [7, 8]. In every step of this inner loop over k_B existing blocks we have to evaluate the Wishart likelihood, i.e. trace (5) and determinant (6). Given the trace tr^{(−i)}, we update S̄_bb for k_B blocks b ∈ B, which in total needs O(n) operations. Given det^{(−i)}, the computation of all k_B updated determinants induces costs of O(k_B). In total, there are n objects, so a full sweep requires O(n^2 + n k_B) operations, which is equal to O(n^2) since the maximum number of blocks is n, i.e. k_B ≤ n. Following [1], we update θ on a discretised grid of values, which adds O(k_B) to the workload, thus not changing the overall complexity of O(n^2). Compared to the original TIWD, the worst-case complexity in the Dirichlet process model with an infinite number of blocks in the population, k = ∞, is reduced from O(n^3) to O(n^2).
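To make the cost argument tangible, here is a sketch (our own illustration, not the authors' implementation) of the block-wise trace and determinant of Eqs. (5) and (6), checked against the explicit inverse of Σ = α(I + θB):

```python
import numpy as np

def wishart_terms(S, labels, alpha, theta):
    """tr(W S) and log det(W) for W = (alpha * (I + theta * B))^{-1},
    computed block-wise as in Eqs. (5) and (6) without forming W."""
    n = len(labels)
    tr_WS = np.trace(S)
    logdet_W = -n * np.log(alpha)
    for b in np.unique(labels):
        idx = np.where(labels == b)[0]
        nb = len(idx)
        S_bar = S[np.ix_(idx, idx)].sum()          # S_bar_bb = 1^T S_bb 1
        tr_WS -= theta / (1.0 + nb * theta) * S_bar
        logdet_W -= np.log(1.0 + nb * theta)
    return tr_WS / alpha, logdet_W

# Brute-force check against the explicit inverse.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 6))
S = X @ X.T
labels = np.array([0, 0, 1, 1, 1, 2])
alpha, theta = 2.0, 0.7
B = (labels[:, None] == labels[None, :]).astype(float)
W = np.linalg.inv(alpha * (np.eye(6) + theta * B))
t, ld = wishart_terms(S, labels, alpha, theta)
assert np.isclose(t, np.trace(W @ S))
assert np.isclose(ld, np.linalg.slogdet(W)[1])
```

Each Gibbs move then only needs the incremental change of S̄_bb and of the per-block log-determinant terms, which is where the O(n^2)-per-sweep figure comes from.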
3 The fastTIWD Model for Partitioning Distance Matrices
Consider now the case where S is not accessible, but only squared pairwise distances D ∈ R^{n×n}:

    D(i, j) = S(i, i) + S(j, j) − 2 S(i, j).    (8)
Observing one specific D does not imply a unique corresponding S, since there is a surjective mapping from a set of S-matrices to D, S(D) ↦ D. Hereby, we not only lose information about orthogonal transformations of X, but also information about the origin of the coordinate system. If Ŝ is one (any) matrix that fulfills (8) for a specific D, the set S(D) is formally defined as S = {S | S = Ŝ + 1v^T + v1^T, S ⪰ 0, v ∈ R^n} [9]. The Wishart distribution, however, is not invariant against the choice of S ∈ S. In fact, if Ŝ ∼ W(Σ), the distribution of a general S ∈ S is non-central Wishart, which can be easily seen as follows: S is exactly the set of inner product matrices that can be constructed by varying c ∈ R^d in a modified matrix normal model X ∼ N(M, Σ ⊗ I_d) with mean matrix M = 1_n c^T. Note that now the d columns in X are still independent, but no longer identically distributed. Note further that 'shifts' c_i do not affect pairwise distances between rows in X. The modified matrix normal distribution implies that S = (1/d) X X^T is non-central Wishart, S ∼ W(Σ, Θ), with non-centrality matrix Θ := Σ^{−1} M M^T. The practical use, however, is limited by its complicated form and the fundamental problem of estimating Θ based on only one single observation S. It is thus desirable to work with a simpler probabilistic model. In principle, there are two possibilities: either the likelihood is reformulated as being constant over all S ∈ S (the approach taken in [1], called the translation-invariant Wishart distribution), or one tries to find a 'good' candidate matrix Ŝ_0 that is 'close' to the underlying Ŝ and uses the much
simpler central Wishart model. Both approaches have their pros and cons: encoding the translation invariance directly in the likelihood is methodologically elegant and seems to work well in a couple of experiments (cf. [1]), but it induces high computational cost. The alternative route of searching for a good candidate Ŝ_0 close to Ŝ is complicated, because Ŝ is unknown and it is not immediately clear what 'close' means. The positive aspect of this approach is the heavily reduced computational cost due to the formal simplicity of the central Wishart model. It is important to discuss the 'naive' way of finding a good candidate Ŝ_0 by subtracting the empirical column means in X, thus removing the shifts c_i. This normalisation procedure can be implemented solely based on S, leading to the well-known centering procedure in kernel PCA [10]:

    S_c = Q_I S Q_I^T, with projection Q_I = I − (1/n) 1 1^T.    (9)
(9)
Contrary to the PCA setting, however, this column normalisation induced by QI does not work well
here, because the elements of a column vector in X are not independent. Rather, they are coupled
via the ? component in the covariance tensor ? ? Id . Hereby, we not only remove the shifts ci , but
also alter the distribution: the non-centrality matrix does not vanish in general and as a result, Sc is
no longer central Wishart distributed.
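The geometric half of this argument can be checked numerically. The following sketch (our own illustration) shows that the kernel-PCA centering (9) leaves all pairwise distances untouched, i.e. it merely picks one particular member of S(D), while forcing the rows of S_c to sum to zero:

```python
import numpy as np

def center_gram(S):
    """S_c = Q_I S Q_I^T with Q_I = I - (1/n) 11^T, the kernel-PCA
    normalisation of Eq. (9)."""
    n = S.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    return Q @ S @ Q.T

def sq_distances(S):
    d = np.diag(S)
    return d[:, None] + d[None, :] - 2.0 * S

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3)) + 10.0          # data with a large shift
S = X @ X.T
Sc = center_gram(S)
# Centering picks one member of S(D): the distances are unchanged ...
assert np.allclose(sq_distances(S), sq_distances(Sc))
# ... and rows/columns of S_c sum to zero (the kernel of Q_I contains 1).
assert np.allclose(Sc.sum(axis=0), 0.0)
```

What the sketch cannot show is the distributional argument above: although S_c ∈ S(D), it is in general no longer central Wishart.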
In the following we present a solution to the problem of finding a candidate matrix Ŝ_0 that recasts inference based on the translation-invariant Wishart distribution as a method to reconstruct the optimal S*. Our proposal is guided by a particular analogy between trees and partition matrices and aims at exploiting a tree structure to guarantee low computational costs. The construction has the same functional form as (9), but uses a different projection matrix Q.
The translation-invariant Wishart distribution. Let Ŝ induce pairwise distances D. Assuming that Ŝ ∼ W_d(Σ), the distribution of an arbitrary member S ∈ S(D) can be derived analytically as a generalised central Wishart distribution with a rank-deficient covariance, see [2]. Its likelihood in the rank-deficient inverse covariance matrix W̃ is

    L(W̃) = det(W̃)^{d/2} exp( −(d/2) tr(W̃ Ŝ) ) = det(W̃)^{d/2} exp( (d/4) tr(W̃ D) ),    (10)

with W̃ = W − (1^T W 1)^{−1} W 1 1^T W. Note that although Ŝ appears in the first term in (10), the density is constant on all S ∈ S(D), meaning it can be replaced by any other member of S(D). Note further that S(D) also contains rank-deficient matrices (like, e.g., the column-normalised S_c). By multiplying (10) with the product of non-zero eigenvalues of such a matrix raised to the power of (d − n − 1)/2, a valid generalised central Wishart distribution is obtained (see (1)), which is normalised on the manifold of positive semi-definite matrices of rank r = n − 1 with r distinct positive eigenvalues [11, 12, 13]. Unfortunately, (10) has a simple form only in W̃, but not in the original W, which finally leads to the O(n^3) complexity of the TIWD model.
Selecting an optimal candidate S*. Introducing the projection matrix

    Q = I − (1^T W 1)^{−1} 1 1^T W,    (11)

one can rewrite W̃ in (10) as W Q or, equivalently, as Q^T W Q, see [2] for details. Assume now that S ∼ W_d(Σ) induces distances D and consider the transformed S* = Q S Q^T. Note that this transformation does not change the distances, i.e. S ∈ S(D) ⇒ S* ∈ S(D), and that Q S Q^T has rank r = n − 1 (because Q is a projection with kernel 1). Plugging our specific S* = Q S Q^T into (10), extending the likelihood to a generalised central Wishart (1) with rank-deficient inverse covariance W̃, exploiting the identity Q Q = Q and using the cyclic property of the trace, we arrive at

    p(Q S Q^T | W̃, d) ∝ det(Q S Q^T)^{(d−n−1)/2} det(W̃)^{d/2} exp( −(d/2) tr(W̃ Q S Q^T) ).    (12)

By treating Q as a fixed matrix, this expression can also be seen as a central Wishart in the transformed matrix S* = Q S Q^T, parametrised by the full-rank matrix W, if det(W̃) is substituted by the appropriate normalisation term det(W). From this viewpoint, inference using the translation-invariant Wishart distribution can be interpreted as finding a (rank-deficient) representative S* = Q S Q^T ∈ S(D) which follows a generalised central Wishart distribution with full-rank inverse covariance matrix W. For inferring W, the rank deficiency of S* is not relevant, since only the likelihood is needed. Thus S* can be seen as an optimal candidate inner-product matrix in the set S(D) for a central Wishart model parametrised by W.
Approximating S* with trees. The above selection of S* ∈ S(D) cannot be directly used in a constructive way, since Q in (11) depends on the unknown W. If, on the other hand, we had some initial estimate of W, we could find a reasonable transformation Q_0 and hereby a reasonable candidate Ŝ_0. Note that even if the estimate of W is far away from the true inverse covariance, the pairwise distances are at least guaranteed not to change under Q_0 S Q_0^T.
One particular estimate would be to assume that every object forms a singleton cluster, which means that our estimate of W is an identity matrix. After substitution into (11) it is easily seen that this assumption results in the column-normalisation projection Q_I defined in (9). However, if we assume that there is some non-trivial cluster structure in the data, this would be a very poor approximation. The main difficulty in finding a better estimate is to not specify the number of blocks. Our construction is guided by an analogy between binary trees and weighted sums of cut matrices, which are binary complements of partition matrices with two blocks. We use a binary tree with n leaves representing n objects. It encodes a path distance matrix D_tree between those n objects, and for an optimal tree D_tree = D. Such an optimal tree exists only if D is additive, and the task of finding an approximation is a well-studied problem. We will not discuss the various tree reconstruction algorithms, but only mention that there exist algorithms for reconstructing the closest ultrametric tree (in the ℓ_∞ norm) in O(n^2) time, [14].
Figure 1: From left to right: Unknown samples X, pairwise distances collected in D, closest tree
structure and an exemplary building block.
A tree metric induced by D_tree is composed of elementary cut (pseudo-)metrics. Any such metric lies in the metric space L_1 and is also a member of (L_2)^2, which is the metric part of the space of squared Euclidean distance matrices D. Thus, there exists a positive (semi-)definite S_tree such that (D_tree)_ij = (S_tree)_ii + (S_tree)_jj − 2(S_tree)_ij. In fact, any matrix S_tree has a canonical decomposition into a weighted sum of 2-block partition matrices, which is constructed by cutting all edges (2n − 2 for a rooted tree) and observing the resulting classification of leaf nodes. Suppose we keep track of such an assignment with the indicator 1_j induced by a single cut j; then the inner product matrix is

    S_tree = ∑_{j=1}^{2n−2} λ_j (1_j 1_j^T + 1̄_j 1̄_j^T),    (13)

where λ_j is the weight of edge j to be cut and 1̄_j ∈ {0, 1}^n is the complementary assignment, i.e. 1_j flipped. Each term (1_j 1_j^T + 1̄_j 1̄_j^T) is a 2-block partition matrix. We demonstrate the construction of S_tree in Fig. 2 for a small dataset of n = 25 objects sampled from S ∼ W_d(Σ) with d = 25 and Σ = α(I_n + θB) as defined in (4) with α = 2 and θ = 1. B contains 3 blocks and is depicted in the first panel. The remaining panels show the single-linkage clustering tree, all 2n − 2 = 48 weighted 2-block partition matrices, and the final S_tree (= sum of all individual 2-block matrices, rescaled to full gray-value range). Note that single-linkage fails to identify the clusters in the three branches closest to the root, but still the structure of B is clearly visible in S_tree.
Figure 2: Inner product matrix of a tree. Left to right: Partition matrix B for n = 25 objects in 3 clusters, single-linkage tree, all weighted 2-block partition matrices, final S_tree.
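The decomposition (13) can be reproduced in a few lines. In the sketch below (our own toy example, not the paper's data) we hard-code a small rooted tree and set λ_j to half the corresponding edge length, a choice under which the squared distances induced by S_tree coincide with the tree's path distances:

```python
import numpy as np

n = 4
# Rooted binary tree over leaves {0,1,2,3}; each of the 2n-2 = 6 edges is
# given as (set of leaves below the edge, edge length).
edges = [({0}, 1.0), ({1}, 2.0), ({0, 1}, 0.5),
         ({2}, 1.5), ({3}, 0.5), ({2, 3}, 2.0)]

# Eq. (13): S_tree as a weighted sum of 2-block partition matrices,
# one per cut edge, with lambda_j = edge length / 2.
S_tree = np.zeros((n, n))
for leaves, length in edges:
    one = np.array([1.0 if i in leaves else 0.0 for i in range(n)])
    bar = 1.0 - one
    S_tree += (length / 2.0) * (np.outer(one, one) + np.outer(bar, bar))

d = np.diag(S_tree)
D = d[:, None] + d[None, :] - 2.0 * S_tree
print(D[0, 1])  # path 0 -> parent -> 1: 1.0 + 2.0 = 3.0
print(D[0, 2])  # path through the root: 1.0 + 0.5 + 2.0 + 1.5 = 5.0
```

Cutting an edge separates i and j exactly when it lies on the path between them, which is why the cut-matrix sum reproduces path distances.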
The idea is now to use S_tree as an estimate of Σ, and its inverse W_tree to construct Q_tree in (11), which, however, would naively involve an O(n^3) Cholesky decomposition of S_tree.
Theorem 1. The n × n matrix S* = Q_tree S Q_tree^T can be computed in O(n^2) time.
For the proof we need the following lemma:
Lemma 1. The product of S_tree ∈ R^{n×n} and a vector y ∈ R^n can be computed in O(n) time.
Proof. (of Lemma 1) Restating (13) and defining m := 2n − 2, we have

    S_tree y = ∑_{j=1}^m λ_j (1_j 1_j^T + 1̄_j 1̄_j^T) y
             = ∑_{j=1}^m λ_j 1_j ∑_{l=1}^n 1_{jl} y_l + ∑_{j=1}^m λ_j 1̄_j ∑_{l=1}^n 1̄_{jl} y_l
             = (∑_{l=1}^n y_l) ∑_{j=1}^m λ_j 1̄_j + ∑_{j=1}^m λ_j 1_j ∑_{l=1}^n 1_{jl} y_l − ∑_{j=1}^m λ_j 1̄_j ∑_{l=1}^n 1_{jl} y_l.    (14)

In the next step, let us focus specifically on the i-th element of the resulting vector. Furthermore, assume R_i is the set of all nodes on the branch starting from node i and leading to the tree's root, and write y_j := ∑_{l=1}^n 1_{jl} y_l for the sum of y over the leaves below node j:

    (S_tree y)_i = (∑_{l=1}^n y_l) ∑_{j∉R_i} λ_j + ∑_{j∈R_i} λ_j y_j − ∑_{j∉R_i} λ_j y_j
                 = (∑_{l=1}^n y_l) (∑_{j=1}^m λ_j − ∑_{j∈R_i} λ_j) + 2 ∑_{j∈R_i} λ_j y_j − ∑_{j=1}^m λ_j y_j.    (15)

Note that ∑_{l=1}^n y_l, ∑_{j=1}^m λ_j and ∑_{j=1}^m λ_j y_j are constants and computed in O(n) time. For each element i, we are now left to find R_i in order to determine the remaining two terms. This can be done directly on the tree structure in two separate traversals:
1. Bottom up: Starting from the leaf nodes, store the sum of both children's y values in their parent node j (see Fig. 1, rightmost), then ascend. Do the same for λ_j and compute λ_j y_j.
2. Top down: Starting from the root node, recursively descend into the child nodes j and sum up λ_j and λ_j y_j until reaching the leaves. This implicitly determines R_i.
It is important to stress that the above two tree traversals fully describe the complete algorithm.
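A direct transcription of the two traversals (our own sketch; the node representation and names are ours) looks as follows; the result is checked against the dense S_tree assembled from Eq. (13):

```python
import numpy as np

def node(lam, *children):
    """Tree node; `lam` is the weight lambda_j of the edge to the parent
    (0 for the root, which has no parent edge)."""
    return {"lam": lam, "children": list(children), "leaf": None, "yj": 0.0}

def leaf(lam, index):
    nd = node(lam)
    nd["leaf"] = index
    return nd

def tree_matvec(root, y):
    """S_tree @ y in O(n) via Eq. (15) and the two traversals."""
    tot = {"lam": 0.0, "lamy": 0.0}

    def bottom_up(nd):               # traversal 1: fills y_j bottom up
        if nd["leaf"] is not None:
            nd["yj"] = y[nd["leaf"]]
        else:
            nd["yj"] = sum(bottom_up(c) for c in nd["children"])
        tot["lam"] += nd["lam"]
        tot["lamy"] += nd["lam"] * nd["yj"]
        return nd["yj"]

    bottom_up(root)
    out = np.zeros(len(y))
    y_sum = float(np.sum(y))

    def top_down(nd, s_lam, s_lamy):  # traversal 2: path sums over R_i
        s_lam += nd["lam"]
        s_lamy += nd["lam"] * nd["yj"]
        if nd["leaf"] is not None:
            out[nd["leaf"]] = (y_sum * (tot["lam"] - s_lam)
                               + 2.0 * s_lamy - tot["lamy"])
        for c in nd["children"]:
            top_down(c, s_lam, s_lamy)

    top_down(root, 0.0, 0.0)
    return out

# Check against the dense S_tree of Eq. (13) on a small rooted tree.
root = node(0.0,
            node(0.5, leaf(1.0, 0), leaf(2.0, 1)),
            node(2.0, leaf(1.5, 2), leaf(0.5, 3)))
y = np.array([0.3, -1.2, 2.0, 0.7])

def cuts(nd, acc):
    below = {nd["leaf"]} if nd["leaf"] is not None else set()
    for c in nd["children"]:
        below |= cuts(c, acc)
    if nd["lam"] > 0.0:              # every non-root edge defines one cut
        acc.append((below, nd["lam"]))
    return below

edge_list = []
cuts(root, edge_list)
S_dense = np.zeros((4, 4))
for members, lam in edge_list:
    one = np.array([1.0 if i in members else 0.0 for i in range(4)])
    S_dense += lam * (np.outer(one, one) + np.outer(1 - one, 1 - one))

assert np.allclose(tree_matvec(root, y), S_dense @ y)
```

Both traversals visit every node once, so the whole product costs O(n), as claimed by Lemma 1.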
Proof. (of Theorem 1) First, note that only the matrix-vector product a := W_tree 1 is needed in

    Q_tree S Q_tree^T = ( I − (1^T W_tree 1)^{−1} 1 1^T W_tree ) S ( I − (1^T W_tree 1)^{−1} W_tree 1 1^T )
                      = S − (1/(1^T a)) 1 a^T S − (1/(1^T a)) S a 1^T + (1/(1^T a))^2 1 (a^T S a) 1^T.    (16)

One way of computing a = W_tree 1 is to employ conjugate gradients (CG) and iteratively minimise ||S_tree a − 1||^2. Theoretically, CG is guaranteed to find the true a in O(n) iterations, each evaluating one matrix-vector product S_tree y, y ∈ R^n. Due to Lemma 1, a can be computed in O(n^2) time and is used in (16) to compute S* = Q_tree S Q_tree^T (only matrix-vector products, so O(n^2) complexity is maintained).
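A minimal end-to-end sketch of this recipe (our own illustration; a dense SPD matrix stands in for S_tree here, but any fast matvec such as the O(n) tree multiplication of Lemma 1 plugs into `conj_grad` unchanged):

```python
import numpy as np

def conj_grad(matvec, b, tol=1e-12):
    """Plain conjugate gradients: solves S_tree a = b using only
    matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = float(r @ r)
    for _ in range(2 * len(b)):
        Ap = matvec(p)
        step = rs / float(p @ Ap)
        x = x + step * p
        r = r - step * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def embed_candidate(S, matvec_tree):
    """S* = Q_tree S Q_tree^T via Eq. (16); only matrix-vector products,
    hence O(n^2) overall.  a = W_tree 1 solves S_tree a = 1."""
    n = S.shape[0]
    one = np.ones(n)
    a = conj_grad(matvec_tree, one)
    t = float(one @ a)
    Sa = S @ a
    return (S - np.outer(one, Sa) / t - np.outer(Sa, one) / t
            + (float(a @ Sa) / t ** 2) * np.outer(one, one))

def sq_dist(M):
    dg = np.diag(M)
    return dg[:, None] + dg[None, :] - 2.0 * M

# Demo with a dense SPD stand-in for S_tree.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
S_tree = A @ A.T + 5.0 * np.eye(5)
Xd = rng.normal(size=(5, 3))
S = Xd @ Xd.T
S_star = embed_candidate(S, lambda v: S_tree @ v)

# S* induces the same pairwise distances as S ...
assert np.allclose(sq_dist(S_star), sq_dist(S))
# ... and satisfies S* a = 0 for a = W_tree 1, as implied by Q^T a = 0.
a = np.linalg.solve(S_tree, np.ones(5))
assert np.allclose(S_star @ a, 0.0, atol=1e-8)
```

The resulting S* can then be passed to the central Wishart Gibbs sampler of section 2 in place of the unobserved inner product matrix.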
4 Experiments
Synthetic examples: normal clusters. In a first experiment we investigate the performance of our method on artificial datasets generated in accordance with the underlying model assumptions. A partition matrix B of size n = 200 containing k = 3 blocks is sampled, from which we construct Σ_B = α(I + θB). Then, X is drawn from N(M = 40 · 1_n 1_d^T, Σ = Σ_B ⊗ I_d) with d = 300 to generate S = (1/d) X X^T and D. The covariance parameters are set to α = 2 and θ = 15/d, which defines a rather difficult clustering problem with a hardly visible structure in D, as can be seen in the left part of Fig. 3. We compared the method to three different hierarchical clustering strategies (single-linkage, complete-linkage, Ward's method), to the standard central Wishart model using two different normalisations of S ('WD C': column normalisation using S_c = Q_I S Q_I^T, and 'WD R': additional row normalisation after embedding S_c using kernel PCA) and to the original TIWD model. The experiment was repeated 200 times and the quality of the inferred clusters was measured by the adjusted Rand index w.r.t. the true labels. For the hierarchical methods we report two different performance values: splitting the tree such that the 'true' number k = 3 of clusters is obtained, and computing the best value among all possible splits into [2, n] clusters ('*.best' in the boxplot). The reader should notice that both values are in favour of the hierarchical algorithms, since neither the true k nor the true labels are used for inferring the clusters in the Wishart-type methods. From the right part of Fig. 3 we conclude that (i) both 'naive' normalisation strategies WD C and WD R are clearly outperformed by TIWD and fastTIWD ('fTIWD' in the boxplot). Significance of pairwise performance differences is measured with a nonparametric Kruskal-Wallis test with a
Bonferroni-corrected post-test of Dunn's type, see the rightmost panel; (ii) the hierarchical methods have severe problems with high dimensionality and low class separation, and optimising the tree cutting does not help much. Even Ward's method (being perfectly suited for spherical clusters) has problems; (iii) there is no significant difference between TIWD and fastTIWD.
Figure 3: Normal distributed toy data. Left half: Partition matrix (top), distance matrix (bottom) and 2D-PCA embedding of a dataset drawn from the generative model. Right half: Agreement with 'true' labels measured by the adjusted Rand index (left) and outcome of a Kruskal-Wallis/Dunn test (right). Black squares mean two methods are different at a 'family' p-value ≤ 0.05.
Synthetic examples: log-normal clusters. In a second toy example we explicitly violate underlying model assumptions. For this purpose we sample again 3 clusters in d = 300 dimensions, but now
use a log-normal distribution that tends to produce a high number of ?atypical? samples. Note that
such a distribution should not induce severe problems for hierarchical methods when optimising the
Rand index over all possible tree cuttings, since the ?atypical? samples are likely to form singleton
clusters while the main structure is still visible in other branches of the tree. This should be particularly true for Ward?s method, since we still have spherically shaped clusters. As for the fastTIWD
model, we want to test if the prior over partitions is flexible enough to introduce additional singleton
clusters: In the experiment, it performed at least as well as Ward's method, and clearly outperformed
single- and complete-linkage. We also compared it to the affinity-propagation method (AP), which,
however, has severe problems on this dataset, even when optimising the input preference parameter
that affects the number of clusters in the partition.
Figure 4: Log-normal distributed toy data. Left: Agreement with 'true' labels measured by the
adjusted Rand index. Right: Outcome of a Kruskal-Wallis/Dunn test, analogous to Fig. 3.
Semi-supervised clustering of protein sequences. As a large-scale application we present a semi-supervised clustering example which is an upscaled version of an experiment with protein sequences
presented in [1]. While traditional semi-supervised classifiers assume at least one labelled object
per class, our model is flexible enough to allow additional new clusters that have no counterpart
in the subset of labelled objects. We apply this idea on two different databases, one being high
quality due to manual annotation with a stringent review process (SwissProt) while the other contains
automatically annotated proteins and is not reviewed (TrEMBL). The annotations in SwissProt are
used as supervision information resulting in a set of class labels, whereas the proteins in TrEMBL
are treated as unlabelled objects, potentially forming new clusters. In contrast to a relatively small
set of globin sequences in [1], we extract a total number of 12,290 (manually or automatically)
annotated proteins to have some role in oxygen transport or binding. This set contains a richer class
including, for instance, hemocyanins, hemerythrins, chlorocruorins and erythrocruorins.
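The flexibility invoked here, where labelled SwissProt objects keep their classes while TrEMBL objects may open brand-new clusters, comes from the Dirichlet-process (Chinese restaurant) prior over partitions [3, 4]. The toy sketch below (our illustration; it samples from the prior only and ignores the Wishart likelihood, so it is not the authors' Gibbs sampler) shows the mechanism:

```python
import random

def crp_assign_unlabelled(fixed_labels, n_unlabelled, alpha, seed=0):
    """Sequentially seat unlabelled items by Chinese-restaurant probabilities,
    keeping the labelled items' cluster assignments fixed.

    Returns the full label list. Prior only: a real sampler would also
    multiply in a likelihood term for each candidate cluster."""
    rng = random.Random(seed)
    labels = list(fixed_labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    next_new = max(counts) + 1 if counts else 0
    for _ in range(n_unlabelled):
        n = len(labels)
        # Existing cluster k with prob counts[k]/(n+alpha), new with alpha/(n+alpha).
        r = rng.random() * (n + alpha)
        acc = 0.0
        choice = None
        for k, c in counts.items():
            acc += c
            if r < acc:
                choice = k
                break
        if choice is None:            # open a brand-new cluster
            choice = next_new
            next_new += 1
        labels.append(choice)
        counts[choice] = counts.get(choice, 0) + 1
    return labels
```

Fixed labels are never resampled, while unlabelled items can either join a predefined class or open clusters with no labelled counterpart, exactly the behaviour exploited in the SwissProt/TrEMBL experiment.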
The proteins are represented as a matrix of pairwise alignment scores. A subset of 1731 annotated sequences is from SwissProt, resulting in 356 protein classes. Among the 10,559 TrEMBL sequences
we could identify 23 new clusters which are dissimilar to any SwissProt proteins, see Fig. 5. Most of
the newly identified clusters contain sequences sharing some rare and specific properties. In accordance with the results in [1], we find a large new cluster containing flavohemoglobins from specific
species of funghi and bacteria that share a certain domain architecture composed of a globin domain
fused with ferredoxin reductase-like FAD- and NAD-binding modules. An additional example is a
cluster of proteins with a chemotaxis methyl-accepting receptor domain from a very special class of
magnetic bacteria that orient themselves according to earth's magnetic field. The domain architecture
of these proteins involving 6 domains is unique among all sequences in our dataset. Another cluster
contains iron-sulfur cluster repair di-iron proteins that build on a polymetallic system, the di-iron
center, constituted by two iron ions bridged by two sulfide ions. Such di-iron centers occur only in
this new cluster.
Figure 5: Partition of all 12,290 proteins into 379 clusters: 356 predefined by sequences from
SwissProt and 23 new formed by sequences from TrEMBL (red box).
In order to obtain the above results, 5000 Gibbs sweeps were conducted in a total runtime of ≈ 6
hours. Although section 2 highlighted the worst-case complexity of the original TIWD, it is also
important to experimentally compare both models in a real world scenario: we ran 100 sweeps
with both fastTIWD and TIWD and hereby observed an average improvement by a factor of 192, which
would lead to an estimated runtime of 1152 hours (≈ 50 days) for the latter model. On a side note,
automatic cluster identification is a nice example for benefits of large-scale data mining: clearly, one
could theoretically also identify special sequences by digging into various protein domain databases,
but without precise prior knowledge, this would hardly be feasible for ≈ 12,000 proteins.
5 Conclusion
We have presented a new model for partitioning pairwise distance data, which is motivated by the
great success of the TIWD model, shares all its positive properties, and additionally reduces the
computational workload from O(n³) to O(n²) per sweep of the Gibbs sampler. Compared to vectorial representations, pairwise distances do not convey information about translations and rotations of
the underlying coordinate system. While in the TIWD model this lack of information is handled by
making the likelihood invariant against such geometric transformations, here we break this symmetry by choosing one particular inner-product representation S̃ and thus, one particular coordinate
system. The advantage is being able to use a standard (i.e. central) Wishart distribution for which
we present an efficient Gibbs sampling algorithm.
We show that our construction principle for selecting S̃ among all inner-product matrices corresponding to an observed distance matrix D finds an optimal candidate if the true covariance were
known. Although this is a purely theoretical guarantee, it is successfully exploited by a simple hierarchical cluster method to produce an initial covariance estimate, all without specifying the number
of clusters, which is one of the model's key properties. On the algorithmic side, we prove that S̃
can be computed in O(n²) time using tree traversals. Assuming the number of Gibbs sweeps necessary is independent of n (which, of course, depends on the problem), we now have a probabilistic
algorithm for partitioning distance matrices running in O(n²) time. Experiments on simulated data
show that the quality of partitions found is at least comparable to that of the original TIWD. It is
now possible for the first time to use the Wishart-Dirichlet process model for large matrices. Our experiment containing ≈ 12,000 proteins shows that fastTIWD can be successfully used to mine large
relational datasets and leads to automatic identification of protein clusters sharing rare structural
properties. Assuming that in most clustering problems it is acceptable to obtain a solution within
some hours, any further size increase of the input matrix will become more and more a problem of
memory capacity rather than computation time.
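The O(n²) tree-traversal claim can be made concrete on a related, simpler task (a sketch of the general idea only, not the paper's actual construction of the initial inner-product matrix): filling the cophenetic (ultrametric) distance matrix of a dendrogram costs O(n²) total work, because each pair of leaves is written exactly once, at its lowest common ancestor.

```python
def cophenetic_matrix(tree, n):
    """tree: nested tuples (left, right, height) with integer leaves 0..n-1.

    Fills the n x n cophenetic distance matrix in O(n^2) total time:
    at each internal node of height h, every pair of leaves split by that
    node (its lowest common ancestor) receives distance h."""
    D = [[0.0] * n for _ in range(n)]

    def leaves(node):
        if isinstance(node, int):       # a leaf
            return [node]
        left, right, height = node
        ll, rr = leaves(left), leaves(right)
        for i in ll:
            for j in rr:
                D[i][j] = D[j][i] = height
        return ll + rr

    leaves(tree)
    return D
```

Summed over all internal nodes, the double loop writes exactly n(n−1)/2 pairs, which is what makes this kind of traversal quadratic rather than cubic.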
Acknowledgments
This work has been partially supported by the FP7 EU project SIMBAD.
References
[1] J. Vogt, S. Prabhakaran, T. Fuchs, and V. Roth. The Translation-invariant Wishart-Dirichlet Process for Clustering Distance Data. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[2] P. McCullagh and J. Yang. How Many Clusters? Bayesian Analysis, 3:101–120, 2008.
[3] Y. W. Teh. Dirichlet Processes. In Encyclopedia of Machine Learning. Springer, 2010.
[4] J. Sethuraman. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4:639–650, 1994.
[5] B. A. Frigyik, A. Kapila, and M. R. Gupta. Introduction to the Dirichlet Distribution and Related Processes. Technical report, Department of Electrical Engineering, University of Washington, 2010.
[6] W. Ewens. The Sampling Theory of Selectively Neutral Alleles. Theoretical Population Biology, 3:87–112, 1972.
[7] D. Blei and M. Jordan. Variational Inference for Dirichlet Process Mixtures. Bayesian Analysis, 1:121–144, 2005.
[8] R. Neal. Markov Chain Sampling Methods for Dirichlet Process Mixture Models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
[9] P. McCullagh. Marginal Likelihood for Distance Matrices. Statistica Sinica, 19:631–649, 2009.
[10] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10(5):1299–1319, July 1998.
[11] J. A. Diaz-Garcia, J. R. Gutierrez, and K. V. Mardia. Wishart and Pseudo-Wishart Distributions and Some Applications to Shape Theory. Journal of Multivariate Analysis, 63:73–87, 1997.
[12] H. Uhlig. On Singular Wishart and Singular Multivariate Beta Distributions. Annals of Statistics, 22:395–405, 1994.
[13] M. Srivastava. Singular Wishart and Multivariate Beta Distributions. Annals of Statistics, 31(2):1537–1560, 2003.
[14] M. Farach, S. Kannan, and T. Warnow. A Robust Model for Finding Optimal Evolutionary Trees. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 137–145, 1993.
Noise Thresholds for Spectral Clustering
Sivaraman Balakrishnan
Min Xu
Akshay Krishnamurthy
Aarti Singh
School of Computer Science, Carnegie Mellon University
{sbalakri,minx,akshaykr,aarti}@cs.cmu.edu
Abstract
Although spectral clustering has enjoyed considerable empirical success in machine learning, its theoretical properties are not yet fully developed. We analyze
the performance of a spectral algorithm for hierarchical clustering and show that
on a class of hierarchically structured similarity matrices, this algorithm can tolerate noise that grows with the number of data points while still perfectly recovering
the hierarchical clusters with high probability. We additionally improve upon previous results for k-way spectral clustering to derive conditions under which spectral clustering makes no mistakes. Further, using minimax analysis, we derive
tight upper and lower bounds for the clustering problem and compare the performance of spectral clustering to these information theoretic limits. We also present
experiments on simulated and real world data illustrating our results.
1 Introduction
Clustering, a fundamental and ubiquitous problem in machine learning, is the task of organizing data
points into homogenous groups using a given measure of similarity. Two popular forms of clustering
are k-way, where an algorithm directly partitions the data into k disjoint sets, and hierarchical,
where the algorithm organizes the data into a hierarchy of groups. Popular algorithms for the k-way
problem include k-means, spectral clustering, and density-based clustering, while agglomerative
methods that merge clusters from the bottom up are popular for the latter problem.
Spectral clustering algorithms embed the data points by projection onto a few eigenvectors of (some
form of) the graph Laplacian matrix and use this spectral embedding to find a clustering. This
technique has been shown to work on various arbitrarily shaped clusters and, in addition to being
straightforward to implement, often outperforms traditional clustering algorithms such as the k-means algorithm.
Real world data is inevitably corrupted by noise and it is of interest to study the robustness of spectral
clustering algorithms. This is the focus of our paper.
Our main contributions are:
• We leverage results from perturbation theory in a novel analysis of a spectral algorithm
for hierarchical clustering to understand its behavior in the presence of noise. We provide
strong guarantees on its correctness; in particular, we show that the amount of noise spectral
clustering tolerates can grow rapidly with the size of the smallest cluster we want to resolve.
• We sharpen existing results on k-way spectral clustering. In contrast with earlier work, we
provide precise error bounds through a careful characterization of a k-means style algorithm run on the spectral embedding of the data.
• We also address the issue of optimal noise thresholds via the use of minimax theory. In
particular, we establish tight information-theoretic upper and lower bounds for cluster resolvability.
2 Related Work and Definitions
There are several high-level justifications for the success of spectral clustering. The algorithm has
deep connections to various graph-cut problems, random walks on graphs, electric network theory,
and via the graph Laplacian to the Laplace-Beltrami operator. See [16] for an overview.
Several authors (see von Luxburg et al. [17] and references therein) have shown various forms of
asymptotic convergence for the Laplacian of a graph constructed from random samples drawn from
a distribution on or near a manifold. These results however often do not easily translate into precise
guarantees for successful recovery of clusters, which is the emphasis of our work.
There has also been some theoretical work on spectral algorithms for cluster recovery in random
graph models. McSherry [9] studies the ?cluster-structured? random graph model in which the
probability of adding an edge can vary depending on the clusters the edge connects. He considers a
specialization of this model, the planted partition model, which specifies only two probabilities, one
for inter-cluster edges and another for intra-cluster edges. In this case, we can view the observed
adjacency matrix as a random perturbation of a low rank ?expected? adjacency matrix which encodes the cluster membership. McSherry shows that one can recover the clusters from a low rank
approximation of the observed (noisy) adjacency matrix. These results show that low-rank matrices
have spectra that are robust to noise. Our results however, show that we can obtain similar insensitivity (to noise) guarantees for a class of interesting structured full-rank matrices, indicating that this
robustness extends to a much broader class of matrices.
More recently, Rohe et al. [11] analyze spectral clustering in the stochastic block model (SBM),
which is an example of a structured random graph. They consider the high-dimensional scenario
where the number of clusters k grows with the number of data points n and show that under certain
assumptions the average number of mistakes made by spectral clustering → 0 with increasing n.
Our work on hierarchical clustering also has the same high-dimensional flavor since the number of
clusters we resolve grows with n. However, in the hierarchical clustering setting, errors made at the
bottom level propagate up the tree and we need to make precise arguments to ensure that the total
number of errors → 0 with increasing n (see Theorem 1).
Since Rohe et al. [11] and McSherry [9] consider random graph models, the 'noise' on each entry has
bounded variance. We consider more general noise models and study the relation between errors in
clustering and noise variance. Another related line of work is on the problem of spectrally separating
mixtures of Gaussians [1, 2, 8].
Ng et al. [10] study k-way clustering and show that the eigenvectors of the graph Laplacian are stable
in 2-norm under small perturbations. This justifies the use of k-means in the perturbed subspace
since ideally without noise, the spectral embedding by the top k eigenvectors of the graph Laplacian
reflects the true cluster memberships. However, closeness in 2-norm does not translate into a strong
bound on the total number of errors made by spectral clustering.
Huang et al. [7] study the misclustering rate of spectral clustering under the somewhat unnatural
assumption that every coordinate of the Laplacian?s eigenvectors are perturbed by independent and
identically distributed noise. In contrast, we specify our noise model as an additive perturbation to
the similarity matrix, making no direct assumptions on how this affects the spectrum of the Laplacian. We show that the eigenvectors are stable in 1-norm and use this result to precisely bound the
misclustering rate of our algorithm.
2.1 Definitions
The clustering problem can be defined as follows: Given an (n ? n) similarity matrix on n data
points, find a set C of subsets of the points such that points belonging to the same subset have
high similarity and points in different subsets have low similarity. Our first results focus on binary
hierarchical clustering, which is formally defined as follows:
Definition 1 A hierarchical clustering T on data points {X_i}_{i=1}^n is a collection of clusters (subsets
of the points) such that C_0 := {X_i}_{i=1}^n ∈ T and for any C_i, C_j ∈ T, either C_i ⊂ C_j, C_j ⊂ C_i, or
C_i ∩ C_j = ∅. A binary hierarchical clustering T is a hierarchical clustering such that for each non-atomic C_k ∈ T, there exist two proper subsets C_i, C_j ∈ T with C_i ∩ C_j = ∅ and C_i ∪ C_j = C_k.
We label each cluster by a sequence s of Ls and Rs so that C_{s·L} and C_{s·R} partition C_s, C_{s·LL} and
C_{s·LR} partition C_{s·L}, and so on.
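To make the L/R labelling concrete, the following sketch (our example, splitting clusters in index order for simplicity) enumerates the clusters of a balanced binary hierarchy by their label strings:

```python
def label_clusters(points):
    """Map label strings ('' for the root, then 'L'/'R' suffixes) to the
    clusters of a balanced binary hierarchical clustering that recursively
    splits each cluster into its first and second half."""
    clusters = {}

    def split(s, pts):
        clusters[s] = pts
        if len(pts) > 1:
            mid = len(pts) // 2
            split(s + "L", pts[:mid])
            split(s + "R", pts[mid:])

    split("", points)
    return clusters
```

For every non-singleton label s, the clusters labelled s + "L" and s + "R" are disjoint and their union is the cluster labelled s, exactly the partition property required by Definition 1.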
[Figure 1(b), reconstructed: the ideal hierarchical block matrix. At the top split, the two blocks coupling the left and right halves carry the range [α_s, β_s]; within the left half the diagonal sub-blocks carry [α_{s·LL}, β_{s·LL}] and [α_{s·LR}, β_{s·LR}] with off-diagonal range [α_{s·L}, β_{s·L}]; within the right half the diagonal sub-blocks carry [α_{s·RL}, β_{s·RL}] and [α_{s·RR}, β_{s·RR}] with off-diagonal range [α_{s·R}, β_{s·R}]; the pattern repeats recursively down the hierarchy.]
Figure 1: (a): Two moons data set (Top). For a similarity function defined on the ε-neighborhood
graph (Bottom), this data set forms an ideal matrix. (b) An ideal matrix for the hierarchical problem.
Ideally, we would like that at all levels of the hierarchy, points within a cluster are more similar
to each other than to points outside of the cluster. For a suitably chosen similarity function, a
data set consisting of clusters that lie on arbitrary manifolds with complex shapes can result in
this ideal case. As an example, in the two-moons data set in Figure 1(a), the popular technique of
constructing a nearest neighbor graph and defining the distance between two points as the length
of the longest edge on the shortest path between them results in an ideal similarity matrix. Other
non-Euclidean similarity metrics (for instance density based similarity metrics [12]) can also allow
for non-parametric cluster shapes.
For such ideal similarity matrices, we can show that the spectral clustering algorithm will deterministically recover all clusters in the hierarchy (see Theorem 5 in the appendix). However, since this
ideal case does not hold in general, we focus on similarity matrices that can be decomposed into an
ideal matrix and a high-variance noise term.
Definition 2 A similarity matrix W is a noisy hierarchical block matrix (noisy HBM) if W ≜ A + R
where A is ideal and R is a perturbation matrix, defined as follows:

• An ideal similarity matrix, shown in Figure 1(b), is characterized by ranges of off-block-diagonal similarity values [α_s, β_s] for each cluster C_s such that if x ∈ C_{s·L} and y ∈ C_{s·R}
then α_s ≤ A_xy ≤ β_s. Additionally, min{α_{s·R}, α_{s·L}} > β_s.

• A symmetric (n × n) matrix R is a perturbation matrix with parameter σ if (a) E(R_ij) = 0,
(b) the entries of R are subgaussian, that is E(exp(tR_ij)) ≤ exp(σ²t²/2), and (c) for each
row i, R_i1, ..., R_in are independent.
The perturbations we consider are quite general and can accommodate bounded (with σ upper
bounded by the range), Gaussian (where σ is the standard deviation), and several other common
distributions. This model is well-suited to noise that arises from the direct measurement of similarities. It is also possible to assume instead that the measurements of individual data points are noisy
though we do not focus on this case in our paper.
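As a concrete (hypothetical) instance of this noise model, the sketch below builds a two-level ideal hierarchical block matrix with made-up similarity levels 0.9 / 0.5 / 0.2 and adds symmetric i.i.d. Gaussian noise, one admissible subgaussian perturbation:

```python
import random

def noisy_hbm(n, sigma, seed=0):
    """Two-level ideal hierarchical block matrix on n points (n divisible
    by 4) plus a symmetric i.i.d. Gaussian perturbation of scale sigma.

    Illustrative (made-up) ideal values: within-leaf similarity 0.9,
    first-level off-block 0.5, and 0.2 across the root split."""
    rng = random.Random(seed)
    q = n // 4                               # size of a leaf cluster

    def ideal(i, j):
        if i // q == j // q:
            return 0.9                       # same leaf cluster
        if i // (2 * q) == j // (2 * q):
            return 0.5                       # same side of the root split
        return 0.2                           # across the root split

    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            noise = rng.gauss(0.0, sigma) if i != j else 0.0
            W[i][j] = W[j][i] = ideal(i, j) + noise
    return W
```

With sigma = 0 this is exactly an ideal matrix in the sense of Definition 2 (with degenerate ranges α_s = β_s); as sigma grows, the finer blocks are the first to be masked by noise, which is the effect illustrated in Figure 2.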
In the k-way case, we consider the following similarity matrix which is studied by Ng et al. [10].
Definition 3 W is a noisy k-Block Diagonal matrix if W ≜ A + R where R is a perturbation
matrix and A is an ideal matrix for the k-way problem. An ideal matrix for the k-way problem has
within-cluster similarities larger than β_0 > 0 and between-cluster similarities 0.
Finally, we define the combinatorial Laplacian matrix, which will be the focus of our spectral algorithm and our subsequent analysis.
Definition 4 The combinatorial Laplacian L of a matrix W is defined as L ≜ D − W, where D is
a diagonal matrix with D_ii ≜ Σ_{j=1}^n W_ij.
We note that other analyses of spectral clustering have studied other Laplacian matrices, particularly,
the normalized Laplacians, defined as L_n ≜ D^{−1} L and L_n ≜ D^{−1/2} L D^{−1/2}. However, as we show in
Appendix E, the normalized Laplacian can mis-cluster points even for an ideal noiseless similarity
matrix.
Algorithm 1 HS
input: (noisy) n × n similarity matrix W
  Compute Laplacian L = D − W
  v_2 ← smallest non-constant eigenvector of L
  C_1 ← {i : v_2(i) ≥ 0}, C_2 ← {j : v_2(j) < 0}
  C ← {C_1, C_2} ∪ HS(W_{C_1}) ∪ HS(W_{C_2})
output: C

Figure 2: An ideal matrix and a noisy HBM. Clusters at finer granularity are masked by noise.
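A stdlib-only sketch of the first split performed by Algorithm 1 (the eigenvector here is obtained by power iteration on cI − L with the constant eigenvector deflated, an implementation choice of ours; any symmetric eigensolver would do):

```python
def laplacian(W):
    """Combinatorial Laplacian L = D - W of a similarity matrix."""
    n = len(W)
    L = [[-W[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        L[i][i] += sum(W[i])          # diagonal entries of W cancel out
    return L

def second_smallest_eigvec(L, iters=500):
    """Eigenvector of the second-smallest eigenvalue of a symmetric
    Laplacian: power iteration on c*I - L (c above the spectral radius,
    via Gershgorin), deflating the constant eigenvector of L."""
    n = len(L)
    c = 2.0 * max(L[i][i] for i in range(n))
    v = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]     # project out the all-ones vector
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def first_split(W):
    """One level of HS: split by the sign of the Fiedler-type vector."""
    v = second_smallest_eigvec(laplacian(W))
    C1 = {i for i, x in enumerate(v) if x >= 0}
    C2 = set(range(len(W))) - C1
    return C1, C2
```

Recursing `first_split` on the submatrices W_{C_1} and W_{C_2} reproduces the full HS recursion.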
Algorithm 2 K-WAY SPECTRAL
input: (noisy) n × n similarity matrix W, number of clusters k
  Compute Laplacian L = D − W
  V ← (n × k) matrix with columns v_1, ..., v_k, where v_i ≜ ith smallest eigenvector of L
  c_1 ← V_1 (the first row of V)
  For i = 2 ... k let c_i ← argmax_{j ∈ {1...n}} min_{l ∈ {1,...,i−1}} ‖V_j − V_{c_l}‖_2
  For i = 1 ... n set c(i) = argmin_{j ∈ {1...k}} ‖V_i − V_{c_j}‖_2
output: C ≜ {{j ∈ {1...n} : c(j) = i}}_{i=1}^k
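The center-picking loop of Algorithm 2 is the farthest-first traversal of Hochbaum-Shmoys, followed by nearest-center assignment. A small sketch on generic embedded rows (our illustration; in the algorithm the rows would be the rows of the spectral embedding V):

```python
def kway_assign(rows, k):
    """Farthest-first traversal to pick k centers among embedded rows,
    then assign every row to its nearest center (squared Euclidean)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [0]                     # c_1 <- the first row
    for _ in range(1, k):
        # Next center: the row farthest from its closest existing center.
        nxt = max(range(len(rows)),
                  key=lambda j: min(d2(rows[j], rows[c]) for c in centers))
        centers.append(nxt)
    return [min(range(k), key=lambda c: d2(row, rows[centers[c]]))
            for row in rows]
```

This greedy traversal is a 2-approximation for the k-center objective, which is what makes its behaviour on the perturbed embedding easier to analyze than that of Lloyd-style k-means.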
3 Algorithms and Main Results
In our analysis we study the algorithms for hierarchical and k-way clustering, outlined in Algorithms 1 and 2. Both of these algorithms take a similarity matrix W and compute the eigenvectors
corresponding to the smallest eigenvalues of the Laplacian of W . The algorithms then run simple
procedures to recover the clustering from the spectral embedding of the data points by these eigenvectors. Our Algorithm 2 deviates slightly from the standard practice of running k-means in the
perturbed subspace. We instead use the optimal algorithm for the k-center problem (Hochbaum-Shmoys [6]) because of its amenability to theoretical analysis. We will in this section outline our
main results; we sketch the proofs in the next section and defer full proofs to the Appendix.
We first state the following general assumptions, which we place on the ideal similarity matrix A:
Assumption 1 For all i, j, 0 < A_ij ≤ β* for some constant β*.
Assumption 2 (Balanced clusters) There is a constant η ≥ 1 such that at every split of the hierarchy
|C_max| / |C_min| ≤ η, where |C_max| and |C_min| are the sizes of the biggest and smallest clusters respectively.
Assumption 3 (Range Restriction) For every cluster s, min{α_{s·L}, α_{s·R}} − β_s > η(β_s − α_s).
It is important to note that these assumptions are placed only on the ideal matrices. The noisy HBMs
can with high probability violate these assumptions.
We assume that the entries of A are strictly greater than 0 for technical reasons; we believe, as
confirmed empirically, that this restriction is not necessary for our results to hold. Assumption 2
says that at every level the largest cluster is only a constant fraction larger than the smallest. This
can be relaxed, albeit at the cost of a worse rate. For the ideal matrix, Assumption 3 ensures that
at every level of the hierarchy, the gap between the within-cluster similarities and between-cluster
similarities is larger than the range of between-cluster similarities. Earlier papers [9, 11] assume that
the ideal similarities are constant within a block in which case the assumption is trivially satisfied
by the definition of the ideal matrix. However, more generally this assumption is necessary to show
that the entries of the eigenvector are safely bounded away from zero. If this assumption is violated
by the ideal matrix, then the eigenvector entries can decay as fast as O(1/n) (see Appendix E for
more details), and our analysis shows that such matrices will no longer be robust to noise.
Other analyses of spectral clustering often directly make less interpretable assumptions about the
spectrum. For instance, Ng et al. [10] assume conditions on the eigengap of the normalized Laplacian and this assumption implicitly creates constraints on the entries of the ideal matrix A that can
be hard to make explicit.
To state our theorems concisely we will define an additional quantity γ*_S. Intuitively, γ*_S quantifies
how close the ideal matrix comes to violating Assumption 3 over a set of clusters S.

Definition 5 For a set of clusters S, define
γ*_S ≜ min_{s ∈ S} [ min{α_{s·L}, α_{s·R}} − β_s − η(β_s − α_s) ].
We, as well as previous works [10, 11], rely on results from perturbation theory to bound the error
in the observed eigenvectors in 2-norm. Using this approach, the straightforward way to analyze
the number of errors is pessimistic since it assumes the difference between the two eigenvectors is
concentrated on a few entries. However, we show that the perturbation is in fact generated by a
random process and thus unlikely to be adversarially concentrated. We formalize this intuition to
uniformly bound the perturbations on every entry and get a stronger guarantee.
We are now ready to state our main result for hierarchical spectral clustering. At a high level, this
result gives conditions on the noise scale factor under which Algorithm HS will recover all clusters
s ∈ S_m, where S_m is the set of all clusters of size at least m.
Theorem 1 Suppose that W = A + R is an (n × n) noisy HBM where A satisfies Assumptions 1, 2, and 3. Suppose that the scale factor σ of R increases at rate
σ = o( min{ γ̃^5 √(m / log n), γ̃^4 (m / log n)^{1/4} } ),
where γ̃ = min{ γ_0, γ*_{S_m}/(1 + γ*_{S_m}) }, m > 0 and m = ω(log n)¹. Then for all n large enough, with probability at least 1 − 6/n, HS, on input M, will exactly recover all clusters of size at least m.
A few remarks are in order:
1. It is impossible to resolve the entire hierarchy, since small clusters can be irrecoverably
buried in noise. The amount of noise that algorithm HS can tolerate is directly dependent
on the size of the smallest cluster we want to resolve.
2. As a consequence of our proof, we show that to resolve only the first level of the hierarchy, the amount of noise we can tolerate is (pessimistically) o(γ̃^5 (n / log n)^{1/4}), which grows rapidly with n.
3. Under this scaling between n and σ, it can be shown that popular agglomerative algorithms
such as single linkage will fail with high probability. We verify this negative result through
experiments (see Section 5).
4. Since we assume that γ does not grow with n, both the range (ᾱ_s − α_s) and the gap (min{α_{s_L}, α_{s_R}} − ᾱ_s) must decrease with n, and hence γ*_{S_m} must decrease as well. For example, if we have uniform ranges and gaps across all levels, then γ*_{S_m} = Θ(1/log n).
For constant γ_0, for n large enough γ̃ = γ*_{S_m}/(1 + γ*_{S_m}). We see that in our analysis γ*_{S_m} is a crucial determinant of the noise tolerance of spectral clustering.
We extend the intuition behind Theorem 1 to the k-way setting. Some arguments are more subtle
since spectral clustering uses the subspace spanned by the k smallest eigenvectors of the Laplacian.
We improve the results of Ng et al. [10] to provide a coordinate-wise bound on the perturbation of
the subspace, and use this to make precise guarantees for Algorithm K-WAY S PECTRAL.
Theorem 2 Suppose that W = A + R is an (n × n) noisy k-Block Diagonal matrix where A satisfies Assumptions 1 and 2. Suppose that the scale factor σ of R increases at rate σ = o( γ_0 (n / (k log n))^{1/4} ). Then with probability 1 − 8/n, for all n large enough, K-WAY SPECTRAL will exactly recover the k clusters.
3.1 Information-Theoretic Limits
Having introduced our analysis for spectral clustering, a pertinent question remains: is the algorithm
optimal in its dependence on the various parameters of the problem?
We establish the minimax rate in the simplest setting of a single binary split and compare it to our
own results on spectral clustering. With the necessary machinery in place, the minimax rate for the
k-way problem follows easily. We derive lower bounds on the problem of correctly identifying two
clusters under the assumption that the clusters are balanced. In particular, we derive conditions on
(n, σ, γ), i.e. the number of objects, the noise variance, and the gap between inter- and intra-cluster
similarities, under which any method will make an error in identifying the correct clusters.
¹Recall a_n = o(b_n) and b_n = ω(a_n) if lim_{n→∞} a_n/b_n = 0.
Theorem 3 There exists a constant c ∈ (0, 1/8) such that if σ ≥ γ √( n / (c log(n/2)) ), the probability of failure of any estimator of the clustering remains bounded away from 0 as n → ∞.
Under the conditions of this Theorem, γ and γ̃ coincide, provided the inter-cluster similarities remain bounded away from 0 by at least a constant. As a direct consequence of Theorem 1, spectral clustering requires
σ ≤ min{ γ̃^5 √( n / (C log(n/2)) ), γ̃^4 (n / (C log n))^{1/4} }    (2)
(for a large enough constant C).
Thus, the noise threshold for spectral clustering does not match the lower bound. To establish
that this lower bound is indeed tight, we need to demonstrate a (not necessarily computationally
efficient) procedure that achieves this rate. We analyze a combinatorial procedure that solves the
NP-hard problem of finding the minimum cut of size exactly n/2 by searching over all subsets. This
algorithm is strongly related to spectral clustering with the combinatorial Laplacian, which solves a
relaxation of the balanced minimum cut problem. We prove the following theorem in the appendix.
Theorem 4 There exists a constant C such that if σ < γ √( n / (C log(n/2)) ), the combinatorial procedure described above succeeds with probability at least 1 − 1/n; the failure probability 1/n goes to 0 as n → ∞.
This theorem and the lower bound together establish the minimax rate. It however, remains an
open problem to tighten the analysis of spectral clustering in this paper to match this rate. In the
Appendix we modify the analysis of [9] to show that under the added restriction of block constant
ideal similarities there is an efficient algorithm that achieves the minimax rate.
4 Proof Outlines
Here, we present proof sketches of our main theorems, deferring the details to the Appendix.
Outline of proof of Theorem 1
Let us first restrict our attention toward finding the first split in the hierarchical clustering. Once we
prove that we can recover the first split correctly, we can then recursively apply the same arguments
along with some delicate union bounds to prove that we will recover all large-enough splits of the
hierarchy. To make the presentation clearer, we will only focus here on the scaling between σ² and n.
Of course, when we analyze deeper splits, n becomes the size of the sub-cluster.
Let W = A + R be the n × n noisy HBM. One can readily verify that the Laplacian of W, L_W, can be decomposed as L_A + L_R. Let v^(2), u^(2) be the second eigenvectors of L_A, L_W respectively.
We first show that the unperturbed v^(2) can clearly distinguish the two outermost clusters and that λ_1, λ_2, and λ_3 (the first, second, and third smallest eigenvalues of L_W respectively) are far away from each other. More precisely, we show |v_i^(2)| = Ω(1/√n) for all i = 1, ..., n, and that its sign corresponds to the cluster identity of point i. Further, the eigen-gaps satisfy λ_2 − λ_1 = λ_2 = Ω(n) and λ_3 − λ_2 = Ω(n).
Now, using the well-known Davis-Kahan perturbation theorem, we can show that
‖v^(2) − u^(2)‖_2 = O( σ √(n log n) / min(λ_2, λ_3 − λ_2) ) = O( σ √(log n / n) ).
The most straightforward way of turning this ℓ2-norm bound into a uniform entry-wise ℓ∞ bound is to assume that only one coordinate has large perturbation and comprises all of the ℓ2-perturbation. We perform a much more careful analysis to show that all coordinates uniformly have low perturbation. Specifically, we show that if σ = O( (n / log n)^{1/4} ), then with high probability, ‖v^(2) − u^(2)‖_∞ = O(1/√n). Combining this and the fact that |v_i^(2)| = Ω(1/√n), and performing careful comparison of the leading constants, we can conclude that spectral clustering will correctly recover the first split.
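The argument above hinges on the entries of the second eigenvector being sign-separated by cluster, and on that separation surviving the noise. A tiny numerical sketch of this recovery rule (sizes, similarity values, and noise level are invented for illustration, not taken from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 40            # two clusters of n/2 points each
sigma = 0.05      # noise scale factor (small relative to the 0.6 gap)

# Ideal 2-block matrix A: high within-cluster, low between-cluster similarity.
A = np.full((n, n), 0.2)
A[:n // 2, :n // 2] = 0.8
A[n // 2:, n // 2:] = 0.8

# Noisy HBM W = A + R, with R a symmetric Gaussian noise matrix.
R = rng.normal(0.0, sigma, (n, n))
W = A + (R + R.T) / 2

# Combinatorial Laplacian L_W = D - W; np.linalg.eigh sorts eigenvalues
# in ascending order, so column 1 of the eigenvector matrix is v^(2).
L = np.diag(W.sum(axis=1)) - W
v2 = np.linalg.eigh(L)[1][:, 1]

# The sign of each entry of v2 identifies the side of the first split.
labels = v2 > 0
ok = (labels[:n // 2] == labels[0]).all() \
    and (labels[n // 2:] == labels[-1]).all() \
    and labels[0] != labels[-1]
print("first split recovered:", bool(ok))
```

At this noise level the per-entry signal-to-noise ratio is large, so the sign pattern of v^(2) matches the planted split.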
Outline of proof of Theorem 2
Leveraging our analysis of Theorem 1, we derive an ℓ∞ bound on the bottom k eigenvectors. One potential complication we need to resolve is that the k-Block Diagonal matrix has repeated eigenvalues, and more careful subspace perturbation arguments are warranted.
[Figure 3 panels: (a) probability of success vs. noise scale factor σ for n = 256, 512, 1024, 2048; (b) probability of success vs. a rescaled noise scale factor for the same values of n; (c) fraction of tree correct vs. noise scale factor σ for HS, SL, AL, and CL; (d) fraction of tree correct vs. sequence length for HS and SL.]
Figure 3: (a),(b): Threshold curves for the first split in HBMs. Comparison of clustering algorithms
with n = 512, m = 9 (c), and on simulated phylogeny data (d).
We further propose a different algorithm, K-WAY SPECTRAL, in place of the standard k-means step. The algorithm carefully chooses cluster centers and then simply assigns each point to its nearest center. The ℓ∞ bound we derive is much stronger than the ℓ2 bounds prevalent in the literature and in a straightforward way provides a no-error guarantee on K-WAY SPECTRAL.
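The shape of such an algorithm — embed via the bottom-k eigenvectors, choose centers, assign to the nearest center — can be sketched as follows. The center-choosing step below uses a farthest-first traversal (in the spirit of the k-center heuristic of Hochbaum and Shmoys [6]) as a stand-in; the paper's actual rule, and all sizes and similarity values here, should be treated as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def kway_spectral(W, k):
    """Embed points via the bottom-k eigenvectors of the Laplacian,
    pick k centers by farthest-first traversal, then assign each
    point to its nearest center."""
    L = np.diag(W.sum(axis=1)) - W
    X = np.linalg.eigh(L)[1][:, :k]          # n x k spectral embedding
    centers = [0]
    while len(centers) < k:
        d = np.min([np.linalg.norm(X - X[c], axis=1) for c in centers], axis=0)
        centers.append(int(np.argmax(d)))    # farthest point from chosen centers
    dists = [np.linalg.norm(X - X[c], axis=1) for c in centers]
    return np.argmin(np.stack(dists), axis=0)

# Three blocks of 10 points each, lightly perturbed.
n, k, b = 30, 3, 10
A = np.full((n, n), 0.1)
for c in range(k):
    A[c * b:(c + 1) * b, c * b:(c + 1) * b] = 0.9
R = 0.02 * rng.standard_normal((n, n))
labels = kway_spectral(A + (R + R.T) / 2, k)
```

Because the embedded rows of each block collapse to nearly a single point, the nearest-center assignment labels each block uniformly.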
Outline of proof of Theorem 3
As is typically the case with minimax analysis, we begin by restricting our attention to a small (but
hard to distinguish) class of models, and follow this by the application of Fano's inequality. Models are indexed by θ(n, σ, γ, I_1), where I_1 denotes the indices of the rows (and columns) in the first cluster. For simplicity, we'll focus only on models with |I_1| = n/2.
Since we are interested in the worst case we can make two further simplifications. The ideal (noiseless) matrix can be taken to be block-constant since the worst case is when the diagonal blocks are
at their lower bound (which we call p) and the off-diagonal blocks are at their upper bound (q). We consider matrices W = A + R, which are (n × n) matrices, with R_ij ∼ N(0, σ²).
Given the true parameter θ_0, we choose the following "hard" subset {θ_1, ..., θ_M}. We will select models which mis-cluster only the last object in I_1; there are exactly n/2 such models. Our proof is an application of Fano's inequality, using the Hamming distance and the KL-divergence between the true model I_1 and the estimated model Î_1. See the appendix for calculations and proof details.
The proof of Theorem 4 follows from a careful union bound argument to show that even amongst
the combinatorially large number of balanced cuts of the graph, the true cut has the lowest weight.
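The combinatorial procedure analyzed in Theorem 4 — scan all subsets of size exactly n/2 and keep the minimum-weight cut — is easy to write down for a toy instance (sizes and similarity values are invented for illustration):

```python
import itertools
import random

random.seed(2)

n = 8                                  # C(8, 4) = 70 candidate balanced subsets
truth = frozenset(range(n // 2))

# Noisy similarities: high within the two true clusters, low across.
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        base = 0.9 if ((i in truth) == (j in truth)) else 0.1
        W[i][j] = W[j][i] = base + random.gauss(0.0, 0.05)

def cut_weight(S):
    """Total similarity crossing the cut (S, complement of S)."""
    rest = [j for j in range(n) if j not in S]
    return sum(W[i][j] for i in S for j in rest)

# Minimum-weight cut among subsets of size exactly n/2.
best = min(itertools.combinations(range(n), n // 2), key=cut_weight)
print(set(best) in (set(truth), set(range(n)) - set(truth)))
```

With this large a gap between within- and between-cluster similarities, the planted cut's weight is well below that of every other balanced cut, so the exhaustive search returns the true clusters.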
5 Experiments
We evaluate our algorithms and theoretical guarantees on simulated matrices, synthetic phylogenies,
and finally on two real biological datasets. Our experiments focus on the effect of noise on spectral
clustering in comparison with agglomerative methods such as single, average, and complete linkage.
5.1 Threshold Behavior
One of our primary interests is to empirically validate the relation between the scale factor σ and
the sample size n derived in our theorems. For a range of scale factors and noisy HBMs of varying
size, we empirically compute the probability with which spectral clustering recovers the first split
of the hierarchy. From the probability of success curves (Figure 3(a)), we can conclude that spectral
clustering can tolerate noise that grows with the size of the clusters.
We further verify the dependence between σ and n for recovering the first split. For the first split we observe that when we rescale the x-axis of the curves in Figure 3(a) by √(log(n)/n), the curves line up for different n. This shows that empirically, at least for the first split, spectral clustering appears to achieve the minimax rate for the problem.
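This threshold experiment can be reproduced in miniature: estimate the probability of recovering the first split from the sign of the second Laplacian eigenvector at a small and a large noise level. All sizes, similarity values, noise levels, and trial counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def first_split_recovered(n, sigma):
    """One trial: does the sign of the 2nd eigenvector split the planted clusters?"""
    A = np.full((n, n), 0.2)
    A[:n // 2, :n // 2] = 0.8
    A[n // 2:, n // 2:] = 0.8
    R = rng.normal(0.0, sigma, (n, n))
    W = A + (R + R.T) / 2
    L = np.diag(W.sum(axis=1)) - W
    s = np.linalg.eigh(L)[1][:, 1] > 0
    return (s[:n // 2] == s[0]).all() and (s[n // 2:] == s[-1]).all() \
        and s[0] != s[-1]

trials = 20
for sigma in (0.1, 5.0):
    rate = np.mean([first_split_recovered(64, sigma) for _ in range(trials)])
    print(f"sigma={sigma}: success rate {rate:.2f}")
```

Sweeping sigma more finely and repeating for several n would reproduce the threshold curves of Figure 3(a).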
5.2 Simulations
We compare spectral clustering to several agglomerative methods on two forms of synthetic data:
noisy HBMs and simulated phylogenetic data. In these simulations, we exploit knowledge of the
true reference tree to quantitatively evaluate each algorithm's output as the fraction of triplets of
leaves for which the most similar pair in the output tree matches that of the reference tree. One can
verify that a tree has a score of 1 if and only if it is identical to the reference tree.
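The triplets score can be made concrete for binary trees encoded as nested tuples: for each triple of leaves, compare which pair joins deepest in the output tree versus the reference tree. The encoding and example trees below are invented for illustration:

```python
from itertools import combinations

def leaves(t):
    return {t} if not isinstance(t, tuple) else leaves(t[0]) | leaves(t[1])

def closest_pair(t, triple):
    """Of three leaves, return the pair that joins deepest in binary tree t."""
    while True:
        for child in t:
            inside = leaves(child) & triple
            if len(inside) == 3:
                t = child            # all three live in this child: descend
                break
            if len(inside) == 2:
                return frozenset(inside)

def triplet_score(output, reference):
    trips = [frozenset(c) for c in combinations(sorted(leaves(reference)), 3)]
    hits = sum(closest_pair(output, t) == closest_pair(reference, t)
               for t in trips)
    return hits / len(trips)

ref = ((("a", "b"), "c"), ("d", "e"))
same = ((("a", "b"), "c"), ("d", "e"))
swapped = ((("a", "c"), "b"), ("d", "e"))
print(triplet_score(same, ref))     # 1.0
print(triplet_score(swapped, ref))  # 0.9: only the {a, b, c} triple disagrees
```

A score of 1 is attained exactly when every triple resolves identically, i.e. when the trees agree.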
Initially, we explore how HS compares to agglomerative algorithms on large noisy HBMs. In Figure 3(c), we compare performance, as measured by the triplets metric, of four clustering algorithms
(HS, and single, average, and complete linkage) with n = 512 and m = 9. We also evaluate HS and single linkage as applied to reconstructing phylogenetic trees from genetic sequences. In Figure 3(d), we plot accuracy, again measured using the triplets metric, of the two algorithms as a function of sequence length (for sequences generated from the phyclust R package [3]), which is inversely correlated with noise (i.e. short sequences amount to noisy similarities). From these experiments, it is clear that HS consistently outperforms agglomerative methods, with tremendous improvements in the high-noise setting, where it recovers a significant amount of the tree structure while agglomerative methods do not.

Figure 4: Experiments with real-world data. (a): Heatmaps of single linkage (left) and HS (right) on gene expression data with n = 2048. (b): Δ-entropy scores on real-world data sets.
5.3 Real-World Data
We apply hierarchical clustering methods to a yeast gene expression data set and one phylogenetic
data set from the PFAM database [5]. To evaluate our methods, we use a Δ-entropy metric defined as follows: Given a permutation π and a similarity matrix W, we compute the rate of decay off of the diagonal as s_d ≜ (1/(n − d)) Σ_{i=1}^{n−d} W_{π(i),π(i+d)}, for d ∈ {1, ..., n − 1}. Next, we compute the entropy E(π) ≜ −Σ_{i=1}^{n−1} p_π(i) log p_π(i), where p_π(i) ≜ (Σ_{d=1}^{n−1} s_d)^{−1} s_i. Finally, we compute the Δ-entropy as ΔE(π) ≜ E(π_random) − E(π). A good clustering will have a large amount of the probability mass concentrated at a few of the p_π(i)s, thus yielding a high ΔE(π). On the other hand, poor clusterings will specify a more uniform distribution and will have lower Δ-entropy.
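This metric — the off-diagonal decay profile s_d, its normalized entropy, and the difference from a random ordering — can be sketched as follows. This is based on our reading of the definition; the toy matrix, the trial count, and the Monte-Carlo average standing in for E(π_random) are all invented:

```python
import math
import random

random.seed(3)

def entropy_of(W, perm):
    """Entropy of the off-diagonal decay profile s_d under ordering perm."""
    n = len(W)
    s = [sum(W[perm[i]][perm[i + d]] for i in range(n - d)) / (n - d)
         for d in range(1, n)]
    z = sum(s)
    return -sum((v / z) * math.log(v / z) for v in s if v > 0)

def delta_entropy(W, perm, trials=20):
    """Average entropy over random orderings minus entropy of perm."""
    n = len(W)
    rand = []
    for _ in range(trials):
        q = list(range(n))
        random.shuffle(q)
        rand.append(entropy_of(W, q))
    return sum(rand) / trials - entropy_of(W, perm)

# Two tight blocks: the identity ordering keeps clusters contiguous, so the
# similarity profile decays sharply off the diagonal (low entropy).
n = 12
W = [[0.9 if (i < n // 2) == (j < n // 2) else 0.1 for j in range(n)]
     for i in range(n)]
print(delta_entropy(W, list(range(n))) > 0)   # cluster-respecting order wins
```

A random ordering spreads high and low similarities roughly evenly over all offsets d, giving a near-uniform profile and hence near-maximal entropy, so ΔE is positive for a good ordering.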
We first compare HS to single linkage on yeast gene expression data from DeRisi et al [4]. This
dataset consists of 7 expression profiles, which we use to generate Pearson correlations that serve
as similarities. We sampled gene subsets of size n = 512, 1024, and 2048 and ran both algorithms on
the reduced similarity matrix. We report Δ-entropy scores in Table 4(b). These scores quantitatively demonstrate that HS outperforms single linkage and, additionally, we believe the clustering produced by HS (Figure 4(a)) is qualitatively better than that of single linkage.
Finally, we run HS on real phylogeny data, specifically, a subset of the PDZ domain (PFAM Id:
PF00595). We consider this family because it is a highly studied domain of evolutionarily well-represented protein binding motifs. Using alignments of varying length, we generated similarity matrices and computed the Δ-entropy of clusterings produced by both HS and Single Linkage. The
results for three sequence lengths (Table 4(b)) show that HS and Single Linkage are comparable.
6 Discussion
In this paper we have presented a new analysis of spectral clustering in the presence of noise and
established tight information-theoretic upper and lower bounds. As our analysis of spectral clustering
does not show that it is minimax-optimal it remains an open problem to further tighten, or establish
the tightness of, our analysis, and to find a computationally efficient minimax procedure in the
general case when similarities are not block constant. Identifying conditions under which one can
guarantee correctness for other forms of spectral clustering is another interesting direction. Finally,
our results apply only for binary hierarchical clusterings, yet k-way hierarchies are common in
practice. A future challenge is to extend our results to k-way hierarchies.
7 Acknowledgements
This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant
IIS-1116458. AK is supported in part by a NSF Graduate Research Fellowship. SB would like to
thank Jaime Carbonell and Srivatsan Narayanan for several fruitful discussions.
References
[1] Dimitris Achlioptas and Frank Mcsherry. On spectral learning of mixtures of distributions. In
Computational Learning Theory, pages 458–469, 2005.
[2] S. Charles Brubaker and Santosh Vempala. Isotropic pca and affine-invariant clustering. In
FOCS, pages 551–560, 2008.
[3] Wei-Chen Chen. Phylogenetic Clustering with R package phyclust, 2010.
[4] Joseph L. DeRisi, Vishwanath R. Iyer, and Patrick O. Brown. Exploring the Metabolic and
Genetic Control of Gene Expression on a Genomic Scale. Science, 278(5338):680–686, 1997.
[5] Robert D. Finn, Jaina Mistry, John Tate, Penny Coggill, Andreas Heger, Joanne E. Pollington,
O. Luke Gavin, Prasad Gunesekaran, Goran Ceric, Kristoffer Forslund, Liisa Holm, Erik L.
Sonnhammer, Sean R. Eddy, and Alex Bateman. The Pfam Protein Families Database. Nucleic
Acids Research, 2010.
[6] Dorit S. Hochbaum and David B. Shmoys. A Best Possible Heuristic for the K-Center Problem.
Mathematics of Operations Research, 10:180–184, 1985.
[7] Ling Huang, Donghui Yan, Michael I. Jordan, and Nina Taft. Spectral Clustering with Perturbed Data. In Advances in Neural Information Processing Systems, 2009.
[8] Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. The spectral method for general
mixture models. In 18th Annual Conference on Learning Theory (COLT), pages 444–457, 2005.
[9] Frank McSherry. Spectral partitioning of random graphs. In IEEE Symposium on Foundations
of Computer Science, page 529, 2001.
[10] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On Spectral Clustering: Analysis and
an Algorithm. In Advances in Neural Information Processing Systems, pages 849–856. MIT
Press, 2001.
[11] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral Clustering and the High-Dimensional
Stochastic Block Model. Technical Report 791, Statistics Department, UC Berkeley, 2010.
[12] Sajama and Alon Orlitsky. Estimating and Computing Density Based Distance Metrics. In
ICML05, 22nd International Conference on Machine Learning, 2005.
[13] Dan Spielman. Lecture Notes on Spectral Graph Theory, 2009.
[14] Terence Tao. Course notes on random Matrix Theory, 2010.
[15] Alexandre B. Tsybakov. Introduction à l'estimation non-paramétrique. Springer, 2004.
[16] Ulrike von Luxburg. A Tutorial on Spectral Clustering. Technical Report 149, Max Planck
Institute for Biological Cybernetics, August 2006.
[17] Ulrike von Luxburg, Mikhail Belkin, and Olivier Bousquet. Consistency of Spectral Clustering. In The Annals of Statistics, pages 857–864. MIT Press, 2004.
The Fast Convergence of Boosting
Matus Telgarsky
Department of Computer Science and Engineering
University of California, San Diego
9500 Gilman Drive, La Jolla, CA 92093-0404
[email protected]
Abstract
This manuscript considers the convergence rate of boosting under a large class of
losses, including the exponential and logistic losses, where the best previous rate
of convergence was O(exp(1/ε²)). First, it is established that the setting of weak learnability aids the entire class, granting a rate O(ln(1/ε)). Next, the (disjoint)
conditions under which the infimal empirical risk is attainable are characterized
in terms of the sample and weak learning class, and a new proof is given for the
known rate O(ln(1/?)). Finally, it is established that any instance can be decomposed into two smaller instances resembling the two preceding special cases,
yielding a rate O(1/?), with a matching lower bound for the logistic loss. The
principal technical hurdle throughout this work is the potential unattainability of
the infimal empirical risk; the technique for overcoming this barrier may be of
general interest.
1 Introduction
Boosting is the task of converting inaccurate weak learners into a single accurate predictor. The
existence of any such method was unknown until the breakthrough result of Schapire [1]: under
a weak learning assumption, it is possible to combine many carefully chosen weak learners into a
majority of majorities with arbitrarily low training error. Soon after, Freund [2] noted that a single
majority is enough, and that O(ln(1/ε)) iterations are both necessary and sufficient to attain accuracy ε. Finally, their combined effort produced AdaBoost, which attains the optimal convergence
rate (under the weak learning assumption), and has an astonishingly simple implementation [3].
It was eventually revealed that AdaBoost was minimizing a risk functional, specifically the exponential loss [4]. Aiming to alleviate perceived deficiencies in the algorithm, other loss functions
were proposed, foremost amongst these being the logistic loss [5]. Given the wide practical success of boosting with the logistic loss, it is perhaps surprising that no convergence rate better than
O(exp(1/ε²)) was known, even under the weak learning assumption [6]. The reason for this deficiency is simple: unlike SVM, least squares, and basically any other optimization problem considered in machine learning, there might not exist a choice which attains the minimal risk! This
reliance is carried over from convex optimization, where the assumption of attainability is generally
made, either directly, or through stronger conditions like compact level sets or strong convexity [7].
Convergence rate analysis provides a valuable mechanism to compare and improve minimization algorithms. But there is a deeper significance with boosting: a convergence rate of O(ln(1/ε)) means that, with a combination of just O(ln(1/ε)) predictors, one can construct an ε-optimal classifier, which is crucial to both the computational efficiency and statistical stability of this predictor.
The contribution of this manuscript is to provide a tight convergence theory for a large class of
losses, including the exponential and logistic losses, which has heretofore resisted analysis. The goal
is a general analysis without any assumptions (attainability of the minimum, or weak learnability),
however this manuscript also demonstrates how the classically understood scenarios of attainability
and weak learnability can be understood directly from the sample and the weak learning class.
The organization is as follows. Section 2 provides a few pieces of background: how to encode the
weak learning class and sample as a matrix, boosting as coordinate descent, and the primal objective
function. Section 3 then gives the dual problem, max entropy. Given these tools, section 4 shows
how to adjust the weak learning rate to a quantity which is useful without any assumptions. The first
step towards convergence rates is then taken in section 5, which demonstrates that the weak learning
rate is in fact a mechanism to convert between the primal and dual problems.
The convergence rates then follow: section 6 and section 7 discuss, respectively, the conditions
under which classical weak learnability and (disjointly) attainability hold, both yielding the rate
O(ln(1/ε)), and finally section 8 shows how the general case may be decomposed into these two, and the conflicting optimization behavior leads to a degraded rate of O(1/ε). The last section will also exhibit an Ω(1/ε) lower bound for the logistic loss.
1.1 Related Work
The development of general convergence rates has a number of important milestones in the past
decade. The first convergence result, albeit without any rates, is due to Collins et al. [8]; the work
considered the improvement due to a single step, and as its update rule was less aggressive than the
line search of boosting, it appears to imply general convergence. Next, Bickel et al. [6] showed a
rate of O(exp(1/ε²)), where the assumptions of bounded second derivatives on compact sets are
also necessary here.
Many extremely important cases have also been handled. The first is the original rate of O(ln(1/ε)) for the exponential loss under the weak learning assumption [3]. Next, Rätsch et al. [9] showed, for a class of losses similar to those considered here, a rate of O(ln(1/ε)) when the loss minimizer is
attainable. The current manuscript provides another mechanism to analyze this case (with the same
rate), which is crucial to being able to produce a general analysis. And, very recently, parallel to this
work, Mukherjee et al. [10] established the general convergence under the exponential loss, with a
rate of Ω(1/ε). The same matrix, due to Schapire [11], was used to show the lower bound there as
for the logistic loss here; their upper bound proof also utilized a decomposition theorem.
It is interesting to mention that, for many variants of boosting, general convergence rates were
known. Specifically, once it was revealed that boosting is trying to be not only correct but also
have large margins [12], much work was invested into methods which explicitly maximized the
margin [13], or penalized variants focused on the inseparable case [14, 15]. These methods generally
impose some form of regularization [15], which grants attainability of the risk minimizer, and allows
standard techniques to grant general convergence rates. Interestingly, the guarantees in those works
cited in this paragraph are O(1/ε²).
2
Setup
A view of boosting, which pervades this manuscript, is that the action of the weak learning class
upon the sample can be encoded as a matrix [9, 15]. Let a sample S := {(x_i, y_i)}_{i=1}^m ⊆ (X × Y)^m
and a weak learning class H be given. For every h ∈ H, let S|h denote the projection onto S
induced by h; that is, S|h is a vector of length m, with coordinates (S|h)_i = y_i h(x_i). If the set
of all such columns {S|h : h ∈ H} is finite, collect them into the matrix A ∈ R^{m×n}. Let a_i
denote the ith row of A, corresponding to the example (x_i, y_i), and let {h_j}_{j=1}^n index the set of weak
learners corresponding to columns of A. It is assumed, for convenience, that entries of A are within
[−1, +1]; relaxing this assumption merely scales the presented rates by a constant.
The setting considered in this manuscript is that this finite matrix can be constructed. Note that this
can encode infinite classes, so long as they map to only k < ∞ values (in which case A has at most
k^m columns). As another example, if the weak learners are binary, and H has VC dimension d,
then Sauer's lemma grants that A has at most (m + 1)^d columns. This matrix view of boosting is
thus similar to the interpretation of boosting performing descent on functional space, but the class
complexity and finite sample have been used to reduce the function class to a finite object [16, 5].
Routine BOOST.
Input Convex function f ∘ A.
Output Approximate primal optimum λ.
1. Initialize λ_0 := 0_n.
2. For t = 1, 2, . . ., while ∇(f ∘ A)(λ_{t−1}) ≠ 0_n:
(a) Choose column j_t := argmax_j |∇(f ∘ A)(λ_{t−1})^T e_j|.
(b) Line search: α_t approximately minimizes α ↦ (f ∘ A)(λ_{t−1} + α e_{j_t}).
(c) Update λ_t := λ_{t−1} + α_t e_{j_t}.
3. Return λ_{t−1}.
Figure 1: ℓ1 steepest descent [17, Algorithm 9.4] applied to f ∘ A.
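The routine can be sketched as executable code. This is a minimal illustration, not the paper's implementation: the approximate line search of step 2(b) is replaced by a naive grid search, and the loss g and its derivative are passed in as vectorized callables.

```python
import numpy as np

def boost(A, g, gprime, T=100, alphas=np.linspace(0.0, 3.0, 301)):
    """Sketch of Figure 1: l1 steepest descent on lam -> sum_i g((A @ lam)_i).

    The approximate line search of step 2(b) is replaced by a grid search,
    purely for illustration.
    """
    m, n = A.shape
    lam = np.zeros(n)
    for _ in range(T):
        grad = A.T @ gprime(A @ lam)        # gradient of (f o A) at lam
        j = int(np.argmax(np.abs(grad)))    # step 2(a): steepest coordinate
        if grad[j] == 0.0:                  # stopping condition of step 2
            break
        steps = -np.sign(grad[j]) * alphas
        cand = lam + np.outer(steps, np.eye(n)[j])
        vals = g(cand @ A.T).sum(axis=1)    # f(A lam') for each candidate step
        lam = cand[int(np.argmin(vals))]    # steps 2(b)-(c)
    return lam

# Exponential loss on an invented separable toy matrix: all margins end positive.
A = np.array([[1.0, 0.2], [-0.2, 1.0]])
lam = boost(A, lambda z: np.exp(-z), lambda z: -np.exp(-z), T=100)
print(A @ lam)  # strictly positive margins; the empirical risk is driven near 0
```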
To make the connection to boosting, the missing ingredient is the loss function. Let G0 denote the
set of loss functions g satisfying: g is twice continuously differentiable, g'' > 0 (which implies
strict convexity), and lim_{x→∞} g(x) = 0. (A few more conditions will be added in section 5 to prove
convergence rates, but these properties suffice for the current exposition.) Crucially, the exponential
loss exp(−x) from AdaBoost and the logistic loss ln(1 + exp(−x)) are in G0 (and the eventual G).
Boosting determines some weighting λ ∈ R^n of the columns of A, which correspond to weak
learners in H. The (unnormalized) margin of example i is thus ⟨a_i, λ⟩ = e_i^T Aλ, where e_i is an
indicator vector. Since the prediction on x_i is 1[⟨a_i, λ⟩ ≥ 0], it follows that Aλ > 0_m (where 0_m is
the zero vector) implies a training error of zero. As such, boosting solves the minimization problem

inf_{λ∈R^n} Σ_{i=1}^m g(⟨a_i, λ⟩) = inf_{λ∈R^n} Σ_{i=1}^m g(e_i^T Aλ) = inf_{λ∈R^n} f(Aλ) = inf_{λ∈R^n} (f ∘ A)(λ) =: f̄_A,   (2.1)

where f : R^m → R is the convenience function f(x) = Σ_i g((x)_i), and in the present problem
denotes the (unnormalized) empirical risk. f̄_A will denote the optimal objective value.
The infimum in eq. (2.1) may well not be attainable. Suppose there exists λ' such that Aλ' > 0_m
(theorem 6.1 will show that this is equivalent to the weak learning assumption). Then

0 ≤ inf_{λ∈R^n} f(Aλ) ≤ inf{f(Aλ) : λ = cλ', c > 0} = inf_{c>0} f(c(Aλ')) = 0.

On the other hand, for any λ ∈ R^n, f(Aλ) > 0. Thus the infimum is never attainable when weak
learnability holds.
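This argument is easy to observe numerically. The toy matrix and names below are invented for illustration: scaling a separating λ' drives the exponential-loss risk toward 0, while every value along the way stays strictly positive.

```python
import numpy as np

# Under separability (A @ lam_sep > 0), scaling lam = c * lam_sep drives the
# empirical risk to 0 without ever attaining it.  Toy instance, exponential loss.
A = np.array([[1.0, 0.2], [-0.2, 1.0]])
lam_sep = np.array([1.0, 1.0])            # A @ lam_sep = (1.2, 0.8) > 0
f = lambda z: np.exp(-z).sum()
risks = [f(A @ (c * lam_sep)) for c in (1.0, 10.0, 100.0)]
print(risks)  # strictly decreasing toward 0, but every value is > 0
```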
The template boosting algorithm appears in fig. 1, formulated in terms of f ∘ A to make the connection to coordinate descent as clear as possible. To interpret the gradient terms, note that

(∇(f ∘ A)(λ))_j = (A^T ∇f(Aλ))_j = Σ_{i=1}^m g'(⟨a_i, λ⟩) h_j(x_i) y_i,

which is the expected correlation of h_j with the target labels according to an unnormalized distribution with weights −g'(⟨a_i, λ⟩). The stopping condition ∇(f ∘ A)(λ) = 0_n means: either the
distribution is degenerate (it is exactly zero), or every weak learner is uncorrelated with the target.
As such, eq. (2.1) represents an equivalent formulation of boosting, with one minor modification:
the column (weak learner) selection has an absolute value. But note that this is the same as closing
H under complementation (i.e., for any h ∈ H, there exists h' with h'(x) = −h(x)), which is
assumed in many theoretical treatments of boosting.
In the case of the exponential loss with binary weak learners, the line search step has a convenient closed form; but for other losses, or even for the exponential loss but with confidence-rated
predictors, there may not be a closed form. Moreover, this univariate search problem may lack a
minimizer. To produce the eventual convergence rates, this manuscript utilizes a step size minimizing an upper bounding quadratic (which is guaranteed to exist); if instead a standard iterative line
search guarantee were used, rates would only degrade by a constant factor [17, section 9.3.1].
As a final remark, consider the rows {a_i}_{i=1}^m of A as a collection of m points in R^n. Due to the form
of g, BOOST is therefore searching for a halfspace, parameterized by a vector λ, which contains
all of the points. Sometimes such a halfspace may not exist, and g applies a smoothly increasing
penalty to points that are farther and farther outside it.
3
Dual Problem
This section provides the convex dual to eq. (2.1). The relevance of the dual to convergence rates is
as follows. First, although the primal optimum may not be attainable, the dual optimum is always
attainable; this suggests a strategy of mapping the convergence strategy to the dual, where there
exists a clear notion of progress to the optimum. Second, this section determines the dual feasible
set: the space of dual variables, or what the boosting literature typically calls unnormalized weights.
Understanding this set is key to relating weak learnability, attainability, and general instances.
Before proceeding, note that the dual formulation will make use of the Fenchel conjugate h*(φ) =
sup_{x∈dom(h)} ⟨x, φ⟩ − h(x), a concept taking a central place in convex analysis [18, 19]. Interestingly, the Fenchel conjugates to the exponential and logistic losses are respectively the Boltzmann-Shannon and Fermi-Dirac entropies [19, Commentary, section 3.3], and thus the dual is explicitly
performing entropy maximization (cf. lemma C.2). As a final piece of notation, denote the kernel of
a matrix B ∈ R^{m×n} by Ker(B) = {v ∈ R^n : Bv = 0_m}.
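For concreteness, the conjugate term appearing in the dual below can be computed in closed form for the exponential loss g(x) = e^{−x}; this is a worked derivation included here for illustration, not text from the paper:

```latex
g^*(-\phi) \;=\; \sup_{x \in \mathbb{R}} \big( -\phi x - e^{-x} \big)
\;=\; \phi \ln \phi - \phi \qquad (\phi > 0),
```

with the supremum attained at x = −ln φ; additionally g*(0) = 0 and g*(−φ) = +∞ for φ < 0. Hence the dual objective −f*(−φ) = Σ_i (φ_i − φ_i ln φ_i) is exactly the Boltzmann-Shannon entropy, and the nonnegativity constraint φ ∈ R^m_+ emerges from the domain of the conjugate.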
Theorem 3.1. For any A ∈ R^{m×n} and g ∈ G0 with f(x) = Σ_i g((x)_i),

inf{f(Aλ) : λ ∈ R^n} = sup{−f*(−φ) : φ ∈ Φ_A},   (3.2)

where Φ_A := Ker(A^T) ∩ R^m_+ is the dual feasible set. The dual optimum ψ_A is unique and attainable.
Lastly, f*(φ) = Σ_{i=1}^m g*((φ)_i).
The dual feasible set Φ_A = Ker(A^T) ∩ R^m_+ has a strong interpretation. Suppose φ ∈ Φ_A; then
φ is a nonnegative vector (since φ ∈ R^m_+), and, for any j, 0 = (φ^T A)_j = Σ_{i=1}^m φ_i y_i h_j(x_i). That
is to say, every nonzero feasible dual vector provides an (unnormalized) distribution upon which
every weak learner is uncorrelated! Furthermore, recall that the weak learning assumption states that
under any weighting of the input, there exists a correlated weak learner; as such, weak learnability
necessitates that the dual feasible set contains only the zero vector.
There is also a geometric interpretation. Ignoring the constraint, f* attains its maximum at some
rescaling of the uniform distribution (for details, please see lemma C.2). As such, the constrained
dual problem is aiming to write the origin as a high entropy convex combination of the points {a_i}_{i=1}^m.
4
A Generalized Weak Learning Rate
The weak learning rate was critical to the original convergence analysis of AdaBoost, providing a
handle on the progress of the algorithm. Recall that the quantity appeared in the denominator of the
convergence rate, and a weak learning assumption critically provided that this quantity is nonzero.
This section will generalize the weak learning rate to a quantity which is always positive, without
any assumptions.
Note briefly that this manuscript will differ slightly from the norm in that weak learning will be a
purely sample-specific concept. That is, the concern here is convergence, and all that matters is the
sample S = {(x_i, y_i)}_{i=1}^m, as encoded in A; it doesn't matter if there are wild points outside this
sample, because the algorithm has no access to them.
This distinction has the following implication. The usual weak learning assumption states that there
exists no uncorrelating distribution over the input space. This of course implies that any training
sample S used by the algorithm will also have this property; however, it suffices that there is no
distribution over the input sample S which uncorrelates the weak learners from the target.
Returning to task, the weak learning assumption posits the existence of a constant, the weak learning
rate γ, which lower bounds the correlation of the best weak learner with the target for any distribution. Stated in terms of the matrix A,

0 < γ = inf_{φ∈R^m_+, ‖φ‖_1=1} max_{j∈[n]} Σ_{i=1}^m (φ)_i y_i h_j(x_i) = inf_{φ∈R^m_+\{0_m}} ‖A^T φ‖_∞ / ‖φ‖_1 = inf_{φ∈R^m_+\{0_m}} ‖A^T φ‖_∞ / ‖φ − 0_m‖_1.   (4.1)
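For a small instance, the right-hand side of (4.1) can be evaluated by brute force over the simplex. The matrix below is an invented toy example with m = 2, so the simplex is one-dimensional; in general this computation is a linear program.

```python
import numpy as np

# Brute-force evaluation of the classical weak learning rate (4.1) for a toy
# 2-example matrix: gamma = min over distributions phi of max_j |(A^T phi)_j|.
A = np.array([[1.0, 0.2], [-0.2, 1.0]])
p = np.linspace(0.0, 1.0, 100001)
phis = np.stack([p, 1.0 - p], axis=1)          # all distributions on 2 points
gamma = np.min(np.max(np.abs(phis @ A), axis=1))
print(round(gamma, 3))  # positive, so weak learnability holds for this A
```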
The only way this quantity can be positive is if no φ ∈ R^m_+ \ {0_m} lies in Ker(A^T), meaning the dual
feasible set Φ_A = Ker(A^T) ∩ R^m_+ is exactly {0_m}. As such, one candidate adjustment is to simply replace {0_m} with the
dual feasible set:

γ' := inf_{φ∈R^m_+\Φ_A} ‖A^T φ‖_∞ / inf_{ψ∈Φ_A} ‖φ − ψ‖_1.

Indeed, by the forthcoming proposition 4.3, γ' > 0 as desired. Due to technical considerations
which will be postponed until the various convergence rates, it is necessary to tighten this definition
with another set.
Definition 4.2. For a given matrix A ∈ R^{m×n} and set S ⊆ R^m, define

γ(A, S) := inf{ ‖A^T φ‖_∞ / inf_{ψ∈S∩Ker(A^T)} ‖φ − ψ‖_1 : φ ∈ S \ Ker(A^T) }.

Crucially, for the choices of S pertinent here, this quantity is always positive.
Proposition 4.3. Let A ≠ 0_{m×n} and polyhedron S be given. If S ∩ Ker(A^T) ≠ ∅ and S has
nonempty interior, then γ(A, S) ∈ (0, ∞).
To simplify discussion, the following projection and distance notation will be used in the sequel:

P^p_C(x) ∈ Argmin_{y∈C} ‖y − x‖_p,    D^p_C(x) = ‖x − P^p_C(x)‖_p,

with some arbitrary choice made when the minimizer is not unique.
5
Prelude to Convergence Rates: Three Alternatives
The pieces are in place to finally sketch how the convergence rates may be proved. This section
identifies how the weak learning rate γ(A, S) can be used to convert the standard gradient guarantees
into something which can be used in the presence of no attainable minimum. To close, three basic
optimization scenarios are identified, which lead to the following three sections on convergence
rates. But first, it is a good time to define the final loss function class.
Definition 5.1. Every g ∈ G satisfies the following properties. First, g ∈ G0. Next, for any x ∈ R^m
satisfying f(x) ≤ f(Aλ_0), and for any coordinate (x)_i, there exist constants η > 0 and β > 0 such
that g''((x)_i) ≤ η g((x)_i) and g((x)_i) ≤ −β g'((x)_i).
The exponential loss is in this class with η = β = 1 since exp(·) is a fixed point with respect to
the differentiation operator. Furthermore, as is verified in remark F.1 of the full version, the logistic
loss is also in this class, with η = 2^m/(m ln(2)) and β ≤ 1 + 2^m. Intuitively, η and β encode
how similar some g ∈ G is to the exponential loss, and thus these parameters can degrade radically.
However, outside the weak learnability case, the other terms in the bounds here will also incur a
penalty of the form e^m for the exponential loss, and there is some evidence that this is unavoidable
(see the lower bounds in Mukherjee et al. [10] or the upper bounds in Rätsch et al. [9]).
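These conditions are easy to check numerically. The sketch below is illustrative only (the grid and the bounds asserted are not from the paper): for the exponential loss the two inequalities hold with equality at η = β = 1, and for the logistic loss the corresponding ratios stay finite on a bounded grid.

```python
import numpy as np

# Exponential loss g(x) = exp(-x): g'' = g and g = -g', so eta = beta = 1.
x = np.linspace(-5.0, 5.0, 1001)
g = np.exp(-x); g1 = -np.exp(-x); g2 = np.exp(-x)
assert np.allclose(g2, 1.0 * g) and np.allclose(g, -1.0 * g1)

# Logistic loss g(x) = ln(1 + exp(-x)): the analogous ratios g''/g and g/(-g')
# are finite on this bounded grid (the paper's constants can scale like 2**m).
gl = np.log1p(np.exp(-x))
gl1 = -1.0 / (1.0 + np.exp(x))
gl2 = np.exp(x) / (1.0 + np.exp(x)) ** 2
print(np.max(gl2 / gl), np.max(gl / -gl1))  # both finite on [-5, 5]
```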
Next, note how the standard guarantee for coordinate descent methods can lead to guarantees on the
progress of the algorithm in terms of dual distances, thanks to γ(A, S).
Proposition 5.2. For any t, A ≠ 0_{m×n}, S ⊇ {−∇f(Aλ_t)} with γ(A, S) > 0, and g ∈ G,

f(Aλ_{t+1}) − f̄_A ≤ f(Aλ_t) − f̄_A − γ(A, S)² D^1_{S∩Ker(A^T)}(−∇f(Aλ_t))² / (2η f(Aλ_t)).

Proof. The stopping condition grants −∇f(Aλ_t) ∉ Ker(A^T). Thus, by definition of γ(A, S),

γ(A, S) = inf_{φ∈S\Ker(A^T)} ‖A^T φ‖_∞ / D^1_{S∩Ker(A^T)}(φ) ≤ ‖A^T ∇f(Aλ_t)‖_∞ / D^1_{S∩Ker(A^T)}(−∇f(Aλ_t)).
(a) Weak learnability. (b) Attainability. (c) General case.
Figure 2: Viewing the rows {a_i}_{i=1}^m of A as points in R^n, boosting seeks a homogeneous halfspace,
parameterized by a normal λ ∈ R^n, which contains all m points. The dual, on the other hand, aims
to express the origin as a high entropy convex combination of the rows. The convergence rate and
dynamics of this process are controlled by Φ_A, which dictates one of the three above scenarios.
Combined with a standard guarantee of coordinate descent progress (cf. lemma F.2),

f(Aλ_t) − f(Aλ_{t+1}) ≥ ‖A^T ∇f(Aλ_t)‖_∞² / (2η f(Aλ_t)) ≥ γ(A, S)² D^1_{S∩Ker(A^T)}(−∇f(Aλ_t))² / (2η f(Aλ_t)).

Subtracting f̄_A from both sides and rearranging yields the statement.
Recall the interpretation of boosting closing section 2: boosting seeks a halfspace, parameterized by
λ ∈ R^n, which contains the points {a_i}_{i=1}^m. Progress onward from proposition 5.2 will be divided
into three cases, each distinguished by the kind of halfspace which boosting can reach.
These cases appear in fig. 2. The first case is weak learnability: positive margins can be attained
on each example, meaning a halfspace exists which strictly contains all points. Boosting races to
push all these margins unboundedly large, and has a convergence rate O(ln(1/ε)). Next is the case
that no halfspace contains the points within its interior: either any such halfspace has the points on
its boundary, or no such halfspace exists at all (the degenerate choice λ = 0_n). This is the case of
attainability: boosting races towards finite margins at the rate O(ln(1/ε)).
The final situation is a mix of the two: there exists a halfspace with some points on the boundary,
some within its interior. Boosting will try to push some margins to infinity, and keep others finite.
These two desires are at odds, and the rate degrades to O(1/ε). Less metaphorically, the analysis
will proceed by decomposing this case into the previous two, applying the above analysis in parallel,
and then stitching the result back together. It is precisely while stitching up that an incompatibility
arises, and the rate degrades. This is no artifact: a lower bound will be shown for the logistic loss.
6
Convergence Rate under Weak Learnability
To start this section, the following result characterizes weak learnability, including the earlier relationship to the dual feasible set (specifically, that it is precisely the origin), and, as analyzed by many
authors, the relationship to separability [1, 9, 15].
Theorem 6.1. For any A ∈ R^{m×n} and g ∈ G the following conditions are equivalent:

∃λ ∈ R^n · Aλ ∈ R^m_{++},   (6.2)
inf_{λ∈R^n} f(Aλ) = 0,   (6.3)
ψ_A = 0_m,   (6.4)
Φ_A = {0_m}.   (6.5)

The equivalence means the presence of any of these properties suffices to indicate weak learnability.
The last two statements encode the usual distributional version of the weak learning assumption.
The first encodes the fact that there exists a homogeneous halfspace containing all points within
its interior; this encodes separability, since removing the factor y_i from the definition of a_i will
place all negative points outside the halfspace. Lastly, the second statement encodes the fact that the
empirical risk approaches zero.
Theorem 6.6. Suppose Aλ' > 0_m and g ∈ G; then γ(A, R^m_+) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ f(Aλ_0) (1 − γ(A, R^m_+)² / (2β²η))^t.

Proof. By theorem 6.1, R^m_+ ∩ Ker(A^T) = Φ_A = {0_m}, which combined with g ≤ −β g' gives

D^1_{Φ_A}(−∇f(Aλ_t)) = inf_{ψ∈Φ_A} ‖−∇f(Aλ_t) − ψ‖_1 = ‖∇f(Aλ_t)‖_1 ≥ f(Aλ_t)/β.

Plugging this and f̄_A = 0 (again by theorem 6.1) along with the polyhedron R^m_+ ⊇ −∇f(R^m) (whereby
γ(A, R^m_+) > 0 by proposition 4.3 since −∇f(Aλ_t) ∈ R^m_+) into proposition 5.2 gives

f(Aλ_{t+1}) ≤ f(Aλ_t) − γ(A, R^m_+)² f(Aλ_t) / (2β²η) = f(Aλ_t) (1 − γ(A, R^m_+)² / (2β²η)),

and recursively applying this inequality yields the result.
Since the present setting is weak learnability, note by (4.1) that the choice of polyhedron R^m_+ grants
that γ(A, R^m_+) is exactly the original weak learning rate. When specialized for the exponential loss
(where η = β = 1), the bound becomes (1 − γ(A, R^m_+)²/2)^t, which exactly recovers the bound of
Schapire and Singer [20], although via different analysis.
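The geometric decay of this bound is easy to observe numerically. The toy separable matrix below is invented for illustration, and the grid line search merely approximates the quadratic-upper-bound step of the theorem.

```python
import numpy as np

# Coordinate descent with the exponential loss on a separable toy matrix:
# the risk decreases monotonically and decays to (numerically) zero.
A = np.array([[1.0, 0.2], [-0.2, 1.0]])   # separable: A @ (1, 1) > 0
lam = np.zeros(2)
risks = []
for _ in range(60):
    grad = A.T @ (-np.exp(-(A @ lam)))                     # gradient of f o A
    j = int(np.argmax(np.abs(grad)))                       # steepest coordinate
    steps = -np.sign(grad[j]) * np.linspace(0.0, 3.0, 301)
    cand = lam + np.outer(steps, np.eye(2)[j])
    vals = np.exp(-(cand @ A.T)).sum(axis=1)
    lam = cand[int(np.argmin(vals))]
    risks.append(float(vals.min()))
print(risks[-1])  # tiny after 60 iterations: geometric decay
```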
In general, solving for t in the expression

ε = (f(Aλ_t) − f̄_A) / (f(Aλ_0) − f̄_A) ≤ (1 − γ(A, R^m_+)² / (2β²η))^t ≤ exp(−t γ(A, R^m_+)² / (2β²η))

reveals that t ≥ (2β²η / γ(A, R^m_+)²) ln(1/ε) iterations suffice to reach error ε. Recall that β and η, in the case of
the logistic loss, have only been bounded by quantities like 2^m. While it is unclear if this analysis
of β and η was tight, note that it is plausible that the logistic loss is slower than the exponential loss
in this scenario, as it works less in initial phases to correct minor margin violations.
7
Convergence Rate under Attainability
Theorem 7.1. For any A ∈ R^{m×n} and g ∈ G, the following conditions are equivalent:

∀λ ∈ R^n · Aλ ∉ R^m_+ \ {0_m},   (7.2)
f ∘ A has minimizers,   (7.3)
ψ_A ∈ R^m_{++},   (7.4)
Φ_A ∩ R^m_{++} ≠ ∅.   (7.5)

Interestingly, as revealed in (7.4) and (7.5), attainability entails that the dual feasible set has fully interior points,
and furthermore that the dual optimum is interior. On the other hand, under weak learnability,
eq. (6.4) provided that the dual optimum has zeros at every coordinate. As will be made clear in
section 8, the primal and dual weights have the following dichotomy: either the margin ⟨a_i, λ⟩ goes
to infinity and (ψ_A)_i goes to zero, or the margin stays finite and (ψ_A)_i goes to some positive value.
Theorem 7.6. Suppose A ≠ 0_{m×n}, g ∈ G, and the infimum of eq. (2.1) is attainable. Then there
exists a (compact) tightest axis-aligned rectangle C containing the initial level set, and f is strongly
convex with modulus c > 0 over C. Finally, γ(A, −∇f(C)) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ (f(0_m) − f̄_A) (1 − c γ(A, −∇f(C))² / (η f(Aλ_0)))^t.

In other words, t ≥ (η f(Aλ_0) / (c γ(A, −∇f(C))²)) ln(1/ε) iterations suffice to reach error ε. The appearance of a
modulus of strong convexity c (i.e., a lower bound on the eigenvalues of the Hessian of f) may seem
surprising, and sketching the proof illuminates its appearance and subsequent function.
When the infimum is attainable, every margin ⟨a_i, λ⟩ converges to some finite value. In fact, they all
remain bounded: (7.2) provides that no halfspace contains all points, so if one margin becomes positive and large, another becomes negative and large, giving a terrible objective
value. But objective values never increase with coordinate descent. To finish the proof, strong convexity (i.e., quadratic
lower bounds in the primal) grants quadratic upper bounds in the dual, which can be used to bound
the dual distance in proposition 5.2, and yield the desired convergence rate. This approach fails
under weak learnability: some primal weights grow unboundedly, all dual weights shrink to zero,
and no compact set contains all margins.
8
General Convergence Rate
The final characterization encodes two principles: the rows of A may be partitioned into two matrices A0 , A+ which respectively satisfy theorem 6.1 and theorem 7.1, and that these two subproblems
affect the optimization problem essentially independently.
Theorem 8.1. Let A_0 ∈ R^{z×n}, A_+ ∈ R^{p×n}, and g ∈ G be given. Set m := z + p, and A ∈ R^{m×n}
to be the matrix obtained by stacking A_0 on top of A_+. The following conditions are equivalent:

(∃λ ∈ R^n · A_0 λ ∈ R^z_{++} ∧ A_+ λ = 0_p) ∧ (∀λ ∈ R^n · A_+ λ ∉ R^p_+ \ {0_p}),   (8.2)
(inf_{λ∈R^n} f(Aλ) = inf_{λ∈R^n} f(A_+ λ)) ∧ (inf_{λ∈R^n} f(A_0 λ) = 0) ∧ f ∘ A_+ has minimizers,   (8.3)
ψ_A = [ψ_{A_0}; ψ_{A_+}] with ψ_{A_0} = 0_z ∧ ψ_{A_+} ∈ R^p_{++},   (8.4)
(Φ_{A_0} = {0_z}) ∧ (Φ_{A_+} ∩ R^p_{++} ≠ ∅) ∧ (Φ_A = Φ_{A_0} × Φ_{A_+}).   (8.5)
To see that any matrix A falls into one of the three scenarios here, fix a loss function g, and recall
from theorem 3.1 that ψ_A is unique. In particular, the set of zero entries in ψ_A exactly specifies
which of the three scenarios hold, the current scenario allowing for simultaneous positive and zero
entries. Although this reasoning made use of ψ_A, note that it is Φ_A which dictates the behavior: in
fact, as is shown in remark I.1 of the full version, the decomposition is unique.
Returning to theorem 8.1, the geometry of fig. 2c is provided by (8.2) and (8.5). The analysis
will start from (8.3), which allows the primal problem to be split into two pieces, which are then
individually handled precisely as in the preceding sections. To finish, (8.5) will allow these pieces
to be stitched together.
Theorem 8.6. Suppose A ≠ 0_{m×n}, g ∈ G, ψ_A ∈ R^m_+ \ (R^m_{++} ∪ {0_m}), and adopt the notation from
theorem 8.1. Set w := sup_t ‖∇f(A_+ λ_t) + P^1_{Φ_{A_+}}(−∇f(A_+ λ_t))‖_1. Then w < ∞, and there exists
a tightest cube C_+ so that C_+ ⊇ {x ∈ R^p : f(x) ≤ f(Aλ_0)}, and let c > 0 be the modulus of
strong convexity of f over C_+. Then γ(A, R^z_+ × −∇f(C_+)) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ 2f(Aλ_0) / ((t + 1) min{1, γ(A, R^z_+ × −∇f(C_+))² / ((β + w/(2c))² η)}).

(In the case of the logistic loss, w ≤ sup_{x∈R^m} ‖∇f(x)‖_1 ≤ m.)
As discussed previously, the bounds deteriorate to O(1/ε) because the finite and infinite margins
sought by the two pieces A_0, A_+ are in conflict. For a beautifully simple, concrete case of this,
consider the following matrix, due to Schapire [11]:

S := [ −1  +1
       +1  −1
       +1  +1 ].

The optimal solution here is to push both coordinates of λ unboundedly positive, with margins
approaching (0, 0, ∞). But pushing any coordinate λ_i too quickly will increase the objective value,
rather than decreasing it. In fact, this instance will provide a lower bound, and the mechanism of the
proof shows that the primal weights grow extremely slowly, as O(ln(t)).
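This slow convergence can be observed directly. The sketch below is illustrative (a grid line search stands in for the exact line search of the theorem): it runs coordinate descent with the logistic loss on S, whose optimal value f̄_S = 2 ln 2 is obtained by letting λ_1 = λ_2 → ∞.

```python
import numpy as np

S = np.array([[-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
f = lambda lam: np.log1p(np.exp(-(S @ lam))).sum()   # logistic empirical risk
f_bar = 2.0 * np.log(2.0)                            # inf over lam of f(S lam)

lam, T = np.zeros(2), 200
for t in range(T):
    grad = S.T @ (-1.0 / (1.0 + np.exp(S @ lam)))    # gradient of f o S
    j = int(np.argmax(np.abs(grad)))
    steps = -np.sign(grad[j]) * np.linspace(0.0, 3.0, 601)
    cand = lam + np.outer(steps, np.eye(2)[j])
    vals = np.log1p(np.exp(-(cand @ S.T))).sum(axis=1)
    lam = cand[int(np.argmin(vals))]

gap = f(lam) - f_bar
print(gap)  # still well above 0 after 200 steps: Theta(1/t), not geometric
```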
Theorem 8.7. Using the logistic loss and exact line search, for any t ≥ 1, f(Sλ_t) − f̄_S ≥ 1/(8t).
Acknowledgement
The author thanks Sanjoy Dasgupta, Daniel Hsu, Indraneel Mukherjee, and Robert Schapire for
valuable conversations. The NSF supported this work under grants IIS-0713540 and IIS-0812598.
References
[1] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, July 1990.
[2] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, 1995.
[3] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997.
[4] Leo Breiman. Prediction games and arcing algorithms. Neural Computation, 11:1493-1517, October 1999.
[5] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28, 1998.
[6] Peter J. Bickel, Yaacov Ritov, and Alon Zakai. Some theory for generalized boosting algorithms. Journal of Machine Learning Research, 7:705-732, 2006.
[7] Z. Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72:7-35, 1992.
[8] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1-3):253-285, 2002.
[9] Gunnar Rätsch, Sebastian Mika, and Manfred K. Warmuth. On the convergence of leveraging. In NIPS, pages 487-494, 2001.
[10] Indraneel Mukherjee, Cynthia Rudin, and Robert Schapire. The convergence rate of AdaBoost. In COLT, 2011.
[11] Robert E. Schapire. The convergence rate of AdaBoost. In COLT, 2010.
[12] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML, pages 322-330, 1997.
[13] Gunnar Rätsch and Manfred K. Warmuth. Maximizing the margin with boosting. In COLT, pages 334-350, 2002.
[14] Manfred K. Warmuth, Karen A. Glocer, and Gunnar Rätsch. Boosting algorithms for maximizing the soft margin. In NIPS, 2007.
[15] Shai Shalev-Shwartz and Yoram Singer. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. In COLT, pages 311-322, 2008.
[16] Llew Mason, Jonathan Baxter, Peter L. Bartlett, and Marcus R. Frean. Functional gradient techniques for combining hypotheses. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 221-246, Cambridge, MA, 2000. MIT Press.
[17] Stephen P. Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[18] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of Convex Analysis. Springer Publishing Company, Incorporated, 2001.
[19] Jonathan Borwein and Adrian Lewis. Convex Analysis and Nonlinear Optimization. Springer Publishing Company, Incorporated, 2000.
[20] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
[21] George B. Dantzig and Mukund N. Thapa. Linear Programming 2: Theory and Extensions. Springer, 2003.
[22] Adi Ben-Israel. Motzkin's transposition theorem, and the related theorems of Farkas, Gordan and Stiemke. In M. Hazewinkel, editor, Encyclopaedia of Mathematics, Supplement III. 2002.
Global Solution of Fully-Observed
Variational Bayesian Matrix Factorization
is Column-Wise Independent
Shinichi Nakajima
Nikon Corporation
Tokyo, 140-8601, Japan
[email protected]
Masashi Sugiyama
Tokyo Institute of Technology
Tokyo 152-8552, Japan
[email protected]
Derin Babacan
University of Illinois at Urbana-Champaign
Urbana, IL 61801, USA
[email protected]
Abstract
Variational Bayesian matrix factorization (VBMF) efficiently approximates the
posterior distribution of factorized matrices by assuming matrix-wise independence of the two factors. A recent study on fully-observed VBMF showed that,
under a stronger assumption that the two factorized matrices are column-wise independent, the global optimal solution can be analytically computed. However,
it was not clear how restrictive the column-wise independence assumption is. In
this paper, we prove that the global solution under matrix-wise independence is
actually column-wise independent, implying that the column-wise independence
assumption is harmless. A practical consequence of our theoretical finding is that
the global solution under matrix-wise independence (which is a standard setup)
can be obtained analytically in a computationally very efficient way without any
iterative algorithms. We experimentally illustrate advantages of using our analytic
solution in probabilistic principal component analysis.
1 Introduction
The goal of matrix factorization (MF) is to approximate an observed matrix by a low-rank one. In
this paper, we consider fully-observed MF where the observed matrix has no missing entry¹. This
formulation includes classical multivariate analysis techniques based on singular-value decomposition such as principal component analysis (PCA) [9] and canonical correlation analysis [10].
In the framework of probabilistic MF [20, 17, 19], posterior distributions of factorized matrices are
considered. Since exact inference is computationally intractable, the Laplace approximation [3],
the Markov chain Monte Carlo sampling [3, 18], and the variational Bayesian (VB) approximation
[4, 13, 16, 15] were used for approximate inference in practice. Among them, the VB approximation
seems to be a popular choice due to its high accuracy and computational efficiency.
In the original VBMF [4, 13], factored matrices are assumed to be matrix-wise independent, and a
local optimal solution is computed by an iterative algorithm. A simplified variant of VBMF (simpleVBMF) was also proposed [16], which assumes a stronger constraint that the factored matrices
are column-wise independent. A notable advantage of simpleVBMF is that the global optimal solution can be computed analytically in a computationally very efficient way [15].

¹This excludes the collaborative filtering setup, which is aimed at imputing missing entries of an observed
matrix [12, 7].
Intuitively, it is suspected that simpleVBMF only possesses weaker approximation ability due to its
stronger column-wise independence assumption. However, it was reported that no clear performance
degradation was observed in experiments [14]. Thus, simpleVBMF would be a practically useful
approach. Nevertheless, the influence of the stronger column-wise independence assumption was
not elucidated beyond this empirical evaluation.
The main contribution of this paper is to theoretically show that the column-wise independence
assumption does not degrade the performance. More specifically, we prove that a global optimal
solution of the original VBMF is actually column-wise independent. Thus, a global optimal solution of the original VBMF can be obtained by the analytic-form solution of simpleVBMF; no
computationally-expensive iterative algorithm is necessary. We show the usefulness of the analytic-form solution through experiments on probabilistic PCA.
2 Formulation
In this section, we first formulate the problem of probabilistic MF, and then introduce the VB approximation and its simplified variant.
2.1 Probabilistic Matrix Factorization
The probabilistic MF model is given as follows [19]:
p(Y | A, B) ∝ exp( −(1/(2σ²)) ‖Y − BA⊤‖²_Fro ),  (1)

p(A) ∝ exp( −(1/2) tr( A C_A^{−1} A⊤ ) ),   p(B) ∝ exp( −(1/2) tr( B C_B^{−1} B⊤ ) ),  (2)
where Y ∈ R^{L×M} is an observed matrix, A ∈ R^{M×H} and B ∈ R^{L×H} are parameter matrices to be
estimated, and σ² is the noise variance. Here, we denote by ⊤ the transpose of a matrix or vector, by
‖·‖_Fro the Frobenius norm, and by tr(·) the trace of a matrix. We assume that the prior covariance
matrices C_A and C_B are diagonal and positive definite, i.e.,

C_A = diag(c_{a_1}², …, c_{a_H}²),   C_B = diag(c_{b_1}², …, c_{b_H}²)

for c_{a_h}, c_{b_h} > 0, h = 1, …, H.
Without loss of generality, we assume that the diagonal entries of the product C_A C_B are arranged
in the non-increasing order, i.e., c_{a_h} c_{b_h} ≥ c_{a_{h′}} c_{b_{h′}} for any pair h < h′.
Throughout the paper, we denote a column vector of a matrix by a bold smaller letter, and a row
vector by a bold smaller letter with a tilde, namely,

A = (a_1, …, a_H) = (ã_1, …, ã_M)⊤ ∈ R^{M×H},   B = (b_1, …, b_H) = (b̃_1, …, b̃_L)⊤ ∈ R^{L×H}.
2.2 Variational Bayesian Approximation
The Bayes posterior is written as
p(A, B | Y) = p(Y | A, B) p(A) p(B) / Z(Y),  (3)

where Z(Y) = ⟨p(Y | A, B)⟩_{p(A)p(B)} is the marginal likelihood. Here, ⟨·⟩_p denotes the expectation over the distribution p. Since the Bayes posterior (3) is computationally intractable, the VB
approximation was proposed [4, 13, 16, 15].
Let r(A, B), or r for short, be a trial distribution. The following functional with respect to r is called
the free energy:
F(r | Y) = ⟨ log( r(A, B) / ( p(Y | A, B) p(A) p(B) ) ) ⟩_{r(A,B)} = ⟨ log( r(A, B) / p(A, B | Y) ) ⟩_{r(A,B)} − log Z(Y).  (4)

In the last equation, the first term is the Kullback-Leibler (KL) distance from the trial distribution
to the Bayes posterior, and the second term is a constant. Therefore, minimizing the free energy (4)
amounts to finding the distribution closest to the Bayes posterior in the sense of the KL distance. In
the VB approximation, the free energy (4) is minimized over some restricted function space.
A standard constraint for the MF model is matrix-wise independence [4, 13], i.e.,
r^VB(A, B) = r_A^VB(A) r_B^VB(B).  (5)
This constraint breaks off the entanglement between the parameter matrices A and B, and leads to
a computationally-tractable iterative algorithm. Using the variational method, we can show that,
under the constraint (5), the VB posterior minimizing the free energy (4) is written as
r^VB(A, B) = ∏_{m=1}^{M} N_H(ã_m; ẫ_m, Σ_A) ∏_{l=1}^{L} N_H(b̃_l; b̂̃_l, Σ_B),

where the parameters satisfy

Â = (ẫ_1, …, ẫ_M)⊤ = σ^{−2} Y⊤ B̂ Σ_A,   Σ_A = σ² ( B̂⊤B̂ + LΣ_B + σ² C_A^{−1} )^{−1},  (6)
B̂ = (b̂̃_1, …, b̂̃_L)⊤ = σ^{−2} Y Â Σ_B,   Σ_B = σ² ( Â⊤Â + MΣ_A + σ² C_B^{−1} )^{−1}.  (7)

Here, N_d(·; μ, Σ) denotes the d-dimensional Gaussian distribution with mean μ and covariance matrix Σ. Iteratively updating the parameters Â, Σ_A, B̂, and Σ_B by Eqs.(6) and (7) until convergence
gives a local minimum of the free energy (4).
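The alternating structure of Eqs.(6) and (7) is easiest to see in the rank-1 case (H = 1), where Σ_A and Σ_B reduce to scalars and no matrix inverse is needed. The following sketch (plain Python; names such as `vb_rank1`, `ca2`, `cb2` are our own illustrative choices, not the authors' code) alternates the two updates:

```python
import random

def vb_rank1(Y, sigma2=1.0, ca2=1.0, cb2=1.0, n_iter=100, seed=0):
    """Alternate the VB updates (6)-(7) for the rank-1 case H = 1.

    For H = 1 the posterior covariances Sigma_A and Sigma_B are scalars
    (sa2, sb2), so no matrix inverse is needed.  Y is a list of L rows,
    each with M entries.  All parameter names are illustrative.
    """
    rng = random.Random(seed)
    L, M = len(Y), len(Y[0])
    a = [rng.gauss(0, 1) for _ in range(M)]   # posterior mean of the column of A
    b = [rng.gauss(0, 1) for _ in range(L)]   # posterior mean of the column of B
    sa2, sb2 = 1.0, 1.0
    for _ in range(n_iter):
        # Eq.(6): Sigma_A = sigma^2 (||b||^2 + L*Sigma_B + sigma^2/ca2)^{-1},
        #         a = sigma^{-2} Y^T b Sigma_A
        sa2 = sigma2 / (sum(x * x for x in b) + L * sb2 + sigma2 / ca2)
        a = [(sa2 / sigma2) * sum(Y[l][m] * b[l] for l in range(L)) for m in range(M)]
        # Eq.(7): Sigma_B = sigma^2 (||a||^2 + M*Sigma_A + sigma^2/cb2)^{-1},
        #         b = sigma^{-2} Y a Sigma_B
        sb2 = sigma2 / (sum(x * x for x in a) + M * sa2 + sigma2 / cb2)
        b = [(sb2 / sigma2) * sum(Y[l][m] * a[m] for m in range(M)) for l in range(L)]
    return a, b, sa2, sb2
```

Because each step is an exact coordinate minimization of the free energy (4), a sweep should never increase it; convergence can nevertheless be slow, which motivates the analytic solution derived in Section 3.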
When the noise variance σ² is unknown, it can also be estimated based on the free energy minimization. The update rule for σ² is given by

σ² = ( ‖Y‖²_Fro − 2 tr( Y⊤ B̂ Â⊤ ) + tr( ( Â⊤Â + MΣ_A )( B̂⊤B̂ + LΣ_B ) ) ) / (LM).  (8)

Furthermore, in the empirical Bayesian scenario, the hyperparameters C_A and C_B are also estimated
from data. In this scenario, C_A and C_B are updated in each iteration by the following formulas:

c_{a_h}² = ‖â_h‖²/M + (Σ_A)_{hh},   c_{b_h}² = ‖b̂_h‖²/L + (Σ_B)_{hh}.  (9)
2.3 SimpleVB Approximation
A simplified variant, called the simpleVB approximation, assumes column-wise independence of
each matrix [16, 15], i.e.,

r^simpleVB(A, B) = ∏_{h=1}^{H} r_{a_h}^simpleVB(a_h) ∏_{h=1}^{H} r_{b_h}^simpleVB(b_h).  (10)
This constraint restricts the covariances Σ_A and Σ_B to be diagonal, and thus necessary memory storage and computational cost are substantially reduced [16]. The simpleVB posterior can be written
as

r^simpleVB(A, B) = ∏_{h=1}^{H} N_M(a_h; â_h, σ_{a_h}² I_M) N_L(b_h; b̂_h, σ_{b_h}² I_L),

where the parameters satisfy

â_h = (σ_{a_h}²/σ²) ( Y − ∑_{h′≠h} b̂_{h′} â_{h′}⊤ )⊤ b̂_h,   σ_{a_h}² = σ² ( ‖b̂_h‖² + Lσ_{b_h}² + σ² c_{a_h}^{−2} )^{−1},  (11)
b̂_h = (σ_{b_h}²/σ²) ( Y − ∑_{h′≠h} b̂_{h′} â_{h′}⊤ ) â_h,   σ_{b_h}² = σ² ( ‖â_h‖² + Mσ_{a_h}² + σ² c_{b_h}^{−2} )^{−1}.  (12)
Here, I_d denotes the d-dimensional identity matrix. Iterating Eqs.(11) and (12) until convergence,
we can obtain a local minimum of the free energy. Eqs.(8) and (9) are similarly applied if the noise
variance σ² is unknown and in the empirical Bayesian scenario, respectively.
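For general H, the simpleVB updates (11)-(12) touch one column pair (a_h, b_h) at a time through the residual Y − ∑_{h′≠h} b_{h′} a_{h′}⊤. A minimal sketch of one sweep (plain Python; all names are illustrative assumptions, not the authors' implementation):

```python
def simplevb_update(Y, A, B, sa2, sb2, sigma2, ca2, cb2):
    """One sweep of the simpleVB updates (11)-(12).

    Y: L x M data (list of lists).  A: list of H columns a_h (each length M).
    B: list of H columns b_h (each length L).  sa2, sb2: lists of the H
    posterior variances sigma_{a_h}^2, sigma_{b_h}^2.  ca2, cb2: lists of
    the prior variances c_{a_h}^2, c_{b_h}^2.  Names are illustrative.
    """
    L, M, H = len(Y), len(Y[0]), len(A)
    for h in range(H):
        # Residual R = Y - sum_{h' != h} b_{h'} a_{h'}^T
        R = [[Y[l][m] - sum(B[k][l] * A[k][m] for k in range(H) if k != h)
              for m in range(M)] for l in range(L)]
        # Eq.(11): sigma_{a_h}^2 and a_h
        sa2[h] = sigma2 / (sum(x * x for x in B[h]) + L * sb2[h] + sigma2 / ca2[h])
        A[h] = [(sa2[h] / sigma2) * sum(R[l][m] * B[h][l] for l in range(L))
                for m in range(M)]
        # Eq.(12): sigma_{b_h}^2 and b_h
        sb2[h] = sigma2 / (sum(x * x for x in A[h]) + M * sa2[h] + sigma2 / cb2[h])
        B[h] = [(sb2[h] / sigma2) * sum(R[l][m] * A[h][m] for m in range(M))
                for l in range(L)]
    return A, B, sa2, sb2
```

Compared with the matrix-wise updates (6)-(7), no H × H inverse appears here, which is the computational advantage of the column-wise constraint noted above.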
A recent study has derived the analytic solution for simpleVB when the observed matrix has no
missing entry [15]. This work made simpleVB more attractive, because it not only provided a substantial reduction of computation costs, but also guaranteed the global optimality of the solution.
However, it was not clear how restrictive the column-wise independence assumption is, beyond its
experimental success [14]. In the next section, we theoretically show that the column-wise independence assumption is actually harmless.
3 Analytic Solution of VBMF under Matrix-wise Independence
Under the matrix-wise independence constraint (5), the free energy (4) can be written as
F = ⟨ log r(A) + log r(B) − log p(Y | A, B) p(A) p(B) ⟩_{r(A)r(B)}
  = (LM/2) log σ² + (M/2) log( |C_A| / |Σ_A| ) + (L/2) log( |C_B| / |Σ_B| ) + ‖Y‖²_Fro / (2σ²)
    + (1/2) tr{ C_A^{−1} ( Â⊤Â + MΣ_A ) + C_B^{−1} ( B̂⊤B̂ + LΣ_B )
    + σ^{−2} ( −2 Â⊤ Y⊤ B̂ + ( Â⊤Â + MΣ_A )( B̂⊤B̂ + LΣ_B ) ) } + const.  (13)

Note that Eqs.(6) and (7) together form the stationarity condition of Eq.(13) with respect to Â, B̂,
Σ_A, and Σ_B.
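When checking an implementation of the updates it helps to evaluate (13) directly. The sketch below does so for H = 1, where |C_A|/|Σ_A| = c_a²/σ_a² and all traces are scalars; the additive constant is dropped, and all names are our illustrative assumptions rather than the authors' code:

```python
import math

def free_energy_rank1(Y, a, b, sa2, sb2, sigma2, ca2, cb2):
    """Evaluate the free energy (13) for H = 1, up to an additive constant.

    a, b are the posterior means (length-M and length-L lists); sa2, sb2
    are the scalar posterior variances; ca2, cb2 the prior variances.
    A sketch for sanity checks, not a reference implementation.
    """
    L, M = len(Y), len(Y[0])
    y2 = sum(Y[l][m] ** 2 for l in range(L) for m in range(M))        # ||Y||^2_Fro
    ayb = sum(a[m] * Y[l][m] * b[l] for l in range(L) for m in range(M))  # a^T Y^T b
    ta = sum(x * x for x in a) + M * sa2    # tr(A^T A + M Sigma_A)
    tb = sum(x * x for x in b) + L * sb2    # tr(B^T B + L Sigma_B)
    F = (L * M / 2) * math.log(sigma2)
    F += (M / 2) * math.log(ca2 / sa2) + (L / 2) * math.log(cb2 / sb2)
    F += y2 / (2 * sigma2)
    F += 0.5 * (ta / ca2 + tb / cb2 + (-2 * ayb + ta * tb) / sigma2)
    return F
```

One simple design choice worth noting: with the means fixed at zero, only the ‖Y‖²/(2σ²) term depends on the data, so the free energy must grow with the data norm, which makes a convenient smoke test.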
Below, we show that a global solution of Σ_A and Σ_B is diagonal. When the product C_A C_B is non-degenerate (i.e., c_{a_h} c_{b_h} > c_{a_{h′}} c_{b_{h′}} for any pair h < h′), the global solution is unique and diagonal.
On the other hand, when C_A C_B is degenerate, the global solutions are not unique because arbitrary
rotation in the degenerate subspace is possible without changing the free energy. However, still one
of the equivalent solutions is always diagonal.
Theorem 1  Diagonal Σ_A and Σ_B minimize the free energy (13).
The basic idea of our proof is that, since minimizing the free energy (13) with respect to A, B, Σ_A,
and Σ_B is too complicated, we focus on a restricted space written in a particular form that includes
the optimal solution. From necessary conditions for optimality, we can deduce that the solutions Σ_A
and Σ_B are diagonal.
Below, we describe the outline of the proof for non-degenerate C_A C_B. The complete proof for
general cases is omitted because of the page limit.

(Sketch of proof of Theorem 1)  Assume that (A*, B*, Σ_A*, Σ_B*) is a minimizer of the free energy
(13), and consider the following set of parameters specified by an H × H orthogonal matrix Θ:

Â = A* C_A^{−1/2} Θ⊤ C_A^{1/2},   B̂ = B* C_A^{1/2} Θ⊤ C_A^{−1/2},
Σ_A = C_A^{1/2} Θ C_A^{−1/2} Σ_A* C_A^{−1/2} Θ⊤ C_A^{1/2},   Σ_B = C_A^{−1/2} Θ C_A^{1/2} Σ_B* C_A^{1/2} Θ⊤ C_A^{−1/2}.

Note that B̂Â⊤ is invariant with respect to Θ, and (Â, B̂, Σ_A, Σ_B) = (A*, B*, Σ_A*, Σ_B*) holds if
Θ = I_H. Then, as a function of Θ, the free energy (13) can be simplified as

F(Θ) = (1/2) tr{ C_A^{−1} C_B^{−1} Θ C_A^{1/2} ( B*⊤B* + LΣ_B* ) C_A^{1/2} Θ⊤ } + const.

This is necessarily minimized at Θ = I_H, because we assumed that (A*, B*, Σ_A*, Σ_B*) is a minimizer. We can show that F(Θ) is minimized at Θ = I_H only if B*⊤B* + LΣ_B* is diagonal. This
implies that Σ_A* (see Eq.(6)) should be diagonal.
Similarly, we consider another set of parameters specified by an H × H orthogonal matrix Θ′:

Â = A* C_B^{1/2} Θ′⊤ C_B^{−1/2},   B̂ = B* C_B^{−1/2} Θ′⊤ C_B^{1/2},
Σ_A = C_B^{−1/2} Θ′ C_B^{1/2} Σ_A* C_B^{1/2} Θ′⊤ C_B^{−1/2},   Σ_B = C_B^{1/2} Θ′ C_B^{−1/2} Σ_B* C_B^{−1/2} Θ′⊤ C_B^{1/2}.

Then, as a function of Θ′, the free energy (13) can be expressed as

F(Θ′) = (1/2) tr{ C_A^{−1} C_B^{−1} Θ′ C_B^{1/2} ( A*⊤A* + MΣ_A* ) C_B^{1/2} Θ′⊤ } + const.

Similarly, this is minimized at Θ′ = I_H only if A*⊤A* + MΣ_A* is diagonal. Thus, Σ_B* should be
diagonal (see Eq.(7)).  □
The result that Σ_A and Σ_B become diagonal would be natural because we assumed the independent
Gaussian prior on A and B: the fact that any Y can be decomposed into orthogonal components may
imply that the observation Y cannot convey any preference for singular-component-wise correlation.
Note, however, that Theorem 1 does not necessarily hold when the observed matrix has missing
entries.
Theorem 1 implies that the stronger column-wise independence constraint (10) does not degrade
approximation accuracy, and the VB solution under matrix-wise independence (5) essentially agrees
with the simpleVB solution. Consequently, we can obtain a global analytic solution for VB, by
combining Theorem 1 above with Theorem 1 in [15]:
Corollary 1  Let γ_h (≥ 0) be the h-th largest singular value of Y, and let ω_{a_h} and ω_{b_h} be the
associated right and left singular vectors:

Y = ∑_{h=1}^{L} γ_h ω_{b_h} ω_{a_h}⊤.

Let γ̂_h be the second largest real solution of the following quartic equation with respect to t:

f_h(t) := t⁴ + ξ₃t³ + ξ₂t² + ξ₁t + ξ₀ = 0,  (14)

where the coefficients are defined by

ξ₃ = (L − M)² γ_h / (LM),   ξ₂ = −( ξ₃γ_h + (L² + M²)η_h² / (LM) + 2σ⁴ / (c_{a_h}² c_{b_h}²) ),   ξ₁ = ξ₃ √ξ₀,
ξ₀ = ( η_h² − σ⁴ / (c_{a_h}² c_{b_h}²) )²,   η_h² = ( 1 − σ²L/γ_h² )( 1 − σ²M/γ_h² ) γ_h².

Let

γ̃_h = √( (L + M)σ²/2 + σ⁴/(2c_{a_h}²c_{b_h}²) + √( ( (L + M)σ²/2 + σ⁴/(2c_{a_h}²c_{b_h}²) )² − LMσ⁴ ) ).  (15)

Then, the global VB solution under matrix-wise independence (5) can be expressed as

Û^VB = ⟨BA⊤⟩_{r^VB(A,B)} = B̂Â⊤ = ∑_{h=1}^{H} γ̂_h^VB ω_{b_h} ω_{a_h}⊤,   where   γ̂_h^VB = γ̂_h if γ_h > γ̃_h, and γ̂_h^VB = 0 otherwise.
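The truncation threshold (15) and the quartic (14) are plain scalar computations. A hedged sketch follows (illustrative names; the final step of selecting the second largest real root of f_h is deliberately left out, since robust quartic root selection is beyond a few lines):

```python
import math

def vb_threshold(L, M, sigma2, cah2, cbh2):
    """Truncation threshold (15): the component h is kept only if
    gamma_h exceeds this value.  cah2, cbh2 stand for c_{a_h}^2, c_{b_h}^2.
    The inner discriminant is nonnegative because (L+M)/2 >= sqrt(LM).
    """
    t = (L + M) * sigma2 / 2.0 + sigma2 ** 2 / (2.0 * cah2 * cbh2)
    return math.sqrt(t + math.sqrt(t * t - L * M * sigma2 ** 2))

def quartic(t, L, M, sigma2, cah2, cbh2, gamma):
    """Evaluate f_h(t) of Eq.(14); its second largest real root is gamma^_h."""
    eta2 = (1 - sigma2 * L / gamma ** 2) * (1 - sigma2 * M / gamma ** 2) * gamma ** 2
    xi3 = (L - M) ** 2 * gamma / (L * M)
    xi2 = -(xi3 * gamma + (L ** 2 + M ** 2) * eta2 / (L * M)
            + 2 * sigma2 ** 2 / (cah2 * cbh2))
    xi0 = (eta2 - sigma2 ** 2 / (cah2 * cbh2)) ** 2
    xi1 = xi3 * math.sqrt(xi0)
    return t ** 4 + xi3 * t ** 3 + xi2 * t ** 2 + xi1 * t + xi0
```

Note that f_h(0) = ξ₀ is a square and hence nonnegative, which is a cheap consistency check on the coefficient computation.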
h=1
Theorem 1 holds also in the empirical Bayesian scenario, where the hyperparameters (CA , CB )
are also estimated from observation. Accordingly, the empirical VB solution also agrees with the
empirical simpleVB solution, whose analytic-form is given in Corollary 5 in [15]. Thus, we obtain
the global analytic solution for empirical VB:
Corollary 2  The global empirical VB solution under matrix-wise independence (5) is given by

Û^EVB = ∑_{h=1}^{H} γ̂_h^EVB ω_{b_h} ω_{a_h}⊤,   where   γ̂_h^EVB = γ̆_h^VB if γ_h > γ̲_h and Δ_h ≤ 0, and γ̂_h^EVB = 0 otherwise.

Here,

γ̲_h = ( √L + √M ) σ,  (16)
ĉ_h² = ( 1/(2LM) ) ( γ_h² − (L + M)σ² + √( ( γ_h² − (L + M)σ² )² − 4LMσ⁴ ) ),  (17)
Δ_h = M log( γ_h γ̆_h^VB / (Mσ²) + 1 ) + L log( γ_h γ̆_h^VB / (Lσ²) + 1 ) + ( −2γ_h γ̆_h^VB + LM ĉ_h² ) / σ²,  (18)

and γ̆_h^VB is the VB solution for c_{a_h} c_{b_h} = ĉ_h.
When we calculate the empirical VB solution, we first check if γ_h > γ̲_h holds. If it holds, we
compute γ̆_h^VB by using Eq.(17) and Corollary 1. Otherwise, γ̂_h^EVB = 0. Finally, we check if
Δ_h ≤ 0 holds by using Eq.(18).
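The first stage of this decision, the comparison with (16) and the evaluation of (17), can be sketched as follows (illustrative names; the final Δ_h ≤ 0 test of Eq.(18) additionally needs γ̆_h^VB from Corollary 1 and is therefore omitted here):

```python
import math

def evb_check(gamma, L, M, sigma2):
    """First stage of the empirical VB decision for one singular value.

    Compares gamma with the threshold (16); when it is exceeded, returns
    c^_h^2 from Eq.(17), otherwise returns None (the component is pruned
    at this stage, i.e., gamma_h^EVB = 0).  A sketch only.
    """
    if gamma <= (math.sqrt(L) + math.sqrt(M)) * math.sqrt(sigma2):
        return None
    s = gamma ** 2 - (L + M) * sigma2
    # At the threshold, s = 2*sqrt(L*M)*sigma^2, so the discriminant is
    # nonnegative whenever gamma exceeds the threshold.
    return (s + math.sqrt(s * s - 4 * L * M * sigma2 ** 2)) / (2 * L * M)
```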
When the noise variance σ² is unknown, it is optimized by a naive 1-dimensional search to minimize
the free energy [15]. To evaluate the free energy (13), we need the covariances Σ_A and Σ_B, which
neither Corollary 1 nor Corollary 2 provides. The following corollary, which gives the complete
information on the VB posterior, is obtained by combining Theorem 1 above with Corollary 2 in
[15]:
Corollary 3  The VB posteriors under matrix-wise independence (5) are given by

r_A^VB(A) = ∏_{h=1}^{H} N_M(a_h; â_h, σ_{a_h}² I_M),   r_B^VB(B) = ∏_{h=1}^{H} N_L(b_h; b̂_h, σ_{b_h}² I_L),

where, for γ̂_h^VB being the solution given by Corollary 1,

â_h = ± √( γ̂_h^VB δ̂_h ) ω_{a_h},   b̂_h = ± √( γ̂_h^VB δ̂_h^{−1} ) ω_{b_h},

σ_{a_h}² = ( −( γ̆_h² − σ²(M − L) ) + √( ( γ̆_h² − σ²(M − L) )² + 4Mσ² γ̆_h² ) ) / ( 2M ( γ̂_h^VB δ̂_h^{−1} + σ² c_{a_h}^{−2} ) ),

σ_{b_h}² = ( −( γ̆_h² + σ²(M − L) ) + √( ( γ̆_h² + σ²(M − L) )² + 4Lσ² γ̆_h² ) ) / ( 2L ( γ̂_h^VB δ̂_h + σ² c_{b_h}^{−2} ) ),

δ̂_h = ( (M − L)( γ_h − γ̂_h^VB ) + √( (M − L)²( γ_h − γ̂_h^VB )² + 4LMσ⁴ / ( c_{a_h}² c_{b_h}² ) ) ) / ( 2σ² M c_{a_h}^{−2} ),

γ̆_h² = γ_h² if γ_h > γ̃_h, and γ̆_h² = σ⁴ / ( c_{a_h}² c_{b_h}² ) otherwise.

Note that the ratio c_{a_h}/c_{b_h} is arbitrary in empirical VB, so we can fix it to, e.g., c_{a_h}/c_{b_h} = 1 without
loss of generality [15].
4 Experimental Results
In this section, we first introduce probabilistic PCA as a probabilistic MF model. Then, we show
experimental results on artificial and benchmark datasets, which illustrate practical advantages of
using our analytic solution.
4.1 Probabilistic PCA
In probabilistic PCA [20], the observation y ∈ R^L is assumed to be driven by a latent vector ã ∈ R^H
in the following form:

y = Bã + ε.

Here, B ∈ R^{L×H} specifies the linear relationship between ã and y, and ε ∈ R^L is a Gaussian noise
subject to N_L(0, σ² I_L). Suppose that we are given M observed samples {y_1, …, y_M} generated
from the latent vectors {ã_1, …, ã_M}, and each latent vector is subject to ã ∼ N_H(0, I_H). Then,
the probabilistic PCA model is written as Eqs.(1) and (2) with C_A = I_H.
If we apply Bayesian inference, the intrinsic dimension H is automatically selected without predetermination [4, 14]. This useful property is called automatic dimensionality selection (ADS).
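As a rough illustration of the ADS effect, one can count how many singular values of the data matrix survive the empirical VB threshold (16); this sketch ignores the Δ_h check of Eq.(18) and uses illustrative names, so it approximates rather than reproduces the full selection rule:

```python
import math

def ads_rank(singular_values, L, M, sigma2):
    """Count components kept by the empirical VB threshold (16),
    gamma_h > (sqrt(L) + sqrt(M)) * sigma.  Mimics automatic
    dimensionality selection (ADS); the Delta_h <= 0 test of Eq.(18)
    is omitted, so this is only a first-stage estimate of the rank.
    """
    thr = (math.sqrt(L) + math.sqrt(M)) * math.sqrt(sigma2)
    return sum(1 for g in singular_values if g > thr)
```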
4.2 Experiment on Artificial Data
We compare the iterative algorithm and the analytic solution in the empirical VB scenario with
unknown noise variance, i.e., the hyperparameters (C_A, C_B) and the noise variance σ² are also
[Figure 1: Experimental results for the Artificial1 dataset, where the data dimension is L = 100, the
number of samples is M = 300, and the true rank is H* = 20. Panels: (a) free energy, (b) computation
time, and (c) estimated rank, each plotted against iteration for the analytic and iterative methods.]
[Figure 2: Experimental results for the Artificial2 dataset (L = 70, M = 300, and H* = 40). Panels:
(a) free energy, (b) computation time, and (c) estimated rank.]
estimated from observation. We use the full-rank model (i.e., H = min(L, M)), and expect the
ADS effect to automatically find the true rank H*.
Figure 1 shows the free energy, the computation time, and the estimated rank over iterations for an
artificial (Artificial1) dataset with L = 100, M = 300, and H* = 20. We randomly created true
matrices A* ∈ R^{M×H*} and B* ∈ R^{L×H*} so that each entry of A* and B* follows N_1(0, 1). An
observed matrix Y was created by adding a noise subject to N_1(0, 1) to each entry of B*A*⊤.
The iterative algorithm consists of the update rules (6)-(9). Initial values were set in the following
way: Â and B̂ are randomly created so that each entry follows N_1(0, 1). Other variables are set to
Σ_A = Σ_B = C_A = C_B = I_H and σ² = 1. Note that we rescale Y so that ‖Y‖²_Fro/(LM) = 1
before starting iteration. We ran the iterative algorithm 10 times, starting from different initial
points, and each trial is plotted by a solid line in Figure 1. The analytic solution consists of applying
Corollary 2 combined with a naive 1-dimensional search for noise variance σ² estimation [15]. The
analytic solution is plotted by the dashed line. We see that the analytic solution estimates the true
rank Ĥ = H* = 20 immediately (about 0.1 sec on average over 10 trials), while the iterative algorithm
does not converge in 60 sec.
Figure 2 shows experimental results on another artificial dataset (Artificial2) where L = 70, M =
300, and H* = 40. In this case, all the 10 trials of the iterative algorithm are trapped at local
minima. We empirically observed a tendency that the iterative algorithm suffers from the local
minima problem when H* is large (close to H).
4.3 Experiment on Benchmark Data
Figures 3 and 4 show experimental results on the Satellite and the Spectf datasets available from the
UCI repository [1], showing similar tendencies to Figures 1 and 2. We also conducted experiments
on various benchmark datasets, and found that the iterative algorithm typically converges slowly,
and sometimes suffers from the local minima problem, while our analytic-form gives the global
solution immediately.
[Figure 3: Experimental results for the Sat dataset (L = 36, M = 6435). Panels: (a) free energy,
(b) computation time, and (c) estimated rank.]
[Figure 4: Experimental results for the Spectf dataset (L = 44, M = 267). Panels: (a) free energy,
(b) computation time, and (c) estimated rank.]
5 Conclusion and Discussion
In this paper, we have analyzed the fully-observed variational Bayesian matrix factorization (VBMF)
under matrix-wise independence. We have shown that the VB solution under matrix-wise independence essentially agrees with the simplified VB (simpleVB) solution under column-wise independence. As a consequence, we can obtain the global VB solution under matrix-wise independence
analytically in a computationally very efficient way.
Our analysis assumed uncorrelated priors. With correlated priors, the posterior is no longer uncorrelated and thus it is not straightforward to obtain a global solution analytically. Nevertheless, there
exists a situation where an analytic solution can be easily obtained: Suppose there exists an H × H
non-singular matrix T such that both of C_A* = T C_A T⊤ and C_B* = (T^{−1})⊤ C_B T^{−1} are diagonal.
We can show that the free energy (13) is invariant under the following transformation for any T:

A → A T⊤,   Σ_A → T Σ_A T⊤,   C_A → T C_A T⊤,
B → B T^{−1},   Σ_B → (T^{−1})⊤ Σ_B T^{−1},   C_B → (T^{−1})⊤ C_B T^{−1}.

Accordingly, the following procedure gives the global solution analytically: the analytic solution
given the diagonal (C_A*, C_B*) is first computed, and the above transformation is then applied.
We have demonstrated the usefulness of our analytic solution in probabilistic PCA. On the other
hand, robust PCA has gathered a great deal of attention recently [5], and its Bayesian variant has
been proposed [2]. We expect that our analysis can handle more structured sparsity, in addition to
the current low-rank inducing sparsity. Extension of the current work along this line will allow us to
give more theoretical insights into robust PCA and provide computationally efficient algorithms.
Finally, a more challenging direction is to handle priors correlated over rows of A and B. This
allows us to model correlations in the observation space, and capture, e.g., short-term correlation
in time-series data and neighboring-pixel correlation in image data. Analyzing such a situation, as
well as missing value imputation and tensor factorization [11, 6, 8, 21], is our important future work.
Acknowledgments
The authors thank anonymous reviewers for helpful comments. Masashi Sugiyama was supported
by the FIRST program. Derin Babacan was supported by a Beckman Postdoctoral Fellowship.
References
[1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[2] D. Babacan, M. Luessi, R. Molina, and A. Katsaggelos. Sparse Bayesian methods for low-rank matrix estimation. arXiv:1102.5288v1 [stat.ML], 2011.
[3] C. M. Bishop. Bayesian principal components. In Advances in NIPS, volume 11, pages 382-388, 1999.
[4] C. M. Bishop. Variational principal components. In Proc. of ICANN, volume 1, pages 509-514, 1999.
[5] E.-J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009.
[6] J. D. Carroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition. Psychometrika, 35:283-319, 1970.
[7] S. Funk. Try this at home. http://sifter.org/~simon/journal/20061211.html, 2006.
[8] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis. UCLA Working Papers in Phonetics, 16:1-84, 1970.
[9] H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:417-441, 1933.
[10] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3-4):321-377, 1936.
[11] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
[12] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77-87, 1997.
[13] Y. J. Lim and T. W. Teh. Variational Bayesian approach to movie rating prediction. In Proceedings of KDD Cup and Workshop, 2007.
[14] S. Nakajima, M. Sugiyama, and D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In Proceedings of 28th International Conference on Machine Learning (ICML2011), Bellevue, WA, USA, Jun. 28-Jul. 2, 2011.
[15] S. Nakajima, M. Sugiyama, and R. Tomioka. Global analytic solution for variational Bayesian matrix factorization. In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1759-1767, 2010.
[16] T. Raiko, A. Ilin, and J. Karhunen. Principal component analysis for large scale problems with lots of missing values. In J. Kok, J. Koronacki, R. Lopez de Mantras, S. Matwin, D. Mladenic, and A. Skowron, editors, Proceedings of the 18th European Conference on Machine Learning, volume 4701 of Lecture Notes in Computer Science, pages 691-698, Berlin, 2007. Springer-Verlag.
[17] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305-345, 1999.
[18] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In International Conference on Machine Learning, 2008.
[19] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1257-1264, Cambridge, MA, 2008. MIT Press.
[20] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, 61:611-622, 1999.
[21] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279-311, 1966.
Structure Learning for Optimization
Shulin (Lynn) Yang
Department of Computer Science
University of Washington
Seattle, WA 98195
[email protected]
Ali Rahimi
Red Bow Labs
Berkeley, CA 94704
[email protected]
Abstract
We describe a family of global optimization procedures that automatically decompose optimization problems into smaller loosely coupled problems. The solutions
of these are subsequently combined with message passing algorithms. We show
empirically that these methods produce better solutions with fewer function evaluations than existing global optimization methods. To develop these methods, we
introduce a notion of coupling between variables of optimization. This notion
of coupling generalizes the notion of independence between random variables in
statistics, sparseness of the Hessian in nonlinear optimization, and the generalized distributive law. Despite its generality, this notion of coupling is easier to
verify empirically, making structure estimation easy, while allowing us to migrate
well-established inference methods on graphical models to the setting of global
optimization.
1 Introduction
We consider optimization problems where the objective function is costly to evaluate and may be
accessed only by evaluating it at requested points. In this setting, the function is a black box, and
we have no access to its derivative or its analytical structure. We propose solving such optimization
problems by first estimating the internal structure of the black box function, then optimizing the
function with message passing algorithms that take advantage of this structure. This lets us solve
global optimization problems as a sequence of small grid searches that are coordinated by dynamic
programming. We are motivated by the problem of tuning the parameters of computer programs to
improve their accuracy or speed. For the programs that we consider, it can take several minutes to
evaluate these performance measures under a particular parameter setting.
Many optimization problems exhibit only loose coupling between many of the variables of optimization. For example, to tune the parameters of an audio-video streaming program, the parameters
of the audio codec could conceivably be tuned independently of the parameters of the video codec.
Similarly, to tune the networking component that glues these codecs together it suffices to consider
only a few parameters of the codecs, such as their output bit-rate. Such notions of conditional decoupling are conveniently depicted in a graphical form that represents the way the objective function
factors into a sum or product of terms each involving only a small subset of the variables. This
factorization structure can then be exploited by optimization procedures such as dynamic programming on trees or junction trees. Unfortunately, the factorization structure of a function is difficult to
estimate from function evaluation queries only.
We introduce a notion of decoupling that can be more readily estimated from function evaluations.
At the same time, this notion of decoupling is more general than the factorization notion of decoupling in that functions that do not factorize may still exhibit this type of decoupling. We say that two
variables are decoupled if the optimal setting of one variable does not depend on the setting of the
other. This is formalized below in a way that parallels the notion of conditional decoupling between
random variables in statistics. This parallel allows us to migrate much of the machinery developed
for inference on graphical models to global optimization. For example, decoupling can be visualized with a graphical model whose semantics are similar to those of a Markov network. Analogs of
the max-product algorithm on trees, the junction tree algorithm, and loopy belief propagation can be
readily adapted to global optimization. We also introduce a simple procedure to estimate decoupling
structure.
The resulting recipe for global optimization is to first estimate the decoupling structure of the objective function, then to optimize it with a message passing algorithm that utilizes this structure. The
message passing algorithm relies on a simple grid search to solve the sub-problems it generates. In
many cases, using the same number of function evaluations, this procedure produces solutions with
objective values that improve over those produced by existing global optimizers by as much as 10%.
This happens because knowledge of the independence structure allows this procedure to explore the
objective function only along directions that cause the function to vary, and because the grid search
that solves the sub-problems does not get stuck in local minima.
2 Related work
The idea of estimating and exploiting loose coupling between variables of optimization appears
implicitly in Quasi-Newton methods that numerically estimate the Hessian matrix, such as BFGS
(Nocedal & Wright, 2006, Chap. 6). Indeed, the sparsity pattern of the Hessian indicates the pairs
of terms that do not interact with each other in a second-order approximation of the function. This
is strictly a less powerful notion of coupling than the factorization model, which we argue below, is
in turn less powerful than our notion of decoupling.
Others have proposed approximating the objective function while simultaneously optimizing over
it Srinivas et al. (2010). The procedure we develop here seeks only to approximate decoupling
structure of the function, a much simpler task to carry out accurately.
A similar notion of decoupling has been explored in the decision theory literature Keeney & Raiffa
(1976); Bacchus & Grove (1996), where decoupling was used to reason about preferences and utilities during decision making. In contrast, we use decoupling to solve black-box optimization problems and present a practical algorithm to estimate the decoupling structure.
3 Decoupling between variables of optimization
A common way to minimize an objective function over many variables is to factorize it into terms,
each of which involves only a small subset of the variables Aji & McEliece (2000). Such a representation, if it exists, can be optimized via a sequence of small optimization problems with dynamic
programming. This insight motivates message passing algorithms for inference on graphical models. For example, rather than minimizing the function f1 (x, y, z) = g1 (x, y) + g2 (y, z) over its three
variables simultaneously, one can compute the function g3 (y) = minz g2 (y, z), then the function
g4 (x) = miny g1 (x, y) + g3 (y), and finally minimizing g4 over x. A similar idea works for the
function f2 (x, y, z) = g1 (x, y)g2 (y, z) and indeed, whenever the operator that combines the factors
is associative, commutative, and allows the "min" operator to distribute over it.
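This dynamic program can be sketched in a few lines of Python (our own illustration; g1 and g2 are arbitrary choices, not from the paper):

```python
# Minimize f1(x, y, z) = g1(x, y) + g2(y, z) by dynamic programming over a
# discrete grid, never enumerating all (x, y, z) triples in one search.

def g1(x, y):
    return (x - y) ** 2

def g2(y, z):
    return (y - 2 * z) ** 2 + z ** 2

grid = [i / 10 for i in range(-20, 21)]  # candidate values per variable

# g3(y) = min_z g2(y, z)
g3 = {y: min(g2(y, z) for z in grid) for y in grid}
# g4(x) = min_y g1(x, y) + g3(y)
g4 = {x: min(g1(x, y) + g3[y] for y in grid) for x in grid}
best = min(g4.values())  # = min over x, y, z of f1(x, y, z)

# Check against brute force over the full grid.
brute = min(g1(x, y) + g2(y, z) for x in grid for y in grid for z in grid)
assert abs(best - brute) < 1e-12
```

The dynamic program evaluates g2 and g1 on two 2-D grids instead of one 3-D grid, which is where the savings come from.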
However, it is not necessary for a function to factorize for it to admit a simple dynamic programming
procedure. For example, a factorization for the function f3(x, y, z) = x^2 y^2 z^2 + x^2 + y^2 + z^2 is
elusive, yet the arguments of f3 are decoupled in the sense that the setting of any two variables does
not affect the optimal setting of the third. For example, argmin_x f3(x, y0, z0) is always x = 0, and
similarly for y and z. This decoupling allows us to optimize over the variables separately. This is
not a trivial property. For example, the function f4(x, y, z) = (x − y)^2 + (y − z)^2 exhibits no
such decoupling between x and y because the minimizer argmin_x f4(x, y0, z0) is y0, which is
obviously a function of the second argument of f . The following definition formalizes this concept:
Definition 1 (Blocking and decoupling). Let f : Ω → R be a function on a compact domain and let
X ∪ Y ∪ Z ⊆ Ω be a subset of the domain. We say that the coordinates Z block X from Y under
f if the set of minimizers of f over X does not change for any setting of the variables Y given a
setting of the variables Z:
∀ Y1, Y2 ∈ Y, Z ∈ Z:   argmin_{X∈X} f(X, Y1, Z) = argmin_{X∈X} f(X, Y2, Z).
We will say that X and Y are decoupled conditioned on Z under f, or X ⊥_f Y | Z, if Z blocks X
from Y and Z blocks Y from X under f at the same time.
2
We will simply say that X and Y are decoupled, or X ⊥_f Y, when X ⊥_f Y | Z with Ω = X ∪ Y ∪ Z,
and f is understood from context.
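Definition 1 can be probed numerically. A minimal sketch (our own, with an illustrative grid) checking the examples f3 and f4 above:

```python
# Empirical check of Definition 1 on the paper's examples: f3's arguments
# are mutually decoupled, while f4 couples x and y.

def argmin_x(f, ys, zs, grid):
    return min(grid, key=lambda x: f(x, ys, zs))

f3 = lambda x, y, z: x**2 * y**2 * z**2 + x**2 + y**2 + z**2
f4 = lambda x, y, z: (x - y)**2 + (y - z)**2

grid = [i / 4 for i in range(-8, 9)]  # illustrative discretization of [-2, 2]

# For f3 the minimizer over x ignores (y, z): it is always 0.
assert argmin_x(f3, 1.0, -2.0, grid) == argmin_x(f3, 0.5, 2.0, grid) == 0.0

# For f4 the minimizer over x tracks y, so x and y are coupled.
assert argmin_x(f4, 1.0, 0.0, grid) == 1.0
assert argmin_x(f4, -2.0, 0.0, grid) == -2.0
```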
For a given function f (x1 , . . . , xn ), decoupling between the variables can be represented graphically
with an undirected graph analogous to a Markov network:
Definition 2. A graph G = ({x1, . . . , xn}, E) is a coupling graph for a function f(x1, . . . , xn) if
(i, j) ∉ E implies xi and xj are decoupled under f.
The following result mirrors the notion of separation in Markov networks and makes it easy to reason
about decoupling between groups of variables with coupling graphs (see the appendix for a proof):
Proposition 1. Let X , Y, Z be groups of nodes in a coupling graph for a function f . If every path
from a node in X to a node in Y passes through a node in Z, then X ⊥_f Y | Z.
Functions that factorize as a product of terms exhibit this type of decoupling. For subsets of variables
X, Y, Z, we say X is conditionally separated from Y by Z by factorization, or X ⊥⊥ Y | Z, if X and
Y are separated in that way in the Markov network induced by the factorization of f . The following
is a generalization of the familiar result that factorization implies the global Markov property (Koller
& Friedman, 2009, Thm. 4.3) and follows from Aji & McEliece (2000):
Theorem 1 (Factorization implies decoupling). Let f(x1, . . . , xn) be a function on a compact domain, and let X1, . . . , XS, X, Y, Z be subsets of {x1, . . . , xn}. Let ⊗ be any commutative associative semi-ring operator over which the min operator distributes. If f factorizes as f(x1, . . . , xn) =
⊗_{s=1}^S g_s(Xs), then X ⊥_f Y | Z whenever X ⊥⊥ Y | Z.
However, decoupling is strictly more powerful than factorization. While X ⊥⊥ Y implies X ⊥_f Y,
the reverse is not necessarily true: there exist functions that admit no factorization at all, yet whose
arguments are completely mutually decoupled. Appendix B gives an example.
4 Optimization procedures that utilize decoupling
When a cost function factorizes, dynamic programming algorithms can be used to optimize over the
variables Aji & McEliece (2000). When a cost function exhibits decoupling as defined above, the
same dynamic programming algorithms can be applied with a few minor modifications.
The algorithms below refer to a function f whose arguments are partitioned over the sets
X1, . . . , Xn. Let Xi* denote the optimal value of Xi ∈ Xi. We will take simplifying liberties with
the order of the arguments of f when this causes no ambiguity. We will also replace the variables
that do not participate in the optimization (per decoupling) with an ellipsis.
4.1 Optimization over trees
Suppose the coupling graph between some partitioning X1, . . . , Xm of the arguments of f is tree-structured, in the sense that Xi ⊥_f Xj unless the edge (i, j) is in the tree. To optimize over f with
dynamic programming, define X0 arbitrarily as the root of the tree, let pi denote the index of the
parent of Xi, and let C_i^1, C_i^2, . . . denote the indices of its children. At each leaf node ℓ, construct the
functions
X̂_ℓ(X_{p_ℓ}) := argmin_{X_ℓ ∈ X_ℓ} f(X_ℓ, X_{p_ℓ}).    (1)
By decoupling, the optimal value of X_ℓ depends only on the optimal value of its parent, so X_ℓ* = X̂_ℓ(X_{p_ℓ}*).
For all other nodes i, define recursively starting from the parents of the leaf nodes the functions
X̂_i(X_{p_i}) = argmin_{X_i ∈ X_i} f(X_i, X_{p_i}, X̂_{C_i^1}(X_i), X̂_{C_i^2}(X_i), . . .)    (2)
Again, the optimal value of X_i depends only on the optimal setting of its parent, X_{p_i}*, and it can be
verified that X_i* = X̂_i(X_{p_i}*).
In our implementation of this algorithm, to represent a function X̂_i(X), we discretize its argument
into a grid and store the function as a table. To compute the entries of the table, a subordinate global
optimizer computes the minimization that appears in the definition of X̂_i.
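The leaf-to-root recursion of equations (1) and (2) can be sketched in a few lines. This is our own illustration on a three-variable chain with a toy objective, using grid search as the subordinate optimizer:

```python
# A minimal sketch (ours) of the tree optimizer of Section 4.1 on a chain
# x0 - x1 - x2 rooted at x0. f is treated as a black box; the tabulated
# functions are dicts keyed by the parent's grid value.

def f(x0, x1, x2):
    # Chain-coupled toy objective: x2 interacts only with x1, x1 with x0.
    return (x0 - 1) ** 2 + (x1 - x0) ** 2 + (x2 - x1) ** 2

grid = [i / 10 for i in range(-20, 21)]

# Leaf x2 (eq. 1): by decoupling, its argmin ignores x0, so we may fix
# x0 = 0.0 arbitrarily while tabulating.
best_x2 = {x1: min(grid, key=lambda x2: f(0.0, x1, x2)) for x1 in grid}
# Interior node x1 (eq. 2): fold in the leaf's table.
best_x1 = {x0: min(grid, key=lambda x1: f(x0, x1, best_x2[x1]))
           for x0 in grid}
# Root x0: a final one-dimensional search, then read the tables back.
x0_star = min(grid, key=lambda x0: f(x0, best_x1[x0], best_x2[best_x1[x0]]))
x1_star = best_x1[x0_star]
x2_star = best_x2[x1_star]
assert (x0_star, x1_star, x2_star) == (1.0, 1.0, 1.0)
```

Three 1-D grid searches (two of them tabulated against a parent) replace one 3-D search.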
3
4.2 Optimization over junction trees
Even when the coupling graph for a function is not tree-structured, a thin junction tree can often be
constructed for it. A variant of the above algorithm that mirrors the junction tree algorithm can be
used to efficiently search for the optima of the function.
Recall that a tree T of cliques is a junction tree for a graph G if it satisfies the following three
properties: there is one path between each pair of cliques; for each clique C of G there is some
clique A in T such that C ? A; for each pair of cliques A and B in T that contain node i of G, each
clique on the unique path between A and B also contains i.
These properties guarantee that T is tree-structured, that it covers all nodes and edges in G, and that
two nodes v and u in two different cliques Xi and Xj are decoupled from each other conditioned on
the union of the cliques on the path between u and v in T . Many heuristics exist for constructing a
thin junction tree for a graph Jensen & Graven-Nielsen (2007); Huang & Darwiche (1996).
To search for the minimizers of f, using a junction tree for its coupling graph, denote by X_{ij} :=
X_i ∩ X_j the intersection of the groups of variables X_i and X_j and by X_{i\j} = X_i \ X_j the set of nodes
in X_i but not in X_j. At every leaf clique ℓ of the junction tree, construct the function
X̂_ℓ(X_{ℓ,p_ℓ}) := argmin_{X_{ℓ\p_ℓ} ∈ X_{ℓ\p_ℓ}} f(X_ℓ).    (3)
For all other cliques i, compute recursively starting from the parents of the leaf cliques
X̂_i(X_{i,p_i}) = argmin_{X_{i\p_i} ∈ X_{i\p_i}} f(X_i, X̂_{C_i^1}(X_{i,C_i^1}), X̂_{C_i^2}(X_{i,C_i^2}), . . .).    (4)
As before, decoupling between the cliques, conditioned on the intersection of the cliques, guarantees
that X̂_i(X_{i,p_i}*) = X_i*. And as before, our implementation of this algorithm stores the intermediate
functions as tables by discretizing their arguments.
4.3 Other strategies
When the cliques of the junction tree are large, the subordinate optimizations in the above algorithm
become costly. In such cases, the following adaptations of approximate inference algorithms are
useful:
- The algorithm of Section 4.1 can be applied to a maximal spanning tree of the coupling graph.
- Analogously to Loopy Belief Propagation Pearl (1997), an arbitrary neighbor of each node can be declared as its parent, and the steps of Section 4.1 can be applied to each node until convergence.
- Loops in the coupling graph can be broken by conditioning on a node in each loop, resulting in a tree-structured coupling graph conditioned on those nodes. The optimizer of Section 4.1 then searches for the minima conditioned on the value of those nodes in the inner loop of a global optimizer that searches for good settings for the conditioned nodes.
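The first strategy can be sketched as follows (our own illustration; in practice the edge weights could reflect how strongly the minimizers shift during structure estimation, which the numbers below merely stand in for):

```python
# Approximate a loopy coupling graph by a maximum spanning tree before
# running the tree optimizer of Section 4.1. Kruskal's algorithm with a
# union-find on negated (sorted-descending) weights.

def max_spanning_tree(n, weighted_edges):
    """weighted_edges: list of (weight, i, j); returns tree edges (i, j)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    tree = []
    for w, i, j in sorted(weighted_edges, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:  # keep the edge unless it closes a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Illustrative weights: the weak (0.1) edge closing the 0-1-2 loop is dropped.
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.1, 0, 2), (0.7, 2, 3)]
assert sorted(max_spanning_tree(4, edges)) == [(0, 1), (1, 2), (2, 3)]
```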
5 Graph structure learning
It is possible to estimate decoupling structure between the arguments of a function f with the help
of a subordinate optimizer that only evaluates f .
A straightforward application of definition 1 to assess empirically whether groups of variables X
and Y are decoupled conditioned on a group of variables Z would require comparing the minimizer
of f over X for every possible value of Z and Y. This is not practical because it is at least as difficult
as minimizing f . Instead, we rely on the following proposition, which follows directly from 1:
Proposition 2 (Invalidating decoupling). If for some Z ∈ Z and Y0, Y1 ∈ Y, we have
argmin_{X∈X} f(X, Y0, Z) ≠ argmin_{X∈X} f(X, Y1, Z), then X ⊥̸_f Y | Z.
Following this result, an approximate coupling graph can be constructed by positing and invalidating
decoupling relations. Starting with a graph containing no edges, we consider all groupings X =
4
{xi }, Y = {xj }, Z = ?\{xi , xj }, of variables x1 , . . . , xn . We posit various values of Z 2 Z, Y0 2
Y and Y1 2 Y under this grouping, and compute the minimizers over X 2 X of f (X, Y0 , Z) and
f (X, Y1 , Z) with a subordinate optimizer. If the minimizers differ, then by the above proposition,
X and Y are not decoupled conditioned on Z, and an edge is added between xi and xj in the graph.
Algorithm 1 summarizes this procedure.
Algorithm 1 Estimating the coupling graph of a function.
input: A function f : X_1 × · · · × X_n → R, with each X_i compact; a discretization X̂_i of each X_i; a similarity threshold ε > 0; the number of times, N_Z, to sample Z.
output: A coupling graph G = ([x_1, . . . , x_n], E).
  E ← ∅
  for i, j ∈ [1, . . . , n]; y_0, y_1 ∈ X̂_j; 1 . . . N_Z do
    Z ∼ U(X̂_1 × · · · × X̂_n \ X̂_i × X̂_j)
    x̂_0 ← argmin_{x ∈ X̂_i} f(x, y_0, Z);  x̂_1 ← argmin_{x ∈ X̂_i} f(x, y_1, Z)
    if ‖x̂_0 − x̂_1‖ ≥ ε then
      E ← E ∪ {(i, j)}
    end if
  end for
In practice, we find that decoupling relationships are correctly recovered if values of Y0 and Y1 are
chosen by quantizing Y into a set Ŷ of 4 to 10 uniformly spaced discrete values and exhaustively
examining the settings of Y0 and Y1 in Ŷ. A few values of Z (fewer than five) sampled uniformly at
random from a similarly discretized set Ẑ suffice.
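A compact rendering of Algorithm 1 (our own sketch; the toy objective and grids below are illustrative, not from the paper):

```python
# Estimate a coupling graph by invalidating decoupling hypotheses, per
# Proposition 2, using a grid-search subordinate optimizer.
import itertools
import random

def coupling_graph(f, grids, eps=1e-9, n_z=1, seed=0):
    """f takes a full assignment as a tuple; grids[i] discretizes variable i."""
    rng = random.Random(seed)
    n = len(grids)
    edges = set()
    for i, j in itertools.combinations(range(n), 2):
        for y0, y1 in itertools.combinations(grids[j], 2):
            for _ in range(n_z):
                z = [rng.choice(g) for g in grids]  # random setting of the rest

                def argmin_i(yj):
                    def value(xi):
                        a = list(z)
                        a[i], a[j] = xi, yj
                        return f(tuple(a))
                    return min(grids[i], key=value)

                # Differing minimizers invalidate decoupling: add an edge.
                if abs(argmin_i(y0) - argmin_i(y1)) > eps:
                    edges.add((i, j))
                    break
    return edges

# Toy objective: x0 and x1 are coupled, x2 is decoupled from both.
f = lambda v: (v[0] - v[1]) ** 2 + v[2] ** 2
grids = [[-1.0, 0.0, 1.0]] * 3
assert coupling_graph(f, grids) == {(0, 1)}
```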
6 Experiments
We evaluate a two step process for global optimization: first estimating decoupling between variables using the algorithm of Section 5, then optimizing with this structure using an algorithm from
Section 4. Whenever Algorithm 1 detects tree-structured decoupling, we use the tree optimizer of
Section 4.1. Otherwise we either construct a junction tree and apply the junction tree optimizer of
Section 4.2 if the junction tree is thin, or we approximate the graph with a maximum spanning tree
and apply the tree solver of Section 4.1.
We compare this approach with three state-of-the-art black-box optimization procedures: Direct
Search Perttunen et al. (1993) (a deterministic space carving strategy), FIPS Mendes et al. (2004) (a
biologically inspired randomized algorithm), and MEGA Hazen & Gupta (2009) (a multiresolution
search strategy with numerically computed gradients). We use a publicly available implementation
of Direct Search 1 , and an implementation of FIPS and MEGA available from the authors of MEGA.
We set the number of particles for FIPS and MEGA to the square of the dimension of the problem
plus one, following the recommendation of their authors.
As the subordinate optimizer for Algorithm 1, we use a simple grid search for all our experiments.
As the subordinate optimizer for the algorithms of Section 4, we experiment with grid search and
the aforementioned state-of-the-art global optimizers.
We report results on both synthetic and real optimization problems. For each experiment, we report
the quality of the solution each algorithm produces after a preset number of function calls. To vary
the number of function calls the baseline methods invoke, we vary the number of time they iterate.
Since our method does not iterate, we vary the number of function calls its subordinate optimizer
invokes (when the subordinate optimizer is grid search, we vary the grid resolution).
The experiments demonstrate that using grid search as a subordinate strategy is sufficient to produce
better solutions than all the other global optimizers we evaluated.
1 Available from http://www4.ncsu.edu/~ctk/Finkel_Direct/.
Table 1: Value of the iterates of the functions of Table 2 after 10,000 function evaluations (for
our approach, this includes the function evaluations for structure learning). MIN is the ground
truth optimal value when available. GR is the number of discrete values along each dimension for
optimization. Direct Search (DIR), FIPS and MEGA are three state-of-the-art algorithms for global
optimization.
Function (n=50)  min  GR   Ours    DIR     FIPS    MEGA
Colville         0    100  0       3e-6    2e-14   3.75
Levy             0    400  0.013   2.80    4.20    3.22
Michalewics      n/a  400  -48.9   -18.2   -18.4   -1.3e-3
Rastrigin        0    400  0       0       23.6    4.2e-3
Schwefel         0    400  8.6     1.9e4   1.6e4   1.4e4
Dixon&Price      0    20   1       0.667   16.8    0.914
Rosenbrock       0    20   0       2.9e4   5.7e4   48.4
Trid             n/a  20   -2.2e4  -185    3.3e4   -41
Powell           0    6    19.4    324     121     0.014

6.1 Synthetic objective functions
We evaluated the above strategies on a standard benchmark of synthetic optimization problems 2
shown in Appendix A. These are functions of 50 variables and are used as black-box functions
in our experiments. In these experiments, the subordinate grid search of Algorithm 1 discretized
each dimension into four discrete values. The algorithms of Section 4 also used grid search as a
subordinate optimizer. For this grid search, each dimension was discretized into GR = (Emax / Nmc)^(1/Smc)
discrete values, where Emax is a cap on the number of function evaluations to perform, Smc is the
size of the largest clique in the junction tree, and Nmc is the number of nodes in the junction tree.
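For concreteness, with a hypothetical budget (the numbers are ours, not from the experiments) the resolution works out as:

```python
# GR = (E_max / N_mc) ** (1 / S_mc), rounded down, so that N_mc clique-wise
# grid searches of size GR ** S_mc stay within the evaluation budget E_max.
e_max, n_mc, s_mc = 10_000, 25, 2  # budget, #cliques, largest clique size
gr = int((e_max / n_mc) ** (1 / s_mc))
assert gr == 20 and n_mc * gr ** s_mc <= e_max
```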
Figure 1 shows that in all cases, Algorithm 1 recovered decoupling structure exactly even for very
coarse grids. Values of NZ greater than 1 did not improve the quality of the recovered graph,
justifying our heuristic of keeping NZ small. We used NZ = 1 in the remainder of this subsection.
Table 1 summarizes the quality of the solutions produced by the various algorithms after 10,000
function evaluations. Our approach outperformed the others on most of these problems. As expected, it performed particularly well on functions that exhibit sparse coupling, such as Levy, Rastrigin, and Schwefel.
In addition to achieving better solutions given the same number of function evaluations, our approach
also imposed lower computational overhead than the other methods: to process the entire benchmark
of this section takes our approach 2.2 seconds, while Direct Search, FIPS and MEGA take 5.7
minutes, 3.7 minutes and 53.3 minutes respectively.
[Figure 1 plots omitted: four panels (Colville, Levy, Rosenbrock, Powell) showing the percentage of incorrectly recovered edges against grid resolution (2 to 6) and number of function evaluations (4.9e3 to 4.4e4).]
Figure 1: Very coarse gridding is sufficient in Algorithm 1 to correctly recover decoupling structure.
The plots show percentage of incorrectly recovered edges in the coupling graph on four synthetic
cost functions as a function of the grid resolution (bottom x-axis) and the number of function evaluations (top x-axis). NZ = 1 in these experiments.
6.2 Experiments on real applications
We considered the real-world problem of automatically tuning the parameters of machine vision
and machine learning programs to improve their accuracy on new datasets. We sought to tune the
2 Acquired from http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/go.htm.
parameters of a face detector, a document topic classifier, and a scene recognizer to improve their
accuracy on new application domains. Automatic parameter tuning allows a user to quickly tune
a program?s default parameters to their specific application domain without tedious trial and error.
To perform this tuning automatically, we treated the accuracy of a program as a black box function
of the parameter values passed to it. These were challenging optimization problems because the
derivative of the function is elusive and each function evaluation can take minutes. Because the
output of a program tends to depend in a structured way on its parameters, our method achieved
significant speedups over existing global optimizers.
6.2.1 Face detection
The first application was a face detector. The program has five parameters: the size, in pixels, of
the smallest face to consider, the minimum distance, in pixels, between detected faces; a floating
point subsampling rate for building a multiresolution pyramid of the input image; a boolean flag that
determines whether to apply non-maximal suppression; and the choice of one of four wavelets to
use. Our goal was to minimize the detection error rate of this program on the GENKI-SZSL dataset
of 3, 500 faces 3 . Depending on the parameter settings, evaluating the accuracy of the program on
this dataset takes between 2 seconds and 2 minutes.
Algorithm 1 was run with a grid search as a subordinate optimizer with three discrete values along
the continuous dimensions. It invoked 90 function evaluations and produced a coupling graph
wherein the first three of the above parameters formed a clique and the remaining two parameters were decoupled from the others. Given this coupling graph, our junction tree optimizer with
grid search (with the continuous dimensions quantized into 10 discrete values) invoked 1000 function evaluations, and found parameter settings for which the accuracy of the detector was 7% better
than the parameter settings found by FIPS and Direct Search after the same number of function evaluations. FIPS and Direct Search fail to improve their solution even after 1800 evaluations. MEGA
fails to improve over the initial detection error of 50.84% with any number of iterations. To evaluate
the accuracy of our method under different numbers of function invocations, we varied the grid resolution between 2 to 12. See Figure 2. These experiments demonstrate how a grid search can help
overcome local minima that cause FIPS and Direct Search to get stuck.
[Figure 2 plot omitted: face-detector classification error (%) versus number of evaluations (0 to 1500) for the junction tree solver with grid search, Direct Search, and FIPS.]
Figure 2: Depending on the number of function evaluations allowed, our method produces parameter
settings for the face detector that are better than those recovered by FIPS or Direct Search by as much
as 7%.
6.2.2 Scene recognition
The second application was a visual scene recognizer. It extracts GIST features Oliva & Torralba
(2001) from an input image and classifies these features with a linear SVM. Our task was to tune
the six parameters of GIST to improve the recognition accuracy on a subset of the LabelMe dataset 4, which includes images of scenes such as coasts, mountains, streets, etc. The parameters of the
recognizer include a radial cut-off frequency (in cycles/pixel) of a circular filter that reduces illumination effects, the number of bins in a radial histogram of the response of a spatial spacial filter, and
the number of image regions in which to compute these histograms. Evaluating the classification
error under a set of parameters requires extracting GIST features with these parameters on a training
set, training a linear SVM, then applying the extractor and classifier to a test set. Each evaluation
takes between 10 and 20 minutes depending on the parameter settings.
3 Available from http://mplab.ucsd.edu.
4 Available from http://labelme.csail.mit.edu.
Algorithm 1 was run with a grid search as the subordinate optimizer, discretizing the search space
into four discrete values along each dimension. This results in a graph that admits no thin junction
tree, so we approximate it with a maximal spanning tree. We then apply the tree optimizer of Section
4.1 using as subordinate optimizers Direct Search, FIPS, and grid search (with five discrete values
along each dimension). After a total of roughly 300 function evaluations, the tree optimizer with
FIPS produces parameters that result in a classification error of 29.17%. With the same number
of function evaluations, Direct Search and FIPS produce parameters that resulted in classification
errors of 33.33% and 31.13% respectively. The tree optimizer with Direct Search and grid search as
subordinate optimizers resulted in error rates of 31.72% and 33.33%.
In this application, the proposed method enjoys only modest gains of about 2% because the variables
are tightly coupled, as indicated by the denseness of the graph and the thickness of the junction tree.
6.2.3 Multi-class classification
The third application was to tune the hyperparameters of a multi-class SVM classifier on the RCV1-v2 text categorization dataset 5. This dataset consists of a training set of 23,149 documents and a
test set of 781,265 documents each labeled with one of 101 topics Lewis et al. (2004). Our task
was to tune the 101 regularization parameters of the 1 vs. all classifiers that comprise a multi-class
classifier. The objective was the so-called macro-average F-score Tague (1981) on the test set. The
F score for one category is F = 2rp/(r + p), where r and p are the recall and precision rates
for that category. The macro-average F score is the average of the F scores over all categories.
Each evaluation requires training the classifier using the given hyperparameters and evaluating the
resulting classifier on the test set, and takes only a second since the text features have been precomputed.
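As a small illustration of the macro-averaged F-score just defined (the recall/precision pairs are toy numbers of ours, not RCV1 results):

```python
# Per-category F = 2rp / (r + p), then a plain average across categories.
def f_score(r, p):
    return 2 * r * p / (r + p) if r + p else 0.0

per_category = [(0.8, 0.6), (0.5, 0.5), (1.0, 0.25)]  # (recall, precision)
fs = [f_score(r, p) for r, p in per_category]
macro_f = sum(fs) / len(fs)  # macro average weights every category equally
assert abs(fs[2] - 0.4) < 1e-12  # 2 * 1.0 * 0.25 / 1.25
```

Because every category counts equally, rare categories influence the macro average as much as common ones.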
Algorithm 1 with grid search as a subordinate optimizer with a grid resolution of three discrete values
along each dimension found no coupling between the hyperparameters. As a result, the algorithms
of Section 4.1 reduce to optimizing over each one-dimensional parameter independently. We carried
out these one-dimensional optimizations with Direct Search, FIPS, and grid search (discretizing each
dimension into 100 values). After roughly 100,000 evaluations, these resulted in similar scores of
F = 0.6764, 0.6720, and 0.6743, respectively. But with the same number of evaluations, off-the-shelf Direct Search and FIPS result in scores of F = 0.6324 and 0.6043, respectively, nearly 11%
worse.
The cost of estimating the structure in this problem was large, since it grows quadratically with the
number of classes, but worth the effort because it indicated that each variable should be optimized
independently, ultimately resulting in huge speedups 6 .
7 Conclusion
We quantified the coupling between variables of optimization in a way that parallels the notion of
independence in statistics. This lets us identify decoupling between variables in cases where the
function does not factorize, making it strictly stronger than the notion of decoupling in statistical
estimation. This type of decoupling is also easier to evaluate empirically. Despite these differences,
this notion of decoupling allows us to migrate to global optimization many of the message passing algorithms that were developed to leverage factorization in statistics and optimization. These
include belief propagation and the junction tree algorithm. We show empirically that optimizing
cost functions by applying these algorithms to an empirically estimated decoupling structure outperforms existing black box optimization procedures that rely on numerical gradients, deterministic
space carving, or biologically inspired searches. Notably, we observe that it is advantageous to
decompose optimization problems into a sequence of small deterministic grid searches using this
technique, as opposed to employing existing black box optimizers directly.
5 Available from http://trec.nist.gov/data/reuters/reuters.html.
6 After running these experiments, we discovered a result of Fan & Lin (2007) showing that optimizing the macro-average F-measure is equivalent to optimizing the per-category F-measure, thereby validating the decoupling structure recovered by Algorithm 1.
References
Aji, S. and McEliece, R. The generalized distributive law and free energy minimization. IEEE
Transactions on Information Theory, 46(2), March 2000.
Bacchus, F. and Grove, A. Utility independence in a qualitative decision theory. In Proceedings of
the 6th International Conference on Principles of Knowledge Representation and Reasoning, pp.
542–552, 1996.
Fan, R. E. and Lin, C. J. A study on threshold selection for multi-label classification. Technical
report, National Taiwan University, 2007.
Hazen, M. and Gupta, M. Gradient estimation in global optimization algorithms. Congress on
Evolutionary Computation, pp. 1841–1848, 2009.
Huang, C. and Darwiche, A. Inference in belief networks: A procedural guide. International Journal
of Approximate Reasoning, 15(3):225–263, 1996.
Jensen, F. and Graven-Nielsen, T. Bayesian Networks and Decision Graphs. Springer, 2007.
Keeney, R. L. and Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Trade-offs.
Wiley, 1976.
Koller, D. and Friedman, N. Probabilistic Graphical Models: Principles and Techniques. MIT
Press, 2009.
Lewis, D., Yang, Y., Rose, T., and Li, F. RCV1: A new benchmark collection for text categorization
research. Journal of Machine Learning Research, 2004.
Mendes, R., Kennedy, J., and Neves, J. The fully informed particle swarm: Simpler, maybe better.
IEEE Transactions on Evolutionary Computation, 1(1):204?210, 2004.
Nocedal, J. and Wright, S. Numerical Optimization. Springer, 2nd edition, 2006.
Oliva, A. and Torralba, A. Modeling the shape of the scene: a holistic representation of the spatial
envelope. International Journal of Computer Vision, 43:145?175, 2001.
Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan
Kaufmann, 1997.
Perttunen, C., Jones, D., and Stuckman, B. Lipschitzian optimization without the Lipschitz constant.
Journal of Optimization Theory and Application, 79(1):157?181, 1993.
Srinivas, N., Krause, A., Kakade, S., and Seeger, M. Gaussian process optimization in the bandit
setting: No regret and experimental design. In International Conference on Machine Learning
(ICML), 2010.
Tague, J. M. The pragmatics of information retrieval experimentation. Information Retrieval Experiment, pp. 59?102, 1981.
9
Select and Sample - A Model of Efficient
Neural Inference and Learning
Jacquelyn A. Shelton, Jörg Bornschein, Abdul-Saboor Sheikh
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
{shelton,bornschein,sheikh}@fias.uni-frankfurt.de
Pietro Berkes
Volen Center for Complex Systems
Brandeis University, Boston, USA
Jörg Lücke
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
[email protected]
Abstract
An increasing number of experimental studies indicate that perception encodes a
posterior probability distribution over possible causes of sensory stimuli, which
is used to act close to optimally in the environment. One outstanding difficulty
with this hypothesis is that the exact posterior will in general be too complex to
be represented directly, and thus neurons will have to represent an approximation
of this distribution. Two influential proposals of efficient posterior representation
by neural populations are: 1) neural activity represents samples of the underlying distribution, or 2) they represent a parametric representation of a variational
approximation of the posterior. We show that these approaches can be combined
for an inference scheme that retains the advantages of both: it is able to represent
multiple modes and arbitrary correlations, a feature of sampling methods, and it
reduces the represented space to regions of high probability mass, a strength of
variational approximations. Neurally, the combined method can be interpreted as
a feed-forward preselection of the relevant state space, followed by a neural dynamics implementation of Markov Chain Monte Carlo (MCMC) to approximate
the posterior over the relevant states. We demonstrate the effectiveness and efficiency of this approach on a sparse coding model. In numerical experiments on
artificial data and image patches, we compare the performance of the algorithms
to that of exact EM, variational state space selection alone, MCMC alone, and
the combined select and sample approach. The select and sample approach integrates the advantages of the sampling and variational approximations, and forms
a robust, neurally plausible, and very efficient model of processing and learning
in cortical networks. For sparse coding we show applications easily exceeding a
thousand observed and a thousand hidden dimensions.
1
Introduction
According to the recently quite influential statistical approach to perception, our brain represents
not only the most likely interpretation of a stimulus, but also its corresponding uncertainty. In
other words, ideally the brain would represent the full posterior distribution over all possible interpretations of the stimulus, which is statistically optimal for inference and learning [1, 2, 3] ? a
hypothesis supported by an increasing number of psychophysical and electrophysiological results
[4, 5, 6, 7, 8, 9].
Although it is generally accepted that humans indeed maintain a complex posterior representation,
one outstanding difficulty with this approach is that the full posterior distribution is in general very
complex, as it may be highly correlated (due to explaining away effects), multimodal (multiple
possible interpretations), and very high-dimensional. One approach to address this problem in neural
circuits is to let neuronal activity represent the parameters of a variational approximation of the real
posterior [10, 11]. Although this approach can approximate the full posterior, the number of neurons
explodes with the number of variables ? for example, approximation via a Gaussian distribution
requires N 2 parameters to represent the covariance matrix over N variables. Another approach
is to identify neurons with variables and interpret neural activity as samples from their posterior
[12, 13, 3]. This interpretation is consistent with a range of experimental observations, including
neural variability (which would result from the uncertainty in the posterior) and spontaneous activity
(corresponding to samples from the prior in the absence of a stimulus) [3, 9]. The advantage of
using sampling is that the number of neurons scales linearly with the number of variables, and
it can represent arbitrarily complex posterior distributions given enough samples. The latter part
is the issue: collecting a sufficient number of samples to form such a complex, high-dimensional
representation is quite time-costly. Modeling studies have shown that a small number of samples
are sufficient to perform well on low-dimensional tasks (intuitively, this is because taking a low-dimensional marginal of the posterior accumulates samples over all dimensions) [14, 15]. However,
most sensory data is inherently very high-dimensional. As such, in order to faithfully represent
visual scenes containing potentially many objects and object parts, one requires a high-dimensional
latent space to represent the high number of potential causes, which returns to the problem sampling
approaches face in high dimensions.
The goal of the line of research pursued here is to address the following questions: 1) can we find
a sophisticated representation of the posterior for very high-dimensional hidden spaces? 2) as this
goal is believed to be shared by the brain, can we find a biologically plausible solution reaching it?
In this paper we propose a novel approach to approximate inference and learning that addresses the
drawbacks of sampling as a neural processing model, yet maintains its beneficial posterior representation and neural plausibility. We show that sampling can be combined with a preselection of
candidate units. Such a selection connects sampling to the influential models of neural processing
that emphasize feed-forward processing ([16, 17] and many more), and is consistent with the popular view of neural processing and learning as an interplay between feed-forward and recurrent stages
of processing [18, 19, 20, 21, 12]. Our combined approach emerges naturally by interpreting feedforward selection and sampling as approximations to exact inference in a probabilistic framework
for perception.
2
A Select and Sample Approach to Approximate Inference
Inference and learning in neural circuits can be regarded as the task of inferring the true hidden
causes of a stimulus. An example is inferring the objects in a visual scene based on the image
projected on the retina. We will refer to the sensory stimulus (the image) as a data point, $\vec{y} = (y_1, \ldots, y_D)$, and we will refer to the hidden causes (the objects) as $\vec{s} = (s_1, \ldots, s_H)$, with $s_h$ denoting hidden variable or hidden unit $h$. The data distribution can then be modeled by a generative data model: $p(\vec{y}\,|\,\Theta) = \sum_{\vec{s}} p(\vec{y}\,|\,\vec{s}, \Theta)\, p(\vec{s}\,|\,\Theta)$, with $\Theta$ denoting the parameters of the model^1. If we assume that the data distribution can be optimally modeled by the generative distribution for optimal parameters $\Theta^*$, then the posterior probability $p(\vec{s}\,|\,\vec{y}, \Theta^*)$ represents optimal inference given a data point $\vec{y}$. The parameters $\Theta^*$ given a set of $N$ data points $Y = \{\vec{y}_1, \ldots, \vec{y}_N\}$ are given by the maximum likelihood parameters $\Theta^* = \mathrm{argmax}_{\Theta}\{p(Y\,|\,\Theta)\}$.
A standard procedure to find the maximum likelihood solution is expectation maximization (EM).
EM iteratively optimizes a lower bound of the data likelihood by inferring the posterior distribution
over hidden variables given the current parameters (the E-step), and then adjusting the parameters to
maximize the likelihood of the data averaged over this posterior (the M-step). The M-step updates
typically depend only on a small number of expectation values of the posterior as given by
$$\langle g(\vec{s}) \rangle_{p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)} = \sum_{\vec{s}} p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)\, g(\vec{s}), \qquad (1)$$
where $g(\vec{s})$ is usually an elementary function of the hidden variables (e.g., $g(\vec{s}) = \vec{s}$ or $g(\vec{s}) = \vec{s}\vec{s}^T$ in the case of standard sparse coding). For any non-trivial generative model, the computation of
^1 In the case of continuous variables the sum is replaced by an integral. For a hierarchical model, the prior distribution $p(\vec{s}\,|\,\Theta)$ may be subdivided hierarchically into different sets of variables.
expectation values (1) is the computationally demanding part of EM optimization. Their exact computation is often intractable and many well-known algorithms (e.g., [22, 23]) rely on estimations.
The EM iterations can be associated with neural processing by the assumption that neural activity represents the posterior over hidden variables (E-step), and that synaptic plasticity implements
changes to model parameters (M-step). Here we will consider two prominent models of neural processing on the ground of approximations to the expectation values (1) and show how they can be
combined.
Selection. Feed-forward processing has frequently been discussed as an important component of
neural processing [16, 24, 17, 25]. One perspective on this early component of neural activity is
as a preselection of candidate units or hypotheses for a given sensory stimulus ([18, 21, 26, 19]
and many more), with the goal of reducing the computational demand of an otherwise too complex
computation. In the context of probabilistic approaches, it has recently been shown that preselection
can be formulated as a variational approximation to exact inference [27]. The variational distribution
in this case is given by a truncated sum over possible hidden states:
$$p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta) \approx q_n(\vec{s}; \Theta) = \frac{p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)}{\sum_{\vec{s}\,' \in K_n} p(\vec{s}\,'\,|\,\vec{y}^{(n)}, \Theta)}\, \delta(\vec{s} \in K_n) = \frac{p(\vec{s}, \vec{y}^{(n)}\,|\,\Theta)}{\sum_{\vec{s}\,' \in K_n} p(\vec{s}\,', \vec{y}^{(n)}\,|\,\Theta)}\, \delta(\vec{s} \in K_n) \qquad (2)$$
where $\delta(\vec{s} \in K_n) = 1$ if $\vec{s} \in K_n$ and zero otherwise. The subset $K_n$ represents the preselected
latent states. Given a data point ~y (n) , Eqn. 2 results in good approximations to the posterior if Kn
contains most posterior mass. Since for many applications the posterior mass is concentrated in
small volumes of the state space, the approximation quality can stay high even for relatively small
sets Kn . This approximation can be used to compute efficiently the expectation values needed in the
M-step (1):
$$\langle g(\vec{s}) \rangle_{p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)} \approx \langle g(\vec{s}) \rangle_{q_n(\vec{s}; \Theta)} = \frac{\sum_{\vec{s} \in K_n} p(\vec{s}, \vec{y}^{(n)}\,|\,\Theta)\, g(\vec{s})}{\sum_{\vec{s}\,' \in K_n} p(\vec{s}\,', \vec{y}^{(n)}\,|\,\Theta)}. \qquad (3)$$
Eqn. 3 represents a reduction in required computational resources as it involves only summations (or
integrations) over the smaller state space Kn . The requirement is that the set Kn needs to be selected
prior to the computation of expectation values, and the final improvement in efficiency relies on such
selections being efficiently computable. As such, a selection function $S_h(\vec{y}, \Theta)$ needs to be carefully chosen in order to define $K_n$; $S_h(\vec{y}, \Theta)$ efficiently selects the candidate units $s_h$ that are most likely
to have contributed to a data point ~y (n) . Kn can then be defined by:
$$K_n = \{\vec{s} \,|\, \text{for all } h \notin I: s_h = 0\}\,, \qquad (4)$$
where $I$ contains the $H'$ indices $h$ with the highest values of $S_h(\vec{y}, \Theta)$ (compare Fig. 1). For sparse
coding models, for instance, we can exploit that the posterior mass lies close to low dimensional
subspaces to define the sets $K_n$ [27, 28], and appropriate $S_h(\vec{y}, \Theta)$ can be found by deriving efficiently computable upper-bounds for probabilities $p(s_h = 1 \,|\, \vec{y}^{(n)}, \Theta)$ [27, 28] or by derivations
based on taking limits for no data noise [27, 29]. For more complex models, see [27] (Sec. 5.3-4)
for a discussion of suitable selection functions. Often the precise form of $S_h(\vec{y}, \Theta)$ has limited influence on the final approximation accuracy because a) its values are not used for the approximation
(3) itself and b) the size of sets Kn can often be chosen generously to easily contain the regions with
large posterior mass. The larger Kn the less precise the selection has to be. For Kn equal to the
entire state space, no selection is required and the approximations (2) and (3) fall back to the case of
exact inference.
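As a concrete illustration of Eqns. (2) and (3), the truncated expectation amounts to normalizing the joint over the preselected set only. The following is a minimal NumPy sketch; the quadratic log-joint is a stand-in chosen so the answer is known in closed form, not part of the model above. With $K_n$ equal to the full state space the truncation reduces to exact inference:

```python
import itertools
import numpy as np

def truncated_expectation(log_joint, g, K):
    """Approximate <g(s)> under the posterior by the truncated sum of
    Eqn. (3): only states s in the preselected set K contribute, and the
    normalizer runs over K as well."""
    logs = np.array([log_joint(s) for s in K])      # log p(s, y | Theta), s in K
    w = np.exp(logs - logs.max())                   # stabilized, un-normalized q_n
    w /= w.sum()                                    # q_n(s; Theta) on K
    return sum(wi * np.asarray(g(s), dtype=float) for wi, s in zip(w, K))

# Toy check: with K equal to the full state space of H = 3 binary units the
# truncation is exact, and this factorizing stand-in log-joint has the
# closed-form posterior mean sigmoid(0.2) in every dimension.
H = 3
full_space = [np.array(b, dtype=float) for b in itertools.product([0, 1], repeat=H)]
log_joint = lambda s: -0.5 * float(np.sum((s - 0.7) ** 2))
mean_full = truncated_expectation(log_joint, lambda s: s, full_space)
```

For a genuinely sparse posterior, the same routine would be called with a small set `K` produced by a selection function instead of `full_space`.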
Sampling. An alternative way to approximate the expectation values in eq. 1 is by sampling from
the posterior distribution, and using the samples to compute the average:
$$\langle g(\vec{s}) \rangle_{p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)} \approx \frac{1}{M} \sum_{m=1}^{M} g(\vec{s}^{(m)}) \quad \text{with} \quad \vec{s}^{(m)} \sim p(\vec{s}\,|\,\vec{y}, \Theta). \qquad (5)$$
The challenging aspect of this approach is to efficiently draw samples from the posterior. In a
high-dimensional sample space, this is mostly done by Markov Chain Monte Carlo (MCMC). This
class of methods draws samples from the posterior distribution such that each subsequent sample is
drawn relative to the current state, and the resulting sequence of samples form a Markov chain. In
the limit of a large number of samples, Monte Carlo methods are theoretically able to represent any
probability distribution. However, the number of samples required in high-dimensional spaces can
be very large (Fig. 1A, sampling).
Figure 1: A Simplified illustration of the posterior mass and the respective regions each approximation approach uses to compute the expectation values. B Graphical model showing each connection $W_{dh}$ between the observed variables $\vec{y}$ and hidden variables $\vec{s}$, and how $H' = 2$ hidden variables/units are selected to form a set $K_n$. C Graphical model resulting from the selection of hidden variables and associated weights $W_{dh}$ (black).
Select and Sample. Although preselection is a deterministic approach very different than the
stochastic nature of sampling, its formulation as approximation to expectation values (3) allows for
a straight-forward combination of both approaches: given a data point, ~y (n) , we first approximate
the expectation value (3) using the variational distribution qn (~s; ?) as defined by preselection (2).
Second, we approximate the expectations w.r.t. qn (~s; ?) using sampling. The combined approach
is thus given by:
$$\langle g(\vec{s}) \rangle_{p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)} \approx \langle g(\vec{s}) \rangle_{q_n(\vec{s}; \Theta)} \approx \frac{1}{M} \sum_{m=1}^{M} g(\vec{s}^{(m)}) \quad \text{with} \quad \vec{s}^{(m)} \sim q_n(\vec{s}; \Theta), \qquad (6)$$
where ~s(m) denote samples from the truncated distribution qn . Instead of drawing from a distribution
over the entire state space, approximation (6) requires only samples from a potentially very small
subspace Kn (Fig. 1). In the subspace Kn , most of the original probability mass is concentrated in a
smaller volume, thus MCMC algorithms perform more efficiently, which results in a smaller space
to explore, shorter burn-in times, and a reduced number of required samples. Compared to selection
alone, the select and sample approach will represent an increase in efficiency as soon as the number
of samples required for a good approximation is less than the number of states in $K_n$.
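The combined scheme of Eqn. (6) can be sketched as a two-stage estimator: a feed-forward score selects the units allowed to be active, and Gibbs sampling then explores only that subspace. The helper below is an illustrative sketch; the linear score and factorizing log-joint are stand-ins chosen so the example is checkable, not the model of any particular application:

```python
import numpy as np

def select_and_sample(log_joint, scores, H, H_sel, n_samples, rng):
    """Estimate <s> under the posterior as in Eqn. (6): preselect the
    H_sel highest-scoring units, then Gibbs-sample only those units
    while all other units stay clamped to zero."""
    I = np.argsort(scores)[-H_sel:]            # indices defining K_n
    s = np.zeros(H, dtype=int)
    samples = []
    for _ in range(n_samples):
        for h in I:                            # sweep over selected units only
            s[h] = 1
            log_p1 = log_joint(s)
            s[h] = 0
            log_p0 = log_joint(s)
            p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
            s[h] = int(rng.random() < p1)
        samples.append(s.copy())
    return np.mean(samples, axis=0)

# Stand-in log-joint favouring the first three of eight units; its
# factorizing form makes the exact per-unit posterior a sigmoid.
rng = np.random.default_rng(1)
weights = np.array([2.0, 2.0, 2.0, -2.0, -2.0, -2.0, -2.0, -2.0])
log_joint = lambda s: float(s @ weights)
estimate = select_and_sample(log_joint, weights, H=8, H_sel=3,
                             n_samples=500, rng=rng)
```

Units outside the selected set are never activated, which is exactly the restriction to $K_n$ that makes the sampler cheap.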
3
Sparse Coding: An Example Application
We systematically investigate the computational efficiency, performance, and biological plausibility
of the select and sample approach in comparison with selection and sampling alone using a sparse
coding model of images. The choice of a sparse coding model has numerous advantages. First, it
is a non-trivial model that has been extremely well-studied in machine learning research, and for
which efficient algorithms exist (e.g., [23, 30]). Second, it has become a standard (albeit somewhat
simplistic) model of the organization of receptive fields in primary visual cortex [22, 31, 32]. Here
we consider a discrete variant of this model known as Binary Sparse Coding (BSC; [29, 27], also
compare [33]), which has binary hidden variables but otherwise the same features as standard sparse
coding versions. The generative model for BSC is expressed by
$$p(\vec{s}\,|\,\pi) = \prod_{h=1}^{H} \pi^{s_h} (1 - \pi)^{1 - s_h}, \qquad p(\vec{y}\,|\,\vec{s}, W, \sigma) = \mathcal{N}(\vec{y};\, W\vec{s},\, \sigma^2 \mathbb{1})\,, \qquad (7)$$
where $W \in \mathbb{R}^{D \times H}$ denotes the basis vectors and $\pi$ parameterizes the sparsity ($\vec{s}$ and $\vec{y}$ as above).
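For illustration, drawing data from the BSC generative model of Eqn. (7) takes only a few lines; the dimensions and parameter values below are arbitrary choices for the sketch:

```python
import numpy as np

def bsc_generate(W, pi, sigma, N, rng):
    """Draw N data points from the BSC generative model of Eqn. (7):
    binary Bernoulli(pi) causes, combined linearly, plus Gaussian noise."""
    D, H = W.shape
    S = (rng.random((N, H)) < pi).astype(float)     # latent binary states
    Y = S @ W.T + sigma * rng.standard_normal((N, D))
    return Y, S

# Arbitrary toy dimensions for the sketch:
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 6))                    # D = 16 observed, H = 6 hidden
Y, S = bsc_generate(W, pi=2.0 / 6.0, sigma=0.1, N=1000, rng=rng)
```

Setting $\pi = 2/H$, as in the bars experiment below, makes two hidden units active per data point on average.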
The M-step updates of the BSC learning algorithm (see e.g. [27]) are given by:
$$W^{\text{new}} = \Big( \sum_{n=1}^{N} \vec{y}^{(n)} \langle \vec{s} \rangle_{q_n}^{T} \Big) \Big( \sum_{n=1}^{N} \langle \vec{s}\vec{s}^{T} \rangle_{q_n} \Big)^{-1}, \qquad (8)$$
$$(\sigma^2)^{\text{new}} = \frac{1}{ND} \sum_{n} \Big\langle \big\| \vec{y}^{(n)} - W\vec{s} \big\|^2 \Big\rangle_{q_n}, \qquad \pi^{\text{new}} = \frac{1}{N} \sum_{n} \big| \langle \vec{s} \rangle_{q_n} \big|, \quad \text{where } |\vec{x}| = \frac{1}{H} \sum_{h} x_h. \qquad (9)$$
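Given the sufficient statistics, the M-step updates (8) and (9) are a few lines of linear algebra. The sketch below assumes the posterior expectations are already available as arrays; the check uses a noiseless toy setting with degenerate (exact) posterior statistics, in which the update must recover the generating parameters:

```python
import numpy as np

def bsc_m_step(Y, Es, Ess):
    """BSC M-step (Eqns. (8) and (9)) from per-data-point statistics:
    Y (N,D) data, Es (N,H) posterior means <s>_{q_n}, and
    Ess (N,H,H) posterior second moments <s s^T>_{q_n}."""
    N, D = Y.shape
    W = (Y.T @ Es) @ np.linalg.inv(Ess.sum(axis=0))            # Eqn. (8)
    # Eqn. (9): <||y - W s||^2> = ||y||^2 - 2 y^T W <s> + tr(W^T W <s s^T>)
    WtW = W.T @ W
    resid = (np.sum(Y ** 2) - 2.0 * np.sum((Y @ W) * Es)
             + np.einsum('hk,nhk->', WtW, Ess))
    sigma2 = resid / (N * D)
    pi = Es.mean()       # since <s_h> lies in [0,1], |<s>| is just the mean
    return W, sigma2, pi

# Noiseless toy check with exact (degenerate) posterior statistics:
rng = np.random.default_rng(0)
W_true = rng.standard_normal((10, 4))
S = (rng.random((300, 4)) < 0.4).astype(float)
Y = S @ W_true.T
Ess = np.einsum('nh,nk->nhk', S, S)
W_est, sigma2_est, pi_est = bsc_m_step(Y, S, Ess)
```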
The only expectation values needed for the M-step are thus $\langle \vec{s} \rangle_{q_n}$ and $\langle \vec{s}\vec{s}^{T} \rangle_{q_n}$. We will compare learning and inference between the following algorithms:
BSCexact. An EM algorithm without approximations is obtained if we use the exact posterior for the expectations: $q_n = p(\vec{s}\,|\,\vec{y}^{(n)}, \Theta)$. We will refer to this exact algorithm as BSCexact. Although directly computable, the expectation values for BSCexact require sums over the entire state space, i.e., over $2^H$ terms. For large numbers of latent dimensions, BSCexact is thus intractable.
BSCselect. An algorithm that more efficiently scales with the number of hidden dimensions is obtained by applying preselection. For the BSC model we use $q_n$ as given in (3) and $K_n = \{\vec{s} \,|\, (\text{for all } h \notin I: s_h = 0) \ \text{or} \ \sum_h s_h = 1\}$. Note that in addition to states as in (4) we include all states with one non-zero unit (all singletons). Including them avoids EM iterations in the initial phases of learning that leave some basis functions unmodified (see [27]). As selection function $S_h(\vec{y}^{(n)})$ to define $K_n$ we use:
$$S_h(\vec{y}^{(n)}) = \big( \vec{W}_h^{T} / \|\vec{W}_h\| \big)\, \vec{y}^{(n)}, \qquad \text{with} \quad \|\vec{W}_h\| = \sqrt{\textstyle\sum_{d=1}^{D} (W_{dh})^2}. \qquad (10)$$
A large value of $S_h(\vec{y}^{(n)})$ strongly indicates that $\vec{y}^{(n)}$ contains the basis function $\vec{W}_h$ as a component (see Fig. 1C). Note that (10) can be related to a deterministic ICA-like selection of a hidden state $\vec{s}^{(n)}$ in the limit case of no noise (compare [27]). Further restrictions of the state space are possible
but require modified M-step equations (see [27, 29]), which will not be considered here.
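The selection function of Eqn. (10) is a single matrix-vector product against the column-normalized basis. A small sketch follows; the orthonormal toy basis is chosen only to make the expected selection unambiguous:

```python
import numpy as np

def select_units(W, y, H_sel):
    """Selection function of Eqn. (10): correlate the data point with each
    column-normalized basis function and keep the H_sel best-matching units."""
    scores = (W / np.linalg.norm(W, axis=0)).T @ y    # S_h(y) for h = 1..H
    I = np.argsort(scores)[-H_sel:]                   # indices defining K_n
    return scores, I

# Orthonormal toy basis: the data point is built from units 2 and 5,
# and the score singles them out exactly.
D, H = 25, 8
W = np.eye(D)[:, :H]
y = W[:, 2] + W[:, 5]
scores, I = select_units(W, y, H_sel=2)
```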
BSCsample . An alternative non-deterministic approach can be derived using Gibbs sampling. Gibbs
sampling is an MCMC algorithm which systematically explores the sample space by repeatedly
drawing samples from the conditional distributions of the individual hidden dimensions. In other
words, the transition probability from the current sample to a new candidate sample is given by
$p(s_h^{\text{new}} \,|\, \vec{s}_{\backslash h}^{\text{current}})$. In our case of a binary sample space, this equates to selecting one random axis $h \in \{1, \ldots, H\}$ and toggling its bit value (thereby changing the binary state in that dimension),
leaving the remaining axes unchanged. Specifically, the posterior probability computed for each
candidate sample is expressed by:
$$p(s_h = 1 \,|\, \vec{s}_{\backslash h}, \vec{y}) = \frac{p(s_h = 1, \vec{s}_{\backslash h}, \vec{y})^{\beta}}{p(s_h = 0, \vec{s}_{\backslash h}, \vec{y})^{\beta} + p(s_h = 1, \vec{s}_{\backslash h}, \vec{y})^{\beta}}\,, \qquad (11)$$
where we have introduced a parameter $\beta$ that allows for smoothing of the posterior distribution. To ensure an appropriate mixing behavior of the MCMC chains over a wide range of $\sigma$ (note that $\sigma$ is a model parameter that changes with learning), we define $\beta$ as a function of $\sigma$ and of a temperature
parameter that is set manually and selected such that good mixing is achieved. The samples drawn
in this manner can then be used to approximate the expectation values in (8) to (9) using (5).
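One sweep of the smoothed Gibbs update (11) can be sketched as follows: for each unit the routine evaluates the log-joint with $s_h$ clamped to 0 and to 1, raises both to the power $\beta$, and resamples the unit from the resulting Bernoulli. The tiny single-unit demo parameters are arbitrary and chosen so the stationary marginal is known in closed form:

```python
import numpy as np

def bsc_gibbs_sweep(s, y, W, pi, sigma, beta, rng):
    """One Gibbs sweep for the BSC posterior with the smoothed update of
    Eqn. (11): compare the joint with s_h = 0 vs. s_h = 1 (each raised
    to the power beta) and resample every unit in turn."""
    for h in range(len(s)):
        log_p = np.empty(2)
        for v in (0, 1):
            s[h] = v
            resid = y - W @ s
            log_p[v] = (v * np.log(pi) + (1 - v) * np.log(1.0 - pi)
                        - resid @ resid / (2.0 * sigma ** 2))
        p1 = 1.0 / (1.0 + np.exp(beta * (log_p[0] - log_p[1])))
        s[h] = float(rng.random() < p1)
    return s

# Single-unit toy model (arbitrary numbers): the exact posterior is
# p(s=1 | y) = sigmoid(4), so the chain's empirical mean is checkable.
rng = np.random.default_rng(0)
W = np.array([[1.0], [1.0]])
y = np.array([1.0, 1.0])
s = np.zeros(1)
draws = [bsc_gibbs_sweep(s, y, W, pi=0.5, sigma=0.5, beta=1.0, rng=rng)[0]
         for _ in range(2000)]
```

With $\beta < 1$ the same routine samples from a flattened posterior, which is what improves mixing as $\sigma$ shrinks during learning.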
BSCs+s . The EM learning algorithm given by combining selection and sampling is obtained by
applying (6). First note that inserting the BSC generative model into (2) results in:
$$q_n(\vec{s}; \Theta) = \frac{\mathcal{N}(\vec{y};\, W\vec{s},\, \sigma^2 \mathbb{1})\; \mathrm{Bernoulli}_{K_n}(\vec{s}; \pi)}{\sum_{\vec{s}\,' \in K_n} \mathcal{N}(\vec{y};\, W\vec{s}\,',\, \sigma^2 \mathbb{1})\; \mathrm{Bernoulli}_{K_n}(\vec{s}\,'; \pi)}\, \delta(\vec{s} \in K_n) \qquad (12)$$
where $\mathrm{Bernoulli}_{K_n}(\vec{s}; \pi) = \prod_{h \in I} \pi^{s_h} (1 - \pi)^{1 - s_h}$. The remainder of the Bernoulli distribution
cancels out. If we define $\tilde{s}$ to be the binary vector consisting of all entries of $\vec{s}$ of the selected dimensions, and if $\tilde{W} \in \mathbb{R}^{D \times H'}$ contains all basis functions of those selected, we observe that the distribution is equal to the posterior w.r.t. a BSC model with $H'$ instead of $H$ hidden dimensions:
$$q_n(\vec{s}; \Theta) = p(\tilde{s}\,|\,\vec{y}, \Theta) = \frac{\mathcal{N}(\vec{y};\, \tilde{W}\tilde{s},\, \sigma^2 \mathbb{1}_{H'})\; \mathrm{Bernoulli}(\tilde{s}; \pi)}{\sum_{\tilde{s}\,'} \mathcal{N}(\vec{y};\, \tilde{W}\tilde{s}\,',\, \sigma^2 \mathbb{1}_{H'})\; \mathrm{Bernoulli}(\tilde{s}\,'; \pi)}$$
Instead of drawing samples from $q_n(\vec{s}; \Theta)$ we can thus draw samples from the exact posterior w.r.t. the BSC generative model with $H'$ dimensions. The sampling procedure for BSCsample can thus be applied simply by ignoring the non-selected dimensions and their associated parameters. For different data points, different latent dimensions will be selected such that averaging over data points can update all model parameters. For selection we again use $S_h(\vec{y}, \Theta)$ (10), defining $K_n$ as in (4), where $I$ now contains the $H' - 2$ indices $h$ with the highest values of $S_h(\vec{y}, \Theta)$ and two randomly
selected dimensions (drawn from a uniform distribution over all non-selected dimensions). The
two randomly selected dimensions fulfill the same purpose as the inclusion of singleton states for
BSCselect . Preselection and Gibbs sampling on the selected dimensions define an approximation to
the required expectation values (3) and result in an EM algorithm referred to as BSCs+s .
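The reduction claimed by Eqn. (12) is easy to check numerically: the truncated posterior of the full model over $K_n$ coincides with the exact posterior of the reduced model built from the selected columns of $W$. A small enumeration sketch with random toy parameters:

```python
import itertools
import numpy as np

def bsc_log_joint(s, y, W, pi, sigma):
    """log p(s, y | Theta) for the BSC model of Eqn. (7), up to a constant."""
    resid = y - W @ s
    return (float(np.sum(s * np.log(pi) + (1.0 - s) * np.log(1.0 - pi)))
            - float(resid @ resid) / (2.0 * sigma ** 2))

def normalized(log_values):
    w = np.exp(np.array(log_values) - np.max(log_values))
    return w / w.sum()

# Random toy parameters; I is the set of selected dimensions.
rng = np.random.default_rng(0)
D, H = 5, 6
W = rng.standard_normal((D, H))
y = rng.standard_normal(D)
pi, sigma = 0.3, 0.8
I = [0, 2, 4]

# Truncated posterior of the full model over K_n (Eqn. (2)) ...
logs_full = []
for bits in itertools.product([0, 1], repeat=len(I)):
    s = np.zeros(H)
    s[I] = bits
    logs_full.append(bsc_log_joint(s, y, W, pi, sigma))
q_full = normalized(logs_full)

# ... versus the exact posterior of the reduced model with W~ = W[:, I]:
logs_red = [bsc_log_joint(np.array(bits, dtype=float), y, W[:, I], pi, sigma)
            for bits in itertools.product([0, 1], repeat=len(I))]
q_red = normalized(logs_red)
```

The prior factor of the clamped-to-zero units is constant over $K_n$ and cancels in the normalization, which is why the two distributions agree.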
Complexity. Collecting the number of operations necessary to compute the expectation values for
all four BSC cases, we arrive at
$$\mathcal{O}\Big( N\, S\, \big( \underbrace{D}_{p(\vec{s}, \vec{y})} + \underbrace{1}_{\langle \vec{s} \rangle} + \underbrace{H}_{\langle \vec{s}\vec{s}^{T} \rangle} \big) \Big) \qquad (13)$$
where $S$ denotes the number of hidden states that contribute to the calculation of the expectation values. For the approaches with preselection (BSCselect, BSCs+s), all the calculations of the expectation values can be performed on the reduced latent space; therefore $H$ is replaced by $H'$. For BSCexact this number scales exponentially in $H$: $S^{\text{exact}} = 2^H$, and in the BSCselect case, it scales exponentially in the number of preselected hidden variables: $S^{\text{select}} = 2^{H'}$. However, for the sampling based approaches (BSCsample and BSCs+s), the number $S$ directly corresponds to the number of samples to be evaluated and is obtained empirically. As we will show later, $S^{\text{s+s}} = 200 \cdot H'$ is a reasonable choice for the interval of $H'$ that we investigate in this paper ($1 \le H' \le 40$).
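Plugging the state counts into Eqn. (13) makes the savings explicit; the numbers below use the bars-test setting reported in the experiments ($H = 12$, $H' = 6$, 200 samples per data point):

```python
def bsc_expectation_cost(N, D, H, S):
    """Operation count of Eqn. (13): N data points, S evaluated states, and
    (D + 1 + H) operations per state for p(s,y), <s> and <s s^T>."""
    return N * S * (D + 1 + H)

# State counts for the four algorithms in the bars setting:
H, H_sel, M = 12, 6, 200
S_exact = 2 ** H                      # full state space
S_select = 2 ** H_sel + (H - H_sel)   # truncated space plus singletons
S_sample = M * H                      # evaluated states for the sampler
S_s_and_s = M * H_sel                 # samples restricted to selected units
```

(For the preselection-based variants the per-state factor would also shrink from $H$ to $H'$, which this simple counter ignores.)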
4
Numerical Experiments
We compare the select and sample approach with selection and sampling applied individually on different data sets: artificial images and natural image patches. For all experiments using the two sampling approaches, we draw 20 independent chains that are initialized at random states in order to increase the mixing of the samples. Also, of the samples drawn per chain, 1/3 were used as burn-in samples, and 2/3 were retained samples.
Artificial data. Our first set of experiments investigates the select and sample approach's convergence properties on artificial data sets where ground truth is available. As the following experiments
were run on a small scale problem, we can compute the exact data likelihood for each EM step in all
four algorithms (BSCexact , BSCselect , BSCsample and BSCs+s ) to compare convergence on ground
truth likelihood.
Figure 2: Experiments using artificial bars data with $H = 12$, $D = 6 \times 6$. Dotted line indicates the ground truth log-likelihood value. A Random selection of the $N = 2000$ training data points $\vec{y}^{(n)}$. B Learned basis functions $W_{dh}$ after a successful training run. C Development of the log-likelihood over a period of 50 EM steps for all 4 investigated algorithms.
Data for these experiments consisted of images generated by creating H = 12 basis functions W_h^gt
in the form of horizontal and vertical bars on a D = 6 × 6 = 36 pixel grid. Each bar was randomly
assigned to be either positive (W_dh^gt ∈ {0.0, 10.0}) or negative (W_dh^gt ∈ {−10.0, 0.0}). N = 2000
data points ~y(n) were generated by linearly combining these basis functions (see e.g., [34]). Using
a sparseness value of π_gt = 2/H resulted in, on average, two active bars per data point. According to
the model, we added Gaussian noise (σ_gt = 2.0) to the data (Fig. 2A).
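The bars data described above can be generated in a few lines. This is a sketch following the description in the text (bar layout, random polarity, Bernoulli prior with π = 2/H, additive Gaussian noise); the function name and exact sampling choices are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bars_data(N=2000, grid=6, sigma=2.0):
    """Generate artificial bars data: H = 2*grid horizontal and vertical bars
    with random sign (+10 or -10), on average two active bars per data point,
    linearly combined and corrupted with Gaussian noise of std `sigma`."""
    H, D = 2 * grid, grid * grid
    W = np.zeros((H, D))
    for h in range(grid):                 # horizontal bars
        img = np.zeros((grid, grid)); img[h, :] = 1.0
        W[h] = img.ravel()
    for h in range(grid):                 # vertical bars
        img = np.zeros((grid, grid)); img[:, h] = 1.0
        W[grid + h] = img.ravel()
    W *= rng.choice([10.0, -10.0], size=(H, 1))    # random bar polarity
    s = (rng.random((N, H)) < 2.0 / H).astype(float)  # Bernoulli prior, pi = 2/H
    Y = s @ W + rng.normal(0.0, sigma, size=(N, D))
    return W, s, Y

W, s, Y = make_bars_data()
print(W.shape, Y.shape)  # (12, 36) (2000, 36)
```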
We applied all algorithms to the same dataset and monitored the exact likelihood over a period of 50
EM steps (Fig. 2C). Although the calculation of the exact likelihood requires O(N 2^H (D + H)) operations, this is feasible for such a small scale problem. For models using preselection (BSCselect and
BSCs+s), we set H′ to 6, effectively halving the number of hidden variables participating in the
calculation of the expectation values. For BSCsample and BSCs+s we drew 200 samples from the
posterior p(~s | ~y(n)) of each data point, such that the number of states evaluated totaled S_sample =
200 · H = 2400 and S_s+s = 200 · H′ = 1200, respectively. To ensure an appropriate mixing
behavior the annealing temperature was set to T = 50. In each experiment the basis functions were
initialized at the data mean plus Gaussian noise, the prior probability to π_init = 1/H and the data
noise to the variance of the data. All algorithms recover the correct set of basis functions in > 50%
of the trials, and the sparseness prior π and the data noise σ with high accuracy. Comparing the
computational costs of the algorithms shows the benefits of preselection already for this small scale
problem: while BSCexact evaluates the expectation values using the full set of 2^H = 4096 hidden
states, BSCselect only considers 2^H′ + (H − H′) = 70 states. The pure sampling based approach
performs 2400 evaluations while BSCs+s requires 1200 evaluations.
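For such small H, the exact marginal likelihood can be computed by brute-force enumeration of all 2^H binary states, which is what makes the ground-truth comparison in Fig. 2 possible. The sketch below assumes the standard BSC generative model with a scalar Bernoulli prior π and isotropic Gaussian noise, p(~y | ~s) = N(~y; ~sᵀW, σ²I); it is an illustration, not the paper's implementation:

```python
import itertools
import numpy as np

def bsc_exact_loglik(Y, W, pi, sigma):
    """Exact BSC log-likelihood, summing over all 2^H binary states.
    Costs O(N 2^H (D + H)), so it is feasible only for small H."""
    N, D = Y.shape
    H = W.shape[0]
    states = np.array(list(itertools.product([0, 1], repeat=H)), dtype=float)
    k = states.sum(axis=1)                                   # number of active units
    log_prior = k * np.log(pi) + (H - k) * np.log(1.0 - pi)  # Bernoulli prior
    means = states @ W                                       # (2^H, D) state means
    const = -0.5 * D * np.log(2 * np.pi * sigma ** 2)
    total = 0.0
    for y in Y:
        sq = ((y - means) ** 2).sum(axis=1)
        log_joint = log_prior + const - sq / (2 * sigma ** 2)
        m = log_joint.max()                                  # log-sum-exp trick
        total += m + np.log(np.exp(log_joint - m).sum())
    return total

# toy check on a 2-unit, 1-dimensional model
val = bsc_exact_loglik(Y=np.array([[0.0]]), W=np.array([[1.0], [2.0]]),
                       pi=0.5, sigma=1.0)
print(val)
```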
Image patches. We test the select and sample approach on natural image data at a more challenging scale, to include biological plausibility in the demonstration of its applicability to larger scale
problems. We extracted N = 40,000 patches of size D = 26 × 26 = 676 pixels from the van
Hateren image database [31]², and preprocessed them using a Difference of Gaussians (DoG) filter,
which approximates the sensitivity of center-on and center-off neurons found in the early stages of
the mammalian visual processing. Filter parameters were chosen as in [35, 28]. For the following
experiments we ran 100 EM iterations to ensure proper convergence. The annealing temperature
was set to T = 20.
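The DoG preprocessing can be sketched as a narrow "center" Gaussian blur minus a broader "surround" blur. The σ values below are placeholders, not the parameters of [35, 28]:

```python
import numpy as np

def dog_filter(patch, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians: center Gaussian minus a broader surround
    Gaussian, approximating retinal center-surround responses."""
    def gaussian_blur(img, sigma):
        # separable Gaussian blur via 1-D convolution along both axes
        radius = int(3 * sigma)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2)); k /= k.sum()
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
        return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, tmp)
    return gaussian_blur(patch, sigma_c) - gaussian_blur(patch, sigma_s)

patch = np.random.default_rng(0).random((26, 26))
out = dog_filter(patch)
print(out.shape)  # (26, 26)
```

Away from the patch borders the filter integrates to zero, so constant (DC) image regions are suppressed while edges and local contrast are preserved.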
Figure 3: Experiments on image patches with D = 26 × 26, H = 800 and H′ = 20. A Random
selection of used patches (after DoG preprocessing). B Random selection of learned basis functions
(number of samples set to 200). C End approx. log-likelihood after 100 EM-steps vs. number of
samples per data point. D Number of states that had to be evaluated for the different approaches.
The first series of experiments investigates the effect of the number of drawn samples on the performance of the algorithm (as measured by the approximate data likelihood) across the entire range
of H′ values between 12 and 36. We observe with BSCs+s that 200 samples per hidden dimension
(total states = 200 · H′) are sufficient: the final value of the likelihood after 100 EM steps begins
to saturate. In particular, increasing the number of samples does not increase the likelihood by more
than 1%. In Fig. 3C we report the curve for H′ = 20, but the same trend is observed for all other
values of H′. In another set of experiments, we used this number of samples (200 · H) in the pure
sampling case (BSCsample) in order to monitor the likelihood behavior. We observed two consistent
trends: 1) the algorithm was never observed to converge to a high-likelihood solution, and 2) even
when initialized at solutions with high likelihood, the likelihood always decreases. This example
demonstrates the gains of using select and sample over pure sampling: while BSCs+s only needs
200 · 20 = 4,000 samples to robustly reach high-likelihood solutions, BSCsample following the same
regime not only converged poorly, but it used 200 · 800 = 160,000 samples to do so (Fig. 3D).
Large scale experiment on image patches. Comparison of the above results shows that the most
efficient algorithm is obtained by a combination of preselection and sampling, our select and sample approach (BSCs+s), with no or only minimal effect on the performance of the algorithm, as
depicted in Figs. 2 and 3. This efficiency allows for applications to much larger scale problems
than would be possible with the individual approximation approaches. To demonstrate the efficiency of
the combined approach we applied BSCs+s to the same image dataset, but with a very high number of observed and hidden dimensions. We extracted from the database N = 500,000 patches of
size D = 40 × 40 = 1,600 pixels. BSCs+s was applied with the number of hidden units set to
H = 1,600 and with H′ = 34. Using the same conditions as in the previous experiments (notably
S = 200 · H′ samples per data point and 100 EM iterations) we again obtain a set of Gabor-like
basis functions (see Fig. 4A) with relatively very few necessary states (Fig. 4B). To our knowledge,
the presented results illustrate the largest application of sparse coding with a reasonably complete
representation of the posterior.
5
Discussion
[Footnote 2: We restricted the set of images to 900 images without man-made structures (see Fig. 3A). The brightest 2% of the pixels were clamped to the max value of the remaining 98% (reducing influences of light-reflections).]
We have introduced a novel and efficient method for unsupervised learning in probabilistic models, one which maintains a complex representation of the posterior for problems consistent with
Figure 4: A Large-scale application of BSCs+s with H′ = 34 to image patches (D = 40 × 40 = 1600
pixels and H = 1600 hidden dimensions). A random selection of the inferred basis functions is
shown (see Suppl. for all basis functions and model parameters). B Comparison of the computational
complexity: BSCselect scales exponentially with H′ whereas BSCs+s scales linearly. Note the large
difference at H′ = 34 as used in A.
real-world scales. Furthermore, our approach is biologically plausible and models how the brain
can make sense of its environment for large-scale sensory inputs. Specifically, the method could
be implemented in neural networks using two mechanisms, both of which have been independently
suggested in the context of a statistical framework for perception: feed-forward preselection [27],
and sampling [12, 13, 3]. We showed that the two seemingly contrasting approaches can be combined based on their interpretation as approximate inference methods, resulting in a considerable
increase in computational efficiency (e.g., Figs. 3-4).
We used a sparse coding model of natural images, a standard model for neural response properties
in V1 [22, 31], in order to investigate, both numerically and analytically, the applicability and efficiency of the method. Comparisons of our approach with exact inference, selection alone, and sampling alone showed a very favorable scaling with the number of observed and hidden dimensions. To
the best of our knowledge, the only other sparse coding implementation that reached a comparable
problem size (D = 20 × 20, H = 2,000) assumed a Laplace prior and used a MAP estimation of the
posterior [23]. However, with MAP estimations, basis functions have to be rescaled (compare [22])
and data noise or prior parameters cannot be inferred (instead a regularizer is hand-set). Our method
does not require any of these artificial mechanisms because of its rich posterior representation. Such
representations are, furthermore, crucial for inferring all parameters such as data noise and sparsity
(learned in all of our experiments), and to act correctly when faced with uncertain input [2, 8, 3].
Concretely, we used a sparse coding model with binary latent variables. This allowed for a systematic comparison with exact EM for low-dimensional problems, but extension to the continuous case
should be straightforward. In the model, the selection step results in a simple, local and neurally
plausible integration of input data, given by (10). We used this in combination with Gibbs sampling,
which is also neurally plausible because neurons can individually sample their next state based on
the current state of the other neurons, as transmitted through recurrent connections [15]. The idea
of combining sampling with feed-forward mechanisms has previously been explored, but in other
contexts and with different goals. Work by Beal [36] used variational approximations as proposal
distributions within importance sampling, and Zhu et al. [37] guided a Metropolis-Hastings algorithm by a data-driven proposal distribution. Both approaches are different from selecting subspaces
prior to sampling and are more difficult to link to neural feed-forward sweeps [18, 21].
We expect the select and sample strategy to be widely applicable to machine learning models whenever the posterior probability mass can be expected to be concentrated in a small sub-space of the
whole latent space. Using more sophisticated preselection mechanisms and sampling schemes could
lead to a further reduction in computational effort, although the details will depend in general on
the particular model and input data.
Acknowledgements. We acknowledge funding by the German Research Foundation (DFG) in the project
LU 1196/4-1 (JL), by the German Federal Ministry of Education and Research (BMBF), project 01GQ0840
(JAS, JB, ASS), by the Swartz Foundation and the Swiss National Science Foundation (PB). Furthermore,
support by the Physics Dept. and the Center for Scientific Computing (CSC) in Frankfurt are acknowledged.
References
[1] P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, Cambridge, 2001.
[2] R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki. Probabilistic Models of the Brain: Perception and Neural Function. MIT Press, 2002.
[3] J. Fiser, P. Berkes, G. Orbán, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14:119-130, 2010.
[4] M. D. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415:419-433, 2002.
[5] Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5:598-604, 2002.
[6] K. P. Körding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244-247, 2004.
[7] J. M. Beck, W. J. Ma, R. Kiani, T. Hanks, A. K. Churchland, J. Roitman, M. N. Shadlen, P. E. Latham, and A. Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6), 2008.
[8] J. Trommershäuser, L. T. Maloney, and M. S. Landy. Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12:291-297, 2008.
[9] P. Berkes, G. Orbán, M. Lengyel, and J. Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331(6013):83-87, 2011.
[10] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9:1432-1438, 2006.
[11] R. Turner, P. Berkes, and J. Fiser. Learning complex tasks with probabilistic population codes. In Frontiers in Neuroscience, 2011. Comp. and Systems Neuroscience 2011.
[12] T. S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7):1434-1448, 2003.
[13] P. O. Hoyer and A. Hyvärinen. Interpreting neural response variability as Monte Carlo sampling from the posterior. In Adv. Neur. Inf. Proc. Syst. 16, pages 293-300. MIT Press, 2003.
[14] E. Vul, N. D. Goodman, T. L. Griffiths, and J. B. Tenenbaum. One and done? Optimal decisions from very few samples. In 31st Annual Meeting of the Cognitive Science Society, 2009.
[15] P. Berkes, R. Turner, and J. Fiser. The army of one (sample): the characteristics of sampling-based probabilistic neural representations. In Frontiers in Neuroscience, 2011. Comp. and Systems Neuroscience 2011.
[16] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 1958.
[17] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019-1025, 1999.
[18] V. A. F. Lamme and P. R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571-579, 2000.
[19] A. Yuille and D. Kersten. Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7):301-308, 2006.
[20] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268:1158-1161, 1995.
[21] E. Körner, M. O. Gewaltig, U. Körner, A. Richter, and T. Rodemann. A model of computation in neocortical architecture. Neural Networks, 12:989-1005, 1999.
[22] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[23] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. NIPS, 20:801-808, 2007.
[24] Y. LeCun. Backpropagation applied to handwritten zip code recognition.
[25] M. Riesenhuber and T. Poggio. How visual cortex recognizes objects: The tale of the standard model. 2002.
[26] T. S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. J Opt Soc Am A Opt Image Sci Vis, 20(7):1434-1448, July 2003.
[27] J. Lücke and J. Eggert. Expectation Truncation and the Benefits of Preselection in Training Generative Models. Journal of Machine Learning Research, 2010.
[28] G. Puertas, J. Bornschein, and J. Lücke. The maximal causes of natural scenes are edge filters. NIPS, 23, 2010.
[29] M. Henniges, G. Puertas, J. Bornschein, J. Eggert, and J. Lücke. Binary sparse coding. Latent Variable Analysis and Signal Separation, 2010.
[30] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11, 2010.
[31] J. H. van Hateren and A. van der Schaaf. Independent Component Filters of Natural Images Compared with Simple Cells in Primary Visual Cortex. Proc Biol Sci, 265(1394):359-366, 1998.
[32] D. L. Ringach. Spatial Structure and Symmetry of Simple-Cell Receptive Fields in Macaque Primary Visual Cortex. J Neurophysiol, 88:455-463, 2002.
[33] M. Haft, R. Hofman, and V. Tresp. Generative binary codes. Pattern Anal Appl, 6(4):269-284, 2004.
[34] P. O. Hoyer. Non-negative sparse coding. Neural Networks for Signal Processing XII: Proceedings of the IEEE Workshop, pages 557-565, 2002.
[35] J. Lücke. Receptive Field Self-Organization in a Model of the Fine Structure in V1 Cortical Columns. Neural Computation, 2009.
[36] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[37] Z. Tu and S. C. Zhu. Image Segmentation by Data-Driven Markov Chain Monte Carlo. PAMI, 24(5):657-673, 2002.
Large-Scale Category Structure Aware Image
Categorization
Bin Zhao
School of Computer Science
Carnegie Mellon University
Li Fei-Fei
Computer Science Department
Stanford University
Eric P. Xing
School of Computer Science
Carnegie Mellon University
[email protected]
[email protected]
[email protected]
Abstract
Most previous research on image categorization has focused on medium-scale
data sets, while large-scale image categorization with millions of images from
thousands of categories remains a challenge. With the emergence of structured
large-scale dataset such as the ImageNet, rich information about the conceptual
relationships between images, such as a tree hierarchy among various image categories, become available. As human cognition of complex visual world benefits
from underlying semantic relationships between object classes, we believe a machine learning system can and should leverage such information as well for better
performance. In this paper, we employ such semantic relatedness among image
categories for large-scale image categorization. Specifically, a category hierarchy
is utilized to properly define loss function and select common set of features for
related categories. An efficient optimization method based on proximal approximation and accelerated parallel gradient method is introduced. Experimental results on a subset of ImageNet containing 1.2 million images from 1000 categories
demonstrate the effectiveness and promise of our proposed approach.
1
Introduction
Image categorization / object recognition has been one of the most important research problems in
the computer vision community. While most previous research on image categorization has focused
on medium-scale data sets, involving objects from dozens of categories, there is recently a growing
consensus that it is necessary to build general purpose object recognizers that are able to recognize
many more different classes of objects. (A human being has little problem recognizing tens of
thousands of visual categories, even with very little ?training? data.) The Caltech 101/256 [14, 18]
is a pioneer benchmark data set on that front. LabelMe [31] provides 30k labeled and segmented
images, covering around 200 image categories. Moreover, the newly released ImageNet [12] data
set goes a big step further, in that it further increases the number of classes to over 15000, and has
more than 1000 images for each class on average. Similarly, TinyImage [36] contains 80 million
32 × 32 low resolution images, with each image loosely labeled with one of 75,062 English nouns.
Clearly, these are no longer artificial visual categorization problems created for machine learning,
but instead more like a human-level cognition problem for real world object recognition with a
much bigger set of objects. A natural way to formulate this problem is as a multi-way or multi-task
classification, but the seemingly standard formulation on such a gigantic data set poses a completely
new challenge both to computer vision and machine learning. Unfortunately, despite the well-known
advantages and recent advancements of multi-way classification techniques [1, 19, 4] in machine
learning, complexity concerns have driven most research on such super large-scale data set back
to simple methods such as nearest neighbor search [6], least square regression [16] or learning
thousands of binary classifiers [24].
1
Figure 1: (a) Image category hierarchy in ImageNet; (b) Overlapping group structure; (c) Semantic relatedness
measure between image categories.
The hierarchical semantic structure stemming from the WordNet over image categories makes the ImageNet data distinctive from other existing large-scale datasets, and it resembles how
the human cognitive system stores visual knowledge. Figure 1(a) shows an example of such a tree
hierarchy, where leaf nodes are individual categories, and each internal node denotes the cluster
of categories corresponding to the leaf nodes in the subtree rooted at the given node. As human
cognition of complex visual world benefits from underlying semantic relationships between object
classes, we believe a machine learning system can and should leverage such information as well for
better performance. Specifically, we argue that instead of formulating the recognition task as a flat
classification problem, where each category is treated equally and independently, a better strategy
is to utilize the rich information residing in the concept hierarchy among image categories to train
a system that couples all different recognition tasks over different categories. It should be noted
that our proposed method is applicable to any tree structure for image category, such as the category
structure learned to capture visual appearance similarities between image classes [32, 17, 13].
To the best of our knowledge, our attempt in this paper represents an initial foray into systematically
utilizing the information residing in the concept hierarchy, for multi-way classification on super large-scale
image data sets. More precisely, our approach utilizes the concept hierarchy in two aspects: loss
function and feature selection. First, the loss function used in our formulation weighs differentially
for different misclassification outcomes: misclassifying an image to a category that is close to its
true identity should receive less penalty than misclassifying it to a totally unrelated one. Second,
in an image classification problem with thousands of categories, it is not realistic to assume that
all of the classes share the same set of relevant features. That is to say, a subset of highly related categories may share a common set of relevant features, whereas weakly related categories
are less likely to be affected by the same features. Consequently, the image categorization problem
is formulated as augmented logistic regression with overlapping-group-lasso regularization. The
corresponding optimization problem involves a non-smooth convex objective function represented
as summation over all training examples. To solve this optimization problem, we introduce the
Accelerated Parallel ProximaL gradiEnT (APPLET) method, which tackles the non-smoothness of
overlapping-group-lasso penalty via proximal gradient [20, 9], and the huge number of training samples by Map-Reduce parallel computing [10]. Therefore, the contributions made in this paper are:
(1) We incorporate the semantic relationships between object classes, into an augmented multi-class
logistic regression formulation, regularized by the overlapping-group-lasso penalty. The sheer size
of the ImageNet data set that our formulation is designed to tackle singles out our work from previous attempts on multi-class classification, or transfer learning. (2) We propose a proximal gradient
based method for solving the resulting non-smooth optimization problem, where the super large
scale of the problem is tackled by map-reduce parallel computation.
The rest of this paper is organized as follows. Detailed explanation of the formulation is provided in
Section 2. Section 3 introduces the Accelerated Parallel ProximaL gradiEnT (APPLET) method for
solving the corresponding large-scale non-smooth optimization problem. Section 4 briefly reviews
several related works. Section 5 demonstrates the effectiveness of the proposed algorithm using
millions of training images from 1000 categories, followed by conclusions in Section 6.
2 Category Structure Aware Image Categorization

2.1 Motivation
ImageNet organizes the different classes of images in a densely populated semantic hierarchy.
Specifically, image categories in ImageNet are interlinked by several types of relations, with the
"IS-A" relation being the most comprehensive and useful [11], resulting in a tree hierarchy over image categories. For example, the "husky" category follows a path in the tree composed of "working
dog", "dog", "canine", etc. The distance between two nodes in the tree depicts the difference between
the two corresponding image categories. Consequently, in the category hierarchy in ImageNet, each
internal node near the bottom of the tree shows that the image categories of its subtree are highly
correlated, whereas the internal node near the root represents relatively weaker correlations among
the categories in its subtree.
The class hierarchy provides a measure of relatedness between image classes. Misclassifying an
image to a category that is close to its true identity should receive less penalty than misclassifying it
to a totally unrelated one. For example, although horses are not exactly ponies, we expect the loss for
classifying a "pony" as a "horse" to be lower than classifying it as a "car". Instead of using 0-1 loss
as in conventional image categorization, which treats image categories equally and independently,
our approach utilizes a loss function that is aware of the category hierarchy.
Moreover, highly related image categories are more likely to share common visual patterns. For
example, in Figure 1(a), husky and shepherd share similar object shape and texture. Consequently,
recognition of these related categories are more likely to be affected by the same features. In this
work, we regularize the sparsity pattern of weight vectors for related categories. This is equivalent
to learning a low dimensional representation that is shared across multiple related categories.
2.2 Logistic Regression with Category Structure
Given N training images, each represented as a J-dimensional input vector and belonging to one
of the K categories, let X denote the J × N input matrix, where each column corresponds to
an instance. Similarly, let Y denote the N × 1 output vector, where each element corresponds
to the label for an image. Multi-class logistic regression defines a weight vector w_k for each class
k ∈ {1, . . . , K} and classifies a sample x by ŷ = arg max_{y∈{1,...,K}} P(y|x, W), with the conditional
likelihood computed as
    P(y_i | x_i, W) = exp(w_{y_i}^T x_i) / Σ_k exp(w_k^T x_i)    (1)

The optimal weight vectors W* = [w_1*, . . . , w_K*] are

    W* = arg min_W  − Σ_{i=1}^N log P(y_i | x_i, W) + λ Ω(W)    (2)

where Ω(W) is a regularization term defined on W and λ is the regularization parameter.
2.2.1 Augmented Soft-Max Loss Function
Using the tree hierarchy on image categories, we could calculate a semantic relatedness (a.k.a. similarity) matrix S ∈ R^{K×K} over all categories, where S_{ij} measures the semantic relatedness of classes
i and j. Using the semantic relatedness measure, the likelihood of x_i belonging to category y_i could
be modified as follows
    P̃(y_i | x_i, W) ∝ Σ_{r=1}^K S_{y_i,r} P(r | x_i, W) ∝ Σ_{r=1}^K S_{y_i,r} [ exp(w_r^T x_i) / Σ_k exp(w_k^T x_i) ] ∝ Σ_{r=1}^K S_{y_i,r} exp(w_r^T x_i)    (3)

Since Σ_{r=1}^K P̃(r | x_i, W) = 1, consequently,

    P̃(y_i | x_i, W) = [ Σ_{r=1}^K S_{y_i,r} exp(w_r^T x_i) ] / [ Σ_{k=1}^K Σ_{r=1}^K S_{k,r} exp(w_r^T x_i) ]    (4)
For the special case where the semantic relatedness matrix S is an identity matrix, meaning each
class is only related to itself, Eq. (4) simplifies to Eq. (1). Using this modified softmax loss function,
the image categorization problem could be formulated as
    min_W Σ_{i=1}^N [ log( Σ_k Σ_r S_{k,r} exp(w_r^T x_i) ) − log( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ] + λ Ω(W)    (5)
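As an illustrative sketch (not part of the original experiments), the augmented likelihood of Eq. (4) amounts to smoothing the raw class scores with S before normalizing; numpy is assumed and the toy data is arbitrary:

```python
import numpy as np

def augmented_likelihood(x, y, W, S):
    """P~(y | x, W) from Eq. (4): raw class scores exp(w_r^T x) are
    smoothed by the semantic relatedness matrix S before normalizing."""
    logits = W.T @ x                   # K raw scores w_k^T x
    e = np.exp(logits - logits.max())  # stabilized; ratios are unchanged
    smoothed = S @ e                   # entry k: sum_r S_{k,r} exp(w_r^T x)
    return smoothed[y] / smoothed.sum()

# Sanity check: with S = I, Eq. (4) reduces to the plain softmax of Eq. (1)
rng = np.random.default_rng(0)
J, K = 4, 3
x, W = rng.normal(size=J), rng.normal(size=(J, K))
p_aug = augmented_likelihood(x, 1, W, np.eye(K))
p_soft = np.exp(W.T @ x)[1] / np.exp(W.T @ x).sum()
```

With a non-identity S, probability mass is shared among semantically related classes, which is what softens the penalty for near-miss predictions.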
2.2.2 Semantic Relatedness Matrix
To compute the semantic relatedness matrix S in the above formulation, we first define a metric measuring the semantic distance between image categories. A simple way to compute semantic distance
in a structure such as the one provided by ImageNet is to utilize the paths connecting the two corresponding nodes to the root node. Following [7], we define the semantic distance D_{ij} between class i
and class j as the number of nodes shared by their two parent branches, divided by the length of the
longest of the two branches
    D_{ij} = intersect(path(i), path(j)) / max(length(path(i)), length(path(j)))    (6)
where path(i) is the path from the root node to node i and intersect(p_1, p_2) counts the number of
nodes shared by two paths p_1 and p_2. We construct the semantic relatedness matrix S = exp(−κ(1 −
D)), where κ is a constant controlling the decay factor of semantic relatedness with respect to
semantic distance. Figure 1(c) shows the semantic relatedness matrix computed with κ = 5.
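A minimal sketch of Eq. (6) and the construction of S, on a hypothetical three-class hierarchy (numpy assumed; the toy tree and helper names are illustrative, not ImageNet's actual hierarchy):

```python
import numpy as np

def semantic_distance(path_i, path_j):
    """D_ij from Eq. (6): number of nodes shared by the two root-to-leaf
    paths, divided by the length of the longer path."""
    shared = len(set(path_i) & set(path_j))
    return shared / max(len(path_i), len(path_j))

# Hypothetical hierarchy: root -> dog -> {husky, shepherd}; root -> car
paths = [["root", "dog", "husky"],
         ["root", "dog", "shepherd"],
         ["root", "car"]]
K, kappa = len(paths), 5.0
D = np.array([[semantic_distance(p, q) for q in paths] for p in paths])
S = np.exp(-kappa * (1.0 - D))  # relatedness decays with semantic distance
```

Each class is maximally related to itself (S_{ii} = 1), and the two dog breeds end up far more related to each other than to the car class.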
2.3 Tree-Guided Sparse Feature Coding
In ImageNet, image categories are grouped at multiple granularity as a tree hierarchy. As illustrated
in Section 2.1, the image categories in each internal node are likely to be influenced by a common set
of features. In order to achieve this type of structured sparsity at multiple levels of the hierarchy, we
utilize an overlapping-group-lasso penalty recently proposed in [21] for the genetic association mapping
problem, where the goal is to identify a small number of SNPs (inputs), out of millions of SNPs, that
influence phenotypes (outputs) such as gene expression measurements.
Specifically, given the tree hierarchy T = (V, E) over image categories, each node v ∈ V of tree T
is associated with group G_v, composed of all leaf nodes in the subtree rooted at v, as illustrated in
Figure 1(b). Clearly, each group G_v is a subset of the power set of {1, . . . , K}. Given these groups
G = {G_v}_{v∈V} of categories, we define the following overlapping-group-lasso penalty [21]:
    Ω(W) = Σ_j Σ_{v∈V} λ_v ||w_j^{G_v}||_2    (7)
where w_j^{G_v} is the weight coefficients {w_{jk}, k ∈ G_v} for input j ∈ {1, . . . , J} associated with categories in G_v, and each group G_v is associated with a weight λ_v that reflects the strength of correlation
within the group. It should be noted that we do not require groups in G to be mutually exclusive,
and consequently, each leaf node would belong to multiple groups at various granularity.
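The penalty of Eq. (7) can be sketched directly (numpy assumed; the toy groups and weights are illustrative, not those used in the experiments):

```python
import numpy as np

def group_lasso_penalty(W, groups, group_weights):
    """Omega(W) = sum_j sum_v lambda_v ||w_j^{G_v}||_2 (Eq. 7).
    W is J x K; each group is an index set over the K classes (groups may
    overlap, one per tree node v); group_weights holds the lambda_v."""
    total = 0.0
    for j in range(W.shape[0]):
        for g, lam in zip(groups, group_weights):
            total += lam * np.linalg.norm(W[j, list(g)])
    return total

# Toy case: K = 3 leaf classes plus one internal node grouping classes 0 and 1
W = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
groups = [{0}, {1}, {2}, {0, 1}]
omega = group_lasso_penalty(W, groups, [1.0, 1.0, 1.0, 1.0])
```

Because the internal-node group {0, 1} contributes a joint L2 norm, zeroing a feature for both related classes at once is cheaper than zeroing them separately, which is what induces shared sparsity patterns.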
Inserting the above overlapping-group-lasso penalty into (5), we formulate the category structure
aware image categorization as follows:
    min_W Σ_{i=1}^N [ log( Σ_k Σ_r S_{k,r} exp(w_r^T x_i) ) − log( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ] + λ Σ_j Σ_{v∈V} λ_v ||w_j^{G_v}||_2    (8)

3 Accelerated Parallel ProximaL gradiEnT (APPLET) Method
The challenge in solving problem (8) lies in two facts: the non-separability of W in the non-smooth
overlapping-group-lasso penalty Ω(W), and the huge number N of training samples. Conventionally, to handle the non-smoothness of Ω(W), we could reformulate the problem as either second-order cone programming (SOCP) or quadratic programming (QP) [35]. However, the state-of-the-art approach for solving SOCP and QP based on the interior point method requires solving a Newton
system to find the search direction, and is computationally very expensive even for moderate-sized problems. Moreover, due to the huge number of samples in the training set, off-the-shelf optimization
solvers are too slow to be used.

In this work, we adopt a proximal-gradient method to handle the non-smoothness of Ω(W). Specifically, we first reformulate the overlapping-group-lasso penalty Ω(W) into a max problem over
auxiliary variables using the dual norm, and then introduce its smooth lower bound [20, 9]. Instead of
optimizing the original non-smooth penalty, we run the accelerated gradient descent method [27]
under a Map-Reduce framework [10] to optimize the smooth lower bound. The proposed approach
enjoys a fast convergence rate and low per-iteration complexity.
3.1 Reformulate the Penalty
For referring convenience, we number the elements in the set G = {G_v}_{v∈V} as G = {g_1, . . . , g_{|G|}}
according to an arbitrary order, where |G| denotes the total number of elements in G. For each input
j and group g_i associated with w_{jg_i}, we introduce a vector of auxiliary variables α_{jg_i} ∈ R^{|g_i|}.
Since the dual norm of the L2 norm is also an L2 norm, we can reformulate ||w_{jg_i}||_2 as ||w_{jg_i}||_2 =
max_{||α_{jg_i}||_2 ≤ 1} α_{jg_i}^T w_{jg_i}. Moreover, define the following Σ_{g∈G} |g| × J matrix
        [ α_{1g_1}      . . .   α_{Jg_1}    ]
    A = [   . . .       . . .     . . .     ]    (9)
        [ α_{1g_{|G|}}  . . .   α_{Jg_{|G|}} ]

in domain O = {A : ||α_{jg_i}||_2 ≤ 1, ∀j ∈ {1, . . . , J}, g_i ∈ G}. Following [9], the overlapping-group-lasso penalty in (8) can be equivalently reformulated as
    Ω(W) = Σ_j Σ_i λ_i max_{||α_{jg_i}||_2 ≤ 1} α_{jg_i}^T w_{jg_i} = max_{A∈O} ⟨CW^T, A⟩    (10)

where i = 1, . . . , |G|, j = 1, . . . , J, C ∈ R^{Σ_{g∈G}|g| × K}, and ⟨U, V⟩ = Tr(U^T V) is the inner
product of two matrices. Moreover, the matrix C is defined with rows indexed by (s, g_i) such that
s ∈ g_i and i ∈ {1, . . . , |G|}, columns indexed by k ∈ {1, . . . , K}, and the value of the element at
row (s, g_i) and column k set to C_{(s,g_i),k} = λ_i if s = k and 0 otherwise.
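The structure of C can be made concrete with a small sketch (numpy assumed; the groups and weights are illustrative):

```python
import numpy as np

def build_C(groups, group_weights, K):
    """Rows of C are indexed by pairs (s, g_i) with s in g_i; row (s, g_i)
    carries lambda_i at column s and zeros elsewhere (Section 3.1)."""
    rows = []
    for g, lam in zip(groups, group_weights):
        for s in g:
            row = np.zeros(K)
            row[s] = lam
            rows.append(row)
    return np.array(rows)

# Toy grouping over K = 3 classes: three leaves plus one internal node {0, 1}
C = build_C([[0], [1], [2], [0, 1]], [1.0, 1.0, 1.0, 0.5], 3)
```

Here C has Σ_g |g| = 5 rows; the last two rows place the weight 0.5 of group {0, 1} on columns 0 and 1, so the product CW^T stacks the weighted subvectors λ_i w_j^{g_i} that the dual variables in A pair against.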
After the above reformulation, (10) is still a non-smooth function of W, and this makes the optimization challenging. To tackle this problem, we introduce an auxiliary function [20, 9] to construct
a smooth approximation of (10). Specifically, our smooth approximation function is defined as:

    f_μ(W) = max_{A∈O} ⟨CW^T, A⟩ − μ d(A)    (11)

where μ is the positive smoothness parameter and d(A) is an arbitrary smooth strongly-convex
function defined on O. The original penalty term can be viewed as f_μ(W) with μ = 0. Since our
algorithm will utilize the optimal solution A* to (11), we choose d(A) = (1/2)||A||_F^2 so that we can
obtain the closed-form solution for A*. Clearly, f_μ(W) is a lower bound of f_0(W), with the gap
computed as D = max_{A∈O} d(A) = max_{A∈O} (1/2)||A||_F^2 = (1/2) J|G|.
Theorem 1 For any μ > 0, f_μ(W) is a convex and continuously differentiable function in W, and
the gradient of f_μ(W) can be computed as ∇f_μ(W) = A*^T C, where A* is the optimal solution
to (11).
According to Theorem 1, f_μ(W) is a smooth function for any μ > 0, with a simple form of gradient,
and can be viewed as a smooth approximation of f_0(W) with a maximum gap of μD. Finally, the
optimal solution A* of (11) is composed of α*_{jg_i} = S(λ_i w_{jg_i} / μ), where S is the shrinkage operator
defined as follows:

    S(u) = u / ||u||_2    if ||u||_2 > 1
    S(u) = u              if ||u||_2 ≤ 1    (12)
3.2 Accelerated Parallel Gradient Method
Given the smooth approximation of Ω(W) in (11) and the corresponding gradient presented in Theorem 1, we could apply the gradient descent method to solve the problem. Specifically, we replace the
overlapping-group-lasso penalty in (8) with its smooth approximation f_μ(W) to obtain the following optimization problem

    min_W f̃(W) = g(W) + λ f_μ(W)    (13)
where g(W) = Σ_{i=1}^N [ log( Σ_k Σ_r S_{k,r} exp(w_r^T x_i) ) − log( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ] is the augmented logistic regression loss function. The gradient of g(W) w.r.t. w_k can be calculated as
follows
    ∂g(W)/∂w_k = Σ_{i=1}^N x_i [ ( Σ_q S_{k,q} exp(w_k^T x_i) ) / ( Σ_r Σ_q S_{r,q} exp(w_r^T x_i) ) − ( S_{y_i,k} exp(w_k^T x_i) ) / ( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ]    (14)
Therefore, the gradient of g(W) w.r.t. W can be computed as ∇g(W) = [∂g(W)/∂w_1, . . . , ∂g(W)/∂w_K].
According to Theorem 1, the gradient of f̃(W) is given by

    ∇f̃(W) = ∇g(W) + λ A*^T C    (15)
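A vectorized sketch of the augmented loss g(W) and its gradient (numpy assumed; names are illustrative). The gradient below differentiates the implemented loss exactly, and for a symmetric relatedness matrix S it coincides with Eq. (14):

```python
import numpy as np

def g_loss(X, y, W, S):
    """Augmented loss g(W): X is J x N, y holds N labels, W is J x K."""
    SE = S @ np.exp(W.T @ X)  # (k, i) entry: sum_r S_{k,r} exp(w_r^T x_i)
    return np.sum(np.log(SE.sum(axis=0)) - np.log(SE[y, np.arange(X.shape[1])]))

def grad_g(X, y, W, S):
    """Gradient of g(W) w.r.t. W following Eq. (14), summed over samples."""
    N = X.shape[1]
    E = np.exp(W.T @ X)                                 # K x N: exp(w_k^T x_i)
    SE = S @ E
    P1 = (S.sum(axis=0)[:, None] * E) / SE.sum(axis=0)  # first bracket term
    P2 = (S[y].T * E) / SE[y, np.arange(N)]             # second bracket term
    return X @ (P1 - P2).T                              # J x K

# Toy problem with a symmetric relatedness matrix
rng = np.random.default_rng(1)
J, K, N = 4, 3, 5
X = rng.normal(size=(J, N))
y = rng.integers(0, K, size=N)
S = np.exp(-np.abs(np.subtract.outer(np.arange(K), np.arange(K))))
W = 0.1 * rng.normal(size=(J, K))
G = grad_g(X, y, W, S)
```

In the parallel setting of Section 3.2, each worker would evaluate this sum over its own shard of the N samples.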
Although f̃(W) is a smooth function of W, it is represented as a summation over all training samples. Consequently, ∇f̃(W) can only be computed by summing over all N training samples. Due
to the huge number of samples in the training set, we adopt a Map-Reduce parallel framework [10]
to compute ∇g(W) as shown in Eq. (14). While standard gradient schemes have a slow convergence rate, they can often be accelerated. This stems from the pioneering work of Nesterov [27],
a deterministic algorithm for smooth optimization. In this paper, we adopt this accelerated
gradient method, and the whole algorithm is shown in Algorithm 1.
Algorithm 1 Accelerated Parallel ProximaL gradiEnT method (APPLET)
Input: X, Y, C, desired accuracy ε, step parameters {η_t}
Initialization: B_0 = 0
for t = 1, 2, . . ., until convergence do
    Map-step: Distribute data to M cores {X_1, . . . , X_M}; compute in parallel ∇g_m(B_{t−1}) for X_m
    Reduce-step:
    (1) ∇f̃(B_{t−1}) = Σ_{m=1}^M ∇g_m(B_{t−1}) + λ A*^T C
    (2) W_t = B_{t−1} − η_t ∇f̃(B_{t−1})
    (3) B_t = W_t + ((t−1)/(t+2)) (W_t − W_{t−1})
end for
Output: Ŵ = W_t
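A serial sketch of the accelerated update in steps (2)-(3) of Algorithm 1, with the map-reduce gradient and the λA*^T C term abstracted into a user-supplied grad function (numpy assumed; the toy objective is illustrative):

```python
import numpy as np

def accelerated_gradient(grad, W0, step, iters=100):
    """Nesterov-style acceleration: take the gradient step at the
    extrapolated point B_{t-1}, then set B_t = W_t + (t-1)/(t+2)(W_t - W_{t-1})."""
    W_prev = np.array(W0, dtype=float)
    B = W_prev.copy()
    for t in range(1, iters + 1):
        W = B - step * grad(B)                          # step (2)
        B = W + ((t - 1.0) / (t + 2.0)) * (W - W_prev)  # step (3)
        W_prev = W
    return W_prev

# Toy smooth objective: minimize (1/2)||W - target||^2, gradient W - target
target = np.array([1.0, -2.0, 3.0])
W_hat = accelerated_gradient(lambda W: W - target, np.zeros(3), step=0.5)
```

The momentum coefficient (t−1)/(t+2) grows toward 1 over iterations, which is what yields the O(1/t^2) rate of [27] for smooth problems.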
4 Related Works
Various attempts at sharing information across related image categories have been explored. Early
approaches stem from neural networks, where the hidden layers are shared across different
classes [8, 23]. Recent approaches transfer information across classes by regularizing the parameters of the classifiers across classes [37, 28, 15, 33, 34, 2, 26, 30]. Common to all these approaches
is that experiments are always performed with relatively few classes [16]. It is unclear how these
approaches would perform on super large-scale data sets containing thousands of image categories.
Some of these approaches would encounter severe computational bottleneck when scaling up to
thousands of classes [16].
Another line of research is the ImageNet Large Scale Visual Recognition Challenge 2010
(ILSVRC10) [3], where best performing approaches use techniques such as spatial pyramid matching [22], locality-constrained linear coding [38], the Fisher vector [29], and linear SVM trained
using stochastic gradient descent. Success has been witnessed in ILSVRC10 even with simple machine learning techniques. However, none of these approaches utilize the semantic relationships
defined among image categories in ImageNet, which we argue is a crucial source of information for
further improvement in such super large scale classification problem.
5 Experiments
In this section, we test the performance of APPLET on a subset of ImageNet used in ILSVRC10,
containing 1.2 million images from 1000 categories, divided into distinct portions for training, validation and test. The number of images for each category ranges from 668 to 3047. We use the
provided validation set for parameter selection and the final results are obtained on the test set.
Before presenting the classification results, we'd like to make clear that the goals and contributions
of this work are different from the aforementioned approaches proposed in ILSVRC10. Those approaches were designed to enter a performance competition, where heavy feature engineering and
post-processing (such as ad hoc voting over multiple algorithms) were used to achieve high accuracy.
Our work, on the other hand, looks at this problem from a different angle, focusing on a principled
methodology that explores the benefit of utilizing class structure in image categorization, and proposing a model and related optimization technique to properly incorporate such information. We did
not use the full scope of all the features and post-processing schemes to boost our classification
results as the ILSVRC10 competition teams did. Therefore, we argue that the results of our work are
not directly comparable with the ILSVRC10 competitions.
5.1 Image Features
Each image is resized to have a max side length of 300 pixels. SIFT [25] descriptors are computed
on 20 × 20 overlapping patches with a spacing of 10 pixels. Images are further downsized to 1/2
of the side length and then 1/4 of the side length, and more descriptors are computed. We then
perform k-means clustering on a random subset of 10 million SIFT descriptors to form a visual
vocabulary of 1000 visual words. Using this learned vocabulary, we employ Locality-constrained
Linear Coding (LLC) [38], which has shown state-of-the-art performance on several benchmark data
sets, to construct a vector representation for each image. Finally, a single feature vector is computed
for each image using max pooling on a spatial pyramid [22]. The pooled features from various
locations and scales are then concatenated to form a spatial pyramid representation of the image.
Consequently, each image is represented as a vector in a 21,000 dimensional space.
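The final pooling step of this pipeline can be sketched as follows (numpy assumed; patch coordinates are taken as normalized to [0, 1), and the pyramid levels and code dimension are illustrative rather than the exact configuration used here):

```python
import numpy as np

def spatial_pyramid_max_pool(coords, codes, levels=(1, 2, 4)):
    """Max-pool patch codes over a spatial pyramid: at level L the image is
    split into an L x L grid, codes falling in each cell are max-pooled,
    and the cell vectors from all levels are concatenated."""
    pooled = []
    for L in levels:
        cell_x = np.minimum((coords[:, 0] * L).astype(int), L - 1)
        cell_y = np.minimum((coords[:, 1] * L).astype(int), L - 1)
        for cx in range(L):
            for cy in range(L):
                cell = codes[(cell_x == cx) & (cell_y == cy)]
                pooled.append(cell.max(axis=0) if len(cell)
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)

# 50 patches with (x, y) positions and 10-dimensional sparse codes
rng = np.random.default_rng(0)
coords, codes = rng.random((50, 2)), rng.random((50, 10))
feat = spatial_pyramid_max_pool(coords, codes)  # (1 + 4 + 16) * 10 = 210 dims
```

Concatenating pooled cells from all levels is what injects coarse spatial layout into the otherwise orderless bag-of-codes representation.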
5.2 Evaluation Criteria
We adopt the same performance measures used in ILSVRC10. Specifically, for every image, each
tested algorithm will produce a list of 5 object categories in the descending order of confidence.
Performance is measured using the top-n error rate, n = 1, . . . , 5 in our case, and two error measures
are reported. The first is a flat error which equals 1 if the true class is not within the n most confident
predictions, and 0 otherwise. The second is a hierarchical error, reporting the minimum height of
the lowest common ancestors between true and predicted classes. For each of the above two criteria,
the overall error score for an algorithm is the average error over all test images.
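One simple reading of the hierarchical measure can be sketched as follows (names and the toy tree are illustrative and not ImageNet's hierarchy; the challenge's exact height definition may differ in detail):

```python
def lca_height(true_path, pred_path):
    """Steps from the true leaf up to the lowest common ancestor of the
    true and predicted root-to-leaf paths (0 when the prediction is exact)."""
    depth = 0
    for a, b in zip(true_path, pred_path):
        if a != b:
            break
        depth += 1
    return len(true_path) - depth

def topn_hierarchical_error(true_path, predicted_paths):
    """Top-n hierarchical error: minimum LCA height over the n predictions."""
    return min(lca_height(true_path, p) for p in predicted_paths)

frog = ["root", "frog", "bullfrog"]
preds = [["root", "frog", "woodfrog"], ["root", "snake", "water snake"]]
```

Under this measure, a sibling-class mistake costs little while a mistake that only shares the root costs the full path length, which is the behavior Table 2 illustrates.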
Table 1: Classification results (both flat and hierarchical errors) of various algorithms.

                         Flat Error                                Hierarchical Error
Algorithm   Top 1   Top 2   Top 3   Top 4   Top 5     Top 1   Top 2   Top 3   Top 4   Top 5
LR          0.797   0.726   0.678   0.639   0.607     8.727   6.974   5.997   5.355   4.854
ALR         0.796   0.723   0.668   0.624   0.587     8.259   6.234   5.061   4.269   3.659
GroupLR     0.786   0.699   0.642   0.600   0.568     7.620   5.460   4.322   3.624   3.156
APPLET      0.779   0.698   0.634   0.589   0.565     7.208   4.985   3.798   3.166   3.012
Figure 2: Left: image classes with highest accuracy. Right: image classes with lowest accuracy.
5.3 Comparisons & Classification Results
We have conducted comprehensive performance evaluations by testing our method under different circumstances. Specifically, to better understand the effects of augmenting logistic regression
with semantic relatedness and of using the overlapping-group-lasso penalty to enforce group-level feature selection, we study the models that add only the augmented logistic regression loss and only
the overlapping-group-lasso penalty separately, and compare them with the APPLET method. We use the
conventional L2 regularized logistic regression [5] as the baseline. The algorithms that we evaluated are
listed below: (1) L2 regularized logistic regression (LR) [5]; (2) Augmented logistic regression with
L2 regularization (ALR); (3) Logistic regression with overlapping-group-lasso regularization (GroupLR); (4) Augmented logistic regression with overlapping-group-lasso regularization (APPLET).
Table 1 presents the classification results of various algorithms. According to the classification
results, we could clearly see the advantage of APPLET over conventional logistic regression, especially on the top-5 error rate. Specifically, comparing the top-5 error rate, APPLET outperforms
LR by a margin of 0.04 on flat loss, and a margin of 1.84 on hierarchical loss. It should be noted
that hierarchical error is measured by the height of the lowest common ancestor in the hierarchy,
and moving up a level can more than double the number of descendants. Table 1 also compares the
performance of ALR with LR. Specifically, ALR outperforms LR slightly when using the top-1 prediction results. However, on top-5 prediction results, ALR performs clearly better than LR. A similar
phenomenon is observed when comparing the classification results of GroupLR with LR. Moreover,
Figure 2 shows the image categories with the highest and lowest classification accuracy.
One key reason for introducing the augmented loss function is to ensure that predicted image class
falls not too far from its true class on the semantic hierarchy. Results in Table 2 demonstrate that
even though APPLET cannot guarantee to make the correct prediction on each image, it produces
labels that are closer to the true one than LR, which generates labels far from correct ones.
True class   laptop      linden        gordon setter     gourd      bullfrog         volcano      odometer      earthworm
APPLET       laptop(0)   live oak(3)   Irish setter(2)   acorn(2)   woodfrog(2)      volcano(0)   odometer(0)   earthworm(0)
LR           laptop(0)   log wood(3)   alp(11)           olive(2)   water snake(9)   geyser(4)    odometer(0)   slug(8)

Table 2: Example prediction results of APPLET and LR. Numbers indicate the hierarchical error of the
misclassification, defined in Section 5.2.
As shown in Table 1, a systematic reduction in classification error using APPLET shows that acknowledging semantic relationships between image classes enables the system to discriminate at
more informative semantic levels. Moreover, results in Table 2 demonstrate that classification results of APPLET can be significantly more informative, as labeling a "bullfrog" as "woodfrog" gives
a more useful answer than "water snake", as it is still correct at the "frog" level.
5.4 Effects of λ and κ on the Performance of APPLET

We present in Figure 3 how categorization performance scales with λ and κ. According to Figure 3,
APPLET achieves its lowest categorization error around λ = 0.01. Moreover, the error rate increases
Figure 3: Classification results (flat error and hierarchical error) of APPLET with various λ and κ.
when λ is larger than 0.1, where excessive regularization hampers the algorithm from differentiating
semantically related categories. Similarly, APPLET achieves its best performance with κ = 5. When
κ is too small, a large number of categories are mixed together, resulting in a much higher flat loss.
On the other hand, when κ ≥ 50, the semantic relatedness matrix is close to diagonal, resulting in
treating all categories independently, and categorization performance becomes similar to LR.
6 Conclusions
In this paper, we argue for the positive effect of incorporating category hierarchy information in super
large scale image categorization. The sheer size of the problem considered here singles out our work
from previous works on multi-way classification or transfer learning. Empirical study using 1.2
million training images from 1000 categories demonstrates the effectiveness and promise of our
proposed approach.
Acknowledgments
E. P. Xing is supported by NSF IIS-0713379, DBI-0546594, Career Award, ONR N000140910758,
DARPA NBCH1080007 and Alfred P. Sloan Foundation. L. Fei-Fei is partially supported by an
NSF CAREER grant (IIS-0845230) and an ONR MURI grant.
References
[1] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, 4:83–99, 2003.
[2] E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature replacement. In CVPR, 2005.
[3] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. http://www.imagenet.org/challenges/LSVRC/2010/, 2010.
[4] A. Binder, K.-R. Müller, and M. Kawanabe. On taxonomies for multi-class image categorization. IJCV, pages 1–21, 2011.
[5] C. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., 2006.
[6] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[7] A. Budanitsky and G. Hirst. Evaluating WordNet-based measures of lexical semantic relatedness. Comput. Linguist., 32:13–47, March 2006.
[8] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[9] X. Chen, Q. Lin, S. Kim, J. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general structured sparse learning. In UAI, 2011.
[10] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-reduce for machine learning on multicore. In NIPS, 2007.
[11] J. Deng, A. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, 2010.
[12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[13] J. Deng, S. Satheesh, A. Berg, and L. Fei-Fei. Fast and balanced: Efficient label tree learning for large scale object recognition. In NIPS, 2011.
[14] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In CVPR Workshop on Generative-Model Based Vision, 2004.
[15] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 28:594–611, 2006.
[16] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories. In ECCV, 2010.
[17] T. Gao and D. Koller. Discriminative learning of relaxed hierarchy for large-scale visual recognition. In ICCV, 2011.
[18] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[19] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, 2008.
[20] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In ICML, 2010.
[21] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, 2010.
[22] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86:2278–2324, 1998.
[24] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image classification: fast feature extraction and SVM training. In CVPR, 2011.
[25] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
[26] E. Miller, N. Matsakis, and P. Viola. Learning from one example through shared densities on transforms. In CVPR, 2000.
[27] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN SSSR (translated as Soviet Math. Docl.), 269:543–547, 1983.
[28] A. Opelt, A. Pinz, and A. Zisserman. Incremental learning of object detectors using a visual shape alphabet. In CVPR, 2006.
[29] F. Perronnin, J. Sanchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In ECCV, 2010.
[30] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008.
[31] B. Russell, A. Torralba, K. Murphy, and W. Freeman. LabelMe: A database and web-based tool for image annotation. IJCV, 77:157–173, 2008.
[32] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
[33] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects, and parts. In CVPR, 2005.
[34] J. Tenenbaum and W. Freeman. Separating style and content with bilinear models. Neural Computation, 12:1247–1283, 2000.
[35] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society Series B, pages 91–108, 2005.
[36] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. PAMI, 30:1958–1970, 2008.
[37] A. Torralba, K. Murphy, and W. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In CVPR, 2004.
[38] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
locality:3 appearance:2 likely:4 gao:1 visual:15 josh:1 partially:1 springer:1 corresponds:2 obozinski:1 conditional:1 identity:3 formulated:2 goal:2 consequently:7 sized:1 viewed:2 labelme:2 shared:5 replace:1 fisher:2 content:1 lsvrc:1 specifically:11 semantically:1 wt:5 wordnet:2 hirst:1 total:1 discriminate:1 experimental:1 organizes:1 select:1 berg:3 internal:4 collins:1 accelerated:9 incorporate:2 tested:2 phenomenon:1 correlated:1 |
3,697 | 4,348 | On fast approximate submodular minimization
Stefanie Jegelka*, Hui Lin†, Jeff Bilmes†
* Max Planck Institute for Intelligent Systems, Tuebingen, Germany
† University of Washington, Dept. of EE, Seattle, U.S.A.
[email protected], {hlin,bilmes}@ee.washington.edu
Abstract
We are motivated by an application to extract a representative subset of machine
learning training data and by the poor empirical performance we observe of the
popular minimum norm algorithm. In fact, for our application, minimum norm can
have a running time of about O(n^7) (O(n^5) oracle calls). We therefore propose
a fast approximate method to minimize arbitrary submodular functions. For a
large sub-class of submodular functions, the algorithm is exact. Other submodular
functions are iteratively approximated by tight submodular upper bounds, and then
repeatedly optimized. We show theoretical properties, and empirical results suggest
significant speedups over minimum norm while retaining higher accuracies.
1 Introduction
Submodularity has been and continues to be an important property in many fields. A set function f : 2^V → R defined on subsets of a finite ground set V is submodular if it satisfies the inequality f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T) for all S, T ⊆ V. Submodular functions include entropy, graph cuts (defined as a function of graph nodes), potentials in many Markov Random Fields [3], clustering objectives [23], covering functions (e.g., sensor placement objectives), and many more. One might consider submodular functions as being on the boundary between "efficiently", i.e., polynomial-time, and "not efficiently" optimizable set functions. Submodularity is gaining importance in machine learning too, but many machine learning data sets are so large that mere "polynomial-time" efficiency is not enough. Indeed, the submodular function minimization (SFM) algorithms with proven polynomial running time are practical only for very small data sets. An alternative, often considered to be faster in practice, is the minimum-norm point algorithm [7]. Its worst-case running time however is still an open question.
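As a quick aside (not from the paper), the defining inequality is easy to check by brute force on a small instance; the snippet below does so for a coverage function, a standard example of a submodular function, with invented toy data:

```python
from itertools import chain, combinations

# A small coverage function: each ground set element i covers a subset U[i]
# of a universe, and f(S) = |union of U[i] for i in S|. (Toy data, purely
# for illustration.)
U = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c', 'd', 'e'}}
V = list(U)

def f(S):
    return len(set().union(*(U[i] for i in S))) if S else 0

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Check f(S) + f(T) >= f(S | T) + f(S & T) for every pair of subsets.
is_submodular = all(
    f(S) + f(T) >= f(S | T) + f(S & T)
    for S in map(set, subsets(V)) for T in map(set, subsets(V)))
```

Weighted versions of such coverage functions reappear below as the bipartite-neighborhood functions of Section 1.1.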
Contrary to current wisdom, we demonstrate that for certain functions relevant in practice (see
Section 1.1), the minimum-norm algorithm has an impractical empirical running time of about O(n^7), requiring about O(n^5) oracle function calls. To our knowledge, and interesting from an optimization perspective, this is worse than any results reported in the literature, where times of O(n^3.3) were obtained with simpler graph cut functions [22].
Since we found the minimum-norm algorithm to be either slow (when accurate), or inaccurate (when
fast), in this work we take a different approach. We view the SFM problem as an instance of a larger
class of problems that includes NP-hard instances. This class admits approximation algorithms, and
we apply those instead of an exact method. Contrary to the possibly poor performance of "exact"
methods, our approximate method is fast, is exact for a large class of submodular functions, and
approximates all other functions with bounded deviation.
Our approach combines two ingredients: 1) the representation of functions by graphs; and 2) a recent
generalization of graph cuts that combines edge-costs non-linearly. Representing functions as graph
cuts is a popular basis for optimization, but cuts cannot efficiently represent all submodular functions.
Contrary to previous constructions, including 2) leads to exact representations for any submodular
function. To optimize an arbitrary submodular function f represented in our formalism, we construct a graph-representable tractable submodular upper bound f̂ that is tight at a given set T ⊆ V, i.e., f̂(T) = f(T), and f̂(S) ≥ f(S) for all S ⊆ V. We repeat this "submodular majorization" step and optimize, in at most a linear number of iterations. The resulting algorithm efficiently computes good approximate solutions for our motivating application and other difficult functions as well.
1.1 Motivating application and the failure of the minimum-norm point algorithm
Our motivating problem is how to empirically evaluate new or expensive algorithms on large data sets
without spending an inordinate amount of time doing so [20, 21]. If a new idea ends up performing
poorly, knowing this sooner will avoid futile work. Often the complexity of a training iteration is
linear in the number of samples n but polynomial in the number c of classes or types. For example,
for object recognition, it typically takes O(c^k) time to segment an image into regions that each correspond to one of c objects, using an MRF with non-submodular k-interaction potential functions. In speech recognition, moreover, a k-gram language model with size-c vocabulary has a complexity of O(c^k), where c is in the hundreds of thousands and k can be as large as six.
To reduce complexity one can reduce k, but this can be unsatisfactory since the novelty of the
algorithm might entail this very cost. An alternative is to extract and use a subset of the training data,
one with small c. We would want any such subset to possess the richness and intricacy of the original
data while simultaneously ensuring that c is bounded.
This problem can be solved via SFM using the following Bipartite neighborhoods class of submodular
functions: Define a bipartite graph H = (V, U, E, w) with left/right nodes V/U, and a modular weight function w : U → R_+. A function is modular if w(U) = Σ_{u∈U} w(u). Let the neighborhood of a set S ⊆ V be N(S) = {u ∈ U : ∃ edge (i, u) ∈ E with i ∈ S}. Then f : 2^V → R_+, defined as f(S) = Σ_{u∈N(S)} w(u), is non-decreasing submodular. This function class encompasses e.g. set covers of the form f(S) = |∪_{i∈S} U_i| for sets U_i covered by element i. We say f is the submodular function induced by modular function w and graph H.
Let U be the set of types in a set of training samples V. Moreover, let w measure the cost of a type u ∈ U (this corresponds e.g. to the "undesirability" of type u). Define also a modular function m : 2^V → R_+, m(S) = Σ_{i∈S} m(i), as the benefit of training samples (e.g., in vision, m(i) is the number of different objects in an image i ∈ V, and in speech, this is the length of utterance i). Then the above optimization problem can be solved by finding argmin_{S⊆V} w(N(S)) − λm(S) = argmin_{S⊆V} w(N(S)) + λm(V \ S), where λ is a tradeoff coefficient. As shown below, this can be easily represented and solved efficiently via graph cuts. In some cases, however, we prefer to pick certain subclasses of U together. We partition U = U_1 ∪ U_2 into blocks, and make it beneficial to pick items from the same block. Benefit restricted to blocks can arise from non-negative non-decreasing submodular functions g : 2^U → R_+ restricted to blocks. The resulting optimization problem is min_{S⊆V} Σ_i g(U_i ∩ N(S)) + λm(V \ S); the sum over i expresses the obvious generalization to a partition into more than just two blocks. Unfortunately, this class of submodular functions is no longer representable by a bipartite graph, and general SFM must be used.

Figure 1: Running time of MN; CPU time (seconds, power of 2) against ground set size (power of 2), with an O(n^7) trend.

With such a function, f(S) = m(S) + 100·sqrt(w(N(S))), the empirical running time of the minimum norm point algorithm (MN) scales as O(n^7), with O(n^5) oracle calls (Figure 1). This rules out large data sets for our application, but is interesting with regard to the unknown complexity of MN.
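To make the selection objective concrete, the toy sketch below (all numbers invented) evaluates w(N(S)) + λ·m(V \ S) by brute force over subsets; at realistic scales this objective would instead be minimized as a graph cut, as described in Section 2:

```python
from itertools import chain, combinations

# Toy corpus-selection instance (all numbers invented): samples V, types
# reached through neighborhoods nbr, type costs w, sample benefits m.
V = [0, 1, 2]
nbr = {0: {'x'}, 1: {'x', 'y'}, 2: {'z'}}
w = {'x': 1.0, 'y': 4.0, 'z': 2.0}
m = {0: 2.0, 1: 3.0, 2: 1.0}
lam = 1.0

def N(S):
    return set().union(*(nbr[i] for i in S)) if S else set()

def objective(S):  # w(N(S)) + lam * m(V \ S)
    return sum(w[u] for u in N(S)) + lam * sum(m[i] for i in set(V) - set(S))

all_subsets = chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))
best = min(all_subsets, key=objective)
```

Here the best trade-off keeps sample 0 alone: it brings in only the cheap type 'x' while forfeiting little benefit.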
1.2 Background on Algorithms for submodular function minimization (SFM)
The first polynomial algorithm for SFM was by Grötschel et al. [13], with further milestones being the first combinatorial algorithms [15, 27] ([22] contains a survey). The currently fastest strongly polynomial combinatorial algorithm has a running time of O(n^5 T + n^6) [24] (where T is function evaluation time), far from practical. Thus, the minimum-norm algorithm [7] is often the method of choice.
Luckily, many sub-families of submodular functions permit specialized, faster algorithms. Graph
cut functions fall into this category [1]. They have found numerous applications in computer vision
[2, 12], begging the question as to which functions can be represented and minimized using graph cuts [9, 6, 31]. Živný et al. [32] show that cut representations are indeed limited: even when allowing exponentially many additional variables, not all submodular functions can be expressed as graph cuts.
Moreover, to maintain efficiency, we do not wish to add too many auxiliary variables, i.e., graph
nodes. Other specific cases of relatively efficient SFM include graphic matroids [25] and symmetric
submodular functions, minimizable in cubic time [26].
A further class of benign functions are those of the form f(S) = φ(Σ_{i∈S} w(i)) + m(S) for nonnegative weights w : V → R_+, and certain concave functions φ : R → R. Fujishige and Iwata [8] minimize such a function via a parametric max-flow, and we build on their results in Section 4. However, restrictions apply to the effective number of breakpoints of φ. Stobbe and Krause [29] generalize this class to arbitrary concave functions and exploit Nesterov's accelerated gradient descent. Whereas Fujishige and Iwata [8] decompose φ as a minimum of modular functions, Stobbe and Krause [29] decompose it into a sum of truncated functions of the form f(A) = min{Σ_{i∈A} w′(i), α}; this class of functions, however, is also limited. Truncations are expressible by graph cuts, as we
show in Figure 3(b). Thus, if truncations could express any submodular function, then so could
graph cuts, contradicting the results in [32]. This was proven independently in [30]. Moreover, the
formulation itself of some representable functions in terms of concave functions can be challenging.
In this paper, by contrast, we propose a model that is exact for graph-representable functions, and
yields an approximation for all other functions.
2 Representing submodular functions by generalized graph cuts
We begin with the representation of a set function f : 2^V → R by a graph cut, and then extend this to submodular edge weights. Formally, f is graph-representable if there exists a graph G = (V ∪ 𝒰 ∪ {s, t}, E) with terminal nodes s, t, one node for each element i in V, a set 𝒰 of auxiliary nodes (𝒰 can be empty), and edge weights w : E → R_+ such that, for any S ⊆ V:

    f(S) = min_{U⊆𝒰} w(δ(s ∪ S ∪ U)) = min_{U⊆𝒰} Σ_{e∈δ_s(S∪U)} w(e).    (1)

Figure 2: graph representing the max example discussed below (nodes s, 1, 2, u, t; edge weights m(1), m(2), w(1), w(2)).
δ(S) is the set of edges leaving S, and δ_s(S) = δ({s} ∪ S). Recall that any minimal (s, t)-cut partitions the graph nodes into the set T_s ⊆ V ∪ 𝒰 reachable from s and the set T_t = (V ∪ 𝒰) \ T_s disconnected from s. That means, f(S) equals the weight of the minimum (s, t)-cut that assigns S to T_s and V \ S to T_t, and the auxiliary nodes to achieve the minimum. The nodes in 𝒰 act as auxiliary variables. As an illustrative example, Figure 2 represents the function f(S) = max_{i∈S} w(i) + Σ_{j∈V\S} m(j) for two elements V = {1, 2} and w(2) > w(1), using one auxiliary node u. For any query set S, u might be joined with S (u ∈ T_s) or not (u ∈ T_t). If S = {1}, then w(δ_s({1, u})) = m(2) + w(2), and w(δ_s({1})) = m(2) + w(1) = f(S) < w(δ_s({1, u})). If S = {1, 2}, then w(δ_s({1, 2, u})) = w(2) < w(δ_s({1, 2})) = w(1) + w(2), and indeed f(S) = w(2). The graph representation (1) leads to the equivalence between minimum cuts and the minimizers of f:
Lemma 1. Let S* be a minimizer of f, and let U* ∈ argmin_{U⊆𝒰} w(δ_s(S* ∪ U)). Then the boundary δ_s(S* ∪ U*) ⊆ E is a minimum cut in G.
The lemma (proven in [18]) is good news since minimum cuts can be computed efficiently. To derive S* from a minimum cut, recall that any minimum cut is the boundary of some set T_s* ⊆ V ∪ 𝒰 that is still reachable from s after cutting. Then S* = T_s* ∩ V, so S* ⊆ T_s* and (V \ S*) ⊆ T_t*. A large sub-family of submodular functions can be expressed exactly in the form (1), but possibly with an exponentially large 𝒰. For efficiency, the size of 𝒰 should remain small. To express any submodular function with few auxiliary nodes, in this paper we extend Equation (1) as is seen below.
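Equation (1) can be replayed numerically on the two-element example of Figure 2. The snippet below (toy weights; the edge list is our reading of the construction: s→i with weight m(i), i→u with weight w(i), u→t with weight w(2)) minimizes the cut weight over the two possible sides for the auxiliary node u:

```python
# Toy weights for the Figure 2 instance (our reading of the construction).
w = {1: 1.0, 2: 3.0}
m = {1: 0.5, 2: 0.7}
edges = [('s', 1, m[1]), ('s', 2, m[2]),   # modular part
         (1, 'u', w[1]), (2, 'u', w[2]),   # element -> auxiliary
         ('u', 't', w[2])]                 # auxiliary -> sink

def cut_weight(side):
    """Weight of the edges leaving {s} | side."""
    inside = {'s'} | side
    return sum(c for (a, b, c) in edges if a in inside and b not in inside)

def f_via_cut(S):
    # Equation (1): minimize over the auxiliary node's side of the cut.
    return min(cut_weight(set(S)), cut_weight(set(S) | {'u'}))

# The cut value matches f(S) = max_{i in S} w(i) + sum_{j not in S} m(j).
for S in [set(), {1}, {2}, {1, 2}]:
    direct = max((w[i] for i in S), default=0.0) + sum(m[j] for j in {1, 2} - S)
    assert abs(f_via_cut(S) - direct) < 1e-12
```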
Unless the submodular function f is already a graph cut function (and directly representable), we first
decompose f into a modular function and a nondecreasing submodular function, and then build up
the graph part by part. This accounts for any graph-representable component of f . To approximate
the remaining component of the function that is not exactly representable, we use submodular costs
on graph edges (in contrast with graph nodes), a construction that has been introduced recently in
(a) maximum (b) truncation (c) partition matroid (d) bipartite (e) bipartite & truncation (f) basic submodular construction
Figure 3: Example graph constructions. Dashed blue edges can have submodular weights; auxiliary nodes are white and ground set nodes are shaded. The bipartite graph can have arbitrary representations between U and t; 3(e) is one example. (All figures are best viewed in color.)
computer vision [16]. We first introduce a relevant decomposition result by Cunningham [4]. A polymatroid rank function is totally normalized if f(V \ i) = f(V) for all i ∈ V. The marginal costs are defined as Δf(i|S) = f(S ∪ {i}) − f(S) for all i ∈ V \ S.
Theorem 1 ([4, Thm. 18]). Any submodular function f can be decomposed as f(S) = m(S) + g(S) into a modular function m and a totally normalized polymatroid rank function g. The components are defined as m(S) = Σ_{i∈S} Δf(i|V \ i) and g(S) = f(S) − m(S) for all S ⊆ V.
We may assume that m(i) < 0 for all i ∈ V. If m(i) ≥ 0 for any i ∈ V, then diminishing marginal costs, a property of submodular functions, imply that we can discard element i immediately [5, 18]. To express such negative costs in a graph cut, we point out an equivalent formulation with positive weights: since m(V) is constant, minimizing m(S) = Σ_{i∈S} m(i) is equivalent to minimizing the shifted function m(S) − m(V) = −m(V \ S). Thus, we instead minimize the sum of positive weights on the complement of the solution. We implement this shifted function in the graph by adding an edge (s, i) with nonnegative weight −m(i) for each i ∈ V. Every element j ∈ T_t (i.e., j ∉ S) that is not selected must be separated from s, and the edge (s, j) contributes −m(j) to the total cut cost.
Having constructed the modular part of the function f by edges (s, i) for all i ∈ V, we address its submodular part g. If g is a sum of functions, we can add a subgraph for each function. We begin with some example functions that are explicitly graph-representable with polynomially many auxiliary nodes 𝒰. The illustrations in Figure 3 include the modular part m as well.
Maximum. The function g(S) = max_{i∈S} w(i) for nonnegative weights w is an extension of Figure 2. Without loss of generality, we assume the elements to be ordered by weight, so that w(1) ≤ w(2) ≤ … ≤ w(n). We introduce n − 1 auxiliary nodes u_j, and connect them to form an imbalanced tree with leaves V, as illustrated in Figure 3(a). The minimum way to disconnect a set S from t is to cut the single edge (u_{j−1}, u_j) with weight w(j) of the largest element j = argmax_{i∈S} w(i).
Truncations. Truncated functions f(S) = min{w(S), α} for w, α ≥ 0 can be modeled by one extra variable, as shown in Figure 3(b). If w(S) > α, then the minimization in (1) puts u in T_s and cuts the α-edge. This construction has been successfully used in computer vision [19]. Truncations can model piecewise linear concave functions of w(S) [19, 29], and also represent negative terms in a pseudo-boolean polynomial [18]. Furthermore, these functions include rank functions g(S) = min{|S|, k} of uniform matroids, and rank functions of partition matroids. If V is partitioned into groups G ⊆ V, then the rank of the associated partition matroid counts the number of groups that S intersects: f(S) = |{G : G ∩ S ≠ ∅}| (Fig. 3(c)).
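A minimal numerical check of the truncation gadget (toy weights; the modular part is omitted): edges i→u with weight w(i) and a single edge u→t with weight α reproduce min{w(S), α} once we minimize over u's side of the cut:

```python
from itertools import chain, combinations

# Toy check of the truncation gadget of Figure 3(b): edges i->u with weight
# w(i) and u->t with weight alpha realize min{w(S), alpha} as a minimum cut.
w = {0: 2.0, 1: 3.0, 2: 1.0}
alpha = 4.0
edges = [(i, 'u', w[i]) for i in w] + [('u', 't', alpha)]

def cut_weight(inside):
    return sum(c for (a, b, c) in edges if a in inside and b not in inside)

def f_via_cut(S):
    base = {'s'} | set(S)
    # minimize over whether the auxiliary node u joins the source side
    return min(cut_weight(base), cut_weight(base | {'u'}))

for S in map(set, chain.from_iterable(
        combinations(w, r) for r in range(len(w) + 1))):
    assert f_via_cut(S) == min(sum(w[i] for i in S), alpha)
```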
Bipartite neighborhoods. We already encountered bipartite submodular functions f(S) = Σ_{u∈N(S)} w(u) in Section 1.1. The bipartite graph that defines N(S) is part of the representation shown in Figure 3(d), and its edges get infinite weight. As a result, if S ⊆ T_s, then all neighbors N(S) of S must also be in T_s, and the edges (u, t) for all u ∈ N(S) are cut. Each u ∈ U has such an edge (u, t), and the weight of that edge is the weight w(u) of u.
Additional examples are given in [18].
Of course, all the above constructions can also be applied to subsets Q ⊆ V of nodes. In fact, the decomposition and constructions above permit us to address arbitrary sums and restrictions of such graph-representable functions. These example families of functions already cover a wide variety of functions needed in applications. Minimizing a graph-represented function is equivalent to finding the minimum (s, t)-cut, and all edge weights in the above are nonnegative. Thus we can use any efficient min-cut or max-flow algorithm for any of the above functions.
2.1 Submodular edge weights
Next we address the generic case of a submodular function that is not (efficiently) graph-representable or whose functional form is unknown. We can still decompose this function into a modular part m and a polymatroid g. Then we construct a simple graph as shown in Figure 3(f). The representation of m is the same as above, but the cost of the edges (i, t) will be charged differently. Instead of a sum of weights, we define the cost of a set of these edges to be a non-additive function on sets of edges, a polymatroid rank function. Each edge (i, t) is associated with exactly one ground set element i ∈ V, and selecting i (i ∈ T_s) is equivalent to cutting the edge (i, t). Thus, the cost of edge (i, t) will model the cost g(i) of its element i ∈ V. Let E_t be the set of such edges (i, t), and denote, for any subset C ⊆ E_t, the set of ground set elements adjacent to C by V(C) = {i ∈ V | (i, t) ∈ C}. Equivalently, C is the boundary of V(C) in E_t: δ_s(V(C)) ∩ E_t = C. We define the cost of C to be the cost of its adjacent ground set elements, h_g(C) := g(V(C)); this implies h_g(δ_s(S) ∩ E_t) = g(S). The equivalent of Equation (1) becomes

    f(S) = min_{U⊆𝒰} w(δ_s(S ∪ U) \ E_t) + h_g(δ_s(S ∪ U) ∩ E_t) = −m(V \ S) + g(S),    (2)

with 𝒰 = ∅ in Figure 3(f). This generalization from the standard sum of edge weights to a nondecreasing submodular function permits us to express many more functions, in fact any submodular function [5]. Such expressiveness comes at a price, however: in general, finding a minimum (s, t)-cut with such submodular edge weights is NP-hard, and even hard to approximate [17]. The graphs here that represent submodular functions correspond to benign examples that are not NP-hard. Nevertheless, we will use an approximation algorithm that applies to all such non-additive cuts. We describe the algorithm in Section 3. For the moment, we assume that we can handle submodular costs on edges.
The simple construction in Figure 3(f) itself corresponds to a general submodular function minimization. It becomes powerful when combined with parts of f that are explicitly representable. If g decomposes into a sum of graph-representable functions and a (nondecreasing submodular) remainder g_r, then we construct a subgraph for each graph-representable function, and combine these subgraphs with the submodular-edge construction for g_r. All the subgraphs share the same ground set nodes V. In addition, we are in no way restricted to separating graph-representable and general submodular functions. The cost function in our application is a submodular function induced by a bipartite graph H = (V, U, E). Let, as before, N(S) be the neighborhood of S ⊆ V in U. Given a nondecreasing submodular function g_U : 2^U → R_+ on U, the graph H defines a function g(S) = g_U(N(S)). If g_U is nondecreasing submodular, then so is g [28, §44.6g]. For any such function, we represent H explicitly in G, and then add submodular-cost edges from U to t with h_g(δ_s(N(S))) = g_U(N(S)), as shown in Figure 3(d). If g_U is itself exactly representable, then we add the appropriate subgraph instead (Figure 3(e)).
3 Optimization
To minimize a function f, we find a minimum (s, t)-cut in its representation graph. Algorithm 1 applies to any submodular-weight cut; this algorithm is exact if the edge costs are modular (a sum of weights). In each iteration, we approximate f by a function f̂ that is efficiently graph-representable, and minimize f̂ instead. In this section, we switch from costs f, f̂ of node sets S, T to equivalent costs w, h of edge sets A, B, C and back.
Algorithm 1: Minimizing graph-based approximations.
  create the representation graph G = (V ∪ 𝒰 ∪ {s, t}, E) and set S_0 = T_0 = ∅;
  for i = 1, 2, … do
    compute edge weights ν_{i−1} = ν_{δ_s(T_{i−1})} (Equation 4);
    find the (maximal) minimum (s, t)-cut T_i = argmin_{T⊆(V∪𝒰)} ν_{i−1}(δ_s T);
    if f(T_i) = f(T_{i−1}) then
      return S_i = T_i ∩ V;
    end
  end
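For intuition, here is a node-level toy sketch of the majorize-minimize idea behind Algorithm 1 (our simplification, not the authors' graph-based implementation): the submodular part g of f(S) = m(S) + g(S) is repeatedly replaced by the tight modular upper bound of Equation (3), whose minimizer can be read off element-wise; the union with T below keeps the iterates growing, mirroring the chain property of Lemma 2:

```python
import math

def minimize_mm(V, m, g):
    """Minimize f(S) = m(S) + g(S): m modular (dict, typically negative),
    g nondecreasing submodular (callable on frozensets). g is replaced by
    the tight modular upper bound at the current set T, the bound is
    minimized exactly, and the process repeats until f stops improving."""
    Vf = frozenset(V)
    f = lambda S: sum(m[i] for i in S) + g(S)
    T = frozenset()
    history = [f(T)]
    while True:
        def nu(i):  # modular weights of the upper bound, cf. Eq. (4)
            if i in T:
                return g(Vf) - g(Vf - {i})  # marginal w.r.t. everything else
            return g(T | {i}) - g(T)        # marginal w.r.t. T
        # the surrogate is modular, so its minimizer is found element-wise
        S = T | frozenset(i for i in V if m[i] + nu(i) < 0)
        if f(S) >= f(T):
            return T, history
        T, history = S, history + [f(S)]

V = [0, 1, 2]
m = {0: -5.0, 1: -1.0, 2: -3.0}
wg = {0: 2.0, 1: 4.0, 2: 1.0}
S_best, history = minimize_mm(V, m, lambda S: math.sqrt(sum(wg[i] for i in S)))
```

Because the bound is tight at T and minimized exactly, the f-values in `history` never increase; when g happens to be modular, the bound is exact and the very first step is optimal.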
The approximation f̂ arises from the cut representation constructed in Section 2: we replace the exact edge costs by approximate modular edge weights ν in G. Recall that the representation G has two types of edges: those whose weights w are counted as the usual sum, and those charged via a submodular function h_g derived from g. We denote the latter set by E_t, and the former by E_m. For any e ∈ E_m, we use the exact cost ν(e) = w(e). The submodular cost h_g of the remaining edges is upper bounded by referring to a fixed set B ⊆ E that we specify later. For any A ⊆ E_t, we define

    ĥ_B(A) := h_g(B) + Σ_{e∈A\B} Δh(e | B ∩ E_t) − Σ_{e∈B\A} Δh(e | E_t \ e) ≥ h_g(A).    (3)

This inequality holds thanks to diminishing marginal costs, and the approximation is tight at B, ĥ_B(B) = h_g(B). Up to a constant shift, this function is equivalent [16] to the edge weights:

    ν_B(e) = Δh(e | B ∩ E_t) if e ∈ E_t \ B;  and  ν_B(e) = Δh(e | E_t \ e) if e ∈ B ∩ E_t.    (4)

Plugging ν_B into Equation (2) yields an approximation f̂ of f. In the algorithm, B is always the boundary B = δ_s(T) of a set T ⊆ (V ∪ 𝒰). Then G with weights ν_B represents

    f̂(S) = min_{U⊆𝒰} ν_B(δ_s(S ∪ U) ∩ E_m) + ν_B(δ_s(S ∪ U) ∩ E_t)
         = min_{U⊆𝒰} w(δ_s(S ∪ U) ∩ E_m) + Σ_{(u,t)∈δ_s(S∪U)∩B} Δg(u | V ∪ 𝒰 \ u) + Σ_{(u,t)∈δ_s(S∪U)\B} Δg(u | T).
Here, we used the definition h_g(C) := g(V(C)). Importantly, the edge weights ν_B are always nonnegative, because, by Theorem 1, g is guaranteed to be nondecreasing. Hence, we can efficiently minimize f̂ as a standard minimum cut. If in Algorithm 1 there is more than one set T defining a minimum cut, then we pick the largest (i.e., maximal) such set. Lemma 2 states properties of the T_i.
Lemma 2. Assume G is any of the graphs in Figure 3, and let T* ⊆ V ∪ 𝒰 be the maximal set defining a minimum-cost cut δ_s(T*) in G, so that S* = T* ∩ V is a minimizer of the function represented by G. Then, in any iteration i of Algorithm 1, it holds that T_{i−1} ⊆ T_i ⊆ T*. In particular, S ⊆ S* for the returned solution S.
Lemma 2 has three important implications. First, the algorithm never picks any element outside the maximal optimal solution. Second, because the T_i are growing, there are at most |T*| ≤ |V ∪ 𝒰| iterations, and the algorithm is strongly polynomial. Finally, the chain property permits more efficient implementations. The proof of Lemma 2 relies on the definition of ν and submodularity [18]. Moreover, the weights ν lead to a bound on the worst-case approximation factor [18].
3.1 Improvement via summarizations
The approximation f̂ is loosest if the sum of edge weights ν_i(A) significantly overestimates the true joint cost h_g(A) of sets of edges A ⊆ δ_s T* \ δ_s T_i still to be cut. This happens if the joint marginal cost Δh(A | δ_s T_i) is much smaller than the estimated sum of weights, ν_i(A) = Σ_{e∈A} Δh(e | δ_s T_i). Luckily, many of the functions that show this behavior strongly resemble truncations. Thus, to tighten the approximation, we summarize the joint cost of groups of edges by a construction similar to Figure 3(b). Then the algorithm can take larger steps and pick groups of elements.
node tk and re-connect all edges (u, t) ? Gk to end in tk instead of t. Their cost remains the
6
same. An extra edge ek connects tk to t, and carries the joint weight ?i (ek ) of all edges in Gk ;
a tighter approximation. The weight ?i (ek ) is also adapted in each iteration. Initially, we set
?0 (ek ) = hg (Gk ) = g(V (Gk )). Subsequent approximations ?i refer to cuts ?s Ti , and such a cut can
contain either single edges from Gk , or the group edge ek . We set the next reference set Bi to be a
copy of ?s Ti in which each group edge ek was replaced by all
Pits group members Gk . The
Pjoint group
weight ?i (ek ) for any k is then ?i (ek ) = ?h (Gk \ Bi |Bi ) + e?Gk ?Bi ?h (e|Et \ e) ? e?Gk ?i (e).
Formally, these weights represent the upper bound
X
X
X
? 0 (A) = hg (B) +
?
h
?h (Gk \ B|B) +
?h (e|B) ?
?h (e|Et \ e) ? h(A),
B
Gk ?A
e?(Gk ?A)\B,Gk 6?A
e?B\A
where we replace Gk by ek whenever Gk ? A. In our experiments, this summarization helps improve
the results while simultaneously reducing running time.
4 Parametric constructions for special cases
For certain functions of the form f(S) = m(S) + g(N(S)), the graph representation in Figure 3(d) admits a specific algorithm. We use approximations that are exact on limited ranges, and eventually pick the best range. For this construction, g must have the form g(U) = ψ(Σ_{u∈U} ŵ(u)) for weights ŵ ≥ 0 and one piecewise linear, concave function ψ with a small (polynomial) number ℓ of breakpoints. Alternatively, ψ can be any concave function if the weights ŵ are such that ŵ(U) = Σ_{u∈U} ŵ(u) can take at most polynomially many distinct values x_k; e.g., if ŵ(u) = 1 for all u, then effectively ℓ = |U| + 1 by using the x_k as breakpoints and interpolating. In all these cases, ψ is equivalent to the minimum of at most ℓ linear (modular) functions.
We build on the approach in [8], but, whereas their functions are defined on V, g here is defined on U. Contrary to their functions and owing to our decomposition, the ψ here is nondecreasing. We define ℓ linear functions, one for each breakpoint x_k (and use x_0 = 0):

    ψ_k(t) = [(ψ(x_k) − ψ(x_{k−1})) / (x_k − x_{k−1})] (t − x_k) + ψ(x_k) = α_k t + β_k.    (5)

The ψ_k are defined such that ψ(t) = min_k ψ_k(t). Therefore, we approximate f by a series f̂_k(S) = −m(V \ S) + ψ_k(ŵ(N(S))), and find the exact minimizer S_k for each k. To compute S_k via a minimum cut in G (Fig. 3(d)), we define edge weights ν_k(e) = w(e) for edges e ∉ E_t as in Section 3, and ν_k(u, t) = α_k ŵ(u) for e ∈ E_t. Then T_k = S_k ∪ N(S_k) defines a minimum cut δ_s T_k in G. We compute f̂_k(S_k) = ν_k(δ_s T_k) + β_k + m(V); the optimal solution is the S_k with minimum cost f̂_k(S_k). This method is exact. To solve for all k within one max-flow, we use a parametric max-flow method [10, 14]. Parametric max-flow usually works with both edges from s and to t. Here, α_k ≥ 0 because ψ is nondecreasing, and thus we only need t-edges, which already exist in the bipartite graph G.
This method is limited to few breakpoints. For more general concave ψ and arbitrary ŵ ≥ 0, we can approximate ψ by a piecewise linear function. Still, the parametric approach does not directly generalize to more than one nonlinearity, e.g., g(U) = Σ_i g_i(U ∩ W_i) for sets W_i ⊆ U. In contrast, Algorithm 1 (with the summarization) can handle all of these cases. We point out that without indirection via the bipartite graph, i.e., f(S) = m(S) + ψ(w(S)) for a ψ with few breakpoints, we can minimize f very simply: the solution for ψ_k includes all j ∈ V with α_k ≤ −m(j)/w(j). The advantage of the graph cut is that it easily combines with other objectives.
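That last special case is easy to sketch end-to-end (made-up numbers): when f is a modular function plus a concave nondecreasing transform of a modular function, placing breakpoints at every attainable value of w(S) and sweeping the chord slopes α_k, keeping the best candidate set, recovers the exact minimizer:

```python
import itertools, math

# Toy instance (invented numbers): f(S) = m(S) + phi(w(S)) with phi concave
# and nondecreasing. Breakpoints are all attainable values of w(S), so the
# chord sweep below is exact (each chord slope alpha yields a candidate set
# by the thresholding rule noted in the text, cf. Eq. (5)).
V = [0, 1, 2, 3]
w = {0: 1.0, 1: 2.0, 2: 1.5, 3: 0.5}
m = {0: -1.2, 1: -0.4, 2: -2.0, 3: -0.3}
phi = math.sqrt

def f(S):
    return sum(m[j] for j in S) + phi(sum(w[j] for j in S))

subsets = [frozenset(S) for r in range(len(V) + 1)
           for S in itertools.combinations(V, r)]
xs = sorted({sum(w[j] for j in S) for S in subsets})  # attainable w-values

candidates = [frozenset()]
for x0, x1 in zip(xs, xs[1:]):
    alpha = (phi(x1) - phi(x0)) / (x1 - x0)  # chord slope
    # minimizer of the linearized objective m(S) + alpha * w(S)
    candidates.append(frozenset(j for j in V if m[j] + alpha * w[j] < 0))
best = min(candidates, key=f)
```

Each linearized objective is modular, so its minimizer is found by a per-element threshold; evaluating the true f on all candidates then picks the global optimum.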
5 Experiments
In the experiments, we test whether the graph-based methods improve over the minimum-norm point algorithm in the difficult cases of Section 1.1. We compare the following methods:
MN: a re-implementation of the minimum norm point algorithm in C++ that is about four times faster than the C code in [7] (see [18]), ensuring that our results are not due to a slow implementation;
MC: a minimum cut with static edge weights ν(e) = h_g(e);
GI: the graph-based iterative Algorithm 1, implemented in C++ with the max-flow code of [3], (i) by itself; (ii) with summarization via sqrt(|E_t|) random groups (GIr); (iii) with summarization via groups generated by sorting the edges in E_t by their weights h_g(e), and then forming groups G_k of edges adjacent in the order such that for each e ∈ G_k, h_g(G_k) ≤ 1.1·h_g(e) (GIs);
Figure 4: (a) Running time, (b) relative and (c) absolute error with varying λ for a data set as described in Section 1.1, |V| = 54915, |U| = 6871, and f(S) = −m(S) + λ·sqrt(|N(S)|). Where f(S*) = 0, we show absolute errors. (d) Running times with respect to |V|, f(S) = −m(S) + λ·sqrt(w(N(S))).
GP: the parametric method from Section 4, using |Et | equispaced breakpoints; based on C code
from RIOT1 .
We also implemented the SLG method from [29] in C++ (public code is not available), but found
it to be impractical on the problems here, as gradient computation of our function requires finding
gradients of |U| truncation functions, which is quite expensive [18]. Thus, we did not include it in the
tests on the large graphs. We use bipartite graphs of the form described in Section 1.1, with a cost
function f(S) = m(S) + λ g(N(S)). The function g uses a square root, g(U) = √(w(U)). More
results, also on other functions, can be found in [18].
Solution quality with solution size. Running time and results depend on the size of S*. Thus, we
vary λ from 50 (S* ≈ V) to 9600 (S* = ∅) on a speech recognition data set [11]. The bipartite
graph represents a corpus subset extraction problem (Section 1.1) and has |V| = 54915, |U| = 6871
nodes, and uniform weights w(u) = 1 for all u ∈ U. The results look similar with non-uniform
weights, but for uniform weights the parametric method from Section 4 always finds the optimal
solution and thus allows us to report errors. Figure 4 shows the running times and the relative error
err(S) = |f(S) − f(S*)|/|f(S*)| (note that f(S*) ≤ 0). If f(S*) = 0, we report absolute errors.
Because of the large graph, we used the minimum-norm algorithm with accuracy 10⁻⁵. Still, it
takes up to 100 times longer than the other methods. It works well if S* is large, but as λ grows,
its accuracy becomes poor. In particular when f(S*) = f(∅) = 0, it returns large sets with large
positive cost. In contrast, the deviation of the approximate edge weights γ_i from the true cost is
bounded [18]. All algorithms except MN return an optimal solution for λ ≤ 2000. Updating the
weights γ clearly improves the performance of Algorithm 1, as does the summarization (GIr/GIs
perform identically here). With the latter, the solutions are very often optimal, and almost always
very good.
Scaling: To test how the methods scale with the size |V|, we sample small graphs from the big
graph, and report average running times across 20 graphs for each size. As the graphs have
non-uniform weights, we use GP as an approximation method and estimate the nonlinearity √(w(U)) by
a piecewise linear function with |U| breakpoints. All algorithms find the same (optimal) solution.
Figure 4(d) shows that the minimum-norm algorithm with high accuracy is much slower than the
other methods. Empirically, MN scales as up to O(n⁵) (note that Figure 1 is a specific worst-case
graph), the parametric version approximately O(n²), and the variants of GI up to O(n^1.5).
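A piecewise-linear estimate of the square-root nonlinearity of the kind used here can be built by chord interpolation at a list of breakpoints. The sketch below is illustrative (the helper name is assumed, and this is not the RIOT code); since √ is concave, each chord lower-bounds it on its segment.

```python
import math

def piecewise_linear_sqrt(breakpoints):
    """Return a function interpolating sqrt(x) at the given breakpoints.
    Between consecutive breakpoints the chord is used, which lower-bounds
    the concave square root on each segment."""
    pts = sorted(breakpoints)

    def g_hat(x):
        for x0, x1 in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return (1 - t) * math.sqrt(x0) + t * math.sqrt(x1)
        raise ValueError("x outside the breakpoint range")

    return g_hat
```

With more (or better-placed) breakpoints the approximation tightens, which is the trade-off the parametric method exploits.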
Acknowledgments: This material is based upon work supported in part by the National Science
Foundation under grant IIS-0535100, by an Intel research award, a Microsoft research award, and a
Google research award.
References
[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows. Prentice Hall, 1993.
[2] Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary and region segmentation of objects
in n-d images. In ICCV, 2001.
1 http://riot.ieor.berkeley.edu/riot/Applications/Pseudoflow/parametric.html
[3] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE TPAMI, 26(9):1124–1137, 2004.
[4] W. H. Cunningham. Decomposition of submodular functions. Combinatorica, 3(1):53–68, 1983.
[5] W. H. Cunningham. Testing membership in matroid polyhedra. J. Combinatorial Theory B, 36:161–188, 1984.
[6] D. Freedman and P. Drineas. Energy minimization via graph cuts: Settling what is possible. In CVPR, 2005.
[7] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3–17, 2011.
[8] S. Fujishige and S. Iwata. Minimizing a submodular function arising from a concave function. Discrete Applied Mathematics, 92, 1999.
[9] S. Fujishige and S. B. Patkar. Realization of set functions as cut functions of graphs and hypergraphs. Discrete Mathematics, 226:199–210, 2001.
[10] G. Gallo, M. D. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM J. Computing, 18(1), 1989.
[11] J. J. Godfrey, E. C. Holliman, and J. McDaniel. Switchboard: Telephone speech corpus for research and development. In Proc. ICASSP, volume 1, pages 517–520, 1992.
[12] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, 51(2), 1989.
[13] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid algorithm and its consequences in combinatorial optimization. Combinatorica, 1:499–513, 1981.
[14] D. Hochbaum. The pseudoflow algorithm: a new algorithm for the maximum flow problem. Operations Research, 58(4), 2008.
[15] S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. J. ACM, 48:761–777, 2001.
[16] S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR, 2011.
[17] S. Jegelka and J. Bilmes. Approximation bounds for inference using cooperative cuts. In ICML, 2011.
[18] S. Jegelka, H. Lin, and J. Bilmes. Fast approximate submodular minimization: Extended version, 2011.
[19] P. Kohli, L. Ladický, and P. Torr. Robust higher order potentials for enforcing label consistency. Int. J. Computer Vision, 82, 2009.
[20] H. Lin and J. Bilmes. An application of the submodular principal partition to training data subset selection. In NIPS Workshop on Discrete Optimization in Machine Learning, 2010.
[21] H. Lin and J. Bilmes. Optimal selection of limited vocabulary speech corpora. In Proc. Interspeech, 2011.
[22] S. T. McCormick. Submodular function minimization. In K. Aardal, G. Nemhauser, and R. Weismantel, editors, Handbook on Discrete Optimization, pages 321–391. Elsevier, 2006. Updated version 3a (2008).
[23] M. Narasimhan, N. Jojic, and J. Bilmes. Q-clustering. In NIPS, 2005.
[24] J. B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237–251, 2009.
[25] M. Preissmann and A. Sebő. Research Trends in Combinatorial Optimization, chapter Graphic Submodular Function Minimization: A Graphic Approach and Applications, pages 365–385. Springer, 2009.
[26] M. Queyranne. Minimizing symmetric submodular functions. Mathematical Programming, 82:3–12, 1998.
[27] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. J. Combin. Theory Ser. B, 80:346–355, 2000.
[28] A. Schrijver. Combinatorial Optimization. Springer, 2004.
[29] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In NIPS, 2010.
[30] J. Vondrák. Personal communication, 2011.
[31] S. Živný and P. G. Jeavons. Classes of submodular constraints expressible by graph cuts. Constraints, 15:430–452, 2010. ISSN 1383-7133.
[32] S. Živný, D. A. Cohen, and P. G. Jeavons. The expressive power of binary submodular functions. Discrete Applied Mathematics, 157(15):3347–3358, 2009.
Sparse Recovery with Brownian Sensing
Alexandra Carpentier
INRIA Lille
[email protected]
Odalric-Ambrym Maillard
INRIA Lille
[email protected]
Rémi Munos
INRIA Lille
[email protected]
Abstract
We consider the problem of recovering the parameter θ ∈ R^K of a sparse function
f (i.e. the number of non-zero entries of θ is small compared to the number K of
features) given noisy evaluations of f at a set of well-chosen sampling points. We
introduce an additional randomization process, called Brownian sensing, based on
the computation of stochastic integrals, which produces a Gaussian sensing matrix, for which good recovery properties are proven, independently on the number
of sampling points N, even when the features are arbitrarily non-orthogonal. Under the assumption that f is Hölder continuous with exponent at least 1/2, we provide an estimate θ̂ of the parameter such that ‖θ − θ̂‖₂ = O(‖η‖₂/√N), where
η is the observation noise. The method uses a set of sampling points uniformly
distributed along a one-dimensional curve selected according to the features. We
report numerical experiments illustrating our method.
1 Introduction
We consider the problem of sensing an unknown function f : X → R (where X ⊂ R^d), where f
belongs to the span of a large set of (known) features {φ_k}_{1≤k≤K} of L²(X):

f(x) = Σ_{k=1}^{K} θ_k φ_k(x),

where θ ∈ R^K is the unknown parameter, and is assumed to be S-sparse, i.e. ‖θ‖₀ := |{k : θ_k ≠ 0}| ≤ S. Our goal is to recover θ as accurately as possible.
In the setting considered here we are allowed to select the points {x_n}_{1≤n≤N} ∈ X where the
function f is evaluated, which results in the noisy observations

y_n = f(x_n) + η_n ,    (1)

where η_n is an observation noise term. We assume that the noise is bounded, i.e., ‖η‖₂² := Σ_{n=1}^{N} η_n² ≤ σ². We write D_N = ({x_n, y_n}_{1≤n≤N}) the set of observations and we are interested in situations where N ≪ K, i.e., the number of observations is much smaller than the
number of features φ_k.

The question we wish to address is: how well can we recover θ based on a set of N noisy measurements? Note that whenever the noise is non-zero, the recovery cannot be perfect, so we wish to
express the estimation error ‖θ − θ̂‖₂ in terms of N, where θ̂ is our estimate.
The proposed method. We address the problem of sparse recovery by combining the two ideas:

• Sparse recovery theorems (see Section 2) essentially say that in order to recover a vector
with a small number of measurements, one needs incoherence. The measurement basis,
corresponding to the pointwise evaluations f(x_n), should be incoherent with the representation basis, corresponding to the one on which the vector θ is sparse. Interpreting
these bases in terms of linear operators, pointwise evaluation of f is equivalent to measuring f using Dirac masses δ_{x_n}(f) = f(x_n). Since in general the representation basis
{φ_k}_{1≤k≤K} is not incoherent with the measurement basis induced by Dirac operators, we
would like to consider another measurement basis, possibly randomized, in order that it
becomes incoherent with any representation basis.

• Since we are interested in reconstructing θ, and since we assumed that f is linear in θ,
we can apply any set of M linear operators {T_m}_{1≤m≤M} to f = Σ_k θ_k φ_k, and consider
the problem transformed by the operators; the parameter θ is thus also the solution to the
transformed problem T_m(f) = Σ_k θ_k T_m(φ_k).
Thus, instead of considering the N×K sensing matrix Φ = (δ_{x_n}(φ_k))_{k,n}, we consider a new M×K
sensing matrix A = (T_m(φ_k))_{k,m}, where the operators {T_m}_{1≤m≤M} enforce incoherence between
bases. Provided that we can estimate T_m(f) with the data set D_N, we will be able to recover θ. The
Brownian sensing approach followed here uses stochastic integral operators {T_m}_{1≤m≤M}, which
makes the measurement basis incoherent with any representation basis, and generates a sensing
matrix A which is Gaussian (with i.i.d. rows).

The proposed algorithm (detailed in Section 3) recovers θ by solving the system Aθ ≈ b̂ by ℓ₁
minimization¹, where b̂ ∈ R^M is an estimate, based on the noisy observations y_n, of the vector
b ∈ R^M whose components are b_m = T_m f.
Contribution: Our contribution is a sparse recovery result for arbitrary non-orthonormal functional
bases {φ_k}_{k≤K} of a Hölder continuous function f. Theorem 4 states that our estimate θ̂ satisfies
‖θ − θ̂‖₂ = O(‖η‖₂/√N) with high probability whatever N, under the assumption that the noise
η is globally bounded, such as in [3, 12]. This result is obtained by combining two contributions:

• We show that when the sensing matrix A is Gaussian, i.e. when each row of the matrix is
drawn i.i.d. from a Gaussian distribution, orthonormality is not required for sparse recovery.
This result, stated in Proposition 1 (and used in Step 1 of the proof of Theorem 4), is a
consequence of Theorem 3.1 of [10].

• The sensing matrix A is made Gaussian by choosing the operators T_m to be stochastic integrals: T_m f = (1/√M) ∫_C f dB^m, where the B^m are Brownian motions, and C is a 1-dimensional
curve of X appropriately chosen according to the functions {φ_k}_{k≤K} (see the discussion
in Section 4). We call A the Brownian sensing matrix.

We have the property that the recovery property using the Brownian sensing matrix A only depends
on the number of Brownian motions M used in the stochastic integrals and not on the number of
sampled points N. Note that M can be chosen arbitrarily large as it is not linked with the limited
amount of data, but M affects the overall computational complexity of the method. The number of
samples N appears in the quality of estimation of b only, and this is where the assumption that f is
Hölder continuous comes into the picture.

Outline: In Section 2, we survey the large body of existing results about sparse recovery and relate
our contribution to this literature. In Section 3, we explain in detail the Brownian sensing recovery
method sketched above and state our main result in Theorem 4.
In Section 4, we first discuss our result and compare it with existing work. Then we comment on
the choice and influence of the sampling domain C on the recovery performance.
Finally in Section 5, we report numerical experiments illustrating the recovery properties of the
Brownian sensing method, and the benefit of the latter compared to a straightforward application of
compressed sensing when there is noise and very few sampling points.

¹ where the approximation sign ≈ refers to a minimization problem under a constraint coming from the
observation noise.
2 Relation to existing results
A standard approach in order to recover θ is to consider the N×K matrix Φ = (φ_k(x_n))_{k,n}, and
solve the system Φθ̂ ≈ y where y is the vector with components y_n. Since N ≪ K this is an ill-posed problem. Under the sparsity assumption, a successful idea is first to replace the initial problem
with the well-defined problem of minimizing the ℓ₀ norm of θ under the constraint that Φθ̂ ≈ y, and
then, since this problem is NP-hard, use convex relaxation of the ℓ₀ norm by replacing it with the ℓ₁
norm. We then need to ensure that the relaxation provides the same solution as the initial problem
making use of the ℓ₀ norm. The literature on this problem is huge (see [3, 7, 8, 15, 18, 4, 11] for
examples of papers that initiated this field of research).

Generally, we can decompose the reconstruction problem into two distinct sub-problems. The first
sub-problem (a) is to state conditions on the matrix Φ ensuring that the recovery is possible and
derive results for the estimation error under such conditions:
The first important condition is the Restricted Isometry Property (RIP), introduced in [5], from
which we can derive the following recovery result stated in [6]:

Theorem 1 (Candès et al., 2006) Let δ_S be the restricted isometry constant of ΦᵀΦ/N, defined as
δ_S = sup{ | ‖Φa‖₂²/(N‖a‖₂²) − 1 | ; ‖a‖₀ ≤ S }. Then if δ_3S + 3δ_4S < 2, for every S-sparse vector θ ∈ R^K, the
solution θ̂ to the ℓ₁-minimization problem min{ ‖a‖₁ ; a satisfies ‖Φa − y‖₂² ≤ σ² } satisfies

‖θ̂ − θ‖₂² ≤ C_S σ²/N,

where C_S depends only on δ_4S.
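The restricted isometry constant is a supremum over all S-sparse directions and is intractable to compute exactly, but a cheap Monte Carlo check gives a lower bound on it. The sketch below is illustrative (the helper name is assumed, not from any paper):

```python
import numpy as np

def rip_constant_lower_bound(Phi, S, n_trials=500, seed=0):
    """Monte Carlo lower bound on the restricted isometry constant delta_S
    of Phi^T Phi / N: the largest observed deviation of
    ||Phi a||_2^2 / (N ||a||_2^2) from 1 over random S-sparse vectors a."""
    rng = np.random.default_rng(seed)
    N, K = Phi.shape
    worst = 0.0
    for _ in range(n_trials):
        support = rng.choice(K, size=S, replace=False)
        a = np.zeros(K)
        a[support] = rng.normal(size=S)
        ratio = np.linalg.norm(Phi @ a) ** 2 / (N * np.linalg.norm(a) ** 2)
        worst = max(worst, abs(ratio - 1.0))
    return worst
```

If this lower bound already exceeds the level a theorem requires, the matrix certainly fails RIP; a small value is only evidence, not a certificate, since the supremum ranges over all supports and coefficients.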
Apart from the historical RIP, many other conditions emerged from works reporting the practical
difficulty to have the RIP satisfied, and thus weaker conditions ensuring reconstruction were derived.
See [17] for a precise survey of such conditions. A weaker condition for recovery is the compatibility
condition, which leads to the following result from [16]:

Theorem 2 (van de Geer & Bühlmann, 2009) Assuming that the compatibility condition is satisfied, i.e. for a set S of indices of cardinality S and a constant L,

C(L, S) = min{ S ‖Φβ‖₂² / (N ‖β_S‖₁²) ; β satisfies ‖β_{S^c}‖₁ ≤ L ‖β_S‖₁ } > 0,

then for every S-sparse vector θ ∈ R^K, the solution θ̂ to the ℓ₁-minimization problem
min{ ‖β‖₁ ; β satisfies ‖β_{S^c}‖₁ ≤ L ‖β_S‖₁ } satisfies, for C a numerical constant,

‖θ̂ − θ‖₂² ≤ C σ² log(K) / (C(L, S)² N).
The second sub-problem (b) of the global reconstruction problem is to provide the user with a
simple way to efficiently sample the space in order to build a matrix Φ such that the conditions
for recovery are fulfilled, at least with high probability. This can be difficult in practice since it
involves understanding the geometry of high dimensional objects. For instance, to the best of our
knowledge, there is no result explaining how to sample the space so that the corresponding sensing
matrix Φ satisfies the nice recovery properties needed by the previous theorems, for a general family
of features {φ_k}_{k≤K}.

However, it is proven in [12] that under some hypotheses on the functional basis, we are able to
recover the strong RIP property for the matrix Φ with high probability. This result, combined with a
recovery result, is stated as follows:

Theorem 3 (Rauhut, 2010) Assume that {φ_k}_{k≤K} is an orthonormal basis of functions under a
measure ν, bounded by a constant C_φ, and that we build D_N by sampling f at random according
to ν. Assume also that the noise is bounded, ‖η‖₂ ≤ σ. If N/log(N) ≥ c₀ C_φ² S log(S)² log(K) and
N ≥ c₁ C_φ² S log(p⁻¹), then with probability at least 1 − p, for every S-sparse vector θ ∈ R^K, the
solution θ̂ to the ℓ₁-minimization problem min{ ‖a‖₁ ; a satisfies ‖Φa − y‖₂² ≤ σ² } satisfies

‖θ̂ − θ‖₂² ≤ c₂ σ²/N,

where c₀, c₁ and c₂ are some numerical constants.

In order to prove this theorem, the author of [12] showed that by sampling the points i.i.d. from ν,
then with high probability the resulting matrix Φ is RIP. The strong point of this theorem is that
we do not need to check conditions on the matrix Φ to guarantee that it is RIP, which in practice is
infeasible. But the weakness of the result is that the initial basis has to be orthonormal and bounded
under the given measure ν in order to get the RIP satisfied: the two conditions ensure incoherence
with the Dirac observation basis. The specific case of an unbounded basis, i.e., the Legendre polynomial
basis, has been considered in [13], but to the best of our knowledge, the problem of designing a
general sampling strategy such that the resulting sensing matrix possesses nice recovery properties
in the case of a non-orthonormal basis remains unaddressed. Our contribution considers this case and
is described in the following section.
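As a concrete (if simplified) stand-in for the ℓ₁-minimization estimators appearing in the theorems above, the sketch below uses greedy orthogonal matching pursuit: not the ℓ₁ program itself, but a standard solver for the same sparse system Φθ ≈ y; all sizes and values in the example are illustrative.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then refit by least squares on the selected support.
    A lightweight stand-in for the l1-minimization estimators above."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    theta = np.zeros(Phi.shape[1])
    theta[support] = coef
    return theta
```

Greedy pursuit and ℓ₁ minimization succeed under closely related incoherence conditions; for poorly conditioned Φ the two can differ, which is exactly the regime the RIP-type conditions rule out.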
3 The “Brownian sensing” approach
A need for incoherence. When the representation and observation bases are not incoherent, the
sensing matrix Φ does not possess a nice recovery property. A natural idea is to change the observation basis by introducing a set of M linear operators {T_m}_{m≤M} acting on the functions {φ_k}_{k≤K}.
We have T_m(f) = Σ_{k=1}^{K} θ_k T_m(φ_k) for all 1 ≤ m ≤ M and our goal is to define the operators
{T_m}_{m≤M} in order that the sensing matrix (T_m(φ_k))_{m,k} enjoys a nice recovery property, whatever
the representation basis {φ_k}_{k≤K}.

The Brownian sensing operators. We now consider linear operators defined by stochastic integrals on a 1-dimensional curve C of X. First, we need to select a curve C ⊂ X of length l, such
that the covariance matrix V_C, defined by its elements (V_C)_{i,j} = ∫_C φ_i φ_j (for 1 ≤ i, j ≤ K), is
invertible. We will discuss the existence of such a curve later in Section 4. Then, we define the
linear operators {T_m}_{1≤m≤M} as stochastic integrals over the curve C: T_m(g) = (1/√M) ∫_C g dB^m,
where {B^m}_{m≤M} are M independent Brownian motions defined on C.

Note that up to an appropriate speed-preserving parametrization g : [0, l] → X of C, we can
work with the corresponding induced family {ψ_k}_{k≤K}, where ψ_k = φ_k ∘ g, instead of the family {φ_k}_{k≤K}.
The sensing method. With the choice of the linear operators {T_m}_{m≤M} defined above, the parameter θ ∈ R^K now satisfies the following equation

A θ = b ,    (2)

where b ∈ R^M is defined by its components b_m = T_m(f) = (1/√M) ∫_C f(x) dB^m(x), and the so-called Brownian sensing matrix A (of size M×K) has elements A_{m,k} = T_m(φ_k). Note that we
do not require sampling f in order to compute the elements of A. Thus, the samples only serve for
estimating b, and for this purpose we sample f at points {x_n}_{1≤n≤N} regularly chosen along the
curve C.

In general, for a curve C parametrized with a speed-preserving parametrization g : [0, l] → X of C,
we have x_n = g(nl/N), and the resulting estimate b̂ ∈ R^M of b is defined with components:

b̂_m = (1/√M) Σ_{n=0}^{N−1} y_n (B^m(x_{n+1}) − B^m(x_n)) .    (3)

Note that in the special case when X = C = [0, 1], we simply have x_n = n/N.
The final step of the proposed method is to apply standard recovery techniques (e.g., ℓ₁ minimization
or Lasso) to compute θ̂ for the system (2), where b is perturbed by the so-called sensing noise
ε := b − b̂ (estimation error of the stochastic integrals).

3.1 Properties of the transformed objects
We now give two properties of the Brownian sensing matrix A and the sensing noise ε = b − b̂.

Brownian sensing matrix. By definition of the stochastic integral operators {T_m}_{m≤M}, the sensing
matrix A = (T_m(φ_k))_{m,k} is a centered Gaussian matrix, with

Cov(A_{m,k}, A_{m,k′}) = (1/M) ∫_C φ_k(x) φ_{k′}(x) dx .

Moreover, by independence of the Brownian motions, each row A_{m,·} is i.i.d. from a centered Gaussian distribution N(0, V_C/M), where V_C is the K×K covariance matrix of the basis, defined by its
elements V_{k,k′} = ∫_C φ_k(x) φ_{k′}(x) dx. Thanks to this nice structure, we can prove that A possesses a
property similar to RIP (in the sense of [10]) whenever M is large enough:

Proposition 1 For p > 0 and any integer t > 0, when M > C′(t log(K/t) + log(1/p)), with C′
being a universal constant (defined in [14, 1]), then with probability at least 1 − p, for all t-sparse
vectors x ∈ R^K,

(1/2) ν_min,C ‖x‖₂ ≤ ‖Ax‖₂ ≤ (3/2) ν_max,C ‖x‖₂ ,

where ν_max,C and ν_min,C are respectively the largest and smallest eigenvalues of V_C^{1/2}.
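This Gaussian structure is easy to check numerically. The sketch below is illustrative (names and sizes are assumptions, not from the paper): it takes C = [0, 1] with φ₁(x) = 1 and φ₂(x) = x, so V_C = [[1, 1/2], [1/2, 1/3]], and verifies that rows of raw stochastic integrals ∫_C φ_k dB have empirical covariance close to V_C.

```python
import numpy as np

# C = [0, 1], phi_1(x) = 1, phi_2(x) = x, so V_C = [[1, 1/2], [1/2, 1/3]].
n_steps, n_rows = 200, 20000
rng = np.random.default_rng(0)
grid = np.arange(n_steps) / n_steps           # left endpoints of [0, 1]
phis = np.vstack([np.ones_like(grid), grid])  # phi_1 and phi_2 on the grid

# Each row holds (integral of phi_1 dB, integral of phi_2 dB) for one Brownian
# motion, approximated by left-endpoint sums over the Brownian increments dB.
dB = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_rows, n_steps))
rows = dB @ phis.T                            # shape (n_rows, 2)
V_hat = rows.T @ rows / n_rows                # empirical covariance of the rows
```

Dividing each row by √M would give exactly the N(0, V_C/M) distribution of the Brownian sensing rows stated above.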
Sensing noise. In order to state our main result, we need a bound on ‖ε‖₂². We consider the simplest
deterministic sensing design, where we choose the sensing points to be uniformly distributed along
the curve C.²

Proposition 2 Assume that ‖η‖₂² ≤ σ² and that f is (L, α)-Hölder, i.e.

∀(x, y) ∈ X², |f(x) − f(y)| ≤ L|x − y|^α ,

then for any p ∈ (0, 1], with probability at least 1 − p, we have the following bound on the sensing
noise ε = b − b̂:

‖ε‖₂² ≤ ζ²(N, M, p)/N ,

where

ζ²(N, M, p) := 2 ( L² l^{2α}/N^{2α−1} + σ² ) ( 1 + 2√(log(1/p)/M) + 4 log(1/p)/M ) .

Remark 1 The bound on the sensing noise ‖ε‖₂² contains two contributions: an approximation
error term which comes from the approximation of a stochastic integral with N points and that
scales with L² l^{2α}/N^{2α}, and the observation noise term of order σ²/N. The observation noise term
(when σ² > 0) dominates the approximation error term whenever α ≥ 1/2.
3.2 Main result.

In this section, we state our main recovery result for the Brownian sensing method, described in
Figure 1, using a uniform sampling method along a one-dimensional curve C ⊂ X ⊂ R^d. The proof
of the following theorem can be found in the supplementary material.

Theorem 4 (Main result) Assume that f is (L, α)-Hölder on X and that V_C is invertible. Let us
write the condition number κ_C = ν_max,C/ν_min,C, where ν_max,C and ν_min,C are respectively the
largest and smallest eigenvalues of V_C^{1/2}. Write r = (3κ_C − 1)²/(4√2 − 1). For any p ∈ (0, 1], let
M ≥ 4c(4Sr log(K/(4Sr)) + log(1/p)) (where c is a universal constant defined in [14, 1]). Then, with
probability at least 1 − 3p, the solution θ̂ obtained by the Brownian sensing approach described in
Figure 1 satisfies

‖θ̂ − θ‖₂² ≤ C κ_C⁴/(max_k ∫_C φ_k²) · ζ²(N, M, p)/N ,

where C is a numerical constant and ζ²(N, M, p) is defined in Proposition 2.

Note that a similar result (not reported in this conference paper) can be proven in the case of
i.i.d. sub-Gaussian noise, instead of a noise with bounded ℓ₂ norm considered here.

² Note that other deterministic, random, or low-discrepancy sequences could be used here.
5
Input: a curve C of length l such that V_C is invertible. Parameters N and M.
- Select N uniform samples {x_n}_{1≤n≤N} along the curve C.
- Generate M Brownian motions {B^m}_{1≤m≤M} along C.
- Compute the Brownian sensing matrix A ∈ ℝ^{M×K} (i.e. A_{m,k} = (1/√M) ∫_C φ_k(x) dB^m(x)).
- Compute the estimate b̂ ∈ ℝ^M (i.e. b̂_m = (1/√M) Σ_{n=0}^{N−1} y_n (B^m(x_{n+1}) − B^m(x_n))).
- Find α̂, solution to min_a ‖a‖₁ such that ‖Aa − b̂‖₂² ≤ ε²(N, M, p)/N.

Figure 1: The Brownian sensing approach using a uniform sampling along the curve C.
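The pipeline of Figure 1 can be sketched numerically in dimension one. The cosine basis, the 3-sparse coefficient vector and the sizes below are illustrative assumptions, not the paper's experiment, and the final ℓ1-minimization step is omitted; the sketch verifies the key identity that, without observation noise, the discretized stochastic integrals satisfy Aα = b̂ exactly, by linearity.

```python
# Minimal sketch of the Brownian sensing construction (Figure 1), with
# hypothetical basis functions and coefficients chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

K, M, N = 20, 50, 200                 # basis size, Brownian motions, samples
l = 2 * np.pi                         # length of the curve C = [0, l]
x = np.linspace(0.0, l, N + 1)        # grid x_0 < ... < x_N along C
dx = l / N

def phi(k, t):
    return np.cos(k * t) / np.sqrt(np.pi)   # assumed basis functions

alpha = np.zeros(K)
alpha[[2, 7, 11]] = [1.0, -0.5, 2.0]  # sparse coefficients of f

def f(t):
    return sum(alpha[k] * phi(k, t) for k in range(K))

# Brownian increments B^m(x_{n+1}) - B^m(x_n) ~ N(0, dx), independent.
dB = rng.normal(0.0, np.sqrt(dx), size=(M, N))

# Sensing matrix A_{m,k} = (1/sqrt(M)) sum_n phi_k(x_n) (B^m(x_{n+1}) - B^m(x_n))
Phi = np.array([phi(k, x[:N]) for k in range(K)])     # K x N
A = (dB @ Phi.T) / np.sqrt(M)                         # M x K

# Estimate of b_m = (1/sqrt(M)) int_C f dB^m from the noiseless samples y_n
y = f(x[:N])
b_hat = (dB @ y) / np.sqrt(M)

print(np.linalg.norm(A @ alpha - b_hat))              # ~ 0
```

With noisy samples the residual would instead be of the order of the sensing noise, which is what the constraint ‖Aa − b̂‖₂² ≤ ε²(N, M, p)/N accounts for.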
4 Discussion.
In this section we discuss the differences with previous results, especially with the work [12] recalled
in Theorem 3. We then comment on the choice of the curve C and illustrate examples of such curves
for different bases.
4.1 Comparison with known results
The order of the bound. Concerning the scaling of the estimation error in terms of the number of sensing points N, Theorem 3 of [12] (reminded in Section 2) states that when N is large enough (i.e., N = Ω(S log(K))), we can build an estimate α̂ such that ‖α̂ − α‖₂² = O(σ²/N). In comparison, our bound shows that ‖α̂ − α‖₂² = O(L²l^{2α}/N^{2α} + σ²/N) for any values of N. Thus, provided that the function f has a Hölder exponent α ≥ 1/2, we obtain the same rate as in Theorem 3.
A weak assumption about the basis. Note that our recovery performance scales with the condition number κ_C of V_C as well as the length l of the curve C. However, concerning the hypothesis on the functions {φ_k}_{k≤K}, we only assume that the covariance matrix V_C is invertible on the curve C, which enables us to handle arbitrarily non-orthonormal bases. This means that the orthogonality condition on the basis functions is not a crucial requirement to deduce sparse recovery properties. To the best of our knowledge, this is an improvement over previously known results (such as the work of [12]). Note however that if κ_C or l are too high, then the bound becomes loose. Also the computational complexity of the Brownian sensing increases when κ_C is large, since it is necessary to take a large M, i.e. to simulate more Brownian motions in that case.
A result that holds without any conditions on the number of sampling points. Theorem 4 requires a constraint on the number of Brownian motions M (i.e., that M = Ω(S log K)) and not on the number of sampling points N (as in [12], see Theorem 3). This is interesting in practical situations when we do not know the value of S, as we do not have to assume a lower bound on N to deduce the estimation error result. This is due to the fact that the Brownian sensing matrix A only depends on the computation of the M stochastic integrals of the K functions φ_k, and does not depend on the samples. The bound shows that we should take M as large as possible. However, M impacts the numerical cost of the method. This implies in practice a trade-off between a large M for a good estimation of α and a low M for low numerical cost.
4.2 The choice of the curve
Why sampling along a 1-dimensional curve C instead of sampling over the whole space X? In a bounded space X of dimension 1, both approaches are identical. But in dimension d > 1, following the Brownian sensing approach while sampling over the whole space would require generating M Brownian sheets (extensions of Brownian motions to d > 1 dimensions) over X, and then building the M × K matrix A with elements A_{m,k} = ∫_X φ_k(t₁, ..., t_d) dB₁^m(t₁)...dB_d^m(t_d). Assuming that the covariance matrix V_X is invertible, this Brownian sensing matrix is also Gaussian and enjoys the same recovery properties as in the one-dimensional case. However, in this case, estimating the stochastic integrals b_m = ∫_X f dB^m using sensing points along a (d-dimensional) grid would provide an estimation error ξ = b̂ − b that scales poorly with d, since we integrate over a d-dimensional space. This explains our choice of selecting a 1-dimensional curve C instead of the whole space X and sampling N points along the curve. This choice provides indeed a better estimation of b, which is defined by 1-dimensional stochastic integrals over C. Note that the only requirement for the choice of the curve C is that the covariance matrix V_C defined along this curve should be invertible. In addition, in some specific applications the sampling process can be very constrained by physical systems and sampling uniformly in all the domain is typically costly. For example in some medical experiments, e.g., scanner or MRI, it is only possible to sample along straight lines.
What the parameters of the curve tell us on a basis. In the result of Theorem 4, the length l of the curve C as well as the condition number κ_C = ν_max,C / ν_min,C are essential characteristics of the efficiency of the method. It is important to note that those two variables are actually related. Indeed, it may not be possible to find a short curve C such that κ_C is small. For instance in the case where the basis functions have compact support, if the curve C does not pass through the support of all functions, V_C will not be invertible. Any function whose support does not intersect with the curve would indeed be an eigenvector of V_C with a 0 eigenvalue. This indicates that the method will not work well in the case of a very localized basis {φ_k}_{k≤K} (e.g. wavelets with compact support), since the curve would have to cover the whole domain and thus l will be very large. On the other hand, the situation may be much nicer when the basis is not localized, as in the case of a Fourier basis. We show in the next subsection that in a d-dimensional Fourier basis, it is possible to find a curve C (actually a segment) such that the basis is orthonormal along the chosen line (i.e. κ_C = 1).
4.3 Examples of curves
For illustration, we exhibit three cases for which one can easily derive a curve C such that V_C is invertible. The method described in the previous section will work with the following examples.

X is a segment of ℝ: In this case, we simply take C = X, and the sparse recovery is possible whenever the functions {φ_k}_{k≤K} are linearly independent in L².
Coordinate functions: Consider the case when the basis functions are the coordinate functions φ_k(t₁, ..., t_d) = t_k. Then we can define the parametrization of the curve C by g(t) = λ(t)(t, t², ..., t^d), where λ(t) is the solution to a differential equation such that ‖g′(t)‖₂ = 1 (which implies that for any function h, ∫ h(g(t)) dt = ∫_C h). The corresponding functions φ̃_k(t) = λ(t)t^k are linearly independent, since the only functions λ(t) such that the {φ̃_k}_{k≤K} are not linearly independent are functions that are 0 almost everywhere, which would contradict the definition of λ(t). Thus V_C is invertible.
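This can be sanity-checked numerically under simplifying assumptions (λ(t) = 1 and C = [0, 1], which is enough to test linear independence): the Gram matrix of the monomials t^k along the curve is symmetric positive definite, hence the corresponding V_C is invertible.

```python
# Numerical check (simplified setting, illustrative only) that the Gram
# matrix of t, t^2, ..., t^K on [0, 1] is symmetric positive definite.
import numpy as np

K = 4
n_pts = 10_000
t = (np.arange(n_pts) + 0.5) / n_pts              # midpoint quadrature nodes
dt = 1.0 / n_pts
F = np.vstack([t ** (k + 1) for k in range(K)])   # rows: t, t^2, ..., t^K

V = (F @ F.T) * dt            # V[j, k] ~= integral_0^1 t^(j+1) t^(k+1) dt
eig_min = np.linalg.eigvalsh(V)[0]
print("smallest eigenvalue:", eig_min)            # small but strictly positive
```

The smallest eigenvalue is tiny (such moment matrices are ill-conditioned) but bounded away from zero, which is all invertibility requires.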
Fourier basis: Let us now consider the Fourier basis in ℝ^d with frequency T:

φ_{n₁,...,n_d}(t₁, ..., t_d) = Π_j exp(2iπ n_j t_j / T),

where n_j ∈ {0, ..., T − 1} and t_j ∈ [0, 1]. Note that this basis is orthonormal under the uniform distribution on [0, 1]^d. In this case we define g by g(t) = λ(t/T^{d−1}, tT/T^{d−1}, ..., tT^{d−1}/T^{d−1}) with λ = ((1 − T^{−2})/(1 − T^{−2d}))^{1/2} (so that ‖g′(t)‖₂ = 1), thus we deduce that:

φ_{n₁,...,n_d}(g(t)) = exp(2iπ t λ Σ_j n_j T^{j−1} / T^d).

Since n_k ∈ {0, ..., T − 1}, the mapping that associates Σ_j n_j T^{j−1} to (n₁, ..., n_d) is a bijection from {0, ..., T − 1}^d to {0, ..., T^d − 1}. Thus we can identify the family (φ_{n₁,...,n_d}) with the one-dimensional Fourier basis with frequency T^d/λ, which means that the condition number κ_C = 1 for this curve. Therefore, for a d-dimensional function f, sparse in the Fourier basis, it is sufficient to sample along the curve induced by g to ensure that V_C is invertible.
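The bijection argument can be checked directly; the values of T and d below are arbitrary illustrative choices.

```python
# Check that (n_1, ..., n_d) -> sum_j n_j T^(j-1) maps {0, ..., T-1}^d
# one-to-one onto {0, ..., T^d - 1} (base-T positional encoding).
from itertools import product

T, d = 3, 4
codes = {sum(n[j] * T ** j for j in range(d)) for n in product(range(T), repeat=d)}
print(len(codes), "distinct codes out of", T ** d, "tuples")
```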
5 Numerical Experiments
In this section, we illustrate the method of Brownian sensing in dimension one. We consider a non-orthonormal family {φ_k}_{k≤K} of K = 100 functions of L²([0, 2π]) defined by φ_k(t) = (cos(tk) + cos(t(k + 1)))/√(2π). In the experiments, we use a function f whose decomposition is 3-sparse and which is (10, 1)-Hölder, and we consider a bounded observation noise η, with different noise levels, where the noise level is defined by σ² = Σ_{n=1}^N η_n².
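This family can be inspected numerically (a sketch, not the paper's experiment): its Gram matrix on [0, 2π] has unit diagonal but large off-diagonal entries (around 0.5, so the family is far from orthonormal), yet remains invertible, which is the only property the method requires.

```python
# Gram matrix of phi_k(t) = (cos(kt) + cos((k+1)t)) / sqrt(2*pi) on [0, 2*pi],
# computed by equispaced quadrature (exact for trigonometric polynomials).
import numpy as np

K = 10
n_pts = 20_000
t = (np.arange(n_pts) + 0.5) * (2 * np.pi / n_pts)
dt = 2 * np.pi / n_pts
F = np.vstack([(np.cos(k * t) + np.cos((k + 1) * t)) / np.sqrt(2 * np.pi)
               for k in range(1, K + 1)])

V = (F @ F.T) * dt                       # V[j, k] ~= integral of phi_j phi_k
off_max = np.abs(V - np.diag(np.diag(V))).max()
eig_min = np.linalg.eigvalsh(V)[0]
print(f"max off-diagonal {off_max:.3f}, smallest eigenvalue {eig_min:.3f}")
```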
[Figure 2 shows three panels, each titled "Comparison of ℓ1-minimization and Brownian Sensing" with noise variance 0, 0.5 and 1 respectively; each panel plots the mean quadratic error (y-axis) against the number of sampling points (x-axis, 5 to 50).]

Figure 2: Mean squared estimation error using Brownian sensing (plain curve) and a direct ℓ1-minimization solving Φα ≈ y (dashed line), for different noise levels (σ² = 0, σ² = 0.5, σ² = 1), plotted as a function of the number of sample points N.
In Figure 2, the plain curve represents the recovery performance, i.e., mean squared error, of Brownian sensing, i.e., minimizing ‖a‖₁ under the constraint that ‖Aa − b̂‖₂ ≤ 1.95√(2(100/N + 2)), using M = 100 Brownian motions and a regular grid of N points, as a function of N.³ The dashed curve represents the mean squared error of a regular ℓ1-minimization of ‖a‖₁ under the constraint that ‖Φa − y‖₂² ≤ σ² (as described e.g. in [12]), where the N samples are drawn uniformly randomly over the domain. The three different graphics correspond to different values of the noise level σ² (from left to right 0, 0.5 and 1). Note that the results are averaged over 5000 trials.

Figure 2 illustrates that, as expected, Brownian sensing outperforms the method described in [12] for noisy measurements.⁴ Note also that the method described in [12] recovers the sparse vector when there is no noise, and that Brownian sensing in this case has a smoother dependency w.r.t. N. Note that this improvement comes from the fact that we use the Hölder regularity of the function: compressed sensing may outperform Brownian sensing for arbitrarily non-regular functions.
Conclusion

In this paper, we have introduced a so-called Brownian sensing approach, as a way to sample an unknown function which has a sparse representation on a given non-orthonormal basis. Our approach differs from previous attempts to apply compressed sensing in the fact that we build a "Brownian sensing" matrix A based on a set of Brownian motions, which is independent of the function f. This enables us to guarantee nice recovery properties of A. The function evaluations are used to estimate the right-hand side term b (stochastic integrals). In dimension d we proposed to sample the function along a well-chosen curve, i.e. such that the corresponding covariance matrix is invertible. We provided competitive reconstruction error rates of order O(‖η‖₂/√N) when the observation noise η is bounded and f is assumed to be Hölder continuous with exponent at least 1/2. We believe that the Hölder assumption is not strictly required (the smoothness of f is assumed to derive nice estimations of the stochastic integrals only), and future works will consider weakening this assumption, possibly by considering randomized sampling designs.
³ We assume that we know a loose bound on the noise level, here σ² ≤ 2, and we take p = 0.01.
⁴ Note however that there is no theoretical guarantee that the method described in [12] works here since the functions are not orthonormal.
Acknowledgements

This research was partially supported by the French Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER through CPER 2007-2013, ANR projects EXPLO-RA (ANR-08-COSI-004) and Lampada (ANR-09-EMER-007), by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS), and by Pascal-2.
References

[1] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, 2008.
[2] G. Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33-45, 1962.
[3] E. Candès and J. Romberg. Sparsity and incoherence in compressive sampling. Inverse Problems, 23:969-985, 2007.
[4] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313-2351, 2007.
[5] E.J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, 2006.
[6] E.J. Candès, J.K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207, 2006.
[7] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
[8] D.L. Donoho and P.B. Stark. Uncertainty principles and signal recovery. SIAM Journal on Applied Mathematics, 49(3):906-931, 1989.
[9] M. Fornasier and H. Rauhut. Compressive sensing. In O. Scherzer, editor, Handbook of Mathematical Methods in Imaging. Springer, to appear.
[10] S. Foucart and M.J. Lai. Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis, 26(3):395-407, 2009.
[11] V. Koltchinskii. The Dantzig selector and sparsity oracle inequalities. Bernoulli, 15(3):799-828, 2009.
[12] H. Rauhut. Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9, 2010.
[13] H. Rauhut and R. Ward. Sparse Legendre expansions via l1 minimization. arXiv preprint arXiv:1003.0251, 2010.
[14] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Communications on Pure and Applied Mathematics, 61(8):1025-1045, 2008.
[15] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994.
[16] Sara A. van de Geer. The deterministic lasso. Seminar für Statistik, Eidgenössische Technische Hochschule (ETH) Zürich, 2007.
[17] Sara A. van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360-1392, 2009.
[18] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2563, 2006.
Models for Neural Network Classifiers
Padhraic Smyth
Communications Systems Research
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
Abstract
Given some training data how should we choose a particular network classifier from a family of networks of different complexities? In this paper
we discuss how the application of stochastic complexity theory to classifier
design problems can provide some insights into this problem. In particular
we introduce the notion of admissible models whereby the complexity of
models under consideration is affected by (among other factors) the class
entropy, the amount of training data, and our prior belief. In particular
we discuss the implications of these results with respect to neural architectures and demonstrate the approach on real data from a medical diagnosis
task.
1 Introduction and Motivation
In this paper we examine in a general sense the application of Minimum Description
Length (MDL) techniques to the problem of selecting a good classifier from a large
set of candidate models or hypotheses. Pattern recognition algorithms differ from
more conventional statistical modeling techniques in the sense that they typically
choose from a very large number of candidate models to describe the available data.
Hence, the problem of searching through this set of candidate models is frequently
a formidable one, often approached in practice by the use of greedy algorithms. In
this context, techniques which allow us to eliminate portions of the hypothesis space
are of considerable interest. We will show in this paper that it is possible to use the
intrinsic structure of the MDL formalism to eliminate large numbers of candidate
models given only minimal information about the data. Our results depend on the
very simple notion that models which are obviously too complex for the problem
(e.g., models whose complexity exceeds that of the data itself) can be discarded
from further consideration in the search for the most parsimonious model.
2 Background on Stochastic Complexity Theory

2.1 General Principles
Stochastic complexity prescribes a general theory of inductive inference from data,
which, unlike more traditional inference techniques, takes into account the complexity of the proposed model in addition to the standard goodness-of-fit of the
model to the data. For a detailed rationale the reader is referred to the work of
Rissanen (1984) or Wallace and Freeman (1987) and the references therein. Note
that the Minimum Description Length (MDL) technique (as Rissanen's approach
has become known) is implicitly related to Maximum A Posteriori (MAP) Bayesian
estimation techniques if cast in the appropriate framework.
2.2 Minimum Description Length and Stochastic Complexity
Following the notation of Barron and Cover (1991), we have N data-points, described as a sequence of tuples of observations {x_i^1, ..., x_i^K, y_i}, 1 ≤ i ≤ N, to be referred to as {x_i, y_i} for short. The x_i^k correspond to values taken on by the K random variables X^k (which may be continuous or discrete), while, for the purposes of this paper, the y_i are elements of the finite alphabet of the discrete m-ary class variable Y. Let Γ_N = {M_1, ..., M_{|Γ_N|}} be the family of candidate models under consideration. Note that by defining Γ_N as a function of N, the number of data points, we allow the possibility of considering more complicated models as more data arrives. For each M_j ∈ Γ_N let C(M_j) be non-negative numbers such that

Σ_j 2^{−C(M_j)} ≤ 1.

The C(M_j) can be interpreted as the cost in bits of specifying model M_j - in turn, 2^{−C(M_j)} is the prior probability assigned to model M_j (suitably normalized). Let us use C = {C(M_1), ..., C(M_{|Γ_N|})} to refer to a particular coding scheme for Γ_N.
Hence the total description length of the data plus a model M_j is defined as

L(M_j, {x_i, y_i}) = C(M_j) + log(1 / p({y_i} | M_j({x_i}))),

i.e., we first describe the model and then the class data relative to the given model (as a function of {x_i}, the feature data). The stochastic complexity of the data {x_i, y_i} relative to C and Γ_N is the minimum description length

I({x_i, y_i}) = min_{M_j ∈ Γ_N} L(M_j, {x_i, y_i}).
The problem of finding the model of shortest description length is intractable in
the general case - nonetheless the idea of finding the best model we can is well
motivated, works well in practice and is far preferable to the alternative approach
of ignoring the complexity issue entirely.
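The two-part trade-off can be illustrated with a toy example (entirely hypothetical models and labels, not from the paper): a better-fitting model wins only if its fit improvement, in bits, exceeds its extra complexity.

```python
# Toy two-part code: each candidate model pays C(M_j) bits for itself plus
# the code length of the class labels under the model; the shortest total
# description length wins.
import math

y = [1, 1, 1, 0, 1, 1, 1, 1]                      # N = 8 binary class labels

# Each model predicts a constant P(y = 1); C grows with stated precision.
models = [
    {"name": "p=0.5",    "p": 0.5,    "C": 1.0},
    {"name": "p=0.875",  "p": 0.875,  "C": 3.0},
    {"name": "p=0.8751", "p": 0.8751, "C": 10.0},
]

def description_length(m):
    fit = -sum(math.log2(m["p"] if yi == 1 else 1.0 - m["p"]) for yi in y)
    return m["C"] + fit                           # C(M_j) + log 1/p(y | M_j)

best = min(models, key=description_length)
for m in models:
    print(m["name"], round(description_length(m), 2))
print("selected:", best["name"])                  # the middle model wins
```

The over-precise third model fits no better than the second but pays 7 extra bits of complexity, so it loses; the trivial first model is cheap but fits poorly.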
3 Admissible Stochastic Complexity Models

3.1 Definition of Admissibility
We will find it useful to define the notion of an admissible model for the classification problem: the set of admissible models Ω_N (⊆ Γ_N) is defined as all models whose complexity is such that there exists no other model whose description length is known to be smaller. In other words we are saying that inadmissible models are those which have complexity in bits greater than any known description length - clearly they cannot be better than the best known model in terms of description length and can be eliminated from consideration. Hence, Ω_N is defined dynamically and is a function of how many description lengths we have already calculated in our search. Typically Γ_N may be pre-defined, such as the class of all 3-layer feed-forward neural networks with particular activation functions. We would like to restrict our search for a good model to the set Ω_N ⊆ Γ_N as far as possible (since non-admissible models are of no practical use). In practice it may be difficult to determine the exact boundaries of Ω_N, particularly when |Γ_N| is large (with decision trees or neural networks for example). Note that the notion of admissibility described here is particularly useful when we seek a minimal description length, or equivalently a model of maximal a posteriori probability - in situations where one's goal is to average over a number of possible models (in a Bayesian manner) a modification of the admissibility criterion would be necessary.
3.2 Results for Admissible Models
Simple techniques for eliminating obvious non-admissible models are of interest: for the classification problem a necessary condition that a model M_j be admissible is that

C(M_j) ≤ N · H(Y) ≤ N log(m),

where H(Y) is the entropy of the m-ary class variable Y. The obvious interpretation in words is that any admissible model must have complexity less than that of the data itself. It is easy to show in addition that the complexity of any admissible model is upper bounded by the parameters of the classification problem. Hence, since the complexities must satisfy the Kraft inequality, the size of the space of admissible models can also be bounded:

|Ω_N| ≤ 2^{N·H(Y)} ≤ m^N.
Our approach suggests that for classification at least, once we know N and the
number of classes m, there are strict limitations on how many admissible models we
can consider. Of course the theory does not state that considering a larger subset
will necessarily result in a less optimal model being found, however, it is difficult to
argue the case for including large numbers of models which are clearly too complex
for the problem. At best, such an approach will lead to an inefficient search, whereas
at worst a very poor model will be chosen perhaps as a result of the use of a poor
coding scheme for the unnecessarily large hypothesis space.
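The pruning rule implied by these bounds can be sketched directly; the label counts and candidate complexities below are hypothetical.

```python
# Admissibility pruning: with N samples of an m-ary class, any model whose
# complexity C(M_j) exceeds N * H(Y) (at most N * log2 m bits) cannot
# minimize the description length, so it is discarded before any
# goodness-of-fit is ever computed.
import math
from collections import Counter

labels = ["benign"] * 300 + ["malignant"] * 139   # hypothetical label counts
N = len(labels)
counts = Counter(labels)
H = -sum((c / N) * math.log2(c / N) for c in counts.values())
cap = N * H                                       # admissibility cap in bits

candidates = {"tiny net": 40.0, "medium net": 350.0, "huge net": 2500.0}
admissible = {name for name, C in candidates.items() if C <= cap}
print(f"H(Y) = {H:.3f} bits, cap = {cap:.1f} bits, admissible = {sorted(admissible)}")
```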
3.3 Admissible Models and Bayes Risk
The notion of minimal compression (the minimum achievable goodness-of-fit) is intimately related in the classification problem to the minimal Bayes risk for the problem (Kovalevsky, 1980). Let M_B be any model (not necessarily unique) which achieves the optimal Bayes risk (i.e., minimizes the classifier error) for the classification problem. In particular, the goodness-of-fit term C({y_i} | M_B({x_i})) is not necessarily zero; indeed in most practical problems of interest it is non-zero, due to the ambiguity in the mapping from the feature space to the class variable. In addition, M_B may not be defined in the set Γ_N, and hence, M_B need not even be admissible. If, in the limit as N → ∞, M_B ∉ Γ_∞ then there is a fundamental approximation error in the representation being used, i.e., the family of models under consideration is not flexible enough to optimally represent the mapping from {x_i} to {y_i}. Smyth (1991) has shown how information about the Bayes error rate for the problem (if available) can be used to further tighten the bounds on admissibility.
4 Applying Minimum Description Length Principles to Neural Network Design
In principle the admissibility results can be applied to a variety of classifier design
problems - applications to Markov model selection and decision tree design are
described elsewhere (Smyth, 1991). In this paper we limit our attention to the
problem of automatically selecting a feedforward multi-layer network architecture.
4.1 Calculation of the Goodness-of-Fit
As is clear from the preceding discussion, application of the MDL principle to classifier selection requires that the classifier produce a posterior probability estimate of the class labels. In the context of a network model this is not a problem provided the network is trained to provide such estimates. This requires a simple modification of the objective function to a log-likelihood function, −Σ_{i=1}^N log(p̂(y_i | x_i)), where y_i is the class label of the ith training datum and p̂(·) is the network's estimate of p(·).
This function has been proposed in the literature in the past under the guise of a
cross-entropy measure (for the special case of binary classes) and more recently it
has been derived from the more basic arguments of Minimum Mutual Information
(MMI) (Bridle, 1990) and Maximum Likelihood (ML) Estimation (Gish, 1990). The
cross-entropy function for network training is nothing more that the goodness-of-fit
component of the description length criterion. Hence, both MMI and ML (since
they are equivalent in this case) are special cases of the MDL procedure wherein
the complexity term is a constant and is left out of the optimization (all models are
assumed to be equally likely and likelihood alone is used as the decision criterion).
4.2 Complexity Penalization for Multi-layer Perceptron Models
It has been proposed in the past (Barron, 1989) to use a penalty term of (k/2) log N,
where k is the number of parameters (weights and biases) in the network. The origins of this complexity measure lie in general arguments originally proposed by
Rissanen (1984). However this penalty term is too large. Cybenko (1990) has
pointed out that existing successful applications of networks have far more parameters than could possibly be justified by a statistical analysis, given the amount of
training data used to construct the network. The critical factor lies in the precision
to which these parameters are stated in the final model. In essence the principle
of MDL (and Bayesian techniques) dictates that the data only justifies the stating
of any parameter in the model to some finite precision, inversely proportional to
the inherent variance of the estimate. Approximate techniques for the calculation
of the complexity terms in this manner have been proposed (Weigend, Huberman
and Rumelhart, this volume) but a complete description length analysis has not yet
appeared in the literature.
4.3 Complexity Penalization for a Discrete Network Model

It turns out that there are alternatives to multi-layer perceptrons whose complexity is much easier to calculate. We will look in particular at the rule-based network
of Goodman et al. (1990). In this model the hidden units correspond to Boolean
combinations of discrete input variables. The link weights from hidden to output
(class) nodes are proportional to log conditional probabilities of the class given the
activation of a hidden node. The output nodes form estimates of the posterior class
probabilities by a simple summation followed by a normalization. The implicit
assumption of conditional independence is ameliorated in practice by the fact that
the hidden units are chosen in a manner to ensure that the assumption is violated
as little as possible.
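The forward computation of such a rule-based network can be sketched as follows. The two hidden units and their conditional probabilities are invented for illustration, and exponentiation followed by normalization is one plausible reading of the "simple summation followed by a normalization" of log-probability weights described above.

```python
import math

# Hypothetical rule-based network: each hidden unit is a Boolean conjunction
# of binary input attributes; its link weight to a class node is the log
# conditional probability of the class given that the unit is active.
hidden_units = [
    {"inputs": (0, 2), "log_p_class": {"benign": math.log(0.9), "malignant": math.log(0.1)}},
    {"inputs": (1,),   "log_p_class": {"benign": math.log(0.2), "malignant": math.log(0.8)}},
]

def posterior(x):
    """Sum the log-probability weights of the active hidden units, then normalize."""
    scores = {"benign": 0.0, "malignant": 0.0}
    for h in hidden_units:
        if all(x[i] for i in h["inputs"]):        # Boolean conjunction fires
            for c, w in h["log_p_class"].items():
                scores[c] += w
    exp_scores = {c: math.exp(s) for c, s in scores.items()}
    z = sum(exp_scores.values())
    return {c: v / z for c, v in exp_scores.items()}

p = posterior([1, 0, 1])   # only the first unit fires
```

With only one active unit the normalized output simply recovers that unit's conditional class probabilities; the conditional-independence assumption mentioned above enters when several units fire and their log weights are summed.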
The complexity penalty for the network is calculated as being (1/2) log N per link
from the hidden to output layers, plus an appropriate coding term for the specification of the hidden units. Hence, the description length of a network with k hidden
units would be
L = - \sum_{i=1}^{N} \log p(y_i \mid x_i) + (k/2) \log N - \sum_{i=1}^{k} \log \pi(o_i)

where o_i is the order of the ith hidden node and \pi(o_i) is a prior probability on the
orders. Using this definition of description length we get from our earlier results
on admissible models that the number of hidden units in the architecture is upper
bounded by
k \le \frac{N H(C)}{0.5 \log N + \log K + 1}

where K is the number of binary input attributes.
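As a sanity check, this bound can be evaluated numerically. The values plugged in below come from the diagnosis example in Section 4.4: N = 439 samples, class entropy taken as exactly 1 bit, and K = 9 binary input attributes (an assumption, consistent with the nine FNA characteristics). Logs are taken base 2 since description lengths are in bits.

```python
import math

def max_admissible_units(n, class_entropy_bits, k_attrs):
    """Upper bound k <= N*H(C) / (0.5*log2(N) + log2(K) + 1) from the admissibility result."""
    return math.floor(n * class_entropy_bits /
                      (0.5 * math.log2(n) + math.log2(k_attrs) + 1))

bound = max_admissible_units(n=439, class_entropy_bits=1.0, k_attrs=9)  # -> 51
```

With these assumed values the bound reproduces the 51-unit limit quoted in the application below.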
4.4 Application to a Medical Diagnosis Problem
We consider the application of our techniques to the discovery of a parsimonious
network for breast cancer diagnosis, using the discrete network model. A common
technique in breast cancer diagnosis is to obtain a fine needle aspirate (FNA) from
the patient. The FNA sample is then evaluated under a microscope by a physician
who makes a diagnosis. Ground truth in the form of binary class labels ("benign"
or "malignant") is obtained by re-examination or biopsy at a later stage. Wolberg
and Mangasarian (1991) described the collection of a database of such information.
On Stochastic Complexity
The feature information consisted of subjective evaluations of nine FNA sample
characteristics such as uniformity of cell size, marginal adhesion and mitoses. The
training data consists of 439 such FNA samples obtained from real patients which
were later assigned class labels. Given that the prior class entropy is almost 1 bit,
one can immediately state from our bounds that networks with more than 51 hidden
units are inadmissible. Furthermore, as we evaluate different models we can narrow
the region of admissibility using the results stated earlier. Figure 1 gives a graphical
interpretation of this procedure.
[Figure 1 (plot not reproduced): number of hidden units (y-axis, 0 to 40) against description length in bits (x-axis, 100 to 350). The lower curve traces the description length of the models found; the upper curve is the corresponding upper bound on admissible complexity, with the inadmissible region above it.]
Figure 1. Inadmissible region as a function of description length
The algorithm effectively moves up the left-hand axis, adding hidden units in a
greedy manner. Initially the description length (the lower curve) decreases rapidly
as we capture the gross structure in the data. For each model that we calculate a
description length, we can in turn calculate an upper bound on admissibility (the
upper curve) - this bound is linear in description length. Hence , for example by
the time we have 5 hidden units we know that any models with more than 21 hidden
units are inadmissible. Finally a local minimum of the description length function
is reached at 12 units, at which point we know that the optimal solution can have at
most 16 hidden units. As a matter of interest, the resulting network with 12 hidden
units correctly classified 94 of 96 independent test cases.
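The procedure depicted in Figure 1 can be sketched as a greedy loop. The description-length values below are hypothetical, and the bound "linear in description length" is assumed to take the form L divided by the same 0.5*log2(N) + log2(K) + 1 denominator as the admissibility bound above; with N = 439 and K = 9 this reproduces the 16-unit limit quoted for the local minimum.

```python
import math

def admissible_bound(dl_bits, n=439, k_attrs=9):
    """Upper bound on hidden units implied by a description length of dl_bits.
    Assumed form: linear in L, with denominator 0.5*log2(N) + log2(K) + 1."""
    return math.floor(dl_bits / (0.5 * math.log2(n) + math.log2(k_attrs) + 1))

def greedy_search(dl_per_size):
    """dl_per_size[k-1]: description length (bits) of the best network found
    with k hidden units. Stop at the first local minimum of the curve."""
    best_k, best_dl = 0, float("inf")
    for k, dl in enumerate(dl_per_size, start=1):
        if dl >= best_dl:          # curve stopped decreasing: local minimum
            break
        best_k, best_dl = k, dl
    return best_k, best_dl, admissible_bound(best_dl)

# Hypothetical description-length curve: decreasing, then turning back up.
best_k, best_dl, bound = greedy_search([300, 250, 200, 180, 150, 140, 137, 140])
```

Each new model tightens the admissibility bound, so the search space shrinks as the description length falls, exactly as the two curves in Figure 1 suggest.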
5 Conclusion
There are a variety of related issues which arise in this context which we can only
briefly mention due to space constraints. For example, how does the prior "model
entropy", H(\Omega_N) = - \sum_i p(M_i) \log p(M_i), affect the complexity of the search
problem? Questions also naturally arise as to how \Omega_N should grow as a function of
N in an incremental learning scenario.
In conclusion, it should not be construed from this paper that consideration of
admissible models is the major factor in inductive inference - certainly the choice
of description lengths for the various models and the use of efficient optimization
techniques for seeking the parameters of each model remain the cornerstones of
success. Nonetheless, our results provide useful theoretical insight and are practical
to the extent that they provide a "sanity check" for model selection in MDL.
Acknowledgments
The research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. In addition this work was supported in part by
the Air Force Office of Scientific Research under grant number AFOSR-90-0199.
References
A. R. Barron (1989), 'Statistical properties of artificial neural networks,' in Proceedings of 1989 IEEE Conference on Decision and Control.
A. R. Barron and T. M. Cover (1991), 'Minimum complexity density estimation,'
to appear in IEEE Trans. Inform. Theory.
J. Bridle (1990), 'Training stochastic model recognition algorithms as networks can
lead to maximum mutual information estimation of parameters,' in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, pp.211-217, San
Mateo, CA: Morgan Kaufmann.
G. Cybenko (1990), 'Complexity theory of neural networks and classification problems,' preprint.
H. Gish (1991), 'Maximum likelihood training of neural networks,' to appear in
Proceedings of the Third International Workshop on AI and Statistics, (D. Hand,
ed.), Chapman and Hall: London.
R. M. Goodman, C. Higgins, J. W. Miller, and P. Smyth (1990), 'A rule-based
approach to neural network classifiers,' in Proceedings of the 1990 International
Neural Network Conference, Paris, France.
V. A. Kovalevsky (1980), Image Pattern Recognition, translated from Russian by
A. Brown, New York: Springer Verlag, p.79.
J. Rissanen (1984), 'Universal coding, information, prediction, and estimation,'
IEEE Trans. Inform. Theory, vol. 30, pp. 629-636.
P. Smyth (1991), 'Admissible stochastic complexity models for classification problems,' to appear in Proceedings of the Third International Workshop on AI and
Statistics, (D. Hand, ed.), Chapman and Hall: London.
C. S. Wallace and P. R. Freeman (1987), 'Estimation and inference by compact
coding,' J. Royal Stat. Soc. B, vol. 49, no. 3, pp. 240-251.
W. H. Wolberg and O. L. Mangasarian (1991), Multi-surface method of pattern separation applied to breast cytology diagnosis, Proceedings of the National Academy
of Sciences, in press.