A Short-Term Memory Architecture for the
Learning of Morphophonemic Rules
Michael Gasser and Chan-Do Lee
Computer Science Department
Indiana University
Bloomington, IN 47405
Abstract
Despite its successes, Rumelhart and McClelland's (1986) well-known approach to the learning of morphophonemic rules suffers from two deficiencies: (1) It performs the artificial task of associating forms with forms
rather than perception or production. (2) It is not constrained in ways
that human learners are. This paper describes a model which addresses
both objections. Using a simple recurrent architecture which takes both
forms and "meanings" as inputs, the model learns to generate verbs in
one or another "tense", given arbitrary meanings, and to recognize the
tenses of verbs. Furthermore, it fails to learn reversal processes unknown
in human language.
1 BACKGROUND
In the debate over the power of connectionist models to handle linguistic phenomena, considerable attention has been focused on the learning of simple morphological
rules. It is a straightforward matter in a symbolic system to specify how the meanings of a stem and a bound morpheme combine to yield the meaning of a whole
word and how the form of the bound morpheme depends on the shape of the stem.
In a distributed connectionist system, however, where there may be no explicit
morphemes, words, or rules, things are not so simple.
The most important work in this area has been that of Rumelhart and McClelland
(1986), together with later extensions by Marchman and Plunkett (1989). The networks involved were trained to associate English verb stems with the corresponding
past-tense forms, successfully generating both regular and irregular forms and generalizing to novel inputs. This work established that rule-like linguistic behavior
could be achieved in a system with no explicit rules. However, it did have important
limitations, among them the following:
1. The representation of linguistic form was inadequate. This is clear, for example, from the fact that distinct lexical items may be associated with identical
representations (Pinker & Prince, 1988).
2. The model was trained on an artificial task, quite unlike the perception and
production that real hearers and speakers engage in. Of course, because it has
no semantics, the model also says nothing about the issue of compositionality.
One consequence of both of these shortcomings is that there are few constraints on
the kinds of processes that can be learned.
In this paper we describe a model which addresses these objections to the earlier
work on morphophonemic rule acquisition. The model learns to generate forms
in one or another "tense", given arbitrary patterns representing "meanings", and
to yield the appropriate tense, given forms. The network sees linguistic forms
one segment at a time, saving the context in a short-term memory. This style of
representation, together with the more realistic tasks that the network is faced with,
results in constraints on what can be learned. In particular, the system experiences
difficulty learning reversal processes which do not occur in human language and
which were easily accommodated by the earlier models.
2 SHORT-TERM MEMORY AND PREDICTION
Language takes place in time, and at some point, systems that learn and process
language have to come to grips with this fact by accepting input in sequential form.
Sequential models require some form of short-term memory (STM) because the
decisions that are made depend on context. There are basically two options, window
approaches, which make available stretches of input events all at once, and dynamic
memory approaches (Port, 1990), which offer the possibility of a recoded version
of past events. Networks with recurrent connections have the capacity for dynamic
memory. We make use of a variant of a simple recurrent network (Elman, 1990),
which is a pattern associator with recurrent connections on its hidden layer. Because
the hidden layer receives input from itself as well as from the units representing the
current event, it can function as a kind of STM for sequences of events.
Elman has shown how networks of this type can learn a great deal about the structure of the inputs when trained on the simple, unsupervised task of predicting the
next input event. We are interested in what can be expected from such a network
that is given a single phonological segment (hereafter referred to as a phone) at a
time and trained to predict the next phone. If a system could learn to do this successfully, it would have a left-to-right version of what phonologists call phonotactics;
that is, it would have knowledge of what phones tend to follow other phones in given
contexts. Since word recognition and production apparently build on phonotactic
knowledge of the language (Church, 1987), training on the prediction task might
provide a way of integrating the two processes within a single network.
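For concreteness, the recurrence just described can be sketched in a few lines of NumPy. This is a hypothetical illustration rather than the authors' implementation; the sizes (8 input features, 20 hidden units) merely echo the stimulus encoding and hidden-layer range reported in Section 4.

```python
import numpy as np

# Minimal forward pass of a simple recurrent network (Elman, 1990).
n_in, n_hidden, n_out = 8, 20, 8
rng = np.random.default_rng(0)
W_ih = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrent)
W_ho = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_sequence(phones):
    """Feed phone feature vectors one at a time; return next-phone predictions."""
    h = np.zeros(n_hidden)                  # the short-term memory
    predictions = []
    for x in phones:
        h = sigmoid(W_ih @ x + W_hh @ h)    # hidden state mixes current input and context
        predictions.append(sigmoid(W_ho @ h))
    return predictions
```

Training such a network with backpropagation on the prediction targets is standard; only the recurrent hidden-to-hidden weights distinguish it from a plain pattern associator.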
3 ARCHITECTURE
The type of network we work with is shown in Figure 1.

[Figure 1: Network Architecture — FORM units (current phone) and MEANING units (stem + tense) feeding a recurrent hidden layer/STM, with corresponding FORM and MEANING outputs.]

Both its inputs and
outputs include FORM, that is, an individual phone, and what we'll call MEANING,
that is, a pattern representing the stem of the word to be recognized or produced
and a single unit representing a grammatical feature such as PAST or PRESENT.
In fact, the meaning patterns have no real semantics, but like real meanings, they
are arbitrarily assigned to the various morphemes and thus convey nothing about
the phonological realization of the stem and grammatical feature. The network is
trained both to auto-associate the current phone and predict the next phone.
The word recognition task corresponds to being given phone inputs (together with a
default pattern on the meaning side) and generating meaning outputs. The meaning
outputs are copied to the input meaning layer on each time step. While networks
trained in this way can learn to recognize the words they are trained on, we have
not been able to get them to generalize well. Networks which are expected only to
output the grammatical feature, however, do generalize, as we shall see.
The word production task corresponds to being given a constant meaning input
and generating form output. Following an initial default phone pattern, the phone
input is what was predicted on the last time step. Again, however, though such a
network does fine on the training set, it does not generalize well to novel inputs.
We have had more success with a version using "teacher forcing". Here the correct
current phone is provided on the input at each time step.
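The difference between free-running production and teacher forcing is easy to state in code. The sketch below extends the hypothetical network above (so `rng`, `sigmoid`, `W_hh`, and `W_ho` are reused); `W_xm` and `meaning_dim` are our own names.

```python
meaning_dim = 7  # 6 "stem" units + 1 tense unit, as in Section 4.1
W_xm = rng.normal(scale=0.1, size=(n_hidden, n_in + meaning_dim))

def produce(meaning, correct_phones, teacher_forcing=True):
    """Generate a word one phone at a time from a constant meaning input."""
    h = np.zeros(n_hidden)
    phone_in = np.zeros(n_in)             # initial default phone (word boundary)
    predictions = []
    for target in correct_phones:
        x = np.concatenate([phone_in, meaning])
        h = sigmoid(W_xm @ x + W_hh @ h)
        pred = sigmoid(W_ho @ h)          # the network's guess at the next phone
        predictions.append(pred)
        # Teacher forcing feeds back the correct phone; otherwise the network's
        # own (possibly wrong) prediction becomes the next input.
        phone_in = target if teacher_forcing else pred
    return predictions
```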
4 SIMULATIONS

4.1 STIMULI
We conducted a set of experiments to test the effectiveness of this architecture for
the learning of morphophonemic rules. Input words were composed of sequences of
phones in an artificial language. Each of the 15 possible phones was represented
by a pattern over a set of 8 phonetic features. For each simulation, a set of 20
words was generated randomly from the set of possible words. Twelve of these were
designated "training" words, 8 "test" words.
For each of these basic words, there was an associated inflected form. For each
simulation, one of a set of 9 rules was used to generate the inflected form: (1) suffix (+ assimilation) (gip→gips, gib→gibz), (2) prefix (+ assimilation) (gip→zgip, kip→skip), (3) gemination (iga→igga), (4) initial deletion (gip→ip), (5) medial deletion (ipka→ipa), (6) final deletion (gip→gi), (7) tone change (glp→glp), (8) Pig Latin (gip→ipge), and (9) reversal (gip→pig).
In the two assimilation cases, the suffix or prefix agreed with the preceding or
following phone on the voice feature. In the suffixing example, p is followed by
s because it is voiceless, b by z because it is voiced. In the prefixing example, g
is preceded by z because it is voiced, k by s because it is voiceless. Because the
network is trained on prediction, these two rules are not symmetric. It would not
be surprising if such a network could learn to generate a final phone which agrees
in voicing with the phone preceding it. But in the prefixing case, the network must
choose the correct prefix before it has seen the phone with which it is to agree in
voicing. We thought this would still be possible, however, because the network also
receives meaning input representing the stem of the word to be produced.
We hoped that the network would succeed on rule types which are common in
natural languages and fail on those which are rare or non-existent. Types 1-4 are
relatively common, types 5-7 infrequent or rare, type 8 apparently known only in
language games, and type 9 apparently non-occurring.
For convenience, we will refer to the uninflected form of a word as the "present"
and the inflected form as the "past tense" of the word in question. Each input word
consisted of a present or past tense form preceded and followed by a word boundary
pattern composed of zeroes. Meaning patterns consisted of an arbitrary pattern
across a set of 6 "stem" units, representing the meaning of the "stem" of one of the
20 input words, plus a single bit representing the "tense" of the input word, that
is, present or past.
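A hypothetical encoding of these stimuli, with invented feature values and phone inventory but the dimensions given in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# 15 phones, each represented by a pattern over 8 phonetic features.
phones = list("pbtdkgszfvmnaiu")                       # invented inventory
phone_features = {p: rng.integers(0, 2, 8).astype(float) for p in phones}
boundary = np.zeros(8)                                 # word boundary: all zeroes

def make_meaning(past: bool) -> np.ndarray:
    """Arbitrary pattern over 6 'stem' units plus a single 'tense' bit."""
    stem = rng.integers(0, 2, 6).astype(float)
    return np.concatenate([stem, [1.0 if past else 0.0]])
```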
4.2 TRAINING
During training each of the training words was presented in both present and past
forms, while the test words appeared in the present form only. Each of the 32
separate words was trained in both the recognition and production directions.
For recognition training, the words were presented, one phone at a time, on the
form input units. The appropriate pattern was also provided on the stem meaning
units. Targets specified the current phone, next phone, and complete meaning.
Thus the network was actually being trained to generate only the tense portion of
the meaning for each word. The activation on the tense output unit was copied to
the tense input unit following each time step.
For production training, the stem and grammatical feature were presented on the
lexical input layer and held constant throughout the word. The phones making up
the word were presented one at a time beginning with the initial word boundary,
and the network was expected to predict the next phone in each case.
There were 10 separate simulations for each of the 9 inflectional rules.

Table 1: Results of Recognition and Production Tests

              RECOGNITION        PRODUCTION
              % tenses correct   % affixes correct
Suffix               79                 82
Prefix               76                 83
Tone change          99                 62
Gemination           90                 76
Deletion             67                 98
Pig Latin            61                 74
Reversal             13                 42
(% segments correct: 31, 27, 23)

Pilot runs
were used to find estimates of the best hidden layer size. This varied between 16
and 26. Training continued until the mean sum-of-squares error was less than 0.05.
This normally required between 50 and 100 epochs. Then the connection weights
were frozen, and the network was tested in both the recognition and production
directions on the past tense forms of the test words.
4.3 RESULTS
In all cases, the network learned the training set quite successfully (at least 95% of
the phones for production and 96% of the tenses for recognition). Results for the
recognition and production of past-tense forms of test words are shown in Table 1.
For recognition, chance is 37.5%. For production, the network's output on a given
time step was considered to be that phone which was closest to the pattern on the
phone output units.
5 DISCUSSION

5.1 AFFIXATION AND ASSIMILATION
The model shows clear evidence of having learned morphophonemic rules which it
uses in both the production and perception directions. And the degree of mastery
of the rules, at least for production, mirrors the extent to which the types of rules
occur in natural languages. Significantly, the net is able to generate appropriate
forms even in the prefix case when a "right-to-left" (anticipatory) rule is involved.
That is, the fact that the network is trained only on prediction does not limit it
to left-to-right (perseverative) rules because it has access to a "meaning" which
permits the required "lookahead" to the relevant feature on the phone following the
prefix. What makes this interesting is the fact that the meaning patterns bear no
relation to the phonology of the stems. The connections between the stem meaning
input units and the hidden layer are being trained to encode the voicing feature
even when, in the case of the test words, this was never required during training.
In any case, it is clear that right-to-left assimilation in a network such as this is
more difficult to acquire than left-to-right assimilation, all else being equal. We are
unaware of any evidence that would support this, though the fact that prefixes are
less common than suffixes in the world's languages (Hawkins & Cutler, 1988) means
that there are at least fewer opportunities for the right-to-left process.
5.2 REVERSAL
What is it that makes the reversal rule, apparently difficult for human language
learners, so difficult for the network? Consider what the network does when it is
faced with the past-tense form of a verb trained only in the present. If the novel item
took the form of a set rather than a sequence, it would be identical to the familiar
present-tense form. What the network sees, however, is a sequence of phones, and
its task is to predict the next. There is thus no sharing at all between the present
and past forms and no basis for generalizing from the present to the past. Presented
with the novel past form, it is more likely to base its response on similarity with a
word containing a similar sequence of phones (e.g., gip and gif) than it is with the
correct mirror-image sequence.
It is important to note, however, that difficulty with the reversal process does not
necessarily presuppose the type of representations that result from training a simple
recurrent net on prediction. Rather this depends more on the fact that the network
is trained to map meaning to form and form to meaning, rather than form to form,
as in the case of the Rumelhart and McClelland (1986) model. Any network of the
former type which represents linguistic form in such a way that the contexts of the
phones are preserved is likely to exhibit this behavior.¹
6 LIMITATIONS AND EXTENSIONS
Despite its successes, this model is far from an adequate account of the recognition
and production of words in natural language. First, although networks of the type
studied here are capable of yielding complete meanings given words and complete
words given meanings, they have difficulty when expected to respond to novel forms
or combinations of known meanings. In the simulations, we asked the network
to recognize only the grammatical morpheme in a novel word, and in production
we kept it on track by giving it the correct input phone on each time step. It
will be important to discover ways to make the system robust enough to respond
appropriately to novel forms and combinations of meanings.
Equally important is the ability of the model to handle more complex phonological
processes. Recently Lakoff (1988) and Touretzky and Wheeler (1990) have developed connectionist models to deal with complicated interacting phonological rules.
While these models demonstrate that connectionism offers distinct advantages over conventional serial approaches to phonology, they do not learn phonology (at least not in a connectionist way), and they do not yet accommodate perception.
We believe that the performance of the model will be significantly improved by
the capacity to make reference directly to units larger than the phone. We are
currently investigating an architecture consisting of a hierarchy of networks of the
type described here, each trained on the prediction task at a different time scale.
¹We are indebted to Dave Touretzky for helping to clarify this issue.
7 CONCLUSIONS
It is by now clear that a connectionist system can be trained to exhibit rule-like behavior. What is not so clear is whether networks can discover how to map elements
of form onto elements of meaning and to use this knowledge to interpret and generate novel forms. It has been argued (Fodor & Pylyshyn, 1988) that this behavior
requires the kind of constituency which is not available to networks making use of
distributed representations.
The present study is one attempt to demonstrate that networks are not limited
in this way. We have shown that, given "meanings" and temporally distributed
representations of words, a network can learn to isolate stems and the realizations
of grammatical features, associate them with their meanings, and, in a somewhat
limited sense, use this knowledge to produce and recognize novel forms. In addition,
the nature of the training task constrains the system in such a way that rules which
are rare or non-occurring in natural language are not learned.
References
Church, K. W. (1987). Phonological parsing and lexical retrieval. Cognition, 25, 53-69.

Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.
Hawkins, J. A., & Cutler, A. (1988). Psychological factors in morphological asymmetry. In J. A. Hawkins (Ed.), Explaining language universals (pp. 280-317).
Oxford: Basil Blackwell.
Lakoff, G. (1988). Cognitive phonology. Paper presented at the Annual Meeting of the Linguistic Society of America.
Marchman, V., & Plunkett, K. (1989). Token frequency and phonological predictability in a pattern association network: Implications for child language acquisition. Proceedings of the Annual Conference of the Cognitive Science Society, 11,
179-187.
Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73-193.
Port, R. (1990). Representation and recognition of temporal patterns. Connection Science, 2, 151-176.
Rumelhart, D., & McClelland, J. (1986). On learning the past tense of English
verbs. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing, Vol. 2 (pp. 216-271). Cambridge, MA: MIT Press.
Touretzky, D. and Wheeler, D. (1990). A computational basis for phonology. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann.
MAP estimation in Binary MRFs via Bipartite Multi-cuts
Sashank J. Reddi*
IIT Bombay
[email protected]
Sunita Sarawagi
IIT Bombay
[email protected]
Sundar Vishwanathan
IIT Bombay
[email protected]
Abstract
We propose a new LP relaxation for obtaining the MAP assignment of a binary
MRF with pairwise potentials. Our relaxation is derived from reducing the MAP
assignment problem to an instance of a recently proposed Bipartite Multi-cut problem where the LP relaxation is guaranteed to provide an O(log k) approximation
where k is the number of vertices adjacent to non-submodular edges in the MRF.
We then propose a combinatorial algorithm to efficiently solve the LP, and also provide a lower bound by concurrently solving its dual to within an $\epsilon$-approximation. The algorithm is up to an order of magnitude faster and provides better MAP scores and bounds than the state-of-the-art message passing algorithm of [1] that tightens the local marginal polytope with third-order marginal constraints.
1 Introduction
We consider pairwise Markov Random Fields (MRFs) over $n$ binary variables $x = x_1, \ldots, x_n$, expressed as a graph $G = (V, E)$ and an energy function $E(x|\theta)$ whose parameters $\theta$ decompose over its vertices and edges as:
$$E(x|\theta) = \sum_{i \in V} \theta_i(x_i) + \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j) + \theta_{\mathrm{const}} \qquad (1)$$
Our goal is to find $x^* = \operatorname{argmin}_{x \in \{0,1\}^n} E(x|\theta)$. This is called the MAP assignment problem in graphical models, and for general graphs and arbitrary parameters it is NP-complete. Consequently, there is an extensive literature of approximation schemes for the problem, and new algorithms continue to be explored [2, 3, 4, 5, 6, 7, 8]. The most popular of these are based on the following linear programming relaxation of the MAP problem:
$$\begin{aligned} \min_{\mu} \quad & \sum_{i, x_i} \theta_i(x_i)\,\mu_i(x_i) + \sum_{(i,j), x_i, x_j} \theta_{ij}(x_i, x_j)\,\mu_{ij}(x_i, x_j) \\ \text{s.t.} \quad & \sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i) \quad \forall (i,j) \in E,\ \forall x_i \in \{0,1\} \qquad (2) \\ & \sum_{x_i} \mu_i(x_i) = 1 \quad \forall i \in V, \qquad \mu_{ij}(x_i, x_j) \geq 0 \quad \forall (i,j) \in E,\ \forall x_i, x_j \in \{0,1\} \end{aligned}$$
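To make relaxation (2) concrete, here is a hypothetical sketch that assembles and solves it with SciPy for a single-edge MRF; the variable ordering and numeric parameters are our own.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-node, 1-edge binary MRF. Variable order:
# [m0(0), m0(1), m1(0), m1(1), m01(00), m01(01), m01(10), m01(11)].
theta_node = np.array([0.0, 1.0, 0.5, 0.0])    # theta_i(x_i)
theta_edge = np.array([0.0, 2.0, 2.0, 0.0])    # theta_01(x_0, x_1), submodular
c = np.concatenate([theta_node, theta_edge])

A_eq = np.array([
    [-1, 0, 0, 0, 1, 1, 0, 0],   # m01(0,0)+m01(0,1) = m0(0)
    [0, -1, 0, 0, 0, 0, 1, 1],   # m01(1,0)+m01(1,1) = m0(1)
    [0, 0, -1, 0, 1, 0, 1, 0],   # m01(0,0)+m01(1,0) = m1(0)
    [0, 0, 0, -1, 0, 1, 0, 1],   # m01(0,1)+m01(1,1) = m1(1)
    [1, 1, 0, 0, 0, 0, 0, 0],    # node marginal m0 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # node marginal m1 sums to 1
])
b_eq = np.array([0, 0, 0, 0, 1, 1])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print(res.x[:4])   # relaxed node marginals; integral here, as the edge is submodular
```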
Broadly, two main techniques are used to solve this relaxation: message-passing algorithms [9, 10, 11, 7, 12] such as TRW-S and Max-sum diffusion on the dual, and combinatorial algorithms based on graph cuts and network flows [13, 14]. Both these methods find the exact MAP when the edge parameters are submodular. For non-submodular parameters, these methods provide partial optimality guarantees for variables that get integral values. This observation is exploited in [14] to design an iterative probing scheme to expand the set of variables with optimal assignments. However, this scheme is useful only for the case when the graphical model has a few non-submodular edges.

*The author is currently affiliated with Google Inc.
More principled methods to improve the solution output by the relaxed LP are based on progressively tightening the relaxation with violated constraints. Cycle constraints [15, 16, 17, 18, 1, 19]
and higher order marginal constraints [17, 1, 20] are two such types of constraints. However, these
are not backed by efficient algorithms and thus most of these tightenings come at a considerable
computational cost.
In this paper we propose a new relaxation of the MAP estimation problem via reduction to a recently proposed Bipartite Multi-cut problem in undirected graphs [21]. We exploit this to show that after adding a polynomial number of constraints, we get an $O(\log k)$ approximation guarantee on the MAP objective, where $k$ is the number of variables adjacent to non-submodular edges in the graphical model, and this can be tightened to $O(\sqrt{\log k \cdot \log\log k})$ using a semi-definite programming relaxation¹. In this paper we explore only the LP-based relaxation, since our goal is to design practical algorithms.
We propose a combinatorial algorithm to efficiently solve this LP by casting it as a Multi-cut problem on a specially constructed graph, the dual of which is a multi-commodity flow problem. The algorithm, adapted from [22, 23], simultaneously updates the primal and dual solutions, and thus at any point provides both a candidate solution and a lower bound on the energy function. It is guaranteed to provide an $\epsilon$-approximate solution of the primal LP in $O(\epsilon^{-2}(|V| + |E|)^2)$ time, but in practice terminates much faster. No such guarantees exist for any of the existing algorithms for tightening the MAP LP based on cycle or higher-order marginal constraints. Empirically, this algorithm is an order of magnitude faster than the state-of-the-art message passing algorithm [1] while yielding the same or better MAP values and bounds. We show that our LP is a relaxation of the LP with cycle constraints, but we still yield better and faster bounds because our combinatorial algorithm solves the LP within a guaranteed approximation.
2 MAP estimation as Bipartite Multi-cut
We assume a reparameterization of the energy function so that the parameters of $E(x|\theta)$ (Equation 1) are

1. Symmetric, that is, for $\{x_i, x_j\} \in \{0,1\}^2$, $\theta_{ij}(x_i, x_j) = \theta_{ij}(\bar{x}_i, \bar{x}_j)$ where $\bar{x}_i = 1 - x_i$,
2. Zero-normalized, that is, $\min_{x_i} \theta_i(x_i) = 0$ and $\min_{x_i, x_j} \theta_{ij}(x_i, x_j) = 0$.

It is easy to see that any energy function over binary variables can be reparameterized in this form². Our starting point is the LP relaxation proposed in [13] for approximating MAP $x^* = \operatorname{argmin}_x E(x|\theta)$ as the minimum s-t cut in a suitably constructed graph $H = (V_H, E_H)$. We present this construction for completeness.
2.1 Graph cut-based relaxation of [13]
For ease of notation, first augment the $n$ variables with a special '0' variable that always takes a label of 0 and has an edge to all $n$ variables. This enables us to redefine the node parameters $\theta_i(x_i)$ as edge parameters $\theta_{0i}(0, x_i)$. Add to $H$ two vertices $i_0$ and $i_1$ for each variable $i$, $0 \leq i \leq n$. For each edge $(i,j) \in E$, add an edge between $i_0$ and $j_0$ with weight $\theta_{ij}(0,1)$ if the edge is submodular; else add edge $(i_0, j_1)$ with weight $\theta_{ij}(0,0)$. For every vertex $i$, if $\theta_i(1)$ is non-zero add an edge between $0_0$ and $i_0$ with weight $\theta_i(1)$, else add an edge between $0_1$ and $i_0$ with weight $\theta_i(0)$. It is easy to see that the MAP problem $\min_{x \in \{0,1\}^n} E(x)$ is equivalent to solving the following program if all
variables are further constrained to take integral values (with $D(i_0) \equiv x_i$):

$$\begin{aligned} \min_{d_e, D(\cdot)} \quad & \sum_{e \in E_H} w_e d_e \\ \text{s.t.} \quad & d_e + D(i_s) - D(j_t) \geq 0 \quad \forall e = (i_s, j_t) \in E_H \\ & d_e + D(j_t) - D(i_s) \geq 0 \quad \forall e = (i_s, j_t) \in E_H \\ & D(0_0) = 0 \\ & D(i_s) \in [0,1] \quad \forall i_s \in V_H \\ & d_e \in [0,1] \quad \forall e \in E_H \\ & D(i_0) + D(i_1) = 1 \quad \forall i \in \{0, \ldots, n\} \end{aligned} \qquad \text{(Min-cut LP)}$$

¹We note however that these multiplicative bounds may not be relevant for the MAP estimation problem in graphical models, where reparameterization leaves behind negative constants which are kept outside the LP objective.
²Set $\theta'_{ij}(0,0) = \theta'_{ij}(1,1) = (\theta_{ij}(0,0) + \theta_{ij}(1,1))/2$, $\theta'_{ij}(0,1) = \theta'_{ij}(1,0) = (\theta_{ij}(0,1) + \theta_{ij}(1,0))/2$, $\theta'_i(1) = \theta_i(1) + \sum_{(i,j) \in E} (\theta_{ij}(1,0) + \theta_{ij}(1,1) - \theta_{ij}(0,1) - \theta_{ij}(0,0))/2$, and $\theta'_{\mathrm{const}} = \theta_{\mathrm{const}} + \sum_{(i,j) \in E} (\theta_{ij}(0,0) - \theta_{ij}(1,1))/2$. Then zero-normalize as in [9].
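The reparameterization of footnote 2 transcribes directly into code. This is a hypothetical sketch with our own dictionary encoding; the linear correction is applied at both endpoints of each edge, reading the footnote's sum as ranging over all edges incident to each variable.

```python
import numpy as np

def reparameterize(theta_node, theta_edge, theta_const, edges):
    """Symmetrize and zero-normalize a binary pairwise energy (footnote 2).

    theta_node: dict i -> array([theta_i(0), theta_i(1)])
    theta_edge: dict (i, j) -> 2x2 array with entries theta_ij(x_i, x_j)
    """
    node = {i: t.astype(float).copy() for i, t in theta_node.items()}
    edge = {}
    for (i, j) in edges:
        t = theta_edge[(i, j)]
        diag = (t[0, 0] + t[1, 1]) / 2.0
        off = (t[0, 1] + t[1, 0]) / 2.0
        edge[(i, j)] = np.array([[diag, off], [off, diag]])  # symmetric table
        # Linear terms absorbed into the node potentials of both endpoints.
        node[i][1] += (t[1, 0] + t[1, 1] - t[0, 1] - t[0, 0]) / 2.0
        node[j][1] += (t[0, 1] + t[1, 1] - t[1, 0] - t[0, 0]) / 2.0
        theta_const += (t[0, 0] - t[1, 1]) / 2.0
    # Zero-normalize: move each table's minimum into the constant.
    for i in node:
        m = node[i].min(); node[i] -= m; theta_const += m
    for e in edge:
        m = edge[e].min(); edge[e] -= m; theta_const += m
    return node, edge, theta_const
```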
An efficient way to solve this LP exactly is by finding an s-t Min-cut in $H$ with $(s, t) = (0_0, 0_1)$ and setting $D(i_0) = 1/2$ when both $i_0$ and $i_1$ fall on the same side, and otherwise setting it to 0 or 1 depending on whether $i_0$ or $i_1$ is on the $0_0$ side [13, 14]. It is easy to see that this LP is equivalent to the basic LP relaxation in Equation 2, for which many alternative algorithms have been proposed [3, 6, 7, 9, 11]. On graphs with many cycles containing an odd number of non-submodular edges, this method yields poor MAP assignments.

We next show how to tighten this LP based on a connection to a recently proposed Bipartite Multi-cut problem [21].
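The construction of $H$ described above translates almost line for line into code; a hypothetical sketch (the vertex encoding is ours, and the energy is assumed already symmetrized and zero-normalized):

```python
def build_cut_graph(n, theta_node, theta_edge, edges, is_submodular):
    """Build the s-t cut graph H of Section 2.1 (hypothetical encoding).

    Vertices are pairs (i, s) for i in 0..n and s in {0, 1}; variable 0 is the
    auxiliary variable fixed to label 0. theta_edge values are 2x2 arrays.
    """
    H = {}  # undirected edge (u, v) -> weight

    def add(u, v, w):
        if w > 0:
            H[(u, v)] = H.get((u, v), 0.0) + w

    for (i, j) in edges:
        t = theta_edge[(i, j)]
        if is_submodular((i, j)):
            add((i, 0), (j, 0), t[0, 1])   # pay theta_ij(0,1) iff x_i != x_j
        else:
            add((i, 0), (j, 1), t[0, 0])
    for i in range(1, n + 1):
        if theta_node[i][1] > 0:
            add((0, 0), (i, 0), theta_node[i][1])
        else:
            add((0, 1), (i, 0), theta_node[i][0])
    return H
```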
2.2 Bipartite Multi-cut based LP relaxation
The Bipartite Multi-cut (BMC) problem is a generalization of the standard s-t Min-cut problem. Given an undirected graph $J = (N, A)$ with non-negative edge weights, the s-t Min-cut problem finds the subset of edges with minimum total weight whose deletion disconnects $s$ and $t$. In BMC, we are given $k$ source-sink pairs $ST = \{(s_1, t_1), \ldots, (s_k, t_k)\}$, and the goal is to find a subset of vertices $M \subseteq N$ such that $|\{s_i, t_i\} \cap M| = 1$ and the total weight of edges from $M$ to the remaining vertices $N - M$ is minimized. The BMC problem was recently proposed in [21], where it was shown to be NP-hard and $O(\log k)$-approximable using a linear programming relaxation. The BMC problem is also related to the more popular Multi-cut problem, where the goal is to identify the smallest-weight set of edges such that every $s_i$ and $t_i$ are separated. Any feasible BMC solution is a solution to Multi-cut, but not the other way round. To see this, consider a graph over six vertices $(s_1, s_2, s_3, t_1, t_2, t_3)$ and three edges $(s_1, s_3)$, $(t_1, t_2)$, $(s_2, t_3)$. If $ST = \{(s_i, t_i) : 1 \leq i \leq 3\}$, then all pairs in ST are separated and the optimal Multi-cut solution has cost 0. But for BMC, one of the three edges has to be cut. The LP relaxations for Multi-cut provide only an $\Omega(k)$ approximation to the BMC problem.
We reduce the MAP estimation problem to the Bipartite Multi-cut problem on an optimized version of graph $H$, constructed so that the set of variables $R$ adjacent to non-submodular edges is minimized. Later, in Section 2.3, we will show how to create such an optimized graph. Without loss of generality, we assume that the variables in $R$ are $0, 1, \ldots, k$. The remaining variables $j \in V - R$ do not need the $j_1$ copy of $j$ in $H$, since they have no edges adjacent to $j_1$. We create an instance of the Bipartite Multi-cut problem on $H$ with the source-sink pairs $ST = \{(i_0, i_1) : 0 \leq i \leq k\}$. Let $M$ be the subset of vertices output by BMC on this graph, and without loss of generality assume that $M$ contains $0_0$. The MAP labeling $x^*$ is obtained from $M$ by setting $x_i = s$ if $i_s \in M$ and $x_i = \bar{s}$ if $i_s \in V_H - M$. This gives a valid MAP labeling because for each variable $j$ that appears in the set $R$, BMC ensures that $M$ contains exactly one of $(j_0, j_1)$.

Using this connection, we tighten the Min-cut LP as follows. For each $u \in \{0_0, 0_1, \ldots, k_0, k_1\}$ and $j_s \in V_H$ we define new variables $D_u(j_s)$, and use these to augment the Min-cut LP with additional constraints as follows:
$$\begin{aligned} \min_{d_e, D_u(\cdot)} \quad & \sum_{e \in E_H} w_e d_e \\ \text{s.t.} \quad & d_e + D_u(i_s) - D_u(j_t) \geq 0 \quad \forall e = (i_s, j_t) \in E_H,\ \forall u \in \{0_0, 0_1, \ldots, k_0, k_1\} \\ & d_e + D_u(j_t) - D_u(i_s) \geq 0 \quad \forall e = (i_s, j_t) \in E_H,\ \forall u \in \{0_0, 0_1, \ldots, k_0, k_1\} \\ & D_{i_0}(i_1) \geq 1 \quad \forall i \in \{0, \ldots, k\} \\ & D_u(j_s) \geq 0 \quad \forall j_s \in V_H,\ \forall u \in \{0_0, 0_1, \ldots, k_0, k_1\} \\ & d_e \geq 0 \quad \forall e \in E_H \\ & D_{i_0}(j_0) = D_{i_1}(j_1), \quad D_{i_0}(j_1) = D_{i_1}(j_0) \quad \forall i, j \in \{0, \ldots, k\} \end{aligned} \qquad \text{(BMC LP)}$$
A useful interpretation of the above LP is provided by viewing variables $d_e$ as the distance between $i_s$ and $j_t$ for any edge $e = (i_s, j_t)$, and variables $D_u(j_s)$ as the distance between $u$ and $j_s$. The first two constraints ensure that these distance variables satisfy the triangle inequality. These, along with the constraint $D_{i_0}(i_1) \geq 1$, ensure that for every ST pair $(i_0, i_1)$, any path $P$ from $i_0$ to $i_1$ has $\sum_{e \in P} d_e \geq 1$. In contrast, the Min-cut LP ensures this kind of separation only for the $(0_0, 0_1)$ terminal pair. Later, in Section 5, we will establish a connection between these constraints and cycle constraints [15, 16, 17, 18, 19]. When the LP returns integral solutions, we obtain an optimal MAP labeling using $M = \{j_s : D_{0_0}(j_s) = 0\}$. When the variables are not integral, [21] suggests a region-growing approach for rounding them so as to get an $O(\log k)$ approximation of the optimal objective. In practice, we found that ICM starting with fractional node assignments $x_i = D_{0_0}(i_0)$ gave better results.
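The ICM rounding can be sketched as follows (hypothetical; `frac[i]` plays the role of the fractional value $D_{0_0}(i_0)$, and the potential encoding is ours):

```python
def icm_round(frac, theta_node, theta_edge, neighbors, sweeps=20):
    """Round a fractional LP solution with ICM.

    frac[i] is the fractional node value; theta_edge[(i, j)] is a 2x2 numpy
    table for i < j. Each sweep sets every variable to its cheapest label
    given its neighbors, starting from the thresholded LP values.
    """
    x = {i: int(frac[i] > 0.5) for i in frac}

    def pair_cost(i, j, si, sj):
        return theta_edge[(i, j)][si, sj] if i < j else theta_edge[(j, i)][sj, si]

    for _ in range(sweeps):
        changed = False
        for i in x:
            cost = [theta_node[i][s] +
                    sum(pair_cost(i, j, s, x[j]) for j in neighbors[i])
                    for s in (0, 1)]
            best = int(cost[1] < cost[0])
            if best != x[i]:
                x[i], changed = best, True
        if not changed:            # converged to a local optimum
            break
    return x
```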
2.3 Reducing the size of the ST set
In the LP above, for every edge that is non-submodular we add a terminal pair to ST corresponding to one of its two endpoints. The problem of minimizing the size of the ST set is equivalent to the problem of finding the minimum set $R$ of variables of $G$ such that all cycles with an odd number of non-submodular edges are covered. It is easy to see that in any such cycle, it is always possible to flip the variables such that any one selected edge is non-submodular and the rest are submodular. Since finding the optimal $R$ is NP-hard, we used the following heuristics.

First, we pick the set of variables to flip so as to minimize the number of non-submodular edges, and then obtain a vertex cover of the reduced non-submodular edges using a greedy algorithm. Interestingly, this problem can be cast as a MAP inference problem on $G$ defined as follows: for each variable, label 0 denotes that the variable is not flipped and 1 denotes that it is flipped. Thus, if an edge is submodular and both variables attached to it are flipped (i.e., labeled 1), then the edge remains submodular. We need to minimize the number of non-submodular edges. Therefore, the energy function for this new graphical model will be
$$\theta_{ij}(x_i, x_j) = x_i \oplus x_j \oplus \text{is\_non\_submodular}(i, j) \quad \forall (i,j) \in E, \qquad \theta_i(0) = \theta_i(1) = 0 \quad \forall i \in V$$
When $G$ is planar, for example a grid, the special structure of these potentials (Ising energy function) enables us to get an optimal solution using the matching algorithm of [24, 8].

With the above LP formulation, we were able to obtain exact solutions for most 20x20 grids and 25-node clique graphs. However, the LP does not scale beyond 30x30 grids and 50-node clique graphs. We therefore provide a combinatorial algorithm for solving the LP.
3 Combinatorial algorithm
We will adapt the primal-dual algorithm that was proposed in [22, 23] for solving the closely related Multi-cut problem. We review this algorithm in Section 3.1 and in Section 3.2 show how we adapt it to solve the BMC LP.

3.1 Garg's algorithm for the Multi-cut problem
Recall that in the Multi-cut problem, the goal is to remove the minimum-weight set of edges so as to separate each $(s_i, t_i)$ pair in ST. This problem is formulated as the following primal-dual LP pair in [22].

Multi-cut LP (Primal):
$$\min_{d} \ \sum_{e \in E_H} w_e d_e \quad \text{s.t.} \quad \sum_{e \in P} d_e \geq 1 \ \ \forall P \in \mathcal{P}, \qquad d_e \geq 0 \ \ \forall e \in E_H$$

Multi-cut LP (Dual):
$$\max_{f} \ \sum_{P \in \mathcal{P}} f_P \quad \text{s.t.} \quad \sum_{P \in \mathcal{P}_e} f_P \leq w_e \ \ \forall e \in E_H, \qquad f_P \geq 0 \ \ \forall P \in \mathcal{P}$$

where $\mathcal{P}$ denotes all paths between a pair of vertices in ST and $\mathcal{P}_e$ denotes the set of paths in $\mathcal{P}$ which contain edge $e$.
contain edge e. Garg?s algorithm [22, 23] simultaneously solves the primal and dual so that they are
within an factor of each other for any user-provided > 0. The algorithm starts by setting all dual
variables flow variables to zero and all primal variables de = ? where ? is (1 + )/((1 + )L)1/ , and
L is the maximum number of edges for any path in P.PIt then iteratively updates the variables by
first finding the shortest path P ? P which violates the e?P de ? 1 constraint and then, modifying
P
variables as fP = mine?P we i.e f = f +fP and de = de (1+ f
we ) ?e ? P . At any point a feasible
solution can be obtained by rescaling all the primal and dual variables. Termination is reached when
the rescaled primal objective is within (1 + ) of the rescaled dual objective for error parameter .
This process is shown to terminate in O(m log1+ 1+
? ) steps where m = |E H |.
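A compact transcription of this primal-dual loop, as a hypothetical sketch under our own graph encoding; it uses Dijkstra for the shortest-path step and omits the final rescaling needed to read off feasible primal/dual objectives.

```python
import heapq
import math

def garg_multicut(adj, w, pairs, eps, L):
    """Primal-dual scheme of [22, 23] (sketch).

    adj[u]: list of (v, e) pairs; w[e]: edge weight; pairs: the ST list;
    L: an upper bound on the number of edges in any path in P.
    """
    delta = (1 + eps) / ((1 + eps) * L) ** (1 / eps)
    d = {e: delta for e in w}             # primal edge lengths
    f, fe = 0.0, {e: 0.0 for e in w}      # dual: total flow and per-edge flow

    def shortest_path(s, t):
        dist, par = {s: 0.0}, {s: (None, None)}
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist.get(u, math.inf):
                continue                  # stale queue entry
            if u == t:
                break
            for v, e in adj.get(u, []):
                nd = du + d[e]
                if nd < dist.get(v, math.inf):
                    dist[v], par[v] = nd, (u, e)
                    heapq.heappush(pq, (nd, v))
        if t not in dist:
            return math.inf, []
        path, u = [], t
        while par[u][0] is not None:      # walk back to s, collecting edges
            path.append(par[u][1])
            u = par[u][0]
        return dist[t], path

    while True:
        length, path = min((shortest_path(s, t) for s, t in pairs),
                           key=lambda r: r[0])
        if length >= 1.0:                 # all path constraints satisfied
            break
        fp = min(w[e] for e in path)      # bottleneck capacity along the path
        f += fp
        for e in path:
            fe[e] += fp
            d[e] *= 1 + fp / w[e]         # multiplicative length update
    return d, f, fe
```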
3.2 Solving the BMC LP
We first modify the edge weights on the graph $H$ constructed for the BMC LP so that for every edge $e = (i_s, j_t)$ and its complement $\bar{e} = (\bar{i}_s, \bar{j}_t)$, the weights are equal, that is, $w_e = w_{\bar{e}}$. This can be easily ensured by setting $w_e = w_{\bar{e}}$ to the average of the previous weights of $e$ and $\bar{e}$ in $H$. This change adds all $2n + 2$ possible vertices to $H$, i.e., all nodes $0 \leq i \leq n$ contribute terminal pairs $(i_0, i_1)$ to the ST set. For any path $P$ in $H$ we define its complementary path $\bar{P}$ to be the path obtained by reversing the order of edges and complementing all edges in $P$. For example, the complement of path $(2_0, 1_1, 3_0, 2_1)$ is $(2_0, 3_1, 1_0, 2_1)$. Next, we consider the following alternative LP, called the BMC-Sym LP, for BMC on symmetric graphs, that is, graphs where $w_e = w_{\bar{e}}$:
$$\min \ \sum_{e \in E_H} w_e d_e \quad \text{s.t.} \quad \sum_{e \in P} d_e \geq 1 \ \ \forall P \in \mathcal{P}, \qquad d_e \geq 0,\ d_e = d_{\bar{e}} \ \ \forall e \in E_H \qquad \text{(BMC-Sym LP)}$$
Lemma 1. When $H$ is symmetric, the BMC-Sym LP, BMC LP, and Multi-cut LP are equivalent.

Proof. Any feasible solution of the BMC-Sym LP can be used to obtain a solution to the BMC LP with the same objective as follows. Leave the $d_e$ variables unchanged; this keeps the objective intact. Set $D_u(i_s)$ to the length of the shortest path between $u$ and $i_s$, that is, $D_u(i_s) = \min_{P \in \text{paths}(u, i_s)} \sum_{e \in P} d_e$. This yields a feasible solution: the constraints $d_e + D_u(i_s) - D_u(j_t) \geq 0$ hold because the $D_u(i_s)$ variables are shortest-path distances between $u$ and $i_s$. The constraints $D_{i_0}(i_1) \geq 1$ hold because all paths between $i_0$ and $i_1$ have distance $\geq 1$ in the BMC-Sym LP. The constraints $D_{i_0}(j_0) = D_{i_1}(j_1)$ and $D_{i_0}(j_1) = D_{i_1}(j_0)$ are satisfied because the distances are symmetric ($d_e = d_{\bar{e}}$).

We next show that any feasible solution of the BMC LP gives a feasible solution to the Multi-cut LP with the same $d_e$ and objective value. For any pair $(p_0, p_1) \in ST$, the constraint $D_{p_0}(p_1) \geq 1$, along with repeated application of $d_e + D_{p_0}(i_s) - D_{p_0}(j_t) \geq 0$, ensures that $\sum_{e \in P} d_e \geq 1$ for any path between $p_0$ and $p_1$.

Finally, we show that if $\{d_e\}$ is a feasible solution to the Multi-cut LP, then it can be used to construct a feasible solution $\{d'_e\}$ to the BMC-Sym LP without changing the value of the objective function, using $d'_e = d'_{\bar{e}} = (d_e + d_{\bar{e}})/2$. The objective value remains unchanged since $w_e = w_{\bar{e}}$. The path constraints $\sum_{e \in P} d'_e \geq 1$ hold $\forall P \in \mathcal{P}$ because both path $P$ and its complementary path $\bar{P}$ are in $\mathcal{P}$, and we know that $\sum_{e \in P} d_e \geq 1$ and $\sum_{e \in P} d_{\bar{e}} = \sum_{e \in \bar{P}} d_e \geq 1$.
We modify Garg's algorithm [22, 23] to exploit the fact that the graph is symmetric, so that at each iteration we push twice the flow while keeping the approximation guarantees intact. The key change we make is that when augmenting flow $f$ along some path $P$, we augment the same flow $f$ along the complementary path $\bar{P}$, as outlined in our final algorithm in Figure 1. This change ensures that we always obtain symmetric distance values, as we prove below.

Lemma 2. Suppose $H$ is a symmetric graph. Then $d_e = d_{\bar{e}}\ \forall e \in E_H$ at the end of each iteration of the while loop in the algorithm in Figure 1.

Proof. We prove by induction. The claim holds initially, since $d_e = \delta\ \forall e \in E_H$ and $H$ is symmetric. Let $P_i$ denote the path selected in the $i$th iteration of the algorithm. Now, suppose that the hypothesis is true for the $n$th iteration. In the $(n+1)$th iteration, we augment flow $f$ along both paths $P_{n+1}$ and $\bar{P}_{n+1}$. These paths $P_{n+1}$ and $\bar{P}_{n+1}$ do not share any edge, because this would imply that there is another pair $(j_0, j_1)$ of shorter length, and we would choose $P_{n+1}$ to be this path instead. We then perform the update $d_e = d_e(1 + f_P/w_e)$ with $f_P = \min_{e \in P} w_e$ for both the paths $P_{n+1}$ and $\bar{P}_{n+1}$. Since $w_e = w_{\bar{e}}$ for all $e \in E_H$ and $d_e = d_{\bar{e}}\ \forall e \in E_H$ before this iteration, $d_e = d_{\bar{e}}\ \forall e \in E_H$ after the $(n+1)$th step.

Theorem 3. The modified algorithm also provides an $\epsilon$-approximation algorithm for the BMC LP.

Proof. Suppose we did not augment the flow along the complementary path $\bar{P}$ while augmenting $P$. In the next iteration, the original algorithm of [22, 23] would pick $\bar{P}$ or some path of the same length, since the path lengths of $P$ and $\bar{P}$ are equal before the iteration and they share no common edges. Therefore, by forcing $\bar{P}$ we are not modifying the course of the original algorithm, and the analysis in [22, 23] holds here as well.
Input: Graphical model G with reparameterized energy function E, approximation guarantee ε
Create symmetric graph H from G and E
Initialize d_e = δ (δ derived from ε as shown in Section 3.1), f = 0, f_e = 0,
  x = arbitrary initial labeling of graphical model G
Define: primal objective P({d_e}) = (Σ_e w_e d_e) / (min_{P∈𝒫} Σ_{e∈P} d_e)
Define: dual objective D(f, {f_e}) = f / (max_e f_e / w_e)
while min(E(x) − θ_const, P({d_e})) > (1 + ε) D(f, {f_e}) do
    P = shortest path over all pairs (i_0, i_1) ∈ ST
    if Σ_{e∈P} d_e < 1 then
        With f_P = min_{e∈P} w_e, update f = f + f_P, f_e = f_e + f_P, and d_e = d_e(1 + f_P/w_e) ∀e ∈ P.
        Repeat the above for the complementary path P̄.
        x' = current solution after rounding; x = better of x and x'
    end if
end while
Return bound = D(f, {f_e}) + θ_const, MAP = x.

Figure 1: Combinatorial algorithm for MAP inference using BMC.

Our algorithm, in addition to updating the primal and dual solutions at each iteration, also keeps track of the primal objective obtained with the current best rounding (x in Figure 1). Often, the rounded variables yielded lower primal objective values and led to early termination. The complexity of the algorithm can be shown to be O(ε⁻² k m²), ignoring polylog(m) factors. Fleischer [25] subsequently improved the above algorithm by reducing the complexity to O(ε⁻² m²). It is interesting to note that the running time is independent of k. Though we have presented our modification relative to the algorithm in [22, 23], we can fit our algorithm into Fleischer's framework as well. In fact, we use Fleischer's modification in the practical implementation of our algorithm.
[Figure 2: Clique size scaled values of MAP, upper bound, and running time with increasing clique size on three methods: BMC, MPLP, and TRW-S. Panels: MAP Score/Clique Size, Bound/Clique Size, Time in secs/Clique Size.]
[Figure 3: Comparing convergence rates of BMC and MPLP for three clique graphs of size 50. Panels: (a) edge strength = 0.15, (b) edge strength = 0.5, (c) edge strength = 2; curves show Map_MPLP, Bound_MPLP, Map_BMC, and Bound_BMC scores against time in seconds.]
4 Experiments
We compare our proposed algorithm (called BMC here) with MPLP, a state-of-the-art message-passing algorithm [1] that tightens the standard MAP LP with third-order marginal constraints, which are equivalent to cycle constraints for binary MRFs. As a reference we also present results for the TRW-S algorithm [9]. BMC is implemented in Java, whereas for MPLP we ran the C++ code provided by the authors. We run BMC with $\epsilon = 0.02$. MPLP was run with edge clusters until convergence (up to a precision of $2 \times 10^{-4}$) or for at most 1000 iterations, whichever comes first. Our experiments were performed on two kinds of datasets: (1) clique-graph-based binary MRFs of various sizes generated as per the method of [17], where edge potentials are Potts potentials sampled from $U[-\lambda, \lambda]$ (our default setting was $\lambda = 0.5$) and node potentials from $U[-1, 1]$, and (2) Maxcut instances of various sizes and densities from the BiqMac library³. Since the second task is formulated as a maximization problem, for the sake of consistency we report all our results as maximizing the MAP score. We compare the algorithms on the quality of the final solution, the upper bound on the MAP score, and running time. It should be noted that multiplicative bounds do not hold here, since the reparameterizations give rise to negative constants.
In the graphs in Figure 2 we compare BMC, MPLP, and TRW-S with increasing clique size, averaged over five seeds. We observe that BMC provides much higher MAP scores and slightly tighter bounds than MPLP. In terms of running time, BMC is more than an order of magnitude faster than MPLP for large graphs. The baseline LP (TRW-S), while much faster than both BMC and MPLP, provides really poor MAP scores and bounds. We also compare BMC and MPLP on their speed of convergence. In Figures 3(a), (b), and (c) we show the MAP scores and upper bounds at different times during the execution of the algorithms on cliques of size 50 with different edge strengths. BMC, whose bounds and MAP scores appear as the two short arcs in between the MAP scores and bounds of MPLP, converges significantly faster and terminates well before MPLP while providing the same or better MAP scores and bounds for all edge strengths.

In Table 1 we compare the three algorithms on various graphs from the BiqMac library. The graphs are sorted by increasing density and are all of size 100. We observe that the MAP values for BMC are significantly higher than those for TRW-S. For MPLP, the MAP values are always zero because it decodes marginals purely based on node marginals, which for these graphs are tied. The upper bounds achieved by MPLP are significantly tighter than TRW-S, showing that with proper rounding MPLP is likely to produce good MAP scores, but BMC provides even tighter bounds in most cases.

³http://biqmac.uni-klu.ac.at/
Graph   density |   MAP                |   Bound               |   Time in seconds
                |   BMC   MPLP  TRW-S  |   BMC    MPLP   TRW-S |   BMC   MPLP   TRW-S
pm1s    0.1     |   110      0     91  |   131     200     257 |    45     43   0.005
pw01    0.1     |  1986      0   1882  |  2079    2397    2745 |    48     46   0.006
w01     0.1     |   653      0    495  |   720    1115    1320 |    46     41   0.004
g05     0.5     |  1409      0   1379  |  1650    1720    2475 |   761    317   0.021
pw05    0.5     |  7975      0   7786  |  9131    9195   13696 |   699   1139   0.021
w05     0.5     |  1444      0   1180  |  2245    2488    6588 |   737   1261   0.021
pw09    0.9     | 13427      0  13182  | 16493   16404   24563 |   106   2524   0.041
w09     0.9     |  1995      0   1582  |  4073    4095   11763 |   123   2671   0.053
pm1d    0.99    |   347      0    277  |   842     924    2463 |    12   1307   0.047

Table 1: Comparisons on Maxcut graphs of size 100 from the BiqMac library.
The running time for BMC is significantly lower than MPLP for dense graphs, but for sparse graphs (10% edges) it requires the same time as MPLP.

Thus, overall we find that BMC achieves tighter bounds and better MAP solutions at a significantly faster rate than the state-of-the-art method for tightening LPs. The gain over MPLP is highest for the case of dense graphs. For sparse graphs many algorithms work; for example, [8, 26] recently reported excellent results on planar or nearly planar graphs, and [27] show that even local search works when the graph is sparse.
5 Discussion and Conclusion
We put our tightening of the basic MAP LP (the marginal LP in Equation 2, or the Min-cut LP) in perspective with other proposed tightenings based on cycle constraints [17, 18, 1, 19] and higher-order marginal constraints [17, 1, 20]. For binary MRFs, cycle constraints are equivalent to adding marginal consistency constraints among triples of variables [28]. We show the relationship between cycle constraints and our constraints. Let $S = (V_S, E_S)$ denote the minimum-cut graph created from $G$ as shown in Section 2.1 but without the $i_1$ vertices for $1 \leq i \leq n$, so that the weights of non-submodular edges in $S$ will be negative. The LP relaxation of MAP based on cycle constraints is defined as:
$$\min_{d} \ \sum_{e \in E_S} w_e d_e \quad \text{s.t.} \quad \sum_{e \in F} (1 - d_e) + \sum_{e \in C \setminus F} d_e \geq 1 \ \ \forall C \in \mathcal{C},\ F \subseteq C \text{ with } |F| \text{ odd}, \qquad d_e \in [0,1] \ \ \forall e \in E_S$$
where $\mathcal{C}$ denotes the set of all cycles in $S$. Suppose we construct our symmetric minimum-cut graph $H$ with edges $(i_s, j_t)$ corresponding to all four possible values of $(s, t)$ for each edge $(i,j) \in E$, instead of the two that we currently get due to zero-normalized edge potentials. Then the BMC-Sym LP, along with the constraints $d_{i_s j_t} + d_{\bar{i}_s \bar{j}_t} = 1\ \forall (i_s, j_t) \in E_H$, is equivalent to the cycle LP above. We skip the proof due to lack of space.
Our main contribution is that by relaxing the cycle LP to the Bipartite Multi-cut LP we have been able to design a combinatorial algorithm which is guaranteed to provide an approximation to the LP in polynomial time. Since we solve the LP and its dual better than any of the earlier methods for enforcing cycle constraints, we are able to obtain tighter bounds and MAP scores at considerably faster speed.

Future work in this area includes developing a combinatorial algorithm for solving the semi-definite program in [21] and extending our approach to multi-label graphical models.

Acknowledgements. We thank Naveen Garg for helpful discussions relating the multi-commodity flow problem to the Bipartite Multi-cut problem. The second author acknowledges the generous support of Microsoft Research and IBM's Faculty Award.
References
[1] David Sontag, Talya Meltzer, Amir Globerson, Tommi Jaakkola, and Yair Weiss. Tightening LP Relaxations for MAP using Message Passing. In UAI, 2008.
[2] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[3] M. I. Schlesinger. Syntactic analysis of two-dimensional visual signals in noisy conditions. Kybernetica, 1976.
[4] Chandra Chekuri, Sanjeev Khanna, Joseph (Seffi) Naor, and Leonid Zosin. Approximation Algorithms for the Metric Labeling Problem via a New Linear Programming Formulation. In SODA, 2001.
[5] Jon Kleinberg and Eva Tardos. Approximation Algorithms for Classification Problems with Pairwise Relationships: Metric Labeling and Markov Random Fields. J. ACM, 49(5):616-639, 2002.
[6] M. Wainwright, T. Jaakkola, and A. Willsky. MAP Estimation Via Agreement on Trees: Message-Passing and Linear Programming. IEEE Transactions on Information Theory, 51, 2005.
[7] Tomáš Werner. A Linear Programming Approach to Max-Sum Problem: A Review. IEEE Trans. Pattern Anal. Mach. Intell., 29(7):1165-1179, 2007.
[8] Nic Schraudolph. Polynomial-Time Exact Inference in NP-Hard Binary MRFs via Reweighted Perfect Matching. In AISTATS, 2010.
[9] Vladimir Kolmogorov. Convergent Tree-Reweighted Message Passing for Energy Minimization. IEEE Trans. Pattern Anal. Mach. Intell., 28(10):1568-1583, 2006.
[10] Talya Meltzer, Amir Globerson, and Yair Weiss. Convergent message passing algorithms - a unifying view. In UAI, 2009.
[11] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for Graph-structured Linear Programs: Proximal Methods and Rounding Schemes. JMLR, 11:1043-1080, 2010.
[12] David Sontag and Tommi Jaakkola. Tree Block Coordinate Descent for MAP in Graphical Models. In AISTATS, volume 9, pages 544-551, 2009.
[13] Endre Boros and Peter L. Hammer. Pseudo-Boolean Optimization. Discrete Applied Mathematics, 123(1-3):155-225, 2002.
[14] Carsten Rother, Vladimir Kolmogorov, Victor S. Lempitsky, and Martin Szummer. Optimizing Binary MRFs via Extended Roof Duality. In CVPR, 2007.
[15] Francisco Barahona and Ali Ridha Mahjoub. On the cut polytope. Math. Program., 36(2):157-173, 1986.
[16] Uri Zwick. Outward Rotations: A Tool for Rounding Solutions of Semidefinite Programming Relaxations, with Applications to MAX CUT and Other Problems. In STOC, 1999.
[17] David Sontag and Tommi Jaakkola. New Outer Bounds on the Marginal Polytope. In NIPS, 2007.
[18] M. Pawan Kumar, Vladimir Kolmogorov, and Philip H. S. Torr. An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs. JMLR, 10:71-106, 2009.
[19] Nikos Komodakis and Nikos Paragios. Beyond Loose LP-Relaxations: Optimizing MRFs by Repairing Cycles. In ECCV, 2008.
[20] Tomáš Werner. High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (MAP-MRF). In CVPR, 2008.
[21] Sreyash Kenkre and Sundar Vishwanathan. Approximation algorithms for the Bipartite Multicut problem. Information Processing Letters, 110(8-9):282-287, 2010.
[22] Naveen Garg, Vijay V. Vazirani, and Mihalis Yannakakis. Approximate Max-Flow Min-(Multi)Cut Theorems and Their Applications. SIAM J. Comput., 25(2):235-251, 1996.
[23] Naveen Garg and Jochen Könemann. Faster and Simpler Algorithms for Multicommodity Flow and Other Fractional Packing Problems. SIAM J. Comput., 37(2):630-652, 2007.
[24] Amir Globerson and Tommi Jaakkola. Approximate inference using planar graph decomposition. In NIPS, 2006.
[25] Lisa Fleischer. Approximating Fractional Multicommodity Flow Independent of the Number of Commodities. SIAM J. Discrete Math., 13(4):505-520, 2000.
[26] D. Batra, A. C. Gallagher, D. Parikh, and T. Chen. Beyond trees: MRF inference via outer-planar decomposition. In CVPR, 2010.
[27] Kyomin Jung, Pushmeet Kohli, and Devavrat Shah. Local Rules for Global MAP: When Do They Work? In NIPS, 2009.
[28] David Sontag. Cutting plane algorithms for variational inference in graphical models. Master's thesis, MIT, Department of Electrical Engineering and Computer Science, 2007.
3,502 | 4,171 | Bayesian Action-Graph Games
Albert Xin Jiang
Department of Computer Science
University of British Columbia
[email protected]
Kevin Leyton-Brown
Department of Computer Science
University of British Columbia
[email protected]
Abstract
Games of incomplete information, or Bayesian games, are an important gametheoretic model and have many applications in economics. We propose Bayesian
action-graph games (BAGGs), a novel graphical representation for Bayesian games.
BAGGs can represent arbitrary Bayesian games, and furthermore can compactly
express Bayesian games exhibiting commonly encountered types of structure including symmetry, action- and type-specific utility independence, and probabilistic
independence of type distributions. We provide an algorithm for computing expected utility in BAGGs, and discuss conditions under which the algorithm runs in
polynomial time. Bayes-Nash equilibria of BAGGs can be computed by adapting
existing algorithms for complete-information normal form games and leveraging
our expected utility algorithm. We show both theoretically and empirically that our
approaches improve significantly on the state of the art.
1 Introduction
In the last decade, there has been much research at the interface of computer science and game
theory (see e.g. [19, 22]). One fundamental class of computational problems in game theory is
the computation of solution concepts of a finite game. Much of current research on computation
of solution concepts has focused on complete-information games, in which the game being played
is common knowledge among the players. However, in many multi-agent situations, players are
uncertain about the game being played. Harsanyi [10] proposed games of incomplete information (or
Bayesian games) as a mathematical model of such interactions. Bayesian games have found many
applications in economics, including most notably auction theory and mechanism design.
Our interest is in computing with Bayesian games, and particularly in identifying sample Bayes-Nash
equilibrium. There are two key obstacles to performing such computations efficiently. The first
is representational: the straightforward tabular representation of Bayesian game utility functions
(the Bayesian Normal Form) requires space exponential in the number of players. For large games,
it becomes infeasible to store the game in memory, and performing even computations that are
polynomial time in the input size are impractical. An analogous obstacle arises in the context of
complete-information games: there the standard representation (normal form) also requires space
exponential in the number of players. The second obstacle is the lack of existing algorithms for
identifying sample Bayes-Nash equilibrium for arbitrary Bayesian games. Harsanyi [10] showed
that a Bayesian game can be interpreted as an equivalent complete-information game via "induced normal form" or "agent form" interpretations. Thus one approach is to interpret a Bayesian game as a complete-information game, enabling the use of existing Nash-equilibrium-finding algorithms (e.g. [24, 9]). However, generating the normal form representations under both of these complete-information interpretations causes a further exponential blowup in representation size.
Most games of interest have highly-structured payoff functions, and thus it is possible to overcome
the first obstacle by representing them compactly. This has been done for complete information
games through (e.g.) the graphical games [16] and Action-Graph Games (AGGs) [1] representations.
In this paper we propose Bayesian Action-Graph Games (BAGGs), a compact representation for
Bayesian games. BAGGs can represent arbitrary Bayesian games, and furthermore can compactly
express Bayesian games with commonly encountered types of structure. The type profile distribution
is represented as a Bayesian network, which can exploit conditional independence structure among
the types. BAGGs represent utility functions in a way similar to the AGG representation, and like
AGGs, are able to exploit anonymity and action-specific utility independencies. Furthermore, BAGGs
can compactly express Bayesian games exhibiting type-specific independence: each player's utility
function can have different kinds of structure depending on her instantiated type. We provide an
algorithm for computing expected utility in BAGGs, a key step in many algorithms for game-theoretic
solution concepts. Our approach interprets expected utility computation as a probabilistic inference
problem on an induced Bayesian Network. In particular, our algorithm runs in polynomial time
for the important case of independent type distributions. To compute Bayes-Nash equilibria for
BAGGs, we consider the agent form interpretation of the BAGG. Although a naive normal form
representation would require an exponential blowup, BAGGs can act as a compact representation
of the agent form. Computational tasks on the agent form can be done efficiently by leveraging our
expected utility algorithm for BAGGs. We have implemented our approach by adapting two Nash
equilibrium algorithms, the simplicial subdivision algorithm [24] and Govindan and Wilson's global
Newton method [9]. We show empirically that our approach outperforms the existing approaches of
solving for Nash on the induced normal form or on the normal form representation of the agent form.
We now discuss some related literature. There has been some research on heuristic methods for
finding Bayes-Nash equilibria for certain classes of auction games using iterated best response (see
e.g. [21, 25]). Such methods are not guaranteed to converge to a solution. Howson and Rosenthal
[12] applied the agent form transformation to 2-player Bayesian games, resulting in a complete-information polymatrix game. Our approach can be seen as a generalization of their method to
general Bayesian games. Singh et al. [23] proposed an incomplete-information version of the graphical
game representation, and presented efficient algorithms for computing approximate Bayes-Nash
equilibria in the case of tree games. Gottlob et al. [7] considered a similar extension of the graphical
game representation and analyzed the problem of finding a pure-strategy Bayes-Nash equilibrium.
Like graphical games, such representations are limited in that they can only exploit strict utility
independencies. Oliehoek et al. [20] proposed a heuristic search algorithm for common-payoff
Bayesian games, which has applications to cooperative multi-agent problems. Bayesian games can
be interpreted as dynamic games with an initial move by Nature; thus, also related is the literature
on representations for dynamic games, including multi-agent influence diagrams (MAIDs) [17]
and temporal action-graph games (TAGGs) [14]. Compared to these representations for dynamic
games, BAGGs focus explicitly on structure common to Bayesian games; in particular, only BAGGs
can efficiently express type-specific utility structure. Also, by representing utility functions and
type distributions as separate components, BAGGs can be more versatile (e.g., a future direction
is to answer computational questions that do not depend on the type distribution, such as ex-post
equilibria). Furthermore, BAGGs can be solved by adapting Nash-equilibrium algorithms such as
Govindan and Wilson's global Newton method [9] for static games; this is generally more practical
than their related Nash equilibrium algorithm [8] that directly works on dynamic games: while both
approaches avoid the exponential blowup of transforming to the induced normal form, the algorithm
for dynamic games has to solve an additional quadratic program at each step.
2 Preliminaries

2.1 Complete-information Games
We assume readers are familiar with the basic concepts of complete-information games and here we only establish essential notation. A complete-information game is a tuple $(N, \{A_i\}_{i \in N}, \{u_i\}_{i \in N})$ where $N = \{1, \dots, n\}$ is the set of agents; for each agent $i$, $A_i$ is the set of $i$'s actions. We denote by $a_i \in A_i$ one of $i$'s actions. An action profile $a = (a_1, \dots, a_n) \in \prod_{i \in N} A_i$ is a tuple of the agents' actions. Agent $i$'s utility function is $u_i : \prod_{j \in N} A_j \to \mathbb{R}$. A mixed strategy $\sigma_i$ for player $i$ is a probability distribution over $A_i$. A mixed strategy profile $\sigma$ is a tuple of the $n$ players' mixed strategies. We denote by $u_i(\sigma)$ the expected utility of player $i$ under the mixed strategy profile $\sigma$. We adopt the following notational convention: for any $n$-tuple $X$ we denote by $X_{-i}$ the elements of $X$ corresponding to players other than $i$.

A game representation is a data structure that stores all information needed to specify a game. A normal form representation of a game uses a matrix to represent each utility function $u_i$. The size of this representation is $n \prod_{j \in N} |A_j|$, which grows exponentially in the number of players.
2.2 Bayesian Games
We now define Bayesian games and discuss common types of structure.
Definition 1. A Bayesian game is a tuple $(N, \{A_i\}_{i \in N}, \Theta, P, \{u_i\}_{i \in N})$ where $N = \{1, \dots, n\}$ is the set of players; each $A_i$ is player $i$'s action set, and $A = \prod_i A_i$ is the set of action profiles; $\Theta = \prod_i \Theta_i$ is the set of type profiles, where $\Theta_i$ is player $i$'s set of types; $P : \Theta \to \mathbb{R}$ is the type distribution and $u_i : A \times \Theta \to \mathbb{R}$ is the utility function for player $i$.

As in the complete-information case, we denote by $a_i$ an element of $A_i$, and $a = (a_1, \dots, a_n)$ an action profile. Furthermore we denote by $\theta_i$ an element of $\Theta_i$, and by $\theta$ a type profile. The game is played as follows. A type profile $\theta = (\theta_1, \dots, \theta_n) \in \Theta$ is drawn according to the distribution $P$. Each player $i$ observes her type $\theta_i$ and, based on this observation, chooses from her set of actions $A_i$. Each player $i$'s utility is then given by $u_i(a, \theta)$, where $a$ is the resulting action profile.

Player $i$ can deterministically choose a pure strategy $s_i$, in which given each $\theta_i \in \Theta_i$ she deterministically chooses an action $s_i(\theta_i)$. Player $i$ can also randomize and play a mixed strategy $\sigma_i$, in which her probability of choosing $a_i$ given $\theta_i$ is $\sigma_i(a_i|\theta_i)$. That is, given a type $\theta_i \in \Theta_i$, she plays according to distribution $\sigma_i(\cdot|\theta_i)$ over her set of actions $A_i$. A mixed strategy profile $\sigma = (\sigma_1, \dots, \sigma_n)$ is a tuple of the players' mixed strategies.

The expected utility of $i$ given $\theta_i$ under a mixed strategy profile $\sigma$ is the expected value of $i$'s utility under the resulting joint distribution of $a$ and $\theta$, conditioned on $i$ receiving type $\theta_i$:

$$u_i(\sigma|\theta_i) = \sum_{\theta_{-i}} P(\theta_{-i}|\theta_i) \sum_{a} u_i(a, \theta) \prod_j \sigma_j(a_j|\theta_j). \qquad (1)$$

A mixed strategy profile $\sigma$ is a Bayes-Nash equilibrium if for all $i$, for all $\theta_i$, for all $a_i \in A_i$, $u_i(\sigma|\theta_i) \ge u_i(\sigma^{\theta_i \to a_i}|\theta_i)$, where $\sigma^{\theta_i \to a_i}$ is the mixed strategy profile that is identical to $\sigma$ except that $i$ plays $a_i$ with probability 1 given $\theta_i$.

In specifying a Bayesian game, the space bottlenecks are the type distribution and the utility functions. Without additional structure, we cannot do better than representing each utility function $u_i : A \times \Theta \to \mathbb{R}$ as a table and the type distribution as a table as well. We call this representation the Bayesian normal form. The size of this representation is $n \cdot \prod_{i=1}^n (|\Theta_i| \cdot |A_i|) + \prod_{i=1}^n |\Theta_i|$.

We say a Bayesian game has independent type distributions if players' types are drawn independently, i.e. the type-profile distribution $P(\theta)$ is a product distribution: $P(\theta) = \prod_i P(\theta_i)$. In this case the distribution $P$ can be represented compactly using $\sum_i |\Theta_i|$ numbers.

Given a permutation of players $\pi : N \to N$ and an action profile $a = (a_1, \dots, a_n)$, let $a^\pi = (a_{\pi(1)}, \dots, a_{\pi(n)})$. Similarly let $\theta^\pi = (\theta_{\pi(1)}, \dots, \theta_{\pi(n)})$. We say the type distribution $P$ is symmetric if $|\Theta_i| = |\Theta_j|$ for all $i, j \in N$, and if for all permutations $\pi : N \to N$, $P(\theta) = P(\theta^\pi)$. We say a Bayesian game has symmetric utility functions if $|A_i| = |A_j|$ and $|\Theta_i| = |\Theta_j|$ for all $i, j \in N$, and if for all permutations $\pi : N \to N$, we have $u_i(a, \theta) = u_{\pi(i)}(a^\pi, \theta^\pi)$ for all $i \in N$. A Bayesian game is symmetric if its type distribution and utility functions are symmetric. The utility functions of such a game range over at most $|\Theta_i||A_i| \binom{n-2+|\Theta_i||A_i|}{|\Theta_i||A_i|-1}$ unique utility values.

A Bayesian game exhibits conditional utility independence if each player $i$'s utility depends on the action profile $a$ and her own type $\theta_i$, but does not depend on the other players' types. Then the utility function of each player $i$ ranges over at most $|A||\Theta_i|$ unique utility values.
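To make Equation (1) concrete, the following is a minimal brute-force sketch (ours, not the authors'; all names are illustrative) of expected utility for a game stored in Bayesian normal form. Its cost grows with $|\Theta_{-i}| \cdot |A|$, exactly the exponential blow-up the BAGG machinery later avoids.

```python
from itertools import product

def expected_utility(i, theta_i, sigma, P, u, A, Theta):
    """Brute-force evaluation of Eq. (1): u_i(sigma | theta_i).

    A, Theta : lists of per-player action sets / type sets.
    sigma    : sigma[j][theta_j][a_j] = prob. that j plays a_j given theta_j.
    P        : dict mapping full type profiles (tuples) to probabilities.
    u        : u[i][(a, theta)] = player i's utility for that pair of tuples.
    """
    # Marginal probability of i's own type, for conditioning P(theta_-i | theta_i).
    p_theta_i = sum(p for th, p in P.items() if th[i] == theta_i)
    assert p_theta_i > 0, "theta_i must have positive prior probability"
    total = 0.0
    for theta in product(*Theta):                  # all type profiles
        if theta[i] != theta_i or P.get(theta, 0.0) == 0.0:
            continue
        cond = P[theta] / p_theta_i                # P(theta_-i | theta_i)
        for a in product(*A):                      # all action profiles
            prob_a = 1.0
            for j, aj in enumerate(a):             # prod_j sigma_j(a_j | theta_j)
                prob_a *= sigma[j][theta[j]].get(aj, 0.0)
            total += cond * prob_a * u[i][(a, theta)]
    return total
```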
2.2.1 Complete-information interpretations
Harsanyi [10] showed that any Bayesian game can be interpreted as a complete-information game, such that Bayes-Nash equilibria of the Bayesian game correspond to Nash equilibria of the complete-information game. There are two complete-information interpretations of Bayesian games.

A Bayesian game can be converted to its induced normal form, which is a complete-information game with the same set of $n$ players, in which each player's set of actions is her set of pure strategies in the Bayesian game. Each player's utility under an action profile is defined to be equal to the player's expected utility under the corresponding pure strategy profile in the Bayesian game.

Alternatively, a Bayesian game can be transformed to its agent form, where each type of each player in the Bayesian game is turned into one player in a complete-information game. Formally, given a Bayesian game $(N, \{A_i\}_{i \in N}, \Theta, P, \{u_i\}_{i \in N})$, we define its agent form as the complete-information game $(\tilde N, \{\tilde A_{j,\theta_j}\}_{(j,\theta_j) \in \tilde N}, \{\tilde u_{j,\theta_j}\}_{(j,\theta_j) \in \tilde N})$, where $\tilde N$ consists of $\sum_{j \in N} |\Theta_j|$ players, one for every type of every player of the Bayesian game. We index the players by the tuple $(j, \theta_j)$ where $j \in N$ and $\theta_j \in \Theta_j$. For each player $(j, \theta_j) \in \tilde N$ of the agent form game, her action set $\tilde A_{(j,\theta_j)}$ is $A_j$, the action set of $j$ in the Bayesian game. The set of action profiles is then $\tilde A = \prod_{j,\theta_j} \tilde A_{(j,\theta_j)}$. The utility function of player $(j, \theta_j)$ is $\tilde u_{j,\theta_j} : \tilde A \to \mathbb{R}$. For all $\tilde a \in \tilde A$, $\tilde u_{j,\theta_j}(\tilde a)$ is equal to the expected utility of player $j$ of the Bayesian game given type $\theta_j$, under the pure strategy profile $s^{\tilde a}$, where for all $i$ and all $\theta_i$, $s_i^{\tilde a}(\theta_i) = \tilde a_{(i,\theta_i)}$. Observe that there is a one-to-one correspondence between action profiles in the agent form and pure strategies of the Bayesian game. A similar correspondence exists for mixed strategy profiles: each mixed strategy profile $\sigma$ of the Bayesian game corresponds to a mixed strategy $\tilde\sigma$ of the agent form, with $\tilde\sigma_{(i,\theta_i)}(a_i) = \sigma_i(a_i|\theta_i)$ for all $i, \theta_i, a_i$. It is straightforward to verify that $\tilde u_{i,\theta_i}(\tilde\sigma) = u_i(\sigma|\theta_i)$ for all $i, \theta_i$. This implies a correspondence between Bayes-Nash equilibria of a Bayesian game and Nash equilibria of its agent form.

Proposition 2. $\sigma$ is a Bayes-Nash equilibrium of a Bayesian game if and only if $\tilde\sigma$ is a Nash equilibrium of its agent form.
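The agent-form correspondence is mechanical, which the following hypothetical helpers (names ours) illustrate: agent-form players are (player, type) pairs, and a Bayesian-game mixed strategy $\sigma$ maps to the agent-form strategy $\tilde\sigma$ via $\tilde\sigma_{(i,\theta_i)}(a_i) = \sigma_i(a_i|\theta_i)$.

```python
def agent_form_players(N, Theta):
    # One agent-form player per (player, type) pair.
    return [(i, th) for i in range(N) for th in Theta[i]]

def to_agent_form_strategy(sigma):
    # sigma[i][theta_i] is a dict a_i -> probability; the agent-form player
    # (i, theta_i) simply inherits that distribution unchanged.
    return {(i, th): dict(dist)
            for i, by_type in enumerate(sigma)
            for th, dist in by_type.items()}
```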
3 Bayesian Action-Graph Games
In this section we introduce Bayesian Action-Graph Games (BAGGs), a compact representation of Bayesian games. First consider representing the type distributions. Specifically, the type distribution $P$ is specified by a Bayesian network (BN) containing at least $n$ random variables corresponding to the $n$ players' types $\theta_1, \dots, \theta_n$. For example, when the types are independently distributed, then $P$ can be specified by the simple BN with $n$ variables $\theta_1, \dots, \theta_n$ and no edges.

Now consider representing the utility functions. Our approach is to adapt concepts from the AGG representation [1, 13] to the Bayesian game setting. At a high level, a BAGG is a Bayesian game on an action graph, a directed graph on a set of action nodes $\mathcal{A}$. To play the game, each player $i$, given her type $\theta_i$, simultaneously chooses an action node from her type-action set $A_{i,\theta_i} \subseteq \mathcal{A}$. Each action node thus corresponds to an action choice that is available to one or more of the players. Once the players have made their choices, an action count is tallied for each action node $\alpha \in \mathcal{A}$, which is the number of agents that have chosen $\alpha$. A player's utility depends only on the action node she chose and the action counts on the neighbors of the chosen node.

We now turn to a formal description of BAGG's utility function representation. Central to our model is the action graph. An action graph $G = (\mathcal{A}, E)$ is a directed graph where $\mathcal{A}$ is the set of action nodes, and $E$ is a set of directed edges, with self edges allowed. We say $\alpha'$ is a neighbor of $\alpha$ if there is an edge from $\alpha'$ to $\alpha$, i.e., if $(\alpha', \alpha) \in E$. Let the neighborhood of $\alpha$, denoted $\nu(\alpha)$, be the set of neighbors of $\alpha$.

For each player $i$ and each instantiation of her type $\theta_i \in \Theta_i$, her type-action set $A_{i,\theta_i} \subseteq \mathcal{A}$ is the set of possible action choices of $i$ given $\theta_i$. These subsets are unrestricted: different type-action sets may (partially or completely) overlap. Define player $i$'s total action set to be $A_i^{\cup} = \bigcup_{\theta_i \in \Theta_i} A_{i,\theta_i}$. We denote by $A = \prod_i A_i^{\cup}$ the set of action profiles, and by $a \in A$ an action profile. Observe that the action profile $a$ provides sufficient information about the type profile to be able to determine the outcome of the game; there is no need to additionally encode the realized type distribution. We note that for different types $\theta_i, \theta_i' \in \Theta_i$, $A_{i,\theta_i}$ and $A_{i,\theta_i'}$ may have different sizes; i.e., $i$ may have different numbers of available action choices depending on her realized type.

A configuration $c$ is a vector of $|\mathcal{A}|$ non-negative integers, specifying for each action node the number of players choosing that action. Let $c(\alpha)$ be the element of $c$ corresponding to the action $\alpha$. Let $\mathcal{C} : A \mapsto C$ be the function that maps from an action profile $a$ to the corresponding configuration $c$. Formally, if $c = \mathcal{C}(a)$ then $c(\alpha) = |\{i \in N : a_i = \alpha\}|$ for all $\alpha \in \mathcal{A}$. Define $C = \{c : \exists a \in A \text{ such that } c = \mathcal{C}(a)\}$. In other words, $C$ is the set of all possible configurations. We can also define a configuration over a subset of nodes. In particular, we will be interested in configurations over a node's neighborhood. Given a configuration $c \in C$ and a node $\alpha \in \mathcal{A}$, let the configuration over the neighborhood of $\alpha$, denoted $c^{(\alpha)}$, be the restriction of $c$ to $\nu(\alpha)$, i.e., $c^{(\alpha)} = (c(\alpha'))_{\alpha' \in \nu(\alpha)}$. Similarly, let $C^{(\alpha)}$ denote the set of configurations over $\nu(\alpha)$ in which at least one player plays $\alpha$. Let $\mathcal{C}^{(\alpha)} : A \mapsto C^{(\alpha)}$ be the function which maps from an action profile to the corresponding configuration over $\nu(\alpha)$.
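As a sanity check on this notation, here is a small sketch (ours) of the map $\mathcal{C}$ from an action profile to a configuration, and of the restriction $c^{(\alpha)}$ to a neighborhood $\nu(\alpha)$.

```python
from collections import Counter

def configuration(action_profile, action_nodes):
    # c(alpha) = number of players whose chosen node is alpha.
    counts = Counter(action_profile)
    return {alpha: counts.get(alpha, 0) for alpha in action_nodes}

def neighborhood_config(c, neighbors):
    # c^(alpha): restriction of c to the neighbors nu(alpha), as an ordered tuple.
    return tuple(c[b] for b in neighbors)
```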
Definition 3. A Bayesian action-graph game (BAGG) is a tuple $(N, \Theta, P, \{A_{i,\theta_i}\}_{i \in N, \theta_i \in \Theta_i}, G, \{u^\alpha\}_{\alpha \in \mathcal{A}})$ where $N$ is the set of agents; $\Theta = \prod_i \Theta_i$ is the set of type profiles; $P$ is the type distribution, represented as a Bayesian network; $A_{i,\theta_i} \subseteq \mathcal{A}$ is the type-action set of $i$ given $\theta_i$; $G = (\mathcal{A}, E)$ is the action graph; and for each $\alpha \in \mathcal{A}$, the utility function is $u^\alpha : C^{(\alpha)} \to \mathbb{R}$.

Intuitively, this representation captures two types of structure in utility functions: firstly, shared actions capture the game's anonymity structure: if two action choices from different type-action sets share an action node $\alpha$, it means that these two actions are interchangeable as far as the other players' utilities are concerned. In other words, their utilities may depend on the number of players that chose the action node $\alpha$, but not the identities of those players. Secondly, the (lack of) edges between nodes in the action graph expresses action- and type-specific independencies of utilities of the game: depending on player $i$'s chosen action node (which also encodes information about her type), her utility depends on configurations over different sets of nodes.
Lemma 4. An arbitrary Bayesian game given in Bayesian normal form can be encoded as a BAGG
storing the same number of utility values.
Proof. Provided in the supplementary material.
Bayesian games with symmetric utility functions exhibit anonymity structure, which can be expressed in BAGGs by sharing action nodes. Specifically, we label each $\Theta_i$ as $\{1, \dots, T\}$, so that each $t \in \{1, \dots, T\}$ corresponds to a class of equivalent types. Then for each $t \in \{1, \dots, T\}$, we have $A_{i,t} = A_{j,t}$ for all $i, j \in N$, i.e. type-action sets for equivalent types are identical.
3.1 BAGGs with function nodes
In this section we extend the basic BAGG representation by introducing function nodes to the action graph. The concept of function nodes was first introduced in the (complete-information) AGG setting [13]. Function nodes allow us to exploit a much wider variety of utility structures in BAGGs.

In this extended representation, the action graph $G$'s vertices consist of both the set of action nodes $\mathcal{A}$ and the set of function nodes $\mathcal{F}$. We require that no function node $p \in \mathcal{F}$ can be in any player's action set. Each function node $p \in \mathcal{F}$ is associated with a function $f^p : C^{(p)} \to \mathbb{R}$. We extend $c$ by defining $c(p)$ to be the result of applying $f^p$ to the configuration over $p$'s neighbors, $f^p(c^{(p)})$. Intuitively, $c(p)$ can be used to describe intermediate parameters that players' utilities depend on. To ensure that the BAGG is meaningful, the graph restricted to nodes in $\mathcal{F}$ is required to be a directed acyclic graph. As before, for each action node $\alpha$ we define a utility function $u^\alpha : C^{(\alpha)} \to \mathbb{R}$.

Of particular computational interest is the subclass of contribution-independent function nodes (also introduced by [13]). A function node $p$ in a BAGG is contribution-independent if $\nu(p) \subseteq \mathcal{A}$, there exists a commutative and associative operator $*$, and for each $\alpha \in \nu(p)$ an integer $w_\alpha$, such that given an action profile $a = (a_1, \dots, a_n)$, $c(p) = *_{i \in N : a_i \in \nu(p)}\, w_{a_i}$. A BAGG is contribution-independent if all its function nodes are contribution-independent. Intuitively, if function node $p$ is contribution-independent, each player's strategy affects $c(p)$ independently.

A very useful kind of contribution-independent function node is the counting function node, which sets $*$ to the summation operator $+$ and the weights to 1. Such a function node $p$ simply counts the number of players that chose any action in $\nu(p)$.
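A counting function node is the simplest contribution-independent case: the operator is $+$ and every weight is 1. A one-line sketch (ours) of its evaluation:

```python
def counting_node_value(action_profile, scope):
    # c(p) for a counting node p with neighborhood `scope`: the number of
    # players whose chosen action node lies in scope.
    return sum(1 for a in action_profile if a in scope)
```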
Let us consider the size of a BAGG representation. The representation size of the Bayesian network for $P$ is exponential only in the in-degree of the BN. The utility functions store $\sum_\alpha |C^{(\alpha)}|$ values. As in similar analysis for AGGs [15], estimations of this size generally depend on what types of function nodes are included. We state only the following (relatively straightforward) result since in this paper we are mostly concerned with BAGGs with counting function nodes.

Theorem 5. Consider BAGGs whose only function nodes, if any, are counting function nodes. If the in-degrees of the action nodes as well as the in-degrees of the Bayesian networks for $P$ are bounded by a constant, then the sizes of the BAGGs are bounded by a polynomial in $n$, $|\mathcal{A}|$, $|\mathcal{F}|$, $\sum_i |\Theta_i|$ and the sizes of domains of variables in the BN.

This theorem shows a nice property of counting function nodes: representation size does not grow exponentially in the in-degrees of these counting function nodes. The next example illustrates the usefulness of counting function nodes, including for expressing conditional utility independence.
Example 6 (Coffee Shop game). Consider a symmetric Bayesian game involving $n$ players; each player plans to open a new coffee shop in a downtown area, but has to decide on the location. The downtown area is represented by a $r \times k$ grid. Each player can choose to open a shop located within any of the $B \equiv rk$ blocks or decide not to enter the market. Each player has $T$ types, representing her private information about her cost of opening a coffee shop. Players' types are independently distributed. Conditioned on player $i$ choosing some location, her utility depends on: (a) her own type; (b) the number of players that chose the same block; (c) the number of players that chose any of the surrounding blocks; and (d) the number of players that chose any other location.

The Bayesian normal form representation of this game has size $n[T(B+1)]^n$. The game can be expressed as a BAGG as follows. Since the game is symmetric, we label the types as $\{1, \dots, T\}$. $\mathcal{A}$ contains one action $O$ corresponding to not entering and $TB$ other action nodes, with each location corresponding to a set of $T$ action nodes, each representing the choice of that location by a player with a different type. For each $t \in \{1, \dots, T\}$, the type-action sets $A_{i,t} = A_{j,t}$ for all $i, j \in N$ and each consists of the action $O$ and $B$ actions corresponding to locations for type $t$. For each location $(x, y)$ we create three function nodes: $p_{xy}$ representing the number of players choosing this location, $p'_{xy}$ representing the number of players choosing any surrounding blocks, and $p''_{xy}$ representing the number of players choosing any other block. Each of these function nodes is a counting function node, whose neighbors are action nodes corresponding to the appropriate locations (for all types). Each action node for location $(x, y)$ has three neighbors, $p_{xy}$, $p'_{xy}$, and $p''_{xy}$. Since the BAGG action graph has maximum in-degree 3, by Theorem 5 the representation size is polynomial in $n$, $B$ and $T$.
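To illustrate the bounded in-degree claim, the sketch below (ours; names hypothetical) builds the Coffee Shop action graph for an $r \times k$ grid with $T$ types: three counting function nodes per location, and each typed location action has exactly those three as its neighbors.

```python
def coffee_shop_graph(r, k, T):
    """Build the Coffee Shop BAGG action graph as neighbor lists.

    Returns (action_nodes, func_scope, action_neighbors) where
    func_scope[p] is the set of action nodes a counting node p counts over,
    and action_neighbors[alpha] lists the 3 function nodes alpha depends on.
    """
    locs = [(x, y) for x in range(r) for y in range(k)]
    action_nodes = ['O'] + [('loc', x, y, t) for (x, y) in locs
                            for t in range(T)]
    func_scope, action_neighbors = {}, {}
    for (x, y) in locs:
        here = {('loc', x, y, t) for t in range(T)}
        around = {('loc', u, v, t) for (u, v) in locs for t in range(T)
                  if (u, v) != (x, y) and abs(u - x) <= 1 and abs(v - y) <= 1}
        elsewhere = {('loc', u, v, t) for (u, v) in locs for t in range(T)
                     if abs(u - x) > 1 or abs(v - y) > 1}
        func_scope[('same', x, y)] = here         # same block
        func_scope[('near', x, y)] = around       # surrounding blocks
        func_scope[('far', x, y)] = elsewhere     # any other location
        for t in range(T):
            action_neighbors[('loc', x, y, t)] = [
                ('same', x, y), ('near', x, y), ('far', x, y)]
    return action_nodes, func_scope, action_neighbors
```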
4 Computing a Bayes-Nash Equilibrium
In this section we consider the problem of finding a sample Bayes-Nash equilibrium given a BAGG. Our overall approach is to interpret the Bayesian game as a complete-information game, and then to apply existing algorithms for finding Nash equilibria of complete-information games. We consider two state-of-the-art Nash equilibrium algorithms, van der Laan et al.'s simplicial subdivision [24] and Govindan and Wilson's global Newton method [9]. Both run in exponential time in the worst case, and indeed recent complexity-theoretic results [3, 6, 4] imply that a polynomial-time algorithm for Nash equilibrium is unlikely to exist.¹ Nevertheless, we show that we can achieve exponential speedups in these algorithms by exploiting the structure of BAGGs.

Recall from Section 2.2.1 that a Bayesian game can be transformed into its induced normal form or its agent form. In the induced normal form, each player $i$ has $|A_i|^{|\Theta_i|}$ actions (corresponding to her pure strategies of the Bayesian game). Solving such a game would be infeasible for large $|\Theta_i|$; just to represent a Nash equilibrium requires space exponential in $|\Theta_i|$.

A more promising approach is to consider the agent form. Note that we can straightforwardly adapt the agent-form transformation described in Section 2.2.1 to the setting of BAGGs: now the action set of player $(i, \theta_i)$ of the agent form corresponds to the type-action set $A_{i,\theta_i}$ of the BAGG. The resulting complete-information game has $\sum_{i \in N} |\Theta_i|$ players and $|A_{i,\theta_i}|$ actions for each player $(i, \theta_i)$; a Nash equilibrium can be represented using just $\sum_i \sum_{\theta_i} |A_{i,\theta_i}|$ numbers. However, the normal form representation of the agent form has size $\sum_{j \in N} |\Theta_j| \prod_{i,\theta_i} |A_{i,\theta_i}|$, which grows exponentially in $n$ and $|\Theta_i|$. Applying the Nash equilibrium algorithms to this normal form would be infeasible in terms of time and space. Fortunately, we do not have to explicitly represent the agent form as a normal form game. Instead, we treat a BAGG as a compact representation of its agent form, and carry out any required computation on the agent form by operating on the BAGG. A key computational task required by both Nash equilibrium algorithms in their inner loops is the computation of expected utility of the agent form. Recall from Section 2.2.1 that for all $(i, \theta_i)$ the expected utility $\tilde u_{i,\theta_i}(\tilde\sigma)$ of the agent form is equal to the expected utility $u_i(\sigma|\theta_i)$ of the Bayesian game. Thus in the remainder of this section we focus on the problem of computing expected utility in BAGGs.
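The division of labor in this section can be summarized as a thin oracle interface: a complete-information Nash solver only ever queries agent-form action sets and expected utilities, and both queries are answered from the BAGG. The sketch below is ours and assumes a `bagg` object exposing `N`, `Theta`, `type_action_set`, and the expected-utility routine of Section 4.1.

```python
class AgentFormOracle:
    """Expose a BAGG to a complete-information Nash solver without ever
    materializing the agent-form normal form."""

    def __init__(self, bagg):
        self.bagg = bagg
        # Agent-form players are (player, type) pairs; their action sets are
        # the type-action sets of the BAGG.
        self.players = [(i, th) for i in bagg.N for th in bagg.Theta[i]]

    def action_set(self, agent):
        i, th = agent
        return self.bagg.type_action_set(i, th)

    def utility(self, agent, sigma_tilde, a):
        # u~_{(i,theta_i)} under the deviation "play a given theta_i": by the
        # agent-form correspondence this equals u_i(sigma^{theta_i -> a} | theta_i),
        # which the BAGG expected-utility routine of Section 4.1 computes.
        i, th = agent
        return self.bagg.expected_utility(i, th, a, sigma_tilde)
```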
4.1 Computing Expected Utility in BAGGs
Recall that $\sigma^{\theta_i \to a_i}$ is the mixed strategy profile that is identical to $\sigma$ except that $i$ plays $a_i$ given $\theta_i$. The main quantity we are interested in is $u_i(\sigma^{\theta_i \to a_i}|\theta_i)$, player $i$'s expected utility given $\theta_i$ under the strategy profile $\sigma^{\theta_i \to a_i}$. Note that the expected utility $u_i(\sigma|\theta_i)$ can then be computed as the sum $u_i(\sigma|\theta_i) = \sum_{a_i} u_i(\sigma^{\theta_i \to a_i}|\theta_i)\, \sigma_i(a_i|\theta_i)$.

¹There has been some research on efficient Nash-equilibrium-finding algorithms for subclasses of games, such as Daskalakis and Papadimitriou's [5] PTAS for anonymous games with fixed numbers of actions. One future direction would be to adapt these algorithms to subclasses of Bayesian games.
One approach is to directly apply Equation (1), which has $(|\Theta_{-i}| \cdot |A|)$ terms in the summation. For games represented in Bayesian normal form, this algorithm runs in time polynomial in the representation size. Since BAGGs can be exponentially more compact than their equivalent Bayesian normal form representations, this algorithm runs in exponential time for BAGGs.

In this section we present a more efficient algorithm that exploits BAGG structure. We first formulate the expected utility problem as a Bayesian network inference problem. Given a BAGG and a mixed strategy profile $\sigma^{\theta_i \to a_i}$, we construct the induced Bayesian network (IBN) as follows.

We start with the BN representing the type distribution $P$, which includes (at least) the random variables $\theta_1, \dots, \theta_n$. The conditional probability distributions (CPDs) for the network are unchanged. We add the following random variables: one strategy variable $D_j$ for each player $j$; one action count variable for each action node $\alpha \in \mathcal{A}$, representing its action count, denoted $c(\alpha)$; one function variable for each function node $p \in \mathcal{F}$, representing its configuration value, denoted $c(p)$; and one utility variable $U^\alpha$ for each action node $\alpha$. We then add the following edges: an edge from $\theta_j$ to $D_j$ for each player $j$; for each player $j$ and each $\alpha \in A_j^{\cup}$, an edge from $D_j$ to $c(\alpha)$; for each function variable $c(p)$, all incoming edges corresponding to those in the action graph $G$; and for each $\alpha \in \mathcal{A}$, for each action or function node $m \in \nu(\alpha)$ in $G$, an edge from $c(m)$ to $U^\alpha$ in the IBN.

The CPDs of the newly added random variables are defined as follows. Each strategy variable $D_j$ has domain $A_j^{\cup}$, and given its parent $\theta_j$, its CPD chooses an action from $A_j^{\cup}$ according to the mixed strategy $\sigma_j^{\theta_i \to a_i}$. In other words, if $j \neq i$ then $\Pr(D_j = a_j|\theta_j)$ is equal to $\sigma_j(a_j|\theta_j)$ for all $a_j \in A_{j,\theta_j}$ and 0 for all $a_j \in A_j^{\cup} \setminus A_{j,\theta_j}$; and if $j = i$ we have $\Pr(D_j = a_i|\theta_j) = 1$. For each action node $\alpha$, the parents of its action-count variable $c(\alpha)$ are strategy variables that have $\alpha$ in their domains. The CPD is a deterministic function that returns the number of its parents that take value $\alpha$; i.e., it calculates the action count of $\alpha$. For each function variable $c(p)$, its CPD is the deterministic function $f^p$. The CPD for each utility variable $U^\alpha$ is a deterministic function specified by $u^\alpha$.

It is straightforward to verify that the IBN is a directed acyclic graph (DAG) and thus represents a valid joint distribution. Furthermore, the expected utility $u_i(\sigma^{\theta_i \to a_i}|\theta_i)$ is exactly the expected value of the variable $U^{a_i}$ conditioned on the instantiated type $\theta_i$.

Lemma 7. For all $i \in N$, all $\theta_i \in \Theta_i$ and all $a_i \in A_{i,\theta_i}$, we have $u_i(\sigma^{\theta_i \to a_i}|\theta_i) = E[U^{a_i}|\theta_i]$.
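The IBN is purely structural, so its construction is easy to sketch. The code below (ours; edge lists only, CPDs omitted) mirrors the recipe above and assumes the BAGG exposes its player set, total action sets, nodes, and action-graph neighborhoods.

```python
def induced_bn_edges(bagg, type_bn_edges):
    """Return the edge list of the IBN for a BAGG.

    type_bn_edges : edges among the type variables ('theta', j), copied as-is.
    Assumes bagg exposes: N (iterable of players), total_action_set(j),
    action_nodes, function_nodes, neighbors(node).
    """
    edges = list(type_bn_edges)
    for j in bagg.N:
        edges.append((('theta', j), ('D', j)))            # type -> strategy
        for alpha in bagg.total_action_set(j):
            edges.append((('D', j), ('c', alpha)))        # strategy -> count
    for p in bagg.function_nodes:
        for m in bagg.neighbors(p):
            edges.append((('c', m), ('c', p)))            # action-graph edges
    for alpha in bagg.action_nodes:
        for m in bagg.neighbors(alpha):
            edges.append((('c', m), ('U', alpha)))        # neighborhood -> utility
    return edges
```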
Standard BN inference methods could be used to compute $E[U^{a_i}|\theta_i]$. However, such standard algorithms do not take advantage of structure that is inherent in BAGGs. In particular, recall that in the induced network, each action count variable $c(\alpha)$'s parents are all strategy variables that have $\alpha$ in their domains, implying large in-degrees for action count variables. Applying (e.g.) the clique-tree algorithm would yield large clique sizes, which is problematic because running time scales exponentially in the largest clique size of the clique tree. However, the CPDs of these action count variables are structured counting functions. Such structure is an instance of causal independence in BNs [11]. It also corresponds to anonymity structure for complete-information game representations like symmetric games and AGGs [13]. We can exploit this structure to speed up computation of expected utility in BAGGs. Our approach is a specialization of Heckerman and Breese's method [11] for exploiting causal independence in BNs, which transforms the original BN by creating new nodes that represent intermediate results, and re-wiring some of the arcs, resulting in an equivalent BN with small in-degree. Given an action count variable $c(\alpha)$ with parents (say) $\{D_1, \dots, D_n\}$, for each $i \in \{1, \dots, n-1\}$ we create a node $M_{\alpha,i}$, representing the count induced by $D_1, \dots, D_i$. Then, instead of having $D_1, \dots, D_n$ as parents of $c(\alpha)$, its parents become $D_n$ and $M_{\alpha,n-1}$, and each $M_{\alpha,i}$'s parents are $D_i$ and $M_{\alpha,i-1}$. The resulting graph has in-degree at most 2 for $c(\alpha)$ and the $M_{\alpha,i}$'s. The CPDs of function variables corresponding to contribution-independent function nodes also exhibit causal independence, and thus we can use a similar transformation to reduce their in-degree to 2. We call the resulting Bayesian network the transformed Bayesian network (TBN) of the BAGG.
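The Heckerman-Breese rewiring above is what keeps the treewidth manageable. A sketch (ours) of the edge surgery for a single count variable $c(\alpha)$ with parents $D_1, \dots, D_n$ (assuming $n \ge 2$):

```python
def chain_decompose_count_node(alpha, parents):
    """Rewire c(alpha): M_{alpha,i} holds the count induced by D_1..D_i,
    so every node ends up with in-degree <= 2 (parents is [D_1, ..., D_n])."""
    n = len(parents)
    edges = [(parents[0], ('M', alpha, 1))]          # M_1 counts D_1 alone
    for i in range(2, n):                            # M_i <- D_i, M_{i-1}
        edges += [(parents[i - 1], ('M', alpha, i)),
                  (('M', alpha, i - 1), ('M', alpha, i))]
    edges += [(parents[n - 1], ('c', alpha)),        # c(alpha) <- D_n, M_{n-1}
              (('M', alpha, n - 1), ('c', alpha))]
    return edges
```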
It is straightforward to verify that the representation size of the TBN is polynomial in the size of the BAGG. We can then use standard inference algorithms to compute $E[U^\alpha|\theta_i]$ on the TBN. For classes of BNs with bounded treewidths, this can be computed in polynomial time. Since the graph structure (and thus the treewidth) of the TBN does not depend on the strategy profile and only depends on the BAGG, we have the following result.
[Figures 1-4: plots of CPU time in seconds. Figure 1: GW, varying players. Figure 2: GW, varying locations. Figure 3: GW, varying types. Figure 4: simplicial subdivision.]
Theorem 8. For BAGGs whose TBNs have bounded treewidths, expected utility can be computed in time polynomial in $n$, $|\mathcal{A}|$, $|\mathcal{F}|$ and $\sum_i |\Theta_i|$.

Bayesian games with independent type distributions are an important class of games and have many applications, such as independent-private-value auctions. When contribution-independent BAGGs have independent type distributions, expected utility can be efficiently computed.

Theorem 9. For contribution-independent BAGGs with independent type distributions, expected utility can be computed in time polynomial in the size of the BAGG.

Proof. Provided in the supplementary material.

Note that this result is stronger than that of Theorem 8, which only guarantees efficient computation when TBNs have constant treewidth.
5 Experiments
We have implemented our approach for computing a Bayes-Nash equilibrium given a BAGG by applying Nash equilibrium algorithms on the agent form of the BAGG. We adapted two algorithms, GAMBIT's [18] implementation of simplicial subdivision and GameTracer's [2] implementation of Govindan and Wilson's global Newton method, by replacing calls to expected utility computations of the complete-information game with corresponding expected utility computations of the BAGG.

We ran experiments that tested the performance of our approach (denoted by BAGG-AF) against two approaches that compute a Bayes-Nash equilibrium for arbitrary Bayesian games. The first (denoted INF) computes a Nash equilibrium on the induced normal form; the second (denoted NF-AF) computes a Nash equilibrium on the normal form representation of the agent form. Both were implemented using the original, normal-form-based implementations of simplicial subdivision and the global Newton method. We thus studied six concrete algorithms, two for each game representation. We tested these algorithms on instances of the Coffee Shop Bayesian game described in Example 6. We created games of different sizes by varying the number of players, the number of types per player and the number of locations. For each size we generated 10 game instances with random integer payoffs, and measured the running (CPU) times. Each run was cut off after 10 hours if it had not yet finished. All our experiments were performed using a computer cluster consisting of 55 machines with dual Intel Xeon 3.2GHz CPUs, 2MB cache and 2GB RAM, running Suse Linux 11.1.

We first tested the three approaches based on the Govindan-Wilson (GW) algorithm. Figure 1 shows running time results for Coffee Shop games with n players, 2 types per player on a 2 x 3 grid, with n varying from 3 to 7. Figure 2 shows running time results for Coffee Shop games with 3 players, 2 types per player on a 2 x x grid, with x varying from 3 to 10. Figure 3 shows results for Coffee Shop games with 3 players, T types per player on a 1 x 3 grid, with T varying from 2 to 8. The data points represent the median running time of 10 game instances, with the error bars indicating the maximum and minimum running times. All results show that our BAGG-based approach (BAGG-AF) significantly outperformed the two normal-form-based approaches (INF and NF-AF). Furthermore, as we increased the dimensions of the games the normal-form-based approaches quickly ran out of memory (hence the missing data points), whereas BAGG-AF did not.

We also did some preliminary experiments on BAGG-AF and NF-AF running the simplicial subdivision algorithm. Figure 4 shows running time results for Coffee Shop games with n players, 2 types per player on a 1 x 3 grid, with n varying from 3 to 6. Again, BAGG-AF significantly outperformed NF-AF, and NF-AF ran out of memory for game instances with more than 4 players.
References
[1] N. Bhat and K. Leyton-Brown. Computing Nash equilibria of action-graph games. In UAI, pages 35-42, 2004.
[2] B. Blum, C. Shelton, and D. Koller. Gametracer. http://dags.stanford.edu/Games/gametracer.html, 2002.
[3] X. Chen and X. Deng. Settling the complexity of 2-player Nash-equilibrium. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 261-272, 2006.
[4] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 71-78, 2006.
[5] C. Daskalakis and C. Papadimitriou. Computing equilibria in anonymous games. In FOCS: Proceedings of the Annual IEEE Symposium on Foundations of Computer Science, pages 83-93, 2007.
[6] P. W. Goldberg and C. H. Papadimitriou. Reducibility among equilibrium problems. In STOC: Proceedings of the Annual ACM Symposium on Theory of Computing, pages 61-70, 2006.
[7] G. Gottlob, G. Greco, and T. Mancini. Complexity of pure equilibria in Bayesian games. In IJCAI, pages 1294-1299, 2007.
[8] S. Govindan and R. Wilson. Structure theorems for game trees. Proceedings of the National Academy of Sciences, 99(13):9077-9080, 2002.
[9] S. Govindan and R. Wilson. A global Newton method to compute Nash equilibria. Journal of Economic Theory, 110:65-86, 2003.
[10] J.C. Harsanyi. Games with incomplete information played by "Bayesian" players, I-III. Part I: The basic model. Management Science, 14(3):159-182, 1967.
[11] David Heckerman and John S. Breese. Causal independence for probability assessment and inference using Bayesian networks. IEEE Transactions on Systems, Man and Cybernetics, 26(6):826-831, 1996.
[12] J.T. Howson Jr and R.W. Rosenthal. Bayesian equilibria of finite two-person games with incomplete information. Management Science, pages 313-315, 1974.
[13] A. X. Jiang and K. Leyton-Brown. A polynomial-time algorithm for Action-Graph Games. In AAAI, pages 679-684, 2006.
[14] A. X. Jiang, A. Pfeffer, and K. Leyton-Brown. Temporal Action-Graph Games: A new representation for dynamic games. In UAI, 2009.
[15] Albert Xin Jiang, Kevin Leyton-Brown, and Navin Bhat. Action-graph games. Games and Economic Behavior, 2010. In press.
[16] M.J. Kearns, M.L. Littman, and S.P. Singh. Graphical models for game theory. In UAI, pages 253-260, 2001.
[17] D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. In IJCAI, 2001.
[18] R. D. McKelvey, A. M. McLennan, and T. L. Turocy. Gambit: Software tools for game theory, 2006. http://econweb.tamu.edu/gambit.
[19] N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, Cambridge, UK, 2007.
[20] Frans A. Oliehoek, Matthijs T. J. Spaan, Jilles Dibangoye, and Christopher Amato. Heuristic search for identical payoff Bayesian games. In AAMAS: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1115-1122, May 2010.
[21] Daniel M. Reeves and Michael P. Wellman. Computing best-response strategies in infinite games of incomplete information. In UAI, pages 470-478, 2004.
[22] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, New York, 2009.
[23] S. Singh, V. Soni, and M. Wellman. Computing approximate Bayes-Nash equilibria in tree games of incomplete information. In EC: Proceedings of the ACM Conference on Electronic Commerce, pages 81-90. ACM, 2004.
[24] G. van der Laan, A.J.J. Talman, and L. van der Heyden. Simplicial variable dimension algorithms for solving the nonlinear complementarity problem on a product of unit simplices using a general labelling. Mathematics of Operations Research, 12(3):377-397, 1987.
[25] Yevgeniy Vorobeychik. Mechanism Design and Analysis Using Simulation-Based Game Models. PhD thesis, University of Michigan, 2008.
3,503 | 4,172 | Switching state space model for simultaneously estimating state transitions and nonstationary firing rates

Anonymous Author(s)
Affiliation
Address
email

Abstract
We propose an algorithm for simultaneously estimating state transitions among
neural states, the number of neural states, and nonstationary firing rates using a
switching state space model (SSSM). This algorithm enables us to detect state
transitions on the basis of not only the discontinuous changes of mean firing rates
but also discontinuous changes in temporal profiles of firing rates, e.g., temporal
correlation. We construct a variational Bayes algorithm for a non-Gaussian SSSM
whose non-Gaussian property is caused by binary spike events. Synthetic data
analysis reveals that our algorithm has the high performance for estimating state
transitions, the number of neural states, and nonstationary firing rates compared
to previous methods. We also analyze neural data that were recorded from the
medial temporal area. The statistically detected neural states probably coincide
with transient and sustained states that have been detected heuristically. Estimated
parameters suggest that our algorithm detects the state transition on the basis of
discontinuous changes in the temporal correlation of firing rates, transitions that previous methods cannot detect. This result suggests that our algorithm is
advantageous in real-data analysis.
1 Introduction
Elucidating neural encoding is one of the most important issues in neuroscience. Recent studies have
suggested that cortical neuron activities transit among neural states in response to applied sensory
stimuli[1-3]. Abeles et al. detected state transitions among neural states using a hidden Markov
model whose output distribution is multivariate Poisson distribution (multivariate-Poisson hidden
Markov model(mPHMM))[1]. Kemere et al. indicated the correspondence relationship between
the time of the state transitions and the time when input properties change[2]. They also suggested
that the number of neural states corresponds to the number of input properties. Assessing neural
states and their transitions thus plays a significant role in elucidating neural encoding. Firing rates
have state-dependent properties because mean and temporal correlations are significantly different
among all neural states[1]. We call the times of state transitions change points. Change points
are those times when the time-series data statistics change significantly and cause nonstationarity
in time-series data. In this study, stationarity means that time-series data have temporally uniform
statistical properties. By this definition, data that do not have stationarity have nonstationarity.
Previous studies have detected change points on the basis of discontinuous changes in mean firing rates using an mPHMM. In this model, firing rates in each neural state take a constant value.
However, in motor cortex, average firing rates and preferred directions actually change dynamically
in motor planning and execution[4]. This makes it necessary to estimate state-dependent, instantaneous firing rates. On the other hand, when place cells burst within their place field[5], the inter-burst
intervals correspond to the θ rhythm frequency. Medial temporal (MT) area neurons show oscillatory firing rates when the target speed is modulated in the manner of a sinusoidal function[6]. These
results indicate that change points also need to be detected when the temporal profiles of firing rates
change discontinuously.
One solution is to simultaneously estimate both change points and instantaneous firing rates. A
switching state space model (SSSM)[7] can model nonstationary time-series data that include change
points. An SSSM defines two or more system models, one of which is modeled to generate observation data through an observation model. It can model nonstationary time-series data while switching
system models at change points. Each system model estimates stationary state variables in the region
that it handles. Recent studies have been focusing on constructing algorithms for estimating firing
rates using single-trial data to consider trial-by-trial variations in neural activities [8]. However,
these previous methods assume firing rate stationarity within a trial. They cannot estimate nonstationary firing rates that include change points. An SSSM may be used to estimate nonstationary
firing rates using single-trial data.
We propose an algorithm for simultaneously estimating state transitions among neural states and nonstationary firing rates using an SSSM. We expect it to detect change points not only when mean firing rates change discontinuously but also when the temporal profiles of firing rates do. Our algorithm consists of a non-Gaussian SSSM, whose non-Gaussianity is caused by the binary spike events. The learning and estimation algorithms combine variational Bayes [9,10] and local variational methods [11,12]. Automatic relevance determination (ARD), induced by the variational Bayes method [13], enables us to estimate the number of neural states after pruning redundant ones. For simplicity, we focus on analyzing single-neuron data. Although many studies have discussed state transitions by analyzing multi-neuron data, some of them have suggested that single-neuron activities reflect state transitions in a recurrent neural network [14]. Note that we can easily extend our algorithm to multi-neuron analysis using the often-used assumption that change points are common among recorded neurons [1-3].
2 Definitions of the Probabilistic Model
2.1 Likelihood Function
Observation time T consists of K time bins of width Δ (ms), and each bin includes at most one spike (Δ ≪ 1). The spike timings are t = {t_1, ..., t_S}, where S is the total number of observed spikes. We define χ_k such that χ_k = +1 if the kth bin includes a spike and χ_k = −1 otherwise (k = 1, ..., K). The likelihood function is defined by the Bernoulli distribution

p(t | λ) = ∏_{k=1}^{K} (λ_k Δ)^{(1+χ_k)/2} (1 − λ_k Δ)^{(1−χ_k)/2},   (1)

where λ = {λ_1, ..., λ_K} and λ_k is the firing rate in the kth bin. The product of firing rate and bin width corresponds to the spike-occurrence probability, and λ_k Δ ∈ [0, 1) since Δ ≪ 1. The logit transformation exp(2x_k) = λ_k Δ / (1 − λ_k Δ) (x_k ∈ (−∞, ∞)) lets us handle the nonnegativity of firing rates [11]. Hereinafter, we call x = {x_1, ..., x_K} the "firing rates".

Since K is large because Δ ≪ 1, computational cost and memory consumption matter. We thus use coarse graining [15]: observation time T consists of M coarse bins of width r = CΔ (ms). A coarse bin may include many spikes, and the firing rate within each coarse bin is constant. The likelihood function obtained by applying the logit transformation and the coarse graining to eq. (1) is

p(t | x) = ∏_{m=1}^{M} exp( χ̄_m x_m − C log 2 cosh x_m ),   (2)

where χ̄_m = Σ_{u=1}^{C} χ_{(m−1)C+u}.
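To make eq. (2) concrete, here is a minimal numpy sketch (the function and variable names are ours, not from the paper) that evaluates the coarse-grained log-likelihood for a binned spike train:

    import numpy as np

    def coarse_loglik(spikes, x, C):
        """Log-likelihood of eq. (2): sum over coarse bins of
        chibar_m * x_m - C * log(2*cosh(x_m))."""
        # spikes: 0/1 array of length K = M*C (one fine bin per entry)
        # x: latent 'firing rates' (logits), one per coarse bin (length M)
        chi = 2.0 * spikes - 1.0                 # chi_k = +1 (spike) or -1
        chibar = chi.reshape(-1, C).sum(axis=1)  # chibar_m per coarse bin
        log2cosh = np.logaddexp(x, -x)           # log(2*cosh(x)), overflow-safe
        return float(np.sum(chibar * x - C * log2cosh))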
2.2 Switching State Space Model

[Figure 1: Graphical model representation of the SSSM, with per-label firing rates x^n, label variables z_m, variational parameters ξ, and the observed spike train.]

An SSSM consists of N system models; for each model, we define a prior distribution. We define label variables z_m^n such that z_m^n = 1 if the nth system model generates the observation in the mth bin and z_m^n = 0 otherwise (n = 1, ..., N, m = 1, ..., M). We call N the number of labels and the nth system model the nth label. The joint distribution is defined by

p(t, x, z | θ_0) = p(t | x, z) p(z | π, a) p(x | μ, β),   (3)

where x = {x^1, ..., x^N}, x^n = {x_1^n, ..., x_M^n}, z = {z_1^1, ..., z_M^1, ..., z_1^N, ..., z_M^N}, and θ_0 = {π, a, μ, β} are parameters. The likelihood function, including the label variables, is given by

p(t | x, z) = ∏_{n=1}^{N} ∏_{m=1}^{M} [exp( χ̄_m x_m^n − C log 2 cosh x_m^n )]^{z_m^n}.   (4)

We define the prior distributions of the label variables as

p(z_1 | π) = ∏_{n=1}^{N} (π^n)^{z_1^n} δ( Σ_{n=1}^{N} π^n − 1 ),   (5)

p(z_{m+1} | z_m, a) = ∏_{n=1}^{N} ∏_{k=1}^{N} (a^{nk})^{z_m^n z_{m+1}^k} δ( Σ_{k=1}^{N} a^{nk} − 1 ),   (6)

where π^n and a^{nk} are the probabilities that the nth label is selected at the initial time and that the nth label switches to the kth one, respectively. The prior distributions of the firing rates are Gaussian,

p(x) = ∏_{n=1}^{N} p(x^n | μ^n, β^n) = ∏_{n=1}^{N} √( |β^n Λ| / (2π)^M ) exp( −(β^n/2) (x^n − μ^n)^T Λ (x^n − μ^n) ),   (7)

where β^n and μ^n are the temporal correlation and the mean values of the nth-label firing rates (n = 1, ..., N). For simplicity, we introduced Λ, the structure of the temporal correlation, satisfying p(x^n | μ^n, β^n) ∝ ∏_m exp( −(β^n/2) ((x_m^n − μ_m^n) − (x_{m−1}^n − μ_{m−1}^n))^2 ). Figure 1 depicts a graphical model representation of the SSSM.
Ghahramani & Hinton (2000) did not introduce a priori knowledge about the label switching frequencies. However, in many cases the time scale of state transitions is probably slower than that of the temporal variation of firing rates. We therefore define prior distributions of π and a that introduce a priori knowledge about the label switching frequencies, using Dirichlet distributions:

p(π | α^n) = C(α^n) ∏_{n=1}^{N} (π^n)^{α^n − 1} δ( Σ_{n=1}^{N} π^n − 1 ),   (8)

p(a | α^{nk}) = ∏_{n=1}^{N} [ C(α^{nk}) ∏_{k=1}^{N} (a^{nk})^{α^{nk} − 1} δ( Σ_{k=1}^{N} a^{nk} − 1 ) ],   (9)

where C(α^n) = Γ( Σ_{n=1}^{N} α^n ) / ( Γ(α^1) ... Γ(α^N) ) and C(α^{nk}) = Γ( Σ_{k=1}^{N} α^{nk} ) / ( Γ(α^{n1}) ... Γ(α^{nN}) ) are the normalization constants of p(π | α^n) and p(a | α^{nk}), respectively. Γ(u) is the gamma function, defined by Γ(u) = ∫_0^∞ dt t^{u−1} exp(−t). The hyperparameters α^n and α^{nk} control the probability that the nth label is selected at the initial time and that the nth label switches to the kth one. We define the prior distributions of μ^n and β^n using non-informative priors: since we do not have a priori knowledge about the neural states, μ and β, which characterize each neural state, should be estimated from scratch.
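As a concrete illustration of the generative model in eqs. (3)-(9), the following numpy sketch draws a label sequence, per-label firing rates, and a spike train (all names and default values are ours, chosen only for illustration):

    import numpy as np

    def sample_sssm(N=3, M=200, C=40, beta=5.0, a_stay=100.0, a_switch=2.5, seed=0):
        """Sample one spike train from the SSSM prior: Dirichlet label priors
        (eqs. (5),(6),(8),(9)), random-walk firing rates (eq. (7)), and
        Bernoulli spikes through the logit link of Section 2.1."""
        rng = np.random.default_rng(seed)
        pi = rng.dirichlet(np.ones(N))
        A = np.stack([rng.dirichlet(np.where(np.arange(N) == n, a_stay, a_switch))
                      for n in range(N)])
        # Per-label firing rates: Gaussian random walk with precision beta.
        mu = rng.normal(0.0, 1.0, size=N)
        x = mu[:, None] + np.cumsum(rng.normal(0.0, beta ** -0.5, size=(N, M)), axis=1)
        # Label sequence from the Markov prior.
        z = np.empty(M, dtype=int)
        z[0] = rng.choice(N, p=pi)
        for m in range(1, M):
            z[m] = rng.choice(N, p=A[z[m - 1]])
        # Spikes: per fine bin, P(spike) = lambda*Delta = 1/(1 + exp(-2x)).
        p_spike = 1.0 / (1.0 + np.exp(-2.0 * np.repeat(x[z, np.arange(M)], C)))
        return rng.binomial(1, p_spike), z, x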
3 Estimation and Learning of the non-Gaussian SSSM
It is generally computationally difficult to calculate the marginal posterior distribution in an SSSM [7]. We thus use the variational Bayes method to calculate approximated posterior distributions q(w) and q(θ) that minimize the variational free energy

F[q] = ∫∫ dw dθ q(w) q(θ) log [ q(w) q(θ) / p(t, w, θ) ] = U[q] − S[q],   (10)

where w = {z, x} are hidden variables, θ = {π, a} are parameters, U[q] = −∫∫ dw dθ q(w) q(θ) log p(t, w, θ), and S[q] = −∫∫ dw dθ q(w) q(θ) log q(w) q(θ). We call q(w) and q(θ) test distributions. The variational free energy satisfies

log p(t) = −F[q] + KL( q(w) q(θ) ‖ p(w, θ | t) ),   (11)

where KL( q(w) q(θ) ‖ p(w, θ | t) ) is the Kullback-Leibler divergence between the test distributions and the posterior distribution p(w, θ | t), defined by KL( q(y) ‖ p(y | t) ) = ∫ dy q(y) log [ q(y) / p(y | t) ]. Since the marginal likelihood log p(t) is a constant, minimizing the variational free energy indirectly minimizes the Kullback-Leibler divergence. The variational Bayes method requires conjugacy between the likelihood function (eq. (4)) and the prior distribution (eq. (7)). However, eqs. (4) and (7) are not conjugate to each other because of the binary spike events. The local variational method enables us to construct a variational Bayes algorithm for this non-Gaussian SSSM.
3.1 Local Variational Method
The local variational method, proposed by Jaakkola & Jordan [11], approximately transforms a non-Gaussian distribution into a quadratic-form distribution by introducing variational parameters. Watanabe et al. have proven the effectiveness of this method in estimating stationary firing rates [12]. The exponential function in eq. (4) includes f(x_m^n) = log 2 cosh x_m^n, which is a concave function of y = (x_m^n)^2; the concavity can be confirmed by showing the negativity of the second-order derivative of f(x_m^n) with respect to (x_m^n)^2 for all x_m^n. Considering the tangent line of f(x_m^n) with respect to (x_m^n)^2 at (x_m^n)^2 = (ξ_m^n)^2, we get a lower bound for eq. (4):

p_ξ(t | x, z) = ∏_{n=1}^{N} ∏_{m=1}^{M} [exp( χ̄_m x_m^n − C (tanh ξ_m^n / (2 ξ_m^n)) ((x_m^n)^2 − (ξ_m^n)^2) − C log 2 cosh ξ_m^n )]^{z_m^n},   (12)

where ξ_m^n is a variational parameter. Equation (12) satisfies the inequality p_ξ(t | x, z) ≤ p(t | x, z). We use eq. (12) as the likelihood function instead of eq. (4); the conjugacy between eqs. (12) and (7) enables us to construct the variational Bayes algorithm. Using eq. (12), we find that the variational free energy

F_ξ[q] = ∫∫ dw dθ q(w) q(θ) log [ q(w) q(θ) / p_ξ(t, w, θ) ] = U_ξ[q] − S[q]   (13)

satisfies the inequality F_ξ[q] ≥ F[q], where U_ξ[q] = −∫∫ dw dθ q(w) q(θ) log p_ξ(t, w, θ). Since the inequality log p(t) ≥ −F[q] ≥ −F_ξ[q] holds, test distributions that minimize F_ξ[q] indirectly minimize F[q], which is analytically intractable. Using the EM algorithm to estimate the variational parameters improves the approximation accuracy of F_ξ[q] [16].
3.2 Variational Bayes Method
We assume test distributions that satisfy the factorizations q(w) = [ ∏_{n=1}^{N} q(x^n | μ^n, β^n) ] q(z) and q(θ) = q(π) q(a), where μ = {μ^1, ..., μ^N} and β = {β^1, ..., β^N}. Under the constraints ∫ dx q(x | μ, β) = 1, Σ_z q(z) = 1, ∫ dπ q(π) = 1, and ∫ da q(a) = 1, the test distributions of the hidden variables x^n and z that minimize eq. (13) are as follows:

q(x^n | μ^n, β^n) = √( |W^n| / (2π)^M ) exp( −(1/2) (x^n − μ̃^n)^T W^n (x^n − μ̃^n) ),   (14)

q(z) ∝ ∏_{n=1}^{N} exp(π̃^n)^{z_1^n} ∏_{n=1}^{N} ∏_{m=1}^{M} exp(b̃_m^n)^{z_m^n} ∏_{m=1}^{M−1} ∏_{n=1}^{N} ∏_{k=1}^{N} exp(ã^{nk})^{z_m^n z_{m+1}^k},   (15)

where W^n = C L^n + β^n Λ, μ̃^n = (W^n)^{−1} (w^n + β^n Λ μ^n), π̃^n = ⟨log π^n⟩, b̃_m^n = χ̄_m ⟨x_m^n⟩ − C (tanh ξ_m^n / (2 ξ_m^n)) ( ⟨(x_m^n)^2⟩ − (ξ_m^n)^2 ) − C log 2 cosh ξ_m^n, and ã^{nk} = ⟨log a^{nk}⟩. L^n is the diagonal matrix whose (m, m) component is ⟨z_m^n⟩ tanh ξ_m^n / ξ_m^n, and w^n is the vector whose mth component is ⟨z_m^n⟩ χ̄_m. ⟨·⟩ denotes the average obtained using a test distribution q(·). The computational cost of calculating the inverse of each W^n is O(M) because Λ is tridiagonal and L^n is diagonal.
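A dense-matrix sketch of the q(x^n) update of eq. (14) is given below (names are ours; a real implementation would exploit the tridiagonal structure of W^n to obtain the O(M) cost mentioned above):

    import numpy as np

    def qx_update(z_mean, chibar, xi, beta, mu, C):
        """VB-E update for one label n: posterior mean mu-tilde^n and the
        diagonal of (W^n)^{-1}, which is needed for <(x_m^n)^2>."""
        M = len(chibar)
        D = np.eye(M)[:-1] - np.eye(M, k=1)[:-1]   # first-difference operator
        Lam = D.T @ D                              # tridiagonal Lambda of eq. (7)
        L = np.diag(z_mean * np.tanh(xi) / xi)     # L^n of eq. (14)
        W = C * L + beta * Lam
        mu_tilde = np.linalg.solve(W, z_mean * chibar + beta * Lam @ mu)
        return mu_tilde, np.diag(np.linalg.inv(W))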
In the calculation of q(x^n), ⟨z_m^n⟩ controls the effective variance of the likelihood function. A higher ⟨z_m^n⟩ means the data are reliable for the nth label in the mth bin, and a lower ⟨z_m^n⟩ means the data are unreliable. Under the constraint Σ_{n=1}^{N} ⟨z_m^n⟩ = 1, all labels estimate their firing rates on the basis of a divide-and-conquer principle of data reliability. Using the equality (ξ_m^n)^2 = ⟨(x_m^n)^2⟩, which will be derived in the next section, we obtain b̃_m^n = χ̄_m ⟨x_m^n⟩ − C log 2 cosh( ⟨x_m^n⟩ √( 1 + (W^n)^{−1}_{(m,m)} / ⟨x_m^n⟩^2 ) ) in eq. (15). When the mth bin includes many (few) spikes, the nth label tends to be selected if it estimates the highest (lowest) firing rate among the labels. But the variance of the nth label, (W^n)^{−1}_{(m,m)}, penalizes that label's selection probability.
We can also obtain the test distributions of the parameters π and a:

q(π) = C(α̃^n) ∏_{n=1}^{N} (π^n)^{α̃^n − 1} δ( Σ_{n=1}^{N} π^n − 1 ),   (16)

q(a) = ∏_{n=1}^{N} [ C(α̃^{nk}) ∏_{k=1}^{N} (a^{nk})^{α̃^{nk} − 1} δ( Σ_{k=1}^{N} a^{nk} − 1 ) ],   (17)

where C(α̃^n) = Γ( Σ_{n=1}^{N} α̃^n ) / ( Γ(α̃^1) ... Γ(α̃^N) ) and C(α̃^{nk}) = Γ( Σ_{k=1}^{N} α̃^{nk} ) / ( Γ(α̃^{n1}) ... Γ(α̃^{nN}) ) are the normalization constants of q(π) and q(a), and α̃^n = ⟨z_1^n⟩ + α^n, α̃^{nk} = Σ_{m=1}^{M−1} ⟨z_m^n z_{m+1}^k⟩ + α^{nk}.
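In code, the updates of eqs. (16),(17) and the expected logs needed in eq. (15) reduce to Dirichlet parameter updates and digamma functions (a sketch with our own naming):

    import numpy as np
    from scipy.special import digamma

    def q_pi_a_update(z_mean, zz_mean, alpha_init, alpha_trans):
        """z_mean[m,n] ~ <z_m^n>; zz_mean[m,n,k] ~ <z_m^n z_{m+1}^k> (shape (M-1,N,N))."""
        at_pi = z_mean[0] + alpha_init                # alpha-tilde^n
        at_a = zz_mean.sum(axis=0) + alpha_trans      # alpha-tilde^{nk}
        pi_log = digamma(at_pi) - digamma(at_pi.sum())                    # <log pi^n>
        a_log = digamma(at_a) - digamma(at_a.sum(axis=1, keepdims=True))  # <log a^{nk}>
        return pi_log, a_log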
We can see that α^n in α̃^n controls the probability that the nth label is selected at the initial time, and α^{nk} in α̃^{nk} biases the probability of the transition from the nth label to the kth label. A forward-backward algorithm enables us to calculate the first- and second-order statistics of q(z). Since an SSSM involves many local solutions, we search for a global one using deterministic annealing, which is proven to be effective for estimation and learning in an SSSM [7].
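A scaled forward-backward recursion that returns exactly these first- and second-order statistics of q(z) can be sketched as follows (our own implementation outline, not the authors' code):

    import numpy as np

    def forward_backward(log_b, log_pi, log_a):
        """<z_m^n> and <z_m^n z_{m+1}^k> for the chain-structured q(z) of eq. (15).
        log_b[m,n] = b-tilde_m^n, log_pi[n] = pi-tilde^n, log_a[n,k] = a-tilde^{nk}."""
        M, N = log_b.shape
        b = np.exp(log_b - log_b.max(axis=1, keepdims=True))
        pi, A = np.exp(log_pi), np.exp(log_a)
        alf = np.zeros((M, N)); bet = np.ones((M, N)); c = np.zeros(M)
        alf[0] = pi * b[0]; c[0] = alf[0].sum(); alf[0] /= c[0]
        for m in range(1, M):
            alf[m] = (alf[m - 1] @ A) * b[m]
            c[m] = alf[m].sum(); alf[m] /= c[m]
        for m in range(M - 2, -1, -1):
            bet[m] = A @ (b[m + 1] * bet[m + 1]) / c[m + 1]
        z = alf * bet                               # <z_m^n>; rows sum to one
        zz = (alf[:-1, :, None] * A[None, :, :]
              * (b[1:] * bet[1:])[:, None, :] / c[1:, None, None])
        return z, zz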
3.3 EM Algorithm
The EM algorithm enables us to estimate the variational parameters ξ and the parameters μ and β. In the EM algorithm, the calculation of the Q function is computationally difficult because it requires averages over the true posterior distribution. We thus calculate the Q function using the test distributions instead of the true posterior distributions:

Q(ξ, μ, β | ξ^(t'), μ^(t'), β^(t')) = ∫ dx q(x | μ^(t'), β^(t')) q(z) q(π) q(a) log p_ξ(t, x, z, π, a | μ, β).   (18)

Since Q(ξ, μ, β | ξ^(t'), μ^(t'), β^(t')) = −U_ξ[q], maximizing the Q function with respect to ξ, μ, β is equivalent to minimizing the variational free energy (eq. (10)). The update rules

(ξ_m^n)^2 = ⟨(x_m^n)^2⟩,   μ_m^n = ⟨x_m^n⟩,   β^n = M / Tr[ Λ ( (W^n)^{−1} + (⟨x^n⟩ − μ^n)(⟨x^n⟩ − μ^n)^T ) ]   (19)

maximize the Q function. The following table summarizes our algorithm.

Summary of our algorithm:
  Set α^n and α^{nk}. t' ← 1. Initialize the parameters of the model.
  Perform the following VB and EM steps until F_ξ[q] converges:
    ξ^(t'), μ^(t'), β^(t') ← ξ, μ, β
    Variational Bayes algorithm: perform the VB-E and VB-M steps until F_ξ^(t')[q] converges.
      VB-E step: compute q(x | μ^(t'), β^(t')) and q(z) using eq. (14) and eq. (15).
      VB-M step: compute q(π) and q(a) using eq. (16) and eq. (17).
    EM algorithm: compute ξ, μ, β using eq. (19).
    t' ← t' + 1

4 Results

The estimated firing rate in the mth bin is defined by x̂_m = ⟨x_m^{n̂_m}⟩, where n̂_m = arg max_n ⟨z_m^n⟩. An estimated change point m̂ r = m̂ C Δ satisfies ⟨z_m̂^n⟩ > ⟨z_m̂^k⟩ (k ≠ n) and ⟨z_{m̂+1}^n⟩ < ⟨z_{m̂+1}^k⟩ (k ≠ n). The estimated number of labels N̂ is given by N̂ = N − (the number of pruned labels), where we assume that the nth label is pruned out if ⟨z_m^n⟩ < 10^{−5} for all m. We call our algorithm "the variational Bayes switching state space model" (VB-SSSM).

4.1 Synthetic data analysis and comparison with previous methods

We artificially generate spike trains from arbitrarily set firing rates with an inhomogeneous gamma process. Throughout this study, we set the spike irregularity κ to 2.4 when generating spike trains. We additionally confirmed that the following results are invariant if we generate spikes using an inhomogeneous Poisson or inverse Gaussian process.

In this section, we set the parameters to N = 5, T = 4000, Δ = 0.001, r = 0.04, α^n = 1, and α^{nk} = 100 (n = k) or 2.5 (n ≠ k). The hyperparameters α^{nk} represent the a priori knowledge that the time scale of transitions among labels is sufficiently slower than that of the firing-rate variations.
4.1.1 Accuracy of change-point detections

This section discusses the comparative results between the VB-SSSM and the mPHMM regarding the accuracy of change-point detections and the estimation of the number of labels.
[Figure 2: Comparative results of change-point detections for the VB-SSSM and the mPHMM. (a) and (c): Arbitrarily set firing rates for validating the accuracy of change-point detections when firing rates include discontinuous changes in mean value (a) or temporal correlation (c). (b) and (d): Comparative results corresponding to the firing rates in (a) and (c); the stronger the white color, the more dominant the label is in that bin.]

We used the EM algorithm to
estimate the label variables in the mPHMM [1-3]. Since the mPHMM is useful for analyzing multi-trial data, in the estimation of the mPHMM we used ten spike trains under the assumption that change points were common among them. The VB-SSSM, on the other hand, uses single-trial data.
Fig. 2(a) displays the arbitrarily set firing rates used to verify the change-point detection accuracy when mean firing rates change discontinuously. The firing rate at time t (ms) was set to λ_t = 0.0 (t ∈ [0, 1000) ∪ [2000, 3000)), λ_t = 110.0 (t ∈ [1000, 2000)), and λ_t = 60.0 (t ∈ [3000, 4000]). The upper graph in fig. 2(b) shows the label variables estimated with the VB-SSSM and the lower one those estimated with the mPHMM. In the VB-SSSM, ARD estimated the number of labels to be three after pruning redundant labels. Across ten-trial data analysis, the VB-SSSM estimated the number of labels to be three in nine out of ten spike trains. The estimated change points were 1000±0.0, 2000±0.0, and 2990±16.9 ms; the true change points were 1000, 2000, and 3000 ms.
Fig. 2(c) plots the arbitrarily set firing rates used to verify the change-point detection accuracy when the temporal correlation changes discontinuously. The firing rate at time t (ms) was set to λ_t = λ_{t−1} + 2.0 z_t (t ∈ [0, 2000)) and λ_t = λ_{t−1} + 20.0 z_t (t ∈ [2000, 4000]), where z_t is a standard normal random variable satisfying ⟨z_t⟩ = 0, ⟨z_t z_{t'}⟩ = δ_{tt'} (δ_{tt'} = 0 (t ≠ t'), 1 (t = t')). Fig. 2(d) shows the comparative results between the VB-SSSM and the mPHMM. ARD estimated the number of labels to be two after pruning redundant labels. Across ten-trial data analysis, our algorithm estimated the number of labels to be two in nine out of ten spike trains. The estimated change point was 1933±315.1 ms and the true change point was 2000 ms.
4.1.2 Accuracy of firing-rate estimation
This section discusses the accuracy of nonstationary firing-rate estimation. The comparative methods include kernel smoothing (KS), kernel bandwidth optimization (KBO) [17], adaptive kernel smoothing (KSA) [18], Bayesian adaptive regression splines (BARS) [19], and Bayesian binning (BB) [20]. We used a Gaussian kernel in KS, KBO, and KSA. The kernel widths σ in KS were set to σ = 30 ms (KS30), σ = 50 ms (KS50), and σ = 100 ms (KS100). In KSA, we used the bin widths estimated using KBO. Cunningham et al. have reviewed all of these compared methods [8].

The firing rate at time t (ms) was set to λ_t = 5.0 (t ∈ [0, 480) ∪ [3600, 4000]), λ_t = 90.0 exp(−11(t − 480)/4000) (t ∈ [480, 2400)), and λ_t = 80.0 exp(−0.5(t − 2400)/4000) (t ∈ [2400, 3600)), and we reset λ_t to 5.0 whenever λ_t < 5.0. We set these firing rates assuming an experiment in which transient and persistent inputs are applied to an observed neuron in series. Note that input information, such as timings, properties, and sequences, is entirely unknown to the algorithm.
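Spike trains from an inhomogeneous gamma process, as used throughout this section, can be generated by time rescaling; the following sketch (our own, with κ = 2.4 as above) draws spike times for a given rate profile:

    import numpy as np

    def gamma_spikes(rate, dt, kappa=2.4, seed=0):
        """Inhomogeneous gamma process via time rescaling: inter-spike intervals
        are Gamma(kappa, 1/kappa) (unit mean) in rescaled time Lambda(t)."""
        rng = np.random.default_rng(seed)
        big_lambda = np.cumsum(rate) * dt            # cumulative intensity
        spikes, s = [], rng.gamma(kappa, 1.0 / kappa)
        while s < big_lambda[-1]:
            spikes.append(np.searchsorted(big_lambda, s) * dt)  # invert Lambda
            s += rng.gamma(kappa, 1.0 / kappa)
        return np.array(spikes)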
[Figure 3: Results of firing-rate estimation. (a): Estimated firing rates; vertical bars above the abscissa are the spikes used for the estimates. (b): Averaged label variables ⟨z_m^n⟩. (c): Estimated firing rates using each label. (d): Mean absolute error ± standard deviation when applying our algorithm and the compared methods (KS30, KS50, KS100, KBO, KSA, BARS, BB) to the firing rates plotted in (a); * indicates p<0.01 and ** indicates p<0.005.]
Fig. 3(a) plots the estimated firing rates (red line). Fig. 3(b) plots the estimated label variables, and fig. 3(c) plots the estimated firing rates of all labels other than the pruned ones. ARD estimated the number of labels to be three after pruning redundant labels. Across the analysis of ten spike trains, the VB-SSSM estimated the number of labels to be three in eight out of ten spike trains. The change points were estimated at 420±82.8, 2385±20.7, and 3605±14.1 ms; the true change points were 480, 2400, and 3600 ms.

The mean absolute error (MAE) is defined by MAE = (1/K) Σ_{k=1}^{K} |λ_k − λ̂_k|, where λ_k and λ̂_k are the true and estimated firing rates in the kth bin. All the methods estimated the firing rates ten times. Fig. 3(d) shows the mean MAE values averaged across the ten trials together with the standard deviations. We investigated significant differences in firing-rate estimation among all the methods using the Wilcoxon signed rank test. Both the VB-SSSM and BB show high performance. Note that the VB-SSSM can estimate not only firing rates but also change points and the number of neural states.
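The error measure and the significance test used here are straightforward to reproduce (the numbers below are placeholders, not the paper's results):

    import numpy as np
    from scipy.stats import wilcoxon

    def mae(rate_true, rate_est):
        """MAE = (1/K) * sum_k |lambda_k - lambda-hat_k| over bins."""
        return float(np.mean(np.abs(np.asarray(rate_true) - np.asarray(rate_est))))

    # Paired comparison of two methods over ten estimation runs:
    errs_a = np.array([7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 6.7, 7.3, 7.1, 6.9])
    errs_b = np.array([8.0, 7.9, 8.3, 7.8, 8.1, 8.2, 7.7, 8.0, 8.1, 7.9])
    stat, p = wilcoxon(errs_a, errs_b)   # two-sided signed rank test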
4.2 Real Data Analysis
In area MT, neurons respond preferentially to the movement directions of visual inputs [21]. We analyzed neural data recorded from area MT of a rhesus monkey during random-dot presentations. These neural data are available from the Neural Signal Archive (http://www.neuralsignal.org), and the detailed experimental setup is described by Britten et al. [22]. The input onset corresponds to t = 0 (ms), and the end of the recording corresponds to t = 2000 (ms). This section discusses our analysis of the neural data included in nsa2004.1 j001 T2; these data were recorded from the same neuron of the same subject. Parameters were set as follows: T = 2000, Δ = 0.001, N = 5, r = 0.02, α^n = 1 (n = 1, ..., 5), α^{nk} = 100 (n = k) or 2.5 (n ≠ k).

Fig. 4 shows the analysis results when the random dots have 3.2% coherence. Fig. 4(a) plots the estimated firing rates (red line) and a Kolmogorov-Smirnov plot (K-S plot) (inset) [23]. Since the true firing rates for real data are entirely unknown, we evaluated the reliability of the estimated values from confidence intervals. The black and gray lines in the inset denote the K-S plot and the 95% confidence intervals. The K-S plot supports the reliability of the estimated firing rates since it fits within the 95% confidence intervals. Fig. 4(b) depicts the estimated label variables, and fig. 4(c) shows the estimated firing rates of all labels other than the pruned ones. The VB-SSSM estimated the number of labels to be two. We call the label appearing right after the input onset "the 1st neural state" and the one appearing after it "the 2nd neural state". The 1st and 2nd neural states in fig. 4 might correspond to the transient and sustained states [6] that have previously been detected heuristically, e.g., by assuming that the sustained state lasts for a constant time [24].

We analyzed all 105 spike trains recorded under presentations of random dots with 3.2%, 6.4%, 12.8%, and 99.9% coherence, excluding the neural data in which the total spike count was less than
[Figure 4: Estimated results when applying the VB-SSSM to area MT neural data. (a): Estimated firing rates; vertical bars above the abscissa are the spikes used for the estimates. The inset shows a Kolmogorov-Smirnov goodness-of-fit plot; the solid and gray lines correspond to the K-S plot and the 95% confidence intervals. (b): Averaged label variables using the test distribution. (c): Estimated firing rates using each label. (d) and (e): Estimated parameters in the 1st and the 2nd neural states across trials.]
20. The VB-SSSM estimated the number of labels to be two in 25 out of 30 spike trains (3.2%), 19 out of 30 spike trains (6.4%), 26 out of 30 spike trains (12.8%), and 16 out of 16 spike trains (99.9%). In summary, the number of labels was estimated to be two in 85 out of 101 spike trains.

Figs. 4(d) and (e) show the estimated parameters from the 19 spike trains whose estimated number of labels was two (6.4% coherence). The horizontal axis denotes the trials, arranged in ascending order. Figs. 4(d) and (e) correspond to the estimated temporal correlation β and the time average of μ, defined by ⟨μ^n⟩ = (1/T_n) Σ_{t=1}^{T_n} μ_t^n, where T_n denotes the sojourn time in the nth label or the total observation time T. The estimated temporal correlation differed significantly between the 1st and 2nd neural states (Wilcoxon signed rank test, p<0.00005). On the other hand, the estimated mean firing rates did not differ significantly between these neural states (Wilcoxon signed rank test, p>0.1). Our algorithm thus detected the change points on the basis of discontinuous changes in temporal correlations. We observed similar tendencies for all random-dot coherence conditions (data not shown). We also confirmed that the mPHMM could not detect these change points (data not shown), as could be anticipated from the results in fig. 2(d). These results suggest that our algorithm is effective in real data analysis.
5 Discussion
We proposed an algorithm for simultaneously estimating state transitions, the number of neural
states, and nonstationary firing rates using single-trial data.
There are several ways to extend our research to the analysis of multi-neuron data. The simplest one assumes that the times of state transitions are common among all recorded neurons [1-3]. Since this assumption can partially capture the effect of inter-neuron interactions, we can define prior distributions that are independent between neurons. Because there are then no loops in the statistical dependencies of the firing rates, the variational Bayes method can be applied directly.
One important topic for future study is the optimization of the coarse bin width r = CΔ. A bin width that is too wide obscures both the times of change points and the temporal profiles of nonstationary firing rates. A bin width that is too narrow, on the other hand, increases computational costs and worsens estimation accuracy. Watanabe et al. proposed an algorithm for estimating the optimal bin width by maximizing the marginal likelihood [15], which is probably applicable to our algorithm.
References

[1] Abeles, M. et al. (1995), PNAS, pp. 609-616.
[2] Kemere, C. et al. (2008), J. Neurophysiol. 100(7):2441-2452.
[3] Jones, L. M. et al. (2007), PNAS 104(47):18772-18777.
[4] Rickert, J. et al. (2009), J. Neurosci. 29(44):13870-13882.
[5] Harvey, C. D. et al. (2009), Nature 461(15):941-946.
[6] Lisberger et al. (1999), J. Neurosci. 19(6):2224-2246.
[7] Ghahramani, Z., and Hinton, G. E. (2000), Neural Comput. 12(4):831-864.
[8] Cunningham, J. P. et al. (2007), Neural Netw. 22(9):1235-1246.
[9] Attias, H. (1999), Proc. 15th Conf. on UAI.
[10] Beal, M. (2003), Ph.D. thesis, University College London.
[11] Jaakkola, T. S., and Jordan, M. I. (2000), Stat. and Comput. 10(1):25-37.
[12] Watanabe, K., and Okada, M. (2009), Lecture Notes in Computer Science 5506:655-662.
[13] Corduneanu, A., and Bishop, C. M. (2001), Artificial Intelligence and Statistics, pp. 27-34.
[14] Fujisawa, S. et al. (2005), Cerebral Cortex 16(5):639-654.
[15] Watanabe, K. et al. (2009), IEICE E92-D(7):1362-1368.
[16] Bishop, C. M. (2006), Pattern Recognition and Machine Learning, Springer.
[17] Shimazaki, H., and Shinomoto, S. (2007), Neural Coding Abstracts, pp. 120-123.
[18] Richmond, B. J. et al. (1990), J. Neurophysiol. 64(2):351-369.
[19] DiMatteo, I. et al. (2001), Biometrika 88(4):1055-1071.
[20] Endres, D. et al. (2008), Adv. in NIPS 20:393-400.
[21] Maunsell, J. H., and Van Essen, D. C. (1983), J. Neurophysiol. 49(5):1127-1147.
[22] Britten, K. H. et al. (1992), J. Neurosci. 12:4745-4765.
[23] Brown, E. N. et al. (2002), Neural Comput. 14(2):325-346.
[24] Bair, W., and Koch, C. (1996), Neural Comput. 8(6):1185-1202.
3,504 | 4,173 | Probabilistic latent variable models for distinguishing
between cause and effect
Oliver Stegle
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]

Joris M. Mooij
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]

Dominik Janzing
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]

Kun Zhang
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]

Bernhard Schölkopf
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]
Abstract
We propose a novel method for inferring whether X causes Y or vice versa from joint
observations of X and Y . The basic idea is to model the observed data using probabilistic
latent variable models, which incorporate the effects of unobserved noise. To this end, we
consider the hypothetical effect variable to be a function of the hypothetical cause variable
and an independent noise term (not necessarily additive). An important novel aspect of
our work is that we do not restrict the model class, but instead put general non-parametric
priors on this function and on the distribution of the cause. The causal direction can then
be inferred by using standard Bayesian model selection. We evaluate our approach on
synthetic data and real-world data and report encouraging results.
1 Introduction
The challenge of inferring whether X causes Y ("X → Y") or vice versa ("Y → X") from joint observations of the pair (X, Y) has recently attracted increasing interest [1, 2, 3, 4, 5, 6, 7, 8]. While the traditional causal discovery methods [9, 10] based on (conditional) independences between variables require at least three observed variables, some recent approaches can deal with pairs of variables by exploiting the complexity of the (conditional) probability distributions. On an intuitive level, the idea is that the factorization of the joint distribution P(cause, effect) into P(cause)P(effect | cause) typically yields models of lower total complexity than the factorization into P(effect)P(cause | effect). Although the notion of "complexity" is intuitively appealing, it is not obvious how it should be precisely defined.

If complexity is measured in terms of Kolmogorov complexity, this kind of reasoning would be in the spirit of the principle of "algorithmically independent conditionals" [11], which can also be embedded into a general theory of algorithmic-information-based causal discovery [12]. The following theorem is implicitly stated in the latter reference (see the remarks before (26) therein):
Theorem 1 Let P(X, Y) be a joint distribution with finite Kolmogorov complexity such that P(X) and P(Y | X) are algorithmically independent, i.e.,

I( P(X) : P(Y | X) ) =⁺ 0,   (1)

where =⁺ denotes equality up to additive constants. Then:

K( P(X) ) + K( P(Y | X) ) ≤⁺ K( P(Y) ) + K( P(X | Y) ).   (2)
The proof is given by observing that (1) implies that the shortest description of P(X, Y) is given by separate descriptions of P(X) and P(Y | X). It is important to note at this point that the total complexity of the causal model consists of both the complexity of the conditional distribution and of the marginal of the putative cause. However, since Kolmogorov complexity is uncomputable, this does not solve the causal discovery problem in practice. Therefore, other notions of complexity need to be considered.
The work of [4] measures complexity in terms of norms in a reproducing kernel Hilbert space, but due to the high computational costs it applies only to cases where one of the variables is binary. The methods [1, 2, 3, 5, 6] define classes of conditionals C and marginal distributions M, and prefer X → Y whenever P(X) ∈ M and P(Y | X) ∈ C but P(Y) ∉ M or P(X | Y) ∉ C. This can be interpreted as a (crude) notion of model complexity: all probability distributions inside the class are simple, and those outside the class are complex. However, this a priori restriction to a particular class of models poses serious practical limitations (even though in practice some of these methods "soften" the criteria by, for example, using the p-values of suitable hypothesis tests).
In the present work we propose to use a fully non-parametric, Bayesian approach instead. The key idea is to define appropriate priors on marginal distributions (of the cause) and on conditional distributions (of the effect given the cause) that both favor distributions of low complexity. To decide upon the most likely causal direction, we can compare the marginal likelihood (also called evidence) of the models corresponding to each of the hypotheses X → Y and Y → X. An important novel aspect of our work is that we explicitly treat the "noise" as a latent variable that summarizes the influence of all other unobserved causes of the effect. The additional key assumption here is the independence of the "causal mechanism" (the function mapping from the cause and noise to the effect) and the distribution of the cause, an idea that was exploited in a different way recently for the deterministic (noise-free) case [13]. The three main contributions of this work are:

- to show that causal discovery for the two-variable cause-effect problem can be done without restricting the class of possible causal mechanisms;
- to point out the importance of accounting for the complexity of the distribution of the cause, in addition to the complexity of the causal mechanism (as in equation (2));
- to show that a Bayesian approach can be used for causal discovery even in the case of two continuous variables, without the need for explicit independence tests.
The last aspect allows for a straightforward extension of the method to the multi-variable case, the
details of which are beyond the scope of this article.1 Apart from discussing the proposed method on
a theoretical level, we also evaluate our approach on both simulated and real-world data and report
good empirical results.
2 Theory

We start with a theoretical treatment of how to solve the basic causal discovery task (see Figure 1a).

[Footnote 1: For the special case of additive Gaussian noise, the method proposed in [1] would also seem to be a valid Bayesian approach to causal discovery with continuous variables. However, that approach is flawed, as it either completely ignores the distribution of the cause, or uses a simple Gaussian marginal distribution for the cause, which may not be realistic (from the paper it is not clear exactly what is proposed). But, as suggested by Theorem 1, and as illustrated by our empirical results, the complexity of the input distribution plays an important role here that cannot be neglected, especially in the two-variable case.]
[Figure 1: Observed variables are colored gray, and unobserved variables are white. (a) The basic causal discovery task: which of the two causal models, "X causes Y" or "Y causes X", gives the best explanation of the observed data D = {(x_i, y_i)}_{i=1}^N? (b) More detailed version of the graphical model for "X causes Y": x_i ~ p(x | θ_X), e_i ~ p_E, y_i = f(x_i, e_i) with f ~ p(f | θ_f), for i = 1, ..., N.]
2.1 Probabilistic latent variable models for causal discovery

First, we give a more precise definition of the class of models that we use for representing that X causes Y ("X → Y"). We assume that the relationship between X and Y is not deterministic, but disturbed by unobserved noise E (effectively, the summary of all other unobserved causes of Y). The situation is depicted in the left-hand part of Figure 1a: X and E both cause Y, but although X and Y are observed, E is not. We make the following additional assumptions:

(A) There are no other causes of Y, or in other words, we assume determinism: a function f exists such that Y = f(X, E). This function will henceforth be called the causal mechanism.

(B) X and E have no common causes, i.e., X and E are independent: X ⊥⊥ E.

(C) The distribution of the cause is "independent" from the causal mechanism.²

(D) The noise has a standard-normal distribution: E ~ N(0, 1).³
Several recent approaches to causal discovery are based on assumptions (A) and (B) only, but pose one of the following additional restrictions on f:

- f is linear [2];
- additive noise [5], where f(X, E) = F(X) + E for some function F;
- the post-nonlinear model [6], where f(X, E) = G(F(X) + E) for some functions F, G.

For these special cases, it has been shown that a model of the same (restricted) form in the reverse direction Y → X that induces the same joint distribution on (X, Y) does not exist in general. This asymmetry can be used for inferring the causal direction.
In practice, a limited model class may lead to wrong conclusions about the causal direction. For example, when assuming additive noise, it may happen that neither of the two directions provides a sufficiently good fit to the data, and hence no decision can be made. Therefore, we would like to drop this kind of assumption that limits the model class. However, assumptions (A) and (B) are not enough on their own: in general, one can always construct a random variable Ẽ ~ N(0, 1) and a function f̃ : R² → R such that

X = f̃(Y, Ẽ),   Y ⊥⊥ Ẽ   (3)

(for a proof of this statement, see e.g., [14, Theorem 1]).
In combination with the other two assumptions (C) and (D), however, one does obtain an asymmetry that can be used to infer the causal direction. Note that assumption (C) still requires a suitable mathematical interpretation. One possibility would be to interpret this independence as an algorithmic independence similar to Theorem 1, but then we could not use it in practice. Another interpretation has been used in [13] for the noise-free case (i.e., the deterministic model Y = f(X)). Here, our aim is to deal with the noisy case. For this setting we propose a Bayesian approach, which will be explained in the next subsection.

[Footnote 2: This assumption may be violated in biological systems, for example, where the causal mechanisms may have been tuned to their input distributions through evolution.]
[Footnote 3: This is not a restriction of the model class, since in general we can write E = g(Ẽ) for some function g, with Ẽ ~ N(0, 1) and f̃ = f(·, g(·)).]
2.2 The Bayesian generative model for X → Y

The basic idea is to define non-parametric priors on the causal mechanisms and input distributions that favor functions and distributions of low complexity. Inferring the causal direction then boils down to standard Bayesian model selection, where preference is given to the model with the largest marginal likelihood.

We introduce random variables x_i (the cause), y_i (the effect) and e_i (the noise), for i = 1, ..., N, where N is the number of data points. We use vector notation x = (x_i)_{i=1}^N to denote the whole N-tuple of X-values x_i, and similarly for y and e. To make a Bayesian model comparison between the two models X → Y and Y → X, we need to calculate the marginal likelihoods p(x, y | X → Y) and p(x, y | Y → X). Below, we will only consider the model X → Y and omit this from the notation for brevity. The other model Y → X is completely analogous, and can be obtained by simply interchanging the roles of X and Y.
p(x, y) = p(x)p(y | x) =
"Z
!
# "Z
N
Y
p(xi | ?X ) p(?X )d?X
i=1
N
Y
!
#
? yi ? f (xi , ei ) pE (ei ) de p(f | ?f )df p(?f )d?f
i=1
(4)
Here, ?X and ?f parameterize prior distributions of the cause X and the causal mechanism f ,
respectively. Note how the four assumptions discussed in the previous subsection are
incorporated
into the model: assumption (A) results in Dirac delta distributions ? yi ? f (xi , ei ) for each i =
1, . . . , N . Assumption (B) is realized by the a priori independence p(x, e | ?X ) = p(x | ?X )pE (e).
Assumption (C) is realized as the a priori independence p(f, ?X ) = p(f )p(?X ). Assumption (D)
is obvious by taking pE (e) := N (e | 0, 1).
2.3 Choosing the priors

In order to completely specify the model X → Y, we need to choose particular priors. In this work, we assume that all variables are real numbers (i.e., x, y and e are random variables taking values in R^N), and use the following choices (although other choices are also possible):

- For the prior distribution of the cause X, we use a Gaussian mixture model

  p(x_i | θ_X) = Σ_{j=1}^k α_j N(x_i | μ_j, σ_j²)

  with hyperparameters θ_X = (k, α_1, ..., α_k, μ_1, ..., μ_k, σ_1, ..., σ_k). We put an improper Dirichlet prior (with parameters (−1, ..., −1)) on the component weights α and flat priors on the component parameters μ, σ.

- For the prior distribution p(f | θ_f) of the causal mechanism f, we take a Gaussian process with zero mean function and squared-exponential covariance function:

  k_{θ_f}( (x, e), (x', e') ) = λ_Y² exp( −(x − x')² / (2 λ_X²) ) exp( −(e − e')² / (2 λ_E²) ),   (5)

  where θ_f = (λ_X, λ_Y, λ_E) are length-scale parameters. The parameter λ_Y determines the amplitude of typical functions f(x, e), and the length scales λ_X and λ_E determine how quickly typical functions change depending on x and e, respectively. In the additive noise case, for example, the length scale λ_E is large compared to the length scale λ_X, as this leads to an almost linear dependence of f on e. We put broad Gamma priors on all length-scale parameters.
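A minimal numpy sketch of the covariance function (5) on (x, e) pairs (the names are ours):

    import numpy as np

    def k_se(x, e, xp, ep, lam_x, lam_y, lam_e):
        """Squared-exponential covariance of eq. (5); returns the Gram matrix
        between the point sets (x, e) and (xp, ep)."""
        dx2 = (x[:, None] - xp[None, :]) ** 2
        de2 = (e[:, None] - ep[None, :]) ** 2
        return lam_y**2 * np.exp(-dx2 / (2 * lam_x**2) - de2 / (2 * lam_e**2))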
2.4 Approximating the evidence

Now that we have fully specified the model X → Y, the remaining task is to calculate the integral (4) for given observations x, y. As the exact calculation seems intractable, we use a particular approximation of this integral.

The marginal distribution

For the model of the distribution of the cause p(x), we use an asymptotic expansion based on the Minimum Message Length principle that yields the following approximation (for details, see [15]):

−log p(x) ≈ min_{θ_X} [ Σ_{j=1}^k log( N α_j / 12 ) + (k/2) log( N / 12 ) + 3k/2 − log p(x | θ_X) ].   (6)

The conditional distribution

For the conditional distribution p(y | x) according to the model X → Y, we start by replacing the integral over the length-scales θ_f by a MAP estimate:

p(y | x) ≈ max_{θ_f} p(θ_f) ∫∫ δ( y − f(x, e) ) p_E(e) de p(f | θ_f) df.

Integrating over the latent variables e and using the Dirac delta function calculus (where we assume invertibility of the functions f_x : e ↦ f(x, e) for all x), we obtain:⁴

∫∫ δ( y − f(x, e) ) p_E(e) de p(f | θ_f) df = ∫ p_E( ε(f) ) p(f | θ_f) / J(f) df,   (7)

where ε(f) is the (unique) vector satisfying y = f(x, ε), and

J(f) = | det ∇_e f( x, ε(f) ) | = ∏_{i=1}^N | (∂f/∂e)( x_i, ε_i(f) ) |

is the absolute value of the determinant of the Jacobian which results when integrating over the Dirac delta function. The next step would be to integrate over all possible causal mechanisms f (which would be an infinite-dimensional integral). However, this integral again seems intractable, and hence we revert to the following approximation. Because of space constraints, we only give a brief sketch of the procedure here.
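Given a fitted k-component mixture, the marginal score (6) is easy to evaluate; here is a sketch (in practice the mixture itself is fitted with the Figueiredo-Jain algorithm [15], which minimizes this same criterion):

    import numpy as np

    def mml_score(x, weights, means, sigmas):
        """Negative log-evidence of eq. (6) for a Gaussian mixture model."""
        N, k = len(x), len(weights)
        comp = (weights / (np.sqrt(2 * np.pi) * sigmas)
                * np.exp(-(x[:, None] - means) ** 2 / (2 * sigmas**2)))
        log_px = np.sum(np.log(comp.sum(axis=1)))
        return (np.sum(np.log(N * weights / 12.0))
                + 0.5 * k * np.log(N / 12.0) + 1.5 * k - log_px)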
Let us suppress the hyperparameters θ_f for the moment to simplify notation. The idea is to approximate the infinite-dimensional GP function f by a linear combination over basis functions φ_j, parameterized by a weight vector α ∈ R^N with a Gaussian prior distribution:

f_α(x, e) = Σ_{j=1}^N α_j φ_j(x, e),   α ~ N(0, I).

Now, defining the matrix Φ_{ij}(x, ε) := φ_j(x_i, ε_i), the relationship y = Φ(x, ε) α gives a correspondence between ε and α (for fixed x and y), which we assume to be one-to-one. In particular, α = Φ(x, ε)^{−1} y. We can then approximate equation (7) by replacing the integral by a maximum:

∫ p_E( ε(α) ) N(α | 0, I) / J(α) dα ≈ max_α p_E( ε(α) ) N(α | 0, I) / J(α) = max_ε p_E(ε) N(y | 0, Φ Φ^T) / J(ε),   (8)

where in the last step we used the one-to-one correspondence between ε and α.

[Footnote 4: Alternatively, one could first integrate over the causal mechanisms f, and then optimize over the noise values e, similar to what is usually done in GPLVMs [16]. However, we believe that for the purpose of causal discovery, that approach does not work well. The reason is that when optimizing over e, the result is often quite dependent on x, which violates our basic assumption that X ⊥⊥ E. The approach we follow here is more related to nonlinear ICA, whereas GPLVMs are related to nonlinear PCA.]
After working out the details and taking the negative logarithm, the final optimization problem becomes:

−log p(y | x) ≈ min_{θ_f, ε} [ −log p(θ_f) − log N(ε | 0, I) − log N(y | 0, K) + Σ_{i=1}^N log | M_{i·} K^{−1} y | ],   (9)

where the four terms are, respectively, the hyperpriors, the noise prior, the GP marginal, and the information term. Here, the kernel (Gram) matrix K is defined by K_{ij} := k( (x_i, ε_i), (x_j, ε_j) ), where k : R⁴ → R is the covariance function (5); it corresponds to Φ Φ^T in our approximation. The matrix M contains the expected mean derivatives of the GP with respect to e and is defined by M_{ij} := (∂k/∂e)( (x_i, ε_i), (x_j, ε_j) ). Note that the matrices K and M both depend upon ε.

The information term in the objective function (involving the partial derivatives ∂k/∂e) may be surprising at first sight. It is necessary, however, to penalize dependences between x and ε: ignoring it would yield an optimal ε that is heavily dependent on x, violating assumption (B). Interestingly, this term is not present in the usually considered additive noise case, as there the derivative of the causal mechanism with respect to the noise equals one, and its logarithm therefore vanishes. In the next subsection, we discuss some implementation issues that arise when one attempts to solve (6) and (9) in practice.
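For fixed length scales, the objective (9) can be assembled directly from the formulas above; here is an unregularized numpy sketch (the hyperprior term is omitted and all names are ours; the regularizations described in the next section would be added on top):

    import numpy as np

    def gpi_objective(eps, x, y, lam_x, lam_y, lam_e, jitter=1e-5):
        """-log N(eps|0,I) - log N(y|0,K) + sum_i log|M_i. K^{-1} y| of eq. (9)."""
        N = len(x)
        dx2 = (x[:, None] - x[None, :]) ** 2
        de = eps[:, None] - eps[None, :]
        K = lam_y**2 * np.exp(-dx2 / (2 * lam_x**2) - de**2 / (2 * lam_e**2))
        K = K + jitter**2 * np.eye(N)      # K + sigma^2 I (see next section)
        M = -de / lam_e**2 * K             # M_ij = (dk/de)((x_i,eps_i),(x_j,eps_j))
        Kinv_y = np.linalg.solve(K, y)
        _, logdetK = np.linalg.slogdet(K)
        nll_noise = 0.5 * (eps @ eps + N * np.log(2 * np.pi))
        nll_gp = 0.5 * (y @ Kinv_y + logdetK + N * np.log(2 * np.pi))
        info = np.sum(np.log(np.abs(M @ Kinv_y)))
        return nll_noise + nll_gp + info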
Implementation issues
First of all, we preprocess the observed data x and y by standardizing them to zero mean and unit variance for numerical reasons: if the length scales become too large, the kernel matrix K becomes difficult to handle numerically.

We solve the optimization problem (6) concerning the marginal distribution numerically using the algorithm of Figueiredo and Jain [15], with a small but nonzero value (10⁻⁴) of its regularization parameter.
The optimization problem (9) concerning the conditional distribution poses more serious practical problems. Basically, since we approximate a Bayesian integral by an optimization problem, the objective function (9) still needs to be regularized: if one of the partial derivatives ∂f/∂e becomes zero, the objective function diverges. In addition, the kernel matrix corresponding to (5) is extremely ill-conditioned. To deal with these matters, we propose the following ad-hoc solutions:

- We regularize the numerically ill-behaving logarithm in the last term in (9) by approximating it as log|x| ≈ log √(x² + ε̃) with ε̃ ≪ 1.
- We add a small amount of N(0, σ²)-uncertainty to each observed y_i-value, with σ ≪ 1. This is equivalent to replacing K by K + σ²I, which regularizes the ill-conditioned matrix K. We used σ = 10⁻⁵.

Further, note that in the final optimization problem (9), the unobserved noise values ε can in fact also be regarded as additional hyperparameters, similar to the GPLVM model [16]. In our setting, this optimization is particularly challenging, as the number of parameters exceeds the number of observations. In particular, for small length scales λ_X and λ_E the objective function may exhibit a large number of local minima. In our implementation we applied the following measures to deal with this issue:

- We initialize ε with an additive noise model, taking the residuals from a standard GP regression as initial values for ε. The reason for doing this is that in an additive noise model, all partial derivatives ∂f/∂e are positive and constant. This initialization effectively leads to a solution that satisfies the invertibility assumption that we made in approximating the evidence.⁵
- We implemented a log barrier that heavily penalizes negative values of ∂f/∂e, to avoid sign flips of these terms that would violate the invertibility assumption. Basically, together with our earlier regularization of the logarithm, we replaced the logarithms log|x| in the last term in (9) by

  log √((x − δ)² + ε̃) + A ( log √((x − δ)² + ε̃) − log √ε̃ ) 1_{x ≤ δ},

  with δ ≪ 1. We used δ = 10⁻³ and A = 10².

[Footnote 5: This is related in spirit to the standard initialization of GPLVM models by PCA.]
The resulting optimization problem can be solved using standard numerical optimization methods
(we used LBFGS). The source code of our implementation is available as supplementary material
and can also be downloaded from http://webdav.tuebingen.mpg.de/causality/.
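A sketch of the regularized logarithm with the log barrier described above (this reflects our reading of the partly garbled formula; the exact form of the barrier in the original may differ):

    import numpy as np

    def reg_log_abs(x, eps=1e-6, delta=1e-3, A=1e2):
        """log|x| ~ log sqrt((x-delta)^2 + eps), plus a barrier of strength A
        that grows as x moves below delta (i.e., when a derivative turns negative)."""
        base = 0.5 * np.log((x - delta) ** 2 + eps)
        barrier = np.where(x <= delta, A * (base - 0.5 * np.log(eps)), 0.0)
        return base + barrier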
3 Experiments
To evaluate the ability of our method to identify causal directions, we have tested our approach
on simulated and real-world data. To identify the most probable causal direction, we evaluate the
marginal likelihoods corresponding to both possible causal directions (which are given by combining the results of equations (6) and (9)), choosing the model that assigns higher probability to the
observed data. We henceforth refer to this approach as GPI-MML. For comparison, we also considered the marginal likelihood using a GP covariance function that is constant with respect to e,
i.e., assuming additive noise. For this special case, the noise values e can be integrated out analytically, resulting in standard GP regression. We call this approach AN-MML. We also compare
with the method proposed in [1], which also uses an additive noise GP regression for the conditional
model, but uses a simple Gaussian model for the input distribution p(x). We refer to this approach
as AN-GAUSS.
We complemented the marginal likelihood as selection criterion with another possible criterion for
causal model selection: the independence of the cause and the estimated noise [5]. Using HSIC [17]
as test criterion for independence, this approach can be applied to both the additive noise GP and
the more general latent variable approach. As the marginal likelihood does not provide a significance level for the inferred causal direction, we used the ratio of the p-values of HSIC for both
causal directions as prediction criterion, preferring the direction with a higher p-value (i.e., with
less dependence between the estimated noise and the cause). HSIC as selection criterion applied
to the additive or general Gaussian process model will be referred to as AN-HSIC and GPI-HSIC
respectively.
We compared these methods with other related methods: IGCI [13], a method that is also based
on assumption (C), although designed for the noise-free case; LINGAM [2], which assumes a linear
causal mechanism; and PNL, the Post-NonLinear model [6]. We evaluated all methods in the "forced decision" scenario, i.e., the only two possible decisions that a method could take were X → Y and Y → X (so decisions like "both models fit the data" or "neither model fits the data" were not possible).
Simulated data  Inspired by the experimental setup in [5], we generated simulated datasets from the model Y = (X + bX³) e^{γE} + (1 − γ)E. Here, the random variables X and E were sampled from a Gaussian distribution with their absolute values raised to the power q, while keeping the original sign. The parameter γ controls the type of the observation noise, interpolating between purely additive noise (γ = 0) and purely multiplicative noise (γ = 1). The coefficient b determines the non-linearity of the true causal model, with b = 0 corresponding to the linear case. Finally, the parameter q controls the non-Gaussianity of the input and noise distributions: q = 1 gives a Gaussian, while q > 1 and q < 1 produce super-Gaussian and sub-Gaussian distributions, respectively.
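A sketch of this data-generating process (our reading of the partly garbled formula; names and defaults are ours):

    import numpy as np

    def sample_pair(N=500, b=1.0, gamma=0.0, q=1.0, seed=0):
        """Draw one dataset from Y = (X + b*X^3) * exp(gamma*E) + (1-gamma)*E,
        with X, E Gaussian samples raised element-wise to the power q in
        absolute value, keeping the sign."""
        rng = np.random.default_rng(seed)
        g = rng.standard_normal((2, N))
        X, E = np.sign(g) * np.abs(g) ** q
        Y = (X + b * X**3) * np.exp(gamma * E) + (1 - gamma) * E
        return X, Y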
For various parameter settings of γ, b and q, we generated D = 40 independent datasets, each consisting of N = 500 samples from the corresponding generative model. Figure 2 shows the accuracy of the considered methods evaluated on these simulated datasets. Encouragingly, GPI appears to be robust with respect to the type of noise, outperforming the additive noise models over the full range between additive and multiplicative noise (Figure 2a). Note that the additive noise models actually yield the wrong decision for high values of γ, whereas the GPI methods stay well above chance level. Figure 2b shows accuracies for a linear model with non-Gaussian noise and input distributions. Figure 2c shows accuracies for a non-linear model with Gaussian additive noise. We observe that GPI-MML performs well in each scenario. Further, we observe that AN-GAUSS, the method proposed in [1], only performs well for Gaussian input distributions and additive noise.
[Figure 2 (plot): accuracy of each method on the simulated datasets, y-axis from 0 to 1. Panel (a): from additive to multiplicative noise, x-axis α; panel (b): linear function, non-Gaussian additive noise, x-axis q; panel (c): non-linear function, Gaussian additive noise, x-axis b; panel (d): legend with AN-MML, AN-HSIC, AN-GAUSS, GPI-MML, GPI-HSIC, IGCI.]
Figure 2: Accuracy of recovering the true causal direction in simulated datasets. (a) From additive (α = 0) to multiplicative noise (α = 1), for q = 1 and b = 1; (b) from sub-Gaussian noise (q < 1), Gaussian noise (q = 1) to super-Gaussian noise (q > 1), for a linear function (b = 0) with additive noise (α = 0); (c) from non-linear (b < 0) to linear (b = 0) to non-linear (b > 0), with additive Gaussian noise (q = 1, α = 0).
Table 1: Accuracy (in percent) of recovering the true causal direction in 68 real-world datasets.

AN-MML   AN-HSIC   AN-GAUSS   GPI-MML   GPI-HSIC   IGCI     LINGAM   PNL
68 ± 1   68 ± 3    45 ± 3     72 ± 2    62 ± 4     76 ± 1   62 ± 3   67 ± 4
Results on cause-effect pairs. Next, we applied the same methods and selection criteria to real-world cause-effect pairs where the true causal direction is known. The data was obtained from http://webdav.tuebingen.mpg.de/cause-effect/. We considered a total of 68 pairs in this dataset, collected from a variety of domains. To reduce computation time, we subsampled the data, using at most N = 500 samples for each cause-effect pair. Table 1 shows the prediction accuracy for the same approaches as in the simulation study, reporting averages and standard deviations estimated from 3 repetitions of the experiments with different subsamples.
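A small sketch of the evaluation protocol just described (subsampling each pair to at most N = 500 points and averaging accuracy over repetitions); all names are illustrative, and `decide` can be any of the decision rules sketched earlier.

```python
import numpy as np

def accuracy_over_pairs(pairs, decide, n_max=500, repeats=3, seed=0):
    """pairs: list of (x, y, true_direction) tuples with numpy arrays x, y
    and true_direction in {'X->Y', 'Y->X'}."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        correct = 0
        for x, y, true_dir in pairs:
            if len(x) > n_max:  # random subsample to reduce computation
                idx = rng.choice(len(x), size=n_max, replace=False)
                x, y = x[idx], y[idx]
            correct += (decide(x, y) == true_dir)
        accs.append(correct / len(pairs))
    return np.mean(accs), np.std(accs)
```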
4 Conclusions and discussion
We proposed the first method (to the best of our knowledge) for addressing the challenging task of distinguishing between cause and effect without an a priori restriction to a certain class of models. The method compares marginal likelihoods that penalize complex input distributions and causal mechanisms. Moreover, our framework generalizes a number of existing approaches that assume a limited class of possible causal mechanisms. A more extensive evaluation of the performance of our method remains to be performed in future work. Nevertheless, the encouraging results that we have obtained thus far confirm the hypothesis that asymmetries of the joint distribution of cause and effect provide useful hints on the causal direction.
Acknowledgments
We thank Stefan Harmeling and Hannes Nickisch for fruitful discussions. We would also like to thank the
authors of the GPML toolbox [18], which was very useful during the development of our software.
OS was supported by a fellowship from the Volkswagen Foundation.
References
[1] N. Friedman and I. Nachman. Gaussian process networks. In Proc. of the 16th Annual Conference on Uncertainty in Artificial Intelligence, pages 211–219, 2000.
[2] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. J. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
[3] X. Sun, D. Janzing, and B. Schölkopf. Causal inference by choosing graphs with most plausible Markov kernels. In Proceedings of the 9th Int. Symp. Art. Int. and Math., Fort Lauderdale, Florida, 2006.
[4] X. Sun, D. Janzing, and B. Schölkopf. Distinguishing between cause and effect via kernel-based complexity measures for conditional probability densities. Neurocomputing, pages 1248–1256, 2008.
[5] P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21 (NIPS*2008), pages 689–696, 2009.
[6] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 2009.
[7] D. Janzing, P. Hoyer, and B. Schölkopf. Telling cause from effect based on high-dimensional observations. In Proceedings of the International Conference on Machine Learning (ICML 2010), pages 479–486, 2010.
[8] J. M. Mooij and D. Janzing. Distinguishing between cause and effect. In Journal of Machine Learning Research Workshop and Conference Proceedings, volume 6, pages 147–156, 2010.
[9] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. Springer-Verlag, 1993. (2nd ed. MIT Press 2000).
[10] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[11] J. Lemeire and E. Dirkx. Causal models as minimal descriptions of multivariate systems. http://parallel.vub.ac.be/~jan/, 2006.
[12] D. Janzing and B. Schölkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168–5194, 2010.
[13] P. Daniušis, D. Janzing, J. M. Mooij, J. Zscheischler, B. Steudel, K. Zhang, and B. Schölkopf. Inferring deterministic causal relations. In Proceedings of the 26th Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), 2010.
[14] A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999.
[15] M. A. T. Figueiredo and A. K. Jain. Unsupervised learning of finite mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):381–396, March 2002.
[16] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference, page 329. The MIT Press, 2004.
[17] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. Journal of Machine Learning Research, 6:2075–2129, 2005.
[18] C. E. Rasmussen and H. Nickisch. Gaussian Processes for Machine Learning (GPML) Toolbox. Journal of Machine Learning Research, accepted, 2010.
3,505 | 4,174 | Learning sparse dynamic linear systems using stable spline kernels and exponential hyperpriors
Alessandro Chiuso
Department of Management and Engineering
University of Padova
Vicenza, Italy
[email protected]
Gianluigi Pillonetto*
Department of Information Engineering
University of Padova
Padova, Italy
[email protected]
Abstract
We introduce a new Bayesian nonparametric approach to identification of sparse
dynamic linear systems. The impulse responses are modeled as Gaussian processes whose autocovariances encode the BIBO stability constraint, as defined by
the recently introduced "Stable Spline kernel". Sparse solutions are obtained by
placing exponential hyperpriors on the scale factors of such kernels. Numerical
experiments regarding estimation of ARMAX models show that this technique
provides a definite advantage over a group LAR algorithm and state-of-the-art
parametric identification techniques based on prediction error minimization.
1 Introduction
Black-box identification approaches are widely used to learn dynamic models from a finite set of input/output data [1]. In particular, in this paper we focus on the identification of large-scale linear systems that involve a large number of variables and find important applications in many different domains, such as chemical engineering, economic systems and computer vision [2]. In this scenario, a key point is that the identification procedure should be sparsity-favouring, i.e. able to extract from the large number of subsystems entering the system description just that subset which significantly influences the system output. Such a sparsity principle permeates many well-known techniques in machine learning and signal processing, such as feature selection, selective shrinkage and compressed sensing [3, 4].
In the classical identification scenario, Prediction Error Methods (PEM) represent the most used
approaches to optimal prediction of discrete-time systems [1]. The statistical properties of PEM
(and Maximum Likelihood) methods are well understood when the model structure is assumed to be
known. However, in real applications, first a set of competitive parametric models has to be postulated. Then, a key point is the selection of the most adequate model structure, usually performed by
AIC and BIC criteria [5, 6]. Not surprisingly, the resulting prediction performance, when tested on
experimental data, may be distant from that predicted by "standard" (i.e. without model selection) statistical theory, which suggests that PEM should be asymptotically efficient for Gaussian innovations. While this drawback already affects standard identification problems, a fortiori it complicates the study of large-scale systems, where the large number of parameters, as compared to the number of available data, may undermine the applicability of the theory underlying e.g. AIC and BIC.
Some novel estimation techniques inducing sparse models have been recently proposed. They include the well-known Lasso [7] and Least Angle Regression (LAR) [8], where variable selection is performed exploiting the ℓ1 norm. This type of penalty term encodes the so-called bi-separation feature, i.e. it favors solutions with many zero entries at the expense of few large components. Consistency properties of this method are discussed e.g. in [9, 10]. Extensions of this procedure to group selection include Group Lasso and Group LAR (GLAR) [11], where the sum of the Euclidean norms of each group (in place of the absolute value of the single components) is used. Theoretical analyses of these approaches and connections with the multiple kernel learning problem can be found in [12, 13]. However, most of the work has been done in the "static" scenario, while very little, with some exceptions [14, 15], can be found regarding the identification of dynamic systems.

* This research has been partially supported by the PRIN Project "Sviluppo di nuovi metodi e algoritmi per l'identificazione, la stima Bayesiana e il controllo adattativo e distribuito", by the Progetto di Ateneo CPDA090135/09 funded by the University of Padova, and by the European Community's Seventh Framework Programme under agreement n. FP7-ICT-223866-FeedNetBack.
In this paper we adopt a Bayesian point of view to prediction and identification of sparse linear systems. Our starting point is the new identification paradigm developed in [16] that relies on nonparametric estimation of impulse responses (see also [17] for extensions to predictor estimation). Rather
than postulating finite-dimensional structures for the system transfer function, e.g. ARX, ARMAX
or Laguerre [1], the system impulse response is searched for within an infinite-dimensional space.
The intrinsically ill-posed nature of the problem is circumvented using Bayesian regularization methods. In particular, working under the framework of Gaussian regression [18], in [16] the system impulse response is modeled as a Gaussian process whose autocovariance is the so-called stable spline kernel, which includes the BIBO stability constraint.
In this paper, we extend this nonparametric paradigm to the design of optimal linear predictors for
sparse systems. Without loss of generality, analysis is restricted to MISO systems so that we interpret the predictor as a system with m + 1 inputs (given by past outputs and inputs) and one output
(output predictions). Thus, predictor design amounts to estimating m + 1 impulse responses modeled as realizations of Gaussian processes. We set their autocovariances to stable spline kernels with
different (and unknown) scale factors which are assigned exponential hyperpriors having a common
hypervariance. In this way, while GLAR uses the sum of the ℓ1 norms of the single impulse responses, our approach favors sparsity through an ℓ1 penalty on kernel hyperparameters. Inducing sparsity by hyperpriors is an important feature of our approach. In fact, this makes it possible to obtain the
marginal posterior of the hyperparameters in closed form and hence also their estimates in a robust
way. Once the kernels are selected, the impulse responses are obtained by a convex Tikhonov-type
variational problem. Numerical experiments involving sparse ARMAX systems show that this approach provides a definite advantage over both GLAR and PEM (equipped with AIC or BIC) in
terms of predictive capability on new output data.
The paper is organized as follows. In Section 2, the nonparametric approach to system identification
introduced in [16] is briefly reviewed. Section 3 reports the statement of the predictor estimation
problem while Section 4 describes the new Bayesian model for system identification of sparse linear
systems. In Section 5, a numerical algorithm which returns the unknown components of the prior
and the estimates of predictor and system impulse responses is derived. In Section 6 we use simulated data to demonstrate the effectiveness of the proposed approach. Conclusions end the paper.
2 Preliminaries: kernels for system identification
2.1 Kernel-based regularization
A widely used approach to reconstruct a function from indirect measurements {y_t} consists of minimizing a regularization functional in a reproducing kernel Hilbert space (RKHS) H associated with a symmetric and positive-definite kernel K [19]. Given N data points, least-squares regularization in H estimates the unknown function as

$$\hat{h} = \arg\min_{h \in H} \sum_{t=1}^{N} (y_t - L_t[h])^2 + \gamma \|h\|_H^2 \qquad (1)$$

where {L_t} are linear and bounded functionals on H related to the measurement model, while the positive scalar γ trades off empirical error and solution smoothness [20].
Under the stated assumptions and according to the representer theorem [21], the minimizer of (1) is the sum of N basis functions defined by the kernel filtered by the operators {L_t}, with coefficients obtainable by solving a linear system of equations. Such a solution also enjoys an interpretation in Bayesian terms: it corresponds to the minimum variance estimate of f when f is a zero-mean Gaussian process with autocovariance K and {y_t − L_t[f]} is white Gaussian noise independent of f [22]. Often, prior knowledge is limited to the fact that the signal, and possibly some of its derivatives, are continuous with bounded energy. In this case, f is often modeled as the p-fold integral of white noise. If the white noise has unit intensity, the autocorrelation of f is W_p, where

$$W_p(s,t) = \int_0^1 G_p(s,u)\,G_p(t,u)\,du, \qquad G_p(r,u) = \frac{(r-u)_+^{p-1}}{(p-1)!}, \qquad (u)_+ = \begin{cases} u & \text{if } u \geq 0 \\ 0 & \text{if } u < 0 \end{cases} \qquad (2)$$

Figure 1: Realizations of a stochastic process f with autocovariance proportional to the standard Cubic Spline kernel (left), the new Stable Spline kernel (middle) and its sampled version enriched by a parametric component defined by the poles $-0.5 \pm 0.6\sqrt{-1}$ (right).
This is the autocovariance associated with the Bayesian interpretation of p-th order smoothing
splines [23]. In particular, when p = 2, one obtains the cubic spline kernel.
2.2 Kernels for system identification
In the system identification scenario, the main drawback of the kernel (2) is that it does not account
for impulse response stability. In fact, the variance of f increases over time. This can be easily
appreciated by looking at Fig. 1 (left) which displays 100 realizations drawn from a zero-mean
Gaussian process with autocovariance proportional to W2 . One of the key contributions of [16] is
the definition of a kernel specifically suited to linear system identification leading to an estimator
with favorable bias and variance properties. In particular, it is easy to see that if the autocovariance
of f is proportional to W_p, the variance of f(t) is zero at t = 0 and tends to ∞ as t increases. However, if f represents a stable impulse response, we would rather let it have a finite variance at t = 0 which goes exponentially to zero as t tends to ∞. This property can be ensured by considering autocovariances proportional to the class of kernels given by

$$K_p(s,t) = W_p(e^{-\beta s}, e^{-\beta t}), \qquad s,t \in \mathbb{R}_+ \qquad (3)$$

where β is a positive scalar governing the decay rate of the variance [16]. In practice, β will be unknown, so it is convenient to treat it as a hyperparameter to be estimated from data.
In view of (3), if p = 2 the autocovariance becomes the Stable Spline kernel introduced in [16]:

$$K_2(t,\tau) = \frac{e^{-\beta(t+\tau)}\, e^{-\beta \max(t,\tau)}}{2} - \frac{e^{-3\beta \max(t,\tau)}}{6} \qquad (4)$$
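For concreteness, the following sketch evaluates the Gram matrix of the Stable Spline kernel (4); the vectorized form is our own, but it computes exactly the expression above.

```python
import numpy as np

def stable_spline_gram(ts, beta=0.4):
    """Gram matrix of K2 on the time instants `ts`."""
    t = np.asarray(ts, dtype=float)
    T, Tau = np.meshgrid(t, t, indexing="ij")
    M = np.maximum(T, Tau)
    return np.exp(-beta * (T + Tau)) * np.exp(-beta * M) / 2.0 \
           - np.exp(-3.0 * beta * M) / 6.0
```

Realizations like those in Fig. 1 (middle) can then be drawn with np.random.multivariate_normal, using a zero mean and this Gram matrix as covariance.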
Proposition 1 [16] Let f be zero-mean Gaussian with autocovariance K2 . Then, with probability
one, the realizations of f are continuous impulse responses of BIBO stable dynamic systems.
The effect of the stability constraint is visible in Fig. 1 (middle) which displays 100 realizations
drawn from a zero-mean Gaussian process with autocovariance proportional to K2 with β = 0.4.
3 Statement of the system identification problem
In what follows, vectors are column vectors, unless otherwise specified. We denote by $\{y_t\}_{t\in\mathbb{Z}}$, $y_t \in \mathbb{R}$, and $\{u_t\}_{t\in\mathbb{Z}}$, $u_t \in \mathbb{R}^m$, a pair of jointly stationary stochastic processes which represent, respectively, the output and input of an unknown time-invariant dynamical system. With some abuse of notation, $y_t$ will both denote a random variable (from the random process $\{y_t\}_{t\in\mathbb{Z}}$) and its sample value. The same holds for $u_t$. Our aim is to identify a linear dynamical system of the form

$$y_t = \sum_{i=1}^{\infty} f_i u_{t-i} + \sum_{i=0}^{\infty} g_i e_{t-i} \qquad (5)$$

from $\{u_t, y_t\}_{t=1,\ldots,N}$. In (5), $f_i \in \mathbb{R}^{1\times m}$ and $g_i \in \mathbb{R}$ are matrix and scalar coefficients of the unknown system impulse responses, while $e_t$ is the Gaussian innovation sequence.

Figure 2: Bayesian network describing the new nonparametric model for identification of sparse linear systems, where $y^l := [y_{l-1}, y_{l-2}, \ldots]$ and, in the reduced model, $\lambda := \lambda_1 = \ldots = \lambda_{m+1}$.
Following the Prediction Error Minimization framework, identification of the dynamical system (5) is converted into estimation of the associated one-step-ahead predictor. Letting $h^k := \{h_t^k\}_{t\in\mathbb{N}}$ denote the predictor impulse response associated with the k-th input $\{u_t^k\}_{t\in\mathbb{Z}}$, one has

$$y_t = \sum_{k=1}^{m} \sum_{i=1}^{\infty} h_i^k u_{t-i}^k + \sum_{i=1}^{\infty} h_i^{m+1} y_{t-i} + e_t \qquad (6)$$

where $h^{m+1} := \{h_t^{m+1}\}_{t\in\mathbb{N}}$ is the impulse response modeling the autoregressive component of the predictor. As is well known, if the joint spectrum of $\{y_t\}$ and $\{u_t\}$ is bounded away from zero, each $h^k$ is (BIBO) stable. Under this assumption, our aim is to estimate the predictor impulse responses, in a scenario where the number of measurements N is not large, as compared with m, and many measured inputs could be irrelevant for the prediction of $y_t$. We will focus on the identification of ARMAX models, so that the zeta-transforms of $\{h^k\}$ are rational functions all sharing the same denominator, although the approach described below immediately extends to general linear systems.
4 A Bayesian model for identification of sparse linear systems
4.1 Prior for predictor impulse responses
We model $\{h^k\}$ as independent Gaussian processes whose kernels share the same hyperparameters apart from the scale factors. In particular, each $h^k$ is proportional to the convolution of a zero-mean Gaussian process, with autocovariance given by the sampled version of $K_2$, with a parametric impulse response r, used to capture dynamics hardly represented by a smooth process, e.g. high-frequency oscillations. For instance, the zeta-transform R(z) of r can be parametrized as follows:

$$R(z) = \frac{z^2}{P_\rho(z)}, \qquad P_\rho(z) = z^2 + \rho_1 z + \rho_2, \qquad \rho \in \Theta \subset \mathbb{R}^2 \qquad (7)$$

where the feasible region Θ constrains the two roots of $P_\rho(z)$ to belong to the open left unit semicircle in the complex plane. To better appreciate the role of the finite-dimensional component of the model, Fig. 1 (right panel) shows some realizations (with samples linearly interpolated) drawn from a discrete-time zero-mean normal process with autocovariance given by $K_2$ enriched by ρ = [1, 0.61] in (7). Notice that, in this way, an oscillatory behavior is introduced in the realizations by enriching the Stable Spline kernel with the poles $-0.5 \pm 0.6\sqrt{-1}$.
The kernel of $h^k$ defined by $K_2$ and (7) is denoted by $K : \mathbb{N} \times \mathbb{N} \to \mathbb{R}$ and depends on ρ and β. Thus, letting E[·] denote the expectation operator, the prior model on the impulse responses is given by

$$E[h_j^k h_i^k] = \lambda_k^2\, K(j, i; \rho, \beta), \qquad k = 1,\ldots,m+1, \qquad i,j \in \mathbb{N}$$

4.2 Hyperprior for the hyperparameters
The noise variance σ² will always be estimated via a preliminary step using a low-bias ARX model, as described in [24]. Thus, this parameter will be assumed known in the description of our Bayesian model. The hyperparameters ρ, β and $\{\lambda_k\}$ are instead modeled as mutually independent random vectors. β is given a non-informative probability density on $\mathbb{R}_+$, while ρ has a uniform distribution on Θ. Each $\lambda_k$ is an exponential random variable with inverse of the mean (and SD) $\gamma \in \mathbb{R}_+$, i.e.

$$p(\lambda_k) = \gamma \exp(-\gamma \lambda_k)\,\chi(\lambda_k \geq 0), \qquad k = 1,\ldots,m+1$$

with χ the indicator function. We also interpret γ as a random variable with a non-informative prior on $\mathbb{R}_+$. Finally, ζ denotes the hyperparameter random vector, i.e. $\zeta := [\lambda_1, \ldots, \lambda_{m+1}, \rho_1, \rho_2, \beta, \gamma]$.
4.3 The full Bayesian model
Let $A_k \in \mathbb{R}^{N\times\infty}$ where, for $j = 1,\ldots,N$ and $i \in \mathbb{N}$, we have:

$$[A_k]_{ji} = u_{j-i}^k \ \text{ for } k = 1,\ldots,m, \qquad [A_{m+1}]_{ji} = y_{j-i} \qquad (8)$$

In view of (6), using the notation of ordinary algebra to handle infinite-dimensional objects, with each $h^k$ interpreted as an infinite-dimensional column vector, it holds that

$$y^+ = \sum_{k=1}^{m} A_k(u^k)\, h^k + A_{m+1}(y^+, y^-)\, h^{m+1} + e \qquad (9)$$

where

$$y^+ = [y_1, y_2, \ldots, y_N]^T, \qquad y^- = [y_0, y_{-1}, y_{-2}, \ldots]^T, \qquad e = [e_1, e_2, \ldots, e_N]^T \qquad (10)$$
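In practice the impulse responses are truncated to a finite number p of coefficients (Section 6 uses p = 40), so each $A_k$ becomes an N × p matrix. A minimal sketch of its construction, with unknown past samples set to zero as discussed next, is:

```python
import numpy as np

def regressor_matrix(signal, N, p):
    """A with A[j-1, i-1] = signal value at time j - i (1-based j = 1..N,
    i = 1..p); samples before time 1 are treated as zero.
    `signal[0]` is assumed to hold the sample at time 1."""
    A = np.zeros((N, p))
    for j in range(1, N + 1):
        for i in range(1, p + 1):
            if j - i >= 1:
                A[j - 1, i - 1] = signal[j - i - 1]
    return A
```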
In practice, $y^-$ is never completely known and a solution is to set its unknown components to zero, see e.g. Section 3.2 in [1]. Further, the following approximation is exploited:

$$p(y^+, \{h^k\}, y^- \mid \zeta) \approx p(y^+ \mid \{h^k\}, y^-, \zeta)\, p(\{h^k\} \mid \zeta)\, p(y^-) \qquad (11)$$

i.e. the past $y^-$ is assumed not to carry information on the predictor impulse responses and the hyperparameters. Our stochastic model is described by the Bayesian network in Fig. 2 (left side). The dependence on $y^-$ is hereafter omitted, as well as the dependence of the $\{A_k\}$ on $y^+$ or $u^k$. We start by reporting a preliminary lemma, whose proof can be found in [17], which will be needed in Propositions 2 and 3.

Lemma 1. Let the roots of $P_\rho$ in (7) be stable. Then, if $\{y_t\}$ and $\{u_t\}$ are zero-mean, finite-variance stationary stochastic processes, each operator $A_k$ is almost surely (a.s.) continuous in $H_K$.
5 Estimation of the hyper-parameters and the predictor impulse responses
5.1 Estimation of the hyper-parameters
We estimate the hyperparameter vector ζ by optimizing its marginal posterior, i.e. the joint density of $y^+$, ζ and $\{h^k\}$ where all the $\{h^k\}$ are integrated out. This is described in the next proposition, which derives from simple manipulations of probability densities whose well-posedness is guaranteed by Lemma 1. Below, $I_N$ is the N × N identity matrix while, with a slight abuse of notation, K is now seen as an element of $\mathbb{R}^{\infty\times\infty}$, i.e. its i-th column is the sequence $K(\cdot, i)$, $i \in \mathbb{N}$.

Proposition 2. Let $\{y_t\}$ and $\{u_t\}$ be zero-mean, finite-variance stationary stochastic processes. Then, under the approximation (11), the maximum a posteriori estimate of ζ given $y^+$ is

$$\hat{\zeta} = \arg\min_{\zeta} J(y^+;\zeta) \quad \text{s.t.} \quad \rho \in \Theta,\ \ \beta, \gamma > 0,\ \ \lambda_k \geq 0\ \ (k = 1,\ldots,m+1) \qquad (12)$$
where J is almost surely well defined pointwise and given by

$$J(y^+;\zeta) = \frac{1}{2}\log\det[2\pi V[y^+]] + \frac{1}{2}(y^+)^T (V[y^+])^{-1} y^+ + \gamma \sum_{k=1}^{m+1} \lambda_k - \log(\gamma) \qquad (13)$$

with $V[y^+] = \sigma^2 I_N + \sum_{k=1}^{m+1} \lambda_k^2 A_k K A_k^T$.
The objective (13), including the ℓ1 penalty on $\{\lambda_k\}$, is a Bayesian modified version of that connected with multiple kernel learning, see Section 3 in [25]. The additional terms $\log\det[V[y^+]]$ and $\log(\gamma)$ make it possible to estimate the weight of the ℓ1 norm jointly with the other hyperparameters.
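A direct transcription of the (truncated) objective (13) is sketched below; lam, gamma and sigma2 denote the scale factors λ_k, the hyperprior rate γ and the pre-estimated noise variance σ², and the matrices A_k and the kernel Gram matrix K are assumed to be built as in the earlier snippets.

```python
import numpy as np

def neg_log_marginal_posterior(y_plus, A_list, K, lam, gamma, sigma2):
    """Evaluate J(y+; zeta) of equation (13) for fixed rho, beta (encoded in K)."""
    N = len(y_plus)
    V = sigma2 * np.eye(N)
    for lam_k, A_k in zip(lam, A_list):
        V += lam_k**2 * A_k @ K @ A_k.T          # V[y+] of equation (13)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * V)
    quad = y_plus @ np.linalg.solve(V, y_plus)
    return 0.5 * logdet + 0.5 * quad + gamma * np.sum(lam) - np.log(gamma)
```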
An important issue for the practical use of our numerical scheme is the availability of a good starting point for the optimizer. Below, we describe a scheme that achieves a suboptimal solution by just solving an optimization problem in $\mathbb{R}^4$ related to the reduced Bayesian model of Fig. 2 (right side).

i) Obtain $\{\hat{\lambda}_k\}$, $\hat{\rho}$ and $\hat{\beta}$ by solving the following modified version of problem (12):

$$\arg\min_{\zeta} \left[ J(y^+;\zeta) - \gamma \sum_{k=1}^{m+1} \lambda_k + \log(\gamma) \right] \quad \text{s.t.} \quad \rho \in \Theta,\ \ \beta > 0,\ \ \lambda_1 = \ldots = \lambda_{m+1} \geq 0$$

ii) Set $\hat{\gamma} = 1/\hat{\lambda}_1$ and $\hat{\zeta} = [\hat{\lambda}_1, \ldots, \hat{\lambda}_{m+1}, \hat{\rho}, \hat{\beta}, \hat{\gamma}]$. Then, for $k = 1,\ldots,m+1$: set $\bar{\zeta} = \hat{\zeta}$ except for the k-th component of $\hat{\lambda}$, which is set to 0; if $J(y^+;\bar{\zeta}) \leq J(y^+;\hat{\zeta})$, set $\hat{\zeta} = \bar{\zeta}$.
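Step ii) can be written compactly as the following sketch, where `objective` evaluates J(y+; ·) as a function of the scale factors with the remaining hyperparameters held fixed; the function name is illustrative.

```python
import numpy as np

def prune_scale_factors(lam, objective):
    """Try zeroing each scale factor in turn; keep the change whenever
    the objective does not increase."""
    lam = np.array(lam, dtype=float)
    J_best = objective(lam)
    for k in range(len(lam)):
        trial = lam.copy()
        trial[k] = 0.0
        J_trial = objective(trial)
        if J_trial <= J_best:
            lam, J_best = trial, J_trial
    return lam
```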
5.2 Estimation of the predictor impulse responses for known ζ

Let $H_K$ be the RKHS associated with K, with norm $\|\cdot\|_{H_K}$. Let also $\hat{h}^k = E[h^k \mid y^+, \zeta]$. The following result comes from the representer theorem, whose applicability is guaranteed by Lemma 1.
Proposition 3. Under the same assumptions of Proposition 2, almost surely we have

$$\{\hat{h}^k\}_{k=1}^{m+1} = \arg\min_{\{f^k \in H_K\}_{k=1}^{m+1}} \left\| y^+ - \sum_{k=1}^{m+1} A_k f^k \right\|^2 + \sigma^2 \sum_{k=1}^{m+1} \frac{\|f^k\|_{H_K}^2}{\lambda_k^2}$$

where ‖·‖ is the Euclidean norm. Moreover, almost surely we also have, for $k = 1,\ldots,m+1$,

$$\hat{h}^k = \lambda_k^2\, K A_k^T c, \qquad c = \left( \sigma^2 I_N + \sum_{k=1}^{m+1} \lambda_k^2 A_k K A_k^T \right)^{-1} y^+ \qquad (14)$$

After obtaining the estimates of the $\{h^k\}$, simple formulas can then be used to derive the system impulse responses f and g in (5) and hence also the k-step ahead predictors; see [1] for details.
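A sketch of the closed-form estimates (14), under the same finite truncation used in the earlier snippets:

```python
import numpy as np

def estimate_impulse_responses(y_plus, A_list, K, lam, sigma2):
    """Return the truncated estimates h^k of equation (14)."""
    N = len(y_plus)
    V = sigma2 * np.eye(N)
    for lam_k, A_k in zip(lam, A_list):
        V += lam_k**2 * A_k @ K @ A_k.T
    c = np.linalg.solve(V, y_plus)               # c of equation (14)
    return [lam_k**2 * K @ A_k.T @ c for lam_k, A_k in zip(lam, A_list)]
```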
6 Numerical experiments
We consider two Monte Carlo studies of 200 runs where, at each run, an ARMAX linear system with 15 inputs is generated as follows:

• The number of $h^k$ different from zero is randomly drawn from the set {0, 1, 2, ..., 8}.
• Then, the order of the ARMAX model is randomly chosen in [1, 30] and the model is generated by the MATLAB function drmodel.m. The system and the predictor poles are restricted to have modulus less than 0.95, with the ℓ2 norm of each $h^k$ bounded by 10.

In the first Monte Carlo experiment, at each run an identification data set of size 500 and a test set of size 1000 are generated using independent realizations of white noise as input. In the second experiment, the prediction on new data is more challenging. In fact, at each run, an identification data set of size 500 and a test set of size 1000 are generated via the MATLAB function idinput.m using, respectively, independent realizations of a random Gaussian signal with band [0, 0.8] and [0, 0.9] (the interval boundaries specify the lower and upper limits of the passband, expressed as fractions of the Nyquist frequency). We compare the following estimators:
[Figure 3 (plot): two boxplot panels (experiments #1 and #2), y-axis COD1 ranging roughly from −2 to 1, one box per estimator: PEM+Or, Stable Spline, GLAR, PEM+BIC.]
Figure 3: Boxplots of the values of COD1 obtained by PEM+Or, Stable Spline, GLAR and
PEM+BIC in the two experiments. The outliers obtained by PEM+BIC are not all displayed.
Table 1: Percentage of the $h^k$ equal to zero correctly set to zero by the employed estimator.

Experiment   PEM+Oracle   Stable Spline   Subopt. Stable Spline   GLAR
#1           100%         98.7%           97.5%                   45.6%
#2           100%         98.4%           98.2%                   52.4%
1. GLAR: this is the GLAR algorithm described in [11] applied to ARX models; the order
(between 1 and 30) and the level of sparsity (i.e. the number of null hk ) is determined using
the first 2/3 of the 500 available data as training set and the remaining part as validation
data (the use of Cp statistics does not provide better results in this case).
2. PEM+Oracle: this is the classical PEM approach, as implemented in the pem.m function
of the MATLAB System Identification Toolbox [26], equipped with an oracle that, at every
run, knows which predictor impulse responses are zero and, having access to the test set,
selects those model orders that provide the best prediction performance.
3. PEM+BIC: this is the classical PEM approach that uses BIC for model order selection. The orders of the polynomials in the ARMAX model are not allowed to differ from each other, since this would lead to a combinatorial explosion of the number of competitive models.
4. Stable Spline: this is the approach based on the full Bayesian model of Fig. 2. The first
40 available input/output pairs enter the {Ak } in (9) so that N = 460. For computational
reasons, the number of estimated predictor coefficients is 40.
5. Suboptimal Stable Spline: the same as above except that we exploit the reduced Bayesian
model of Fig. 2 complemented with the procedure described at the end of subsection 5.1.
The following performance indexes are considered:
1. Percentage of the impulse responses equal to zero correctly set to zero by the estimator.
2. k-step-ahead Coefficient of Determination, denoted by $COD_k$, quantifying how much of the test set variance is explained by the forecast. It is computed at each run as

$$COD_k := 1 - \frac{RMS_k^2}{\frac{1}{1000}\sum_{t=1}^{1000}\left(y_t^{test} - \bar{y}^{test}\right)^2}, \qquad RMS_k := \sqrt{\frac{1}{1000}\sum_{t=1}^{1000}\left(y_t^{test} - \hat{y}_{t|t-k}^{test}\right)^2} \qquad (15)$$

where $\bar{y}^{test}$ is the sample mean of the test set data $\{y_t^{test}\}_{t=1}^{1000}$ and $\hat{y}_{t|t-k}^{test}$ is the k-step ahead prediction computed using the estimated model. The average index obtained during the Monte Carlo study, as a function of k, is then denoted by $\overline{COD}_k$.
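A direct transcription of (15), assuming the test outputs and their k-step-ahead predictions are available as arrays:

```python
import numpy as np

def cod_k(y_test, y_pred_k):
    """Coefficient of determination; y_pred_k[t] is the k-step ahead
    prediction of y_test[t]."""
    rms2 = np.mean((y_test - y_pred_k) ** 2)
    var = np.mean((y_test - np.mean(y_test)) ** 2)
    return 1.0 - rms2 / var
```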
[Figure 4 (plot): average $\overline{COD}_k$ (y-axis) versus prediction horizon k = 1, ..., 20 (x-axis), for Monte Carlo studies #1 (top) and #2 (bottom); curves: Stable Spline, Suboptimal Stable Spline, PEM + Oracle, GLAR.]

Figure 4: $\overline{COD}_k$, i.e. average coefficient of determination relative to k-step ahead prediction, obtained during the Monte Carlo studies #1 (top) and #2 (bottom) using PEM+Oracle, GLAR, and Stable Spline based on the full and the reduced Bayesian model of Fig. 2.
Notice that, in both cases, the larger the index, the better the performance of the estimator.
In every experiment the performance of PEM+BIC has been largely unsatisfactory, providing
strongly negative values for CODk . This is illustrated e.g. in Fig. 3 showing the boxplots of the
200 values of COD1 obtained by 4 of the employed estimators during the two Monte Carlo studies.
We have also assessed that results do not improve using AIC. In view of this, in what follows other
results from PEM+BIC will not be shown.
Table 1 reports the percentage of the predictor impulse responses equal to zero correctly estimated as zero by the estimators. Remarkably, in all cases the Stable Spline estimators not only outperform GLAR, but the achieved percentage is close to 99%. This shows that the use of the marginal posterior makes it possible to effectively detect the subset of the $\{\lambda_k\}$ equal to zero. Finally, Fig. 4 displays $\overline{COD}_k$ as a function of the prediction horizon obtained during the Monte Carlo studies #1 (top) and #2 (bottom). The performance of Stable Spline appears superior to that of GLAR and is comparable with that of PEM+Oracle, also when the reduced Bayesian model of Fig. 2 is used.
7 Conclusions
We have shown how identification of large sparse dynamic systems can benefit from the flexibility
of kernel methods. To this aim, we have extended a recently proposed nonparametric paradigm to
identify sparse models via prediction error minimization. Predictor impulse responses are modeled
as zero-mean Gaussian processes using stable spline kernels encoding the BIBO-stability constraint
and sparsity is induced by exponential hyperpriors on their scale factors. The method compares
much favorably with GLAR, with its performance close to that achievable combining PEM with an
oracle which exploits the test set in order to select the best model order. In the near future we plan to
provide a theoretical analysis characterizing the hyperprior-based scheme as well as to design new
ad hoc optimization schemes for hyperparameter estimation.
References
[1] L. Ljung. System Identification - Theory For the User. Prentice Hall, 1999.
[2] J. Mohammadpour and K. M. Grigoriadis. Efficient Modeling and Control of Large-Scale Systems. Springer, 2010.
[3] T. J. Hastie and R. J. Tibshirani. Generalized additive models. In Monographs on Statistics and Applied Probability, volume 43. Chapman and Hall, London, UK, 1990.
[4] D. Donoho. Compressed sensing. IEEE Trans. on Information Theory, 52(4):1289–1306, 2006.
[5] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19:716–723, 1974.
[6] G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6:461–464, 1978.
[7] R. Tibshirani. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society, Series B, 58, 1996.
[8] B. Efron, T. Hastie, L. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407–499, 2004.
[9] P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
[10] H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429, 2006.
[11] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2006.
[12] F. R. Bach. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res., 9:1179–1225, 2008.
[13] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, 2005.
[14] H. Wang, G. Li, and C. L. Tsai. Regression coefficient and autoregressive order shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 69(1):63–78, 2007.
[15] Nan-Jung Hsu, Hung-Lin Hung, and Ya-Mei Chang. Subset selection for vector autoregressive processes using lasso. Computational Statistics and Data Analysis, 52:3645–3657, 2008.
[16] G. Pillonetto and G. De Nicolao. A new kernel-based approach for linear system identification. Automatica, 46(1):81–93, 2010.
[17] G. Pillonetto, A. Chiuso, and G. De Nicolao. Prediction error identification of linear systems: a nonparametric Gaussian regression approach. Automatica (in press), 2011.
[18] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[19] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
[20] G. Wahba. Support vector machines, reproducing kernel Hilbert spaces and randomized GACV. Technical Report 984, Department of Statistics, University of Wisconsin, 1998.
[21] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
[22] A. J. Smola and B. Schölkopf. Bayesian kernel methods. In S. Mendelson and A. J. Smola, editors, Machine Learning, Proceedings of the Summer School, Australian National University, pages 65–117, Berlin, Germany, 2003. Springer-Verlag.
[23] G. Wahba. Spline models for observational data. SIAM, Philadelphia, 1990.
[24] G. C. Goodwin, M. Gevers, and B. Ninness. Quantifying the error in estimated transfer functions with application to model order selection. IEEE Transactions on Automatic Control, 37(7):913–928, 1992.
[25] F. Dinuzzo. Kernel machines with two layers and multiple kernel learning. Technical report, Preprint arXiv:1001.2709, 2010. Available at http://www-dimat.unipv.it/~dinuzzo.
[26] L. Ljung. System Identification Toolbox V7.1 for Matlab. Natick, MA: The MathWorks, Inc., 2007.
3,506 | 4,175 | Efficient Relational Learning with Hidden Variable Detection
Ni Lao, Jun Zhu, Liu Liu, Yandong Liu, William W. Cohen
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
{nlao,junzhu,liuliu,yandongl,wcohen}@cs.cmu.edu
Abstract
Markov networks (MNs) can incorporate arbitrarily complex features in modeling
relational data. However, this flexibility comes at a sharp price of training an exponentially complex model. To address this challenge, we propose a novel relational
learning approach, which consists of a restricted class of relational MNs (RMNs)
called relation tree-based RMN (treeRMN), and an efficient Hidden Variable Detection algorithm called Contrastive Variable Induction (CVI). On one hand, the
restricted treeRMN only considers simple (e.g., unary and pairwise) features in relational data and thus achieves computational efficiency; and on the other hand, the
CVI algorithm efficiently detects hidden variables which can capture long range
dependencies. Therefore, the resultant approach is highly efficient yet does not
sacrifice its expressive power. Empirical results on four real datasets show that the
proposed relational learning method can achieve similar prediction quality to the state-of-the-art approaches, but is significantly more efficient in training; and the induced hidden variables are semantically meaningful and crucial to improving the training speed and prediction quality of treeRMNs.
1 Introduction
Statistical relational learning has attracted ever-growing interest in the last decade, because of widely
available relational data, which can be as complex as citation graphs, the World Wide Web, or relational databases. Relational Markov Networks (RMNs) are excellent tools to capture the statistical
dependency among entities in a relational dataset, as has been shown in many tasks such as collective classification [22] and information extraction [18][2]. Unlike Bayesian networks, RMNs
avoid the difficulty of defining a coherent generative model, thereby allowing tremendous flexibility
in representing complex patterns [21]. For example, Markov Logic Networks [10] can be automatically instantiated as a RMN, given just a set of predicates representing attributes and relations
among entities. The algorithm can be applied to tasks in different domains without any change.
Relational Bayesian networks [22], by contrast, would require expert knowledge to design proper model structures and parameterizations whenever the schema of the domain under consideration is changed. However, this flexibility of RMNs comes at a high price in training very complex models. For example, work by Kok and Domingos [10][11][12] has shown that a prominent problem of relational undirected models is how to handle the exponentially many features, each of which is a conjunction of several neighboring variables (or "ground atoms" in terms of first-order logic). Much computation is spent on proposing and evaluating candidate features.
The main goal of this paper is to show that instead of learning a very expressive relational model,
which can be extremely expensive, an alternative approach that explores Hidden Variable Detection
(HVD) to compensate a family of restricted relational models (e.g., treeRMNs) can yield a very
efficient yet competent relational learning framework. First, to achieve efficient inference, we introduce a restricted class of RMNs called relation tree-based RMNs (treeRMNs), which only considers
unary (single variable assignment) and pairwise (conjunction of two variable assignments) features.
1
Since the Markov blanket of a variable is concisely defined by a relation tree on the schema, we
can easily control the complexities of treeRMN models. Second, to compensate for the restricted
expressive power of treeRMNs, we further introduce a hidden variable induction algorithm called
Contrastive Variable Induction (CVI), which can effectively detect latent variables capturing long
range dependencies. It has been shown for relational Bayesian networks [24] that hidden variables can help propagate information across network structures, thus reducing the burden of extensive structural learning. In this work, we explore the usefulness of hidden variables in learning RMNs. Our experiments on four real datasets show that the proposed relational learning framework can achieve similar prediction quality to the state-of-the-art RMN models, but is significantly more efficient in training. Furthermore, the induced hidden variables are semantically meaningful and are crucial to improving the training speed of treeRMN.
In the remainder of this paper, we first briefly review related work and training undirected graphical
models with mean field contrastive divergence. Then we present the treeRMN model and the CVI
algorithm for variable induction. Finally, we present experimental results and conclude this paper.
2 Related Work
There has been a series of work by Kok and Domingos [10][11][12] developing Markov Logic
Networks (MLNs) and showing their flexibility in different applications. The treeRMN model we
introduced in this work is intended to be a simpler model than MLNs, which can be trained more
efficiently, yet still be able to capture complex dependencies. Most of the existing RMN models
construct Markov networks by applying templates to entity relation graphs [21][8]. The treeRMN
model that we are going to introduce uses a type of template called a relation tree, which is very
general and applicable to a wide range of applications. This relation tree template resembles the
path-based feature generation approach for relational classifiers developed by Huang et al. [7].
Recently, much work has been done to induce hidden variables for generative Bayesian networks
[5][4][16][9][20][14]. However, previous studies [6][19] have pointed out that the generality of
Bayesian Networks is limited by their need for prior knowledge on the ordering of nodes. On the
other hand, very little progress has been made in the direction of non-parametric hidden variable
models based on discriminative Markov networks (MNs). One recent attempt is the Multiple Relational Clustering (MRC) [11] algorithm, which performs top-down clustering of predicates and
symbols. However, it is computationally expensive because of its need for parameter estimation
when evaluating candidate structures. The CVI algorithm introduced in this work is most similar to
the "ideal parent" algorithm [16] for Gaussian Bayesian networks. The "ideal parent" algorithm evaluates candidate hidden variables based on the estimated gain in log-likelihood they can bring to the Bayesian network. Similarly, the CVI algorithm evaluates candidate hidden variables based on the estimated gain in a regularized RMN log-likelihood, thus avoiding the costly step of parameter estimation.
3 Preliminaries
Before describing our model, let's briefly review undirected graphical models (a.k.a. Markov networks). Since our goal is to develop an efficient RMN model, we use the simple but very efficient mean field contrastive divergence [23] method. Our empirical results show that even the simplest naive mean field can yield very promising results. Extensions to more accurate (but also more expensive) inference methods, such as loopy BP [15] or structured mean fields, can be done similarly.
Here we consider the general case where Markov networks have observed variables O, labeled variables Y, and hidden variables H. Let X = (Y, H) be the joint of hidden and labeled variables. The conditional distribution of X given observations O is
$$p(x|o; \theta) = \exp(\theta^\top f(x, o))/Z(\theta),$$
where $f$ is a vector of feature functions $f_k$; $\theta$ is a vector of weights; $Z(\theta) = \sum_x \exp(\theta^\top f(x, o))$ is a normalization factor; and $f_k(x, o)$ counts the number of times the $k$-th feature fires in $(x, o)$. Here we assume that the range of each variable is discrete and finite. Many commonly used graphical models have tied parameters, which allow a small number of parameters to govern a large number of
features. For example, in a linear chain CRF, each parameter is associated with a feature template:
e.g., "the current node having label $y_t = 1$ and the immediate next neighbor having label $y_{t+1} = 1$".
After applying each template to all the nodes in a graph, we get a graphical model with a large
number of features (i.e., instantiations of feature templates). In general, a model's order of Markov dependence is determined by the maximal number of neighboring steps considered by any one of its feature templates. In the context of relational learning, the templates can be defined similarly, except that they have richer representations, with multiple types of entities and neighboring relations.
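To make the counting of template instantiations concrete, here is a toy sketch (our own code with hypothetical names, not from the paper) that applies one tied linear-chain template at every position and counts how often it fires:

def count_template_fires(y, template):
    # f_k(x, o) counts how many times the k-th template fires; here the
    # template is applied at every adjacent pair of chain positions.
    return sum(1 for t in range(len(y) - 1) if template(y[t], y[t + 1]))

# Template: current node has label 1 and the immediate next neighbor has label 1.
adjacent_ones = lambda y_t, y_t1: y_t == 1 and y_t1 == 1
print(count_template_fires([1, 1, 0, 1, 1, 1], adjacent_ones))  # -> 3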
Given a set of training samples $D = \{(y_m, o_m)\}_{m=1}^{M}$, the parameter estimation of an MN can be formulated as maximizing the following regularized log-likelihood
$$L(\theta) = \sum_{m=1}^{M} l_m(\theta) - \alpha\|\theta\|_1 - \frac{\beta}{2}\|\theta\|_2^2, \qquad (1)$$
where $\alpha$ and $\beta$ are non-negative regularization constants for the $\ell_1$- and $\ell_2$-norm respectively. Because of its singularity at the origin, the $\ell_1$-norm can yield a sparse estimate, which is a desired property for hidden variable discovery, as we shall see. The differentiable $\ell_2$-norm is useful when there are strongly correlated features. The composite $\ell_1/\ell_2$-norm is known as the ElasticNet [27], which has been shown to have nice properties. The log-likelihood for a single sample is
$$l(\theta) = \log p(y|o; \theta) = \log \sum_h p(h, y|o; \theta), \qquad (2)$$
and its gradient is $\nabla_\theta l(\theta) = \langle f \rangle_{p_y} - \langle f \rangle_p$, where $\langle\cdot\rangle_p$ is the expectation under the distribution $p$. To simplify notation, we use $p$ to denote the distribution $p(h, y|o; \theta)$ and $p_y$ to denote $p(h|y, o; \theta)$.
For simple (e.g., tree-structured) MNs, message passing algorithms can be used to infer the marginal probabilities required in the gradients exactly. For general MNs, however, we need approximate strategies like variational or Monte Carlo methods. Here we use the simple mean field variational method [23]. By analogy with statistical physics, the free energy of any distribution $q$ is defined as
$$F(q) = -\theta^\top \langle f \rangle_q - H(q). \qquad (3)$$
Therefore, $F(p) = -\log Z(\theta)$, $F(p_y) = -\log \sum_h \exp(\theta^\top f(y, h, o))$, and $l(\theta) = F(p) - F(p_y)$. Let $q_0$ be the mean field approximation of $p(h, y|o; \theta)$ with $y$ clamped to their true values, and $q_t$ be the approximation of $p(h, y|o; \theta)$ obtained by applying $t$ steps of mean field updates to $q_0$ with $y$ free. Then $F(q_0) \ge F(q_t) \ge F(q_\infty) \ge F(p)$. As in [23], we set $t = 1$, and use
$$l^{CD1}(\theta) \triangleq F(q_1) - F(q_0) \qquad (4)$$
to approximate $l(\theta)$; its gradient is $\nabla_\theta l^{CD1}(\theta) = \langle f \rangle_{q_0} - \langle f \rangle_{q_1}$. The new objective function $L^{CD1}(\theta)$ uses $l^{CD1}(\theta)$ to replace $l(\theta)$. One advantage of CD is that it avoids $q$ being trapped in a possibly multimodal distribution of $p(h, y|o; \theta)$ [25][3]. With the above approximation, we can use orthant-wise L-BFGS [1] to estimate the parameters $\theta$.
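As a concrete sketch of this gradient computation, the Python below (our own illustration, not the authors' code) runs 1-step mean-field contrastive divergence on a generic binary pairwise Markov network with untied weights; the treeRMN case differs only in that expectations of tied template features are summed over their instantiations.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_sweep(mu, W, b, free):
    # One sweep of naive mean-field updates for binary variables x_i in {0, 1}
    # with log-potential b'x + sum_{i<j} W_ij x_i x_j (W symmetric, zero diagonal).
    for i in range(len(mu)):
        if free[i]:
            mu[i] = sigmoid(b[i] + W[i].dot(mu))
    return mu

def cd1_gradient(y, hidden, W, b, n_clamped_sweeps=20):
    """Approximate gradient <f>_{q0} - <f>_{q1} of the 1-step mean-field CD
    objective. `y` holds observed 0/1 values (ignored where `hidden` is True)."""
    n = len(b)
    # q0: labeled variables clamped to y, hidden ones relaxed by mean field.
    mu0 = np.where(hidden, 0.5, y)
    for _ in range(n_clamped_sweeps):
        mu0 = mean_field_sweep(mu0, W, b, free=hidden)
    # q1: a single mean-field sweep with *all* variables free, started at q0.
    mu1 = mean_field_sweep(mu0.copy(), W, b, free=np.ones(n, dtype=bool))
    grad_b = mu0 - mu1
    # Factorized pairwise expectations; only off-diagonal entries are used.
    grad_W = np.outer(mu0, mu0) - np.outer(mu1, mu1)
    return grad_b, grad_W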
4 Relation Tree-Based RMNs
In the following, we formally define the treeRMN model with relation tree templates, which is very
general and applicable to a wide range of applications.
A schema S (Figure 1, left) is a pair (T, R). T = {T_i} is a set of entity types, which include both basic entity types (e.g., Person, Class) and composite entity types (e.g., ⟨Person, Person⟩, ⟨Person, Class⟩). Each entity type is associated with a set of attributes A(T) = {T.A_i}: e.g., A(Person) = {Person.gender}. R = {R} is a set of binary relations. We use dom(R) to denote the domain type of R and range(R) to denote its range. For each argument of a composite entity type, we define two relations, one with outward direction (e.g., PP1 maps a Person-Person pair to its first argument) and another with inward direction (e.g., PP1⁻¹). Here we use ⁻¹ to denote the inverse of a relation. We further introduce a Twin relation, which connects a composite entity type to itself; its semantics will be clear later. In principle, we can define other types of relations, such as those corresponding to functions in second-order logic (e.g., $Person \xrightarrow{FatherOf} Person$).
An entity relation graph G = I_E(S) (Figure 1, right) is the instantiation of schema S on a set of basic entities E = {e_i}. We define the instantiation of a basic entity type T as I_E(T) = {e : e.T = T}, and similarly for a composite type, I_E(T = ⟨T_1, ..., T_k⟩) = {⟨e_1, ..., e_k⟩ : e_i.T = T_i}. In the given example, I_E(Person) = {p1, p2} is the set of persons; I_E(Class) = {c1} is the set of classes; I_E(⟨Person, Person⟩) = {⟨p1, p2⟩, ⟨p2, p1⟩} is the set of person-person pairs; and I_E(⟨Person, Class⟩) = {⟨p1, c1⟩, ⟨p2, c1⟩} is the set of person-class pairs. Each entity e has a set of variables {e.X_i} that correspond to the set of attributes of its entity type A(e.T). For a composite entity that consists of two entities of the same type, we'd like to capture its correlation with its twin, the composite entity made of the same basic entities but in reversed order. Therefore, we add the Twin relation between all pairs of twin entities: e.g., from ⟨p1, p2⟩ to ⟨p2, p1⟩, and vice versa.
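For concreteness, the schema and instantiation of Figure 1 can be written down as plain data structures; this is only an illustrative sketch with our own naming, not the paper's implementation:

# Attributes of each entity type in Figure 1 (basic and composite).
schema_attrs = {
    "Person": ["gender"],
    "Class": ["isGraduateCourse"],
    ("Person", "Person"): ["advise", "coauthor"],
    ("Person", "Class"): ["give", "take"],
}

# Relations as (name, domain, range); each composite argument contributes an
# outward relation (PP1, PP2, PC1, PC2) plus its inverse, and Twin links a
# composite type to itself.
relations = [
    ("PP1", ("Person", "Person"), "Person"),
    ("PP2", ("Person", "Person"), "Person"),
    ("PC1", ("Person", "Class"), "Person"),
    ("PC2", ("Person", "Class"), "Class"),
    ("Twin", ("Person", "Person"), ("Person", "Person")),
]

# Instantiation on basic entities p1, p2, c1: the composite entities are all
# ordered tuples of compatible basic entities.
persons, classes = ["p1", "p2"], ["c1"]
person_pairs = [(a, b) for a in persons for b in persons if a != b]
person_class = [(p, c) for p in persons for c in classes]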
[Figure 1 graphic: the schema with entity types Person, Class, ⟨Person, Person⟩, ⟨Person, Class⟩, their attributes (gender, isGraduateCourse, advise, coauthor, give, take) and relations PP1, PP2, PC1, PC2, Twin (left); the instantiated entity relation graph over p1, p2, c1 with attribute values (right).]
Figure 1: (Left) A schema, where round and rectangular boxes represent basic and composite entity types respectively. (Right) A corresponding entity relation graph with three basic entities: p1, p2, c1. For clarity we only show one direction of the relations and omit their labels.
[Figure 2 graphic: the two relation trees, each rooted at an entity type and branching through the relations PP1, PP2, PC1, PC2 (and their inverses) and Twin.]
Figure 2: Two-level relation trees for the Person type (left) and the ⟨Person, Person⟩ type (right).
Given a schema, we can conveniently express how one entity can reach another entity by the concept of a relation path. A relation path P is a sequence of relations R_1 ... R_ℓ for which the domains and ranges of adjacent relations are compatible, i.e., range(R_i) = dom(R_{i+1}). We define dom(R_1 ... R_ℓ) ≡ dom(R_1) and range(R_1 ... R_ℓ) ≡ range(R_ℓ), and when we wish to emphasize the types associated with each step in a path, we write the path P = R_1 ... R_ℓ as
$$T_0 \xrightarrow{R_1} \cdots \xrightarrow{R_\ell} T_\ell,$$
where T_0 = dom(R_1) = dom(P), T_1 = range(R_1) = dom(R_2), and so on. Note that, because some of the relations reflect one-to-one mappings, there are groups of paths that are equivalent; e.g., the path Person is actually equivalent to the path
$$Person \xrightarrow{PC1^{-1}} \langle Person, Class\rangle \xrightarrow{PC1} Person.$$
To avoid creating these uninteresting paths, we add a constraint to outward composite relations (e.g., PP1, PC1) that they cannot be immediately preceded by their inverse. We also constrain that the Twin relation should not be combined with any other relations.
Now, the Markov blanket of an entity e ∈ T can be concisely defined by the set of all relation paths with domain T and of length ≤ ℓ (as shown in Figure 2). We call this set the relation tree of type T, and denote it as Tree(T, ℓ) = {P}. We define a unary template as T.A_i = a, where A_i is an attribute of type T, and a ∈ range(A_i). This template can be applied to any entity e of type T in the entity relation graph. We define a pairwise template as T.A_i = a ∧ P.B_j = b, where A_i is an attribute of type T, a ∈ range(A_i), P.B_j is an attribute of type range(P), dom(P) = T, and b ∈ range(B_j). This template can be applied to any entity pair (e_1, e_2), where e_1.T = T and e_2 ∈ e_1.P. Here we define e.P as the set of entities reachable from entity e ∈ T through the relation path P. For example, the following template
$$pp.coauthor = 1 \;\wedge\; \big(pp \xrightarrow{PP1} p \xrightarrow{PP1^{-1}} pp\big).advise = 1$$
can be applied to any person-person pair, and it fires whenever coauthor = 1 for this person pair and the first person (identified as $pp \xrightarrow{PP1} p$) also has advise = 1 with another person. Here we use p as a shorthand for the type Person, and pp as a shorthand for ⟨Person, Person⟩. In our current implementation, we systematically enumerate all possible unary and pairwise templates.
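The relation tree itself can be enumerated with a small recursive search; the sketch below is our own code (the names and the `schema` interface are assumptions, not the paper's API) and enforces the two constraints above:

def relation_tree(schema, start_type, max_len):
    """Enumerate relation paths of length <= max_len starting at start_type.

    `schema` maps an entity type to a list of (relation_name, range_type)
    pairs; an inverse relation is named like "PP1^-1".
    """
    paths = []

    def inverse(name):
        return name[:-3] if name.endswith("^-1") else name + "^-1"

    def extend(path, cur_type):
        if path:
            paths.append(list(path))
        if len(path) == max_len:
            return
        for rel, rng in schema.get(cur_type, []):
            if rel == "Twin" and path:              # Twin combines with nothing...
                continue
            if path and path[-1] == "Twin":         # ...before or after it
                continue
            # An outward relation may not be immediately preceded by its inverse.
            if path and not rel.endswith("^-1") and path[-1] == inverse(rel):
                continue
            extend(path + [rel], rng)

    extend([], start_type)
    return paths

# Example: paths of length <= 2 from Person in the Figure 1 schema.
schema = {
    "Person": [("PP1^-1", ("Person", "Person")), ("PP2^-1", ("Person", "Person")),
               ("PC1^-1", ("Person", "Class"))],
    ("Person", "Person"): [("PP1", "Person"), ("PP2", "Person"),
                           ("Twin", ("Person", "Person"))],
    ("Person", "Class"): [("PC1", "Person"), ("PC2", "Class")],
}
print(relation_tree(schema, "Person", 2))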
Given the above concepts, we define a treeRMN model M = (G, f, θ) as the tuple of an entity relation graph G, a set of feature functions f, and their weights θ. Each feature function f_k counts the number of times the k-th template fires in G. Generally, the complexity of inference is exponential in the depth of the relation trees, because both the number of templates and the sizes of their Markov blankets grow exponentially w.r.t. the depth ℓ. TreeRMN thus provides a very convenient way to control the complexity through the single parameter ℓ. Since treeRMN only considers pairwise and unary features, it is less expressive than Markov Logic Networks [10], which can define higher order features by conjunction of predicates; treeRMN is also less expressive than relational Bayesian networks [9][20][14], which have factor functions with three arguments. However, the limited expressive power of treeRMN can be effectively compensated for by detecting hidden variables, which is another key component of our relational learning approach, as explained in the next section.
Algorithm 1 Contrastive Variable Induction
  initialize a treeRMN M = (G, f, θ)
  while true do
    estimate parameters θ by L-BFGS
    (f', θ') = induceHiddenVariables(M)
    if no hidden variable is induced then
      break
    end if
  end while
  return M

Algorithm 2 Bottom-Up Clustering of Entities
  initialize the clustering Ω = {I_i = {i}}
  while true do
    for every pair of clusters I_1, I_2 ∈ Ω do
      inc(I_1, I_2) = Δ_{I_1 ∪ I_2} − Δ_{I_1} − Δ_{I_2}
    end for
    if the largest increment ≤ 0 then
      break
    end if
    merge the pair with the largest increment
  end while
  return Ω

5 Contrastive Variable Induction (CVI)
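A runnable sketch of Algorithm 2 follows (our own code; the `gain` callable standing in for the Δ_I estimate of this section is an assumed interface):

def bottom_up_clustering(n_entities, gain):
    """Greedy agglomeration: merge the two clusters whose union most
    increases the estimated gain; stop when no merge helps.

    `gain(I)` returns the estimated gain Delta_I for an index set I.
    """
    clusters = [frozenset([i]) for i in range(n_entities)]
    while len(clusters) > 1:
        best_inc, best_pair = 0.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                inc = (gain(clusters[a] | clusters[b])
                       - gain(clusters[a]) - gain(clusters[b]))
                if inc > best_inc:
                    best_inc, best_pair = inc, (a, b)
        if best_pair is None:      # largest increment <= 0: stop merging
            break
        a, b = best_pair
        merged = clusters[a] | clusters[b]
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return clusters

# Toy demo: entities 0 and 1 belong together, 2 is a loner.
demo_gain = lambda I: 1.0 if I == frozenset({0, 1}) else 0.1
print(bottom_up_clustering(3, demo_gain))  # -> [frozenset({2}), frozenset({0, 1})]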
As we have explained in the previous section, in order to compensate for the limited expressive power of a shallow treeRMN and capture long-range dependencies in complex relational data, we propose to introduce hidden variables. These variables are detected effectively with the Contrastive Variable Induction (CVI) algorithm, as explained below.
The basic procedure (Algorithm 1) starts with a treeRMN model on observed variables, which can be manually designed or automatically learned [13]; it then iteratively introduces new HVs to the model and estimates its parameters. The key to making this simple procedure highly efficient is a fast algorithm to evaluate and select good candidate HVs. We give closed-form expressions for the likelihood gain and the weights of newly added features under the contrastive divergence approximation [23] (other types of inference can be handled similarly). Therefore, the CVI process can be very efficient, adding only a small overhead to the training of a regular treeRMN.
Consider introducing a new HV H to the entity type T. In order for H to influence the model, it needs to be connected to the existing model. This is done by defining additional feature templates: we denote an HV candidate by a tuple ({q^{(i)}(H)}, f_H, θ_H), where {q^{(i)}(H)} is the set of distributions of the hidden variable H on all entities of type T, f_H is a set of pairwise feature templates that connect H to the existing model, and θ_H is a vector of feature weights. Here we assume that any feature f ∈ f_H is in the pairwise form H = 1 ∧ A = a, where a is an assignment to one of the existing variables A in the relation tree of type T. Ideally, we would like to identify the candidate HV which gives the maximal gain in the regularized objective function L^{CD1}(θ).
For easy evaluation of H, we set its mean field variational parameters µ_H to either 0 or 1 on the entities of type T. This yields a lower bound to the gain of L^{CD1}(θ). Therefore, a candidate HV can be represented as (I, f_H, θ_H), where I is the set of indices of the entities with µ_H = 1. Using a second-order Taylor expansion, we can show that for a particular feature f ∈ f_H the maximal gain
$$\Delta_{I,f} = \frac{1}{2}\,\frac{\lfloor e_I[f]\rfloor_\alpha^2}{\sigma_I[f] + \beta} \qquad (5)$$
is achieved at
$$\theta_f = \frac{\lfloor e_I[f]\rfloor_\alpha}{\sigma_I[f] + \beta}, \qquad (6)$$
where ⌊·⌋ is a truncation operator: ⌊a⌋_b = a − b if a > b; a + b if a < −b; 0 otherwise. The error $e_I[f] = \langle f\rangle_{q_1,I} - \langle f\rangle_{q_0,I}$ is the difference of f's expectations, and $\sigma_I[f] = \mathrm{Var}_{\tilde q_1,I}[f] - \mathrm{Var}_{\tilde q_0,I}[f]$ is the difference of f's variances¹. Here we use q, I to denote the distribution q of the existing variables augmented by the distribution of H parameterized by the index set I. q_0 and q_1 are the wake and sleep distributions estimated by 1-step mean-field CD. The estimates in Eqs. (5) and (6) are simple, yet have intuitive explanations of the effects of the ℓ1 and ℓ2 regularizers as used in Eq. (1): a large ℓ2 constant β smoothly shrinks both the (estimated) likelihood gain and the feature weights, while the non-differentiable ℓ1-norm not only shrinks the estimated gain and feature weights but also drives features to have zero gain, and can therefore automatically select the features. If we assume that the gains of individual features are independent, then the estimated gain for H is
$$\Delta_I \approx \sum_{f \in \mathbf{f}_I} \Delta_{I,f},$$
where $\mathbf{f}_I = \{f : \Delta_{I,f} > 0\}$ is the set of features that are expected to improve the objective function.
¹ $\mathrm{Var}_{q,I}[f]$ is intractable when we have tied parameters. Therefore, we approximate it by assuming that the occurrences of f are independent of each other: i.e., $\mathrm{Var}_{q,I}[f] = \sum_{V \in \mathcal{V}} \mathrm{Var}_{q,I}[f(V)] = \sum_{V \in \mathcal{V}} \langle f(V)\rangle_{q,I}(1 - \langle f(V)\rangle_{q,I})$, where V is any specific subset of variables that f can be applied to.
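In code, Eqs. (5) and (6) reduce to a soft-thresholding computation; the following sketch (ours, under the assumption that e_I[f] and σ_I[f] have already been estimated from the CD distributions) evaluates the per-feature gains and the total gain Δ_I:

def truncate(a, b):
    # Truncation operator |a|_b: soft-thresholding induced by the l1 penalty.
    if a > b:
        return a - b
    if a < -b:
        return a + b
    return 0.0

def feature_gain_and_weight(e, sigma, alpha, beta):
    """Estimated likelihood gain (Eq. 5) and optimal weight (Eq. 6) for one
    candidate feature, given e = <f>_{q1,I} - <f>_{q0,I}, the variance term
    sigma, and the l1/l2 regularization constants alpha, beta."""
    t = truncate(e, alpha)
    gain = 0.5 * t * t / (sigma + beta)
    theta = t / (sigma + beta)
    return gain, theta

def candidate_gain(errors, sigmas, alpha, beta):
    # Total gain Delta_I: only features with positive estimated gain contribute.
    return sum(g for g, _ in (feature_gain_and_weight(e, s, alpha, beta)
                              for e, s in zip(errors, sigmas)) if g > 0)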
However, finding the index set I that maximizes Δ_I is still non-trivial: it is an NP-hard combinatorial optimization problem, which is often tackled by top-down or bottom-up procedures in the clustering literature. Algorithm 2 uses a simple bottom-up clustering algorithm to build a hierarchy of clusters. It starts with each sample as an individual cluster, and then repeatedly merges the two clusters that lead to the best increment of gain. The merging stops when the best increment is ≤ 0.
After clustering, we introduce a single categorical variable that treats each cluster with positive gain as a category; the remaining useless clusters are merged into a separate category. Introducing this categorical variable is equivalent to introducing a set of binary variables, one for each cluster with positive gain. From the above derivation, we can see that the essential part of the CVI algorithm is to compute the expectations and variances of RMN features, both of which can be done by any inference procedure, including the mean field method we have used. Therefore, in principle, the CVI algorithm can be extended to use other inference methods like belief propagation or exact inference.
Remark 1. After the induction step, the introduced HVs are treated as observations: i.e., their variational parameters are fixed to their initial 0 or 1 values. In the future, we'd like to treat the HVs as free variables. This can potentially correct the errors made by the greedy clustering procedure. The cardinalities of HVs may be adapted by operators like deleting, merging, or splitting of categories.
Remark 2. Currently, we only induce HVs for basic entity types. Extension to composite types could reveal interesting ternary relations such as "Abnormality can be PartOf Animals". However, this requires clustering over a much larger number of entities, which cannot be done by our simple implementation of bottom-up clustering.
6 Experiment
In this section, we present both qualitative and quantitative results for the treeRMN model. We demonstrate that CVI can discover semantically meaningful hidden variables, which significantly improve the speed and quality of treeRMN models.
6.1 Datasets
Table 1 shows the statistics of the four datasets used in our experiments. These datasets are commonly used by previous work in relational learning [9][11][20][14]. The Animal dataset contains a set of animals and their attributes. It consists exclusively of unary predicates of the form A(a), where A is an attribute and a is an animal (e.g., Swims(Dolphin)). This is a simple propositional dataset with no relational structure, but it is useful as a base case for comparison. The Nation dataset contains attributes of nations and relations among them. The binary predicates are of the form R(n1, n2), where n1, n2 are nations and R is a relation between them (e.g., ExportsTo, GivesEconomicAidTo). The unary predicates are of the form A(n), where n is a nation and A is an attribute (e.g., Communist(China)). The UML dataset is a biomedical ontology called the Unified Medical Language System. It consists of binary predicates of the form R(c1, c2), where c1 and c2 are biomedical concepts and R is a relation between them (e.g., Treats(Antibiotic, Disease)). The Kinship dataset contains kinship relationships among members of the Alyawarra tribe from Central Australia. Predicates are of the form R(p1, p2), where R is a kinship term and p1, p2 are persons. Except for the Animal data, the number of composite entities is the square of the number of basic entities.

             Basic            Composite
             #E      #A       #E        #A
  Animal     50      80       0         0
  Nation     14      111      196       56
  UML        135     0        18,225    49
  Kinship    104     0        10,816    1*

Table 1: Number of entities (#E) and attributes (#A) for the four datasets. *The Kinship data has only one attribute, which has 26 possible values.
6.2 Characterization of treeRMN and CVI
In this section, we analyze the properties of the discovered hidden variables and demonstrate the behavior of the CVI algorithm. For the simple non-relational Animal data, if we start with a full model with all pairwise features, CVI decides not to introduce any hidden variables. If we run CVI starting from a model with only unary features, however, CVI decides to introduce one hidden variable H0 with 8 categories. Table 2 shows the associated entities and features for the first four categories. We can see that they nicely identify marine mammals, predators, rodents, and primates.
      Entities                              Positive Features                       Negative Features
  C0  KillerWhale Seal Dolphin BlueWhale    Flippers Ocean Water Swims Fish         Quadrapedal Ground Furry
      Walrus HumpbackWhale                  Hairless Coastal Arctic ...             Strainteeth Walks ...
  C1  GrizzlyBear Tiger GermanShepherd      Stalker Fierce Meat Meatteeth Claws     Timid Vegetation Weak Grazer
      Leopard Wolf Weasel Raccoon Fox       Hunter Nocturnal Paws Smart Pads ...    Toughskin Hooves Domestic ...
      Bobcat Lion
  C2  Hamster Skunk Mole Rabbit Rat         Hibernate Buckteeth Weak Small          Strong Muscle Big Toughskin ...
      Raccoon Mouse                         Fields Nestspot Paws ...
  C3  SpiderMonkey Gorilla Chimpanzee       Tree Jungle Bipedal Hands               Plains Fields Patches ...
                                            Vegetation Forest ...

Table 2: The associated entities and features (sorted by decreasing magnitude of feature weights) for the first four categories of the induced hidden variable a.H0 on the Animal data. The features are of the form a.H0 = C_i ∧ a.A = 1, where A is any of the variables in the last two columns.
      Entities                                    Positive Features
  C0  AcquiredAbnormality AnatomicalAbnormality   c --CC2⁻¹--> cc.Causes;  c --CC1⁻¹--> cc.PartOf;
      CongenitalAbnormality                       c --CC2⁻¹--> cc.Complicates;  c --CC2⁻¹--> cc.CooccursWith ...
  C1  Alga Plant                                  c --CC1⁻¹--> cc.InteractsWith;  c --CC1⁻¹--> cc.LocationOf ...
  C2  Amphibian Animal Bird Invertebrate          c --CC1⁻¹--> cc.InteractsWith;  c --CC2⁻¹--> cc.PropertyOf;
      Fish Mammal Reptile Vertebrate              c --CC2⁻¹--> cc.InteractsWith;  c --CC2⁻¹--> cc.PartOf ...

Table 3: The associated entities and features (sorted by decreasing magnitude of feature weights) for the first three categories of the induced hidden variable c.H0 on the UML data. The features are of the form c.H0 = C_i ∧ A = 1, where A is any of the variables in the last column.
For the three relational datasets, we use UML as an example. The induction processes for the Nation and Kinship datasets are similar, and we omit their details due to space limitations. For the UML task, CVI induces two multinomial hidden variables H0 and H1. As we can see from Figure 3, the inclusion of each hidden variable significantly improves the conditional log-likelihood of the model. The first hidden variable C.H0 has 43 categories, and Table 3 shows the top three of them. We can see that these categories represent the hidden concepts Abnormalities, Animals and Plants respectively. Abnormalities can be caused or treated by other concepts, and they can also be a part of other concepts. Plants can be the location of some other concepts; and some other concepts can be part of, or the property of, Animals. These groupings of concepts are similar to those reported by Kok and Domingos [11].

[Figure 3 graphic: conditional log-likelihood (CLL, from -0.7 to 0) versus L-BFGS iteration (0 to 60), showing the initial model and jumps where c.H0 and c.H1 are introduced.]
Figure 3: Change of the conditional log-likelihood during training for the UML data.
6.3 Overall Performance
We now present a quantitative evaluation of the treeRMN model, and compare it with other relational learning methods including MLN structure learning (MSL) [10], Infinite Relational Models (IRM) [9] and Multiple Relational Clustering (MRC) [11]. Following the methodology of [11], we situate our experiment in prediction tasks. We perform 10-fold cross validation by randomly splitting all the variables into 10 sets. At each run, we treat one fold as hidden during training, and then evaluate the prediction of these variables conditioned on the observed variables during testing. The overall performance is measured by training time, average Conditional Log-Likelihood (CLL), and Area Under the precision-recall Curve (AUC) [11]. All implementations are in Java 6.0.
Table 4 compares the overall performance of treeRMN (RMN), treeRMN with hidden variable discovery (RMN^CVI), and other relational models (MSL, IRM and MRC) as reported in [11]. We use subscripts (0, 1, 2) to indicate the order of Markov dependency (depth of relation trees), and dim_θ for the number of parameters. First, we can see that, without HVs, treeRMNs with higher Markov orders generally perform better in terms of CLL and AUC. However, due to the complexity of high-order treeRMNs, this comes with large increases in training time. In some cases (e.g., the Kinship data), a high order treeRMN can perform worse than a low order treeRMN, probably due to the difficulty of inference with a large number of features.
Animal (α=0.01, β=1)
                 CLL           AUC          dim_θ    Time
  RMN_0          -0.34±0.03    0.88±0.02    3,655    5s
  RMN_0^CVI*     -0.33±0.02    0.89±0.02    4,349    9s
  MSL            -0.54±0.04    0.68±0.04    -        24h‡
  MRC            -0.43±0.04    0.80±0.04    -        10h‡
  IRM            -0.43±0.06    0.79±0.08    -        10h‡

Nation (α=0.01, β=1)
                 CLL           AUC          dim_θ    Time
  RMN_0          -0.40±0.01    0.63±0.04    7,812    15s
  RMN_1          -0.33±0.02    0.72±0.04    21,840   70s
  RMN_2          -0.38±0.03    0.71±0.04    40,489   446s
  RMN_1^CVI*     -0.31±0.02    0.83±0.04    22,191   104s
  MSL            -0.33±0.04    0.77±0.04    -        24h‡
  MRC            -0.31±0.02    0.75±0.03    -        10h‡
  IRM            -0.32±0.02    0.75±0.03    -        10h‡

UML (α=0.01, β=10)
                 CLL             AUC          dim_θ    Time
  RMN_0          -0.056±0.005    0.70±0.02    1,081    0.3h
  RMN_1          -0.044±0.002    0.68±0.04    2,162    1.0h
  RMN_2          -0.028±0.003    0.71±0.02    6,440    14.5h
  RMN_1^CVI      -0.005±0.001    0.94±0.01    6,946    453s
  MSL            -0.025±0.002    0.47±0.06    -        24h‡
  MRC            -0.004±0.000    0.97±0.00    -        10h‡
  IRM            -0.011±0.001    0.79±0.01    -        10h‡

Kinship (α=0.01, β=10)
                 CLL              AUC          dim_θ    Time
  RMN_0          -2.95±0.01◇      0.08±0.00    25       6s
  RMN_1          -1.36±0.05◇      0.66±0.03    350      107s
  RMN_2          -2.34±0.01◇      0.33±0.00    1,625    2.1h
  RMN_1^CVI      -1.04±0.03◇      0.81±0.01    900      402s
  MSL            -0.066±0.006     0.59±0.08    -        24h‡
  MRC            -0.048±0.002     0.84±0.01    -        10h‡
  IRM            -0.063±0.002     0.68±0.01    -        10h‡

Table 4: Overall performance. Bold identifies the best performance, and ± marks the standard deviations. Experiments are conducted with an Intel Xeon 2.33GHz CPU (E5410). *These results were started with a treeRMN that only has unary features. ◇The CLL of the Kinship data is not comparable to previous approaches, because we treat each of its labels as one variable with 26 categories instead of 26 binary variables. ‡The results of existing methods were run on different machines (Intel Xeon 2.8GHz CPU), and their 10-fold data splits are independent of those used for the RMN models. They were allowed to run for up to 10-24 hours, and here we assume that these methods cannot achieve similar accuracy when the amount of training time is significantly reduced.
Second, training a treeRMN with CVI is only 2-4 times slower than training a treeRMN of the same order of Markov dependency. On all three relational datasets, treeRMNs with CVI significantly improve CLL and AUC. For the simple Animal dataset, the improvement is less significant because there is no long range dependency to be captured in this data. Although the CVI models have a similar number of features as the second order treeRMNs, their inference is much faster due to their much smaller Markov blankets. Finally, on all datasets, the treeRMNs with CVI achieve similar prediction quality to the existing methods (i.e., MSL, IRM and MRC), but are about two orders of magnitude more efficient in training. Specifically, they achieve significant improvements on the Animal and Nation data, but moderately worse results on the UML and Kinship data. Since both the UML and Kinship data have no attributes on basic entity types, composite entities become more important to model. Therefore, we suspect that the MRC model achieves better performance because it can perform clustering on two-argument predicates, which correspond to composite entities.
7 Conclusions and Future Work
We have presented a novel approach for efficient relational learning, which consists of a restricted class of Relational Markov Networks (RMNs) called relation tree-based RMNs (treeRMNs) and an efficient hidden variable induction algorithm called Contrastive Variable Induction (CVI). By using simple treeRMNs, we achieve computational efficiency, and CVI can effectively detect hidden variables, which compensates for the limited expressive power of treeRMNs. Experiments on four real datasets show that the proposed relational learning approach can achieve state-of-the-art prediction accuracy and is much faster than existing relational Markov network models.
We can improve the presented approach in several aspects. First, to further speed up the treeRMN model we can apply efficient Markov network feature selection methods [17][26] instead of systematically enumerating all possible feature templates. Second, as we have explained at the end of Section 5, we'd like to apply HVD to composite entity types. Third, we'd also like to treat the introduced hidden variables as free variables and to make their cardinalities adaptive. Finally, we would like to explore high order features which involve more than two variable assignments.
Acknowledgements. We gratefully acknowledge the support of NSF grant IIS-0811562 and NIH grant R01GM081293.
References
[1] Galen Andrew and Jianfeng Gao. Scalable training of ℓ1-regularized log-linear models. In ICML, 2007.
[2] Razvan C. Bunescu and Raymond J. Mooney. Collective information extraction with relational
Markov networks. In ACL, 2004.
[3] Miguel A. Carreira-Perpinan and Geoffrey E. Hinton. On contrastive divergence learning. In
AISTATS, 2005.
[4] Gal Elidan and Nir Friedman. The information bottleneck em algorithm. In UAI, 2003.
[5] Gal Elidan, Noam Lotner, Nir Friedman, and Daphne Koller. Discovering hidden variables: A
structure-based approach. In NIPS, 2000.
[6] Nir Friedman, Lise Getoor, Daphne Koller, and Avi Pfeffer. Learning probabilistic relational
models. In IJCAI, 1999.
[7] Yi Huang, Volker Tresp, and Stefan Hagen Weber. Predictive modeling using features derived
from paths in relational graphs. In Technical report, 2007.
[8] Ariel Jaimovich, Ofer Meshi, and Nir Friedman. Template-based inference in symmetric relational Markov random fields. In UAI, 2007.
[9] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori
Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[10] Stanley Kok and Pedro Domingos. Learning the structure of Markov logic networks. In ICML,
2005.
[11] Stanley Kok and Pedro Domingos. Statistical predicate invention. In ICML, 2007.
[12] Stanley Kok and Pedro Domingos. Learning Markov logic networks using structural motifs.
In ICML, 2010.
[13] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient structure learning of Markov networks using ℓ1-regularization. In NIPS, 2006.
[14] Kurt T. Miller, Thomas L. Griffiths, and Michael I. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009.
[15] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate
inference: An empirical study. In UAI, 1999.
[16] Iftach Nachman, Gal Elidan, and Nir Friedman. "Ideal parent" structure learning for continuous variable networks. In UAI, 2004.
[17] Simon Perkins, Kevin Lacker, and James Theiler. Grafting: Fast, incremental feature selection
by gradient descent in function spaces. In JMLR, 2003.
[18] Hoifung Poon and Pedro Domingos. Joint inference in information extraction. In AAAI, 2007.
[19] Karen Sachs, Omar Perez, Dana Peer, Douglas A. Lauffenburger, and Garry P. Nolan. Causal
protein-signaling networks derived from multiparameter single-cell data. In Science, 2005.
[20] Ilya Sutskever, Ruslan Salakhutdinov, and Josh Tenenbaum. Modelling relational data using
Bayesian clustered tensor factorization. In NIPS, 2009.
[21] Benjamin Taskar, Pieter Abbeel, and Daphne Koller. Discriminative probabilistic models for
relational data. In UAI, 2002.
[22] Benjamin Taskar, Eran Segal, and Daphne Koller. Probabilistic classification and clustering in
relational data. In IJCAI, 2001.
[23] Max Welling and Geoffrey E. Hinton. A new learning algorithm for mean field Boltzmann
machines. In ICANN, 2001.
[24] Zhao Xu, Volker Tresp, Kai Yu, and Hans-Peter Kriegel. Infinite hidden relational models. In
UAI, 2006.
[25] Alan Yuille. The convergence of contrastive divergence. In NIPS, 2004.
[26] Jun Zhu, Ni Lao, and Eric P. Xing. Grafting-light: Fast, incremental feature selection and
structure learning of Markov random fields. In KDD, 2010.
[27] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. In Journal
Of The Royal Statistical Society Series B, 2005.
3,507 | 4,176 | Active Learning by Querying
Informative and Representative Examples
Sheng-Jun Huang1
Rong Jin2
Zhi-Hua Zhou1
1
National Key Laboratory for Novel Software Technology,
Nanjing University, Nanjing 210093, China
2
Department of Computer Science and Engineering,
Michigan State University, East Lansing, MI 48824
{huangsj, zhouzh}@lamda.nju.edu.cn
[email protected]
Abstract
Most active learning approaches select either informative or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this challenge by a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an instance. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches.
1 Introduction
In this work, we focus on the pool-based active learning, which selects an unlabeled instance from
a given pool for manually labeling. There are two main criteria, i.e., informativeness and representativeness, that are widely used for active query selection. Informativeness measures the ability of
an instance in reducing the uncertainty of a statistical model, while representativeness measures if
an instance well represents the overall input patterns of unlabeled data [16]. Most active learning
algorithms only deploy one of the two criteria for query selection, which could significantly limit the
performance of active learning: approaches favoring informative instances usually do not exploit the
structure information of unlabeled data, leading to serious sample bias and consequently undesirable
performance for active learning; approaches favoring representative instances may require querying
a relatively large number of instances before the optimal decision boundary is found. Although several active learning algorithms [19, 8, 11] have been proposed to find the unlabeled instances that
are both informative and representative, they are usually ad hoc in measuring the informativeness
and representativeness of an instance, leading to suboptimal performance.
In this paper, we propose a new active learning approach by QUerying Informative and Representative Examples (QUIRE for short). The proposed approach is based on the min-max view of active
learning [11], which provides a systematic way for measuring and combining the informativeness
and the representativeness. The interesting feature of the proposed approach is that it measures both
the informativeness and representativeness of an instance by its prediction uncertainty: the informativeness of an instance x is measured by its prediction uncertainty based on the labeled data, while
the representativeness of x is measured by its prediction uncertainty based on the unlabeled data.
The rest of this paper is organized as follows: Section 2 reviews the related work on active learning;
Section 3 presents the proposed approach in details; experimental results are reported in Section 4;
Section 5 concludes this work with issues to be addressed in the future.
[Figure 1 graphics: four scatter plots of the synthetic binary problem and the points queried by each method.]
Figure 1: An illustrative example of selecting informative and representative instances. (a) A binary classification problem; (b) an approach favoring informative instances; (c) an approach favoring representative instances; (d) our approach.
2 Related Work
Querying the most informative instances is probably the most popular approach for active learning.
Exemplar approaches include query-by-committee [17, 6, 10], uncertainty sampling [13, 12, 18, 2]
and optimal experimental design [9, 20]. The main weakness of these approaches is that they are
unable to exploit the abundance of unlabeled data and the selection of query instances is solely
determined by a small number of labeled examples, making it prone to sample bias. Another school
of active learning is to select the instances that are most representative to the unlabeled data. These
approaches aim to exploit the cluster structure of unlabeled data [14, 7], usually by a clustering
method. The main weakness of these approaches is that their performance heavily depends on the
quality of clustering results [7].
Several active learning algorithms tried to combine the informativeness measure with the representativeness measure for finding the optimal query instances. In [19], the authors propose a sampling
algorithm that exploits both the cluster information and the classification margins of unlabeled instances. One limitation of this approach is that since clustering is only performed on the instances
within the classification margin, it is unable to exploit the unlabeled instances outside the margin.
In [8], Donmez et al. extended the active learning approach in [14] by dynamically balancing the
uncertainty and the density of instances for query selection. This approach is ad hoc in combining
the measure of informativeness and representativeness for query selection, leading to suboptimal
performance.
Our work is based on the min-max view of active learning, which was first proposed in the study of
batch mode active learning [11]. Unlike [11] which measures the representativeness of an instance
by its similarity to the remaining unlabeled instances, our proposed measure of representativeness
takes into account the cluster structure of unlabeled instances as well as the class assignments of the
labeled examples, leading to a better selection of unlabeled instances for active learning.
3 QUIRE: QUery Informative and Representative Examples
We start with a synthesized example that illustrates the importance of querying instances that are
both informative and representative for active learning. Figure 1 (a) shows a binary classification
problem with each class represented by a different legend. We examine three different active learning
algorithms by allowing them to sequentially select 15 data points. Figure 1 (b) and (c) show the
data points selected by an approach favoring informative instances (i.e., [18]) and by an approach
favoring representative instances (i.e., [7]), respectively. As indicated by Figure 1 (b), due to the
sample bias, the approach preferring informative instances tends to choose the data points close to
the horizontal line, leading to incorrect decision boundaries. On the other hand, as indicated by
Figure 1 (c), the approach preferring representative instances is able to identify the approximately
correct decision boundary but with a slow convergence. Figure 1 (d) shows the data points selected
by the proposed approach that favors data points that are both informative and representative. It is
clear that the proposed algorithm is more efficient in finding the accurate decision boundary than the
other two approaches.
We denote by $D = \{(x_1, y_1), (x_2, y_2), \dots, (x_{n_l}, y_{n_l}), x_{n_l+1}, \dots, x_n\}$ the training data set that consists of $n_l$ labeled instances and $n_u = n - n_l$ unlabeled instances, where each instance $x_i = [x_{i1}, x_{i2}, \dots, x_{id}]^\top$ is a $d$-dimensional vector and $y_i \in \{-1, +1\}$ is the class label of $x_i$.
Active learning selects one instance $x_s$ from the pool of unlabeled data to query its class label. For convenience, we divide the data set $D$ into three parts: the labeled data $D_l$, the currently selected instance $x_s$, and the rest of the unlabeled data $D_u$. We also use $D_a = D_u \cup \{x_s\}$ to represent all the unlabeled instances. We use $\mathbf{y} = [\mathbf{y}_l, y_s, \mathbf{y}_u]$ for the class label assignment of the entire data set, where $\mathbf{y}_l$, $y_s$ and $\mathbf{y}_u$ are the class labels assigned to $D_l$, $x_s$ and $D_u$, respectively. Finally, we denote by $\mathbf{y}_a = [y_s, \mathbf{y}_u]$ the class assignment for all the unlabeled instances.
3.1 The Framework
To motivate the proposed approach, we first re-examine margin-based active learning from the min-max viewpoint [11]. Let $f^*$ be a classification model trained by the labeled examples, i.e.,
$$f^* = \arg\min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \sum_{i=1}^{n_l} \ell(y_i, f(x_i)), \qquad (1)$$
where $\mathcal{H}$ is a reproducing kernel Hilbert space endowed with kernel function $\kappa(\cdot, \cdot): \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, and $\ell$ is the loss function. Given the classifier $f^*$, the margin-based approach chooses the unlabeled instance closest to the decision boundary, i.e.,
$$s^* = \arg\min_{n_l < s \le n} |f^*(x_s)|. \qquad (2)$$
It is shown in the supplementary document that this criterion can be approximated by
$$s^* = \arg\min_{n_l < s \le n} L(D_l, x_s), \qquad (3)$$
where
$$L(D_l, x_s) = \max_{y_s = \pm 1}\, \min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \sum_{i=1}^{n_l} \ell(y_i, f(x_i)) + \ell(y_s, f(x_s)). \qquad (4)$$
We can also write Eq. 3 in a minimax form
$$\min_{n_l < s \le n}\, \max_{y_s = \pm 1} A(D_l, x_s),$$
where
$$A(D_l, x_s) = \min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \sum_{i=1}^{n_l} \ell(y_i, f(x_i)) + \ell(y_s, f(x_s)).$$
In this min-max view of active learning, it is guaranteed that the selected instance $x_s$ will lead to a small value of the objective function regardless of its class label $y_s$. In order to select queries that are both informative and representative, we extend the evaluation function $L(D_l, x_s)$ to include all the unlabeled data. Hypothetically, if we knew the class assignment $\mathbf{y}_u$ of the unselected unlabeled instances in $D_u$, the evaluation function could be modified as
$$L(D_l, D_u, \mathbf{y}_u, x_s) = \max_{y_s = \pm 1}\, \min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \sum_{i=1}^{n} \ell(y_i, f(x_i)). \qquad (5)$$
The problem is that the class assignment $\mathbf{y}_u$ is unknown. According to the manifold assumption [3], we expect that a good solution for $\mathbf{y}_u$ should result in a small value of $L(D_l, D_u, \mathbf{y}_u, x_s)$. We therefore approximate the solution for $\mathbf{y}_u$ by minimizing $L(D_l, D_u, \mathbf{y}_u, x_s)$, which leads to the following evaluation function for query selection:
$$\widehat{L}(D_l, D_u, x_s) = \min_{\mathbf{y}_u \in \{\pm 1\}^{n_u - 1}} L(D_l, D_u, \mathbf{y}_u, x_s) = \min_{\mathbf{y}_u \in \{\pm 1\}^{n_u - 1}}\, \max_{y_s = \pm 1}\, \min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \sum_{i=1}^{n} \ell(y_i, f(x_i)). \qquad (6)$$
3.2 The Solution
For computational simplicity, in the rest of this work we choose a quadratic loss function, i.e., $\ell(y, \hat{y}) = (y - \hat{y})^2/2$.¹ It is straightforward to show that
$$\min_{f \in \mathcal{H}} \frac{\lambda}{2}|f|_{\mathcal{H}}^2 + \frac{1}{2}\sum_{i=1}^{n}(y_i - f(x_i))^2 = \frac{\lambda}{2}\mathbf{y}^\top L \mathbf{y},$$
where $L = (K + \lambda I)^{-1}$ and $K = [\kappa(x_i, x_j)]_{n \times n}$ is the kernel matrix of size $n \times n$. Thus, the evaluation function $\widehat{L}(D_l, D_u, x_s)$ simplifies to
$$\widehat{L}(D_l, D_u, x_s) = \min_{\mathbf{y}_u \in \{-1,+1\}^{n_u - 1}}\, \max_{y_s \in \{-1,+1\}} \mathbf{y}^\top L \mathbf{y}. \qquad (7)$$
¹ Although quadratic loss may not be ideal for classification, it does yield competitive classification results when compared to other loss functions such as hinge loss [15].
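The closed-form identity above (the minimized regularized risk equals $(\lambda/2)\,\mathbf{y}^\top L \mathbf{y}$) is easy to sanity-check numerically; the snippet below is our own verification, not part of the paper:

import numpy as np

rng = np.random.default_rng(0)
n, lam = 6, 1.0
X = rng.normal(size=(n, 2))
K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # RBF kernel matrix
y = rng.choice([-1.0, 1.0], size=n)

# Representer theorem: f(x_i) = (K alpha)_i; the regularized risk becomes
# (lam/2) a'Ka + 0.5 ||y - Ka||^2, minimized at alpha = (K + lam I)^{-1} y.
alpha = np.linalg.solve(K + lam * np.eye(n), y)
obj = 0.5 * lam * alpha @ K @ alpha + 0.5 * np.sum((y - K @ alpha) ** 2)

L = np.linalg.inv(K + lam * np.eye(n))
assert np.isclose(obj, 0.5 * lam * y @ L @ y)   # (lam/2) y' L y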
Our goal is to compute the above quantity efficiently for each unlabeled instance. For convenience of presentation, we denote by subscript $u$ the rows/columns of a matrix $M$ for the unlabeled instances in $D_u$, by subscript $l$ the rows/columns of $M$ for the labeled instances in $D_l$, and by subscript $s$ the row/column of $M$ for the selected instance. We also denote by subscript $a$ the rows/columns of $M$ for all the unlabeled instances (i.e., $D_u \cup \{x_s\}$). Using these conventions, we rewrite the objective $\mathbf{y}^\top L \mathbf{y}$ as
$$\mathbf{y}^\top L \mathbf{y} = \mathbf{y}_l^\top L_{l,l}\mathbf{y}_l + L_{s,s} + \mathbf{y}_u^\top L_{u,u}\mathbf{y}_u + 2\mathbf{y}_u^\top(L_{u,l}\mathbf{y}_l + L_{u,s}y_s) + 2y_s\mathbf{y}_l^\top L_{l,s}.$$
Note that since the above objective function is concave (linear) in $y_s$ and convex (quadratic) in $\mathbf{y}_u$, we can switch the minimization over $\mathbf{y}_u$ with the maximization over $y_s$ in (7). By relaxing $\mathbf{y}_u$ to continuous variables, the solution to $\min_{\mathbf{y}_u} \mathbf{y}^\top L \mathbf{y}$ is given by
$$\widehat{\mathbf{y}}_u = -L_{u,u}^{-1}(L_{u,l}\mathbf{y}_l + L_{u,s}y_s), \qquad (8)$$
leading to the following expression for the evaluation function $\widehat{L}(D_l, D_u, x_s)$:
$$\widehat{L}(D_l, D_u, x_s) = L_{s,s} + \mathbf{y}_l^\top L_{l,l}\mathbf{y}_l + \max_{y_s}\big\{2y_s L_{s,l}\mathbf{y}_l - (L_{u,l}\mathbf{y}_l + L_{u,s}y_s)^\top L_{u,u}^{-1}(L_{u,l}\mathbf{y}_l + L_{u,s}y_s)\big\} \simeq L_{s,s} - \frac{\det(L_{a,a})}{L_{s,s}} + 2\big|(L_{s,l} - L_{s,u}L_{u,u}^{-1}L_{u,l})\mathbf{y}_l\big|, \qquad (9)$$
where the last step follows from the relation
$$\det\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \det(A_{22})\,\det\big(A_{11} - A_{12}A_{22}^{-1}A_{21}\big).$$
Note that although $\mathbf{y}_u$ is relaxed to real numbers, according to our empirical studies we find that in most cases $\widehat{\mathbf{y}}_u$ falls between $-1$ and $+1$.
Remark. The evaluation function $\widehat{L}(D_l, D_u, x_s)$ essentially consists of two components: $L_{s,s} - \det(L_{a,a})/L_{s,s}$ and $|(L_{s,l} - L_{s,u}L_{u,u}^{-1}L_{u,l})\mathbf{y}_l|$. Minimizing the first component is equivalent to minimizing $L_{s,s}$, because $L_{a,a}$ is independent of the selected instance $x_s$. Since $L = (K + \lambda I)^{-1}$, we have
$$L_{s,s} = \left(\lambda + K_{s,s} - (K_{s,l}, K_{s,u})\left(\lambda I + \begin{pmatrix} K_{l,l} & K_{l,u} \\ K_{u,l} & K_{u,u} \end{pmatrix}\right)^{-1}\begin{pmatrix} K_{l,s} \\ K_{u,s} \end{pmatrix}\right)^{-1} \approx \frac{1}{K_{s,s}}\left(1 + \frac{1}{K_{s,s}}(K_{s,l}, K_{s,u})\left(\lambda I + \begin{pmatrix} K_{l,l} & K_{l,u} \\ K_{u,l} & K_{u,u} \end{pmatrix}\right)^{-1}\begin{pmatrix} K_{l,s} \\ K_{u,s} \end{pmatrix}\right).$$
Therefore, to choose an instance with small $L_{s,s}$, we select an instance with large self-similarity $K_{s,s}$. When the self-similarity $K_{s,s}$ is a constant, this term will not affect query selection.
To analyze the effect of the second component, we approximate it as
$$2\big|(L_{s,l} - L_{s,u}L_{u,u}^{-1}L_{u,l})\mathbf{y}_l\big| \le 2|L_{s,l}\mathbf{y}_l| + 2\big|L_{s,u}L_{u,u}^{-1}L_{u,l}\mathbf{y}_l\big| \approx 2|L_{s,l}\mathbf{y}_l| + 2|L_{s,u}\widehat{\mathbf{y}}_u|. \qquad (10)$$
The first term in the above approximation measures the confidence in predicting $x_s$ using only the labeled data, which corresponds to the informativeness of $x_s$. The second term measures the prediction confidence using only the predicted labels of the unlabeled data, which can be viewed as the measure of representativeness. This is because when $x_s$ is a representative instance, it is expected to share a large similarity with many of the unlabeled instances in the pool. As a result, the prediction for $x_s$ by the unlabeled data in $D_u$ is decided by the average of their assigned class labels $\widehat{\mathbf{y}}_u$. If we assume that the classes are evenly distributed over the unlabeled data, we should expect a low confidence in predicting the class label for $x_s$ by the unlabeled data.
Algorithm 1 The QUIRE Algorithm
  Input: D: a data set of n instances
  Initialize:
    D_l = ∅; n_l = 0  % no labeled data is available at the very beginning
    D_u = D; n_u = n  % the pool of unlabeled data
    Calculate K
  repeat
    Calculate L_{a,a}^{-1} using Proposition 2, and det(L_{a,a})
    for s = 1 to n_u do
      Calculate L_{u,u}^{-1} according to Theorem 1
      Calculate L̂(D_l, D_u, x_s) using Eq. 9
    end for
    Select the x_{s*} with the smallest L̂(D_l, D_u, x_{s*}) and query its label y_{s*}
    D_l = D_l ∪ {(x_{s*}, y_{s*})}; D_u = D_u \ {x_{s*}}
  until the number of queries or the required accuracy is reached
It is important to note that, unlike existing work that measures the representativeness only by the cluster structure of unlabeled data, our proposed measure of representativeness depends on $\widehat{\mathbf{y}}_u$, which essentially combines the cluster structure of unlabeled data with the class assignments of the labeled data. Given high-dimensional data, there could be many possible cluster structures consistent with the unlabeled data, and it is unclear which one is consistent with the target classification problem. It is therefore critical to take the label information into account when exploiting the cluster structure of unlabeled data.
3.3 Efficient Algorithm
Computing the evaluation function $\widehat{L}(D_l, D_u, x_s)$ in Eq. 9 requires computing $L_{u,u}^{-1}$ for every unlabeled instance $x_s$, leading to a high computational cost when the number of unlabeled instances is very large. The theorem below allows us to improve the computational efficiency dramatically.
Theorem 1. Let
$$L_{a,a}^{-1} = \begin{pmatrix} L_{s,s} & L_{s,u} \\ L_{u,s} & L_{u,u} \end{pmatrix}^{-1} = \begin{pmatrix} a & -\mathbf{b}^\top \\ -\mathbf{b} & D \end{pmatrix}.$$
We have
$$L_{u,u}^{-1} = D - \frac{1}{a}\mathbf{b}\mathbf{b}^\top.$$
The proof can be found in the supplementary document. As indicated by Theorem 1, we only need to compute $L_{a,a}^{-1}$ once; for each $x_s$, its $L_{u,u}^{-1}$ can be computed directly from $L_{a,a}^{-1}$. The following proposition allows us to simplify the computation of $L_{a,a}^{-1}$.
Proposition 2. $L_{a,a}^{-1} = (\lambda I_a + K_{a,a}) - K_{a,l}(\lambda I_l + K_{l,l})^{-1}K_{l,a}$.
Proposition 2 follows directly from the inverse of a block matrix. As indicated by Proposition 2, we only need to compute $(\lambda I + K_{l,l})^{-1}$. Given that the number of labeled examples is relatively small compared to the size of the unlabeled data, the computation of $L_{a,a}^{-1}$ is in general efficient. The pseudo-code of QUIRE is summarized in Algorithm 1. Excluding the time for computing the kernel matrix, the computational complexity of our algorithm is just $O(n_u)$.
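Putting Proposition 2, Theorem 1 and Eq. 9 together, a direct numpy sketch of the per-iteration selection looks as follows. This is our own code with simplified indexing, not the authors' implementation; in particular, a tuned version would avoid forming the full inverse of K + λI, which we keep here only to read off the cross blocks.

import numpy as np

def quire_select(K, y_l, labeled, lam=1.0):
    """Return the index of the unlabeled instance minimizing Eq. 9."""
    n = K.shape[0]
    labeled = np.asarray(labeled)
    a = np.setdiff1d(np.arange(n), labeled)        # pool of all unlabeled instances
    # Proposition 2: L_aa^{-1} from a small (n_l x n_l) inverse.
    Kinv_ll = np.linalg.inv(lam * np.eye(len(labeled)) + K[np.ix_(labeled, labeled)])
    Laa_inv = (lam * np.eye(len(a)) + K[np.ix_(a, a)]
               - K[np.ix_(a, labeled)] @ Kinv_ll @ K[np.ix_(labeled, a)])
    # det(L_aa) is constant over candidates (use slogdet for large pools).
    det_Laa = 1.0 / np.linalg.det(Laa_inv)
    L = np.linalg.inv(K + lam * np.eye(n))         # used only for the l-cross blocks

    best, best_score = None, np.inf
    for j, s in enumerate(a):
        u = np.delete(a, j)
        # Theorem 1: L_uu^{-1} directly from L_aa^{-1} (s row/column removed).
        a0 = Laa_inv[j, j]
        c = np.delete(Laa_inv[:, j], j)            # c = -b, and bb' = cc'
        D = np.delete(np.delete(Laa_inv, j, axis=0), j, axis=1)
        Luu_inv = D - np.outer(c, c) / a0
        # Eq. 9: L_ss - det(L_aa)/L_ss + 2 |(L_sl - L_su L_uu^{-1} L_ul) y_l|.
        L_ss = L[s, s]
        score = (L_ss - det_Laa / L_ss
                 + 2.0 * abs((L[s, labeled]
                              - L[s, u] @ Luu_inv @ L[np.ix_(u, labeled)]) @ y_l))
        if score < best_score:
            best, best_score = s, score
    return best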
4 Experiments
We compare QUIRE with the following five baseline approaches: (1) RANDOM: randomly select query instances; (2) MARGIN: margin-based active learning [18], a representative approach which selects informative instances; (3) CLUSTER: hierarchical-clustering-based active learning [7], a representative approach that chooses representative instances; (4) IDE: active learning that selects informative and diverse examples [11]; and (5) DUAL: a dual strategy for active learning that exploits both informativeness and representativeness for query selection. Note that the original algorithm in [11] is designed for batch mode active learning. We turn it into an active learning algorithm that selects a single instance in each iteration by setting the parameter k = 1.
5
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
0
20
40
60
80
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
0
80
20
80
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
0
100
200
300
400
500
600
Number of queried examples
(c) g241n
15
20
25
70
60
Random
Margin
Cluster
IDE
DUAL
Quire
50
40
30
0
50
(d) isolet
Accuracy (%)
80
70
60
Random
Margin
Cluster
IDE
DUAL
Quire
50
40
30
40
50
50
90
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
20
30
40
50
80
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
60
0
10
Accuracy (%)
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
40
50
Number of queried examples
100
90
90
80
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
60
0
20
40
60
30
40
50
60
(i) letterEvsF
100
70
20
Number of queried examples
(h) letterDvsP
70
150
(f) vehicle
80
10
100
Number of queried examples
Number of queried examples
80
(j) letterIvsJ
0
90
50
0
60
90
30
50
300
100
(g) wdbc
20
250
Random
Margin
Cluster
IDE
DUAL
Quire
60
100
Number of queried examples
10
200
70
(e) titato
90
20
150
80
Number of queried examples
100
10
100
Accuracy (%)
10
80
80
100
Accuracy (%)
5
90
90
Number of queried examples
Accuracy (%)
50
100
Accuracy (%)
Accuracy (%)
Accuracy (%)
90
Accuracy (%)
80
Random
Margin
Cluster
IDE
DUAL
Quire
60
100
100
0
60
70
(b) digit1
(a) austra
0
40
80
Number of queried examples
Number of queried examples
50
0
Accuracy (%)
Accuracy (%)
Accuracy (%)
90
80
80
70
Random
Margin
Cluster
IDE
DUAL
Quire
60
50
0
Number of queried examples
(k) letterMvsN
10
20
30
40
50
60
Number of queried examples
(l) letterUvsV
Figure 2: Comparison on classification accuracy
Twelve data sets are used in our study and their statistics are shown in the supplementary document.
Digit1 and g241n are benchmark data sets for semi-supervised learning [5]; austria, isolet, titato,
vechicle, and wdbc are UCI data sets [1]; letter is a multi-class data set [1] from which we select
five pairs of letters that are relatively difficult to distinguish, i.e., D vs P, E vs F, I vs J, M vs N,
U vs V, and construct a binary class data set for each pair. Each data set is randomly divided into
two parts of equal size, with one part as the test data and the other part as the unlabeled data that is
used for active learning. We assume that no labeled data is available at the very beginning of active
learning. For M ARGIN, IDE and DUAL, instances are randomly selected when no classification
model is available, which only takes place at the beginning. In each iteration, an unlabeled instance
is first selected to solicit its class label and the classification model is then retrained using additional
labeled instance. We evaluate the classification model by its performance on the holdout test data.
Both classification accuracy and Area Under ROC curve (AUC) are used for evaluation metrics. For
every data set, we run the experiment for ten times, each with a random partition of the data set. We
also conduct experiments with a few initially labeled examples and have similar observation. Due to
the space limit, we put in the supplementary document the experimental results with a few initially
labeled examples. In all the experiments, the parameter ? is set to 1 and a RBF kernel with default
6
Table 1: Comparison on AUC values (mean ? std). The best performance and its comparable
performances based on paired t-tests at 95% significance level are highlighted in boldface.
Data
austra
digit1
g241n
isolet
titato
vehicle
wdbc
letterDvsP
letterEvsF
letterIvsJ
letterMvsN
letterUvsV
Algorithms
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
Q UIRE
5%
.868?.027
.751?.137
.877?.045
.858?.101
.866?.037
.887?.014
.945?.009
.941?.028
.938?.035
.954?.011
.929?.014
.976?.006
.713?.040
.700?.057
.720?.038
.727?.030
.722?.040
.757?.035
.995?.006
.965?.052
.998?.002
.998?.003
.993?.008
.997?.002
.762?.033
.645?.096
.717?.087
.735?.040
.708?.069
.736?.037
.818?.064
.693?.078
.771?.088
.731?.141
.680?.074
.750?.137
.984?.006
.967?.038
.981?.007
.983?.006
.955?.025
.985?.006
.990?.004
.994?.005
.988?.008
.992?.006
.978?.005
.998?.001
.977?.020
.987?.008
.975?.016
.977?.014
.976?.011
.988?.009
.943?.025
.882?.096
.952?.022
.934?.030
.819?.120
.951?.023
.977?.010
.964?.040
.971?.017
.969?.017
.950?.025
.986?.007
.992?.005
.998?.002
.990?.008
.995?.004
.983?.014
.999?.001
Number of queries (percentage of the unlabeled data)
10%
20%
30%
40%
.894?.022
.897?.023
.901?.022
.909?.015
.838?.119
.885?.043
.909?.010
.911?.012
.888?.029
.894?.015
.896?.015
.903?.014
.885?.058
.902?.012
.912?.008
.913?.009
.878?.036
.875?.018
.876?.016
.879?.013
.901?.010
.906?.016
.912?.009
.914?.009
.969?.006
.979?.005
.984?.003
.985?.003
.972?.009
.989?.002
.992?.002
.992?.002
.952?.018
.963?.019
.974?.011
.985?.002
.973?.007
.987?.002
.991?.002
.992?.002
.953?.009
.975?.004
.982?.005
.985?.003
.986?.003
.990?.002
.992?.002
.992?.002
.769?.021
.822?.018
.854?.016
.873?.015
.751?.048
.830?.022
.864?.019
.896?.012
.770?.024
.815?.018
.835?.021
.860?.022
.786?.029
.840?.017
.866?.016
.883?.013
.751?.019
.822?.011
.838?.022
.865?.016
.825?.019
.857?.020
.884?.013
.900?.009
.998?.002
.999?.001
1.00?.000
1.00?.000
.999?.001
1.00?.000
1.00?.000
1.00?.000
.999?.002
1.00?.000
1.00?.000
1.00?.000
.999?.002
.999?.001
1.00?.001
1.00?.000
.999?.001
.999?.001
1.00?.000
1.00?.001
.999?.001
.999?.001
1.00?.000
1.00?.001
.861?.031
.954?.023
.979?.011
.991?.007
.753?.078
.946?.043
.998?.001
1.00?.000
.806?.054
.908?.031
.971?.021
.989?.010
.906?.029
.996?.003
.999?.001
1.00?.001
.782?.064
.900?.027
.981?.012
.995?.006
.861?.025
.991?.004
.999?.001
1.00?.000
.864?.039
.925?.032
.949?.026
.968?.016
.828?.077
.883?.105
.981?.014
.993?.005
.845?.056
.927?.022
.955?.018
.973?.010
.849?.106
.878?.093
.957?.037
.977?.010
.706?.114
.817?.061
.875?.035
.908?.035
.912?.024
.956?.025
.985?.007
.989?.006
.986?.005
.990?.004
.991?.004
.991?.004
.990?.002
.993?.003
.993?.003
.993?.003
.987?.004
.991?.003
.992?.003
.992?.003
.984?.008
.990?.004
.992?.003
.993?.003
.964?.016
.972?.015
.988?.009
.992?.003
.990?.004
.993?.003
.993?.003
.993?.003
.995?.002
.997?.002
.998?.001
.998?.001
.999?.001
.999?.000
.999?.001
.999?.001
.995?.004
.997?.002
.998?.001
.999?.001
.997?.002
.998?.001
.999?.001
.999?.001
.986?.001
.988?.004
.990?.004
.996?.001
.999?.001
.999?.001
.999?.001
.999?.001
.988?.009
.994?.002
.997?.002
.998?.001
.999?.001
1.00?.000
1.00?.000
1.00?.000
.991?.003
.997?.004
.999?.001
1.00?.000
.995?.003
.999?.000
.999?.000
.999?.000
.993?.003
.996?.002
.996?.002
.996?.002
.999?.000
1.00?.000
1.00?.000
1.00?.000
.966?.017
.980?.004
.983?.005
.985?.005
.960?.027
.986?.005
.989?.006
.991?.004
.961?.017
.976?.008
.985?.007
.987?.006
.969?.011
.979?.006
.980?.006
.982?.008
.897?.058
.934?.030
.954?.017
.959?.014
.963?.013
.976?.011
.989?.010
.991?.004
.992?.002
.994?.003
.996?.002
.997?.001
.991?.014
.999?.000
.999?.000
.999?.000
.986?.009
.994?.003
.997?.002
.998?.001
.988?.007
.997?.002
.998?.001
.998?.001
.972?.011
.974?.007
.980?.008
.983?.007
.996?.003
.998?.001
.999?.000
.999?.000
.996?.004
.998?.001
.999?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.996?.009
1.00?.000
1.00?.000
1.00?.000
.999?.001
1.00?.000
1.00?.000
1.00?.000
.986?.008
.990?.008
.991?.008
.993?.007
1.00?.000
1.00?.000
1.00?.000
1.00?.000
50%
.909?.012
.914?.009
.907?.015
.914?.007
.881?.013
.915?.007
.988?.003
.992?.002
.988?.003
.992?.002
.987?.003
.992?.002
.886?.012
.911?.008
.880?.013
.899?.011
.881?.012
.912?.006
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.997?.004
1.00?.000
.997?.003
1.00?.000
.999?.001
1.00?.000
.975?.013
.993?.005
.978?.011
.985?.009
.947?.035
.991?.005
.991?.004
.993?.003
.993?.003
.993?.003
.992?.003
.993?.003
.998?.001
.999?.001
.999?.001
.999?.001
.998?.001
.999?.001
.999?.001
1.00?.000
1.00?.000
1.00?.000
.998?.001
1.00?.000
.987?.004
.991?.004
.989?.005
.985?.005
.953?.015
.991?.004
.997?.001
.999?.000
.998?.001
.998?.001
.983?.007
.999?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.995?.005
1.00?.000
80%
.917?.011
.915?.008
.913?.011
.916?.007
.904?.008
.916?.007
.991?.002
.992?.002
.992?.002
.992?.002
.991?.002
.992?.002
.906?.014
.918?.008
.909?.009
.916?.010
.912?.007
.920?.009
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.989?.006
.992?.005
.992?.006
.991?.006
.980?.016
.992?.005
.993?.003
.993?.003
.993?.003
.993?.003
.992?.004
.993?.003
.999?.001
.999?.001
.999?.001
.999?.001
.999?.001
.999?.001
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.990?.004
.991?.004
.991?.004
.990?.004
.988?.004
.991?.004
.998?.001
.999?.000
.999?.000
.999?.000
.998?.001
.999?.000
1.00?.000
1.00?.000
1.00?.000
1.00?.000
.999?.000
1.00?.000
parameters is used (performances with linear kernel are not as stable as that with RBF kernel).
LibSVM [4] is used to train a SVM classifier for all active learning approaches in comparison.
7
Table 2: Win/tie/loss counts of Q UIRE versus the other methods with varied numbers of queries.
Algorithms
R ANDOM
M ARGIN
C LUSTER
IDE
DUAL
In All
4.1
Number of queries (percentage of the unlabeled data)
5%
10%
20%
30%
40%
50%
80%
4/8/0
8/4/0
9/3/0
9/2/1
10/2/0
10/2/0
6/6/0
6/6/0
4/7/1
2/8/2
2/8/2
0/11/1
0/11/1
1/11/0
6/6/0
7/5/0
8/4/0
11/1/0
9/3/0
6/6/0
3/9/0
6/6/0
6/5/1
6/5/1
8/4/0
8/4/0
8/4/0
2/10/0
8/4/0
10/2/0
11/1/0
10/2/0
10/2/0
11/1/0
9/3/0
30/30/0 35/23/2 36/21/3 40/17/3 37/22/1 35/24/1 21/39/0
In All
56/27/1
15/62/7
50/34/0
44/38/2
69/15/0
234/176/10
Results
Figure 2 shows the classification accuracy of different active learning approaches with varied numbers of queries. Table 1 shows the AUC values, with 5%, 10%, 20%, 30%, 40%, 50% and 80% of
unlabeled data used as queries. For each case, the best result and its comparable performances are
highlighted in boldface based on paired t-tests at 95% significance level. Table 2 summarizes the
win/tie/loss counts of Q UIRE versus the other methods based on the same test. We also perform the
Wilcoxon signed ranks test at 95% significance level, and obtain almost the same results, which can
be found in the supplementary document.
First, we observe that the R ANDOM approach tends to yield decent performance when the number
of queries is very small. However, as the number of queries increases, this simple approach loses
its edge and often is not as effective as the other active learning approaches. M ARGIN, the most
commonly used approach for active learning, is not performing well at the beginning of the learning stage. As the number of queries increases, we observe that M ARGIN catches up with the other
approaches and yields decent performance. This phenomenon can be attributed to the fact that with
only a few training examples, the learned decision boundary tends to be inaccurate, and as a result,
the unlabeled instances closest to the decision boundary may not be the most informative ones. The
performance of C LUSTER is mixed. It works well on some data sets, but performs poorly on the
others. We attribute the inconsistency of C LUSTER to the fact that the identified cluster structure
of unlabeled data may not always be consistent with the target classification model. The behavior
of IDE is similar to that of C LUSTER in that it achieves good performance on certain data sets and
fails on the others. DUAL does not yield good performance on most data sets although we have
tried our best efforts to tune the related parameters. We attribute the failure of DUAL to the setup
of our experiment in which no initially labeled examples are provided. Further study shows that
starting with a few initially labeled examples does improve the performance of DUAL though it is
still significantly outperformed by Q UIRE.Detailed results can be found in the supplementary document. Finally, we observe that for most cases, Q UIRE is able to outperform the baseline methods
significantly, as indicated by Figure 2, Tables 1 and 2. We attribute the success of Q UIRE to the principle of choosing unlabeled instances that are both informative and representative, and the specially
designed computational framework that appropriately measures and combines the informativeness
and representativeness. The computational cost are reported in the supplementary document.
5
Conclusion
We propose a new approach for active learning, called Q UIRE, that is designed to find unlabeled instances that are both informative and representative. The proposed approach is based on the min-max
view of active learning, which provides a systematic way for measuring and combining the informativeness and the representativeness. Our current work is restricted to binary classification. In the
future, we plan to extend this work to multi-class learning. We also plan to develop the mechanism
which allows the user to control the tradeoff between informativeness and representativeness based
on their domain, leading to the incorporation of domain knowledge into active learning algorithms.
Acknowledgements
This work was supported in part by the NSFC (60635030), 973 Program (2010CB327903), JiangsuSF (BK2008018) and NSF (IIS-0643494).
8
References
[1] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007.
[2] M. F. Balcan, A. Z. Broder, and T. Zhang. Margin based active learning. In Proceedings of the
20th Annual Conference on Learning Theory, pages 35?50, 2007.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework
for learning from labeled and unlabeled examples. Journal of Machine Learning Research,
7:2399?2434, 2006.
[4] C. C. Chang and C. J. Lin. LIBSVM: A library for support vector machines, 2001.
[5] O. Chapelle, B. Sch?olkopf, and A. Zien, editors. Semi-supervised learning. MIT Press, Cambridge, MA, 2006.
[6] I. Dagan and S. P. Engelson. Committee-based sampling for training probabilistic classifiers.
In Proceedings of the 12th International Conference on Machine Learning, pages 150?157,
1995.
[7] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In Proceedings of the 25th
International Conference on Machine Learning, pages 208?215, 2008.
[8] P. Donmez, J. G. Carbonell, and P. N. Bennett. Dual strategy active learning. In Proceedings
of the 18th European Conference on Machine Learning, pages 116?127, 2007.
[9] P. Flaherty, M. I. Jordan, and A. P. Arkin. Robust design of biological experiments. In Advances
in Neural Information Processing Systems 18, pages 363?370, 2005.
[10] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by
committee algorithm. Machine Learning, 28(2-3):133?168, 1997.
[11] S. C. H. Hoi, R. Jin, J. Zhu, and M. R. Lyu. Semi-supervised svm batch mode active learning
for image retrieval. In Proceedings of the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, 2008.
[12] D. D. Lewis and J. Catlett. Heterogeneous uncertainty sampling for supervised learning. In
Proceedings of the 11th International Conference on Machine Learning, pages 148?156, 1994.
[13] D. D. Lewis and W. A. Gale. A sequential algorithm for training text classifiers. In Proceedings
of the 17th Annual International ACM-SIGIR Conference on Research and Development in
Information Retrieval, pages 3?12, 1994.
[14] H. T. Nguyen and A. W. M. Smeulders. Active learning using pre-clustering. In Proceedings
of the 21st International Conference on Machine Learning, pages 623?630, 2004.
[15] R. Rifkin R, G. Yeo, and T. Poggio. Regularized least squares classification. In S. Basu
C. Micchelli J. A. K. Suykens, G. Horvath and J. Vandewalle, editors, Advances in Learning
Theory: Methods, Model and Applications, NATO Science Series III: Computer and Systems
Sciences. Volume 190, pages 131?154, 2003.
[16] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin?Madison, 2009.
[17] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of the 5th
ACM Workshop on Computational Learning Theory, pages 287?294, 1992.
[18] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In Proceedings of the 17th International Conference on Machine Learning, pages
999?1006, 2000.
[19] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang. Representative sampling for text classification using support vector machines. In Proceedings of the 25th European Conference on Information
Retrieval Research, pages 393?407, 2003.
[20] K. Yu, J. Bi, and V. Tresp. Active learning via transductive experimental design. In Proceedings
of the 23th International Conference on Machine Learning, pages 1081?1088, 2006.
9
| 4176 |@word repository:1 tried:2 series:1 selecting:1 document:7 outperforms:1 existing:1 ka:2 current:1 partition:1 informative:20 designed:3 v:5 selected:8 beginning:4 short:1 provides:3 cse:1 zhang:1 five:2 incorrect:1 consists:2 combine:4 lansing:1 expected:1 behavior:1 examine:2 multi:2 zhouzh:1 zhi:1 provided:1 finding:3 guarantee:1 pseudo:1 every:2 concave:1 tie:2 classifier:4 control:1 ly:5 before:1 nju:1 engineering:1 tends:3 limit:2 nsfc:1 subscript:4 solely:1 approximately:1 signed:1 china:1 k:9 dynamically:1 relaxing:1 bi:1 decided:1 block:1 area:1 empirical:1 significantly:3 confidence:3 pre:1 nanjing:2 convenience:2 unlabeled:49 selection:10 undesirable:1 close:1 put:1 equivalent:1 yt:1 straightforward:1 regardless:1 starting:1 l:22 convex:1 sigir:1 survey:1 simplicity:1 isolet:3 target:2 deploy:1 heavily:1 user:1 shamir:1 arkin:1 approximated:1 recognition:1 std:1 labeled:18 wang:1 calculate:4 sompolinsky:1 principled:1 complexity:1 seung:2 motivate:1 trained:1 rewrite:1 efficiency:1 represented:1 train:1 effective:1 query:28 zhou1:1 labeling:1 newman:1 outside:1 choosing:1 huang1:1 widely:1 supplementary:7 ability:1 favor:1 statistic:1 niyogi:1 transductive:1 highlighted:2 hoc:3 propose:3 uci:2 combining:4 rifkin:1 poorly:1 olkopf:1 exploiting:1 convergence:1 cluster:20 a11:2 develop:1 measured:2 exemplar:1 school:1 a22:2 eq:3 predicted:1 uu:1 convention:1 correct:1 attribute:3 a12:2 settle:1 hoi:1 xid:1 require:1 proposition:5 biological:1 rong:1 lyu:1 achieves:1 digit1:3 smallest:1 catlett:1 outperformed:1 label:12 currently:1 minimization:1 mit:1 always:1 aim:1 lamda:1 modified:1 focus:1 rank:1 baseline:2 inaccurate:1 entire:1 initially:4 relation:1 favoring:6 koller:1 selective:1 selects:5 overall:1 issue:1 classification:19 arg:3 dual:32 g241n:3 development:1 plan:2 art:1 initialize:1 equal:1 once:1 construct:1 sampling:7 manually:1 represents:1 yu:22 future:2 argin:17 report:1 others:2 simplify:1 serious:1 few:4 belkin:1 engelson:1 randomly:3 national:1 n1:1 evaluation:8 weakness:2 nl:8 accurate:1 edge:1 poggio:1 conduct:1 divide:1 re:1 instance:66 column:4 measuring:4 assignment:6 maximization:1 cost:2 vandewalle:1 tishby:1 reported:2 chooses:2 st:1 density:1 twelve:1 broder:1 international:7 preferring:2 bu:4 systematic:3 xi1:1 yl:17 probabilistic:1 pool:5 yut:2 choose:3 gale:1 leading:8 yeo:1 account:2 summarized:1 representativeness:18 ad:3 depends:2 performed:1 view:5 vehicle:2 analyze:1 reached:1 start:1 competitive:1 asuncion:1 smeulders:1 il:1 square:1 accuracy:16 efficiently:1 yield:4 identify:1 lu:18 solicit:1 failure:1 proof:1 mi:1 attributed:1 hsu:1 holdout:1 popular:1 austria:1 knowledge:1 organized:1 hilbert:1 andom:15 supervised:4 yb:2 though:1 jiangsusf:1 just:1 stage:1 until:1 sheng:1 hand:1 horizontal:1 mode:3 quality:1 indicated:5 quire:13 effect:1 y2:1 regularization:1 assigned:2 laboratory:1 ll:3 self:2 auc:3 illustrative:1 ide:28 criterion:4 cb327903:1 performs:1 balcan:1 image:1 novel:1 donmez:2 volume:1 extend:2 synthesized:1 refer:2 cambridge:1 queried:12 rd:2 chapelle:1 stable:1 similarity:4 wilcoxon:1 closest:2 termed:1 certain:1 binary:4 success:1 inconsistency:1 yi:7 additional:1 relaxed:1 ynl:1 semi:3 ii:1 zien:1 technical:1 lin:1 retrieval:3 divided:1 y:22 paired:2 prediction:5 heterogeneous:1 essentially:2 vision:1 metric:1 iteration:2 represent:1 kernel:7 suykens:1 addressed:1 appropriately:1 sch:1 rest:3 unlike:2 specially:1 probably:1 legend:1 jordan:1 ideal:1 iii:1 decent:2 switch:1 xj:1 affect:1 identified:1 suboptimal:2 cn:1 
luster:17 tradeoff:1 det:6 expression:1 effort:1 remark:1 dramatically:1 clear:1 detailed:1 tune:1 ten:1 outperform:1 percentage:2 nsf:1 diverse:1 write:1 dasgupta:1 key:1 libsvm:2 run:1 inverse:1 letter:2 uncertainty:7 place:1 almost:1 decision:7 summarizes:1 comparable:2 distinguish:1 quadratic:3 annual:2 incorporation:1 x2:1 software:1 min:18 performing:1 relatively:3 department:1 according:3 making:1 restricted:1 turn:1 count:2 committee:4 xi2:1 mechanism:1 know:1 end:1 available:3 endowed:1 observe:3 hierarchical:2 batch:3 original:1 clustering:5 include:2 remaining:1 hinge:1 madison:1 exploit:6 society:1 micchelli:1 objective:3 quantity:1 strategy:2 unclear:1 flaherty:1 win:2 unable:2 evenly:1 carbonell:1 manifold:2 boldface:2 code:1 horvath:1 minimizing:3 difficult:1 setup:1 design:3 unknown:1 perform:1 allowing:1 observation:1 benchmark:1 jin:1 extended:1 excluding:1 y1:1 varied:2 reproducing:1 retrained:1 pair:2 required:1 kl:9 extensive:1 learned:1 nu:7 address:1 able:2 usually:4 pattern:2 below:1 challenge:1 jin2:1 program:1 max:12 ia:1 critical:1 regularized:1 predicting:2 zhu:1 minimax:1 improve:2 technology:1 library:1 unselected:1 concludes:1 jun:1 catch:1 tresp:2 text:3 review:1 geometric:1 acknowledgement:1 literature:1 wisconsin:1 freund:1 loss:7 expect:2 mixed:1 interesting:1 limitation:1 querying:5 versus:2 xnl:2 consistent:3 informativeness:14 principle:1 viewpoint:1 editor:2 share:1 balancing:1 row:4 prone:1 repeat:1 last:1 supported:1 bias:3 fall:1 dagan:1 basu:1 distributed:1 boundary:7 dimension:1 xn:1 curve:1 default:1 opper:1 author:1 commonly:1 simplified:1 nguyen:1 bb:1 approximate:2 nato:1 active:49 sequentially:1 xi:9 msu:1 continuous:1 table:5 ku:6 robust:1 rongjin:1 du:21 european:2 domain:2 da:1 significance:3 main:3 x1:1 xu:2 representative:22 roc:1 slow:1 tong:1 fails:1 a21:2 austra:2 abundance:1 theorem:4 x:38 svm:2 dl:16 workshop:1 sequential:1 importance:1 illustrates:1 margin:19 wdbc:3 michigan:1 sindhwani:1 hua:1 chang:1 corresponds:1 loses:1 lewis:2 acm:2 ma:1 goal:1 presentation:1 viewed:1 consequently:1 rbf:2 bennett:1 determined:1 reducing:1 called:1 experimental:5 ya:1 la:6 east:1 select:8 hypothetically:1 support:3 evaluate:1 phenomenon:1 |
3,508 | 4,177 | Multi-label Multiple Kernel Learning by Stochastic
Approximation: Application to Visual Object Recognition
Serhat S. Bucak?
[email protected]
Rong Jin?
[email protected]
Dept. of Comp. Sci. & Eng.?
Michigan State University
East Lansing, MI 48824,U.S.A.
Anil K. Jain??
[email protected]
Dept. of Brain & Cognitive Eng.?
Korea University, Anam-dong,
Seoul, 136-713, Korea
Abstract
Recent studies have shown that multiple kernel learning is very effective for object recognition,
leading to the popularity of kernel learning in computer vision problems. In this work, we develop
an efficient algorithm for multi-label multiple kernel learning (ML-MKL). We assume that all the
classes under consideration share the same combination of kernel functions, and the objective is
to find the optimal kernel combination that benefits all the classes. Although several algorithms
have been developed for ML-MKL, their computational cost is linear in the number of classes,
making them unscalable when the number of classes is large, a challenge frequently encountered
in visual object recognition. We address this computational challenge by developing a framework
for ML-MKL that combines the worst-case analysis with
? stochastic approximation. Our analysis
shows that the complexity of our algorithm is O(m1/3 lnm), where m is the number of classes.
Empirical studies with object recognition show that while achieving similar classification accuracy,
the proposed method is significantly more efficient than the state-of-the-art algorithms for ML-MKL.
1
Introduction
Recent studies have shown promising performance of kernel methods for object classification, recognition and localization [1]. Since the choice of kernel functions can significantly affect the performance of kernel methods, kernel
learning, or more specifically Multiple Kernel Learning (MKL) [2, 3, 4, 5, 6, 7], has attracted considerable amount
of interest in computer vision community. In this work, we focuss on kernel learning for object recognition because
the visual content of an image can be represented in many ways, depending on the methods used for keypoint detection, descriptor/feature extraction, and keypoint quantization. Since each representation leads to a different similarity
measure between images (i.e., kernel function), the related fusion problem can be cast into a MKL problem.
A number of algorithms have been developed for MKL. In [2], MKL is formulated as a quadratically constraint
quadratic program (QCQP). [8] suggests an algorithm based on sequential minimization optimization (SMO) to improve the efficiency of [2]. [9] shows that MKL can be formulated as a semi-infinite linear program (SILP) and can
be solved efficiently by using off-the-shelf SVM implementations. In order to improve the scalability of MKL, several
first order optimization methods have been proposed, including the subgradient method [10], the level method [11], the
method based on equivalence between group lasso and MKL [12, 13, 14]. Besides L1-norm [15] and L2-norm [16],
Lp-norm [17] has also been proposed to regularize the weights for kernel combination. Other then the framework
based on maximum margin classification, MKL can also be formulated by using kernel alignment [18] and Fisher
discriminative analysis frameworks [19].
1
Although most efforts in MKL focus on binary classification problems, several recent studies have attempted to extend
MKL to multi-class and multi-label learning [3, 20, 21, 22, 23]. Most of these studies assume that either the same or
similar kernel functions are used by different but related classification tasks. Even though studies show that MKL for
multi-class and multi-label learning can result in significant improvement in classification accuracy, the computational
cost is often linear in the number of classes, making it computationally expensive when dealing with a large number of
classes. Since most object recognition problems involve many object classes, whose number might go up to hundreds
or sometimes even to thousands, it is important to develop an efficient learning algorithm for multi-class and multilabel MKL that is sublinear in the number of classes.
In this work, we develop an efficient algorithm for Multi-Label MKL (ML-MKL) that assumes all the classifiers
share the same combination of kernels. We note that although this assumption significantly constrains the choice of
kernel functions for different classes, our empirical studies with object recognition show that it does not affect the
classification performance. A similar phenomenon was also observed in [21]. A naive implementation of ML-MKL
with shared kernel combination will lead to a computational cost linear in the number of classes. We alleviate this
computational challenge by exploring the idea of combining worst case analysis?with stochastic approximation. Our
analysis reveals that the convergence rate of the proposed algorithm is O(m1/3 ln m), which is significantly better
than a linear dependence on m, where m is the number of classes. Our empirical studies show that the proposed MKL
algorithm yields similar performance as the state-of-the-art algorithms for ML-MKL, but with a significantly shorter
running time, making it suitable for multi-label learning with a large number of classes.
The rest of this paper is organized as follows. Section 2 presents the proposed algorithm for Multi-Label MKL,
along with its convergence analysis. Section 3 summarizes the experimental results for object recognition. Section 4
concludes this work.
2
Multi-label Multiple Kernel Learning (ML-MKL)
We denote by D = {x1 , . . . , xn } the collection of n training instances, and by m the number of classes. We introduce
yk = (y1k , . . . , ynk )> ? {?1, +1}n , the assignment of the kth class to all the training instances: yik = +1 if xi is
assigned to the k-th class and yik = ?1 otherwise. We introduce ?a (x, x0 ) : Rd ? Rd 7? R, a = 1, . . . , s, the s kernel
functions to be combined. We denote by {Ka ? Rn?n , a = 1, . . . , s} the collection of s kernel matrices for the data
a
points in D, i.e., Ki,j
= ?a (xi , xj ).
Ps
We introduce p = (p1 , . . . , ps ), a probability distribution, for combining kernels. We denote by K(p) = a=1 pa Ka
the combined kernel matrices. We introduce the domain P for the probability distribution p, i.e., P = {p ? Rs+ :
p> 1 = 1}. Our goal is to learn from the training examples the optimal kernel combination p for all the m classes.
The simplest approach for multi-label multiple kernel learning with shared kernel combination is to find the optimal
kernel combination p by minimizing the sum of regularized loss functions of all m classes, leading to the following
optimization problem:
))
(m
(
n
m
X
X
X
1
k
2
,
(1)
` yi fk (xi )
|fk |H(p) +
min
min
Hk =
p?P {fk ?H(p)}m
2
k=1
i=1
k=1
k=1
where `(z) = max(0, 1 ? z) and H(p) is a Reproducing Kernel Hilbert Space endowed with kernel ?(x, x0 ; p) =
P
s
a
0
a=1 p ?a (x, x ). Hk is the regularized loss function for the kth class. It is straightforward to verify the following
dual problem of (1):
(
)
m
X
1 k
k >
k >
k
k
[? ] 1 ? (? ? y ) K(p)(? ? y )
min max L(p, ?) =
,
(2)
p?P ??Q1
2
k=1
where Q1 = ? = (?1 , . . . , ?m ) : ?k ? [0, C]n , k = 1, . . . , m . To solve the optimization problem in Eq. (2), we
can view it as a minimization problem, i.e., minp?P A(p), where A(p) = max??Q1 L(p, ?). We then follow the
subgradient descent approach in [10] and compute the gradient of A(p) as
m
?pi A(p) = ?
1X k
(? (p) ? yk )> Ki (?k (p) ? yk ),
2
k=1
2
where ?k (p) = arg max??[0,C]n [?k ]> 1 ? (?k ? yk )> K(p)(?k ? yk ). We refer to this approach as Multi-label
Multiple Kernel Learning by Sum, or ML-MKL-Sum. Note that this approach is similar to the one proposed
in [21]. The main computational problem with ML-MKL-Sum is that by treating every class equally, in each iteration
of subgradient descent, it requires solving m kernel SVMs, making it unscalable to a very large number of classes.
Below we present a formulation for multi-label MKL whose computational cost is sublinear in the number of classes.
2.1
A Minimax Framework for Multi-label MKL
In order to alleviate the computational difficulty arising from a large number of classes, we search for the combined
kernel matrix K(p) that minimizes the worst classification error among m classes, i.e.,
min
max Hk
min
(3)
p?P {fk ?H(p)}m
1?k?m
k=1
Pm
Eq. (3) differs from Eq. (1) inPthat it replaces k=1 Hk with max1?k?m Hk . The main computational advantage
of using maxk Hk instead of k Hk is that by using an appropriately designed method, we may be able to figure
out the most difficult class in a few iterations, and spend most of the computational cycles on learning the optimal
kernel combination for the most difficult class. In this way, we are able to achieve a running time that is sublinear
in the number of classes. Below, we present an optimization strategy for Eq. (3) based on the idea of stochastic
approximation.
A direct approach is to solve the optimization problem in Eq. (3) by its dual form. It is straightforward to derive the
dual problem of Eq. (3) as follows (more details can be found in the supplementary documents)
min max
p?P ??B
where
B=
(
?
?
?
L(p, ?) =
1
m
(
m
X
k=1
k
(? , . . . , ? ) : ? ?
1
[? k ]> 1 ? (? k ? yk )> K(p)(? k ? yk )
2
Rn+ , k
k
n
= 1, . . . , m, ? ? [0, C?k ] s.t.
?
21 )2 ?
m
X
?
.
(4)
)
?k = 1 .
k=1
The challenge in solving Eq. (4) is that the solutions {? 1 , . . . , ? m } in domain B are correlated with each other, making
it impossible to solve each ? k independently by an off-the-shelf SVM solver. Although a gradient descent approach
can be developed for optimizing Eq. (4), it is unable to explore the sparse structure in ? k making it less efficient than
state-of-the-art SVM solvers. In order to effectively explore the power of off-the-shelf SVM solvers, we rewrite (3) as
follows
(
)
m
X
1 k
k>
k
k >
k
k
? ? 1 ? (? ? y ) K(p)(? ? y )
min max L(p, ?) = max
,
(5)
??Q1
p?P ???
2
k=1
>
where ? = {(? 1 , . . . , ? m ) ? Rm
+ : ? 1 = 1}. In Eq. (5), we replace max1?k?m with max??? . The advantage of
using Eq. (5) is that we can resort to a SVM solver to efficiently find ?k for a given combination of kernels K(p).
Given Eq. (5), we develop a subgradient descent approach for solving the optimization problem. In particular, in each
iteration of subgradient descent, we compute the gradient L(p, ?) with respect to p and ? as follows
m
?pa L(p, ?) = ?
1X k k
1
? (? ? yk )> Ka (?k ? yk ), ?? k L(p, ?) = [?k ]> 1 ? (?k ? yk )> K(p)(?k ? yk ),
2
2
(6)
k=1
where ?k = arg max??[0,C]n ?> 1 ? (? ? yk )> K(p)(? ? yk )/2, i.e., a SVM solution to the combined kernel K(p).
? Ps
Following the mirror prox descent method [24], we define potential functions ?p = ??p a=1 pa ln pa for p and
Pm
?? = i=1 ? i ln ? i for ?, and have the following equations for updating pt and ?t
pat+1 =
?tk
pat
k
exp(??? ?? k L(pt , ?t )),
p exp(??p ?pa L(pt , ?t )), ?t+1 =
Zt
Zt?
3
(7)
>
where Ztp and Zt? are normalization factors that ensure p>
t 1 = ? t 1 = 1. ?p > 0 and ?? > 0 are the step sizes for
optimizing p and ?, respectively.
Unfortunately, the algorithm described above shares the same shortcoming as the other approaches for multiple label multiple kernel learning, i.e., it requires solving m SVM problems in each iteration, and therefore its computational complexity is linear in the number of classes. To alleviate this problem, we modify the above algorithm
by introducing the stochastic approximation method. In particular, in each iteration t, instead of computing the full
gradients that requirs solving m SVMs, we sample one classification task according to the multinomial distribution
M ulti(?t1 , . . . , ?tm ). Let jt be the index of the sampled classification task. Using the sampled task jt , we estimate the
gradient of L(p, ?) with respect to pa and ? k , denoted by gbap (pt , ?t ) and gbk? (pt , ?t ), as follows
1
(8)
gbap (pt , ?t ) = ? (?jt ? yjt )> Ka (?jt ? yjt ),
2
0
k 6= jt .
(9)
gbk? (pt , ?t ) =
1
1
>
k
k >
k
k
?
1
?
(?
?
y
)
K(p)(?
?
y
)
k = jt
k
?k
2
The computation of gbap (pt , ?t ) and gbi? (pt , ?t ) only requires ?jt and therefore only needs to solve one SVM problem,
instead of m SVMs. The key property of the estimated gradients in Eqs. (8) and (9) is that their expectations equal to
the true gradients, as summarized by Proposition 1. This property is the key to the correctness of this algorithm.
Proposition 1. We have
gi? (pt , ?t )] = ??i L(pt , ?t ),
Et [b
gap (pt , ?t )] = ?pa L(pt , ?t ), Et [b
where Et [?] stands for the expectation over the randomly sampled task jt .
Given the estimated gradients, we will follow Eq. (7) for updating p and ? in each iteration. Since gbi? (pt , ?t ) is
proportional to 1/?t , to ensure the norm of gbi? (pt , ?t ) to be bounded, we need to smooth ?t+1 . In order to have the
0
smoothing effect, without modifying ?t+1 , we will sample directly from ?t+1
,
?
0k
k
?? ? ?, ?? 0 ? ?0 , s.t. ?t+1
? ?t+1
(1 ? ?) + , k = 1, . . . , m,
m
where ? > 0 is a small probability mass used for smoothing and
?
?0 = ? 0> 1 = 1, ?k0 ? , k = 1, . . . , m .
m
We refer to this algorithm as Multi-label Multiple Kernel Learning by Stochastic Approximation, or ML-MKLSA for short. Algorithm 1 gives the detailed description.
2.2
Convergence Analysis
Since Eq. (5) is a convex-concave optimization problem, we introduce the following citation for measuring the quality
of a solution (p, ?)
?(p, ?) = max
L(p, ? 0 ) ? min
L(p0 , ?).
(11)
0
0
? ??
p ?P
We denote by (p? , ?? ) the optimal solution to Eq. (5).
Proposition 2. We have the following properties for ?(p, ?)
1. ?(p, ?) ? 0 for any solution p ? P and ? ? ?
2. ? (p? , ? ? ) = 0
3. ?(p, ?) is jointly convex in both p and ?
We have the following theorem for the convergence rate for Algorithm 1. The detailed proof can be found in the
supplementary document.
b and ?
b
Theorem 1. After running Algorithm 1 over T iterations, we have the following inequality for the solution p
obtained by Algorithm 1
1
m2
b )] ?
E [? (b
p, ?
(ln m + ln s) + ?? d 2 ?20 n2 C 4 + n2 C 2 ,
?? T
2?
where d is a constant term, E[?] stands for the expectation over the sampled task indices of all iterations, and ?0 =
max ?max (Ka ), where ?max (Z) stands for the maximum eigenvalue of matrix Z.
1?a?s
4
Algorithm 1 Multi-label Multiple Kernel Learning: ML-MKL-SA
1: Input
? ?p , ?? : step sizes
? K 1 , . . . , K s : s kernel matrices
? y1 , . . . , ym : the assignments of m different classes to n training instances
? T : number of iterations
? ?: smoothing parameter
2: Initialization
? ?1 = 1/m and p1 = 1/s
3: for t = 1, . . . , T do
4:
Sample a classification task jt according to the distribution M ulti(?t1 , . . . , ?tm ).
5:
Compute ?jt = arg max??[0,C]n ?> 1 ? (? ? yjt )> K(p)(? ? yjt )/2 using an off shelf SVM solver.
6:
Compute the estimated gradients gbap (pt , ?t ) and gbi? (pt , ?t ) using Eq. (8) and (9).
0
7:
Update pt+1 , ?t+1 and ?t+1
as follows
pat+1
=
k
=
[?t+1 ]
pat
exp(??? gbap (pt , ?t )), a = 1, . . . , s.
Ztp
?tk
?
0
exp(?? gbk? (pt , ?t )), k = 1, . . . , m; ?t+1
= (1 ? ?)?t+1 + 1.
Zt?
m
8: end for
b and ?
b as
9: Compute the final solution p
?
b=
T
1X
?t ,
T t=1
b=
p
T
1X
pt .
T t=1
(10)
2
1p
Corollary 1. With ? = m 3 and ?? = n1 m? 3 (ln m)/T , after running Algorithm 1 (on the original paper) over T
p
iterations, we have E[?(b
p, ?
b)] ? O(nm1/3 (ln m)/T ) in terms of m,n and T .
Since we only need to solve one kernel
p SVM at each iteration, we have the computational complexity for the proposed
algorithm on the order of O(m1/3 (ln m)/T ), sublinear in the number of classes m.
3
Experiments
In this section, we empirically evaluate the proposed multiple kernel learning algorithm2 by demonstrating its efficiency and effectiveness on the visual object recognition task.
3.1
Data sets
We use three benchmark data sets for visual object recognition: Caltech-101, Pascal VOC 2006 and Pascal VOC 2007.
Caltech-101 contains 101 different object classes in addition to a ?background? class. We use the same settings as [25]
in which 30 instances of each class are used for training and 15 instances for testing. Pascal VOC 2006 data set [26]
consists of 5, 303 images distributed over 10 classes, of which 2, 618 are used for training. Pascal VOC 2007 [27]
consists of 5, 011 training images and 4, 932 test images that are distributed over 20 classes. For both data sets, we
used the default train-test partition provided by VOC Challenge. Unlike Caltech-101 data set, where each image is
assigned to one class, images in VOC data sets can be assigned to multiple classes simultaneously, making it more
suitable for multi-label learning.
2
Codes can be downloaded from http://www.cse.msu.edu/? bucakser/ML-MKL-SA.rar
5
Table 1: Classification accuracy (AUC) and running times (second) of all ML-MKL algorithms on three data sets.
Abbreviations SA, GMKL, Sum, Simple, VSKL, AVG stand for ML-MKL-SA, Generalized MKL, ML-MKL-Sum,
SimpleMKL, variable sparsity kernel learning and average kernel, respectively
SA
0.80
0.75
0.50
GMKL
0.79
0.75
0.49
Accuracy (AUC)
Sum Simple
0.80 0.78
0.74 0.74
0.47 0.42
VSKL
0.77
0.74
0.46
AVG
0.77
0.72
0.45
SA
191.17
245.10
1329.40
1
1
1
0.8
0.8
0.4
0.2
0
0
200
400
600
800
time(sec)
ML-MKL-SA
kernel coefficients
0.6
Training Time (sec)
Sum
Simple
1814.50 9869.40
890.65
11549.00
1372.60 18536.37
0.8
0.8
kernel coefficients
kernel coefficients
1
GMKL
18292.00
2586.90
30333.14
0.6
0.4
0.2
0
0
0.5
1
time(sec)
1.5
0.6
0.4
0.2
0
2
kernel coefficients
dataset
CALTECH-101
VOC2006
VOC2007
0
500
1000
time(sec)
4
x 10
GMKL
ML-MKL-Sum
1500
VSKL
21266.05
7368.27
11370.48
AVG
N/A
N/A
N/A
0.6
0.4
0.2
0
0
0.5
1
time(sec)
1.5
2
4
x 10
VSKL
Figure 1: The evolution of kernel weights over time for CALTECH-101 data set. For GMKL and VSKL, the curves
display the kernel weights that are averaged over all the classes since a different kernel combination is learnt for each
class.
3.2
Kernels
We extracted 9 kernels for Caltech-101 data set by using the software provided in [28]. Three different feature
extraction methods are used for kernel construction: (i) GB: geometric blur descriptors are applied to the detected
keypoints [29]; RBF kernel is used in which the distance between two images is computed by averaging the distance
of the nearest descriptor pairs for the image pair. (ii) PHOW gray/color: keypoints based on dense sampling; SIFT
descriptors are quantized to 300 words and spatial histograms with 2x2 and 4x4 subdivisions are built to generate
chi-squared kernels [30]. (iii) SSIM: self-similarity features taken from [31] are used and spatial histograms based on
300 visual words are used to form the chi-squared kernel.
For VOC data sets, a different procedure, based on the reports of VOC challenges [1], is used to construct multiple
visual dictionaries, and each dictionary results in a different kernel. To obtain multiple visual dictionaries, we deploy
(i) three keypoint detectors, i.e., dense sampling, HARHES [32] and HESLAP [33], (ii) two keypoint descriptors,
i.e., SIFT [33] and SPIN [34]), (iii) two different numbers of visual words, i.e., 500 and 1, 000 visual words, (iv)
two different kernel functions, i.e., linear kernel and chi-squared kernel. The bandwidth of the chi-squared kernels
is calculated using the procedure in [25]. Using the above variants in visual dictionary construction, we constructed
22 kernels for both VOC2007 and VOC2006 data sets. In addition to the K-means implementation in [28], we also
applied a hierarchical clustering algorithm [35] to descriptor quantization for VOC 2007 data set, leading to four more
kernels for VOC2007 data set.
3.3
Baseline Methods
We first compare the proposed algorithm ML-MKL-SA to the following MKL algorithms that learn a different kernel
combination for each class: (i) Generalized multiple kernel learning method (GMKL) [25], which reports promising
results for object recognition, (ii) SimpleMKL [10], which learns the kernel combination by a subgradient approach
and (iii) Variable Sparsity Kernel Learning (VSKL), a miror-prox descent based algorithm for MKL [36]. We also
compare ML-MKL-SA to ML-MKL-Sum, which learns a kernel combination shared by all classes as described in
Section 2 using the optimization method in [21]. In all implementations of ML multiple kernel learning algorithms,we
use LIBSVM implementation of one-versus-all SVM where needed.
3.4
Experimental Results
To evaluate the effectiveness of different algorithms for multi-label multiple kernel learning, we first compute the area
under precision-recall curve (AUC) for each class, and report the value of AUC averaged over all the classes. We
6
0.8
0.5
0.75
0.48
0.74
0.73
AUC
0.76
AUC
AUC
0.78
0.72
0.74
0.72
500
1000
time(sec)
1500
0.44
0.71
ML?MKL?SA
ML?MKL?SUM
0
0.42
ML?MKL?SA
ML?MKL?SUM
0.7
2000
0.46
0
200
CALTECH-101
400
600
time(sec)
800
ML?MKL?SA
ML?MKL?SUM
200
1000
VOC-2006
400
600
800
time(sec)
1000
1200
1400
VOC-2007
Figure 2: The evolution of classification accuracy over time for ML-MKL-SA and ML-MKL-Sum on three data sets
0.81
0.84
?=0.01
?=0.001
?=0.0001
0.805
0.82
AUC
AUC
0.8
0.795
0.8
0.79
?=0
?=0.2
?=0.6
?=1
0.785
0.78
0.775
50
100
150
200
250
number of iterations
300
350
0.78
0.76
400
200
400
600
800
1000
1200
number of iterations
Figure 3: Classification accuracy (AUC) of the proposed
algorithm Ml-MKL-SA on CALTECH-101 using different values of ? (for ?p = ?? = 0.01).
Figure 4: Classification accuracy (AUC) of the proposed
algorithm Ml-MKL-SA on CALTECH-101 using different values of ?p = ?? = ? for (? = 0).
evaluate the efficiency of algorithms by their running times for training. All methods are coded in MATLAB and
are implemented on machines with 2 dual-core AMD Opterons running at 2.2GHz, 8GB RAM and linux operating
system.
pt?1
For the proposed method, itarations stop when pt ?b
is smaller than 0.01. Unless stated, the smoothing parameter
bt
p
? is set to be 0.2. For simplicity we take ? = ?p = ?? in all the following experiments. Step size ? is chosen as 0.0001
for CALTECH-101 data set and 0.001 for VOC data sets in order to achieve the best computational efficiency.
b
Table 1 summarizes the classification accuracies (AUC) and the running times of all the algorithms over the three
data sets. We first note that the proposed MKL method for multi-labeled data, i.e., ML-MKL-SA, yields the best
performance among the methods in comparison, which justifies the assumption of using the same kernel combination
for all the classes. Note that a simple approach that uses the average of all kernels yields reasonable performance,
although its classification accuracy is significantly worse than the proposed approach ML-MKL-SA. Second, we
observe that except for the average kernel method that does not require learning the kernel combination weights, MLMKL-SA and ML-MKL-Sum are significantly more efficient than the other baseline approaches. This is not surprising
as ML-MKL-SA and ML-MKL-Sum compute a single kernel combination for all classes. Third, compared to MLMKL-Sum, we observe that ML-MKL-SA is overall more efficient, and significantly more efficient for CALTECH101 data set. This is because the number of classes in CALTECH-101 is significantly larger than that of the two VOC
challenge data sets. This result further confirms that the proposed algorithm is scalable to the data sets with a large
number of classes.
Fig. 1 shows the change in the kernel weights over time for the proposed method and the three baseline methods (i.e.,
ML-MKL-Sum, GMKL, and VSKL) on CALTECH-101 data set. We observe that, overall, ML-MKL-SA shares a
similar pattern as GMKL and VSKL in the evolution curves of kernel weights, but is ten times faster than the two
baseline methods. Although ML-MKL-Sum is significantly more efficient than GMKL and VSKL, the kernel weights
learned by ML-MKL-Sum vary significantly, particularly at the beginning of the learning process, making it a less
stable algorithm than the proposed algorithm ML-MKL-SA. To further compare ML-MKL-SA with ML-MKL-Sum,
in Fig. 2, we show how the classification accuracy is changed over time for both methods for all three data sets.
We again observe the unstable behavior of ML-MKL-Sum: the classification accuracy of ML-MKL-Sum could vary
significantly over a relatively short period of time, making it less desirable method for MKL.
7
To evaluate the sensitivity of the proposed method to parameters ? and ?, we conducted experiments with varied
values for the two parameters. Fig. 3 shows how the classification accuracy (AUC) of the proposed algorithm changes
over iterations on CALTECH-101 using four different values of ?. We observe that the final classification accuracy
is comparable for different values of ?, demonstrating the robustness of the proposed method to the choice of ?. We
also note that the two extreme cases, i.e, ? = 0 and ? = 1, give the worst performance, indicating the importance of
choosing an optimal value for ?. Fig. 4 shows the classification accuracy for three different values of ? on CALTECH101 data set. We observe that the proposed algorithm achieves similar classification accuracy when ? is set to be a
relatively small value (i.e., ? = 0.001 and ? = 0.0001). This result demonstrates that the proposed algorithm is in
general insensitive to the choice of step size (?).
4
Conclusion and Future Work
In this paper, we present an efficient optimization framework for multi-label multiple kernel learning that combines
worst-case analysis with stochastic approximation. Compared to the other algorithms for ML-MKL, the key advantage
of the proposed algorithm is that its computational cost is sublinear in the number of classes, making it suitable for
handling a large number of classes. We verify the effectiveness of the proposed algorithm by experiments in object
recognition on several benchmark data sets. There are two directions that we plan to explore in the future. First, we
aim to further improve the efficiency of ML-MKL by reducing its dependence on the number of training examples and
speeding up the convergence rate. Second, we plan to improve the effectiveness and efficiency of multi-label learning
by exploring the correlation and structure among the classes.
5
Acknowledgements
This work was supported in part by National Science Foundation (IIS-0643494), US Army Research (ARO Award
W911NF-08-010403) and Office of Naval Research (ONR N00014-09-1-0663). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views
of NFS, ARO, and ONR. Part of Anil Jain?s research was supported by WCU (World Class University) program
through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology
(R31-2008-000-10008-0).
References
[1] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, ?The PASCAL Visual Object Classes Challenge
2009 (VOC2009) Results.? http://www.pascal-network.org/challenges/VOC/voc2009/workshop/index.html.
[2] G. Lanckriet, T. De Bie, N. Cristianini, M. Jordan, and W. Noble, ?A statistical framework for genomic data fusion,? Bioinformatics, vol. 20, pp. 2626?2635, 2004.
[3] S. Ji, L. Sun, R. Jin, and J. Ye, ?Multi-label multiple kernel learning,? in Proceedings of Neural Information Processings
Systems, 2008.
[4] G. Lanckriet, N. Cristianini, P. Bartlett, L. Ghaoui, and M. Jordan, ?Learning the kernel matrix with semidefinite programming,? Journal of Machine Learning Research, vol. 5, pp. 27?72, 2004.
[5] O. Chapelle and A. Rakotomamonjy, ?Second order optimization of kernel parameters,? in NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008.
[6] P. Gehler and S. Nowozin, ?On feature combination for multiclass object classification,? in Proceedings of the IEEE International Conference on Computer Vision, 2009.
[7] P. Gehler and S. Nowozin, ?Let the kernel figure it out: Principled learning of pre-processing for kernel classifiers,? in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[8] F. Bach, G. Lanckriet, and M. Jordan, ?Multiple kernel learning, conic duality, and the smo algorithm,? in Proceedings of the
21st International Conference on Machine Learning, 2004.
[9] S. Sonnenburg, G. Ratsch, and C. Schafer, ?A general and efficient multiple kernel learning algorithm,? in Proceedings of
Neural Information Processings Systems, pp. 1273?1280, 2006.
[10] A. Rakotomamonjy, F. Bach, Y. Grandvalet, and S. Canu, ?SimpleMKL,? Journal of Machine Learning Research, vol. 9,
pp. 2491?2521, 2008.
8
[11] Z. Xu, R. Jin, I. King, and M. R. Lyu, ?An extended level method for efficient multiple kernel learning,? in Proceedings of
Neural Information Processings Systems, pp. 1825?1832, 2008.
[12] Z. Xu, R. Jin, H. Yang, I. King, and M. R. Lyu, ?Simple and efficient multiple kernel learning by group lasso,? in Proceedings
of the 27th International Conference on Machine Learning, 2010.
[13] F. Bach, ?Consistency of the group lasso and multiple kernel learning,? Journal of Machine Learning Research, vol. 9,
pp. 1179?1225, 2008.
[14] Z. Xu, R. Jin, S. Zhu, M. R. Lyu, and I. King, ?Smooth optimization for effective multiple kernel learning,? in Proceedings of
the AAAI Conference on Artificial Intelligence, 2010.
[15] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet, ?More efficiency in multiple kernel learning,? in Proceedings of the
24th International Conference on Machine Learning, 2007.
[16] M. Kloft, U. Brefeld, A. Sonnenburg, and A. Zien, ?Comparing sparse and non-sparse multiple kernel learning,? in NIPS
Workshop on Understanding Multiple Kernel Learning Methods, 2009.
[17] M. Kloft, U. Brefeld, A. Sonnenburg, P. Laskov, K.-R. Muller, and A. Zien, ?Efficient and accurate lp-norm multiple kernel
learning,? in Proceedings of Neural Information Processings Systems, 2009.
[18] S. Hoi, M. Lyu, and E. Chang, ?Learning the unified kernel machines for classification,? in Proceedings of the Conference on
Knowledge Discovery and Data Mining, p. 187196, 2006.
[19] J. Ye, J. Chen, and J. S., ?Discriminant kernel and regularization parameter learning via semidefinite programming,? in
Proceedings of the International Conference on Machine Learning, p. 10951102, 2007.
[20] A. Zien and S. Cheng, ?Multiclass multiple kernel learning,? in Proceedings of the 24th International Conference on Machine
Learning, 2007.
[21] L. Tang, J. Chen, and J. Ye, ?On multiple kernel learning with multiple labels,? in Proceedings of the 21st International Jont
Conference on Artifical Intelligence, 2009.
[22] J. Yang, Y. Li, Y. Tian, L. Duan, and W. Gao, ?Group-sensitive multiple kernel learning for object categorization,? in Proceedings of the IEEE International Conference on Computer Vision, 2009.
[23] F. Orabona, L. Jie, and B. Caputo, ?Online-batch strongly convex multi kernel learning,? in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[24] A. Nemirovski, ?Prox-method with rate of convergence o(1/t) for variational inequalities with lipschitz continuous monotone
operators and smooth convex-concave saddle point problems,? SIAM Journal on Optimization, vol. 15, pp. 229?251, 2004.
[25] M. Varma and D. Ray, ?Learning the discriminative power-invariance trade-off,? in Proceedings of the IEEE International
Conference on Computer Vision, October 2007.
[26] M. Everingham, A. Zisserman, C. K. I. Williams, and L. Van Gool, ?The PASCAL Visual Object Classes Challenge 2006
(VOC2006) Results.? http://www.pascal-network.org/challenges/VOC/voc2006/results.pdf.
[27] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, ?The PASCAL Visual Object Classes Challenge
2007 (VOC2007) Results.? http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[28] A. Vedaldi and B. Fulkerson, ?VLFeat: An open and portable library of computer vision algorithms.? http://www.
vlfeat.org/, 2008.
[29] A. Berg, T. Berg, and J. Malik, ?Shape matching and object recognition using low distortion correspondences,? in Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[30] S. Lazebnik, C. Schmid, and P. Ponce, ?Beyond bag of features: Spatial pyramid matching for recognizing natural scene
categories,? in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[31] E. Shechtman and I. M., ?Matching local self-similarities across images and videos,? in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 2007.
[32] K. Mikolajczyk and C. Schmid, ?Distinctive image features from scale-invariant keypoints,? IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615?1630, 2005.
[33] D. Lowe, ?Distinctive image features from scale-invariant keypoints,? International Journal of Computer Vision, vol. 2, no. 60,
pp. 91?110, 2004.
[34] S. Lazebnik, C. Schmid, and P. Ponce, ?Sparse texture representation using affine-invariant neighborhoods,? in Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[35] M. Muja and D. G. Lowe, ?Fast approximate nearest neighbors with automatic algorithm configuration,? in Proceedings of
the International Conference on Computer Vision Theory and Application, pp. 331?340, INSTICC Press, 2009.
[36] J. Saketha Nath, G. Dinesh, S. Raman, C. Bhattacharyya, A. Ben-Tal, and K. Ramakrishan, ?On the algorithmics and applications of a mixed-norm based kernel learning formulation,? in Proceedings of Neural Information Processings Systems,
3,509 | 4,178 | Multivariate Dyadic Regression Trees for Sparse
Learning Problems
Han Liu and Xi Chen
School of Computer Science, Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We propose a new nonparametric learning method based on multivariate dyadic
regression trees (MDRTs). Unlike traditional dyadic decision trees (DDTs) or
classification and regression trees (CARTs), MDRTs are constructed using penalized empirical risk minimization with a novel sparsity-inducing penalty. Theoretically, we show that MDRTs can simultaneously adapt to the unknown sparsity
and smoothness of the true regression functions, and achieve the nearly optimal
rates of convergence (in a minimax sense) for the class of $(\alpha, C)$-smooth functions. Empirically, MDRTs can simultaneously conduct function estimation and
variable selection in high dimensions. To make MDRTs applicable for large-scale
learning problems, we propose a greedy heuristic. The superior performance of MDRTs is demonstrated on both synthetic and real datasets.
1 Introduction
Many application problems need to simultaneously predict several quantities using a common set
of variables, e.g. predicting multi-channel signals within a time frame, predicting concentrations of
several chemical constituents using the mass spectra of a sample, or predicting expression levels of
many genes using a common set of phenotype variables. These problems can be naturally formulated
in terms of multivariate regression.
In particular, let $\{(x^1, y^1), \ldots, (x^n, y^n)\}$ be $n$ independent and identically distributed pairs of data with $x^i \in \mathcal{X} \subset \mathbb{R}^d$ and $y^i \in \mathcal{Y} \subset \mathbb{R}^p$ for $i = 1, \ldots, n$. Moreover, we denote the $j$th dimension of $y$ by $y_j = (y_j^1, \ldots, y_j^n)^T$ and the $k$th dimension of $x$ by $x_k = (x_k^1, \ldots, x_k^n)^T$. Without loss of generality, we assume $\mathcal{X} = [0,1]^d$ and the true model on $y_j$ is

$$y_j^i = f_j(x^i) + \epsilon_j^i, \quad i = 1, \ldots, n, \qquad (1)$$

where $f_j : \mathbb{R}^d \to \mathbb{R}$ is a smooth function. In the sequel, let $f = (f_1, \ldots, f_p)$, where $f : \mathbb{R}^d \to \mathbb{R}^p$ is a $p$-valued smooth function. The vector form of (1) then becomes $y^i = f(x^i) + \epsilon^i$, $i = 1, \ldots, n$. We also assume that the noise terms $\{\epsilon_j^i\}_{i,j}$ are independently distributed and bounded almost surely.
This is a general setting of the nonparametric multivariate regression. From the minimax theory, we
know that estimating $f$ in high dimensions is very challenging. For example, when $f_1, \ldots, f_p$ lie in a $d$-dimensional Sobolev ball with order $\alpha$ and radius $C$, the best convergence rate for the minimax risk is $p \cdot n^{-2\alpha/(2\alpha+d)}$. For a fixed $\alpha$, such a rate can be very slow when $d$ becomes large.

However, in many real world applications, the true regression function $f$ may depend only on a small set of variables. In other words, the problem is jointly sparse:

$$f(x) = f(x_S) = (f_1(x_S), \ldots, f_p(x_S)),$$

where $x_S = (x_k : k \in S)$ and $S \subset \{1, \ldots, d\}$ is a subset of covariates with size $r = |S| \ll d$. If $S$ has been given, the minimax lower bound can be improved to $p \cdot n^{-2\alpha/(2\alpha+r)}$, which is the best rate that can be expected. For sparse learning problems, our task is to develop an estimator which adaptively achieves this faster rate of convergence without knowing $S$ in advance.
Previous research on these problems can be roughly divided into three categories: (i) parametric linear models, (ii) nonparametric additive models, and (iii) nonparametric tree models. The methods
in the first category assume that the true models are linear and use some block-norm regularization to induce jointly sparse solutions [16, 11, 13, 5]. If the linear model assumptions are correct,
accurate estimates can be obtained. However, given the increasing complexity of modern applications, conclusions inferred under these restrictive linear model assumptions can be misleading.
Recently, significant progress has been made on inferring nonparametric additive models with joint
sparsity constraints [7, 10]. For additive models, each $f_j(x)$ is assumed to have an additive form: $f_j(x) = \sum_{k=1}^{d} f_{jk}(x_k)$. Although they are more flexible than linear models, the additivity assumptions might still be too stringent for real world applications.
A family of more flexible nonparametric methods are based on tree models. One of the most popular
tree methods is the classification and regression tree (CART) [2]. It first grows a full tree by orthogonally splitting the axes at locally optimal splitting points, then prunes back the full tree to form
a subtree. Theoretically, CART is hard to analyze unless strong assumptions have been enforced
[8]. In contrast to CART, dyadic decision trees (DDTs) are restricted to only axis-orthogonal dyadic
splits, i.e. each dimension can only be split at its midpoint. For a broad range of classification problems, [15] showed that DDTs using a special penalty can attain nearly optimal rate of convergence in
a minimax sense. [1] proposed a dynamic programming algorithm for constructing DDTs when the
penalty term has an additive form, i.e. the penalty of the tree can be written as the sum of penalties
on all terminal nodes. Though intensively studied for classification problems, the dyadic decision
tree idea has not drawn much attention in the regression settings. One of the closest results we are
aware of is [4], in which a single response dyadic regression procedure is considered for non-sparse
learning problems. Another interesting tree model, ?Bayesian Additive Regression Trees (BART)?,
is proposed under Bayesian framework [6], which is essentially a ?sum-of-trees? model. Most of the
existing work adopt the number of terminal nodes as the penalty. Such penalty cannot lead to sparse
models since a tree with a small number of terminal nodes might still involve too many variables.
To obtain sparse models, we propose a new nonparametric method based on multivariate dyadic
regression trees (MDRTs). Similar to DDTs, MDRTs are constructed using penalized empirical
risk minimization. The novelty of MDRT is to introduce a sparsity-inducing term in the penalty,
which explicitly induces sparse solutions. Our contributions are two-fold: (i) Theoretically, we
show that MDRTs can simultaneously adapt to the unknown sparsity and smoothness of the true
regression functions, and achieve the nearly optimal rate of convergence for the class of $(\alpha, C)$-smooth functions. (ii) Empirically, to avoid computationally prohibitive exhaustive search in high
dimensions, we propose a two-stage greedy algorithm and its randomized version that achieve good
performance in both function estimation and variable selection. Note that our theory and algorithm
can be straightforwardly adapted to the univariate sparse regression problem, which is a special case of the multivariate one. To the best of our knowledge, this is the first time such a sparsity-inducing penalty has been incorporated into tree models for solving sparse regression problems.
The rest of this paper is organized as follows. Section 2 presents MDRTs in detail. Section 3
studies the statistical properties of MDRTs. Section 4 presents the algorithms which approximately
compute the MDRT solutions. Section 5 reports empirical results of MDRTs and their comparison
with CARTs. Conclusions are made in Section 6.
2 Multivariate Dyadic Regression Trees
We adopt the notations in [15]. A MDRT $T$ is a multivariate regression tree that recursively divides the input space $\mathcal{X}$ by means of axis-orthogonal dyadic splits. The nodes of $T$ are associated with hyperrectangles (cells) in $\mathcal{X} = [0,1]^d$. The root node corresponds to $\mathcal{X}$ itself. If a node is associated to the cell $B = \prod_{j=1}^{d} [a_j, b_j]$, after being dyadically split on the dimension $k$, the two children are associated to the subcells $B^{k,1}$ and $B^{k,2}$:

$$B^{k,1} = \left\{ x^i \in B \;\middle|\; x_k^i \le \frac{a_k + b_k}{2} \right\} \quad \text{and} \quad B^{k,2} = B \setminus B^{k,1}.$$

The set of terminal nodes of a MDRT $T$ is denoted as $\mathrm{term}(T)$. Let $B_t$ be the cell in $\mathcal{X}$ induced by a terminal node $t$; the partition induced by $\mathrm{term}(T)$ is denoted as $\pi(T) = \{B_t \mid t \in \mathrm{term}(T)\}$.
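To make the split rule concrete, here is a minimal Python/NumPy sketch (our own illustration, not from the paper); cells are represented by their endpoint arrays, and the function name is hypothetical:

import numpy as np

def dyadic_split(a, b, k):
    # Split the cell prod_j [a_j, b_j] at the midpoint of dimension k,
    # returning the endpoint arrays of the children B^{k,1} and B^{k,2}.
    mid = (a[k] + b[k]) / 2.0
    a1, b1 = a.copy(), b.copy()
    a2, b2 = a.copy(), b.copy()
    b1[k] = mid   # B^{k,1}: points with x_k <= midpoint
    a2[k] = mid   # B^{k,2}: the complement B \ B^{k,1}
    return (a1, b1), (a2, b2)

# Example: splitting the unit square [0,1]^2 on dimension 0 gives
# [0, 0.5] x [0, 1] and [0.5, 1] x [0, 1].
(a1, b1), (a2, b2) = dyadic_split(np.zeros(2), np.ones(2), k=0)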
For each terminal node t, we can fit a multivariate m-th order polynomial regression on data points
falling in Bt . Instead of using all covariates, such a polynomial regression is only fitted on a set of
active variables, which is denoted as $A(t)$. For each node $b \in T$ (not necessarily a terminal node),
A(b) can be an arbitrary subset of {1, . . . , d} satisfying two rules:
1. If a node is dyadically split perpendicular to the axis k, k must belong to the active sets of
its two children.
2. For any node $b$, let $\mathrm{par}(b)$ be its parent node; then $A(\mathrm{par}(b)) \subseteq A(b)$.
For a MDRT $T$, we define $\mathcal{F}_T^m$ to be the class of $p$-valued measurable $m$-th order polynomials corresponding to $\pi(T)$. Furthermore, for a dyadic integer $N = 2^L$, let $\mathcal{T}_N$ be the collection of all MDRTs such that no terminal cell has a side length smaller than $2^{-L}$.

Given integers $M$ and $N$, let $\mathcal{F}^{M,N}$ be defined as

$$\mathcal{F}^{M,N} = \bigcup_{0 \le m \le M} \; \bigcup_{T \in \mathcal{T}_N} \mathcal{F}_T^m.$$

The final MDRT estimator with respect to $\mathcal{F}^{M,N}$, denoted as $\hat{f}^{M,N}$, can then be defined as

$$\hat{f}^{M,N} = \arg\min_{f \in \mathcal{F}^{M,N}} \frac{1}{n} \sum_{i=1}^{n} \|y^i - f(x^i)\|_2^2 + \mathrm{pen}(f). \qquad (2)$$

To define in detail $\mathrm{pen}(f)$ for $f \in \mathcal{F}^{M,N}$, let $T$ and $m$ be the MDRT and the order of polynomials corresponding to $f$; $\mathrm{pen}(f)$ then takes the following form:

$$\mathrm{pen}(f) = \lambda \cdot \frac{p}{n} \Big( \log n \, (r_T + 1)^m (N_T + 1)^{r_T} + |\pi(T)| \log d \Big), \qquad (3)$$

where $\lambda > 0$ is a regularization parameter, $r_T = |\bigcup_{t \in \mathrm{term}(T)} A(t)|$ corresponds to the number of relevant dimensions, and

$$N_T = \min\{s \in \{1, 2, \ldots, N\} \mid T \in \mathcal{T}_s\}.$$
There are two terms in (3) within the parentheses. The latter one, penalizing the number of terminal nodes $|\pi(T)|$, has been commonly adopted in the existing tree literature. The former one is novel. Intuitively, it penalizes non-sparse models, since the number of relevant dimensions $r_T$ appears in
the exponent term. In the next section, we will show that this sparsity-inducing term is derived by
bounding the VC-dimension of the underlying subgraph of regression functions. Thus it has a very
intuitive interpretation.
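As an illustration of how (3) could be evaluated in practice, the following hedged sketch (function and argument names are ours, not the paper's) computes the penalty from the summary statistics of a tree:

import numpy as np

def mdrt_penalty(lam, n, d, p, m, r_T, N_T, n_leaves):
    # Sparsity-inducing penalty of Eq. (3):
    #   lam * (p/n) * ( log(n) (r_T+1)^m (N_T+1)^{r_T} + |pi(T)| log(d) )
    complexity = np.log(n) * (r_T + 1) ** m * (N_T + 1) ** r_T
    return lam * (p / n) * (complexity + n_leaves * np.log(d))

# Example: a tree with 4 active dimensions, resolution N_T = 8, and
# 10 terminal nodes, for n = 200, d = 100, p = 2, m = 1, lam = 0.1.
pen = mdrt_penalty(0.1, 200, 100, 2, 1, 4, 8, 10)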
3 Statistical Properties
In this section, we present theoretical properties of the MDRT estimator. Our main technical result
is Theorem 1, which provides the nearly optimal rate of the MDRT estimator.
To evaluate the algorithm performance, we use the $L_2$-risk with respect to the Lebesgue measure $\mu(\cdot)$, which is defined as $R(\hat{f}, f) = \mathbb{E} \sum_{j=1}^{p} \int_{\mathcal{X}} |\hat{f}_j(x) - f_j(x)|^2 \, d\mu(x)$, where $\hat{f}$ is the function estimate constructed from $n$ observed samples. Note that all the constants appearing in this section are generic constants, i.e., their values can change from one line to another in the analysis.
Let $\mathbb{N}_0 = \{0, 1, \ldots\}$ be the set of natural numbers; we first define the class of $(\alpha, C)$-smooth functions.

Definition 1 ($(\alpha, C)$-smoothness) Let $\alpha = q + \beta$ for some $q \in \mathbb{N}_0$, $0 < \beta \le 1$, and let $C > 0$. A function $g : \mathbb{R}^d \to \mathbb{R}$ is called $(\alpha, C)$-smooth if for every $\kappa = (\kappa_1, \ldots, \kappa_d)$, $\kappa_i \in \mathbb{N}_0$, $\sum_{j=1}^{d} \kappa_j = q$, the partial derivative $\frac{\partial^q g}{\partial x_1^{\kappa_1} \cdots \partial x_d^{\kappa_d}}$ exists and satisfies, for all $x, z \in \mathbb{R}^d$,

$$\left| \frac{\partial^q g(x)}{\partial x_1^{\kappa_1} \cdots \partial x_d^{\kappa_d}} - \frac{\partial^q g(z)}{\partial x_1^{\kappa_1} \cdots \partial x_d^{\kappa_d}} \right| \le C \cdot \|x - z\|_2^{\beta}.$$

In the following, we denote the class of $(\alpha, C)$-smooth functions by $\mathcal{D}(\alpha, C)$.
Assumption 1 We assume $f_1, \ldots, f_p \in \mathcal{D}(\alpha, C)$ for some $\alpha, C > 0$ and, for all $j \in \{1, \ldots, p\}$, $f_j(x) = f_j(x_S)$ with $r = |S| \ll d$.
Theorem 3.2 of [9] shows that the lower minimax rate of convergence for the class $\mathcal{D}(\alpha, C)$ is exactly the same as that for the class of $d$-dimensional Sobolev balls with order $\alpha$ and radius $C$.
Proposition 1 The proof of this proposition can be found in [9].

$$\liminf_{n \to \infty} \; \frac{1}{p} \, n^{2\alpha/(2\alpha+d)} \inf_{\hat{f}} \; \sup_{f_1, \ldots, f_p \in \mathcal{D}(\alpha, C)} R(\hat{f}, f) > 0.$$

Therefore, the lower minimax rate of convergence is $p \cdot n^{-2\alpha/(2\alpha+d)}$. Similarly, if the problem is jointly sparse with the index set $S$ and $r = |S| \ll d$, the best rate of convergence can be improved to $p \cdot n^{-2\alpha/(2\alpha+r)}$ when $S$ is given.
The following is another technical assumption needed for the main theorem.
Assumption 2 Let $1 \le \gamma < \infty$; we assume that

$$\max_{1 \le j \le p} \sup_{x} |f_j(x)| \le \gamma \quad \text{and} \quad \max_{1 \le i \le n} \|y^i\|_\infty \le \gamma \quad \text{a.s.}$$

This condition is mild. Indeed, we can even allow $\gamma$ to increase with the sample size $n$ at a certain rate. This will not affect the final result. For example, when $\{\epsilon_j^i\}_{i,j}$ are i.i.d. Gaussian random variables, this assumption easily holds with $\gamma = O(\sqrt{\log n})$, which only contributes a logarithmic term to the final rate of convergence.
The next assumption specifies the scaling of the relevant dimension r and ambient dimension d with
respect to the sample size n.
Assumption 3 $r = O(1)$ and $d = O(\exp(n^{\xi}))$ for some $0 < \xi < 1$.

Here, $r = O(1)$ is crucial, since even if $r$ increases at a logarithmic rate with respect to $n$, i.e., $r = O(\log n)$, it is hopeless to get any consistent estimator for the class $\mathcal{D}(\alpha, C)$, since $n^{-(1/\log n)} = 1/e$. On the other hand, the ambient dimension $d$ can increase exponentially fast with the sample size, which is a realistic scaling for high dimensional settings.
The following is the main theorem.
Theorem 1 Under Assumptions 1 to 3, there exists a positive number $\lambda$ that only depends on $\alpha$, $\gamma$ and $r$, such that, with

$$\mathrm{pen}(f) = \lambda \cdot \frac{p}{n} \Big( (\log n)(r_T + 1)^m (N_T + 1)^{r_T} + |\pi(T)| \log d \Big), \qquad (4)$$

and for large enough $M, N$, the solution $\hat{f}^{M,N}$ obtained from (2) satisfies

$$R(\hat{f}^{M,N}, f) \le c \cdot p \cdot \left( \frac{\log n + \log d}{n} \right)^{2\alpha/(2\alpha+r)}, \qquad (5)$$

where $c$ is some generic constant.
Remark 1 As discussed in Proposition 1, the obtained rate of convergence in (5) is nearly optimal
up to a logarithmic term.
Remark 2 Since the estimator defined in (2) does not need to know the smoothness $\alpha$ and the
sparsity level r in advance, MDRTs are simultaneously adaptive to the unknown smoothness and
sparsity level.
Proof of Theorem 1: To find an upper bound of $R(\hat{f}^{M,N}, f)$, we need to analyze and control the approximation and estimation errors separately. Our analysis closely follows the least squares regression analysis in [9] and some specific coding scheme of trees in [15].

Without loss of generality, we always assume that $\hat{f}^{M,N}$ obtained from (2) satisfies the condition that $\max_{1 \le j \le p} \sup_x |\hat{f}_j^{M,N}(x)| \le \gamma$. If this is not true, we can always truncate $\hat{f}^{M,N}$ at the rate $\gamma$ and obtain the desired result in Theorem 1.

Let $\mathcal{S}_T^m$ be the class of scalar-valued measurable $m$-th order polynomials corresponding to $\pi(T)$, and let $\mathcal{G}_T^m$ be the class of all subgraphs of functions of $\mathcal{S}_T^m$, i.e.,

$$\mathcal{G}_T^m = \left\{ (z, t) \in \mathbb{R}^d \times \mathbb{R} \;;\; t \le g(z) \;;\; g \in \mathcal{S}_T^m \right\}.$$

Let $V_{\mathcal{G}_T^m}$ be the VC-dimension of $\mathcal{G}_T^m$; we have the following lemma:
Lemma 1 Let $r_T$ and $N_T$ be defined as in (3); then

$$V_{\mathcal{G}_T^m} \le (r_T + 1)^m \cdot (N_T + 1)^{r_T}. \qquad (6)$$
Sketch of Proof: From Theorem 9.5 of [9], we only need to show that the dimension of $\mathcal{G}_T^m$ is upper bounded by the R.H.S. of (6). By the definition of $r_T$ and $N_T$, the result follows from a straightforward combinatorial analysis.
The next lemma provides an upper bound of the approximation error for the class $\mathcal{D}(\alpha, C)$.
Lemma 2 Let $f = (f_1, \ldots, f_p)$ be the true regression function; there exists a set of piecewise polynomials $h_1, \ldots, h_p \in \bigcup_{T \in \mathcal{T}_K} \mathcal{S}_T^m$ such that

$$\forall j \in \{1, \ldots, p\}, \quad \sup_{x \in \mathcal{X}} |f_j(x) - h_j(x)| \le c K^{-\alpha},$$

where $K \le N$ and $c$ is a generic constant that depends on $r$.
Sketch of Proof: This is a standard approximation result using multivariate piecewise polynomials. The main idea is based on a multivariate Taylor expansion of the function $f_j$ at a given point $x_0$. We then utilize Definition 1 to bound the remainder terms. For the sake of brevity, we omit the
technical details.
The next lemma is crucial: it provides an oracle inequality to bound the risk using an approximation term and an estimation term. Its analysis follows from a simple adaptation of Theorem 12.1 on page 227 of [9].

First, we define $\widetilde{R}(g, f) = \sum_{j=1}^{p} \int_{\mathcal{X}} |g_j(x) - f_j(x)|^2 \, d\mu(x)$.

Lemma 3 [9] Choose

$$\mathrm{pen}(f) \ge 5136 \cdot p \cdot \frac{\gamma^4}{n} \left( \log(120 e \gamma^4 n) V_{\mathcal{G}_T^m} + \frac{[[T]] \log 2}{2} \right) \qquad (7)$$

for some prefix code $[[T]] > 0$ satisfying $\sum_{T \in \mathcal{T}_N} 2^{-[[T]]} \le 1$. Then, we have

$$R(\hat{f}^{M,N}, f) \le 12840 \cdot p \cdot \frac{\gamma^4}{n} + 2 \inf_{T \in \mathcal{T}_N} \; \inf_{g \in \mathcal{F}^{M,N}} \left\{ p \cdot \mathrm{pen}(g) + \widetilde{R}(g, f) \right\}. \qquad (8)$$

One appropriate prefix code $[[T]]$ for each MDRT $T$ is proposed in [15], which specifies that $[[T]] = 3|\pi(T)| - 1 + (|\pi(T)| - 1) \log d / \log 2$. A simpler upper bound for $[[T]]$ is $[[T]] \le (3 + \log d / \log 2)\,|\pi(T)|$.
Remark 3 The constants derived in Lemma 3 will be pessimistic due to their very large numerical values. This may result in selecting oversimplified tree structures. In practice, we always use cross-validation to choose the tuning parameters.
To prove Theorem 1, first, using Assumption 1 and Lemma 2, we know that for any $K \le N$ there must exist generic constants $c_1, c_2, c_3$ and a function $f^*$ that is conformal with a MDRT $T^* \in \mathcal{T}_K$, satisfying $f^*(x) = f^*(x_S)$ and $|\pi(T^*)| \le (K+1)^r$, such that

$$\widetilde{R}(f^*, f) \le c_1 \cdot p \cdot K^{-2\alpha}, \qquad (9)$$

and

$$\mathrm{pen}(f^*) \le c_2 \frac{(\log n)(r+1)^M (K+1)^r}{n} + c_3 \frac{\log d \, (K+1)^r}{n}. \qquad (10)$$

The desired result then follows by plugging (9) and (10) into (8) and balancing these three terms.
4 Computational Algorithm
Exhaustive search for $\hat{f}^{M,N}$ in the MDRT space has similar complexity to that of DDTs and could be computationally very expensive. To make MDRTs scalable for high dimensional massive datasets, using similar ideas as CARTs, we propose a two-stage procedure: (1) we grow a full tree in a greedy manner; (2) we prune back the full tree to form the final tree. Before going into the details of the algorithm, we first introduce some necessary notations.
Given a MDRT $T$, denote the corresponding multivariate $m$-th order polynomial fit on $\pi(T)$ by $\hat{f}_T^m = \{\hat{f}_t^m\}_{t \in \pi(T)}$, where $\hat{f}_t^m$ is the $m$-th order polynomial regression fit on the partition $B_t$. For each $x^i$ falling in $B_t$, let $\hat{f}_t^m(x^i, A(t))$ be the predicted function value for $x^i$. We denote the local squared error (LSE) on node $t$ by $\widehat{R}^m(t, A(t))$:

$$\widehat{R}^m(t, A(t)) = \frac{1}{n} \sum_{x^i \in B_t} \|y^i - \hat{f}_t^m(x^i, A(t))\|_2^2.$$

It is worthwhile noting that $\widehat{R}^m(t, A(t))$ is calculated as the average with respect to the total sample size $n$, instead of the number of data points contained in $B_t$. The total MSE of the tree, $\widehat{R}(T)$, can then be computed by the following equation:

$$\widehat{R}(T) = \sum_{t \in \mathrm{term}(T)} \widehat{R}^m(t, A(t)).$$

The total cost of $T$, which is defined as the right hand side of (2), can then be written as:

$$\widehat{C}(T) = \widehat{R}(T) + \mathrm{pen}(\hat{f}_T^m). \qquad (11)$$
Our goal is to find the tree structure with the polynomial regression on each terminal node that can
minimize the total cost.
The first stage is tree growing, in which a terminal node $t$ is first selected in each step. We then perform one of two actions, a1 and a2:

a1: adding another dimension $k \notin A(t)$ to $A(t)$, and refitting the regression model on all data points falling in $B_t$;

a2: dyadically splitting $t$ perpendicular to a dimension $k \in A(t)$.

In each tree growing step, we need to decide which action to perform. For action a1, we denote the drop in LSE as:

$$\Delta \widehat{R}_1^m(t, k) = \widehat{R}^m(t, A(t)) - \widehat{R}^m(t, A(t) \cup \{k\}). \qquad (12)$$

For action a2, let $sl(t^{(k)})$ be the side length of $B_t$ on dimension $k \in A(t)$. If $sl(t^{(k)}) > 2^{-L}$, the dimension $k$ of $B_t$ can then be dyadically split. In this case, let $t_L^{(k)}$ and $t_R^{(k)}$ be the left and right children of node $t$. The drop in LSE takes the following form:

$$\Delta \widehat{R}_2^m(t, k) = \widehat{R}^m(t, A(t)) - \widehat{R}^m(t_L^{(k)}, A(t)) - \widehat{R}^m(t_R^{(k)}, A(t)). \qquad (13)$$

For each terminal node $t$, we greedily perform the action $a^*$ on the dimension $k^*$, which are determined by

$$(a^*, k^*) = \operatorname*{argmax}_{a \in \{1,2\},\, k \in \{1, \ldots, d\}} \Delta \widehat{R}_a^m(t, k). \qquad (14)$$
In the high dimensional setting, the above greedy procedure may not lead to the optimal tree, since successive locally optimal splits cannot guarantee the global optimum. Once an irrelevant dimension has been added in or split, the greedy procedure can never fix the mistake. To make the algorithm more robust, we propose a randomized scheme. Instead of greedily performing the action on the dimension that leads to the maximum drop in LSE, we randomly choose which action to perform according to a multinomial distribution. In particular, we normalize $\Delta \widehat{R}$ such that:

$$\sum_{a=1}^{2} \sum_{k} \Delta \widehat{R}_a^m(t, k) = 1. \qquad (15)$$

A sample $(a^*, k^*)$ is then drawn from $\mathrm{multinomial}(1, \Delta \widehat{R})$, and the action $a^*$ is performed on the dimension $k^*$. In general, when the randomized scheme is adopted, we need to repeat our algorithm many times to pick the best tree.
The second stage is cost complexity pruning. In each step, we either merge a pair of terminal nodes or remove a variable from the active set of a terminal node, such that the resulting tree has the smaller cost. We repeat this process until the tree becomes a single root node with an empty active set. The tree with the minimum cost in this process is returned as the final tree. The pseudocode for the growing stage and the cost complexity pruning stage are presented in the Appendix. Moreover, to avoid cells with too few data points, we pre-define a quantity $n_{\max}$. Let $n(t)$ be the number of data points falling into $B_t$; if $n(t) \le n_{\max}$, $B_t$ will no longer be split. It is worthwhile noting that we ignore those actions that lead to $\Delta \widehat{R} = 0$. In addition, whenever we perform the $m$th order polynomial regression on the active set of a node, we need to make sure it is not rank deficient.
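The following is a hedged Python sketch of the action-selection step just described; the dictionary of LSE drops is assumed to have been filled in via Eqs. (12)-(13), zero-drop actions removed, and all names are our own:

import numpy as np

def select_action(drops, randomized=False, rng=None):
    # drops maps (action, dimension) -> Delta R_a^m(t, k) > 0.
    keys = list(drops.keys())
    vals = np.array([drops[key] for key in keys], dtype=float)
    if not randomized:
        # Greedy choice of Eq. (14).
        return keys[int(np.argmax(vals))]
    # Randomized scheme: normalize as in Eq. (15) and sample one
    # (action, dimension) pair from the resulting multinomial.
    probs = vals / vals.sum()
    rng = rng or np.random.default_rng()
    return keys[rng.choice(len(keys), p=probs)]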
5 Experimental Results
In this section, we present numerical results for MDRTs applied to both synthetic and real datasets.
We compare five methods: [1] Greedy MDRT with M = 1 (MDRT(G, M=1)); [2] Randomized
MDRT with M = 1 (MDRT(R, M=1)); [3] Greedy MDRT with M = 0 (MDRT(G, M=0)); [4]
Randomized MDRT with M = 0 (MDRT(R, M=0)); [5] CART. For the randomized scheme, we run 50 random trials and pick the minimum cost tree.
As for CART, we adopt the MATLAB package from [12], which fits a piecewise constant on each terminal node with the cost complexity criterion $\widehat{C}(T) = \widehat{R}(T) + \eta \frac{p}{n} |\pi(T)|$, where $\eta$ is the tuning parameter playing the same role as $\lambda$ in (3).
Synthetic Data: For the synthetic data experiment, we consider the high dimensional compound
symmetry covariance structure of the design matrix with n = 200 and d = 100. Each dimension xj
is generated according to

$$x_j = \frac{W_j + tU}{1 + t}, \quad j = 1, \ldots, d,$$

where $W_1, \ldots, W_d$ and $U$ are i.i.d. sampled from Uniform(0,1). Therefore the correlation between $x_j$ and $x_k$ is $t^2/(1+t^2)$ for $j \ne k$.
We study three models as shown below: the first one is linear; the second one is nonlinear but
additive; the third one is nonlinear with three-way interactions. All these models only involve four
relevant variables. The noise terms, denoted as $\epsilon$, are independently drawn from a standard normal distribution.

Model 1: $y_1^i = 2x_1^i + 3x_2^i + 4x_3^i + 5x_4^i + \epsilon_1^i$, $\quad y_2^i = 5x_1^i + 4x_2^i + 3x_3^i + 2x_4^i + \epsilon_2^i$

Model 2: $y_1^i = \exp(x_1^i) + (x_2^i)^2 + 3x_3^i + 2x_4^i + \epsilon_1^i$, $\quad y_2^i = (x_1^i)^2 + 2x_2^i + \exp(x_3^i) + 3x_4^i + \epsilon_2^i$

Model 3: $y_1^i = \exp(2x_1^i x_2^i + x_3^i) + x_4^i + \epsilon_1^i$, $\quad y_2^i = \sin(x_1^i x_2^i) + (x_3^i)^2 + 2x_4^i + \epsilon_2^i$
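A minimal sketch of this data-generating process (our own code; only the Model 1 responses are shown):

import numpy as np

def synthetic_data(n=200, d=100, t=0.5, rng=None):
    # x_j = (W_j + t*U) / (1 + t), so corr(x_j, x_k) = t^2/(1+t^2).
    rng = rng or np.random.default_rng()
    W = rng.uniform(size=(n, d))
    U = rng.uniform(size=(n, 1))
    X = (W + t * U) / (1.0 + t)
    eps = rng.standard_normal((n, 2))
    y1 = 2*X[:, 0] + 3*X[:, 1] + 4*X[:, 2] + 5*X[:, 3] + eps[:, 0]
    y2 = 5*X[:, 0] + 4*X[:, 1] + 3*X[:, 2] + 2*X[:, 3] + eps[:, 1]
    return X, np.column_stack([y1, y2])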
We compare the performances of different methods using two criteria: (i) variable selection and (ii)
function estimation. For each model, we generate 100 designs and an equal-sized validation set per
design. For more detailed experiment protocols, we set $n_{\max} = 5$ and $L = 6$. By varying the values of $\lambda$ or $\eta$ from large to small, we obtain a full regularization path. The tree with the minimum MSE
on the validation set is then picked as the best tree. For criterion (i), if the variables involved in the
best tree are exactly the first four variables, the variable selection task for this design is deemed as
successful. The numerical results are presented in Table 1. For each method, the three quantities
reported in order are the number of success out of 100 designs, the mean and standard deviation of
the MSE on the validation set. Note that we omit ?MDRT? in Table 1 due to space limitations.
From Table 1, the performance of MDRT with M = 1 is dominantly better in both variable selection
and estimation than those of the others. For linear models, MDRT with M = 1 always selects the correct variables, even for large $t$. For variable selection, MDRT with M = 0 has a better
performance compared with CART due to its sparsity-inducing penalty. In contrast, CART is more
flexible in the sense that its splits are not necessarily dyadic. As a consequence, they are comparable
in function estimation. Moreover, the performance of the randomized scheme is slightly better than
its deterministic version in variable selection. Another observation is that, when t becomes larger,
although the performance of variable selection decreases on all methods, the estimation performance
becomes slightly better. This might be counter-intuitive at first sight. In fact, with the increase
of t, all methods tend to select more variables. Due to the high correlations, even the irrelevant
variables are also helpful in predicting the responses. This is an expected effect.
Real Data: In this subsection, we compare these methods on three real datasets. The first dataset
is the Chemometrics data (Chem for short), which has been extensively studied in [3]. The data are
from a simulation of a low density tubular polyethylene reactor with n = 56, d = 22 and p = 6.
Following the same procedures in [3], we log-transformed the responses because they are skewed.
The second dataset is Boston Housing¹ with n = 506, d = 10 and p = 1. We add 10 irrelevant variables randomly drawn from Uniform(0,1) to evaluate the variable selection performance. The third one, Space ga², is an election dataset with spatial coordinates on 3107 US counties. Our task
is to predict the x, y coordinates of each county given 5 variables regarding voting information.
¹ Available from the UCI Machine Learning Repository: http://archive.ics.uci.edu/ml
² Available from StatLib: http://lib.stat.cmu.edu/datasets/
Table 1: Comparison of Variable Selection and Function Estimation on Synthetic Datasets
(Each cell reports: number of successful variable selections out of 100 designs / mean MSE (std).)

Model 1      t = 0              t = 0.5            t = 1
R, M=1       100 / 2.03 (0.14)  100 / 2.05 (0.14)  100 / 2.05 (0.13)
G, M=1       100 / 2.08 (0.15)  100 / 2.06 (0.15)  100 / 2.05 (0.16)
R, M=0       100 / 5.84 (0.51)   76 / 5.42 (0.53)   19 / 5.40 (0.60)
G, M=0        97 / 5.74 (0.54)   68 / 5.36 (0.60)   20 / 5.56 (0.69)
CART          52 / 6.17 (0.55)   29 / 5.48 (0.51)    3 / 5.30 (0.58)

Model 2      t = 0              t = 0.5            t = 1
R, M=1       100 / 2.07 (0.13)   96 / 2.05 (0.15)   76 / 2.09 (0.14)
G, M=1       100 / 2.06 (0.15)   93 / 2.09 (0.17)   68 / 2.21 (0.19)
R, M=0        39 / 3.21 (0.26)   17 / 3.10 (0.25)    2 / 3.17 (0.30)
G, M=0        31 / 3.22 (0.28)   11 / 3.15 (0.26)    2 / 3.16 (0.26)
CART          25 / 3.52 (0.31)    5 / 3.20 (0.27)    1 / 3.16 (0.27)

Model 3      t = 0              t = 0.5            t = 1
R, M=1        98 / 2.68 (0.31)   84 / 2.56 (0.21)   65 / 2.51 (0.26)
G, M=1        95 / 2.67 (0.47)   86 / 2.52 (0.25)   50 / 2.62 (0.23)
R, M=0        75 / 3.90 (0.47)   32 / 3.63 (0.47)    3 / 3.75 (0.45)
G, M=0        63 / 4.03 (0.54)   32 / 3.60 (0.40)    4 / 3.88 (0.51)
CART          29 / 4.35 (0.73)   15 / 3.69 (0.38)    2 / 3.66 (0.38)
For Space ga, we normalize the responses to [0, 1]. Similarly, we add another 15 irrelevant variables randomly drawn from Uniform(0,1). For all these datasets, we scale the input variables into a unit
cube.
For evaluation purposes, each dataset is randomly split such that half of the data are used for training and the other half for testing. We run a 5-fold cross-validation on the training set to pick the best tuning parameters $\lambda^*$ and $\eta^*$. We then train MDRTs and CART on the entire training data using $\lambda^*$ and $\eta^*$. We repeat this process 20 times and report the mean and standard deviation of the testing MSE in Table 2. $n_{\max}$ is set to 5 for the first dataset and 20 for the latter two. For all datasets, we set $L = 6$. Moreover, for the randomized scheme, we run 50 random trials and pick the minimum cost tree.
Table 2: Testing MSE on Real Datasets
             R, M=1           G, M=1           R, M=0           G, M=0           CART
Chem         0.15 (0.09)      0.18 (0.12)      0.38 (0.18)      0.52 (0.06)      0.40 (0.09)
Housing      20.18 (2.94)     21.60 (2.83)     24.67 (2.05)     29.46 (1.95)     25.91 (3.05)
Space ga     0.054 (7.8e-4)   0.055 (8.0e-4)   0.068 (7.2e-4)   0.068 (9.2e-4)   0.064 (8.3e-4)
From Table 2, we see that MDRT with M = 1 has the best estimation performance. Moreover,
randomized scheme does improve the performance compared to the deterministic counterpart. In
particularly, such an improvement is quite significant when M = 0. The performance of MDRT(G,
M=0) is always worse than CART since CART can have more flexible splits. However, using randomized scheme, the performance of MDRT(R, M=0) achieves a comparable performance as CART.
As for variable selection of Housing data, in all the 20 runs, MDRT(G, M=1) and MDRT(R, M=1)
never select the artificially added variables. However, for the other three methods, nearly 10 out of
20 runs involve at least one extraneous variable. In particular, we compare our results with those
reported in [14]. They find that there are 4 (indus, age, dis, tax) irrelevant variables in the Housing
data. Our experiments confirm this result since in 15 out of the 20 trials, MDRT(G, M=1) and
MDRT(R, M=1) never select these four variables. Similarly, for Space ga data, there are only 2 and
1 times that MDRT(G, M=1) and MDRT(R, M=1) involve the artificially added variables.
6 Conclusions
We propose a novel sparse learning method based on multivariate dyadic regression trees (MDRTs).
Our approach adopts a new sparsity-inducing penalty that allows simultaneous function estimation and variable selection. Theoretical analysis and practical algorithms have been developed.
To the best of our knowledge, it is the first time that such a penalty is introduced in the tree literature
for high dimensional sparse learning problems.
References
[1] G. Blanchard, C. Schäfer, Y. Rozenholc, and K.-R. Müller. Optimal dyadic decision trees. Machine Learning Journal, 66(2-3):209–241, 2007.
[2] Leo Breiman, Jerome Friedman, Charles J. Stone, and R.A. Olshen. Classification and regression trees. Wadsworth Publishing Co Inc, 1984.
[3] Leo Breiman and Jerome H. Friedman. Predicting multivariate responses in multiple linear
regression. J. Roy. Statist. Soc. B, 59:3, 1997.
[4] R. Castro, R. Willett, and R. Nowak. Fast rates in regression via active learning. NIPS, 2005.
[5] Xi Chen, Weike Pan, James T. Kwok, and Jamie G. Carbonell. Accelerated gradient method
for multi-task sparse learning problem. In ICDM, 2009.
[6] Hugh A. Chipman, Edward I. George, and Robert E. McCulloch. Bart: Bayesian additive regression trees. Technical report, Department of Mathematics and Statistics, Acadia University,
Canada, 2006.
[7] Jerome H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19:1?
67, 1991.
[8] S. Gey and E. Nedelec. Model selection for CART regression trees. IEEE Trans. on Info. Theory, 51(2):658–670, 2005.
[9] László Györfi, Michael Kohler, Adam Krzyżak, and Harro Walk. A Distribution-Free Theory
of Nonparametric Regression. Springer-Verlag, 2002.
[10] Han Liu, John Lafferty, and Larry Wasserman. Nonparametric regression and classification
with joint sparsity constraints. In NIPS. MIT Press, 2008.
[11] Han Liu and Jian Zhang. On the estimation consistency of the group lasso and its applications.
AISTATS, pages 376–383, 2009.
[12] Wendy L. Martinez and Angel R. Martinez. Computational Statistics Handbook with MATLAB.
Chapman & Hall CRC, 2 edition, 2008.
[13] G. Obozinski, M. J. Wainwright, and M. I. Jordan. High-dimensional union support recovery
in multivariate regression. In NIPS. MIT Press, 2009.
[14] Pradeep Ravikumar, Han Liu, John Lafferty, and Larry Wasserman. Spam: Sparse additive
models. In NIPS. MIT Press, 2007.
[15] C. Scott and R.D. Nowak. Minimax-optimal classification with dyadic decision trees. IEEE
Trans. on Info. Theory, 52(4):1335–1353, 2006.
[16] B.A. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 27:349–363, 2005.
3,510 | 4,179 | Implicitly Constrained Gaussian Process Regression
for Monocular Non-Rigid Pose Estimation
Raquel Urtasun
TTI Chicago
[email protected]
Mathieu Salzmann
ICSI & EECS, UC Berkeley
TTI Chicago
[email protected]
Abstract
Estimating 3D pose from monocular images is a highly ambiguous problem. Physical constraints can be exploited to restrict the space of feasible configurations. In
this paper we propose an approach to constraining the prediction of a discriminative predictor. We first show that the mean prediction of a Gaussian process
implicitly satisfies linear constraints if those constraints are satisfied by the training examples. We then show how, by performing a change of variables, a GP can
be forced to satisfy quadratic constraints. As evidenced by the experiments, our
method outperforms state-of-the-art approaches on the tasks of rigid and non-rigid
pose estimation.
1 Introduction
Estimating the 3D pose of an articulated body or of a deformable surface from monocular images is
one of the fundamental problems in computer vision. It is known to be highly ambiguous and therefore requires the use of prior knowledge to restrict the pose to feasible configurations. Throughout
the years, two main research directions have emerged to provide such knowledge: approaches that
rely on modeling the explicit properties of the object of interest, and techniques that learn these
properties from data.
Methods that exploit known physical properties have been proposed both for deformable shape
recovery [9, 15] and for articulated pose estimation [18, 7]. Unfortunately, in most cases, the constraints introduced to disambiguate the pose are quadratic and non-convex. This, for example, is
the case of fixed-length constraints [9, 15, 18] or unit norm constraints. As a consequence, such
constraints are hard to optimize and often yield solutions that are sub-optimal.
Learning a prior over possible poses seems an attractive alternative [13, 3, 16, 22]. However, these
priors are employed in generative approaches that require accurate initialization. In articulated pose
estimation [14, 1, 21], the need for initialization has often been prevented by relying on discriminative predictors that learn a mapping from image observations to 3D pose. Unfortunately, the
employed approaches typically assume that the output dimensions are independent given the inputs
and are therefore only adapted to cases where the outputs are weakly correlated. In pose estimation,
this independence assumption is in general violated, and these techniques yield solutions that are
far from optimal. Recently, [12] proposed to overcome this issue by imposing additional physical constraints on the pose estimated by the predictor. However, these constraints were imposed at
inference, which required to solve a non-convex optimization problem for each test example.
In this paper, we propose to make the predictor implicitly satisfy physical constraints. This lets
us overcome the issues related to the output independence assumption of discriminative methods
while avoiding the computational burden of enforcing constraints at inference. To this end, we first
show that the mean prediction of a Gaussian process implicitly satisfies linear constraints. We then
address the case of quadratic constraints by replacing the original unknowns of our problem with
quadratic unknowns under which the constraints are linear. We demonstrate the effectiveness of
our approach to predict rotations expressed either as quaternions under unit L2-norm constraints,
or as rotation matrices under orthonormality constraints, as well as to predict 3D non-rigid poses
under constant length constraints. Our experiments show that our approach significantly outperforms
Gaussian process regression, as well as imposing constraints at inference [12]. Furthermore, for high
dimensional inputs and large training sets, our approach is orders of magnitude faster than [12].
2 Constrained Gaussian Process Regression
In this section, we first review Gaussian process regression and then show that, for vector outputs,
linear constraints between the output dimensions are implicitly satisfied by the predictor. We then
propose a change in parameterization that enables us to incorporate quadratic constraints. Finally,
we rely on a simple factorization to recover the variables of interest for the pose estimation problem.
2.1 Gaussian Process Regression
A Gaussian process is a collection of random variables, any finite number of which have consistent joint Gaussian distributions [10]. Let $\mathcal{D} = \{(\mathbf{x}_i, y_i), i = 1, \cdots, N\}$ be a training set composed of inputs $\mathbf{x}_i$ and noisy outputs $y_i$ generated from a latent function $f(\mathbf{x})$ with i.i.d. Gaussian noise $\epsilon_i \sim \mathcal{N}(0, \sigma_n^2)$, such that $y_i = f(\mathbf{x}_i) + \epsilon_i$. Let $\mathbf{f} = [f(\mathbf{x}_1), \cdots, f(\mathbf{x}_N)]^T$ be the vector of function values, and $\mathbf{X} = [\mathbf{x}_1, \cdots, \mathbf{x}_N]^T$ be the inputs. GP regression assumes a GP prior over functions,

$$p(\mathbf{f}\,|\,\mathbf{X}) = \mathcal{N}(\mathbf{0}, \mathbf{K}), \qquad (1)$$

where $\mathbf{K}$ is a covariance matrix whose entries are given by a covariance function, $K_{i,j} = k(\mathbf{x}_i, \mathbf{x}_j)$.

Inference in the GP model is straightforward. Let $\mathbf{y} = [y_1, \cdots, y_N]^T$ be the vector of training outputs. Given a new input $\mathbf{x}_*$, the prediction $f(\mathbf{x}_*)$ follows a Gaussian distribution with mean $\mu(\mathbf{x}_*) = \mathbf{y}^T \mathbf{K}^{-1} \mathbf{k}_*$ and variance $\sigma(\mathbf{x}_*) = k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}_*^T \mathbf{K}^{-1} \mathbf{k}_*$.
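A minimal NumPy sketch of these predictive equations (our own illustration; the noise variance is assumed to have been added to the diagonal of K beforehand):

import numpy as np

def gp_predict(K, k_star, k_ss, Y):
    # K: N x N training covariance, k_star: N-vector k(X, x_*),
    # k_ss: scalar k(x_*, x_*), Y: N x D matrix of training outputs.
    alpha = np.linalg.solve(K, Y)                    # K^{-1} Y
    mean = k_star @ alpha                            # mu(x_*) = Y^T K^{-1} k_*
    var = k_ss - k_star @ np.linalg.solve(K, k_star)
    return mean, var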
The simplest way to extend a GP to deal with multiple outputs is to assume that, given the inputs, the
outputs are independent. However, for correlated output dimensions, this is a poor approximation.
Recent research has focused on learning the interactions between the output dimensions [6, 19, 4, 2].
In this paper, we take an alternative approach, since, for the problem at hand, the constraints are
known a priori. In particular, we show that for any input we can enforce the mean prediction of a
GP to implicitly satisfy the constraints linking the output dimensions, and thus implicitly model the
output correlations.
2.2 Linear Constraints
As mentioned above, we consider the problem of vector output regression and seek to learn a predictor able to model the constraints linking the different dimensions of the output. Let us first study the
case of linear relationships between the output dimensions. Here, we show that, if the training data
satisfies a set of linear constraints, the mean prediction of a GP implicitly satisfies these constraints.
Proposition 1. Let $\{\mathbf{y}_1, \cdots, \mathbf{y}_N\}$ be a set of training examples that satisfy the linear constraints $\mathbf{A}\mathbf{y}_i = \mathbf{b}$. For any input $\mathbf{x}_*$, the mean prediction of a Gaussian process $\boldsymbol{\mu}(\mathbf{x}_*)$ will also satisfy the constraints $\mathbf{A}\boldsymbol{\mu}(\mathbf{x}_*) = \mathbf{b}$.
Proof: Let $\widetilde{\mathbf{Y}} = [\tilde{\mathbf{y}}_1, \cdots, \tilde{\mathbf{y}}_N]^T$ be the matrix of mean-subtracted training examples. The prediction of a GP can be computed as the sum of the mean of the training data, $\bar{\mathbf{y}}$, and a linear combination of the $\tilde{\mathbf{y}}_i$, i.e., $\boldsymbol{\mu}(\mathbf{x}_*) = \bar{\mathbf{y}} + \widetilde{\mathbf{Y}}^T \mathbf{K}^{-1} \mathbf{k}_*$. The mean of the training data satisfies the constraints, since $\mathbf{A}\bar{\mathbf{y}} = \mathbf{A}(\frac{1}{N}\sum_{i=1}^{N} \mathbf{y}_i) = \frac{1}{N}\sum_{i=1}^{N} \mathbf{b} = \mathbf{b}$. Furthermore, any mean-subtracted training example $\tilde{\mathbf{y}}_i$ satisfies $\mathbf{A}\tilde{\mathbf{y}}_i = \mathbf{0}$, since $\mathbf{A}\tilde{\mathbf{y}}_i = \mathbf{A}\mathbf{y}_i - \mathbf{A}\bar{\mathbf{y}} = \mathbf{b} - \mathbf{b} = \mathbf{0}$. As a consequence, any linear combination $\widetilde{\mathbf{Y}}^T \mathbf{w}$ of the mean-subtracted training data satisfies $\mathbf{A}\widetilde{\mathbf{Y}}^T \mathbf{w} = \sum_i w_i \mathbf{A}\tilde{\mathbf{y}}_i = \mathbf{0}$. This, in particular, is the case when $\mathbf{w} = \mathbf{K}^{-1}\mathbf{k}_*$, and therefore $\mathbf{A}\boldsymbol{\mu}(\mathbf{x}_*) = \mathbf{A}(\bar{\mathbf{y}} + \widetilde{\mathbf{Y}}^T \mathbf{K}^{-1} \mathbf{k}_*) = \mathbf{b}$.
Thus, if the training examples satisfy linear constraints, the mean prediction of the GP will always
satisfy the constraints. Note that this result not only holds for GPs, but for any predictor whose
output is a linear combination of the training outputs. However, most of the physical constraints
that rule non-rigid motion are more complex than simple linear functions. In particular, quadratic
constraints are commonly used [9, 15].
2.3 Quadratic Constraints
We now show how we can enforce the prediction to satisfy quadratic constraints. To this end, we
propose to perform a change in parameterization such that the constraints become linear in the new
Figure 1: Samples from our datasets. (a) Square plane used for rotation estimation. (b) Similar
square deformed by assigning a random value to the angle between its facets. (c) Synthetically
generated inextensible mesh. (d) Image generated by texturing the mesh in (c). (e) Similar image
obtained with a more uniform texture. (f) Image from the HumanEva dataset [17] registered with
the method of [11].
variables. This can simply be achieved by replacing the original variables with their pairwise products. In cases where we do not expect the constraints to depend on all the quadratic variables (e.g.,
distance constraints between 3D points), we can consider only a subset of them. In this paper, we
will investigate three types of quadratic constraints involved in rigid and non-rigid pose estimation.
More formally, let $\mathbf{Z} \in \mathbb{R}^{P \times D}$ be a matrix encoding a training point. For rotations, when expressed in quaternion space, $P = 1$ and $D = 4$, and when expressed as rotation matrices, $P = 1$ and $D = 9$. In the case of a non-rigid pose expressed as a set of 3 dimensional points (e.g., human joints or mesh vertices), $P = 3$, and $D$ is the number of points representing the pose. Let $\mathbf{Q} \in \mathbb{R}^{D \times D}$ be the matrix encoding quadratic variables such that $\mathbf{Q} = \mathbf{Z}^T \mathbf{Z}$. Since by definition $\mathbf{Q}$ is symmetric, it is fully determined by its upper triangular part. Thus, we define a training point for our Gaussian process as the concatenation of the upper triangular elements of $\mathbf{Q}$, i.e.,

$$\mathbf{y} = [Q_{11}, \cdots, Q_{ij}, \cdots, Q_{DD}]^T \quad \text{with} \quad i \le j.$$
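A one-function sketch of this change of variables (our own helper name):

import numpy as np

def quad_features(Z):
    # Map Z (P x D) to the D(D+1)/2 upper-triangular entries of Q = Z^T Z.
    Q = Z.T @ Z
    return Q[np.triu_indices(Q.shape[0])]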
Note that, when $P = 1$, any quadratic equality constraint can be written as a linear equality constraint in terms of $\mathbf{y}$. As shown in the previous section, the prediction of a Gaussian process will always satisfy linear constraints if the training points satisfy the constraints. As a consequence, the variables we regress to will satisfy the quadratic constraints and, by construction, the matrix $\hat{\mathbf{Q}}$ built from the mean prediction of the GP will be symmetric.
However, to solve the pose estimation problem, we are interested in $\mathbf{Z}$, not in $\mathbf{y}$. Thus, we need an additional step that transforms the quadratic variables into the original variables. We propose to cast this problem as a matrix factorization problem and minimize the Frobenius norm between the factorization and the output of the GP. The solution to this problem can be obtained in closed form by computing the SVD of the matrix built from the predicted quadratic variables $\hat{\mathbf{y}} = \boldsymbol{\mu}(\mathbf{x}_*)$, i.e.,

$$\hat{\mathbf{Q}} = \begin{bmatrix} \hat{y}_1 & \hat{y}_2 & \cdots & \hat{y}_D \\ \hat{y}_2 & \cdots & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ \hat{y}_D & \cdots & \cdots & \hat{y}_{D(D+1)/2} \end{bmatrix} = \mathbf{V} \boldsymbol{\Sigma} \mathbf{V}^T.$$

The final solution is obtained by taking into account only the singular vectors corresponding to the $P$ largest singular values. Assuming that the values in $\boldsymbol{\Sigma}$ are ordered, this yields

$$\hat{\mathbf{Z}} = \sqrt{\boldsymbol{\Sigma}_{1:P,1:P}}\; \mathbf{V}_{:,1:P}^T, \qquad (2)$$
where the subscript $1\!:\!P$ denotes the first $P$ rows or columns of a matrix. Note that the GP does not guarantee that the predicted $\hat{\mathbf{Q}}$ has rank $P$. Therefore, we do not truly guarantee that $\hat{\mathbf{Z}}$ satisfies the constraints. However, as shown in our experiments, the violation of the constraints induced by the factorization is much smaller than the one produced by doing prediction in the original variables.
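The following sketch (ours) illustrates Eq. 2: the predicted vector is reassembled into a symmetric matrix and the top-P singular pairs are retained; since the GP does not guarantee rank P, the remaining singular values are simply discarded:

import numpy as np

def recover_Z(y_hat, D, P):
    # Rebuild the symmetric matrix Q_hat from its upper triangle.
    Q = np.zeros((D, D))
    iu = np.triu_indices(D)
    Q[iu] = y_hat
    Q = Q + Q.T - np.diag(np.diag(Q))
    # Q_hat = V Sigma V^T; keep the P largest singular values.
    U, s, Vt = np.linalg.svd(Q)
    return np.diag(np.sqrt(s[:P])) @ Vt[:P, :]   # Z_hat, P x D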
Note that the solution to the factorization of Eq. 2 is not unique. First, it is subject to $P$ sign ambiguities arising from taking the square root of $\boldsymbol{\Sigma}$. Second, when $P > 1$, the solution can only be determined up to an orthonormal transformation $\mathbf{T}$, since $(\mathbf{V}_{:,1:P}\mathbf{T})\,\boldsymbol{\Sigma}_{1:P,1:P}\,(\mathbf{V}_{:,1:P}\mathbf{T})^T = (\mathbf{V}_{:,1:P}\mathbf{T})\,\boldsymbol{\Sigma}_{1:P,1:P}\,(\mathbf{T}^T \mathbf{V}_{:,1:P}^T) = \mathbf{V}_{:,1:P}\,\boldsymbol{\Sigma}_{1:P,1:P}\,\mathbf{V}_{:,1:P}^T$. To overcome both sources of ambiguities, we rely on image information. Since we consider the case of rigid and non-rigid pose estimation, we can make the typical assumption that we have correspondences between 3D points on the object of interest and 2D image locations [8, 9]. The sign ambiguities result in $2^P$ discrete solutions. We disambiguate between them by choosing the one that yields the smallest reprojection error.
[Figure 2 plots: panels (a)-(d) report the mean 3D error [mm] and mean constraint violation as functions of the Gaussian noise variance and of the number of training examples, for PnP [8], GP, and our approach.]
Figure 2: Estimating the rotation of a plane. (Top) Mean reconstruction error and constraint
violation when parameterizing the rotations with quaternions. (Bottom) Similar plots when the
rotations were parameterized as rotation matrices. Note that our approach outperforms the baselines
and is insensitive to the parameterization used.
Note, however, that other types of image information, such as silhouettes or texture, could also be employed. To determine the global transformation of $\hat{\mathbf{Z}}$, we similarly rely on 3D-to-2D correspondences; finding a rigid transformation that minimizes the reprojection error of 3D points is a well-studied problem in computer vision, called the PnP problem. In practice, we employ the closed-form solution of [8] to estimate $\mathbf{T}$.
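A hedged sketch of the sign disambiguation just described; reproj_error is a hypothetical user-supplied function returning the 2D reprojection error of a candidate P x D shape:

import numpy as np
from itertools import product

def pick_signs(Z, reproj_error):
    # Row-wise sign flips leave Z^T Z unchanged, yielding 2^P candidates;
    # keep the one with the smallest reprojection error.
    best, best_err = None, np.inf
    for signs in product([-1.0, 1.0], repeat=Z.shape[0]):
        cand = np.diag(signs) @ Z
        err = reproj_error(cand)
        if err < best_err:
            best, best_err = cand, err
    return best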
3 Experimental Evaluation
In this section, we show our results on rigid and non-rigid reconstruction problems involving
quadratic constraints. Samples from the diverse datasets employed are depicted in Fig. 1. As our
error measure, we report the mean point-to-point distance between the recovered 3D shape and ground truth, averaged over 10 partitions for a fixed test set size of 500 examples. Furthermore, we also show error bars that represent ± one standard deviation computed over the 10 partitions. These error bars
are non-overlapping for all constraint violation plots, as well as for most of the reconstruction errors,
which shows that our results are statistically significant. For all experiments we used a covariance
function which is the sum of an RBF and a noise term, and fixed the width of the RBF to the mean squared distance between the training inputs and the noise variance to $\sigma_n^2 = 0.01$. Furthermore,
in cases where the number of training examples is smaller than the output dimensionality (i.e. for
large deformable meshes and for human poses), we performed principal component analysis on the
training outputs to speed up training. To entail no loss in the data, we only removed the components
with corresponding zero-valued eigenvalues.
3.1 Rotation of a Plane

First, we considered the case of inferring the rotation in 3D space of the square in Fig. 1(a) given noisy 2D image observations of its corners. Note that this is an instance of the PnP problem. We used two different parameterizations of the rotations: quaternions and rotation matrices. In the first case, the recovered quaternion must have unit norm, i.e., $\|\hat{\mathbf{Z}}\|_2 = 1$. In the second case, the recovered rotation matrix must be orthonormal, i.e., $\hat{\mathbf{Z}}^T \hat{\mathbf{Z}} = \mathbf{I}$.
Fig. 2(a,b) depicts the reconstruction errors obtained with quaternions (top) and rotation matrices
(bottom), as a function of the Gaussian noise variance on the 2D image locations when using a
training set of 100 examples (a), and as a function of the number of training examples for a Gaussian
noise variance of 5 (b). We compare the results of our approach to those obtained by a GP trained on
the original variables, as well as to the results of a state-of-the-art PnP method [8], which would be
the standard approach to solving this problem. In all cases, our approach outperforms the baselines.
More importantly, our approach performs equally well for all the parameterizations of the rotation.
Fig. 2(c,d) shows the mean constraint violation for both parameterizations. For quaternions, this
error is computed as the absolute difference between the norm of the recovered quaternion and 1.
For rotation matrices it is computed as the Frobenius norm of the difference between Ẑ^T Ẑ and the identity matrix.
[Figure 3 plots omitted: mean 3D error [mm] and mean constraint violation [%], as functions of the Gaussian noise variance and of the number of training examples, for GP, GP + trsf, Cstr GP [12], Cstr GP [12] + trsf, and our approach + trsf.]
Figure 3: Estimating the 3D shape of a 2 × 2 mesh from 2D image locations. (Top) Mean reconstruction error and constraint violation as a function of the input noise. The global transformation was estimated either (left) from the ground truth, or (middle) using a PnP method [8]. (Bottom) Similar errors shown as a function of the number of training examples.
[Figure 4 plots omitted: same quantities and methods as in Figure 3, for the 9 × 9 mesh.]
Figure 4: Estimating the 3D shape of a 9 × 9 mesh from 2D image locations. (Top) Mean reconstruction error and constraint violation as a function of the input noise. The global transformation was estimated either (left) from the ground truth, or (middle) using a PnP method [8]. (Bottom) Similar errors shown as a function of the number of training examples. Note that the global transformations estimated with the PnP method yield poor reconstructions. However, our approach performs best among those that use these transformations.
Note that in both cases, our approach better satisfies the quadratic constraints than the standard GP. This is especially true in the case of unit norm quaternions, where the results obtained with the GP strongly violate the constraints.
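Both violation measures are straightforward to compute; the following sketch (ours, with assumed shapes) states them explicitly:

    import numpy as np

    def quaternion_violation(q_hat):
        # Absolute difference between the recovered quaternion's norm and 1.
        return abs(np.linalg.norm(q_hat) - 1.0)

    def rotation_violation(Z_hat):
        # Frobenius norm of Z_hat^T Z_hat - I for a recovered 3 x 3 rotation matrix.
        return np.linalg.norm(Z_hat.T @ Z_hat - np.eye(3))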
3.2 Surface Deformations
Next, we considered the problem of estimating the shape of a deforming surface from a single
image. In this context, the output space is composed of the 3D locations of the vertices of the mesh
that represents the surface, and the quadratic constraints encode the fact that the length of the mesh
edges should remain constant. The constraint error measure was taken to be the average over all
edges of the percentage of length variation. We compare against two baselines, GP in the original
variables (i.e., 3D locations of mesh vertices), and the approach of [12] where the constraints are
explicitly enforced at inference. Since our approach only allows us to recover the shape up to a
global transformation, we show results estimating this transformation either from the ground-truth
data, which can be done by computing an SVD [20], or by applying a PnP method [8]. To make our
evaluation fair, we also computed similar global transformations for the baselines.
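For concreteness, the edge-length error measure can be written as follows, assuming the mesh is given as a vertex array plus an edge list (a sketch; the names are ours):

    import numpy as np

    def mean_edge_length_violation(verts_pred, verts_true, edges):
        # Average over all edges of the percentage of length variation.
        # verts_*: V x 3 arrays; edges: list of (i, j) vertex-index pairs.
        def lengths(V):
            return np.array([np.linalg.norm(V[i] - V[j]) for i, j in edges])

        l_pred, l_true = lengths(verts_pred), lengths(verts_true)
        return 100.0 * np.mean(np.abs(l_pred - l_true) / l_true)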
We tested our approach on the same square as before, but allowing it to deform by letting the edge between its two facets act as a hinge, as shown in Fig. 1(b). Doing so ensures that the length of the mesh edges remains constant.
[Figure 5 plots omitted: mean 3D error [mm] and mean constraint violation [%] as functions of the number of training examples, for GP, GP + trsf, Cstr GP [12], Cstr GP [12] + trsf, and our approach + trsf.]
Figure 5: Estimating the 3D shape of a 9 × 9 mesh from PHOG features. Mean reconstruction error and constraint violation obtained from (top) well-textured images (Fig. 1(d)), or (bottom) poorly-textured ones (Fig. 1(e)). The global transformation was estimated either (left) from the ground truth, or (middle) using a PnP method [8].
Figure 6: Non-rigid reconstruction from real images. Reconstructions of a piece of paper from
2D image locations. We show the recovered mesh overlaid in red on the original image, and a side
view of this mesh.
As before, the inputs to the GP, x, were taken to be
the 2D image locations of the corners of the square. Fig. 3 depicts the reconstruction error and
constraint violation as a function of the Gaussian noise variance added to the 2D image locations for
training sets composed of 100 training examples (top), and as a function of the number of training
examples for a Gaussian noise variance of 10 (bottom). Note that our approach is more robust
to input noise than the baselines. Furthermore, unlike the standard GP, our approach satisfies the
quadratic constraints.
We then tested our approach on the larger mesh shown in Fig. 1(c). In that case, the matrix Z ∈ R^{3×81}. We generated inextensible deformed mesh examples by randomly sampling the values of a
subset of the angles between the facets of the mesh. Fig. 4 depicts the results obtained when using
the 2D image locations as inputs. As before, we can see that our approach is more robust to input
noise than the baselines.¹ Note that the global transformations estimated with the PnP method tend
to be inaccurate and therefore yield poor results. However, our approach performs best among the
ones that utilize the PnP method. We can also notice that our approach better satisfies the constraints
than GP prediction in the original space. The small violation of the constraints is due to the fact that
our prediction is not guaranteed to be rank 3, and therefore the factorization may result in some loss.
We then considered the more general case of having images as inputs instead of the 2D locations of
the mesh vertices. For this purpose, we generated images such as those of Fig. 1(d,e) from which
we computed PHOG features [5]. As shown in Fig. 5, our approach outperforms the baselines for
all training set sizes.
To demonstrate our method?s ability to deal with real images, we reconstructed the deformations of
a piece of paper from a video sequence. We used the 2D image locations of the vertices of the mesh
as inputs, which were obtained by tracking the surface in 2D using template matching. For this
case, the training data was obtained by deforming a piece of cardboard in front of an optical motion
capture system. Results for some frames of the sequence are shown in Fig. 6. Note that, for small
deformations, the problem is subject to concave-convex ambiguities arising from the insufficient
perspective. As a consequence, the shape is less accurate than when the deformations are larger.
¹ In [12], they proposed to optimize either the pose directly, or the vector of kernel values k_*. The second choice requires having more training examples than the number of constraints. Since here this is not always the case, for this dataset we optimized the pose.
[Figure 7 plots omitted: mean 3D error [cm] as a function of the number of training examples, for the pose in the camera referential and in a common referential, using the features of [11], PHOG, and SIFT histograms; methods: GP, GP + trsf, Cstr GP [12], Cstr GP [12] + trsf, our approach + trsf, our approach + cstr.]
Figure 7: Human pose estimation from different image features. Mean reconstruction error
as a function of the number of training examples for 3 different feature types and with the pose
represented in 2 different referentials.
[Figure 8 plots omitted: mean constraint violation [%] as a function of the number of training examples for the features of [11], PHOG, and SIFT histograms; methods: GP human, GP cam, Cstr GP [12], our approach, our approach + cstr.]
Figure 8: Constraint violation in human pose estimation. Mean constraint violation for 3 different
image feature types. Note that the constrained GP [12] best satisfies the constraints, since it explicitly
enforces them at inference. However, our approach is more stable than the standard GP.
Figure 9: Human pose estimation from real images. We show the rectified image from [11] and
the pose recovered by our approach using PHOG features as input seen from a different viewpoint.
3.3 Human Pose Estimation
We also applied our method to the problem of estimating the pose of a human skeleton. To this end,
we used the HumanEva dataset [17], which consists of synchronized images and motion capture
data. In particular, we used the rectified images of [11] and relied on three different image features as
input: histograms of SIFT features, PHOG features, and the features of [11]. In this case Z ∈ R^{3×19}.
We performed experiments with two representations of the pose: all poses aligned to a common
referential, and all poses in the camera referential. We estimated the global transformation from
the ground-truth. As shown in Fig. 7, for all feature types our approach outperforms the baselines.
Fig. 8 shows the constraint violation for the different settings. Due to our parameterization, the
amount of constraint violation induced by our approach is independent of the pose referential. This
is in contrast with the standard GP, which is very sensitive to the representation. In addition, we
also enforced the constraints at inference, as in [12], but starting from our results. As can be
observed from the figures, while this reduced the constraint violation, it had very little influence on
the reconstruction error. Fig. 9 depicts some of our results obtained from PHOG features.
3.4 Running Time
We compared the running times of our algorithm to those of solving the non-convex constraints at inference [12]. As shown in Table 1, the running times of our algorithm are constant with respect to the overall size of the problem.
Training size          Constr GP [12]             Our approach
                       50      250     500        50    250   500
2 × 2 mesh (D = 4)     2.0     5.1     21.3       8.0   7.9   8.0
HumanEva (D = 19)      26.1    49.6    101.0      4.9   4.9   4.8
9 × 9 mesh (D = 81)    1664.9  1625.6  1599.8     8.7   8.9   9.0
Table 1: Running times comparison. Average running times per test example in milliseconds for
different datasets and different number of training examples. We show results for the constrained
GP of [12] and for our approach. Note that, as opposed to [12], our approach is relatively insensitive
to the number of training examples and to the dimension of the data.
[Figure 10 plots omitted: mean 3D error [mm] and mean constraint violation [%] as functions of the output Gaussian noise variance, for GP, GP + trsf, Cstr GP [12], Cstr GP [12] + trsf, and our approach + trsf.]
Figure 10: Robustness to output noise. Mean reconstruction error and constraint violation as
a function of the output noise on the training examples. As a pre-processing step, we projected
the noisy training examples to the closest shape that satisfies the constraints. We then trained all
approaches with this data. Note that our approach outperforms the baselines.
This is due to the fact that most of the computation time is spent doing the factorization and not the prediction. In contrast, enforcing constraints at inference is sensitive to the dimension of the data, as well as to the number of training examples.² Therefore, for large, high-dimensional training sets, our algorithm is several orders of magnitude faster than [12], and, as shown above, obtains similar or better accuracies.
3.5 Robustness to Noise in the Outputs
As shown in Section 2.2, the mean prediction of a GP satisfies linear constraints under the assumption that the training examples all satisfy these constraints. This suggests that our approach might
be sensitive to noise on the training outputs, y. To study this, we added Gaussian noise with variance ranging from 2 mm to 10 mm on the 3D coordinates of the 2 × 2 deformable mesh of 100 mm side (Fig. 1(b)). To overcome the effect of noise, we first pre-processed the training examples and
projected them to the closest shape that satisfies the constraints in a similar manner as in [12]. We
then used these rectified shapes as training data for our approach as well as for the baselines. Fig. 10
depicts the reconstruction error and constraint violation as a function of the output noise. We used
the image locations of the vertices with noise variance 10 as inputs, and N = 100 training examples.
Note that our approach outperforms the baselines. Furthermore, our pre-processing step improved
the results of all approaches compared to using the original noisy data. Note, however, that in the
case of extreme output noise, projecting the training examples on the constraint space might yield
meaningless results. This would have a negative impact on the learned predictor, and thus on the
performance of all the methods.
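The projection itself is not detailed above, so the sketch below uses one simple, plausible scheme: iteratively rescaling each edge toward its rest length, splitting the correction between the two endpoints. It should not be read as the exact procedure of [12]:

    import numpy as np

    def project_to_edge_lengths(verts, edges, rest_lengths, n_iters=200):
        # Heuristic projection of noisy vertices onto the set of shapes whose
        # edge lengths equal rest_lengths.
        V = verts.copy()
        for _ in range(n_iters):
            for (i, j), l0 in zip(edges, rest_lengths):
                d = V[j] - V[i]
                l = np.linalg.norm(d)
                if l > 1e-12:
                    corr = 0.5 * (l - l0) * d / l  # move both endpoints symmetrically
                    V[i] += corr
                    V[j] -= corr
        return V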
4 Conclusion
In this paper, we have proposed an approach to implicitly enforcing constraints in discriminative prediction. We have shown that the prediction of a GP always satisfies linear constraints if the training
data satisfies these constraints. From this result, we have proposed an effective method to enforce
quadratic constraints by changing the parameterization of the problem. We have demonstrated on
several rigid and non-rigid monocular pose estimation problems that our method outperforms GP
regression, as well as enforcing the constraints at inference [12]. Furthermore, we have shown that
our algorithm is very efficient, and makes real-time non-rigid reconstruction an achievable goal. In
the future, we intend to investigate other types of image information to estimate the global transformation, as well as study the use of our approach to tasks involving different constraints, such as
dynamics.
² For the last dataset, the running times of [12] are independent of N. This is due to the fact that, in this case, we optimized the pose directly (see footnote 1).
References
[1] A. Agarwal and B. Triggs. 3d human pose from silhouettes by relevance vector regression. In Conference
on Computer Vision and Pattern Recognition, 2004.
[2] M. Alvarez and N. D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In
Neural Information Processing Systems, pages 57-64. MIT Press, Cambridge, MA, 2009.
[3] V. Blanz and T. Vetter. A Morphable Model for the Synthesis of 3D Faces. In ACM SIGGRAPH, pages 187-194, Los Angeles, CA, August 1999.
[4] E. Bonilla, K. M. Chai, and C. Williams. Multi-task gaussian process prediction. In J. Platt, D. Koller,
Y. Singer, and S. Roweis, editors, Neural Information Processing Systems, pages 153-160, Cambridge,
MA, 2008. MIT Press.
[5] A. Bosch, A. Zisserman, and X. Munoz. Image classification using random forests and ferns. In International Conference on Computer Vision, 2007.
[6] P. Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, 1997.
[7] L. Herda, R. Urtasun, and P. Fua. Hierarchical Implicit Surface Joint Limits to Constrain Video-Based
Motion Capture. In European Conference on Computer Vision, Prague, Czech Republic, May 2004.
[8] F. Moreno-Noguer, V. Lepetit, and P. Fua. Accurate Non-Iterative O(n) Solution to the PnP Problem. In
International Conference on Computer Vision, Rio, Brazil, October 2007.
[9] M. Perriollat, R. Hartley, and A. Bartoli. Monocular template-based reconstruction of inextensible surfaces. In British Machine Vision Conference, 2008.
[10] J. Quinonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate gaussian process
regression. Journal of Machine Learning Research, pages 1935-1959, 2006.
[11] G. Rogez, J. Rihan, S. Ramalingam, C. Orrite, and P. Torr. Randomized Trees for Human Pose Detection.
In Conference on Computer Vision and Pattern Recognition, 2008.
[12] M. Salzmann and R. Urtasun. Combining discriminative and generative methods for 3d deformable
surface and articulated pose reconstruction. In Conference on Computer Vision and Pattern Recognition,
San Francisco, CA, June 2010.
[13] M. Salzmann, R. Urtasun, and P. Fua. Local deformation models for monocular 3d shape recovery. In
Conference on Computer Vision and Pattern Recognition, Anchorage, AK, June 2008.
[14] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In
International Conference on Computer Vision, Nice, France, 2003.
[15] S. Shen, W. Shi, and Y. Liu. Monocular template-based tracking of inextensible deformable surfaces
under l2-norm. In Asian Conference on Computer Vision, 2009.
[16] H. Sidenbladh, M. J. Black, and D. J. Fleet. Stochastic Tracking of 3D human Figures using 2D Image
Motion. In European Conference on Computer Vision, June 2000.
[17] L. Sigal and M. J. Black. Humaneva: Synchronized video and motion capture dataset for evaluation of
articulated human motion. Technical Report CS-06-08, Brown University, 2006.
[18] C. Sminchisescu and B. Triggs. Kinematic Jump Processes for Monocular 3D Human Tracking. In
Conference on Computer Vision and Pattern Recognition, volume I, page 69, Madison, WI, June 2003.
[19] E. Snelson, C. E. Rasmussen, and Z. Ghahramani. Warped Gaussian Processes. In Neural Information
Processing Systems. MIT Press, Cambridge, MA, 2004.
[20] S. Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 13(4), Apr. 1991.
[21] R. Urtasun and T. Darrell. Sparse Probabilistic Regression for Activity-independent Human Pose Inference. In Conference on Computer Vision and Pattern Recognition, Anchorage, AK, 2008.
[22] R. Urtasun, D. Fleet, A. Hertzman, and P. Fua. Priors for people tracking from small training sets. In
International Conference on Computer Vision, Beijing, China, October 2005.
| 4179 |@word deformed:2 middle:3 achievable:1 norm:8 seems:1 triggs:2 seek:1 covariance:3 lepetit:1 configuration:2 liu:1 salzmann:4 outperforms:9 recovered:6 assigning:1 written:1 must:2 mesh:21 chicago:2 partition:2 shape:12 enables:1 moreno:1 plot:2 generative:2 intelligence:1 parameterization:5 phog:7 plane:3 parameterizations:3 location:13 anchorage:2 become:1 qij:1 consists:1 manner:1 pnp:15 pairwise:1 multi:2 relying:1 little:1 estimating:9 cm:6 minimizes:1 finding:1 transformation:15 guarantee:2 berkeley:1 rihan:1 act:1 concave:1 platt:1 unit:4 yn:2 before:3 local:1 limit:1 consequence:4 encoding:2 ak:2 oxford:1 subscript:1 might:2 black:2 initialization:2 studied:1 china:1 suggests:1 factorization:7 statistically:1 averaged:1 unique:1 camera:2 enforces:1 practice:1 goovaerts:1 significantly:1 matching:1 pre:3 vetter:1 nb:25 context:1 applying:1 influence:1 optimize:2 imposed:1 demonstrated:1 yt:1 shi:1 straightforward:1 williams:1 starting:1 convex:4 focused:1 shen:1 recovery:2 rule:1 parameterizing:1 importantly:1 orthonormal:2 variation:1 coordinate:1 brazil:1 construction:1 gps:1 element:1 recognition:6 bottom:6 observed:1 capture:4 ensures:1 removed:1 icsi:1 mentioned:1 skeleton:1 cam:3 dynamic:1 trained:2 weakly:1 depend:1 solving:2 shakhnarovich:1 texturing:1 textured:2 joint:3 siggraph:1 represented:1 articulated:5 forced:1 fast:1 effective:1 choosing:1 whose:2 emerged:1 larger:2 solve:2 valued:1 triangular:2 ability:1 blanz:1 gp:138 noisy:4 final:1 sequence:2 eigenvalue:1 propose:5 reconstruction:18 interaction:1 product:1 aligned:1 combining:1 umeyama:1 poorly:1 deformable:6 roweis:1 frobenius:2 los:1 chai:1 reprojection:2 darrell:2 tti:2 object:2 spent:1 pose:40 bosch:1 eq:1 predicted:2 c:1 synchronized:2 direction:1 hartley:1 stochastic:1 human:16 cstr:59 require:1 proposition:1 rurtasun:1 hold:1 mm:21 considered:3 ground:5 overlaid:1 mapping:1 predict:2 lawrence:1 appr:47 smallest:1 purpose:1 estimation:17 sensitive:4 largest:1 mit:3 gaussian:36 always:4 pn:2 encode:1 june:4 rank:2 contrast:2 baseline:10 rio:1 inference:11 rigid:17 inaccurate:1 typically:1 koller:1 france:1 interested:1 issue:2 among:2 overall:1 classification:1 bartoli:1 priori:1 constrained:4 art:2 uc:1 having:2 sampling:1 represents:1 future:1 report:2 employ:1 randomly:1 composed:3 asian:1 n1:2 detection:1 interest:3 highly:2 investigate:2 kinematic:1 evaluation:4 violation:30 truly:1 extreme:1 accurate:3 kt:1 edge:4 tree:1 hertzman:1 cardboard:1 deformation:5 instance:1 column:1 modeling:1 facet:3 republic:1 vertex:6 entry:1 subset:2 deviation:1 predictor:8 uniform:1 front:1 eec:1 fundamental:1 international:4 randomized:1 probabilistic:1 synthesis:1 squared:1 ambiguity:4 satisfied:2 q11:1 opposed:1 corner:2 warped:1 account:1 deform:1 satisfy:12 explicitly:2 bonilla:1 piece:3 performed:2 root:1 view:2 ayi:2 closed:2 doing:3 candela:1 red:1 recover:2 relied:1 minimize:1 square:7 accuracy:1 variance:21 yield:7 produced:1 fern:1 rectified:3 definition:1 against:1 involved:1 regress:1 proof:1 dataset:5 knowledge:2 dimensionality:1 humaneva:4 hashing:1 zisserman:1 improved:1 alvarez:1 fua:4 done:1 strongly:1 furthermore:7 implicit:1 correlation:1 hand:1 replacing:2 overlapping:1 effect:1 brown:1 orthonormality:1 true:1 equality:2 symmetric:2 deal:2 attractive:1 width:1 ambiguous:2 ramalingam:1 ay:1 tt:1 demonstrate:2 performs:3 motion:7 ranging:1 image:36 snelson:1 recently:1 common:2 rotation:17 physical:5 insensitive:2 volume:1 extend:1 linking:2 significant:1 cambridge:3 imposing:2 munoz:1 
similarly:3 had:1 stable:1 entail:1 surface:9 morphable:1 closest:2 recent:1 perspective:1 vt:2 yi:7 exploited:1 seen:1 additional:2 employed:4 determine:1 multiple:1 violate:1 technical:1 faster:2 prevented:1 equally:1 impact:1 prediction:20 involving:2 regression:12 vision:16 histogram:1 represent:1 kernel:1 agarwal:1 achieved:1 addition:1 singular:2 source:1 meaningless:1 unlike:1 induced:2 subject:2 tend:1 effectiveness:1 prague:1 synthetically:1 constraining:1 independence:2 xj:1 restrict:2 angeles:1 orrite:1 fleet:2 transforms:1 amount:1 referential:5 processed:1 simplest:1 reduced:1 percentage:1 millisecond:1 notice:1 sign:2 estimated:7 arising:2 per:1 diverse:1 discrete:1 changing:1 utilize:1 year:1 sum:2 enforced:2 beijing:1 angle:2 parameterized:1 raquel:1 throughout:1 groundtruth:1 ki:1 guaranteed:1 correspondence:2 quadratic:20 activity:1 adapted:1 constraint:100 constrain:1 speed:1 performing:1 optical:1 relatively:1 combination:3 poor:3 smaller:2 remain:1 wi:2 constr:1 projecting:1 taken:2 monocular:8 resource:1 remains:1 singer:1 letting:1 end:3 hierarchical:1 noguer:1 enforce:3 subtracted:3 alternative:2 robustness:2 convolved:1 original:9 assumes:1 denotes:1 top:6 running:6 hinge:1 madison:1 unifying:1 exploit:1 ghahramani:1 especially:1 intend:1 added:2 distance:3 sidenbladh:1 concatenation:1 quinonero:1 urtasun:6 enforcing:4 assuming:1 length:5 relationship:1 insufficient:1 unfortunately:2 october:2 negative:1 zt:1 unknown:2 perform:1 allowing:1 upper:2 observation:2 datasets:3 finite:1 viola:1 y1:3 frame:1 august:1 ttic:2 introduced:1 evidenced:1 cast:1 required:1 optimized:2 registered:1 learned:1 czech:1 geostatistics:1 address:1 able:1 bar:2 pattern:8 built:2 video:3 natural:1 rely:4 representing:1 mathieu:1 prior:5 review:1 l2:2 nice:1 fully:1 expect:1 loss:2 consistent:1 sigal:1 viewpoint:1 editor:1 row:1 last:1 rasmussen:1 side:2 template:3 taking:2 face:1 absolute:1 sparse:3 overcome:4 dimension:9 xn:2 collection:1 commonly:1 projected:2 san:1 jump:1 far:1 transaction:1 reconstructed:1 approximate:1 obtains:1 implicitly:9 silhouette:2 global:10 hist:2 francisco:1 discriminative:5 xi:4 latent:1 iterative:1 table:2 disambiguate:2 learn:3 robust:2 ca:2 forest:1 sminchisescu:1 complex:1 european:2 rogez:1 apr:1 main:1 noise:33 n2:2 fair:1 body:1 x1:2 fig:18 depicts:5 sub:1 inferring:1 explicit:1 british:1 sift:3 burden:1 texture:2 magnitude:2 depicted:1 simply:1 expressed:4 ordered:1 tracking:5 truth:5 satisfies:17 acm:1 ma:3 identity:1 goal:1 rbf:2 feasible:2 change:3 hard:1 determined:2 typical:1 torr:1 principal:1 called:1 svd:2 experimental:1 deforming:2 formally:1 people:1 quaternion:9 relevance:1 violated:1 incorporate:1 tested:2 avoiding:1 correlated:2 |
3,511 | 418 | A Method for the Efficient Design
of Boltzmann Machines for Classification
Problems
Ajay Gupta and Wolfgang Maass*
Department of Mathematics, Statistics, and Computer Science
University of Illinois at Chicago
Chicago IL, 60680
Abstract
We introduce a method for the efficient design of a Boltzmann machine (or
a Hopfield net) that computes an arbitrary given Boolean function f . This
method is based on an efficient simulation of acyclic circuits with threshold
gates by Boltzmann machines. As a consequence we can show that various
concrete Boolean functions f that are relevant for classification problems
can be computed by scalable Boltzmann machines that are guaranteed
to converge to their global maximum configuration with high probability
after constantly many steps.
1 INTRODUCTION
A Boltzmann machine ([AHS], [HS], [AK]) is a neural network model in which the
units update their states according to a stochastic decision rule. It consists of a
set U of units, a set C of unordered pairs of elements of U, and an assignment
of connection strengths S : C → R. A configuration of a Boltzmann machine is a map k : U → {0, 1}. The consensus C(k) of a configuration k is given by C(k) = Σ_{{u,v}∈C} S({u, v}) · k(u) · k(v). If the Boltzmann machine is currently in
configuration k and unit u is considered for a state change, then the acceptance probability for this state change is given by 1/(1 + e^{-ΔC/c}). Here ΔC is the change in the value of the consensus function C that would result from this state change of u, and c > 0 is a fixed parameter (the "temperature").
*This paper was written during a visit of the second author at the Department of Computer Science of the University of Chicago.
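To make the update rule concrete, here is a small sketch (ours) of one stochastic state change of a unit u; connections are stored as a map from unordered pairs of units to strengths, with a singleton {u} acting as a bias term:

    import math
    import random

    def update_unit(k, u, S, c):
        # k: dict mapping each unit to its state in {0, 1}.
        # S: dict mapping frozenset({u, v}) (or frozenset({u})) to a strength.
        # c: temperature, c > 0.
        delta = S.get(frozenset({u}), 0.0)  # contribution of the singleton connection
        for conn, strength in S.items():
            if u in conn and len(conn) == 2:
                (v,) = conn - {u}
                delta += strength * k[v]
        if k[u] == 1:
            delta = -delta  # turning u off removes these contributions from C(k)
        if random.random() < 1.0 / (1.0 + math.exp(-delta / c)):
            k[u] = 1 - k[u]  # accept the state change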
Assume that n units of a Boltzmann machine B have been declared as input units and m other units as output units. One says that B computes a function f : {0,1}^n → {0,1}^m if for any clamping of the input units of B according to some a ∈ {0,1}^n the only global maxima of the consensus function of the clamped Boltzmann machine are those configurations where the output units are in the states given by f(a).
Note that even if one leaves the determination of the connection strengths for a
Boltzmann machine up to a learning procedure ([AHS], [HS], [AK]), one has to know in advance the required number of hidden units, and how they should be
connected (see section 10.4.3 of [AK] for a discussion of this open problem).
Ad hoc constructions of efficient Boltzmann machines tend to be rather difficult
(and hard to verify) because of the cyclic nature of their "computations".
We introduce in this paper a new method for the construction of efficient Boltzmann
machines for the computation of a given Boolean function f (the same method can
also be used for the construction of Hopfield nets). We propose to construct first an
acyclic Boolean circuit T with threshold gates that computes f (this turns out to
be substantially easier). We show in section 2 that any Boolean threshold circuit T
can be simulated by a Boltzmann machine B(T) of the same size as T. Furthermore
we show in section 3 that a minor variation of B(T) is likely to converge very fast.
In Section 4 we discuss applications of our method for various concrete Boolean
functions .
2 SIMULATION OF THRESHOLD CIRCUITS BY BOLTZMANN MACHINES
A threshold circuit T (see [M], [PS], [R], [HMPST]) is a labeled acyclic directed graph. We refer to the number of edges that are directed into (out of) a node of T as the indegree (outdegree) of that node. Its nodes of indegree 0 are labeled by input variables x_i (i ∈ {1, ..., n}). Each node g of indegree l > 0 in T is labeled by some arbitrary Boolean threshold function F_g : {0,1}^l → {0,1}, where F_g(y_1, ..., y_l) = 1 if and only if Σ_{i=1}^l α_i y_i ≥ t (for some arbitrary parameters α_1, ..., α_l, t ∈ R; w.l.o.g. α_1, ..., α_l, t ∈ Z [M]). One views such a node g as a threshold gate that computes F_g. If m nodes of a threshold circuit T are in addition labeled as output nodes, one defines in the usual manner the Boolean function f : {0,1}^n → {0,1}^m that is computed by T.
We simulate T by the following Boltzmann machine B(T) = <U, C, S> (note that T has directed edges, while B(T) has undirected edges). We reserve for each node g of T a separate unit b(g) of B(T). We set
U := {b(g) | g is a node of T} and
C := {{b(g'), b(g)} | g', g are nodes of T so that either g' = g or g', g are connected by an edge in T}.
Consider an arbitrary unit b(g) of B(T). We define the connection strengths S({b(g)}) and S({b(g'), b(g)}) (for edges <g', g> of T) by induction on the length of the longest path in T from g to a node of T with outdegree 0.
If g is a gate of T with outdegree 0 then we define S({b(g)}) := -2t + 1, where t is the threshold of g, and we set S({b(g'), b(g)}) := 2α(<g', g>) (where α(<g', g>) is the weight of the directed edge <g', g> in T).
Assume that g is a threshold gate of T with outdegree > 0. Let g_1, ..., g_k be the immediate successors of g in T. Set w := Σ_{i=1}^k |S({b(g), b(g_i)})| (we assume that the connection strengths S({b(g), b(g_i)}) have already been defined). We define S({b(g)}) := -(2w + 2) · t + w + 1, where t is the threshold of gate g. Furthermore, for every edge <g', g> in T we set S({b(g'), b(g)}) := (2w + 2) · α(<g', g>).
Remark: It is obvious that for problems in TC⁰ (see section 4) the size of connection strengths in B(T) can be bounded by a polynomial in n.
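The inductive definition translates directly into code. The sketch below (our own rendering, with hypothetical data structures) processes the gates so that every gate is handled after all of its successors, exactly as the induction on the longest path to an outdegree-0 node requires:

    def build_boltzmann(gates, order):
        # gates: dict gate -> (threshold t, {predecessor: weight alpha});
        #        input variables appear only as predecessors, never as keys.
        # order: gates sorted so that each gate comes after all of its successors.
        successors = {}
        for g, (_, preds) in gates.items():
            for p in preds:
                successors.setdefault(p, []).append(g)

        bias, strength = {}, {}  # bias[g] = S({b(g)}); strength[(g', g)] = S({b(g'), b(g)})
        for g in order:
            t, preds = gates[g]
            # w = sum of |S({b(g), b(g_i)})| over the immediate successors g_i of g,
            # all of which are already assigned because successors come first.
            w = sum(abs(strength[(g, gi)]) for gi in successors.get(g, []))
            bias[g] = -(2 * w + 2) * t + w + 1  # reduces to -2t + 1 when outdegree is 0
            for p, alpha in preds.items():
                strength[(p, g)] = (2 * w + 2) * alpha  # reduces to 2*alpha at outputs
        return bias, strength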
Theorem 2.1 For any threshold circuit T the Boltzmann machine B(T) computes
the same Boolean function as T.
Proof of Theorem 2.1:
Let a ∈ {0,1}^n be an arbitrary input for circuit T. We write g(a) ∈ {0,1} for the output of gate g of T for circuit input a.
Consider the Boltzmann machine B(T)_a with the n units b(g) for input nodes g of T clamped according to a. We show that the configuration K_a of B(T)_a where b(g) is on if and only if g(a) = 1 is the only global maximum (in fact: the only local maximum) of the consensus function C for B(T)_a.
Assume for a contradiction that configuration K of B(T)_a is a global maximum of the consensus function C and K ≠ K_a. Fix a node g of T of minimal depth in T so that K(b(g)) ≠ K_a(b(g)) = g(a). By definition of B(T)_a this node g is not an input node of T. Let K' result from K by changing the state of b(g). We will show that C(K') > C(K), which is a contradiction to the choice of K.
We have (by the definition of C)
C(K') - C(K) = (1 - 2K(b(g))) · (S1 + S2 + S({b(g)})), where
S1 := Σ { K(b(g')) · S({b(g'), b(g)}) : <g', g> is an edge in T },
S2 := Σ { K(b(g')) · S({b(g), b(g')}) : <g, g'> is an edge in T }.
Let w be the parameter that occurs in the definition of S({b(g)}) (set w := 0 if g has outdegree 0). Then |S2| ≤ w. Let p_1, ..., p_m be the immediate predecessors of g in T, and let t be the threshold of gate g. Assume first that g(a) = 1. Then S1 = (2w + 2) · Σ_{i=1}^m α(<p_i, g>) · p_i(a) ≥ (2w + 2) · t. This implies that S1 + S2 > (2w + 2) · t - w - 1, and therefore S1 + S2 + S({b(g)}) > 0, hence C(K') - C(K) > 0.
If g(a) = 0 then we have Σ_{i=1}^m α(<p_i, g>) · p_i(a) ≤ t - 1, thus S1 = (2w + 2) · Σ_{i=1}^m α(<p_i, g>) · p_i(a) ≤ (2w + 2) · t - 2w - 2. This implies that S1 + S2 < (2w + 2) · t - w - 1, and therefore S1 + S2 + S({b(g)}) < 0. We have in this case K(b(g)) = 1, hence C(K') - C(K) = (-1) · (S1 + S2 + S({b(g)})) > 0. □
3 THE CONVERGENCE SPEED OF THE CONSTRUCTED BOLTZMANN MACHINES
We show that the constructed Boltzmann machines will converge relatively fast to
a global maximum configuration. This positive result holds both if we view B(T) as
a sequential Boltzmann machine (in which units are considered for a state change
one at a time), and if we view B(T) as a parallel Boltzmann machine (where several
units are simultaneously considered for a state change). In fact, it even holds for
unlimited parallelism, where every unit is considered for a state change at every
step. Although unlimited parallelism appears to be of particular interest in the
context of brain models and for the design of massively parallel machines, there are
hardly any positive results known for this case (see section 8.3 in [AK]).
If g is a gate in T with outdegree > 1 then the current state of unit b(g) of B(T) becomes relevant at several different time points (whenever one of the immediate successors of g is considered for a state change). This effect increases the probability that unit b(g) may cause an "error." Therefore the error probability of an output unit of B(T) does not just depend on the number of nodes in T, but on the number N(T) of nodes in a tree T' that results if we replace in the usual fashion the directed graph of T by a tree T' of the same depth (one calls a directed graph a tree if all of its nodes have outdegree ≤ 1).
To be precise, we define by induction on the depth of g for each gate g of T a tree Tree(g) that replaces the subcircuit of T below g. If g_1, ..., g_k are the immediate predecessors of g in T then Tree(g) is the tree which has g as root and Tree(g_1), ..., Tree(g_k) as immediate subtrees (it is understood that if some g_i has another immediate successor g' ≠ g then different copies of Tree(g_i) are employed in the definition of Tree(g) and Tree(g')).
We write |Tree(g)| for the number of nodes in Tree(g), and N(T) for Σ {|Tree(g)| : g is an output node of T}. It is easy to see that if T is synchronous (i.e. depth(g'') = depth(g') + 1 for all edges <g', g''> in T) then |Tree(g)| ≤ s^{d-1} for any node g in T of depth d which has s nodes in the subcircuit of T below g. Therefore N(T) is polynomial in n if T is of constant depth and polynomial size (this can be achieved for all problems in TC⁰, see Section 4).
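Both quantities follow the recurrence |Tree(g)| = 1 + Σ_j |Tree(g_j)| over the immediate predecessors of g, so they take only a few lines to compute (a sketch with assumed data structures):

    from functools import lru_cache

    def tree_sizes(preds, outputs):
        # preds: dict node -> list of immediate predecessors (inputs map to []).
        # outputs: the output nodes of T.  Returns ({g: |Tree(g)|}, N(T)).
        @lru_cache(maxsize=None)
        def size(g):
            return 1 + sum(size(p) for p in preds[g])

        return {g: size(g) for g in preds}, sum(size(g) for g in outputs)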
We write B_δ(T) for the variation of the Boltzmann machine B(T) of section 2 where each connection strength in B(T) is multiplied by δ (δ > 0). Equivalently one could view B_δ(T) as a machine with the same connection strengths as B(T) but a lower "temperature" (replace c by c/δ).
Theorem 3.1 Assume that T is a threshold circuit of depth d that computes a Boolean function f : {0,1}^n → {0,1}^m. Let B_δ(T)_a be the Boltzmann machine that results from clamping the input units of B_δ(T) according to a (a ∈ {0,1}^n).
Assume that 0 = q_0 < q_1 < ... < q_d are arbitrary numbers such that for every i ∈ {1, ..., d} and every gate g of depth i in T the corresponding unit b(g) is considered for a state change at some step during the interval (q_{i-1}, q_i]. There is no restriction on how many other units are considered for a state change at any step.
Let t be an arbitrary time step with t ≥ q_d. Then the output units of B(T) are at
the end of step t with probability ≥ 1 - N(T) · 1/(1 + e^{δ/c}) in the states given by f(a).
Remarks:
1. For δ := n this probability converges to 1 for n → ∞ if T is of constant depth and polynomial size.
2. The condition on the timing of state changes in Theorem 3.1 has been formulated in a very general fashion in order to make it applicable to all of the common types of Boltzmann machines. For a sequential Boltzmann machine (see [AK], section 8.2) one can choose q_i - q_{i-1} sufficiently large (for example polynomially in the size of T) so that with high probability every unit of B(T) is considered for a state change during the interval (q_{i-1}, q_i]. On the other hand, for a synchronous Boltzmann machine with limited parallelism ([AK], section 8.3) one may apply the result to the case where every unit b(g) with g of depth i in T is considered for a state change at step i (set q_i := i). Theorem 3.1 also remains valid for unlimited parallelism ([AK], section 8.3), where every unit is considered for a state change at every step (set q_i := i). In fact, not even synchronicity is required for Theorem 3.1, and it also applies to asynchronous parallel Boltzmann machines ([AK], section 8.3.2).
3. For sequential Boltzmann machines in general the available upper bounds for
their convergence speed are very unsatisfactory. In particular no upper bounds
are known which are polynomial in the number of units (see section 3.5 of
[AK]). For Boltzmann machines with unlimited parallelism one can in general
not even prove that they converge to a global maximum of their consensus
function (section 8.3 of [AK]).
Proof of Theorem 3.1: We prove by induction on i that for every gate g of depth i in T and every step t ≥ q_i the unit b(g) is at the end of step t with probability ≥ 1 - |Tree(g)| · 1/(1 + e^{δ/c}) in state g(a).
Assume that g_1, ..., g_k are the immediate predecessors of gate g in T. By definition we have |Tree(g)| = 1 + Σ_{j=1}^k |Tree(g_j)|. Let t' ≤ t be the last step before t at which b(g) has been considered for a state change. Since t ≥ q_i we have t' > q_{i-1}. Thus for each j = 1, ..., k we can apply the induction hypothesis to unit b(g_j) and step t' - 1 ≥ q_{depth(g_j)}. Hence with probability ≥ 1 - (|Tree(g)| - 1) · 1/(1 + e^{δ/c}) the states of the units b(g_1), ..., b(g_k) at the end of step t' - 1 are g_1(a), ..., g_k(a). Assume now that the unit b(g_j) is at the end of step t' - 1 in state g_j(a), for j = 1, ..., k. If b(g) is at the beginning of step t' not in state g(a), then a state change of unit b(g) would increase the consensus function by ΔC ≥ δ (independently of the current status of units b(g'') for immediate successors g'' of g in T). Thus b(g) accepts in this case the change to state g(a) with probability 1/(1 + e^{-ΔC/c}) ≥ 1/(1 + e^{-δ/c}) = 1 - 1/(1 + e^{δ/c}). On the other hand, if b(g) is already at the beginning of step t' in state g(a), then a change of its state would decrease the consensus by at least δ. Thus b(g) remains with probability ≥ 1 - 1/(1 + e^{δ/c}) in state g(a). The preceding considerations imply that unit b(g) is at the end of step t' (and hence at the end of step t) with probability ≥ 1 - |Tree(g)| · 1/(1 + e^{δ/c}) in state g(a). □
4 APPLICATIONS
The complexity class TC⁰ is defined as the class of all Boolean functions f : {0,1}* → {0,1}* for which there exists a family (T_n)_{n∈N} of threshold circuits of some constant depth so that for each n the circuit T_n computes f for inputs of length n, and so that the number of gates in T_n and the absolute value of the weights of threshold gates in T_n (all weights are assumed to be integers) are bounded by a polynomial in n ([HMPST], [PS]).
Corollary 4.1 (to Theorems 2.1, 3.1): Every Boolean function f that belongs to the complexity class TC⁰ can be computed by scalable (i.e. polynomial size) Boltzmann machines whose connection strengths are integers of polynomial size and which converge for state changes with unlimited parallelism with high probability in constantly many steps to a global maximum of their consensus function.
The following Boolean functions are known to belong to the complexity class TC⁰:
AND, OR, PARITY; SORTING, ADDITION, SUBTRACTION, MULTIPLICATION and DIVISION of binary numbers; DISCRETE FOURIER TRANSFORM,
and approximations to arbitrary analytic functions with a convergent rational power
series ([CVS], [R], [HMPST]).
Remarks:
1. One can also use the method from this paper for the efficient construction of a Boltzmann machine B_{p1,...,pk} that can decide very fast to which of k stored "patterns" p_1, ..., p_k ∈ {0,1}^n the current input x ∈ {0,1}^n to the Boltzmann machine has the closest "similarity."
For arbitrary fixed "patterns" p_1, ..., p_k ∈ {0,1}^n let f_{p1,...,pk} : {0,1}^n → {0,1}^k be the pattern classification function whose ith output bit is 1 if and only if the Hamming distance between the input x ∈ {0,1}^n and p_i is less or equal to the Hamming distance between x and p_j, for all j ≠ i.
We write HD(x, y) for the Hamming distance Σ_{i=1}^n |x_i - y_i| of strings x, y ∈ {0,1}^n. One has HD(x, y) = Σ_{i: y_i = 0} x_i + Σ_{i: y_i = 1} (1 - x_i), and therefore HD(x, p_j) - HD(x, p_l) = Σ_{i=1}^n β_i x_i + c for suitable coefficients β_i ∈ {-2, -1, 0, 1, 2} and c ∈ Z (that depend on the fixed patterns p_j, p_l ∈ {0,1}^n).
Thus there is a threshold circuit that consists of a single threshold gate which outputs 1 if HD(x, p_j) ≤ HD(x, p_l), and 0 otherwise.
The function f_{p1,...,pk} can be computed by a threshold circuit T of depth 2 whose jth output gate is the AND of k - 1 gates as above which check for l ∈ {1, ..., k} - {j} whether HD(x, p_j) ≤ HD(x, p_l) (note that the underlying graph of T is the same for any choice of the patterns p_1, ..., p_k). The desired Boltzmann machine B_{p1,...,pk} is the Boltzmann machine B(T) for this threshold circuit T.
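As a quick sanity check of this construction, the following sketch (ours) evaluates f_{p1,...,pk} directly rather than as a Boltzmann machine; the jth output fires exactly when the input is at least as close to p_j as to every other stored pattern:

    def hamming(x, y):
        return sum(xi != yi for xi, yi in zip(x, y))

    def pattern_classifier(patterns):
        # Returns f_{p1,...,pk}: bit j is 1 iff HD(x, p_j) <= HD(x, p_l) for all l != j.
        def f(x):
            dists = [hamming(x, p) for p in patterns]
            return [int(all(dj <= dl for l, dl in enumerate(dists) if l != j))
                    for j, dj in enumerate(dists)]
        return f

    f = pattern_classifier([[0, 0, 0, 0], [1, 1, 1, 1]])
    print(f([0, 0, 1, 0]))  # -> [1, 0]: the input is closer to the first pattern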
2. Our results are also of interest in the context of learning algorithms for Boltzmann machines. For example, the previous remark provides a single graph <U, C> of a Boltzmann machine with n input units, k output units, and k² - k hidden units, that is able to compute with a suitable assignment of
connection strengths (that may arise from a learning algorithm for Boltzmann machines) any function f_{p1,...,pk} (for any choice of p_1, ..., p_k ∈ {0,1}^n).
Similarly we get from Theorem 2.1 together with a result from [M] the graph <U, C> of a Boltzmann machine with n input units, n hidden units, and one output unit, that can compute with a suitable assignment of connection strengths any symmetric function f : {0,1}^n → {0,1} (f is called symmetric if f(x_1, ..., x_n) depends only on Σ_{i=1}^n x_i; examples of symmetric functions are AND, OR, PARITY).
Acknowledgment: We would like to thank Georg Schnitger for his suggestion to
investigate the convergence speed of the constructed Boltzmann machines.
References
[AK] E. Aarts, J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons (New York, 1989).
[AHS] D.H. Ackley, G.E. Hinton, T.J. Sejnowski, A learning algorithm for Boltzmann machines, Cognitive Science, 9, 1985, pp. 147-169.
[HS] G.E. Hinton, T.J. Sejnowski, Learning and relearning in Boltzmann machines, in: D.E. Rumelhart, J.L. McClelland, & the PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press (Cambridge, 1986), pp. 282-317.
[CVS] A.K. Chandra, L.J. Stockmeyer, U. Vishkin, Constant depth reducibility, SIAM J. Comput., 13, 1984, pp. 423-439.
[HMPST] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, G. Turan, Threshold circuits of bounded depth, to appear in J. of Compo and Syst. Sci. (for an
extended abstract see Proc. of the 28th IEEE Conf. on Foundations of
Computer Science, 1987, pp. 99-110).
[M] S. Muroga, Threshold Logic and its Applications, John Wiley & Sons (New York, 1971).
[PS]
I. Parberry, G. Schnitger, Relating Boltzmann machines to conventional
models of computation, Neural Networks, 2, 1989, pp. 59-67.
[R]
J. Reif, On threshold circuits and polynomial computation, Proc. of
the 2nd Annual Conference on Structure in Complexity Theory, IEEE
Computer Society Press, Washington, 1987, pp. 118-123.
| 418 |@word h:3 polynomial:9 nd:1 open:1 simulation:2 configuration:8 cyclic:1 series:1 ka:1 current:3 si:8 schnitger:2 written:1 john:2 synchronicity:1 chicago:3 hajnal:1 analytic:1 update:1 leaf:1 beginning:2 ith:1 compo:1 provides:1 node:22 constructed:3 predecessor:3 consists:2 prove:2 manner:1 introduce:2 rding:1 p1:1 brain:1 itree:7 becomes:1 bounded:3 underlying:1 circuit:18 substantially:1 string:1 turan:1 every:12 xd:1 unit:40 ly:1 appear:1 positive:2 before:1 understood:1 local:1 timing:1 sd:1 consequence:1 ak:11 path:1 au:1 limited:1 directed:6 acknowledgment:1 procedure:1 pudlak:1 get:1 viley:1 context:2 restriction:1 conventional:1 map:1 independently:1 contradiction:2 rule:1 his:1 hd:2 variation:2 construction:4 ixi:1 hypothesis:1 element:1 rumelhart:1 labeled:4 ackley:1 connected:2 ifl:1 decrease:1 complexity:4 depend:2 division:1 hopfield:2 various:2 fast:3 sejnowski:1 whose:3 say:1 otherwise:1 statistic:1 gi:3 g1:4 vishkin:1 transform:1 hoc:1 net:2 propose:1 relevant:2 convergence:3 p:3 converges:1 ac:2 stat:2 minor:1 implies:2 beg:12 qd:3 stochastic:1 exploration:1 successor:4 fix:1 microstructure:1 pl:1 hold:2 sufficiently:1 considered:11 cognition:1 reserve:1 proc:2 applicable:1 currently:1 teo:4 mit:1 rather:1 corollary:1 longest:1 unsatisfactory:1 check:1 hidden:3 classification:3 equal:1 construct:1 lyi:1 washington:1 outdegree:7 muroga:1 simultaneously:1 ve:1 suit:1 acceptance:1 interest:2 investigate:1 subtrees:1 edge:10 tree:14 reif:1 desired:1 minimal:1 boolean:13 zn:1 assignment:3 stored:1 gd:1 siam:1 together:1 concrete:2 choose:1 korst:1 cognitive:1 conf:1 szegedy:1 syst:1 unordered:1 coefficient:1 ad:1 depends:1 view:4 root:1 wolfgang:1 parallel:4 b6:2 il:1 ahs:3 comp:1 aarts:1 whenever:1 ed:1 definition:5 pp:6 obvious:1 proof:2 hamming:3 rational:1 appears:1 stockmeyer:1 furthermore:2 just:1 hand:2 qo:1 defines:1 effect:1 verify:1 hence:4 symmetric:3 maass:5 during:3 tn:4 temperature:2 consideration:1 common:1 belong:1 he:1 relating:1 refer:1 cambridge:1 cv:2 mccelland:1 mathematics:1 pm:1 similarly:1 illinois:1 iyi:1 similarity:1 gj:6 fii:1 closest:1 belongs:1 massively:1 binary:1 preceding:1 employed:1 subtraction:1 converge:5 determination:1 visit:1 inpu:1 qi:10 scalable:2 ajay:1 chandra:1 achieved:1 addition:2 interval:2 annealing:1 tend:1 undirected:1 call:1 integer:2 easy:1 zi:1 synchronous:2 whether:1 york:2 cause:1 hardly:1 remark:4 write:4 discrete:1 georg:1 group:1 threshold:23 changing:1 pj:4 graph:6 family:1 decide:1 decision:1 bit:1 bound:2 guaranteed:1 convergent:1 replaces:1 annual:1 strength:10 unlimited:5 declared:1 fourier:1 simulate:1 speed:3 relatively:1 department:2 according:3 son:2 remains:2 turn:1 discus:1 know:1 end:6 available:1 multiplied:1 apply:2 nen:1 gate:18 bp1:1 society:1 already:2 occurs:1 indegree:2 usual:2 gin:1 subcircuit:1 distance:3 separate:1 thank:1 simulated:2 sci:1 consensus:9 induction:4 length:2 equivalently:1 difficult:1 lg:2 ql:1 gk:6 design:6 boltzmann:46 upper:2 immediate:8 hinton:2 extended:1 precise:1 y1:1 pdp:1 arbitrary:9 pic:1 pair:1 required:2 connection:10 accepts:1 able:2 parallelism:6 below:2 pattern:5 power:1 suitable:2 imply:1 parberry:1 fpl:2 multiplication:1 suggestion:1 acyclic:3 foundation:1 degree:1 pi:15 gl:2 last:1 copy:1 asynchronous:1 parity:2 jth:1 ipl:1 absolute:1 fg:2 distributed:1 depth:16 valid:1 computes:7 author:1 ig:1 ec:1 polynomially:1 status:1 logic:1 global:7 assumed:1 xi:3 nature:1 pk:5 s2:6 arise:1 fashion:2 wiley:1 sub:1 clamped:2 theorem:9 gupta:4 exists:1 sequential:3 
clamping:2 relearning:1 sorting:1 easier:1 likely:1 applies:1 constantly:2 formulated:1 replace:2 change:19 hard:1 called:1 |
3,512 | 4,180 | A Reduction from Apprenticeship Learning to
Classification
Umar Syed*
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Robert E. Schapire
Department of Computer Science
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
We provide new theoretical results for apprenticeship learning, a variant of reinforcement learning in which the true reward function is unknown, and the goal
is to perform well relative to an observed expert. We study a common approach
to learning from expert demonstrations: using a classification algorithm to learn
to imitate the expert's behavior. Although this straightforward learning strategy is widely-used in practice, it has been subject to very little formal analysis. We prove that, if the learned classifier has error rate ε, the difference between the value of the apprentice's policy and the expert's policy is O(√ε). Further, we prove that this difference is only O(ε) when the expert's policy is close to optimal.
This latter result has an important practical consequence: Not only does imitating
a near-optimal expert result in a better policy, but far fewer demonstrations are
required to successfully imitate such an expert. This suggests an opportunity for
substantial savings whenever the expert is known to be good, but demonstrations
are expensive or difficult to obtain.
1 Introduction
Apprenticeship learning is a variant of reinforcement learning, first introduced by Abbeel & Ng [1]
(see also [2, 3, 4, 5, 6]), designed to address the difficulty of correctly specifying the reward function
in many reinforcement learning problems. The basic idea underlying apprenticeship learning is that
a learning agent, called the apprentice, is able to observe another agent, called the expert, behaving
in a Markov Decision Process (MDP). The goal of the apprentice is to learn a policy that is at least
as good as the expert's policy, relative to an unknown reward function. This is a weaker requirement
than the usual goal in reinforcement learning, which is to find a policy that maximizes reward.
The development of the apprenticeship learning framework was prompted by the observation that,
although reward functions are often difficult to specify, demonstrations of good behavior by an
expert are usually available. Therefore, by observing such an expert, one can infer information about
the true reward function without needing to specify it.
Existing apprenticeship learning algorithms have a number of limitations. For one, they typically
assume that the true reward function can be expressed as a linear combination of a set of known features. However, there may be cases where the apprentice is unwilling or unable to assume that the
rewards have this structure. Additionally, most formulations of apprenticeship learning are actually
harder than reinforcement learning; apprenticeship learning algorithms typically invoke reinforcement learning algorithms as subroutines, and their performance guarantees depend strongly on the
quality of these subroutines. Consequently, these apprenticeship learning algorithms suffer from the
same challenges of large state spaces, exploration vs. exploitation trade-offs, etc., as reinforcement
*Work done while the author was a student at Princeton University.
learning algorithms. This fact is somewhat contrary to the intuition that demonstrations from an
expert, especially a good expert, should make the problem easier, not harder.
Another approach to using expert demonstrations that has received attention primarily in the empirical literature is to passively imitate the expert using a classification algorithm (see [7, Section 4] for
a comprehensive survey). Classification is the most well-studied machine learning problem, and it
is sensible to leverage our knowledge about this ?easier? problem in order to solve a more ?difficult?
one. However, there has been little formal analysis of this straightforward learning strategy (the
main recent example is Ross & Bagnell [8], discussed below). In this paper, we consider a setting
in which an apprentice uses a classification algorithm to passively imitate an observed expert in an
MDP, and we bound the difference between the value of the apprentice's policy and the value of the expert's policy in terms of the accuracy of the learned classifier. Put differently, we show that
apprenticeship learning can be reduced to classification. The idea of reducing one learning problem
to another was first proposed by Zadrozny & Langford [9].
Our main contributions in this paper are a pair of theoretical results. First, we show that the difference between the value of the apprentice's policy and the expert's policy is O(√ε),¹ where ε ∈ (0, 1] is the error of the learned classifier. Secondly, and perhaps more interestingly, we extend our first result to prove that the difference in policy values is only O(ε) when the expert's policy is close to optimal. Of course, if one could perfectly imitate the expert, then naturally a near-optimal expert
policy is preferred. But our result implies something further: that near-optimal experts are actually
easier to imitate, in the sense that fewer demonstrations are required to achieve the same performance
guarantee. This has important practical consequences. If one is certain a priori that the expert is
demonstrating good behavior, then our result implies that many fewer demonstrations need to be collected than if this were not the case. This can yield substantial savings when expert demonstrations
are expensive or difficult to obtain.
2 Related Work
Several authors have reduced reinforcement learning to simpler problems. Bagnell et al. [10] described an algorithm for constructing a good nonstationary policy from a sequence of good "one-step" policies. These policies are only concerned with maximizing reward collected in a single
time step, and are learned with the help of observations from an expert. Langford & Zadrozny
[11] reduced reinforcement learning to a sequence of classification problems (see also Blatt & Hero
[12]), but these problems have an unusual structure, and the authors are only able to provide a small
amount of guidance as to how data for these problems can be collected. Kakade & Langford [13]
reduced reinforcement learning to regression, but required additional assumptions about how easily
a learning algorithm can access the entire state space. Importantly, all this work makes the standard
reinforcement learning assumptions that the true rewards are known, and that a learning algorithm
is able to interact directly with the environment. In this paper we are interested in settings where
the reward function is not known, and where the learning algorithm is limited to passively observing
an expert. Concurrently to this work, Ross & Bagnell [8] have described an approach to reducing
imitation learning to classification, and some of their analysis resembles ours. However, their framework requires somewhat more than passive observation of the expert, and is focused on improving
the sensitivity of the reduction to the horizon length, not the classification error. They also assume
that the expert follows a deterministic policy, an assumption we do not make.
3 Preliminaries
We consider a finite-horizon MDP, with horizon H. We will allow the state space S to be infinite,
but assume that the action space A is finite. Let α be the initial state distribution, and θ the transition
function, where θ(s, a, ·) specifies the next-state distribution from state s ∈ S under action a ∈ A.
The only assumption we make about the unknown reward function R is that 0 ≤ R(s) ≤ Rmax for
all states s ∈ S, where Rmax is a finite upper bound on the reward of any state.

¹The big-O notation is concealing a polynomial dependence on other problem parameters. We give exact
bounds in the body of the paper.
We introduce some notation and definitions regarding policies. A policy π is stationary if it is a
mapping from states to distributions over actions. In this case, π(s, a) denotes the probability of
taking action a in state s. Let Π be the set of all stationary policies. A policy π is nonstationary if it
belongs to the set Π^H = Π × ⋯ × Π (H times). In this case, π_t(s, a) denotes the probability of
taking action a in state s at time t. Also, if π is nonstationary, then π_t refers to the stationary policy
that is equal to the tth component of π. A (stationary or nonstationary) policy π is deterministic if
each one of its action distributions is concentrated on a single action. If a deterministic policy π is
stationary, then π(s) is the action taken in state s, and if π is nonstationary, then π_t(s) is the action
taken in state s at time t.
We define the value function V_t^π(s) for a nonstationary policy π at time t in the usual manner:

  V_t^π(s) ≜ E[ Σ_{t'=t}^H R(s_{t'}) | s_t = s, a_{t'} ∼ π_{t'}(s_{t'}, ·), s_{t'+1} ∼ θ(s_{t'}, a_{t'}, ·) ].

So V_t^π(s) is the expected cumulative reward for following policy π when starting at state s and time
step t. Note that there are several value functions per nonstationary policy, one for each time step t.
The value of a policy is defined to be V(π) ≜ E[V_1^π(s) | s ∼ α(·)], and an optimal policy π* is one
that satisfies π* ≜ arg max_π V(π).
We write π^E to denote the (possibly nonstationary) expert policy, and V_t^E(s) as an abbreviation for
V_t^{π^E}(s). Our goal is to find a nonstationary apprentice policy π^A such that V(π^A) ≳ V(π^E). Note
that the values of these policies are with respect to the unknown reward function.
Let D_t^π be the distribution on state-action pairs at time t under policy π. In other words, a sample
(s, a) is drawn from D_t^π by first drawing s_1 ∼ α(·), then following policy π for time steps 1 through
t, which generates a trajectory (s_1, a_1, …, s_t, a_t), and then letting (s, a) = (s_t, a_t). We write D_t^E as
an abbreviation for D_t^{π^E}. In a minor abuse of notation, we write s ∼ D_t^π to mean: draw state-action
pair (s, a) ∼ D_t^π, and discard a.
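As a concrete reading of this definition, the following sketch draws a single sample from D_t^π in a small finite MDP. It is a minimal illustration assuming a tabular representation; the arrays standing in for α, θ, and π are hypothetical, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

S, A, H = 5, 3, 10                                # hypothetical sizes
alpha = np.full(S, 1.0 / S)                       # initial state distribution alpha
theta = rng.dirichlet(np.ones(S), size=(S, A))    # theta[s, a] = next-state distribution
pi = rng.dirichlet(np.ones(A), size=(H, S))       # pi[t, s] = action distribution at step t+1

def sample_D_t(t):
    """Draw one (s, a) ~ D_t^pi by rolling the policy forward from s_1 ~ alpha."""
    s = rng.choice(S, p=alpha)
    for step in range(1, t + 1):                  # time steps are 1-indexed, as in the paper
        a = rng.choice(A, p=pi[step - 1, s])
        if step == t:
            return s, a                           # this is (s_t, a_t)
        s = rng.choice(S, p=theta[s, a])

print(sample_D_t(4))
```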
4 Details and Justification of the Reduction
Our goal is to reduce apprenticeship learning to classification, so let us describe exactly how this
reduction is defined, and also justify the utility of such a reduction.

In a classification problem, a learning algorithm is given a training set ⟨(x_1, y_1), …, (x_m, y_m)⟩,
where each labeled example (x_i, y_i) ∈ X × Y is drawn independently from a distribution D on X ×
Y. Here X is the example space and Y is the finite set of labels. The learning algorithm is also given
the definition of a hypothesis class H, which is a set of functions mapping X to Y. The objective
of the learning algorithm is to find a hypothesis h ∈ H such that the error Pr_{(x,y)∼D}(h(x) ≠ y) is
small.
For our purposes, the hypothesis class H is said to be PAC-learnable if there exists a learning
algorithm A such that, whenever A is given a training set of size m = poly(1/ε, 1/δ), the algorithm
runs for poly(1/ε, 1/δ) steps and outputs a hypothesis ĥ ∈ H such that, with probability at least 1 − δ,
we have Pr_{(x,y)∼D}(ĥ(x) ≠ y) ≤ ε*_{H,D} + ε. Here ε*_{H,D} = inf_{h∈H} Pr_{(x,y)∼D}(h(x) ≠ y) is the
error of the best hypothesis in H. The expression poly(1/ε, 1/δ) will typically also depend on other
quantities, such as the number of labels |Y| and the VC-dimension of H [14], but this dependence is
not germane to our discussion.
The existence of PAC-learnable hypothesis classes is the reason that reducing apprenticeship learning to classification is a sensible endeavor. Suppose that the apprentice observes m independent
trajectories from the expert's policy π^E, where the ith trajectory is a sequence s_1^i, a_1^i, …, s_H^i, a_H^i.
The key is to note that each (s_t^i, a_t^i) can be viewed as an independent sample from the distribution
D_t^E. Now consider a PAC-learnable hypothesis class H, where H contains a set of functions mapping the state space S to the finite action space A. If m = poly(1/ε, H/δ), then for each time step
t, the apprentice can use a PAC learning algorithm for H to learn a hypothesis ĥ_t ∈ H such that,
with probability at least 1 − δ/H, we have Pr_{(s,a)∼D_t^E}(ĥ_t(s) ≠ a) ≤ ε*_{H,D_t^E} + ε. And by the union
bound, this inequality holds for all t with probability at least 1 − δ. If each ε*_{H,D_t^E} + ε is small, then a
natural choice for the apprentice's policy π^A is to set π_t^A = ĥ_t for all t. This policy uses the learned
classifiers to imitate the behavior of the expert.
In light of the preceding discussion, throughout the remainder of this paper we make the following
assumption about the apprentice's policy.

Assumption 1. The apprentice policy π^A is a deterministic policy that satisfies
Pr_{(s,a)∼D_t^E}(π_t^A(s) ≠ a) ≤ ε for some ε > 0 and all time steps t.
As we have shown, an apprentice policy satisfying Assumption 1 with small ε can be found with
high probability, provided that the expert's policy is well-approximated by a PAC-learnable hypothesis
class and that the apprentice is given enough trajectories from the expert. A reasonable intuition is
that the value of the policy π^A in Assumption 1 is nearly as high as the value of the policy π^E; the
remainder of this paper is devoted to confirming this intuition.
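To make the reduction concrete, here is a minimal sketch that trains one classifier per time step on the expert's (s_t, a_t) pairs and stacks them into a nonstationary apprentice policy. The use of scikit-learn's LogisticRegression is an arbitrary stand-in for a PAC-style learner, and the trajectory data format is an assumption; nothing in the paper prescribes either.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_apprentice(trajectories, H):
    """trajectories: list of expert rollouts [(s_1, a_1), ..., (s_H, a_H)],
    each s_t a feature vector, each a_t an action label.
    Trains one classifier per time step on samples from D_t^E."""
    policy = []
    for t in range(H):
        X = np.array([traj[t][0] for traj in trajectories])
        y = np.array([traj[t][1] for traj in trajectories])
        if len(np.unique(y)) == 1:
            # The expert happened to be deterministic at this time step in the data.
            policy.append(("const", y[0]))
        else:
            policy.append(("clf", LogisticRegression(max_iter=1000).fit(X, y)))
    return policy

def act(policy, t, state):
    """Deterministic apprentice action pi_t^A(s); t is 1-indexed as in the paper."""
    kind, model = policy[t - 1]
    if kind == "const":
        return model
    return model.predict(np.asarray(state).reshape(1, -1))[0]
```

With m trajectories, each per-time-step classifier sees exactly m i.i.d. samples from D_t^E, which is what the PAC argument above requires.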
5 Guarantee for Any Expert
If the error rate ε in Assumption 1 is small, then the apprentice's policy π^A closely imitates the
expert's policy π^E, and we might hope that this implies that V(π^A) is not much less than V(π^E).
This is indeed the case, as the next theorem shows.

Theorem 1. If Assumption 1 holds, then V(π^A) ≥ V(π^E) − 2√ε H² Rmax.
In a typical classification problem, it is assumed that the training and test examples are drawn from
the same distribution. The main challenge in proving Theorem 1 is that this assumption does not hold
for the classification problems to which we have reduced the apprenticeship learning problem. This
is because, although each state-action pair (s_t^i, a_t^i) appearing in an expert trajectory is distributed
according to D_t^E, a state-action pair (s_t, a_t) visited by the apprentice's policy may not follow this
distribution, since the behavior of the apprentice prior to time step t may not exactly match the
expert's behavior. So our strategy for proving Theorem 1 will be to show that these differences do
not cause the value of the apprentice policy to degrade too much relative to the value of the expert's
policy.
Before proceeding, we will show that Assumption 1 implies a condition that is, for our purposes,
more convenient.
Lemma 1. Let π̂ be a deterministic nonstationary policy. If Pr_{(s,a)∼D_t^E}(π̂_t(s) ≠ a) ≤ ε, then for
all ε_1 ∈ (0, 1] we have Pr_{s∼D_t^E}(π_t^E(s, π̂_t(s)) ≥ 1 − ε_1) ≥ 1 − ε/ε_1.

Proof. Fix any ε_1 ∈ (0, 1], and suppose for contradiction that Pr_{s∼D_t^E}(π_t^E(s, π̂_t(s)) ≥ 1 − ε_1) <
1 − ε/ε_1. Say that a state s is good if π_t^E(s, π̂_t(s)) ≥ 1 − ε_1, and that s is bad otherwise. Then

  Pr_{(s,a)∼D_t^E}(π̂_t(s) = a) = Pr_{s∼D_t^E}(s is good) · Pr_{(s,a)∼D_t^E}(π̂_t(s) = a | s is good)
                                + Pr_{s∼D_t^E}(s is bad) · Pr_{(s,a)∼D_t^E}(π̂_t(s) = a | s is bad)
                              ≤ Pr_{s∼D_t^E}(s is good) · 1 + (1 − Pr_{s∼D_t^E}(s is good)) · (1 − ε_1)
                              = 1 − ε_1 (1 − Pr_{s∼D_t^E}(s is good))
                              < 1 − ε,

where the first inequality holds because Pr_{(s,a)∼D_t^E}(π̂_t(s) = a | s is bad) ≤ 1 − ε_1, and the second
inequality holds because Pr_{s∼D_t^E}(s is good) < 1 − ε/ε_1. This chain of inequalities clearly contradicts
the assumption of the lemma.
The next two lemmas are the main tools used to prove Theorem 1. In the proofs of these lemmas, we
write s̄ā to denote a trajectory, where s̄ā = (s̄_1, ā_1, …, s̄_H, ā_H) ∈ (S × A)^H. Also, let dP_π denote
the probability measure induced on trajectories by following policy π, and let R(s̄ā) = Σ_{t=1}^H R(s̄_t)
denote the sum of the rewards of the states in trajectory s̄ā. Importantly, using these definitions we
have

  V(π) = ∫_{s̄ā} R(s̄ā) dP_π.
The next lemma proves that if a deterministic policy "almost" agrees with the expert's policy π^E in
every state and time step, then its value is not much worse than the value of π^E.

Lemma 2. Let π̂ be a deterministic nonstationary policy. If for all states s and time steps t we have
π_t^E(s, π̂_t(s)) ≥ 1 − ε, then V(π̂) ≥ V(π^E) − εH²Rmax.
Proof. Say a trajectory s̄ā is good if it is "consistent" with π̂ (that is, π̂_t(s̄_t) = ā_t for all time steps
t) and that s̄ā is bad otherwise. We have

  V(π^E) = ∫_{s̄ā} R(s̄ā) dP_{π^E}
         = ∫_{s̄ā good} R(s̄ā) dP_{π^E} + ∫_{s̄ā bad} R(s̄ā) dP_{π^E}
         ≤ ∫_{s̄ā good} R(s̄ā) dP_{π^E} + εH²Rmax
         ≤ ∫_{s̄ā good} R(s̄ā) dP_{π̂} + εH²Rmax
         = V(π̂) + εH²Rmax,

where the first inequality holds because, by the union bound, P_{π^E} assigns at most an εH fraction
of its measure to bad trajectories, and the maximum reward of a trajectory is HRmax. The second
inequality holds because good trajectories are assigned at least as much measure by P_{π̂} as by P_{π^E},
because π̂ is deterministic.
The next lemma proves a slightly different statement than Lemma 2: If a policy exactly agrees with
the expert's policy π^E in "almost" every state and time step, then its value is not much worse than the
value of π^E.

Lemma 3. Let π̂ be a nonstationary policy. If for all time steps t we have
Pr_{s∼D_t^E}(π̂_t(s, ·) = π_t^E(s, ·)) ≥ 1 − ε, then V(π̂) ≥ V(π^E) − εH²Rmax.
Proof. Say a trajectory s̄ā is good if π_t^E(s̄_t, ·) = π̂_t(s̄_t, ·) for all time steps t, and that s̄ā is bad
otherwise. We have

  V(π̂) = ∫_{s̄ā} R(s̄ā) dP_{π̂}
        = ∫_{s̄ā good} R(s̄ā) dP_{π̂} + ∫_{s̄ā bad} R(s̄ā) dP_{π̂}
        = ∫_{s̄ā good} R(s̄ā) dP_{π^E} + ∫_{s̄ā bad} R(s̄ā) dP_{π̂}
        = ∫_{s̄ā} R(s̄ā) dP_{π^E} − ∫_{s̄ā bad} R(s̄ā) dP_{π^E} + ∫_{s̄ā bad} R(s̄ā) dP_{π̂}
        ≥ V(π^E) − εH²Rmax + ∫_{s̄ā bad} R(s̄ā) dP_{π̂}
        ≥ V(π^E) − εH²Rmax.

The first inequality holds because, by the union bound, P_{π^E} assigns at most an εH fraction of
its measure to bad trajectories, and the maximum reward of a trajectory is HRmax. The second
inequality holds by our assumption that all rewards are nonnegative.
We are now ready to combine the previous lemmas and prove Theorem 1.
Proof of Theorem 1. Since the apprentice's policy π^A satisfies Assumption 1, by Lemma 1 we can
choose any ε_1 ∈ (0, 1] and have

  Pr_{s∼D_t^E}(π_t^E(s, π_t^A(s)) ≥ 1 − ε_1) ≥ 1 − ε/ε_1.

Now construct a "dummy" policy π̃ as follows: For all time steps t, let π̃_t(s, ·) = π_t^E(s, ·) for any
state s where π_t^E(s, π_t^A(s)) ≥ 1 − ε_1. On all other states, let π̃_t(s, π_t^A(s)) = 1. By Lemma 2,

  V(π^A) ≥ V(π̃) − ε_1 H² Rmax,

and by Lemma 3,

  V(π̃) ≥ V(π^E) − (ε/ε_1) H² Rmax.

Combining these inequalities yields

  V(π^A) ≥ V(π^E) − (ε_1 + ε/ε_1) H² Rmax.

Since ε_1 was chosen arbitrarily, we set ε_1 = √ε, which maximizes this lower bound.
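A quick numerical sanity check of the final step: the penalty ε_1 + ε/ε_1 is minimized at ε_1 = √ε, where it equals 2√ε. The sketch below simply scans a grid; the value ε = 0.01 is an arbitrary example, not from the paper.

```python
import numpy as np

eps = 0.01
grid = np.linspace(1e-4, 1.0, 100_000)
penalty = grid + eps / grid            # the factor multiplying H^2 * Rmax in the proof
best = grid[np.argmin(penalty)]
print(best, np.sqrt(eps))              # both close to 0.1; the bound becomes 2*sqrt(eps)*H^2*Rmax
```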
6 Guarantee for Good Expert
Theorem 1 makes no assumptions about the value of the expert's policy. However, in many cases it
may be reasonable to assume that the expert is following a near-optimal policy (indeed, if she is not,
then we should question the decision to select her as an expert). The next theorem shows that the
dependence of V(π^A) on the classification error ε is significantly better when the expert is following
a near-optimal policy.

Theorem 2. If Assumption 1 holds, then V(π^A) ≥ V(π^E) − 4εH³Rmax − Δ_{π^E}, where Δ_{π^E} ≜
V(π*) − V(π^E) is the suboptimality of the expert's policy π^E.
?
Note that the bound in Theorem 2 varies with ? and not with ?. We can interpret this bound as
follows: If our goal is to learn an apprentice policy whose value is within ??E of the expert policy?s
value, we can double our progress towards that goal by halving the classification error rate. On the
other hand, Theorem 2 suggests that the error rate must be reduced by a factor of four.
To see why a near-optimal expert policy should yield a weaker dependence on ε, consider an expert
policy π^E that is an optimal policy, but in every state s ∈ S selects one of two actions a_1^s and
a_2^s uniformly at random. A deterministic apprentice policy π^A that closely imitates the expert will
either set π^A(s) = a_1^s or π^A(s) = a_2^s, but in either case the classification error will not be less than
1/2. However, since π^E is optimal, both actions a_1^s and a_2^s must be optimal actions for state s, and so
the apprentice policy π^A will be optimal as well.
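A minimal numeric rendering of this example (all quantities hypothetical): one state per time step, two tied optimal actions, a randomizing expert, and a deterministic imitator that is maximally wrong as a classifier yet loses no value.

```python
import numpy as np

# One state per time step, two actions with identical (optimal) reward Rmax.
H, Rmax = 5, 1.0
expert_action_dist = np.array([0.5, 0.5])      # pi_t^E(s, .): uniform over the two ties
apprentice_action = 0                          # pi_t^A(s): one of the tied actions
cls_error = 1.0 - expert_action_dist[apprentice_action]
V_expert = H * Rmax                            # both actions yield Rmax each step
V_apprentice = H * Rmax
print(cls_error, V_expert - V_apprentice)      # 0.5, 0.0
```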
Our strategy for proving Theorem 2 is to replace Lemma 2 with a different result, namely Lemma
6 below, that has a much weaker dependence on the classification error ε when Δ_{π^E} is small.
To help us prove Lemma 6, we will first need to define several useful policies. The next several
definitions will be with respect to an arbitrary nonstationary base policy π^B; in the proof of Theorem
2, we will make a particular choice for the base policy.

Fix a deterministic nonstationary policy π^{B,δ} that satisfies

  π_t^B(s, π_t^{B,δ}(s)) ≥ 1 − δ

for some δ ∈ (0, 1] and all states s and time steps t. Such a policy always exists by letting δ = 1, but
if δ is close to zero, then π^{B,δ} is a deterministic policy that "almost" agrees with π^B in every state
and time step. Of course, depending on the choice of π^B, a policy π^{B,δ} may not exist for small δ,
but let us set aside that concern for the moment; in the proof of Theorem 2, the base policy π^B will
be chosen so that δ can be as small as we like.
Having thus defined π^{B,δ}, we define π^{B\δ} as follows: For all states s ∈ S and time steps t, if
π_t^B(s, π_t^{B,δ}(s)) < 1, then let

  π_t^{B\δ}(s, a) = 0                                                    if π_t^{B,δ}(s) = a,
  π_t^{B\δ}(s, a) = π_t^B(s, a) / Σ_{a' ≠ π_t^{B,δ}(s)} π_t^B(s, a')      otherwise,

for all actions a ∈ A, and otherwise let π_t^{B\δ}(s, a) = 1/|A| for all a ∈ A. In other words, in each
state s and time step t, the distribution π_t^{B\δ}(s, ·) is obtained by proportionally redistributing the
probability assigned to action π_t^{B,δ}(s) by the distribution π_t^B(s, ·) to all other actions. The case
where π_t^B(s, ·) assigns all probability to action π_t^{B,δ}(s) is treated specially, but as will be clear from
the proof of Lemma 4, it is actually immaterial how the distribution π_t^{B\δ}(s, ·) is defined in these
cases; we choose the uniform distribution for definiteness.
Let π^{B+} be a deterministic policy defined by

  π_t^{B+}(s) = arg max_a E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, a, ·) ]

for all states s ∈ S and time steps t. In other words, π_t^{B+}(s) is the best action in state s at time t,
assuming that the policy π^B is followed thereafter.
The next definition requires the use of mixed policies. A mixed policy consists of a finite set of
deterministic nonstationary policies, along with a distribution over those policies; the mixed policy
is followed by drawing a single policy according to the distribution in the initial time step, and
following that policy exclusively thereafter. More formally, a mixed policy is defined by a set of
ordered pairs {(π^i, β(i))}_{i=1}^N for some finite N, where each component policy π^i is a deterministic
nonstationary policy, Σ_{i=1}^N β(i) = 1, and β(i) ≥ 0 for all i ∈ [N].
We define a mixed policy π̄^{B,δ,+} as follows: For each component policy π^i and each time step t,
either π_t^i = π_t^{B,δ} or π_t^i = π_t^{B+}. There is one component policy for each possible choice; this yields
N = 2^H component policies. And the probability β(i) assigned to each component policy π^i is
β(i) = (1 − δ)^{k(i)} δ^{H−k(i)}, where k(i) is the number of time steps t for which π_t^i = π_t^{B,δ}.
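Sampling a component of π̄^{B,δ,+} is equivalent to flipping an independent (1−δ)-coin at each time step; the sketch below (hypothetical parameters, not from the paper) verifies empirically that the induced component probabilities match β(i).

```python
import numpy as np
from collections import Counter

H, delta = 4, 0.3
rng = np.random.default_rng(0)

def sample_component():
    # choice[t] = 1 means pi_t^i = pi_t^{B,delta}; 0 means pi_t^i = pi_t^{B+}
    return tuple(int(rng.random() < 1 - delta) for _ in range(H))

counts = Counter(sample_component() for _ in range(200_000))
for comp, n in sorted(counts.items()):
    k = sum(comp)                                  # k(i): number of steps using pi^{B,delta}
    beta = (1 - delta) ** k * delta ** (H - k)     # beta(i) from the definition above
    print(comp, round(n / 200_000, 4), round(beta, 4))
```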
Having established these definitions, we are now ready to prove several lemmas that will help us
prove Theorem 2.

Lemma 4. V(π̄^{B,δ,+}) ≥ V(π^B).
Proof. The proof will be by backwards induction on t. Clearly V_H^{π̄^{B,δ,+}}(s) = V_H^{π^B}(s) for all states
s, since the value function V_H^π for any policy π depends only on the reward function R. Now suppose
for induction that V_{t+1}^{π̄^{B,δ,+}}(s) ≥ V_{t+1}^{π^B}(s) for all states s. Then for all states s

  V_t^{π̄^{B,δ,+}}(s) = R(s) + E[ V_{t+1}^{π̄^{B,δ,+}}(s') | a' ∼ π̄_t^{B,δ,+}(s, ·), s' ∼ θ(s, a', ·) ]
    ≥ R(s) + E[ V_{t+1}^{π^B}(s') | a' ∼ π̄_t^{B,δ,+}(s, ·), s' ∼ θ(s, a', ·) ]
    = R(s) + (1 − δ) E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, π_t^{B,δ}(s), ·) ] + δ E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, π_t^{B+}(s), ·) ]
    ≥ R(s) + π_t^B(s, π_t^{B,δ}(s)) · E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, π_t^{B,δ}(s), ·) ]
           + (1 − π_t^B(s, π_t^{B,δ}(s))) · E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, π_t^{B+}(s), ·) ]
    ≥ R(s) + π_t^B(s, π_t^{B,δ}(s)) · E[ V_{t+1}^{π^B}(s') | s' ∼ θ(s, π_t^{B,δ}(s), ·) ]
           + (1 − π_t^B(s, π_t^{B,δ}(s))) · E[ V_{t+1}^{π^B}(s') | a' ∼ π_t^{B\δ}(s, ·), s' ∼ θ(s, a', ·) ]
    = R(s) + E[ V_{t+1}^{π^B}(s') | a' ∼ π_t^B(s, ·), s' ∼ θ(s, a', ·) ]
    = V_t^{π^B}(s).

The first equality holds for all policies π, and follows straightforwardly from the definition of V_t^π.
The rest of the derivation uses, in order: the inductive hypothesis; the definition of π̄^{B,δ,+}; the property
of π^{B,δ} and the fact that π_t^{B+}(s) is the best action with respect to V_{t+1}^{π^B}; the fact that π_t^{B+}(s) is the
best action with respect to V_{t+1}^{π^B}; the definition of π^{B\δ}; the definition of V_t^{π^B}(s).
Lemma 5. V(π̄^{B,δ,+}) ≤ (1 − δH) V(π^{B,δ}) + δH V(π*).
Proof. Since π̄^{B,δ,+} is a mixed policy, by the linearity of expectation we have

  V(π̄^{B,δ,+}) = Σ_{i=1}^N β(i) V(π^i),

where each π^i is a component policy of π̄^{B,δ,+} and β(i) is its associated probability. Therefore

  V(π̄^{B,δ,+}) = Σ_i β(i) V(π^i)
              ≤ (1 − δ)^H V(π^{B,δ}) + (1 − (1 − δ)^H) V(π*)
              ≤ (1 − δH) V(π^{B,δ}) + δH V(π*).

Here we used the fact that probability (1 − δ)^H ≥ 1 − δH is assigned to a component policy that is
identical to π^{B,δ}, and the value of any component policy is at most V(π*).
Lemma 6. If δ < 1/H, then V(π^{B,δ}) ≥ V(π^B) − (δH / (1 − δH)) Δ_{π^B}.
Proof. Combining Lemmas 4 and 5 yields

  (1 − δH) V(π^{B,δ}) + δH V(π*) ≥ V(π^B).

And via algebraic manipulation we have

  (1 − δH) V(π^{B,δ}) + δH V(π*) ≥ V(π^B)
  ⟹ (1 − δH) V(π^{B,δ}) ≥ (1 − δH) V(π^B) + δH V(π^B) − δH V(π*)
  ⟹ (1 − δH) V(π^{B,δ}) ≥ (1 − δH) V(π^B) − δH Δ_{π^B}
  ⟹ V(π^{B,δ}) ≥ V(π^B) − (δH / (1 − δH)) Δ_{π^B}.

In the last line, we were able to divide by (1 − δH) without changing the direction of the inequality
because of our assumption that δ < 1/H.
We are now ready to combine the previous lemmas and prove Theorem 2.
Proof of Theorem 2. Since the apprentice's policy π^A satisfies Assumption 1, by Lemma 1 we can
choose any ε_1 ∈ (0, 1/H) and have

  Pr_{s∼D_t^E}(π_t^E(s, π_t^A(s)) ≥ 1 − ε_1) ≥ 1 − ε/ε_1.

As in the proof of Theorem 1, let us construct a "dummy" policy π̃ as follows: For all time steps
t, let π̃_t(s, ·) = π_t^E(s, ·) for any state s where π_t^E(s, π_t^A(s)) ≥ 1 − ε_1. On all other states, let
π̃_t(s, π_t^A(s)) = 1. By Lemma 3 we have

  V(π̃) ≥ V(π^E) − (ε/ε_1) H² Rmax.    (1)

Substituting V(π^E) = V(π*) − Δ_{π^E} and V(π̃) = V(π*) − Δ_{π̃} and rearranging yields

  Δ_{π̃} ≤ Δ_{π^E} + (ε/ε_1) H² Rmax.    (2)

Now observe that, if we set the base policy π^B = π̃, then by definition π^A is a valid choice for
π^{B,ε_1}. And since ε_1 < 1/H we have

  V(π^A) ≥ V(π̃) − (ε_1 H / (1 − ε_1 H)) Δ_{π̃}
         ≥ V(π̃) − (ε_1 H / (1 − ε_1 H)) (Δ_{π^E} + (ε/ε_1) H² Rmax)
         ≥ V(π^E) − (ε/ε_1) H² Rmax − (ε_1 H / (1 − ε_1 H)) (Δ_{π^E} + (ε/ε_1) H² Rmax),    (3)

where we used Lemma 6, (2) and (1), in that order. Letting ε_1 = 1/(2H) proves the theorem.
References
[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In
Proceedings of the 21st International Conference on Machine Learning, 2004.
[2] Pieter Abbeel and Andrew Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In
Proceedings of the 22nd International Conference on Machine Learning, 2005.
[3] Nathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. Maximum margin planning.
In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[4] Umar Syed and Robert E. Schapire. A game-theoretic approach to apprenticeship learning. In
Advances in Neural Information Processing Systems 20, 2008.
[5] J. Zico Kolter, Pieter Abbeel, and Andrew Y. Ng. Hierarchical apprenticeship learning with application to quadruped locomotion. In Advances in Neural Information Processing Systems 20,
2008.
[6] Umar Syed and Robert E. Schapire. Apprenticeship learning using linear programming. In
Proceedings of the 25th International Conference on Machine Learning, 2008.
[7] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot
learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.
[8] Stéphane Ross and J. Andrew Bagnell. Efficient reductions for imitation learning. In AISTATS,
2010.
[9] Bianca Zadrozny, John Langford, and Naoki Abe. Cost-sensitive learning by cost-proportionate
example weighting. In Proceedings of the Third IEEE International Conference on Data Mining, 2003.
[10] J. Andrew Bagnell, Sham Kakade, Andrew Y. Ng, and Jeff Schneider. Policy search by dynamic programming. In Advances in Neural Information Processing Systems 15, 2003.
[11] John Langford and Bianca Zadrozny. Relating reinforcement learning performance to classification performance. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[12] Doron Blatt and Alfred Hero. From weighted classification to policy search. In Advances in
Neural Information Processing Systems 18, pages 139–146, 2006.
[13] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proceedings of the 19th International Conference on Machine Learning, 2002.
[14] V. N. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of
events to their probabilities. Theory of Probability and Its Applications, 16:264–280, 1971.
3,513 | 4,181 | Error Propagation for Approximate Policy and
Value Iteration
R?emi Munos
Sequel Project, INRIA Lille
Lille, France
[email protected]
Amir massoud Farahmand
Department of Computing Science
University of Alberta
Edmonton, Canada, T6G 2E8
[email protected]
Csaba Szepesv?ari ?
Department of Computing Science
University of Alberta
Edmonton, Canada, T6G 2E8
[email protected]
Abstract

We address the question of how the approximation error/Bellman residual at each
iteration of the Approximate Policy/Value Iteration algorithms influences the quality of the resulted policy. We quantify the performance loss as the Lp norm of the
approximation error/Bellman residual at each iteration. Moreover, we show that
the performance loss depends on the expectation of the squared Radon-Nikodym
derivative of a certain distribution rather than its supremum, as opposed to what
has been suggested by the previous results. Also our results indicate that the
contribution of the approximation/Bellman error to the performance loss is more
prominent in the later iterations of API/AVI, and the effect of an error term in the
earlier iterations decays exponentially fast.
1 Introduction
The exact solution for the reinforcement learning (RL) and planning problems with large state space
is difficult or impossible to obtain, so one usually has to aim for approximate solutions. Approximate
Policy Iteration (API) and Approximate Value Iteration (AVI) are two classes of iterative algorithms
to solve RL/Planning problems with large state spaces. They try to approximately find the fixed-point solution of the Bellman optimality operator.
AVI starts from an initial value function V_0 (or Q_0), and iteratively applies an approximation of
T*, the Bellman optimality operator (or T^π for the policy evaluation problem), to the previous
estimate, i.e., V_{k+1} ≈ T* V_k. In general, V_{k+1} is not equal to T* V_k because (1) we do not have
direct access to the Bellman operator but only some samples from it, and (2) the function space
in which V belongs is not representative enough. Thus there would be an approximation error
ε_k = T* V_k − V_{k+1} between the result of the exact VI and AVI.
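A minimal sketch of this loop for a tabular MDP follows, where a deliberate random perturbation stands in for the sampling and function-approximation error ε_k; the arrays and noise level are hypothetical, not an experimental setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, K = 20, 2, 0.9, 50
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
r = rng.uniform(0.0, 1.0, size=(S, A))       # expected rewards

def T_opt(V):
    """Bellman optimality operator applied to a value vector."""
    return np.max(r + gamma * P @ V, axis=1)

V = np.zeros(S)
for k in range(K):
    eps_k = rng.normal(0.0, 0.01, size=S)    # stand-in for the approximation error
    V = T_opt(V) - eps_k                     # V_{k+1} = T* V_k - eps_k
```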
Some examples of AVI-based approaches are the tree-based Fitted Q-Iteration of Ernst et al. [1], the multilayer perceptron-based Fitted Q-Iteration of Riedmiller [2], and the regularized Fitted Q-Iteration of
Farahmand et al. [3]. See the work of Munos and Szepesvári [4] for more information about AVI.
*Csaba Szepesvári is on leave from MTA SZTAKI. We would like to acknowledge the insightful comments
by the reviewers. This work was partly supported by AICML, AITF, NSERC, and PASCAL2 under no 216886.
API is another iterative algorithm to find an approximate solution to the fixed point of the Bellman
optimality operator. It starts from a policy π_0, and then approximately evaluates that policy π_0, i.e.,
it finds a Q_0 that satisfies T^{π_0} Q_0 ≈ Q_0. Afterwards, it performs a policy improvement step, which
is to calculate the greedy policy with respect to (w.r.t.) the most recent action-value function, to get
a new policy π_1, i.e., π_1(·) = arg max_{a∈A} Q_0(·, a). The policy iteration algorithm continues by
approximately evaluating the newly obtained policy π_1 to get Q_1 and repeating the whole process
again, generating a sequence of policies and their corresponding approximate action-value functions
Q_0 → π_1 → Q_1 → π_2 → ⋯. Same as in AVI, we may encounter a difference between the approximate solution Q_k (T^{π_k} Q_k ≈ Q_k) and the true value of the policy Q^{π_k}, which is the solution
of the fixed-point equation T^{π_k} Q^{π_k} = Q^{π_k}. Two convenient ways to describe this error are either
the Bellman residual of Q_k (ε_k = Q_k − T^{π_k} Q_k) or the policy evaluation approximation error
(ε_k = Q_k − Q^{π_k}).
API is a popular approach in the RL literature. One well-known algorithm is LSPI of Lagoudakis and
Parr [5], which combines the Least-Squares Temporal Difference (LSTD) algorithm (Bradtke and Barto
[6]) with a policy improvement step. Another API method is to use Bellman Residual Minimization (BRM) and its variants for policy evaluation and iteratively apply the policy improvement
step (Antos et al. [7], Maillard et al. [8]). Both LSPI and BRM have many extensions: Farahmand et al. [9] introduced a nonparametric extension of LSPI and BRM, formulated them as
an optimization problem in a reproducing kernel Hilbert space, and analyzed their statistical behavior.
Kolter and Ng [10] formulated an l1-regularized extension of LSTD. See Xu et al. [11] and Jung
and Polani [12] for other examples of kernel-based extensions of LSTD/LSPI, and Taylor and Parr
[13] for a unified framework. Also see the proto-value function-based approach of Mahadevan and
Maggioni [14] and iLSTD of Geramifard et al. [15].
A crucial question in the applicability of API/AVI, which is the main topic of this work, is to understand how either the approximation error or the Bellman residual at each iteration of API or AVI
affects the quality of the resulted policy. Suppose we run API/AVI for K iterations to obtain a policy
π_K. Does the knowledge that all the ε_k's are small (maybe because we have had a lot of samples and
used powerful function approximators) imply that V^{π_K} is close to the optimal value function V*
too? If so, how do the errors occurring at a certain iteration k propagate through the iterations of
API/AVI and affect the final performance loss?
There have already been some results that partially address this question. As an example, Proposition 6.2 of Bertsekas and Tsitsiklis [16] shows that for API applied to a finite MDP, we have

  lim sup_{k→∞} ‖V* − V^{π_k}‖_∞ ≤ (2γ / (1 − γ)²) lim sup_{k→∞} ‖V^{π_k} − V_k‖_∞,

where γ is the discount factor. Similarly for AVI, if the approximation errors are uniformly bounded
(‖T* V_k − V_{k+1}‖_∞ ≤ ε), we have

  lim sup_{k→∞} ‖V* − V^{π_k}‖_∞ ≤ (2γ / (1 − γ)²) ε  (Munos [17]).
Nevertheless, most of these results are pessimistic in several ways. One reason is that they are
expressed in terms of the supremum norm of the approximation errors ‖V^{π_k} − V_k‖_∞ or the Bellman
error ‖Q_k − T^{π_k} Q_k‖_∞. Compared to Lp norms, the supremum norm is conservative. It is quite possible
that the result of a learning algorithm has a small Lp norm but a very large L∞ norm. Therefore, it
is desirable to have a result expressed in terms of the Lp norm of the approximation/Bellman residual ε_k.

In the past couple of years, there have been attempts to extend L∞ norm results to Lp ones [18, 17,
7]. As a typical example, we quote the following from Antos et al. [7]:
Proposition 1 (Error Propagation for API; [7]). Let p ≥ 1 be a real and K be a positive integer.
Then, for any sequence of functions {Q^{(k)}} ⊂ B(X × A; Qmax) (0 ≤ k < K), the space of Qmax-bounded measurable functions, and their corresponding Bellman residuals ε_k = Q^{(k)} − T^{π_k} Q^{(k)}, the
following inequality holds:

  ‖Q* − Q^{π_K}‖_{p,ρ} ≤ (2γ / (1 − γ)²) [ C_{ρ,ν}^{1/p} max_{0≤k<K} ‖ε_k‖_{p,ν} + γ^{K/p − 1} Rmax ],

where Rmax is an upper bound on the magnitude of the expected reward function and

  C_{ρ,ν} = (1 − γ)² Σ_{m≥1} m γ^{m−1} sup_{π_1,…,π_m} ‖ d(ρ P^{π_1} ⋯ P^{π_m}) / dν ‖_∞.
This result indeed uses the Lp norm of the Bellman residuals and is an improvement over results
like Bertsekas and Tsitsiklis [16, Proposition 6.2], but it is still pessimistic in some ways and does
not answer several important questions. For instance, this result implies that the uniform-over-all-iterations upper bound max_{0≤k<K} ‖ε_k‖_{p,ν} is the quantity that determines the performance loss. One
may wonder if this condition is really necessary, and ask whether it is better to put more emphasis
on earlier/later iterations. Another question is whether the appearance of terms of the form
‖d(ρ P^{π_1} ⋯ P^{π_m}) / dν‖_∞ is intrinsic to the difficulty of the problem or can be relaxed.
The goal of this work is to answer these questions and to provide tighter upper bounds on the
performance loss of API/AVI algorithms. These bounds help one understand what factors contribute
to the difficulty of a learning problem. We base our analysis on the work of Munos [17], Antos et al.
[7], and Munos [18], and provide upper bounds on the performance loss in the form of ‖V* − V^{π_K}‖_{1,ρ}
(the expected loss weighted according to the evaluation probability distribution ρ; this is defined
in Section 2) for API (Section 3) and AVI (Section 4). This performance loss depends on a certain
function of the ν-weighted L2 norms of the ε_k's, in which ν is the data sampling distribution, and on C_{ρ,ν}(K),
which depends on the MDP, the two probability distributions ρ and ν, and the number of iterations K.

In addition to relating the performance loss to the Lp norm of the Bellman residual/approximation error, this work has three main contributions that to our knowledge have not been considered before:
(1) We show that the performance loss depends on the expectation of the squared Radon-Nikodym
derivative of a certain distribution, to be specified in Section 3, rather than its supremum. The difference between this expectation and the supremum can be considerable. For instance, for a finite
state space with N states, the ratio can be of order O(N^{1/2}). (2) The contribution of the Bellman/approximation error to the performance loss is more prominent in later iterations of API/AVI,
and the effect of an error term in early iterations decays exponentially fast. (3) There are certain
structures in the definition of the concentrability coefficients that have not been explored before. We
thoroughly discuss these qualitative/structural improvements in Section 5.
2 Background
In this section, we provide a very brief summary of some of the concepts and definitions from
the theory of Markov Decision Processes (MDP) and reinforcement learning (RL) and a few other
notations. For further information about MDPs and RL the reader is referred to [19, 16, 20, 21].
A finite-action discounted MDP is a 5-tuple (X, A, P, R, γ), where X is a measurable state space, A
is a finite set of actions, P is the probability transition kernel, R is the reward kernel, and 0 ≤ γ < 1
is the discount factor. The transition kernel P is a mapping with domain X × A that, evaluated at
(x, a) ∈ X × A, gives a distribution over X, which we shall denote by P(·|x, a). Likewise,
R is a mapping with domain X × A that gives a distribution of immediate reward over ℝ, which
is denoted by R(·|x, a). We denote r(x, a) = E[R(·|x, a)], and assume that its absolute value is
bounded by Rmax.

A mapping π : X → A is called a deterministic Markov stationary policy, or just a policy in
short. Following a policy π in an MDP means that at each time step A_t = π(X_t). Upon taking
action A_t at X_t, we receive reward R_t ∼ R(·|X_t, A_t), and the Markov chain evolves according to
X_{t+1} ∼ P(·|X_t, A_t). We denote the probability transition kernel of following a policy π by P^π,
i.e., P^π(dy|x) = P(dy|x, π(x)).
The value function V^π for a policy π is defined as V^π(x) ≜ E[ Σ_{t=0}^∞ γ^t R_t | X_0 = x ] and the
action-value function is defined as Q^π(x, a) ≜ E[ Σ_{t=0}^∞ γ^t R_t | X_0 = x, A_0 = a ]. For a discounted
MDP, we define the optimal value and action-value functions by V*(x) = sup_π V^π(x) (∀x ∈ X)
and Q*(x, a) = sup_π Q^π(x, a) (∀x ∈ X, ∀a ∈ A). We say that a policy π* is optimal
if it achieves the best values in every state, i.e., if V^{π*} = V*. We say that a policy π is
greedy w.r.t. an action-value function Q, and write π = π̂(·; Q), if π(x) ∈ arg max_{a∈A} Q(x, a)
holds for all x ∈ X. Similarly, the policy π is greedy w.r.t. V if, for all x ∈ X, π(x) ∈
arg max_{a∈A} ∫ P(dx'|x, a) [r(x, a) + γV(x')] (if there exist multiple maximizers, some maximizer
is chosen in an arbitrary deterministic manner). Greedy policies are important because a greedy policy w.r.t. Q* (or V*) is an optimal policy. Hence, knowing Q* is sufficient for behaving optimally
(cf. Proposition 4.3 of [19]).
We define the Bellman operator for a policy π as

  (T^π V)(x) ≜ r(x, π(x)) + γ ∫ V(x') P(dx'|x, π(x))   and   (T^π Q)(x, a) ≜ r(x, a) + γ ∫ Q(x', π(x')) P(dx'|x, a).

Similarly, the Bellman optimality operator is defined as

  (T* V)(x) ≜ max_a { r(x, a) + γ ∫ V(x') P(dx'|x, a) }   and   (T* Q)(x, a) ≜ r(x, a) + γ ∫ max_{a'} Q(x', a') P(dx'|x, a).
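For a finite MDP these operators reduce to a few lines of linear algebra. The sketch below mirrors the definitions above in a tabular representation (r of shape S×A, P of shape S×A×S, π an integer vector); the representation itself is an assumption made for illustration.

```python
import numpy as np

def T_pi(Q, r, P, pi, gamma):
    """(T^pi Q)(x, a) = r(x, a) + gamma * sum_{x'} P(x'|x, a) * Q(x', pi(x'))."""
    S = Q.shape[0]
    return r + gamma * P @ Q[np.arange(S), pi]

def T_star(Q, r, P, gamma):
    """(T* Q)(x, a) = r(x, a) + gamma * sum_{x'} P(x'|x, a) * max_{a'} Q(x', a')."""
    return r + gamma * P @ Q.max(axis=1)

def greedy(Q):
    """pi(x) = argmax_a Q(x, a)."""
    return Q.argmax(axis=1)
```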
For a measurable space X, with a σ-algebra σ_X, we define M(X) as the set of all probability
measures over σ_X. For a probability measure ρ ∈ M(X) and the transition kernel P^π, we define
ρP^π(dx') = ∫ P(dx'|x, π(x)) dρ(x). In words, ρ(P^π)^m ∈ M(X) is an m-step-ahead probability
distribution of states if the starting state distribution is ρ and we follow P^π for m steps. In what
follows we shall use ‖V‖_{p,ν} to denote the L_p(ν)-norm of a measurable function V : X → ℝ:
‖V‖_{p,ν}^p ≜ ν|V|^p = ∫_X |V(x)|^p dν(x). For a function Q : X × A → ℝ, we define ‖Q‖_{p,ν}^p ≜
(1/|A|) Σ_{a∈A} ∫_X |Q(x, a)|^p dν(x).
3 Approximate Policy Iteration
Consider the API procedure and the sequence Q_0 → π_1 → Q_1 → π_2 → ⋯ → Q_{K−1} → π_K,
where π_k is the greedy policy w.r.t. Q_{k−1} and Q_k is the approximate action-value function for policy
π_k. For the sequence {Q_k}_{k=0}^{K−1}, denote the Bellman Residual (BR) and policy Approximation Error
(AE) at each iteration by

  ε_k^{BR} = Q_k − T^{π_k} Q_k,    (1)
  ε_k^{AE} = Q_k − Q^{π_k}.    (2)
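Both error notions in (1) and (2) are directly computable in the tabular case: ε_k^{BR} needs one application of T^{π_k}, while ε_k^{AE} needs the exact Q^{π_k}, obtained by solving the linear fixed-point equation. The sketch below is a hypothetical tabular illustration, not an algorithm from the paper.

```python
import numpy as np

def policy_value(r, P, pi, gamma):
    """Exact Q^pi for a tabular MDP: solve the linear system T^pi Q = Q."""
    S, A = r.shape
    P_pi = np.zeros((S * A, S * A))
    for s in range(S):
        for a in range(A):
            # After landing in x', follow pi: column index is x'*A + pi[x'].
            P_pi[s * A + a, np.arange(S) * A + pi] = P[s, a]
    Q = np.linalg.solve(np.eye(S * A) - gamma * P_pi, r.reshape(-1))
    return Q.reshape(S, A)

def residuals(Q_k, r, P, pi_k, gamma):
    S = Q_k.shape[0]
    T_pi_Q = r + gamma * P @ Q_k[np.arange(S), pi_k]   # (T^{pi_k} Q_k)(x, a)
    eps_BR = Q_k - T_pi_Q                              # Bellman residual, Eq. (1)
    eps_AE = Q_k - policy_value(r, P, pi_k, gamma)     # approximation error, Eq. (2)
    return eps_BR, eps_AE
```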
The goal of this section is to study the effect of the ν-weighted L2p norm of the Bellman residual
sequence {ε_k^{BR}}_{k=0}^{K−1} or the policy evaluation approximation error sequence {ε_k^{AE}}_{k=0}^{K−1} on the performance loss ‖Q* − Q^{π_K}‖_{p,ρ} of the outcome policy π_K.
The choice of ρ and ν is arbitrary; however, a natural choice for ν is the sampling distribution of the
data, which is used by the policy evaluation module. On the other hand, the probability distribution
ρ reflects the importance of various regions of the state space and is selected by the practitioner. One
common choice, though not necessarily the best, is the stationary distribution of the optimal policy.

Because of the dynamical nature of an MDP, the performance loss ‖Q* − Q^{π_K}‖_{p,ρ} depends on the
difference between the sampling distribution ν and the future-state distributions of the form
ρP^{π_1}P^{π_2}⋯. The precise form of this dependence will be formalized in Theorems 3 and 4. Before
stating the results, we need to define the following concentrability coefficients.
Definition 2 (Expected Concentrability of the Future-State Distribution). Given ρ, ν ∈ M(X),
ν ≪ λ¹ (λ is the Lebesgue measure), m ≥ 0, and an arbitrary sequence of stationary policies
{π_m}_{m≥1}, let ρP^{π_1}P^{π_2}⋯P^{π_m} ∈ M(X) denote the future-state distribution obtained when the
first state is distributed according to ρ and then we follow the sequence of policies {π_k}_{k=1}^m.

Define the following concentrability coefficients, which are used in the analysis of API:

  c_{PI1,ρ,ν}(m_1, m_2; π) ≜ ( E_{X∼ν} [ | d(ρ(P^{π*})^{m_1}(P^{π})^{m_2}) / dν (X) |² ] )^{1/2},

  c_{PI2,ρ,ν}(m_1, m_2; π_1, π_2) ≜ ( E_{X∼ν} [ | d(ρ(P^{π*})^{m_1}(P^{π_1})^{m_2}P^{π_2}) / dν (X) |² ] )^{1/2},

  c_{PI3,ρ,ν} ≜ ( E_{X∼ν} [ | d(ρP^{π*}) / dν (X) |² ] )^{1/2},

with the understanding that if the future-state distribution ρ(P^{π*})^{m_1}(P^{π})^{m_2} (or
ρ(P^{π*})^{m_1}(P^{π_1})^{m_2}P^{π_2} or ρP^{π*}) is not absolutely continuous w.r.t. ν, then we take
c_{PI1,ρ,ν}(m_1, m_2; π) = ∞ (similarly for the others).

Also define the following concentrability coefficient, which is used in the analysis of AVI:

  c_{VI,ρ,ν}(m_1, m_2; π) ≜ ( E_{X∼ν} [ | d(ρ(P^{π})^{m_1}(P^{π*})^{m_2}) / dν (X) |² ] )^{1/2},

with the understanding that if the future-state distribution ρ(P^{π})^{m_1}(P^{π*})^{m_2} is not absolutely continuous w.r.t. ν, then we take c_{VI,ρ,ν}(m_1, m_2; π) = ∞.

¹For two measures μ_1 and μ_2 on the same measurable space, we say that μ_1 is absolutely continuous with
respect to μ_2 (or μ_2 dominates μ_1) and denote μ_1 ≪ μ_2 iff μ_2(A) = 0 ⇒ μ_1(A) = 0.
In order to compactly present our results, we define the following notation:

  α_k = (1 − γ) γ^{K−k−1} / (1 − γ^{K+1}),   0 ≤ k < K.
Theorem 3 (Error Propagation for API). Let p ≥ 1 be a real number, K be a positive integer,
and Qmax ≤ Rmax/(1 − γ). Then for any sequence {Q_k}_{k=0}^{K−1} ⊂ B(X × A, Qmax) (the space of Qmax-bounded
measurable functions defined on X × A) and the corresponding sequence {ε_k}_{k=0}^{K−1} defined in (1)
or (2), we have

  ‖Q* − Q^{π_K}‖_{p,ρ} ≤ (2γ / (1 − γ)²) [ inf_{r∈[0,1]} C_{PI(BR/AE),ρ,ν}^{1/(2p)}(K; r) E^{1/(2p)}(ε_0, …, ε_{K−1}; r) + γ^{K/p − 1} Rmax ],

where E(ε_0, …, ε_{K−1}; r) = Σ_{k=0}^{K−1} α_k^{2r} ‖ε_k‖_{2p,ν}^{2p}.

(a) If ε_k = ε_k^{BR} for all 0 ≤ k < K, we have

  C_{PI(BR),ρ,ν}(K; r) = ((1 − γ)/2)² sup_{π'_0,…,π'_K} Σ_{k=0}^{K−1} α_k^{2(1−r)} ( Σ_{m≥0} γ^m ( c_{PI1,ρ,ν}(K − k − 1, m + 1; π'_{k+1}) +
  c_{PI1,ρ,ν}(K − k, m; π'_k) ) )².

(b) If ε_k = ε_k^{AE} for all 0 ≤ k < K, we have

  C_{PI(AE),ρ,ν}(K; r) = ((1 − γ)/2)² sup_{π'_0,…,π'_K} Σ_{k=0}^{K−1} α_k^{2(1−r)} ( Σ_{m≥0} γ^m c_{PI1,ρ,ν}(K − k − 1, m + 1; π'_{k+1}) +
  Σ_{m≥1} γ^m c_{PI2,ρ,ν}(K − k − 1, m; π'_{k+1}, π'_k) + c_{PI3,ρ,ν} )².
4 Approximate Value Iteration
Consider the AVI procedure and the sequence V_0 → V_1 → ⋯ → V_{K−1}, in which V_{k+1} is the
result of approximately applying the Bellman optimality operator to the previous estimate V_k, i.e.,
V_{k+1} ≈ T* V_k. Denote the approximation error caused at each iteration by

  ε_k = T* V_k − V_{k+1}.    (3)

The goal of this section is to analyze the AVI procedure and to relate the approximation error sequence
{ε_k}_{k=0}^{K−1} to the performance loss ‖V* − V^{π_K}‖_{p,ρ} of the obtained policy π_K, which is the greedy
policy w.r.t. V_{K−1}.
Theorem 4 (Error Propagation for AVI). Let p ≥ 1 be a real number, K be a positive integer, and
Vmax ≤ Rmax/(1 − γ). Then for any sequence {V_k}_{k=0}^{K−1} ⊂ B(X, Vmax) and the corresponding sequence
{ε_k}_{k=0}^{K−1} defined in (3), we have

  ‖V* − V^{π_K}‖_{p,ρ} ≤ (2γ / (1 − γ)²) [ inf_{r∈[0,1]} C_{VI,ρ,ν}^{1/(2p)}(K; r) E^{1/(2p)}(ε_0, …, ε_{K−1}; r) + (2 / (1 − γ)) γ^{K/p} Rmax ],

where

  C_{VI,ρ,ν}(K; r) = ((1 − γ)/2)² sup_{π'} Σ_{k=0}^{K−1} α_k^{2(1−r)} ( Σ_{m≥0} γ^m ( c_{VI,ρ,ν}(m, K − k; π') + c_{VI,ρ,ν}(m + 1, K − k − 1; π') ) )²,

and E(ε_0, …, ε_{K−1}; r) = Σ_{k=0}^{K−1} α_k^{2r} ‖ε_k‖_{2p,ν}^{2p}.
5 Discussion
In this section, we discuss significant improvements of Theorems 3 and 4 over previous results such
as [16, 18, 17, 7].
5.1 Lp norm instead of L∞ norm
As opposed to most error upper bounds, Theorems 3 and 4 relate ‖V* − V^{π_K}‖_{p,ρ} to the Lp norm
of the approximation or Bellman errors ‖ε_k‖_{2p,ν} of the iterations in API/AVI. This should be contrasted with the traditional, and more conservative, results such as lim sup_{k→∞} ‖V* − V^{π_k}‖_∞ ≤
(2γ/(1 − γ)²) lim sup_{k→∞} ‖V^{π_k} − V_k‖_∞ for API (Proposition 6.2 of Bertsekas and Tsitsiklis [16]). The
use of the Lp norm not only is a huge improvement over the conservative supremum norm, but also allows
us to benefit from the vast literature on supervised learning techniques, which usually provides error
upper bounds in the form of Lp norms, in the context of RL/Planning problems. This is especially
interesting for the case of p = 1, as the performance loss ‖V* − V^{π_K}‖_{1,ρ} is the difference between
the expected return of the optimal policy and that of the resulted policy π_K when the initial state distribution is ρ. Conveniently enough, the errors appearing in the upper bound are in the form of ‖ε_k‖_{2,ν},
which is very common in the supervised learning literature. This type of improvement, however,
has been done in the past couple of years [18, 17, 7] (see Proposition 1 in Section 1).
5.2 Expected versus supremum concentrability of the future-state distribution
The concentrability coefficients (Definition 2) reflect the effect of the future-state distribution on the performance loss ‖V* − V^{π_K}‖_{p,ρ}. Previously it was thought that the key contributing factor to the performance loss is the supremum of the Radon-Nikodym derivative of these two distributions. This is
evident in the definition of C_{ρ,ν} in Proposition 1, where we have terms of the form ‖d(ρ(P^π)^m)/dν‖_∞
instead of ( E_{X∼ν}[ |d(ρ(P^π)^m)/dν(X)|² ] )^{1/2} as in Definition 2.
Nevertheless, it turns out that the key contributing factor that determines the performance loss is
the expectation of the squared Radon-Nikodym derivative instead of its supremum. Intuitively this
implies that even if for some subset X' ⊂ X the ratio d(ρ(P^π)^m)/dν is large but the probability ν(X')
is very small, the performance loss due to it is still small. This phenomenon has not been suggested by
previous results.
As an illustration of this difference, consider a Chain Walk with 1000 states with a single policy that
drifts toward state 1 of the chain. We start with ρ(x) = 1/201 for x ∈ [400, 600] and zero everywhere
else. Then we evaluate both ‖d(ρ(P^π)^m)/dν‖_∞ and ( E_{X∼ν}[ |d(ρ(P^π)^m)/dν|² ] )^{1/2} for m = 1, 2, … when ν
is the uniform distribution. The result is shown in Figure 1a. One sees that the ratio is constant in the
beginning, but increases when the distribution ρ(P^π)^m concentrates around state 1, until it reaches
steady state. The growth and the final value of the expectation-based concentrability coefficient are
much smaller than those of the supremum-based one.
[Figure 1: (a) Comparison of ( E_{X∼ν}[ |d(ρ(P^π)^m)/dν(X)|² ] )^{1/2} and ‖d(ρ(P^π)^m)/dν‖_∞ as a function of the step m. (b) Comparison of ‖Q* − Q^{π_k}‖_1 for the uniform and exponential data sampling schedules; the total number of samples is the same. The Y-scale of both plots is logarithmic.]
It is easy to show that if the Chain Walk has N states and the policy has the same concentrating
behavior and ν is uniform, then ‖d(ρ(P^π)^m)/dν‖_∞ → N, while ( E_{X∼ν}[ |d(ρ(P^π)^m)/dν|² ] )^{1/2} → √N when
m → ∞. The ratio, therefore, would be of order Θ(√N). This clearly shows the improvement of
this new analysis in a simple problem. One may anticipate that this sharper behavior happens in
many other problems too.
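The N versus √N gap is easy to reproduce numerically. The sketch below uses a hypothetical left-drifting chain (not necessarily the exact dynamics used in the paper) and a uniform ν, and prints the supremum-based and expectation-based coefficients after the distribution has concentrated.

```python
import numpy as np

N = 200
# Left-drifting chain: from state s move to max(s-1, 0) w.p. 0.9, stay w.p. 0.1.
P = np.zeros((N, N))
for s in range(N):
    P[s, max(s - 1, 0)] += 0.9
    P[s, s] += 0.1

rho = np.zeros(N)
rho[2 * N // 5 : 3 * N // 5] = 1.0       # mass on the middle fifth of the chain
rho /= rho.sum()
nu = np.full(N, 1.0 / N)                 # uniform evaluation distribution

mu = rho.copy()
for m in range(5 * N):                   # push rho (P^pi)^m toward its steady state
    mu = mu @ P
ratio = mu / nu
print(ratio.max())                       # sup-based coefficient: about N
print(np.sqrt(nu @ ratio ** 2))          # expectation-based coefficient: about sqrt(N)
```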
More generally, consider C_∞ = ‖dμ/dν‖_∞ and C_{L2} = ( E_{X∼ν}[ |dμ/dν|² ] )^{1/2}. For a finite state space
with N states and ν the uniform distribution, C_∞ ≤ N but C_{L2} ≤ √N. Neglecting all
other differences between our results and the previous ones, we get a performance upper bound
in the form of ‖Q* − Q^{π_K}‖_{1,ρ} ≤ c_1(γ) O(N^{1/4}) sup_k ‖ε_k‖_{2,ν}, while Proposition 1 implies that
‖Q* − Q^{π_K}‖_{1,ρ} ≤ c_2(γ) O(N^{1/2}) sup_k ‖ε_k‖_{2,ν}. This difference between O(N^{1/4}) and O(N^{1/2})
shows a significant improvement.
5.3 Error decaying property
Theorems 3 and 4 show that the dependence of the performance loss ‖V* − V^{π_K}‖_{p,ρ} (or
‖Q* − Q^{π_K}‖_{p,ρ}) on {ε_k}_{k=0}^{K−1} is in the form of E(ε_0, …, ε_{K−1}; r) = Σ_{k=0}^{K−1} α_k^{2r} ‖ε_k‖_{2p,ν}^{2p}. This has
a very special structure in that the approximation errors at later iterations have more contribution to
the final performance loss. This behavior is obscure in previous results such as [17, 7], where the dependence of the final performance loss is expressed as E(ε_0, …, ε_{K−1}; r) = max_{k=0,…,K−1} ‖ε_k‖_{p,ν}
(see Proposition 1).
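To see the geometric weighting concretely, the snippet below evaluates the weights α_k for a hypothetical γ and K; the error of the first iteration is down-weighted by a factor of γ^{K−1} relative to the last.

```python
import numpy as np

gamma, K = 0.95, 30
k = np.arange(K)
alpha = (1 - gamma) * gamma ** (K - k - 1) / (1 - gamma ** (K + 1))
# alpha_0 / alpha_{K-1} = gamma^{K-1}: the earliest error carries exponentially less weight.
print(alpha[0] / alpha[-1], gamma ** (K - 1))
```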
This property has practical and algorithmic implications too. It says that it is better to put more
effort into achieving a lower Bellman or approximation error at later iterations of API/AVI. This, for
instance, can be done by gradually increasing the number of samples throughout the iterations, or by using
more powerful, and possibly computationally more expensive, function approximators for the later
iterations of API/AVI.
To illustrate this property, we compare two different sampling schedules on a simple MDP. The
MDP is a 100-state, 2-action chain similar to the Chain Walk problem in the work of Lagoudakis and
Parr [5]. We use AVI with a lookup-table function representation. In the first sampling schedule,
every 20 iterations we generate a fixed number of fresh samples by following a uniformly random
walk on the chain (this means that we throw away old samples). This is the fixed strategy. In the
exponential strategy, we again generate new samples every 20 iterations, but the number of samples
at the kth iteration grows exponentially with k. The constant of this schedule is tuned such that the total
number of samples under both strategies is almost the same (we give a slight margin of about 0.1% of
samples in favor of the fixed strategy). What we compare is ‖Q* − Q_k‖_{1,ν} when ν is the uniform
distribution. The result can be seen in Figure 1b. The improvement of the exponential sampling
schedule is evident. Of course, one may think of more sophisticated sampling schedules, but this simple
illustration should be sufficient to attract the attention of practitioners to this phenomenon.
5.4 Restricted search over policy space
One interesting feature of our results is that they put more structure and restriction on the way policies
may be selected. Comparing C_{PI,ρ,ν}(K; r) (Theorem 3) and C_{VI,ρ,ν}(K; r) (Theorem 4) with C_{ρ,ν}
(Proposition 1), we see that:

(1) Each concentrability coefficient in the definition of C_{PI,ρ,ν}(K; r) depends only on a single policy or
two policies (e.g., π'_k in c_{PI1,ρ,ν}(K − k, m; π'_k)). The same is true for C_{VI,ρ,ν}(K; r). In contrast, the
mth term in C_{ρ,ν} has π_1, …, π_m as degrees of freedom, and this number is growing as m → ∞.

(2) The operator sup in C_{PI,ρ,ν} and C_{VI,ρ,ν} appears outside the summation. Because of that, we
only have K + 1 degrees of freedom π'_0, …, π'_K to choose from in API, and remarkably only a
single degree of freedom in AVI. On the other hand, sup appears inside the summation in the
definition of C_{ρ,ν}. One may construct an MDP where this difference in the ordering of sup leads to an
arbitrarily large ratio between the two ways of defining the concentrability coefficients.

(3) In API, the definitions of the concentrability coefficients c_{PI1,ρ,ν}, c_{PI2,ρ,ν}, and c_{PI3,ρ,ν} (Definition 2) imply that if ρ = ρ*, the stationary distribution induced by an optimal policy π*, then
c_{PI1,ρ,ν}(m_1, m_2; π) = c_{PI1,ρ,ν}(∞, m_2; π) = ( E_{X∼ν}[ |d(ρ*(P^π)^{m_2})/dν|² ] )^{1/2} (similarly for the other two
coefficients). This special structure is hidden in the definition of C_{ρ,ν} in Proposition 1, where instead
we have an extra m_1 degrees of flexibility.
Remark 1. For general MDPs, the computation of concentrability coefficients in Definition 2 is
difficult, as it is for similar coefficients defined in [18, 17, 7].
6 Conclusion
To analyze an API/AVI algorithm and to study its statistical properties, such as consistency or convergence rate, one must (1) analyze the statistical properties of the algorithm running at each
iteration, and (2) study the way the policy approximation/Bellman errors propagate and influence
the quality of the resulted policy.
The analysis in the first step heavily uses tools from the Statistical Learning Theory (SLT) literature,
e.g., Györfi et al. [22]. In some cases, such as AVI, the problem can be cast as a standard regression
with the twist that extra care should be taken to account for the temporal dependency of data in the RL scenario.
The situation is a bit more complicated for API methods that directly aim for the fixed-point solution
(such as LSTD and its variants), but still the same kind of tools from SLT can be used too; see Antos
et al. [7], Maillard et al. [8].
The analysis for the second step is what this work has been about. In our Theorems 3 and 4, we
have provided upper bounds that relate the errors at each iteration of API/AVI to the performance
loss of the whole procedure. These bounds are qualitatively tighter than the previous results such as
those reported by [18, 17, 7], and provide a better understanding of what factors contribute to the
difficulty of the problem. In Section 5, we discussed the significance of these new results and the
way they improve previous ones.
Finally, we should note that there are still some unaddressed issues. Perhaps the most important one
is to study the behavior of the concentrability coefficients c_{PI1,ρ,ν}(m_1, m_2; π), c_{PI2,ρ,ν}(m_1, m_2; π_1, π_2),
and c_{VI,ρ,ν}(m_1, m_2; π) as a function of m_1, m_2, and of course the transition kernel P of the MDP. A
better understanding of this question, alongside a good understanding of the way each term ε_k in
E(ε_0, …, ε_{K−1}; r) behaves, would help us gain more insight about the error convergence behavior of
RL/Planning algorithms.
References
[1] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement
learning. Journal of Machine Learning Research, 6:503–556, 2005.
[2] Martin Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural
reinforcement learning method. In 16th European Conference on Machine Learning, pages
317–328, 2005.
[3] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor.
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems.
In Proceedings of American Control Conference (ACC), pages 725–730, June 2009.
[4] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of
Machine Learning Research, 9:815–857, 2008.
[5] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine
Learning Research, 4:1107–1149, 2003.
[6] Steven J. Bradtke and Andrew G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33–57, 1996.
[7] András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with
Bellman-residual minimization based fitted policy iteration and a single sample path. Machine
Learning, 71:89–129, 2008.
[8] Odalric Maillard, Rémi Munos, Alessandro Lazaric, and Mohammad Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proceedings of the Second Asian Conference on Machine Learning (ACML), 2010.
[9] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor.
Regularized policy iteration. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors,
Advances in Neural Information Processing Systems 21, pages 441–448. MIT Press, 2009.
[10] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09: Proceedings of the 26th Annual International Conference
on Machine Learning, pages 521–528, New York, NY, USA, 2009. ACM.
[11] Xin Xu, Dewen Hu, and Xicheng Lu. Kernel-based least squares policy iteration for reinforcement learning. IEEE Trans. on Neural Networks, 18:973–992, 2007.
[12] Tobias Jung and Daniel Polani. Least squares SVM for least squares TD learning. In Proc.
17th European Conference on Artificial Intelligence, pages 499–503, 2006.
[13] Gavin Taylor and Ronald Parr. Kernelized value function approximation for reinforcement
learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine
Learning, pages 1017–1024, New York, NY, USA, 2009. ACM.
[14] Sridhar Mahadevan and Mauro Maggioni. Proto-value functions: A Laplacian framework
for learning representation and control in Markov decision processes. Journal of Machine
Learning Research, 8:2169–2231, 2007.
[15] Alborz Geramifard, Michael Bowling, Michael Zinkevich, and Richard S. Sutton. iLSTD: Eligibility traces and convergence analysis. In B. Schölkopf, J. Platt, and T. Hoffman, editors,
Advances in Neural Information Processing Systems 19, pages 441–448. MIT Press, Cambridge, MA, 2007.
[16] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and
Neural Computation Series, 3). Athena Scientific, 1996.
[17] Rémi Munos. Performance bounds in Lp norm for approximate value iteration. SIAM Journal
on Control and Optimization, 2007.
[18] Rémi Munos. Error bounds for approximate policy iteration. In ICML 2003: Proceedings of
the 20th Annual International Conference on Machine Learning, 2003.
[19] Dimitri P. Bertsekas and Steven E. Shreve. Stochastic Optimal Control: The Discrete-Time
Case. Academic Press, 1978.
[20] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction (Adaptive
Computation and Machine Learning). The MIT Press, 1998.
[21] Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan Claypool Publishers,
2010.
[22] László Györfi, Michael Kohler, Adam Krzyżak, and Harro Walk. A Distribution-Free Theory
of Nonparametric Regression. Springer Verlag, New York, 2002.
A Non-Parametric Approach to
Dynamic Programming
Oliver B. Kroemer$^{1,2}$ and Jan Peters$^{1,2}$
$^1$ Intelligent Autonomous Systems, Technische Universität Darmstadt
$^2$ Robot Learning Lab, Max Planck Institute for Intelligent Systems
{kroemer,peters}@ias.tu-darmstadt.de
Abstract
In this paper, we consider the problem of policy evaluation for continuous-state systems. We present a non-parametric approach to policy evaluation,
which uses kernel density estimation to represent the system. The true
form of the value function for this model can be determined, and can be
computed using Galerkin's method. Furthermore, we also present a unified
view of several well-known policy evaluation methods. In particular, we
show that the same Galerkin method can be used to derive Least-Squares
Temporal Difference learning, Kernelized Temporal Difference learning, and
a discrete-state Dynamic Programming solution, as well as our proposed
method. In a numerical evaluation of these algorithms, the proposed approach performed better than the other methods.
1 Introduction
Value functions are an essential concept for determining optimal policies in both optimal control [1] and reinforcement learning [2, 3]. Given the value function of a policy, an improved
policy is straightforward to compute. The improved policy can subsequently be evaluated
to obtain a new value function. This loop of computing value functions and determining
better policies is known as policy iteration. However, the main bottleneck in policy iteration
is the computation of the value function for a given policy. Using the Bellman equation, only
two classes of systems have been solved exactly: tabular discrete state and action problems
[4] as well as linear-quadratic regulation problems [5]. The exact computation of the value
function remains an open problem for most systems with continuous state spaces [6]. This
paper focuses on steps toward solving this problem.
As an alternative to exact solutions, approximate policy evaluation methods have been
developed in reinforcement learning. These approaches include Monte Carlo methods, temporal difference learning, and residual gradient methods. However, Monte Carlo methods
are well-known to have an excessively high variance [7, 2], and tend to overfit the value
function to the sampled data [2]. When using function approximations, temporal difference
learning can result in a biased solution [8]. Residual gradient approaches are biased unless
multiple samples are taken from the same states [9], which is often not possible for real
continuous systems.
In this paper, we propose a non-parametric method for continuous-state policy evaluation.
The proposed method uses a kernel density estimate to represent the system in a flexible
manner. Model-based approaches are known to be more data efficient than direct methods,
and lead to better policies [10, 11]. We subsequently show that the true value function
for this model has a Nadaraya-Watson kernel regression form [12, 13]. Using Galerkin's
projection method, we compute a closed-form solution for this regression problem. The
resulting method is called Non-Parametric Dynamic Programming (NPDP), and is a stable
as well as consistent approach to policy evaluation.
The second contribution of this paper is to provide a unified view of several sample-based
algorithms for policy evaluation, including the NPDP algorithm. In Section 3, we show how
Least-Squares Temporal Difference learning (LSTD) in [14], Kernelized Temporal Difference
learning (KTD) in [15], and Discrete-State Dynamic Programming (DSDP) in [4, 16] can
all be derived using the same Galerkin projection method used to derive NPDP. In Section
4, we compare these methods using empirical evaluations.
In reinforcement learning, the uncontrolled system is usually represented by a Markov Decision Process (MDP). An MDP is defined by the following components: a set of states $S$; a set of actions $A$; a transition distribution $p(s'|a, s)$, where $s' \in S$ is the next state given action $a \in A$ in state $s \in S$; a reward function $r$, such that $r(s, a)$ is the immediate reward obtained for performing action $a$ in state $s$; and a discount factor $\gamma \in [0, 1)$ on future rewards. Actions $a$ are selected according to the stochastic policy $\pi(a|s)$. The goal is to maximize the discounted rewards that are obtained; i.e., $\max \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$. The term system will refer jointly to the agent's policy and the MDP.
The value of a state $V(s)$, for a specific policy $\pi$, is defined as the expected discounted sum of rewards that an agent will receive after visiting state $s$ and executing policy $\pi$; i.e.,
$$V(s) = \mathbb{E}\left[\left.\textstyle\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\, \right|\, s_0 = s, \pi\right]. \qquad (1)$$
By using the Markov property, Eq. (1) can be rewritten as the Bellman equation
$$V(s) = \int_A \int_S \pi(a|s)\, p(s'|s, a) \left[ r(s, a) + \gamma V(s') \right] ds'\, da. \qquad (2)$$
The advantage of using the Bellman equation is that it describes the relationship between the
value function at one state $s$ and its immediate follow-up states $s' \sim p(s'|s, a)$. In contrast,
the direct computation of Eq. (1) relies on the rewards obtained from entire trajectories.
2 Non-Parametric Model-based Dynamic Programming
We begin describing the NPDP approach by introducing the kernel density estimation framework used to represent the system. The true value function for this model has a kernel regression form, which can be computed by using Galerkin's projection method. We subsequently discuss some of the properties of this algorithm, including its consistency.
2.1 Non-Parametric System Modeling
The dynamics of a system are compactly represented by the joint distribution $p(s, a, s')$. Using Bayes' rule and marginalization, one can compute the transition probabilities $p(s'|s, a)$ and the current policy $\pi(a|s)$ from this joint distribution; e.g., $p(s'|s, a) = p(s, a, s') / \int p(s, a, s')\, ds'$. Rather than assuming that certain prior information is given, we will focus on the problem where only sampled information of the system is available.
Hence, the system's joint distribution is modeled from a set of $n$ samples obtained from the real system. The $i$-th sample includes the current state $s_i \in S$, the selected action $a_i \in A$, and the follow-up state $s'_i \in S$, as well as the immediate reward $r_i \in \mathbb{R}$. The state space $S$ and the action space $A$ are assumed to be continuous.
We propose using kernel density estimation to represent the joint distribution [17, 18] in a non-parametric manner. Unlike parametric models, non-parametric approaches use the collected data as features, which leads to accurate representations of arbitrary functions [19]. The system's joint distribution is therefore modeled as
$$p(s, a, s') = n^{-1} \sum_{i=1}^{n} \psi_i(s')\, \kappa_i(a)\, \varphi_i(s),$$
where $\psi_i(s') = \psi(s', s'_i)$, $\kappa_i(a) = \kappa(a, a_i)$, and $\varphi_i(s) = \varphi(s, s_i)$ are symmetric kernel functions. In practice, the kernel functions $\psi$ and $\varphi$ will often be the same. To ensure a valid probability density, each kernel must integrate to one; i.e., $\int \varphi_i(s)\, ds = 1, \forall i$, and similarly for $\psi$ and $\kappa$. As an additional constraint, the kernel must always be positive; i.e., $\psi_i(s')\, \kappa_i(a)\, \varphi_i(s) \geq 0, \forall s \in S$. This representation implies a factorization into separate $\psi_i(s')$, $\kappa_i(a)$, and $\varphi_i(s)$ kernels. As a result, an individual sample cannot express correlations between $s'$, $a$, and $s$. However, the representation does allow multiple samples to express correlations between these components in $p(s, a, s')$.
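As a concrete illustration, the sketch below evaluates such a factorized density with normalized Gaussian kernels. It is our own minimal rendering, and the shared bandwidth `h` is an assumed hyper-parameter rather than a value from the paper.

```python
import numpy as np

def gauss_kernel(x, c, h):
    # Normalized 1-D Gaussian kernel centered at c with bandwidth h;
    # it integrates to one, as required for a valid density.
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)

def joint_density(s, a, s_next, S, A, S_next, h=0.05):
    # p(s, a, s') = n^{-1} sum_i psi_i(s') kappa_i(a) phi_i(s),
    # built from n sampled transitions (S[i], A[i], S_next[i]).
    phi = gauss_kernel(s, S, h)            # phi_i(s), one value per sample
    kappa = gauss_kernel(a, A, h)          # kappa_i(a)
    psi = gauss_kernel(s_next, S_next, h)  # psi_i(s')
    return np.mean(psi * kappa * phi)
```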
The reward function $r(s, a)$ must also be represented. Given the kernel density estimate representation, the expected reward for a state-action pair is denoted as [12]
$$r(s, a) = \mathbb{E}[r|s, a] = \frac{\sum_{k=1}^{n} r_k\, \kappa_k(a)\, \varphi_k(s)}{\sum_{i=1}^{n} \kappa_i(a)\, \varphi_i(s)}.$$
Having specified the model of the system dynamics and rewards, the next step is to derive
the corresponding value function.
2.2 Resulting Solution
In this section, we propose an approach to computing the value function for the continuous
model specified in Section 2.1. Every policy has a unique value function, which fulfills the
Bellman equation, Eq. (2), for all states [2, 20]. Hence, the goal is to solve the Bellman
equation for the entire state space, and not just at the sampled states. This goal can be
achieved by using the Galerkin projection method to compute the value function for the
model [21].
The Galerkin method involves first projecting the integral equation into the space spanned by a set of basis functions. The integral equation is then solved in this projected space. To begin, the Bellman equation, Eq. (2), is rearranged as
$$V(s) = \int_A \int_S \pi(a|s)\, r(s, a)\, p(s'|s, a)\, ds'\, da + \int_S \int_A p(s'|s, a)\, \gamma V(s')\, \pi(a|s)\, da\, ds',$$
$$p(s)\, V(s) = \int_A p(a, s)\, r(s, a)\, da + \gamma \int_S p(s', s)\, V(s')\, ds'. \qquad (3)$$
Before applying the Galerkin method, we derive the exact form of the value function. Expanding the reward function and the joint distributions, as defined in Section 2.1, gives
$$p(s)\, V(s) = \int_A n^{-1}\sum_{j=1}^{n} \kappa_j(a)\, \varphi_j(s)\, \frac{\sum_{i=1}^{n} r_i\, \kappa_i(a)\, \varphi_i(s)}{\sum_{k=1}^{n} \kappa_k(a)\, \varphi_k(s)}\, da + \gamma \int_S p(s', s)\, V(s')\, ds',$$
$$p(s)\, V(s) = \int_A n^{-1}\sum_{i=1}^{n} r_i\, \kappa_i(a)\, \varphi_i(s)\, da + \gamma \int_S n^{-1}\sum_{i=1}^{n} \psi_i(s')\, \varphi_i(s)\, V(s')\, ds',$$
$$p(s)\, V(s) = n^{-1}\sum_{i=1}^{n} r_i\, \varphi_i(s) + n^{-1}\sum_{i=1}^{n} \gamma \int_S \psi_i(s')\, \varphi_i(s)\, V(s')\, ds'.$$
Therefore, $p(s)\, V(s) = n^{-1}\sum_{i=1}^{n} \lambda_i \varphi_i(s)$, where $\boldsymbol{\lambda}$ are value weights. Given that $p(s) = n^{-1}\sum_{j=1}^{n} \varphi_j(s)$, the true value function of the kernel density estimate system has a Nadaraya-Watson kernel regression [12, 13] form
$$V(s) = \frac{\sum_{i=1}^{n} \lambda_i \varphi_i(s)}{\sum_{j=1}^{n} \varphi_j(s)}. \qquad (4)$$
?1
Having computed the true form of the value function, the Galerkin projection method can be
used to compute the value weights ?. The projection is performed by taking the expectation
of the integral equation with respect to each of the n basis function ?i . The resulting n
simultaneous equations can be written as the vector equation
?
?
? ?
T
? (s) n?1 ? (s) rds+?
? (s) p(s)V (s)ds =
S
S
S
T
? (s) n?1 ? (s) ? (s0 ) V (s0 )ds0 ds,
S
where the i elements of the vectors are given by [r]i = ri , [? (s)]i = ?i (s), and [? (s0 )]i =
?i (s0 ). Expanding the value functions gives
th
$$\int_S \boldsymbol{\varphi}(s)\, \boldsymbol{\varphi}(s)^T \boldsymbol{\lambda}\, ds = \int_S \boldsymbol{\varphi}(s)\, \boldsymbol{\varphi}(s)^T \mathbf{r}\, ds + \gamma \int_S \int_S \boldsymbol{\varphi}(s)\, \boldsymbol{\varphi}(s)^T \boldsymbol{\psi}(s')\, \frac{\boldsymbol{\varphi}(s')^T \boldsymbol{\lambda}}{\sum_{i=1}^{n} \varphi_i(s')}\, ds'\, ds,$$
$$C \boldsymbol{\lambda} = C \mathbf{r} + \gamma\, C\, \Pi\, \boldsymbol{\lambda},$$
where $C = \int_S \boldsymbol{\varphi}(s)\, \boldsymbol{\varphi}(s)^T ds$, and $\Pi = \int_S \left(\sum_{i=1}^{n} \varphi_i(s')\right)^{-1} \boldsymbol{\psi}(s')\, \boldsymbol{\varphi}(s')^T ds'$ is a stochastic matrix; i.e., a transition matrix. The matrix $C$ can become singular if two basis functions are coincident.
Algorithm 1 Non-Parametric Dynamic Programming
Input:
    $n$ system samples: state $s_i$, next state $s'_i$, and reward $r_i$
    Kernel functions: $\varphi_i(s_j) = \varphi(s_i, s_j)$, and $\psi_i(s'_j) = \psi(s'_i, s'_j)$
    Discount factor: $0 \leq \gamma < 1$
Computation:
    Reward vector: $[\mathbf{r}]_i = r_i$
    Transition matrix: $[\Pi]_{i,j} = \int_S \psi_i(s')\, \varphi_j(s') \,/\, \sum_{k=1}^{n} \varphi_k(s')\; ds'$
    Value weights: $\boldsymbol{\lambda} = (I - \gamma\Pi)^{-1}\mathbf{r}$
Output:
    Value function: $V(s) = \sum_{i=1}^{n} \lambda_i \varphi_i(s) \,/\, \sum_{j=1}^{n} \varphi_j(s)$
In such cases, there exists an infinite set of solutions for $\boldsymbol{\lambda}$. However, all of the solutions result in identical values. The NPDP algorithm uses the solution given by
$$\boldsymbol{\lambda} = (I - \gamma \Pi)^{-1} \mathbf{r}, \qquad (5)$$
which always exists for any stochastic matrix $\Pi$. Thus, the derivation has shown that the exact value function for the model in Section 2.1 has a Nadaraya-Watson kernel regression form, as shown in Eq. (4), with weights $\boldsymbol{\lambda}$ given by Eq. (5). The non-parametric dynamic programming algorithm is summarized in Alg. 1. The NPDP algorithm ultimately requires only the state information $s$ and $s'$, and not the actions $a$. In Section 3, we will show how this form of derivation can also be used to derive the LSTD, KTD, and DSDP algorithms.
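For illustration, the following is a minimal sketch of Alg. 1 for a one-dimensional state space with Gaussian kernels. It is our own rendering rather than the authors' code: the integral defining $\Pi$ is approximated on a uniform grid, and the bandwidth `h` and grid resolution are assumed hyper-parameters.

```python
import numpy as np

def npdp(S, S_next, R, gamma=0.95, h=0.05, n_grid=2000):
    # Non-Parametric Dynamic Programming (Alg. 1) for 1-D states,
    # assuming the samples cover the region spanned by the grid.
    grid = np.linspace(min(S.min(), S_next.min()) - 3 * h,
                       max(S.max(), S_next.max()) + 3 * h, n_grid)
    ds = grid[1] - grid[0]
    # Kernel activations on the grid (rows: grid points, columns: samples).
    phi = np.exp(-0.5 * ((grid[:, None] - S[None, :]) / h) ** 2)
    psi = np.exp(-0.5 * ((grid[:, None] - S_next[None, :]) / h) ** 2)
    psi /= np.sqrt(2.0 * np.pi) * h  # each psi_i integrates to one
    # [Pi]_{ij} = int psi_i(s') phi_j(s') / sum_k phi_k(s') ds', on the grid.
    weights = phi / phi.sum(axis=1, keepdims=True)
    Pi = psi.T.dot(weights) * ds     # n x n, approximately row-stochastic
    lam = np.linalg.solve(np.eye(len(S)) - gamma * Pi, R)  # Eq. (5)
    def V(s):
        # Nadaraya-Watson form of the value function, Eq. (4).
        phi_s = np.exp(-0.5 * ((s - S) / h) ** 2)
        return phi_s.dot(lam) / phi_s.sum()
    return V
```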
2.3 Properties of the NPDP Algorithm
In this section, we discuss some of the key properties of the proposed NPDP algorithm,
including precision, accuracy, and computational complexity. Precision refers to how close
the predicted value function is to the true value function of the model, while accuracy refers
to how close the model is to the true system.
One of the key contributions of this paper is providing the true form of the value function for
policy evaluation with the non-parametric model described in Section 2.1. The parameters
of this value function can be computed precisely by solving Eq. (5). Even if $\Pi$ is evaluated numerically, a high level of precision can still be obtained.
As a non-parametric method, the accuracy of the NPDP algorithm depends on the number
of samples obtained from the system. It is important that the model, and thus the value
function, converges to that of the true system as the number of samples increases; i.e., that
the model is statistically consistent. In fact, kernel density estimation can be proven to have
almost sure convergence to the true distribution for a wide range of kernels [22].
Given that $\Pi$ is a stochastic matrix and $0 \leq \gamma < 1$, it is well known that the inversion of $(I - \gamma\Pi)$ is well-defined [16]. The inverse can therefore also be expanded according to the Neumann series; i.e., $\boldsymbol{\lambda} = \sum_{i=0}^{\infty} [\gamma\Pi]^i\, \mathbf{r}$. Similar to other kernel-based policy evaluation methods [23, 24], NPDP has a computational complexity of $O(n^3)$ when performed naively. However, by taking advantage of sparse matrix computations, this complexity can be reduced to $O(nz)$, where $z$ is the number of non-zero elements in $(I - \gamma\Pi)$.
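The truncated Neumann series can be computed by the fixed-point iteration $\boldsymbol{\lambda} \leftarrow \mathbf{r} + \gamma\Pi\boldsymbol{\lambda}$; a sketch of this with a sparse $\Pi$ follows. This is our own illustration, and the tolerance and iteration cap are assumed values.

```python
import numpy as np
import scipy.sparse as sp

def neumann_value_weights(Pi, r, gamma, tol=1e-10, max_iter=10000):
    # Computes lambda = sum_{i>=0} (gamma*Pi)^i r without forming an inverse;
    # each iteration costs O(nz) for a sparse Pi with nz non-zero entries.
    Pi = sp.csr_matrix(Pi)
    lam = r.copy()
    for _ in range(max_iter):
        lam_new = r + gamma * Pi.dot(lam)
        if np.max(np.abs(lam_new - lam)) < tol:
            return lam_new
        lam = lam_new
    return lam
```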
3 Relation to Existing Methods
The second contribution of this paper is to provide a unified view of Least Squares Temporal
Difference learning (LSTD), Kernelized Temporal Difference learning (KTD), Discrete-State
Dynamic Programming (DSDP), and the proposed Non-Parametric Dynamic Programming
(NPDP). In this section, we utilize the Galerkin methodology from Section 2.2 to re-derive
the LSTD, KTD, and DSDP algorithms, and discuss how these methods compare to NPDP.
A numerical comparison is given in Section 4.
3.1 Least Squares Temporal Difference Learning
The LSTD algorithm allows the value function $V(s)$ to be represented by a set of $m$ arbitrary basis functions $\tilde\varphi_i(s)$, see [14]. Hence, $V(s) = \sum_{i=1}^{m} \tilde\beta_i \tilde\varphi_i(s) = \tilde{\boldsymbol{\varphi}}(s)^T \tilde{\boldsymbol{\beta}}$, where $\tilde{\boldsymbol{\beta}}$ is a vector of coefficients learned during policy evaluation, and $[\tilde{\boldsymbol{\varphi}}(s)]_i = \tilde\varphi_i(s)$. In order to re-derive the LSTD policy evaluation, the joint distribution is represented as a set of delta functions $p(s, a, s') = n^{-1}\sum_{i=1}^{n} \delta_i(s, a, s')$, where $\delta_i(s, a, s')$ is a Dirac delta function centered on $(s_i, a_i, s'_i)$. Using Galerkin's method, the integral equation is projected into the space of the basis functions $\tilde{\boldsymbol{\varphi}}(s)$. Thus, Eq. (3) becomes
$$\int_S \tilde{\boldsymbol{\varphi}}(s)\, p(s)\, \tilde{\boldsymbol{\varphi}}(s)^T \tilde{\boldsymbol{\beta}}\, ds = \int_A\int_S \tilde{\boldsymbol{\varphi}}(s)\, p(s, a)\, r(s, a)\, ds\, da + \gamma \int_S\int_S \tilde{\boldsymbol{\varphi}}(s)\, p(s, s')\, \tilde{\boldsymbol{\varphi}}(s')^T \tilde{\boldsymbol{\beta}}\, ds'\, ds,$$
$$\sum_{i=1}^{n} \tilde{\boldsymbol{\varphi}}(s_i)\, \tilde{\boldsymbol{\varphi}}(s_i)^T \tilde{\boldsymbol{\beta}} = \sum_{j=1}^{n} r(s_j, a_j)\, \tilde{\boldsymbol{\varphi}}(s_j) + \gamma \sum_{k=1}^{n} \tilde{\boldsymbol{\varphi}}(s_k)\, \tilde{\boldsymbol{\varphi}}(s'_k)^T \tilde{\boldsymbol{\beta}},$$
$$\sum_{i=1}^{n} \tilde{\boldsymbol{\varphi}}(s_i) \left( \tilde{\boldsymbol{\varphi}}(s_i)^T - \gamma \tilde{\boldsymbol{\varphi}}(s'_i)^T \right) \tilde{\boldsymbol{\beta}} = \sum_{j=1}^{n} r(s_j, a_j)\, \tilde{\boldsymbol{\varphi}}(s_j),$$
and thus $A\tilde{\boldsymbol{\beta}} = \mathbf{b}$, where $A = \sum_{i=1}^{n} \tilde{\boldsymbol{\varphi}}(s_i)\left(\tilde{\boldsymbol{\varphi}}(s_i)^T - \gamma\tilde{\boldsymbol{\varphi}}(s'_i)^T\right)$ and $\mathbf{b} = \sum_{j=1}^{n} r(s_j, a_j)\, \tilde{\boldsymbol{\varphi}}(s_j)$. The final weights are therefore given by
$$\tilde{\boldsymbol{\beta}} = A^{-1}\mathbf{b}.$$
This equation is also solved by LSTD, including the incremental updates of A and b as
new samples are acquired [14]. Therefore, LSTD can be seen as computing the transitions
between the basis functions using a Monte Carlo approach. However, Monte Carlo methods
rely on large numbers of samples to obtain accurate results.
A key disadvantage of the LSTD method is the need to select a specific set of basis functions.
The computed value function will always be a projection of the true value function into the
space of these basis functions [8]. If the true value function does not lie within the space of
these basis functions, the resulting approximation may be arbitrarily inaccurate, regardless
of the number of acquired samples. However, using predefined basis functions only requires
inverting an $m \times m$ matrix, which results in a lower computational complexity than NPDP.
The LSTD may also need to be regularized, as the inversion of A becomes ill-posed if the
basis functions are too densely spaced. Regularization has a similar effect to changing the
transition probabilities of the system [25].
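A minimal sketch of this closed-form solution follows (our own illustration). The normalized Gaussian basis mirrors the choice later used in Section 4; its centers and width are assumed parameters.

```python
import numpy as np

def gaussian_basis(S, centers, width):
    # Normalized Gaussian basis functions evaluated at the states in S.
    act = np.exp(-0.5 * ((S[:, None] - centers[None, :]) / width) ** 2)
    return act / act.sum(axis=1, keepdims=True)

def lstd(Phi, Phi_next, R, gamma):
    # A = sum_i phi(s_i)(phi(s_i) - gamma*phi(s'_i))^T,  b = sum_i r_i phi(s_i);
    # Phi and Phi_next are n x m matrices of basis activations at s_i and s'_i.
    A = Phi.T.dot(Phi - gamma * Phi_next)
    b = Phi.T.dot(R)
    return np.linalg.solve(A, b)  # weight vector beta
```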
3.2 Kernelized Temporal Difference Learning Methods
The proposed approach is of course not the first to use kernels for policy evaluation. Methods such as kernelized least-squares temporal difference learning [24] and Gaussian process
temporal difference learning [23] have also employed kernels in policy evaluation. Taylor
and Parr demonstrated that these methods differ mainly in their use of regularization [15].
The unified view of these methods is referred to as Kernelized Temporal Difference learning.
The KTD approach assumes that the reward and value functions can be represented by kernelized linear least-squares regression; i.e., $r(s) = \mathbf{k}(s)^T K^{-1}\mathbf{r}$ and $V(s) = \mathbf{k}(s)^T\tilde{\boldsymbol{\beta}}$, where $[\mathbf{k}(s)]_i = k(s, s_i)$, $[K]_{ij} = k(s_i, s_j)$, $[\mathbf{r}]_i = r_i$, and $\tilde{\boldsymbol{\beta}}$ is a weight vector. In order to derive KTD using Galerkin's method, it is necessary to again represent the joint distribution as $p(s, a, s') = n^{-1}\sum_{i=1}^{n}\delta_i(s, a, s')$. The Galerkin method projects the integral equation into the space of the Kronecker delta functions $[\bar{\boldsymbol{\delta}}(s)]_i = \bar\delta_i(s, a_i, s'_i)$, where $\bar\delta_i(s, a, s') = 1$ if $s = s_i$, $a = a_i$, and $s' = s'_i$; otherwise $\bar\delta_i(s, a, s') = 0$. Thus, Eq. (3) becomes
$$\int_S \bar{\boldsymbol{\delta}}(s)\, p(s)\, \mathbf{k}(s)^T\tilde{\boldsymbol{\beta}}\, ds = \int_S \bar{\boldsymbol{\delta}}(s)\, p(s)\, r(s)\, ds + \gamma \int_S\int_S \bar{\boldsymbol{\delta}}(s)\, p(s, s')\, \mathbf{k}(s')^T\tilde{\boldsymbol{\beta}}\, ds'\, ds.$$
By substituting $p(s, a, s')$ and applying the sifting property of delta functions, this equation becomes
$$\sum_{i=1}^{n} \bar{\boldsymbol{\delta}}(s_i)\,\mathbf{k}(s_i)^T\tilde{\boldsymbol{\beta}} = \sum_{j=1}^{n} \bar{\boldsymbol{\delta}}(s_j)\,\mathbf{k}(s_j)^T K^{-1}\mathbf{r} + \gamma \sum_{k=1}^{n} \bar{\boldsymbol{\delta}}(s_k)\,\mathbf{k}(s'_k)^T\tilde{\boldsymbol{\beta}},$$
and thus $K\tilde{\boldsymbol{\beta}} = \mathbf{r} + \gamma K'\tilde{\boldsymbol{\beta}}$, where $[K']_{ij} = k(s'_i, s_j)$. The value function weights are therefore
$$\tilde{\boldsymbol{\beta}} = (K - \gamma K')^{-1}\mathbf{r},$$
which is identical to the solution found by the KTD approach [15]. In this manner, the KTD approach computes a weighting $\tilde{\boldsymbol{\beta}}$ such that the difference between the value at $s_i$ and the discounted value at $s'_i$ equals the observed empirical reward $r_i$. Thus, only the finite set of sampled states is regarded for policy evaluation. Therefore, some KTD methods, e.g., Gaussian process temporal difference learning [23], require that the samples are obtained from a single trajectory to ensure that $s'_i = s_{i+1}$.
A key difference between KTD and NPDP is the representation of the value function $V(s)$. The form of the value function is a direct result of the representation used to embody the state transitions. In the original paper [15], the KTD algorithm represents the transitions by using linear kernelized regression $\bar{\mathbf{k}}(s')^T = \mathbf{k}(s)^T K^{-1} K'$, where $[\bar{\mathbf{k}}(s')]_i = \mathbb{E}[k(s', s_i)]$. The value function $V(s) = \mathbf{k}(s)^T\tilde{\boldsymbol{\beta}}$ is the correct form for this transition model. However, the transition model does not explicitly represent a conditional distribution and can lead to inaccurate predictions. For example, consider two samples that start at $s_1 = 0$ and $s_2 = 0.75$ respectively, and both transition to $s' = 0.75$. For clarity, we use a box-car kernel with a width of one: $k(s_i, s_j) = 1$ iff $\|s_i - s_j\| \leq 0.5$, and $0$ otherwise. Hence, $K = I$ and each row of $K'$ corresponds to $(0, 1)$. In the region $0.25 \leq s \leq 0.5$, where the two kernels overlap, the transition model would then predict $\bar{\mathbf{k}}(s) = \mathbf{k}(s)^T K^{-1} K' = [\,0 \;\; 2\,]$. This prediction is, however, impossible, as it requires that $\mathbb{E}[k(s', s_2)] > \max_s k(s, s_2)$. In comparison, NPDP would predict the distribution $p(s') \propto \psi_1(s') + \psi_2(s')$ for all states in the range $-0.5 \leq s \leq 1.25$.
As with LSTD, the matrix $(K - \gamma K')$ may become singular and thus not be invertible. As a result, KTD usually needs to be regularized [15]. Given that KTD requires inverting an $n \times n$ matrix, this approach has a computational complexity similar to NPDP.
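A sketch of this closed form follows (our own illustration). The small ridge term stands in for the regularization discussed above and is an assumed choice, as is the example kernel bandwidth.

```python
import numpy as np

def ktd(S, S_next, R, gamma, kernel, ridge=1e-6):
    # beta = (K - gamma*K')^{-1} r, with [K]_ij = k(s_i, s_j)
    # and [K']_ij = k(s'_i, s_j).
    K = kernel(S[:, None], S[None, :])
    K_next = kernel(S_next[:, None], S[None, :])
    beta = np.linalg.solve(K - gamma * K_next + ridge * np.eye(len(S)), R)
    def V(s):
        return kernel(s, S).dot(beta)  # V(s) = k(s)^T beta
    return V

# Example Gaussian kernel with an assumed bandwidth:
rbf = lambda a, b: np.exp(-0.5 * ((a - b) / 0.05) ** 2)
```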
3.3 Discrete-State Dynamic Programming
The standard tabular DSDP approach can also be derived using the Galerkin method. Given a system with $q$ discrete states, the value function has the form $V(s) = \bar{\boldsymbol{\delta}}(s)^T\mathbf{v}$, where $\bar{\boldsymbol{\delta}}(s)$ is a vector of $q$ Kronecker delta functions centered on the discrete states. The corresponding reward function is $r(s) = \bar{\boldsymbol{\delta}}(s)^T\bar{\mathbf{r}}$. The joint distribution is given by $p(s', s) = q^{-1}\bar{\boldsymbol{\delta}}(s)^T P\,\bar{\boldsymbol{\delta}}(s')$, where $P$ is a stochastic matrix, $\sum_{j=1}^{q}[P]_{ij} = 1, \forall i$, and hence $p(s) = q^{-1}\sum_{i=1}^{q}\bar\delta_i(s)$. Galerkin's method projects the integral equation into the space of the states $\bar{\boldsymbol{\delta}}(s)$. Thus, Eq. (3) becomes
$$\int_S \bar{\boldsymbol{\delta}}(s)\, p(s)\, \bar{\boldsymbol{\delta}}(s)^T\mathbf{v}\, ds = \int_S \bar{\boldsymbol{\delta}}(s)\, p(s)\, \bar{\boldsymbol{\delta}}(s)^T\bar{\mathbf{r}}\, ds + \gamma \int_S\int_S \bar{\boldsymbol{\delta}}(s)\, p(s, s')\, \bar{\boldsymbol{\delta}}(s')^T\mathbf{v}\, ds'\, ds,$$
$$I\mathbf{v} = I\bar{\mathbf{r}} + \gamma \int_S\int_S \bar{\boldsymbol{\delta}}(s)\bar{\boldsymbol{\delta}}(s)^T P\, \bar{\boldsymbol{\delta}}(s')\bar{\boldsymbol{\delta}}(s')^T\mathbf{v}\, ds'\, ds,$$
$$\mathbf{v} = \bar{\mathbf{r}} + \gamma P\mathbf{v},$$
$$\mathbf{v} = (I - \gamma P)^{-1}\bar{\mathbf{r}}, \qquad (6)$$
which is the same computation used by DSDP [16]. The DSDP and NPDP methods actually
use similar models to represent the system. While NPDP uses a kernel density estimation,
the DSDP algorithm uses a histogram representation. Hence, DSDP can be regarded as a
special case of NPDP for discrete state systems.
The DSDP algorithm has also been the basis for continuous-state policy evaluation algorithms [26, 27]. These algorithms first use the sampled states as the discrete states of an
MDP and compute the corresponding values. The computed values are then generalized,
under a smoothness assumption, to the rest of the state-space using local averaging. Unlike
these methods, NPDP explicitly performs policy evaluation for a continuous set of states.
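For reference, a sketch of Eq. (6) together with a histogram estimate of $P$ from sampled transitions is given below. It is our own illustration; the binning mirrors the discretization used by the DSDP baseline in Section 4.

```python
import numpy as np

def estimate_P(S, S_next, edges):
    # Histogram (counting) estimate of the q x q transition matrix.
    q = len(edges) - 1
    i = np.clip(np.digitize(S, edges) - 1, 0, q - 1)
    j = np.clip(np.digitize(S_next, edges) - 1, 0, q - 1)
    P = np.zeros((q, q))
    np.add.at(P, (i, j), 1.0)
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1.0)

def dsdp(P, r_bar, gamma):
    # Tabular policy evaluation, Eq. (6): v = (I - gamma*P)^{-1} r_bar.
    return np.linalg.solve(np.eye(P.shape[0]) - gamma * P, r_bar)
```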
4 Numerical Evaluation
In this section, we compare the different policy evaluation methods discussed in the previous
section, with the proposed NPDP method, on an illustrative benchmark system.
4.1 Benchmark Problem and Setup
In order to compare the LSTD, KTD, DSDP, and NPDP approaches, we evaluated the methods on a discrete-time continuous-state system. A standard linear-Gaussian system was used for the benchmark problem, with transitions given by $s' = 0.95s + \epsilon$, where $\epsilon$ is Gaussian noise $\mathcal{N}(\mu = 0, \sigma = 0.025)$. The initial states are restricted to the range 0.95 to 1. The reward function consists of three Gaussians, as shown by the black line in Fig. 1.
The KTD method was implemented using a Gaussian kernel function and regularization.
The LSTD algorithm was implemented using 15 uniformly-spaced normalized Gaussian
basis functions, and did not require regularization. The DSDP method was implemented
by discretizing the state-space into 10 equally wide regions. The NPDP method was also
implemented using Gaussian kernels.
The hyper-parameters of all four methods, including the number of basis functions for
LSTD and DSDP, were carefully tuned to achieve the best performance. As a performance baseline, the values of the system in the range $0 < s < 1$ were computed using a Monte
Carlo estimate based on 50000 trajectories. The policy evaluations performed by the tested
methods were always based on only 500 samples, i.e., 100 times fewer samples than the baseline. The experiment was run 500 times using independent sets of 500 samples. The samples
were not drawn from the same trajectory.
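A sketch of drawing such a sample set is given below (our own illustration). The placement and widths of the three reward Gaussians are assumed values, since the paper specifies them only through Fig. 1, and sampling start states uniformly on $[0, 1]$ is likewise our assumption.

```python
import numpy as np

def sample_benchmark(n=500, seed=None):
    # Benchmark system of Section 4.1: s' = 0.95*s + eps, eps ~ N(0, 0.025).
    rng = np.random.default_rng(seed)
    S = rng.uniform(0.0, 1.0, size=n)                   # assumed sampling scheme
    S_next = 0.95 * S + rng.normal(0.0, 0.025, size=n)
    # Reward: sum of three Gaussian bumps (assumed centers and width).
    centers, width = np.array([0.2, 0.5, 0.8]), 0.05
    R = np.exp(-0.5 * ((S[:, None] - centers[None, :]) / width) ** 2).sum(axis=1)
    return S, S_next, R
```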
4.2 Results
The performance of the different methods was compared using three performance measures. Two of the performance measures are based on the weighted Mean Squared Error (MSE) [2],
$$E(V) = \int_0^1 W(s)\left(V(s) - V^*(s)\right)^2 ds,$$
where $V^*$ is the true value function and $W(s) \geq 0$, for all states, is a weighting distribution, $\int_0^1 W(s)\, ds = 1$. The first performance measure $E_{\mathrm{unif}}$ corresponds to the MSE where $W(s) = 1$ for all states in the range zero to one. The second performance measure $E_{\mathrm{samp}}$ corresponds to the MSE where $W(s) = n^{-1}\sum_{i=1}^{n}\varphi_i(s)$. Thus, $E_{\mathrm{samp}}$ is an indicator of the accuracy in the space of the samples, while $E_{\mathrm{unif}}$ is an indicator of how well the computed value function generalizes to the entire state space. The third performance measure $E_{\max}$ is given by the maximum error in the value function. This performance measure is the basis of a bound on the overall value function approximation [20].
The results of the experiment are shown in Table 1. The performance measures were averaged over the 500 independent trials of the experiment. For all three performance measures,
the NPDP algorithm achieved the highest levels of performance, while the DSDP approach
consistently led to the worst performance.
         E_unif              E_samp              E_max
NPDP     0.5811 ± 0.0333     0.7185 ± 0.0321     1.4971 ± 0.0309
LSTD     0.6898 ± 0.0443     0.8932 ± 0.0412     1.5591 ± 0.0382
KTD      0.7585 ± 0.0460     0.8681 ± 0.0270     2.5329 ± 0.0391
DSDP     1.6979 ± 0.0332     2.1548 ± 0.1082     2.9985 ± 0.0449
Table 1: Each row corresponds to one of the four tested algorithms for policy evaluation.
The columns indicate the performance of the approaches during the experiment. The performance indexes include the mean squared error evaluated uniformly over the zero to one
range, the mean squared error evaluated at the 500 sampled points, and the maximum error.
The results are averaged over 500 trials. The standard errors of the means are also given.
[Figure 1 appears here: two plots of value versus state on $0 \leq s \leq 1$. One panel shows the true value, the reward, and the LSTD and KTD estimates; the other shows the true value, the reward, and the DSDP and NPDP estimates.]
Figure 1: Value functions obtained by the evaluated methods. The black lines show the
reward function. The blue lines show the value function computed from the trajectories of
50,000 uniformly sampled points. The LSTD, KTD, DSDP, and NPDP methods evaluated
the policy using only 500 points. The presentation was divided into two plots for improved
clarity.
4.3 Discussion
The LSTD algorithm achieved a relatively low $E_{\mathrm{unif}}$ value, which indicates that the tuned
basis functions could accurately represent the true value function. However, the performance
of LSTD is sensitive to the choice of basis functions and the number of samples per basis
function. Using 20 basis functions instead of 15 reduces the performance of LSTD to $E_{\mathrm{unif}} = 2.8705$ and $E_{\mathrm{samp}} = 1.0256$ as a result of overfitting. The KTD method achieved the second-best performance for $E_{\mathrm{samp}}$, as a result of using a non-parametric representation. However, the value tended to drop in sparsely-sampled regions, which leads to relatively high $E_{\mathrm{unif}}$ and $E_{\max}$ values. The discretization of states for DSDP is generally a disadvantage when
modeling continuous systems, and resulted in poor overall performance for this evaluation.
The NPDP approach out-performed the other methods in all three performance measures.
The performance of NPDP could be further improved by using adaptive kernel density
estimation [28] to locally adapt the kernels' bandwidths according to the sampling density.
However, all methods were restricted to using a single global bandwidth for the purpose of
this comparison.
5 Conclusion
This paper presents two key contributions to continuous-state policy evaluation. The first
contribution is the Non-Parametric Dynamic Programming algorithm for policy evaluation.
The proposed method uses a kernel density estimate to generate a consistent representation
of the system. It was shown that the true form of the value function for this model is
given by a Nadaraya-Watson kernel regression. The NPDP algorithm provides a solution for
calculating the value function. As a kernel-based approach, NPDP simultaneously addresses
the problems of function approximation and policy evaluation.
The second contribution of this paper is providing a unified view of Least-Squares Temporal
Difference learning, Kernelized Temporal Difference learning, and discrete-state Dynamic
Programming, as well as NPDP. All four approaches can be derived from the Bellman
equation using the Galerkin projection method. These four approaches were also evaluated
and compared on an empirical problem with a continuous state space and non-linear reward
function, wherein the NPDP algorithm out-performed the other methods.
Acknowledgements
The project receives funding from the European Community's Seventh Framework Programme under grant agreements n° ICT-248273 GeRT and n° 270327 CompLACS.
References
[1] Dimitri P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 2007.
[2] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[3] H. Maei, C. Szepesvári, S. Bhatnagar, D. Precup, D. Silver, and R. Sutton. Convergent temporal-difference learning with arbitrary smooth function approximation. In NIPS, pages 1204-1212, 2009.
[4] Richard Bellman. Bottleneck problems and dynamic programming. Proceedings of the National Academy of Sciences of the United States of America, 39(9):947-951, 1953.
[5] R. E. Kalman. Contributions to the theory of optimal control, 1960.
[6] Warren B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics). Wiley-Interscience, 2007.
[7] Rémi Munos. Geometric variance reduction in Markov chains: Application to value function and gradient estimation. Journal of Machine Learning Research, 7:413-427, 2006.
[8] Ralf Schoknecht. Optimality of reinforcement learning algorithms with linear function approximation. In NIPS, pages 1555-1562, 2002.
[9] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.
[10] Christopher G. Atkeson and Juan C. Santamaria. A comparison of direct and model-based reinforcement learning. In ICRA, pages 3557-3564, 1997.
[11] H. Bersini and V. Gorrini. Three connectionist implementations of dynamic programming for optimal control: A preliminary comparative analysis. In NICROSP, 1996.
[12] E. Nadaraya. On estimating regression. Theory of Probability and its Applications, 9:141-142, 1964.
[13] G. Watson. Smooth regression analysis. Sankhya, Series A, 26:359-372, 1964.
[14] Justin A. Boyan. Least-squares temporal difference learning. In ICML, pages 49-56, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
[15] Gavin Taylor and Ronald Parr. Kernelized value function approximation for reinforcement learning. In ICML, pages 1017-1024, New York, NY, USA, 2009. ACM.
[16] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[17] Murray Rosenblatt. Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics, 27(3):832-837, September 1956.
[18] Emanuel Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065-1076, 1962.
[19] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82-95, 1971.
[20] Rémi Munos. Error bounds for approximate policy iteration. In ICML, pages 560-567, 2003.
[21] Kendall E. Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, 1997.
[22] Dominik Wied and Rafael Weissbach. Consistency of the kernel density estimator: a survey. Statistical Papers, pages 1-21, 2010.
[23] Yaakov Engel, Shie Mannor, and Ron Meir. Reinforcement learning with Gaussian processes. In ICML, pages 201-208, New York, NY, USA, 2005. ACM.
[24] Xin Xu, Tao Xie, Dewen Hu, and Xicheng Lu. Kernel least-squares temporal difference learning. International Journal of Information Technology, 11:54-63, 1997.
[25] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML, pages 521-528. ACM, 2009.
[26] Nicholas K. Jong and Peter Stone. Model-based function approximation for reinforcement learning. In AAMAS, May 2007.
[27] Dirk Ormoneit and Śaunak Sen. Kernel-based reinforcement learning. Machine Learning, 49(2):161-178, November 2002.
[28] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London, 1986.
Heavy-tailed Distances for
Gradient Based Image Descriptors
Yangqing Jia and Trevor Darrell
UC Berkeley EECS and ICSI
{jiayq,trevor}@eecs.berkeley.edu
Abstract
Many applications in computer vision measure the similarity between images or
image patches based on some statistics such as oriented gradients. These are often modeled implicitly or explicitly with a Gaussian noise assumption, leading to
the use of the Euclidean distance when comparing image descriptors. In this paper, we show that the statistics of gradient based image descriptors often follow
a heavy-tailed distribution, which undermines any principled motivation for the
use of Euclidean distances. We advocate for the use of a distance measure based
on the likelihood ratio test with appropriate probabilistic models that fit the empirical data distribution. We instantiate this similarity measure with the Gamma-compound-Laplace distribution, and show significant improvement over existing
distance measures in the application of SIFT feature matching, at relatively low
computational cost.
1 Introduction
A particularly effective image representation has developed in recent years, formed by computing
the statistics of oriented gradients quantized into various spatial and orientation selective bins. SIFT
[14], HOG [6], and GIST [17] have been shown to have extraordinary descriptiveness on both instance and category recognition tasks, and have been designed with invariances to many common
nuisance parameters. Significant motivation for these architectures arises from biology, where models of early visual processing similarly integrate statistics over orientation selective units [21, 18].
Two camps have developed in recent years regarding how such descriptors should be compared. The
first advocates comparison of raw descriptors. Early works [6] considered the distance of patches to
a database from labeled images; this idea was reformulated as a probabilistic classifier in the NBNN
technique [4], which has surprisingly strong performance across a range of conditions. Efficient
approximations based on hashing [22, 12] or tree-based data structures [14, 16] or their combination
[19] have been commonly applied, but do not change the underlying ideal distance measure.
The other approach is perhaps the more dominant contemporary paradigm, and explores a quantized-prototype approach where descriptors are characterized in terms of the closest prototype, e.g., in
a vector quantization scheme. Recently, hard quantization and/or Euclidean-based reconstruction
techniques have been shown inferior to sparse coding methods, which employ a sparsity prior to
form a dictionary of prototypes. A series of recent publications has proposed prototype formation
methods including various sparsity-inducing priors, including most commonly the L1 prior [15], as
well as schemes for sharing structure in a ensemble-sparse fashion across tasks or conditions [10]. It
is informative that sparse coding methods also have a foundation as models for computational visual
neuroscience [18].
Virtually all these methods use the Euclidean distance when comparing image descriptors against
the prototypes or the reconstructions, which is implicitly or explicitly derived from a Gaussian noise
assumption on image descriptors. In this paper, we ask whether this is the case, and further, whether
(a) Histogram
(b) Matching Patches
Figure 1: (a) The histogram of the difference between SIFT features of matching image patches from
the Photo Tourism dataset. (b) A typical example of matching patches. The obstruction (wooden
branch) in the bottom patch leads to a sparse change to the histogram of oriented gradients (the two
red bars).
there is a distance measure that better fits the distribution of real-world image descriptors. We
begin by investigating the statistics of oriented gradient based descriptors, focusing on the well
known Photo Tourism database [25] of SIFT descriptors for the case of simplicity. We evaluate
the statistics of corresponding patches, and see the distribution is heavy-tailed and decidedly nonGaussian, undermining any principled motivation for the use of Euclidean distances.
We consider generative factors why this may be so, and derive a heavy-tailed distribution (that we
call the gamma-compound-Laplace distribution) in a Bayesian fashion, which empirically fits well
to gradient based descriptors. Based on this, we propose to use a principled approach using the
likelihood ratio test to measure the similarity between data points under any arbitrary parameterized
distribution, which includes the previously adopted Gaussian and exponential family distributions
as special cases. In particular, we prove that for the heavy-tailed distribution we proposed, the
corresponding similarity measure leads to a distance metric, theoretically justifying its use as a
similarity measurement between image patches.
The contribution of this paper is two-fold. We believe ours is the first work to systematically examine the distribution of the noise in terms of oriented gradients for corresponding keypoints in
natural scenes. In addition, the likelihood ratio distance measure establishes a principled connection
between the distribution of data and various distance measures in general, allowing us to choose
the appropriate distance measure that corresponds to the true underlying distribution in an application. Our method serves as a building block in either nearest-neighbor distance computation (e.g.
NBNN [4]) and codebook learning (e.g. vector quantization and sparse coding), where the Euclidean
distance measure can be replaced by our distance measure for better performance.
It is important to note that in both paradigms listed above, nearest-neighbor distance computation and codebook learning, discriminative variants and structured approaches exist that can optimize a distance measure or codebook based on a given task. Learning a distance measure that incorporates both the data distribution and task-dependent information is the subject of future work.
2 Statistics of Local Image Descriptors
In this section, we focus on examining the statistics of local image descriptors, using the SIFT
feature [14] as an example. Classical feature matching and clustering methods on SIFT features
use the Euclidean distance to compare two descriptors. In a probabilistic perspective, this implies
a Gaussian noise model for SIFT: given a feature prototype $\mu$ (which could be the prototype in
feature matching, or a cluster center in clustering), the probability that an observation x matches the
prototype can be evaluated by the Gaussian probability
$$p(x|\mu) \propto \exp\left(-\frac{\|x - \mu\|_2^2}{2\sigma^2}\right), \qquad (1)$$
Figure 2: The probability values of the GCL, Laplace and Gaussian distributions via ML estimation,
compared against the empirical distribution of local image descriptor noises. The figure is in log
scale and curves are normalized for better comparison. For details about the data, see Section 4.
where $\sigma$ is the standard deviation of the noise. Such a Gaussian noise model has been explicitly or
implicitly assumed in most algorithms including vector quantization, sparse coding (on the reconstruction error), etc.
Despite the popular use of Euclidean distance, the distribution of the noise between matching SIFT
patches does not follow a Gaussian distribution: as shown in Figure 1(a), the distribution is highly
kurtotic and heavy tailed, indicating that Euclidean distance may not be ideal.
The reason why the Gaussian distribution may not be a good model for the noise of local image
descriptors can be better understood from the generative procedure of the SIFT features. Figure
1(b) shows a typical case of matching patches: one patch contains a partially obstructing object
while the other does not. The resulting histogram differs only in a sparse subset of the oriented
gradients. Further, research on the V1 receptive field [18] suggests that natural images are formed
from localized, oriented, bandpass patterns, implying that changing the weight of one such building
pattern may tend to change only one or a few dimensions of the binned oriented gradients, instead
of imposing an isometric Gaussian change to the whole feature.
2.1 A Heavy-tailed Distribution for Image Descriptors
We first explore distributions that fit such a heavy-tailed property. A common approach to cope with heavy tails is to use the L1 distance, which corresponds to the Laplace distribution
$$p(x|\mu; \lambda) = \frac{\lambda}{2}\exp\left(-\lambda|x - \mu|\right). \qquad (2)$$
However, the tail of the noise distribution is often still heavier than that of the Laplace distribution: empirically, we find the kurtosis of the SIFT noise distribution to be larger than 7 for most dimensions, while the kurtosis of the Laplace distribution is only 3. Inspired by hierarchical Bayesian models [11], instead of fixing the $\lambda$ value in the Laplace distribution, we introduce a conjugate Gamma prior over $\lambda$, modeled by hyperparameters $\{\alpha, \beta\}$, and compute the probability of $x$ given the prototype $\mu$ by integrating over $\lambda$:
$$p(x|\mu; \alpha, \beta) = \int_0^\infty \frac{\lambda}{2} e^{-\lambda|x-\mu|} \cdot \frac{\beta^\alpha}{\Gamma(\alpha)} \lambda^{\alpha-1} e^{-\beta\lambda}\, d\lambda = \frac{1}{2}\, \alpha \beta^\alpha \left(|x-\mu| + \beta\right)^{-\alpha-1}. \qquad (3)$$
This leads to a heavier tail than the Laplace distribution. We call Equation (3) the Gamma-compound-Laplace (GCL) distribution, in which the hyperparameters $\alpha$ and $\beta$ control the shape of the tail. Figure 2 shows the empirical distribution of the SIFT noise and the maximum likelihood
fitting of various models. It can be observed that the GCL distribution enables us to fit the heavy-tailed empirical distribution better than other distributions. We note that similar approaches have
been exploited in the compressive sensing context [9], and are shown to perform better than using
the Laplace distribution as the sparse prior in applications such as signal recovery.
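For reference, the log-density of Eq. (3) and a simple maximum likelihood fit are sketched below; the numerical fitting routine is our own illustration, not necessarily the procedure used for Figure 2.

```python
import numpy as np
from scipy.optimize import minimize

def gcl_logpdf(x, mu, alpha, beta):
    # log of Eq. (3): (1/2) * alpha * beta^alpha * (|x - mu| + beta)^(-alpha - 1)
    return (np.log(alpha) + alpha * np.log(beta) - np.log(2.0)
            - (alpha + 1.0) * np.log(np.abs(x - mu) + beta))

def fit_gcl(x, mu=0.0):
    # ML estimate of (alpha, beta) for a fixed prototype mu; parameters are
    # optimized in log-space to enforce positivity.
    def nll(params):
        a, b = np.exp(params)
        return -np.sum(gcl_logpdf(x, mu, a, b))
    res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)  # (alpha_hat, beta_hat)
```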
Further, we note that the statistics of a wide range of other natural image descriptors beyond SIFT
features are known to be highly non-Gaussian and have heavy tails [24]. Examples of these include
derivative-like wavelet filter responses [23, 20], optical flow and stereo vision statistics [20, 8], shape
from shading [3], and so on.
In this paper we step back from the general question "what is the right distribution for natural images?", and ask specifically whether there is a good distance metric for local image descriptors that takes
the heavy-tailed distribution into consideration. Although heuristic approaches such as taking the
squared root of the feature values before computing the Euclidean distance are sometimes adopted
to alleviate the effect of heavy tails, there is, to the best of our knowledge, no principled way to define a distance for heavy-tailed data in computer vision. To this end, we start with a principled similarity measure based on the well-known statistical hypothesis test, and instantiate it with heavy-tailed distributions we propose for local image descriptors.
3 Distance for Heavy-tailed Distributions
In statistics, the hypothesis test [7] has been widely adopted to check whether a certain statistical
model fits the observations. We will focus on the likelihood ratio test in this paper. In general, we
assume that the data is generated by a parameterized probability distribution p(x|θ), where θ is the
vector of parameters. A null hypothesis is stated by restricting the parameter θ to a specific subset
Θ₀, which is nested in a more general parameter space Θ. To test whether the restricted null hypothesis
fits a set of observations X, a natural choice is the ratio of the maximized likelihood of the
restricted model to that of the more general model:

    Λ(X) = L(θ̂₀; X) / L(θ̂; X),                                           (4)

where L(θ; X) is the likelihood function, θ̂₀ is the maximum likelihood estimate of the parameter
within the restricted subset Θ₀, and θ̂ is the maximum likelihood estimate in the general case.
It is easy to verify that Λ(X) always lies in the range [0, 1], since the maximum likelihood estimate
of the general case fits at least as well as that of the restricted case, and the likelihood
is always nonnegative. The likelihood ratio test is then a statistical test that rejects the null
hypothesis when the statistic Λ(X) is smaller than a certain threshold τ; Pearson's chi-square
test [7] for categorical data is a classical example.
Instead of producing a binary decision, we propose to use the score directly as a generative similarity measure between two single data points. Specifically, we assume that each data point x is
generated from a parameterized distribution p(x|μ) with unknown prototype μ. The statement
"two data points x and y are similar" can then be reasonably represented by the null hypothesis that
the two data points are generated from the same prototype μ, leading to the probability

    q₀(x, y | μ_xy) = p(x | μ_xy) p(y | μ_xy).                             (5)

This restricted model is nested in the more general model that generates the two data points
from two possibly different prototypes:

    q(x, y | μ_x, μ_y) = p(x | μ_x) p(y | μ_y),                            (6)

where μ_x and μ_y are not necessarily equal.
The similarity between two data points x and y is then defined as the likelihood ratio statistic
between the null hypothesis of equal prototypes and the alternative hypothesis of unequal prototypes:

    s(x, y) = [ p(x | μ̂_xy) p(y | μ̂_xy) ] / [ p(x | μ̂_x) p(y | μ̂_y) ],     (7)

where μ̂_x, μ̂_y and μ̂_xy are the maximum likelihood estimates of the prototype based on x, y, and
{x, y} respectively. We call (7) the likelihood ratio similarity between x and y. It provides
information from a generative perspective: two similar data points, such as two patches of the
same real-world location, are more likely to be generated from the same underlying distribution,
and thus have a large likelihood ratio value. In the remainder of the paper, we define the likelihood
ratio distance between x and y as the square root of the negative logarithm of the similarity:

    d(x, y) = √( −log s(x, y) ).                                           (8)
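As a concrete illustration (our sketch, not the authors' code), the likelihood ratio distance can be
implemented for any one-dimensional family once its maximum likelihood estimator is available:

    import numpy as np

    def likelihood_ratio_distance(x, y, logpdf, mle):
        """Equations (7)-(8) for a generic one-dimensional family p(.|mu).

        logpdf(z, mu) : log-density of the family at z with prototype mu.
        mle(points)   : maximum likelihood estimate of mu from a list of points.
        """
        mu_x, mu_y, mu_xy = mle([x]), mle([y]), mle([x, y])
        log_s = (logpdf(x, mu_xy) + logpdf(y, mu_xy)
                 - logpdf(x, mu_x) - logpdf(y, mu_y))
        # s(x, y) <= 1, so -log(s) >= 0; the clip guards against rounding error
        return np.sqrt(max(-log_s, 0.0))

    # example: for a unit-variance Gaussian, mu_hat is the sample mean and the
    # likelihood ratio distance reduces to a scaled Euclidean distance
    gauss_logpdf = lambda z, mu: -0.5 * (z - mu) ** 2
    print(likelihood_ratio_distance(0.0, 2.0, gauss_logpdf, np.mean))  # == 1.0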
It is worth pointing out that, for arbitrary distributions p(x), d(x, y) is not necessarily a distance
metric, as the triangle inequality may not hold. However, for heavy-tailed distributions, we have
the following sufficient condition in the one-dimensional case:

Theorem 3.1. If the distribution p(x|μ) can be written as p(x|μ) = exp(−f(x − μ)) b(x), where
f(t) is a non-constant quasiconvex function of t that satisfies f″(t) ≤ 0 for all t ∈ ℝ\{0}, then the
distance defined in Equation (8) is a metric.
Proof. We first state the following lemmas:

Lemma 3.2. If a function d(x, y) defined on X × X → ℝ is a distance metric, then √(d(x, y)) is also
a distance metric.

Lemma 3.3. If the function f(t) is defined as in Theorem 3.1, then:
(1) the minimizer μ̂_xy = argmin_μ [ f(x − μ) + f(y − μ) ] is either x or y;
(2) the function g(t) = min(f(t), f(−t)) − f(0) is monotonically increasing and concave on ℝ⁺ ∪ {0},
and g(0) = 0.

With Lemma 3.3, it is easy to verify that d²(x, y) = g(|x − y|). Then, via the subadditivity of g(·),
we reach a result stronger than Theorem 3.1, namely that d²(x, y) is itself a distance metric. Thus,
d(x, y) is also a distance metric by Lemma 3.2. Note that we keep the square root here in conformity
with classical distance metrics, which we will discuss later in the paper. Detailed proofs of the
theorem and lemmas can be found in the supplementary material.
As an extreme case, when f″(t) = 0 for t ≠ 0, the distance defined above is the square root of the
(scaled) L1 distance.
3.1 Distance for the GCL distribution
We use the GCL distribution parameterized by the prototype μ with fixed hyperparameters (α, β)
as the SIFT noise model, which leads to the following GCL distance between dimensions of SIFT
patches¹:

    d²(x, y) = (α + 1) ( log(|x − y| + β) − log β ).                       (9)

The distance between two patches is then defined as the sum of per-dimension distances. Intuitively,
while the Euclidean distance grows linearly with the difference between coordinates, the GCL
distance grows logarithmically, suppressing the effect of very large differences. Further, we have
the following theoretical justification, a direct consequence of Theorem 3.1:

Proposition 3.4. The distance d(x, y) defined in (9) is a metric.
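In code, the patch-level distance is simply the per-dimension expression (9) accumulated over all
descriptor dimensions (a minimal sketch of our own; the hyperparameter values below are
hypothetical placeholders, in the paper they are learned as in Section 3.2):

    import numpy as np

    def gcl_distance(x, y, alpha, beta):
        """GCL distance between two descriptors: per-dimension Equation (9),
        summed over dimensions, followed by a square root."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        d2 = (alpha + 1.0) * (np.log(np.abs(x - y) + beta) - np.log(beta))
        return np.sqrt(np.sum(d2))

    # toy usage on two random 128-dimensional "descriptors"
    rng = np.random.default_rng(0)
    a, b = rng.random(128), rng.random(128)
    print(gcl_distance(a, b, alpha=0.5, beta=0.1))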
3.2 Hyperparameter Estimation for GCL
In the following, we discuss how to estimate the hyperparameters α and β of the GCL distribution.
Assuming that we are given a set of one-dimensional data D = {x₁, x₂, …, x_n} that follows the
GCL distribution, we estimate the hyperparameters by maximizing the log likelihood

    l(α, β; D) = Σ_{i=1}^{n} [ log(α/2) + α log β − (α + 1) log(|x_i| + β) ].   (10)

The ML estimation does not have a closed-form solution, so we adopt an alternating optimization and
iteratively update α and β until convergence. Updating α with fixed β can be achieved by computing

    α ← n ( Σ_{i=1}^{n} log(|x_i| + β) − n log β )⁻¹.                      (11)
Updating β can be done via the Newton–Raphson method β ← β − l′(β)/l″(β), where

    l′(β) = nα/β − Σ_{i=1}^{n} (α + 1)/(|x_i| + β),
    l″(β) = −nα/β² + Σ_{i=1}^{n} (α + 1)/(|x_i| + β)².                     (12)
¹ For more than two data points X = {x_i}, it is generally difficult to find the maximum likelihood estimate
of μ, as the likelihood is nonconvex. However, with two data points x and y, it is easy to see that μ = x
and μ = y are the two global optima of the likelihood L(μ; {x, y}), both leading to the same distance
representation in (9).
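The alternating updates can be written compactly as follows (a minimal sketch under our own
choices for iteration counts and numerical safeguards; not the authors' code):

    import numpy as np

    def fit_gcl_hyperparams(x, alpha=1.0, beta=1.0, n_outer=50, n_newton=5):
        """Alternating ML estimation of (alpha, beta) for the log likelihood (10).

        x : one-dimensional noise samples (e.g. per-dimension differences of
            matching descriptors); only |x| enters the likelihood."""
        x = np.abs(np.asarray(x, dtype=float))
        n = len(x)
        for _ in range(n_outer):
            # closed-form update of alpha with beta fixed, Equation (11)
            alpha = n / (np.sum(np.log(x + beta)) - n * np.log(beta))
            # a few Newton-Raphson steps for beta, Equation (12)
            for _ in range(n_newton):
                l1 = n * alpha / beta - np.sum((alpha + 1.0) / (x + beta))
                l2 = -n * alpha / beta ** 2 + np.sum((alpha + 1.0) / (x + beta) ** 2)
                beta = max(beta - l1 / l2, 1e-8)  # keep beta positive
        return alpha, beta

    # sanity check on synthetic GCL noise: lambda ~ Gamma(2, rate 3), x ~ Laplace
    rng = np.random.default_rng(0)
    lam = rng.gamma(shape=2.0, scale=1.0 / 3.0, size=20000)
    noise = rng.laplace(scale=1.0 / lam)
    print(fit_gcl_hyperparams(noise))  # should be close to (2.0, 3.0)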
3.3 Relation to Existing Measures
The likelihood ratio distance is related to several existing methods. In particular, we show that
under the exponential family of distributions it leads to several widely used distance measures.

The exponential family has drawn much attention in recent years. Here we focus on the regular
exponential family, where the distribution of data x can be written in the form

    p(x) = exp( −d_B(x, μ) ) b(x),                                         (13)

where μ is the mean in the exponential family sense, and d_B is the regular Bregman divergence
corresponding to the distribution [2]. Applying the likelihood ratio distance to this distribution,
we obtain the distance

    d(x, y) = √( d_B(x, μ̂_xy) + d_B(y, μ̂_xy) ),                           (14)

since μ̂_x ≡ x and d_B(x, x) ≡ 0 for any x. We note that this is the square root of the Jensen–Bregman
divergence and is known to be a distance metric [1]. Several popular distances can be derived in this
way. In the two most common cases, the Gaussian distribution leads to the Euclidean distance,
and the multinomial distribution leads to the square root of the Jensen–Shannon divergence, whose
first-order approximation is the χ²-distance. More generally, for (non-regular) Bregman
divergences d_B(x, μ) defined as d_B(x, μ) = F(x) − F(μ) − (x − μ) F′(μ) with an arbitrary smooth
function F, the condition under which the square root of the corresponding Jensen–Bregman divergence
is a metric has been discussed in [5].
While the exponential family embraces a set of mathematically elegant distributions whose properties
are well understood, it fails to capture the heavy-tailed property of various natural image statistics,
as the tail of the sufficient statistics is exponentially bounded by definition. The likelihood ratio
distance with heavy-tailed distributions serves as a principled extension of several popular distance
metrics based on the exponential family. Further, there are principled approaches that
connect distances with kernels [1], upon which kernel methods such as support vector machines may
be built while taking the possibly heavy-tailed nature of the data into consideration.

The idea of computing the similarity between data points based on certain scores has also appeared
in the one-shot learning context [26], which uses the average prediction score obtained by taking one
data point as training and the other as testing, and vice versa. Our method shares a similar merit, but
with a generative probabilistic interpretation. Integrating our method with discriminative information
or latent application-dependent structures is one direction for future work.
4 Experiments
In this section, we apply the GCL distance to the problem of local image patch similarity measure using the SIFT feature, a common building block of many applications such as stereo vision,
structure from motion, photo tourism, and bag-of-words image classification.
4.1 The Photo Tourism Dataset
We used the Photo Tourism dataset [25] to evaluate different similarity measures on the SIFT feature.
The dataset contains local image patches extracted from three scenes, namely Notredame, Trevi
and Halfdome, reflecting different natural scenarios. Each set contains approximately 30,000
ground-truth 3D points, with each point associated with a bag of 2D image patches of size 64 × 64
corresponding to that 3D point. To the best of our knowledge, this is the largest local image patch
database with ground-truth correspondences. Figure 3 shows a typical subset of patches from the
dataset.
The SIFT features are computed using the code from [13]. Specifically, two different normalization
schemes are tested: the l2 scheme simply normalizes each feature to length 1, while the thres
scheme additionally thresholds the histogram entries at 0.2 and rescales the resulting feature to
length 1. The latter is the classical hand-tuned normalization designed in the original SIFT paper,
and can be seen as a heuristic way to suppress the effect of heavy tails.
Following the experimental setting of [25], we also introduce random jitter effects to the raw patches
before SIFT feature extraction by warping each image with random warping parameters: position
shift, rotation and scale, with standard deviations of 0.4 pixels, 11 degrees and 0.12 octaves,
respectively.
Figure 3: An example of the Photo Tourism dataset. From top to bottom, patches are sampled from
Notredame, Trevi and Halfdome respectively. Within each row, every two adjacent patches form a
matching pair.
[Figure 4 appears here, with three precision-recall panels: (a) trevi, (b) notredame, (c) halfdome,
each comparing the L2, L1, symmKL, chi2 and gcl distances.]

Figure 4: The mean precision-recall curves over 20 independent runs. In the figure, solid lines are
experiments using features normalized with the l2 scheme, and dashed lines using features
normalized with the thres scheme. Best viewed in color.
Such jitter effects represent the noise we may encounter in real feature detection and localization
[25], and allow us to test the robustness of the different distance measures. For completeness, the
data without jitter effects are also tested and the results reported.
4.2 Testing Protocol
The testing protocol is as follows: 10,000 matching pairs and 10,000 non-matching pairs are randomly
sampled from the dataset, and we classify each pair as matching or non-matching based on the
distance computed by the metric under test. The precision-recall (PR) curve is computed, and two
values, namely the average precision (AP), computed as the area under the PR curve, and the false
positive rate at 95% recall (95%-FPR), are reported to compare the different distance measures.
To test statistical significance, we carry out 20 independent runs and report the mean and standard
deviation. We focus on comparing distance measures that presume the data to lie in a vector space.
Five distance measures are compared: the L2 distance, the L1 distance, the symmetrized KL
divergence, the χ² distance, and the GCL distance.

The hyperparameters of the GCL distance are learned by randomly sampling 50,000 matching pairs
from the Notredame set and performing hyperparameter estimation as described in Section 3.2. They
are then fixed and used for all other experiments without re-estimation.
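In outline, the pair-classification protocol can be reproduced with a few lines (our reconstruction,
with scikit-learn assumed for the PR computation; the function and variable names are ours):

    import numpy as np
    from sklearn.metrics import precision_recall_curve, auc

    def evaluate_pairs(dist_fn, matching_pairs, nonmatching_pairs):
        """AP and FPR at 95% recall when a distance is used as a pair classifier."""
        d_pos = np.array([dist_fn(x, y) for x, y in matching_pairs])
        d_neg = np.array([dist_fn(x, y) for x, y in nonmatching_pairs])
        labels = np.concatenate([np.ones_like(d_pos), np.zeros_like(d_neg)])
        scores = -np.concatenate([d_pos, d_neg])   # smaller distance = match
        precision, recall, _ = precision_recall_curve(labels, scores)
        ap = auc(recall, precision)
        threshold = np.quantile(d_pos, 0.95)       # accepts 95% of matching pairs
        fpr95 = np.mean(d_neg <= threshold)        # non-matching pairs accepted
        return ap, fpr95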
As a final note, the code for the experiments in the paper will be released to the public for repeatability.
4.3 Experimental Results
Figure 4 shows the average precision-recall curves for all distances on the three datasets. The
numerical results on the data with jitter effects are summarized in Table 1, with statistically
significant values shown in bold. Table 2 shows the 99%-FPR on the data without jitter effects².
We refer to the supplementary material for further results on the no-jitter case, which we omit here
due to space constraints. Notice that the trends and conclusions observed in the experiments with
jitter effects are also confirmed on those without jitter effects.

² As the accuracy in the no-jitter case is much higher in general, the 99%-FPR is reported instead
of the 95%-FPR used in the jitter case.

The GCL distance outperforms the other distance measures in all experiments. Notice that the
hyperparameters learned from the Notredame set perform well on the other two datasets as well,
AP             L2           L1           SymmKL       χ²           GCL
trevi-l2       96.61±0.16   98.08±0.10   97.40±0.12   97.69±0.11   98.33±0.09
trevi-thres    97.23±0.12   98.05±0.10   97.40±0.11   97.71±0.11   98.21±0.10
notre-l2       95.90±0.14   97.83±0.10   96.96±0.12   97.31±0.11   98.19±0.10
notre-thres    96.76±0.13   97.84±0.10   97.05±0.12   97.39±0.11   98.07±0.10
halfd-l2       94.51±0.16   96.75±0.11   94.87±0.15   95.42±0.14   98.19±0.10
halfd-thres    95.55±0.14   96.90±0.11   95.08±0.16   95.64±0.14   97.21±0.10

95%-FPR        L2           L1           SymmKL       χ²           GCL
trevi-l2       23.61±1.14   12.71±0.83   17.58±0.96   15.85±0.74   10.52±0.73
trevi-thres    19.23±0.84   13.08±0.91   17.57±0.98   15.66±0.77   11.21±0.71
notre-l2       26.43±1.03   14.27±1.09   19.56±1.00   17.70±1.08   11.58±1.00
notre-thres    21.88±1.21   14.49±1.25   19.07±1.11   17.38±0.95   12.09±1.11
halfd-l2       36.34±0.98   24.11±1.13   34.55±0.96   31.62±1.09   19.76±1.03
halfd-thres    31.44±1.20   23.14±0.13   33.71±1.05   30.56±1.13   20.74±1.16

Table 1: The average precision (top) and the false positive rate at 95% recall (bottom) of different
distance measures on the Photo Tourism datasets, with random jitter effects. A larger AP and a
smaller FPR are desired. The suffixes l2 and thres in the leftmost column indicate the two feature
normalization schemes.
99%-FPR        L2           L1           SymmKL       χ²           GCL
trevi-l2       11.36±1.65   3.44±0.75    8.02±1.04    8.02±1.08    2.42±0.58
trevi-thres    7.14±1.31    3.24±0.69    7.93±1.11    5.06±0.97    2.23±0.48
notre-l2       19.69±1.93   6.09±0.72    14.81±1.66   9.40±1.04    4.16±0.57
notre-thres    11.9±1.19    5.17±0.58    13.11±1.39   8.24±1.12    3.72±0.56
halfd-l2       44.55±9.42   34.01±2.10   43.51±1.07   40.53±1.12   26.06±2.25
halfd-thres    40.58±1.63   32.30±2.28   42.51±1.22   39.28±1.49   26.36±2.50

Table 2: The false positive rate at 99% recall of different distance measures on the Photo Tourism
datasets without jitter effects.
indicating that they capture the general statistics of the SIFT feature rather than dataset-dependent
statistics. Also, the thresholding and renormalization of SIFT features does provide a significant
improvement for the Euclidean distance, but its effect is less pronounced for the other distances. In
fact, the hard thresholding may introduce artificial noise into the data, counterbalancing the positive
effect of reducing the tail, especially when the distance measure is already able to cope with heavy
tails.

We argue that the key factor behind the performance improvement is taking the heavy-tailed property
of the data into consideration, rather than other aspects of the models. For instance, the Laplace
distribution has a heavier tail than the distributions corresponding to the other baseline distance
measures, and a correspondingly better performance of the L1 distance over those measures is
observed, showing a positive correlation between tail heaviness and performance. Notice that the
tails of the distributions assumed by the baseline distances are still exponentially bounded, and
performance is further increased by introducing heavy-tailed distributions such as the GCL
distribution in our experiments.
5 Conclusion
While visual representations based on oriented gradients have been shown to be effective in many
applications, scant attention has been paid to the heavy-tailed nature of their distributions, which
undermines the use of distance measures based on exponentially bounded distributions. In this paper,
we advocate distance measures derived from heavy-tailed distributions, where the derivation can be
done in a principled manner using the log likelihood ratio test. In particular, we examine the
distribution of local image descriptors, and propose the Gamma-compound-Laplace (GCL)
distribution and the corresponding distance for image descriptor matching. Experimental results
show that this yields more accurate feature matching than existing baseline distance measures.
References
[1] A. Agarwal and H. Daume III. Generative kernels for exponential families. In AISTATS, 2011.
[2] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 6:1705–1749, 2005.
[3] J. T. Barron and J. Malik. High-frequency shape and albedo from shading using natural image statistics. In CVPR, 2011.
[4] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[5] P. Chen, Y. Chen, and M. Rao. Metrics defined by Bregman divergences. Communications in Mathematical Sciences, 6(4):915–926, 2008.
[6] N. Dalal. Histograms of oriented gradients for human detection. In CVPR, 2005.
[7] A. C. Davison. Statistical Models. Cambridge University Press, 2003.
[8] J. Huang, A. B. Lee, and D. Mumford. Statistics of range images. In CVPR, 2000.
[9] S. Ji, Y. Xue, and L. Carin. Bayesian compressive sensing. IEEE Trans. Signal Processing, 56(6):2346–2356, 2008.
[10] Y. Jia, M. Salzmann, and T. Darrell. Factorized latent spaces with structured sparsity. In NIPS, 2010.
[11] D. Koller and N. Friedman. Probabilistic Graphical Models. MIT Press, 2009.
[12] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, 2009.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[14] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[15] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. JMLR, 11:19–60, 2010.
[16] A. W. Moore. The anchors hierarchy: using the triangle inequality to survive high dimensional data. In UAI, 2000.
[17] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[18] B. Olshausen. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[19] M. Ozuysal and P. Fua. Fast keypoint recognition in ten lines of code. In CVPR, 2007.
[20] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Processing, 12(11):1338–1351, 2003.
[21] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019–1025, 1999.
[22] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In ICCV, 2003.
[23] E. P. Simoncelli. Statistical models for images: compression, restoration and synthesis. In Asilomar Conference on Signals, Systems & Computers, 1997.
[24] Y. Weiss and W. T. Freeman. What makes a good model of natural images? In CVPR, 2007.
[25] S. Winder and M. Brown. Learning local image descriptors. In CVPR, 2007.
[26] L. Wolf, T. Hassner, and Y. Taigman. The one-shot similarity kernel. In ICCV, 2009.
Christoph H. Lampert
IST Austria (Institute of Science and Technology Austria)
Am Campus 1, 3400 Klosterneuburg, Austria
http://www.ist.ac.at/~chl
[email protected]
Abstract
We study multi-label prediction for structured output sets, a problem that occurs, for example, in
object detection in images, secondary structure prediction in computational biology, and graph
matching with symmetries. Conventional multi-label classification techniques are typically not
applicable in this situation, because they require explicit enumeration of the label set, which is
infeasible in the case of structured outputs. Relying on techniques originally designed for
single-label structured prediction, in particular structured support vector machines, results in
reduced prediction accuracy, or leads to infeasible optimization problems.
In this work we derive a maximum-margin training formulation for multi-label
structured prediction that remains computationally tractable while achieving high
prediction accuracy. It also shares most beneficial properties with single-label
maximum-margin approaches, in particular formulation as a convex optimization
problem, efficient working set training, and PAC-Bayesian generalization bounds.
1 Introduction
The recent development of conditional random fields (CRFs) [1], max-margin Markov networks
(M3Ns) [2], and structured support vector machines (SSVMs) [3] has triggered a wave of interest in
the prediction of complex outputs. Typically, these are formulated as graph labeling or graph matching tasks in which each input has a unique correct output. However, not all problems encountered
in real applications are reflected well by this assumption: machine translation in natural language
processing, secondary structure prediction in computational biology, and object detection in computer vision are examples of tasks in which more than one prediction can be "correct" for each data
sample, and that are therefore more naturally formulated as multi-label prediction tasks.
In this paper, we study multi-label structured prediction, defining the task and introducing the
necessary notation in Section 2. Our main contribution is the formulation of a maximum-margin
training problem, named MLSP, which we introduce in Section 3. Once trained, it allows the
prediction of multiple structured outputs from a single input, as well as abstaining from a decision.
We study the generalization properties of MLSP in the form of a generalization bound in Section 3.2,
and we introduce a working set optimization procedure in Section 3.3. The main insight from these
is that MLSP behaves similarly to a single-label SSVM in terms of efficient use of training data and
computational effort during training, despite the increased complexity of the problem setting. In
Section 4 we discuss MLSP's relation to existing methods for multi-label prediction with simple
label sets, and to single-label structured prediction. We furthermore compare MLSP to multi-label
structured prediction methods within the SSVM framework in Section 4.1. In Section 5 we compare
the different approaches experimentally, and we conclude in Section 6 by summarizing and
discussing our contribution.
2 Multi-label structured prediction
We first recall some background and establish the notation necessary to discuss multi-label
classification and structured prediction in a maximum-margin framework. Our overall task is to
predict outputs y ∈ Y for inputs x ∈ X in a supervised learning setting.

In ordinary (single-label) multi-class prediction we use a prediction function g : X → Y for this,
which we learn from i.i.d. example pairs {(x^i, y^i)}_{i=1,…,n} ⊂ X × Y. Adopting a maximum-margin
setting, we set

    g(x) := argmax_{y∈Y} f(x, y)  for a compatibility function  f(x, y) := ⟨w, φ(x, y)⟩.   (1)

The joint feature map φ : X × Y → H maps input-output pairs into a Hilbert space H with inner
product ⟨·, ·⟩. It is defined either explicitly, or implicitly through a joint kernel function
k : (X × Y) × (X × Y) → ℝ. We measure the quality of predictions by a task-dependent loss function
Δ : Y × Y → ℝ⁺, where Δ(y, ȳ) specifies the cost incurred if we predict an output ȳ while the
correct prediction is y.
Structured output prediction can be seen as a generalization of the above setting, where one wants to
make not only one, but several dependent decisions at the same time, for example, deciding for each
pixel of an image to which out of several semantic classes it belongs. Equivalently, one can interpret
the same task as a special case of supervised single-label prediction, where inputs and outputs consist
of multiple parts. In the above example, a whole image is one input sample, and a segmentation mask
with as many entries as the image has pixels is an output. Having a choice of M ? 2 classes per pixel
of a (w?h)-sized image leads to an output set of M w?h elements. Enumerating all of these is out of
question, and collecting training examples for each of them even more so. Consequently, structured
output prediction requires specialized techniques that avoid enumerating all possible outputs, and
that can generalize between labels in the output set. A popular technique for this task is the structured
(output) support vector machine (SSVM) [3]. To train it, one has to solve a quadratic program
subject to n|Y| linear constraints. If an efficient separation oracle is available, i.e. a technique for
identifying the currently most violated linear constraints, working set training, in particular cutting
plane [4] or bundle methods [5] allow SSVM training to arbitrary precision in polynomial time.
Multi-label prediction is a generalization of single-label prediction that gives up the condition of a
functional relation between inputs and outputs. Instead, each input object can be associated with
any (finite) number of outputs, including none. Formally, we are given pairs {(x^i, Y^i)}_{i=1,…,n} ⊂
X × P(Y), where P denotes the power set operation, and we want to determine a set-valued function
G : X → P(Y). Often it is convenient to use indicator vectors instead of variable-size subsets.
We say that v ∈ {±1}^Y represents the subset Y ∈ P(Y) if v_y = +1 for y ∈ Y and v_y = −1
otherwise. Where no confusion arises, we use both representations interchangeably, e.g., we write
either Y^i or v^i for a label set in the training data. To measure the quality of a predicted set we use a
set loss function Δ_ML : P(Y) × P(Y) → ℝ. Note that multi-label prediction can also be interpreted
as ordinary single-output prediction with P(Y) taking the place of the original output set Y. We will
come back to this view in Section 4.1 when discussing related work.
Multi-label structured prediction combines the aspects of multi-label prediction and structured
output sets: we are given a training set {(x^i, Y^i)}_{i=1,…,n} ⊂ X × P(Y), where Y is a structured
output set of potentially very large size, and we would like to learn a prediction function
G : X → P(Y) with the ability to generalize also within the output set. In the following, we take the
structured prediction point of view, deriving expressions for predicting multiple structured outputs
instead of single ones. Alternatively, the same conclusions could be reached by interpreting the task
as multi-label prediction with binary output vectors that are too large to store or enumerate
explicitly, but that have an internal structure allowing generalization between the elements.
3 Maximum margin multi-label structured prediction

In this section we propose a learning technique designed for multi-label structured prediction that
we call MLSP. It makes set-valued predictions by¹

    G(x) := {y ∈ Y : f(x, y) > 0}  for  f(x, y) := ⟨w, φ(x, y)⟩.           (2)

¹ More complex prediction rules exist in the multi-label literature, see, e.g., [6]. We restrict
ourselves to per-label thresholding, because more advanced rules complicate the learning and
prediction problems even further.
Note that the compatibility function f(x, y) acts on individual inputs and outputs, as in single-label
prediction (1), but the prediction step consists of collecting all outputs with positive scores instead
of finding the output of maximal score. By including a constant entry in the joint feature map φ(x, y)
we can model a bias term, thereby avoiding the need for a threshold during prediction (2). We can
also add further flexibility by a data-independent, but label-dependent, term. Note that our setup
differs from SSVM training in this regard. There, a bias term, or a constant entry of the feature
map, would have no influence, because during training only pairwise differences of function values
are considered, and during prediction a bias does not affect the argmax decision in Equation (1).
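For an enumerable label set, the prediction rule (2) is a one-liner (our sketch; for structured output
sets the enumeration must be replaced by a search procedure, see Section 3.4):

    def predict_set(f, x, candidate_labels):
        """Equation (2): return all labels with positive compatibility score."""
        return {y for y in candidate_labels if f(x, y) > 0.0}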
We learn the weight vector w of the MLSP compatibility function in a maximum-margin framework
that is derived from regularized risk minimization. As the risk depends on the loss function chosen,
we first study the possibilities we have for the set loss Δ_ML : P(Y) × P(Y) → ℝ⁺. There are no
established functions for this in the structured prediction setting, but it turns out that two canonical
set losses are consistent with the following first principles. Positivity: Δ_ML(Y, Ȳ) ≥ 0, with equality
only if Y = Ȳ. Modularity: Δ_ML should decompose over the elements of Y (in order to facilitate
efficient computation). Monotonicity: Δ_ML should reflect that making a wrong decision about some
element y ∈ Y can never reduce the loss. The last criterion we formalize as

    Δ_ML(Y, Ȳ ∪ {ȳ}) ≥ Δ_ML(Y, Ȳ)   for any ȳ ∉ Y, and                     (3)
    Δ_ML(Y ∪ {y}, Ȳ) ≥ Δ_ML(Y, Ȳ)   for any y ∉ Ȳ.                         (4)
Two candidates that fulfill these criteria are the sum loss, Δ_sum(Y, Ȳ) := Σ_{y ∈ Y△Ȳ} δ(Y, y), and the
max loss, Δ_max(Y, Ȳ) := max_{y ∈ Y△Ȳ} δ(Y, y), where Y△Ȳ := (Y\Ȳ) ∪ (Ȳ\Y) is the symmetric set
difference, and δ : P(Y) × Y → ℝ⁺ is a task-dependent per-label misclassification cost. Assuming
that a set Y is the correct prediction, δ(Y, ȳ) specifies either the cost of predicting ȳ although ȳ ∉ Y,
or of not predicting ȳ when really ȳ ∈ Y. In the special case of δ ≡ 1 the sum loss is known as the
symmetric difference loss, and it coincides with the Hamming loss of the binary indicator vector
representation. The max loss becomes the 0/1-loss between sets in this case. In the general case, δ
typically expresses partial correctness, generalizing the single-label structured loss Δ(y, ȳ). Note
that when evaluating δ(Y, ȳ) one has access to the whole set Y, not just single elements. Therefore, a
flexible penalization of multiple errors is possible, e.g., submodular behavior.
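For explicitly given (small) sets, both losses are direct to compute (our sketch; the per-label cost δ
is passed in as a function):

    def sum_loss(Y, Y_pred, delta):
        """Delta_sum: per-label costs summed over the symmetric difference."""
        sym_diff = (Y - Y_pred) | (Y_pred - Y)
        return sum(delta(Y, y) for y in sym_diff)

    def max_loss(Y, Y_pred, delta):
        """Delta_max: largest per-label cost over the symmetric difference."""
        sym_diff = (Y - Y_pred) | (Y_pred - Y)
        return max((delta(Y, y) for y in sym_diff), default=0.0)

    # with delta == 1, sum_loss is the Hamming loss of the indicator vectors
    # and max_loss is the 0/1 loss between sets:
    Y, Y_pred = {"a", "b"}, {"b", "c"}
    print(sum_loss(Y, Y_pred, lambda Y, y: 1.0))  # 2.0
    print(max_loss(Y, Y_pred, lambda Y, y: 1.0))  # 1.0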
While in the small-scale multi-label situation the sum loss is more common, we argue in this work
that the max loss has advantages in the structured prediction situation. For one, the sum loss has
a scaling problem. Because it adds potentially exponentially many terms, the ratio in loss between
making few mistakes and making many mistakes is very large. If used in the unnormalized form
given above, this can result in impractically large values. Normalizing the expression by multiplying
with 1/|Y| stabilizes the upper end of the value range, but it leads to a situation where Δ_sum(Y, Ȳ) ≈ 0
in the common case that Ȳ differs from Y in only a few elements. The value range of the max
loss, on the other hand, is the same as the value range of δ, and therefore easy to keep reasonable.
A second advantage of the max loss is that it leads to an efficient constraint generation technique
during training, as we will see in Section 3.3.
3.1 Maximum margin multi-label structured prediction (MLSP)
To learn the parameters w of the compatibility function f(x, y) we follow a regularized risk minimization framework: given i.i.d. training examples {(x^i, Y^i)}_{i=1,…,n}, we would like to minimize

    (1/2) ‖w‖² + (C/n) Σ_i Δ_max(Y^i, G(x^i)).

Using the definition of Δ_max, this is equivalent to minimizing (1/2)‖w‖² + (C/n) Σ_i ξ^i, subject to
ξ^i ≥ δ(Y^i, y) for all y ∈ Y with v^i_y f(x^i, y) ≤ 0. Upper bounding these inequalities by a Hinge
construction yields the following maximum-margin training problem:

    (w*, ξ*) = argmin_{w∈H, ξ¹,…,ξⁿ ∈ ℝ⁺}  (1/2) ‖w‖² + (C/n) Σ_{i=1}^{n} ξ^i    (5)

subject to, for i = 1, …, n,

    ξ^i ≥ δ(Y^i, y) [ 1 − v^i_y f(x^i, y) ],   for all y ∈ Y.              (6)
Note that making per-label decisions through thresholding does not rule out the sharing of information
between labels. In the terminology of [7], Equation (2) corresponds to a conditional label
independence assumption. Through the joint feature function φ(x, y) the proposed model can still
learn unconditional dependence between labels, which relates more closely to an intuition of the
form "Label A tends to co-occur with label B".
Besides this slack-rescaled variant, one can also form margin-rescaled training using the constraints

    ξ^i ≥ δ(Y^i, y) − v^i_y f(x^i, y),   for all y ∈ Y.                    (7)

Both variants coincide in the case of a 0/1 set loss, δ(Y^i, y) ≡ 1. The main difference between slack
and margin rescaling is how they treat the case of δ(Y^i, y) = 0 for some y ∈ Y. In slack
rescaling, the corresponding outputs have no effect on the training at all, whereas for margin
rescaling, no margin is enforced for such examples, but a penalization still occurs whenever
f(x^i, y) > 0 for y ∉ Y^i, or f(x^i, y) < 0 for y ∈ Y^i.
3.2 Generalization Properties
Maximum margin structured learning has become successful not only because it provides a powerful
framework for solving practical prediction problems, but also because it comes with certain
theoretical guarantees, in particular generalization bounds. We expect that many of these results
have multi-label analogues. As an initial step, we formulate and prove a generalization bound
for slack-rescaled MLSP similar to the single-label SSVM analysis in [8].

Let G_w(x) := {y ∈ Y : f_w(x, y) > 0} for f_w(x, y) = ⟨w, φ(x, y)⟩. We assume |Y| < r and
‖φ(x, y)‖ < s for all (x, y) ∈ X × Y, and δ(Y, y) ≤ 1 for all (Y, y) ∈ P(Y) × Y.

For any distribution Q_w over weight vectors, which may depend on w, we denote by L(Q_w, P) the
expected Δ_max-risk for P-distributed data,

    L(Q_w, P) = E_{w̃∼Q_w} [ R_{P,Δ_max}(G_{w̃}) ] = E_{w̃∼Q_w, (x,Y)∼P} [ Δ_max(Y, G_{w̃}(x)) ].   (8)

The following theorem bounds the expected risk in terms of the total margin violations.

Theorem 1. With probability at least 1 − η over a sample S of size n, the following inequality
holds simultaneously for all weight vectors w:

    L(Q_w, D) ≤ (1/n) Σ_{i=1}^{n} ℓ(x^i, Y^i, f) + ‖w‖²s²/n
              + ( ( ‖w‖² ln(rn/‖w‖²) + ln(n/η) ) / (2(n − 1)) )^{1/2}      (9)

for ℓ(x^i, Y^i, f) := max_{y∈Y} δ(Y^i, y) ⟦v^i_y f(x^i, y) < 1⟧, where v^i is the binary indicator
vector of Y^i.
Proof. The argument follows [8, Section 11.6]. It can be found in the supplemental material.
A main insight from Theorem 1 is that the number of samples needed for good generalization grows
only logarithmically with r, i.e. the size of Y. This is the same complexity as for single-label
prediction using SSVMs, despite the fact that multi-label prediction formally maps into P(Y), i.e.
an exponentially larger output set.
3.3
Numeric Optimization
The numeric solution of MLSP training resembles SSVM training. For explicitly given joint feature maps, ?(x, y), we can solve the optimization problem (5) in the primal, for example using
subgradient descent. To solve MLSP in a kernelized setup we introduce Lagrangian multipliers
(?yi )i=1,...,n;y?Y for the constraints (7)/(6). For the margin-rescaled variant we obtain the dual
X i i
1 X i ?? i ??
vy vy??y ?y? k (xi , y), (x?? , y?) +
?y ?y
(10)
max ?
?iy ?R+ 2
(i,y),(?
?,?
y)
subject to
X
(i,y)
C
?yi ? ,
y
n
for i = 1, . . . , n.
(11)
For slack-rescaled MLSP, the dual is computed analogously as

    max_{α^i_y ∈ ℝ⁺}  −(1/2) Σ_{(i,y),(ī,ȳ)} v^i_y v^ī_ȳ α^i_y α^ī_ȳ k((x^i, y), (x^ī, ȳ))
                      + Σ_{(i,y)} α^i_y                                    (12)

subject to

    Σ_{y∈Y} α^i_y / δ(Y^i, y) ≤ C/n,   for i = 1, …, n,                    (13)
with the convention that only terms with δ(Y^i, y) ≠ 0 enter the summation. In both cases, the
compatibility function becomes

    f(x, y) = Σ_{(i,ȳ)} α^i_ȳ v^i_ȳ k((x^i, ȳ), (x, y)).                   (14)
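Given a dual solution, evaluating the compatibility function is a plain kernel expansion (a sketch
with a hypothetical storage format for the support pairs):

    def compatibility(x, y, support, joint_kernel):
        """Equation (14): f(x, y) = sum over support pairs of alpha * v * k.

        support      : list of (alpha, v, x_i, y_i) tuples with alpha > 0,
                       collected from the dual solution.
        joint_kernel : function k((x1, y1), (x2, y2))."""
        return sum(a * v * joint_kernel((xi, yi), (x, y))
                   for a, v, xi, yi in support)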
Comparing the optimization problems (10)/(11) and (12)/(13) to the ordinary SVM dual, we see
that MLSP couples |Y| binary SVM problems through the joint kernel function and the summed-over
box constraints. In particular, whenever only a feasibly small subset of variables has to be
considered, we can solve the problem using a general purpose QP solver, or a slightly modified SVM
solver. Overall, however, there are infeasibly many constraints in the primal, or variables in the
dual. Analogously to the SSVM situation we therefore apply iterative working set training, which we
explain here using the terminology of the primal. We start with an arbitrary, e.g. empty, working
set S. Then, in each step we solve the optimization using only the constraints indicated by the
working set. For the resulting solution (w_S, ξ_S) we check whether any constraints of the full set
(6)/(7) are violated by more than a target precision ε. If not, we have found the optimal parameters.
Otherwise, we add the most violated constraint to S and start the next iteration. The same
monotonicity argument as in [3] shows that we reach an objective value ε-close to the optimal one
within a number of steps that is polynomial in 1/ε. Consequently, MLSP training is roughly
comparable in computational complexity to SSVM training.
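Schematically, the procedure reads as follows (our sketch; solve_qp and find_violated are
placeholders for the problem-specific QP solver and separation oracle):

    def train_mlsp(samples, find_violated, solve_qp, max_iter=1000):
        """Working set training for MLSP (schematic sketch).

        samples       : list of (x_i, Y_i) training pairs.
        find_violated : oracle returning a label y whose constraint (6)/(7) is
                        violated by more than the target precision for sample i,
                        or None.
        solve_qp      : solver for (5) restricted to the current working set."""
        working_set = set()
        w, xi = solve_qp(working_set)
        for _ in range(max_iter):
            added = False
            for i, (x, Y) in enumerate(samples):
                y = find_violated(i, x, Y, w, xi)
                if y is not None and (i, y) not in working_set:
                    working_set.add((i, y))
                    added = True
            if not added:
                return w          # all constraints satisfied up to the precision
            w, xi = solve_qp(working_set)
        return w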
The crucial step in working set training is the identification of violated constraints. Note that
constraints in MLSP are determined by pairs of samples and single labels, not pairs of samples and
sets of labels. This allows us to reuse existing methods for loss-augmented single-label inference.
In practice, it is safe to assume that the sets Y^i are feasibly small, since they are given to us
explicitly. Consequently, we can identify violated "positive" constraints by explicitly checking the
inequalities (7)/(6) for y ∈ Y^i. Identifying violated "negative" constraints requires loss-augmented
prediction over Y\Y^i. We are not aware of a general purpose solution for this task, but at least all
problems that allow efficient K-best MAP prediction can be handled by iteratively performing
loss-augmented prediction within Y until a violating example from Y\Y^i is found, or it is confirmed
that no such example exists. Note that K-best versions of most standard MAP prediction methods have
been developed, including max-flow [9], loopy BP [10], LP-relaxations [11], and sampling [12].
3.4 Prediction problem
After training, Equation (2) specifies the rule to predict output sets for new input data. In contrast
to single-label SSVM prediction this requires not only a maximization over all elements of Y, but
the collection of all elements y ∈ Y of positive score. The structure of the output set is not as
immediately helpful for this as it is, e.g., in MAP prediction. Task-specific solutions exist, however,
for example branch-and-bound search for object detection [13]. Also, it is often possible to establish
an upper bound on the number of desired outputs, and then K-best prediction techniques can again
be applied. This makes MLSP of potential use for several classical tasks, such as parsing and
chunking in natural language processing, secondary structure prediction in computational biology,
or human pose estimation in computer vision. In general situations, evaluating (2) might require
approximate structured prediction techniques, e.g. iterative greedy selection [14]. Note that the use
of approximation algorithms is less problematic here because, in contrast to training, the prediction
step is not performed in an iterative manner, so errors do not accumulate.
4 Related Work
Multi-label classification is an established field of research in machine learning and several
established techniques are available, most of which fall into one of three categories: 1) Multi-class
reformulations [15] treat every possible label subset, Y ∈ P(Y), as a new class in an independent
multi-class classification scenario. 2) Per-label decomposition [16] trains one classifier for each
output label and makes independent decisions for each of those. 3) Label ranking [17] learns a
function that ranks all potential labels for an input sample. Given the size of Y, 1) is not a promising
direction for multi-label structured prediction. Straightforward applications of 2) and 3) are also
infeasible if Y is too large to enumerate. However, MLSP resembles both approaches by sharing their
prediction rule (2). MLSP can be seen as a way to make a combination of these approaches applicable
to the structured prediction situation by incorporating the ability to generalize in the label set.
Besides the general concepts above, many specific techniques for multi-label prediction have been
proposed, several of them making use of structured prediction techniques: [18] introduces an SSVM
5
formulation that allows direct optimization of the average precision ranking loss when the label
set can be enumerated. [19] relies on a counting framework for this purpose, and [20] proposes
an SSVM formulation for enforcing diversity between the labels. [21] and [22] identify shared
subspaces between sets of labels, [23] encodes linear label relations by a change of the SSVM
regularizer, and [24] handles the case of tree- and DAG-structured dependencies between possible
outputs. All these methods work in the multi-class setup and require an explicit enumeration of the
label set. They use a structured prediction framework to encode dependencies between the individual
output labels, of which there are relatively few. MLSP, on the other hand, aims at predicting multiple
structured objects, i.e. the structured prediction framework is not just a tool to improve multi-class
classification with multiple output labels, but it is required as a core component for predicting even
a single output.
Some previous methods target multi-label prediction with large output sets, in particular using
label compression [25] or a label hierarchy [26]. This allows handling thousands of potential output
classes, but a direct application to the structured prediction situation is not possible, because the
methods still require explicit handling of the output label vectors, or cannot predict labels that were
not part of the training set.
The actual task of predicting multiple structured outputs has so far not appeared explicitly in the
literature. The situation of multiple inputs during training has, however, received some attention:
[27] introduces a one-class SVM based training technique for learning with ambiguous ground truth
data. [13] trains an SSVM for the same task by defining a task-adapted loss function
Δ_min(Y, ȳ) = min_{y∈Y} Δ(y, ȳ). [28] uses a similar min-loss in a CRF setup to overcome problems
with incomplete annotation. Note that Δ_min(Y, ȳ) has the right signature to be used as a
misclassification cost δ(Y, ȳ) in MLSP. The compatibility functions learned by the maximum-margin
techniques [13, 27] have the same functional form as f(x, y) in MLSP, so they can, in principle, be
used to predict multiple outputs using Equation (2). However, our experiments in Section 5 show that
this leads to low multi-label prediction accuracy, because the training setup is not designed for
this evaluation procedure.
4.1 Structured Multi-label Prediction in the SSVM Framework
At first sight, it appears unnecessary to go beyond the standard structured prediction framework at
all in trying to predict subsets of Y. As mentioned in Section 3, multi-label prediction into Y can
be interpreted as single-label prediction into P(Y), so a straight-forward approach to multi-label
structured prediction would be to use an ordinary SSVM with output set P(Y). We will call this
setup P-SSVM. It has previously been proposed for classical multi-label prediction, for example
in [23]. Unfortunately, as we will show in this section, the P-SSVM setup is not well suited to the
structured prediction situation.
A P-SSVM learns a prediction function G(x) := argmax_{Y∈P(Y)} F(x, Y), with a linearly
parameterized compatibility function F(x, Y) := ⟨w, φ(x, Y)⟩, by solving the optimization problem

    argmin_{w∈H, ξ¹,…,ξⁿ ∈ ℝ⁺}  (1/2) ‖w‖² + (C/n) Σ_{i=1}^{n} ξ^i,
    subject to  ξ^i ≥ Δ_ML(Y^i, Y) + F(x^i, Y) − F(x^i, Y^i),              (15)

for i = 1, …, n, and for all Y ∈ P(Y). The main problem with this general form is that identifying
violated constraints of (15) requires loss-augmented maximization of F over P(Y), i.e. an
exponentially larger set than Y. To better understand this problem, we analyze what happens when making
the same simplifying assumptions as for MLSP in Section 3.1. First, we assume additivity of F over
Y, i.e. F(x, Y) := Σ_{y∈Y} f(x, y) for f(x, y) := ⟨w, φ(x, y)⟩. This turns the argmax evaluation
of G(x) exactly into the prediction rule (2), and the constraint set in (15) simplifies to

    ξ^i ≥ Δ_ML(Y^i, Y) − Σ_{y ∈ Y△Y^i} v^i_y f(x^i, y),
    for i = 1, …, n, and for all Y ∈ P(Y).                                 (16)

Choosing Δ_ML as the max loss does not allow us to further simplify this expression, but choosing the
sum loss does: with Δ_ML(Y^i, Y) = Σ_{y ∈ Y△Y^i} δ(Y^i, y), we obtain an explicit expression for the
label set maximizing the right hand side of the constraint (16), namely

    Y^i_viol = {y ∈ Y^i : f(x^i, y) < δ(Y^i, y)} ∪ {y ∈ Y\Y^i : f(x^i, y) > −δ(Y^i, y)}.   (17)

Thus, we avoid having to maximize a function over P(Y). Unfortunately, the set Y^i_viol in
Equation (17) can contain exponentially many terms, rendering a numeric computation of
F(x^i, Y^i_viol) or its gradient still infeasible in general. Note that this is not just a rare, easily
avoidable case. Because w, and thereby f, are learned iteratively, they typically go through phases
of low prediction quality, i.e. large Y^i_viol. In fact, starting the optimization with w = 0 would
already lead to Y^i_viol = Y for all i = 1, …, n. Consequently, we presume that P-SSVM training is
intractable for structured prediction problems, except for the case of a small label set.
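For a label set small enough to enumerate, the violating set (17) is straightforward to compute
(illustrative sketch only; in the structured setting this enumeration is exactly what is infeasible):

    def violating_set(f, x_i, Y_i, all_labels, delta):
        """Equation (17), computable only when all_labels can be enumerated."""
        viol = set()
        for y in all_labels:
            if y in Y_i and f(x_i, y) < delta(Y_i, y):
                viol.add(y)       # positive label scored too low
            elif y not in Y_i and f(x_i, y) > -delta(Y_i, y):
                viol.add(y)       # negative label scored too high
        return viol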
Note that while computational complexity is the most prominent problem of P-SSVM training, it is
not the only one. For example, even if we found a polynomial-time training algorithm for (15), the
generalization ability of the resulting predictor would be unclear: the SSVM generalization
bounds [8] suggest that training sets of size O(log |P(Y)|) = O(|Y|) would be required, compared to
the O(log |Y|) bound we established for MLSP in Section 3.2.
5 Experimental Evaluation
To show the practical use of MLSP we performed experiments on multi-label hierarchical
classification and object detection in natural images. The complete protocol of training a miniature
toy example can be found in the supplemental material (available from the author's homepage).
5.1 Multi-label hierarchical classification
We use hierarchical classification as an illustrative example that in particular allows us to compare
MLSP to alternative, less scalable, methods. On the one hand, it is straightforward to model as a
structured prediction task, see e.g. [3, 29, 30, 31]. On the other hand, its output set is small enough
that we can also compare MLSP against approaches that cannot handle very large output sets, in
particular P-SSVM and independent per-label training.

The task in hierarchical classification is to classify samples into a number of discrete classes, where
each class corresponds to a path in a tree. Classes are considered related if they share a path in the
tree, and this is reflected by sharing parts of the joint feature representations. In our experiments,
we use the PASCAL VOC2006 dataset that contains 5304 images, each belonging to between 1
and 4 out of 10 classes. We represent each image x by a 960-dimensional GIST feature vector φ(x)
and use the same 19-node hierarchy and joint feature function, φ(x, y) = vec(φ(x) ⊗ λ(y)), as in
[30], where λ(y) encodes the hierarchy path of class y. As baselines we use P-SSVM [23],
JKSE [27], and an SSVM trained with the normal single-label objective but evaluated by
Equation (2). We follow the pre-defined data splits, doing model selection using the train and val
parts to determine C ∈ {2⁻¹, …, 2¹⁴} (MLSP, P-SSVM, SSVM), or ν ∈ {0.05, 0.10, …, 0.95} (JKSE).
We then retrain on the combination of train and val, and we test on the test part of the dataset.
As the label set is small, we use exhaustive search over Y to identify violated constraints during
training and to perform the final predictions.
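The outer-product joint feature map used here can be sketched as follows (our illustration; the
exact label encoding of [30] may differ, we assume a binary path-indicator vector):

    import numpy as np

    def joint_feature(phi_x, lambda_y):
        """phi(x, y) = vec(phi(x) (x) lambda(y)) for hierarchical labels.

        phi_x    : image feature, e.g. a 960-dim GIST vector.
        lambda_y : binary vector over the 19 hierarchy nodes, marking the
                   root-to-leaf path of class y (our assumed encoding)."""
        return np.outer(phi_x, lambda_y).ravel()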
We report results in Table 1a). As there is no single established multi-label error measure, and
because it illustrates the effect of training with different loss functions, we report several common
measures. The results show nicely how the assumptions made during training influence the prediction
characteristics. Qualitatively, MLSP achieves the best prediction accuracy in the max loss, while
P-SSVM is better if we judge by the sum loss. This exactly reflects the loss functions they are trained
with. Independent training achieves very good results with respect to both measures, justifying its
common use for multi-label prediction with small label sets and many training examples per label².
Ordinary SSVM training does not achieve good max- or sum-loss scores, but it performs well if
quality is measured by the average of the area under the precision-recall curve across labels for
each individual test example. This is also plausible, as SSVM training uses a ranking-like loss: all
potential labels for each input are enforced to be in the right order (correct labels have higher
scores than incorrect ones), but nothing in the objective encourages a cut-off point at 0. As a
consequence, too few or too many labels are predicted by Equation (2). In Table 1a) it appears to be
too many, visible as high recall but low precision. JKSE does not achieve competitive results in max
loss, mAUC, or F1-score. Potentially this is because we use it with a linear kernel to stay comparable
with the other methods, whereas [27] reported good results mainly for nonlinear kernels.
Qualitatively, MLSP and P-SSVM show comparable prediction quality. We take this as an indication
that both training with sum loss and training with max loss make sense conceptually.

2
For $\Delta_{\mathrm{sum}}$ this is not surprising: independent training is known to be the optimal setup, if enough data is
available [32]. For $\Delta_{\max}$, the multi-class reformulation would be the optimal setup. The problem in multi-label
structured prediction is solely that |Y| is too large, and training data too scarce, to use either of these setups.
Table 1: Multi-label structured prediction results. Δmax/Δsum: max/sum loss (lower is better),
mAUC: mean area under per-sample precision-recall curve, prec/rec/F1: precision, recall, F1-score
(higher is better). Methods printed in italics are infeasible for general structured output sets.

(a) Hierarchical classification results.

  Method    Δmax   Δsum   mAUC   F1 ( prec / rec )
  MLSP      0.73   1.59   0.82   0.42 ( 0.40 / 0.46 )
  JKSE      1.00   1.91   0.54   0.23 ( 0.14 / 0.76 )
  SSVM      0.88   3.86   0.84   0.37 ( 0.24 / 0.88 )
  P-SSVM    0.75   1.11   0.83   0.44 ( 0.48 / 0.41 )
  indep.    0.73   1.07   0.84   0.46 ( 0.61 / 0.38 )

(b) Object detection results.

  Method    Δmax   Δsum   F1 ( prec / rec )
  MLSP      0.66   1.31   0.46 ( 0.60 / 0.52 )
  JKSE      0.99   7.29   0.09 ( 0.60 / 0.16 )
  SSVM      0.93   3.71   0.21 ( 0.79 / 0.33 )
  P-SSVM    infeasible
  indep.    infeasible
However, of the five methods, only MLSP, JKSE and SSVM generalize to the more general structured prediction
setting, as they do not require exhaustive enumeration of the label set. Amongst these, MLSP is
preferable, except if one is only interested in ranking the labels, for which SSVM also works well.
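For reference, the per-example quantities behind Table 1 can be computed as in the following sketch. Our reading of the measures, with 0/1 element costs (so the max loss reduces to a subset error and the sum loss to the size of the symmetric difference), is an assumption, as are the tie-breaking conventions.

```python
# Minimal sketch of per-example multi-label measures (assumptions:
# 0/1 element costs, F1 := 0 when precision + recall = 0).
def multilabel_measures(Y_true, Y_pred):
    T, P = set(Y_true), set(Y_pred)
    delta_max = 0.0 if P == T else 1.0   # max loss under 0/1 costs
    delta_sum = float(len(P ^ T))        # sum loss: |symmetric difference|
    tp = len(P & T)
    prec = tp / len(P) if P else 0.0
    rec = tp / len(T) if T else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return delta_max, delta_sum, prec, rec, f1
```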
5.2 Object class detection in natural images
Object detection can be solved as a structured prediction problem where natural images are the inputs
and coordinate tuples of bounding boxes are the outputs. The label set is of quadratic size in the number of image pixels and thus cannot be searched exhaustively. However, efficient (loss-augmented)
argmax-prediction can be performed by branch-and-bound search [33]. Object detection is also inherently a multi-label task, because natural images contain different numbers of objects. We perform
experiments on the public UIUC-Cars dataset [34]. Following the experimental setup of [27] we use
the multiscale part of the dataset for training and the singlescale part for testing. The additional set
of pre-cropped car and background images serves as validation set for model selection. We use the
localization kernel, $k\big((x, y), (\bar{x}, \bar{y})\big) = \phi(x|_y)^t \phi(\bar{x}|_{\bar{y}})$, where $\phi(x|_y)$ is a 1000-dimensional bag of
visual words representation of the region $y$ within the image $x$ [13]. As misclassification cost we
use $\Delta(Y, y) := 1$ for $y \in Y$, and $\Delta(Y, y) := \min_{\bar{y} \in Y} A(\bar{y}, y)$ otherwise, where $A(\bar{y}, y) := 0$ if
$\mathrm{area}(\bar{y} \cap y)/\mathrm{area}(\bar{y} \cup y) \geq 0.5$, and $A(\bar{y}, y) := 1$ otherwise. This is a common measure in object detection, which
reflects the intuition that all objects in an image should be identified, and that an object's position is
acceptable if it overlaps sufficiently with at least one ground truth object. P-SSVM and independent
training are not applicable in this setup, so we compare MLSP against JKSE and SSVM. For each
method we train models on the training set and choose the $C$ or $\nu$ value that maximizes the F1 score
over the validation set of pre-cropped object and background images. Prediction is performed using
branch and bound optimization with greedy non-maximum suppression [35]. Table 1b) summarizes
the results on the test set (we do not report the mAUC measure, as computing this would require
summing over the complete output set). One sees that MLSP achieves the best results amongst the
three methods. SSVM as well as JKSE suffer particularly from low recall, and their predictions also
have higher sum loss as well as max loss.
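The overlap-based cost above translates directly into code. The sketch below assumes boxes given as (left, top, right, bottom) tuples, which is our convention for illustration only.

```python
# Minimal sketch of the detection cost: A(y_bar, y) = 0 iff the boxes
# overlap with intersection-over-union >= 0.5; Delta(Y, y) = 1 for
# y in Y and min over ground truth of A(y_bar, y) otherwise.
# Boxes are (left, top, right, bottom) tuples -- an assumed convention.
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def A(y_bar, y):
    return 0.0 if iou(y_bar, y) >= 0.5 else 1.0

def delta(Y, y):
    if y in Y:
        return 1.0
    return min((A(y_bar, y) for y_bar in Y), default=1.0)
```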
6 Summary and Discussion
We have studied multi-label classification for structured output sets. Existing multi-label techniques
cannot directly be applied to this task because of the large size of the output set, and our analysis
showed that formulating multi-label structured prediction in a set-valued structured support vector machine framework also leads to infeasible training problems. Instead, we proposed a new
maximum-margin formulation, MLSP, that remains computationally tractable by use of the max
loss instead of the sum loss between sets, and shows several of the advantageous properties known
from other maximum-margin based techniques, in particular a convex training problem and PAC-Bayesian generalization bounds. Our experiments showed that MLSP has higher prediction accuracy
than baseline methods that remain applicable in structured output settings. For small label sets, where
both concepts are applicable, MLSP performs comparably to the set-valued SSVM formulation.

Besides these promising initial results, we believe that there are still several aspects of multi-label
structured prediction that need to be better understood, in particular the prediction problem at test
time. Collecting all elements of positive score is a natural criterion, but it is costly to perform
exactly if the output set is very large. Therefore, it would be desirable to develop sparsity enforcing
variations of Equation (2), for example by adopting ideas from compressed sensing [25].
References
[1] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In ICML, 2001.
[2] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[3] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables. JMLR, 6, 2006.
[4] T. Joachims, T. Finley, and C. N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning,
77(1), 2009.
[5] C. H. Teo, S. V. N. Vishwanathan, A. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, 11, 2010.
[6] G. Tsoumakas and I. Katakis. Multi-label classification: An overview. International Journal of Data
Warehousing and Mining, 3(3), 2007.
[7] K. Dembczynski, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic
classifier chains. In ICML, 2011.
[8] D. McAllester. Generalization bounds and consistency for structured labeling. In G. Bakır, T. Hofmann,
B. Schölkopf, A. J. Smola, and B. Taskar, editors, Predicting Structured Data. MIT Press, 2007.
[9] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert
systems. Statistics and Computing, 8(2), 1998.
[10] C. Yanover and Y. Weiss. Finding the M most probable configurations using loopy belief propagation. In
NIPS, 2004.
[11] M. Fromer and A. Globerson. An LP View of the M-best MAP problem. In NIPS, 2009.
[12] J. Porway and S.-C. Zhu. C4: Exploring multiple solutions in graphical models by cluster sampling.
PAMI, 33(9), 2011.
[13] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In
ECCV, 2008.
[14] A. Bordes, N. Usunier, and L. Bottou. Sequence labelling SVMs trained in one pass. ECML PKDD, 2008.
[15] M. R. Boutell, J. Luo, X. Shen, and C.M. Brown. Learning multi-label scene classification. Pattern
Recognition, 37(9), 2004.
[16] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In
ECML, 1998.
[17] R. E. Schapire and Y. Singer. Boostexter: A boosting-based system for text categorization. Machine
Learning, 39(2–3), 2000.
[18] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In ACM SIGIR, 2007.
[19] T. Gärtner and S. Vembu. On structured output training: Hard cases and an efficient alternative. Machine
Learning, 76(2):227–242, 2009.
[20] Y. Yue and T. Joachims. Predicting diverse subsets using structural SVMs. In ICML, 2008.
[21] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting shared subspaces for multi-label classification. In ACM
SIGKDD, 2008.
[22] P. Rai and H. Daumé III. Multi-label prediction via sparse infinite CCA. In NIPS, 2009.
[23] B. Hariharan, L. Zelnik-Manor, S. V. N. Vishwanathan, and M. Varma. Large scale max-margin multilabel classification with priors. In ICML, 2010.
[24] W. Bi and J. Kwok. Multi-label classification on tree- and DAG-structured hierarchies. In ICML, 2011.
[25] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS,
2009.
[26] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains
with large number of labels. In ECML PKDD, 2008.
[27] C. H. Lampert and M. B. Blaschko. Structured prediction by joint kernel support estimation. Machine
Learning, 77(2–3), 2009.
[28] J. Petterson, T. S. Caetano, J. J. McAuley, and J. Yu. Exponential family graph matching and ranking. In
NIPS, 2009.
[29] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor. Kernel-based learning of hierarchical multilabel
classification models. JMLR, 7, 2006.
[30] A. Binder, K.-R. Müller, and M. Kawanabe. On taxonomies for multi-class image categorization. IJCV,
2011.
[31] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM,
2004.
[32] K. Dembczynski, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic
classifier chains. In ICML, 2010.
[33] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound
framework for object localization. PAMI, 31(12), 2009.
[34] S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. PAMI, 26(11), 2004.
[35] C. H. Lampert. An efficient divide-and-conquer cascade for nonlinear object detection. In CVPR, 2010.
3,517 | 4,185 | Phase transition in the family of p-resistances
Morteza Alamgir
Max Planck Institute for Intelligent Systems
Tübingen, Germany
[email protected]
Ulrike von Luxburg
Max Planck Institute for Intelligent Systems
Tübingen, Germany
[email protected]
Abstract

We study the family of p-resistances on graphs for p ≥ 1. This family generalizes
the standard resistance distance. We prove that for any fixed graph, for p = 1
the p-resistance coincides with the shortest path distance, for p = 2 it coincides
with the standard resistance distance, and for p → ∞ it converges to the inverse
of the minimal s-t-cut in the graph. Secondly, we consider the special case of
random geometric graphs (such as k-nearest neighbor graphs) when the number
n of vertices in the graph tends to infinity. We prove that an interesting phase
transition takes place. There exist two critical thresholds p∗ and p∗∗ such that
if p < p∗, then the p-resistance depends on meaningful global properties of the
graph, whereas if p > p∗∗, it only depends on trivial local quantities and does
not convey any useful information. We can explicitly compute the critical values:
p∗ = 1 + 1/(d − 1) and p∗∗ = 1 + 1/(d − 2) where d is the dimension of
the underlying space (we believe that the fact that there is a small gap between
p∗ and p∗∗ is an artifact of our proofs). We also relate our findings to Laplacian
regularization and suggest to use q-Laplacians as regularizers, where q satisfies
1/p∗ + 1/q = 1.
1 Introduction
The graph Laplacian is a popular tool for unsupervised and semi-supervised learning problems on
graphs. It is used in the context of spectral clustering, as a regularizer for semi-supervised learning,
or to compute the resistance distance on graphs. However, it has been observed that under certain
circumstances, standard Laplacian-based methods show undesired artifacts. In the semi-supervised
learning setting Nadler et al. (2009) showed that as the number of unlabeled points increases, the solution obtained by Laplacian regularization degenerates to a non-informative function. von Luxburg
et al. (2010) proved that as the number of points increases, the resistance distance converges to a
meaningless limit function. Independently of these observations, a number of authors suggested to
generalize Laplacian methods. The observation was that the "standard" Laplacian methods correspond to a vector space setting with L2-norms, and that it might be beneficial to work in a more
general Lp setting for p ≠ 2 instead. See Bühler and Hein (2009) for an application to clustering
and Herbster and Lever (2009) for an application to label propagation. In this paper we take up
several of these loose ends and connect them.

The main object under study in this paper is the family of p-resistances, which is a generalization
of the standard resistance distance. Our first major result proves that the family of p-resistances is
very rich and contains several special cases. The general picture is that the smaller p is, the more the
resistance is concentrated on "short paths". In particular, the case p = 1 corresponds to the shortest
path distance in the graph, the case p = 2 to the standard resistance distance, and the case p → ∞
to the inverse s-t-mincut.
Second, we study the behavior of p-resistances in the setting of random geometric graphs like lattice
graphs, ε-graphs or k-nearest neighbor graphs. We prove that as the sample size n increases, there
are two completely different regimes of behavior. Namely, there exist two critical thresholds p∗ and
p∗∗ such that if p < p∗, the p-resistances convey useful information about the global topology of
the data (such as its cluster properties), whereas for p > p∗∗ the resistance distances approximate a
limit that does not convey any useful information. We can explicitly compute the value of the critical
thresholds p∗ := 1 + 1/(d − 1) and p∗∗ := 1 + 1/(d − 2). This result even holds independently of
the exact construction of the geometric graph.

Third, as we will see in Section 5, our results also shed light on the Laplacian regularization
and semi-supervised learning setting. As there is a tight relationship between p-resistances and
graph Laplacians, we can reformulate the artifacts described in Nadler et al. (2009) in terms of
p-resistances. Taken together, our results suggest that standard Laplacian regularization should be
replaced by q-Laplacian regularization (where q is such that 1/p∗ + 1/q = 1).
2 Intuition and main results
Consider an undirected, weighted graph G = (V, E) with n vertices. As is standard in machine
learning, the edge weights are supposed to indicate similarity of the adjacent points (not distances).
Denote the weight of edge e by $w_e \geq 0$ and the degree of vertex u by $d_u$. The length of a path $\gamma$
in the weighted graph is defined as $\sum_{e \in \gamma} 1/w_e$. In the electrical network interpretation, a graph is
considered as a network where each edge e ∈ E has resistance $r_e = 1/w_e$. The effective resistance
(or resistance distance) R(s, t) between two vertices s and t in the network is defined as the overall
resistance one obtains when connecting a unit volt battery to s and t. It can be computed in many
ways, but the one most useful for our paper is the following representation in terms of flows (cf.
Section IX.1 of Bollobas, 1998):

$$R(s, t) = \min\Big\{\sum_{e \in E} r_e\, i_e^2 \;\Big|\; i = (i_e)_{e \in E} \text{ unit flow from } s \text{ to } t\Big\}. \qquad (1)$$
In von Luxburg et al. (2010) it has been proved that in many random graph models, the resistance
distance R(s, t) between two vertices s and t converges to the trivial limit expression $1/d_s + 1/d_t$
as the size of the graph increases. We now want to present some intuition as to how this problem
can be resolved in a natural way. For a subset M ⊆ E of edges we define the contribution of M
to the resistance R(s, t) as the part of the sum in (1) that runs over the edges in M. Let i∗ be
a flow minimizing (1). To explain our intuition we separate this flow into two parts: $R(s, t) =
R(s, t)_{\mathrm{local}} + R(s, t)_{\mathrm{global}}$. The part $R(s, t)_{\mathrm{local}}$ stands for the contribution of i∗ that stems from
the edges in small neighborhoods around s and t, whereas $R(s, t)_{\mathrm{global}}$ is the contribution of the
remaining edges (exact definition given below). A useful distance function is supposed to encode
the global geometry of the graph, for example its cluster properties. Hence, $R(s, t)_{\mathrm{global}}$ should be
the most important part in this decomposition. However, in case of the standard resistance distance
the contribution of the global part becomes negligible as n → ∞ (for many different models of
graph construction). This effect happens because as the graph increases, there are so many different
paths between s and t that once the flow has left the neighborhood of s, electricity can flow "without
considerable resistance". The "bottleneck" for the flow is the part that comes from the edges in the
local neighborhoods of s and t, because here the flow has to concentrate on relatively few edges. So
the dominating part is $R(s, t)_{\mathrm{local}}$.

In order to define a useful distance function, we have to ensure that the global part has a significant
contribution to the overall resistance. To this end, we have to avoid that the flow is distributed over
"too many paths". In machine learning terms, we would like to achieve a flow that is "sparser"
in the number of paths it uses. From this point of view, a natural attempt is to replace the 2-norm
optimization problem (1) by a p-norm optimization problem for some p < 2. Based on this intuition,
our idea is to replace the squares in the flow problem (1) by a general exponent p ≥ 1 and define the
following new distance function on the graph.
Definition 1 (p-resistance) On any weighted graph G, for any p ≥ 1 we define

$$R_p(s, t) := \min\Big\{\sum_{e \in E} r_e\, |i_e|^p \;\Big|\; i = (i_e)_{e \in E} \text{ unit flow from } s \text{ to } t\Big\}. \qquad (\star)$$
As it turns out, our newly defined distance function $R_p$ is closely related but not completely identical
to the p-resistance $R_p^H$ defined by Herbster and Lever (2009). A discussion of this issue can be found
in Section 6.1.
[Figure 1 appears here: three panels showing the optimal s-t-flow on a two-dimensional grid, for (a) p = 2, (b) p = 1.33, (c) p = 1.1.]

Figure 1: The s-t-flows minimizing (⋆) in a two-dimensional grid for different values of p. The
smaller p, the more the flow concentrates along the shortest path.
In toy simulations we can observe that the desired effect of concentrating the flow on fewer paths
takes place indeed. In Figure 1 we show how the optimal flow between two points s and t gets
propagated through the network. We can see that the smaller p is, the more the flow is concentrated
along the shortest path between s and t. We are now going to formally investigate the influence of
the parameter p. Our first question is how the family $R_p(s, t)$ behaves as a function of p (that is, on
a fixed graph and for fixed s, t). The answer is given in the following theorem.

Theorem 2 (Family of p-resistances) For any weighted graph G the following statements are true:

1. For p = 1, the p-resistance coincides with the shortest path distance on the graph.

2. For p = 2, the p-resistance reduces to the standard resistance distance.

3. For p → ∞, $R_p(s, t)^{1/(p-1)}$ converges to 1/m where m is the unweighted s-t-mincut.
This theorem shows that our intuition as outlined above was exactly the right one. The smaller p
is, the more flow is concentrated along straight paths. The extreme case is p = 1, which yields the
shortest path distance. In the other direction, the larger p is, the more widely distributed the flow is.
Moreover, the theorem above suggests that for p close to 1, $R_p$ encodes global information about the
part of the graph that is concentrated around the shortest path. As p increases, global information
is still present, but now describes a larger portion of the graph, say, its cluster structure. This is
the regime that is most interesting for machine learning. The larger p becomes, the less global
information is present in $R_p$ (because flows even use extremely long paths that take long detours),
and in the extreme case p → ∞ we are left with nothing but the information about the minimal
s-t-cut. In many large graphs, the latter just contains local information about one of the points s or t
(see the discussion at the end of this section). An illustration of the different behaviors can be found
in Figure 2.
The next question, inspired by the results of von Luxburg et al. (2010), is what happens to $R_p(s, t)$
if we fix p but consider a family $(G_n)_{n \in \mathbb{N}}$ of graphs such that the number n of vertices in $G_n$ tends
to ∞. Let us consider geometric graphs such as k-nearest neighbor graphs or ε-graphs. We now
give exact definitions of the local and global contributions to the p-resistance. Let r and R be real
numbers that depend on n (they will be specified in Section 4) and C ≥ R/r a constant. We define
the local neighborhood N(s) of vertex s as the ball with radius C · r around s. We will see later that
the condition C ≥ R/r ensures that N(s) contains at least all vertices adjacent to s. By abuse of
notation we also write e ∈ N(s) if both endpoints of edge e are contained in N(s). Let i∗ be the
optimal flow in Problem (⋆). We define

$$R_p^{\mathrm{local}}(s) := \sum_{e \in N(s)} r_e\, |i^*_e|^p,$$

$R_p^{\mathrm{local}}(s, t) := R_p^{\mathrm{local}}(s) + R_p^{\mathrm{local}}(t)$, and $R_p^{\mathrm{global}}(s, t) := R_p(s, t) - R_p^{\mathrm{local}}(s, t)$. Our next result
conveys that the behavior of the family of p-resistances shows an interesting phase transition. The
statements involve a term $\lambda_n$ that should be interpreted as the average degree in the graph $G_n$ (exact
definition see later).
[Figure 2 appears here: four heat-plot panels for (a) p = 1, (b) p = 1.11, (c) p = 1.5, (d) p = 2.]

Figure 2: Heat plots of the $R_p$ distance matrices for a mixture of two Gaussians in $\mathbb{R}^{10}$. We can see
that the larger p is, the less pronounced the "global information" about the cluster structure is.
Theorem 3 (Phase transition for p-resistances in large geometric graphs) Consider a family
$(G_n)_{n \in \mathbb{N}}$ of unweighted geometric graphs on $\mathbb{R}^d$, d > 2, that satisfies some general assumptions
(see Section 4 for definitions and details). Fix two vertices s and t. Define the two critical values
p∗ := 1 + 1/(d − 1) and p∗∗ := 1 + 1/(d − 2). Then, as n → ∞, the following statements hold:

1. If p < p∗ and $\lambda_n$ is sub-polynomial in n, then $R_p^{\mathrm{global}}(s, t)/R_p^{\mathrm{local}}(s, t) \to \infty$, that is the global
contribution dominates the local one.

2. If p > p∗∗ and $\lambda_n \to \infty$, then $R_p^{\mathrm{local}}(s, t)/R_p^{\mathrm{global}}(s, t) \to \infty$ and $R_p(s, t) \to \frac{1}{d_s^{p-1}} + \frac{1}{d_t^{p-1}}$, that
is all global information vanishes.
This result is interesting. It shows that there exists a non-trivial point of phase transition in the
behavior of p-resistances: if p < p∗, then p-resistances are informative about the global topology
of the graph, whereas if p > p∗∗ the p-resistances converge to trivial distance functions that do not
depend on any global properties of the graph. In fact, we believe that p∗∗ should be 1 + 1/(d − 1) as
well, but our current proof leaves the tiny gap between p∗ = 1 + 1/(d − 1) and p∗∗ = 1 + 1/(d − 2).

Theorem 3 is a substantial extension of the work of von Luxburg et al. (2010), in several respects.
First, and most importantly, it shows the complete picture of the full range of p ≥ 1, and not
just the single snapshot at p = 2. We can see that there is a range of values for p for which p-resistance distances convey very important information about the global topology of the graph, even
in extremely large graphs. Also note how nicely Theorems 2 and 3 fit together. It is well-known
that as n → ∞, the shortest path distance corresponding to p = 1 converges to the (geodesic)
distance of s and t in the underlying space (Tenenbaum et al., 2000), which of course conveys
global information. von Luxburg et al. (2010) proved that the standard resistance distance (p = 2)
converges to the trivial local limit. Theorem 3 now identifies the point of phase transition p∗ between
the boundary cases p = 1 and p = 2. Finally, for p → ∞, we know by Theorem 2 that the p-resistance converges to the inverse of the s-t-min-cut. It is widely believed that the minimal s-t cut
in geometric graphs converges to the minimum of the degrees of s and t as n → ∞ (even though
a formal proof has yet to be presented and we cannot point to any reference). This is in alignment
with the result of Theorem 3 that the p-resistance converges to $1/d_s^{p-1} + 1/d_t^{p-1}$. As p → ∞,
only the smaller of the two degrees contributes to the local part, which agrees with the limit for the
s-t-mincut.
3 Equivalent optimization problems and proof of Theorem 2

In this section we will consider different optimization problems that are inherently related to p-resistances. All graphs in this section are considered to be weighted.

3.1 Equivalent optimization problems
Consider the following two optimization problems for p > 1:

Flow problem:
$$R_p(s, t) := \min\Big\{\sum_{e \in E} r_e\, |i_e|^p \;\Big|\; i = (i_e)_{e \in E} \text{ unit flow from } s \text{ to } t\Big\} \qquad (\star)$$

Potential problem:
$$C_p(s, t) := \min\Big\{\sum_{e=(u,v)} \frac{|\varphi(u) - \varphi(v)|^{1 + \frac{1}{p-1}}}{r_e^{\frac{1}{p-1}}} \;\Big|\; \varphi(s) - \varphi(t) = 1\Big\} \qquad (\star\star)$$
It is well known that these two problems are equivalent for p = 2 (see Section 1.3 of Doyle and
Snell, 2000). We will now extend this result to general p > 1.

Proposition 4 (Equivalent optimization problems) For p > 1, the following statements are true:

1. The flow problem (⋆) has a unique solution.

2. The solutions of (⋆) and (⋆⋆) satisfy $R_p(s, t) = \big(C_p(s, t)\big)^{-(p-1)}$, equivalently $R_p(s, t)^{1/(p-1)} = 1/C_p(s, t)$.

To prove this proposition, we derive the Lagrange dual of problem (⋆) and use the homogeneity of
the variables to convert it to the form of problem (⋆⋆). Details can be found in the supplementary
material. With this proposition we can now easily see why Theorem 2 is true.
Proof of Theorem 2. Part (1). If we set p = 1, Problem (⋆) coincides with the well-known linear
programming formulation of the shortest path problem, see Chapter 12 of Bazaraa et al. (2010).

Part (2). For p = 2, we get the well-known formula for the effective resistance.

Part (3). For p → ∞, the objective function in the dual problem (⋆⋆) converges to

$$C_\infty(s, t) := \min\Big\{\sum_{e=(u,v)} |\varphi(u) - \varphi(v)| \;\Big|\; \varphi(s) - \varphi(t) = 1\Big\}.$$

This coincides with the well-known linear programming formulation of the min-cut problem in
unweighted graphs. Using Proposition 4 we finally obtain

$$\lim_{p \to \infty} R_p(s, t)^{\frac{1}{p-1}} = \lim_{p \to \infty} \frac{1}{C_p(s, t)} = \frac{1}{C_\infty(s, t)} = \frac{1}{\text{s-t-mincut}}.$$
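As a sanity check on Theorem 2, the `p_resistance` sketch given after Definition 1 can be evaluated on a toy graph; the unit-resistance triangle below is our own example, not one from the text.

```python
# Toy check of Theorem 2 on a unit-resistance triangle (nodes 0, 1, 2;
# all three edges present). Expected values: shortest path R_1(0,1) = 1,
# effective resistance R_2(0,1) = 2/3, and R_p(0,1)^(1/(p-1)) slowly
# approaching 1/mincut = 1/2 as p grows.
edges = [(0, 1), (1, 2), (0, 2)]
r = np.ones(3)
print(p_resistance(3, edges, r, 0, 1, p=1.0))             # ~1.0
print(p_resistance(3, edges, r, 0, 1, p=2.0))             # ~0.667
print(p_resistance(3, edges, r, 0, 1, p=8.0) ** (1 / 7))  # ~0.52, -> 0.5
```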
4 Geometric graphs and the proof of Theorem 3
In this section we consider the class of geometric graphs. The vertices of such graphs consist of
points $X_1, \ldots, X_n \in \mathbb{R}^d$, and vertices are connected by edges if the corresponding points are "close"
(for example, they are k-nearest neighbors of each other). In most cases, we consider the set of
points as drawn i.i.d. from some density on $\mathbb{R}^d$. Consider the following general assumptions.

General Assumptions: Consider a family $(G_n)_{n \in \mathbb{N}}$ of unweighted geometric graphs where $G_n$ is
based on $X_1, \ldots, X_n \in M \subseteq \mathbb{R}^d$, d > 2. We assume that there exist $0 < r \leq R$ (depending on n
and d) such that the following statements about $G_n$ hold simultaneously for all $x \in \{X_1, \ldots, X_n\}$:

1. Distribution of points: For $\rho \in \{r, R\}$ the number of sample points in $B(x, \rho)$ is of the
order $\Theta(n \cdot \rho^d)$.

2. Graph connectivity: x is connected to all sample points inside B(x, r) and x is not connected to any sample point outside B(x, R).

3. Geometry of M: M is a compact, connected set such that $M \setminus \partial M$ is still connected.
The boundary $\partial M$ is regular in the sense that there exist positive constants $\alpha > 0$ and
$\varepsilon_0 > 0$ such that if $\varepsilon < \varepsilon_0$, then for all points $x \in \partial M$ we have $\mathrm{vol}(B_\varepsilon(x) \cap M) \geq \alpha\, \mathrm{vol}(B_\varepsilon(x))$ (where vol denotes the Lebesgue volume). Essentially this condition just
excludes the situation where the boundary has arbitrarily thin spikes.

It is a straightforward consequence of these assumptions that there exists some function $\lambda(n) =: \lambda_n$
such that r and R are both of the order $\Theta((\lambda_n/n)^{1/d})$ and all degrees in the graph are of order $\Theta(\lambda_n)$.
4.1 Lower and upper bounds and the proof of Theorem 3

To prove Theorem 3 we need to study the balance between $R_p^{\mathrm{local}}$ and $R_p^{\mathrm{global}}$. We introduce the
shorthand notation

$$T_1 = \Theta\Big(\frac{1}{n^{p(1-1/d)-1}\, \lambda_n^{p(1+1/d)-1}}\Big), \qquad T_2 = \Theta\Big(\sum_{k=1}^{1/r} \frac{1}{k^{(d-2)(p-1)}\, \lambda_n^{2(p-1)}}\Big).$$
Theorem 5 (General bounds on $R_p^{\mathrm{local}}$ and $R_p^{\mathrm{global}}$) Consider a family $(G_n)_{n \in \mathbb{N}}$ of unweighted
geometric graphs that satisfies the general assumptions. Then the following statements are true
for any fixed pair s, t of vertices in $G_n$:

$$4C > R_p^{\mathrm{local}}(s, t) \geq \frac{1}{d_s^{p-1}} + \frac{1}{d_t^{p-1}} \qquad \text{and} \qquad T_1 + T_2 \geq R_p^{\mathrm{global}}(s, t) \geq T_1.$$
Note that by taking the sum of the two inequalities this theorem also leads to upper and lower
bounds for $R_p(s, t)$ itself. The proof of Theorem 5 consists of several parts. To derive lower bounds
on $R_p(s, t)$ we construct a second graph $G'_n$ which is a contracted version of $G_n$. Lower bounds
can then be obtained by Rayleigh's monotonicity principle. To get upper bounds on $R_p(s, t)$ we
exploit the fact that the p-resistance in an unweighted graph can be upper bounded by $\sum_{e \in E} i_e^p$,
where i is any unit flow from s to t. We construct a particular flow that leads to a good upper bound.
Finally, investigating the properties of lower and upper bounds we can derive the individual bounds
on $R_p^{\mathrm{local}}$ and $R_p^{\mathrm{global}}$. Details can be found in the supplementary material.

Theorem 3 can now be derived from Theorem 5 by straightforward computations.
4.2 Applications
Our general results can directly be applied to many standard geometric graph models.

The ε-graph. We assume that $X_1, \ldots, X_n$ have been drawn i.i.d. from some underlying density f
on $\mathbb{R}^d$, where M := supp(f) satisfies Part (3) of the general assumptions. Points are connected by
unweighted edges in the graph if their Euclidean distances are smaller than ε. Exploiting standard
results on ε-graphs (cf. the appendix in von Luxburg et al., 2010), it is easy to see that the general
assumptions (1) and (2) are satisfied with probability at least $1 - c_1 n \exp(-c_2 n \varepsilon^d)$ (where $c_1, c_2$ are
constants independent of n and d) with r = R = ε and $\lambda_n = \Theta(n \varepsilon^d)$. The probability converges to
1 if $n \to \infty$, $\varepsilon \to 0$ and $n \varepsilon^d / \log(n) \to \infty$.

k-nearest neighbor graphs. We assume that $X_1, \ldots, X_n$ have been drawn i.i.d. from some underlying density f on $\mathbb{R}^d$, where M := supp(f) satisfies Part (3) of the general assumptions. We
connect each point to its k nearest neighbors by an undirected, unweighted edge. Exploiting standard results on kNN-graphs (cf. the appendix in von Luxburg et al., 2010), it is easy to see that
the general assumptions (1) and (2) are satisfied with probability at least $1 - c_1 k \exp(-c_2 k)$ with
$r = \Theta((k/n)^{1/d})$, $R = \Theta((k/n)^{1/d})$, and $\lambda_n = k$. The probability converges to 1 if $n \to \infty$,
$k \to \infty$, and $k / \log(n) \to \infty$.

Lattice graphs. Consider uniform lattices such as the square lattice or triangular lattice in $\mathbb{R}^d$.
These lattices have constant degrees, which means that $\lambda_n = \Theta(1)$. If we denote the edge length of
the grid by ε, the total number of nodes in the support will be in the order of $n = \Theta(1/\varepsilon^d)$. This means
that the general assumptions hold for $r = R = \varepsilon = \Theta(1/n^{1/d})$ and $\lambda_n = \Theta(1)$. Note that while the
lower bounds of Theorem 3 can be applied to the lattice case, our current upper bounds do not hold
because they require that $\lambda_n \to \infty$.
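For concreteness, a symmetric unweighted kNN graph of the kind covered by these assumptions can be built as in the sketch below. The use of scikit-learn and the mutual symmetrization by elementwise maximum are our own choices for illustration.

```python
# Minimal sketch: a symmetric, unweighted kNN graph as in the text;
# the resulting average degree lambda_n is of order Theta(k).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_graph(X, k):
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity')
    return W.maximum(W.T)      # symmetrize: undirected 0/1 adjacency

X = np.random.rand(2000, 3)    # n = 2000 sample points in d = 3
W = knn_graph(X, k=20)
```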
5 Regularization by p-Laplacians

One of the most popular methods for semi-supervised learning on graphs is based on Laplacian
regularization. In Zhu et al. (2003) the label assignment problem is formulated as

$$\varphi = \operatorname{argmin}_\varphi\, C(\varphi) \quad \text{subject to} \quad \varphi(x_i) = y_i,\; i = 1, \ldots, l \qquad (2)$$

where $y_i \in \{\pm 1\}$ and $C(\varphi) := \varphi^T L \varphi$ is the energy function involving the standard (p = 2) graph
Laplacian L. This formulation is appealing and works well for small sample problems. However,
Nadler et al. (2009) showed that the method is not well posed when the number of unlabeled data
points is very large. In this setting, the solution of the optimization problem converges to a constant
function with "spikes" at the labeled points. We now present a simple theorem that connects these
findings to those concerning the resistance distance.
Theorem 6 (Laplacian regularization in terms of resistance distance) Consider a semi-supervised classification problem with one labeled point per class: $\varphi(s) = 1$, $\varphi(t) = -1$. Denote
the solution of (2) by $\varphi^*$, and let v be an unlabeled data point. Then

$$\varphi^*(v) - \varphi^*(t) > \varphi^*(s) - \varphi^*(v) \iff R_2(v, t) > R_2(v, s).$$

Proof. It is easy to verify that $\varphi^* = L^\dagger(e_s - e_t)$ and $R_2(s, t) = (e_s - e_t)^T L^\dagger (e_s - e_t)$ where $L^\dagger$
is the pseudo-inverse of the Laplacian matrix L. Therefore we have $\varphi^*(v) = e_v^T L^\dagger (e_s - e_t)$ and

$$\varphi^*(v) - \varphi^*(t) > \varphi^*(s) - \varphi^*(v) \iff (e_v - e_t)^T L^\dagger (e_s - e_t) > (e_s - e_v)^T L^\dagger (e_s - e_t)$$
$$\overset{(a)}{\iff} (e_v - e_t)^T L^\dagger (e_v - e_t) > (e_s - e_v)^T L^\dagger (e_s - e_v) \iff R_2(v, t) > R_2(v, s).$$

Here in step (a) we use the symmetry of $L^\dagger$ to state that $e_v^T L^\dagger e_s = e_s^T L^\dagger e_v$. □
What does this theorem mean? We have seen above that in case p = 2, if $n \to \infty$,

$$R_2(v, t) \approx \frac{1}{d_v} + \frac{1}{d_t} \qquad \text{and} \qquad R_2(v, s) \approx \frac{1}{d_v} + \frac{1}{d_s}.$$

Hence, the theorem states that if we threshold the function $\varphi^*$ at 0 to separate the two classes, then
all the points will be assigned to the labeled vertex with larger degree.
all the points will be assigned to the labeled vertex with larger degree.
Our conjecture is that an analogue to Theorem 6 also holds for general p. For a precise formulation,
define the matrix r as
?
'(i) '(j) i ? j
ri,j =
0
otherwise
P P m 1/m n 1/n
and introduce the matrix norm kAkm,n =
)
. Consider q such that 1/p +
i ((
j aij )
1/q = 1. We conjecture that if we used krkq,q as a regularizer for semi-supervised learning, then
the corresponding solution '? would satisfy
'? (v)
'? (t) > '? (s)
'? (v) () Rp (v, t) > Rp (v, s).
That is, the solution of the q-regularized problem would assign labels according to the Rp -distances.
In particular, using q-regularization for the value q with 1/q + 1/p? = 1 would resolve the artifacts
of Laplacian regularization described in Nadler et al. (2009).
It is worth mentioning that this regularization is different from others in the literature. The usual
Laplacian regularization term as in Zhu et al. (2003) coincides with krk2,2 , Zhou and Sch?olkopf
(2005) use the krk2,p norm, and our conjecture is that the krkq,q norm would be a good candidate.
Proving whether this conjecture is right or wrong is a subject of future work.
6 Related families of distance functions on graphs

In this section we sketch some relations between p-resistances and other families of distances.

6.1 Comparing Herbster's and our definition of p-resistances
For p′ ≥ 2, Herbster and Lever (2009) introduced the following definition of p-resistances:

$$R^H_{p'}(s, t) := \frac{1}{C^H_{p'}(s, t)} \quad \text{with} \quad C^H_{p'}(s, t) := \min\Big\{\sum_{e=(u,v)} \frac{|\varphi(u) - \varphi(v)|^{p'}}{r_e} \;\Big|\; \varphi(s) - \varphi(t) = 1\Big\}.$$

In Section 3.1 we have seen that the potential and flow optimization problems are duals of each
other. Based on this derivation we believe that the natural way of relating $R^H$ and $C^H$ would be to
replace the p′ in Herbster's potential formulation by q′ such that 1/p′ + 1/q′ = 1. That is, one would
have to consider $C^H_{q'}$ and then define $\widehat{R}^H_{p'} := 1/C^H_{q'}$. In particular, reducing Herbster's p′ towards 1
has the same influence as increasing our p to infinity and makes $R^H_{p'}$ converge to the minimal s-t-cut.

To ease further comparison, let us assume for now that we use "our" p in the definition of Herbster's
resistances. Then one can see by similar arguments as in Section 3.1 that $R^H_p$ can be rewritten as

$$R^H_p(s, t) := \min\Big\{\sum_{e \in E} r_e^{p-1}\, |i_e|^p \;\Big|\; i = (i_e)_{e \in E} \text{ unit flow from } s \text{ to } t\Big\}. \qquad (H)$$
Now it is easy to see that the main difference between Herbster's definition (H) and our definition
(⋆) is that (H) takes the power p − 1 of the resistances $r_e$, while we keep the resistances with
power 1. In many respects, $R_p$ and $R^H_p$ have properties that are similar to each other: they satisfy
slightly different versions (with different powers or weights) of the triangle inequality, Rayleigh's
monotonicity principle, laws for resistances in series and in parallel, and so on. We will not discuss
further details due to space constraints.
6.2 Other families of distances
There also exist other families of distances on graphs that share some of the properties of p-resistances. We will only discuss the ones that are most related to our work; for more references
see von Luxburg et al. (2010). The first such family was introduced by Yen et al. (2008), where the
authors use a statistical physics approach to reduce the influence of long paths to the distance. This
family is parameterized by a parameter θ, contains the shortest path distance at one end (θ → ∞)
and the standard resistance distance at the other end (θ → 0). However, the construction is somewhat
ad hoc, the resulting distances cannot be computed in closed form and do not even satisfy the triangle
inequality. A second family is the one of "logarithmic forest distances" by Chebotarev (2011). Even
though its derivation is complicated, it has a closed form solution and can be interpreted intuitively:
the contribution of a path to the overall distance is "discounted" by a factor $(1/\alpha)^l$ where l is the
length of the path. For α → 0, the logarithmic forest distance converges to the shortest path
distance; for α → ∞, it converges to the resistance distance.

At the time of writing this paper, the major disadvantage of both the families introduced by Yen
et al. (2008) and Chebotarev (2011) is that it is unknown how their distances behave as the size of
the graph increases. It is clear that on the one end (shortest path), they convey global information,
whereas on the other end (resistance distance) they depend on local quantities only when $n \to \infty$.
But what happens to all intermediate parameter values? Do all of them lead to meaningless distances
as $n \to \infty$, or is there some interesting phase transition as well? As long as this question has not
been answered, one should be careful when using these distances. In particular, it is unclear how the
parameters (θ and α, respectively) should be chosen, and it is hard to get an intuition about this.
7 Conclusions
We proved that the family of p-resistances has a wide range of behaviors. In particular, for p = 1
it coincides with the shortest path distance, for p = 2 with the standard resistance distance and
for p → ∞ it is related to the minimal s-t-cut. Moreover, an interesting phase transition takes
place: in large geometric graphs such as k-nearest neighbor graphs, the p-resistance is governed by
meaningful global properties as long as p < p∗ := 1 + 1/(d − 1), whereas it converges to the trivial
local quantity $1/d_s^{p-1} + 1/d_t^{p-1}$ if p > p∗∗ := 1 + 1/(d − 2). Our suggestion for practice is to use
p-resistances with p ≈ p∗. For this value of p, the p-resistances encode those global properties of
the graph that are most important for machine learning, namely the cluster structure of the graph.

Our findings are interesting on their own, but also help in explaining several artifacts discussed in the
literature. They go much beyond the work of von Luxburg et al. (2010) (which only studied the case
p = 2) and lead to an intuitive explanation of the artifacts of Laplacian regularization discovered in
Nadler et al. (2009). An interesting line of future research will be to connect our results to the ones
about p-eigenvectors of p-Laplacians (Bühler and Hein, 2009). For p = 2, the resistance distance
can be expressed in terms of the eigenvalues and eigenvectors of the Laplacian. We are curious to
see whether a refined theory on p-eigenvalues can lead to similarly tight relationships for general
values of p.
Acknowledgements

We would like to thank the anonymous reviewers who discovered an inconsistency in our earlier
proof, and Bernhard Schölkopf for helpful discussions.
References
M. Bazaraa, J. Jarvis, and H. Sherali. Linear Programming and Network Flows. Wiley-Interscience,
2010.
B. Bollobas. Modern Graph Theory. Springer, 1998.
T. Bühler and M. Hein. Spectral clustering based on the graph p-Laplacian. In Proceedings of the
International Conference on Machine Learning (ICML), pages 81–88, 2009.
P. Chebotarev. A class of graph-geodetic distances generalizing the shortest path and the resistance
distances. Discrete Applied Mathematics, 159:295–302, 2011.
P. G. Doyle and J. Laurie Snell. Random walks and electric networks, 2000. URL http://www.
citebase.org/abstract?id=oai:arXiv.org:math/0001057.
M. Herbster and G. Lever. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Conference on Learning Theory (COLT), 2009.
B. Nadler, N. Srebro, and X. Zhou. Semi-supervised learning with the graph Laplacian: The limit
of infinite unlabelled data. In Advances in Neural Information Processing Systems (NIPS), 2009.
J. Tenenbaum, V. de Silva, and J. Langford. Supplementary material to "A Global Geometric
Framework for Nonlinear Dimensionality Reduction". Science, 290:2319–2323, 2000. URL
http://isomap.stanford.edu/BdSLT.pdf.
U. von Luxburg, A. Radl, and M. Hein. Getting lost in space: Large sample analysis of the commute
distance. In Neural Information Processing Systems (NIPS), 2010.
L. Yen, M. Saerens, A. Mantrach, and M. Shimbo. A family of dissimilarity measures between
nodes generalizing both the shortest-path and the commute-time distances. In Proceedings of the
14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages
785–793, 2008.
D. Zhou and B. Schölkopf. Regularization on discrete spaces. In DAGM-Symposium, pages 361–368, 2005.
X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and
harmonic functions. In ICML, pages 912–919, 2003.
3,518 | 4,186 | Maximum Covariance Unfolding:
Manifold Learning for Bimodal Data
Vijay Mahadevan
Department of ECE
University of California, San Diego
La Jolla, CA 92093
[email protected]
Chi Wah Wong
Department of Radiology
University of California, San Diego
La Jolla, CA 92093
[email protected]
Jose Costa Pereira
Department of ECE
University of California, San Diego
La Jolla, CA 92093
[email protected]
Thomas T. Liu
Department of Radiology
University of California, San Diego
La Jolla, CA 92093
[email protected]
Nuno Vasconcelos
Department of ECE
University of California, San Diego
La Jolla, CA 92093
[email protected]
Lawrence K. Saul
Department of CSE
University of California, San Diego
La Jolla, CA 92093
[email protected]
Abstract
We propose maximum covariance unfolding (MCU), a manifold learning algorithm for simultaneous dimensionality reduction of data from different input modalities. Given high dimensional inputs from two different but naturally
aligned sources, MCU computes a common low dimensional embedding that
maximizes the cross-modal (inter-source) correlations while preserving the local
(intra-source) distances. In this paper, we explore two applications of MCU. First
we use MCU to analyze EEG-fMRI data, where an important goal is to visualize
the fMRI voxels that are most strongly correlated with changes in EEG traces. To
perform this visualization, we augment MCU with an additional step for metric
learning in the high dimensional voxel space. Second, we use MCU to perform
cross-modal retrieval of matched image and text samples from Wikipedia. To
manage large applications of MCU, we develop a fast implementation based on
ideas from spectral graph theory. These ideas transform the original problem for
MCU, one of semidefinite programming, into a simpler problem in semidefinite
quadratic linear programming.
1 Introduction
Recent advances in manifold learning and nonlinear dimensionality reduction have led to powerful,
new methods for the analysis and visualization of high dimensional data [14, 1, 20, 24, 16]. These
methods have roots in nonparametric statistics, spectral graph theory, convex optimization, and multidimensional scaling. Notwithstanding individual differences in motivation and approach, these
methods share certain features that account for their overall popularity: (i) they generally involve
few tuning parameters; (ii) they make no strong distributional assumptions; (iii) efficient algorithms
exist to compute the global minima of their cost functions.
All these methods solve variants of the same basic underlying problem: given high dimensional
inputs, {x1 , x2 , . . . , xn }, compute low dimensional outputs {y1 , y2 , . . . , yn } that preserve certain
nearness relations (e.g., local distances). Solutions to this problem have found applications in many
areas of science and engineering. However, many real-world applications do not map neatly into this
framework. For instance, in certain applications, aligned data is acquired from two different modalities (we refer to such data as bimodal) and the goal is to find low dimensional representations
that capture their interdependencies.
In this paper, we investigate the use of maximum variance unfolding (MVU) [24] for the simultaneous dimensionality reduction of data from different input modalities. Though the original algorithm
does not solve this problem, we show that it can be adapted to provide a compelling solution. In its
original formulation, MVU computes a low dimensional embedding that maximizes the variance of
its outputs, subject to constraints that preserve local distances. We explore a modification of MVU
that computes a joint embedding of high dimensional inputs from different data sources. In this
joint embedding, our goal is to discover a common low dimensional representation of just those
degrees of variability that are correlated across different modalities. To achieve this goal, we design
the embedding to maximize the inter-source correlation between aligned outputs while preserving
the local, intra-source distances. By analogy to MVU, we call our approach maximum covariance
unfolding (MCU).
The optimization for MCU inherits the basic form of the optimization for MVU. In particular, it can
be cast as a semidefinite program (SDP). For applications to large datasets, we can also exploit the
same strategies behind recent, much faster implementations of MVU [25]. In particular, using these
same strategies, we show how to reformulate the optimization for MCU as a semidefinite quadratic
linear program (SQLP). In addition, for one of our applications, the analysis of EEG-fMRI data,
we show how to extend the basic optimization of MCU to visualize the high dimensional correlations
between different input modalities. This is done by adding extra variables to the original SDP; these
variables can be viewed as performing a type of metric learning in the high dimensional voxel space.
In particular, they indicate which fMRI voxels (in the high dimensional space of fMRI images)
correlate most strongly with observed changes in the EEG recordings.
As related work, we mention several other studies that have proposed SDPs to achieve different
objectives than those of the original algorithm for MVU. Bowling et al [4, 5] developed a related
approach known as action-respecting embedding for problems in robot localization. Song et al [18]
reinterpreted the optimization criterion of MVU, then proposed an extension of the original algorithm that computes low dimensional embeddings subject to class labels or other side information.
Finally, Shaw and Jebara [15, 16] have explored related SDPs to produce minimum-volume and
structure-preserving embeddings; these SDPs yield much more sensible visualizations of social networks and large graphs that do not necessarily resemble a discretized manifold. Our work builds
on the successes of these earlier studies and further extends the applicability of SDPs for nonlinear
dimensionality reduction.
2 Maximum Covariance Unfolding
We propose a novel adaptation of MVU, termed maximum covariance unfolding or MCU to perform
non-linear correlation between two aligned datasets whose points have a one-to-one correspondence.
MCU embeds the two datasets, of different dimensions, into a single low dimensional manifold such
that the two resulting embeddings are maximally correlated. As in MVU, the embeddings are such
that local distances are preserved. The problem formulation is described in detail next.
2.1 Formulation
Let $\{x_{1i}\}_{i=1}^{n}$, $x_{1i} \in \mathbb{R}^{p_1}$ and $\{x_{2i}\}_{i=1}^{n}$, $x_{2i} \in \mathbb{R}^{p_2}$ be two aligned datasets belonging to two different input spaces, and $\{y_{1i}\}_{i=1}^{n}$, $y_{1i} \in \mathbb{R}^{d}$ and $\{y_{2i}\}_{i=1}^{n}$, $y_{2i} \in \mathbb{R}^{d}$ be the corresponding low dimensional representations (in the output space), with $d \ll p_1$ and $d \ll p_2$.
As in MVU [21], we need to find a low dimensional mapping such that the Euclidean distance between pairs of points in a local neighborhood are preserved. For each dataset $s \in \{1, 2\}$, if points $x_{sj}$ and $x_{sk}$ are neighbors or are common neighbors of another point, we denote an indicator variable $\eta_{sij} = 1$. The neighborhood constraints can then be written as
$$\|y_{si} - y_{sj}\|^2 = \|x_{si} - x_{sj}\|^2 \quad \text{if } \eta_{sij} = 1 \qquad (1)$$
To simplify the notation, we concatenate the output points from both datasets into one large set $\{z_i\}_{i=1}^{2n}$ containing $2n$ points,
$$z_i = \begin{cases} y_{1i} & i \le n \\ y_{2(i-n)} & i > n \end{cases}$$
We also define the inner-product matrix for $\{z_i\}$, $K_{ij} = z_i \cdot z_j$. This allows us to formulate the MCU very similarly to the MVU formulation of [21], and so we omit the details for the sake of brevity. The distance constraint of (1) is written in matrix form as:
$$K_{ii} - 2K_{ij} + K_{jj} = D_{1ij}, \quad \{(i, j) : i, j \le n \text{ and } \eta_{1ij} = 1\} \qquad (2)$$
$$K_{ii} - 2K_{ij} + K_{jj} = D_{2(i-n)(j-n)}, \quad \{(i, j) : i, j > n \text{ and } \eta_{2(i-n)(j-n)} = 1\} \qquad (3)$$
The centering constraint, to ensure that the output points of both datasets are centered at the origin, requires that $\sum_i y_{si} = 0, \ \forall s \in \{1, 2\}$. The equivalent matrix constraints are
$$\sum_{ij} K_{ij} = 0, \ \forall i, j \le n; \qquad \sum_{ij} K_{ij} = 0, \ \forall i, j > n \qquad (4)$$
The objective function is to maximize the covariance between the low dimensional representations
of the two datasets. We can use the trace of the covariance matrix as a measure of how strongly the
two outputs are correlated. The average covariance can be written as:
$$\mathrm{tr}(\mathrm{cov}(y_1, y_2)) = \mathrm{tr}(E(y_1 y_2^T)) = E(\mathrm{tr}(y_1 y_2^T)) = E(y_1 \cdot y_2) \approx \frac{1}{n} \sum_i y_{1i} \cdot y_{2i} \qquad (5)$$
Combining all the constraints together with the objective function, we can write the optimization as:

Maximize:
$$\sum_{ij} W_{ij} K_{ij}, \quad \text{with} \quad W = \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}$$
subject to:
$$K_{ii} - 2K_{ij} + K_{jj} = D_{1ij}, \quad \{(i, j) : i, j \le n \text{ and } \eta_{1ij} = 1\}$$
$$K_{ii} - 2K_{ij} + K_{jj} = D_{2(i-n)(j-n)}, \quad \{(i, j) : i, j > n \text{ and } \eta_{2(i-n)(j-n)} = 1\}$$
$$K \succeq 0, \qquad \sum_{ij} K_{ij} = 0 \ \forall i, j \le n, \qquad \sum_{ij} K_{ij} = 0 \ \forall i, j > n \qquad (6)$$
As in the original MVU formulation [21], this is a semi-definite program (SDP) and can be solved
using general-purpose toolboxes such as SeDuMi [19]. The solution returned by the SDP can be
used to find the coordinates in the low-dimensional embedding, {y1i }ni=1 and {y2i }ni=1 , using the
spectral decomposition method described in [21].
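For concreteness, the program (6) can be prototyped with an off-the-shelf convex solver. The sketch below is a minimal, hypothetical CVXPY translation for small n; the function name, argument conventions, and neighbor-list format are our own assumptions, and the experiments in this paper use SeDuMi rather than this route.

```python
import cvxpy as cp

def mcu_sdp(D1, D2, nbr1, nbr2):
    """Minimal sketch of the MCU semidefinite program (6).

    D1, D2     : (n, n) arrays of squared distances within each modality.
    nbr1, nbr2 : lists of (i, j) pairs with the indicator eta_{sij} = 1.
    Returns the (2n, 2n) Gram matrix K of the joint embedding.
    """
    n = D1.shape[0]
    K = cp.Variable((2 * n, 2 * n), PSD=True)          # K is PSD
    # Objective: sum_ij W_ij K_ij with W = [[0, I_n], [I_n, 0]],
    # i.e. twice the trace of the off-diagonal block of K.
    objective = cp.Maximize(2 * cp.trace(K[:n, n:]))
    cons = []
    for (i, j) in nbr1:                                # constraints (2)
        cons.append(K[i, i] - 2 * K[i, j] + K[j, j] == D1[i, j])
    for (i, j) in nbr2:                                # constraints (3)
        cons.append(K[n + i, n + i] - 2 * K[n + i, n + j]
                    + K[n + j, n + j] == D2[i, j])
    cons += [cp.sum(K[:n, :n]) == 0,                   # centering (4)
             cp.sum(K[n:, n:]) == 0]
    cp.Problem(objective, cons).solve()
    return K.value
```

The embedding coordinates then follow from the top eigenvectors of K, exactly as in MVU.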
One shortcoming of the MCU formulation is that it provides no means to visualize the results.
While the low-dimensional embeddings of the two datasets may be well correlated, there is no way
to identify which dimensions or covariates of the data points in one modality contribute to high
correlation with the points in the other modality. To address this issue, we include a novel metric
learning framework in the MCU formulation, as described in the next section.
2.2 Metric Learning for Visualization
For each dimension in one dataset, we need to compute a measure of how much it contributes to the
correlation between the datasets. This can be done using a metric learning type step applied to data
of one or both modalities within the MCU formulation. In this work we describe this approach for
the situation where metric learning is applied to only {x1i }.
The MCU formulation of Section 2 assumes that the distances between the points is Euclidean. So in
the computation of nearest neighbor distances, each of the p1 dimensions of {x1i } receive the same
weight, as shown in (1). However, inspired by the recently proposed ideas in metric learning [22],
we use a more general distance metric by applying a linear transformation $T_1$ of size $p_1 \times p_1$ in the
space, and then perform MCU using the transformed points, T1 xi . This allows some distances to
shrink/expand if that would help in increasing the correlation with {x2i }.
For the sake of simplicity, we choose a diagonal weight matrix $T_1$, whose diagonal entries are $\{\lambda_i\}_{i=1}^{p_1}$, $\lambda_i \ge 0 \ \forall i$. This allows us to weight each dimension of the input space separately.
In order to find the weight vector that produces the maximal correlation between the two datasets,
these p1 new variables can be learned within the MCU framework by adding them to the optimization
problem. As each dimension has a corresponding weight, the optimal weight vector returned would
be a map over the dimensions indicating how strongly each is correlated to {x2i }.
To modify the MCU formulation to include these new variables, we replace all Euclidean distance measurements for the data points in the first dataset in (2) with the weighted distance $D_{1ij} = \sum_m \lambda_m (x_{im} - x_{jm})^2$.
This adds a linear function of the new weight variables to the existing distance constraints of (2).
However, if we had to define the neighborhood of a data point itself using this weighted distance,
the formulation would become non-convex. So we assume that the neighborhood is composed of
points that are closest in time. An alternative is to use neighbors as computed in the original space
using the un-weighted distance. We also add constraints to make the weights positive and sum to p1 .
The objective function of (6) does not change, but we need to maximize the objective over the p1
weight variables also. The problem still remains an SDP and can be solved as before. The new
formulation, denoted MCU-ML, is written as:
Maximize:
$$\sum_{ij} W_{ij} K_{ij}, \quad \text{with} \quad W = \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}$$
subject to:
$$\lambda_k \ge 0 \ \forall k \in \{1 \ldots p_1\}, \quad \text{and} \quad \sum_k \lambda_k = p_1$$
$$K_{ii} - 2K_{ij} + K_{jj} - \sum_m \lambda_m (x_{im} - x_{jm})^2 = 0, \quad \{(i, j) : i, j \le n \text{ and } \eta_{1ij} = 1\}$$
$$K_{ii} - 2K_{ij} + K_{jj} = D_{2(i-n)(j-n)}, \quad \{(i, j) : i, j > n \text{ and } \eta_{2(i-n)(j-n)} = 1\}$$
$$K \succeq 0, \qquad \sum_{ij} K_{ij} = 0 \ \forall i, j \le n, \qquad \sum_{ij} K_{ij} = 0 \ \forall i, j > n \qquad (7)$$
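Under the same assumptions as the earlier CVXPY sketch, the MCU-ML modification only changes the modality-1 constraints: the fixed distances $D_{1ij}$ become linear functions of a learnable weight vector, so the problem remains an SDP. A hypothetical fragment:

```python
import cvxpy as cp

def mcu_ml_constraints(K, X1, nbr1):
    """Sketch of the extra pieces of (7): per-dimension weights lambda
    enter the modality-1 isometry constraints linearly.
    X1 is the (n, p1) array of raw modality-1 inputs."""
    n, p1 = X1.shape
    lam = cp.Variable(p1, nonneg=True)        # lambda_k >= 0
    cons = [cp.sum(lam) == p1]                # sum_k lambda_k = p1
    for (i, j) in nbr1:
        # Weighted squared distance, linear in the weights lambda.
        d_ij = cp.sum(cp.multiply((X1[i] - X1[j]) ** 2, lam))
        cons.append(K[i, i] - 2 * K[i, j] + K[j, j] == d_ij)
    return lam, cons
```

After solving, the optimal weight vector is read off directly and mapped back to voxel space for visualization.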
We next describe how these formulations for MCU can be applied to find optimal representations
for high dimensional EEG-fMRI data.
3 Resting-state EEG-fMRI Data
In the absence of an explicit task, temporal synchrony of the blood oxygenation level dependent
(BOLD) signal is maintained across distinct brain regions. Taking advantage of this synchrony,
resting-state fMRI has been used to study connectivity. fMRI datasets have high resolution of the
order of a few millimeters, but offer poor temporal resolution as it measures the delayed haemodynamic response to neural activity. In addition, changes in resting-state BOLD connectivity measures
are typically interpreted as changes in coherent neural activity across respective brain regions. However, this interpretation may be misleading because the BOLD signal is a complex function of neural
activity, oxygen metabolism, cerebral blood flow (CBF), and cerebral blood volume (CBV) [3]. To
address these shortcomings, simultaneous acquisition of electroencephalographic data (EEG) during
functional magnetic resonance imaging (fMRI) is becoming more popular in brain imaging [13].
The EEG recording provides high temporal resolution of neural activity (5kHz), but poor spatial
resolution due to electric signal distortion by the skull and scalp and the limitations on the number of electrodes that can be placed on the scalp. Therefore the goal of simultaneous acquisition
of EEG and fMRI is to exploit the complementary nature of the two imaging modalities to obtain
spatiotemporally resolved neural signal and metabolic state information [13]. Specifically, using
high temporal resolution EEG data, we are able to examine dynamic changes and non-stationary
properties of neural activity at different frequency bands. By correlating with the EEG data with
the high resolution BOLD data, we are able to examine the corresponding spatial regions in which
neural activity occurs.
Conventional approaches to analyzing the joint EEG-fMRI data have relied on linear methods. Most
often, a simple voxel-wise correlation of the fMRI data with the EEG power time series in a specific
frequency band is performed [13]. But this technique does not exploit the rich spatial dependencies
of the fMRI data. To address this issue, more sophisticated linear methods such as canonical correlation analysis (CCA) [7], and the partial least squares method [11] have been proposed. However, all
linear approaches have a fundamental shortcoming - the space of images, which is highly non-linear
and thought to form a manifold, may not be well represented by a linear subspace. Therefore, linear approaches to correlate the fMRI data with the EEG data may not capture any low dimensional
manifold structure.
To address these limitations we propose the use of MCU to learn low dimensional manifolds for both
the fMRI and EEG data such that the output embeddings are maximally correlated. In addition, we
learn a metric in the fMRI input space to identify which voxels of the fMRI correlate most strongly
with observed changes in the EEG recordings. We first describe the methods used to acquire the
EEG-fMRI dataset.
3.1 Method for Data Acquisition
One 5 minute simultaneous EEG-fMRI resting state run was recorded and processed with eyes
closed (EC). Data were acquired using a 3 Tesla GE HDX system and a 64 channel EEG system
supplied by Brain Products. EEG signals were recorded at 5kHz sampling rate. Impedances of the
electrodes were kept below 20 kΩ. Recorded EEG data were pre-processed using Vision Analyzer
2.0 software (Brain Products). Subtraction-based MR-gradient and Cardio-ballistic artifact removal
were applied. A low pass filter with cut off frequency 30 Hz was applied to all channels and the processed signals were down-sampled to 250 Hz. fMRI data were acquired with the following parameters: echo planar imaging with 150 volumes, 30 slices, 3.438 × 3.438 × 5 mm³ voxel size, 64 × 64 matrix size, TR = 2 s, TE = 30 ms. fMRI data were pre-processed using an in-house developed
package. The 5 frequency channels of the EEG data were averaged to produce a 63 dimensional
time series of 145 time points. The fMRI data consisted of a 122880 (64 ? 64 ? 30) dimensional
time series with 145 time points.
3.2 Results on EEG-fMRI Dataset
The EEG and fMRI data points described in the previous section are extremely high dimensional.
However, both EEG and fMRI signals are the result of sparse neuronal activity. Therefore, attempts
to embed these points, especially the fMRI data, into a low dimensional manifold have been made
using non-linear dimensionality reduction techniques such as Laplacian eigenmaps [17]. While such
techniques may be used to find manifold embeddings for fMRI and EEG data separately, they are
not useful for finding patterns of correlation between the two. We demonstrate how MCU can be
applied to this setting below.
Due to the very high dimensionality of the fMRI dataset, we pre-processed the data as follows. An
anatomical region of interest mask was used, followed by PCA to project the fMRI samples to a
subspace of dimension p1 = 145 (which represented all of the energy of the samples, because there
are only 145 time points). The EEG data was not subject to any pre-processing, and p2 remained
63. We applied the MCU-ML approach to learn a visualization map and a joint low dimensional
embedding for the EEG-fMRI dataset. We compared the results to two other techniques - the voxelwise correlation, and the linear CCA approach inspired by [7]. The MCU-ML solution directly
returned a weight vector of length 145. For CCA, the average of the canonical directions (weighted
using the canonical correlations) was used as the weight vector. In both cases, the 145 dimensional
weight vector was projected back to the fMRI voxel space using the principal components of the
PCA step.
Two types of voxel wise correlations maps were computed to assess the performance of MCU-ML.
First, a naive correlation map was generated where each voxel was separately correlated with the average EEG power time course from the alpha band (8–12 Hz) (which is known to be correlated with
the fMRI resting-state network [13]) from all the 63 electrodes. Second, a functional connectivity
map was generated using the knowledge that at rest state (during which the dataset was recorded),
the Posterior Cingulate Cortex (PCC) is known to be active [8] and is correlated with the Default
Mode Network (DMN) while anti-correlated with the Task Positive Network (TPN). To achieve this,
a seed region of interest (ROI) was first selected from PCC. The averaged fMRI signal from the ROI
was then correlated with the whole brain to obtain a voxel-wise correlation map. Therefore, voxels
in the PCC region should have high correlation with the EEG data. This information provides a
"sanity-check" version of the fMRI correlation map.
The results for the anatomically significant slice 18, within which both DMN and the TPN are
located, are shown in Figure 1. The functional connectivity map is shown in Figure 2(a), and the correlation
map obtained using MCU-ML, overlaid with the relevant anatomical regions appears in 2(b). The
MCU-ML map shows the activation of Default Mode Network (DMN) and a suppression of Task
Positive Network (TPN). From the results, it is clear that the MCU-ML approach produces the best
[Figure 1: Comparison of results on the EEG-fMRI dataset. (a) naive correlation map (b) using only PCA (c) using CCA (d) using MCU-ML]
match, showing well localized regions of positive correlation in the DMN, and regions of negative
correlation in the TPN. The correlation maps for 12 slices overlaid over a high-resolution T1-weighted image for the proposed MCU-ML approach are shown in Figure 3(b).
[Figure 2: (a) the functional connectivity map, and (b) the MCU-ML correlation map overlaid with information about the anatomical regions relevant during rest state.]
[Figure 3: (a) The plot showing the normalized weights for the 145 dimensions for CCA, MCU-ML and PCA. (b) A montage showing the recovered weights for each voxel in the 12 anatomically significant slices, with the MCU map overlaid on a high-resolution T1-weighted image.]
To compare the learned weights using the MCU-ML and CCA, we plot the normalized importance
of each of the 145 dimensions in Figure 3(a). We also plot the eigenvalues for the 145 dimensions
obtained using PCA. It is seen that the weights produced by the MCU-ML approach have fewer
components (around 20) than those of CCA. It is also interesting to see that the weights that produce
maximal correlation with the EEG dataset are very different from the eigenvalues of PCA themselves, indicating that the dimensions that are important for correlation are not necessarily the ones
with maximum variance.
4 Fast MCU
One of the primary limitations of the SDP based formulation for MCU in Section 2.1, shared with
MVU, is its inability to scale to problems involving a large number of data points [23]. To address
this issue, Weinberger et al. [23] modified the original formulation using graph Laplacian regularization to reduce the size of the SDP. However, recent work has shown that even this reduced formulation of MVU can be solved more efficiently by reframing it as a semidefinite quadratic linear program (SQLP) [25]. In this section, we show how a fast version of MCU, denoted Fast-MCU,
can be implemented using a similar approach.
Let $L_1$ and $L_2$ denote the graph Laplacians [6] of the two sets of points, $\{y_{1i}\}$ and $\{y_{2i}\}$, respectively. The graph Laplacian depends only on nearest neighbor relations and in MCU these are assumed to be unchanged as the points are embedded from the original space to the low dimensional manifold. Therefore, $L_1$ and $L_2$ can be obtained using the graph of data points, $\{x_{1i}\}$ and $\{x_{2i}\}$, in the original space. Let $Q_1, Q_2 \in \mathbb{R}^{n \times m}$ contain the bottom $m$ eigenvectors of $L_1$ and $L_2$. Then we can write the $2n$ vectors $\{y_{1i}\}$ and $\{y_{2i}\}$ in terms of two new sets of $m$ unknown vectors, $\{u_{1\alpha}\}_{\alpha=1}^{m}$ and $\{u_{2\alpha}\}_{\alpha=1}^{m}$, with $m \ll n$, using the approximation:
$$y_{1i} \approx \sum_{\alpha=1}^{m} Q_{1i\alpha} u_{1\alpha} \quad \text{and} \quad y_{2i} \approx \sum_{\alpha=1}^{m} Q_{2i\alpha} u_{2\alpha} \qquad (8)$$
As in Section 2, we concatenate the vectors from both datasets into one larger set $\{u_i\}_{i=1}^{2m}$ containing $2m$ points:
$$u_i = \begin{cases} u_{1i} & i \le m \\ u_{2(i-m)} & i > m \end{cases} \qquad (9)$$
We define $m \times m$ inner product matrices $(U_{ij})_{\alpha\beta} = u_{i\alpha}^T u_{j\beta}$, $\forall i, j \in \{1, 2\}$, $\forall \alpha, \beta \in \{1 \ldots m\}$, and a $2m \times 2m$ matrix $U_{\alpha\beta} = u_\alpha^T u_\beta$, $\forall \alpha, \beta \in \{1 \ldots 2m\}$. Therefore, $U = \begin{pmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{pmatrix}$.
The $2n \times 2n$ inner product matrix $K$ can therefore be approximated in terms of the much smaller $2m \times 2m$ matrix $U$:
$$K \approx \begin{pmatrix} Q_1 U_{11} Q_1^T & Q_1 U_{12} Q_2^T \\ Q_2 U_{21} Q_1^T & Q_2 U_{22} Q_2^T \end{pmatrix} \qquad (10)$$
To formulate MCU as an SQLP, we first rewrite (6) by bringing the distance constraints into the objective function using regularization parameters $\nu_1, \nu_2 > 0$:

Maximize:
$$\sum_{ij} W_{ij} K_{ij} \;-\; \nu_1 \sum_{i \sim j,\, i, j \le n} \left( K_{ii} - 2K_{ij} + K_{jj} - D_{1ij} \right)^2 \;-\; \nu_2 \sum_{i \sim j,\, i, j > n} \left( K_{ii} - 2K_{ij} + K_{jj} - D_{2ij} \right)^2$$
subject to:
$$K \succeq 0, \qquad \sum_{ij} K_{ij} = 0, \ \forall i, j \le n, \qquad \sum_{ij} K_{ij} = 0, \ \forall i, j > n \qquad (11)$$
By using (10) in (11), and by noting that the centering constraint is automatically satisfied [23], we get the modified formulation in terms of $U$:

Maximize:
$$2\,\mathrm{tr}(Q_1 U_{12} Q_2^T) \;-\; \sum_k \nu_k \sum_{i \sim_k j} \left( (Q_k U_{kk} Q_k^T)_{ii} - 2 (Q_k U_{kk} Q_k^T)_{ij} + (Q_k U_{kk} Q_k^T)_{jj} - D_{kij} \right)^2$$
subject to:
$$U \succeq 0 \qquad (12)$$
where $i \sim_k j$ for $k \in \{1, 2\}$ encodes the neighborhood relationships of the $k$-th dataset.
This SDP is similar to the formulation proposed by [23]. In order to obtain further simplification, let $\mathcal{U} \in \mathbb{R}^{4m^2}$ be the concatenation of the columns of $U$. Then, (12) can be reformulated by collecting the coefficients of all quadratic terms in the objective function in a positive semi-definite matrix $A \in \mathbb{R}^{4m^2 \times 4m^2}$, and those of the linear terms, including the trace term, in a vector $b \in \mathbb{R}^{4m^2}$:
$$\text{Minimize: } \ \mathcal{U}^T A\, \mathcal{U} + b^T \mathcal{U} \qquad \text{subject to: } \ U \succeq 0 \qquad (13)$$
This minimization problem can be solved using the SQLP approach of [25]. From the solution of the SQLP, the vectors $\{u_{1i}\}_{i=1}^{m}$ and $\{u_{2i}\}_{i=1}^{m}$ can be obtained using the spectral decomposition method described in [21], followed by the low dimensional coordinates $\{y_{1i}\}_{i=1}^{n}$ and $\{y_{2i}\}_{i=1}^{n}$, using (8). Finally, these coordinates are refined using gradient based improvement of the original objective function of (11) using the procedure described in [23].
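As a sketch of the key computational ingredients, the basis $Q$ and the blockwise reconstruction of $K$ from (10) might be computed as below. The helper names and neighbor count are our own assumptions; the full pipeline additionally solves the SQLP for $U$ and fine-tunes with gradient ascent.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def bottom_eigenvectors(X, n_neighbors=6, m=20):
    """Q: bottom m eigenvectors of the k-NN graph Laplacian of one
    modality, built in the original input space as in Section 4."""
    A = kneighbors_graph(X, n_neighbors, mode='connectivity')
    A = 0.5 * (A + A.T)                        # symmetrize the adjacency
    L = laplacian(A)
    vals, vecs = np.linalg.eigh(L.toarray())   # dense for clarity only
    return vecs[:, :m]                         # eigh sorts ascending

def approx_gram(Q1, Q2, U):
    """Blockwise approximation of K from (10), given the SQLP solution U."""
    m = Q1.shape[1]
    top = np.hstack([Q1 @ U[:m, :m] @ Q1.T, Q1 @ U[:m, m:] @ Q2.T])
    bot = np.hstack([Q2 @ U[m:, :m] @ Q1.T, Q2 @ U[m:, m:] @ Q2.T])
    return np.vstack([top, bot])
```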
5 Results
We apply the Fast-MCU algorithm to n = 1000 points generated from two "Swiss rolls" in three dimensions, with m set to 20. Figure 4 shows the embeddings of this data generated by CCA and
by Fast-MCU. While CCA discovers two significant dimensions, the Fast-MCU accurately extracts
the low dimensional manifold where the embeddings lie in a narrow strip.
[Figure 4: (a) Two "Swiss rolls" consisting of 1000 points each in 3D with the aligned pairs of points shown in the same color (INPUT1, INPUT2). (b) The 2D embeddings obtained using CCA (OUTPUT1, OUTPUT2). (c) Low dimensional manifolds obtained using Fast-MCU, before and after the gradient-based improvement step. (best viewed in color)]
To further test the proposed Fast-MCU on real data, we use the recently proposed Wikipedia dataset
composed of text and image pairs [12]. The dataset consists of 2866 text–image pairs, each belonging to one of 10 semantic categories. The corpus is split into a training set with 2173 documents, and
a test set with 693 documents. The retrieval task consists of two parts. In the first, each image in the
test set is used as a query, and the goal is to rank all the texts in the test set based on their match to
the query image. In the second, a text query is used to rank the images. In both parts, performance
is measured using the mean average precision (MAP). The MAP score is the average precision at
the ranks where recall changes.
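For reference, the MAP criterion can be computed with a small helper like the following sketch (our own code, not from the paper): per query, precision is recorded at every rank where a relevant item appears, then averaged.

```python
import numpy as np

def average_precision(relevant, ranking):
    """Average precision for one query: mean of precision@k taken at
    every rank k where a relevant item appears (recall changes)."""
    hits, precisions = 0, []
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return np.mean(precisions) if precisions else 0.0

def mean_average_precision(queries):
    """queries: list of (relevant_set, ranked_list) pairs."""
    return np.mean([average_precision(r, rk) for r, rk in queries])
```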
The experimental evaluation was similar to that of [12]. We first represented the text using an LDA
model [2] with 20 topics, and the image using a histogram over a SIFT [10] codebook of 4096
codewords. The common low dimensional manifold was learned from the text-image pairs of the
training set using the SQLP based formulation of (13), with m = 20, followed by a gradient ascent
step as described in the previous section. To compare the performance of Fast-MCU, we also used
CCA and kernel CCA (kCCA) to learn the maximally correlated joint spaces from the training set.
For kCCA we used a Gaussian kernel and implemented it using code from the authors of [9].
Given a test sample (image or text), it is first projected into the learned subspace or manifold. For
CCA, this involves a linear transformation to the low dimensional subspace, while for kCCA this
is achieved by evaluating a linear combination of the kernel functions of the training points [9].
For Fast-MCU, the nearest neighbors of the test point among the training samples in the original
space are used to obtain a mapping of the point as a weighted combination of these neighbors. The
same mapping is then applied to the projection of the neighbors in the learned low dimensional joint
manifold to compute the projection of the test point. To perform retrieval, all the test points of both
modalities, image and text, are projected to the joint space learned using the training set. For a
given test point of one modality, its distance to all the projected test points of the other modality
are computed, and these are then ranked. In this work, we used the normalized correlation distance,
which was shown to be the best performing distance metric in [12]. A retrieved sample is considered
to be correct if it belongs to the same category as the query.
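The ranking step itself reduces to sorting by the normalized correlation distance. The sketch below encodes our reading of that metric (one minus the Pearson-style correlation of mean-centered vectors); the exact definition follows [12].

```python
import numpy as np

def normalized_correlation_rank(query_vec, candidates):
    """Rank candidate projections by normalized correlation distance;
    smaller distance = better match. candidates is (k, d)."""
    q = query_vec - query_vec.mean()
    C = candidates - candidates.mean(axis=1, keepdims=True)
    sims = (C @ q) / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(1.0 - sims)   # indices sorted by increasing distance
```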
The results of the retrieval task are shown in Table 1. The performance of a random retrieval scheme
is also shown to indicate the baseline chance level. It is clear that Fast-MCU outperforms both CCA and kCCA in both image-to-text and text-to-image retrieval tasks. In addition, Fast-MCU produced a significantly lower number of dimensions for the embeddings: CCA produced 19 significant dimensions compared to just 3 for Fast-MCU.
Table 1: MAP Scores for image-text retrieval tasks

Query          Random   CCA     KCCA    Fast-MCU
Text - Image   0.118    0.193   0.170   0.264
Image - Text   0.118    0.154   0.172   0.198

6 Conclusions
In this paper, we describe an adaptation of MVU to analyze correlation of high-dimensional aligned
data such as EEG-fMRI data and image-text corpora. Our results on EEG-fMRI data show that
the proposed approach is able to make anatomically significant predictions about which voxels of
the fMRI are most correlated with changes in EEG signals. Likewise, the results on the Wikipedia
set demonstrate the ability of MCU to discover the correlations between images and text. In both
these applications, it is important to realize that MCU is not only revealing the correlated degrees of
variability from different input modalities, but also pruning away the uncorrelated ones. This ability
of MCU makes it much more broadly applicable because in general we expect inputs from truly
different modalities to have many independent degrees of freedom: e.g., there are many ways in text
to describe a single, particular image, just as there are many ways in pictures to illustrate a single,
particular word.
7 Acknowledgements
This work was supported by NSF award CCF-0830535, NIH Grant R01NS051661 and ONR MURI
Award No. N00014-10-1-0072.
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[3] R. Buxton, K. Uludağ, D. Dubowitz, and T. T. Liu. Modeling the hemodynamic response to brain activation. NeuroImage, 23(1):220–233, 2004.
[4] M. Bowling, A. Ghodsi, and D. Wilkinson. Action respecting embedding. In ICML, pages 65–72, 2005.
[5] M. Bowling, D. Wilkinson, A. Ghodsi, and A. Milstein. Subjective localization with action respecting embedding. In ISRR, 2005.
[6] F. Chung. Spectral Graph Theory. Amer. Mathematical Society, 1997.
[7] N. Correa, T. Eichele, T. Adalı, Y. Li, and V. Calhoun. Multi-set canonical correlation analysis for the fusion of concurrent single trial ERP and functional MRI. NeuroImage, 2010.
[8] M. Greicius, B. Krasnow, A. Reiss, and V. Menon. Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. PNAS, 100(1):253, 2003.
[9] D. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
[10] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[11] E. Martínez-Montes, P. Valdés-Sosa, F. Miwakeichi, R. Goldman, and M. Cohen. Concurrent EEG/fMRI analysis by multiway partial least squares. NeuroImage, 22(3):1023–1034, 2004.
[12] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. Lanckriet, R. Levy, and N. Vasconcelos. A new approach to cross-modal multimedia retrieval. In ACM Multimedia, pages 251–260, 2010.
[13] P. Ritter and A. Villringer. Simultaneous EEG-fMRI. Neuroscience & Biobehavioral Reviews, 30(6):823–838, 2006.
[14] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[15] B. Shaw and T. Jebara. Minimum volume embedding. In AISTATS, pages 460–467, San Juan, Puerto Rico, 2007.
[16] B. Shaw and T. Jebara. Structure preserving embedding. In ICML, 2009.
[17] X. Shen and F. Meyer. Low-dimensional embedding of fMRI datasets. NeuroImage, 41(3):886–902, 2008.
[18] L. Song, A. Smola, K. Borgwardt, and A. Gretton. Colored maximum variance unfolding. NIPS, 2008.
[19] J. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(1):625–653, 1999.
[20] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[21] K. Weinberger and L. Saul. Unsupervised learning of image manifolds by semidefinite programming. IJCV, 70(1):77–90, 2006.
[22] K. Weinberger and L. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207–244, 2009.
[23] K. Weinberger, F. Sha, Q. Zhu, and L. Saul. Graph Laplacian regularization for large-scale semidefinite programming. NIPS, 19:1489, 2007.
[24] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. ICML, 2004.
[25] X. Wu, A. So, Z. Li, and S. Li. Fast graph Laplacian regularized kernel learning via semidefinite–quadratic–linear programming. NIPS, 22:1964–1972.
3,519 | 4,187 | Crowdclustering
Ryan Gomes?
Caltech
Peter Welinder
Caltech
Andreas Krause
ETH Zurich & Caltech
Pietro Perona
Caltech
Abstract
Is it possible to crowdsource categorization? Amongst the challenges: (a) each
worker has only a partial view of the data, (b) different workers may have different clustering criteria and may produce different numbers of categories, (c) the
underlying category structure may be hierarchical. We propose a Bayesian model
of how workers may approach clustering and show how one may infer clusters
/ categories, as well as worker parameters, using this model. Our experiments,
carried out on large collections of images, suggest that Bayesian crowdclustering
works well and may be superior to single-expert annotations.
1 Introduction
Outsourcing information processing to large groups of anonymous workers has been made easier
by the internet. Crowdsourcing services, such as Amazon's Mechanical Turk, provide a convenient
way to purchase Human Intelligence Tasks (HITs). Machine vision and machine learning researchers
have begun using crowdsourcing to label large sets of data (e.g., images and video [1, 2, 3]) which
may then be used as training data for AI and computer vision systems. In all the work so far
categories are defined by a scientist, while categorical labels are provided by the workers.
Can we use crowdsourcing to discover categories? I.e., is it possible to use crowdsourcing not only
to classify data instances into established categories, but also to define the categories in the first
place? This question is motivated by practical considerations. If we have a large number of images,
perhaps several tens of thousands or more, it may not be realistic to expect a single person to look
at all images and form an opinion as to how to categorize them. Additionally, individuals, whether
untrained or expert, might not agree on the criteria used to define categories and may not even agree
on the number of categories that are present. In some domains unsupervised clustering by machine
may be of great help; however, unsupervised categorization of images and video is unfortunately a
problem that is far from solved. Thus, it is an interesting question whether it is possible to collect
and combine the opinion of multiple human operators, each one of which is able to view a (perhaps
small) subset of a large image collection.
We explore the question of crowdsourcing clustering in two steps: (a) Reduce the problem to a
number of independent HITs of reasonable size and assign them to a large pool of human workers
(Section 2). (b) Develop a model of the annotation process, and use the model to aggregate the
human data automatically (Section 3) yielding a partition of the dataset into categories. We explore
the properties of our approach and algorithms on a number of real world data sets, and compare
against existing methods in Section 4.
2 Eliciting Information from Workers
How shall we enable human operators to express their opinion on how to categorize a large collection
of images? Whatever method we choose, it should be easy to learn and it should be implementable
by means of a simple graphical user interface (GUI). Our approach (Figure 1) is based on displaying
small subsets of M images and asking workers to group them by means of mouse clicks. We
provide instructions that may cue workers to certain attributes but we do not provide the worker
with category definitions or examples. The worker groups the M items into clusters of his choosing,
as many as he sees fit. An item may be placed in its own cluster if it is unlike the others in the
HIT. The choice of M trades off between the difficulty of the task (worker time required for a HIT
* Corresponding author, e-mail: [email protected]
[Figure 1: Schematic of Bayesian crowdclustering. A large image collection is explored by workers. In each HIT (Section 2), the worker views a small subset of images on a GUI. By associating (arbitrarily chosen) colors with sets of images the worker proposes a (partial) local clustering. Each HIT thus produces multiple binary pairwise labels: each pair of images shown in the same HIT is placed by the worker either in the same category or in different categories. Each image is viewed by multiple workers in different contexts. A model of the annotation process (Sec. 3.1) is used to compute the most likely set of categories from the binary labels. Worker parameters are estimated as well.]
increases super-linearly with the number of items), the resolution of the images (more images on
the screen means that they will be smaller), and contextual information that may guide the worker
to make more global category decisions (more images give a better context, see Section 4.1.) Partial
clusterings on many M -sized subsets of the data from many different workers are thus the raw data
on which we compute clustering.
An alternative would have been to use pairwise distance judgments or three-way comparisons. A
large body of work exists in the social sciences that makes use of human-provided similarity values
defined between pairs of data items (e.g., Multidimensional Scaling [4].) After obtaining pairwise
similarity ratings from workers, and producing a Euclidean embedding, one could conceivably proceed with unsupervised clustering of the data in the Euclidean space. However, accurate distance
judgments may be more laborious to specify than partial clusterings. We chose to explore what we
can achieve with partial clusterings alone.
We do not expect workers to agree on their definitions of categories, or to be consistent in categorization when performing multiple HITs. Thus, we avoid explicitly associating categories across HITs. Instead, we represent the results of each HIT as a series of $\binom{M}{2}$ binary labels (see Figure 1).
We assume that there are $N$ total items (indexed by $i$), $J$ workers (indexed by $j$), and $H$ HITs (indexed by $h$). The information obtained from workers is a set of binary variables $L$, with elements $l_t \in \{-1, +1\}$ indexed by a positive integer $t \in \{1, \ldots, T\}$. Associated with the $t$-th label is a quadruple $(a_t, b_t, j_t, h_t)$, where $j_t \in \{1, \ldots, J\}$ indicates the worker that produced the label, and $a_t \in \{1, \ldots, N\}$ and $b_t \in \{1, \ldots, N\}$ indicate the two data items compared by the label. $h_t \in \{1, \ldots, H\}$ indicates the HIT from which the $t$-th pairwise label was derived. The number of labels is $T = H \binom{M}{2}$.
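As a concrete illustration, converting one worker's grouping of a HIT into these pairwise labels might look like the following sketch (the data structures and function name are our own assumptions):

```python
from itertools import combinations

def hit_to_labels(item_ids, groups, worker_id, hit_id):
    """Convert one worker's partial clustering of a HIT into the
    (l_t, a_t, b_t, j_t, h_t) tuples used by the model.

    item_ids : the M items shown in this HIT.
    groups   : dict mapping item id -> the worker's cluster id.
    """
    labels = []
    for a, b in combinations(item_ids, 2):    # all M-choose-2 pairs
        l = +1 if groups[a] == groups[b] else -1
        labels.append((l, a, b, worker_id, hit_id))
    return labels
```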
Sampling Procedure We have chosen to structure HITs as clustering tasks of M data items, so
we must specify them. If we simply separate the items into disjoint sets, then it will be impossible to
infer a clustering over the entire data set. We will not know whether two items in different HITs are
in the same cluster or not. There must be some overlap or redundancy: data items must be members
of multiple HITs.
In the other extreme, we could construct HITs such that each pair of items may be found in at least
one HIT, so that every possible pairwise category relation is sampled. This would be quite expensive for a large number of items $N$, since the number of labels scales asymptotically as $T \in \Omega(N^2)$.
However, we expect a noisy transitive property to hold: if items a and b are likely to be in the same
cluster, and items b and c are (not) likely in the same cluster, then items a and c are (not) likely to
be in the same cluster as well. The transitive nature of binary cluster relations should allow sparse
sampling, especially when the number of clusters is relatively small.
As a baseline sampling method, we use the random sampling scheme outlined by Strehl and
Ghosh [5] developed for the problem of object distributed clustering, in which a partition of a complete data set is learned from a number of clusterings restricted to subsets of the data. (We compare
our aggregation algorithm to this work in Section 4.) Their scheme controls the level of sampling
redundancy with a single parameter V , which in our problem is interpreted as the expected number
of HITs to which a data item belongs.
The $N$ items are first distributed deterministically among the HITs, so that there are $\lceil M/V \rceil$ items in each HIT. Then the remaining $M - \lceil M/V \rceil$ items in each HIT are filled by sampling without replacement from the $N - \lceil M/V \rceil$ items that are not yet allocated to the HIT. There are a total of $\lceil NV/M \rceil$ unique HITs. We introduce an additional parameter $R$, which is the number of different workers that perform each constructed HIT. The total number of HITs distributed to the crowdsourcing service is therefore $H = R \lceil NV/M \rceil$, and we impose the constraint that a worker can not perform the same HIT more than once. This sampling scheme generates $T = R \lceil NV/M \rceil \binom{M}{2} \in O(RNVM)$ binary labels.
With this exception, we find a dearth of ideas in the literature pertaining to sampling methods for
distributed clustering problems. Iterative schemes that adaptively choose maximally informative
HITs may be preferable to random sampling. We are currently exploring ideas in this direction.
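A minimal sketch of this random HIT construction, under our own naming and data-layout assumptions, is as follows; each unique HIT is then posted to R distinct workers.

```python
import math
import random

def build_hits(n_items, M, V, R, rng=random.Random(0)):
    """Sketch of the Strehl & Ghosh-style random sampling scheme [5]:
    each item lands in roughly V HITs of size M."""
    base = math.ceil(M / V)                  # deterministic items per HIT
    n_hits = math.ceil(n_items * V / M)      # number of unique HITs
    items = list(range(n_items))
    hits = []
    for h in range(n_hits):
        seed = items[h * base:(h + 1) * base]           # deterministic part
        pool = [i for i in items if i not in seed]
        hit = seed + rng.sample(pool, M - len(seed))    # fill w/o replacement
        hits.append(hit)
    return hits * R   # replicate: each unique HIT goes to R distinct workers
```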
3 Aggregation via Bayesian Crowdclustering
There is an extensive literature in machine learning on the problem of combining multiple alternative
clusterings of data. This problem is known as consensus clustering [6], clustering aggregation [7],
or cluster ensembles [5]. While some of these methods can work with partial input clusterings, most
have not been demonstrated in situations where the input clusterings involve only a small subset of
the total data items ($M \ll N$), which is the case in our problem.
In addition, existing approaches focus on producing a single ?average? clustering from a set of input
clusterings. In contrast, we are not merely interested in the average clustering produced by a crowd
of workers. Instead, we are interested in understanding the ways in which different individuals
may categorize the data. We seek a master clustering of the data that may be combined in order to
describe the tendencies of individual workers. We refer to these groups of data as atomic clusters.
For example, suppose one worker groups objects into a cluster of tall objects and another of short
objects, while a different worker groups the same objects into a cluster of red objects and another
of blue objects. Then, our method should recover four atomic clusters: tall red objects, short red
objects, tall blue objects, and short blue objects. The behavior of the two workers may then be
summarized using a confusion table of the atomic clusters (see Section 3.3). The first worker groups
the first and third atomic cluster into one category and the second and fourth atomic cluster into
another category. The second worker groups the first and second atomic clusters into a category and
the third and fourth atomic clusters into another category.
3.1 Generative Model
We propose an approach in which data items are represented as points in a Euclidean space and
workers are modeled as pairwise binary classifiers in this space. Atomic clusters are then obtained
by clustering these inferred points using a Dirichlet process mixture model, which estimates the
number of clusters [8]. The advantage of an intermediate Euclidean representation is that it provides
a compact way to capture the characteristics of each data item. Certain items may be inherently more
difficult to categorize, in which case they may lie between clusters. Items may be similar along one
axis but different along another (e.g., object height versus object color.) A similar approach was
proposed by Welinder et al. [3] for the analysis of classification labels obtained from crowdsourcing
services. This method does not apply to our problem, since it involves binary labels applied to single
data items rather than to pairs, and therefore requires that categories be defined a priori and agreed
upon by all workers, which is incompatible with the crowdclustering problem.
We propose a probabilistic latent variable model that relates pairwise binary labels to hidden variables associated with both workers and images. The graphical model is shown in Figure 1. $x_i$ is a $D$ dimensional vector, with components $[x_i]_d$, that encodes item $i$'s location in the embedding space $\mathbb{R}^D$. A symmetric matrix $W_j \in \mathbb{R}^{D \times D}$ with entries $[W_j]_{d_1 d_2}$ and a bias $\tau_j \in \mathbb{R}$ are used to define a pairwise binary classifier, explained in the next paragraph, that represents worker $j$'s labeling behavior. Because $W_j$ is symmetric, we need only specify its upper triangular portion: $\mathrm{vecp}\{W_j\}$, which is a vector formed by "stacking" the partial columns of $W_j$ according to the ordering $[\mathrm{vecp}\{W_j\}]_1 = [W_j]_{11}$, $[\mathrm{vecp}\{W_j\}]_2 = [W_j]_{12}$, $[\mathrm{vecp}\{W_j\}]_3 = [W_j]_{22}$, etc. $\theta_k = \{\mu_k, \Sigma_k\}$ are the mean and covariance parameters associated with the $k$-th Gaussian atomic cluster, and $U_k$ are stick breaking weights associated with a Dirichlet process.
The key term is the pairwise quadratic logistic regression likelihood that captures worker $j$'s tendency to label the pair of images $a_t$ and $b_t$ with $l_t$:
$$p(l_t | x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}) = \frac{1}{1 + \exp(-l_t A_t)} \qquad (1)$$
where we define the pairwise quadratic activity $A_t = x_{a_t}^T W_{j_t} x_{b_t} + \tau_{j_t}$. Symmetry of $W_j$ ensures that $p(l_t | x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}) = p(l_t | x_{b_t}, x_{a_t}, W_{j_t}, \tau_{j_t})$. This form of likelihood yields a compact and tractable method of representing classifiers defined over pairs of points in Euclidean space. Pairs of vectors with large pairwise activity tend to be classified as being in the same category, and in different categories otherwise. We find that this form of likelihood leads to tightly grouped clusters of points $x_i$ that are then easily discovered by mixture model clustering.
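For reference, the likelihood (1) is a one-liner to evaluate; this sketch (our own helper, not from the paper) makes the order-invariance explicit.

```python
import numpy as np

def pairwise_label_prob(x_a, x_b, W_j, tau_j, l=+1):
    """Worker j's probability of labeling the pair (a, b) with l, per (1).
    W_j must be symmetric, so swapping x_a and x_b leaves the value unchanged."""
    A = x_a @ W_j @ x_b + tau_j          # pairwise quadratic activity
    return 1.0 / (1.0 + np.exp(-l * A))
```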
The joint distribution is
$$p(\theta, U, Z, X, W, \tau, L) = \prod_{k=1}^{\infty} p(U_k | \alpha)\, p(\theta_k | m_0, \beta_0, J_0, \nu_0) \prod_{j=1}^{J} p(\mathrm{vecp}\{W_j\} | \sigma_0^w)\, p(\tau_j | \sigma_0^\tau) \prod_{i=1}^{N} p(z_i | U)\, p(x_i | \theta_{z_i}) \prod_{t=1}^{T} p(l_t | x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}).$$
The conditional distributions are defined as follows:
$$p(U_k | \alpha) = \mathrm{Beta}(U_k; 1, \alpha) \qquad p(z_i = k | U) = U_k \prod_{l=1}^{k-1} (1 - U_l) \qquad (2)$$
$$p(x_i | \theta_{z_i}) = \mathrm{Normal}(x_i; \mu_{z_i}, \Sigma_{z_i}) \qquad p(\mathrm{vecp}\{W_j\} | \sigma_0^w) = \prod_{d_1 \le d_2} \mathrm{Normal}([W_j]_{d_1 d_2}; 0, \sigma_0^w) \qquad (3)$$
$$p(x_i | \sigma_0^x) = \prod_d \mathrm{Normal}([x_i]_d; 0, \sigma_0^x) \qquad p(\tau_j | \sigma_0^\tau) = \mathrm{Normal}(\tau_j; 0, \sigma_0^\tau) \qquad p(\theta_k | m_0, \beta_0, J_0, \nu_0) = \text{Normal-Wishart}(\theta_k; m_0, \beta_0, J_0, \nu_0)$$
where $(\sigma_0^x, \sigma_0^\tau, \sigma_0^w, \alpha, m_0, \beta_0, J_0, \nu_0)$ are fixed hyper-parameters. Our model is similar to that of [9], which is used to model binary relational data. Salient differences include our use of a logistic rather than a Gaussian likelihood, and our enforcement of the symmetry of $W_j$. In the next section, we develop an efficient deterministic inference algorithm to accommodate much larger data sets than the sampling algorithm used in [9].
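To make the stick-breaking construction in (2) concrete, a truncated sampler for the cluster assignments might look like this sketch (the truncation level and hyperparameter values are our own choices, not from the paper):

```python
import numpy as np

def sample_assignments(n_items, alpha=1.0, K=50, rng=np.random.default_rng(0)):
    """Draw z_i from a truncated Dirichlet-process stick-breaking prior:
    U_k ~ Beta(1, alpha), p(z = k) = U_k * prod_{l<k} (1 - U_l)."""
    U = rng.beta(1.0, alpha, size=K)
    sticks = np.concatenate([[1.0], np.cumprod(1.0 - U[:-1])])
    probs = U * sticks
    probs /= probs.sum()                 # renormalize after truncation
    return rng.choice(K, size=n_items, p=probs)
```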
3.2 Approximate Inference
Exact posterior inference in this model is intractable, since computing it involves integrating over variables with complex dependencies. We therefore develop an inference algorithm based on the Variational Bayes method [10]. The high level idea is to work with a factorized proxy posterior distribution that does not model the full complexity of interactions between variables; it instead represents a single mode of the true posterior. Because this distribution is factorized, integrations involving it become tractable. We define the proxy distribution
\[
q(\theta, U, Z, X, W, \tau) = \prod_{k=K+1}^{\infty} p(U_k|\alpha)\, p(\theta_k|m_0,\beta_0,J_0,\nu_0) \prod_{k=1}^{K} q(U_k)\, q(\theta_k) \prod_{i=1}^{N} q(z_i)\, q(x_i) \prod_{j=1}^{J} q(\mathrm{vecp}\{W_j\})\, q(\tau_j) \tag{4}
\]
using parametric distributions of the following form:
\begin{align}
q(U_k) &= \mathrm{Beta}(U_k; \gamma_{k,1}, \gamma_{k,2}), & q(\theta_k) &= \text{Normal-Wishart}(\theta_k; m_k, \beta_k, J_k, \nu_k), \tag{5}\\
q(x_i) &= \prod_{d} \mathrm{Normal}([x_i]_d; [\mu_i^x]_d, [\sigma_i^x]_d), & q(\tau_j) &= \mathrm{Normal}(\tau_j; \mu_j^\tau, \sigma_j^\tau),\\
q(z_i = k) &= q_{ik}, & q(\mathrm{vecp}\{W_j\}) &= \prod_{d_1 \le d_2} \mathrm{Normal}([W_j]_{d_1 d_2}; [\mu_j^w]_{d_1 d_2}, [\sigma_j^w]_{d_1 d_2}).
\end{align}
To handle the infinite number of mixture components, we follow the approach of [11], where we define variational distributions for the first K components and fix the remainder to their corresponding priors. {γ_{k,1}, γ_{k,2}} and {m_k, β_k, J_k, ν_k} are the variational parameters associated with the k-th mixture component. q(z_i = k) = q_{ik} form the factorized assignment distribution for item i. μ_i^x and σ_i^x are variational mean and variance parameters associated with data item i's embedding location. μ_j^w and σ_j^w are symmetric matrix variational mean and variance parameters associated with worker j, and μ_j^τ and σ_j^τ are variational mean and variance parameters for the bias τ_j of worker j. We use diagonal covariance Normal distributions over W_j and x_i to reduce the number of parameters that must be estimated.
Next, we define a utility function which allows us to determine the variational parameters. We use Jensen's inequality to develop a lower bound to the log evidence:
\[
\log p(L|\sigma_0^x, \sigma_0^\tau, \sigma_0^w, \alpha, m_0, \beta_0, J_0, \nu_0) \ge E_q \log p(\theta, U, Z, X, W, \tau, L) + H\{q(\theta, U, Z, X, W, \tau)\}, \tag{6}
\]
where H{·} is the entropy of the proxy distribution, and the lower bound is known as the Free Energy. However, the Free Energy still involves intractable integration, because the normal distributions over variables W_j, x_i, and τ_j are not conjugate [12] to the logistic likelihood term. We therefore locally approximate the logistic likelihood with an unnormalized Gaussian function lower bound, which is the left hand side of the following inequality:
\[
g(\xi_t) \exp\{(l_t A_t - \xi_t)/2 + \lambda(\xi_t)(A_t^2 - \xi_t^2)\} \le p(l_t|x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}). \tag{7}
\]
This was adapted from [13] to our case of quadratic pairwise logistic regression. Here g(x) = (1 + e^{−x})^{−1} and λ(ξ) = [1/2 − g(ξ)]/(2ξ). This expression introduces an additional variational parameter ξ_t for each label, which are optimized in order to tighten the lower bound. Our utility function is therefore:
\[
F = E_q \log p(\theta, U, Z, X, W, \tau) + H\{q(\theta, U, Z, X, W, \tau)\} + \sum_{t} \Big[ \log g(\xi_t) + \frac{l_t}{2} E_q\{A_t\} - \frac{\xi_t}{2} + \lambda(\xi_t)\big(E_q\{A_t^2\} - \xi_t^2\big) \Big], \tag{8}
\]
which is a tractable lower bound to the log evidence. Optimization of variational parameters is carried out in a coordinate ascent procedure, which exactly maximizes each variational parameter in turn while holding all others fixed. This is guaranteed to converge to a local maximum of the utility function. The update equations are given in an extended technical report [14]. We initialize the variational parameters by carrying out a layerwise procedure: first, we substitute a zero mean isotropic normal prior for the mixture model and perform variational updates over {μ_i^x, σ_i^x, μ_j^w, σ_j^w, μ_j^τ, σ_j^τ}. Then we use μ_i^x as point estimates for x_i and update {m_k, β_k, J_k, ν_k, γ_{k,1}, γ_{k,2}} and determine the initial number of clusters K as in [11]. Finally, full joint inference updates are performed. Their computational complexity is O(D^4 T + D^2 KN) = O(D^4 NVRM + D^2 KN).
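For illustration, the bound of Eq. 7 is easy to evaluate once the moments of A_t are available; the sketch below is ours (not the code of [13] or [14]) and uses the standard Jaakkola-Jordan update ξ_t² = E_q{A_t²}, which tightens the bound when A_t is replaced by its variational moments:

```python
import numpy as np

def g(x):
    # g(x) = 1 / (1 + exp(-x)), the logistic function.
    return 1.0 / (1.0 + np.exp(-x))

def lam(xi):
    # lambda(xi) = [1/2 - g(xi)] / (2 xi); the limit as xi -> 0 is -1/8.
    xi = np.asarray(xi, dtype=float)
    return np.where(np.abs(xi) < 1e-8, -0.125, (0.5 - g(xi)) / (2.0 * xi))

def log_bound(l, EA, EA2, xi):
    # Log of the left-hand side of Eq. 7 with the activity replaced by its
    # variational moments E{A} and E{A^2}; this is the per-label term of Eq. 8.
    return np.log(g(xi)) + (l * EA - xi) / 2.0 + lam(xi) * (EA2 - xi ** 2)

# Tightening the bound: set xi_t^2 = E{A_t^2} (the standard update).
EA, EA2 = 0.3, 0.5
xi = np.sqrt(EA2)
print(log_bound(+1, EA, EA2, xi))
```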
3.3 Worker Confusion Analysis
As discussed in Section 3, we propose to understand a worker's behavior in terms of how he groups atomic clusters into his own notion of categories. We are interested in the predicted confusion matrix C_j for worker j, where
\[
[C_j]_{k_1 k_2} = E_q\Big\{ \int p(l = 1 \mid x_a, x_b, W_j, \tau_j)\, p(x_a|\theta_{k_1})\, p(x_b|\theta_{k_2})\, dx_a\, dx_b \Big\}, \tag{9}
\]
which expresses the probability that worker j assigns data items sampled from atomic clusters k_1 and k_2 to the same cluster, as predicted by the variational posterior. This integration is intractable. We use the expected values E{θ_{k_1}} = {m_{k_1}, J_{k_1}/ν_{k_1}} and E{θ_{k_2}} = {m_{k_2}, J_{k_2}/ν_{k_2}} as point estimates in place of the variational distributions over θ_{k_1} and θ_{k_2}. We then use Jensen's inequality and Eq. 7 again to yield a lower bound. Maximizing this bound over ξ yields
\[
[\hat{C}_j]_{k_1 k_2} = g(\hat{\xi}_{k_1 k_2 j}) \exp\{(m_{k_1}^T \mu_j^w m_{k_2} + \mu_j^\tau - \hat{\xi}_{k_1 k_2 j})/2\}, \tag{10}
\]
which we use as our approximate confusion matrix, where \hat{\xi}_{k_1 k_2 j} is given in [14].
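As an illustration of Eq. 10, the following sketch (ours) evaluates the approximate confusion matrix from point estimates of the variational parameters; since the optimized ξ̂ is only given in [14], the sketch simply takes it as an input:

```python
import numpy as np

def approx_confusion(m, mu_w, mu_tau, xi_hat):
    """Approximate confusion matrix of Eq. 10 for one worker.

    m      : (K, D) array of atomic cluster means m_k.
    mu_w   : (D, D) symmetric variational mean of the worker's W_j.
    mu_tau : scalar variational mean of the worker's bias tau_j.
    xi_hat : (K, K) array of optimized xi parameters (derived in [14]).
    """
    act = m @ mu_w @ m.T + mu_tau        # [act]_{k1 k2} = m_k1^T mu_w m_k2 + mu_tau
    sigmoid = 1.0 / (1.0 + np.exp(-xi_hat))
    return sigmoid * np.exp((act - xi_hat) / 2.0)

K, D = 3, 4
rng = np.random.default_rng(1)
m = rng.normal(size=(K, D))
C_hat = approx_confusion(m, np.eye(D), mu_tau=0.0, xi_hat=np.ones((K, K)))
print(C_hat.shape)  # (K, K)
```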
4 Experiments
We tested our method on four image data sets that have established "ground truth" categories, which were provided by a single human expert. These categories do not necessarily reflect the uniquely valid way to categorize the data set; however, they form a convenient baseline for the purpose of quantitative comparison. We used 1000 images from the Scenes data set from [15] to illustrate our approach (Figures 2, 3, and 4). We used 1354 images of birds from 10 species in the CUB-200 data set [16] (Table 1) and the 3845 images in the Stonefly9 data set [17] (Table 1) in order to compare our method quantitatively to other cluster aggregation methods. We used the 37794 images from the Attribute Discovery data set [18] in order to demonstrate our method on a large scale problem.
We set the dimensionality of x_i to D = 4 (since higher dimensionality yielded no additional clusters) and we iterated the update equations 100 times, which was enough for convergence. Hyperparameters were tuned once on synthetic pairwise labels that simulated 100 data points drawn from 4 clusters, and fixed during all experiments.
[Figure 2 graphics: embedding scatter plot titled "Average assignment entropy (bits): 0.0029653" with 11 inferred clusters; confusion table of ground truth scene categories (bedroom, suburb, kitchen, living room, coast, forest, highway, inside city, mountain, open country, street, tall building, office) against inferred clusters 1-11.]
Figure 2: Scene data set. Left: Mean locations μ_i^x projected onto the first two Fisher discriminant vectors, along with cluster labels superimposed at cluster means m_k. Data items are colored according to their MAP label argmax_k q_ik. Center: High confidence example images from the largest five clusters (rows correspond to clusters). Right: Confusion table between ground truth scene categories and inferred clusters. The first cluster includes three indoor ground truth categories, the second includes forest and open country categories, and the third includes two urban categories. See Section 4.1 for a discussion and potential solution of this issue.
[Figure 3 graphics: grid of predicted worker confusion matrices, plus detailed panels for Worker 9 (74 HITs), Worker 45 (15 HITs), and Worker 29 (1 HIT), with color scales from 0 to 1.]
Figure 3: (Left of line) Worker confusion matrices for the 40 most active workers. (Right of line) Selected worker confusion matrices for the Scenes experiment. Worker 9 (left) makes distinctions that correspond closely to the atomic clustering. Worker 45 (center) makes coarser distinctions, often combining atomic clusters. Right: Worker 29's single HIT was largely random and does not align with the atomic clusters.
Figure 2 (left) shows the mean locations μ_i^x of the data items learned from the Scene data set, visualized as points in Euclidean space. We find well separated clusters, whose labels k are displayed at their mean locations m_k. The points are colored according to argmax_k q_ik, which is item i's MAP cluster assignment. The cluster labels are sorted according to the number of assigned items, with cluster 1 being the largest. The axes are the first two Fisher discriminant directions (derived from the MAP cluster assignments). The clusters are well separated in the four dimensional space (we give the average assignment entropy −N^{−1} Σ_{ik} q_{ik} log q_{ik} in the figure title, which shows little cluster overlap). Figure 2 (center) shows six high confidence examples from clusters 1 through 5. Figure 2 (right) shows the confusion table between the ground truth categories and the MAP clustering. We find that the MAP clusters often correspond to single ground truth categories, but they sometimes combine ground truth categories in reasonable ways. See Section 4.1 for a discussion and potential solution of this issue.
Figure 3 (left of line) shows the predicted confusion matrices (Section 3.3) associated with the 40 workers that performed the most HITs. This matrix captures the worker's tendency to label items from different atomic clusters as being in the same or different category. Figure 3 (right of line) shows in detail the predicted confusion matrices for three workers. We have sorted the MAP cluster indices to yield approximately block diagonal matrices, for ease of interpretation. Worker 9 makes relatively fine grained distinctions, including separating clusters 1 and 9 that correspond to the indoor categories and the bedroom scenes, respectively. Worker 45 combines clusters 5 and 8, which correspond to city street and highway scenes, in addition to grouping together all indoor scene categories. The finer grained distinctions made by worker 9 may be a result of performing more HITs (74) and seeing a larger number of images than worker 45, who performed 15 HITs. Finally (far right), we find a worker whose labels do not align with the atomic clustering. Inspection of his labels shows that they were entered largely at random.
Figure 4 (top left) shows the number of HITs performed by each worker according to descending rank. Figure 4 (bottom left) is a Pareto curve that indicates the percentage of the HITs performed by the most active workers. The Pareto principle (i.e., the law of the vital few) [19] roughly holds: the top 20% most active workers perform nearly 80% of the work. We wish to understand the extent to which the most active workers contribute to the results. For the purpose of quantitative comparisons, we use Variation of Information (VI) [20] to measure the discrepancy between the inferred MAP clustering and the ground truth categorization.
[Figure 4 graphics: top-left panel plots completed HITs against worker rank; bottom-left panel is the Pareto curve (% of total HITs vs. % of total workers); center panel plots Variation of Information against the number of HITs remaining for the curves "Top workers excluded" and "Bottom workers excluded"; right panel plots Variation of Information against V (with R = 5) for Bayes Crowd, NMF Consensus, S&G Cluster Ensembles, and Bayes Consensus.]
Figure 4: Scene data set. Left top: Number of completed HITs by worker rank. Left bottom: Pareto curve. Center: Variation of Information on the Scene data set as we incrementally remove top (blue) and bottom (red) ranked workers. The top workers are removed one at a time; bottom ranked workers are removed in groups so that both curves cover roughly the same domain. The most active workers do not dominate the results. Right: Variation of Information between the inferred clustering and the ground truth categories on the Scene data set, as a function of sampling parameter V. R is fixed at 5.
                      Bayes Crowd     Bayes Consensus   NMF [21]        Strehl & Ghosh [5]
Birds [16] (VI)       1.103 ± 0.082   1.721 ± 0.07      1.500 ± 0.26    1.256 ± 0.001
Birds (time)          18.5 min        18.1 min          27.9 min        0.93 min
Stonefly9 [17] (VI)   2.448 ± 0.063   2.735 ± 0.037     4.571 ± 0.158   3.836 ± 0.002
Stonefly9 (time)      100.1 min       98.5 min          212.6 min       46.5 min

Table 1: Quantitative comparison on Bird and Stonefly species categorization data sets. Quality is measured using Variation of Information between the inferred clustering and ground truth. Bayesian Crowdclustering outperforms the alternatives.
VI is a metric with strong information-theoretic justification that is defined between two partitions (clusterings) of a data set; smaller values indicate a closer match, and a VI of 0 means that two clusterings are identical. In Figure 4 (center) we incrementally remove the most active (blue) and least active (red) workers. Removal of workers corresponds to moving from right to left on the x-axis, which indicates the number of HITs used to learn the model. The results show that removing the large number of workers that do fewer HITs is more detrimental to performance than removing the relatively few workers that do a large number of HITs (given the same number of total HITs), indicating that the atomic clustering is learned from the crowd at large.
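For reference, VI can be computed directly from the contingency table of two label vectors; the following sketch is ours, not the evaluation code used in the paper:

```python
import numpy as np

def variation_of_information(labels_a, labels_b):
    # VI(A, B) = H(A) + H(B) - 2 I(A; B), computed from the joint
    # distribution of cluster memberships [20]; 0 iff the partitions match.
    a_ids, a = np.unique(labels_a, return_inverse=True)
    b_ids, b = np.unique(labels_b, return_inverse=True)
    joint = np.zeros((len(a_ids), len(b_ids)))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz]))
    ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    return ha + hb - 2.0 * mi

print(variation_of_information([0, 0, 1, 1], [1, 1, 0, 0]))  # 0.0: same partition
```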
In Figure 4 (right), we judge the impact of the sampling redundancy parameter V described in Section 2. We compare our approach (Bayesian crowdclustering) to two existing clustering aggregation methods from the literature: consensus clustering by nonnegative matrix factorization (NMF) [21] and the cluster ensembles method of Strehl and Ghosh (S&G) [5]. NMF and S&G require the number of inferred clusters to be provided as a parameter, and we set this to the number of ground truth categories. Even without the benefit of this additional information, our method (which automatically infers the number of clusters) outperforms the alternatives. To judge the benefit of modeling the characteristics of individual workers, we also compare against a variant of our model in which all HITs are treated as if they are performed by a single worker (Bayesian consensus). We find a significant improvement. We fix R = 5 in this experiment, but we find a similar ranking of methods at other values of R. However, the performance benefit of the Bayesian methods over the existing methods increases with R.
We compare the four methods quantitatively on two additional data sets, with the results summarized in Table 1. In both cases, we instruct workers to categorize based on species. This is known to be a difficult task for non-experts. We set V = 6 and R = 5 for these experiments. Again, we find that Bayesian Crowdclustering outperforms the alternatives. A run time comparison is also given in Table 1. Bayesian Crowdclustering results on the Bird and Stonefly data sets are summarized in [14].
Finally, we demonstrate Bayesian crowdclustering on the large scale Attribute Discovery data set. This data set has four image categories: bags, earrings, ties, and women's shoes. In addition, each image is a member of one of 27 sub-categories (e.g., the bags category includes backpacks and totes as sub-categories). See [14] for summary figures. We find that our method easily discovers the four categories.
[Figure 5 graphics: three confusion tables between ground truth scene categories and clusters inferred within original clusters 1, 4, and 8, with subcluster labels such as 1.1-1.3, 4.1, and 8.1-8.3.]
Figure 5: Divisive Clustering on the Scenes data set. Left: Confusion matrix and high confidence examples
when running our method on images assigned to cluster one in the original experiment (Figure 2). The three
indoor scene categories are correctly recovered. Center: Workers are unable to subdivide mountain scenes
consistently and our method returns a single cluster. Right: Workers may find perceptually relevant distinctions
not present in the ground truth categories. Here, the highway category is subdivided according to the number
of cars present.
The subcategories are not discovered, likely due to the limited context associated with HITs of size M = 36, as discussed in the next section. Runtime was approximately 9.5 hours on a six core Intel Xeon machine.
4.1 Divisive Clustering
As indicated by the confusion matrix in Figure 2 (right), our method results in clusters that correspond to reasonable categories. However, it is clear that the data often has finer categorical distinctions that go undiscovered. We conjecture that this is a result of the limited context presented to the
worker in each HIT. When shown a set of M = 36 images consisting mostly of different types of
outdoor scenes and a few indoor scenes, it is reasonable for a worker to consider the indoor scenes
as a unified category. However, if a HIT is composed purely of indoor scenes, a worker might draw
finer distinctions between images of offices, kitchens, and living rooms. To test this conjecture,
we developed a hierarchical procedure in which we run Bayesian crowdclustering independently on
images that are MAP assigned to the same cluster in the original Scenes experiment.
Figure 5 (left) shows the results on the indoor scenes assigned to original cluster 1. We find that when
restricted to indoor scenes, the workers do find the relevant distinctions and our algorithm accurately
recovers the kitchen, living room, and office ground truth categories. In Figure 5 (center) we ran the
procedure on images from original cluster 4, which is composed predominantly of mountain scenes.
The algorithm discovers one subcluster. In Figure 5 (right) the workers divide a cluster into three
subclusters that are perceptually relevant: they have organized them according to the number of cars
present.
5 Conclusions
We have proposed a method for clustering a large set of data by distributing small tasks to a large group of workers. It is based on a novel model of human clustering, as well as a novel machine learning method to aggregate worker annotations. Modeling both data item properties and the workers' annotation process and parameters appears to produce performance that is superior to existing clustering aggregation methods. Our study poses a number of interesting questions for further research: Can adaptive sampling methods (as opposed to our random sampling) reduce the number of HITs that are necessary to achieve high quality clustering? Is it possible to model the workers' tendency to learn over time as they perform HITs, rather than treating HITs independently as we do here? Can we model contextual effects, perhaps by modeling the way that humans "regularize" their categorical decisions depending on the number and variety of items present in the task?
Acknowledgements This work was supported by ONR MURI grant 1015-G-NA-127, ARL grant
W911NF-10-2-0016, and NSF grants IIS-0953413 and CNS-0932392.
References
[1] A. Sorokin and D. A. Forsyth. Utility data annotation with Amazon Mechanical Turk. In Internet Vision, pages 1-8, 2008.
[2] Sudheendra Vijayanarasimhan and Kristen Grauman. Large-scale live active learning: Training object detectors with crawled data and crowds. In CVPR, 2011.
[3] Peter Welinder, Steve Branson, Serge Belongie, and Pietro Perona. The multidimensional wisdom of crowds. In Neural Information Processing Systems Conference (NIPS), 2010.
[4] J. B. Kruskal. Multidimensional scaling by optimizing goodness-of-fit to a nonmetric hypothesis. Psychometrika, 29:1-27, 1964.
[5] Alexander Strehl and Joydeep Ghosh. Cluster ensembles: a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583-617, 2002.
[6] Stefano Monti, Pablo Tamayo, Jill Mesirov, and Todd Golub. Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data. Machine Learning, 52(1-2):91-118, 2003.
[7] A. Gionis, H. Mannila, and P. Tsaparas. Clustering aggregation. ACM Transactions on Knowledge Discovery from Data, volume 1, 2007.
[8] A. Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates. The Annals of Statistics, pages 351-357, 1984.
[9] I. Sutskever, R. Salakhutdinov, and J. B. Tenenbaum. Modelling relational data using Bayesian clustered tensor factorization. In Advances in Neural Information Processing Systems (NIPS), 2009.
[10] Hagai Attias. A variational Bayesian framework for graphical models. In NIPS, pages 209-215, 1999.
[11] Kenichi Kurihara, Max Welling, and Nikos Vlassis. Accelerated variational Dirichlet process mixtures. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[12] J. M. Bernardo and A. F. M. Smith. Bayesian Theory. Wiley, 1994.
[13] Tommi S. Jaakkola and Michael I. Jordan. A variational approach to Bayesian logistic regression models and their extensions, August 13, 1996.
[14] Ryan Gomes, Peter Welinder, Andreas Krause, and Pietro Perona. Crowdclustering. Technical Report CaltechAUTHORS:20110628-202526159, June 2011.
[15] Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, pages 524-531. IEEE Computer Society, 2005.
[16] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[17] G. Martinez-Munoz, N. Larios, E. Mortensen, W. Zhang, A. Yamamuro, R. Paasch, N. Payet, D. Lytle, L. Shapiro, S. Todorovic, et al. Dictionary-free categorization of very similar objects via stacked evidence trees. In CVPR, 2009.
[18] T. Berg, A. Berg, and J. Shih. Automatic attribute discovery and characterization from noisy web data. In Computer Vision (ECCV 2010), pages 663-676, 2010.
[19] V. Pareto. Cours d'économie politique. 1896.
[20] M. Meila. Comparing clusterings by the variation of information. In Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003: Proceedings, volume 2777, page 173. Springer-Verlag, 2003.
[21] Tao Li, Chris H. Q. Ding, and Michael I. Jordan. Solving consensus and semi-supervised clustering problems using nonnegative matrix factorization. In ICDM, pages 577-582. IEEE Computer Society, 2007.
3,520 | 4,188 | Solving Decision Problems with Limited Information
Cassio P. de Campos
IDSIA
Manno, CH 6928
[email protected]
Denis D. Mauá
IDSIA
Manno, CH 6928
[email protected]
Abstract
We present a new algorithm for exactly solving decision-making problems represented as an influence diagram. We do not require the usual assumptions of
no forgetting and regularity, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination
procedure, is empirically shown to outperform a state-of-the-art algorithm in randomly generated problems of up to 150 variables and 10^64 strategies.
1 Introduction
In many tasks, bounded resources and physical constraints force decisions to be made based on limited information [1, 2]. For instance, a policy for a partially observable Markov decision process
(POMDP) might be forced to disregard part of the available information in order to meet computational demands [3]. Cooperative multi-agent settings offer another such example: each agent might
perceive only its surroundings and be unable to communicate with all other agents; hence, a policy
specifying an agent's behavior must rely exclusively on local information [4]; it might be further
constrained to a maximum size to be computationally tractable [5].
Influence diagrams [6] are representational devices for utility-based decision making under uncertainty. Many popular decision-making frameworks such as finite-horizon POMDPs can be cast
as influence diagrams [7]. Traditionally, influence diagrams target problems involving a single,
non-forgetful decision maker; this makes them ill-suited to represent decision-making with limited
information. Limited memory influence diagrams (LIMIDs) generalize influence diagrams to allow
for (explicit representation of) bounded memory policies and simultaneous decisions [1, 2]. More
precisely, LIMIDs relax the regularity and no forgetting assumptions of influence diagrams, namely,
that there is a complete temporal ordering over the decisions, and that observations and decisions
are permanently remembered.
Solving a LIMID refers to finding a combination of policies that maximizes expected utility. This
task has been empirically and theoretically shown to be a very hard problem [8]. Under certain
graph-structural conditions (which no forgetting and regularity imply), Lauritzen and Nilsson [2]
show that LIMIDs can be solved by dynamic programming with complexity exponential in the
treewidth of the graph. However, when these conditions are not met, their iterative algorithm might
converge to a local optimum that is far from the optimum. Recently, de Campos and Ji [8] formulated
the CR (Credal Reformulation) algorithm that solves a LIMID by mapping it into a mixed integer
programming problem; they show that CR is able to solve small problems exactly and obtain good
approximations for medium-sized problems.
In this paper, we formally describe LIMIDs (Section 2) and show that policies can be partially
ordered, and that the ordering can be extended monotonically, allowing for the generalized variable
elimination procedure in Section 3. We show experimentally in Section 4 that the algorithm built on these ideas can save enormous amounts of computation, allowing many problems to be solved
exactly. In fact, our algorithm is orders of magnitude faster than the CR algorithm on randomly
generated diagrams containing up to 150 variables. Finally, we write our conclusions in Section 5.
2 Limited memory influence diagrams
In the LIMID formalism, the quantities and events of interest are represented by three distinct types of variables or nodes: chance variables (oval nodes) represent events on which the decision maker has no control, such as outcomes of tests or consequences of actions; decision variables (square nodes) represent the alternatives a decision maker might have; value variables (diamond-shaped nodes) represent additive parcels of the overall utility. Let U be the set of all variables relevant to a problem. Each variable X in U has an associated domain Ω_X, which is the finite non-empty set of values or states X can assume. The empty domain Ω_∅ ≜ {λ} contains a single element λ that is not in any other domain. Decision and chance variables have domains different from the empty domain, whereas value variables are always associated to the empty domain. The domain Ω_x of a set of variables x = {X_1, ..., X_n} ⊆ U is the Cartesian product Ω_{X_1} × ⋯ × Ω_{X_n} of the variable domains. If x and y are sets of variables such that y ⊆ x ⊆ U, and x is an element of the domain Ω_x, we write x^{↓y} to denote the projection of x onto the smaller domain Ω_y, that is, x^{↓y} ∈ Ω_y contains only the components of x that are compatible with the variables in y. By convention, x^{↓∅} ≜ λ. The cylindrical extension of y ∈ Ω_y to Ω_x is the set y^{↑x} ≜ {x ∈ Ω_x : x^{↓y} = y}. Oftentimes, if clear from the context, we write X_1 ⋯ X_n to denote the set {X_1, ..., X_n}, and X to denote {X}.
We denote point-wise comparison of functions implicitly. For example, if f and g are real-valued functions over a domain Ω_x and k is a real number, we write f ≤ g and f = k meaning f(x) ≤ g(x) and f(x) = k, respectively, for all x ∈ Ω_x. Any function over a domain containing a single element is identified by the real number it returns. If f and g are functions over domains Ω_x and Ω_y, respectively, their product fg is the function over Ω_{x∪y} such that (fg)(w) = f(w^{↓x}) g(w^{↓y}) for all w. Sum of functions is defined analogously: (f + g)(w) = f(w^{↓x}) + g(w^{↓y}). If f is a function over Ω_x, and y ⊆ U, the sum-marginal Σ_y f returns a function over Ω_{x\y} such that for any element w of its domain we have (Σ_y f)(w) = Σ_{x∈w^{↑x}} f(x). Notice that if y ∩ x = ∅, then Σ_y f = f.
Let C, D and V denote the sets of chance, decision and value variables, respectively, in U. A LIMID L is an annotated direct acyclic graph (DAG) over the set of variables U, where the nodes in V have no children. The precise meanings of the arcs in L vary according to the type of node to which they point. Arcs entering chance and value nodes denote stochastic and functional dependency, respectively; arcs entering decision nodes describe information awareness or relevance at the time the decision is made. If X is a node in L, we denote by pa_X the set of parents of X, that is, the set of nodes of L from which there is an arc pointing to X. Similarly, we let ch_X denote the set of children of X (i.e., nodes to which there is an arc from X), and fa_X ≜ pa_X ∪ {X} denote its family. Each chance variable C in C has an associated function p_C^{pa_C} specifying the probability Pr(C = x^{↓C} | pa_C = x^{↓pa_C}) of C assuming value x^{↓C} ∈ Ω_C given that the parents take on values x^{↓pa_C} ∈ Ω_{pa_C}, for all x ∈ Ω_{fa_C}. We assume that the probabilities associated to any chance node respect the Markov condition, that is, that any variable X ∈ C is stochastically independent from its non-descendant non-parents given its parents. Each value variable V ∈ V is associated to a bounded real-valued utility function u_V over Ω_{pa_V}, which quantifies the (additive) contribution of the states of its parents to the overall utility. Thus, the overall utility of a joint state x ∈ Ω_{C∪D} is given by the sum of utility functions Σ_{V∈V} u_V(x^{↓pa_V}). For any decision variable D ∈ D, a policy δ_D specifies an action for each possible state configuration of its parents, that is, δ_D : Ω_{pa_D} → Ω_D. If D has no parents, then δ_D is a function from the empty domain to Ω_D, and therefore constitutes a choice of x ∈ Ω_D. The set of all policies δ_D for a variable D is denoted by Δ_D.
To illustrate the use of LIMIDs, consider the following example involving a memoryless robot in
a 5-by-5 gridworld (Figure 1a). The robot has 9 time steps to first reach a position sA of the grid,
for which it receives 10 points, and then a position sB , for which it is rewarded with 20 points. If
the positions are visited in the wrong order, or if a point is re-visited, no reward is given. At each
step, the robot can perform actions move north, south, east or west, which cost 1 point and succeed
with 0.9 probability, or do nothing, which incurs no cost and always succeeds. Finally, the robot can
estimate its position in the grid by measuring the distance to each of the four walls. The estimated
position is correct 70% of the time, wrong by one square 20% of the time, and by two squares 10%
of the time. The LIMID in Figure 1b formally represents the environment and the robot behavior.
The action taken by the robot at time step t is represented by variable Dt (t = 1, . . . , 8). The costs
associated to decisions are represented by variables C_t, which have associated functions u_{C_t} that return zero if D_t = nothing, and otherwise return −1.
[Figure 1 graphics: (a) the grid with robot R and goal positions sA and sB; (b) the LIMID with chains of nodes C_t, D_t, O_t, S_t, A_t, B_t, R_t for t = 1, ..., 8, plus final nodes S_9, A_9, B_9, R_9.]
Figure 1: (a) A robot R in a 5-by-5 gridworld with two goal-states. (b) The corresponding LIMID.
The variables S_t (t = 1, ..., 9) represent the robot's actual position at time step t, while variables O_t denote its estimated position. The function p_{S_t}^{S_{t−1} D_t} associated to S_t specifies the probabilities Pr(S_t = s_t | S_{t−1} = s_{t−1}, D_t = d_t) of transitioning to state S_t = s_t from a state S_{t−1} = s_{t−1} when the robot executes action D_t = d_t. The function p_{O_t}^{S_t} is associated to O_t and quantifies the likelihood of estimating position O_t = o_t when in position S_t = s_t. We use binary variables A_t and B_t to denote whether positions sA and sB, respectively, have been visited by the robot before time step t. Hence, the function p_{A_t}^{A_{t−1} S_{t−1}} associated to A_t equals one for A_t = y if S_{t−1} = s_A or A_{t−1} = y, and zero otherwise. Likewise, the function p_{B_t}^{B_{t−1} S_{t−1}} equals one for B_t = y only if either S_{t−1} = s_B or B_{t−1} = y. The reward received by the robot in step t is represented by variable R_t. The utility function u_{R_t} associated to R_t equals 10 if s_t = s_A and A_t = n and B_t = n, 20 if s_t = s_B and A_t = y and B_t = n, and zero otherwise.
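As a concrete illustration, the transition function p_{S_t}^{S_{t−1} D_t} of this example could be tabulated as below (our sketch; the 0.9 success probability is from the text, while the assumption that a failed or wall-blocked move leaves the robot in place is ours, since the paper does not specify it):

```python
import itertools

MOVES = {"north": (-1, 0), "south": (1, 0), "west": (0, -1), "east": (0, 1)}
CELLS = list(itertools.product(range(5), range(5)))  # the 5-by-5 grid

def transition(s_prev, action):
    # Returns {s_t: Pr(S_t = s_t | S_{t-1} = s_prev, D_t = action)}.
    if action == "nothing":
        return {s_prev: 1.0}          # doing nothing always succeeds
    r, c = s_prev
    dr, dc = MOVES[action]
    target = (r + dr, c + dc)
    if target not in CELLS:           # assumption: bumping a wall = staying put
        return {s_prev: 1.0}
    return {target: 0.9, s_prev: 0.1}  # a move succeeds with probability 0.9

print(transition((0, 0), "south"))    # {(1, 0): 0.9, (0, 0): 0.1}
```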
Let Δ ≜ ×_{D∈D} Δ_D denote the space of possible combinations of policies. An element s = (δ_D)_{D∈D} ∈ Δ is said to be a strategy for L. Given a policy δ_D, let p_D^{pa_D} denote a function such that for each x ∈ Ω_{fa_D} it equals one if x^{↓D} = δ_D(x^{↓pa_D}) and zero otherwise. In other words, p_D^{pa_D} is a conditional probability table representing policy δ_D. There is a one-to-one correspondence between functions p_D^{pa_D} and policies δ_D ∈ Δ_D, and specifying a policy δ_D is equivalent to specifying p_D^{pa_D}. We denote the set of all functions p_D^{pa_D} by P_D. A strategy s induces a joint probability mass function over the variables in C ∪ D by
\[
p_s \triangleq \prod_{C\in\mathcal{C}} p_C^{\mathrm{pa}_C} \prod_{D\in\mathcal{D}} p_D^{\mathrm{pa}_D}, \tag{1}
\]
and has an associated expected utility given by
\[
E_s[L] \triangleq \sum_{x\in\Omega_{\mathcal{C}\cup\mathcal{D}}} p_s(x) \sum_{V\in\mathcal{V}} u_V(x^{\downarrow \mathrm{pa}_V}) = \sum_{\mathcal{C}\cup\mathcal{D}} p_s \sum_{V\in\mathcal{V}} u_V. \tag{2}
\]
The treewidth of a graph measures its resemblance to a tree and is given by the number of vertices in the largest clique of the corresponding triangulated moral graph minus one. Given a LIMID L of treewidth ω, we can evaluate the expected utility of any strategy s in time and space at most exponential in ω. Hence, if ω is bounded by a constant, computing E_s[L] takes polynomial time [9].
The primary task of a LIMID is to find an optimal strategy s* with maximal expected utility, that is, to find s* such that E_s[L] ≤ E_{s*}[L] for all s ∈ Δ. The value E_{s*}[L] is called the maximum expected utility of L and is denoted by MEU[L]. In the LIMID of Figure 1, the goal is to find an optimal strategy s = (δ_{D_1}, ..., δ_{D_8}), where the optimal policies δ_{D_t} for t = 1, ..., 8 prescribe an action in Ω_{D_t} = {north, south, west, east, nothing} for each possible estimated position in Ω_{O_t}.
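Concretely, a strategy could in principle be found by enumerating all policies, as in the following brute-force sketch (ours; expected_utility is a hypothetical callback evaluating Eq. 2 for one strategy):

```python
import itertools

def brute_force_meu(decision_domains, parent_domains, expected_utility):
    """Enumerate all strategies and return (MEU, best strategy).

    decision_domains[D] lists Omega_D; parent_domains[D] lists Omega_pa(D);
    expected_utility(strategy) is assumed to evaluate Eq. 2, where a strategy
    maps each decision D to a policy {parent configuration -> action}.
    """
    per_decision = []
    for D, actions in decision_domains.items():
        configs = parent_domains[D]
        # A policy picks one action per parent configuration, so there are
        # |Omega_D| ** |Omega_pa(D)| policies for D.
        policies = [dict(zip(configs, choice))
                    for choice in itertools.product(actions, repeat=len(configs))]
        per_decision.append([(D, pol) for pol in policies])
    best = max((dict(s) for s in itertools.product(*per_decision)),
               key=expected_utility)
    return expected_utility(best), best
```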
For most real problems, enumerating all the strategies is prohibitively costly. In fact, computing the MEU is NP-hard even in bounded treewidth diagrams [8]. It is well-known that any LIMID L can be mapped into an equivalent LIMID L′ where all utilities take values on the real interval [0, 1] [10]. The mapping preserves optimality of strategies, that is, any optimal strategy for L′ is also an optimal strategy for L (and vice-versa). This allows us, in the rest of the paper, to focus on LIMIDs whose utilities are defined in [0, 1] with no loss of generality for the algorithm we devise.
A fast algorithm for solving LIMIDs exactly
The basic ingredients of our algorithmic framework for representing and handling information in
LIMIDs are the so-called valuations, which encode information (probabilities, utilities and policies)
about the elements of a domain. Each valuation is associated to a subset of the variables in U,
called its scope. More concretely, we define a valuation ? with scope x as a pair (p, u) of bounded
nonnegative real-valued functions p and u over the domain ?x ; we refer to p and u as the probability
and utility part, respectively, of ?. Often, we write ?x to make explicit the scope x of a valuation
?. For any x ? U, we denote the set of S
all possible valuations with scope x by ?x . The set of
all possible valuations is given by ? , x?U ?x . The set ? is closed under the operations of
combination and marginalization. Combination represents the aggregation of information and is
defined as follows. If ? = (p, u) and ? = (q, v) are valuations with scopes x and y, respectively, its
combination ? ? ? is the valuation (pq, pv + qu) with scope x ? y. Marginalization, on the other
hand, acts by coarsening information. If ? = (p, u) is a valuation
with
P
P scope x, and y is a set of
variables such that y ? x, the marginal ??y is the valuation ( x\y p, x\y u) with scope y. In this
case, we say that z , x \ y has been eliminated from ?, which we denote by ??z . The following
result shows that our framework respects the necessary conditions for computing efficiently with
valuations (in the sense of keeping the scope of valuations minimal during the variable elimination
procedure).
Proposition 1. The system (Φ, U, ⊗, ↓) satisfies the following three axioms of a (weak) labeled valuation algebra [11, 12].
(A1) For any φ_1, φ_2, φ_3 ∈ Φ we have that φ_1 ⊗ φ_2 = φ_2 ⊗ φ_1 and φ_1 ⊗ (φ_2 ⊗ φ_3) = (φ_1 ⊗ φ_2) ⊗ φ_3.
(A2) For any φ_z ∈ Φ_z and y ⊆ x ⊆ z we have that (φ_z^{↓x})^{↓y} = φ_z^{↓y}.
(A3) For any φ_x ∈ Φ_x, φ_y ∈ Φ_y and x ⊆ z ⊆ x ∪ y, we have that (φ_x ⊗ φ_y)^{↓z} = φ_x ⊗ φ_y^{↓y∩z}.
Proof. (A1) follows directly from commutativity, associativity and distributivity of product and sum of real-valued functions, and (A2) follows directly from commutativity of the sum-marginal operation. To show (A3), consider any two valuations (p, u) and (q, v) with scopes x and y, respectively, and a set z such that x ⊆ z ⊆ x ∪ y. By definition of combination and marginalization, we have that [(p, u) ⊗ (q, v)]^{↓z} = (Σ_{x∪y\z} pq, Σ_{x∪y\z} (pv + qu)). Since x ∪ y \ z = y \ z, and p and u are functions over Ω_x, it follows that (Σ_{x∪y\z} pq, Σ_{x∪y\z} (pv + qu)) = (p Σ_{y\z} q, p Σ_{y\z} v + u Σ_{y\z} q), which equals (p, u) ⊗ (Σ_{y\z} q, Σ_{y\z} v) = (p, u) ⊗ (q, v)^{↓y∩z}. Hence, [(p, u) ⊗ (q, v)]^{↓z} = (p, u) ⊗ (q, v)^{↓y∩z}.
The following lemma is a direct consequence of (A3) shown by [12], required to prove the correctness of our algorithm later on.
Lemma 2. If z ⊆ y and z ∩ x = ∅ then (φ_x ⊗ φ_y)^{−z} = φ_x ⊗ φ_y^{−z}.
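To make the valuation algebra concrete, here is a minimal sketch of valuations with their combination and marginalization operations (ours, not the authors' implementation; functions are stored as tables keyed by sorted (variable, value) pairs, and the variable domains are assumed to be given by a dict DOM):

```python
import itertools

DOM = {"A": [0, 1], "B": [0, 1]}   # example variable domains (an assumption)

def states(scope):
    """All joint states of `scope` as dicts {variable: value}."""
    svars = sorted(scope)
    return [dict(zip(svars, vals))
            for vals in itertools.product(*(DOM[v] for v in svars))]

class Valuation:
    """A pair (p, u) of nonnegative tables over the domain of `scope`."""
    def __init__(self, scope, p, u):
        self.scope, self.p, self.u = frozenset(scope), p, u

    def key(self, x):
        # Project the joint state dict x onto this valuation's scope.
        return tuple(sorted((v, x[v]) for v in self.scope))

    def combine(self, other):
        # (p, u) combined with (q, v) gives (pq, pv + qu) over the union scope.
        scope = self.scope | other.scope
        p, u = {}, {}
        for x in states(scope):
            k, ks, ko = tuple(sorted(x.items())), self.key(x), other.key(x)
            p[k] = self.p[ks] * other.p[ko]
            u[k] = self.p[ks] * other.u[ko] + other.p[ko] * self.u[ks]
        return Valuation(scope, p, u)

    def marginalize(self, y):
        # Sum out the variables in scope \ y (the marginal to scope y).
        y = frozenset(y) & self.scope
        p, u = {}, {}
        for x in states(self.scope):
            k = tuple(sorted((v, x[v]) for v in y))
            p[k] = p.get(k, 0.0) + self.p[self.key(x)]
            u[k] = u.get(k, 0.0) + self.u[self.key(x)]
        return Valuation(y, p, u)

# Example: a uniform probability part over A and a zero utility part.
pA = {(("A", a),): 0.5 for a in DOM["A"]}
phi_A = Valuation({"A"}, pA, {k: 0.0 for k in pA})
print(phi_A.marginalize(set()).p[()])   # 1.0: the probabilities sum to one
```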
The framework of valuations allows us to compute the expected utility of a given strategy efficiently:
Proposition 3. Given a LIMID L and a strategy s = (δ_D)_{D∈D}, let
\[
\phi_s \triangleq \Big[\bigotimes_{C\in\mathcal{C}} (p_C^{\mathrm{pa}_C}, 0)\Big] \otimes \Big[\bigotimes_{D\in\mathcal{D}} (p_D^{\mathrm{pa}_D}, 0)\Big] \otimes \Big[\bigotimes_{V\in\mathcal{V}} (1, u_V)\Big], \tag{3}
\]
where, for each D, p_D^{pa_D} is the function in P_D associated with policy δ_D. Then φ_s^{↓∅} = (1, E_s[L]).
Proof. Let p and u denote the probability and utility part, respectively, of φ_s^{↓∅}. By definition of combination, we have that φ_s = (p_s, p_s Σ_{V∈V} u_V), where p_s = ∏_{X∈C∪D} p_X^{pa_X} as in (1). Since p_s is a probability distribution over C ∪ D, it follows that p = Σ_{x∈Ω_{C∪D}} p_s(x) = 1. Finally, u = Σ_{C∪D} p_s Σ_{V∈V} u_V, which equals E_s[L] by (2).
[Figure 2 graphics: a small LIMID over nodes A, B, C, D, E, shown next to the elimination scheme below.]
Input: elimination ordering B < C < A and strategy s = (δ_B, δ_C).
Initialization: φ_A = (p_A, 0); φ_B = (p_B^A, 0); φ_C = (p_C^A, 0); φ_D = (1, u_D); φ_E = (1, u_E).
Propagation: φ_1 = (φ_B ⊗ φ_D)^{−B}; φ_2 = (φ_C ⊗ φ_E)^{−C}; φ_3 = (φ_1 ⊗ φ_2 ⊗ φ_A)^{−A}.
Termination: return the utility part of φ_s^{↓∅} = φ_3.
Figure 2: Computing the expected utility of a strategy by variable elimination.
Given any strategy s, we can use a variable elimination procedure to efficiently compute φ_s^{↓∅} and hence its expected utility in time polynomial in the largest domain of a variable but exponential in the width of the elimination ordering.¹ Figure 2 shows a variable elimination procedure used to compute the expected utility of a strategy of the simple LIMID on the left-hand side. However, computing the MEU in this way is unfeasible for any reasonable diagram due to the large number of strategies that would need to be enumerated. For example, if the variables A, B and C in the LIMID in Figure 2 have each ten states, there are 10^10 · 10^10 = 10^20 possible strategies.
In order to avoid considering all possible strategies, we define a partial order (i.e., a reflexive, antisymmetric and transitive relation) over Φ as follows. For any two valuations φ = (p, u) and ψ = (q, v) in Φ, if φ and ψ have equal scope, p ≤ q and u ≤ v, then φ ≤ ψ holds. The following result shows that ≤ is monotonic with respect to combination and marginalization.
Proposition 4. The system (Φ, U, ⊗, ↓, ≤) satisfies the following two additional axioms of an ordered valuation algebra [13].
(A4) If φ_x ≤ ψ_x and φ_y ≤ ψ_y, then (φ_x ⊗ φ_y) ≤ (ψ_x ⊗ ψ_y).
(A5) If φ_x ≤ ψ_x then φ_x^{↓y} ≤ ψ_x^{↓y}.
Proof. (A4). Consider two valuations (p_x, u_x) and (q_x, v_x) with scope x such that (p_x, u_x) ≤ (q_x, v_x), and two valuations (p_y, u_y) and (q_y, v_y) with scope y satisfying (p_y, u_y) ≤ (q_y, v_y). By definition of ≤, we have that p_x ≤ q_x, u_x ≤ v_x, p_y ≤ q_y and u_y ≤ v_y. Since all functions are nonnegative, it follows that p_x p_y ≤ q_x q_y, p_x u_y ≤ q_x v_y and p_y u_x ≤ q_y v_x. Hence, (p_x, u_x) ⊗ (p_y, u_y) = (p_x p_y, p_x u_y + p_y u_x) ≤ (q_x q_y, q_x v_y + q_y v_x) = (q_x, v_x) ⊗ (q_y, v_y). (A5). Let y be a subset of x. It follows from monotonicity of ≤ with respect to addition of real numbers that (p_x, u_x)^{↓y} = (Σ_{x\y} p_x, Σ_{x\y} u_x) ≤ (Σ_{x\y} q_x, Σ_{x\y} v_x) = (q_x, v_x)^{↓y}.
The monotonicity of ≤ allows us to detect suboptimal strategies during variable elimination. To illustrate this, consider the variable elimination scheme in Figure 2 for two different strategies s and s′, and let φ_1^s, φ_2^s, φ_3^s be the valuations produced in the propagation step for strategy s and φ_1^{s′}, φ_2^{s′}, φ_3^{s′} the valuations for s′. If φ_1^s ≤ φ_1^{s′} and φ_2^s ≤ φ_2^{s′} then Proposition 4 tells us that φ_3^s ≤ φ_3^{s′}, which implies E_s[L] ≤ E_{s′}[L]. As a consequence, we can abort variable elimination for s after the second iteration. We can also exploit the redundancy between valuations produced during variable elimination for neighbor strategies. For example, if s and s′ specify the same policy for B, then we know in advance that φ_1^s = φ_1^{s′}, so that only one of them needs to be computed.
In order to facilitate the description of our algorithm, we define operations over sets of valuations. If Φ_x is a set of valuations with scope x and Φ_y is a set of valuations with scope y, the operation Φ_x ⊗ Φ_y ≜ {φ_x ⊗ φ_y : φ_x ∈ Φ_x, φ_y ∈ Φ_y} returns the set of combinations of a valuation in Φ_x and a valuation in Φ_y. For X ∈ x, the operation Φ_x^{−X} ≜ {φ_x^{−X} : φ_x ∈ Φ_x} eliminates variable X from all valuations in Φ_x. Given a finite set of valuations Ψ ⊆ Φ, we say that a valuation ψ ∈ Ψ is maximal if for all φ ∈ Ψ such that ψ ≤ φ it holds that φ ≤ ψ. The operator prune returns the set prune(Ψ) of maximal valuations of Ψ (by pruning non-maximal valuations).
¹ The width of an elimination ordering is the maximum cardinality of the scope of a valuation produced during variable elimination minus one.
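The order and the prune operator admit a direct, if naive, implementation; the following quadratic-time sketch (ours, reusing the Valuation class from the earlier sketch) keeps exactly the maximal valuations:

```python
def leq(phi, psi):
    # phi <= psi iff equal scope and both parts are dominated point-wise.
    return (phi.scope == psi.scope
            and all(phi.p[k] <= psi.p[k] for k in phi.p)
            and all(phi.u[k] <= psi.u[k] for k in phi.u))

def prune(valuations):
    # Keep only the maximal valuations: phi is discarded whenever some psi
    # satisfies phi <= psi but not psi <= phi (a quadratic-time scan).
    return [phi for phi in valuations
            if not any(leq(phi, psi) and not leq(psi, phi)
                       for psi in valuations)]
```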
We are now ready to describe the Multiple Policy Updating (MPU) algorithm, which solves arbitrary LIMIDs exactly. Consider a LIMID L and an elimination ordering X_1 < ⋯ < X_n over the variables in C ∪ D. The elimination ordering can be selected using the standard methods for Bayesian networks [9]. Note that unlike standard algorithms for variable elimination in influence diagrams we allow any elimination ordering. The algorithm is initialized by generating one set of valuations for each variable X in U as follows.
Initialization: Let V_0 be initially the empty set.
1. For each chance variable X ∈ C, add a singleton Φ_X ≜ {(p_X^{pa_X}, 0)} to V_0;
2. For each decision variable X ∈ D, add a set of valuations Φ_X ≜ {(p_X^{pa_X}, 0) : p_X^{pa_X} ∈ P_X} to V_0;
3. For each value variable X ∈ V, add a singleton Φ_X ≜ {(1, u_X)} to V_0.
Once V_0 has been initialized with a set of valuations for each variable in the diagram, we recursively eliminate a variable X_i in C ∪ D in the given ordering and remove any non-maximal valuation:
Propagation: For i = 1, ..., n do:
1. Let B_i be the set of all valuation sets in V_{i−1} whose scope contains X_i;
2. Compute Ψ_i ≜ prune([⊗_{Ψ∈B_i} Ψ]^{−X_i});
3. Set V_i ≜ V_{i−1} ∪ {Ψ_i} \ B_i.
Finally, the algorithm outputs the utility part of the single maximal valuation in the set ⊗_{Ψ∈V_n} Ψ:
Termination: Return the real number u such that (p, u) ∈ prune(⊗_{Ψ∈V_n} Ψ).
u is a real number because the valuations in ⊗_{Ψ∈V_n} Ψ have empty scope and thus both their probability and utility parts are identified with real numbers. The following result is a straightforward extension of [14, Lemma 1(iv)] that is needed to guarantee the correctness of discarding non-maximal valuations in the propagation step.
Lemma 5 (Distributivity of maximality). If Φ_x and Φ_y are two sets of ordered valuations and z ⊆ x then (i) prune(Φ_x ⊗ prune(Φ_y)) = prune(Φ_x ⊗ Φ_y) and (ii) prune(prune(Φ_x)^{−z}) = prune(Φ_x^{−z}).
The result shows that, like marginalization, the prune operation distributes over any factorization of ⊗_{X∈U} Φ_X. The following lemma shows that at any iteration i of the propagation step the combination of all sets in the current pool of sets V_i produces the set of maximal valuations of the initial factorization.
Lemma 6. For i ∈ {1, ..., n}, it follows that prune([⊗_{Ψ∈V_0} Ψ]^{−X_1⋯X_i}) = prune(⊗_{Ψ∈V_i} Ψ).
Proof. We show the result by induction on i. The basis is easily obtained by applying Lemmas 2 and 5 and the axioms of valuation algebra to prune([⊗_{Ψ∈V_0} Ψ]^{−X_1}) in order to obtain prune(⊗_{Ψ∈V_1} Ψ). For the induction step, assume the result holds at i, that is, prune([⊗_{Ψ∈V_0} Ψ]^{−X_1⋯X_i}) = prune(⊗_{Ψ∈V_i} Ψ). By eliminating X_{i+1} from both sides and then applying the prune operation we get to prune([prune([⊗_{Ψ∈V_0} Ψ]^{−X_1⋯X_i})]^{−X_{i+1}}) = prune([prune(⊗_{Ψ∈V_i} Ψ)]^{−X_{i+1}}). By Lemma 5(ii) and (A2), we have that prune([⊗_{Ψ∈V_0} Ψ]^{−X_1⋯X_{i+1}}) = prune([⊗_{Ψ∈V_i} Ψ]^{−X_{i+1}}). It follows from (A1) and Lemma 2 that the right-hand part equals prune((⊗_{Ψ∈V_i\B_{i+1}} Ψ) ⊗ [⊗_{Ψ∈B_{i+1}} Ψ]^{−X_{i+1}}), which by Lemma 5(i) equals prune((⊗_{Ψ∈V_i\B_{i+1}} Ψ) ⊗ prune([⊗_{Ψ∈B_{i+1}} Ψ]^{−X_{i+1}})), which by definition of V_{i+1} equals prune(⊗_{Ψ∈V_{i+1}} Ψ).
Let Φ_L ≜ {φ_s : s ∈ Δ}, where φ_s is given by (3). According to Proposition 3, each element φ_s^{−X_1⋯X_n} in Φ_L^{−X_1⋯X_n} is a valuation whose probability part is one and whose utility part equals E_s[L]. Thus, the maximum expected utility MEU[L] is the utility part of the single valuation in prune(Φ_L^{−X_1⋯X_n}). It is not difficult to see that after the initialization step, the set V_0 contains sets Ψ of valuations such that ⊗_{Ψ∈V_0} Ψ = Φ_L. Hence, Lemma 6 states that after the last iteration, MPU produces a set V_n of sets of valuations such that prune(⊗_{Ψ∈V_n} Ψ) = prune(Φ_L^{−X_1⋯X_n}), whose single element has utility part MEU[L]. This is precisely what the following theorem shows.
Theorem 7. Given a LIMID L, MPU outputs MEU[L].
Proof. The algorithm returns the utility part of a valuation (p, u) in prune(⊗_{Ψ∈V_n} Ψ), which, by Lemma 6 for i = n, equals prune([⊗_{Ψ∈V_0} Ψ]^{↓∅}). By definition of V_0, any valuation φ in (⊗_{Ψ∈V_0} Ψ) factorizes as in (3). Also, there is exactly one valuation φ ∈ (⊗_{Ψ∈V_0} Ψ) for each strategy in Δ. Hence, by Proposition 3, the set (⊗_{Ψ∈V_0} Ψ)^{↓∅} contains a pair (1, E_s[L]) for every strategy s inducing a distinct expected utility. Moreover, since functions with empty scope correspond to numbers, the relation ≤ specifies a total ordering over the valuations in (⊗_{Ψ∈V_0} Ψ)^{↓∅}, which implies a single maximal element. Let s* be a strategy associated to (p, u). Since (p, u) ∈ prune([⊗_{Ψ∈V_0} Ψ]^{↓∅}), it follows from maximality that E_s[L] ≤ E_{s*}[L] for all s, and hence u = MEU[L].
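Putting the pieces together, the propagation and termination steps can be sketched compactly (ours, reusing the Valuation and prune sketches above; it relies on all surviving valuations at the empty scope having probability part one, as guaranteed by Proposition 3):

```python
from functools import reduce

def combine_sets(Phi_x, Phi_y):
    # Set combination: all pairwise combinations of two sets of valuations.
    return [phi.combine(psi) for phi in Phi_x for psi in Phi_y]

def mpu(V0, ordering):
    # V0: list of valuation sets, one per variable; ordering: X_1 < ... < X_n.
    pool = list(V0)
    for X in ordering:
        B = [Phi for Phi in pool if any(X in phi.scope for phi in Phi)]
        if not B:
            continue
        combined = reduce(combine_sets, B)
        Psi = prune([phi.marginalize(phi.scope - {X}) for phi in combined])
        pool = [Phi for Phi in pool if Phi not in B] + [Psi]
    final = prune(reduce(combine_sets, pool))
    # All surviving valuations have empty scope and probability part 1,
    # so the single maximal utility part is MEU[L].
    return max(phi.u[()] for phi in final)
```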
The time complexity of the algorithm is given by the cost of creating the sets of valuations in the initialization step plus the overall cost of the combination and marginalization operations performed during the propagation step. Regarding the initialization step, the loops for chance and value variables generate singletons, and thus take time linear in the input. For any decision variable D, let ω_D ≜ |Ω_D|^{|Ω_pa(D)|} denote the number of policies in Δ_D (which coincides with the number of functions in P_D). There is exactly one valuation in the set Φ_D ∈ V0 for every policy in Δ_D. Also, let ω ≜ max_{D∈𝒟} ω_D be the cardinality of the largest policy set. Then the initialization loop for decision variables takes O(|𝒟|ω) time, which is exponential in the input (the sets of policies are not considered as an input of the problem). Let us analyze the propagation step. As with any variable elimination procedure, the running time of propagating (sets of) valuations is exponential in the width of the given ordering, which is in the best case given by the treewidth of the diagram. Consider the case of an ordering with bounded width and a bounded number of states per variable. Then the cost of each combination or marginalization is bounded by a constant, and the complexity depends only on the number of operations performed. Let λ denote the cardinality of the largest set Φ_i, for i = 1, …, n. Computing Φ_i requires at most λ^{|U|−1} operations of combination and λ operations of marginalization. In the worst case, λ equals ω^{|𝒟|}, that is, all sets associated to decision variables have been combined without discarding any valuation. Hence, the worst-case complexity of the propagation step is exponential in the input, even if the ordering width and the number of states per variable are bounded. This is not surprising given that the problem is still NP-hard in these cases. However, this is a very pessimistic scenario and, on average, the removal of non-maximal elements greatly reduces the complexity, as we show in the next section.
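In practice the exponential blow-up is tamed by the prune operation: every combination is immediately followed by the removal of dominated valuations, so the propagated sets stay far below the ω^{|𝒟|} worst case. The toy sketch below illustrates only this mechanism; it assumes — purely for illustration, not as the paper's actual valuation algebra — that each valuation has been reduced to a scalar (probability, utility) pair, that combination multiplies probabilities and adds utilities, and that pruning keeps the Pareto-maximal pairs.

```python
from itertools import product

def prune(vals):
    """Keep only the Pareto-maximal (p, u) pairs (strictly dominated ones go)."""
    return [v for v in vals
            if not any(w[0] >= v[0] and w[1] >= v[1] and (w[0] > v[0] or w[1] > v[1])
                       for w in vals)]

def combine(set_a, set_b):
    """Combine two sets of scalar valuations and prune the result."""
    return prune([(pa * pb, ua + ub)
                  for (pa, ua), (pb, ub) in product(set_a, set_b)])

# The dominated pair (0.4, 0.5) never survives a prune, and pruning after
# every combination keeps the propagated sets from growing multiplicatively.
A = prune([(0.9, 1.0), (0.5, 3.0), (0.4, 0.5)])   # -> [(0.9, 1.0), (0.5, 3.0)]
B = [(1.0, 0.0), (0.8, 2.0)]
print(combine(A, B))                              # only the maximal pairs remain
```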
4 Experiments
We evaluate the performance of the algorithms on random LIMIDs generated in the following way. Each LIMID is parameterized by the number of decision nodes d, the number of chance nodes c, the maximum cardinality of the domain of a chance variable θ_C, and the maximum cardinality of the domain of a decision variable θ_D. We set the number of value nodes v to be d + 2. For each variable X_i, i = 1, …, c + d + v, we sample Ω_{X_i} to contain from 2 to 4 states. Then we repeatedly add an arc from a decision node with no children to a value node with no parents (so that each decision node has at least one value node as a child). This step guarantees that all decisions are relevant for the computation of the MEU. Finally, we repeatedly add an arc that neither makes the domain of a variable greater than the given bounds nor makes the treewidth more than 10, until no arcs can be added without exceeding the bounds.² Note that this generates diagrams where decision and chance variables have at most log2 θ_D − 1 and log2 θ_C − 1 parents, respectively. Once the graph structure is obtained, we specify the functions associated to value variables by randomly sampling numbers in [0, 1]. The probability mass functions associated to chance variables are randomly sampled from a uniform prior distribution.
² Checking the treewidth of a graph might be hard. We instead use a greedy heuristic that resulted in diagrams whose treewidth ranged from 5 to 10.
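The generation procedure lends itself to a short sketch. The code below is our own simplified rendering, not the authors' generator: the fixed topological order (which makes cycle checks unnecessary) and the cap on the number of parents (standing in for the greedy treewidth heuristic of footnote 2) are assumptions, as are all constants.

```python
import random

def random_limid(d, c, theta_D=8, theta_C=16, max_parents=3, seed=0):
    """Sketch of the random-LIMID generator described above."""
    rng = random.Random(seed)
    v = d + 2                                        # number of value nodes
    order = ([("C", i) for i in range(c)] + [("D", i) for i in range(d)]
             + [("V", i) for i in range(v)])         # fixed topological order
    card = {n: rng.randint(2, 4) for n in order if n[0] != "V"}  # 2..4 states
    parents = {n: [] for n in order}

    # Each (childless) decision node feeds one parentless value node,
    # so every decision is relevant for the MEU.
    values = [n for n in order if n[0] == "V"]
    for k, dec in enumerate(n for n in order if n[0] == "D"):
        parents[values[k]].append(dec)

    def domain(n):                                   # own states times parent states
        size = card.get(n, 1)
        for p in parents[n]:
            size *= card[p]
        return size

    pos = {n: i for i, n in enumerate(order)}
    for _ in range(100 * len(order)):                # greedy arc additions
        a, b = rng.sample(order, 2)
        if pos[a] > pos[b]:
            a, b = b, a                              # arcs only go forward
        if a[0] == "V" or a in parents[b] or len(parents[b]) >= max_parents:
            continue                                 # value nodes are leaves
        bound = {"D": theta_D, "C": theta_C, "V": float("inf")}[b[0]]
        parents[b].append(a)
        if domain(b) > bound:                        # undo if the bound is exceeded
            parents[b].remove(a)
    return card, parents

card, parents = random_limid(d=5, c=8)               # one random diagram
```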
Figure 3: Running time of MPU and CR on randomly generated LIMIDs (log-log scale; running time in seconds against the number of strategies |Δ|).
We compare MPU against the CR algorithm of [8] on 2530 LIMIDs randomly generated by the described procedure with parameters 5 ≤ d ≤ 50, 8 ≤ c ≤ 50, 8 ≤ θ_D ≤ 64 and 16 ≤ θ_C ≤ 64. MPU was implemented in C++ and tested on the same computer as CR.³ A good succinct indicator of the hardness of solving a LIMID is the total number of strategies |Δ|, which represents the size of the search space in a brute-force approach. |Δ| can also be loosely interpreted as the total number of alternatives (over all decision variables) in the problem instance. Figure 3 depicts running time against number of strategies in a log-log scale for the two algorithms on the same test set of random diagrams. For each algorithm, only solved instances are shown, which covers approximately 96% of the cases for MPU, and 68% for CR. A diagram is considered unsolved by an algorithm if the algorithm was not able to reach the exact solution within the limit of 12 hours. Since CR uses an integer program solver, it can output a feasible solution within any given time limit; we consider a diagram solved by CR only if the solution returned at the end of 12 hours is exact, that is, only if its upper and lower bound values match. We note that MPU solved all cases that CR solved (but not the opposite). From the plot, one can see that MPU is orders of magnitude faster than CR. Within the limit of 12 hours, MPU was able to compute diagrams containing up to 10^64 strategies, whereas CR solved diagrams with at most 10^25 strategies. We remark that when CR was not able to solve a diagram, it almost always returned a solution that was not within 5% of the optimum. This implies that MPU would outperform CR even if the latter was allowed a small imprecision in its output.
5 Conclusion
LIMIDs are highly expressive models for utility-based decision making that subsume influence diagrams and finite-horizon (partially observable) Markov decision processes. Furthermore, they allow constraints on policies to be explicitly represented in a concise and intuitive graphical language. Unfortunately, solving LIMIDs is a very hard task of combinatorial optimization. Nevertheless, we showed here that our MPU algorithm can solve a large number of randomly generated problems in reasonable time. The algorithm's efficiency is based on the early removal of suboptimal solutions, which drastically reduces the search space. An interesting extension is to improve MPU's running time at the expense of accuracy. This can be done by arbitrarily discarding valuations during the propagation step so as to bound the size of the propagated sets. Future work is necessary to validate the feasibility of this idea.
Acknowledgments
This work was partially supported by the Swiss NSF grant nr. 200020 134759 / 1, and by the Computational Life Sciences Project, Canton Ticino.
³ We used the CR implementation available at http://www.idsia.ch/~cassio/id2mip/ and CPLEX [15] as the mixed integer programming solver. Our MPU implementation can be downloaded at http://www.idsia.ch/~cassio/mpu/
References
[1] N. L. Zhang, R. Qi, and D. Poole. A computational theory of decision networks. International Journal of Approximate Reasoning, 11(2):83–158, 1994.
[2] S. L. Lauritzen and D. Nilsson. Representing and solving decision problems with limited information. Management Science, 47:1235–1251, 2001.
[3] P. Poupart and C. Boutilier. Bounded finite state controllers. In Advances in Neural Information Processing Systems 16 (NIPS), 2003.
[4] A. Detwarasiti and R. D. Shachter. Influence diagrams for team decision analysis. Decision Analysis, 2(4):207–228, 2005.
[5] C. Amato, D. S. Bernstein, and S. Zilberstein. Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs. Autonomous Agents and Multi-Agent Systems, 21(3):293–320, 2010.
[6] R. A. Howard and J. E. Matheson. Influence diagrams. In Readings on the Principles and Applications of Decision Analysis, pages 721–762. Strategic Decisions Group, 1984.
[7] J. A. Tatman and R. D. Shachter. Dynamic programming and influence diagrams. IEEE Transactions on Systems, Man and Cybernetics, 20(2):365–379, 1990.
[8] C. P. de Campos and Q. Ji. Strategy selection in influence diagrams using imprecise probabilities. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, pages 121–128, 2008.
[9] D. Koller and N. Friedman. Probabilistic Graphical Models – Principles and Techniques. MIT Press, 2009.
[10] G. F. Cooper. A method for using belief networks as influence diagrams. In Fourth Workshop on Uncertainty in Artificial Intelligence, 1988.
[11] P. Shenoy and G. Shafer. Axioms for probability and belief-function propagation. In Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence, pages 169–198. Elsevier Science, 1988.
[12] J. Kohlas. Information Algebras: Generic Structures for Inference. Springer-Verlag, 2003.
[13] R. Haenni. Ordered valuation algebras: a generic framework for approximating inference. International Journal of Approximate Reasoning, 37(1):1–41, 2004.
[14] H. Fargier, E. Rollon, and N. Wilson. Enabling local computation for partially ordered preferences. Constraints, 15:516–539, 2010.
[15] Ilog Optimization. CPLEX documentation. http://www.ilog.com, 1990.
3,521 | 4,189 | Joint 3D Estimation of Objects and Scene Layout
Andreas Geiger, Karlsruhe Institute of Technology (geiger@kit.edu)
Christian Wojek, MPI Saarbrücken (cwojek@mpi-inf.mpg.de)
Raquel Urtasun, TTI Chicago (rurtasun@ttic.edu)
Abstract
We propose a novel generative model that is able to reason jointly about the 3D
scene layout as well as the 3D location and orientation of objects in the scene.
In particular, we infer the scene topology, geometry as well as traffic activities
from a short video sequence acquired with a single camera mounted on a moving
car. Our generative model takes advantage of dynamic information in the form of
vehicle tracklets as well as static information coming from semantic labels and geometry (i.e., vanishing points). Experiments show that our approach outperforms
a discriminative baseline based on multiple kernel learning (MKL) which has access to the same image information. Furthermore, as we reason about objects in
3D, we are able to significantly increase the performance of state-of-the-art object
detectors in their ability to estimate object orientation.
1 Introduction
Visual 3D scene understanding is an important component in applications such as autonomous driving and robot navigation. Existing approaches produce either only qualitative results [11] or a mild
level of understanding, e.g., semantic labels [10, 26], object detection [5] or rough 3D [15, 24]. A
notable exception are approaches that try to infer the scene layout of indoor scenes in the form of
3D bounding boxes [13, 22]. However, these approaches can only cope with limited amounts of
clutter (e.g., beds), and rely on the fact that indoor scenes satisfy very closely the manhattan world
assumption, i.e., walls (and often objects) are aligned with the three dominant vanishing points. In
contrast, outdoor scenarios often show more clutter, vanishing points are not necessarily orthogonal
[25, 2], and objects often do not agree with the dominant vanishing points.
Prior work on 3D urban scene analysis is mostly limited to simple ground plane estimation [4, 29]
or models for which the objects and the scene are inferred separately [6, 7]. In contrast, in this paper
we propose a novel generative model that is able to reason jointly about the 3D scene layout as well
as the 3D location and orientation of objects in the scene. In particular, given a video sequence
of short duration acquired with a single camera mounted on a moving car, we estimate the scene
topology and geometry, as well as the traffic activities and 3D objects present in the scene (see Fig.
1 for an illustration). Towards this goal we propose a novel image likelihood which takes advantage
of dynamic information in the form of vehicle tracklets as well as static information coming from
semantic labels and geometry (i.e., vanishing points). Interestingly, our inference reasons about
whether vehicles are on the road, or parked, in order to get more accurate estimations. Furthermore,
we propose a novel learning-based approach to detecting vanishing points and experimentally show
improved performance in the presence of clutter when compared to existing approaches [19].
We focus our evaluation mainly on estimating the layout of intersections, as this is the most challenging inference task in urban scenes. Our approach proves superior to a discriminative baseline
based on multiple kernel learning (MKL) which has access to the same image information (i.e., 3D
tracklets, segmentation and vanishing points). We evaluate our method on a wide range of metrics
including the accuracy of estimating the topology and geometry of the scene, as well as detecting
Figure 1: Monocular 3D Urban Scene Understanding. (Left) Image cues: vehicle tracklets, vanishing points and scene labels. (Right) Estimated layout: detections belonging to a tracklet are depicted with the same color; traffic activities are depicted with red lines.
activities (i.e., traffic situations). Furthermore, we show that we are able to significantly increase the
performance of state-of-the-art object detectors [5] in terms of estimating object orientation.
2 Related Work
While outdoor scenarios remain fairly unexplored, estimating the 3D layout of indoor scenes has
experienced increased popularity in the past few years [13, 27, 22]. This can be mainly attributed
to the success of novel structured prediction methods as well as the fact that indoor scenes behave
mostly as "Manhattan worlds", i.e., edges on the image can be associated with parallel lines defined
in terms of the three dominant vanishing points which are orthonormal. With a moderate degree of
clutter, accurate geometry estimation has been shown for this scenario.
Unfortunately, most urban scenes violate the Manhattan world assumption. Several approaches
have focused on estimating vanishing points in this more adversarial setting [25]. Barinova et al. [2]
proposed to jointly perform line detection as well as vanishing point, azimuth and zenith estimation.
However, their approach does not tackle the problem of 3D scene understanding and 3D object
detection. In contrast, we propose a generative model which jointly reasons about these two tasks.
Existing approaches to estimate 3D from single images in outdoor scenarios typically infer pop-ups [14, 24]. Geometric approaches, reminiscent of the blocksworld model, which impose physical
constraints between objects (e.g., object A supports object B) have also been introduced [11]. Unfortunately, all these approaches are mainly qualitative and do not provide the level of accuracy
necessary for real-world applications such as autonomous driving and robot navigation. Prior work
on 3D traffic scene analysis is mostly limited to simple ground plane estimation [4], or models for
which the objects and scene are inferred separately [6]. In contrast, our model offers a much richer
scene description and reasons jointly about 3D objects and the scene layout.
Several methods have tried to infer the 3D locations of objects in outdoor scenarios [15, 1]. The most
successful approaches use tracklets to prune spurious detections by linking consistent evidence in
successive frames [18, 16]. However, these models are either designed for static camera setups in
surveillance applications [16] or do not provide a rich scene description [18]. Notable exceptions
are [3, 29] which jointly infer the camera pose and the location of objects. However, the employed
scene models are rather simplistic containing only a single flat ground plane.
The closest approach to ours is probably the work of Geiger et al. [7], where a generative model is
proposed in order to estimate the scene topology, geometry as well as traffic activities at intersections. Our work differs from theirs in two important aspects. First, they rely on stereo sequences
while we make use of monocular imagery. This makes the inference problem much harder, as the
noise in monocular imagery is strongly correlated with depth. Towards this goal we develop a richer
image likelihood model that takes advantage of vehicle tracklets, vanishing points as well as segmentations of the scene into semantic labels. The second and most important difference is that
Geiger et al. [7] estimate only the scene layout, while we reason jointly about the layout as well as
the 3D location and orientation of objects in the scene (i.e., vehicles).
Figure 2: (a) Geometric model (shown for κ = 4). (b) Model topology κ; the grey shaded areas illustrate the range of the crossing angle α.
Finally, non-parametric models have been proposed to perform traffic scene analysis from a stationary camera with a view similar to bird's eye perspective [20, 28]. In our work we aim to infer similar
activities but use video sequences from a camera mounted on a moving car with a substantially lower
viewpoint. This makes the recognition task much more challenging. Furthermore, those models do
not allow for viewpoint changes, while our model reasons about over 100 unseen scenes.
3 3D Urban Scene Understanding
We tackle the problem of estimating the 3D layout of urban scenes (i.e., road intersections) from monocular video sequences. In this paper 2D refers to observations in the image plane while 3D refers to the bird's eye perspective (in our scenario the height above ground is non-informative). We assume that the road surface is flat, and model the bird's eye perspective as the y = 0 plane of the standard camera coordinate system. The reference coordinate system is given by the position of the camera in the last frame of the sequence. The intrinsic parameters of the camera are obtained using camera calibration and the extrinsics using a standard Structure-from-Motion (SfM) pipeline [12].
We take advantage of dynamic and static information in the form of 3D vehicle tracklets, semantic labels (i.e., sky, background, road) and vanishing points. In order to compute 3D tracklets, we first detect vehicles in each frame independently using a semi-supervised version of the part-based detector of [5] in order to obtain orientation estimates. 2D tracklets are then estimated using "tracking-by-detection": first, adjacent frames are linked, and then short tracklets are associated to create longer ones via the Hungarian method. Finally, 3D vehicle tracklets are obtained by projecting the 2D tracklets into bird's eye perspective, employing error propagation to obtain covariance estimates. This is illustrated in Fig. 1, where detections belonging to the same tracklet are grouped by color. The observer (i.e., our car) is shown in black. See Sec. 3.2 for more details on this process.
Since depth estimates in the monocular case are much noisier than in the stereo case, we employ a more constrained model than the one utilized in [7]. In particular, as depicted in Fig. 2, we model all intersection arms with the same width and force alternate arms to be collinear. We model lanes with splines (see the red lines for active lanes in Fig. 1), and place parking spots at equidistant places along the street boundaries (see Fig. 3(b)). Our model then infers whether the cars participate in traffic or are parked in order to obtain more accurate layout estimates. Latent variables are employed to associate each detected vehicle with positions in one of these lanes or parking spaces. In the following, we first give an overview of our probabilistic model and then describe each part in detail.
3.1 Probabilistic Model
As illustrated in Fig. 2(b), we consider a fixed set of road layouts κ, including straight roads, turns, and 3- and 4-armed intersections. Each of these layouts is associated with a set of geometric random variables: the intersection center c, the street width w, the global scene rotation r and the angle of the crossing street α with respect to r (see Fig. 2(a)). Note that for κ = 1, α does not exist.
Figure 3: (a) Graphical model. (b) Road model with lanes represented as B-splines.

Joint Distribution: Our goal is to estimate the most likely configuration R = (κ, c, w, r, α) given the image evidence E = {T, V, S}, which comprises vehicle tracklets T = {t_1, …, t_N}, vanishing points V = {v_f, v_c} and semantic labels S. We assume that, given R, all observations are independent. Fig. 3(a) depicts our graphical model, which factorizes the joint distribution as

p(E, R|C) = p(R) [ ∏_{n=1}^{N} Σ_{l_n} p(t_n, l_n|R, C) ] p(v_f|R, C) p(v_c|R, C) p(S|R, C)    (1)

where the bracketed product corresponds to the vehicle tracklets and the remaining factors to the vanishing points and the semantic labels, C are the (known) extrinsic and intrinsic camera parameters for all the frames in the video sequence, N is the total number of tracklets and {l_n} denotes latent variables representing the lane or parking positions associated with every vehicle tracklet. See Fig. 3(b) for an illustration.
Prior: Let us first define a scene prior, which factorizes as

p(R) = p(κ) p(c, w) p(r) p(α)    (2)

where c and w are modeled jointly to capture their correlation. We model w using a log-normal distribution since it takes only positive values. Further, since it is highly multimodal, we model p(α) in a non-parametric fashion using kernel density estimation (KDE), and define:

r ∼ N(μ_r, σ_r),    (c, log w)^T ∼ N(μ_cw, Σ_cw),    κ ∼ δ(κ_MAP)

In order to avoid the requirement for trans-dimensional inference procedures, the topology κ_MAP is estimated a priori using joint boosting, and held fixed at inference. To estimate κ_MAP, we use the same feature set employed by the MKL baseline (see Sec. 4 for details).
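Sampling from this prior is straightforward; the sketch below shows one way to do it. All numeric parameters here are invented placeholders — the paper fits them by maximum likelihood on its scene database (Sec. 3.3).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
alpha_train = rng.uniform(-np.pi / 4, np.pi / 4, 100)  # stand-in training angles
kde_alpha = gaussian_kde(alpha_train, bw_method=0.02)  # non-parametric p(alpha)

def sample_prior(kappa_map):
    r = rng.normal(0.0, 0.1)                           # r ~ N(mu_r, sigma_r)
    sample = rng.multivariate_normal([0.0, 0.0, np.log(10.0)],
                                     np.diag([25.0, 25.0, 0.04]))
    c, w = sample[:2], np.exp(sample[2])               # (c, log w) jointly Gaussian
    alpha = kde_alpha.resample(1)[0, 0]                # alpha drawn from the KDE
    return kappa_map, c, w, r, alpha                   # kappa fixed at its MAP
```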
3.2 Image Likelihood
This section details our image likelihood for tracklets, vanishing points and semantic labels.
Vehicle Tracklets: In the following, we drop the tracklet index n to simplify notation. Let us define a 3D tracklet as a set of object detections t = {d_1, …, d_M}. Here, each object detection d_m = (f_m, b_m, o_m) contains the frame index f_m ∈ ℕ, the object bounding box b_m ∈ ℝ^4 defined as 2D position and size, as well as a normalized orientation histogram o_m ∈ ℝ^8 with 8 bins. We compute the bounding box b_m and orientation o_m by supervised training of a part-based object detector [5], where each component contains examples from a single orientation. Following [5], we apply the softmax function to the output scores and associate frames using the Hungarian algorithm in order to obtain tracklets.

As illustrated in Fig. 3(b), we represent drivable locations with splines, which connect incoming and outgoing lanes of the intersection. We also allow cars to be parked on the side of the road; see Fig. 3(b) for an illustration. Thus, for a K-armed intersection, we have l ∈ {1, …, K(K−1) + 2K} in total, where K(K−1) is the number of lanes and 2K is the number of parking areas. We use the latent variable l to index the lane or parking position associated with a tracklet. The joint probability of a tracklet t and its lane index l is given by p(t, l|R, C) = p(t|l, R, C) p(l). We assume a uniform prior over lanes and parking positions, l ∼ U(1, K(K−1) + 2K), and denote the posterior by p_l when l corresponds to a lane, and by p_p when it is a parking position.
In order to evaluate the tracklet posterior for lanes p_l(t|l, R, C), we need to associate all object detections t = {d_1, …, d_M} with locations on the spline. We do this by augmenting the observation model with an additional latent variable s per object detection d, as illustrated in Fig. 3(b).

Figure 4: Scene labels obtained from joint boosting (left) and from our model (right).

The posterior is modeled using a left-to-right Hidden Markov Model (HMM), defined as

p_l(t|l, R, C) = Σ_{s_1,…,s_M} p_l(s_1) p_l(d_1|s_1, l, R, C) ∏_{m=2}^{M} p_l(s_m|s_{m−1}) p_l(d_m|s_m, l, R, C)    (3)
We constrain all tracklets to move forward in 3D by defining the transition probability p(s_m|s_{m−1}) as uniform on s_m ≥ s_{m−1} and 0 otherwise. Further, uniform initial probabilities p_l(s_1) are employed, since no location information is available a priori. We assume that the emission likelihood p_l(d_m|s_m, l, R, C) factorizes into the object location and its orientation. We impose a multinomial distribution over the orientation p_l(f_m, o_m|s_m, l, R, C), where each object orientation votes for its own bin as well as neighboring bins, accounting for the uncertainty of the object detector. The 3D object location is modeled as a Gaussian with uniform outlier probability c_l:

p_l(f_m, b_m|s_m, l, R, C) ∝ c_l + N(π_m | μ_m, Σ_m)    (4)

where π_m = π_m(f_m, b_m, C) ∈ ℝ^2 denotes the object detection mapped into bird's eye perspective, μ_m = μ_m(s_m, l, R) ∈ ℝ^2 is the coordinate of the spline point s_m on lane l, and Σ_m = Σ_m(f_m, b_m, C) ∈ ℝ^{2×2} is the covariance of the object location in bird's eye coordinates.
We now describe how we transform the 2D tracklets into 3D tracklets {(π_1, Σ_1), …, (π_M, Σ_M)}, which we use in p_l(d_m|s_m, l, R, C): we project the image coordinates into bird's eye perspective by backprojecting objects into 3D using several complementary cues. Towards this goal we use the 2D bounding box foot-point in combination with the estimated road plane. Assuming typical vehicle dimensions obtained from annotated ground truth, we also exploit the width and height of the bounding box. Covariances in bird's eye perspective are obtained by error propagation. In order to reduce noise in the observations we employ a Kalman smoother with a constant 3D velocity model.

Our parking posterior model is similar to the lane posterior described above, except that we do not allow parked vehicles to move; we assume them to have arbitrary orientations and place them at the sides of the road. Hence, we have

p_p(t|l, R, C) = ∏_{m=1}^{M} Σ_s p_p(d_m|s, l, R, C) p(s)    (5)

with s the index of the parking spot location within a parking area, and

p_p(d_m|s, l, R, C) = p_p(f_m, b_m|s, l, R, C) ∝ c_p + N(π_m | μ_m, Σ_m)    (6)

Here, c_p, π_m and Σ_m are defined as above, while μ_m = μ_m(s, l, R) ∈ ℝ^2 is the coordinate of the parking spot location in bird's eye perspective (see Fig. 3(b) for an illustration). For inference, we subsample each tracklet trajectory equidistantly in intervals of 5 meters in order to reduce the number of detections within a tracklet and keep the total evaluation time of p(R, E|C) low.
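Evaluating Eq. (3) is a standard forward pass over the left-to-right HMM. In the sketch below, emis[m, s] stands for the emission term p_l(d_m|s, l, R, C) of Eqs. (4)–(6); everything else follows directly from the transition model stated above.

```python
import numpy as np

def tracklet_likelihood(emis):
    """emis: (M, S) emission probabilities along the lane spline.
    Returns p_l(t | l, R, C) via the forward algorithm of Eq. (3)."""
    M, S = emis.shape
    trans = np.triu(np.ones((S, S)))              # s_m >= s_{m-1}, zero otherwise
    trans /= trans.sum(axis=1, keepdims=True)     # uniform over allowed successors
    alpha = np.full(S, 1.0 / S) * emis[0]         # uniform initial p(s_1)
    for m in range(1, M):
        alpha = (alpha @ trans) * emis[m]
    return alpha.sum()
```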
Vanishing Points: We detect two types of dominant vanishing points (VPs) in the last frame of each sequence: v_f, corresponding to the forward facing street, and v_c, corresponding to the crossing street. While v_f is usually in the image, the u-coordinate of the crossing VP is often close to infinity (see Fig. 1). As a consequence, we represent v_f ∈ ℝ by its image u-coordinate and v_c ∈ [−π/4, π/4] by the angle of the crossing road, back-projected into the image.

Following [19], we employ a line detector to reason about dominant VPs in the scene. We relax the original model of [19] to allow for non-orthogonal VPs, as intersection arms are often non-orthogonal. Unfortunately, traditional VP detectors tend to fail in the presence of clutter, which our images exhibit to a large extent, for example generated by shadows.

Figure 5: Detecting structured lines and object orientation errors. (a) ROC curves (true positive rate vs. false positive rate) for structured-line detection: our learning-based approach vs. Kosecka et al. [19]. (b) Object orientation error: Felzenszwalb et al. [5] (raw) 32.6 deg; Felzenszwalb et al. [5] (smoothed) 31.2 deg; our method (κ unknown) 15.7 deg; our method (κ known) 13.7 deg. Our approach outperforms [19] in the task of VP estimation, and [5] in estimating the orientation of objects.

To tackle this problem we reweight line segments according to their likelihood of carrying structural information. To this end,
we learn a k-nn classifier on an annotated training database where lines are labeled as either structure
or clutter. Here, structure refers to line segments that are aligned with the major orientations of the
road, as well as facade edges of buildings belonging to dominant VPs. Our feature set comprises
geometric information in the form of position, length, orientation and number of lines with the
same orientation as well as perpendicular orientation in a local window. The local appearance is
represented by the mean, standard deviation and entropy of all pixels on both sides of the line.
Finally, we add texton-like features using a Gabor filter bank, as well as 3 principal components of the scene GIST [23]. The structure k-nn classifier's confidence is used in the VP voting process to reweight the lines. The benefit of our learning-based approach is illustrated in Fig. 5.
To avoid estimates from spurious outliers we threshold the dominant VPs and only retain the most confident ones. We assume that v_f and v_c are independent given the road parameters. Let μ_f = μ_f(R, C) be the image u-coordinate (in pixels) of the forward facing street's VP and let μ_c = μ_c(R, C) be the orientation (in radians) of the crossing street in the image. We define

p(v_f|R, C) ∝ c_f + δ_f N(v_f|μ_f, σ_f),    p(v_c|R, C) ∝ c_c + δ_c N(v_c|μ_c, σ_c)

where {c_f, c_c} are small constants capturing outliers, {δ_f, δ_c} take value 1 if the corresponding VP has been detected in the image and 0 otherwise, and {σ_f, σ_c} are parameters of the VP model.
Semantic Labels: We segment the last frame of the sequence pixelwise into 3 semantic classes: road, sky and background. For each patch, we infer a score for each of the 3 labels using the boosting algorithm of [30] with a combination of Walsh-Hadamard filters [30], as well as multi-scale features developed for detecting man-made structures [21] on patches of size 16×16, 32×32 and 64×64. We include the latter ones as they help in discriminating buildings from road. For training, we use a set of 200 hand-labeled images which are not part of the test data.

Given the softmax-normalized label scores S^{(i)}_{u,v} ∈ ℝ of each class i for the patch located at position (u, v) in the image, we define the likelihood of a scene labeling S = {S^{(1)}, S^{(2)}, S^{(3)}} as

p(S|R, C) ∝ exp( λ Σ_{i=1}^{3} Σ_{(u,v)∈S_i} S^{(i)}_{u,v} )    (7)

where λ is a model parameter and S_i is the set of all pixels of class i obtained from the reprojection of the geometric model into the image. Note that the road boundaries directly define the lower end of a facade, while we assume a typical building height of 4 stories, leading to the upper end. Facades adjacent to the observer's own street are not considered. Fig. 4 illustrates an example of the scene labeling returned by boosting (left) as well as the labeling generated from the reprojection of our model (right). Note that a large overlap corresponds to a large likelihood in Eq. (7).
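Scoring a candidate layout against the segmentation therefore amounts to summing the softmax scores inside the reprojected class masks. A minimal sketch (the mask rendering itself is assumed to exist elsewhere):

```python
import numpy as np

def label_log_likelihood(scores, mask, lam=0.1):
    """scores: (3, H, W) softmax maps for {road, sky, background};
    mask: (H, W) int array of class ids reprojected from the model R.
    Returns log p(S|R, C) of Eq. (7) up to an additive constant."""
    ll = 0.0
    for i in range(3):
        ll += scores[i][mask == i].sum()   # reward pixels that agree with R
    return lam * ll
```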
3.3 Learning and Inference
Our goal is to estimate the posterior of R, given the image evidence E and the camera calibration C:

p(R|E, C) ∝ p(E|R, C) p(R)    (8)

Learning the prior: We estimate the parameters of the prior p(R) using maximum likelihood leave-one-out cross-validation on the scene database of [7]. This is straightforward as the prior in Eq. (2) factorizes. We employ KDE with σ = 0.02 to model p(α), as it works well in practice.
Figure 6: Inference of topology and geometry.

Inference with known κ       Baseline    Ours
  Location                   6.0 m       5.8 m
  Orientation                9.6 deg     5.9 deg
  Overlap                    44.9 %      53.0 %
  Activity                   18.4 %      11.5 %

Inference with unknown κ     Baseline    Ours
  Topology (κ)               27.4 %      70.8 %
  Location                   6.2 m       6.6 m
  Orientation                21.7 deg    7.2 deg
  Overlap                    39.3 %      48.1 %
  Activity                   28.1 %      16.6 %
Figure 7: Comparison with stereo [7] when κ and α are unknown.

                 Stereo      Ours
  Topology (κ)   92.9 %      71.7 %
  Location       4.4 m       6.6 m
  Orientation    6.6 deg     7.2 deg
  Overlap        62.7 %      48.1 %
  Activity       8.0 %       16.6 %
Learning the 3D tracklet parameters: Eq. (4) requires a function ν : (f, b, C) → (π, Σ) which takes a frame index f ∈ ℕ, an object bounding box b ∈ ℝ^4 and the calibration parameters C as input and maps them to the object location π ∈ ℝ^2 and uncertainty Σ ∈ ℝ^{2×2} in bird's eye perspective. As cues for this mapping we use the bounding box width and height, as well as the location of the bounding box foot-point. Scene-depth-adaptive error propagation is employed for obtaining Σ. The unknown parameters of the mapping are the uncertainties in bounding box location (σ_u, σ_v), width (σ_Δu) and height (σ_Δv), as well as the real-world object dimensions (λ_x, λ_y) along with their uncertainties (σ_λx, σ_λy). We learn these parameters using a separate training dataset, including 1020 images with 3634 manually labeled vehicles and depth information [8].
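One of the cues behind this mapping — backprojecting the bounding-box foot-point through a flat ground plane, with first-order error propagation — can be made explicit. The sketch below is our own reconstruction under a pinhole model with known camera height h and focal length f; the uncertainties sigma_u, sigma_v play the role of the learned parameters.

```python
import numpy as np

def footpoint_to_birdseye(u, v, f, u0, v0, h, sigma_u=2.0, sigma_v=4.0):
    """Map an image foot-point (u, v) to bird's eye (x, z) with covariance."""
    z = f * h / (v - v0)                   # depth from ground-plane geometry
    x = (u - u0) * z / f                   # lateral offset
    J = np.array([[z / f, -x / (v - v0)],  # Jacobian of (x, z) w.r.t. (u, v)
                  [0.0,   -z / (v - v0)]])
    cov = J @ np.diag([sigma_u**2, sigma_v**2]) @ J.T   # error propagation
    return np.array([x, z]), cov
```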
Inference: Since the posterior in Eq. (8) cannot be computed in closed form, we approximate it using Metropolis-Hastings sampling [9]. We exploit a combination of local and global moves to obtain a well-mixing Markov chain. While local moves modify R slightly, global moves sample R directly from the prior. This ensures quickly traversing the search space, while still exploring local modes. To avoid trans-dimensional jumps, the road layout κ is estimated separately beforehand as the MAP estimate κ_MAP provided by joint boosting [30]. We pick each of the remaining elements of R at random and select local and global moves with equal probability.
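Schematically, the sampler looks as follows; log_lik, log_prior, sample_prior and perturb are placeholders for the model-specific pieces defined above. Note that when the global move proposes from the prior itself, the acceptance ratio reduces to a likelihood ratio.

```python
import numpy as np

def mh_infer(log_lik, log_prior, sample_prior, perturb, n_iter=10000, seed=0):
    rng = np.random.default_rng(seed)
    R = sample_prior()
    best, best_lp = R, log_lik(R) + log_prior(R)
    for _ in range(n_iter):
        if rng.random() < 0.5:                        # global move (from prior)
            R_new = sample_prior()
            log_a = log_lik(R_new) - log_lik(R)       # prior terms cancel
        else:                                         # local symmetric move
            R_new = perturb(R, rng)
            log_a = (log_lik(R_new) + log_prior(R_new)
                     - log_lik(R) - log_prior(R))
        if np.log(rng.random()) < log_a:              # Metropolis-Hastings test
            R = R_new
            lp = log_lik(R) + log_prior(R)
            if lp > best_lp:
                best, best_lp = R, lp
    return best                                       # approximate MAP estimate
```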
4 Experimental Evaluation
In this section, we first show that learning which line features convey structural information improves dominant vanishing point detection. Next, we compare our method to a multiple kernel learning (MKL) baseline in estimating scene topology, geometry and traffic activities on the dataset of [7], but only employing information from a single camera. Finally, we show that our model can significantly improve object orientation estimates compared to state-of-the-art part-based models [5]. For all experiments, we set c_l = c_p = 10^−15, σ_f = 0.1, c_f = 10^−10, σ_c = 0.01, c_c = 10^−30 and λ = 0.1.
Vanishing Point Estimation: We use a database of 185 manually annotated images to learn a predictor of which line segments are structured. This is important since cast shadows often mislead the VP estimation process. Fig. 5(a) shows the ROC curves for the method of [19] relaxed to non-orthogonal VPs (blue) as well as our learning-based approach (red). While the baseline gets easily disturbed by clutter, our method is more accurate and has significantly fewer false positives.
3D Urban Scene Inference: We evaluate our method's ability to infer the scene layout by building a competitive baseline based on multi-kernel Gaussian process regression [17]. We employ a total of 4 kernels built on GIST [23], tracklet histograms, VPs as well as scene labels. Note that these are the same features employed by our model to estimate the scene topology κ_MAP. For the tracklets, we discretize the 50×50 m area in front of the vehicle into bins of size 5×5 m. Each bin consists of four binary elements, indicating whether forward, backward, left or right motion has been observed at that location. The VPs are included with their value as well as an indicator variable denoting whether the VP has been found or not. For each semantic class, we compute histograms at 3 scales, which divide the image into 3×1, 6×2 and 12×4 bins, and concatenate them. Following [7] we measure error in terms of the location of the intersection center in meters, the orientation of the intersection arms in degrees, the overlap of road area with ground truth, as well as the percentage of correctly discovered intersection crossing activities. For details about these metrics we refer the reader to [7].
Figure 8: Automatically inferred scene descriptions. (Left) Tracklets from all frames superimposed. (Middle) Inference result with κ known and (Right) κ unknown. The inferred intersection layout is shown in gray, ground truth labels are given in blue. Detected activities are marked by red lines.
We perform two types of experiments: in the first one we assume that the type of intersection κ is given, and in the second one we estimate κ as well. As shown in Fig. 6, our method significantly
outperforms the MKL baseline in almost all error measures. Our method particularly excels in
estimating the intersection arm orientations and activities. We also compare our approach to [7] in
Fig. 7. As this approach uses stereo cameras, it can be considered as an oracle, yielding the highest
performance achievable. Our approach is close to the oracle; the difference in performance is due
to the depth uncertainties that arise in the monocular case, which makes the problem much more
ambiguous. Fig. 8 shows qualitative results, with detections belonging to the same tracklet depicted
with the same color. The trajectories of all the tracklets are superimposed in the last frame. Note
that, while for the 2-armed and 4-armed case the topology has been estimated correctly, the 3-armed
case has been confused with a 4-armed intersection. This is our most typical failure mode. Despite
this, the orientations are correctly estimated and the vehicles are placed at the correct locations.
Improving Object Orientation Estimation: We also evaluate the performance of our method in estimating 360-degree object orientations. As cars are mostly aligned with the road surface, we only focus on the orientation angle in bird's eye coordinates. As a baseline, we employ the part-based detector of [5] trained in a supervised fashion to distinguish between 8 canonical views, where each view is a mixture component. We correct for the ego-motion and project the highest scoring orientation into bird's eye perspective. For our method, we infer the scene layout R using our approach and associate every tracklet to its lane by maximizing p_l(l|t, R, C) over l using Viterbi decoding. We then select the tangent angle at the associated spline's foot-point s on the inferred lane l as our orientation estimate. Since parked cars are often oriented arbitrarily, our evaluation focuses on moving vehicles only. Fig. 5(b) shows that we are able to significantly reduce the orientation error with respect to [5]. This also holds true for the smoothed version of [5], where we average orientations over temporally neighboring bins within each tracklet.
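The Viterbi step is the max-product counterpart of the forward pass shown in Sec. 3.2. A short sketch (the per-predecessor transition normalization is dropped for brevity):

```python
import numpy as np

def viterbi_left_to_right(emis):
    """emis: (M, S) emission probabilities. Returns the best forward-only
    state path along the lane spline."""
    M, S = emis.shape
    logp = np.log(emis + 1e-300)
    delta = logp[0] - np.log(S)                 # uniform p(s_1)
    back = np.zeros((M, S), dtype=int)
    for m in range(1, M):
        vals = np.empty(S)
        best, arg = -np.inf, 0
        for s in range(S):                      # predecessors s' <= s only
            if delta[s] > best:
                best, arg = delta[s], s
            vals[s] = best
            back[m, s] = arg
        delta = vals + logp[m]
    path = [int(np.argmax(delta))]
    for m in range(M - 1, 0, -1):               # backtrack the best path
        path.append(int(back[m, path[-1]]))
    return path[::-1]
```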
5 Conclusions
We have proposed a generative model which is able to perform joint 3D inference over the scene
layout as well as the location and orientation of objects. Our approach is able to infer the scene
topology and geometry, as well as traffic activities from a short video sequence acquired with a
single camera mounted on a car driving around a mid-size city. Our generative model proves superior to a discriminative approach based on MKL. Furthermore, our approach is able to significantly outperform a state-of-the-art detector in its ability to estimate 3D object orientation. In the future, we plan to incorporate more discriminative cues to further boost performance in the monocular
case. We also believe that incorporating traffic sign states and pedestrians into our model will be an
interesting avenue for future research towards fully understanding complex urban scenarios.
8
References
[1] S. Bao, M. Sun, and S. Savarese. Toward coherent object detection and scene layout understanding. In CVPR, 2010.
[2] O. Barinova, V. Lempitsky, E. Tretyak, and P. Kohli. Geometric image parsing in man-made environments. In ECCV, 2010.
[3] W. Choi and S. Savarese. Multiple target tracking in world coordinate with single, minimally calibrated camera. In ECCV, 2010.
[4] A. Ess, B. Leibe, K. Schindler, and L. Van Gool. Robust multi-person tracking from a mobile platform. PAMI, 31:1831–1846, 2009.
[5] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32:1627–1645, 2010.
[6] D. Gavrila and S. Munder. Multi-cue pedestrian detection and tracking from a moving vehicle. IJCV, 73:41–59, 2007.
[7] A. Geiger, M. Lauer, and R. Urtasun. A generative model for 3D urban scene understanding from movable platforms. In CVPR, 2011.
[8] A. Geiger, M. Roser, and R. Urtasun. Efficient large-scale stereo matching. In ACCV, 2010.
[9] W. Gilks and S. Richardson, editors. Markov Chain Monte Carlo in Practice. Chapman & Hall, 1995.
[10] S. Gould, T. Gao, and D. Koller. Region-based segmentation and object detection. In NIPS, 2009.
[11] A. Gupta, A. Efros, and M. Hebert. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In ECCV, 2010.
[12] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge, 2004.
[13] V. Hedau, D. Hoiem, and D. A. Forsyth. Recovering the spatial layout of cluttered rooms. In ICCV, 2009.
[14] D. Hoiem, A. Efros, and M. Hebert. Recovering surface layout from an image. IJCV, 75:151–172, 2007.
[15] D. Hoiem, A. Efros, and M. Hebert. Putting objects in perspective. IJCV, 80:3–15, 2008.
[16] C. Huang, B. Wu, and R. Nevatia. Robust object tracking by hierarchical association of detection responses. In ECCV, 2008.
[17] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Gaussian processes for object categorization. IJCV, 88:169–188, 2010.
[18] R. Kaucic, A. Perera, G. Brooksby, J. Kaufhold, and A. Hoogs. A unified framework for tracking through occlusions and across sensor gaps. In CVPR, 2005.
[19] J. Kosecka and W. Zhang. Video compass. In ECCV, 2002.
[20] D. Kuettel, M. Breitenstein, L. Van Gool, and V. Ferrari. What's going on?: Discovering spatio-temporal dependencies in dynamic scenes. In CVPR, 2010.
[21] S. Kumar and M. Hebert. Man-made structure detection in natural images using a causal multiscale random field. In CVPR, 2003.
[22] D. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating spatial layout of rooms using volumetric reasoning about objects and surfaces. In NIPS, 2010.
[23] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42:145–175, 2001.
[24] A. Saxena, S. H. Chung, and A. Y. Ng. 3-D depth reconstruction from a single still image. IJCV, 76:53–69, 2008.
[25] G. Schindler and F. Dellaert. Atlanta world: An expectation maximization framework for simultaneous low-level edge grouping and camera calibration in complex man-made environments. In CVPR, 2004.
[26] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 81:2–23, 2009.
[27] H. Wang, S. Gould, and D. Koller. Discriminative learning with latent variables for cluttered indoor scene understanding. In ECCV, 2010.
[28] X. Wang, X. Ma, and W. Grimson. Unsupervised activity perception in crowded and complicated scenes using hierarchical bayesian models. PAMI, 2009.
[29] C. Wojek, S. Roth, K. Schindler, and B. Schiele. Monocular 3D scene modeling and inference: Understanding multi-object traffic scenes. In ECCV, 2010.
[30] C. Wojek and B. Schiele. A dynamic CRF model for joint labeling of object and scene classes. In ECCV, 2008.
3,522 | 419 | Transforming Neural-Net Output Levels
to Probability Distributions
John S. Denker and Yann leCun
AT&T Bell Laboratories
Holmdel, NJ 07733
Abstract
(1) The outputs of a typical multi-output classification network do not
satisfy the axioms of probability; probabilities should be positive and sum
to one. This problem can be solved by treating the trained network as a
preprocessor that produces a feature vector that can be further processed,
for instance by classical statistical estimation techniques. (2) We present a
method for computing the first two moments of the probability distribution
indicating the range of outputs that are consistent with the input and the
training data. It is particularly useful to combine these two ideas: we
implement the ideas of section 1 using Parzen windows, where the shape
and relative size of each window is computed using the ideas of section 2.
This allows us to make contact between important theoretical ideas (e.g.
the ensemble formalism) and practical techniques (e.g. back-prop). Our
results also shed new light on and generalize the well-known "soft max"
scheme.
1 Distribution of Categories in Output Space
In many neural-net applications, it is crucial to produce a set of C numbers that
serve as estimates of the probability of C mutually exclusive outcomes. For example, in speech recognition, these numbers represent the probability of C different
phonemes; the probabilities of successive segments can be combined using a Hidden
Markov Model. Similarly, in an Optical Character Recognition ("OCR") application, the numbers represent C possible characters. Probability information for the
"best guess" category (and probable runner-up categories) is combined with context, cost information, etcetera, to produce recognition of multi-character strings.
According to the axioms of probability, these C numbers should be constrained to be
positive and sum to one. We find that rather than modifying the network architecture and/or training algorithm to satisfy this constraint directly, it is advantageous
to use a network without the probabilistic constraint, followed by a statistical postprocessor. Similar strategies have been discussed before, e.g. (Fogelman, 1990).
The obvious starting point is a network with C output units. We can train the network with targets that obey the probabilistic constraint, e.g. the target for category
"0" is [1, 0, 0, ...J, the target for category "1" is [0, 1, 0, ...J, etcetera. This would
not, alas, guarantee that the actual outputs would obey the constraint. Of course,
the actual outputs can always be shifted and normalized to meet the requirement;
one of the goals of this paper is to understand the best way to perform such a
transformation. A more sophisticated idea would be to construct a network that
had such a transformation (e.g. softmax (Bridle, 1990; Rumelhart, 1989)) "built
in" even during training. We tried this idea and discovered numerous difficulties,
as discussed in (Denker and leCun, 1990).
The most principled solution is simply to collect statistics on the trained network.
Figures 1 and 2 are scatter plots of output from our OCR network (Le Cun et al.,
1990) that was trained to recognize the digits "0" through "9". In the first figure,
the outputs tend to cluster around the target vectors [the points (T-, T+) and
(T+, T-)], and even though there are a few stragglers, decision regions can be
found that divide the space into a high-confidence "0" region, a high-confidence "1"
region, and a quite small "rejection" region. In the other figure, it can be seen that
the "3 versus 5" separation is very challenging.
In all cases, the plotted points indicate the output of the network when the input
image is taken from a special "calibration" dataset ℒ that is distinct both from
the training set ℳ (used to train the network) and from the testing set 𝒢 (used to
evaluate the generalization performance of the final, overall system).
This sort of analysis is applicable to a wide range of problems. The architecture of
the neural network (or other adaptive system) should be chosen to suit the problem
in each case. The network should then be trained using standard techniques. The
hope is that the output will constitute a sufficient statistic.
Given enough training data, we could use a standard statistical technique such
as Parzen windows (Duda and Hart, 1973) to estimate the probability density in
output space. It is then straightforward to take an unknown input, calculate the
corresponding output vector O, and then estimate the probability that it belongs
to each class, according to the density of points of category c "at" location O in the
scatter plot.
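To make this postprocessing step concrete, here is a minimal sketch of a Parzen-windows class-probability estimate in output space. It is an illustration only, not the authors' code: the fixed spherical Gaussian kernel and the `width` parameter are simplifying assumptions of the sketch (section 2 of the paper replaces them with computed, per-point window shapes).

```python
import numpy as np

def parzen_class_posterior(o, calib_outputs, calib_labels, width, num_classes):
    """Estimate P(class = c | output vector o) from calibration data.

    calib_outputs: (N, C) network outputs on the calibration set.
    calib_labels:  (N,) integer category of each calibration example.
    width:         common kernel width (an assumption of this sketch).
    """
    # Gaussian kernel value of each calibration point at location o.
    sq_dists = np.sum((calib_outputs - o) ** 2, axis=1)
    k = np.exp(-0.5 * sq_dists / width ** 2)
    # Density of category-c points at o, relative to the total density.
    per_class = np.array([k[calib_labels == c].sum() for c in range(num_classes)])
    total = per_class.sum()
    if total == 0:
        return np.full(num_classes, 1.0 / num_classes)  # far from all data
    return per_class / total
```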
We note that methods such as Parzen windows tend to fail when the number of
dimensions becomes too large, because it is exponentially harder to estimate probability densities in high-dimensional spaces; this is often referred to as "the curse
of dimensionality" (Duda and Hart, 1973). Since the number of output units (typically 10 in our OCR network) is much smaller than the number of input units
(typically 400) the method proposed here has a tremendous advantage compared to
classical statistical methods applied directly to the input vectors. This advantage
is increased by the fact that the distribution of points in network-output space is
much more regular than the distribution in the original space.
Figure 1: Scatter Plot: Category 1 versus 0 (two panels: calibration data for Category 1, and calibration data for Category 0)
One axis in each plane represents the activation level of output unit
j=0, while the other axis represents activation level of output unit j=1;
the other 8 dimensions of output space are suppressed in this projection.
Points in the upper and lower plane are, respectively, assigned category
"1" and "0" by the calibration set. The clusters appear elongated because
there are so many ways that an item can be neither a "1" nor a "0". This
figure contains over 500 points; the cluster centers are heavily overexposed.
Figure 2: Scatter Plot: Category 5 versus 3 (two panels: calibration data for Category 5, and calibration data for Category 3)
This is the same as the previous figure except for the choice of data
points and projection axes.
2 Output Distribution for a Particular Input
The purpose of this section is to discuss the effect that limitations in the quantity
and/or quality of training data have on the reliability of neural-net outputs. Only an
outline of the argument can be presented here; details of the calculation can be found
in (Denker and leCun, 1990). This section does not use the ideas developed in the
previous section; the two lines of thought will converge in section 3. The calculation
proceeds in two steps: (1) to calculate the range of weight values consistent with the
training data, and then (2) to calculate the sensitivity of the output to uncertainty in
weight space. The result is a network that not only produces a "best guess" output,
but also an "error bar" indicating the confidence interval around that output.
The best formulation of the problem is to imagine that the input-output relation
of the network is given by a probability distribution P(O, I) [rather than the usual
function O = f(I)] where I and O represent the input vector and output vector respectively. For any specific input pattern, we get a probability distribution
P_{O|I}(O|I), which can be thought of as a histogram describing the probability of
various output values.
Even for a definite input I, the output will be probabilistic, because there is never
enough information in the training set to determine the precise value of the weight
vector W. Typically there are non-trivial error bars on the training data. Even when
the training data is absolutely noise-free (e.g. when it is generated by a mathematical
function on a discrete input space (Denker et al., 1987)) the output can still be
uncertain if the network is underdetermined; the uncertainty arises from lack of
data quantity, not quality. In the real world one is faced with both problems: less
than enough data to (over ) determine the network, and less than complete confidence
in the data that does exist.
We assume we have a handy method (e.g. back-prop) for finding a (local) minimum
W of the loss function E(W). A second-order Taylor expansion should be valid in
the vicinity of W. Since the loss function E is an additive function of training data,
and since probabilities are multiplicative, it is not surprising that the likelihood of a
weight configuration is an exponential function of the loss (Tishby, Levin and Solla,
1989). Therefore the probability can be modelled locally as a multidimensional
gaussian centered at W; to a reasonable (Denker and leCun, 1990) approximation
the probability is proportional to:

    ρ(W) ∝ ρ₀ · exp( -(β/2) Σᵢ hᵢ (Wᵢ - Ŵᵢ)² )    (1)

where hᵢ is the second derivative of the loss (the Hessian), β is a scale factor that
determines our overall confidence in the training data, and ρ₀ expresses any information we have about prior probabilities. The sums run over the dimensions of
parameter space. The width of this gaussian describes the range of networks in the
ensemble that are reasonably consistent with the training data.
Because we have a probability distribution on W, the expression O = f_W(I) gives
a probability distribution on outputs O, even for fixed inputs I. We find that the
most probable output Ō corresponds to the most probable parameters Ŵ. This
unsurprising result indicates that we are on the right track.
We next would like to know what range of output values correspond to the allowed
range of parameter values. We start by calculating the sensitivity of the output
O = f_W(I) to changes in W (holding the input I fixed). For each output unit
j, the derivative of Oⱼ with respect to W can be evaluated by a straightforward
modification of the usual back-prop algorithm.
Our distribution of output values also has a second moment, which is given by a
surprisingly simple expression:

    σⱼ² = ⟨(Oⱼ - Ōⱼ)²⟩_ρ = Σᵢ γⱼᵢ² / (β hᵢ)    (2)

where γⱼᵢ denotes the gradient of Oⱼ with respect to Wᵢ. We now have the first
two moments of the output probability distribution (Ō and σ); we could calculate
more if we wished.
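As an illustration of equation 2, the following sketch turns the output gradients and the Hessian diagonal into per-output error bars. The diagonal treatment of h and the array interfaces are assumptions made to keep the example short; in practice γ can be obtained by the modified back-prop pass mentioned above.

```python
import numpy as np

def output_second_moments(gamma, h_diag, beta):
    """Equation 2: sigma_j^2 = sum_i gamma_{j,i}^2 / (beta * h_i).

    gamma:  (J, P) array with gamma[j, i] = dO_j / dW_i at the loss minimum.
    h_diag: (P,) positive diagonal of the Hessian of the loss at the minimum.
    beta:   scalar confidence in the training data.
    Returns the (J,) vector of output variances sigma_j^2.
    """
    return np.sum(gamma ** 2 / (beta * h_diag), axis=1)
```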
It is reasonable to expect that the weighted sums (before the squashing function)
at the last layer of our network are approximately normally distributed, since they
are sums of random variables. If the output units are arranged to be reasonably
linear, the output distribution is then given by

    P_{j|I}(Oⱼ|I) = N(Ōⱼ, σⱼ²)    (3)

where N is the conventional Normal (Gaussian) distribution with given mean and
variance, and where Ō and σ depend on I. For multiple output units, we must
consider the joint probability distribution P_{O|I}(O|I). If the different output units'
distributions are independent, P_{O|I} can be factored:

    P_{O|I}(O|I) = Πⱼ P_{j|I}(Oⱼ|I)    (4)
We have achieved the goal of this section: a formula describing a distribution of
outputs consistent with a given input. This is a much fancier statement than the
vanilla network's statement that Ō is "the" output. For a network that is not
underdetermined, in the limit β → ∞, P_{O|I} becomes a δ function located at Ō,
so our formalism contains the vanilla network as a special case. For general β, the
region where P_{O|I} is large constitutes a "confidence region" of size proportional to the
fuzziness 1/β of the data and to the degree to which the network is underdetermined.
Note that algorithms exist (Becker and Le Cun, 1989), (Le Cun, Denker and Solla,
1990) for calculating γ and h very efficiently; the time scales linearly with the
time of calculation of O. Equation 4 is remarkable in that it makes contact between
important theoretical ideas (e.g. the ensemble formalism) and practical techniques
(e.g. back-prop).
3 Combining the Distributions
Our main objective is an expression for P(c|I), the probability that input I should
be assigned category c. We get it by combining the idea that elements of the
calibration set ℒ are scattered in output space (section 1) with the idea that the
network output for each such element is uncertain because the network is underdetermined (section 2). We can then draw a scatter plot in which the calibration
data is represented not by zero-size points but by distributions in output space. One
can imagine each element of ℒ as covering the area spanned by its "error bars" of
size σ as given by equation 2. We can then calculate P(c|I) using ideas analogous
to Parzen windows, with the advantage that the shape and relative size of each
window is calculated, not assumed. The answer comes out to be:

    P(c|I) = ∫ [ Σ_{l∈ℒc} P_{O|I}(O|I_l) / Σ_{l∈ℒ} P_{O|I}(O|I_l) ] P_{O|I}(O|I) dO    (5)

where we have introduced ℒc to denote the subset of ℒ for which the assigned
category is c. Note that P_{O|I} (given by equation 4) is being used in two ways in this
formula: to calibrate the statistical postprocessor by summing over the elements of
ℒ, and also to calculate the fate of the input I (an element of the testing set).
Our result can be understood by analogy to Parzen windows, although it differs
from the standard Parzen windows scheme in two ways. First, it is pleasing that
we have a way of calculating the shape and relative size of the windows, namely
P_{O|I}. Secondly, after we have summed the windows over the calibration set ℒ, the
standard scheme would probe each window at the single point Ō; our expression
(equation 5) accounts for the fact that the network's response to the testing input
I is blurred over a region given by P_{O|I}(O|I) and calls for a convolution.
Correspondence with Softmax
We were not surprised that, in suitable limits, our formalism leads to a generalization of the highly useful "softmax" scheme (Bridle, 1990; Rumelhart, 1989). This
provides a deeper understanding of softmax and helps put our work in context.
The first factor in equation 5 is a perfectly well-defined function of O, but it could
be impractical to evaluate it from its definition (summing over the calibration set)
whenever it is needed. Therefore we sought a closed-form approximation for it.
After making some ruthless approximations and carrying out the integration in
equation 5, it reduces to

    P(c|I) = exp[ T_Δ (Ōc - T₀) / σ²cc ] / Σ_{c'} exp[ T_Δ (Ōc' - T₀) / σ²c'c' ]    (6)

where T_Δ is the difference between the target values (T⁺ - T⁻), T₀ is the average
of the target values, and σ²cj is the second moment of output unit j for data in
category c. This can be compared to the standard softmax expression

    P(c|I) = exp[ γ Ōc ] / Σ_{c'} exp[ γ Ōc' ]    (7)
We see that our formula has three advantages: (1) it is clear how to handle the
case where the targets are not symmetric about zero (non-vanishing T₀); (2) the
"gain" of the exponentials depends on the category c; and (3) the gains can be
calculated from measurable¹ properties of the data. Having the gain depend on
the category makes a lot of sense; one can see in the figures that some categories
are more tightly clustered than others. One weakness that our equation 6 shares
with softmax is the assumption that the output distribution of each output j is
circular (i.e. independent of c). This can be remedied by retracting some of the
approximations leading to equation 6.

¹Our formulas contain the overall confidence factor β, which is not as easily measurable as we would like.
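The generalized softmax of equation 6 is easy to implement once Ō and the per-category second moments have been measured on the calibration set. The sketch below assumes symmetric-by-default targets T⁺ = 1, T⁻ = -1; setting sigma2 to a constant and T₀ = 0 recovers the standard softmax of equation 7.

```python
import numpy as np

def generalized_softmax(o_bar, sigma2, t_plus=1.0, t_minus=-1.0):
    """Equation 6: per-category posterior with category-dependent gains.

    o_bar:  (C,) mean outputs for the test input.
    sigma2: (C,) measured second moment sigma^2_{cc} of output unit c
            on category-c calibration data.
    """
    t_delta = t_plus - t_minus            # T_Delta = T+ - T-
    t_zero = 0.5 * (t_plus + t_minus)     # T_0 = average target value
    logits = t_delta * (o_bar - t_zero) / sigma2
    logits -= logits.max()                # for numerical stability
    w = np.exp(logits)
    return w / w.sum()
```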
Summary: In a wide range of applications, it is extremely important to have good
estimates of the probability of correct classification (as well as runner-up probabilities). We have shown how to create a network that computes the parameters
of a probability distribution (or confidence interval) describing the set of outputs
that are consistent with a given input and with the training data. The method has
been described in terms of neural nets, but applies equally well to any parametric
estimation technique that allows calculation of second derivatives. The analysis
outlined here makes clear the assumptions inherent in previous schemes and offers
a well-founded way of calculating the required probabilities.
References
Becker, S. and Le Cun, Y. (1989). Improving the Convergence of Back-Propagation
Learning with Second-Order Methods. In Touretzky, D., Hinton, G., and Sejnowski, T., editors, Proc. of the 1988 Connectionist Models Summer School,
pages 29-37, San Mateo. Morgan Kaufman.
Bridle, J. S. (1990). Training Stochastic Model Recognition Algorithms as Networks can lead to Maximum Mutual Information Estimation of Parameters.
In Touretzky, D., editor, Advances in Neural Information Processing Systems,
volume 2, (Denver, 1989). Morgan Kaufman.
Denker, J. and leCun, Y. (1990). Transforming Neural-Net Output Levels to Probability Distributions. Technical Memorandum TM11359-901120-05, AT&T Bell
Laboratories, Holmdel NJ 07733.
Denker, J., Schwartz, D., Wittner, B., Solla, S. A., Howard, R., Jackel, L., and
Hopfield, J. (1987). Automatic Learning, Rule Extraction and Generalization.
Complex Systems, 1:877-922.
Duda, R. and Hart, P. (1973). Pattern Classification And Scene Analysis. Wiley
and Son.
Fogelman, F. (1990). personal communication.
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard,
W., and Jackel, L. D. (1990). Handwritten Digit Recognition with a Back-Propagation Network. In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 2, (Denver, 1989). Morgan Kaufman.
Le Cun, Y., Denker, J. S., and Solla, S. (1990). Optimal Brain Damage. In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 2,
(Denver, 1989). Morgan Kaufman.
Rumelhart, D. E. (1989). personal communication.
Tishby, N., Levin, E., and Solla, S. A. (1989). Consistent Inference of Probabilities
in Layered Networks: Predictions and Generalization. In Proceedings of the
International Joint Conference on Neural Networks, Washington DC.
It is a pleasure to acknowledge useful conversations with John Bridle.
| 419 |@word advantageous:1 duda:3 tried:1 harder:1 moment:4 configuration:1 contains:2 ala:1 surprising:1 activation:2 scatter:5 must:1 john:2 additive:1 shape:3 treating:1 plot:5 guess:2 item:1 plane:2 vanishing:1 provides:1 location:1 successive:1 mathematical:1 surprised:1 combine:1 nor:1 multi:2 brain:1 actual:2 curse:1 window:11 becomes:2 what:1 kaufman:4 string:1 developed:1 finding:1 transformation:2 nj:2 impractical:1 guarantee:1 multidimensional:1 shed:1 ro:1 schwartz:1 unit:10 normally:1 appear:1 positive:2 before:2 understood:1 local:1 limit:2 meet:1 approximately:1 mateo:1 collect:1 challenging:1 range:7 practical:2 lecun:7 testing:3 implement:1 definite:1 handy:1 differs:1 backpropagation:1 digit:2 area:1 axiom:2 bell:2 thought:2 projection:2 confidence:8 regular:1 get:2 layered:1 put:1 context:2 measurable:2 conventional:1 elongated:1 center:1 straightforward:2 starting:1 factored:1 rule:1 spanned:1 sufficent:1 handle:1 memorandum:1 analogous:1 target:7 imagine:2 heavily:1 element:5 rumelhart:3 recognition:5 particularly:1 located:1 solved:1 calculate:6 region:7 oe:1 solla:4 principled:1 transforming:4 pol:7 retracting:1 personal:2 trained:4 straggler:1 depend:2 segment:1 carrying:1 serve:1 po:1 joint:2 easily:1 hopfield:1 various:1 represented:1 train:2 distinct:1 sejnowski:1 outcome:1 quite:1 statistic:2 final:1 advantage:4 net:8 combining:2 convergence:1 cluster:3 requirement:1 produce:4 help:1 school:1 wished:1 indicate:1 come:1 correct:1 modifying:1 stochastic:1 centered:1 etcetera:2 generalization:4 clustered:1 probable:3 underdetermined:4 secondly:1 around:2 normal:1 exp:4 sought:1 purpose:1 estimation:3 proc:1 applicable:1 jackel:2 hubbard:1 create:1 weighted:1 hope:1 always:1 gaussian:3 rather:2 poi:5 ax:1 likelihood:1 indicates:1 sense:1 inference:1 typically:3 hidden:1 relation:1 oiii:1 fogelman:2 overall:3 classification:3 oii:5 constrained:1 softmax:6 special:2 summed:1 integration:1 ell:1 construct:1 never:1 f3:1 having:1 washington:1 mutual:1 extraction:1 represents:2 constitutes:1 others:1 connectionist:1 inherent:1 few:1 recognize:1 tightly:1 suit:1 pleasing:1 highly:1 circular:1 runner:2 weakness:1 henderson:1 light:1 divide:1 taylor:1 plotted:1 theoretical:2 uncertain:2 instance:1 formalism:4 soft:1 increased:1 calibrate:1 cost:1 subset:1 levin:2 too:1 tishby:2 unsurprising:1 answer:1 combined:2 density:3 international:1 sensitivity:2 probabilistic:3 eel:2 parzen:6 derivative:3 leading:1 account:1 blurred:1 satisfy:2 depends:1 multiplicative:1 lot:1 closed:1 start:1 sort:1 phoneme:1 variance:1 efficiently:1 ensemble:3 correspond:1 ofthe:1 generalize:1 modelled:1 handwritten:1 cc:1 touretzky:4 whenever:1 definition:1 obvious:1 bridle:4 gain:3 dataset:1 conversation:1 dimensionality:1 sophisticated:1 back:5 response:1 formulation:1 evaluated:1 though:1 arranged:1 lack:1 propagation:1 quality:2 effect:1 normalized:1 contain:1 vicinity:1 assigned:3 symmetric:1 laboratory:2 during:1 width:1 covering:1 outline:1 complete:1 image:1 ji:1 denver:3 exponentially:1 volume:3 discussed:2 automatic:1 vanilla:2 outlined:1 similarly:1 had:1 reliability:1 calibration:10 belongs:1 seen:1 minimum:1 morgan:4 cii:3 converge:1 determine:2 ii:2 multiple:1 reduces:1 technical:1 calculation:4 offer:1 wittner:1 hart:3 equally:1 prediction:1 histogram:1 represent:3 roe:2 achieved:1 interval:2 crucial:1 postprocessor:2 tend:2 fate:1 call:1 enough:3 architecture:2 perfectly:1 idea:11 expression:5 becker:2 speech:1 hessian:1 constitute:1 useful:3 clear:2 locally:1 processed:1 
category:19 exist:2 shifted:1 track:1 discrete:1 express:1 neither:1 sum:5 run:1 uncertainty:2 reasonable:2 yann:1 separation:1 draw:1 decision:1 holmdel:2 layer:1 oftraining:1 followed:1 summer:1 correspondence:1 constraint:4 scene:1 argument:1 extremely:1 optical:1 according:2 smaller:1 describes:1 son:1 character:3 suppressed:1 wi:1 cun:6 modification:1 making:1 taken:1 equation:8 mutually:1 discus:1 describing:3 fail:1 needed:1 know:1 denker:13 obey:2 ocr:3 probe:1 original:1 denotes:1 calculating:4 uj:1 classical:2 contact:2 objective:1 quantity:2 strategy:1 parametric:1 exclusive:1 usual:2 damage:1 gradient:1 pleasure:1 remedied:1 trivial:1 lour:1 statement:2 holding:1 unknown:1 perform:1 upper:1 convolution:1 markov:1 howard:2 acknowledge:1 hinton:1 communication:2 precise:1 dc:1 discovered:1 introduced:1 namely:1 required:1 boser:1 tremendous:1 bar:3 proceeds:1 pattern:2 built:1 max:1 oj:3 suitable:1 difficulty:1 scheme:5 numerous:1 axis:2 faced:1 prior:1 understanding:1 relative:3 loss:4 expect:1 limitation:1 proportional:2 analogy:1 versus:3 remarkable:1 degree:1 consistent:6 editor:4 share:1 squashing:1 course:1 summary:1 surprisingly:1 last:1 free:1 understand:1 deeper:1 wide:2 distributed:1 dimension:3 calculated:2 world:1 valid:1 computes:1 adaptive:1 san:1 founded:1 summing:2 assumed:1 reasonably:2 pjl:1 improving:1 expansion:1 complex:1 main:1 linearly:1 noise:1 allowed:1 referred:1 tl:3 scattered:1 wiley:1 exponential:2 formula:4 preprocessor:1 specific:1 rejection:1 soha:1 simply:1 applies:1 corresponds:1 determines:1 prop:4 goal:2 fuzziness:1 fw:2 change:1 typical:1 except:1 indicating:2 arises:1 fancier:1 absolutely:1 evaluate:2 |
3,523 | 4,190 | Active Learning with a Drifting Distribution
Liu Yang
Machine Learning Department
Carnegie Mellon University
[email protected]
Abstract
We study the problem of active learning in a stream-based setting, allowing the
distribution of the examples to change over time. We prove upper bounds on
the number of prediction mistakes and number of label requests for established
disagreement-based active learning algorithms, both in the realizable case and
under Tsybakov noise. We further prove minimax lower bounds for this problem.
1 Introduction
Most existing analyses of active learning are based on an i.i.d. assumption on the data. In this work,
we assume the data are independent, but we allow the distribution from which the data are drawn to
shift over time, while the target concept remains fixed. We consider this problem in a stream-based
selective sampling model, and are interested in two quantities: the number of mistakes the algorithm
makes on the first T examples in the stream, and the number of label requests among the first T
examples in the stream.
In particular, we study scenarios in which the distribution may drift within a fixed totally bounded
family of distributions. Unlike previous models of distribution drift [Bar92, CMEDV10], the minimax number of mistakes (or excess number of mistakes, in the noisy case) can be sublinear in the
number of samples.
We specifically study the classic CAL active learning strategy [CAL94] in this context, and bound
the number of mistakes and label requests the algorithm makes in the realizable case, under conditions on the concept space and the family of possible distributions. We also exhibit lower bounds
on these quantities that match our upper bounds in certain cases. We further study a noise-robust
variant of CAL, and analyze its number of mistakes and number of label requests in noisy scenarios
where the noise distribution remains fixed over time but the marginal distribution on X may shift.
In particular, we upper bound these quantities under Tsybakov's noise conditions [MT99]. We also
prove minimax lower bounds under these same conditions, though there is a gap between our upper
and lower bounds.
2 Definitions and Notation
As in the usual statistical learning problem, there is a standard Borel space X, called the instance
space, and a set C of measurable classifiers h : X → {-1, +1}, called the concept space. We
additionally have a space D of distributions on X, called the distribution space. Throughout, we
suppose that the VC dimension of C, denoted d below, is finite.

For any D₁, D₂ ∈ D, let ‖D₁ - D₂‖ = sup_A D₁(A) - D₂(A) denote the total variation pseudo-distance
between D₁ and D₂, where the set A in the sup ranges over all measurable subsets of X. For any
ε > 0, let D_ε denote a minimal ε-cover of D, meaning that D_ε ⊆ D and ∀D₁ ∈ D, ∃D₂ ∈ D_ε s.t.
‖D₁ - D₂‖ < ε, and that D_ε has minimal possible size |D_ε| among all subsets of D with this property.

In the learning problem, there is an unobservable sequence of distributions D₁, D₂, . . ., with each
D_t ∈ D, and an unobservable time-independent regular conditional distribution, which we represent
by a function η : X → [0, 1]. Based on these quantities, we let Z = {(X_t, Y_t)}_{t=1}^∞ denote an infinite
sequence of independent random variables, such that ∀t, X_t ∼ D_t, and the conditional distribution
of Y_t given X_t satisfies ∀x ∈ X, P(Y_t = +1 | X_t = x) = η(x). Thus, the joint distribution of
(X_t, Y_t) is specified by the pair (D_t, η), and the distribution of Z is specified by the collection
{D_t}_{t=1}^∞ along with η. We also denote by Z_t = {(X₁, Y₁), (X₂, Y₂), . . . , (X_t, Y_t)} the first t
such labeled examples. Note that the η conditional distribution is time-independent, since we are
restricting ourselves to discussing drifting marginal distributions on X, rather than drifting concepts.
Concept drift is an important and interesting topic, but is beyond the scope of our present discussion.

In the active learning protocol, at each time t, the algorithm is presented with the value X_t, and
is required to predict a label Ŷ_t ∈ {-1, +1}; then after making this prediction, it may optionally
request to observe the true label value Y_t; as a means of book-keeping, if the algorithm requests a
label Y_t on round t, we define Q_t = 1, and otherwise Q_t = 0.
We are primarily interested in two quantities. The first, M̂_T = Σ_{t=1}^T I[Ŷ_t ≠ Y_t], is the cumulative
number of mistakes up to time T. The second quantity of interest, Q̂_T = Σ_{t=1}^T Q_t, is the total
number of labels requested up to time T. In particular, we will study the expectations of these
quantities: M̄_T = E[M̂_T] and Q̄_T = E[Q̂_T]. We are particularly interested in the asymptotic
dependence of Q̄_T and M̄_T - M̄*_T on T, where M̄*_T = inf_{h∈C} E[ Σ_{t=1}^T I[h(X_t) ≠ Y_t] ]. We refer
to Q̄_T as the expected number of label requests, and to M̄_T - M̄*_T as the expected excess number
of mistakes. For any distribution P on X, we define er_P(h) = E_{X∼P}[ η(X)I[h(X) = -1] + (1 -
η(X))I[h(X) = +1] ], the probability of h making a mistake for X ∼ P and Y with conditional
probability of being +1 equal η(X). Note that, abbreviating er_t(h) = er_{D_t}(h) = P(h(X_t) ≠ Y_t),
we have M̄*_T = inf_{h∈C} Σ_{t=1}^T er_t(h).
Scenarios in which both M̄_T - M̄*_T and Q̄_T are o(T) (i.e., sublinear) are considered desirable, as
these represent cases in which we do "learn" the proper way to predict labels, while asymptotically using far fewer labels than passive learning. Once establishing conditions under which this is
possible, we may then further explore the trade-off between these two quantities.
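The protocol and its book-keeping can be written down in a few lines. The learner interface (predict / wants_label / update) is hypothetical, introduced only to make the order of events explicit: the prediction is committed before the optional label request.

```python
from itertools import islice

def run_stream(learner, stream, T):
    """Realizations of M_hat_T and Q_hat_T over the first T rounds.

    `stream` yields (x_t, y_t); `learner` exposes predict(x), wants_label(x),
    and update(x, y) (hypothetical methods for this sketch).
    """
    mistakes = queries = 0
    for x, y in islice(stream, T):
        y_hat = learner.predict(x)       # predict first ...
        mistakes += int(y_hat != y)      # ... contributing to M_hat_T
        if learner.wants_label(x):       # ... then optionally request Y_t
            queries += 1                 # Q_t = 1, contributing to Q_hat_T
            learner.update(x, y)
    return mistakes, queries
```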
We will additionally make use of the following notions. For V ⊆ C, let diam_t(V) =
sup_{h,g∈V} D_t({x : h(x) ≠ g(x)}). For h : X → {-1, +1}, let ēr_{s:t}(h) = (1/(t-s+1)) Σ_{u=s}^t er_u(h),
and for finite S ⊆ X × {-1, +1}, let ēr(h; S) = (1/|S|) Σ_{(x,y)∈S} I[h(x) ≠ y]. Also let C[S] = {h ∈ C :
ēr(h; S) = 0}. Finally, for a distribution P on X and r > 0, define B_P(h, r) = {g ∈ C : P(x :
h(x) ≠ g(x)) ≤ r}.
2.1 Assumptions
In addition to the assumption of independence of the X_t variables and that d < ∞, each result
below is stated under various additional assumptions. The weakest such assumption is that D is
totally bounded, in the following sense. For each ε > 0, let D_ε denote a minimal subset of D such
that ∀D ∈ D, ∃D′ ∈ D_ε s.t. ‖D - D′‖ < ε: that is, a minimal ε-cover of D. We say that D is totally
bounded if it satisfies the following assumption.

Assumption 1. ∀ε > 0, |D_ε| < ∞.

In some of the results below, we will be interested in deriving specific rates of convergence. Doing so
requires us to make stronger assumptions about D than mere total boundedness. We will specifically
consider the following condition, in which c, m ∈ [0, ∞) are constants.

Assumption 2. ∀ε > 0, |D_ε| < c · ε^{-m}.
For an example of a class D satisfying the total boundedness assumption, consider X = [0, 1]^n, and
let D be the collection of distributions that have uniformly continuous density function with respect
to the Lebesgue measure on X, with modulus of continuity at most some value ω(ε) for each value
of ε > 0, where ω(ε) is a fixed real-valued function with lim_{ε→0} ω(ε) = 0.

As a more concrete example, when ω(ε) = Lε for some L ∈ (0, ∞), this corresponds to the family
of Lipschitz continuous density functions with Lipschitz constant at most L. In this case, we have
|D_ε| ≤ O(ε^{-n}), satisfying Assumption 2.
3 Related Work
We discuss active learning under distribution drift, with fixed target concept. There are several
branches of the literature that are highly relevant to this, including domain adaptation [MMR09,
MMR08], online learning [Lit88], learning with concept drift, and empirical processes for independent but not identically distributed data [vdG00].
Stream-based Active Learning with a Fixed Distribution. [DKM09] show that a certain modified perceptron-like active learning algorithm can achieve a mistake bound O(d log(T)) and query
bound Õ(d log(T)), when learning a linear separator under a uniform distribution on the unit sphere,
in the realizable case. [DGS10] also analyze the problem of learning linear separators under a uniform distribution, but allowing Tsybakov noise. They find that with Q̄_T = Õ( d^{2α/(α+2)} · T^{2/(α+2)} ) queries,
it is possible to achieve an expected excess number of mistakes M̄_T - M̄*_T = Õ( d^{(α+1)/(α+2)} · T^{1/(α+2)} ).
At this time, we know of no work studying the number of mistakes and queries achievable by active
learning in a stream-based setting where the distribution may change over time.
Stream-based Passive Learning with a Drifting Distribution. There has been work on learning
with a drifting distribution and fixed target, in the context of passive learning. [Bar92, BL97] study
the problem of learning a subset of a domain from randomly chosen examples when the probability
distribution of the examples changes slowly but continually throughout the learning process; they
give upper and lower bounds on the best achievable probability of misclassification after a given
number of examples. They consider learning problems in which a changing environment is modeled
by a slowly changing distribution on the product space. The allowable drift is restricted by ensuring
that consecutive probability distributions are close in total variation distance. However, this assumption allows for certain malicious choices of distribution sequences, which shift the probability mass
into smaller and smaller regions where the algorithm is uncertain of the target's behavior, so that
the number of mistakes grows linearly in the number of samples in the worst case. More recently,
[FM97] have investigated learning when the distribution changes as a linear function of time. They
present algorithms that estimate the error of functions, using knowledge of this linear drift.
4 Active Learning in the Realizable Case
Throughout this section, suppose C is a fixed concept space and h* ∈ C is a fixed target function:
that is, er_t(h*) = 0. The family of scenarios in which this is true are often collectively referred
to as the realizable case. We begin our analysis by studying this realizable case because it greatly
simplifies the analysis, laying bare the core ideas in plain form. We will discuss more general
scenarios, in which er_t(h*) > 0, in later sections, where we find that essentially the same principles
apply there as in this initial realizable-case analysis.

We will be particularly interested in the performance of the following simple algorithm, due to
[CAL94], typically referred to as CAL after its discoverers. The version presented here is specified in
terms of a passive learning subroutine A (mapping any sequence of labeled examples to a classifier).
In it, we use the notation DIS(V) = {x ∈ X : ∃h, g ∈ V s.t. h(x) ≠ g(x)}, also used below.
CAL
1. t ← 0, Q₀ ← ∅, and let ĥ₀ = A(∅)
2. Do
3.   t ← t + 1
4.   Predict Ŷ_t = ĥ_{t-1}(X_t)
5.   If max_{y∈{-1,+1}} min_{h∈C} ēr(h; Q_{t-1} ∪ {(X_t, y)}) = 0
6.     Request Y_t, let Q_t = Q_{t-1} ∪ {(X_t, Y_t)}
7.   Else let Y′_t = argmin_{y∈{-1,+1}} min_{h∈C} ēr(h; Q_{t-1} ∪ {(X_t, y)}), and let Q_t ← Q_{t-1} ∪ {(X_t, Y′_t)}
8.   Let ĥ_t = A(Q_t)
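For intuition, here is a sketch of CAL specialized to threshold classifiers h_a(x) = sign(x - a) on [0, 1] in the realizable case. The specialization is ours, not the paper's: for thresholds, the set of consistent hypotheses C[Z_t] is an interval [lo, hi], so the agreement test in step 5 reduces to checking whether X_t falls inside it, and the inferred label of step 7 is implicit.

```python
def cal_thresholds(stream, lo=0.0, hi=1.0):
    """CAL for threshold classifiers on [0, 1]; stream yields (x_t, h*(x_t)).

    Returns realized (mistakes, queries); h_a(x) = +1 iff x >= a.
    """
    mistakes = queries = 0
    for x, y in stream:
        # Predict with a central element of the current version space.
        y_hat = 1 if x >= (lo + hi) / 2 else -1
        mistakes += int(y_hat != y)
        if lo < x < hi:                   # both labels realizable: query
            queries += 1
            if y > 0:
                hi = min(hi, x)           # consistent thresholds have a <= x
            else:
                lo = max(lo, x)           # consistent thresholds have a > x
        # Otherwise the label is inferred and the version space is unchanged.
    return mistakes, queries
```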
Below, we let A^{1IG} denote the one-inclusion graph prediction strategy of [HLW94]. Specifically,
the passive learning algorithm A^{1IG} is specified as follows. For a sequence of data points U ∈ X^{t+1},
the one-inclusion graph is a graph, where each vertex represents a distinct labeling of U that can be
realized by some classifier in C, and two vertices are adjacent if and only if their corresponding
labelings for U differ by exactly one label. We use the one-inclusion graph to define a classifier
based on t training points as follows. Given t labeled data points L = {(x1 , y1 ), . . . , (xt , yt )}, and
one test point xt+1 we are asked to predict a label for, we first construct the one-inclusion graph
on U = {x1 , . . . , xt+1 }; we then orient the graph (give each edge a unique direction) in a way that
minimizes the maximum out-degree, and breaks ties in a way that is invariant to permutations of the
order of points in U ; after orienting the graph in this way, we examine the subset of vertices whose
corresponding labeling of U is consistent with L; if there is only one such vertex, then we predict for
xt+1 the corresponding label from that vertex; otherwise, if there are two such vertices, then they are
adjacent in the one-inclusion graph, and we choose the one toward which the edge is directed and
use the label for xt+1 in the corresponding labeling of U as our prediction for the label of xt+1 . See
[HLW94] and subsequent work for detailed studies of the one-inclusion graph prediction strategy.
4.1 Learning with a Fixed Distribution
We begin the discussion with the simplest case: namely, when |D| = 1.
Definition 1. [Han07, Han11] Define the disagreement coefficient of h* under a distribution P as

    θ_P(ε) = sup_{r>ε} P(DIS(B_P(h*, r))) / r.

Theorem 1. For any distribution P on X, if D = {P}, then running CAL with A = A^{1IG} achieves
expected mistake bound M̄_T = O(d log(T)) and expected query bound Q̄_T = O( θ_P(ε_T) d log²(T) ),
for ε_T = d log(T)/T.
For completeness, the proof is included in the supplemental materials.
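As a worked example (ours, not the paper's): for threshold classifiers h_a(x) = sign(x - a) with P uniform on [0, 1], B_P(h*, r) consists of the thresholds within r of the target a*, so DIS(B_P(h*, r)) is an interval of probability mass at most 2r, giving θ_P(ε) ≤ 2 for all ε. Since thresholds have d = 1, Theorem 1 then yields M̄_T = O(log T) and Q̄_T = O(log² T) for this class, matching the intuition that a near-binary search needs few labels.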
4.2 Learning with a Drifting Distribution
We now generalize the above results to any sequence of distributions from a totally bounded space
D. Throughout this section, let θ_D(ε) = sup_{P∈D} θ_P(ε).
First, we prove a basic result stating that CAL can achieve a sublinear number of mistakes, and
under conditions on the disagreement coefficient, also a sublinear number of queries.
Theorem 2. If D is totally bounded (Assumption 1), then CAL (with A any empirical risk minimization algorithm) achieves an expected mistake bound M̄_T = o(T), and if θ_D(ε) = o(1/ε), then CAL
makes an expected number of queries Q̄_T = o(T).
Proof. As mentioned, given that ēr(h*; Q_{t-1}) = 0, we have that Y′_t in Step 7 must equal h*(X_t),
so that the invariant ēr(h*; Q_t) = 0 is maintained for all t by induction. In particular, this implies
Q_t = Z_t for all t.

Fix any ε > 0, and enumerate the elements of D_ε so that D_ε = {P₁, P₂, . . . , P_{|D_ε|}}. For each t ∈ ℕ,
let k(t) = argmin_{k≤|D_ε|} ‖P_k - D_t‖, breaking ties arbitrarily. Let

    L(ε) = (8/ε) ( d ln(24/ε) + ln(4/ε) ).

For each i ≤ |D_ε|, if k(t) = i for infinitely many t ∈ ℕ, then let T_i denote the smallest value of T
such that |{t ≤ T : k(t) = i}| = L(ε). If k(t) = i only finitely many times, then let T_i denote the
largest index t for which k(t) = i, or T_i = 1 if no such index t exists.

Let T_ε = max_{i≤|D_ε|} T_i and V_ε = C[Z_{T_ε}]. We have that ∀t > T_ε, diam_t(V_ε) ≤ diam_{k(t)}(V_ε) + ε.
For each i, let L_i be a sequence of L(ε) i.i.d. pairs (X, Y) with X ∼ P_i and Y = h*(X), and let
V_i = C[L_i]. Then ∀t > T_ε,

    E[diam_{k(t)}(V_ε)] ≤ E[diam_{k(t)}(V_{k(t)})] + Σ_{s≤T_i : k(s)=k(t)} ‖D_s - P_{k(s)}‖ ≤ E[diam_{k(t)}(V_{k(t)})] + L(ε)ε.

By classic results in the theory of PAC learning [AB99, Vap82] and our choice of L(ε), ∀t > T_ε,
E[diam_{k(t)}(V_{k(t)})] ≤ ε.

Combining the above arguments,

    E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] ≤ T_ε + Σ_{t=T_ε+1}^T E[diam_t(V_ε)]
        ≤ T_ε + εT + Σ_{t=T_ε+1}^T E[diam_{k(t)}(V_ε)]
        ≤ T_ε + εT + L(ε)εT + Σ_{t=T_ε+1}^T E[diam_{k(t)}(V_{k(t)})]
        ≤ T_ε + εT + L(ε)εT + εT.

Let ε_T be any nonincreasing sequence in (0, 1) such that 1 ≪ Tε_T ≪ T. Since |D_ε| < ∞ for all
ε > 0, we must have T_{ε_T} = o(T). Thus, noting that lim_{ε→0} L(ε)ε = 0, we have

    E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] ≤ T_{ε_T} + ε_T T + L(ε_T)ε_T T + ε_T T ≪ T.    (1)

The result on M̄_T now follows by noting that any ĥ_{t-1} ∈ C[Z_{t-1}] has er_t(ĥ_{t-1}) ≤ diam_t(C[Z_{t-1}]), so

    M̄_T = E[ Σ_{t=1}^T er_t(ĥ_{t-1}) ] ≤ E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] ≪ T.

Similarly, for r > 0, we have

    P(Request Y_t) = E[ P(X_t ∈ DIS(C[Z_{t-1}]) | Z_{t-1}) ] ≤ E[ P(X_t ∈ DIS(C[Z_{t-1}] ∪ B_{D_t}(h*, r))) ]
        ≤ E[ θ_D(r) · max{ diam_t(C[Z_{t-1}]), r } ] ≤ θ_D(r) · r + θ_D(r) · E[ diam_t(C[Z_{t-1}]) ].

Letting r_T = T^{-1} E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ], we see that r_T → 0 by (1), and since θ_D(ε) =
o(1/ε), we also have θ_D(r_T) r_T → 0, so that θ_D(r_T) r_T T ≪ T. Therefore, Q̄_T equals

    Σ_{t=1}^T P(Request Y_t) ≤ θ_D(r_T) · r_T · T + θ_D(r_T) · E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] = 2 θ_D(r_T) · r_T · T ≪ T.
We can also state a more specific result in the case when we have some more detailed information
on the sizes of the finite covers of D.
Theorem 3. If Assumption 2 is satisfied, then CAL (with A any empirical risk minimization algorithm) achieves an expected number of mistakes M̄_T and expected number of queries Q̄_T such that

    M̄_T = O( T^{m/(m+1)} · d^{1/(m+1)} · log²T ) and Q̄_T = O( θ_D(ε_T) · T^{m/(m+1)} · d^{1/(m+1)} · log²T ),

where ε_T = (d/T)^{1/(m+1)}.
Proof. Fix ε > 0, enumerate D_ε = {P₁, P₂, . . . , P_{|D_ε|}}, and for each t ∈ ℕ, let k(t) =
argmin_{1≤k≤|D_ε|} ‖D_t - P_k‖. Let {X′_t}_{t=1}^∞ be a sequence of independent samples, with X′_t ∼ P_{k(t)},
and Z′_t = {(X′₁, h*(X′₁)), . . . , (X′_t, h*(X′_t))}. Then

    E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] ≤ E[ Σ_{t=1}^T diam_t(C[Z′_{t-1}]) ] + Σ_{t=1}^T ‖D_t - P_{k(t)}‖
        ≤ E[ Σ_{t=1}^T diam_t(C[Z′_{t-1}]) ] + εT ≤ Σ_{t=1}^T E[ diam_{P_{k(t)}}(C[Z′_{t-1}]) ] + 2εT.

The classic convergence rates results from PAC learning [AB99, Vap82] imply

    Σ_{t=1}^T E[ diam_{P_{k(t)}}(C[Z′_{t-1}]) ] = Σ_{t=1}^T O( d log t / |{i ≤ t : k(i) = k(t)}| )
        ≤ O(d log T) · Σ_{t=1}^T 1/|{i ≤ t : k(i) = k(t)}|
        ≤ O(d log T) · |D_ε| · Σ_{u=1}^{⌈T/|D_ε|⌉} 1/u ≤ O( d |D_ε| log²(T) ).

Thus, Σ_{t=1}^T E[diam_t(C[Z_{t-1}])] ≤ O( d |D_ε| log²(T) ) + 2εT ≤ O( d · ε^{-m} · log²(T) + εT ).
Taking ε = (T/d)^{-1/(m+1)}, this is O( d^{1/(m+1)} · T^{m/(m+1)} · log²(T) ). We therefore have

    M̄_T ≤ E[ Σ_{t=1}^T sup_{h∈C[Z_{t-1}]} er_t(h) ] ≤ E[ Σ_{t=1}^T diam_t(C[Z_{t-1}]) ] ≤ O( d^{1/(m+1)} · T^{m/(m+1)} · log²(T) ).

Similarly, letting ε_T = (d/T)^{1/(m+1)}, Q̄_T is at most

    E[ Σ_{t=1}^T D_t(DIS(C[Z_{t-1}])) ] ≤ E[ Σ_{t=1}^T D_t(DIS(B_{D_t}(h*, max{ diam_t(C[Z_{t-1}]), ε_T }))) ]
        ≤ E[ Σ_{t=1}^T θ_D(ε_T) · max{ diam_t(C[Z_{t-1}]), ε_T } ]
        ≤ E[ Σ_{t=1}^T θ_D(ε_T) · diam_t(C[Z_{t-1}]) ] + θ_D(ε_T) T ε_T ≤ O( θ_D(ε_T) · d^{1/(m+1)} · T^{m/(m+1)} · log²(T) ).
We can additionally construct a lower bound for this scenario, as follows. Suppose C contains a full
infinite binary tree for which all classifiers in the tree agree on some point. That is, there is a set of
points {x_b : b ∈ {0,1}^k, k ∈ ℕ} such that, for b₁ = 0 and ∀b₂, b₃, . . . ∈ {0, 1}, ∃h ∈ C such that
h(x_{(b₁,...,b_{j-1})}) = b_j for j ≥ 2. For instance, this is the case for linear separators (and most other
natural "geometric" concept spaces).
Theorem 4. For any C as above, for any active learning algorithm, there exists a set D satisfying
Assumption 2, a target function h* ∈ C, and a sequence of distributions {D_t}_{t=1}^T in D such that the
achieved M̄_T and Q̄_T satisfy

    M̄_T = Ω( T^{m/(m+1)} ), and M̄_T = O( T^{m/(m+1)} ) ⟹ Q̄_T = Ω( T^{m/(m+1)} ).

The proof is analogous to that of Theorem 9 below, and is therefore omitted for brevity.
5 Learning with Noise
In this section, we extend the above analysis to allow for various types of noise conditions commonly
studied in the literature. For this, we will need to study a noise-robust variant of CAL, below
referred to as Agnostic CAL (or ACAL). We prove upper bounds achieved by ACAL, as well as
(non-matching) minimax lower bounds.
5.1 Noise Conditions
The following assumption may be referred to as a strictly benign noise condition, which essentially
says the model is specified correctly in that h* ∈ C, and though the labels may be stochastic, they
are not completely random, but rather each is slightly biased toward the h* label.

Assumption 3. h* = sign(η - 1/2) ∈ C and ∀x, η(x) ≠ 1/2.
A particularly interesting special case of Assumption 3 is given by Tsybakov's noise conditions,
which essentially control how common it is to have η values close to 1/2. Formally:

Assumption 4. η satisfies Assumption 3 and for some c > 0 and α ≥ 0, ∀t > 0, P(|η(x) - 1/2| < t) < c · t^α.

In the setting of shifting distributions, we will be interested in conditions for which the above assumptions are satisfied simultaneously for all distributions in D. We formalize this in the following.

Assumption 5. Assumption 4 is satisfied for all D ∈ D, with the same c and α values.
5.2 Agnostic CAL
The following algorithm is essentially taken from [DHM07, Han11], adapted here for this stream-based setting. It is based on a subroutine:

    LEARN(L, Q) = argmin_{h∈C : ēr(h;L)=0} ēr(h; Q) if min_{h∈C} ēr(h; L) = 0, and otherwise LEARN(L, Q) = ∅.
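A minimal sketch of the LEARN subroutine for a finite hypothesis class follows; finiteness is an assumption of the sketch, made so the argmin can be taken by enumeration.

```python
def learn(L, Q, hypotheses):
    """LEARN(L, Q): among h consistent with L, minimize empirical error on Q;
    return None (standing in for the empty-set output) if nothing is consistent with L."""
    def err(h, S):
        return sum(h(x) != y for x, y in S) / max(len(S), 1)
    consistent = [h for h in hypotheses if err(h, L) == 0]
    if not consistent:
        return None
    return min(consistent, key=lambda h: err(h, Q))
```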
ACAL
1. t ← 0, L_t ← ∅, Q_t ← ∅, let ĥ_t be any element of C
2. Do
3.   t ← t + 1
4.   Predict Ŷ_t = ĥ_{t-1}(X_t)
5.   For each y ∈ {-1, +1}, let h^{(y)} = LEARN(L_{t-1} ∪ {(X_t, y)}, Q_{t-1})
6.   If either y has h^{(-y)} = ∅ or
       ēr(h^{(-y)}; L_{t-1} ∪ Q_{t-1}) - ēr(h^{(y)}; L_{t-1} ∪ Q_{t-1}) > Ê_{t-1}(L_{t-1}, Q_{t-1})
7.     L_t ← L_{t-1} ∪ {(X_t, y)}, Q_t ← Q_{t-1}
8.   Else Request Y_t, and let L_t ← L_{t-1}, Q_t ← Q_{t-1} ∪ {(X_t, Y_t)}
9.   Let ĥ_t = LEARN(L_t, Q_t)
10. If t is a power of 2
11.   L_t ← ∅, Q_t ← ∅
The algorithm is expressed in terms of a function Ê_t(L, Q), defined as follows. Let δ_i be
a nonincreasing sequence of values in (0, 1). Let ξ₁, ξ₂, . . . denote a sequence of independent Uniform({-1, +1}) random variables, also independent from the data. For V ⊆ C, let

    R̂_t(V) = sup_{h₁,h₂∈V} (1 / (t - 2^⌊log₂(t-1)⌋)) Σ_{m=2^⌊log₂(t-1)⌋+1}^t ξ_m · (h₁(X_m) - h₂(X_m)),

    D̂_t(V) = sup_{h₁,h₂∈V} (1 / (t - 2^⌊log₂(t-1)⌋)) Σ_{m=2^⌊log₂(t-1)⌋+1}^t |h₁(X_m) - h₂(X_m)|,

    Û_t(V, δ) = 12 R̂_t(V) + 34 √( D̂_t(V) ln(32t²/δ) / t ) + 752 ln(32t²/δ) / t.

Also, for any finite sets L, Q ⊆ X × Y, let C[L] = {h ∈ C : ēr(h; L) = 0}, and let
C_t(ε; L, Q) = {h ∈ C[L] : ēr(h; L ∪ Q) - min_{g∈C[L]} ēr(g; L ∪ Q) ≤ ε}. Then
define Û_t(ε, δ; L, Q) = Û_t(C_t(ε; L, Q), δ), and (letting Z_ε = {j ∈ ℤ : 2^j ≥ ε})

    Ê_t(L, Q) = inf{ ε > 0 : ∀j ∈ Z_ε, min_{m∈ℕ} Û_t(2^{-m}, δ_⌈log(t)⌉; L, Q) ≤ 2^{j-4} }.
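To make the quantities above concrete, here is a sketch of R̂_t, D̂_t, and Û_t(V, δ) for a finite set of hypotheses, represented by their ±1 predictions on the current epoch's points. The finite representation and the O(|V|²) pairwise maximization are simplifications of ours.

```python
import numpy as np

def u_hat(V_preds, xi, t, delta):
    """Compute U_hat_t(V, delta) from the definitions above.

    V_preds: (H, n) array of +/-1 predictions of each h in V on the n points
             X_m, m = 2^{floor(log2(t-1))}+1, ..., t (so n = t - 2^{floor(log2(t-1))}).
    xi:      (n,) independent uniform +/-1 signs.
    """
    n = V_preds.shape[1]
    diffs = V_preds[:, None, :] - V_preds[None, :, :]      # h1(X_m) - h2(X_m)
    r_hat = np.max(np.sum(xi * diffs, axis=2)) / n         # R_hat_t(V)
    d_hat = np.max(np.sum(np.abs(diffs), axis=2)) / n      # D_hat_t(V)
    log_term = np.log(32 * t ** 2 / delta)
    return 12 * r_hat + 34 * np.sqrt(d_hat * log_term / t) + 752 * log_term / t
```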
5.3 Learning with a Fixed Distribution
The following results essentially follow from [Han11], adapted to this stream-based setting.
Theorem 5. For any strictly benign (P, η), if 2^{-2i} ≤ δ_i ≤ 2^{-i}/i, ACAL achieves an expected
excess number of mistakes M̄_T - M̄*_T = o(T), and if θ_P(ε) = o(1/ε), then ACAL makes an
expected number of queries Q̄_T = o(T).
Theorem 6. For any (P, η) satisfying Assumption 4, if D = {P}, ACAL achieves an expected excess number of mistakes

    M̄_T - M̄*_T = Õ( d^{(α+1)/(α+2)} · T^{1/(α+2)} · log(T/δ_⌈log(T)⌉) + Σ_{i=0}^{⌈log(T)⌉} δ_i 2^i ),

and an expected number of queries

    Q̄_T = Õ( θ_P(ε_T) · d^{2/(α+2)} · T^{2/(α+2)} · log²(T/δ_⌈log(T)⌉) + Σ_{i=0}^{⌈log(T)⌉} δ_i 2^i ),

where ε_T = T^{-α/(α+2)}.
Corollary 1. For any (P, η) satisfying Assumption 4, if D = {P} and δ_i = 2^{-i} in ACAL, the
algorithm achieves an expected number of mistakes M̄_T and expected number of queries Q̄_T such
that, for ε_T = T^{-α/(α+2)},

    M̄_T - M̄*_T = Õ( d^{(α+1)/(α+2)} · T^{1/(α+2)} ), and Q̄_T = Õ( θ_P(ε_T) · d^{2/(α+2)} · T^{2/(α+2)} ).
5.4 Learning with a Drifting Distribution
We can now state our results concerning ACAL, which are analogous to Theorems 2 and 3 proved
earlier for CAL in the realizable case.
Theorem 7. If D is totally bounded (Assumption 1) and η satisfies Assumption 3, then ACAL with
δ_i = 2^{-i} achieves an excess expected mistake bound M̄_T - M̄*_T = o(T), and if additionally
θ_D(ε) = o(1/ε), then ACAL makes an expected number of queries Q̄_T = o(T).
The proof of Theorem 7 essentially follows from a combination of the reasoning for Theorem 2 and
Theorem 8 below. Its proof is omitted.
Theorem 8. If Assumptions 2 and 5 are satisfied, then ACAL achieves an expected excess number of mistakes

    M̄_T - M̄*_T = Õ( T^{((α+2)m+1)/((α+2)(m+1))} · log(T/δ_⌈log(T)⌉) + Σ_{i=0}^{⌈log(T)⌉} δ_i 2^i ),

and an expected number of queries

    Q̄_T = Õ( θ_D(ε_T) · T^{((α+2)(m+1)-α)/((α+2)(m+1))} · log(T/δ_⌈log(T)⌉) + Σ_{i=0}^{⌈log(T)⌉} δ_i 2^i ),

where ε_T = T^{-α/((α+2)(m+1))}.
The proof of this result is in many ways similar to that given above for the realizable case, and is
included among the supplemental materials.
We immediately have the following corollary for a specific δ_i sequence.
Corollary 2. With δ_i = 2^{-i} in ACAL, the algorithm achieves expected number of mistakes M̄_T and
expected number of queries Q̄_T such that, for ε_T = T^{-α/((α+2)(m+1))},

    M̄_T - M̄*_T = Õ( T^{((α+2)m+1)/((α+2)(m+1))} ) and Q̄_T = Õ( θ_D(ε_T) · T^{((α+2)(m+1)-α)/((α+2)(m+1))} ).
T
Just as in the realizable case, we can also state a minimax lower bound for this noisy setting.
Theorem 9. For any C as in Theorem 4, for any active learning algorithm, ? a set D satisfying
Assumption 2, a conditional distribution ?, such that Assumption 5 is satisfied, and a sequence of
T
? T and Q
? achieved by the learning algorithm satisfy
such that the M
distributions {D
t }t=1 in D
T 1+m?
2+m?
1+m?
? T ? M ? = ? T ?+2+m? and M
? T ? M ? = O T ?+2+m? =? Q
? T = ? T ?+2+m?
M
.
T
T
The proof is included in the supplemental material.
6 Discussion
Querying before Predicting: One interesting alternative to the above framework is to allow the
learner to make a label request before making its label predictions. From a practical perspective, this
may be more desirable and in many cases quite realistic. From a theoretical perspective, analysis
of this alternative framework essentially separates out the mistakes due to over-confidence from the
mistakes due to recognized uncertainty. In some sense, this is related to the KWIK model of learning
of [LLW08].
Analyzing the above procedures in this alternative model yields several interesting details. Specifically, the natural modification of CAL produces a method that (in the realizable case) makes the
same number of label requests as before, except that now it makes zero mistakes, since CAL will
request a label if there is any uncertainty about its label.
On the other hand, the analysis of the natural modification to ACAL can be far more subtle, when
there is noise. In particular, because the version space is only guaranteed to contain the best classifier with high confidence, there is still a small probability of making a prediction that disagrees
with the best classifier h* on each round that we do not request a label. So controlling the number of mistakes in this setting comes down to controlling the probability of removing h* from
the version space. However, this confidence parameter appears in the analysis of the number of
queries, so that we have a natural trade-off between the number of mistakes and the number of
label requests. In particular, under Assumptions 2 and 5, this procedure achieves an expected
excess number of mistakes M̄_T - M̄*_T ≤ Σ_{i=1}^{⌈log(T)⌉} δ_i 2^i, and an expected number of queries

    Q̄_T = Õ( θ_D(ε_T) · T^{((α+2)(m+1)-α)/((α+2)(m+1))} · log(T/δ_⌈log(T)⌉) + Σ_{i=0}^{⌈log(T)⌉} δ_i 2^i ),

where ε_T = T^{-α/((α+2)(m+1))}. In particular, given any nondecreasing sequence M_T, we can set
this δ_i sequence to maintain M̄_T - M̄*_T ≤ M_T for all T.
Open Problems: What is not implied by the results above is any sort of trade-off between the
number of mistakes and the number of queries. Intuitively, such a trade-off should exist; however,
as CAL lacks any parameter to adjust the behavior with respect to this trade-off, it seems we need a
different approach to address that question. In the batch setting, the analogous question is the tradeoff between the number of label requests and the number of unlabeled examples needed. In the
realizable case, that trade-off is tightly characterized by Dasgupta's splitting index analysis [Das05].
It would be interesting to determine whether the splitting index tightly characterizes the mistakesvs-queries trade-off in this stream-based setting as well.
In the batch setting, in which unlabeled examples are considered free, and performance is only measured as a function of the number of label requests, [BHV10] have found that there is an important
distinction between the verifiable label complexity and the unverifiable label complexity. In particular, while the former is sometimes no better than passive learning, the latter can always provide
improvements for VC classes. Is there such a thing as unverifiable performance measures in the
stream-based setting? To be concrete, we have the following open problem. Is there a method for
every VC class that achieves O(log(T )) mistakes and o(T ) queries in the realizable case?
References
[AB99] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[Bar92] P. L. Bartlett. Learning with a slowly changing distribution. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92, pages 243–252, 1992.
[BHV10] M.-F. Balcan, S. Hanneke, and J. Wortman Vaughan. The true sample complexity of active learning. Machine Learning, 80(2–3):111–139, September 2010.
[BL97] R. D. Barve and P. M. Long. On the complexity of learning from drifting distributions. Inf. Comput., 138(2):170–193, 1997.
[CAL94] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[CMEDV10] K. Crammer, Y. Mansour, E. Even-Dar, and J. Wortman Vaughan. Regret minimization with concept drift. In COLT, pages 168–180, 2010.
[Das05] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18, 2005.
[DGS10] O. Dekel, C. Gentile, and K. Sridharan. Robust selective sampling from single and multiple teachers. In Conference on Learning Theory, 2010.
[DHM07] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. Technical Report CS2007-0898, Department of Computer Science and Engineering, University of California, San Diego, 2007.
[DKM09] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281–299, 2009.
[FM97] Y. Freund and Y. Mansour. Learning under persistent drift. In Proceedings of the Third European Conference on Computational Learning Theory, EuroCOLT '97, pages 109–118, 1997.
[Han07] S. Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[Han11] S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
[HLW94] D. Haussler, N. Littlestone, and M. Warmuth. Predicting {0, 1}-functions on randomly drawn points. Information and Computation, 115:248–292, 1994.
[Lit88] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[LLW08] L. Li, M. L. Littman, and T. J. Walsh. Knows what it knows: A framework for self-aware learning. In International Conference on Machine Learning, 2008.
[MMR08] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems (NIPS), pages 1041–1048, 2008.
[MMR09] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[MT99] E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27:1808–1829, 1999.
[Vap82] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
[vdG00] S. van de Geer. Empirical Processes in M-Estimation (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge University Press, 2000.
| 4190 |@word version:3 achievable:2 stronger:1 seems:1 dekel:1 open:2 d2:1 boundedness:2 initial:1 liu:1 contains:1 series:1 existing:1 must:2 subsequent:1 realistic:1 benign:2 atlas:1 discrimination:1 fewer:1 warmuth:1 core:1 num:1 completeness:1 coarse:1 along:1 persistent:1 prove:5 expected:22 behavior:2 p1:2 examine:1 abbreviating:1 ming:1 eurocolt:1 kds:1 totally:6 abound:1 begin:2 bounded:6 notation:2 mass:1 agnostic:4 what:2 argmin:2 minimizes:1 supplemental:3 pseudo:1 every:1 ti:5 tie:2 exactly:1 classifier:7 control:1 unit:1 continually:1 before:3 engineering:1 mistake:31 analyzing:1 establishing:1 studied:1 walsh:1 range:1 directed:1 unique:1 practical:1 regret:1 procedure:2 barve:1 empirical:5 matching:1 confidence:3 regular:1 close:2 unlabeled:2 cal:16 context:2 risk:2 vaughan:2 measurable:2 yt:20 splitting:2 immediately:1 haussler:1 deriving:1 classic:3 notion:1 variation:2 ert:7 analogous:3 annals:2 target:6 suppose:3 pt:7 controlling:2 diego:1 element:2 satisfying:5 particularly:3 labeled:3 worst:1 region:1 trade:7 mentioned:1 environment:1 complexity:6 asked:1 littman:1 algo:1 minimiza:1 learner:1 completely:1 joint:1 various:2 distinct:1 query:18 bdt:2 labeling:3 whose:1 quite:1 valued:1 say:2 otherwise:3 statistic:2 nondecreasing:1 noisy:3 online:1 sequence:16 product:1 adaptation:3 relevant:1 combining:1 argmink:1 achieve:3 convergence:3 produce:1 stating:1 measured:1 finitely:1 qt:28 p2:2 c:1 implies:1 come:1 differ:1 direction:1 ab99:3 attribute:1 stochastic:1 vc:3 material:3 fix:2 generalization:1 strictly:2 considered:2 scope:1 predict:6 mapping:1 bj:2 achieves:11 consecutive:1 smallest:1 omitted:2 estimation:2 label:30 largest:1 minimization:2 always:1 argmin1:1 modified:1 rather:2 kalai:1 corollary:3 llw08:2 vk:4 improvement:1 greatly:1 rostamizadeh:2 sense:2 realizable:13 typically:1 selective:2 subroutine:2 interested:6 labelings:1 unobservable:2 among:3 colt:3 denoted:1 special:1 marginal:2 equal:3 once:1 construct:2 aware:1 sampling:2 represents:1 t2:1 report:1 primarily:1 randomly:2 simultaneously:1 tightly:2 ourselves:1 lebesgue:1 maintain:1 interest:1 highly:1 han07:2 adjust:1 nonincreasing:2 xb:1 edge:2 tree:2 littlestone:2 theoretical:2 minimal:4 uncertain:1 instance:2 earlier:1 cover:3 vertex:6 subset:5 uniform:2 wortman:2 dependency:1 teacher:1 density:2 international:2 probabilistic:1 off:7 quickly:1 concrete:2 earn:4 satisfied:4 choose:1 slowly:3 book:1 li:3 supp:1 de:1 b2:1 coefficient:2 satisfy:2 vi:1 stream:9 later:1 break:1 tion:1 h1:2 analyze:2 sup:3 doing:1 characterizes:1 sort:1 yield:1 generalize:1 mere:1 hanneke:3 kpk:1 monteleoni:2 definition:2 proof:8 hsu:1 proved:1 lim:2 knowledge:1 ut:3 formalize:1 subtle:1 appears:1 dt:10 follow:1 though:2 just:1 hand:1 cohn:1 lack:1 continuity:1 grows:1 orienting:1 b3:1 modulus:1 concept:10 true:3 contain:1 y2:1 former:1 q0:1 round:2 adjacent:2 self:1 maintained:1 mammen:1 allowable:1 tt:1 passive:6 balcan:1 reasoning:1 meaning:1 recently:1 common:1 mt:10 extend:1 mellon:1 refer:1 cambridge:3 mathematics:1 similarly:2 inclusion:6 hp:1 kwik:1 perspective:2 inf:4 irrelevant:1 scenario:6 liuy:1 certain:3 verlag:1 binary:1 arbitrarily:1 discussing:1 additional:1 gentile:1 recognized:1 determine:1 dhm07:2 kdt:2 branch:1 multiple:2 desirable:2 full:1 smooth:1 technical:1 match:1 characterized:1 sphere:1 long:1 concerning:1 ensuring:1 prediction:7 variant:2 basic:1 essentially:7 cmu:1 expectation:1 represent:2 sometimes:1 achieved:3 addition:1 else:2 malicious:1 source:1 biased:1 unlike:1 thing:1 yang:1 
noting:2 identically:1 independence:1 simplifies:1 idea:1 tradeoff:1 shift:3 whether:1 bartlett:2 york:1 dar:1 enumerate:2 detailed:2 verifiable:1 tsybakov:5 simplest:1 acal:13 exist:1 sign:1 correctly:1 carnegie:1 dasgupta:4 threshold:1 drawn:2 changing:3 erp:1 asymptotically:1 graph:9 orient:1 uncertainty:2 family:4 throughout:4 bound:23 ct:1 guaranteed:1 annual:1 adapted:2 bp:2 x2:1 unverifiable:2 argument:1 min:4 department:2 request:18 combination:1 kd:1 smaller:2 slightly:1 making:4 modification:2 intuitively:1 restricted:1 invariant:2 taken:1 ln:4 agree:1 remains:2 discus:2 needed:1 know:3 letting:3 studying:2 apply:1 observe:1 disagreement:3 alternative:3 batch:2 drifting:8 running:1 log2:7 implied:1 streamed:1 question:2 quantity:8 realized:1 strategy:3 dependence:1 usual:1 rt:11 exhibit:1 september:1 distance:2 separate:1 topic:1 toward:2 induction:1 laying:1 modeled:1 index:4 optionally:1 stated:1 zt:28 proper:1 allowing:2 upper:6 ladner:1 finite:4 y1:2 mansour:4 supa:1 drift:9 pair:2 required:1 specified:5 namely:1 california:1 distinction:1 established:1 vap82:3 nip:1 address:1 beyond:1 below:8 xm:4 including:1 max:4 shifting:1 power:1 misclassification:1 natural:4 predicting:2 minimax:5 imply:1 bare:1 literature:2 geometric:1 disagrees:1 asymptotic:1 freund:1 permutation:1 sublinear:4 interesting:5 suph:2 querying:1 discoverer:1 lit88:2 h2:3 das05:2 foundation:1 degree:1 consistent:1 principle:1 pi:1 mohri:2 keeping:1 free:1 dis:6 allow:3 perceptron:2 ber:1 taking:1 fifth:1 distributed:1 van:1 streambased:1 dimension:1 plain:1 cumulative:1 collection:2 commonly:1 san:1 far:2 excess:8 uni:1 active:20 b1:2 continuous:2 eru:1 additionally:4 learn:1 robust:3 improving:1 requested:1 investigated:1 european:1 separator:3 anthony:1 protocol:1 domain:4 pk:4 linearly:1 noise:12 x1:5 referred:4 borel:1 rithm:1 comput:1 breaking:1 third:1 theorem:15 down:1 removing:1 xt:31 specific:3 pac:2 er:13 maxi:1 weakest:1 exists:1 workshop:1 restricting:1 vapnik:1 mt99:2 gap:1 lt:11 explore:1 infinitely:1 expressed:1 collectively:1 springer:1 corresponds:1 satisfies:4 conditional:5 lipschitz:2 change:4 included:3 specifically:4 infinite:2 uniformly:1 except:1 called:3 total:5 geer:1 formally:1 latter:1 crammer:1 brevity:1 vdg00:2 d1:1 ex:1 |
Adaptive Hedge

Tim van Erven
Department of Mathematics, VU University
De Boelelaan 1081a, 1081 HV Amsterdam, the Netherlands
[email protected]

Peter Grünwald
Centrum Wiskunde & Informatica (CWI)
Science Park 123, P.O. Box 94079, 1090 GB Amsterdam, the Netherlands
[email protected]

Wouter M. Koolen
CWI and Department of Computer Science, Royal Holloway, University of London
Egham Hill, Egham, Surrey, TW20 0EX, United Kingdom
[email protected]

Steven de Rooij
Centrum Wiskunde & Informatica (CWI)
Science Park 123, P.O. Box 94079, 1090 GB Amsterdam, the Netherlands
[email protected]
Abstract
Most methods for decision-theoretic online learning are based on the Hedge algorithm, which takes a parameter called the learning rate. In most previous analyses
the learning rate was carefully tuned to obtain optimal worst-case performance,
leading to suboptimal performance on easy instances, for example when there exists an action that is significantly better than all others. We propose a new way
of setting the learning rate, which adapts to the difficulty of the learning problem: in the worst case our procedure still guarantees optimal performance, but on
easy instances it achieves much smaller regret. In particular, our adaptive method
achieves constant regret in a probabilistic setting, when there exists an action that
on average obtains strictly smaller loss than all other actions. We also provide a
simulation study comparing our approach to existing methods.
1 Introduction
Decision-theoretic online learning (DTOL) is a framework to capture learning problems that proceed
in rounds. It was introduced by Freund and Schapire [1] and is closely related to the paradigm of
prediction with expert advice [2, 3, 4]. In DTOL an agent is given access to a fixed set of K actions,
and at the start of each round must make a decision by assigning a probability to every action. Then
all actions incur a loss from the range [0, 1], and the agent's loss is the expected loss of the actions
under the probability distribution it produced. Losses add up over rounds and the goal for the agent
is to minimize its regret after T rounds, which is the difference in accumulated loss between the
agent and the action that has accumulated the least amount of loss.
The most commonly studied strategy for the agent is called the Hedge algorithm [1, 5]. Its performance crucially depends on a parameter $\eta$ called the learning rate. Different ways of tuning the learning rate have been proposed, which all aim to minimize the regret for the worst possible sequence of losses the actions might incur. If $T$ is known to the agent, then the learning rate may be tuned to achieve worst-case regret bounded by $\sqrt{T\ln(K)/2}$, which is known to be optimal as $T$ and $K$ become large [4]. Nevertheless, by slightly relaxing the problem, one can obtain better guarantees. Suppose for example that the cumulative loss $L^*_T$ of the best action is known to the agent beforehand. Then, if the learning rate is set appropriately, the regret is bounded by $\sqrt{2L^*_T\ln(K)} + \ln(K)$ [4], which has the same asymptotics as the previous bound in the worst case (because $L^*_T \le T$) but may be much better when $L^*_T$ turns out to be small. Similarly, Hazan and Kale [6] obtain a bound of $8\sqrt{\mathrm{VAR}^{\max}_T\ln(K)} + 10\ln(K)$ for a modification of Hedge if the cumulative empirical variance $\mathrm{VAR}^{\max}_T$ of the best expert is known. In applications it may be unrealistic to assume that $T$ or (especially) $L^*_T$ or $\mathrm{VAR}^{\max}_T$ is known beforehand, but at the cost of slightly worse constants such problems may be circumvented using either the doubling trick (setting a budget on the unknown quantity and restarting the algorithm with a double budget when the budget is depleted) [4, 7, 6], or a variable learning rate that is adjusted each round [4, 8].
Bounding the regret in terms of $L^*_T$ or $\mathrm{VAR}^{\max}_T$ is based on the idea that worst-case performance is not the only property of interest: such bounds give essentially the same guarantee in the worst case, but a much better guarantee in a plausible favourable case (when $L^*_T$ or $\mathrm{VAR}^{\max}_T$ is small). In this paper, we pursue the same goal for a different favourable case. To illustrate our approach, consider the following simplistic example with two actions: let $0 < a < b < 1$ be such that $b - a > 2\epsilon$. Then in odd rounds the first action gets loss $a + \epsilon$ and the second action gets loss $b - \epsilon$; in even rounds the actions get losses $a - \epsilon$ and $b + \epsilon$, respectively. Informally, this seems like a very easy instance of DTOL, because the cumulative losses of the actions diverge and it is easy to see from the losses which action is the best one. In fact, the Follow-the-Leader strategy, which puts all probability mass on the action with smallest cumulative loss, gives a regret of at most 1 in this case; the worst-case bound $O(\sqrt{L^*_T\ln(K)})$ is very loose by comparison, and so is $O(\sqrt{\mathrm{VAR}^{\max}_T\ln(K)})$, which is of the same order $\sqrt{T\ln(K)}$. On the other hand, for Follow-the-Leader one cannot guarantee sublinear regret for worst-case instances. (For example, if one out of two actions yields losses $\frac12, 0, 1, 0, 1, \ldots$ and the other action yields losses $0, 1, 0, 1, 0, \ldots$, its regret will be at least $T/2 - 1$.) To get the best of both worlds, we introduce an adaptive version of Hedge, called AdaHedge, that automatically adapts to the difficulty of the problem by varying the learning rate appropriately. As a result we obtain constant regret for the simplistic example above and other "easy" instances of DTOL, while at the same time guaranteeing $O(\sqrt{L^*_T\ln(K)})$ regret in the worst case.
It remains to characterise what we consider easy problems, which we will do in terms of the probabilities produced by Hedge. As explained below, these may be interpreted as a generalisation of
Bayesian posterior probabilities. We measure the difficulty of the problem in terms of the speed at
which the posterior probability of the best action converges to one. In the previous example, this
happens at an exponential rate, whereas for worst-case instances the posterior probability of the best
action does not converge to one at all.
Outline In the next section we describe a new way of tuning the learning rate, and show that it
yields essentially optimal performance guarantees in the worst case. To construct the AdaHedge
algorithm, we then add the doubling trick to this idea in Section 3, and analyse its worst-case regret.
In Section 4 we show that AdaHedge in fact incurs much smaller regret on easy problems. We
compare AdaHedge to other instances of Hedge by means of a simulation study in Section 5. The
proof of our main technical lemma is postponed to Section 6, and open questions are discussed in
the concluding Section 7. Finally, longer proofs are only available as Additional Material in the full
version at arXiv.org.
2 Tuning the Learning Rate
Setting Let the available actions be indexed by $k \in \{1, \ldots, K\}$. At the start of each round $t = 1, 2, \ldots$ the agent $\mathcal{A}$ is to assign a probability $w^k_t$ to each action $k$ by producing a vector $w_t = (w^1_t, \ldots, w^K_t)$ with nonnegative components that sum up to 1. Then every action $k$ incurs a loss $\ell^k_t \in [0,1]$, which we collect in the loss vector $\ell_t = (\ell^1_t, \ldots, \ell^K_t)$, and the loss of the agent is $w_t \cdot \ell_t = \sum_{k=1}^K w^k_t \ell^k_t$. After $T$ rounds action $k$ has accumulated loss $L^k_T = \sum_{t=1}^T \ell^k_t$, and the agent's regret is
$$R_{\mathcal{A}}(T) = \sum_{t=1}^T w_t \cdot \ell_t - L^*_T,$$
where $L^*_T = \min_{1 \le k \le K} L^k_T$ is the cumulative loss of the best action.
Hedge The Hedge algorithm chooses the weights $w^k_{t+1}$ proportional to $e^{-\eta L^k_t}$, where $\eta > 0$ is the learning rate. As is well-known, these weights may essentially be interpreted as Bayesian posterior probabilities on actions, relative to a uniform prior and pseudo-likelihoods $P^k_t = e^{-\eta L^k_t} = \prod_{s=1}^t e^{-\eta\ell^k_s}$ [9, 10, 4]:
$$w^k_{t+1} = \frac{\frac1K \cdot P^k_t}{B_t} = \frac{e^{-\eta L^k_t}}{\sum_{k'} e^{-\eta L^{k'}_t}}, \quad \text{where} \quad B_t = \sum_k \tfrac1K \cdot P^k_t = \sum_k \tfrac1K \cdot e^{-\eta L^k_t} \qquad (1)$$
is a generalisation of the Bayesian marginal likelihood. And like the ordinary marginal likelihood, $B_t$ factorizes into sequential per-round contributions:
$$B_t = \prod_{s=1}^{t} w_s \cdot e^{-\eta\ell_s}. \qquad (2)$$
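To make the update concrete, here is a minimal Python sketch of the Hedge weights; the function name and the shift by the minimum loss (for numerical stability) are our own choices, not part of the paper.

```python
import numpy as np

def hedge_weights(L, eta):
    """Weights w_{t+1}^k proportional to exp(-eta * L_t^k), as in eq. (1).

    L : array of cumulative losses L_t^k, one entry per action.
    Subtracting L.min() rescales all weights by the same factor,
    which the normalization removes, but avoids numerical underflow.
    """
    v = np.exp(-eta * (L - L.min()))
    return v / v.sum()
```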
We will sometimes write $w_t(\eta)$ and $B_t(\eta)$ instead of $w_t$ and $B_t$ in order to emphasize the dependence of these quantities on $\eta$.

The Learning Rate and the Mixability Gap A key quantity in our and previous [4] analyses is the gap between the per-round loss of the Hedge algorithm and the per-round contribution to the negative logarithm of the "marginal likelihood" $B_T$, which we call the mixability gap:
$$\delta_t(\eta) = w_t(\eta)\cdot\ell_t + \eta^{-1}\ln\big(w_t(\eta)\cdot e^{-\eta\ell_t}\big).$$
In the setting of prediction with expert advice, the subtracted term coincides with the loss incurred by the Aggregating Pseudo-Algorithm (APA) which, by allowing the losses of the actions to be mixed with optimal efficiency, provides an idealised lower bound for the actual loss of any prediction strategy [9]. The mixability gap measures how closely we approach this ideal. As the same interpretation still holds in the more general DTOL setting of this paper, we can measure the difficulty of the problem, and tune $\eta$, in terms of the cumulative mixability gap:
$$\Delta_T(\eta) = \sum_{t=1}^T \delta_t(\eta) = \sum_{t=1}^T w_t(\eta)\cdot\ell_t + \frac1\eta\ln B_T(\eta).$$
We proceed to list some basic properties of the mixability gap. First, it is nonnegative and bounded above by a constant that depends on $\eta$:

Lemma 1. For any $t$ and $\eta > 0$ we have $0 \le \delta_t(\eta) \le \eta/8$.

Proof. The lower bound follows by applying Jensen's inequality to the concave function $\ln$, the upper bound from Hoeffding's bound on the cumulant generating function [4, Lemma A.1].

Further, the cumulative mixability gap $\Delta_T(\eta)$ can be related to $L^*_T$ via the following upper bound, proved in the Additional Material:

Lemma 2. For any $T$ and $\eta \in (0,1]$ we have $\Delta_T(\eta) \le \dfrac{\eta L^*_T + \ln(K)}{e-1}$.

This relationship will make it possible to provide worst-case guarantees similar to what is possible when $\eta$ is tuned in terms of $L^*_T$. However, for easy instances of DTOL this inequality is very loose, in which case we can prove substantially better regret bounds. We could now proceed by optimizing the learning rate $\eta$ given the rather awkward assumption that $\Delta_T(\eta)$ is bounded by a known constant $b$ for all $\eta$, which would be the natural counterpart to an analysis that optimizes $\eta$ when a bound on $L^*_T$ is known. However, as $\Delta_T(\eta)$ varies with $\eta$ and is unknown a priori anyway, it makes more sense to turn the analysis on its head and start by fixing $\eta$. We can then simply run the Hedge algorithm until the smallest $T$ such that $\Delta_T(\eta)$ exceeds an appropriate budget $b(\eta)$, which we set to
$$b(\eta) = \left(\frac{1}{e-1} + \frac{1}{\eta}\right)\ln(K). \qquad (3)$$
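The mixability gap and the budget are equally direct to compute; the following helpers are our own sketch of these definitions, not code from the paper.

```python
import numpy as np

def mixability_gap(w, eta, loss):
    """delta_t(eta) = w . loss + (1/eta) * ln(w . exp(-eta * loss))."""
    return float(w @ loss + np.log(w @ np.exp(-eta * loss)) / eta)

def budget(eta, K):
    """b(eta) from eq. (3)."""
    return (1.0 / (np.e - 1.0) + 1.0 / eta) * np.log(K)
```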
When at some point the budget is depleted, i.e. $\Delta_T(\eta) \ge b(\eta)$, Lemma 2 implies that
$$\eta \ge \sqrt{(e-1)\ln(K)/L^*_T}, \qquad (4)$$
so that, up to a constant factor, the learning rate used by AdaHedge is at least as large as the learning rates proportional to $\sqrt{\ln(K)/L^*_T}$ that are used in the literature. On the other hand, it is not too large, because we can still provide a bound of order $O\big(\sqrt{L^*_T\ln(K)}\big)$ on the worst-case regret:

Theorem 3. Suppose the agent runs Hedge with learning rate $\eta \in (0,1]$, and after $T$ rounds has just used up the budget (3), i.e. $b(\eta) \le \Delta_T(\eta) < b(\eta) + \eta/8$. Then its regret is bounded by
$$R_{\mathrm{Hedge}(\eta)}(T) < \sqrt{\tfrac{4}{e-1}L^*_T\ln(K)} + \tfrac{1}{e-1}\ln(K) + \tfrac18.$$

Proof. The cumulative loss of Hedge is bounded by
$$\sum_{t=1}^T w_t \cdot \ell_t = \Delta_T(\eta) - \tfrac1\eta\ln B_T < b(\eta) + \eta/8 - \tfrac1\eta\ln B_T \le \tfrac{1}{e-1}\ln(K) + \tfrac18 + \tfrac2\eta\ln(K) + L^*_T, \qquad (5)$$
where we have used the bound $B_T \ge \tfrac1K e^{-\eta L^*_T}$. Plugging in (4) completes the proof.
3 The AdaHedge Algorithm

We now introduce the AdaHedge algorithm by adding the doubling trick to the analysis of the previous section. The doubling trick divides the rounds in segments $i = 1, 2, \ldots$, and on each segment restarts Hedge with a different learning rate $\eta_i$. For AdaHedge we set $\eta_1 = 1$ initially, and scale down the learning rate by a factor of $\varphi > 1$ for every new segment, such that $\eta_i = \varphi^{1-i}$. We monitor $\Delta_t(\eta_i)$, measured only on the losses in the $i$-th segment, and when it exceeds its budget $b_i = b(\eta_i)$ a new segment is started. The factor $\varphi$ is a parameter of the algorithm. Theorem 5 below suggests setting its value to the golden ratio $\varphi = (1+\sqrt5)/2 \approx 1.62$ or simply to $\varphi = 2$.
Algorithm 1 AdaHedge(φ)    (requires φ > 1)
  η ← φ
  for t = 1, 2, . . . do
    if t = 1 or Δ ≥ b then    (start a new segment)
      η ← η/φ;    b ← (1/(e−1) + 1/η) ln(K)
      Δ ← 0;    w = (w^1, . . . , w^K) ← (1/K, . . . , 1/K)
    end if
    Output probabilities w for round t    (make a decision)
    Actions receive losses ℓ_t
    Δ ← Δ + w · ℓ_t + (1/η) ln(w · e^{−ηℓ_t})    (prepare for the next round)
    w ← (w^1 e^{−ηℓ_t^1}, . . . , w^K e^{−ηℓ_t^K}) / (w · e^{−ηℓ_t})
  end for
end
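Putting the pieces together, a runnable Python sketch of Algorithm 1 might look as follows; the class interface (predict/update) and all names are our own assumptions rather than the authors' code.

```python
import numpy as np

class AdaHedge:
    """Sketch of Algorithm 1. phi > 1 is the per-segment shrink factor for eta."""

    def __init__(self, K, phi=2.0):
        self.K, self.phi = K, phi
        self.eta = phi          # first segment then uses eta = phi / phi = 1
        self.delta = 0.0        # cumulative mixability gap within the segment
        self.b = 0.0
        self.w = np.full(K, 1.0 / K)
        self.fresh = True       # plays the role of the "t = 1" test

    def predict(self):
        if self.fresh or self.delta >= self.b:   # start a new segment
            self.eta /= self.phi
            self.b = (1 / (np.e - 1) + 1 / self.eta) * np.log(self.K)
            self.delta = 0.0
            self.w = np.full(self.K, 1.0 / self.K)
            self.fresh = False
        return self.w

    def update(self, loss):
        mix = self.w @ np.exp(-self.eta * loss)
        self.delta += self.w @ loss + np.log(mix) / self.eta
        self.w = self.w * np.exp(-self.eta * loss) / mix
```

A round then consists of calling predict(), incurring the dot loss w · ℓ_t, and calling update(ℓ_t) with the observed loss vector.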
The regret of AdaHedge is determined by the number of segments it creates: the fewer segments there are, the smaller the regret.

Lemma 4. Suppose that after $T$ rounds, the AdaHedge algorithm has started $m$ new segments. Then its regret is bounded by
$$R_{\mathrm{AdaHedge}}(T) < 2\ln(K)\,\frac{\varphi^m - 1}{\varphi - 1} + m\big(\tfrac{1}{e-1}\ln(K) + \tfrac18\big).$$

Proof. The regret per segment is bounded as in (5). Summing over all $m$ segments, and plugging in $\sum_{i=1}^m 1/\eta_i = \sum_{i=0}^{m-1}\varphi^i = (\varphi^m - 1)/(\varphi - 1)$ gives the required inequality.
Using (4), one can obtain an upper bound on the number of segments that leads to the following guarantee for AdaHedge:

Theorem 5. Suppose the agent runs AdaHedge for $T$ rounds. Then its regret is bounded by
$$R_{\mathrm{AdaHedge}}(T) \le \frac{\varphi\sqrt{\varphi^2 - 1}}{\varphi - 1}\sqrt{\tfrac{4}{e-1}L^*_T\ln(K)} + O\big(\ln(L^*_T + 2)\ln(K)\big).$$

For details see the proof in the Additional Material. The value for $\varphi$ that minimizes the leading factor is the golden ratio $\varphi = (1+\sqrt5)/2$, for which $\varphi\sqrt{\varphi^2-1}/(\varphi-1) \approx 3.33$, but simply taking $\varphi = 2$ leads to a very similar factor of $\varphi\sqrt{\varphi^2-1}/(\varphi-1) \approx 3.46$.
4 Easy Instances

While the previous sections reassure us that AdaHedge performs well for the worst possible sequence of losses, we are also interested in its behaviour when the losses are not maximally antagonistic. We will characterise such sequences in terms of convergence of the Hedge posterior probability of the best action:
$$w^*_t(\eta) = \max_{1 \le k \le K} w^k_t(\eta).$$
(Recall that $w^k_t$ is proportional to $e^{-\eta L^k_{t-1}}$, so $w^*_t$ corresponds to the posterior probability of the action with smallest cumulative loss.) Technically, this is expressed by the following refinement of Lemma 1, which is proved in Section 6.

Lemma 6. For any $t$ and $\eta \in (0,1]$ we have $\delta_t(\eta) \le (e-2)\,\eta\,\big(1 - w^*_t(\eta)\big)$.

This lemma, which may be of independent interest, is a variation on Hoeffding's bound on the cumulant generating function. While Lemma 1 leads to a bound on $\Delta_T(\eta)$ that grows linearly in $T$, Lemma 6 shows that $\Delta_T(\eta)$ may grow much slower. In fact, if the posterior probabilities $w^*_t$ converge to 1 sufficiently quickly, then $\Delta_T(\eta)$ is bounded, as shown by the following lemma. Recall that $L^*_T = \min_{1 \le k \le K} L^k_T$.

Lemma 7. Let $\alpha$ and $\beta$ be positive constants, and let $\tau \in \mathbb{Z}^+$. Suppose that for $t = \tau, \tau+1, \ldots, T$ there exists a single action $k^*$ that achieves minimal cumulative loss $L^{k^*}_t = L^*_t$, and for $k \ne k^*$ the cumulative losses diverge as $L^k_t - L^*_t \ge \alpha t^\beta$. Then for all $\eta > 0$
$$\sum_{t=\tau}^T \big(1 - w^*_{t+1}(\eta)\big) \le C_K\,\eta^{-1/\beta},$$
where $C_K = (K-1)\,\alpha^{-1/\beta}\,\Gamma(1 + \tfrac1\beta)$ is a constant that does not depend on $\eta$, $\tau$ or $T$.
The lemma is proved in the Additional Material. Together with Lemmas 1 and 6, it gives an upper bound on $\Delta_T(\eta)$, which may be used to bound the number of segments started by AdaHedge. This leads to the following result, whose proof is also delegated to the Additional Material.

Let $s(m)$ denote the round in which AdaHedge starts its $m$-th segment, and let $L^k_r(m) = L^k_{s(m)+r-1} - L^k_{s(m)-1}$ denote the cumulative loss of action $k$ in that segment.

Lemma 8. Let $\alpha > 0$ and $\beta > 1/2$ be constants, and let $C_K$ be as in Lemma 7. Suppose there exists a segment $m^* \in \mathbb{Z}^+$ started by AdaHedge, such that $\tau := \lfloor 8\ln(K)\,\varphi^{(m^*-1)(2-1/\beta)} - 8(e-2)C_K + 1\rfloor \ge 1$ and for some action $k^*$ the cumulative losses in segment $m^*$ diverge as
$$L^k_r(m^*) - L^{k^*}_r(m^*) \ge \alpha r^\beta \quad \text{for all } r \ge \tau \text{ and } k \ne k^*. \qquad (6)$$
Then AdaHedge starts at most $m^*$ segments, and hence by Lemma 4 its regret is bounded by a constant:
$$R_{\mathrm{AdaHedge}}(T) = O(1).$$

In the simplistic example from the introduction, we may take $\alpha = b - a - 2\epsilon$ and $\beta = 1$, such that (6) is satisfied for any $\tau \ge 1$. Taking $m^*$ large enough to ensure that $\tau \ge 1$, we find that AdaHedge never starts more than $m^* = 1 + \big\lceil \log_\varphi\big(\tfrac{e-2}{\alpha\ln(2)} + \tfrac{1}{8\ln(2)}\big)\big\rceil$ segments. Let us also give an example of a probabilistic setting in which Lemma 8 applies:
Theorem 9. Let $\alpha > 0$ and $\delta \in (0,1]$ be constants, and let $k^*$ be a fixed action. Suppose the loss vectors $\ell_t$ are independent random variables such that the expected differences in loss satisfy
$$\min_{k \ne k^*} \mathbb{E}\big[\ell^k_t - \ell^{k^*}_t\big] \ge 2\alpha \quad \text{for all } t \in \mathbb{Z}^+. \qquad (7)$$
Then, with probability at least $1 - \delta$, AdaHedge starts at most
$$m^* = 1 + \left\lceil \log_\varphi\left(\frac{(K-1)(e-2)}{\alpha\ln(K)} + \frac{\ln\big(2K/(\alpha^2\delta)\big)}{4\alpha^2\ln(K)} + \frac{1}{8\ln(K)}\right)\right\rceil \qquad (8)$$
segments and consequently its regret is bounded by a constant:
$$R_{\mathrm{AdaHedge}}(T) = O\big(K + \log(1/\delta)\big).$$

This shows that the probabilistic setting of the theorem is much easier than the worst case, for which only a bound on the regret of order $O(\sqrt{T\ln(K)})$ is possible, and that AdaHedge automatically adapts to this easier setting. The proof of Theorem 9 is in the Additional Material. It verifies that the conditions of Lemma 8 hold with sufficient probability for $\beta = 1$, and $\alpha$ and $m^*$ as in the theorem.
5 Experiments

We compare AdaHedge to other hedging algorithms in two experiments involving simulated losses.

5.1 Hedging Algorithms
Follow-the-Leader. This algorithm is included because it is simple and very effective if the losses are not antagonistic, although as mentioned in the introduction its regret is linear in the worst case.

Hedge with fixed learning rate. We also include Hedge with a fixed learning rate
$$\eta = \sqrt{2\ln(K)/L^*_T}, \qquad (9)$$
which achieves the regret bound $\sqrt{2\ln(K)L^*_T} + \ln(K)$.¹ Since $\eta$ is a function of $L^*_T$, the agent needs to use post-hoc knowledge to use this strategy.

Hedge with doubling trick. The common way to apply the doubling trick to $L^*_T$ is to set a budget on $L^*_T$ and multiply it by some constant $\varphi_0$ at the start of each new segment, after which $\eta$ is optimized for the new budget [4, 7]. Instead, we proceed the other way around and with each new segment first divide $\eta$ by $\varphi = 2$ and then calculate the new budget such that (9) holds when $\Delta_t(\eta)$ reaches the budget. This way we keep the same invariant ($\eta$ is never larger than the right-hand side of (9), with equality when the budget is depleted), and the frequency of doubling remains logarithmic in $L^*_T$ with a constant determined by $\varphi$, so both approaches are equally valid. However, controlling the sequence of values of $\eta$ allows for easier comparison to AdaHedge.

AdaHedge (Algorithm 1). Like in the previous algorithm, we set $\varphi = 2$. Because of how we set up the doubling, both algorithms now use the same sequence of learning rates $1, 1/2, 1/4, \ldots$; the only difference is when they decide to start a new segment.

Hedge with variable learning rate. Rather than using the doubling trick, this algorithm, described in [8], changes the learning rate each round as a function of $L^*_t$. This way there is no need to relearn the weights of the actions in each block, which leads to a better worst-case bound and potentially better performance in practice. Its behaviour on easy problems, as we are currently interested in, has not been studied.
5.2 Generating the Losses

In both experiments we choose losses in {0, 1}. The experiments are set up as follows.

¹ Cesa-Bianchi and Lugosi use $\eta = \ln\big(1 + \sqrt{2\ln K/L^*_T}\big)$ [4], but the same bound can be obtained for the simplified expression we use.
[Figure 1: Simulation results. (a) I.I.D. losses; (b) correlated losses. Each panel plots regret against the number of rounds (0 to 10 000) for Hedge (doubling), Hedge (fixed learning rate), Hedge (variable learning rate), AdaHedge, and Follow-the-Leader; the regret axis runs to 100 in panel (a) and to 20 in panel (b).]
I.I.D. losses. In the first experiment, all T = 10 000 losses for all K = 4 actions are independent,
with distribution depending only on the action: the probabilities of incurring loss 1 are 0.35, 0.4,
0.45 and 0.5, respectively. The results are then averaged over 50 repetitions of the experiment.
Correlated losses. In the second experiment, the T = 10 000 loss vectors are still independent, but no longer identically distributed. In addition there are dependencies within the loss vectors $\ell_t$, between the losses for the K = 2 available actions: each round is hard with probability 0.3, and easy otherwise. If round $t$ is hard, then action 1 yields loss 1 with probability $1 - 0.01/t$ and action 2 yields loss 1 with probability $1 - 0.02/t$. If the round is easy, then the probabilities are flipped and the actions yield loss 0 with the same probabilities. The results are averaged over 200 repetitions.
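For reproducibility, the correlated-loss generator described above can be sketched as follows; this is our own code (the paper does not publish an implementation), written for a NumPy random generator such as np.random.default_rng().

```python
import numpy as np

def correlated_losses(T, rng):
    """Losses for 2 actions; rounds are 'hard' with probability 0.3."""
    losses = np.empty((T, 2))
    for t in range(1, T + 1):
        hard = rng.random() < 0.3
        p = np.array([1 - 0.01 / t, 1 - 0.02 / t])   # P(loss = 1) on hard rounds
        if hard:
            losses[t - 1] = (rng.random(2) < p).astype(float)
        else:
            # easy rounds flip the probabilities: loss 0 with probability p
            losses[t - 1] = (rng.random(2) >= p).astype(float)
    return losses
```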
5.3 Discussion and Results
Figure 1 shows the results of the experiments above. We plot the regret (averaged over repetitions
of the experiment) as a function of the number of rounds, for each of the considered algorithms.
I.I.D. Losses. In the first considered regime, the accumulated losses for each action diverge linearly with high probability, so that the regret of Follow-the-Leader is bounded. Based on Theorem 9
we expect AdaHedge to incur bounded regret also; this is confirmed in Figure 1(a). Hedge with a
fixed learning rate shows much larger regret. This happens because the learning rate, while it optimizes the worst-case bound, is much too small for this easy regime. In fact, if we were to include more rounds, the learning rate would be set to an even smaller value, clearly showing the need to determine the learning rate adaptively. The doubling trick provides one way to adapt the learning
rate; indeed, we observe that the regret of Hedge with the doubling trick is initially smaller than the
regret of Hedge with fixed learning rate. However, unlike AdaHedge, the algorithm never detects
that its current value of $\eta$ is working well; instead it keeps exhausting its budget, which leads to a
sequence of clearly visible bumps in its regret. Finally, it appears that the Hedge algorithm with
variable learning rate also achieves bounded regret. This is surprising, as the existing theory for
this algorithm only considers its worst-case behaviour, and the algorithm was not designed to do
specifically well in easy regimes.
Correlated Losses. In the second simulation we investigate the case where the mean cumulative loss of two actions is extremely close, within $O(\log t)$ of one another. If the losses of the actions were independent, such a small difference would be dwarfed by random fluctuations in the cumulative losses, which would be of order $O(\sqrt t)$. Thus the two actions can only be distinguished because we have made their losses dependent. Depending on the application, this may actually be a more natural scenario than complete independence as in the first simulation; for example, we can think of the losses as mistakes of two binary classifiers, say, two naive Bayes classifiers with different smoothing parameters. In such a scenario, losses will be dependent, and the difference in cumulative loss will be much smaller than $O(\sqrt t)$. In the previous experiment, the posterior weights of the actions converged relatively quickly for a large range of learning rates, so that the exact value of the learning
rate was most important at the start (e.g., from 3000 rounds onward Hedge with fixed learning rate
does not incur much additional regret any more). In this second setting, using a high learning rate
remains important throughout. This explains why in this case Hedge with variable learning rate can
no longer keep up with Follow-the-Leader. The results for AdaHedge are also interesting: although
Theorem 9 does not apply in this case, we may still hope that $\Delta_t(\eta)$ grows slowly enough that the
algorithm does not start too many segments. This turns out to be the case: over the 200 repetitions
of the experiment, AdaHedge started only 2.265 segments on average, which explains its excellent
performance in this simulation.
6 Proof of Lemma 6

Our main technical tool is Lemma 6. Its proof requires the following intermediate result:

Lemma 10. For any $\eta > 0$ and any time $t$, the function $f(\ell_t) = \ln\big(w_t \cdot e^{-\eta\ell_t}\big)$ is convex.

This may be proved by observing that $f$ is the convex conjugate of the Kullback-Leibler divergence. An alternative proof based on log-convexity is provided in the Additional Material.

Proof of Lemma 6. We need to bound $\delta_t = w_t(\eta)\cdot\ell_t + \frac1\eta\ln\big(w_t(\eta)\cdot e^{-\eta\ell_t}\big)$, which is a convex function of $\ell_t$ by Lemma 10. As a consequence, its maximum is achieved when $\ell_t$ lies on the boundary of its domain, such that the losses $\ell^k_t$ are either 0 or 1 for all $k$, and in the remainder of the proof we will assume (without loss of generality) that this is the case. Now let $\pi_t = w_t \cdot \ell_t$ be the posterior probability of the actions with loss 1. Then
$$\delta_t = \pi_t + \frac1\eta\ln\big((1-\pi_t) + \pi_t e^{-\eta}\big) = \pi_t + \frac1\eta\ln\big(1 + \pi_t(e^{-\eta} - 1)\big).$$
Using $\ln x \le x - 1$ and $e^{-\eta} - 1 \le -\eta + \frac12\eta^2$, we get $\delta_t \le \frac12\pi_t\eta$, which is tight for $\pi_t$ near 0. For $\pi_t$ near 1, rewrite
$$\delta_t = \pi_t - 1 + \frac1\eta\ln\big(e^\eta(1-\pi_t) + \pi_t\big)$$
and use $\ln x \le x - 1$ and $e^\eta \le 1 + \eta + (e-2)\eta^2$ for $\eta \le 1$ to obtain $\delta_t \le (e-2)(1-\pi_t)\eta$. Combining the bounds, we find
$$\delta_t \le (e-2)\,\eta\,\min\{\pi_t, 1-\pi_t\}.$$
Now, let $k^*$ be an action such that $w^*_t = w^{k^*}_t$. Then $\ell^{k^*}_t = 0$ implies $\pi_t \le 1 - w^*_t$. On the other hand, if $\ell^{k^*}_t = 1$, then $\pi_t \ge w^*_t$ so $1 - \pi_t \le 1 - w^*_t$. Hence, in both cases $\min\{\pi_t, 1-\pi_t\} \le 1 - w^*_t$, which completes the proof.
7 Conclusion and Future Work

We have presented a new algorithm, AdaHedge, that adapts to the difficulty of the DTOL learning problem. This difficulty was characterised in terms of convergence of the posterior probability of the best action. For hard instances of DTOL, for which the posterior does not converge, it was shown that the regret of AdaHedge is of the optimal order $O(\sqrt{L^*_T\ln(K)})$; for easy instances, for which the posterior converges sufficiently fast, the regret was bounded by a constant. This behaviour was confirmed in a simulation study, where the algorithm outperformed existing versions of Hedge.

A surprising observation in the experiments was the good performance of Hedge with a variable learning rate on some easy instances. It would be interesting to obtain matching theoretical guarantees, like those presented here for AdaHedge. A starting point might be to consider how fast the posterior probability of the best action converges to one, and plug that into Lemma 6.
Acknowledgments

The authors would like to thank Wojciech Kotłowski for useful discussions. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886, and by NWO Rubicon grant 680-50-1010. This publication only reflects the authors' views.
References

[1] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[2] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[3] V. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56(2):153–173, 1998.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
[6] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 57–67, 2008.
[7] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[8] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48–75, 2002.
[9] V. Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213–248, 2001.
[10] D. Haussler, J. Kivinen, and M. K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Transactions on Information Theory, 44(5):1906–1925, 1998.
[11] A. N. Shiryaev. Probability. Springer-Verlag, 1996.
A Denoising View of Matrix Completion

Weiran Wang and Miguel Á. Carreira-Perpiñán
EECS, University of California, Merced
http://eecs.ucmerced.edu

Zhengdong Lu
Microsoft Research Asia, Beijing
[email protected]
Abstract
In matrix completion, we are given a matrix where the values of only some of the
entries are present, and we want to reconstruct the missing ones. Much work has
focused on the assumption that the data matrix has low rank. We propose a more
general assumption based on denoising, so that we expect that the value of a missing entry can be predicted from the values of neighboring points. We propose a
nonparametric version of denoising based on local, iterated averaging with mean-shift, possibly constrained to preserve local low-rank manifold structure. The few
user parameters required (the denoising scale, number of neighbors and local dimensionality) and the number of iterations can be estimated by cross-validating
the reconstruction error. Using our algorithms as a postprocessing step on an
initial reconstruction (provided by e.g. a low-rank method), we show consistent
improvements with synthetic, image and motion-capture data.
Completing a matrix from a few given entries is a fundamental problem with many applications in
machine learning, computer vision, network engineering, and data mining. Much interest in matrix
completion has been caused by recent theoretical breakthroughs in compressed sensing [1, 2] as well
as by the now celebrated Netflix challenge on practical prediction problems [3, 4]. Since completion
of arbitrary matrices is not a well-posed problem, it is often assumed that the underlying matrix
comes from a restricted class. Matrix completion models almost always assume a low-rank structure
of the matrix, which is partially justified through factor models [4] and fast convex relaxation [2], and
often works quite well when the observations are sparse and/or noisy. The low-rank structure of the
matrix essentially asserts that all the column vectors (or the row vectors) live on a low-dimensional
subspace. This assumption is arguably too restrictive for problems with richer structure, e.g. when
each column of the matrix represents a snapshot of a seriously corrupted motion capture sequence
(see section 3), for which a more flexible model, namely a curved manifold, is more appropriate.
In this paper, we present a novel view of matrix completion based on manifold denoising, which
conceptually generalizes the low-rank assumption to curved manifolds. Traditional manifold denoising is performed on fully observed data [5, 6], aiming to send the data corrupted by noise back
to the correct surface (defined in some way). However, with a large proportion of missing entries,
we may not have a good estimate of the manifold. Instead, we start with a poor estimate and improve
it iteratively. Therefore the "noise" may be due not just to intrinsic noise, but mostly to inaccurately
estimated missing entries. We show that our algorithm can be motivated from an objective purely
based on denoising, and prove its convergence under some conditions. We then consider a more
general case with a nonlinear low-dimensional manifold and use a stopping criterion that works
successfully in practice. Our model reduces to a low-rank model when we require the manifold to
be flat, showing a relation with a recent thread of matrix completion models based on alternating
projection [7]. In our experiments, we show that our denoising-based matrix completion model can
make better use of the latent manifold structure on both artificial and real-world data sets, and yields
superior recovery of the missing entries.
The paper is organized as follows: section 1 reviews nonparametric denoising methods based on
mean-shift updates, section 2 extends this to matrix completion by using denoising with constraints,
section 3 gives experimental results, and section 4 discusses related work.
1 Denoising with (manifold) blurring mean-shift algorithms (GBMS/MBMS)

In Gaussian blurring mean-shift (GBMS), denoising is performed in a nonparametric way by local averaging: each data point moves to the average of its neighbors (to a certain scale), and the process is repeated. We follow the derivation in [8]. Consider a dataset $\{x_n\}_{n=1}^N \subset \mathbb{R}^D$ and define a Gaussian kernel density estimate
$$p(x) = \frac1N\sum_{n=1}^N G_\sigma(x, x_n) \qquad (1)$$
with bandwidth $\sigma > 0$ and kernel $G_\sigma(x, x_n) \propto \exp\big({-\tfrac12}(\|x - x_n\|/\sigma)^2\big)$ (other kernels may be used, such as the Epanechnikov kernel, which results in sparse affinities). The (non-blurring) mean-shift algorithm rearranges the stationary point equation $\nabla p(x) = 0$ into the iterative scheme $x^{(\tau+1)} = f(x^{(\tau)})$ with
$$x^{(\tau+1)} = f(x^{(\tau)}) = \sum_{n=1}^N p(n|x^{(\tau)})\,x_n, \qquad p(n|x^{(\tau)}) = \frac{\exp\big({-\tfrac12}\|(x^{(\tau)} - x_n)/\sigma\|^2\big)}{\sum_{n'=1}^N \exp\big({-\tfrac12}\|(x^{(\tau)} - x_{n'})/\sigma\|^2\big)}. \qquad (2)$$
This converges to a mode of $p$ from almost every initial $x \in \mathbb{R}^D$, and can be seen as taking self-adapting step sizes along the gradient (since the mean shift $f(x) - x$ is parallel to $\nabla p(x)$). This iterative scheme was originally proposed by [9] and it or variations of it have found widespread application in clustering [8, 10–12] and denoising of 3D point sets (surface fairing; [13, 14]) and manifolds in general [5, 6].
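For illustration, one step of eq. (2) takes a few lines of Python; this sketch and its names are ours, not the authors'.

```python
import numpy as np

def mean_shift_step(x, X, sigma):
    """One iteration x^{(tau+1)} = f(x^{(tau)}) of eq. (2).

    x : query point, shape (D,);  X : dataset, shape (N, D).
    """
    sq = ((x - X) ** 2).sum(axis=1) / (2 * sigma**2)
    p = np.exp(-(sq - sq.min()))   # posterior p(n|x), up to normalization
    p /= p.sum()
    return p @ X                   # weighted average of the data points
```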
The blurring mean-shift algorithm applies one step of the previous scheme, initialized from every point, in parallel for all points. That is, given the dataset $X = \{x_1, \ldots, x_N\}$, for each $x_n \in X$ we obtain a new point $\tilde x_n = f(x_n)$ by applying one step of the mean-shift algorithm, and then we replace $X$ with the new dataset $\tilde X$, which is a blurred (shrunk) version of $X$. By iterating this process we obtain a sequence of datasets $X^{(0)}, X^{(1)}, \ldots$ (and a corresponding sequence of kernel density estimates $p^{(0)}(x), p^{(1)}(x), \ldots$) where $X^{(0)}$ is the original dataset and $X^{(\tau)}$ is obtained by blurring $X^{(\tau-1)}$ with one mean-shift step. We can see this process as maximizing the following objective function [10] by taking parallel steps of the form (2) for each point:
$$E(X) = \sum_{n=1}^N p(x_n) = \frac1N\sum_{n,m=1}^N G_\sigma(x_n, x_m) \propto \sum_{n,m=1}^N e^{-\frac12\|\frac{x_n - x_m}{\sigma}\|^2}. \qquad (3)$$
This process eventually converges to a dataset $X^{(\infty)}$ where all points are coincident: a completely denoised dataset where all structure has been erased. As shown by [8], this process can be stopped early to return clusters (= locally denoised subsets of points); the number of clusters obtained is controlled by the bandwidth $\sigma$. However, here we are interested in the denoising behavior of GBMS.

The GBMS step can be formulated in a matrix form reminiscent of spectral clustering [8] as $\tilde X = XP$ where $X = (x_1, \ldots, x_N)$ is a $D \times N$ matrix of data points; $W$ is the $N \times N$ matrix of Gaussian affinities $w_{nm} = G_\sigma(x_n, x_m)$; $D = \operatorname{diag}\big(\sum_{n=1}^N w_{nm}\big)$ is the degree matrix; and $P = WD^{-1}$ is an $N \times N$ stochastic matrix: $p_{nm} = p(n|x_m) \in (0,1)$ and $\sum_{n=1}^N p_{nm} = 1$. $P$ (or rather its transpose) is the stochastic matrix of the random walk in a graph [15], which in GBMS represents the posterior probabilities of each point under the kernel density estimate (1). $P$ is similar to the matrix $N = D^{-1/2}WD^{-1/2}$ derived from the normalized graph Laplacian commonly used in spectral clustering, e.g. in the normalized cut [16]. Since, by the Perron-Frobenius theorem [17, ch. 8], all left eigenvalues of $P(X)$ have magnitude less than 1 except for one that equals 1 and is associated with an eigenvector of constant entries, iterating $\tilde X = XP(X)$ converges to the stationary distribution of each $P(X)$, where all points coincide.

From this point of view, the product $\tilde X = XP(X)$ can be seen as filtering the dataset $X$ with a data-dependent low-pass filter $P(X)$, which makes clear the denoising behavior. This also suggests using other filters [12] $\tilde X = X\,\phi(P(X))$ as long as $\phi(1) = 1$ and $|\phi(r)| < 1$ for $r \in [0,1)$, such as explicit schemes $\phi(P) = (1-\eta)I + \eta P$ for $\eta \in (0,2]$, power schemes $\phi(P) = P^n$ for $n = 1, 2, 3, \ldots$ or implicit schemes $\phi(P) = ((1+\eta)I - \eta P)^{-1}$ for $\eta > 0$.
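The matrix form of the step is just as short in code; the sketch below (our own naming) builds P = W D^{-1} and applies one blurring step to the D × N matrix X.

```python
import numpy as np

def gbms_step(X, sigma):
    """One GBMS step X <- X P(X), with X of shape (D, N) as in the text."""
    diff = X[:, :, None] - X[:, None, :]            # (D, N, N) pairwise differences
    W = np.exp(-(diff ** 2).sum(axis=0) / (2 * sigma**2))
    P = W / W.sum(axis=0, keepdims=True)            # column-stochastic: P = W D^{-1}
    return X @ P
```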
One important problem with GBMS is that it denoises equally in all directions. When the data lies on a low-dimensional manifold, denoising orthogonally to it removes out-of-manifold noise, but denoising tangentially to it perturbs intrinsic degrees of freedom of the data and causes shrinkage of
the entire manifold (most strongly near its boundary). To prevent this, the manifold blurring mean-shift algorithm (MBMS) [5] first computes a predictor averaging step with GBMS, and then for each point $x_n$ a corrector projective step removes the step direction that lies in the local tangent space of $x_n$ (obtained from local PCA run on its $k$ nearest neighbors). In practice, both GBMS and MBMS must be stopped early to prevent excessive denoising and manifold distortions.
2 Blurring mean-shift denoising algorithms for matrix completion
We consider the natural extension of GBMS to the matrix completion case by adding the constraints given by the present values. We use the subindex notation $X_M$ and $X_P$ to indicate selection of the missing or present values of the matrix $X_{D\times N}$, where $P \subset U$, $M = U \setminus P$ and $U = \{(d,n)\colon d = 1,\ldots,D,\ n = 1,\ldots,N\}$. The indices $P$ and values $X_P$ of the present matrix entries are the data of the problem. Then we have the following constrained optimization problem:
$$\max_X\; E(X) = \sum_{n,m=1}^N G_\sigma(x_n, x_m) \qquad \text{s.t. } X_P \text{ fixed to the given present values.} \qquad (4)$$
This is similar to low-rank formulations for matrix completion that have the same constraints but use as objective function the reconstruction error with a low-rank assumption, e.g. $\|X - ABX\|^2$ with $A_{D\times L}$, $B_{L\times D}$ and $L < D$.
We initialize $X_M$ to the output of some other method for matrix completion, such as singular value projection (SVP; [7]). For simple constraints such as ours, gradient projection algorithms are attractive. The gradient of $E$ wrt $X$ is a $D \times N$ matrix whose $n$th column is:
$$\nabla_{x_n} E(X) = \frac{2}{\sigma^2}\sum_{m=1}^N e^{-\frac12\|\frac{x_n - x_m}{\sigma}\|^2}(x_m - x_n) \propto \frac{2}{\sigma^2}\,p(x_n)\Big({-x_n} + \sum_{m=1}^N p(m|x_n)\,x_m\Big) \qquad (5)$$
and its projection on the constraint space is given by zeroing its entries having indices in $P$; call $\Pi_P$ this projection operator. Then, we have the following step of length $\alpha \ge 0$ along the projected gradient:
$$X^{(\tau+1)} = X^{(\tau)} + \alpha\,\Pi_P\big(\nabla_X E(X^{(\tau)})\big) \quad\Longleftrightarrow\quad X_M^{(\tau+1)} = X_M^{(\tau)} + \alpha\,\big(\Pi_P(\nabla_X E(X^{(\tau)}))\big)_M \qquad (6)$$
which updates only the missing entries $X_M$. Since our search direction is ascent and makes an angle with the gradient that is bounded away from $\pi/2$, and $E$ is lower bounded, continuously differentiable and has bounded Hessian (thus a Lipschitz continuous gradient) in the space of the missing entries, by carrying out a line search that satisfies the Wolfe conditions, we are guaranteed convergence to a local stationary point, typically a maximizer [18, th. 3.2]. However, as reasoned later, we do not perform a line search at all, instead we fix the step size to the GBMS self-adapting step size, which results in a simple and faster algorithm consisting of carrying out a GBMS step on $X$ (i.e., $X^{(\tau+1)} = X^{(\tau)}P(X^{(\tau)})$) and then refilling $X_P$ to the present values. While we describe the algorithm in this way for ease of explanation, in practice we do not actually compute the GBMS step for all $x_{dn}$ values, but only for the missing ones, which is all we need. Thus, our algorithm carries out GBMS denoising steps within the missing-data subspace. We can derive this result in a different way by starting from the unconstrained optimization problem $\max_{X_M} E(X) = \sum_{n,m=1}^N G_\sigma(x_n, x_m)$ (equivalent to (4)), computing its gradient wrt $X_M$, equating it to zero and rearranging (in the same way the mean-shift algorithm is derived) to obtain a fixed-point iteration identical to our update above.
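The resulting scheme is simply "denoise, then refill". The loop below is our own minimal sketch of it, reusing gbms_step from above and a caller-supplied validation error; it is not the exact procedure of Fig. 1 below, which adds the manifold-aware variants.

```python
import numpy as np

def complete_matrix(X0, present_mask, sigma, val_error, max_iter=50):
    """Denoising-based completion: GBMS step on X, then refill present entries.

    X0 : (D, N) initial completion (e.g. from SVP); present_mask : boolean (D, N).
    val_error : callable scoring X against held-out present entries.
    """
    X, best, best_err = X0.copy(), X0.copy(), val_error(X0)
    for _ in range(max_iter):
        X = gbms_step(X, sigma)                # denoise all entries...
        X[present_mask] = X0[present_mask]     # ...then refill the present ones
        err = val_error(X)
        if err > best_err:                     # stop when held-out error rises
            break
        best, best_err = X.copy(), err
    return best
```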
Fig. 1 shows the pseudocode for our denoising-based matrix completion algorithms (using three
nonparametric denoising algorithms: GBMS, MBMS and LTP).
Convergence and stopping criterion As noted above, we have guaranteed convergence by simply satisfying standard line search conditions, but a line search is costly. At present we do not have a proof that the GBMS step size satisfies such conditions, or indeed that the new iterate $X_M^{(\tau+1)}$ increases or leaves unchanged the objective, although we have never encountered a counterexample. In fact, it turns out that none of the work about GBMS that we know about proves that either: [10] proves that the set diameter contracts by a constant factor in $(0,1)$ at each iteration, while [8, 12] note that $P(X)$ has a single eigenvalue of value 1 and all others of magnitude less than 1. While this shows that all points converge to the same location, which indeed is the global maximum of (3), it does not necessarily follow that each step increases $E$.
However, the question of convergence as $\tau \to \infty$ has no practical interest in a denoising setting, because achieving a total denoising almost never yields a good matrix completion. What we want is to achieve just enough denoising and stop the algorithm, as was the case with GBMS clustering, and as is the case in algorithms for image denoising. We propose to determine the optimal number of iterations, as well as the bandwidth $\sigma$ and any other parameters, by cross-validation. Specifically, we select a held-out set by picking a random subset of the present entries and considering them as missing; this allows us to evaluate an error between our completion for them and the ground truth. We stop iterating when this error increases.

This argument justifies an algorithmic, as opposed to an optimization, view of denoising-based matrix completion: apply a denoising step, refill the present values, iterate until the validation error increases. This allows very general definitions of denoising, and indeed a low-rank projection is a form of denoising where points are not allowed outside the linear manifold. Our formulation using the objective function (4) is still useful in that it connects our denoising assumption with the more usual low-rank assumption that has been used in much matrix completion work, and justifies the refilling step as resulting from the present-data constraints under a gradient-projection optimization.

Figure 1: Our denoising matrix completion algorithms, based on Manifold Blurring Mean Shift (MBMS) and its particular cases Local Tangent Projection (LTP, k-nn graph, $\sigma = \infty$) and Gaussian Blurring Mean Shift (GBMS, $L = 0$); see [5] for details. $N_n$ contains all $N$ points (full graph) or only $x_n$'s nearest neighbors (k-nn graph). The index $M$ selects the components of its input corresponding to missing values. Parameters: denoising scale $\sigma$, number of neighbors $k$, local dimensionality $L$.

GBMS (k, σ) with full or k-nn graph: given X_{D×N}, M
  repeat
    for n = 1, . . . , N
      N_n ← {1, . . . , N} (full graph) or k nearest neighbors of x_n (k-nn graph)
      ∂x_n ← −x_n + Σ_{m∈N_n} [G_σ(x_n, x_m) / Σ_{m′∈N_n} G_σ(x_n, x_{m′})] x_m      (mean-shift step)
    end
    X_M ← X_M + (∂X)_M      (move points' missing entries)
  until validation error increases
  return X

MBMS (L, k, σ) with full or k-nn graph: given X_{D×N}, M
  repeat
    for n = 1, . . . , N
      N_n ← {1, . . . , N} (full graph) or k nearest neighbors of x_n (k-nn graph)
      ∂x_n ← −x_n + Σ_{m∈N_n} [G_σ(x_n, x_m) / Σ_{m′∈N_n} G_σ(x_n, x_{m′})] x_m      (mean-shift step)
      X_n ← k nearest neighbors of x_n
      (μ_n, U_n) ← PCA(X_n, L)      (estimate L-dim tangent space at x_n)
      ∂x_n ← (I − U_n U_nᵀ) ∂x_n      (subtract parallel motion)
    end
    X_M ← X_M + (∂X)_M      (move points' missing entries)
  until validation error increases
  return X

LTP (L, k) with k-nn graph: given X_{D×N}, M
  repeat
    for n = 1, . . . , N
      X_n ← k nearest neighbors of x_n
      (μ_n, U_n) ← PCA(X_n, L)      (estimate L-dim tangent space at x_n)
      ∂x_n ← (I − U_n U_nᵀ)(μ_n − x_n)      (project point onto tangent space)
    end
    X_M ← X_M + (∂X)_M      (move points' missing entries)
  until validation error increases
  return X
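One MBMS update for a single point can be sketched in Python as follows; the PCA-via-SVD choice and all names are our own assumptions.

```python
import numpy as np

def mbms_step(n, X, neighbors, sigma, L):
    """Motion of point x_n: GBMS average, then remove the tangential component.

    X : (N, D) data; neighbors : indices of x_n's k nearest neighbors.
    """
    x, Xn = X[n], X[neighbors]
    w = np.exp(-((x - Xn) ** 2).sum(axis=1) / (2 * sigma**2))
    step = (w / w.sum()) @ Xn - x              # predictor: mean-shift step
    mu = Xn.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xn - mu, full_matrices=False)
    U = Vt[:L].T                               # local PCA basis of the tangent space
    return step - U @ (U.T @ step)             # corrector: keep the orthogonal part
```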
MBMS denoising for matrix completion Following our algorithmic-based approach to denoising, we could consider generalized GBMS steps of the form $\tilde X = X\,\phi(P(X))$. For clustering, Carreira-Perpiñán [12] found an overrelaxed explicit step $\phi(P) = (1-\eta)I + \eta P$ with $\eta \approx 1.25$ to achieve similar clusterings but faster. Here, we focus instead on the MBMS variant of GBMS that allows only for orthogonal, not tangential, point motions (defined wrt their local tangent space as estimated by local PCA), with the goal of preserving low-dimensional manifold structure. MBMS has 3 user parameters: the bandwidth $\sigma$ (for denoising), and the latent dimensionality $L$ and the number of neighbors $k$ (for the local tangent space and the neighborhood graph). A special case of MBMS called local tangent projection (LTP) results by using a neighborhood graph and setting $\sigma = \infty$ (so only two user parameters are needed: $L$ and $k$). LTP can be seen as doing a low-rank matrix completion locally. LTP was found in [5] to have nearly as good performance as the best $\sigma$ in several problems. MBMS also includes as particular cases GBMS ($L = 0$), PCA ($k = N$, $\sigma = \infty$), and no denoising ($\sigma = 0$ or $L = D$).
Note that if we apply MBMS to a dataset that lies on a linear manifold of dimensionality $d$ using $L \ge d$ then no denoising occurs whatsoever because the GBMS updates lie on the $d$-dimensional manifold and are removed by the corrector step. In practice, even if the data are assumed noiseless, the reconstruction from a low-rank method will lie close to but not exactly on the $d$-dimensional manifold. However, this suggests using largish ranks for the low-rank method used to reconstruct $X$ and lower $L$ values in the subsequent MBMS run.

In summary, this yields a matrix completion algorithm where we apply an MBMS step, refill the present values, and iterate until the validation error increases. Again, in an actual implementation we compute the MBMS step only for the missing entries of $X$. The shrinking problem of GBMS is less pronounced in our matrix completion setting, because we constrain some values not to change. Still, in agreement with [5], we find MBMS to be generally superior to GBMS.
Computational cost With a full graph, the cost per iteration of GBMS and MBMS is $O(N^2D)$ and $O(N^2D + N(D+k)\min(D,k)^2)$, respectively. In practice with high-dimensional data, best denoising results are obtained using a neighborhood graph [5], so that the sums over points in eqs. (3) or (4) extend only to the neighbors. With a $k$-nearest-neighbor graph and if we do not update the neighbors at each iteration (which affects the result little), the respective cost per iteration is $O(NkD)$ and $O(NkD + N(D+k)\min(D,k)^2)$, thus linear in $N$. The graph is constructed on the initial $X$ we use, consisting of the present values and an imputation for the missing ones achieved with a standard matrix completion method, and has a one-off cost of $O(N^2D)$. The cost when we have a fraction $\nu = |M|/(ND) \in [0,1]$ of missing data is simply the above times $\nu$. Hence the run time of our mean-shift-based matrix completion algorithms is faster the more present data we have, and thus faster than the usual GBMS or MBMS case, where all data are effectively missing.
3 Experimental results
We compare with representative methods of several approaches: a low-rank matrix completion method, singular value projection (SVP [7], whose performance we found similar to that of alternating least squares, ALS [3, 4]); fitting a $D$-dimensional Gaussian model with EM and imputing the missing values of each $x_n$ as the conditional mean $\mathbb{E}\{x_{n,M_n} \mid x_{n,P_n}\}$ (we use the implementation of [19]); and the nonlinear method of [20] (nlPCA). We initialize GBMS and MBMS from some or all of these algorithms. For methods with user parameters, we set them by cross-validation in the following way: we randomly select 10% of the present entries and pretend they are missing as well, we run the algorithm on the remaining 90% of the present values, and we evaluate the reconstruction at the 10% entries we kept earlier. We repeat this over different parameters' values and pick the one with lowest reconstruction error. We then run the algorithm with these parameter values on the entire present data and report the (test) error with the ground truth for the missing values.
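A sketch of this cross-validation protocol, with a generic complete() callable standing in for any of the methods above (names and interface are ours):

```python
import numpy as np

def cross_validate(X_given, present_mask, complete, param_grid, rng):
    """Hold out 10% of the present entries, complete the rest, keep the best params."""
    idx = np.flatnonzero(present_mask.ravel())
    held = rng.choice(idx, size=max(1, len(idx) // 10), replace=False)
    train_mask = present_mask.copy()
    train_mask.ravel()[held] = False             # pretend these are missing too
    best, best_err = None, np.inf
    for params in param_grid:
        Xhat = complete(X_given, train_mask, **params)
        err = np.sum((Xhat.ravel()[held] - X_given.ravel()[held]) ** 2)
        if err < best_err:
            best, best_err = params, err
    return best
```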
100D Swissroll. We created a 3D swissroll data set with 3,000 points, lifted it to 100D with a random orthonormal mapping, and added a little noise (spherical Gaussian with stdev 0.1). We selected uniformly at random 6.76% of the entries to be present. We use the Gaussian model and SVP (fixed rank = 3) as initialization for our algorithm. We typically find that these initial X are very noisy (fig. 3), with some reconstructed points lying between different branches of the manifold and causing a big reconstruction error. We fixed L = 2 (the known dimensionality) for MBMS and cross-validated the other parameters: σ and k for MBMS and GBMS (both using a k-nn graph), and the number of iterations τ to be used. Table 1 gives the performance of MBMS and GBMS on the test entries, along with their optimal parameters. Fig. 3 shows the results of different methods at a few iterations. MBMS initialized from the Gaussian model gives the most remarkable denoising effect.
To show that there is a wide range of σ and number of iterations τ that give good performance with GBMS and MBMS, we fix k = 50, run the algorithm with varying σ values, and plot the reconstruction error for missing entries over iterations in fig. 2. Both GBMS and MBMS can achieve good denoising (and reconstruction), but MBMS is more robust, with good results occurring for a wide range of iterations, indicating it is able to preserve the manifold structure better.

Table 1: Swissroll data set: reconstruction errors obtained by different algorithms along with their optimal parameters (σ, k, L, no. iterations τ). The three columns show the root sum of squared errors on missing entries, the mean, and the standard deviation of the pointwise reconstruction error, respectively.

Methods               | RSSE  | mean | stdev
Gaussian              | 168.1 | 2.63 | 1.59
+ GBMS (?, 10, 0, 1)  | 165.8 | 2.57 | 1.61
+ MBMS (1, 20, 2, 25) | 157.2 | 2.36 | 1.63
SVP                   | 156.8 | 1.94 | 2.10
+ GBMS (3, 50, 0, 1)  | 151.4 | 1.89 | 2.02
+ MBMS (3, 50, 2, 2)  | 151.8 | 1.87 | 2.05

Table 2: MNIST-7 data set: errors of the different algorithms and their optimal parameters (σ, k, L, no. iterations τ). The three columns show the root sum of squared errors on missing entries (×10^4), the mean, and the standard deviation of pixel errors, respectively.

Methods                  | RSSE (×10^4) | mean | stdev
nlPCA                    | 7.77 | 26.1 | 42.6
SVP                      | 6.99 | 21.8 | 39.3
+ GBMS (400, 140, 0, 1)  | 6.54 | 18.8 | 37.7
+ MBMS (500, 140, 9, 5)  | 6.03 | 17.0 | 34.9

[Figure 2, four panels: SVP + GBMS, SVP + MBMS, Gaussian + GBMS, Gaussian + MBMS; each plots reconstruction error (RSSE, 150-180) against iteration τ (0-20), with one curve per σ value in {0.3, 0.5, 1, 2, 3, 5, 8, 10, 15, 25}.]

Figure 2: Reconstruction error of GBMS/MBMS over iterations (each curve is a different σ value).
Mocap data. We use the running-motion sequence 09_01 from the CMU mocap database, with 148 samples (≈ 1.7 cycles) of 150 sensor readings (3D positions of 50 joints on a human body). The motion is intrinsically 1D, tracing a loop in 150D. We compare nlPCA, SVP, the Gaussian model, and MBMS initialized from the first three algorithms. For nlPCA, we do a grid search for the weight decay coefficient while fixing its structure to be 2×10×150 units, and use an early stopping criterion. For SVP, we do a grid search on {1, 2, 3, 5, 7, 10} for the rank. For MBMS (L = 1) and GBMS (L = 0), we do a grid search for σ and k.

We report the reconstruction error as a function of the proportion of missing entries, from 50% to 95%. For each missing-data proportion, we randomly select 5 different sets of present values and run all algorithms on them. Fig. 4 gives the mean errors of all algorithms. All methods perform well when the missing-data proportion is small. nlPCA, being prone to local optima, is less stable than SVP and the Gaussian model, especially when the missing-data proportion is large. The Gaussian model gives the best and most stable initialization. At 95%, all methods fail to give an acceptable reconstruction, but up to 90% missing entries, MBMS and GBMS always beat the other algorithms. Fig. 4 shows selected reconstructions from all algorithms.
MNIST digit '7'. The MNIST digit '7' data set contains 6,265 greyscale (0-255) images of size 28×28. We create missing entries in a way reminiscent of run-length errors in transmission: we generate 16 to 26 rectangular boxes, each of area approximately 25 pixels, at random locations in each image and use them to black out pixels. In this way, we create a high-dimensional data set (784 dimensions) with about 50% of the entries missing on average. Because of the loss of spatial correlations within the blocks, this missing-data pattern is harder than random.
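As an illustration, one way to generate this occlusion pattern (a sketch; the exact box aspect ratios are not specified in the text, so the height/width split below is an assumption of ours):

```python
import numpy as np

def occlude(image, n_boxes_range=(16, 26), box_area=25,
            rng=np.random.default_rng(0)):
    """Black out rectangular boxes of ~box_area pixels at random locations.
    Returns the occluded image and the boolean mask of still-present pixels."""
    img = image.copy()
    present = np.ones_like(img, dtype=bool)
    h, w = img.shape
    for _ in range(rng.integers(*n_boxes_range, endpoint=True)):
        bh = rng.integers(1, box_area + 1)        # box height (assumed uniform)
        bw = max(1, box_area // bh)               # width chosen so area ~ box_area
        r = rng.integers(0, h - bh + 1)
        c = rng.integers(0, w - bw + 1)
        img[r:r + bh, c:c + bw] = 0
        present[r:r + bh, c:c + bw] = False
    return img, present
```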
The Gaussian model cannot handle such a big data set because it involves inverting large covariance matrices. nlPCA is also very slow, and we cannot afford to cross-validate its structure or the weight decay coefficient, so we picked a reasonable structure (10×30×784 units), used the default weight decay parameter in the code (10⁻³), and allowed up to 500 iterations. We only use SVP as initialization for our algorithm. Since the intrinsic dimension of MNIST is suspected to be not very high, we used rank 10 for SVP and L = 9 for MBMS. We also use the same k = 140 as in [5], so we only had to choose σ and the number of iterations via cross-validation.

[Figure 3 panels, left to right: SVP (τ = 0), SVP + GBMS (τ = 1), SVP + MBMS (τ = 2), Gaussian (τ = 0), Gaussian + GBMS (τ = 1), Gaussian + MBMS (τ = 25).]

Figure 3: Denoising effect of the different algorithms. For visualization, we project the 100D data to 3D with the projection matrix used for creating the data. Present values are refilled for all plots.

[Figure 4, left panel: error (RSSE) vs. % of missing data for nlPCA, SVP, the Gaussian model, and their GBMS/MBMS variants; right panel: reconstructions at frame 2 (leg distance), frame 10 (foot pose) and frame 147 (leg pose).]

Figure 4: Left: mean of errors (RSSE) over 5 runs obtained by different algorithms for a varying percentage of missing values. Errorbars shown only for Gaussian + MBMS to avoid clutter. Right: sample reconstructions when 85% of the data is missing. Row 1: initialization. Row 2: init + GBMS. Row 3: init + MBMS. Color indicates the initialization: black, original data; red, nlPCA; blue, SVP; green, Gaussian.
Table 2 shows the methods and their corresponding errors. Fig. 5 shows some representative reconstructions from different algorithms, with present values refilled. The mean-shift averaging among nearby neighbors (a soft form of majority voting) helps to eliminate noise, unusual strokes, and other artifacts created by SVP, which by their nature tend to occur at different image locations over the neighborhood of images.
4 Related work
Matrix completion is widely studied both in theoretical compressed sensing [1, 2] and in practical recommender systems [3, 4]. Most matrix completion models rely on a low-rank assumption and cannot fully exploit more complex structure in the problem, such as curved manifolds. Also related is work on multi-task learning in a broad sense, which extracts the common structure shared by multiple related objects and learns on them simultaneously. This includes applications such as alignment of noise-corrupted images [21], recovery of images with occlusion [22], and even learning of multiple related regressors or classifiers [23]. Again, all these works are essentially based on a subspace assumption and do not generalize to more complex situations.
A line of work based on a nonlinear low-rank assumption (with a latent variable z of dimensionality L < D) involves setting up a least-squares error function

min_{f,Z} Σ_{n=1}^{N} ||x_n − f(z_n)||² = Σ_{n,d=1}^{N,D} (x_{dn} − f_d(z_n))²

where one ignores the terms for which x_{dn} is missing, and estimates the function f and the low-dimensional data projections Z by alternating optimization. Linear functions f have been used in the homogeneity analysis literature [24], where this approach is called "missing data deleted". Nonlinear functions f have been used recently (neural nets [20]; Gaussian processes for collaborative filtering [25]). Better results are obtained by adding a projection term Σ_{n=1}^{N} ||z_n − F(x_n)||² and optimizing over the missing data as well [26].
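For concreteness, a minimal sketch of this alternating scheme for the linear case f(z) = Fz (the "missing data deleted" objective); all names are illustrative:

```python
import numpy as np

def als_missing(X, mask, L, n_iters=50, rng=np.random.default_rng(0)):
    """Alternating least squares for min_{F,Z} of the sum over observed entries
    of (x_{dn} - f_d(z_n))^2 with linear f(z) = F z.
    X: N x D (arbitrary values at missing entries); mask: boolean N x D."""
    N, D = X.shape
    Z = rng.standard_normal((N, L))
    F = rng.standard_normal((D, L))
    for _ in range(n_iters):
        for n in range(N):                      # update each latent point z_n
            o = mask[n]
            if o.any():
                Z[n] = np.linalg.lstsq(F[o], X[n, o], rcond=None)[0]
        for d in range(D):                      # update each coordinate map f_d
            o = mask[:, d]
            if o.any():
                F[d] = np.linalg.lstsq(Z[o], X[o, d], rcond=None)[0]
    return Z @ F.T                              # completed reconstruction
```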
[Figure 5 columns, for each example: original, image with missing entries, nlPCA, SVP, GBMS, MBMS.]

Figure 5: Selected reconstructions of MNIST block-occluded digits '7' with different methods.
Prior to our denoising-based work there have been efforts to extend the low-rank models to smooth
manifolds, mostly in the context of compressed sensing. Baraniuk and Wakin [27] show that certain
random measurements, e.g. random projection to a low-dimensional subspace, can preserve the
metric of the manifold fairly well, if the intrinsic dimension and the curvature of the manifold
are both small enough. However, these observations are not suitable for matrix completion and
no algorithm is given for recovering the signal. Chen et al. [28] explicitly model a pre-determined
manifold, and use this to regularize the signal when recovering the missing values. They estimate the
manifold given complete data, while no complete data is assumed in our matrix completion setting.
Another related work is [29], where the manifold modeled with Isomap is used in estimating the
positions of satellite cameras in an iterative manner.
Finally, our expectation that the value of a missing entry can be predicted from the values of neighboring points is similar to one category of collaborative filtering methods that essentially use similar
users/items to predict missing values [3, 4].
5 Conclusion
We have proposed a new paradigm for matrix completion, denoising, which generalizes the commonly used assumption of low rank. Assuming low rank implies a restrictive form of denoising where the data are forced to have zero variance away from a linear manifold. More general definitions of denoising can potentially handle data that live on a low-dimensional manifold that is nonlinear, or whose dimensionality varies (e.g. a set of manifolds), or that do not have low rank at all; and they naturally handle noise in the data. Denoising works because of the fundamental fact that a missing value can be predicted by averaging nearby present values.

Although we motivate our framework from a constrained-optimization point of view (denoise subject to respecting the present data), we argue for an algorithmic view of denoising-based matrix completion: apply a denoising step, refill the present values, and iterate until the validation error increases. In turn, this allows different forms of denoising, such as ones based on low-rank projection (earlier work) or on local averaging with blurring mean-shift (this paper). Our nonparametric choice of mean-shift averaging further relaxes assumptions about the data and results in a simple algorithm with very few user parameters, which afford user control (denoising scale, local dimensionality) but can be set automatically by cross-validation. Our algorithms are intended to be used as a postprocessing step over a user-provided initialization of the missing values, and we show that they consistently improve upon existing algorithms.

The MBMS-based algorithm bridges the gap between pure denoising (GBMS) and local low rank. Other definitions of denoising should be possible, for example using temporal as well as spatial neighborhoods, and they may even be applicable to discrete data if we consider denoising as a majority vote among the neighbors of a vector (with suitable definitions of votes and neighborhoods).
Acknowledgments
Work supported by NSF CAREER award IIS?0754089.
References
[1] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, December 2009.
[2] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Information Theory, 56(5):2053–2080, April 2010.
[3] Yehuda Koren. Factorization meets the neighborhood: A multifaceted collaborative filtering model. SIGKDD 2008, pages 426–434, Las Vegas, NV, August 24–27 2008.
[4] Robert Bell and Yehuda Koren. Scalable collaborative filtering with jointly derived neighborhood interpolation weights. ICDM 2007, pages 43–52, October 28–31 2007.
[5] Weiran Wang and Miguel Á. Carreira-Perpiñán. Manifold blurring mean shift algorithms for manifold denoising. CVPR 2010, pages 1759–1766, San Francisco, CA, June 13–18 2010.
[6] Matthias Hein and Markus Maier. Manifold denoising. NIPS 2006, 19:561–568. MIT Press, 2007.
[7] Prateek Jain, Raghu Meka, and Inderjit S. Dhillon. Guaranteed rank minimization via singular value projection. NIPS 2010, 23:937–945. MIT Press, 2011.
[8] Miguel Á. Carreira-Perpiñán. Fast nonparametric clustering with Gaussian blurring mean-shift. ICML 2006, pages 153–160, Pittsburgh, PA, June 25–29 2006.
[9] Keinosuke Fukunaga and Larry D. Hostetler. The estimation of the gradient of a density function, with application in pattern recognition. IEEE Trans. Information Theory, 21(1):32–40, January 1975.
[10] Yizong Cheng. Mean shift, mode seeking, and clustering. IEEE Trans. PAMI, 17(8):790–799, 1995.
[11] Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans. PAMI, 24(5):603–619, May 2002.
[12] Miguel Á. Carreira-Perpiñán. Generalised blurring mean-shift algorithms for nonparametric clustering. CVPR 2008, Anchorage, AK, June 23–28 2008.
[13] Gabriel Taubin. A signal processing approach to fair surface design. SIGGRAPH 1995, pages 351–358.
[14] Mathieu Desbrun, Mark Meyer, Peter Schröder, and Alan H. Barr. Implicit fairing of irregular meshes using diffusion and curvature flow. SIGGRAPH 1999, pages 317–324.
[15] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, Providence, RI, 1997.
[16] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 22(8):888–905, August 2000.
[17] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1986.
[18] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New York, second edition, 2006.
[19] Tapio Schneider. Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. Journal of Climate, 14(5):853–871, March 2001.
[20] Matthias Scholz, Fatma Kaplan, Charles L. Guy, Joachim Kopka, and Joachim Selbig. Non-linear PCA: A missing data approach. Bioinformatics, 21(20):3887–3895, October 15 2005.
[21] Yigang Peng, Arvind Ganesh, John Wright, Wenli Xu, and Yi Ma. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. CVPR 2010, pages 763–770, 2010.
[22] A. M. Buchanan and A. W. Fitzgibbon. Damped Newton algorithms for matrix factorization with missing data. CVPR 2005, pages 316–322, San Diego, CA, June 20–25 2005.
[23] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. NIPS 2006, 19:41–48. MIT Press, 2007.
[24] Albert Gifi. Nonlinear Multivariate Analysis. John Wiley & Sons, 1990.
[25] Neil D. Lawrence and Raquel Urtasun. Non-linear matrix factorization with Gaussian processes. ICML 2009, Montreal, Canada, June 14–18 2009.
[26] Miguel Á. Carreira-Perpiñán and Zhengdong Lu. Manifold learning and missing data recovery through unsupervised regression. ICDM 2011, December 11–14 2011.
[27] Richard G. Baraniuk and Michael B. Wakin. Random projections of smooth manifolds. Foundations of Computational Mathematics, 9(1):51–77, February 2009.
[28] Minhua Chen, Jorge Silva, John Paisley, Chunping Wang, David Dunson, and Lawrence Carin. Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds. IEEE Trans. Signal Processing, 58(12):6140–6155, December 2010.
[29] Michael B. Wakin. A manifold lifting algorithm for multi-view compressive imaging. In Proc. 27th Conference on Picture Coding Symposium (PCS'09), pages 381–384, 2009.
Multi-View Learning of Word Embeddings via CCA
Paramveer S. Dhillon (Computer & Information Science), Dean Foster (Statistics), Lyle Ungar (Computer & Information Science)
University of Pennsylvania, Philadelphia, PA, U.S.A.
{dhillon|ungar}@cis.upenn.edu, foster@wharton.upenn.edu
Abstract
Recently, there has been substantial interest in using large amounts of unlabeled
data to learn word representations which can then be used as features in supervised
classifiers for NLP tasks. However, most current approaches are slow to train, do
not model the context of the word, and lack theoretical grounding. In this paper,
we present a new learning method, Low Rank Multi-View Learning (LR-MVL)
which uses a fast spectral method to estimate low dimensional context-specific
word representations from unlabeled data. These representation features can then
be used with any supervised learner. LR-MVL is extremely fast, gives guaranteed
convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.
1 Introduction and Related Work
Over the past decade there has been increased interest in using unlabeled data to supplement the
labeled data in semi-supervised learning settings to overcome the inherent data sparsity and get
improved generalization accuracies in high dimensional domains like NLP. Approaches like [1, 2]
have been empirically very successful and have achieved excellent accuracies on a variety of NLP
tasks. However, it is often difficult to adapt these approaches to use in conjunction with an existing
supervised NLP system as these approaches enforce a particular choice of model.
An increasingly popular alternative is to learn representational embeddings for words from a large
collection of unlabeled data (typically using a generative model), and to use these embeddings to
augment the feature set of a supervised learner. Embedding methods produce features in low dimensional spaces or over a small vocabulary size, unlike the traditional approach of working in the
original high dimensional vocabulary space with only one dimension ?on? at a given time. Broadly,
these embedding methods fall into two categories:
1. Clustering based word representations: Clustering methods, often hierarchical, are used to
group distributionally similar words based on their contexts. The two dominant approaches
are Brown Clustering [3] and [4]. As recently shown, HMMs can also be used to induce a
multinomial distribution over possible clusters [5].
2. Dense representations: These representations are dense, low dimensional and real-valued.
Each dimension of these representations captures latent information about a combination
of syntactic and semantic word properties. They can either be induced using neural networks like C&W embeddings [6] and Hierarchical log-linear (HLBL) embeddings [7] or
by eigen-decomposition of the word co-occurrence matrix, e.g. Latent Semantic Analysis/Latent Semantic Indexing (LSA/LSI) [8].
Unfortunately, most of these representations are (1) slow to train, (2) sensitive to the scaling of the embeddings (especially ℓ2-based approaches like LSA/PCA), (3) liable to get stuck in local optima (like an EM-trained HMM), and (4) restricted to a single embedding per word type; i.e. all the occurrences
of the word "bank" will have the same embedding, irrespective of whether the context of the word suggests it means "a financial institution" or "a river bank".
In this paper, we propose a novel context-specific word embedding method called Low Rank MultiView Learning, LR-MVL, which is fast to train and is guaranteed to converge to the optimal solution.
As presented here, our LR-MVL embeddings are context-specific, but context oblivious embeddings
(like the ones used by [6, 7]) can be trivially gotten from our model. Furthermore, building on recent
advances in spectral learning for sequence models like HMMs [9, 10, 11] we show that LR-MVL
has strong theoretical grounding. Particularly, we show that LR-MVL estimates low dimensional
context-specific word embeddings which preserve all the information in the data if the data were
generated by an HMM. Moreover, LR-MVL being linear does not face the danger of getting stuck
in local optima as is the case for an EM trained HMM.
LR-MVL falls into category (2) mentioned above; it learns real-valued context-specific word embeddings by performing Canonical Correlation Analysis (CCA) [12] between the past and future
views of low rank approximations of the data. However, LR-MVL is more general than those methods, which work on bigram or trigram co-occurrence matrices, in that it uses longer word sequence
information to estimate context-specific embeddings and also for the reasons mentioned in the last
paragraph.
The remainder of the paper is organized as follows. In the next section we give a brief overview of
CCA, which forms the core of our method. Section 3 describes our proposed LR-MVL algorithm
in detail and gives theory supporting its performance. Section 4 demonstrates the effectiveness of
LR-MVL on the NLP tasks of Named Entity Recognition and Chunking. We conclude with a brief
summary in Section 5.
2 Brief Review: Canonical Correlation Analysis (CCA)
CCA [12] is the analog to Principal Component Analysis (PCA) for pairs of matrices. PCA computes the directions of maximum covariance between elements in a single matrix, whereas CCA
computes the directions of maximal correlation between a pair of matrices. Unlike PCA, CCA does
not depend on how the observations are scaled. This invariance of CCA to linear data transformations allows proofs that keeping the dominant singular vectors (those with largest singular values)
will faithfully capture any state information.
More specifically, given a set of n paired observation vectors {(l_1, r_1), ..., (l_n, r_n)} (in our case the two matrices are the left (L) and right (R) context matrices of a word), we would like to simultaneously find the directions φ_l and φ_r that maximize the correlation of the projections of L onto φ_l with the projections of R onto φ_r. This is expressed as
max_{φ_l, φ_r}  E[⟨L, φ_l⟩⟨R, φ_r⟩] / sqrt( E[⟨L, φ_l⟩²] E[⟨R, φ_r⟩²] )    (1)
where E denotes the empirical expectation. We use the notation C_lr (C_ll) to denote the cross (auto) covariance matrices between L and R (i.e. L′R and L′L, respectively).
The left and right canonical correlates are the solutions ⟨φ_l, φ_r⟩ of the following equations:

C_ll⁻¹ C_lr C_rr⁻¹ C_rl φ_l = λ φ_l
C_rr⁻¹ C_rl C_ll⁻¹ C_lr φ_r = λ φ_r    (2)
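For concreteness, a small numerical sketch of eq. (2): rather than forming the matrix products directly, one can whiten with Cholesky factors and take an SVD, which yields the same canonical directions (the small ridge eps is our addition, for numerical stability; all names are ours):

```python
import numpy as np

def cca(L, R, d, eps=1e-8):
    """Top-d canonical correlates of paired matrices L (n x p) and R (n x q)."""
    L = L - L.mean(axis=0)
    R = R - R.mean(axis=0)
    n = L.shape[0]
    Cll = L.T @ L / n + eps * np.eye(L.shape[1])
    Crr = R.T @ R / n + eps * np.eye(R.shape[1])
    Clr = L.T @ R / n
    A = np.linalg.cholesky(Cll)          # Cll = A A^T
    B = np.linalg.cholesky(Crr)          # Crr = B B^T
    Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
    # singular vectors of the whitened cross-covariance solve eq. (2)
    U, s, Vt = np.linalg.svd(Ainv @ Clr @ Binv.T)
    phi_l = Ainv.T @ U[:, :d]            # left canonical directions
    phi_r = Binv.T @ Vt[:d].T            # right canonical directions
    return phi_l, phi_r, s[:d]           # s holds the canonical correlations
```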
3 Low Rank Multi-View Learning (LR-MVL)
In LR-MVL, we compute the CCA between the past and future views of the data on a large unlabeled
corpus to find the common latent structure, i.e., the hidden state associated with each token. These
induced representations of the tokens can then be used as features in a supervised classifier (typically
discriminative).
The context around a word, consisting of the h words to the right and left of it, sits in a high
dimensional space, since for a vocabulary of size v, each of the h words in the context requires an
indicator function of dimension v. The key move in LR-MVL is to project the v-dimensional word
space down to a k dimensional state space. Thus, all eigenvector computations are done in a space
that is v/k times smaller than the original space. Since a typical vocabulary contains at least 50,000 words, and we use state spaces of order k ≈ 50 dimensions, this gives a 1,000-fold reduction in the
size of calculations that are needed.
The core of our LR-MVL algorithm is a fast spectral method for learning a v × k matrix A which maps each of the v words in the vocabulary to a k-dimensional state vector. We call this matrix the "eigenfeature dictionary".
We now describe the LR-MVL method, give a theorem that provides intuition into how it works, and
formally present the LR-MVL algorithm. The Experiments section then shows that this low rank
approximation allows us to achieve state-of-the-art performance on NLP tasks.
3.1 The LR-MVL method
Given an unlabeled token sequence w = {w_0, w_1, ..., w_n}, we want to learn a low (k-)dimensional state vector {z_0, z_1, ..., z_n} for each observed token. The key is to find a v × k matrix A (Algorithm 1) that maps each of the v words in the vocabulary to a reduced rank k-dimensional state vector, which is later used to induce context-specific embeddings for the tokens (Algorithm 2).
For supervised learning, these context specific embeddings are supplemented with other information
about each token wt , such as its identity, orthographic features such as prefixes and suffixes or
membership in domain-specific lexicons, and used as features in a classifier.
Section 3.4 gives the algorithm more formally, but the key steps in the algorithm are, in general
terms:
• Take the h words to the left and to the right of each target word w_t (the "Left" and "Right" contexts), and project them each down to k dimensions using A.
• Take the CCA between the reduced rank left and right contexts, and use the resulting model to estimate a k-dimensional state vector (the "hidden state") for each token.
• Take the CCA between the hidden states and the tokens w_t. The singular vectors associated with w_t form a new estimate of the eigenfeature dictionary.
LR-MVL can be viewed as a type of co-training [13]: The state of each token wt is similar to that
of the tokens both before and after it, and it is also similar to the states of the other occurrences of
the same word elsewhere in the document (used in the outer iteration). LR-MVL takes advantage
of these two different types of similarity by alternately estimating word state using CCA on the
smooths of the states of the words before and after each target token and using the average over the
states associated with all other occurrences of that word.
3.2 Theoretical Properties of LR-MVL
We now present the theory behind the LR-MVL algorithm; particularly we show that the reduced
rank matrix A allows a significant data reduction while preserving the information in our data and
the estimated state does the best possible job of capturing any label information that can be inferred
by a linear model.
Let L be an n × hv matrix giving the words in the left context of each of the n tokens, where the context is of length h, R be the corresponding n × hv matrix for the right context, and W be an n × v matrix of indicator functions for the words themselves.
We will use the following assumptions at various points in our proof:
Assumption 1. L, W, and R come from a rank k HMM i.e. it has a rank k observation matrix and
rank k transition matrix both of which have the same domain.
For example, if the dimension of the hidden state is k and the vocabulary size is v, then the observation matrix, which is k × v, has rank k. This rank condition is similar to the one used by [10].
Assumption 1A. For the three views L, W and R, assume that there exists a "hidden state" H of dimension n × k, where each row H_i has the same non-singular variance-covariance matrix, and such that E(L_i | H_i) = H_i β_L, E(R_i | H_i) = H_i β_R and E(W_i | H_i) = H_i β_W, where all β's are of rank k, and L_i, R_i and W_i are the rows of L, R and W respectively.

Assumption 1A follows from Assumption 1.
Assumption 2. ρ(L, W), ρ(L, R) and ρ(W, R) all have rank k, where ρ(X_1, X_2) is the expected correlation between X_1 and X_2.

Assumption 2 is a rank condition similar to that in [9].

Assumption 3. ρ([L, R], W) has k distinct singular values.
Assumption 3 just makes the proof a little cleaner, since if there are repeated singular values, then
the singular vectors are not unique. Without it, we would have to phrase results in terms of subspaces
with identical singular values.
We also need to define the CCA function that computes the left and right singular vectors for a pair
of matrices:
Definition 1 (CCA). Compute the CCA between two matrices X_1 and X_2. Let Φ_{X_1} be a matrix containing the d largest singular vectors for X_1 (sorted from the largest on down), and likewise for Φ_{X_2}. Define the function CCA_d(X_1, X_2) = [Φ_{X_1}, Φ_{X_2}]. When we want just one of these Φ's, we will use CCA_d(X_1, X_2)_left = Φ_{X_1} for the left singular vectors and CCA_d(X_1, X_2)_right = Φ_{X_2} for the right singular vectors.

Note that the resulting singular vectors [Φ_{X_1}, Φ_{X_2}] can be used to give two redundant estimates, X_1 Φ_{X_1} and X_2 Φ_{X_2}, of the "hidden" state relating X_1 and X_2, if such a hidden state exists.
Definition 2. Define the symbol "≈" to mean

X_1 ≈ X_2  ⟺  lim_{n→∞} X_1 = lim_{n→∞} X_2

where n is the sample size.
Lemma 1. Define A by the following limit of the right singular vectors:

CCA_k([L, R], W)_right ≈ A.

Under assumptions 2, 3 and 1A, if CCA_k(L, R) ≈ [Φ_L, Φ_R], then

CCA_k([L Φ_L, R Φ_R], W)_right ≈ A.
Lemma 1 shows that instead of finding the CCA between the full context and the words, we can take
the CCA between the Left and Right contexts, estimate a k dimensional state from them, and take
the CCA of that state with the words and get the same result. See the supplementary material for the
Proof.
Let Ã_h denote a matrix formed by stacking h copies of A on top of each other. Right multiplying L or R by Ã_h projects each of the words in that context into the k-dimensional reduced rank space.
The following theorem addresses the core of the LR-MVL algorithm, showing that there is an A
which gives the desired dimensionality reduction. Specifically, it shows that the previous lemma
also holds in the reduced rank space.
Theorem 1. Under assumptions 1, 2 and 3 there exists a unique matrix A such that if

CCA_k(L Ã_h, R Ã_h) ≈ [Φ̃_L, Φ̃_R]

then

CCA_k([L Ã_h Φ̃_L, R Ã_h Φ̃_R], W)_right ≈ A

where Ã_h is the stacked form of A.
See the supplementary material for the proof.¹

¹ It is worth noting that our matrix A corresponds to the matrix Û used by [9, 10]. They showed that Û is sufficient to compute the probability of a sequence of words generated by an HMM; although we do not show it here (due to limited space), our A provides a more statistically efficient estimate of U than their Û, and hence can also be used to estimate the sequence probabilities.
Under the above assumptions, there is asymptotically (in the limit of infinite data) no benefit to first
estimating state by finding the CCA between the left and right contexts and then finding the CCA
between the estimated state and the words. One could instead just directly find the CCA between
the combined left and right contexts and the words. However, because of the Zipfian distribution
of words, many words are rare or even unique, and hence one is not in the asymptotic limit. In this
case, CCA between the rare words and context will not be informative, whereas finding the CCA
between the left and right contexts gives a good state vector estimate even for unique words. One can
then fruitfully find the CCA between the contexts and the estimated state vector for their associated
words.
3.3 Using Exponential Smooths

In practice, we replace the projected left and right contexts with exponential smooths (weighted averages of the previous (or next) token's state Z_{t−1} (or Z_{t+1}) and the previous (or next) token's smoothed state S_{t−1} (or S_{t+1})) at a few different time scales, thus giving a further dimension reduction by a factor of the context length h (say 100 words) divided by the number of smooths (often 5-7). We use a mixture of both very short and very long contexts, which capture the short- and long-range dependencies required by NLP problems such as NER, chunking and WSD. Since exponential smooths are linear, we preserve the linearity of our method.
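A minimal sketch of these smooths (our variable names; one forward and one backward pass per smoothing rate):

```python
import numpy as np

def smooth_views(Z, rates):
    """Left/right views from exponentially smoothed state sequences.
    Z: n x k matrix of per-token states; rates: list of smoothing rates mu.
    Returns L and R, each n x (k * len(rates))."""
    n, k = Z.shape
    left, right = [], []
    for mu in rates:
        S = np.zeros((n, k))
        for t in range(1, n):                  # smooth of the *previous* states
            S[t] = (1 - mu) * S[t - 1] + mu * Z[t - 1]
        left.append(S.copy())
        S = np.zeros((n, k))
        for t in range(n - 2, -1, -1):         # smooth of the *following* states
            S[t] = (1 - mu) * S[t + 1] + mu * Z[t + 1]
        right.append(S.copy())
    return np.hstack(left), np.hstack(right)
```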
3.4 The LR-MVL Algorithm
The LR-MVL algorithm (using exponential smooths) is given in Algorithm 1; it computes the pair
of CCAs described above in Theorem 1.
Algorithm 1 LR-MVL Algorithm - Learning from Large Amounts of Unlabeled Data
1: Input: token sequence W_{n×v}, state space size k, smoothing rates μ_j
2: Initialize the eigenfeature dictionary A to random values N(0, 1).
3: repeat
4:   Set the state Z_t (1 < t ≤ n) of each token w_t to the eigenfeature vector of the corresponding word: Z_t = (A_w : w = w_t)
5:   Smooth the state estimates before and after each token to get a pair of views for each smoothing rate μ_j:
       S_t^(l,j) = (1 − μ_j) S_{t−1}^(l,j) + μ_j Z_{t−1}   // left view L
       S_t^(r,j) = (1 − μ_j) S_{t+1}^(r,j) + μ_j Z_{t+1}   // right view R
     where the t-th rows of L and R are, respectively, concatenations of the smooths S_t^(l,j) and S_t^(r,j) for each of the μ_j's.
6:   Find the left and right canonical correlates, which are the eigenvectors φ_l and φ_r of
       (L′L)⁻¹ L′R (R′R)⁻¹ R′L φ_l = λ φ_l
       (R′R)⁻¹ R′L (L′L)⁻¹ L′R φ_r = λ φ_r
7:   Project the left and right views onto the space spanned by the top k/2 left and right CCAs respectively:
       X_l = L φ_l^(k/2)  and  X_r = R φ_r^(k/2)
     where φ_l^(k/2), φ_r^(k/2) are matrices composed of the singular vectors of φ_l, φ_r with the k/2 largest-magnitude singular values. Estimate the state for each word w_t as the union of the left and right estimates: Z = [X_l, X_r]
8:   Estimate the eigenfeatures of each word type w as the average of the states estimated for that word: A_w = avg(Z_t : w_t = w)
9:   Compute the change in A from the previous iteration.
10: until |ΔA| < ε
11: Output: φ_l^k, φ_r^k, A.
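Putting the pieces together, a compressed sketch of Algorithm 1, reusing the cca() and smooth_views() sketches above (convergence testing and other bookkeeping are omitted, and we assume k is even; all names are ours):

```python
import numpy as np

def lr_mvl(word_ids, v, k, rates, n_outer=5, rng=np.random.default_rng(0)):
    """Sketch of Algorithm 1. word_ids: length-n array of token ids in [0, v)."""
    A = rng.standard_normal((v, k))            # step 2: random eigenfeature dict
    for _ in range(n_outer):
        Z = A[word_ids]                        # step 4: per-token states
        L, R = smooth_views(Z, rates)          # step 5: smoothed left/right views
        phi_l, phi_r, _ = cca(L, R, k // 2)    # step 6: CCA between the views
        Z = np.hstack([L @ phi_l, R @ phi_r])  # step 7: union of both estimates
        A_new = np.zeros((v, k))
        np.add.at(A_new, word_ids, Z)          # step 8: average states per type
        counts = np.bincount(word_ids, minlength=v)[:, None]
        A = A_new / np.maximum(counts, 1)
    return phi_l, phi_r, A
```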
A few iterations (≈ 5) of the above algorithm are sufficient to converge to the solution. (Since the
problem is convex, there is a single solution, so there is no issue of local minima.) As [14] show
for PCA, one can start with a random matrix that is only slightly larger than the true rank k of the
correlation matrix, and with extremely high likelihood converge in a few iterations to within a small
distance of the true principal components. In our case, if the assumptions detailed above (1, 1A, 2
and 3) are satisfied, our method converges equally rapidly to the true canonical variates.
As mentioned earlier, we get further dimensionality reduction in Step 5, by replacing the Left and
Right context matrices with a set of exponentially smoothed values of the reduced rank projections
of the context words. Step 6 finds the CCA between the Left and Right contexts. Step 7 estimates
the state by combining the estimates from the left and right contexts, since we don't know which
will best estimate the state. Step 8 takes the CCA between the estimated state Z and the matrix of
words W. Because W is a vector of indicator functions, this CCA takes the trivial form of a set of
averages.
Once we have estimated the CCA model, it is used to generate context specific embeddings for the
tokens from training, development and test sets (as described in Algorithm 2). These embeddings
are further supplemented with other baseline features and used in a supervised learner to predict the
label of the token.
Algorithm 2 LR-MVL Algorithm - Inducing Context-Specific Embeddings for Train/Dev/Test Data
1: Input: model (φ_l^k, φ_r^k, A) output from the above algorithm, and token sequences W_train, (W_dev, W_test)
2: Project the left and right views L and R after smoothing onto the space spanned by the top k left and right CCAs respectively:
     X_l = L φ_l^k  and  X_r = R φ_r^k
   and the words onto the eigenfeature dictionary: X_w = W_train A
3: Form the final embedding matrix X_train:embed by concatenating these three estimates of state: X_train:embed = [X_l, X_w, X_r]
4: Output: the embedding matrices X_train:embed, (X_dev:embed, X_test:embed) with context-specific representations for the tokens. These embeddings are augmented with the baseline set of features mentioned in Sections 4.1.1 and 4.1.2 before learning the final classifier.
Note that we can get context "oblivious" embeddings, i.e. one embedding per word type, just by using the eigenfeature dictionary (A_{v×k}) output by Algorithm 1.
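Correspondingly, a minimal sketch of the embedding induction of Algorithm 2, reusing smooth_views() from the earlier sketch (names are ours):

```python
import numpy as np

def embed_tokens(word_ids, phi_l, phi_r, A, rates):
    """Context-specific embeddings for a token sequence (Algorithm 2 sketch)."""
    Z = A[word_ids]                    # per-token eigenfeatures
    L, R = smooth_views(Z, rates)      # smoothed left/right context views
    Xl, Xr = L @ phi_l, R @ phi_r      # contexts projected onto the CCAs
    Xw = A[word_ids]                   # the word's own eigenfeatures
    return np.hstack([Xl, Xw, Xr])     # one feature row per token
```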
4 Experimental Results
In this section we present the experimental results of LR-MVL on Named Entity Recognition (NER)
and Syntactic Chunking tasks. We compare LR-MVL to state-of-the-art semi-supervised approaches
like [1] (Alternating Structures Optimization (ASO)) and [2] (Semi-supervised extension of CRFs)
as well as embeddings like C&W, HLBL and Brown Clustering.
4.1 Datasets and Experimental Setup

For the NER experiments we used the data from the CoNLL 2003 shared task, and for the chunking experiments we used the CoNLL 2000 shared task data², with the standard training, development and test splits. The CoNLL '03 and CoNLL '00 datasets had ≈ 204K/51K/46K and ≈ 212K/-/47K tokens, respectively, for the Train/Dev./Test sets.
4.1.1 Named Entity Recognition (NER)
We use the same set of baseline features as used by [15, 16] in their experiments. The detailed list
of features is as below:
• Current word w_i; its type information: all-capitalized, is-capitalized, all-digits and so on; prefixes and suffixes of w_i.
• Word tokens in a window of 2 around the current word, i.e. d = (w_{i−2}, w_{i−1}, w_i, w_{i+1}, w_{i+2}); and the capitalization pattern in the window.
• Previous two predictions y_{i−1} and y_{i−2}, and the conjunction of d and y_{i−1}.
• Embedding features (LR-MVL, C&W, HLBL, Brown etc.) in a window of 2 around the current word (if applicable).
Following [17], we use a regularized averaged perceptron model with the above set of baseline features for the NER task. We also used their BILOU text chunk representation and fast greedy inference, as these were shown to give superior performance.
² More details about the data and competition are available at http://www.cnts.ua.ac.be/conll2003/ner/ and http://www.cnts.ua.ac.be/conll2000/chunking/
We also augment the above set of baseline features with gazetteers, as is standard practice in NER experiments. We tuned our free parameter, namely the size of the LR-MVL embedding, on the development set, and scaled our embedding features to have an ℓ2 norm of 1 for each token, further multiplying them by a normalization constant (also chosen by cross-validation) so that, when they are used in conjunction with other categorical features in a linear classifier, they do not exert undue influence. The size of LR-MVL embeddings (state-space) that gave the best performance on the development set was k = 50 (50 each for X_l, X_w, X_r in Algorithm 2), i.e. the total size of the embeddings was 50×3, and the best normalization constant was 0.5. We omit validation plots due to paucity of space.
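In code, this scaling is a one-liner (the helper name is ours; c = 0.5 is the cross-validated constant mentioned above):

```python
import numpy as np

def scale_embeddings(E, c=0.5):
    """l2-normalize each token's embedding row, then scale by a constant c
    before mixing with categorical features in a linear classifier."""
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    return c * E / np.maximum(norms, 1e-12)
```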
4.1.2 Chunking
For our chunking experiments we use a similar base set of features as above:

• Current word w_i and word tokens in a window of 2 around the current word, i.e. d = (w_{i−2}, w_{i−1}, w_i, w_{i+1}, w_{i+2});
• POS tags t_i in a window of 2 around the current word.
• Word conjunction features w_i ∧ w_{i+1}, i ∈ {−1, 0}, and tag conjunction features t_i ∧ t_{i+1}, i ∈ {−2, −1, 0, 1} and t_i ∧ t_{i+1} ∧ t_{i+2}, i ∈ {−2, −1, 0}.
• Embedding features in a window of 2 around the current word (when applicable).
Since the CoNLL '00 chunking data does not have a development set, we randomly sampled 1000 sentences from the training data (8936 sentences) for development. So, we trained our chunking models on 7936 training sentences, evaluated their F1 score on the 1000 development sentences, and used a CRF³ as the supervised classifier. We tuned the size of the embedding and the magnitude of the ℓ2 regularization penalty in the CRF on the development set, and took the log (or −log of the magnitude) of the value of the features⁴. The regularization penalty that gave the best performance on the development set was 2, and here again the best size of the LR-MVL embeddings (state-space) was k = 50. Finally, we trained the CRF on the entire ("original") training data, i.e. 8936 sentences.
4.1.3 Unlabeled Data and Induction of Embeddings
For inducing the embeddings we used the RCV1 corpus, containing Reuters newswire from Aug '96 to Aug '97 and comprising about 63 million tokens in 3.3 million sentences⁵. Case was left intact, and we did not do the "cleaning" done by [18, 16], i.e. removing all sentences which are less than 90% lowercase a-z, as our multi-view learning approach is robust to such noisy data, like news byline text (mostly all caps) which does not correlate strongly with the text of the article.
We induced our LR-MVL embeddings over a period of 3 days (70 core hours on a 3.0 GHz CPU) on the entire RCV1 data, performing 4 iterations with a vocabulary size of 300K and using a variety of smoothing rates (μ in Algorithm 1) to capture correlations between shorter and longer contexts, μ ∈ {0.005, 0.01, 0.05, 0.1, 0.5, 0.9}; theoretically we could tune the smoothing parameters on the development set, but we found this mixture of long- and short-term dependencies to work well in practice.
As far as the other embeddings are concerned, i.e. C&W, HLBL and Brown clusters, we downloaded them from http://metaoptimize.com/projects/wordreprs. The details about their induction and parameter tuning can be found in [16]; we report their best numbers here. It is also worth noting that the unsupervised training of LR-MVL was (> 1.5 times)⁶ faster than the other embeddings.
4.2 Results
The results for NER and Chunking are shown in Tables 1 and 2, respectively, which show that
LR-MVL performs significantly better than state-of-the-art competing methods on both NER and
Chunking tasks.
³ http://www.chokkan.org/software/crfsuite/
⁴ Our embeddings are learnt using a linear model whereas the CRF is a log-linear model, so to keep things on the same scale we did this normalization.
⁵ We chose this particular dataset to make a fair comparison with [1, 16], who report results using RCV1 as unlabeled data.
⁶ As some of these embeddings were trained on a GPGPU, this makes our method even faster comparatively.
Embedding/Model        | F1 (Dev. Set) | F1 (Test Set)
-- No Gazetteers --
Baseline               | 90.03 | 84.39
C&W, 200-dim           | 92.46 | 87.46
HLBL, 100-dim          | 92.00 | 88.13
Brown 1000 clusters    | 92.32 | 88.52
Ando & Zhang '05       | 93.15 | 89.31
Suzuki & Isozaki '08   | 93.66 | 89.36
LR-MVL (CO) 50×3-dim   | 93.11 | 89.55
LR-MVL 50×3-dim        | 93.61 | 89.91
-- With Gazetteers --
HLBL, 100-dim          | 92.91 | 89.35
C&W, 200-dim           | 92.98 | 88.88
Brown, 1000 clusters   | 93.25 | 89.41
LR-MVL (CO) 50×3-dim   | 93.91 | 89.89
LR-MVL 50×3-dim        | 94.41 | 90.06

Table 1: NER results. Note: 1) LR-MVL (CO) are context-oblivious embeddings, obtained from A in Algorithm 1. 2) F1-score = harmonic mean of precision and recall. 3) The current state-of-the-art for this NER task is 90.90 (test set), but using 700 billion tokens of unlabeled data [19].
Embedding/Model        | Test Set F1-Score
Baseline               | 93.79
HLBL, 50-dim           | 94.00
C&W, 50-dim            | 94.10
Brown 3200 clusters    | 94.11
Ando & Zhang '05       | 94.39
Suzuki & Isozaki '08   | 94.67
LR-MVL (CO) 50×3-dim   | 95.02
LR-MVL 50×3-dim        | 95.44

Table 2: Chunking results.
It is important to note that in problems like NER, the final accuracy depends on performance on rare words, and since LR-MVL robustly correlates past with future views, it learns better representations for rare words, resulting in better overall accuracy. On rare words (occurring < 10 times in the corpus), we got 11.7%, 10.7% and 9.6% relative reductions in error over C&W, HLBL and Brown respectively for NER; on chunking the corresponding numbers were 6.7%, 7.1% and 8.7%.

Also, it is worth mentioning that modeling the context in embeddings gives decent improvements in accuracy on both the NER and chunking problems. For NER, the polysemous words were mostly ones like Chicago, Wales, Oakland etc., which could be either a location or an organization (sports teams, banks etc.). When we do not use the gazetteer features (which are known lists of cities, persons, organizations etc.), modeling context gives a larger increase in F-score; once gazetteer features, which capture most of the information about polysemous words in the NER dataset, are present, modeling the context does not help as much. The polysemous words for the chunking dataset were ones like spot (VP/NP), never (VP/ADVP), more (NP/VP/ADVP/ADJP) etc., and in this case embeddings with context helped significantly, giving a 3.1-6.5% relative improvement in accuracy over context-oblivious embeddings.
5 Summary and Conclusion
In this paper, we presented a novel CCA-based multi-view learning method, LR-MVL, for large
scale sequence learning problems such as arise in NLP. LR-MVL is a spectral method that works
in low dimensional state-space so it is computationally efficient, and can be used to train using
large amounts of unlabeled data; moreover it does not get stuck in local optima like an EM trained
HMM. The embeddings learnt using LR-MVL can be used as features with any supervised learner.
LR-MVL has strong theoretical grounding; is much simpler and faster than competing methods and
achieves state-of-the-art accuracies on NER and Chunking problems.
Acknowledgements: The authors would like to thank Alexander Yates, Ted Sandler and the three anonymous reviewers for providing valuable feedback. We would also like to thank Lev Ratinov and Joseph Turian for answering our questions regarding their paper [16].
References
[1] Ando, R., Zhang, T.: A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research 6 (2005) 1817–1853
[2] Suzuki, J., Isozaki, H.: Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In: ACL. (2008)
[3] Brown, P., deSouza, P., Mercer, R., Pietra, V.D., Lai, J.: Class-based n-gram models of natural language. Comput. Linguist. 18 (December 1992) 467–479
[4] Pereira, F., Tishby, N., Lee, L.: Distributional clustering of English words. In: 31st Annual Meeting of the ACL. (1993) 183–190
[5] Huang, F., Yates, A.: Distributional representations for handling sparsity in supervised sequence-labeling. ACL '09, Stroudsburg, PA, USA, Association for Computational Linguistics (2009) 495–503
[6] Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. ICML '08, New York, NY, USA, ACM (2008) 160–167
[7] Mnih, A., Hinton, G.: Three new graphical models for statistical language modelling. ICML '07, New York, NY, USA, ACM (2007) 641–648
[8] Dumais, S., Furnas, G., Landauer, T., Deerwester, S., Harshman, R.: Using latent semantic analysis to improve access to textual information. In: SIGCHI Conference on Human Factors in Computing Systems, ACM (1988) 281–285
[9] Hsu, D., Kakade, S., Zhang, T.: A spectral algorithm for learning hidden Markov models. In: COLT. (2009)
[10] Siddiqi, S., Boots, B., Gordon, G.J.: Reduced-rank hidden Markov models. In: AISTATS 2010. (2010)
[11] Song, L., Boots, B., Siddiqi, S.M., Gordon, G.J., Smola, A.J.: Hilbert space embeddings of hidden Markov models. In: ICML. (2010)
[12] Hotelling, H.: Canonical correlation analysis (CCA). Journal of Educational Psychology (1935)
[13] Blum, A., Mitchell, T.: Combining labeled and unlabeled data with co-training. In: COLT '98. (1998) 92–100
[14] Halko, N., Martinsson, P.G., Tropp, J.: Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. (Dec 2010)
[15] Zhang, T., Johnson, D.: A robust risk minimization based named entity recognition system. CoNLL '03 (2003) 204–207
[16] Turian, J., Ratinov, L., Bengio, Y.: Word representations: a simple and general method for semi-supervised learning. ACL '10, Stroudsburg, PA, USA, Association for Computational Linguistics (2010) 384–394
[17] Ratinov, L., Roth, D.: Design challenges and misconceptions in named entity recognition. In: CoNLL. (2009) 147–155
[18] Liang, P.: Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology (2005)
[19] Lin, D., Wu, X.: Phrase clustering for discriminative learning. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. ACL '09, Stroudsburg, PA, USA, Association for Computational Linguistics (2009) 1030–1038
Hierarchical Multitask Structured Output Learning for Large-Scale Sequence Segmentation
Nico Görnitz¹
Technical University Berlin,
Franklinstr. 28/29, 10587 Berlin, Germany
[email protected]
Christian Widmer¹
FML of the Max Planck Society
Spemannstr. 39, 72070 Tübingen, Germany
[email protected]
Georg Zeller
European Molecular Biology Laboratory
Meyerhofstr. 1, 69117 Heidelberg, Germany
[email protected]
André Kahles
FML of the Max Planck Society
Spemannstr. 39, 72070 Tübingen, Germany
[email protected]
Sören Sonnenburg²
TomTom
An den Treptowers 1, 12435 Berlin, Germany
[email protected]
Gunnar Rätsch
FML of the Max Planck Society
Spemannstr. 39, 72070 Tübingen, Germany
[email protected]
Abstract
We present a novel regularization-based Multitask Learning (MTL) formulation
for Structured Output (SO) prediction for the case of hierarchical task relations.
Structured output prediction often leads to difficult inference problems and hence
requires large amounts of training data to obtain accurate models. We propose to
use MTL to exploit additional information from related learning tasks by means of
hierarchical regularization. Training SO models on the combined set of examples
from multiple tasks can easily become infeasible for real world applications. To
be able to solve the optimization problems underlying multitask structured output learning, we propose an efficient algorithm based on bundle-methods. We
demonstrate the performance of our approach in applications from the domain of
computational biology addressing the key problem of gene finding. We show that
1) our proposed solver achieves much faster convergence than previous methods
and 2) that the Hierarchical SO-MTL approach outperforms considered non-MTL
methods.
1 Introduction
In Machine Learning, model quality is most often limited by the lack of sufficient training data.
When data from different, but related tasks, is available, it is possible to exploit it to boost the performance of each task by transferring relevant information. Multitask learning (MTL) considers
the problem of inferring models for several tasks simultaneously, while imposing regularity criteria
or shared representations in order to allow learning across tasks. This has been an active research
focus and various methods (e.g., [5, 8]) have been explored, providing empirical findings [16] and
theoretical foundations [3, 4]. Recently, also the relationships between tasks have been studied (e.g.,
[1]) assuming a cluster relationship [11] or a hierarchy [6, 23, 13] between tasks. Our proposed
method follows this line of research in that it exploits externally provided hierarchical task relations. The generality of regularization-based MTL approaches makes it possible to extend them
beyond the simple cases of classification or regression to Structured Output (SO) learning problems
¹ These authors contributed equally.
² This work was done while SS was at Technical University Berlin.
[14, 2, 21, 10]. Here, the output is not in the form of a discrete class label or a real valued number,
but a structured entity such as a label sequence, a tree, or a graph. One of the main contributions
of this paper is to explicitly extend a regularization-based MTL formulation to the SVM-struct formulation for SO prediction [2, 21]. SO learning methods can be computationally demanding, and
combining information from several tasks leads to even larger problems, which renders many interesting applications infeasible. Hence, our second main contribution is to provide an efficient solver
for SO problems which is based on bundle methods [18, 19, 7]. It achieves much faster convergence
and is therefore an essential tool to cope with the demands of the MTL setting.
SO learning has been successfully applied in the analysis of images, natural language, and sequences. The latter is of particular interest in computational biology for the analysis of DNA, RNA
or protein sequences. This field moreover constitutes an excellent application area for MTL [12, 22].
In computational biology, one often uses supervised learning methods to model biological processes
in order to predict their outcomes and ultimately understand them better. Due to the complexity
of many biological mechanisms, rich computational models have to be developed, which in turn
require a reasonable amount of training data. However, especially in the biomedical domain, obtaining labeled training examples through experiments can be costly. Thus, combining information
from several related tasks can be a cost-effective approach to best exploit the available label data.
When transferring label information across tasks, it often makes sense to assume hierarchical task
relations. In particular, in computational biology, where evolutionary processes often impose a task
hierarchy [22]. For instance, we might be interested in modeling a common biological mechanism
in several organisms such that each task corresponds to one organism. In this setting, we expect
that the longer the common evolutionary history between two organisms, the more beneficial it is
to share information between the corresponding tasks. In this work, we chose a challenging problem from genome biology to demonstrate that our approach is practically feasible in terms of speed
and accuracy. In ab initio gene finding [17], the task is to build an accurate model of a gene and
subsequently use it to predict the gene content of newly sequenced genomes or to refine existing
annotations. Despite many commonalities between sequence features of genes across organisms,
sequence differences have made it very difficult to build universal gene finders that achieve high
accuracy in cross-organism prediction. This problem is hence ideally suited for the application of
the proposed SO-MTL approach.
2 Methods
Regularization-based supervised learning methods, such as the SVM or Logistic Regression, play a central role in many applications. In its most general form, such a method consists of a loss function L that captures the error with respect to the training data $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ and a regularizer R that penalizes model complexity:
$$J(w) = \sum_{i=1}^{n} L(w, x_i, y_i) + R(w).$$
In the case of Multitask Learning (MTL), one is interested in obtaining several models $w_1, \dots, w_T$ based on T associated sets of examples $S_t = \{(x_1, y_1), \dots, (x_{n_t}, y_{n_t})\}$, $t = 1, \dots, T$. To couple individual tasks, an additional regularization term $R_{MTL}$ is introduced that penalizes the disagreement between the individual models (e.g., [1, 8]):
$$J(w_1, \dots, w_T) = \sum_{t=1}^{T}\left(\sum_{i=1}^{n_t} L(w_t, x_i, y_i) + R(w_t)\right) + R_{MTL}(w_1, \dots, w_T).$$
Special cases include T = 2 and $R_{MTL}(w_1, w_2) = \lambda\,\|w_1 - w_2\|$ (e.g., [8, 16]), where $\lambda$ is a hyper-parameter controlling the strength of coupling of the solutions for both tasks. For more than two tasks, the number of coupling terms and hyper-parameters can rise quadratically, leading to a difficult model-selection problem.
2.1 Hierarchical Multitask Learning (HMTL)
We consider the case where tasks correspond to leaves of a tree and are related by its inner nodes. In
[22], the case of taxonomically organized two-class classification tasks was investigated, where each
task corresponds to a species (taxon). The idea was to mimic biological evolution that is assumed to
generate more specialized molecular processes with each speciation event from root to leaf. This is
implemented by training on examples available for nodes in the current subtree (i.e., the tasks below
the current node), while similarity to the parent classifier is induced through regularization. Thus,
for each node n, one solves the following optimization problem,
$$(\hat{w}_n, \hat{b}_n) = \operatorname*{argmin}_{w,b} \left\{ \frac{1}{2}\Big((1-\gamma)\,\|w\|^2 + \gamma\,\|w - \hat{w}_p\|^2\Big) + C \sum_{(x,y)\in S} \ell\big(\langle x, w\rangle + b,\, y\big) \right\}, \quad (1)$$
where p is the parent node of n (with the special case of $\hat{w}_p = 0$ for the root node) and $\ell$ is an appropriate loss function (e.g., the hinge loss). The hyper-parameter $\gamma \in [0, 1]$ determines the contribution of regularization from the origin vs. the parent node's parameters (i.e., the strength of coupling between the node and its parent). The above problem can be equivalently rewritten as:
$$(\hat{w}_n, \hat{b}_n) = \operatorname*{argmin}_{w,b} \left\{ \frac{1}{2}\|w\|^2 - \gamma\,\langle w, \hat{w}_p\rangle + C \sum_{(x,y)\in S} \ell\big(\langle x, w\rangle + b,\, y\big) \right\}. \quad (2)$$
For $\gamma = 0$, the tasks completely decouple and can be learnt independently. The parameters for the root node correspond to the globally best model. We will refer to these two cases as baseline methods for comparisons in the experimental section.
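A minimal sketch of this top-down training scheme follows, assuming linear models and plain subgradient descent on the per-node objective (2) in place of the bundle solver of Section 2.3; the taxonomy encoding, function names, and toy data are our own illustrative choices.

import numpy as np

def train_node(X, y, w_parent, gamma, C=1.0, epochs=300, lr=0.01):
    # Subgradient descent on 0.5||w||^2 - gamma*<w, w_parent> + C*sum_i hinge(y_i <w, x_i>),
    # i.e., the per-node objective (2); the paper uses a bundle solver instead.
    w = w_parent.copy()
    for _ in range(epochs):
        viol = y * (X @ w) < 1.0
        grad = w - gamma * w_parent - C * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w

def leaves(node):
    return [node] if isinstance(node, str) else [l for c in node[1] for l in leaves(c)]

def train_taxonomy(node, data, w_parent, gamma=0.5, models=None):
    # Depth-first: each node trains on the pooled examples of the leaves below it,
    # regularized toward its parent's solution; leaf models are collected.
    models = {} if models is None else models
    X = np.vstack([data[l][0] for l in leaves(node)])
    y = np.concatenate([data[l][1] for l in leaves(node)])
    w = train_node(X, y, w_parent, gamma)
    if isinstance(node, str):
        models[node] = w
    else:
        for child in node[1]:
            train_taxonomy(child, data, w, gamma, models)
    return models

# Toy taxonomy with two leaf tasks; labels in {-1, +1}.
rng = np.random.default_rng(1)
data = {t: (rng.normal(size=(40, 3)), rng.choice([-1.0, 1.0], size=40)) for t in ("t1", "t2")}
models = train_taxonomy(("root", ["t1", "t2"]), data, w_parent=np.zeros(3))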
2.2 Structured Output Learning and Extensions for HMTL
In contrast to binary classification, elements from the output space $\Sigma$ (e.g., sequences, trees, or graphs) of structured output problems have an inherent structure which makes more sophisticated, problem-specific loss functions desirable. The loss between the true label $y \in \Sigma$ and the predicted label $\hat{y} \in \Sigma$ is measured by a loss function $\Delta: \Sigma \times \Sigma \to \mathbb{R}_+$. A widely used approach to predict $\hat{y} \in \Sigma$ is the use of a linearly parametrized model given an input vector $x \in \mathcal{X}$ and a joint feature map $\Psi: \mathcal{X} \times \Sigma \to \mathcal{H}$ that captures the dependencies between input and output (e.g., [21]):
$$\hat{y}_w(x) = \operatorname*{argmax}_{\hat{y} \in \Sigma} \langle w, \Psi(x, \hat{y})\rangle.$$
The most common approaches to estimate the model parameters w are based on structured output
SVMs (e.g., [2, 21]) and conditional random fields (e.g., [14]; see also [10]). Here we follow
the approach taken in [21, 15], where estimating the parameter vector w amounts to solving the
following optimization problem
$$\min_{w \in \mathcal{H}} \left\{ R(w) + C \sum_{i=1}^{n} \ell\Big(\max_{\hat{y} \in \Sigma}\langle w, \Psi(x_i, \hat{y})\rangle + \Delta(y_i, \hat{y}) - \langle w, \Psi(x_i, y_i)\rangle\Big) \right\}, \quad (3)$$
where $R(w)$ is a regularizer and $\ell$ is a loss function. For $\ell(a) = \max(0, a)$ and $R(w) = \|w\|_2^2$ we obtain the structured output support vector machine [21, 2] with margin rescaling and hinge loss.
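As an illustration of the loss inside (3), the sketch below computes the margin-rescaled hinge for the simplest structured case, multiclass classification with a block joint feature map and 0/1 loss $\Delta$; for sequences, the loss-augmented argmax would be a decoding step rather than an enumeration. The feature map and names are illustrative, not the paper's.

import numpy as np

def psi(x, y, n_classes):
    # Joint feature map for the simplest structured case (multiclass): x is
    # copied into the block that belongs to class y.
    out = np.zeros(n_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def margin_rescaled_hinge(w, x, y_true, n_classes):
    # ell(a) = max(0, a) applied to max_yh <w, psi(x, yh)> + Delta(y, yh) - <w, psi(x, y)>,
    # with Delta the 0/1 loss; returns the loss and the loss-augmented argmax.
    delta = lambda yh: float(yh != y_true)
    scores = [w @ psi(x, yh, n_classes) + delta(yh) for yh in range(n_classes)]
    y_star = int(np.argmax(scores))
    loss = max(0.0, scores[y_star] - w @ psi(x, y_true, n_classes))
    return loss, y_star

x = np.array([1.0, 2.0])
w = np.zeros(3 * 2)
print(margin_rescaled_hinge(w, x, y_true=1, n_classes=3))  # (1.0, 0): any wrong class violates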
It turns out that we can combine the structured output formulation with hierarchical multitask learning in a straightforward way. We extend the regularizer $R(w)$ in (3) with a $\gamma$-parametrized convex combination of a multitask regularizer $\frac{1}{2}\|w - w_p\|_2^2$ with the original term. When $R(w) = \frac{1}{2}\|w\|_2^2$ and omitting constant terms, we arrive at $R_{p,\gamma}(w) = \frac{1}{2}\|w\|_2^2 - \gamma\,\langle w, w_p\rangle$. Thus we can apply the described hierarchical multitask learning approach and solve for every node the following optimization problem:
$$\min_{w \in \mathcal{H}} \left\{ R_{p,\gamma}(w) + C \sum_{i=1}^{n} \ell\Big(\max_{\hat{y} \in \Sigma}\langle w, \Psi(x_i, \hat{y})\rangle + \Delta(y_i, \hat{y}) - \langle w, \Psi(x_i, y_i)\rangle\Big) \right\} \quad (4)$$
A major difficulty remains: solving the resulting optimization problems which now can become
considerably larger than for the single-task case.
2.3 A Bundle Method for Efficient Optimization
A common approach to obtain a solution to (3) is to use so-called cutting-plane or column-generation
methods. Here one considers growing subsets of all possible structures and solves restricted optimization problems. An algorithm implementing a variant of this strategy based on primal optimization is given in the appendix (similar in [21]). Cutting-plane and column generation techniques
often converge slowly. Moreover, the size of the restricted optimization problems grows steadily
and solving them becomes more expensive in each iteration. Simple gradient descent or second
order methods cannot be directly applied as alternatives, because (4) is continuous but non-smooth.
Our approach is instead based on bundle methods for regularized risk minimization as proposed in
[18, 19] and [7]. In case of SVMs, this further relates to the OCAS method introduced in [9]. In
order to achieve fast convergence, we use a variant of these methods adapted to structured output
learning that is suitable for hierarchical multitask learning.
We consider the objective function $J(w) = R_{p,\gamma}(w) + L(w)$, where
$$L(w) := C \sum_{i=1}^{n} \ell\Big(\max_{\hat{y} \in \Sigma}\big\{\langle w, \Psi(x_i, \hat{y})\rangle + \Delta(y_i, \hat{y})\big\} - \langle w, \Psi(x_i, y_i)\rangle\Big)$$
and $R_{p,\gamma}(w)$ is as defined in Section 2.2. Direct optimization of J is very expensive as computing L involves computing the maximum over the output space. Hence, we propose to optimize an estimate $\hat{L}(w)$ of the empirical loss, which can be computed efficiently. We define the estimated empirical loss $\hat{L}(w)$ as
$$\hat{L}(w) := C \sum_{i=1}^{N} \ell\Big(\max_{(\Psi, \Delta) \in \Omega_i}\big\{\langle w, \Psi\rangle + \Delta\big\} - \langle w, \Psi(x_i, y_i)\rangle\Big).$$
Accordingly, we define the estimated objective function as $\hat{J}(w) = R_{p,\gamma}(w) + \hat{L}(w)$. It is easy to verify that $\hat{J}(w) \le J(w)$. $\Omega_i$ is a set of pairs $(\Psi(x_i, y), \Delta(y_i, y))$ defined by a suitably chosen, growing subset of $\Sigma$, such that $\hat{L}(w) \le L(w)$ (cf. Algorithm 1).
In general, bundle methods are extensions of cutting plane methods that use a prox-function to stabilize the solution of the approximated function. In the framework of regularized risk minimization, a natural prox-function is given by the regularizer. We apply this approach to the objective $\hat{J}(w)$ and solve
$$\min_{w} \; R_{p,\gamma}(w) + \max_{i \in I}\{\langle a_i, w\rangle + b_i\}, \quad (5)$$
where the set of cutting planes $(a_i, b_i)$ lower bounds $\hat{L}$. As proposed in [7, 19], we use a set $I$ of limited size. Moreover, we calculate an aggregation cutting plane $(\hat{a}, \hat{b})$ that lower bounds the estimated empirical loss $\hat{L}$. To be able to solve the primal optimization problem in (5) in the dual space as proposed by [7, 19], we adopt an elegant strategy described in [7] to obtain the aggregated cutting plane $(\hat{a}', \hat{b}')$ using the dual solution $\alpha$ of (5):
$$\hat{a}' = \sum_{i \in I} \alpha_i a_i \quad \text{and} \quad \hat{b}' = \sum_{i \in I} \alpha_i b_i. \quad (6)$$
The following two formulations reach the same minimum when optimized with respect to w:
$$\min_{w \in \mathcal{H}} \; R_p(w) + \max_{i \in I}\langle a_i, w\rangle + b_i \;=\; \min_{w \in \mathcal{H}} \big\{ R_p(w) + \langle \hat{a}', w\rangle + \hat{b}' \big\}.$$
This new aggregated plane can be used as an additional cutting plane in the next iteration step.
We therefore have a monotonically increasing lower bound on the estimated empirical loss and can
remove previously generated cutting planes without compromising convergence (see [7] for details).
The algorithm is able to handle any (non-)smooth convex loss function $\ell$, since only the subgradient needs to be computed. This can be done efficiently for the hinge loss, squared hinge loss, Huber loss, and logistic loss.
The resulting optimization algorithm is outlined in Algorithm 1. There are several improvements possible: for instance, one can bypass updating the empirical risk estimates in line 6 when $L(w^{(k)}) - \hat{L}(w^{(k)}) \le \epsilon$. Finally, while Algorithm 1 was formulated in primal space, it is easy to reformulate it in dual variables, making it independent of the dimensionality of $w \in \mathcal{H}$.
Algorithm 1 Bundle Method for Structured Output Learning
1: $S \ge 1$: maximal size of the bundle set
2: $\theta > 0$: linesearch trade-off (cf. [9] for details)
3: $w^{(1)} = w_p$
4: $k = 1$ and $\hat{a} = 0$, $\hat{b} = 0$, $\Omega_i = \emptyset \;\; \forall i$
5: repeat
6:   for i = 1, .., n do
7:     $y^* = \operatorname{argmax}_{y \in \Sigma}\{\langle w^{(k)}, \Psi(x_i, y)\rangle + \Delta(y_i, y)\}$
8:     if $\ell\big(\max_{y \in \Sigma}\{\langle w, \Psi(x_i, y)\rangle + \Delta(y_i, y)\}\big) > \ell\big(\max_{(\Psi,\Delta) \in \Omega_i}\{\langle w, \Psi\rangle + \Delta\}\big)$ then
9:       $\Omega_i = \Omega_i \cup (\Psi(x_i, y^*), \Delta(y_i, y^*))$
10:    end if
11:    Compute $a_k \in \partial_w \hat{L}(w^{(k)})$
12:    Compute $b_k = \hat{L}(w^{(k)}) - \langle w^{(k)}, a_k\rangle$
13:    $w^* = \operatorname{argmin}_{w \in \mathcal{H}} \; R_{p,\gamma}(w) + \max\big( \max_{(k-S)_+ < i \le k}\{\langle a_i, w\rangle + b_i\}, \; \langle \hat{a}, w\rangle + \hat{b} \big)$
14:    Update $\hat{a}$, $\hat{b}$ according to (6)
15:    $\eta^* = \operatorname{argmin}_{\eta \in \mathbb{R}} \; \hat{J}\big(w^{(k)} + \eta(w^* - w^{(k)})\big)$
16:    $w^{(k+1)} = (1-\theta)\, w^* + \theta\big(w^{(k)} + \eta^*(w^* - w^{(k)})\big)$
17:    $k = k + 1$
18:  end for
19: until $L(w^{(k)}) - \hat{L}(w^{(k)}) \le \epsilon$ and $J(w^{(k)}) - J_k(w^{(k)}) \le \epsilon$
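The sketch below is a bare-bones version of the cutting-plane core of Algorithm 1 for $R(w) = \frac{1}{2}\|w\|^2$, omitting the aggregation plane, the bundle-size limit S, and the line search; the master problem is solved through its simplex dual. It is meant only to illustrate the mechanics, with illustrative names and toy data.

import numpy as np
from scipy.optimize import minimize

def solve_master(A, b):
    # min_w 0.5||w||^2 + max_i (a_i.w + b_i) via its dual over the simplex:
    # max_{lam>=0, sum lam=1} -0.5||A' lam||^2 + b' lam, with w* = -A' lam*.
    m = len(b)
    neg_dual = lambda lam: 0.5 * np.sum((A.T @ lam) ** 2) - b @ lam
    res = minimize(neg_dual, np.ones(m) / m, bounds=[(0.0, 1.0)] * m,
                   constraints=({'type': 'eq', 'fun': lambda l: l.sum() - 1.0},))
    return -A.T @ res.x

def bundle(risk_and_subgrad, dim, iters=25):
    # Bare-bones cutting-plane loop: no aggregation plane, bundle limit,
    # or line search, unlike Algorithm 1.
    w, A, b = np.zeros(dim), [], []
    for _ in range(iters):
        r, g = risk_and_subgrad(w)          # L(w) and a subgradient of L at w
        A.append(g)
        b.append(r - g @ w)                 # plane: L(v) >= <g, v> + b
        w = solve_master(np.array(A), np.array(b))
    return w

# Example risk: hinge loss of a linear classifier on three points.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])
def hinge_risk(w):
    m = 1.0 - y * (X @ w)
    act = m > 0
    return m[act].sum(), -(y[act, None] * X[act]).sum(axis=0)

w = bundle(hinge_risk, dim=2)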
2.4 Taxonomically Constrained Model Selection
Model selection for multitask learning is particularly difficult, as it requires hyper-parameter selection for several different, but related tasks in a dependent manner. For the described approach, each node n in the given taxonomy corresponds to solving an optimization problem that is subject to hyper-parameters $\gamma_n$ and $C_n$ (except for the root node, where only $C_n$ is relevant). Hence, the direct optimization of all combinations of dependent hyper-parameters in model selection is not feasible in many cases. Therefore, we propose to perform a local model selection and optimize the current $C_n$ and $\gamma_n$ at each node n from top to bottom independently. This corresponds to using the taxonomy for reducing the parameter search space. To clarify this point, assume a perfect binary tree for n tasks. The length of the path from root to leaf is $\log_2(n)$. The parameters along one path are dependent, e.g., the values chosen at the root will influence the optimal choice further down the tree. Given k candidate values for parameter $\gamma_n$, jointly optimizing all interdependent parameters along one path corresponds to optimizing over a grid of $k^{\log_2(n)}$ combinations, in contrast to $k \cdot \log_2(n)$ when using our proposed local strategy.
3 Results
3.1 Background
To demonstrate the validity of our approach, we applied it to the computational biology problem of
gene finding. Here, the task is to identify genomic regions encoding genes (from which RNAs and/or
proteins are produced). Genomic sequence can be represented by long strings of the four letters A, C,
G, and T (genome sizes range from a few megabases to several gigabases). In prokaryotes (mostly
bacteria and archaea) gene structures are comparably simple (cf. Figure 1A): the protein coding
region starts with a start codon (one out of three specific 3-mers in many prokaryotes), followed by a
number of codon triplets (of three nucleotides each) and is terminated by a stop codon (one out of
five specific 3-mers in many prokaryotes). Genic regions are first transcribed to RNA, subsequently
the contained coding region is translated into a protein. Parts of the RNA that are not translated are
called untranslated region (UTR). Genes are separated from one another by intergenic regions. The
protein coding segment is depleted of stop codons, making the computational problem of identifying coding regions relatively straightforward.
In higher eukaryotes (animals, plants, etc.) however, the coding region can be interrupted by introns, which are removed from the RNA before it is translated into protein. Introns are flanked by
specific sequence signals, so-called splice sites (cf. Figure 1B). The presence of introns substantially
complicates the identification of the transcribed and coding regions. In particular, it is usually insufficient to identify regions depleted of stop codons to determine the encoded protein sequence. To
accurately detect the transcribed regions in eukaryotic genomes, it is therefore often necessary to
use additional experimental data (e.g., sequencing of RNA fragments). Here, we consider two key
problems in computational gene finding of (i) predicting (only) the coding regions for prokaryotes
and (ii) predicting the exon-intron structure (but not the coding region) for eukaryotes.
[Figure 1 graphic: (A) prokaryotic gene layout with intergenic regions, UTRs, a start codon (ATG), a coding region of N x 3 nucleotides, and a stop codon (TAA); (B) eukaryotic gene layout with exons and introns inside the transcribed region.]
Figure 1: Panel A shows the structure of
a prokaryotic gene. The protein coding region is flanked by a start and a stop codon
and contains a multiple of three nucleotides.
UTR denotes the untranslated region. Panel
B shows the structure of an eukaryotic gene.
The transcribed region contains introns and
exons. Introns are flanked by splice sites and
are removed from the RNA. The remaining
sequence contains the UTRs and coding region.
The problem of identifying genes can be posed as a label sequence learning task, were one assigns
a label (out of intergenic, transcript start, untranslated region, coding start, coding exon, intron,
coding stop, transcript stop) to each position in the genome. The labels have to follow a grammar
dictated by the biological processes of transcription and translation (see Figure 1) making it suitable
to apply structured output learning techniques to identify genes. Because the biological processes
and cellular machineries which recognize genes have slowly evolved over time, genes of closely
related species tend to exhibit similar sequence characteristics. Therefore these problems are very
well suited for the application of multitask learning: sharing information among species is expected
to lead to more accurate gene predictions compared to approaching the problem for each species in
isolation. Currently, the genomes of many prokaryotic and eukaryotic species are being sequenced,
but often very little is known about the genes encoded, and standard methods are typically used to
infer them without systematically exploiting reliable information on related species.
In the following we will consider two different aspects of the described problem. First, focusing on
eukaryotic gene finding for a single species, we show that the proposed optimization algorithm very
quickly converges to the optimal solution. Second, for the problem of prokaryotic gene finding in
several species, we demonstrate that hierarchical multitask structured output learning significantly
improves gene prediction accuracy. The supplement, data and code can be found on the project
website³.
3.2 Eukaryotic Gene Finding Based on RNA-Seq
We first consider the problem of detecting exonic, intronic and intergenic regions in a single eukaryotic genome. We use experimental data from RNA sequencing (RNA-seq) which provides evidence
for exonic and intronic regions. For simplicity, we assume that for each position in the genome
we are given numbers on how often this position was experimentally determined to be exonic and
intronic, respectively. Ideally, exons and introns belonging to the same gene should have a constant
number of confirmations, whereas these values may vary greatly between different genes. But in
reality, these measurements are typically incomplete and noisy, so that inference techniques greatly
help to reconstruct complete gene structures.
Like any HMM or HM-SVM, our method employs a state model defining allowed transitions between
states. It consists of five basic states: intergenic, exonic, intron start (donor splice site), intronic,
and intron end (acceptor splice site). These states are duplicated Q = 5 times to model different
levels of confirmation and the whole model is mirrored for simultaneous predictions of genes from
both strands of the genome (see supplement for details). In total, we have 41 states, each of which is
associated with several parameters scoring features derived from the exon and intron confirmation
and computational splice site predictions (see supplement for details). Overall the model has almost
1000 parameters.
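Inference in such a state model is a Viterbi-style decoding under the transition grammar. Below is a generic sketch with toy dimensions (not the paper's 41-state model); in practice, the emission and transition scores would come from the learned parameters.

import numpy as np

def viterbi(emit, allowed, trans):
    # emit: (T, S) per-position state scores; allowed: (S, S) boolean mask that
    # encodes the grammar (allowed[p, q] = True iff p -> q is legal);
    # trans: (S, S) transition scores. Returns the best admissible state path.
    T, S = emit.shape
    masked = np.where(allowed, trans, -np.inf)
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + masked      # (prev state, next state)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + emit[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 3-state model: 0 = intergenic, 1 = exonic, 2 = intronic.
allowed = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=bool)
emit = np.log(np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]))
print(viterbi(emit, allowed, np.zeros((3, 3))))   # [0, 1, 2]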
We trained the model using 700 training regions with known exon/intron structures and a total length
of ca. 5.2 million nucleotides (data from the nematode C. elegans). We used the column generationbased algorithm (see Appendix) and the Bundle method-based algorithm (Algorithm 1) and recorded
upper and lower bounds of the objective during run time (cf. Figure 2). Whereas both algorithms
³ http://bioweb.me/so-mtl
need a similar amount of computation per iteration (mostly decoding steps), the bundle method showed much faster convergence.
We assessed prediction accuracy in a three-fold cross-validation procedure where individual test
sequences consisted of large genomic regions (of several Mbp) each containing many genes. This
evaluation procedure is expected to yield unbiased estimates that are very similar to whole-genome
predictions. Prediction accuracy was compared to another recently proposed, widely used method
called Cufflinks [20]. We observed that our method detects introns and transcripts more accurately
than Cufflinks in the data set analyzed here (cf. Figure 2).
[Figure 2 graphic: left, objective value vs. iteration on a log scale, with upper and lower bounds for the bundle method and the original OP, plus the target value; right, F-scores of Cufflinks and our method for introns and transcripts.]
Figure 2: Left panel: Convergence for bundle method-based solver versus column generation (log-scale).
Right panel: Prediction accuracy of our eukaryotic gene finding method in comparison to a state-of-the-art
method, Cufflinks [20]. The F-score (harmonic mean of precision and recall) was assessed based on two
metrics: correctly predicted introns as well as transcripts for which all introns were correct (see label).
3.3 Gene Finding in Multiple Prokaryotic Genomes
In a second series of experiments we evaluated the benefit of applying SO-MTL to prokaryotic gene
prediction.
SO prediction method We modeled prokaryotic genes as a Markov chain on the nucleotide level.
To nonetheless account for the biological fact that genetic information is encoded in triplets, the
model contains a 3-cycle of exon states; details are given in Figure 3.
[Figure 3 graphic: state model with states Intergenic, Start Codon, Exonic1, Exonic2, Exonic3, and Stop Codon.]
Figure 3: Simple state model for prokaryotic gene finding.
A suitable model for prokaryotic gene prediction needs to
consider 1) that a gene starts with a start codon (i.e. a certain
triplet of nucleotides) 2) ends with a stop codon and 3) has
a length divisible by 3. Properties 1) and 2) are enforced by
allowing only transitions into and out of the exonic states on
start and stop codons, respectively. Property 3) is enforced
by only allowing transitions from exon state Exonic3 to the
stop codon state.
Data generation We selected a subset of organisms with publicly available genomes to broadly
cover the spectrum of prokaryotic organisms. In order to show that MTL is beneficial even for
relatively distant species, we selected representatives from two different domains: bacteria and archaea. The relationship between these organisms is captured by the taxonomy shown in Figure 4,
which was created based on the information available on the NCBI website⁴. For each organism,
we generated one training example per annotated gene. The genomic sequences were cut between
neighboring genes (splitting intergenic regions equally), such that a minimum distance of 6 nucleotides between genes was maintained. Features for SO learning were derived from the nucleotide
sequence by transcoding it to a numerical representation of triplets. This resulted in binary vectors of size $4^3 = 64$ with exactly one non-zero entry. We sub-sampled from the complete dataset of $N_i$ examples for each organism i and created new datasets with 20 training examples, 40 evaluation examples, and 200 test examples.
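One plausible reading of this encoding is sketched below: a one-hot vector over the 64 possible triplets at each position. Whether the window slides by one nucleotide or steps codon-wise is our assumption; the helper names are illustrative.

import numpy as np
from itertools import product

CODONS = {''.join(c): i for i, c in enumerate(product('ACGT', repeat=3))}  # all 64 triplets

def encode_triplets(seq):
    # One one-hot 64-vector per position, indexing the triplet that starts there.
    n = len(seq) - 2
    X = np.zeros((n, 64))
    for i in range(n):
        X[i, CODONS[seq[i:i + 3]]] = 1.0
    return X

X = encode_triplets("ATGGCATAA")
assert X.shape == (7, 64) and (X.sum(axis=1) == 1).all()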
⁴ ftp://ftp.ncbi.nlm.nih.gov/genomes/Bacteria/
Figure 4: Species and their taxonomic hierarchy used for prokaryotic gene finding.
Experimental setup For model selection we used a grid over the following two parameter ranges, $C \in \{100, 250\}$ and $\gamma \in \{0, 0.025, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0\}$, for each node in the taxonomy (cf. Figure 4). Sub-sampling of the dataset was performed 3 times and results were subsequently averaged.
We compared our MTL algorithm to two baseline methods, one where predictors for all tasks where
trained without information transfer (independent) and the other extreme case, where one global
model was fitted for all tasks based on the union of all data sets (union). Performance was measured by the F-score, the harmonic mean of precision and recall, where precision and recall were
determined on nucleotide level (e.g. whether or not an exonic nucleotide was correctly predicted) in
single-gene regions. (Note that due to its per-nucleotide Markov restriction, however, our method is
not able to exploit that there is only one gene per examples sequence.)
Results Figure 5 shows the results for our proposed MTL method and the two baseline methods
described above (see Appendix for table). We observe that it generally pays off to combine information from different organisms, as union always performs better than independent. Indeed MTL
improves over the naive combination method union with F-score increases of up to 4.05 percentage
points in A. tumefaciens. On average, we observe an improvement of 13.99 percentage points for
MTL over independent and 1.13 percentage points for MTL over union, confirming the value of
MTL in transferring information across tasks. In addition, the new bundle method converges at least
twice as fast as the originally proposed cutting plane method.
[Figure 5 graphic: bar chart of per-organism F-scores (roughly 0.6 to 1.0) for the Independent, Union, and MTL methods across the eight organisms.]
Figure 5: Evaluation of MTL and baseline methods independent and union.
4 Discussion
We have introduced a regularization-based approach to SO learning in the setting of hierarchical
task relations and have empirically shown its validity on an application from computational biology. To cope with the increased problem size usually encountered in the MTL setting, we have
developed an efficient solver based on bundle-methods and demonstrated its improved convergence
behavior compared to column generation techniques. Applying our SO-MTL algorithm to the problem of prokaryotic gene finding, we could show that sharing information across tasks indeed results
in improved accuracy over learning tasks in isolation. Additionally, the taxonomy, which relates
individual tasks to each other, proved useful in that it led to more accurate predictions than were
obtained when simply training on all examples together. We have previously shown that MTL algorithms excel in a scenarios where there is limited training data relative to the complexity of the
problem and model [23]. As this experiment was carried out on a relatively small data set, more
work is required to turn our approach into a state-of-the-art prokaryotic gene finder.
Acknowledgments
We would like to thank the anonymous reviewers for insightful comments. Moreover, we are grateful
to Jonas Behr, Jose Leiva, Yasemin Altun and Klaus-Robert Müller. This work was supported by
the German Research Foundation (DFG) under the grant RA 1894/1-1.
References
[1] A. Agarwal, S. Gerber, and H. Daumé III. Learning multiple tasks using manifold regularization. In Advances in Neural Information Processing Systems 23, 2010.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In Proc. ICML, 2003.
[3] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Lecture Notes in Computer Science, pages 567–580, 2003.
[4] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. Advances in Neural Information Processing Systems, 20, 2007.
[5] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[6] H. Daumé III. Bayesian multitask learning with latent hierarchies. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009.
[7] T.-M.-T. Do. Regularized Bundle Methods for Large-scale Learning Problems with an Application to Large Margin Training of Hidden Markov Models. PhD thesis, l'Université Pierre & Marie Curie, 2010.
[8] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005.
[9] V. Franc and S. Sonnenburg. OCAS optimized cutting plane algorithm for support vector machines. In Proc. ICML, 2008.
[10] T. Hazan and R. Urtasun. A primal-dual message-passing algorithm for approximated large scale structured prediction. In Advances in Neural Information Processing Systems 23, 2010.
[11] L. Jacob, F. Bach, and J. Vert. Clustered multi-task learning: A convex formulation. arXiv preprint arXiv:0809.2085, 2008.
[12] L. Jacob and J. Vert. Efficient peptide-MHC-I binding prediction for alleles with few known binders. Bioinformatics, 24(3):358–66, 2008.
[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. Proc. ICML, 2010.
[14] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[15] G. Rätsch and S. Sonnenburg. Large scale hidden semi-Markov SVMs. In Advances in Neural Information Processing Systems 18, 2006.
[16] G. Schweikert, C. Widmer, B. Schölkopf, and G. Rätsch. An empirical analysis of domain adaptation algorithms for genomic sequence analysis. In Advances in Neural Information Processing Systems 21, 2009.
[17] G. Schweikert, A. Zien, G. Zeller, J. Behr, C. Dieterich, C. Ong, P. Philips, F. De Bona, L. Hartmann, A. Bohlen, N. Krüger, S. Sonnenburg, and G. Rätsch. mGene: accurate SVM-based gene finding with an application to nematode genomes. Genome Research, 19(11):2133–43, 2009.
[18] A. Smola, S. Vishwanathan, and Q. Le. Bundle methods for machine learning. In Advances in Neural Information Processing Systems 20, 2008.
[19] C. Teo, S. Vishwanathan, A. Smola, and Q. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311–365, 2010.
[20] C. Trapnell, B. A. Williams, G. Pertea, A. Mortazavi, G. Kwan, M. J. van Baren, S. L. Salzberg, B. J. Wold, and L. Pachter. Transcript assembly and quantification by RNA-seq reveals unannotated transcripts and isoform switching during cell differentiation. Nature Biotechnology, 28:511–515, 2010.
[21] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[22] C. Widmer, J. Leiva, Y. Altun, and G. Rätsch. Leveraging sequence classification by taxonomy-based multitask learning. In Research in Computational Molecular Biology, 2010.
[23] C. Widmer, N. Toussaint, Y. Altun, and G. Rätsch. Inferring latent task structure for multitask learning by multiple kernel learning. BMC Bioinformatics, 11(Suppl 8):S5, 2010.
A Two-Stage Weighting Framework for Multi-Source Domain Adaptation
Qian Sun*, Rita Chattopadhyay*, Sethuraman Panchanathan, Jieping Ye
Computer Science and Engineering, Arizona State University, AZ 85287
{Qian Sun, rchattop, panch, Jieping.Ye}@asu.edu
Abstract
Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often times we have very few or no
labeled data from the test or target distribution but may have plenty of labeled
data from multiple related sources with different distributions. The difference in
distributions may be both in marginal and conditional probabilities. Most of the
existing domain adaptation work focuses on the marginal probability distribution
difference between the domains, assuming that the conditional probabilities are
similar. However in many real world applications, conditional probability distribution differences are as commonplace as marginal probability differences. In
this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage),
with the target domain data. The weights for minimizing the marginal probability
differences are estimated independently, while the weights for minimizing conditional probability differences are computed simultaneously by exploiting the potential interaction among multiple sources. We also provide a theoretical analysis
on the generalization performance of the proposed multi-source domain adaptation formulation using the weighted Rademacher complexity measure. Empirical
comparisons with existing state-of-the-art domain adaptation methods using three
real-world datasets demonstrate the effectiveness of the proposed approach.
1 Introduction
We consider the domain adaptation scenarios where we have very few or no labeled data from target
domain but a large amount of labeled data from multiple related source domains with different data
distributions. Under such situations, learning a single or multiple hypotheses on the source domains
using traditional machine learning methodologies and applying them on target domain data may lead
to poor prediction performance. This is because traditional machine learning algorithms assume that
both the source and target domain data are drawn i.i.d. from the same distribution. Figure 1 shows
two such source distributions, along with their hypotheses obtained based on traditional machine
learning methodologies and a target data distribution. It is evident that the hypotheses learned by
the two source distributions D1 and D2 would perform poorly on the target domain data.
One effective approach under such situations is domain adaptation, which enables transfer of knowledge between the source and target domains with dissimilar distributions [1]. It has been applied
successfully in various applications including text classification (parts of speech tagging, webpage
tagging, etc) [2], video concept detection across different TV channels [3], sentiment analysis (identifying positive and negative reviews across domains) [4] and WiFi Localization (locating device
location depending upon the signal strengths from various access points) [5].
* Authors contributed equally.
[Figure 1 graphic: scatter plots of (a) source domain D1, (b) source domain D2, and (c) the target domain, with the hypotheses learned on each source.]
Figure 1: Two source domains D1 and D2 and target domain data with different marginal and
conditional probability differences, along with conflicting conditional probabilities (the red squares
and blue triangles refer to the positive and negative classes).
Many existing methods re-weight source domain data in order to minimize the marginal probability
differences between the source and target domains and learn a hypothesis on the re-weighted source
data [6, 7, 8, 9]. However they assume that the distributions differ only in marginal probabilities but
the conditional probabilities remain the same. There are other methods that learn model parameters
to reduce marginal probability differences [10, 11]. Similarly, several algorithms have been developed in the past to combine knowledge from multiple sources [12, 13, 14]. Most of these methods
measure the distribution difference between each source and target domain data, independently,
based on marginal or conditional probability differences and combine the hypotheses generated by
each of them on the basis of the respective similarity factors. However the example in Figure 1
demonstrates the importance of considering both marginal and conditional probability differences
in multi-source domain adaptation.
In this paper we propose a two-stage multi-source domain adaptation framework which computes
weights for the data samples from multiple sources to reduce both marginal and conditional probability differences between the source and target domains. In the first stage, we compute weights
of the source domain data samples to reduce the marginal probability differences, using Maximum
Mean Discrepancy (MMD) [15, 6] as the measure. The second stage computes the weights of
multiple sources to reduce the conditional probability differences; the computation is based on the
smoothness assumption on the conditional probability distribution of the target domain data [16].
Finally, a target classifier is learned on the re-weighted source domain data. A novel feature of our
weighting methodologies is that no labeled data is needed from the target domain, thus widening the
scope of their applicability. The proposed framework is readily extendable to the case where a few
labeled data may be available from the target domain.
In addition, we present a detailed theoretical analysis on the generalization performance of our
proposed framework. The error bound of the proposed target classifier is based on the weighted
Rademacher complexity measure of a class of functions or hypotheses, defined over a weighted
sample space [17, 18]. The Rademacher complexity measures the ability of a class of functions to
fit noise. The empirical Rademacher complexity is data-dependent and can be measured from finite
samples. It can lead to tighter bounds than those based on other complexity measures such as the
VC-dimension. Theoretical analysis of domain adaptation has been studied in [19, 20]. In [19], the
authors provided the generalization bound based on the VC dimension for both single-source and
multi-source domain adaptation. The results were extended in [20] to a broader range of prediction
problems based on the Rademacher complexity; however only the single-source case was analyzed
in [20]. We extend the analysis in [19, 20] to provide the generalization bound for our proposed
two-stage framework based on the weighted Rademacher complexity; our generalization bound is
tighter than the previous ones in the multi-source case. Our theoretical analysis also reveals the
key properties of our generalization bound in terms of a differential weight ? between the weighted
source and target samples.
We have performed extensive experiments using three real-world datasets including 20 Newsgroups,
Sentiment Analysis data and one dataset of multi-dimensional feature vectors extracted from Surface
Electromyogram (SEMG) signals from eight subjects. SEMG signals are recorded using surface
electrodes, from the muscle of a subject, during a submaximal repetitive gripping activity, to detect
stages of fatigue. Our empirical results demonstrate superior performance of the proposed approach
over the existing state-of-the-art domain adaptation methods; our results also reveal the effect of the
differential weight $\gamma$ on the target classifier performance.
2 Proposed Approach
We consider the following multi-source domain adaptation setting. There are k auxiliary source domains. Each source domain is associated with a sample set $D_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, $s = 1, 2, \dots, k$, where $x_i^s$ is the i-th feature vector, $y_i^s$ is the corresponding class label, $n_s$ is the sample size of the s-th source domain, and k is the total number of source domains. The target domain consists of plenty of unlabeled data $D_u^T = \{x_i^T\}_{i=1}^{n_u}$ and optionally a few labeled data $D_l^T = \{(x_i^T, y_i^T)\}_{i=1}^{n_l}$. Here $n_u$ and $n_l$ are the numbers of unlabeled and labeled data, respectively. Denote $D^T = D_l^T \cup D_u^T$ and $n_T = n_l + n_u$. The goal is to build a classifier for the target domain data using the source domain data and a few labeled target domain data, if available.
The proposed approach consists of two stages. In the first stage, we compute the weights of source
domain data based on the marginal probability difference; in the second stage, we compute the
weights of source domains based on the conditional probability difference. A target domain classifier
is learned on these re-weighted data.
2.1 Re-weighting data samples based on marginal probability differences
The difference between the means of two distributions after mapping onto a reproducing kernel
Hilbert space, called Maximum Mean Discrepancy, has been shown to be an effective measure of
the differences in their marginal probability distributions [15]. We use this measure to compute the
weights $\alpha_i^s$ of the s-th source domain data by solving the following optimization problem [6]:
$$\min_{\alpha^s} \; \left\| \frac{1}{n_s}\sum_{i=1}^{n_s} \alpha_i^s\, \Phi(x_i^s) - \frac{1}{n_T}\sum_{i=1}^{n_T} \Phi(x_i^T) \right\|_{\mathcal{H}}^2 \quad \text{s.t. } \alpha_i^s \ge 0, \quad (1)$$
where $\Phi(x)$ is a feature map onto a reproducing kernel Hilbert space $\mathcal{H}$ [21], $n_s$ is the number of samples in the s-th source domain, $n_T$ is the number of samples in the target domain, and $\alpha^s$ is the $n_s$-dimensional weight vector. The minimization problem is a standard quadratic problem and can be solved by applying many existing solvers.
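A small sketch of solving (1) in its kernelized form is given below: expanding the squared RKHS norm leaves a quadratic in $\alpha$ built from kernel matrices. The RBF kernel and its bandwidth are arbitrary illustrative choices, and a general-purpose bounded optimizer stands in for a dedicated QP solver.

import numpy as np
from scipy.optimize import minimize

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_weights(Xs, Xt, sigma=1.0):
    # Stage 1: per-sample weights alpha >= 0 minimizing the kernelized MMD
    # between the reweighted source and the target, as in problem (1).
    ns, nt = len(Xs), len(Xt)
    Kss, Kst = rbf(Xs, Xs, sigma), rbf(Xs, Xt, sigma)
    kappa = Kst.sum(axis=1)
    obj = lambda a: a @ Kss @ a / ns ** 2 - 2.0 * (a @ kappa) / (ns * nt)
    res = minimize(obj, np.ones(ns), bounds=[(0.0, None)] * ns)
    return res.x

# Toy usage: the source is centered at 0, the target is shifted.
rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, size=(50, 2))
Xt = rng.normal(0.5, 1.0, size=(60, 2))
alpha = mmd_weights(Xs, Xt)   # larger weights for source points near the target mass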
2.2 Re-weighting Sources based on Conditional probability differences
In the second stage the proposed framework modulates the $\alpha^s$ weights of a source domain s, obtained on the basis of marginal probability differences in the first stage, with another weighting factor given by $\beta^s$. The weight $\beta^s$ reflects the similarity of a particular source domain s to the target domain with respect to conditional probability distributions.
Next, we show how to estimate the weights $\beta^s$. For each of the k source domains, a hypothesis $h^s: X \to Y$ is learned on the $\alpha^s$ re-weighted source data samples. This ensures that the hypothesis is learned on source data samples with similar marginal probability distributions. These k source domain hypotheses are used to predict the unlabeled target domain data $D_u^T = \{x_i^T\}_{i=1}^{n_u}$. Let $H_i^S = [h_i^1 \cdots h_i^k]$ be the $1 \times k$ vector of predicted labels of the k source domain hypotheses for the i-th sample of target domain data. Let $\beta = [\beta^1 \cdots \beta^k]'$ be the $k \times 1$ weight vector, where $\beta^s$ is the weight corresponding to the s-th source hypothesis. The estimation of the weight for each source domain hypothesis $h^s$ is based on the smoothness assumption on the conditional probability distribution of the target domain data [16]; specifically we aim to find the optimal weights by minimizing the difference in predicted labels between two nearby points in the target domain as follows.
$$ \min_{\beta:\; \beta^{\top} e = 1,\; \beta \ge 0} \; \sum_{i,j=1}^{n_u} \bigl( H_i^S \beta - H_j^S \beta \bigr)^2 \, W_{ij} \qquad (2) $$
where H^S is an n_u × k matrix whose i-th row is H_i^S as defined above, H_i^S β and H_j^S β
are the predicted labels for the i-th and j-th samples of target domain data obtained by following
a β-weighted ensemble methodology over all k sources, and W_ij is the similarity between the two
target domain data samples. We can rewrite the minimization problem as follows:
$$ \min_{\beta:\; \beta^{\top} e = 1,\; \beta \ge 0} \; \beta^{\top} (H^S)^{\top} L_u H^S \beta \qquad (3) $$
where L_u is the graph Laplacian associated with the target domain data D_u^T, given by L_u = D − W,
where W is the similarity matrix defining edge weights between the data samples in D_u^T, and D
is the diagonal matrix given by D_ii = Σ_{j=1}^{n_u} W_ij. The minimization problem in (3) is a standard
quadratic problem (QP) and can be solved efficiently by applying many existing solvers.
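A minimal sketch of this stage, under the assumption that projected gradient descent onto the probability simplex is an acceptable stand-in for a QP solver; the helper project_simplex, the step size, and the iteration count are illustrative choices, not part of the original method.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {b : b >= 0, sum(b) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def source_weights(H, W, steps=1000, lr=0.01):
    """H: n_u x k matrix of source-hypothesis predictions on the unlabeled
    target points; W: n_u x n_u similarity matrix. Approximately solves (3)."""
    D = np.diag(W.sum(axis=1))
    L = D - W                          # graph Laplacian L_u = D - W
    M = H.T @ L @ H                    # k x k; objective is beta' M beta
    beta = np.full(H.shape[1], 1.0 / H.shape[1])
    for _ in range(steps):
        beta = project_simplex(beta - lr * (M + M.T) @ beta)
    return beta
```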
To illustrate the proposed two-stage framework, we demonstrate the effect of re-weighting data
samples in source domains D1 and D2 of the toy dataset (shown in Figure 1), based on the computed
weights, in the supplemental material.
2.3 Learning the Target Classifier
The target classifier is learned based on the re-weighted source data and a few labeled target domain
data (if available). We also incorporate an additional weighting factor γ to provide a differential
weight to the source domain data with respect to the labeled target domain data. Mathematically,
the target classifier ĥ is learnt by solving the following optimization problem:
$$ \hat{h} = \operatorname*{argmin}_{h} \; \gamma \sum_{s=1}^{k} \frac{\beta^s}{n_s} \sum_{i=1}^{n_s} \alpha_i^s \, L\bigl(h(x_i^s),\, y_i^s\bigr) \;+\; \frac{1}{n_l} \sum_{j=1}^{n_l} L\bigl(h(x_j^T),\, y_j^T\bigr) \qquad (4) $$
where nl is the number of labeled data from the target domain.
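One hedged way to realize the weighted objective in (4) is to pass per-sample weights to any learner that supports them; the sketch below uses scikit-learn's logistic loss as a stand-in for the unspecified loss L, so it approximates the framework rather than reproducing the authors' exact classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_target_classifier(sources, target_l, alpha, beta, gamma):
    """sources: list of (Xs, ys) per source domain; target_l: (Xt_l, yt_l);
    alpha[s]: per-sample weights of source s; beta[s]: source weight;
    gamma: differential weight. Weighted ERM in the spirit of (4)."""
    Xs = np.vstack([X for X, _ in sources])
    ys = np.concatenate([y for _, y in sources])
    ws = np.concatenate([gamma * beta[s] * alpha[s] / len(y)
                         for s, (_, y) in enumerate(sources)])
    Xt, yt = target_l
    wt = np.full(len(yt), 1.0 / len(yt))     # 1/n_l weight on target labels
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.concatenate([ws, wt])
    return LogisticRegression().fit(X, y, sample_weight=w)
```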
We refer to the proposed framework as 2-Stage Weighting framework for Multi-Source Domain
Adaptation (2SW-MDA). Algorithm 1 below summarizes the main steps involved in 2SW-MDA.
Algorithm 1 2SW-MDA
1: for s = 1, . . . , k do
2:   Compute α^s by solving (1)
3:   Learn a hypothesis h_s on the α^s-weighted source data
4: end for
5: Form the n_u × k prediction matrix H^S as in Section 2.2
6: Compute matrices W, D and L_u using the unlabeled target data D_u^T
7: Compute β^s by solving (3)
8: Learn the target classifier ĥ by solving (4)
3 Theoretical Analysis
For convenience of presentation, we rewrite the empirical joint error function on the (α, β)-weighted
source domains and the target domain defined in (4) as follows:

$$ \hat{E}^S_{\alpha,\beta}(h) \;=\; \gamma\,\hat{\varepsilon}_{\alpha,\beta}(h) + \hat{\varepsilon}_T(h) \;=\; \gamma \sum_{s=1}^{k} \frac{\beta^s}{n_s} \sum_{i=1}^{n_s} \alpha_i^s \, L\bigl(h(x_i^s), f_s(x_i^s)\bigr) \;+\; \frac{1}{n_l} \sum_{i=1}^{n_l} L\bigl(h(x_i^0), f_0(x_i^0)\bigr) \qquad (5) $$

where y_i^s = f_s(x_i^s) and f_s is the labeling function for source s, γ > 0, the (x_i^0) are samples from the
target, y_i^t = f_0(x_i^0) and f_0 is the labeling function for the target domain, and S = (x_i^s) includes all
samples from the target and source domains. The true (α, β)-weighted error ε_{α,β}(h) on the weighted
source domain samples is defined analogously. Similarly, we define E^S_{α,β}(h) as the true joint error
function. For notational simplicity, denote n_0 = n_l as the number of labeled samples from the target,
m = Σ_{s=0}^{k} n_s as the total number of samples from both source and target, and λ_i^s = γ β^s α_i^s / n_s for
s ≥ 1 and λ_i^s = 1/n_l for s = 0. Then we can re-write the empirical joint error function in (5) as:

$$ \hat{E}^S_{\alpha,\beta}(h) = \sum_{s=0}^{k} \sum_{i=1}^{n_s} \lambda_i^s \, L\bigl(h(x_i^s), f_s(x_i^s)\bigr). $$

Next, we bound the difference between the true joint error function E^S_{α,β}(h) and its empirical
estimate Ê^S_{α,β}(h) using the weighted Rademacher complexity measure [17, 18], defined as follows:
Definition 1 (Weighted Rademacher Complexity). Let H be a set of real-valued functions defined
over a set X. Given a sample S ∈ X^m, the empirical weighted Rademacher complexity of H is
defined as follows:

$$ \hat{\mathfrak{R}}_S(H) = \mathbb{E}_{\sigma} \Bigl[ \sup_{h \in H} \Bigl| \sum_{s=0}^{k} \sum_{i=1}^{n_s} \lambda_i^s \, \sigma_i^s \, h(x_i^s) \Bigr| \;\Bigm|\; S = (x_i^s) \Bigr], $$

where the expectation is taken over σ = {σ_i^s}, independent uniform random variables taking values
in {−1, +1}. The weighted Rademacher complexity of a hypothesis set H is defined as the expectation
of R̂_S(H) over all samples of size m:

$$ \mathfrak{R}_m(H) = \mathbb{E}_S \bigl[ \hat{\mathfrak{R}}_S(H) \bigm| |S| = m \bigr]. $$
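For intuition, the empirical weighted Rademacher complexity of a finite hypothesis set can be estimated by Monte-Carlo sampling of the sign variables; the following sketch is illustrative only, with hypothetical inputs, and is not part of the proposed method.

```python
import numpy as np

def weighted_rademacher(preds, lam, trials=10000, seed=0):
    """Monte-Carlo estimate of the empirical weighted Rademacher complexity.
    preds: |H| x m matrix of hypothesis outputs h(x_i) in {-1,+1} on the
    sample S; lam: length-m vector of the (flattened) weights lambda_i^s."""
    rng = np.random.default_rng(seed)
    m = preds.shape[1]
    total = 0.0
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=m)   # Rademacher signs
        total += np.abs(preds @ (lam * sigma)).max()  # sup over the finite H
    return total / trials
```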
Our main result is summarized in the following lemma, which involves the estimation of the
Rademacher complexity of the following class of functions:
G = { x ↦ L(h′(x), h(x)) : h, h′ ∈ H }.
Lemma 1. Let H be a family of functions taking values in {−1, +1}. Then, for any δ > 0, with
probability at least 1 − δ, the following holds for h ∈ H:

$$ E^S_{\alpha,\beta}(h) - \hat{E}^S_{\alpha,\beta}(h) \;\le\; \hat{\mathfrak{R}}_S(H) + \sqrt{ \frac{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 \, \log(2/\delta) }{2} } . $$

Furthermore, if H has a VC dimension of d, then the following holds with probability at least 1 − δ:

$$ E^S_{\alpha,\beta}(h) - \hat{E}^S_{\alpha,\beta}(h) \;\le\; \sqrt{ \frac{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 \, \log(2/\delta) }{2} } \;+\; \sqrt{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 } \, \Bigl( \sqrt{ 2d \log\frac{em}{d} } + 1 \Bigr), $$

where e is the base of the natural logarithm.
The proof is provided in Section A of the supplemental material.
3.1 Error bound on target domain data
In the previous section we presented an upper bound on the difference between the true joint error
function and its empirical estimate and established its relation to the weighting factors λ_i^s. Next we
present our main theoretical result, i.e., an upper bound of the error function on target domain data,
i.e., an upper bound of ε_T(ĥ). We need the following definition of divergence for our main result:
Definition 2. For a hypothesis space H, the symmetric difference hypothesis space H∆H is the set
of hypotheses
g ∈ H∆H ⟺ g(x) = h(x) ⊕ h′(x) for some h, h′ ∈ H,
where ⊕ is the XOR function. In other words, every hypothesis g ∈ H∆H represents the set of
disagreements between two hypotheses in H.
The H∆H-divergence between any two distributions D_S and D_T is defined as

$$ d_{H\Delta H}(D_S, D_T) = 2 \sup_{h, h' \in H} \bigl| \Pr_{x \sim D_S}[h(x) \ne h'(x)] - \Pr_{x \sim D_T}[h(x) \ne h'(x)] \bigr| . $$
Theorem 1. Let ĥ ∈ H be an empirical minimizer of the joint error function on the similarity-weighted
source domains and the target domain,

$$ \hat{h} = \arg\min_{h \in H} \hat{E}_{\alpha,\beta}(h) = \gamma\,\hat{\varepsilon}_{\alpha,\beta}(h) + \hat{\varepsilon}_T(h) $$

for fixed weights α, β, and γ, and let h_T^* = argmin_{h ∈ H} ε_T(h) be a target error minimizer. Then for any
δ ∈ (0, 1), the following holds with probability at least 1 − δ:

$$ \varepsilon_T(\hat{h}) \le \varepsilon_T(h_T^*) + \frac{2}{1+\gamma} \sqrt{ \frac{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 \, \log(2/\delta) }{2} } + \frac{2\,\mathfrak{R}_S(H)}{1+\gamma} + \frac{\gamma}{1+\gamma} \bigl( 2\lambda_{\alpha,\beta} + d_{H\Delta H}(D_{\alpha,\beta}, D_T) \bigr) ; \qquad (6) $$

if H has a VC dimension of d, then the following holds with probability at least 1 − δ:

$$ \varepsilon_T(\hat{h}) \le \varepsilon_T(h_T^*) + \frac{2}{1+\gamma} \Biggl( \sqrt{ \frac{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 \, \log(2/\delta) }{2} } + \sqrt{ \sum_{s=0}^{k} \sum_{i=1}^{n_s} (\lambda_i^s)^2 } \, \sqrt{ 2d \log\frac{em}{d} + 1 } \Biggr) + \frac{\gamma}{1+\gamma} \bigl( 2\lambda_{\alpha,\beta} + d_{H\Delta H}(D_{\alpha,\beta}, D_T) \bigr), \qquad (7) $$

where λ_{α,β} = min_{h ∈ H} { ε_T(h) + ε_{α,β}(h) }, and d_{H∆H}(D_{α,β}, D_T) is the H∆H-divergence between the (α, β)-weighted source distribution and the target distribution.
The proof as well as a comparison with the result in [19] is provided in the supplemental material.
We observe that γ and the divergence between the weighted source and target data play significant
roles in the generalization bound. Our proposed two-stage weighting scheme aims to reduce the
divergence. Next, we analyze the effect of γ. When γ = 0, the bound reduces to the generalization
bound using only the n_l training samples in the target domain. As γ increases, the effect of the source
domain data increases. Specifically, when γ is larger than a certain value, for the bound in (7), as γ
increases, the second term decreases, while the last term, capturing the divergence, increases. In
the extreme case γ → ∞, the second term in (7) can be shown to be the generalization bound
using the weighted samples in the source domain only (the target data will not be effective in this
case), and the last term equals 2λ_{α,β} + d_{H∆H}(D_{α,β}, D_T). Thus, effective transfer is possible in
this case only if the divergence is small. We also observed in our experiments that the target domain
error of the learned joint hypothesis follows a bell-shaped curve as a function of γ; it has a different optimal point for
each dataset under certain similarity and divergence measures.
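This bell-shaped behavior suggests selecting γ on a held-out labeled target set. A hedged sketch follows, reusing the hypothetical train_target_classifier helper from the earlier sketch; both the helper and the grid are our own illustrative choices.

```python
import numpy as np

def select_gamma(sources, target_l, target_val, alpha, beta,
                 grid=(0.0, 0.001, 0.01, 0.1, 0.3, 0.5, 1.0, 100.0, 1000.0)):
    """Sweep the differential weight gamma and keep the value with the best
    accuracy on a held-out labeled target set target_val = (Xv, yv)."""
    Xv, yv = target_val
    best = (None, -np.inf)
    for gamma in grid:
        clf = train_target_classifier(sources, target_l, alpha, beta, gamma)
        acc = (clf.predict(Xv) == yv).mean()
        if acc > best[1]:
            best = (gamma, acc)
    return best   # (best gamma, its validation accuracy)
```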
4 Empirical Evaluations
Datasets.
We evaluate the proposed 2SW-MDA method on three real-world datasets and the
toy data shown in Figure 1. The toy dataset is generated using a mixture of Gaussian distributions. It has two classes and three domains, as shown in Figure 1. The two source domains D1
and D2 were created to have both conditional and marginal probability differences with the target
domain data so as to provide an ideal testbed for the proposed domain adaptation methodology.
The three real-world datasets used are 20 Newsgroups¹, Sentiment Analysis², and another dataset
of multi-dimensional feature vectors extracted from SEMG (surface electromyogram) signals. The
20 Newsgroups dataset is a collection of approximately 20,000 newsgroup documents, partitioned
(nearly) evenly across 20 different categories. We represented each document as a binary vector
of the 100 most discriminating words determined by Weka's info-gain filter [22]. Out of the 20
categories, we used 13 categories to form the source and target domains. For each of these categories the negative class was formed by a random mixture of the rest of the categories, as
suggested in [23]. The details of the 13 categories used can be found in the supplemental material.
The Sentiment Analysis dataset contains positive and negative reviews on four categories (or domains): kitchen, book, dvd, and electronics. We processed the Sentiment Analysis dataset
to reduce the feature dimension to 200, using a cutoff document frequency of 50.
The SEMG dataset consists of 12-dimensional time- and frequency-domain features derived from surface
electromyogram (SEMG) signals. SEMG signals are biosignals recorded from the muscle
of a subject using surface electrodes to study the musculoskeletal activities of the subject under test.
The SEMG signals used in our experiments were recorded from the extensor carpi radialis muscle during a
submaximal repetitive gripping activity, to study different stages of fatigue. Data is collected from
8 subjects; each subject's data forms a domain. There are 4 classes defining various stages of fatigue.
Data from a target subject is classified using the data from the remaining 7 subjects, which form the
multiple source domains.
Competing Methods. To evaluate the effectiveness of our approach, we compare 2SW-MDA with a
baseline method SVM-C as well as with five state-of-the-art domain adaptation methods. In SVM-C,
the training data comprises data from all source domains (12 for the 20 Newsgroups data) and the test
data is from the remaining one domain, as indicated in the first column of the results in Table 1. The
recently proposed multi-source domain adaptation methods used for comparison include Locally
Weighted Ensemble (LWE) [14] and Domain Adaptation Machine (DAM) [13]. To evaluate the
effectiveness of multi-source domain adaptation, we also compared with three other state-of-the-art
single-source domain adaptation methods, including Kernel Mean Matching (KMM) [6], Transfer
Component Analysis (TCA) [11] and Kernel Ensemble (KE) [24].

¹ Available at http://www.ai.mit.edu/~jrennie/20Newsgroups/
² Available at http://www.cs.jhu.edu/~mdredze/
Experimental Setup. Recall that one of the appealing features of the proposed method is that
it requires very few or no labeled target domain data. In our experiments, we used only 1 labeled
sample per class from the target domain. The results of the proposed 2SW-MDA method are based
on γ = 1 (see Figure 2 for results on varying γ). Each experiment was repeated 10 times with
random selections of the labeled data. For each experiment, the category shown in the first column of
Table 1 was used as the target domain and the rest of the categories as the source domains. Different
instances of the 20 Newsgroups categories are different random samples of 100 data samples selected from the total 500 data samples in the dataset. Different instances of the SEMG dataset correspond to
different subjects used as the target. Details about the parameter settings are included
in the supplemental material.
Dataset                      SVM-C    LWE      KE       KMM      TCA      DAM      2SW-MDA
talk.politics.mideast        46.00%   50.66%   49.01%   45.78%   58.66%   52.03%   73.49%
                             49.33%   49.39%   53.48%   39.75%   56.00%   52.00%   65.06%
                             49.33%   50.27%   54.67%   43.37%   52.04%   51.81%   62.65%
talk.politics.misc           48.83%   53.62%   46.77%   62.32%   55.90%   53.22%   63.67%
                             48.22%   51.12%   48.39%   59.42%   53.23%   54.12%   60.87%
                             48.31%   50.72%   55.01%   59.07%   54.83%   54.12%   68.12%
comp.sys.ibm.pc.hardware     48.42%   51.25%   49.50%   50.56%   61.25%   52.50%   62.92%
                             47.44%   51.44%   49.44%   59.55%   57.50%   52.50%   60.67%
                             45.93%   49.88%   48.00%   58.43%   59.75%   57.80%   64.04%
rec.sport.baseball           56.25%   61.51%   47.50%   61.79%   61.75%   61.25%   79.78%
                             58.75%   50.09%   51.25%   64.04%   57.75%   53.75%   60.22%
                             56.35%   59.26%   56.25%   58.43%   57.83%   55.05%   61.24%
kitchen                      35.55%   40.12%   49.38%   64.04%   64.10%   58.61%   70.55%
electronics                  35.95%   42.66%   48.38%   65.55%   54.20%   52.61%   59.44%
book                         37.77%   40.12%   49.38%   58.88%   55.01%   54.10%   59.47%
dvd                          36.01%   49.44%   48.77%   50.00%   50.00%   50.61%   51.11%
SEMG - 8 subjects            70.76%   67.44%   63.55%   64.94%   66.35%   74.83%   83.03%
                             43.69%   77.54%   74.62%   63.63%   59.94%   81.36%   87.96%
                             50.11%   75.55%   62.50%   64.06%   56.78%   74.77%   88.96%
                             59.65%   81.22%   69.35%   52.68%   73.38%   80.63%   88.49%
                             40.37%   52.48%   65.61%   49.77%   57.48%   76.74%   86.14%
                             59.21%   65.77%   83.92%   70.62%   76.92%   59.21%   87.10%
                             47.13%   60.32%   77.97%   51.13%   55.64%   74.27%   87.08%
                             69.85%   72.81%   79.48%   67.24%   42.79%   84.55%   93.01%
Toy data                     60.05%   75.63%   81.40%   68.01%   64.97%   84.27%   98.54%

Table 1: Comparison of different methods on three real-world and one toy datasets in terms of
classification accuracies (%).
Comparative Studies. Table 1 shows the classification accuracies of the different methods on the real-world and the toy datasets. We observe that SVM-C performs poorly in all cases. This may be
attributed to the distribution difference among the multiple source and target domains. We observe
that the 20 Newsgroups and Sentiment Analysis datasets have predominantly marginal probability differences; in other words, the frequency of a particular word varies from one category of documents
to another. In contrast, physiological signals such as SEMG differ predominantly in conditional probability distributions, due to the high subject-based variability in the power spectrum
of these signals and their variations as fatigue sets in [25, 26]. We also observe that the proposed
2SW-MDA method outperforms the other domain adaptation methods and achieves higher classification
accuracies in most cases, especially for the SEMG dataset. The accuracies of an SVM classifier on
the toy dataset, when learned only on the source domains D1 and D2 individually and on the combined
source domains, are 64.08%, 71.84% and 60.05% respectively, while 2SW-MDA achieves an
accuracy of 98.54%. More results are provided in the supplemental material.
It is interesting to note that the instance re-weighting method KMM and the feature-mapping-based method
TCA, which address marginal probability differences between the source and target domains, perform better than LWE and KE for both the 20 Newsgroups and Sentiment Analysis data. They also
perform better than DAM, a multi-source domain adaptation method based on marginal-probability-weighted
hypothesis combination. It is worthwhile to note that LWE is based on conditional
probability differences and KE tries to address both differences. Thus, it is not surprising that LWE
and KE perform better than KMM and TCA for the SEMG dataset, which is predominantly different in conditional probability distributions. DAM too performs better for SEMG signals. However,
the proposed 2SW-MDA method, which addresses both marginal and conditional probability differences, outperforms all the other methods in most cases. Our experiments verify the effectiveness of
the proposed two-stage framework.
Parameter Sensitivity Studies. In this experiment, we study the effect of γ on the classification performance. Figure 2 shows the variation in classification accuracies for some cases presented
in Table 1, with γ varying over the range {0, 0.001, 0.01, 0.1, 0.3, 0.5, 1, 100, 1000}. The x-axis
of the figure is in logarithmic scale. The results for the toy data are included in the supplemental
material. We can observe from the figure that in most cases the accuracy values increase as γ grows
from 0 to an optimal value and decrease when γ further increases. When γ = 0 the target classifier
is learned only on the few labeled data from the target domain. As γ increases, the transfer of
knowledge due to the presence of additional weighted source data has a positive impact, leading to an
increase in classification accuracies in the target domain. We also observe that after a certain value
of γ the classifier accuracies drop, due to the distribution differences between the source and target
domains. These experimental results are consistent with the theoretical results established in this paper.

[Figure 2: Performance of the proposed 2SW-MDA method on the 20 Newsgroups dataset and Sentiment Analysis dataset with varying γ. Y-axis: accuracy (%); x-axis: log γ; curves: talk.politics.misc, comp.sys.ibm.pc.hardware, talk.politics.mideast, dvd, book, electronics.]
5 Conclusion
Domain adaptation is an important problem that arises in a variety of modern applications where
limited or no labeled data is available for a target application. We presented here a novel multi-source domain adaptation framework. The proposed framework computes the weights for the source
domain data using a two-stage procedure, in order to reduce both marginal and conditional probability
distribution differences between the source and target domains. We also presented a theoretical error
bound on the target classifier learned on re-weighted data samples from multiple sources. Empirical
comparisons with existing state-of-the-art domain adaptation methods demonstrate the effectiveness
of the proposed approach. As part of future work, we plan to extend the proposed multi-source
framework to applications involving other types of physiological signals, for developing generalized
models across subjects for emotion and health monitoring [27, 28]. We would also like to extend
our framework to video- and speech-based applications, which are commonly affected by distribution
differences [3].
Acknowledgements
This research is sponsored by NSF IIS-0953662, CCF-1025177, and ONR N00014-11-1-0108.
References
[1] S.J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2009.
[2] H. Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
[3] L. Duan, I.W. Tsang, D. Xu, and S.J. Maybank. Domain transfer svm for video concept detection. In
CVPR, 2009.
[4] J. Blitzer, M. Dredze, and F. Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, 2007.
[5] S.J. Pan, J.T. Kwok, and Q. Yang. Transfer learning via dimensionality reduction. In AAAI 08.
[6] J. Huang, A.J. Smola, A. Gretton, K.M. Borgwardt, and B. Scholkopf. Correcting sample selection bias
by unlabeled data. In NIPS, volume 19, page 601, 2007.
[7] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood
function. In JSPI, 2000.
[8] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning under covariate shift. In JMLR, 2009.
[9] C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighting. In NIPS, 2010.
[10] M. Sugiyama, S. Nakajima, H. Kashima, P.V. Buenau, and M. Kawanabe. Direct importance estimation
with model selection and its application to covariate shift adaptation. In NIPS, 2008.
[11] S.J. Pan, I.W. Tsang, J.T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. In
IJCAI, 2009.
[12] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS, 2009.
[13] L. Duan, I.W. Tsang, D. Xu, and T. Chua. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, pages 289–296, 2009.
[14] J. Gao, W. Fan, J. Jiang, and J. Han. Knowledge transfer via multiple model local structure mapping. In KDD, pages 283–291, 2008.
[15] K.M. Borgwardt, A. Gretton, M.J. Rasch, H.P. Kriegel, B. Scholkopf, and A.J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. In Bioinformatics, volume 22, pages 49–57, 2006.
[16] R. Chattopadhyay, J. Ye, S. Panchanathan, W. Fan, and I. Davidson. Multi-source domain adaptation and
its application to early detection of fatigue. In KDD, 2011.
[17] P.L. Bartlett and S. Mendelson. Rademacher and gaussian complexities: Risk bounds and structural
results. JMLR, 3:463?482, 2002.
[18] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902?1914, 2001.
[19] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J.W. Vaughan. A theory of learning
from different domains. Journal of Mach Learn, 79:151?175, 2010.
[20] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms.
Computing Research Repository, abs/0902.3430, 2009.
[21] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. In JMLR,
volume 2, page 93, 2002.
[22] I.H. Witten and E. Frank. In Data Mining: Practical Machine Learning Tools with Java Implementations,
San Francisco, CA, 2000. Morgan Kaufmann.
[23] E. Eaton and M. desJardins. Set-based boosting for instance-level transfer. In IEEE International Conference on Data Mining Workshops, 2009.
[24] E. Zhong, W. Fan, J. Peng, K. Zhang, J. Ren, D. Turaga, and O. Verscheure. Cross domain distribution
adaptation via kernel mapping. In KDD, Paris, France, 2009. ACM.
[25] P. Contessa, A. Adam, and C.J. De Luca. Motor unit control and force fluctuation during fatigue. Journal
of Applied Physiology, April 2009.
[26] B. Gerdle, B. Larsson, and S. Karlsson. Criterion validation of surface EMG variables as fatigue indicators using peak torque: a study of repetitive maximum isokinetic knee extensions. Journal of Electromyography and Kinesiology, 10(4):225–232, August 2000.
[27] E. Leon, G. Clarke, V. Callaghan, and F. Sepulveda. A user independent real time emotion recognition
system for software agents in domestic environment. In Engineering Application of Artificial Intelligence,
April 2007.
[28] J. Kim and E. Andre. Emotion recognition based on physiological changes in music listening. In Pattern
Analysis and Machine Intelligence, December 2008.
[29] C. McDiarmid. On the method of bounded differences, volume 5. Cambridge University Press, Cambridge, 1989.
[30] S. Kakade and A. Tewari. Lecture notes of CMSC 35900: Learning theory, Toyota Technological Institute
at Chicago. Spring 2008.
[31] P. Massart. Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse, IX(2):245–303, 2000.
Sparse Features for PCA-Like Linear Regression
Petros Drineas
Computer Science Department
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Christos Boutsidis
Mathematical Sciences Department
IBM T. J. Watson Research Center
Yorktown Heights, New York
[email protected]
Malik Magdon-Ismail
Computer Science Department
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Abstract
Principal Components Analysis (PCA) is often used as a feature extraction procedure. Given a matrix X ∈ ℝ^{n×d}, whose rows represent n data points with respect
to d features, the top k right singular vectors of X (the so-called eigenfeatures)
are arbitrary linear combinations of all available features. The eigenfeatures are
very useful in data analysis, including the regularization of linear regression. Enforcing sparsity on the eigenfeatures, i.e., forcing them to be linear combinations
of only a small number of actual features (as opposed to all available features), can
promote better generalization error and improve the interpretability of the eigenfeatures. We present deterministic and randomized algorithms that construct such
sparse eigenfeatures while provably achieving in-sample performance comparable
to regularized linear regression. Our algorithms are relatively simple and practically efficient, and we demonstrate their performance on several data sets.
1 Introduction
Least-squares analysis was introduced by Gauss in 1795 and has since bloomed into a staple of
the data analyst. Assume the usual setting with n tuples (x_1, y_1), . . . , (x_n, y_n), where the x_i ∈ ℝ^d are
points and the y_i ∈ ℝ are targets. The vector of regression weights w^* ∈ ℝ^d minimizes (over all w ∈ ℝ^d)
the RMS in-sample error

$$ \mathcal{E}(w) = \sqrt{ \textstyle\sum_{i=1}^{n} (x_i \cdot w - y_i)^2 } = \| Xw - y \|_2 . $$
In the above, X ∈ ℝ^{n×d} is the data matrix whose rows are the vectors x_i (i.e., X_ij = x_i[j]); and
y ∈ ℝ^n is the target vector (i.e., y[i] = y_i). We will use the more convenient matrix formulation¹:
namely, given X and y, we seek a vector w^* that minimizes ‖Xw − y‖₂. The minimal-norm vector
w^* can be computed via the Moore–Penrose pseudo-inverse of X: w^* = X⁺y. Then, the optimal
in-sample error is equal to:
E(w^*) = ‖y − XX⁺y‖₂.
¹ For the sake of simplicity, we assume d ≤ n and rank(X) = d in our exposition; neither assumption is
necessary.
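In code, the minimal-norm solution and its in-sample error are one-liners; a small numpy sketch with synthetic data (the dimensions and data are arbitrary):

```python
import numpy as np

n, d = 100, 10
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w_star = np.linalg.pinv(X) @ y          # minimal-norm least-squares solution
err = np.linalg.norm(X @ w_star - y)    # E(w*) = ||y - X X^+ y||_2
```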
When the data is noisy and X is ill-conditioned, X+ becomes unstable to small perturbations and
overfitting can become a serious problem. Practitioners deal with such situations by regularizing
the regression. Popular regularization methods include, for example, the Lasso [28], Tikhonov
regularization [17], and top-k PCA regression or truncated SVD regularization [21]. In general,
such methods are encouraging some form of parsimony, thereby reducing the number of effective
degrees of freedom available to fit the data. Our focus is on top-k PCA regression which can be
viewed as regression onto the top-k principal components, or, equivalently, the top-k eigenfeatures.
The eigenfeatures are the top-k right singular vectors of X and are arbitrary linear combinations
of all available input features. The question we tackle is whether one can efficiently extract sparse
eigenfeatures (i.e., eigenfeatures that are linear combinations of only a small number of the available
features) that have nearly the same performance as the top-k eigenfeatures.
Basic notation. A, B, . . . are matrices; a, b, . . . are vectors; i, j, . . . are integers; I_n is the n × n
identity matrix; 0_{m×n} is the m × n matrix of zeros; e_i is the i-th standard basis vector (whose dimensionality
will be clear from the context). For vectors, we use the Euclidean norm ‖·‖₂; for matrices, the
Frobenius and the spectral norms: ‖X‖_F² = Σ_{i,j} X_ij² and ‖X‖₂ = σ₁(X), i.e., the largest singular
value of X.
Top-k PCA Regression. Let X = UΣVᵀ be the singular value decomposition of X, where U
(resp. V) is the matrix of left (resp. right) singular vectors of X, with singular values in the diagonal
matrix Σ. For k ≤ d, let U_k, Σ_k, and V_k contain only the top-k singular vectors and associated
singular values. The best rank-k reconstruction of X in the Frobenius norm can be obtained from
this truncated singular value decomposition as X_k = U_k Σ_k V_kᵀ. The k right singular vectors in V_k
are called the top-k eigenfeatures. The projections of the data points onto the top k eigenfeatures are
obtained by projecting the x_i's onto the columns of V_k to obtain F_k = X V_k = U Σ Vᵀ V_k = U_k Σ_k.
Now, each data point (row) in F_k only has k dimensions. Each column of F_k contains a particular
eigenfeature's value for every data point and is a linear combination of the columns of X.
The top-k PCA regression uses F_k as the data matrix and y as the target vector to produce regression
weights w_k^* = F_k⁺ y. The in-sample error of this k-dimensional regression is equal to
‖y − F_k w_k^*‖₂ = ‖y − F_k F_k⁺ y‖₂ = ‖y − U_k Σ_k Σ_k⁻¹ U_kᵀ y‖₂ = ‖y − U_k U_kᵀ y‖₂.
The weights w_k^* are k-dimensional and cannot be applied to X, but the equivalent weights V_k w_k^*
can be applied to X and they have the same in-sample error with respect to X:
E(V_k w_k^*) = ‖y − X V_k w_k^*‖₂ = ‖y − F_k w_k^*‖₂ = ‖y − U_k U_kᵀ y‖₂.
Hence, we will refer to both w_k^* and V_k w_k^* as the top-k PCA regression weights (the dimension will
make it clear which one we are talking about) and, for simplicity, we will overload w_k^* to refer to both
these weight vectors. In practice, k is chosen to measure
the "effective dimension" of the data, and, typically, k ≪ rank(X) = d. One way to choose k is so
that ‖X − X_k‖_F ≤ σ_k(X) (the "energy" in the k-th principal component is large compared to the
energy in all smaller principal components). We do not argue the merits of top-k PCA regression;
we just note that top-k PCA regression is a common tool for regularizing regression.
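A minimal numpy sketch of top-k PCA regression as just described (the function name is ours, not from the paper):

```python
import numpy as np

def topk_pca_regression(X, y, k):
    """Return the k-dimensional weights w_k^* = F_k^+ y and the equivalent
    d-dimensional weights V_k w_k^*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T
    Fk = Uk * sk                      # F_k = X V_k = U_k Sigma_k
    wk = np.linalg.pinv(Fk) @ y       # top-k PCA regression weights
    return wk, Vk @ wk                # in-sample error: ||y - Uk Uk' y||_2
```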
Problem Formulation. Given X ∈ ℝ^{n×d}, k (the number of target eigenfeatures for top-k PCA
regression), and r > k (the sparsity parameter), we seek to extract a set of at most k sparse eigenfeatures V̂_k which use at most r of the actual dimensions. Let F̂_k = X V̂_k ∈ ℝ^{n×k} denote the matrix
whose columns are the k extracted sparse eigenfeatures, which are linear combinations of a set of at
most r actual features. Our goal is to obtain sparse features for which the vector of sparse regression
weights ŵ_k = F̂_k⁺ y results in an in-sample error ‖y − F̂_k F̂_k⁺ y‖₂ that is close to the top-k PCA
regression error ‖y − F_k F_k⁺ y‖₂. Just as with top-k PCA regression, we can define the equivalent
d-dimensional weights V̂_k ŵ_k; we will overload ŵ_k to refer to these weights as well.
Finally, we conclude by noting that while our discussion above has focused on simple linear regression, the problem can also be defined for multiple regression, where the vector y is replaced by a
matrix Y ∈ ℝ^{n×ω}, with ω ≥ 1. The weight vector w becomes a weight matrix W, where each
column of W contains the weights from the regression of the corresponding column of Y onto the
features. All our results hold in this general setting as well, and we will actually present our main
contributions in the context of multiple regression.
2 Our contributions
Recall from our discussion at the end of the introduction that we will present all our results in the
general setting, where the target vector y is replaced by a matrix Y ∈ ℝ^{n×ω}. Our first theorem
argues that there exists a polynomial-time deterministic algorithm that constructs a feature matrix
F̂_k ∈ ℝ^{n×k}, such that each feature (column of F̂_k) is a linear combination of at most r actual
features (columns) from X and results in small in-sample error. Again, this should be contrasted
with top-k PCA regression, which constructs a feature matrix F_k, such that each feature (column of
F_k) is a linear combination of all features (columns) in X. Our theorems argue that the in-sample
error of our features is almost as good as the in-sample error of top-k PCA regression, which uses
dense features.
Theorem 1 (Deterministic Feature Extraction). Let X ∈ ℝ^{n×d} and Y ∈ ℝ^{n×ω} be the input matrices
in a multiple regression problem. Let k > 0 be a target rank for top-k PCA regression on X and Y.
For any r > k, there exists an algorithm that constructs a feature matrix F̂_k = X V̂_k ∈ ℝ^{n×k}, such
that every column of F̂_k is a linear combination of (the same) at most r columns of X, and

$$ \bigl\| Y - X \hat{W}_k \bigr\|_F = \bigl\| Y - \hat{F}_k \hat{F}_k^{+} Y \bigr\|_F \le \bigl\| Y - X W_k^{*} \bigr\|_F + \Bigl( 1 + \sqrt{\frac{9k}{r}} \Bigr) \frac{ \| X - X_k \|_F }{ \sigma_k(X) } \, \| Y \|_2 . $$

(σ_k(X) is the k-th singular value of X.) The running time of the proposed algorithm is T(V_k) +
O(ndk + nrk²), where T(V_k) is the time required to compute the matrix V_k, the top-k right singular vectors of X.
Theorem 1 says that one can construct k features with sparsity O(k) and obtain a comparable regression error to that attained by the dense top-k PCA features, up to an additive term that is proportional
to Δ_k = ‖X − X_k‖_F / σ_k(X).
To construct the features satisfying the guarantees of the above theorem, we first employ the algorithm DSF-Select (see Table 1 and Section 4.3) to select r columns of X and form the matrix
C ∈ ℝ^{n×r}. Now, let Π_{C,k}(Y) denote the best rank-k approximation (with respect to the Frobenius
norm) to Y in the column span of C. In other words, Π_{C,k}(Y) is a rank-k matrix that minimizes
‖Y − Π_{C,k}(Y)‖_F over all rank-k matrices in the column span of C. Efficient algorithms are known
for computing Π_{C,k}(Y) and have been described in [2]. Given Π_{C,k}(Y), the sparse eigenfeatures
can be computed efficiently as follows: first, set Ψ = C⁺ Π_{C,k}(Y). Observe that
CΨ = CC⁺ Π_{C,k}(Y) = Π_{C,k}(Y).
The last equality follows because CC⁺ projects onto the column span of C and Π_{C,k}(Y) is already
in the column span of C. Ψ has rank at most k because Π_{C,k}(Y) has rank at most k. Let the
SVD of Ψ be Ψ = U_Ψ Σ_Ψ V_Ψᵀ and set F̂_k = C U_Ψ Σ_Ψ ∈ ℝ^{n×k}. Clearly, each column of F̂_k is a
linear combination of (the same) at most r columns of X (the columns in C). The sparse features
themselves can also be obtained: since F̂_k = X V̂_k, we have V̂_k = X⁺ F̂_k.
To prove that F̂_k is a good set of sparse features, we first relate the regression error from using F̂_k
to how well Π_{C,k}(Y) approximates Y:
‖Y − Π_{C,k}(Y)‖_F = ‖Y − CΨ‖_F = ‖Y − C U_Ψ Σ_Ψ V_Ψᵀ‖_F = ‖Y − F̂_k V_Ψᵀ‖_F ≥ ‖Y − F̂_k F̂_k⁺ Y‖_F.
The last inequality follows because F̂_k⁺ Y are the optimal regression weights for the features F̂_k. The
reverse inequality also holds because Π_{C,k}(Y) is the best rank-k approximation to Y in the column
span of C. Thus,
‖Y − F̂_k F̂_k⁺ Y‖_F = ‖Y − Π_{C,k}(Y)‖_F.
The upshot of the above discussion is that if we can find a matrix C consisting of columns of X for
which ‖Y − Π_{C,k}(Y)‖_F is small, then we immediately have good sparse eigenfeatures. Indeed, all
that remains to complete the proof of Theorem 1 is to bound ‖Y − Π_{C,k}(Y)‖_F for the columns C
returned by the algorithm DSF-Select.
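The construction translates into a few lines of linear algebra. The sketch below computes Π_{C,k}(Y) by projecting Y onto an orthonormal basis of span(C) and truncating the projection to rank k, which is one standard way to realize this step; the function name and the guard on k are our own choices.

```python
import numpy as np

def sparse_eigenfeatures(X, C, Y, k):
    """Given C (r columns of X), build F_hat_k = C U_Psi Sigma_Psi with
    Psi = C^+ Pi_{C,k}(Y). Here Pi_{C,k}(Y) is obtained by projecting Y
    onto span(C) and keeping the best rank-k part of the projection."""
    Q, _ = np.linalg.qr(C)                     # orthonormal basis for span(C)
    B = Q.T @ Y
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    Pi = Q @ (Ub[:, :k] * sb[:k]) @ Vbt[:k]    # Pi_{C,k}(Y)
    Psi = np.linalg.pinv(C) @ Pi               # Psi = C^+ Pi_{C,k}(Y)
    Up, sp, _ = np.linalg.svd(Psi, full_matrices=False)
    kk = min(k, sp.size)                       # guard: k <= min(r, Y columns)
    F_hat = C @ (Up[:, :kk] * sp[:kk])         # sparse features F_hat_k
    V_hat = np.linalg.pinv(X) @ F_hat          # V_hat_k = X^+ F_hat_k
    return F_hat, V_hat
```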
Our second result employs the algorithm RSF-Select (see Table 2 and Section 4.4) to select r
columns of X and again form the matrix C ∈ ℝ^{n×r}. One then proceeds to construct Π_{C,k}(Y) and
F̂_k as described above. The advantage of this approach is simplicity, better efficiency, and a slightly
better error bound, at the expense of logarithmically worse sparsity.
Theorem 2 (Randomized Feature Extraction). Let X ∈ ℝ^{n×d} and Y ∈ ℝ^{n×ω} be the input matrices
in a multiple regression problem. Let k > 0 be a target rank for top-k PCA regression on X and
Y. For any r > 144k ln(20k), there exists a randomized algorithm that constructs a feature matrix
F̂_k = X V̂_k ∈ ℝ^{n×k}, such that every column of F̂_k is a linear combination of at most r columns
of X, and, with probability at least 0.7 (over random choices made in the algorithm),

$$ \bigl\| Y - X \hat{W}_k \bigr\|_F = \bigl\| Y - \hat{F}_k \hat{F}_k^{+} Y \bigr\|_F \le \bigl\| Y - X W_k^{*} \bigr\|_F + \sqrt{\frac{36 k \ln(20k)}{r}} \; \frac{ \| X - X_k \|_F }{ \sigma_k(X) } \, \| Y \|_2 . $$

The running time of the proposed algorithm is T(V_k) + O(dk + r log r).
3 Connections with prior work
A variant of our problem is the identification of a matrix C consisting of a small number (say r)
columns of X such that the regression of Y onto C (as opposed to k features from C) gives small in-sample error. This is the sparse approximation problem, where the number of non-zero weights in the
regression vector is restricted to r. This problem is known to be NP-hard [25]. Sparse approximation
has important applications and many approximation algorithms have been presented [29, 9, 30];
proposed algorithms are typically either greedy or are based on convex optimization relaxations of
the objective. An important difference between sparse approximation and sparse PCA regression is
that our goal is not to minimize the error under a sparsity constraint, but to match the top-k PCA
regularized regression under a sparsity constraint. We argue that it is possible to achieve a provably
accurate sparse PCA-regression, i.e., use sparse features instead of dense ones.
If X = Y (approximating X using the columns of X), then this is the column-based matrix reconstruction problem, which has received much attention in existing literature [16, 18, 14, 26, 5, 12, 20].
In this paper, we study the more general problem where X 6= Y, which turns out to be considerably
more difficult.
Input sparseness is closely related to feature selection and automatic relevance determination. Research in this area is vast, and we refer the reader to [19] for a high-level view of the field. Again,
the goal in this area is different than ours, namely they seek to reduce dimensionality and improve
out-of-sample error. Our goal is to provide sparse PCA features that are almost as good as the exact principal components. While it is definitely the case that many methods outperform top-k PCA
regression, especially for d ? n, this discussion is orthogonal to our work.
The closest result to ours in prior literature is the so-called rank-revealing QR (RRQR) factorization [8]. The authors use a QR-like decomposition to select exactly k columns of X and compare
their sparse solution vector w̃_k with the top-k PCA regularized solution w_k^*. They show that

$$ \| w_k^{*} - \tilde{w}_k \|_2 \le \sqrt{ k(n-k) + 1 } \; \frac{ \| X - X_k \|_2 }{ \sigma_k(X) } \, \Theta, $$

where Θ = 2‖w̃_k‖₂ + ‖y − X w_k^*‖₂ / σ_k(X). This bound is similar to our bound in Theorem 1,
but only applies to r = k and is considerably weaker. For example, √(k(n−k)+1) ‖X − X_k‖₂ ≥
√k ‖X − X_k‖_F; note also that the dependence of the above bound on 1/σ_k(X) is generally worse
than ours.
The importance of the right singular vectors in matrix reconstruction problems (including PCA)
has been heavily studied in prior literature, going back to work by Jolliffe in 1972 [22]. The idea of
sampling columns from a matrix X with probabilities that are derived from VkT (as we do in Theorem
2) was introduced in [15] in order to construct coresets for regression problems by sampling data
points (rows of the matrix X) as opposed to features (columns of the matrix X). Other prior work
including [15, 13, 27, 6, 4] has employed variants of this sampling scheme; indeed, we borrow
proof techniques from the above papers in our work. Finally, we note that our deterministic feature
selection algorithm (Theorem 1) uses a sparsification tool developed in [2] for column based matrix
reconstruction. This tool is a generalization of algorithms originally introduced in [1].
4 Our algorithms
Our algorithms emerge from the constructive proofs of Theorems 1 and 2. Both algorithms necessitate access to the right singular vectors of X, namely the matrix V_k ∈ ℝ^{d×k}. In our experiments, we
used PROPACK [23] in order to compute Vk iteratively; PROPACK is a fast alternative to the exact
SVD. Our first algorithm (DSF-Select) is deterministic, while the second algorithm (RSF-Select)
is randomized, requiring logarithmically more columns to guarantee the theoretical bounds. Prior
to describing our algorithms in detail, we will introduce useful notation on sampling and rescaling
matrices as well as a matrix factorization lemma (Lemma 3) that will be critical in our proofs.
4.1 Sampling and rescaling matrices
Let C ∈ ℝ^{n×r} contain r columns of X ∈ ℝ^{n×d}. We can express the matrix C as C = XΩ, where
the sampling matrix Ω ∈ ℝ^{d×r} is equal to [e_{i₁}, . . . , e_{i_r}] and the e_i are standard basis vectors in ℝ^d. In
our proofs, we will make use of S ∈ ℝ^{r×r}, a diagonal rescaling matrix with positive entries on the
diagonal. Our column selection algorithms return a sampling and a rescaling matrix, so that XΩS
contains a subset of rescaled columns from X. The rescaling is benign, since it does not affect the
span of the columns of C = XΩ and thus the quantity of interest, namely Π_{C,k}(Y).
4.2 A structural result using matrix factorizations
We now present a matrix reconstruction lemma that will be the starting point for our algorithms.
Let Y ∈ ℝ^{n×ω} be a target matrix and let X ∈ ℝ^{n×d} be the basis matrix that we will use in order
to reconstruct Y. More specifically, we seek a sparse reconstruction of Y from X, or, in other
words, we would like to choose r ≪ d columns from X and form a matrix C ∈ ℝ^{n×r} such that
‖Y − Π_{C,k}(Y)‖_F is small. Let Z ∈ ℝ^{d×k} be an orthogonal matrix (i.e., ZᵀZ = I_k), and express the
matrix X as follows:
X = HZᵀ + E,
where H is some matrix in ℝ^{n×k} and E ∈ ℝ^{n×d} is the residual error of the factorization. It is easy
to prove that the Frobenius or spectral norm of E is minimized when H = XZ. Let Ω ∈ ℝ^{d×r} and
S ∈ ℝ^{r×r} be a sampling and a rescaling matrix respectively, as defined in the previous section, and
let C = XΩ ∈ ℝ^{n×r}. Then, the following lemma holds (see [3] for a detailed proof).
Lemma 3 (Generalized Column Reconstruction). Using the above notation, if the rank of the matrix
ZᵀΩS is equal to k, then

$$ \| Y - \Pi_{C,k}(Y) \|_F \le \| Y - H H^{+} Y \|_F + \| E \Omega S (Z^{\top} \Omega S)^{+} H^{+} Y \|_F . \qquad (1) $$
We now parse the above lemma carefully in order to understand its implications in our setting. For
our goals, the matrix C essentially contains a subset of r features from the data matrix X. Recall that
Π_{C,k}(Y) is the best rank-k approximation to Y within the column space of C; and, the difference
Y − Π_{C,k}(Y) measures the error from performing regression using sparse eigenfeatures that are
constructed as linear combinations of the columns of C. Moving to the right-hand side of eqn. (1),
the two terms reflect a tradeoff between the accuracy of the reconstruction of Y using H and the
error E in approximating X by the product HZᵀ. Ideally, we would like to choose H so that Y can
be accurately approximated and, at the same time, the matrix X is approximated by the product HZᵀ
with small residual error E. In general, these two goals might be competing and a balance must be
struck. Here, we focus on one extreme of this tradeoff, namely choosing Z so that the (Frobenius)
norm of the matrix E is minimized. More specifically, since Z has rank k, the best choice for HZᵀ in
order to minimize ‖E‖_F is X_k; then, E = X − X_k. Using the SVD of X_k, namely X_k = U_k Σ_k V_kᵀ,
we apply Lemma 3 setting H = U_k Σ_k and Z = V_k. The following corollary is immediate.
Lemma 4 (Generalization of Lemma 7 in [2]). Using the above notation, if the rank of the matrix
V_kᵀΩS is equal to k, then

$$ \| Y - \Pi_{C,k}(Y) \|_F \le \| Y - U_k U_k^{\top} Y \|_F + \| (X - X_k) \Omega S (V_k^{\top} \Omega S)^{+} \Sigma_k^{-1} U_k^{\top} Y \|_F . $$

Our main results will follow by carefully choosing Ω and S in order to control the right-hand side of
the above inequality.
Algorithm: DSF-Select
1: Input: X, k, r.
2: Output: r columns of X in C.
3: Compute V_k and E = X − X_k = X − X V_k V_kᵀ.
4: Run DetSampling to construct sampling and rescaling matrices Ω and S: [Ω, S] = DetSampling(V_kᵀ, E, r).
5: Return C = XΩ.

Algorithm: DetSampling
1: Input: Vᵀ = [v₁, . . . , v_d], A = [a₁, . . . , a_d], r.
2: Output: Sampling and rescaling matrices [Ω, S].
3: Initialize B₀ = 0_{k×k}, Ω = 0_{d×r}, and S = 0_{r×r}.
4: for τ = 1 to r do
5:   Set L_τ = τ − √(rk).
6:   Pick index i ∈ {1, 2, ..., d} and t such that U(a_i) ≤ 1/t ≤ L(v_i, B_{τ−1}, L_τ).
7:   Update B_τ = B_{τ−1} + t v_i v_iᵀ.
8:   Set Ω_{iτ} = 1 and S_{ττ} = 1/√t.
9: end for
10: Return Ω and S.

Table 1: DSF-Select: Deterministic Sparse Feature Selection
4.3 DSF-Select: Deterministic Sparse Feature Selection
DSF-Select deterministically selects r columns of the matrix X to form the matrix C (see Table 1,
and note that the matrix C = XΩ might contain duplicate columns, which can be removed without
any loss in accuracy). The heart of DSF-Select is the subroutine DetSampling, a near-greedy
algorithm which selects columns of V_kᵀ iteratively to satisfy two criteria: the selected columns should
form an approximately orthogonal basis for the columns of V_kᵀ, so that (V_kᵀΩS)⁺ is well-behaved;
and EΩS should also be well-behaved. These two properties will allow us to prove our results via
Lemma 4. The implementation of the proposed algorithm is quite simple, since it relies only on
standard linear-algebraic operations.
DetSampling takes as input two matrices: Vᵀ ∈ ℝ^{k×d} (satisfying VᵀV = I_k) and A ∈ ℝ^{n×d}. In
order to describe the algorithm, it is convenient to view these two matrices as two sets of column
vectors, Vᵀ = [v₁, . . . , v_d] (satisfying Σ_{i=1}^{d} v_i v_iᵀ = I_k) and A = [a₁, . . . , a_d]. In DSF-Select
we set Vᵀ = V_kᵀ and A = E = X − X_k. Given k and r, the algorithm iterates from τ = 1 up to
τ = r, and its main operation is to compute the functions φ(L, B) and L(v, B, L), defined
as follows:
$$ \varphi(L, B) = \sum_{i=1}^{k} \frac{1}{\lambda_i - L}, \qquad L(v, B, L) = \frac{ v^{\top} \bigl( B - (L+1) I_k \bigr)^{-2} v }{ \varphi(L+1, B) - \varphi(L, B) } - v^{\top} \bigl( B - (L+1) I_k \bigr)^{-1} v . $$

In the above, B ∈ ℝ^{k×k} is a symmetric matrix with eigenvalues λ₁, . . . , λ_k, and L ∈ ℝ is a parameter.
We also define the function U(a) for a vector a ∈ ℝ^n as follows:

$$ U(a) = \Bigl( 1 - \sqrt{\frac{k}{r}} \Bigr)^{-1} \frac{ a^{\top} a }{ \| A \|_F^2 } . $$
At every step τ, the algorithm selects a column a_i (and a weight t) such that U(a_i) ≤ 1/t ≤ L(v_i, B_{τ−1}, L_τ); note that
B_{τ−1} is a k × k matrix which is also updated at every step of the algorithm (see Table 1). The
existence of such a column is guaranteed by results in [1, 2].
It is worth noting that in practical implementations of the proposed algorithm, there might exist
multiple columns which satisfy the above requirement. In our implementation we chose to break
such ties arbitrarily. However, more careful and informed choices, such as breaking the ties in a way
that makes maximum progress towards our objective, might result in considerable savings. This is
indeed an interesting open problem.
The running time of our algorithm is dominated by the search for a column which satisfies
U(a_i) ≤ 1/t ≤ L(v_i, B_{τ−1}, L_τ). To compute the function L, we first need to compute φ(L_τ, B_{τ−1}) (which
necessitates the eigenvalues of B_{τ−1}) and then we need to compute the inverse of B_{τ−1} − (L+1) I_k.
These computations need O(k³) time per iteration, for a total of O(rk³) time over all r iterations.
Computing the function L for each vector v_i, i = 1, . . . , d, needs an additional
O(dk²) time per iteration; the total time over all r iterations is O(drk²). Next, in order to compute
the function U, we need to compute a_iᵀ a_i (for all i = 1, . . . , d), which necessitates O(nnz(A)) time,
where nnz(A) is the number of non-zero elements of A. In our setting, A = E ∈ ℝ^{n×d}, so the
overall running time is O(drk² + nd). To get the final running time we also need to account
for the computation of V_k and E.
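The functions φ, L, and U translate directly into code; the following hedged sketch of one selection step (our own naming, not the authors' code) assumes the admissible interval is non-empty and positive, which the results in [1, 2] guarantee.

```python
import numpy as np

def phi(L, B):
    # phi(L, B) = sum_i 1 / (lambda_i - L) over the eigenvalues of B.
    evals = np.linalg.eigvalsh(B)
    return np.sum(1.0 / (evals - L))

def lower_fn(v, B, L):
    # L(v, B, L) as defined above.
    k = B.shape[0]
    Minv = np.linalg.inv(B - (L + 1.0) * np.eye(k))
    num = v @ (Minv @ Minv) @ v
    return num / (phi(L + 1.0, B) - phi(L, B)) - v @ Minv @ v

def upper_fn(a, A_fro2, k, r):
    # U(a) with ||A||_F^2 precomputed as A_fro2.
    return (a @ a) / ((1.0 - np.sqrt(k / r)) * A_fro2)

def pick_column(V, A, B, L, A_fro2, k, r):
    """Return (i, t) with U(a_i) <= 1/t <= L(v_i, B, L); any t in the
    admissible interval works, here its midpoint (assumes both ends > 0)."""
    for i in range(V.shape[1]):
        lo = lower_fn(V[:, i], B, L)
        up = upper_fn(A[:, i], A_fro2, k, r)
        if up <= lo:
            return i, 2.0 / (lo + up)
    raise RuntimeError("no admissible column found")
```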
Algorithm: RSF-Select
1: Input: X, k, r.
2: Output: r columns of X in C.
3: Compute V_k.
4: Run RandSampling to construct sampling and rescaling matrices Ω and S: [Ω, S] = RandSampling(V_kᵀ, r).
5: Return C = XΩ.

Algorithm: RandSampling
1: Input: Vᵀ = [v₁, . . . , v_d] and r.
2: Output: Sampling and rescaling matrices [Ω, S].
3: For i = 1, ..., d compute probabilities p_i = ‖v_i‖₂² / k.
4: Initialize Ω = 0_{d×r} and S = 0_{r×r}.
5: for τ = 1 to r do
6:   Select an index i_τ ∈ {1, 2, ..., d}, where the probability of selecting index i is equal to p_i.
7:   Set Ω_{i_τ τ} = 1 and S_{ττ} = 1/√(r p_{i_τ}).
8: end for
9: Return Ω and S.

Table 2: RSF-Select: Randomized Sparse Feature Selection
The theoretical properties of DetSampling were analyzed in detail in [2], building on the original
analysis of [1]. The following lemma from [2] summarizes important properties of the output of DetSampling.
Lemma 5 ([2]). DetSampling with inputs Vᵀ and A returns a sampling matrix Ω ∈ ℝ^{d×r} and a
rescaling matrix S ∈ ℝ^{r×r} satisfying

$$ \| (V^{\top} \Omega S)^{+} \|_2 \le \Bigl( 1 - \sqrt{\frac{k}{r}} \Bigr)^{-1}; \qquad \| A \Omega S \|_F \le \| A \|_F . $$

We apply Lemma 5 with V = V_k and A = E and we combine it with Lemma 4 to conclude the
proof of Theorem 1; see [3] for details.
4.4 RSF-Select: Randomized Sparse Feature Selection
RSF-Select is a randomized algorithm that selects r columns of the matrix X in order to form the
matrix C (see Table 2). The main differences between RSF-Select and DSF-Select are two: first,
RSF-Select only needs access to V_kᵀ; and, second, RSF-Select uses a simple sampling procedure in
order to select the columns of X to include in C. This sampling procedure is described in algorithm
RandSampling and essentially selects columns of X with probabilities that depend on the norms of
the columns of V_kᵀ. Thus, RandSampling first computes a set of probabilities that are proportional
to the (squared) norms of the columns of V_kᵀ and then samples r columns of X in r independent identical trials
with replacement, where in each trial a column is sampled according to the computed probabilities.
Note that a column could be selected multiple times. In terms of running time, and assuming that
the matrix V_k that contains the top-k right singular vectors of X has already been computed, the
proposed algorithm needs O(dk) time to compute the sampling probabilities and an additional O(d +
r log r) time to sample r columns from X. Similar to Lemma 5, we can prove analogous properties
for the matrices Ω and S that are returned by algorithm RandSampling. Again, combining with
Lemma 4, we can prove Theorem 2; see [3] for details.
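A minimal sketch of RandSampling follows; the explicit normalization of p guards against floating-point drift (Σ_i ‖v_i‖₂² = k exactly in theory), and otherwise the code mirrors the pseudocode in Table 2.

```python
import numpy as np

def rand_sampling(Vk_t, r, seed=0):
    """Vk_t: k x d matrix V_k'. Samples r column indices with replacement
    with p_i = ||v_i||_2^2 / k and returns the indices together with the
    rescaling factors 1/sqrt(r * p_i)."""
    rng = np.random.default_rng(seed)
    k, d = Vk_t.shape
    p = (Vk_t**2).sum(axis=0) / k      # leverage-type probabilities
    p = p / p.sum()                    # guard against round-off
    idx = rng.choice(d, size=r, replace=True, p=p)
    scale = 1.0 / np.sqrt(r * p[idx])
    return idx, scale

# C = X[:, idx] * scale gives the rescaled sampled columns, i.e., X @ Omega @ S.
```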
5 Experiments
The goal of our experiments is to illustrate that our algorithms produce sparse features which perform as well in-sample as the top-k PCA regression. It turns out that the out-of-sample performance
is comparable (if not better in many cases, perhaps due to the sparsity) to top-k PCA-regression.
7
                          k = 5, r = k + 1                           k = 5, r = 2k
Data (n; d)               w_k*       DSF        RSF        rnd       w_k*       DSF        RSF        rnd
Arcene (100; 10,000)      0.93/0.99  0.88/0.94  0.91/0.98  1.0/1.0   0.93/1.0   0.89/0.97  0.86/0.98  1.0/1.0
I-sphere (351; 34)        0.57/0.58  0.52/0.53  0.55/0.57  0.57/0.57 0.57/0.58  0.51/0.54  0.52/0.55  0.56/0.56
LibrasMov (45; 90)        2.9/3.3    2.9/3.6    3.1/3.7    3.7/3.7   2.9/3.3    2.4/3.3    2.6/3.6    3.6/3.6
Madelon (2,000; 500)      0.98/0.98  0.98/0.98  0.98/0.98  1.0/1.0   0.98/0.98  0.97/0.98  0.97/0.98  1.0/1.0
HillVal (606; 100)        0.68/0.68  0.66/0.67  0.67/0.68  0.68/0.68 0.68/0.68  0.65/0.67  0.67/0.69  0.69/0.69
Spambase (4601; 57)       0.30/0.30  0.30/0.30  0.31/0.30  0.28/0.38 0.3/0.3    0.3/0.3    0.3/0.3    0.25/0.35

Table 3: Comparison of DSF-Select and RSF-Select with top-k PCA. Each cell reports the in-sample error (first number) and the out-of-sample error (second number); here w_k* denotes top-k PCA regression, DSF and RSF denote r-sparse regression via DSF-Select (w̃_k^DSF) and RSF-Select (w̃_k^RSF), and rnd denotes r-sparse regression on r random columns (w̃_k^rnd). In the original table, the method achieving the best out-of-sample error was shown in bold.
Compared to top-k PCA, our algorithms are efficient and work well in practice, even better than the
theoretical bounds suggest.
We present our findings in Table 3 using data sets from the UCI machine learning repository. We used a five-fold cross validation design with 1,000 random splits: we computed regression weights using 80% of the data and estimated out-sample error on the remaining 20% of the data. We set k = 5 in the experiments (no attempt was made to optimize k). Table 3 shows the in- and out-sample error for four methods: top-k PCA regression, w*_k; r-sparse features regression using DSF-select, ŵ_k^DSF; r-sparse features regression using RSF-select, ŵ_k^RSF; and r-sparse features regression using r random columns, ŵ_k^rnd.
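As an illustration of this evaluation protocol (a sketch of ours, not the authors' code; the RMS error metric and all function names are assumptions), one random 80/20 split looks like this:

```python
import numpy as np

def pca_regression_weights(X, y, k):
    """Top-k PCA regression: w*_k = V_k diag(1/s_k) U_k^T y (assumes s_k > 0)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

def sparse_weights(X, y, cols):
    """Least-squares weights supported only on the selected (distinct) columns."""
    w = np.zeros(X.shape[1])
    w[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
    return w

def split_error(X, y, weight_fn, rng, test_frac=0.2):
    """One random split: fit on 80% of the rows, report (in, out) RMS errors."""
    n = X.shape[0]
    perm = rng.permutation(n)
    n_test = int(test_frac * n)
    test, train = perm[:n_test], perm[n_test:]
    w = weight_fn(X[train], y[train])
    rms = lambda rows: np.sqrt(np.mean((X[rows] @ w - y[rows]) ** 2))
    return rms(train), rms(test)

# e.g. split_error(X, y, lambda A, b: sparse_weights(A, b, cols), rng)
```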
6 Discussion
The top-k PCA regression constructs "features" without looking at the targets; it is target-agnostic. So are all the algorithms we discussed here, as our goal was to compare with top-k PCA. However, there is unexplored potential in Lemma 3. We only explored one extreme choice for the factorization, namely the minimization of some norm of the matrix E. Other choices, in particular non-target-agnostic choices, could prove considerably better. Such investigations are left for future work.
As mentioned when we discussed our deterministic algorithm, it will often be the case that in some steps of the greedy selection process, multiple columns could satisfy the criterion for selection. In such a situation, we are free to choose any one; we broke ties arbitrarily in our implementation, and even as is, the algorithm performed as well as or better than top-k PCA. However, we expect that breaking the ties so as to optimize the ultimate objective would yield considerable additional benefit; this would also be non-target-agnostic.
Acknowledgments
This work has been supported by two NSF CCF and DMS grants to Petros Drineas and Malik
Magdon-Ismail.
References
[1] J. Batson, D. Spielman, and N. Srivastava. Twice-Ramanujan sparsifiers. In Proceedings of ACM STOC, pages 255-262, 2009.
[2] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column based matrix reconstruction. In Proceedings of IEEE FOCS, 2011.
[3] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Sparse features for PCA-like linear regression. Manuscript, 2011.
[4] C. Boutsidis and M. Magdon-Ismail. Deterministic feature selection for k-means clustering. arXiv:1109.5664v1, 2011.
[5] C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proceedings of ACM-SIAM SODA, pages 968-977, 2009.
[6] C. Boutsidis, M. W. Mahoney, and P. Drineas. Unsupervised feature selection for the k-means clustering problem. In Proceedings of NIPS, 2009.
[7] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied Statistics, 22:203-214, 1995.
[8] T. Chan and P. Hansen. Some applications of the rank revealing QR factorization. SIAM Journal on Scientific and Statistical Computing, 13:727-741, 1992.
[9] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In Proceedings of ACM STOC, 2008.
[10] A. Dasgupta, P. Drineas, B. Harb, R. Kumar, and M. W. Mahoney. Sampling algorithms and coresets for Lp regression. In Proceedings of ACM-SIAM SODA, 2008.
[11] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In Proceedings of NIPS, 2004.
[12] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of ACM STOC, 2010.
[13] P. Drineas, R. Kannan, and M. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132-157, 2006.
[14] P. Drineas, M. Mahoney, and S. Muthukrishnan. Polynomial time algorithm for column-row based relative-error low-rank matrix approximation. Technical Report 2006-04, DIMACS, March 2006.
[15] P. Drineas, M. Mahoney, and S. Muthukrishnan. Sampling algorithms for ℓ2 regression and applications. In Proceedings of ACM-SIAM SODA, pages 1127-1136, 2006.
[16] G. Golub. Numerical methods for solving linear least squares problems. Numerische Mathematik, 7:206-216, 1965.
[17] G. Golub, P. Hansen, and D. O'Leary. Tikhonov regularization and total least squares. SIAM Journal on Matrix Analysis and Applications, 21(1):185-194, 2000.
[18] M. Gu and S. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17:848-869, 1996.
[19] I. Guyon and A. Elisseeff. Special issue on variable and feature selection. Journal of Machine Learning Research, 3, 2003.
[20] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 2011.
[21] P. Hansen. The truncated SVD as a method for regularization. BIT Numerical Mathematics, 27(4):534-553, 1987.
[22] I. Jolliffe. Discarding variables in Principal Component Analysis: artificial data. Applied Statistics, 21(2):160-173, 1972.
[23] R. Larsen. PROPACK: A software package for the symmetric eigenvalue problem and singular value problems on Lanczos and Lanczos bidiagonalization with partial reorthogonalization. http://soi.stanford.edu/~rmunk/PROPACK/.
[24] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: exact and greedy algorithms. In Proceedings of NIPS, 2005.
[25] B. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227-234, 1995.
[26] M. Rudelson and R. Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54, 2007.
[27] N. Srivastava and D. Spielman. Graph sparsifications by effective resistances. In Proceedings of ACM STOC, pages 563-568, 2008.
[28] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, pages 267-288, 1996.
[29] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231-2242, 2004.
[30] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Proceedings of NIPS, 2008.
Inverting Grice's Maxims to Learn Rules from
Natural Language Extractions
Mohammad Shahed Sorower, Thomas G. Dietterich, Janardhan Rao Doppa
Walker Orr, Prasad Tadepalli, and Xiaoli Fern
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97331
{sorower,tgd,doppa,orr,tadepall,xfern}@eecs.oregonstate.edu
Abstract
We consider the problem of learning rules from natural language text sources. These sources,
such as news articles and web texts, are created by a writer to communicate information to a
reader, where the writer and reader share substantial domain knowledge. Consequently, the
texts tend to be concise and mention the minimum information necessary for the reader to
draw the correct conclusions. We study the problem of learning domain knowledge from such
concise texts, which is an instance of the general problem of learning in the presence of missing
data. However, unlike standard approaches to missing data, in this setting we know that facts
are more likely to be missing from the text in cases where the reader can infer them from
the facts that are mentioned combined with the domain knowledge. Hence, we can explicitly
model this "missingness" process and invert it via probabilistic inference to learn the underlying
domain knowledge. This paper introduces a mention model that models the probability of facts
being mentioned in the text based on what other facts have already been mentioned and domain
knowledge in the form of Horn clause rules. Learning must simultaneously search the space of
rules and learn the parameters of the mention model. We accomplish this via an application of
Expectation Maximization within a Markov Logic framework. An experimental evaluation on
synthetic and natural text data shows that the method can learn accurate rules and apply them
to new texts to make correct inferences. Experiments also show that the method outperforms the standard EM approach that assumes mentions are missing at random.
1 Introduction
The immense volume of textual information available on the web provides an important opportunity and challenge for AI: can we develop methods that learn domain knowledge by reading natural texts such as news articles and web pages? We would like to acquire at least two kinds of domain
knowledge: concrete facts and general rules. Concrete facts can be extracted as logical relations or
as tuples to populate a data base. Systems such as Whirl [3], TextRunner [5], and NELL [1] learn
extraction patterns that can be applied to text to extract instances of relations.
General rules can be acquired in two ways. First, they may be stated explicitly in the text, particularly in tutorial texts. Second, they can be acquired by generalizing from the extracted concrete facts. In this paper, we focus on the latter setting: Given a data base of literals extracted from
natural language texts (e.g., newspaper articles), we seek to learn a set of probabilistic Horn clauses
that capture general rules.
Unfortunately for rule learning algorithms, natural language texts are incomplete. The writer tends
to mention only enough information to allow the reader to easily infer the remaining facts from
shared background knowledge. This aspect of economy in language was first pointed out by Grice
[7] in his maxims of cooperative conversation (see Table 1). For example, consider the following sentence that discusses a National Football League (NFL) game:

"Given the commanding lead of Kansas City on the road, Denver Broncos' 14-10 victory surprised many."

Table 1: Grice's Conversational Maxims
1. Be truthful: do not say falsehoods.
2. Be concise: say as much as necessary, but no more.
3. Be relevant.
4. Be clear.

This mentions that Kansas City is the away team and that the Denver Broncos won the game, but does not mention that Kansas City lost the game or that the Denver Broncos was the home team. Of course these facts can be inferred from domain knowledge rules such as the rule that "if one team is the winner, the other is the loser (and vice versa)" and the rule "if one team is the home team, the other is the away team (and vice versa)". This is an instance of the second maxim.
Another interesting case arises when shared knowledge could lead the reader to an incorrect inference:
"Ahmed Said Khadr, an Egyptian-born Canadian, was killed last October in Pakistan."
This explicitly mentions that Khadr is Canadian, because otherwise the reader would infer that he was Egyptian based on the domain knowledge rule "if a person is born in a country, then the person is a citizen of that country". Grice did not discuss this case, but we can state this as a corollary of the first maxim: Do not by omission mislead the reader into believing falsehoods.
This paper formalizes the first two maxims, including this corollary, and then shows how to apply
them to learn probabilistic Horn clause rules from propositions extracted from news stories. We
show that rules learned this way are able to correctly infer more information from incomplete texts
than a baseline approach that treats propositions in news stories as missing at random.
The problem of learning rules from extracted texts has been studied previously [11, 2, 17]. These
systems rely on finding documents in which all of the facts participating in a rule are mentioned.
If enough such documents can be found, then standard rule learning algorithms can be applied. A
drawback of this approach is that it is difficult to learn rules unless there are many documents that
provide such complete training examples. The central hypothesis of our work is that by explicitly
modeling the process by which facts are mentioned, we can learn rules from sets of documents that
are smaller and less complete.
The line of work most similar to this paper is that of Michael and Valiant [10, 9] and Doppa et al. [4]. They study learning hard (non-probabilistic) rules from incomplete extractions. In contrast with our approach of learning explicit probabilistic models, they take the simpler approach of implicitly inverting the conversational maxims when counting evidence for a proposed rule. Specifically, they count an example as consistent with a proposed rule unless it explicitly contradicts the rule. Although this approach is much less expensive than the probabilistic approach described in this paper, it has difficulty with soft (probabilistic) rules. To handle these, these authors sort the rules by their scores and keep high-scoring rules even if they have some contradictions. Such an approach can learn "almost hard" rules, but will have difficulty with rules that are highly probabilistic (e.g., that the home team is somewhat more likely to win a game than the away team).
Our method has additional advantages. First, it provides a more general framework that can support
alternative sets of conversational maxims, such as mentions based on saliency, recency (prefer to
mention a more recent event rather than an older event), and surprise (prefer to mention a less likely
event rather than a more likely event). Second, when applied to new articles, it assigns probabilities
to alternative interpretations, which is important for subsequent processing. Third, it provides an
elegant, first-principles account of the process, which can then be compiled to yield more efficient
learning and reasoning procedures.
2 Technical Approach
We begin with a logical formalization of the Gricean maxims. Then we present our implementation
of these maxims in Markov Logic [15]. Finally, we describe a method for probabilistically inverting
the maxims to learn rules from textual mentions.
Formalizing the Gricean maxims. Consider a writer and a reader who share domain knowledge K. Suppose that when told a fact F, the reader will infer an additional fact G. We will write this as (K, MENTION(F) ⊢_reader G), where ⊢_reader represents the inference procedure of the reader and MENTION is a modal operator that captures the action of mentioning a fact in the text. Note that the reader's inference procedure is not standard first-order deduction, but instead is likely to be incomplete and non-monotonic or probabilistic.
With this notation, we can formalize the first two Gricean maxims as follows:
• Mention true facts / don't lie:

F ⇒ MENTION(F)   (1)
MENTION(F) ⇒ F   (2)
The first formula is overly strong, because it requires the writer to mention all true facts. Below, we will show how to use Markov Logic weights to weaken this. The second formula captures a positive version of "don't lie": if something is mentioned, then it is true. For news articles, it does not need to be weakened probabilistically.
• Don't mention facts that can be inferred by the reader:

MENTION(F) ∧ G ∧ (K, MENTION(F) ⊢_reader G) ⇒ ¬MENTION(G)

• Mention facts needed to prevent incorrect reader inferences:

MENTION(F) ∧ ¬G ∧ (K, MENTION(F) ⊢_reader G) ∧ H ∧ (K, MENTION(F ∧ H) ⊬_reader G) ⇒ MENTION(H)

In this formula H is a true fact that, when combined with F, is sufficient to prevent the reader from inferring G.
Implementation in Markov Logic. Although this formalization is very general, it is difficult to apply directly because of the embedded invocation of the reader's inference procedure and the use of the MENTION modality. Consequently, we sidestep this problem by manually "compiling" the maxims into ordinary first-order Markov Logic as follows. The notation w : indicates that a rule has a weight w in Markov Logic.
The first maxim is encoded in terms of fact-to-mention and mention-to-fact rules. For each predicate P in the domain of discourse, we write

w1 : FACT_P ⇒ MENTION_P
w2 : MENTION_P ⇒ FACT_P.
Suppose that the shared knowledge K contains the Horn clause rule P ⇒ Q; then we encode the positive form of the second maxim in terms of the mention-to-mention rule:

w3 : MENTION_P ∧ FACT_Q ⇒ ¬MENTION_Q
One might expect that we could encode the faulty-inference-by-omission corollary as

w4 : MENTION_P ∧ ¬FACT_Q ⇒ MENTION_NOT_Q,

where we have chosen MENTION_NOT_Q to play the role of H in axiom 2. However, in news stories, there is a strong preference for H to be a positive assertion, rather than a negative assertion. For example, in the citizenship case, it would be unnatural to say "Ahmed Said Khadr, an Egyptian-born non-Egyptian...". In particular, because CITIZENOF(p, c) is generally a function from p to c (i.e., a person is typically a citizen of only one country), it suffices to mention CITIZENOF(Khadr, Canada) to prevent the faulty inference CITIZENOF(Khadr, Egypt). Hence, for rules of the form P(x, y) ⇒ Q(x, y), where Q is a function from its first to its second argument, we can implement the inference-by-omission maxim as

w5 : MENTION_P(x, y) ∧ FACT_Q(x, z) ∧ (y ≠ z) ⇒ MENTION_Q(x, z).
Finally, the shared knowledge P ⇒ Q is represented by the fact-to-fact rule:

w6 : FACT_P ⇒ FACT_Q
In Markov Logic, each of these rules is assigned a (learned) weight, which can be viewed as a cost of violating the rule. The probability of a world ω is proportional to

exp( Σ_j w_j · I[Rule j is satisfied by ω] ),

where j iterates over all groundings of the Markov Logic rules in world ω and I[φ] is 1 if φ is true and 0 otherwise.
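As a toy illustration of this weighting (ours, with made-up weights), the unnormalized score of a world can be computed directly from the groundings it satisfies:

```python
import math

def world_weight(world, weighted_groundings):
    """exp(sum of weights of the rule groundings satisfied by this world)."""
    return math.exp(sum(w for w, holds in weighted_groundings if holds(world)))

# Two groundings: FACT_P => MENTION_P (w=1.5) and MENTION_P => FACT_P (w=0.7).
groundings = [
    (1.5, lambda w: (not w["FACT_P"]) or w["MENTION_P"]),
    (0.7, lambda w: (not w["MENTION_P"]) or w["FACT_P"]),
]
both = {"FACT_P": True, "MENTION_P": True}      # satisfies both: exp(2.2)
omitted = {"FACT_P": True, "MENTION_P": False}  # violates the first: exp(0.7)
print(world_weight(both, groundings), world_weight(omitted, groundings))
```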
An advantage of Markov Logic is that it allows us to define a probabilistic model even when there are contradictions and cycles in the logical rules. Hence, we can include both a rule that says "if the home team is mentioned, then the away team is not mentioned" and rules that say "the home team is always mentioned" and "the away team is always mentioned". Obviously a possible world ω cannot satisfy all of these rules. The relative weights on the rules determine the probability that particular literals are actually mentioned.
Learning. We seek to learn both the rules and their weights. We proceed by first proposing candidate fact-to-fact rules and then automatically generating the other rules (especially the mention-to-mention rules) from the general rule schemata described above. Then we apply EM to learn the weights on all of the rules. This has the effect of removing unnecessary rules by driving their weights to zero.
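Schematically, this rule-generation step looks as follows (a sketch of ours; Alchemy's actual input syntax differs, and the predicate names here are placeholders):

```python
def compile_mention_rules(fact_rules, predicates):
    """Expand shared-knowledge rules FACT_P => FACT_Q into the full rule set.

    fact_rules : list of (P, Q) predicate-name pairs.
    predicates : all predicate names in the domain.
    Returns formulas as strings; Markov Logic weights are learned afterwards.
    """
    rules = []
    for p in predicates:
        rules.append(f"FACT_{p} => MENTION_{p}")   # first maxim (weakened)
        rules.append(f"MENTION_{p} => FACT_{p}")   # mentions are true
    for p, q in fact_rules:
        rules.append(f"FACT_{p} => FACT_{q}")      # shared knowledge
        # second maxim: omit what the reader can already infer
        rules.append(f"MENTION_{p} ^ FACT_{q} => !MENTION_{q}")
    return rules

print(compile_mention_rules([("GAMELOSER", "GAMEWINNER")],
                            ["GAMELOSER", "GAMEWINNER"]))
```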
Proposing Candidate Fact-to-Fact Rules. For each predicate symbol and its specified arity, we generate a set of candidate Horn clauses with that predicate as the head (consequent). For the rule body (antecedent), we consider all conjunctions of literals involving other predicates (i.e., we do not allow recursive rules) up to a fixed maximum length. Each candidate rule is scored on the mentions in the training documents for support (the number of training examples that mention all facts in the body) and confidence (the conditional probability that the head is mentioned given that the body is satisfied). We discard all rules that do not achieve minimum support σ and then keep the top κ most confident rules. The values of σ and κ are determined via cross-validation within the training set. The selected rules are then entered into the knowledge base. From each fact-to-fact rule, we derive mention-to-mention rules as described above. For each predicate, we also generate fact-to-mention and mention-to-fact rules.
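A minimal sketch (ours) of the support/confidence scoring over propositionalized documents; filtering by σ and κ is then applied to the returned list:

```python
from itertools import combinations

def score_candidate_rules(examples, predicates, head, max_body=2):
    """Score candidate Horn rules body => head against mention data.

    examples : list of sets, each the literals mentioned in one document.
    Support = number of examples mentioning all body literals;
    confidence = fraction of those that also mention the head.
    """
    scored = []
    body_atoms = [p for p in predicates if p != head]
    for size in range(1, max_body + 1):
        for body in combinations(body_atoms, size):
            covered = [ex for ex in examples if set(body) <= ex]
            if not covered:
                continue
            conf = sum(head in ex for ex in covered) / len(covered)
            scored.append((body, head, len(covered), conf))
    return scored  # keep rules with support >= sigma, then top kappa by conf
```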
Learning the Weights. The goal of weight learning is to maximize the likelihood of the observed mentions (in the training set) by adjusting the weights of the rules. Because our training data consists only of mentions and no facts, the facts are latent (hidden variables), and we must apply the EM algorithm to learn the weights.

We employ the Markov Logic system Alchemy [8] for learning and inference. To implement EM, we applied the MC-SAT algorithm in the E-step and maximum pseudo-log-likelihood ("generative training") for the M-step. EM is iterated to convergence, which only requires a few iterations. Table 2 summarizes the pseudo-code of the algorithm. MAP inference for prediction is achieved using Alchemy's extension of MaxWalkSat.

Table 2: Learn Gricean Mention Model
Input:  D_I = incomplete training examples
        κ = number of rules per head
        σ = minimum support per rule
Output: M = explicit mention model

LEARNGRICEANMENTIONMODEL:
1:  exhaustively learn rules for each head
2:  discard rules with less than σ support
3:  select the κ most confident rules R for each head
4:  R0 := R
5:  for each rule (factP => factQ) ∈ R do
6:      add mentionP ⇒ ¬mentionQ to R0
7:  end for
8:  for every factP ∈ R do
9:      add factP ⇒ mentionP to R0
10:     add mentionP ⇒ factP to R0
11: end for
12: repeat
13:     E-Step: apply inference to predict weighted facts F
14:     define complete weighted data D_C := D_I ∪ F
15:     M-Step: learn weights for rules in R0 using data D_C
16: until convergence
17: return the set of weighted rules R0
Treating Missing Mentions as Missing At Random: An alternative to the Gricean mention model described above is to assume that the writer chooses which facts to mention (or omit) at random according to some unknown probability distribution that does not depend on the values of the missing variables, a setting known as Missing-At-Random (MAR). When data are MAR, it is possible to obtain unbiased estimates of the true distribution via imputation using EM [16]. We implemented this approach as follows. We apply the same method of learning rules (requiring minimum support σ and then taking the κ most confident rules). Each learned rule has the general form MENTION_A ⇒ MENTION_B. The collection of rules is treated as a model of the joint distribution over the mentions. Generative weight learning combined with Alchemy's built-in EM implementation is then applied to learn the weights on these rules.

Table 3: Synthetic Data Properties
q                         0.17    0.33    0.50    0.67    0.83    0.97
Mentioned literals (%)    91.38   80.74   68.72   63.51   51.70   42.13
Complete records (%)      61.70   30.64    8.51    5.53    0.43    0.00
3 Experimental Evaluation
We evaluated our mention model approach using data generated from a known mention model to
understand its behavior. Then we compared its performance to the MAR approach on actual extractions from news stories about NFL football games, citizenship, and Somali ship hijackings.
Synthetic Mention Experiment. The goal of this experiment was to evaluate the ability of our method to learn accurate rules from data that match the assumptions of the algorithm. We also sought to understand how performance varies as a function of the amount of information omitted from the text.

The data were generated using a database of NFL games (from 1998 and 2000-2005) downloaded from www.databasefootball.com. These games were then encoded using the predicates TEAMINGAME(Game, Team), GAMEWINNER(Game, Team), GAMELOSER(Game, Team), HOMETEAM(Game, Team), AWAYTEAM(Game, Team), and TEAMGAMESCORE(Game, Team, Score) and treated as the ground truth. Note that these predicates can be divided into two correlated sets: WL = {GAMEWINNER, GAMELOSER, TEAMGAMESCORE} and HA = {HOMETEAM, AWAYTEAM}.

From this ground truth, we generate a set of mentions for each game as follows. One literal is chosen uniformly at random from each of WL and HA and mentioned. Then each of the remaining literals is mentioned with probability 1 − q, where q is a parameter that we varied in the experiments. Table 3 shows the average percentage of literals mentioned in each generated "news story" and the percentage of generated "news stories" that mentioned all literals.
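The generator is simple enough to sketch directly (ours; it emits predicate names where the real generator emits grounded literals for a specific game):

```python
import random

WL = ["GAMEWINNER", "GAMELOSER", "TEAMGAMESCORE"]
HA = ["HOMETEAM", "AWAYTEAM"]

def generate_mentions(q, rng=random):
    """Mentioned literals of one synthetic story: one forced literal from
    each of WL and HA, plus each remaining literal with probability 1 - q."""
    forced = {rng.choice(WL), rng.choice(HA)}
    optional = [p for p in WL + HA if p not in forced]
    return forced | {p for p in optional if rng.random() < 1 - q}

# Larger q omits more literals; q = 0.97 leaves little beyond the forced pair.
print(generate_mentions(0.97))
```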
For each q, we generated 5 different datasets, each containing 235 games. For each value of q, we ran the algorithm five times. In each iteration, one dataset was used for training, another for validation, and the remaining 3 for testing. The training and validation datasets shared the same value of q. The resulting learned rules were evaluated on the test sets for all of the different values of q. The validation set is employed to determine the thresholds σ and κ during rule learning and to decide when to terminate EM. The chosen values were κ = 10, σ = 0.5 (50% of the total training instances), and between 3 and 8 EM iterations.

Table 4: Gricean Mention Model Performance on Synthetic Data. Each cell indicates the % of complete records inferred.
Training q  | Test q:  0.17   0.33   0.50   0.67   0.83   0.97
0.17        |           100    100    100    100    100    100
0.33        |           100     99     97     96     90     85
0.50        |           100     99     98     97     93     87
0.67        |           100     98     92     92     81     66
0.83        |            99     98     72     71     61     54
0.97        |            91     81     72     68     56     41
Table 4 reports the proportion of complete game records (i.e., all four literals) that were correctly
inferred, averaged over the five runs. Note that any facts mentioned in the generated articles are
automatically correctly inferred, so if no inference was performed at all, the results would match the "Mentioned literals" row of Table 3. Notice that when trained on data with low missingness (e.g., q = 0.17), the
algorithm was able to learn rules that predict well for articles with much higher levels of missing
values. This is because q = 0.17 means that only 8.62% of the literals are missing in the training
dataset, which results in 61.70% complete records. These are sufficient to allow learning highly accurate rules. However, as the proportion of missing literals in the training data increases, the
algorithm starts learning incorrect rules, so performance drops. In particular, when q = 0.97, the
training documents contain no complete records (Table 3). Nonetheless, the learned rules are still
able to completely and correctly reconstruct 41% of the games!
The rules learned under such high levels of missingness are not totally correct. Here is an example
of one learned rule (for q = 0.97):
FACT_HOMETEAM(g, t1) ∧ FACT_TEAMINGAME(g, t1) ⇒ FACT_GAMEWINNER(g, t1).
This rule says that the home team always wins. When appropriately weighted in Markov Logic, this
is a reasonable rule even though it is not perfectly correct (nor was it a rule that we applied during
the synthetic data generation process).
In addition to measuring the fraction of entire games correctly inferred, we can obtain a more fine-grained assessment by measuring the fraction of individual literals correctly inferred. Table 5 shows this for the q = 0.97 training scenario. We can see that even when the test articles have q = 0.97 (which means only 42.13% of literals are mentioned), the learned rules are able to correctly infer 85% of the literals. By comparison, if the literals had been predicted independently at random, only 6.25% would be correctly predicted.

Table 5: Percentage of Literals Correctly Predicted
Training q  | Test q:  0.17   0.33   0.50   0.67   0.83   0.97
0.97        |            98     95     93     92     89     85
Experiments with Real Data: We performed experiments on three datasets extracted from news
stories: NFL games, citizenship, and Somali ship hijackings.
NFL Games. A state-of-the-art information extraction system from BBN Technologies [6, 14] was applied to a corpus of 1000 documents taken from the Gigaword corpus V4 [13] to extract the same five propositions employed in the synthetic data experiments. The BBN coreference system attempted to detect and combine multiple mentions of the same game within a single article. The resulting data set contained 5,850 games. However, the data still contained many coreference errors, which produced games apparently involving more than two teams or where one team achieved multiple scores.

Table 6: Statistics on mentions for extracted NFL games (after repairing violations of integrity constraints). Under "Home/Away", "men none" gives the percentage of articles in which neither the Home nor the Away team was mentioned; "men one", the percentage in which exactly one of Home or Away was mentioned; and "men both", the percentage where both were mentioned.

             Home/Away                      Winner/Loser
             men none  men one  men both    men none  men one  men both
NFL Train    17.9      58.9     23.2        17.9      57.1     25.0
NFL Test     83.6      19.6      0.0         1.8      98.2      0.0
To address these problems, we took each extracted game and applied a set of integrity constraints.
The integrity constraints were learned automatically from 5 complete game records. Examples of the learned constraints include "Every game has exactly two teams" and "Every game has exactly one winner." Each extracted game was then converted into multiple games by deleting literals in
all possible ways until all of the integrity constraints were satisfied. The team names were replaced
(arbitrarily) with constants A and B. The games were then processed to remove duplicates. The
result was a set of 56 distinct extracted games, which we call NFL Train. To develop a test set,
NFL Test, we manually extracted 55 games from news stories about the 2010 NFL season (which
has no overlap with Gigaword V4). Table 6 summarizes these game records.
Here is an excerpt from one of the stories that was analyzed during learning: "William Floyd rushed for three touchdowns and Steve Young scored two more, moving the San Francisco 49ers one victory from the Super Bowl with a 44-15 American football rout of Chicago." The initial set of literals extracted by the BBN system was the following:

MENTION_TEAMINGAME(NFLGame9209, SanFrancisco49ers) ∧
MENTION_TEAMINGAME(NFLGame9209, ChicagoBears) ∧
MENTION_GAMEWINNER(NFLGame9209, SanFrancisco49ers) ∧
MENTION_GAMEWINNER(NFLGame9209, ChicagoBears) ∧
MENTION_GAMELOSER(NFLGame9209, ChicagoBears).
After processing with the learned integrity constraints, the extracted interpretation was the following:

MENTION_TEAMINGAME(NFLGame9209, SanFrancisco49ers) ∧
MENTION_TEAMINGAME(NFLGame9209, ChicagoBears) ∧
MENTION_GAMEWINNER(NFLGame9209, SanFrancisco49ers) ∧
MENTION_GAMELOSER(NFLGame9209, ChicagoBears).
It is interesting to ask whether these data are consistent with the explicit mention model versus the missing-at-random model. Let us suppose that under MAR, the probability that a fact will be mentioned is p. Then the probability that both literals in a rule (e.g., home/away or winner/loser) will be mentioned is p², the probability that both will be missing is (1 − p)², and the probability that exactly one will be mentioned is 2p(1 − p). We can fit the best value for p to the observed missingness rates to minimize the KL divergence between the predicted and observed distributions. If the explicit mention model is correct, then the MAR fit will be a poor estimate of the fraction of cases where exactly one literal is missing. Table 7 shows the results. On NFL Train, it is clear that the MAR model seriously underestimates the probability that exactly one literal will be mentioned. The NFL Test data is inconsistent with the MAR assumption, because there are no cases where both predicates are mentioned. If we estimate p based only on the cases where both are missing or one is missing, the MAR model seriously underestimates the one-missing probability. Hence, we can see that train and test, though drawn from different corpora and extracted by different methods, both are inconsistent with the MAR assumption.

Table 7: Observed percentage of cases where exactly one literal is mentioned and the percentage predicted if the literals were missing at random.

             Home/Away                      Winner/Loser
             obs. men one  pred. men one    obs. men one  pred. men one
NFL Train    58.9          49.9             57.1          49.8
NFL Test     19.6          34.5             98.2          47.9
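This fit is easy to reproduce. The sketch below (ours) minimizes the KL divergence numerically, using the NFL Train Home/Away fractions from Table 6:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_mar_p(frac_none, frac_one, frac_both):
    """Fit the MAR mention probability p for a pair of literals.

    Under MAR the pattern probabilities are ((1-p)^2, 2p(1-p), p^2);
    we minimize KL(observed || predicted) over p.
    """
    obs = np.array([frac_none, frac_one, frac_both], dtype=float)
    obs /= obs.sum()

    def kl(p):
        pred = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
        nz = obs > 0                    # 0 * log(0) terms contribute nothing
        return float(np.sum(obs[nz] * np.log(obs[nz] / pred[nz])))

    p = minimize_scalar(kl, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    return p, 2 * p * (1 - p)           # fitted p, predicted "exactly one"

# NFL Train, Home/Away: observed (17.9, 58.9, 23.2)%.
print(fit_mar_p(0.179, 0.589, 0.232))   # predicted "one" comes out near 0.499
```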
We applied both our explicit mention model and the MAR model to the NFL dataset. The cross-validated parameter values for the explicit mention model were σ = 0.5 and κ = 50, and the number of EM iterations varied between 2 and 3. We measured performance relative to the performance that could be attained by a system that uses the correct rules. The results are summarized in Table 8. Our method achieves perfect performance, whereas the MAR method only reconstructs half of the reconstructable games. This reflects the extreme difficulty of the test set, where none of the articles mentions all literals involved in any rule.

Table 8: NFL test set performance.
Gricean Model (%)   MAR Model (%)
100.0               50.0
Here are a few examples of the rules that are learned:

0.00436 : FACT_TEAMINGAME(g, t1) ∧ FACT_GAMELOSER(g, t2) ∧ (t1 ≠ t2) ⇒ FACT_GAMEWINNER(g, t1)
0.17445 : MENTION_TEAMINGAME(g, t1) ∧ MENTION_GAMELOSER(g, t2) ∧ (t1 ≠ t2) ⇒ ¬MENTION_GAMEWINNER(g, t1)

The first rule is a weak form of the "fact" rule that if one team is the loser, the other is the winner. The second rule is the corresponding "mention" rule that if the loser is mentioned, then the winner is not. The small weights on these rules are difficult to interpret in isolation, because in Markov Logic, all of the weights are coupled and there are other learned rules that involve the same literals.
Birthplace and Citizenship. We repeated this same experiment on a different set of 182 articles selected from the ACE08 Evaluation corpus [12] and extracted by the same methods. In these articles, the citizenship of a person is mentioned 583 times and birthplace only 25 times. Both are mentioned in the same article only 6 times (and of these, birthplace and citizenship are the same in only 4). Clearly, this is another case where the MAR assumption does not hold. Integrity constraints were applied to force each person to have at most one birthplace and one country of citizenship, and then both methods were applied. The cross-validated parameter values for the explicit mention model were σ = 0.5 and κ = 50, and the number of EM iterations varied between 2 and 3. Table 9 shows the two cases of interest and the probability assigned to the missing fact by the two methods. The inverse Gricean approach gives much better results.
Table 9: Birthplace and Citizenship: Predicted probability assigned to the correct interpretation by the Gricean mention model and the MAR model.
Configuration          Gricean Model pred. prob.   MAR pred. prob.
Citizenship missing    1.000                       0.969
Birthplace missing     1.000                       0.565

Somali Ship Hijacking. We collected a set of 41 news stories concerning ship hijackings based on ship names taken from the web site coordination-maree-noire.eu. From these documents, we manually extracted all mentions of the ownership country and flag country of the hijacked ships. Twenty-five stories mentioned only one fact (ownership or flag), while 16 mentioned both. Of the 16, 14 reported the flag country as different from the ownership country. The Gricean maxims predict that if the two countries are the same, then only one of them will be mentioned. The results (Table 10) show that the Gricean model is again much more accurate than the MAR model.
4 Conclusion
This paper has shown how to formalize the Gricean conversational maxims, compile them into Markov Logic, and invert them via probabilistic reasoning to learn Horn clause rules from facts extracted from documents. Experiments on synthetic mentions showed that our method is able to correctly reconstruct complete records even when neither the training data nor the test data contain complete records. Our three studies provide evidence that news articles obey the maxims across three domains. In all three domains, our method achieves excellent performance that far exceeds the performance of standard EM imputation. This shows conclusively that rule learning benefits from employing an explicit model of the process that generates the data. Indeed, it allows rules to be learned correctly from only a handful of complete training examples.

Table 10: Flag and Ownership: Predicted probability assigned to the missing fact by the Gricean mention model and the MAR model. Cross-validated parameter values σ = 0.5 and κ = 50; 2-3 EM iterations.
Configuration        Gricean Model pred. prob.   MAR pred. prob.
Ownership missing    1.000                       0.459
Flag missing         1.000                       0.519
An interesting direction for future work is to learn forms of knowledge more complex than Horn clauses. For example, the state of a hijacked ship can change over time from states such as "attacked" and "captured" to states such as "ransom demanded" and "released". The Gricean mention model predicts that if a news story mentions that a ship was released, then it does not need to mention that the ship was "attacked" or "captured". Handling such cases will require extending the methods in this paper to reason about time and what the author and reader know at each point in time. It will also require better methods for joint inference, because there are more than 10 predicates in this domain, and our current EM implementation scales exponentially in the number of interrelated predicates.
Acknowledgments
This material is based upon work supported by the Defense Advanced Research Projects Agency
(DARPA) under Contract No. FA8750-09-C-0179 and by Army Research Office (ARO). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA, the Air Force Research Laboratory
(AFRL), ARO, or the US government.
References
[1] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the Conference on Artificial Intelligence (AAAI), pages 1306-1313. AAAI Press, 2010.
[2] A. Carlson, J. Betteridge, R. C. Wang, E. R. Hruschka, Jr., and T. M. Mitchell. Coupled semi-supervised learning for information extraction. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM '10, pages 101-110, New York, NY, USA, 2010. ACM.
[3] W. W. Cohen. WHIRL: A word-based information representation language. Artificial Intelligence, 118(1-2):163-196, 2000.
[4] J. R. Doppa, M. S. Sorower, M. Nasresfahani, J. Irvine, W. Orr, T. G. Dietterich, X. Fern, and P. Tadepalli. Learning rules from incomplete examples via implicit mention models. In Proceedings of the 2011 Asian Conference on Machine Learning, 2011.
[5] O. Etzioni, M. Banko, S. Soderland, and D. S. Weld. Open information extraction from the web. Communications of the ACM, 51(12):68-74, 2008.
[6] M. Freedman, E. Loper, E. Boschee, and R. Weischedel. Empirical studies in learning to read. In Proceedings of the Workshop on Formalisms and Methodology for Learning by Reading (NAACL-2010), pages 61-69, 2010.
[7] H. P. Grice. Logic and conversation. In Syntax and Semantics: Speech Acts, volume 3, pages 43-58. Academic Press, New York, 1975.
[8] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2007.
[9] L. Michael. Reading between the lines. In IJCAI, pages 1525-1530, 2009.
[10] L. Michael and L. G. Valiant. A first experimental demonstration of massive knowledge infusion. In KR, pages 378-389, 2008.
[11] U. Y. Nahm and R. J. Mooney. A mutually beneficial integration of data mining and information extraction. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and the Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 627-632. AAAI Press, 2000.
[12] NIST. Automatic Content Extraction 2008 Evaluation Plan.
[13] R. Parker, D. Graff, J. Kong, K. Chen, and K. Maeda. English Gigaword Fourth Edition. Linguistic Data Consortium, Philadelphia, 2009.
[14] L. Ramshaw, E. Boschee, M. Freedman, J. MacBride, R. Weischedel, and A. Zamanian. SERIF language processing: effective trainable language understanding. In Joseph Olive, Caitlin Christianson, and John McCary, editors, Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer, 2011.
[15] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62:107-136, February 2006.
[16] J. L. Schafer and M. K. Olsen. Multiple imputation for multivariate missing-data problems: a data analyst's perspective. Multivariate Behavioral Research, 33:545-571, 1998.
[17] S. Schoenmackers, O. Etzioni, D. S. Weld, and J. Davis. Learning first-order Horn clauses from web text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1088-1098, Stroudsburg, PA, USA, 2010. Association for Computational Linguistics.
Approximating Semidefinite Programs in Sublinear Time
Elad Hazan
Technion - Israel Institute of Technology
Haifa 32000 Israel
[email protected]
Dan Garber
Technion - Israel Institute of Technology
Haifa 32000 Israel
[email protected]
Abstract
In recent years semidefinite optimization has become a tool of major importance
in various optimization and machine learning problems. In many of these problems the amount of data in practice is so large that there is a constant need for
faster algorithms. In this work we present the first sublinear time approximation
algorithm for semidefinite programs which we believe may be useful for such
problems in which the size of data may cause even linear time algorithms to have
prohibitive running times in practice. We present the algorithm and its analysis alongside some theoretical lower bounds and an improved algorithm for the special problem of supervised learning of a distance metric.
1 Introduction
Semidefinite programming (SDP) has become a tool of great importance in optimization in the past
years. In the field of combinatorial optimization for example, numerous approximation algorithms
have been discovered starting with Goemans and Williamson [1] and [2, 3, 4]. In the field of machine
learning solving semidefinite programs is at the heart of many learning tasks such as learning a
distance metric [5], sparse PCA [6], multiple kernel learning [7], matrix completion [8], and more. It
is often the case in machine learning that the data is assumed to be noisy, and thus, when considering
the underlying optimization problem, one can settle for an approximate solution rather than an
exact one. Moreover it is also common in such problems that the amounts of data are so large that
fast approximation algorithms are preferable to exact generic solvers, such as interior-point methods,
which have impractical running times and memory demands and are not scalable.
In the problem of learning a distance metric [5] one is given a set of points in R^n and similarity
information in the form of pairs of points and a label indicating whether the two points are in the
same class or not. The goal is to learn a distance metric over R^n which respects this similarity
information; that is, it assigns small distances to points in the same class and bigger distances to
points in different classes. Learning such a metric is important for other learning tasks which rely on
having a good metric over the input space, such as K-means, nearest-neighbours and kernel-based
algorithms.
In this work we present the first approximation algorithm for general semidefinite programming
which runs in time that is sublinear in the size of the input. For the special case of learning a
pseudo-distance metric, we present an even faster sublinear time algorithm. Our algorithms are the
fastest possible in terms of the number of constraints and the dimensionality, although slower than
other methods in terms of the approximation guarantee.
1.1 Related Work
Semidefinite programming is a notoriously difficult optimization formulation, and has attracted a
host of attempts at fast approximation methods. Klein and Lu [9] gave a fast approximate solver for
the MAX-CUT semidefinite relaxation of [1]. Various faster and more sophisticated approximate
solvers followed [10, 11, 12], which feature near-linear running time albeit polynomial dependence
on the approximation accuracy. For the special case of covering and packing SDP problems, [13]
and [14] respectively give approximation algorithms with a smaller dependency on the approximation parameter ε. Our algorithms are based on the recent work of [15], which described sublinear
algorithms for various machine learning optimization problems such as linear classification and
minimum enclosing ball. We describe here how such methods, coupled with further techniques, may be
used for semidefinite optimization.
2 Preliminaries

In this paper we denote vectors in R^n by a lower case letter (e.g. v) and matrices in R^{n×n} by upper case letters (e.g. A). We denote by ‖v‖ the standard Euclidean norm of the vector v and by ‖A‖ the Frobenius norm of the matrix A, that is ‖A‖ = √(Σ_{i,j} A(i,j)²). We denote by ‖v‖₁ the l₁-norm of v. The notation X ⪰ 0 states that the matrix X is positive semidefinite, i.e. it is symmetric and all of its eigenvalues are non-negative. The notation X ⪯ B states that B − X ⪰ 0. The notation C • X is just the dot product between matrices, that is C • X = Σ_{i,j} C(i,j)X(i,j). We denote by Δ_m the m-dimensional simplex, that is Δ_m = {p | Σ_{i=1}^m p_i = 1, ∀i : p_i ≥ 0}. We denote by 1_n the all-ones n-dimensional vector and by 0_{n×n} the all-zeros n × n matrix. We denote by I the identity matrix when its size is obvious from context. Throughout the paper we will use the complexity notation Õ(·), which is the same as the notation O(·) with the difference that it suppresses poly-logarithmic factors that depend on n, m, 1/ε.
We consider the following general SDP problem:

    maximize   C • X
    subject to A_i • X ≥ 0,  i = 1, ..., m,        (1)
               X ⪰ 0

where C, A_1, ..., A_m ∈ R^{n×n}. For reasons that will be made clearer in the analysis, we will assume that for all i ∈ [m], ‖A_i‖ ≤ 1.

The optimization problem (1) can be reduced to a feasibility problem by a standard reduction of performing a binary search over the value of the objective C • X and adding an appropriate constraint. Thus we will only consider the feasibility problem of finding a solution that satisfies all constraints. The feasibility problem can be rewritten using the following min-max formulation:

    max_{X⪰0} min_{i∈[m]} A_i • X        (2)

Clearly if the optimum value of (2) is non-negative, then a feasible solution exists and vice versa. Denoting the optimum of (2) by σ, an ε-additive approximation algorithm to (2) is an algorithm that produces a solution X such that X ⪰ 0 and for all i ∈ [m], A_i • X ≥ σ − ε.

For the simplicity of the presentation we will only consider constraints of the form A • X ≥ 0, but we mention in passing that SDPs with other linear constraints can be easily rewritten in the form of (1).

We will be interested in a solution to (2) which lies in the bounded semidefinite cone K = {X | X ⪰ 0, Tr(X) ≤ 1}. The demand that a solution to (2) have bounded trace is due to the observation that in case σ > 0, any solution needs to be bounded or else the products A_i • X could be made arbitrarily large.
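To make the setup concrete, the following is a minimal sketch of the feasibility problem (2) posed to an off-the-shelf solver through CVXPY. This is purely illustrative (the paper does not use CVXPY, and the random A_i below are stand-ins); it is exactly the kind of general-purpose solve whose cost the sublinear algorithm of this paper is designed to avoid at scale.

# Sketch: the max-min feasibility problem (2) over the bounded cone K,
# solved with CVXPY (needs an SDP-capable backend such as SCS).
import cvxpy as cp
import numpy as np

n, m = 10, 5
rng = np.random.default_rng(0)
A = [rng.standard_normal((n, n)) for _ in range(m)]
A = [(Ai + Ai.T) / np.linalg.norm(Ai + Ai.T) for Ai in A]  # symmetric, ||A_i|| <= 1

X = cp.Variable((n, n), PSD=True)        # X >= 0 (positive semidefinite)
t = cp.Variable()                        # t plays the role of min_i A_i . X
constraints = [cp.trace(X) <= 1]         # X in K = {X >= 0, Tr(X) <= 1}
constraints += [cp.trace(Ai @ X) >= t for Ai in A]
cp.Problem(cp.Maximize(t), constraints).solve()
print("sigma =", t.value)                # (1) is feasible iff sigma >= 0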
Learning distance pseudo-metrics. In the problem of learning a distance metric from examples, we are given a set of triplets S = {(x_i, x'_i, y_i)}_{i=1}^m such that x_i, x'_i ∈ R^n and y_i ∈ {−1, 1}. A value y_i = 1 indicates that the vectors x_i, x'_i are in the same class and a value y_i = −1 indicates that they are from different classes. Our goal is to learn a pseudo-metric over R^n which respects the example set. A pseudo-metric is a function d : R^n × R^n → R which satisfies three conditions: (i) d(x, x') ≥ 0, (ii) d(x, x') = d(x', x), and (iii) d(x₁, x₂) + d(x₂, x₃) ≥ d(x₁, x₃). We consider pseudo-metrics of the form d_A(x, x') ≜ √((x − x')ᵀ A (x − x')). It is easily verified that if A ⪰ 0 then d_A is indeed a pseudo-metric. A reasonable demand from a "good" pseudo-metric is that it separates the examples (assuming such a separation exists). That is, we would like to have a matrix A ⪰ 0 and a threshold value b ∈ R such that for all (x_i, x'_i, y_i) ∈ S it holds that

    d_A(x_i, x'_i)² = (x_i − x'_i)ᵀ A (x_i − x'_i) ≤ b − γ/2   if y_i = 1,
    d_A(x_i, x'_i)² = (x_i − x'_i)ᵀ A (x_i − x'_i) ≥ b + γ/2   if y_i = −1,        (3)

where γ is the margin of separation which we would like to maximize. Denoting v_i = (x_i − x'_i) for all i ∈ [m], (3) can be summarized into the following formalism:

    y_i (b − v_iᵀ A v_i) ≥ γ.

Without loss of generality we can assume that b = 1 and derive the following optimization problem:

    max_{A⪰0} min_{i∈[m]} y_i (1 − v_iᵀ A v_i)        (4)
3 Algorithm for General SDP

Our algorithm for general SDPs is based on the generic framework for constrained optimization problems that fit a max-min formulation, such as (2), presented in [15]. Noticing that min_{i∈[m]} A_i • X = min_{p∈Δ_m} Σ_{i∈[m]} p(i) A_i • X, we can rewrite (2) in the following way:

    max_{X∈K} min_{p∈Δ_m} Σ_{i∈[m]} p(i) A_i • X        (5)

Building on [15], we use an iterative primal-dual algorithm that simulates a repeated game between two online algorithms: one that wishes to maximize Σ_{i∈[m]} p(i) A_i • X as a function of X and the other that wishes to minimize Σ_{i∈[m]} p(i) A_i • X as a function of p. If both algorithms achieve sublinear regret, then this framework is known to approximate max-min problems such as (5), in case a feasible solution exists [16].

The primal algorithm, which controls X, is a gradient ascent algorithm that, given p, adds to the current solution a matrix in the direction of the gradient Σ_{i∈[m]} p(i) A_i. Instead of adding the exact gradient we actually only sample from it by adding A_i with probability p(i) (lines 5-6). The dual algorithm, which controls p, is a variant of the well known multiplicative (or exponential) update rule for online optimization over the simplex, which updates the weight p(i) according to the product A_i • X (line 11). Here we replace the exact computation of A_i • X by employing the l₂-sampling technique used in [15] in order to estimate this quantity by viewing only a single entry of the matrix A_i (line 9). An important property of this sampling procedure is that if ‖A_i‖ ≤ 1, then E[ṽ_t(i)²] ≤ 1. Thus, we can estimate the product A_i • X with constant variance, which is important for our analysis. A problem that arises with this estimation procedure is that it might yield unbounded values which do not fit well with the multiplicative weights analysis. Thus we use a clipping procedure clip(z, V) ≜ min{V, max{−V, z}} to bound these estimates to a certain range (line 10). Clipping the samples yields biased estimators of the products A_i • X, but the analysis shows that this bias is not harmful.
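Under our reading of lines 7-10 of Algorithm 1, the l₂-sampling estimator and the clip operation can be sketched as follows (the helper names are ours):

import numpy as np

def clip(z, V):
    """clip(z, V) = min{V, max{-V, z}}."""
    return min(V, max(-V, z))

def l2_sample_entry(X, rng):
    """Draw an entry (j, l) with probability X[j, l]^2 / ||X||_F^2."""
    p = (X ** 2).ravel() / np.sum(X ** 2)
    idx = rng.choice(X.size, p=p)
    return np.unravel_index(idx, X.shape)

def estimate_dot(Ai, X, j, l):
    """Unbiased estimate of A_i . X from the single sampled entry (j, l):
    E[ A_i[j,l] * ||X||^2 / X[j,l] ] = sum_{j,l} A_i[j,l] X[j,l]."""
    return Ai[j, l] * np.sum(X ** 2) / X[j, l]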
The algorithm is required to generate a solution X ∈ K. This constraint is enforced by performing a projection step onto the convex set K after each gradient improvement step of the primal online algorithm. A projection of a matrix Y ∈ R^{n×n} onto K is given by Y_p = arg min_{X∈K} ‖Y − X‖. Unlike the algorithms in [15] that perform optimization over simple sets such as the Euclidean unit ball, which is trivial to project onto, projecting onto the bounded semidefinite cone is more complicated and usually requires diagonalizing the projected matrix (assuming it is symmetric). Instead, we show that one can settle for an approximate projection which is faster to compute (line 4). Such approximate projections can be computed by Hazan's algorithm for offline optimization over the bounded semidefinite cone, presented in [12]. Hazan's algorithm gives the following guarantee.

Lemma 3.1. Given a matrix Y ∈ R^{n×n} and ε > 0, let f(X) = −‖Y − X‖² and denote X* = arg max_{X∈K} f(X). Then Hazan's algorithm produces a solution X̃ ∈ K of rank at most ⌈ε⁻¹⌉ such that ‖Y − X̃‖² − ‖Y − X*‖² ≤ ε in Õ(n^{1.5} · poly(1/ε)) time.

We can now state the running time of our algorithm.

Lemma 3.2. Algorithm SublinearSDP has running time Õ(m/ε² + n²/ε⁵).
Algorithm 1 SublinearSDP
 1: Input: ε > 0, A_i ∈ R^{n×n} for i ∈ [m].
 2: Let T ← 60² ε⁻² log m, Y₁ ← 0_{n×n}, w₁ ← 1_m, η ← √(log m / T), ε_P ← ε/2.
 3: for t = 1 to T do
 4:    p_t ← w_t / ‖w_t‖₁,  X_t ← ApproxProject(Y_t, ε_P²).
 5:    Choose i_t ∈ [m] by i_t ← i w.p. p_t(i).
 6:    Y_{t+1} ← Y_t + (1/√(2T)) A_{i_t}
 7:    Choose (j_t, l_t) ∈ [n] × [n] by (j_t, l_t) ← (j, l) w.p. X_t(j, l)² / ‖X_t‖².
 8:    for i ∈ [m] do
 9:        ṽ_t(i) ← A_i(j_t, l_t) ‖X_t‖² / X_t(j_t, l_t)
10:        v_t(i) ← clip(ṽ_t(i), 1/η)
11:        w_{t+1}(i) ← w_t(i)(1 − η v_t(i) + η² v_t(i)²)
12:    end for
13: end for
14: return X̄ = (1/T) Σ_t X_t
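Putting the pieces together, a minimal NumPy sketch of the main loop follows, reusing clip, l2_sample_entry and estimate_dot from the sketch above. The exact projection onto K stands in for the approximate projection of Lemma 3.1, so this illustrates the logic of the algorithm rather than its sublinear running time; all function names are ours.

import numpy as np

def sublinear_sdp(A, eps, rng):
    """Sketch of Algorithm 1 (SublinearSDP)."""
    m, n = len(A), A[0].shape[0]
    T = int(np.ceil(60 ** 2 * eps ** -2 * np.log(m)))  # large even for demos
    eta = np.sqrt(np.log(m) / T)
    Y, w = np.zeros((n, n)), np.ones(m)
    X_bar = np.zeros((n, n))
    for _ in range(T):
        p = w / w.sum()
        X = project_K(Y)                   # exact stand-in for ApproxProject
        X_bar += X / T
        i_t = rng.choice(m, p=p)
        Y = Y + A[i_t] / np.sqrt(2 * T)
        if (X ** 2).sum() > 0.0:           # X_1 = 0: nothing to sample yet
            j, l = l2_sample_entry(X, rng)
            for i in range(m):
                v = clip(estimate_dot(A[i], X, j, l), 1.0 / eta)
                w[i] *= 1.0 - eta * v + eta ** 2 * v ** 2
    return X_bar

def project_K(Y):
    """Exact Euclidean projection onto K = {X psd, Tr(X) <= 1}: clip
    negative eigenvalues, then (if needed) project them onto the simplex."""
    lam, U = np.linalg.eigh((Y + Y.T) / 2.0)
    lam = np.maximum(lam, 0.0)
    if lam.sum() > 1.0:
        u = np.sort(lam)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - 1.0)[0][-1]
        lam = np.maximum(lam - (css[rho] - 1.0) / (rho + 1.0), 0.0)
    return (U * lam) @ U.T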
We also have the following lower bound.

Theorem 3.3. Any algorithm which computes an ε-approximation to (2) with probability at least 2/3 has running time Ω(m/ε² + n²/ε²).

We note that while the dependency of our algorithm on the number of constraints m is close to optimal (up to poly-logarithmic factors), there is a gap of Õ(ε⁻³) between the dependency of our algorithm on the size of the constraint matrices, n², and the above lower bound. Here it is important to note that our lower bound does not reflect the computational effort in computing a general solution that is positive semidefinite, which is in fact the computational bottleneck of our algorithm (due to the use of the projection procedure).
4 Analysis

We begin with the presentation of the Multiplicative Weights algorithm used in our algorithm.

Definition 4.1. Consider a sequence of vectors q₁, ..., q_T ∈ R^m. The Multiplicative Weights (MW) algorithm is as follows. Let 0 < η ∈ R, w₁ ← 1_m, and for t ≥ 1,

    p_t ← w_t / ‖w_t‖₁,    w_{t+1}(i) ← w_t(i)(1 − η q_t(i) + η² q_t(i)²).
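A one-step sketch of this update in NumPy (names ours):

import numpy as np

def mw_step(w, q, eta):
    """One MW step: returns (p_t, w_{t+1}) for loss estimates q_t."""
    p = w / w.sum()
    w_next = w * (1.0 - eta * q + eta ** 2 * q ** 2)
    return p, w_next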
The following lemma gives a bound on the regret of the MW algorithm, suitable for the case in which the losses are random variables with bounded variance.

Lemma 4.2. The MW algorithm satisfies

    Σ_{t∈[T]} p_tᵀ q_t ≤ min_{i∈[m]} Σ_{t∈[T]} max{q_t(i), −1/η} + (log m)/η + η Σ_{t∈[T]} p_tᵀ q_t².

The following lemma gives concentration bounds on our random variables around their expectations.

Lemma 4.3. For 1/4 ≥ η ≥ √(log m / T), with probability at least 1 − O(1/m), it holds that

    (i)  max_{i∈[m]} | Σ_{t∈[T]} [v_t(i) − A_i • X_t] | ≤ 4ηT,
    (ii) | Σ_{t∈[T]} A_{i_t} • X_t − Σ_{t∈[T]} p_tᵀ v_t | ≤ 8ηT.

The following lemma gives a regret bound on the lazy gradient ascent algorithm used in our algorithm (line 6). For a proof see Lemma A.2 in [17].
Lemma 4.4. Consider matrices A₁, ..., A_T ∈ R^{n×n} such that for all t, ‖A_t‖ ≤ 1. Let X₀ = 0_{n×n} and for all t ≥ 1 let

    X_{t+1} = arg min_{X∈K} ‖ (1/√(2T)) Σ_{τ=1}^t A_τ − X ‖.

Then

    max_{X∈K} Σ_{t∈[T]} A_t • X − Σ_{t∈[T]} A_t • X_t ≤ 2√(2T).
We are now ready to state the main theorem and prove it.

Theorem 4.5 (Main Theorem). With probability 1/2, the SublinearSDP algorithm returns an ε-additive approximation to (5).

Proof. At first assume that the projection onto the set K in line 4 is an exact projection and not an approximation, and denote by X̂_t the exact projection of Y_t. In this case, by Lemma 4.4 we have

    max_{X∈K} Σ_{t∈[T]} A_{i_t} • X − Σ_{t∈[T]} A_{i_t} • X̂_t ≤ 2√(2T).        (6)

By the law of cosines and Lemma 3.1 we have, for every t ∈ [T],

    ‖X_t − X̂_t‖² ≤ ‖Y_t − X_t‖² − ‖Y_t − X̂_t‖² ≤ ε_P².        (7)

Rewriting (6) we have

    max_{X∈K} Σ_{t∈[T]} A_{i_t} • X − Σ_{t∈[T]} A_{i_t} • X_t − Σ_{t∈[T]} A_{i_t} • (X̂_t − X_t) ≤ 2√(2T).

Using the Cauchy-Schwarz inequality, ‖A_{i_t}‖ ≤ 1 and (7) we get

    max_{X∈K} Σ_{t∈[T]} A_{i_t} • X − Σ_{t∈[T]} A_{i_t} • X_t ≤ 2√(2T) + Σ_{t∈[T]} ‖A_{i_t}‖ ‖X̂_t − X_t‖ ≤ 2√(2T) + T ε_P.

Rearranging and plugging in max_{X∈K} min_{i∈[m]} A_i • X = σ we get

    Σ_{t∈[T]} A_{i_t} • X_t ≥ Tσ − 2√(2T) − T ε_P.        (8)

Turning to the MW part of the algorithm, by the MW Regret Lemma 4.2, and using the clipping of v_t(i), we have

    Σ_{t∈[T]} p_tᵀ v_t ≤ min_{i∈[m]} Σ_{t∈[T]} v_t(i) + (log m)/η + η Σ_{t∈[T]} p_tᵀ v_t².

By Lemma 4.3, with high probability and for any i ∈ [m],

    Σ_{t∈[T]} v_t(i) ≤ Σ_{t∈[T]} A_i • X_t + 4ηT.

Thus with high probability it holds that

    Σ_{t∈[T]} p_tᵀ v_t ≤ min_{i∈[m]} Σ_{t∈[T]} A_i • X_t + (log m)/η + η Σ_{t∈[T]} p_tᵀ v_t² + 4ηT.        (9)

Combining (8) and (9) we get

    min_{i∈[m]} Σ_{t∈[T]} A_i • X_t ≥ Σ_{t∈[T]} p_tᵀ v_t − (log m)/η − η Σ_{t∈[T]} p_tᵀ v_t² − 4ηT,

and by Lemma 4.3 (ii) together with (8),

    Σ_{t∈[T]} p_tᵀ v_t ≥ Σ_{t∈[T]} A_{i_t} • X_t − 8ηT ≥ Tσ − 2√(2T) − T ε_P − 8ηT.

By a simple Markov inequality argument it holds that w.p. at least 3/4,

    Σ_{t∈[T]} p_tᵀ v_t² ≤ 8T.

Combined with Lemma 4.3, we have w.p. at least 3/4 − O(1/n) ≥ 1/2,

    min_{i∈[m]} Σ_{t∈[T]} A_i • X_t ≥ −(log m)/η − 8ηT − 4ηT + Tσ − 2√(2T) − 8ηT − T ε_P
                                    ≥ Tσ − (log m)/η − 20ηT − 2√(2T) − T ε_P.

Dividing through by T and plugging in our choices for η and ε_P, we have min_{i∈[m]} A_i • X̄ ≥ σ − ε w.p. at least 1/2.
5 Application to Learning Pseudo-Metrics

As in the problem of general SDP, we can also rewrite (4) by replacing the min_{i∈[m]} objective with min_{p∈Δ_m} and arrive at the following formalism:

    max_{A⪰0} min_{p∈Δ_m} Σ_{i∈[m]} p(i) y_i (1 − v_iᵀ A v_i)        (10)

As we demanded that a solution to the general SDP have bounded trace, here we demand that ‖A‖ ≤ 1. Letting v'_i = (v_i; 1) and defining the set of matrices

    P' = { ( A  0 ; 0  −1 ) | A ⪰ 0, ‖A‖ ≤ 1 },

we can rewrite (10) in the following form:

    max_{A∈P'} min_{p∈Δ_m} Σ_{i∈[m]} p(i) (−y_i v'_i v'_iᵀ) • A        (11)

In what comes next, we use the notation A_i = −y_i v'_i v'_iᵀ.

Since projecting a matrix onto the set P' is as easy as projecting onto the set {A | A ⪰ 0, ‖A‖ ≤ 1}, we assume for the simplicity of the presentation that the set on which we optimize is indeed P = {A | A ⪰ 0, ‖A‖ ≤ 1}.
We proceed by presenting a simpler algorithm for this problem than the one given for general SDP. The gradient of y_i v'_i v'_iᵀ • A with respect to A is a symmetric rank-one matrix, and here we have the following useful fact that was previously stated in [18].

Theorem 5.1. If A ∈ R^{n×n} is positive semidefinite, v ∈ R^n and σ ∈ R, then the matrix B = A + σvvᵀ has at most one negative eigenvalue.

The proof is due to the eigenvalue Interlacing Theorem (see [19] pp. 94-97 and [20] page 412). Thus, after performing a gradient step improvement of the form Y_{t+1} = X_t + η y_{i_t} v_{i_t} v_{i_t}ᵀ, projecting Y_{t+1} onto the feasible set P comes down to the removal of at most one eigenvalue, in case we subtracted a rank-one matrix (y_{i_t} = −1), or normalizing the l₂ norm, in case we added a rank-one matrix (y_{i_t} = 1). Since in practice computing eigenvalues fast, using the Power or Lanczos methods, can be done only up to a desired approximation, the resulting projection X_{t+1} might in fact not be positive semidefinite. Nevertheless, we show by careful analysis that we can still settle for a single eigenvector computation in order to compute an approximate projection, with the price that X_{t+1} ⪰ −ε³I. That is, X_{t+1} might be slightly outside the positive semidefinite cone. The benefit is an algorithm with improved performance over the general SDP algorithm, since far fewer eigenvalue computations are required than in Hazan's algorithm.

The projection onto the set P is carried out in lines 7-11. In line 7 we check if Y_{t+1} has a negative eigenvalue and if so, we compute the corresponding eigenvector in line 8 and remove it in line 9. In line 11 we normalize the l₂ norm of the solution. The procedure Sample(A_i, X_t) will be detailed later on when we discuss the running time.
The following lemma is a variant of Zinkevich's Online Gradient Ascent algorithm [21], suitable for the use of approximate projections when X_t is not necessarily inside the set P.

Lemma 5.2. Consider a set of matrices A₁, ..., A_T ∈ R^{n×n} such that ‖A_t‖ ≤ 1. Let X₀ = 0_{n×n} and for all t ≥ 0 let

    Y_{t+1} = X_t + η A_t,    X̂_{t+1} = arg min_{X∈P} ‖Y_{t+1} − X‖,
Algorithm 2 SublinearPseudoMetric
 1: Input: ε > 0, A_i = y_i v_i v_iᵀ ∈ R^{n×n} for i ∈ [m].
 2: Let T ← 60² ε⁻² log m, X₁ ← 0_{n×n}, w₁ ← 1_m, η ← √(log m / T).
 3: for t = 1 to T do
 4:    p_t ← w_t / ‖w_t‖₁.
 5:    Choose i_t ∈ [m] by i_t ← i w.p. p_t(i).
 6:    Y_{t+1} ← X_t + √(2/T) y_{i_t} v_{i_t} v_{i_t}ᵀ
 7:    if y_{i_t} < 0 and λ_min(Y_{t+1}) < 0 then
 8:        u ← arg min_{z : ‖z‖=1} zᵀ Y_{t+1} z
 9:        Y_{t+1} ← Y_{t+1} − λ_min(Y_{t+1}) u uᵀ
10:    end if
11:    X_{t+1} ← Y_{t+1} / max{1, ‖Y_{t+1}‖}
12:    for i ∈ [m] do
13:        v_t(i) ← clip(Sample(A_i, X_t), 1/η)
14:        w_{t+1}(i) ← w_t(i)(1 − η v_t(i) + η² v_t(i)²)
15:    end for
16: end for
17: return X̄ = (1/T) Σ_t X_t
and let X_{t+1} be such that ‖X̂_{t+1} − X_{t+1}‖ ≤ ε_d. Then, for a proper choice of η, it holds that

    max_{X∈P} Σ_{t∈[T]} A_t • X − Σ_{t∈[T]} A_t • X_t ≤ √(2T) + (3/2) ε_d T^{3/2}.
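A NumPy sketch of the projection steps in lines 7-11, under our reading of the algorithm (a full eigendecomposition is used here for clarity, whereas the paper needs only the single smallest eigenpair):

import numpy as np

def project_P(Y):
    """Approximate projection onto P = {A psd, ||A|| <= 1}: remove the
    (at most one, by Theorem 5.1) negative eigenvalue, then rescale."""
    lam, U = np.linalg.eigh((Y + Y.T) / 2.0)
    if lam[0] < 0.0:
        # subtracting lam_min * u u^T zeroes out the negative eigenvalue
        Y = Y - lam[0] * np.outer(U[:, 0], U[:, 0])
    return Y / max(1.0, np.linalg.norm(Y))   # Frobenius-norm normalization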
The following lemma states the connection between the precision used in the eigenvalue approximation in lines 7-8 and the quality of the approximate projection.

Lemma 5.3. Assume that on each iteration t of the algorithm, the eigenvalue computation in line 7 is a Δ = ε_d/(4T^{1.5}) additive approximation of the smallest eigenvalue of Y_{t+1}, and let X̂_t = arg min_{X∈P} ‖Y_t − X‖. It holds that

    ‖X̂_t − X_t‖ ≤ ε_d.

Theorem 5.4. Algorithm SublinearPseudoMetric computes an ε-additive approximation to (11) w.p. 1/2.

Proof. Combining Lemmas 5.2 and 5.3 we have

    max_{X∈P} Σ_{t∈[T]} A_t • X − Σ_{t∈[T]} A_t • X_t ≤ √(2T) + (3/2) ε_d T^{3/2}.

Setting ε_d = 2ε_P/(3√T), where ε_P is the same as in Theorem 4.5, yields

    max_{X∈P} Σ_{t∈[T]} A_t • X − Σ_{t∈[T]} A_t • X_t ≤ √(2T) + ε_P T.

The rest of the proof follows the same lines as Theorem 4.5.
We move on to discuss the time complexity of the algorithm. It is easily observed from the algorithm that for all t ∈ [T], the matrix X_t can be represented as the sum of k_t ≤ 2T symmetric rank-one matrices. That is, X_t is of the form X_t = Σ_{i∈[k_t]} α_i z_i z_iᵀ, with ‖z_i‖ = 1 for all i. Thus instead of computing X_t explicitly, we may represent it by the vectors z_i and scalars α_i. Denote by α the vector of length k_t in which the i-th entry is just α_i, for some iteration t ∈ [T]. Since ‖X_t‖ ≤ 1 it holds that ‖α‖ ≤ 1. The sampling procedure Sample(A_i, X_t) in line 13 returns the value

    A_i(j, l) ‖α‖² / (z_k(j) z_k(l) α_k)   with probability   (α_k² / ‖α‖²) · (z_k(j) z_k(l))².

That is, we first sample a vector z_k according to α and then sample an entry (j, l) according to the chosen vector z_k. It is easily observed that ṽ_t(i) = Sample(A_i, X_t) is an unbiased estimator of A_i • X_t. It also holds that:

    E[ṽ_t(i)²] = Σ_{j,l∈[n], k∈[k_t]} (α_k²/‖α‖²)(z_k(j) z_k(l))² · A_i(j, l)² ‖α‖⁴ / ((z_k(j) z_k(l))² α_k²)
               = k_t ‖α‖² ‖A_i‖² = Õ(ε⁻²).

Thus taking ṽ_t(i) to be the average of Õ(ε⁻²) i.i.d. samples as described above yields an unbiased estimator of A_i • X_t with variance at most 1, as required for the analysis of our algorithm.
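A sketch of this two-stage sampler under the representation above (Z holds the z_k as columns; names ours):

import numpy as np

def sample_dot(Ai, alphas, Z, rng):
    """Unbiased estimate of A_i . X_t for X_t = sum_k alpha_k z_k z_k^T:
    first draw k w.p. alpha_k^2/||alpha||^2, then j, l w.p. z_k(.)^2."""
    pk = alphas ** 2 / np.sum(alphas ** 2)
    k = rng.choice(len(alphas), p=pk)
    zk = Z[:, k]
    pj = zk ** 2                     # ||z_k|| = 1, so this sums to 1
    j = rng.choice(len(zk), p=pj)
    l = rng.choice(len(zk), p=pj)
    return Ai[j, l] * np.sum(alphas ** 2) / (zk[j] * zk[l] * alphas[k])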
We can now state the running time of the algorithm.

Lemma 5.5. Algorithm SublinearPseudoMetric can be implemented to run in time Õ(m/ε⁴ + n/ε⁶·⁵).

Proof. According to Lemmas 5.3 and 5.4, the required precision in the eigenvalue approximation is O(1)·T⁻². Using the Lanczos method for eigenvalue approximation and the sparse representation of X_t described above, a single eigenvalue computation takes Õ(n ε⁻⁴·⁵) time per iteration. Estimating the products A_i • X_t on each iteration takes, by the discussion above, Õ(m ε⁻²). Overall, the running time over all iterations is as stated in the lemma.
6 Conclusions

We have presented the first sublinear time algorithm for approximate semidefinite programming, a widely used optimization framework in machine learning. The algorithm's running time is optimal up to poly-logarithmic factors and its dependence on ε, the approximation guarantee. The algorithm is based on the primal-dual approach of [15], and incorporates methods from previous SDP solvers [12].

For the problem of learning pseudo-metrics, we have presented further improvements to the basic method, which entail an algorithm that performs O(log n / ε²) iterations, each encompassing at most one approximate eigenvector computation.

Acknowledgements

This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views.
References
[1] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115-1145, 1995.
[2] Sanjeev Arora, Satish Rao, and Umesh Vazirani. Expander flows, geometric embeddings and graph partitioning. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, STOC '04, pages 222-231, 2004.
[3] Amit Agarwal, Moses Charikar, Konstantin Makarychev, and Yury Makarychev. O(sqrt(log n)) approximation algorithms for min uncut, min 2cnf deletion, and directed cut problems. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, STOC '05, pages 573-581, 2005.
[4] Sanjeev Arora, James R. Lee, and Assaf Naor. Euclidean distortion and the sparsest cut. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, STOC '05, pages 553-562, 2005.
[5] Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems 15, pages 505-512, 2002.
[6] Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct formulation of sparse PCA using semidefinite programming. SIAM Review, 49:41-48, 2004.
[7] Gert R. G. Lanckriet, Nello Cristianini, Laurent El Ghaoui, Peter Bartlett, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, pages 27-72, 2004.
[8] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] Philip Klein and Hsueh-I Lu. Efficient approximation algorithms for semidefinite programs arising from max cut and coloring. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, STOC '96, pages 338-347, 1996.
[10] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, FOCS '05, pages 339-348, 2005.
[11] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, STOC '07, pages 227-236, 2007.
[12] Elad Hazan. Sparse approximate solutions to semidefinite programs. In Proceedings of the 8th Latin American conference on Theoretical informatics, LATIN '08, pages 306-316, 2008.
[13] Garud Iyengar, David J. Phillips, and Clifford Stein. Feasible and accurate algorithms for covering semidefinite programs. In SWAT, pages 150-162, 2010.
[14] Garud Iyengar, David J. Phillips, and Clifford Stein. Approximating semidefinite packing programs. SIAM Journal on Optimization, 21:231-268, 2011.
[15] Kenneth L. Clarkson, Elad Hazan, and David P. Woodruff. Sublinear optimization for machine learning. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS '10, pages 449-457, 2010.
[16] Elad Hazan. Approximate convex optimization by online game playing. CoRR, abs/cs/0610119, 2006.
[17] Kenneth L. Clarkson, Elad Hazan, and David P. Woodruff. Sublinear optimization for machine learning. CoRR, abs/1010.4408, 2010.
[18] Shai Shalev-Shwartz, Yoram Singer, and Andrew Y. Ng. Online and batch learning of pseudo-metrics. In ICML, pages 743-750, 2004.
[19] James Hardy Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, Oxford, 1965.
[20] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1989.
[21] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.
Advice Refinement in Knowledge-Based SVMs
Gautam Kunapuli
Univ. of Wisconsin-Madison
1300 University Avenue
Madison, WI 53705
[email protected]
Richard Maclin
Univ. of Minnesota, Duluth
1114 Kirby Drive
Duluth, MN 55812
[email protected]
Jude W. Shavlik
Univ. of Wisconsin-Madison
1300 University Avenue
Madison, WI 53705
[email protected]
Abstract
Knowledge-based support vector machines (KBSVMs) incorporate advice from
domain experts, which can improve generalization significantly. A major limitation that has not been fully addressed occurs when the expert advice is imperfect,
which can lead to poorer models. We propose a model that extends KBSVMs
and is able to not only learn from data and advice, but also simultaneously improve the advice. The proposed approach is particularly effective for knowledge
discovery in domains with few labeled examples. The proposed model contains
bilinear constraints, and is solved using two iterative approaches: successive linear
programming and a constrained concave-convex approach. Experimental results
demonstrate that these algorithms yield useful refinements to expert advice, as
well as improve the performance of the learning algorithm overall.
1 Introduction
We are primarily interested in learning in domains where there is only a small amount of labeled data
but advice can be provided by a domain expert. In such scenarios, the goal is to refine this advice, which is usually only approximately correct, during learning, to produce interpretable models that generalize better and aid knowledge discovery.
of researchers have shown that incorporating prior knowledge from experts can greatly improve the
generalization of the model learned, often with many fewer labeled examples. Such approaches
have been shown in rule-learning methods [16], artificial neural networks (ANNs) [21] and support
vector machines (SVMs) [10, 17]. One limitation of these methods concerns how well they adapt
when the knowledge provided by the expert is inexact or partially correct. Many of the rule-learning
methods focus on rule refinement to learn better rules, while ANNs form the rules as portions of
the network which are refined by backpropagation. Further, ANN methods have been paired with
rule-extraction methods [3, 20] to try to understand the resulting learned network and provide rules
that are easily interpreted by domain experts.
We consider the framework of knowledge-based support vector machines (KBSVMs), introduced by Fung et al. [6]. KBSVMs have been extensively studied, and in addition to linear
classification, they have been extended to incorporate kernels [5], nonlinear advice [14] and for kernel approximation [13]. Recently, Kunapuli et al. derived an online version of KBSVMs [9], while
other approaches such as that of Le et al. [11] modify the hypothesis space rather than the optimization problem. Extensive empirical results from this prior work establish that expert advice can be
effective, especially for biomedical applications such as breast-cancer diagnosis. KBSVMs are an
attractive methodology for knowledge discovery as they can produce good models that generalize
well with a small amount of labeled data.
Advice tends to be rule-of-thumb and is based on the expert's accumulated experience in
the domain; it may not always be accurate. Rather than simply ignoring or heavily penalizing inaccurate rules, the effectiveness of the advice can be improved through refinement. There are two
main reasons for this: first, refined rules result in the improvement of the overall generalization,
and second, if the refinements to the advice are interpretable by the domain experts, it will help in
the understanding of the phenomena underlying the applications for the experts, and consequently
1
Figure 1: (left) Standard SVM, trades off complexity and loss wrt the data; (center) Knowledge-based SVM,
also trades off loss wrt advice. A piece of advice set 1 extends over the margin, and is penalized as the advice
error. No part of advice set 2 touches the margin, i.e., none of the rules in advice set 2 are useful as support
constraints. (right) SVM that refines advice in two ways: (1) advice set 1 is refined so that no part of it is on the
wrong side of the optimal hyperplane, minimizing advice error; (2) advice set 2 is expanded until it touches the
optimal margin, thus maximizing coverage of the input space.
greatly facilitate the knowledge-discovery process. This is the motivation behind this work. KBSVMs already have several desirable properties that make them an ideal target for refinement. First,
advice is specified as polyhedral regions in input space, whose constraints on the features are easily
interpretable by non-experts. Second, it is well-known that KBSVMs can learn to generalize well
with small data sets [9], and can even learn from advice alone [6]. Finally, owing to the simplicity of
the formulation, advice-refinement terms for the rules can be incorporated directly into the model.
We further motivate advice refinement in KBSVMs with the following example. Figure 1
(left) shows an SVM, which trades off regularization with the data error. Figure 1 (center) illustrates
KBSVMs in their standard form as shown in [6]. As mentioned before, expert rules are specified
in the KBSVM framework as polyhedral advice regions in input space. They introduce a bias to
focus the learner on a model that also includes the advice of the form ∀x, (x ∈ advice region i) ⇒ class(x) = 1. Advice regarding the regions for which class(x) = −1 can be specified similarly.
In the KBSVM (Figure 1, center), each advice region contributes to the final hypothesis via its advice vector, u¹ and u² respectively (as introduced in [6]; also see Section 2). The individual constraints that touch or intersect the margin have non-zero u^i_j components. As a piece of advice region 1 extends beyond the margin, u¹ ≠ 0; furthermore, analogous to the data error, this overlap is penalized as the advice error. As no part of advice set 2 touches the margin, u² = 0 and none of its rules contribute anything to the final classifier. Again, analogous to support vectors, rules with non-zero u^i_j components are called support constraints [6]. Consequently, in the final classifier the
advice sets are incorporated with advice error (advice set 1) or are completely ignored (advice set
2). Even though the rules are inaccurate, they are able to improve generalization compared to the
SVM. However, simply penalizing advice that introduces errors can make learning difficult as the
user must carefully trade off between optimizing data or advice loss.
Now, consider an SVM that is capable of refining inaccurate advice (Figure 1, right). When
advice is inaccurate and intersects the hyperplane, it is truncated such that it minimizes the advice
error. Advice that was originally ignored is extended to cover as much of the input space as is
feasible. The optimal classifier has now minimized the error with respect to the data and the refined
advice and is able to further improve upon the performance of not just the SVM but also the KBSVM.
Thus, the goal is to refine potentially inaccurate expert advice during learning so as to learn a model
with the best generalization.
Our approach generalizes the work of Maclin et al. [12], to produce a model that corrects
the polyhedral advice regions of KBSVMs. The resulting mathematical program is no longer a
linear or quadratic program owing to bilinear correction factors in the constraints. We propose
two algorithmic techniques to solve the resulting bilinear program, one based on successive linear
programming [12], and the other based on a concave-convex procedure [24]. Before we describe
advice refinement, we briefly introduce our notation and KBSVMs.
We wish to learn a linear classifier (wᵀx = b) given ℓ labeled data points (x_j, y_j)_{j=1}^ℓ with x_j ∈ R^n and labels y_j ∈ {±1}. Data are collected row-wise in the matrix X ∈ R^{ℓ×n}, while Y = diag(y) is the diagonal matrix of the labels. We assume that m advice sets (D_i, d_i, z_i)_{i=1}^m are given in addition to the data (see Section 2), and if the i-th advice set has k_i constraints, we have D_i ∈ R^{k_i×n}, d_i ∈ R^{k_i} and z_i ∈ {±1}. The absolute value of a scalar y is denoted |y|, the 1-norm of a vector x is denoted ‖x‖₁ = Σ_{i=1}^n |x_i|, and the entrywise 1-norm of a p × q matrix A ∈ R^{p×q} is denoted ‖A‖₁ = Σ_{i=1}^p Σ_{j=1}^q |A_ij|. Finally, e is a vector of ones of appropriate dimension.
2 Knowledge-Based Support Vector Machines

In KBSVMs, advice can be specified about every potential data point in the input space that satisfies certain advice constraints. For example, consider a task of learning to diagnose diabetes, based on features such as age, blood pressure, body mass index (bmi), plasma glucose concentration (gluc), etc. The National Institutes of Health (NIH) provide the following guidelines to establish risk for Type-2 Diabetes¹: a person who is obese (bmi ≥ 30) with gluc ≥ 126 is at strong risk for diabetes, while a person who is at normal weight (bmi ≤ 25) with gluc ≤ 100 is unlikely to have diabetes. This leads to two advice sets, one for each class:

    (bmi ≤ 25) ∧ (gluc ≤ 100) ⇒ ¬diabetes;    (bmi ≥ 30) ∧ (gluc ≥ 126) ⇒ diabetes,        (1)

where ¬ is the negation operator. In general, rules such as the ones above define a polyhedral region of the input space and are expressed as the implication

    D_i x ≤ d_i ⇒ z_i (wᵀx − b) ≥ 1,        (2)

where the advice label z_i = +1 indicates that all points x that satisfy the constraints of the i-th advice set, D_i x ≤ d_i, belong to class +1, while z_i = −1 indicates the same for the other class.
The standard linear SVM formulation (without incorporating advice) for binary classification optimizes model complexity + λ · data loss:

    min_{ξ≥0, w, b}  ‖w‖₁ + λ eᵀξ,    s.t.  Y(Xw − eb) + ξ ≥ e.        (3)

The implications (2), for i = 1, ..., m, can be incorporated into (3) using the nonhomogeneous Farkas theorem of the alternative [6], which introduces advice vectors u^i. The advice vectors perform the same role as the dual multipliers α in the classical SVM. Recall that points with non-zero α's are the support vectors which additively contribute to w. Similarly, the constraints of an advice set which have non-zero u^i's are called support constraints. The resulting formulation is the KBSVM, which optimizes model complexity + λ · data loss + μ · advice loss:

    min_{w, b, (ξ, u^i, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i)
    s.t.  Y(Xw − eb) + ξ ≥ e,
          −η^i ≤ D_iᵀ u^i + z_i w ≤ η^i,        (4)
          −d_iᵀ u^i − z_i b + ζ_i ≥ 1,    i = 1, ..., m.

In the case of inaccurate advice, the advice errors η^i and ζ_i soften the advice constraints, analogous to the data errors ξ. Returning to Figure 1: for advice set 1, η¹, ζ₁ and u¹ are non-zero, while for advice set 2, u² = 0. The influence of data and advice is determined by the choice of the parameters λ and μ, which reflect the user's trust in the data and advice respectively.
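For concreteness, a minimal sketch of the linear program (4) in CVXPY follows; CVXPY is our illustrative choice (the experiments in this paper solve the LPs with QSOPT), and the variable names mirror the notation above.

import cvxpy as cp
import numpy as np

def kbsvm(X, y, advice, lam=1.0, mu=1.0):
    """Sketch of the KBSVM LP (4); advice is a list of (D_i, d_i, z_i)."""
    ell, n = X.shape
    w, b = cp.Variable(n), cp.Variable()
    xi = cp.Variable(ell, nonneg=True)
    obj = cp.norm1(w) + lam * cp.sum(xi)
    cons = [cp.multiply(y, X @ w - b) + xi >= 1]
    for D, d, z in advice:
        u = cp.Variable(D.shape[0], nonneg=True)   # advice vector u^i
        eta = cp.Variable(n, nonneg=True)          # advice error eta^i
        zeta = cp.Variable(nonneg=True)            # advice error zeta_i
        obj += mu * (cp.sum(eta) + zeta)
        cons += [D.T @ u + z * w <= eta,           # -eta <= D'u + z w <= eta
                 -(D.T @ u + z * w) <= eta,
                 -d @ u - z * b + zeta >= 1]
    cp.Problem(cp.Minimize(obj), cons).solve()
    return w.value, b.value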
3 Advice-Refining Knowledge-Based Support Vector Machines

Previously, Maclin et al. [12] formulated a model to refine advice in KBSVMs. However, their model is limited, as only the terms d_i are refined, which, as we discuss below, greatly restricts the types of refinements that are possible. They consider only refinement terms f^i for the right-hand side of the i-th advice set, and attempt to refine each rule such that

    D_i x ≤ (d_i − f^i) ⇒ z_i (wᵀx − b) ≥ 1,    i = 1, ..., m.        (5)

The resulting formulation adds refinement terms into the KBSVM model (4) in the advice constraints, as well as in the objective. The latter allows the overall extent of the refinement to be controlled by the refinement parameter ν > 0. This formulation was called the Refining-Rules Support Vector Machine (RRSVM):

    min_{w, b, f^i, (ξ, u^i, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i) + ν Σ_{i=1}^m ‖f^i‖₁
    s.t.  Y(Xw − eb) + ξ ≥ e,
          −η^i ≤ D_iᵀ u^i + z_i w ≤ η^i,        (6)
          −(d_i − f^i)ᵀ u^i − z_i b + ζ_i ≥ 1,    i = 1, ..., m.

This problem is no longer an LP, owing to the bilinear terms f^{iᵀ} u^i which make the refinement constraints non-convex. Maclin et al. solve this problem using successive linear programming (SLP), wherein linear programs arising from alternately fixing either the advice terms d_i or the refinement terms f^i are solved iteratively.

We consider a full generalization of the RRSVM approach and develop a model where it is possible to refine the entire advice region Dx ≤ d. This allows for much more flexibility in refining the advice based on the data, while still retaining interpretability of the resulting refined advice. In addition to the terms f^i, we propose the introduction of additional refinement terms F_i into the model, so that we can refine the rules in as general a manner as possible:

    (D_i − F_i) x ≤ (d_i − f^i) ⇒ z_i (wᵀx − b) ≥ 1,    i = 1, ..., m.        (7)

¹ http://diabetes.niddk.nih.gov/DM/pubs/?riskfortype2
Recall that for each advice set we have D_i ∈ R^{k_i×n} and d_i ∈ R^{k_i}, i.e., the i-th advice set contains k_i constraints. The corresponding refinement terms F_i and f^i have the same dimensions respectively as D_i and d_i. The formulation (6) now includes the additional refinement terms F_i, and the formulation optimizes:

    min_{w, b, F_i, f^i, (ξ, u^i, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i) + ν Σ_{i=1}^m (‖F_i‖₁ + ‖f^i‖₁)
    s.t.  Y(Xw − eb) + ξ ≥ e,
          −η^i ≤ (D_i − F_i)ᵀ u^i + z_i w ≤ η^i,        (8)
          −(d_i − f^i)ᵀ u^i − z_i b + ζ_i ≥ 1,    i = 1, ..., m.

The objective function of (8) trades off the effect of refinement in each of the advice sets via the refinement parameter ν. This is the Advice-Refining KBSVM (arkSVM); it improves upon the work of Maclin et al. in two important ways. First, refining d alone is highly restrictive, as it allows only for the translation of the boundaries of the polyhedral advice; the generalized refinement offered by arkSVMs allows for much more flexibility, owing to the fact that the boundaries of the advice can be translated and rotated (see Figure 2). Second, the newly added refinement terms, F_iᵀ u^i, are also bilinear, and do not make the overall problem more complex; in addition to the successive linear programming approach of [12], we also propose a concave-convex procedure that leads to an approach based on successive quadratic programming. We provide details of both approaches next.
3.1 arkSVMs via Successive Linear Programming

One approach to solving bilinear programming problems is to solve a sequence of linear programs while alternately fixing the bilinear variables. This approach is called successive linear programming, and has been used to solve various machine learning formulations, for instance [1, 2]. In this approach, which was also adopted by [12], we solve the LPs arising from alternately fixing the sources of bilinearity: (F_i, f^i)_{i=1}^m and {u^i}_{i=1}^m. Algorithm 1 describes the above approach; a high-level sketch of the alternating loop follows the listing. At the t-th iteration, the algorithm alternates between the following steps:

- (Estimation Step) When the refinement terms (F̄_i^t, f̄^{i,t})_{i=1}^m are fixed, the resulting LP becomes a standard KBSVM which attempts to find a data-estimate of the advice vectors {u^i}_{i=1}^m using the current refinement of the advice regions: (D_i − F̄_i^t) x ≤ (d_i − f̄^{i,t}).
- (Refinement Step) When the advice-estimate terms {ū^{i,t}}_{i=1}^m are fixed, the resulting LP solves for (F_i, f^i)_{i=1}^m and attempts to further refine the advice regions based on the estimates from data computed in the previous step.
Proposition 1. I. For ν > 0, the sequence of objective values converges to the value

    ‖w̄‖₁ + λ eᵀξ̄ + μ Σ_{i=1}^m (eᵀη̄^i + ζ̄_i) + ν Σ_{i=1}^m (‖F̄_i‖₁ + ‖f̄^i‖₁),

where the data and advice errors (ξ̄, η̄^i, ζ̄_i) are computed from any accumulation point (w̄, b̄, ū^i, F̄_i, f̄^i) of the sequence of iterates (w^t, b^t, ū^{i,t}, F̄_i^t, f̄^{i,t})_{t=1}^∞ generated by Algorithm 1.

II. Such an accumulation point satisfies the local minimum condition

    (w̄, b̄) ∈ arg min_{w, b, (ξ, η^i, ζ_i) ≥ 0, u^i ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i)
    subject to  Y(Xw − eb) + ξ ≥ e,
                −η^i ≤ (D_i − F̄_i)ᵀ u^i + z_i w ≤ η^i,
                −(d_i − f̄^i)ᵀ u^i − z_i b + ζ_i ≥ 1,    i = 1, ..., m.
Algorithm 1 arkSVM via Successive Linear Programming (arkSVM-sla)
1: initialize: t = 1, F̄_i^1 = 0, f̄^{i,1} = 0
2: while feasible do
3:    if (D_i − F̄_i^t) x ≤ (d_i − f̄^{i,t}) has no feasible x, return failure
4:    (estimation step) solve for {ū^{i,t+1}}_{i=1}^m:
          min_{w, b, (ξ, u^i, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i)
          s.t.  Y(Xw − eb) + ξ ≥ e,
                −η^i ≤ (D_i − F̄_i^t)ᵀ u^i + z_i w ≤ η^i,
                −(d_i − f̄^{i,t})ᵀ u^i − z_i b + ζ_i ≥ 1,    i = 1, ..., m.
5:    (refinement step) solve for (F̄_i^{t+1}, f̄^{i,t+1})_{i=1}^m:
          min_{w, b, F_i, f^i, (ξ, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i) + ν Σ_{i=1}^m (‖F_i‖₁ + ‖f^i‖₁)
          s.t.  Y(Xw − eb) + ξ ≥ e,
                −η^i ≤ (D_i − F_i)ᵀ ū^{i,t+1} + z_i w ≤ η^i,
                −(d_i − f^i)ᵀ ū^{i,t+1} − z_i b + ζ_i ≥ 1,    i = 1, ..., m.
6:    (termination test) if Σ_j ‖F̄_j^t − F̄_j^{t+1}‖ + ‖f̄^{j,t} − f̄^{j,t+1}‖ ≤ ε, return solution
7:    (continue) t = t + 1
8: end while
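The promised high-level sketch of the alternating loop follows; estimation_lp and refinement_lp are hypothetical callables standing in for the LPs of steps 4 and 5 (each could be built with CVXPY along the lines of the KBSVM sketch above).

import numpy as np

def arksvm_sla(X, y, advice, estimation_lp, refinement_lp,
               tol=1e-4, max_iters=50):
    """Skeleton of Algorithm 1 (arkSVM-sla); the two *_lp arguments are
    user-supplied solvers for the LPs in steps 4 and 5."""
    F = [np.zeros_like(D) for (D, d, z) in advice]   # refinement terms F_i
    f = [np.zeros_like(d) for (D, d, z) in advice]   # refinement terms f^i
    w = b = None
    for _ in range(max_iters):
        u = estimation_lp(X, y, advice, F, f)              # step 4: fix (F, f)
        w, b, F_new, f_new = refinement_lp(X, y, advice, u)  # step 5: fix u^i
        delta = sum(np.abs(Fn - Fo).sum() + np.abs(fn - fo).sum()
                    for (Fn, Fo, fn, fo) in zip(F_new, F, f_new, f))
        F, f = F_new, f_new
        if delta <= tol:                                   # step 6: terminate
            break
    return w, b, F, f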
Algorithm 2 arkSVM via Successive Quadratic Programming (arkSVM-sqp)
1: initialize: t = 1, F̄_i^1 = 0, f̄^{i,1} = 0
2: while feasible do
3:    if (D_i − F̄_i^t) x ≤ (d_i − f̄^{i,t}) has no feasible x, return failure
4:    solve for {ū^{i,t+1}}_{i=1}^m:
          min_{w, b, F_i, f^i, (u^i ≥ 0), (ξ, η^i, ζ_i) ≥ 0}  ‖w‖₁ + λ eᵀξ + μ Σ_{i=1}^m (eᵀη^i + ζ_i) + ν Σ_{i=1}^m (‖F_i‖₁ + ‖f^i‖₁)
          s.t.  Y(Xw − eb) + ξ ≥ e,
                eqns (10-12),    i = 1, ..., m,  j = 1, ..., n
5:    (termination test) if Σ_j ‖F̄_j^t − F̄_j^{t+1}‖ + ‖f̄^{j,t} − f̄^{j,t+1}‖ ≤ ε, return solution
6:    (continue) t = t + 1
7: end while
3.2 arkSVMs via Successive Quadratic Programming

In addition to the above approach, we introduce another algorithm (Algorithm 2) that is based on successive quadratic programming. In the constraint (D_i − F_i)ᵀ u^i + z_i w − η^i ≤ 0, only the refinement term F_iᵀ u^i is bilinear, while the rest of the constraint is linear. Denote the j-th components of w and η^i by w_j and η_j^i respectively. A general bilinear term rᵀs, which is non-convex, can be written as the difference of two convex terms: (1/4)‖r + s‖² − (1/4)‖r − s‖². Thus, we have the equivalent constraint

    D_{ij}ᵀ u^i + z_i w_j − η_j^i + (1/4)‖F_{ij} − u^i‖² ≤ (1/4)‖F_{ij} + u^i‖²,        (9)

and both sides of the constraint above are convex and quadratic. We can linearize the right-hand side of (9) around some current estimate of the bilinear variables (F̄_{ij}^t, ū^{i,t}):

    D_{ij}ᵀ u^i + z_i w_j − η_j^i + (1/4)‖F_{ij} − u^i‖²
        ≤ (1/4)‖F̄_{ij}^t + ū^{i,t}‖² + (1/2)(F̄_{ij}^t + ū^{i,t})ᵀ ((F_{ij} − F̄_{ij}^t) + (u^i − ū^{i,t})).        (10)

Similarly, the constraint −(D_i − F_i)ᵀ u^i − z_i w − η^i ≤ 0 can be replaced by

    −D_{ij}ᵀ u^i − z_i w_j − η_j^i + (1/4)‖F_{ij} + u^i‖²
        ≤ (1/4)‖F̄_{ij}^t − ū^{i,t}‖² + (1/2)(F̄_{ij}^t − ū^{i,t})ᵀ ((F_{ij} − F̄_{ij}^t) − (u^i − ū^{i,t})),        (11)
Figure 2: Toy data set (Section 4.1) using (left) RRSVM, (center) arkSVM-sla, (right) arkSVM-sqp. Orange and green unhatched regions show the original advice. The dashed lines show the margin. For each method, we show the refined advice: vertically hatched for Class +1, and diagonally hatched for Class −1.
while d_iᵀ u^i + z_i b + 1 − ζ_i − f^{iᵀ} u^i ≤ 0 is replaced by

    d_iᵀ u^i + z_i b + 1 − ζ_i + (1/4)‖f^i − u^i‖²
        ≤ (1/4)‖f̄^{i,t} + ū^{i,t}‖² + (1/2)(f̄^{i,t} + ū^{i,t})ᵀ ((f^i − f̄^{i,t}) + (u^i − ū^{i,t})).        (12)
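A quick numerical check of the identity behind (9)-(12), the split of a bilinear term into a difference of convex quadratics:

import numpy as np

rng = np.random.default_rng(0)
r, s = rng.standard_normal(5), rng.standard_normal(5)
lhs = r @ s
rhs = 0.25 * np.linalg.norm(r + s) ** 2 - 0.25 * np.linalg.norm(r - s) ** 2
assert np.isclose(lhs, rhs)   # r's = (1/4)||r+s||^2 - (1/4)||r-s||^2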
The right-hand sides in (10-12) are affine and hence the entire set of constraints is now convex. Replacing the original bilinear non-convex constraints of (8) with the convexified relaxations results in a quadratically-constrained linear program (QCLP). These quadratic constraints are more restrictive than their non-convex counterparts, which leads the feasible set of this problem to be a subset of that of the original problem. Now, we can iteratively solve the resulting QCLP. At the t-th iteration, the restricted problem uses the current estimate to construct a new feasible point, and iterating this procedure produces a sequence of feasible points with decreasing objective values. The approach described here is essentially the constrained concave-convex procedure (CCCP) that has been discovered and rediscovered several times. Most recently, the approach was described in the context of machine learning by Yuille and Rangarajan [24], and Smola and Vishwanathan [19], who also derived conditions under which the algorithm converges to a local solution. The following convergence theorem is due to [19].

Proposition 2. For Algorithm 2, the sequence of objective values converges to the value

    ‖w̄‖₁ + λ eᵀξ̄ + μ Σ_{i=1}^m (eᵀη̄^i + ζ̄_i) + ν Σ_{i=1}^m (‖F̄_i‖₁ + ‖f̄^i‖₁),

where (w̄, b̄, ū^i, F̄_i, f̄^i, ξ̄, η̄^i, ζ̄_i) is the local minimum solution of (8), provided that the constraints (10-12), in conjunction with the convex constraints Y(Xw − eb) + ξ ≥ e, ξ ≥ 0, u^i ≥ 0, ζ_i ≥ 0, satisfy suitable constraint qualifications at the point of convergence of the algorithm.

Both Algorithms 1 and 2 produce local minimum solutions to the arkSVM formulation (8). For either solution, the following proposition holds, which shows that either algorithm produces a refinement of the original polyhedral advice regions. The proof is a direct consequence of [13, Proposition 2.1].

Proposition 3. Let (w̄, b̄, ū^i, F̄_i, f̄^i, ξ̄, η̄^i, ζ̄_i) be the local minimum solution produced by Algorithm 1 or Algorithm 2. Then the following refinement of the advice sets holds:

    (D_i − F̄_i) x ≤ (d_i − f̄^i) ⇒ z_i (w̄ᵀx − b̄) ≥ 1 − η̃^{iᵀ} x − ζ̄_i,

where −η̄^i ≤ η̃^i ≤ η̄^i is such that D_iᵀ ū^i + z_i w̄ + η̃^i = 0.
4 Experiments

We present the results of several experiments that compare the performance of three algorithms: RRSVMs (which only refine the d term in Dx ≤ d), arkSVM-sla (successive linear programming) and arkSVM-sqp (successive quadratic programming) with that of standard SVMs and KBSVMs. The LPs were solved using QSOPT², while the QCLPs were solved using SDPT-3 [22].
4.1 Toy Example

We illustrate the behavior of the advice-refinement algorithms discussed previously, geometrically, using a simple 2-dimensional example (Figure 2). This toy data set consists of 200 points separated by x₁ + x₂ = 2. There are two advice sets: {S₁ : (x₁, x₂) ≥ 0 ⇒ z = +1} and {S₂ : (x₁, x₂) ≤ 0 ⇒ z = −1}.

² http://www2.isye.gatech.edu/~wcook/qsopt/
[Figure 3 plot: testing error (%) vs. number of training examples (0-100) for svm, kbsvm, rrsvm, arksvm-sla and arksvm-sqp.]

Figure 3: Diabetes data set, Section 4.2; (left) Results averaged over 10 runs on a hold-out test set of 412 points, with parameters selected by five-fold cross validation; (right) An approximate decision-tree representation of Diabetes Rule 6 before and after refinement. The left branch is chosen if the query at a node is true, and the right branch otherwise. The leaf nodes classify the data point according to ±diabetes.
Both arkSVMs are able to refine the knowledge sets such that no part of S₁ lies on the wrong side of the final hyperplane. In addition, the refinement terms allow for sufficient modification of the advice sets Dx ≤ d so that they fill the input space as much as possible, without violating the margin. Comparing to RRSVMs, we see that their refinement is restrictive because corrections are applied only to part of the advice sets, rather than fully correcting the advice.
4.2 Case Study 1: PIMA Indians Diabetes Diagnosis

The Pima Indians Diabetes data set [4] has been studied for several decades and is used as a standard benchmark to test many machine learning algorithms. The goal is to predict the onset of diabetes in 768 Pima Indian women within the next 5 years based on current indicators (eight features): number of times pregnant, plasma glucose concentration (gluc), diastolic blood pressure, triceps skin fold test, 2-hour serum insulin, body mass index (bmi), diabetes pedigree function (pedf) and age. Studies [15] show that diabetes incidence among the Pima Indians is significantly higher among subjects with bmi ≥ 30. In addition, a person with impaired glucose tolerance is at a significant risk for, or worse, has undiagnosed diabetes [8]. This leads to the following expert rules:
(Diabetes Rule 1)  (gluc ≤ 126)                               ⇒ ¬diabetes,
(Diabetes Rule 2)  (gluc ≥ 126) ∧ (gluc ≤ 140) ∧ (bmi ≤ 30)   ⇒ ¬diabetes,
(Diabetes Rule 3)  (gluc ≥ 126) ∧ (gluc ≤ 140) ∧ (bmi ≥ 30)   ⇒ diabetes,
(Diabetes Rule 4)  (gluc ≥ 140)                               ⇒ diabetes.
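As an illustration of how such a rule becomes a polyhedral advice set (D_i, d_i, z_i) in the Dx ≤ d form of (2), here is Diabetes Rule 3; the two-feature ordering and the class sign are our assumptions for the sketch, and ≥ constraints are negated to fit the ≤ convention.

import numpy as np

# Feature order assumed [gluc, bmi]; diabetes is taken as class +1.
# Diabetes Rule 3: (gluc >= 126) and (gluc <= 140) and (bmi >= 30) => diabetes
D3 = np.array([[-1.0,  0.0],    # -gluc <= -126  (i.e. gluc >= 126)
               [ 1.0,  0.0],    #  gluc <=  140
               [ 0.0, -1.0]])   # -bmi  <=  -30  (i.e. bmi  >=  30)
d3 = np.array([-126.0, 140.0, -30.0])
z3 = +1
advice = [(D3, d3, z3)]         # ready to pass to the KBSVM sketch above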
The diabetes pedigree function was developed by Smith et al. [18], and uses genetic information from family relatives to provide a measure of the expected genetic influence (heredity) on the subject's diabetes risk. The function also takes into account the age of relatives who do have diabetes; on average, Pima Indians are only 36 years old³ when diagnosed with diabetes. A subject with high heredity who is at least 31 is at a significantly increased risk for diabetes in the next five years:

(Diabetes Rule 5)  (pedf ≤ 0.5) ∧ (age ≤ 31)  ⇒ ¬diabetes,
(Diabetes Rule 6)  (pedf ≥ 0.5) ∧ (age ≥ 31)  ⇒ diabetes.
Figure 3 (left) shows that unrefined advice does help initially, especially with as few as 30 data points. However, as more data points become available, the effect of the advice diminishes. In contrast, the advice-refining methods are able to generalize much better with few data points, and eventually converge to a better solution. Finally, Figure 3 (right) shows an approximate tree representation of Diabetes Rule 6 after refinement. This tree was constructed by sampling the space around the refined advice region uniformly, and then training a decision tree that covers as many of the sampled points as possible. This naive approach to rule extraction from refined advice is shown here only to illustrate that it is possible to produce very useful, domain-expert-interpretable rules from refinement. More efficient and accurate rule extraction techniques inspired by SVM-based rule extraction (for example, [7]) are currently under investigation.
[Figure 4 plot: testing error (%) vs. number of training examples (20-100) for svm, kbsvm, rrsvm, arksvm-sla and arksvm-sqp.]

Figure 4: Wargus data set, Section 4.3; (left) An example Wargus scenario; (right) Results using 5-fold cross validation on a hold-out test set of 1000 points.
4.3 Case Study 2: Refining GUI-Collected Human Advice in a Wargus Task
Wargus⁴ is a real-time strategy game in which two or more players gather resources, build bases and
control units in order to conquer opposing players. It has been widely used to study and evaluate
various machine learning and planning algorithms. We evaluate our algorithms on a classification
task in the Wargus domain developed by Walker et al. [23] called tower-defense (Figure 4,
left). Advice for this task was collected from humans via a graphical human-computer interface
(HCI), as detailed in [23]. Each scenario (example) in tower-defense consists of a single tower
being attacked by a group of enemy units, and the task is to predict whether the tower will survive
the attack and defeat the attackers, given the size and composition of the latter as well as other
factors such as the environment. The data set consists of 80 features, including information about
units (e.g., archers, ballistas, peasants), unit properties (e.g., map location, health), group properties
(e.g., #archers, #footmen) and environmental factors (e.g., hasMoat).
Walker et al. [23] used this domain to study the feasibility of learning from human teachers.
To this end, human players were first trained to identify whether a tower would fall given a particular
scenario. Once the humans learned this task, they were asked to provide advice via a GUI-based
interface based on specific examples. This setting lends itself very well to refinement as the advice
collected from human experts represents the sum of their experiences with the domain, but is by no
means perfect or exact. The following are some rules provided by human 'domain experts':
(Wargus Rule 1)  (#footmen ≥ 3) ∧ (hasMoat = 0) ⇒ falls,
(Wargus Rule 2)  (#archers ≥ 5) ⇒ falls,
(Wargus Rule 3)  (#ballistas ≥ 1) ⇒ falls,
(Wargus Rule 4)  (#ballistas = 0) ∧ (#archers = 0) ∧ (hasMoat = 1) ⇒ stands.
Figure 4 (right) shows the performance of the various algorithms on the Wargus data set. As with
the previous case study, the arkSVM methods not only learn very effectively with a small
data set, but also improve significantly on the performance of standard knowledge-based SVMs (KBSVMs) and rule-refining SVMs (RRSVMs).
5 Conclusions and Future Work
We have presented two novel knowledge-discovery methods, arkSVM-sla and arkSVM-sqp, that
allow SVM methods not only to make use of advice provided by human experts but also to refine that
advice using labeled data. These methods are an advance over previous
knowledge-based SVM methods, which either did not refine advice [6] or could only refine simple
aspects of the advice [12]. Experimental results demonstrate that our arkSVM methods can take
inaccurate advice and revise it to better fit the data. A significant aspect of these learning methods is that the system produces not only a classifier but also human-inspectable
changes to the user-provided advice, and can do so using small data sets. In terms of future work, we
plan to explore several avenues of research, including extending this approach to the nonlinear case
for more complex models, better optimization algorithms for improved efficiency, and interpretation
of refined rules for non-AI experts.
³ http://diabetes.niddk.nih.gov/dm/pubs/pima/kiddis/kiddis.htm
⁴ http://wargus.sourceforge.net/index.shtml
Acknowledgements
The authors gratefully acknowledge support of the Defense Advanced Research Projects Agency under DARPA
grant FA8650-06-C-7606 and the National Institute of Health under NLM grant R01-LM008796. Views and
conclusions contained in this document are those of the authors and do not necessarily represent the official
opinion or policies, either expressed or implied of the US government or of DARPA.
References
[1] K. P. Bennett and E. J. Bredensteiner. A parametric optimization method for machine learning. INFORMS Journal on Computing, 9(3):311-318, 1997.
[2] K. P. Bennett and O. L. Mangasarian. Bilinear separation of two sets in n-space. Computational Optimization and Applications, 2:207-227, 1993.
[3] M. W. Craven and J. W. Shavlik. Extracting tree-structured representations of trained networks. In Advances in Neural Information Processing Systems, volume 8, pages 24-30, 1996.
[4] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[5] G. Fung, O. L. Mangasarian, and J. W. Shavlik. Knowledge-based nonlinear kernel classifiers. In Sixteenth Annual Conference on Learning Theory, pages 102-113, 2003.
[6] G. Fung, O. L. Mangasarian, and J. W. Shavlik. Knowledge-based support vector classifiers. In Advances in Neural Information Processing Systems, volume 15, pages 521-528, 2003.
[7] G. Fung, S. Sandilya, and R. B. Rao. Rule extraction from linear support vector machines. In Proc. Eleventh ACM SIGKDD Intl. Conference on Knowledge Discovery in Data Mining, pages 32-40, 2005.
[8] M. I. Harris, K. M. Flegal, C. C. Cowie, M. S. Eberhardt, D. E. Goldstein, R. R. Little, H. M. Wiedmeyer, and D. D. Byrd-Holt. Prevalence of diabetes, impaired fasting glucose, and impaired glucose tolerance in U.S. adults. Diabetes Care, 21(4):518-524, 1998.
[9] G. Kunapuli, K. P. Bennett, A. Shabbeer, R. Maclin, and J. W. Shavlik. Online knowledge-based support vector machines. In Proc. of the European Conference on Machine Learning, pages 145-161, 2010.
[10] F. Lauer and G. Bloch. Incorporating prior knowledge in support vector machines for classification: A review. Neurocomputing, 71(7-9):1578-1594, 2008.
[11] Q. V. Le, A. J. Smola, and T. Gärtner. Simpler knowledge-based support vector machines. In Proceedings of the Twenty-Third International Conference on Machine Learning, pages 521-528, 2006.
[12] R. Maclin, E. W. Wild, J. W. Shavlik, L. Torrey, and T. Walker. Refining rules incorporated into knowledge-based support vector learners via successive linear programming. In AAAI Twenty-Second Conference on Artificial Intelligence, pages 584-589, 2007.
[13] O. L. Mangasarian, J. W. Shavlik, and E. W. Wild. Knowledge-based kernel approximation. Journal of Machine Learning Research, 5:1127-1141, 2004.
[14] O. L. Mangasarian and E. W. Wild. Nonlinear knowledge-based classification. IEEE Transactions on Neural Networks, 19(10):1826-1832, 2008.
[15] M. E. Pavkov, R. L. Hanson, W. C. Knowler, P. H. Bennett, J. Krakoff, and R. G. Nelson. Changing patterns of Type 2 diabetes incidence among Pima Indians. Diabetes Care, 30(7):1758-1763, 2007.
[16] M. Pazzani and D. Kibler. The utility of knowledge in inductive learning. Mach. Learn., 9:57-94, 1992.
[17] B. Schölkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. In Advances in Neural Information Processing Systems, volume 10, pages 640-646, 1998.
[18] J. W. Smith, J. E. Everhart, W. C. Dickson, W. C. Knowler, and R. S. Johannes. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proc. of the Symposium on Comp. Apps. and Medical Care, pages 261-265. IEEE Computer Society Press, 1988.
[19] A. J. Smola and S. V. N. Vishwanathan. Kernel methods for missing variables. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 325-332, 2005.
[20] S. Thrun. Extracting rules from artificial neural networks with distributed representations. In Advances in Neural Information Processing Systems, volume 8, 1995.
[21] G. G. Towell and J. W. Shavlik. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1-2):119-165, 1994.
[22] R. H. Tütüncü, K. C. Toh, and M. J. Todd. Solving semidefinite-quadratic-linear programs using SDPT3. Mathematical Programming, 95(2), 2003.
[23] T. Walker, G. Kunapuli, N. Larsen, D. Page, and J. W. Shavlik. Integrating knowledge capture and supervised learning through a human-computer interface. In Proc. Fifth Intl. Conf. Knowl. Capture, 2011.
[24] A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In Advances in Neural Information Processing Systems, volume 13, 2001.
3,533 | 42 |
STATIC AND DYNAMIC ERROR PROPAGATION
NETWORKS WITH APPLICATION TO SPEECH
CODING
A J Robinson, F Fallside
Cambridge University Engineering Department
Trumpington Street, Cambridge, England
Abstract
Error propagation nets have been shown to be able to learn a variety of tasks in
which a static input pattern is mapped onto a static output pattern. This paper
presents a generalisation of these nets to deal with time varying, or dynamic,
patterns, and three possible architectures are explored. As an example, dynamic
nets are applied to the problem of speech coding, in which a time sequence of
speech data is coded by one net and decoded by another. The use of dynamic
nets gives a better signal to noise ratio than that achieved using static nets.
1. INTRODUCTION
This paper is based upon the use of the error propagation algorithm of Rumelhart, Hinton
and Williams [1] to train a connectionist net. The net is defined as a set of units, each with an
activation, and weights between units which determine the activations. The algorithm uses a
gradient descent technique to calculate the direction by which each weight should be changed
in order to minimise the summed squared difference between the desired output and the actual
output. Using this algorithm it is believed that a net can be trained to make an arbitrary
non-linear mapping of the input units onto the output units if given enough intermediate
units. This 'static' net can be used as part of a larger system with more complex behaviour.
The static net has no memory for past inputs, but many problems require the context of
the input in order to compute the answer. An extension to the static net is developed, the
'dynamic' net, which feeds back a section of the output to the input, so creating some internal
storage for context, and allowing a far greater class of problems to be learned. Previously this
method of training time dependence into nets has suffered from a computational requirement
which increases linearly with the time span of the desired context. The three architectures
for dynamic nets presented here overcome this difficulty.
To illustrate the power of these networks a general coder is developed and applied to the
problem of speech coding. The non-linear solution found by training a dynamic net coder is
compared with an established linear solution, and found to have an increased performance as
measured by the signal to noise ratio.
2. STATIC ERROR PROPAGATION NETS
A static net is defined by a set of units and links between the units. Denoting o_i as the value
of the i-th unit, and w_{i,j} as the weight of the link between o_i and o_j, we may divide up the
units into input units, hidden units and output units. If we assign o_0 to a constant to form a
© American Institute of Physics 1988
bias, the input units run from o_1 up to o_{n_inp}, followed by the hidden units to o_{n_hid} and then
the output units to o_{n_out}. The values of the input units are defined by the problem and the
values of the remaining units are defined by:
net_i = \sum_{j=0}^{i-1} w_{i,j} o_j    (2.1)

o_i = f(net_i)    (2.2)
where f(x) is any continuous monotonic non-linear function and is known as the activation
function. The function used in this application is:

f(x) = \frac{2}{1 + e^{-x}} - 1    (2.3)
These equations define a net which has the maximum number of interconnections. This
arrangement is commonly restricted to a layered structure in which units are only connected
to the immediately preceding layer. The architecture of these nets is specified by the number
of input, output and hidden units. Diagrammatically the static net is a transformation of an
input u onto the output y, as in figure 1.
[Figure 1 artwork: a static net mapping the input u onto the output y.]
The net is trained by using a gradient descent algorithm which minimises an energy
term, E, defined as the summed squared error between the actual outputs, o_i, and the target
outputs, t_i. The algorithm also defines an error signal, δ_i, for each unit:
E = \frac{1}{2} \sum_{i=n_{hid}+1}^{n_{out}} (t_i - o_i)^2    (2.4)

δ_i = f'(net_i)(t_i - o_i),    n_{hid} < i ≤ n_{out}    (2.5)

δ_i = f'(net_i) \sum_{j=i+1}^{n_{out}} δ_j w_{j,i},    n_{inp} < i ≤ n_{hid}    (2.6)
where f'(x) is the derivative of f(x). The error signal and the activations of the units define
the change in each weight, Δw_{i,j}:

Δw_{i,j} = η δ_i o_j    (2.7)
where η is a constant of proportionality which determines the learning rate. The above
equations define the error signal, δ_i, for the input units as well as for the hidden units. Thus
any number of static nets can be connected together, the values of δ_i being passed from input
units of one net to output units of the preceding net. It is this ability of error propagation
nets to be 'glued' together in this way that enables the construction of dynamic nets.
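As a concrete illustration of equations (2.1)-(2.7), here is a minimal sketch of a two-layer static net trained by the rules above. The layer sizes, initialisation and learning rate are our illustrative choices; Python is used here purely for exposition.

```python
import numpy as np

def f(x):      return 2.0 / (1.0 + np.exp(-x)) - 1.0   # eq. (2.3)
def fprime(o): return 0.5 * (1.0 - o * o)              # f'(x) written in terms of o = f(x)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (4, 3))   # hidden x (2 inputs + bias)
W2 = rng.normal(0, 0.1, (2, 5))   # outputs x (4 hidden + bias)

def train_step(u, t, eta=0.1):
    u1 = np.append(1.0, u)                          # o_0 = 1 acts as the bias unit
    h  = f(W1 @ u1)
    h1 = np.append(1.0, h)
    o  = f(W2 @ h1)
    delta_o = fprime(o) * (t - o)                   # eq. (2.5)
    delta_h = fprime(h) * (W2[:, 1:].T @ delta_o)   # eq. (2.6), skipping the bias column
    W2 += eta * np.outer(delta_o, h1)               # eq. (2.7)
    W1 += eta * np.outer(delta_h, u1)
    return 0.5 * np.sum((t - o) ** 2)               # eq. (2.4)
```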
3. DYNAMIC ERROR PROPAGATION NETS
The essential quality of the dynamic net is that its behaviour is determined both by the
external input to the net, and also by its own internal state. This state is represented by the
activation of a group of units. These units form part of the output of a static net and also
part of the input to another copy of the same static net in the next time period. Thus the
state units link multiple copies of static nets over time to form a dynamic net.
3.1. DEVELOPMENT FROM LINEAR CONTROL THEORY
The analogy of a dynamic net in linear systems [2] may be stated as:

x_{p+1} = A x_p + B u_p    (3.1.1)
y_{p+1} = C x_{p+1}    (3.1.2)

where u_p is the input vector, x_p the state vector, and y_p the output vector at the integer time
p. A, B and C are matrices.
The structure of the linear systems solution may be implemented as a non-linear dynamic
net by substituting for the matrices A, B and C static nets, represented by the non-linear
functions A[·], B[·] and C[·]. The summation operation of Ax_p and Bu_p could be achieved
using a net with one node for each element in x and u and with unity weights from the two
inputs to the identity activation function f(x) = x. Alternatively this net can be incorporated
into the A[·] net, giving the architecture of figure 2.
[Figures 2 and 3 artwork: figure 2 shows separate A[·], B[·] and C[·] nets with a time delay feeding the state x(p+1) back; figure 3 combines them into a single dynamic net with input u(p), output y(p+1) and time-delayed state x(p+1).]
The three networks may be combined into one, as in figure 3. Simplicity of architecture
is not just an aesthetic consideration. If three nets are used then each one must have enough
computational power for its part of the task; combining the nets means that only the combined
power must be sufficient, and it allows common computations to be shared.
The error signal for the output y_{p+1} can be calculated by comparison with the desired
output. However, the error signal for the state units, x_p, is only given by the net at time p+1,
which is not known at time p. Thus it is impossible to use a single backward pass to train
this net. It is this difficulty which introduces the variation in the architectures of dynamic
nets.
3.2. THE FINITE INPUT DURATION (FID) DYNAMIC NET

If the output of a dynamic net, y_p, is dependent on a finite number of previous inputs, u_{p-P}
to u_p, or if this assumption is a good approximation, then it is possible to formulate the
learning algorithm by expansion of the dynamic net for a finite time, as in figure 4. This
formulation is similar to a restricted version of the recurrent net of Rumelhart, Hinton and
Williams [1].
[Figure 4 artwork: the dynamic net unrolled in time — copies of the net at times p, p-1, p-2, each passing its state x on to the next copy and producing output y.]
Consider only the component of the error signal in past instantiations of the nets which
is the result of the error signal at time p. The error signal for y_p is calculated from the target
output, and the error signal for x_p is zero. This combined error signal is propagated back
through the dynamic net at p to yield the error signals for u_p and x_p. Similarly these error
signals can then be propagated back through the net at p-1, and so on for all relevant inputs.
The summed error signal is then used to change the weights as for a static net.
Formalising the FID dynamic net for a general time q, q ≤ p:

n_s         is the number of state units
o_{q,i}     is the output value of unit i at time q
t_{q,i}     is the target value of unit i at time q
δ_{q,i}     is the error value of unit i at time q
w_{i,j}     is the weight between o_i and o_j
Δw_{q,i,j}  is the weight change for this iteration at time q
Δw_{i,j}    is the total weight change for this iteration
These values are calculated in the same way as in a static net,
net_{q,i} = \sum_{j=0}^{i-1} w_{i,j} o_{q,j}    (3.2.1)

o_{q,i} = f(net_{q,i})    (3.2.2)

δ_{q,i} = f'(net_{q,i})(t_{q,i} - o_{q,i}),    n_{hid} + n_s < i ≤ n_{out}    (3.2.3)

δ_{q,i} = f'(net_{q,i}) δ^x_{q+1,i},    n_{hid} < i ≤ n_{hid} + n_s    (3.2.4)

δ_{q,i} = f'(net_{q,i}) \sum_{j=i+1}^{n_{out}} δ_{q,j} w_{j,i}    (3.2.5)

Δw_{q,i,j} = η δ_{q,i} o_{q,j}    (3.2.6)

(here δ^x_{q+1,i} denotes the error signal fed back to state unit i from the net at time q+1, which is zero at q = p) and the total weight change is given by the summation of the partial weight changes for all
previous times:

Δw_{i,j} = \sum_{q=p-P}^{p} Δw_{q,i,j}    (3.2.7)
         = \sum_{q=p-P}^{p} η δ_{q,i} o_{q,j}    (3.2.8)
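The following sketch illustrates the FID training scheme of equations (3.2.1)-(3.2.8) for a single-layer dynamic net (a simplifying assumption of ours; the paper's nets may have hidden units). The output vector is laid out as [y, x'], with the state occupying the final n_state components.

```python
import numpy as np

def f(x):      return 2.0 / (1.0 + np.exp(-x)) - 1.0
def fprime(o): return 0.5 * (1.0 - o * o)

def fid_bptt(W, us, t_final, n_state, eta=0.05):
    """One FID update: unroll the shared weight matrix W (mapping [1, u, x]
    to [y, x']) over the buffered inputs `us`, then propagate the error at
    the final step back through every copy (state error at time p is zero)."""
    xs, inps, acts = [np.zeros(n_state)], [], []
    for u in us:                                    # forward pass, eqs (3.2.1)-(3.2.2)
        inp = np.concatenate(([1.0], u, xs[-1]))
        a = f(W @ inp)
        inps.append(inp); acts.append(a); xs.append(a[-n_state:])
    dW = np.zeros_like(W)
    d_out = np.concatenate((t_final - acts[-1][:-n_state], np.zeros(n_state)))
    for q in reversed(range(len(us))):              # backward pass, eqs (3.2.3)-(3.2.6)
        delta = fprime(acts[q]) * d_out
        dW += np.outer(delta, inps[q])
        d_state = (W.T @ delta)[-n_state:]          # error fed back to the state at q-1
        d_out = np.concatenate((np.zeros(len(t_final)), d_state))
    W += eta * dW                                   # eqs (3.2.7)-(3.2.8)
    return W
```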
Thus, it is possible to train a dynamic net to incorporate the information from any time
period of finite length, and so learn any function which has a finite impulse response.*

* This is a restriction on the class of functions which can be learned; the output will always be affected in some way by all previous inputs, giving an infinite impulse response performance.

In some situations the approximation to a finite length may not be valid, or the storage
and computational requirements of such a net may not be feasible. In such situations another
approach is possible, the infinite input duration dynamic net.
3.3. THE INFINITE INPUT DURATION (IID) DYNAMIC NET

Although the forward pass of the FID net of the previous section is a non-linear process, the
backward pass computes the effect of small variations on the forward pass, and is a linear
process. Thus the recursive learning procedure described in the previous section may be
compressed into a single operation.

Given the target values for the output of the net at time p, equations (3.2.3) and (3.2.4)
define values of δ_{p,i} at the outputs. If we denote this set of δ_{p,i} by D_p then equation (3.2.5)
states that any δ_{p,i} in the net at time p is simply a linear transformation of D_p. Writing the
transformation matrix as S:

δ_{q,i} = S_{q,i} D_q    (3.3.1)

In particular the set of δ_{p,i} which is to be fed back into the network at time p-1 is also
a linear transformation of D_p,

D_{p-1} = T_p D_p    (3.3.2)

or for an arbitrary time q:

D_q = \left( \prod_{τ=q+1}^{p} T_τ \right) D_p    (3.3.3)

so substituting equations (3.3.1) and (3.3.3) into equation (3.2.8):

Δw_{i,j} = η \sum_{q=-∞}^{p} S_{q,i} \left( \prod_{τ=q+1}^{p} T_τ \right) D_p o_{q,j}    (3.3.4)
         = η M_{p,i,j} D_p    (3.3.5)

where:

M_{p,i,j} = \sum_{q=-∞}^{p} S_{q,i} \left( \prod_{τ=q+1}^{p} T_τ \right) o_{q,j}    (3.3.6)

and note that M_{p,i,j} can be written in terms of M_{p-1,i,j}:

M_{p,i,j} = S_{p,i} o_{p,j} + \left( \sum_{q=-∞}^{p-1} S_{q,i} \left( \prod_{τ=q+1}^{p-1} T_τ \right) o_{q,j} \right) T_p    (3.3.7)
          = S_{p,i} o_{p,j} + M_{p-1,i,j} T_p    (3.3.8)

Hence we can calculate the weight changes for an infinite recursion using only the finite
matrix M.
3.4. THE STATE COMPRESSION DYNAMIC NET

The previous architectures for dynamic nets rely on the propagation of the error signal back
in time to define the format of the information in the state units. An alternative approach
is to use another error propagation net to define the format of the state units. The overall
architecture is given in figure 5.
[Figure 5 artwork: an encoder net maps the current input and state onto the next state x(p+1); a translator net maps x(p+1) onto the output y(p+1); a decoder net performs the reverse of the encoder.]
The encoder net is trained to code the current input and current state onto the next state,
while the decoder net is trained to do the reverse operation. The translator net codes the
next state onto the desired output. This encoding/decoding attempts to represent the current
input and the current state in the next state, and by the recursion, it will try to represent all
previous inputs. Feeding errors back from the translator directs this coding of past inputs to
those which are useful in forming the output.
3.5. COMPARISON OF DYNAMIC NET ARCHITECTURES

In comparing the three architectures for dynamic nets, it is important to consider the computational and memory requirements, and how these requirements scale with increasing context.
To train an FID net the net must store the past activations of all the units within
the time span of the necessary context. Using this minimal storage, the computational load
scales proportionally to the time span considered, as for every new input/output pair the
net must propagate an error signal back through all the past nets. However, if more sets
of past activations are stored in a buffer, then it is possible to wait until this buffer is full
before computing the weight changes. As the buffer size increases the computational load in
calculating the weight changes tends to that of a single backward pass through the units, and
so becomes independent of the amount of context.

The largest matrix required to compute the IID net is M, which requires a factor of the
number of outputs of the net more storage than the weight matrix. This must be updated
on each iteration, a computational requirement larger than that of the FID net for small
problems [3]. However, if this architecture were implemented on a parallel machine it would be
possible to store the matrix M in a distributed form over the processors, and locally calculate
the weight changes. Thus, whilst the FID net requires the error signal to be propagated back
in time in a strictly sequential manner, the IID net may be implemented in parallel, with
possible advantages on parallel machines.

The state compression net has memory and computational requirements independent of
the amount of context. This is achieved at the expense of storing recent information in the
state units whether it is required to compute the output or not. This results in an increased
computational and memory load over the more efficient FID net when implemented with a
buffer for past outputs. However, the exclusion of external storage during training gives this
architecture more biological plausibility, constrained of course by the plausibility of the error
propagation algorithm itself.

With these considerations in mind, the FID net was chosen to investigate a 'real world'
problem, that of the coding of the speech waveform.
4. APPLICATION TO SPEECH CODING

The problem of speech coding is one of finding a suitable model to remove redundancy and
hence reduce the data rate of the speech. The Boltzmann machine learning algorithm has
already been extended to deal with the dynamic case and applied to speech recognition [4]. However, previous use of error propagation nets for speech processing has mainly been restricted to
explicit presentation of the context [5,6] or explicit feeding back of the output units to the input [7,8],
with some work done in using units with feedback links to themselves [9]. In a similar area,
static error propagation nets have been used to perform image coding, as well as conventional
techniques [10].
4.1. THE ARCHITECTURE OF A GENERAL CODER

The coding principle used in this section is not restricted to coding speech data. The general
problem is one of encoding the present input using past input context to form the transmitted
signal, and decoding this signal using the context of the coded signals to regenerate the original
input. Previous sections have shown that dynamic nets are able to represent context, so two
dynamic nets in series form the architecture of the coder, as in figure 6.
This architecture may be specified by the number of input, state, hidden and transmission
units. There are as many output units as input units and, in this application, both the
transmitter and receiver have the same number of state and hidden units.
The input is combined with the internal state of the transmitter to form the coded signal,
and then decoded by the receiver using its internal state. Training of the net involves the
comparison of the input and output to form the error signal, which is then propagated back
through past instantiations of the receiver and transmitter in the same way as for an FID
dynamic net.
It is useful to introduce noise into the coded signal during the training to reduce the
information capacity of the transmission line. This forces the dynamic nets to incorporate
time information; without this constraint both nets can learn a simple transformation without
any time dependence. The noise can be used to simulate quantisation of the coded signal so
[Figure 6 artwork: the coder — a transmitter (TX) dynamic net and a receiver (RX) dynamic net in series, each with a time-delayed state feedback loop; the TX output is the coded signal, decoded by RX to form the output.]
quantifying the transmission rate. Unfortunately, a straight implementation of quantisation
violates the requirement of the activation function to be continuous, which is necessary to
train the net. Instead quantisation to n levels may be simulated by adding a random value
distributed uniformly in the range +1/n to -1/n to each of the channels in the coded signal.
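A sketch of this training-time noise channel and its test-time replacement by true quantisation; the placement of the quantiser levels, uniform over [-1, 1], is an assumption for illustration.

```python
import numpy as np

def channel(z, n_levels, training, rng):
    """During training: add uniform noise in [-1/n, +1/n] so the coded
    signal stays differentiable.  At test time: quantise to n discrete
    levels (assumed here to be uniform over [-1, 1])."""
    z = np.asarray(z, dtype=float)
    if training:
        return z + rng.uniform(-1.0 / n_levels, 1.0 / n_levels, size=z.shape)
    levels = np.linspace(-1.0, 1.0, n_levels)
    return levels[np.argmin(np.abs(z[..., None] - levels), axis=-1)]
```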
4.2. TRAINING OF THE SPEECH CODER

The chosen problem was to present a single sample of digitised speech to the input, code to
a single value quantised to fifteen levels, and then to reconstruct the original speech at the
output. Fifteen levels was chosen as the point where there is a marked loss in the intelligibility
of the speech, so implementation of these coding schemes gives an audible improvement. Two
versions of the coder net were implemented; both nets had eight hidden units, with no state
units for the static time-independent case and four state units for the dynamic time-dependent
case.

The data for this problem was 40 seconds of speech from a single male speaker, digitised
to 12 bits at 10kHz and recorded in a laboratory environment. The speech was divided into
two halves; the first was used for training and the second for testing.
The static and the dynamic versions of the architecture were trained with about 20 passes
through the training data. After training, the weights were frozen and the inclusion of random
noise was replaced by true quantisation of the coded representation. A further pass was then
made through the test data to yield the performance measurements.

The adaptive training algorithm of Chan [11] was used to dynamically alter the learning
rates during training. Previously these machines were trained with fixed learning rates and
weight update after every sample [3], and the use of the adaptive training algorithm has been
found to result in a substantially deeper energy minimum. Weights were updated after every
1000 samples, that is about 200 times in one pass of the training data.
4.3. COMPARISON OF PERFORMANCE

The performance of a coding scheme can be measured by defining the noise energy as half the
summed squared difference between the actual output and the desired output. This energy
is the quantity minimised by the error propagation algorithm. The lower the noise energy in
relation to the energy of the signal, the higher the performance.
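For concreteness, these quantities can be computed directly from their definitions (the dB conversion is the usual 10 log10 of the energy ratio):

```python
import numpy as np

def snr_db(desired, actual):
    """Noise energy: half the summed squared difference (the quantity the
    error propagation algorithm minimises); SNR relative to signal energy."""
    noise_energy = 0.5 * np.sum((desired - actual) ** 2)
    signal_energy = 0.5 * np.sum(desired ** 2)
    return 10.0 * np.log10(signal_energy / noise_energy)
```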
Three non-connectionist coding schemes were implemented for comparison with the static
and dynamic net coders. In the first the signal is linearly quantised within the dynamic range
of the original signal. In the second the quantiser is restricted to operate over a reduced
dynamic range, with values outside that range thresholded to the maximum and minimum
outputs of the quantiser. The thresholds of the quantiser were chosen to optimise the signal
to noise ratio. The third scheme used the technique of Differential Pulse Code Modulation
(DPCM) [12], which involves a linear filter to predict the speech waveform; the transmitted
signal is the difference between the real signal and the predicted signal. Another linear filter
reconstructs the original signal from the difference signal at the receiver. The filter order of
the DPCM coder was chosen to be the same as the number of state units in the dynamic net
coder, thus both coders can store the same amount of context, enabling a comparison with
this established technique.
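A rough sketch of a DPCM coder of this general shape; the least-squares predictor fit and the crude residual codebook spanning the signal range are placeholder choices of ours, not the paper's implementation.

```python
import numpy as np

def dpcm(speech, order, n_levels):
    """Illustrative DPCM: fit a fixed linear predictor by least squares,
    transmit the quantised prediction residual, and reconstruct."""
    # Predictor: speech[t] ~ a . speech[t-order:t]
    X = np.stack([speech[i:len(speech) - order + i] for i in range(order)], axis=1)
    y = speech[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    levels = np.linspace(speech.min(), speech.max(), n_levels)  # placeholder codebook
    hist = list(speech[:order])          # shared transmitter/receiver state
    out = list(speech[:order])
    for t in range(order, len(speech)):
        pred = np.dot(a, hist[-order:])
        resid = speech[t] - pred
        q = levels[np.argmin(np.abs(levels - resid))]   # transmitted symbol
        rec = pred + q                                  # receiver reconstruction
        hist.append(rec); out.append(rec)
    return np.array(out)
```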
The resulting noise energy when the signal energy was normalised to unity, and the corresponding signal to noise ratio are given in table 1 for the five coding techniques.
coding method                  | normalised noise energy | signal to noise ratio in dB
linear, original thresholds    | 0.071                   | 11.5
linear, optimum thresholds     | 0.041                   | 13.9
static net                     | 0.049                   | 13.1
DPCM, optimum thresholds       | 0.037                   | 14.3
dynamic net                    | 0.028                   | 15.5

table 1
The static net may be compared with the two forms of the linear quantiser. Firstly note
that a considerable improvement in the signal to noise ratio may be achieved by reducing the
thresholds of the quantiser from the extremes of the input. This improvement is achieved
because the distribution of samples in the input is concentrated around the mean value, with
very few values near the extremes. Thus many samples are represented with greater accuracy
at the expense of a few which are thresholded. The static net has a poorer performance than
the linear quantiser with optimum thresholds. The form of the linear quantiser solution is
within the class of problems which the static net can represent. Its failure to do so can be
attributed to finding a local minimum, a plateau in weight space, or corruption of the true
steepest descent direction by noise introduced by updating the weights more than once per
pass through the training data.
The dynamic net may be compared with the DPCM coding. The output from both these
coders is no longer constrained to discrete signal levels and the resulting noise energy is lower
than in all the previous examples. The dynamic net has a significantly lower noise energy than
any other coding scheme, although, from the static net example, this is unlikely to be an
optimal solution. The dynamic net achieves a lower noise energy than the DPCM coder by
virtue of the non-linear processing at each unit, and the flexibility of data storage in the state
units.

As expected from the measured noise energies, there is an improvement in signal quality
and intelligibility from the linearly quantised speech through to the DPCM and dynamic net
quantised speech.
5. CONCLUSION

This report has developed three architectures for dynamic nets. Each architecture can be
formulated in a way where the computational requirement is independent of the degree of
context necessary to learn the solution. The FID architecture appears most suitable for
implementation on a serial processor, the IID architecture has possible advantages for implementation on parallel processors, and the state compression net has a higher degree of
biological plausibility.

Two FID dynamic nets have been coupled together to form a coder, and this has been
applied to speech coding. Although the dynamic net coder is unlikely to have learned the
optimum coding strategy, it does demonstrate that dynamic nets can be used to achieve an
improved performance in a real world task over an established conventional technique.

One of the authors, A J Robinson, is supported by a maintenance grant from the U.K.
Science and Engineering Research Council, and gratefully acknowledges this support.
References

[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Bradford Books/MIT Press, Cambridge, MA, 1986.
[2] O. L. R. Jacobs. Introduction to Control Theory. Clarendon Press, Oxford, 1974.
[3] A. J. Robinson and F. Fallside. The Utility Driven Dynamic Error Propagation Network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
[4] R. W. Prager, T. D. Harrison, and F. Fallside. Boltzmann machines for speech recognition. Computer Speech and Language, 1:3-27, 1986.
[5] J. L. Elman and D. Zipser. Learning the Hidden Structure of Speech. ICS Report 8701, University of California, San Diego, 1987.
[6] A. J. Robinson. Speech Recognition with Associative Networks. M.Phil Computer Speech and Language Processing thesis, Cambridge University Engineering Department, 1986.
[7] M. I. Jordan. Serial Order: A Parallel Distributed Processing Approach. ICS Report 8604, Institute for Cognitive Science, University of California, San Diego, May 1986.
[8] D. J. C. MacKay. A Method of Increasing the Contextual Input to Adaptive Pattern Recognition Systems. Technical Report RIPRREP/1000/14/87, Research Initiative in Pattern Recognition, RSRE, Malvern, 1987.
[9] R. L. Watrous, L. Shastri, and A. H. Waibel. Learned phonetic discrimination using connectionist networks. In J. Laver and M. A. Jack, editors, Proceedings of the European Conference on Speech Technology, CEP Consultants Ltd, Edinburgh, September 1987.
[10] G. W. Cottrell, P. Munro, and D. Zipser. Image Compression by Back Propagation: An Example of Extensional Programming. ICS Report 8702, Institute for Cognitive Science, University of California, San Diego, February 1986.
[11] L. W. Chan and F. Fallside. An Adaptive Learning Algorithm for Back Propagation Networks. Technical Report CUED/F-INFENG/TR.2, Cambridge University Engineering Department, 1987; submitted to Computer Speech and Language.
[12] L. R. Rabiner and R. W. Schafer. Digital Processing of Speech Signals. Prentice Hall, Englewood Cliffs, New Jersey, 1978.
3,534 | 420 | EVOLUTION AND LEARNING IN
NEURAL NETWORKS: THE NUMBER
AND DISTRIBUTION OF LEARNING
TRIALS AFFECT THE RATE OF
EVOLUTION
Ron Keesing and David G. Stork*
Ricoh California Research Center
2882 Sand Hill Road Suite 115
Menlo Park, CA 94025
[email protected]
and
*Dept. of Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Learning can increase the rate of evolution of a population of
biological organisms (the Baldwin effect). Our simulations
show that in a population of artificial neural networks
solving a pattern recognition problem, no learning or too
much learning leads to slow evolution of the genes whereas
an intermediate amount is optimal. Moreover, for a given
total number of training presentations, fastest evolution
occurs if different individuals within each generation receive
different numbers of presentations, rather than equal
numbers. Because genetic algorithms (GAs) help avoid
local minima in energy functions, our hybrid learning-GA
systems can be applied successfully to complex, high-dimensional pattern recognition problems.
INTRODUCTION
The structure and function of a biological network derives from both its
evolutionary precursors and real-time learning. Genes specify (through
development) coarse attributes of a neural system, which are then refined
based on experience in an environment containing more information - and
more unexpected information - than the genes alone can represent. Innate
neural structure is essential for many high level problems such as scene
analysis and language [Chomsky, 1957].
Although the Central Dogma of molecular genetics [Crick, 1970] implies
that information learned cannot be directly transcribed to the genes, such
information can appear in the genes through an indirect Darwinian process
(see below). As such, learning can change the rate of evolution - the
Baldwin effect [Baldwin, 1896]. Hinton and Nowlan [1987] considered a
closely related process in artificial neural networks, though they used
stochastic search and not learning per se. We present here analyses and
simulations of a hybrid evolutionary-learning system which uses gradient-descent learning as well as a genetic algorithm to determine network
connections.
Consider a population of networks for pattern recognition, where initial
synaptic weights (weights "at birth") are determined by genes. Figure 1
shows the Darwinian fitness of networks (i.e., how many patterns each can
correctly classify) as a function the weights. Iso-fitness contours are not
concentric, in general. The tails of the arrows represent the synaptic
weights of networks at birth. In the case of evolution without learning,
network B has a higher fitness than does A, and thus would be
preferentially selected. In the case of gradient-descent learning before
selection, however, network A has a higher after-learning fitness, and
would be preferentially selected (tips of arrows). Thus learning can change
which individuals will be selected and reproduce, in particular favoring a
network (here, A) whose genome is "good" (i.e., initial weights "close" to
the optimal), despite its poor performance at birth. Over many generations,
the choice of "better" genes for reproduction leads to new networks which
require less learning to solve the problem - they are closer to the optimal.
The rate of gene evolution is increased by learning (the Baldwin effect).
Iso-fitness contours
A
Weight 1
Figure 1: Iso-fitness contours in
synaptic weight space. The black region
corresponds to perfect classifications
(fitness = 5). The weights of two
networks are shown at birth (tails of
arrows), and after learning (tips of
arrows). At birth, 8 has a higher fitness
score (2) than does A (1); a pure genetic
algorithm (without learning) would
preferentially reproduce 8. Wit h
learning, though, A has a higher fitness
score (4) than 8 (2), and would thus be
preferentially reproduced. Since A's
genes are "better" than 8's, learning can
lead to selection of better genes.
Surprisingly, too much learning leads to slow evolution of the genome,
since after sufficient training in each generation, all networks can perform
perfectly on the pattern recognition task, and thus are equally likely to pass
on their genes, regardless of whether they are "good" or "bad." In Figure
1, if both A and B continue learning, eventually both will identify all five
patterns correctly. B will be just as likely to reproduce as A, even though
A's genes are "better." Thus the rate of evolution will be decreased - too
much learning is worse than an intermediate amount - or even no learning.
SIMULATION APPROACH
Our system consists of a population of 200 networks, each for classifying
pixel images of the first five letters of the alphabet. The 9 x 9 input grid is
connected to four 7 x 7 sets of overlapping 3 x 3 orientation detectors;
each detector is fully connected by modifiable weights to an output layer
containing five category units (Fig. 2).
[Figure 2 artwork: the 9x9 pixel input feeds four orientation-selective layers whose trainable weights are fully interconnected to the five category units A-E.]
Figure 2: Individual network architecture. The 9x9 pixel input is detected by each of
four orientation selective input layers (7x7 unit arrays), which are fully connected by
trainable weights to the five category units. The network is thus a simple perceptron
with 196 (=4x7x7) inputs and 5 outputs. Genes specify the initial connection strengths.
Each network has a 490-bit gene specifying the initial weights (Figure 3).
For each of the 49 filter positions and 5 categories, the gene has two bits
which specify which orientation is initially most strongly connected to the
category unit (by an arbitrarily chosen factor of 3:1). During training, the
weights from the filters to the output layer are changed by (supervised)
perceptron learning. Darwinian fitness is given by the number of patterns
correctly classified after training. We use fitness-proportional reproduction
and the standard genetic algorithm processes of replication, mutation, and
cross-over [Holland, 1975]. Note that while fitness may be measured after
training, reproduction is of the genes present at birth, in accord with the
Central Dogma. This is not a Lamarckian process.
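A sketch of how such a 490-bit gene might be decoded into initial weights: the two-bit fields and the 3:1 strong/weak values follow the description above, while the category-major bit layout is our assumption.

```python
import numpy as np

def decode_gene(gene_bits, strong=3.0, weak=1.0):
    """Decode a 490-bit gene into initial weights of shape
    (5 categories, 49 positions, 4 orientations): each two-bit field picks
    the orientation detector that gets the strong connection."""
    bits = np.asarray(gene_bits, dtype=int).reshape(5, 49, 2)
    chosen = bits[..., 0] * 2 + bits[..., 1]     # value in {0, 1, 2, 3}
    w = np.full((5, 49, 4), weak)
    cat, pos = np.indices((5, 49))
    w[cat, pos, chosen] = strong                 # 3:1 ratio at birth
    return w
```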
[Figure 3 artwork: a 490-bit gene string; each two-bit field selects, for one spatial position and one category, which of the four orientation detectors (A-E category units shown) receives the strong (3) initial weight while the other three receive weight 1.]
Figure 3: The genetic representation of a network. For each of the five category units,
49 two-bit numbers describe which of the four orientation units is most strongly
connected at each position within the 7x7 grid. This unit is given a relative connection
strength of 3, while the other three orientation units at that position are given a relative
strength of 1.
For a given total number of teaching presentations, reproductive fitness
might be defined in many ways, including categorization score at the end of
learning or during learning; such functions will lead to different rates of
evolution. We show simulations for two schemes: in uniform learning each
network received the same number (e.g., 20) of training presentations; in
distributed learning networks received a randomly chosen number (10, 34,
36, 16, etc.) of presentations.
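One generation of the hybrid system might look like the following schematic sketch; the fitness function (which decodes a gene, trains the network, and scores it) is supplied by the caller, and the GA operators shown are standard choices assumed for illustration.

```python
import numpy as np

def next_generation(pop, fitness_fn, n_trials, rng, mut_rate=0.001):
    """One generation of the hybrid GA-learning system (schematic sketch).
    pop: int array of genes, shape (N, 490).
    fitness_fn(gene, k): decodes the gene, trains for k presentations, and
        returns the after-learning fitness (assumed supplied by the caller).
    n_trials(i): presentations for individual i -- a constant for uniform
        learning, a random draw for distributed learning."""
    fitness = np.array([fitness_fn(g, n_trials(i)) for i, g in enumerate(pop)])
    p = (fitness + 1e-9) / (fitness + 1e-9).sum()     # fitness-proportional selection
    parents = rng.choice(len(pop), size=(len(pop), 2), p=p)
    children = []
    for a, b in parents:
        cut = rng.integers(1, pop.shape[1])           # one-point crossover
        child = np.concatenate((pop[a][:cut], pop[b][cut:]))
        flip = rng.random(pop.shape[1]) < mut_rate    # mutation
        child[flip] ^= 1
        children.append(child)
    return np.stack(children), fitness

# Distributed learning: n_trials = lambda i: rng.integers(0, 41)
# Uniform learning:     n_trials = lambda i: 20
```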
RESULTS AND DISCUSSION
Figure 4 shows the population average fitness at birth. The lower curve
shows the performance of the genetic algorithm alone; the two upper curves
represent genotypic evolution - the amount of information within the genes
- when the genetic algorithm is combined with gradient-descent learning.
Learning increases the rate of evolution - both uniform and distributed
learning are significantly better than no learning. The fitness after learning
in a generation (not shown) is typically only 5% higher than the fitness at
birth. Such a small improvement at a single generation cannot account for
the overall high performance at later generations. A network's performance
- even after learning - is more dependent upon its ancestors having
learned than upon its having learned the task.
[Figure 4 plot: population average fitness at birth versus generation (0-100) for the different learning schemes, with distributed learning highest. Figure 5 plot: average fitness at generation 100 versus average learning trials per individual.]
Figure 4: Learning guides the rate of
evolution. In uniform learning, every
network in every generation receives 20
learning presentations; in the distributed
learning scheme, any network receives a
number of patterns randomly chosen
between 0 and 40 presentations (mean =
20). Clearly, evolution with learning
leads to superior genes (fitness at birth)
than evolution without learning.
Figure 5: Selectivity of learning-evolution interactions. Too little or too
much learning leads to slow evolution
(population fitness at birth at generation
100) while an intermediate amount of
learning leads to significantly higher such
fitness. This effect is significant in both
learning schemes. (Each point represents
the mean of five simulation runs.)
Figure 5 illustrates the tuning of these learning-evolution interactions, as
discussed above: too little or too much learning leads to poorer evolution
than does an intermediate amount of learning. Given excessive learning
(e.g., 500 presentations) all networks perform perfectly. This leads to the
slowest evolution, since selection is independent of the quality of the genes.
Note too in Fig. 4 that distributed learning leads to significantly faster
evolution (higher fitness at any particular generation) than uniform learning.
In the uniform learning scheme, once networks have evolved to a point in
weight space where they (and their offspring) can identify a pattern after
learning, there is no more "pressure" on the genes to evolve. In Figure 6,
both A and B are able to identify three patterns correctly after uniform
learning, and hence both will reproduce equally. However, in the
distributed learning scheme, one of the networks may (randomly) receive a
small amount of learning. In such cases, A's reproductive fitness will be
unaffected, because it is able to solve the patterns without learning, while
B's fitness will decrease significantly. Thus in the distributed learning
scheme (and in schemes in which fitness is determined in part during
learning), there is "pressure" on the genes to improve at every generation.
Diversity is a driving force for evolution. Our distributed learning scheme
leads to a greater diversity of fitness throughout a population.
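The mechanism just described can be summarized in a short generation loop (a sketch under our own simplifying assumptions: fitness-proportional selection and abstract learn/fitness/mutate callables standing in for the gradient-descent phase, the categorization score, and genetic variation):

    import random

    def evolve(population, fitness, learn, trials_fn, mutate, n_offspring):
        """One generation of the hybrid evolutionary-learning scheme.

        population : list of weight vectors decoded from the genes
        fitness    : weights -> categorization score
        learn      : (weights, n_trials) -> trained weights
        trials_fn  : () -> number of learning presentations
                     (uniform or distributed scheme)
        mutate     : weights -> perturbed copy (genetic variation)
        """
        scored = []
        for genes in population:
            trained = learn(genes, trials_fn())
            # Only the reproductive fitness is affected by learning; the
            # genes themselves are passed on unmodified (central dogma /
            # Baldwin effect, not Lamarckian inheritance).
            scored.append((fitness(trained), genes))
        total = sum(f for f, _ in scored)
        # Networks reproduce in proportion to fitness *after* learning.
        parents = random.choices([g for _, g in scored],
                                 weights=[f / total for f, _ in scored],
                                 k=n_offspring)
        return [mutate(p) for p in parents]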
[Figure 6 shows iso-fitness contours in weight space; axes: Weight 1, Weight 2.]
Figure 6: Distributed learning leads to faster evolution than uniform learning. In uniform learning (shown above), A and B have equal reproductive fitness, even though A has "better" genes. In distributed learning, A will be more likely to reproduce when it (randomly) receives a small amount of learning (shorter arrow) than B will under similar circumstances. Thus "better" genes will be more likely to reproduce, leading to faster evolution.
CONCLUSIONS
Evolutionary search via genetic algorithms is a powerful technique for
avoiding local minima in complicated energy landscapes [Goldberg, 1989;
Peterson, 1990], but is often slow to converge in large problems.
Conventional genetic approaches consider only the reproductive fitness of
809
810
Keesing and Stork
the genes; the slope of the fitness landscape in the immediate vicinity of the
genes is ignored. Our hybrid evolutionary-learning approach utilizes the
gradient of the local fitness landscape, along with the fitness of the genes, in
determining survival and reproduction.
We have shown that this technique offers advantages over evolutionary
search alone in the single-minimum landscape given by perceptron learning.
In a simple pattern recognition problem, the hybrid system performs twice
as well as a genetic algorithm alone. A hybrid system with distributed
learning, which increases the "pressure" on the genes to evolve at every
generation, performs four times as well as a genetic algorithm. In addition,
we have demonstrated that there exists an optimal average amount of
learning in order to increase the rate of evolution - too little or too much
learning leads to slower evolution. In the extreme case of too much
learning, where all networks are trained to perfect performance, there is no
improvement of the genes. The advantages of the hybrid approach in
landscapes with multiple minima can be even more pronounced [Stork and
Keesing, 1991].
Acknowledgments
Thanks to David Rumelhart, Marcus Feldman, and Aviv Bergman for
useful discussions.
References
Baldwin, J. M. "A new factor in evolution," American Naturalist 30, 441-451 (1896)
Chomsky, N. Syntactic Structures The Hague: Mouton (1957)
Crick, F. H. C. "Central Dogma of Molecular Biology," Nature 227, 561-563 (1970)
Goldberg, D. E. Genetic Algorithms in Search, Optimization & Machine Learning Reading, MA: Addison-Wesley (1989).
Hinton, G. E. and Nowlan, S. J. "How learning can guide evolution," Complex Systems 1, 495-502 (1987)
Holland, J. H. Adaptation in Natural and Artificial Systems University of Michigan Press (1975)
Peterson, C. "Parallel Distributed Approaches to Combinatorial Optimization: Benchmark Studies on the Traveling Salesman Problem," Neural Computation 2, 261-269 (1990).
Stork, D. G. and Keesing, R. "The distribution of learning trials affects evolution in neural networks" (1991, submitted).
3,535 | 4,200 | Unifying Framework for Fast Learning Rate of
Non-Sparse Multiple Kernel Learning
Taiji Suzuki
Department of Mathematical Informatics
The University of Tokyo
Tokyo 113-8656, Japan
[email protected]
Abstract
In this paper, we give a new generalization error bound of Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target in this paper is dense type regularizations including $\ell_p$-MKL that imposes $\ell_p$-mixed-norm regularization instead of $\ell_1$-mixed-norm regularization. According to recent numerical experiments, the sparse regularization does not necessarily show a good performance compared with dense type regularizations. Motivated by this fact, this paper gives a general theoretical tool to derive fast learning rates that is applicable to arbitrary mixed-norm-type regularizations in a unifying manner. As a by-product of our general result, we show a fast learning rate of $\ell_p$-MKL that is tightest among existing bounds. We also show that our general learning rate achieves the minimax lower bound. Finally, we show that, when the complexities of candidate reproducing kernel Hilbert spaces are inhomogeneous, dense type regularization shows a better learning rate compared with sparse $\ell_1$ regularization.
1
Introduction
Multiple Kernel Learning (MKL) proposed by [20] is one of the most promising methods that adaptively select the kernel function in supervised kernel learning. A kernel method is widely used and
several studies have supported its usefulness [25]. However the performance of kernel methods
critically relies on the choice of the kernel function. Many methods have been proposed to deal
with the issue of kernel selection. [23] studied hyperkernels as a kernel of kernel functions. [2] considered a DC programming approach to learn a mixture of kernels with continuous parameters. Some studies tackled the problem of learning non-linear combinations of kernels, as in [4, 9, 34]. Among them, learning a linear combination of finite candidate kernels with non-negative coefficients is the most basic, fundamental and commonly used approach. The seminal work on MKL by [20] considered learning a convex combination of candidate kernels. This work opened up the sequence of MKL studies. [5] showed that MKL can be reformulated as a kernel version of the group lasso [36]. This formulation gives an insight that MKL can be described as an $\ell_1$-mixed-norm regularized method. As a generalization of MKL, $\ell_p$-MKL that imposes $\ell_p$-mixed-norm regularization has been proposed [22, 14]. $\ell_p$-MKL includes the original MKL as the special case $\ell_1$-MKL. Another direction of generalizing MKL is elasticnet-MKL [26, 31] that imposes a mixture of $\ell_1$-mixed-norm and $\ell_2$-mixed-norm regularizations. Recently numerical studies have shown that $\ell_p$-MKL with $p > 1$ and elasticnet-MKL show better performances than $\ell_1$-MKL in several situations [14, 8, 31]. An interesting observation here is that both $\ell_p$-MKL and elasticnet-MKL produce denser estimators than the original $\ell_1$-MKL while showing favorable performances. One motivation of this paper is to give a theoretical justification to these generalized dense type MKL methods in a unifying manner.
In the pioneering paper of [20], a convergence rate of MKL is given as $\sqrt{M/n}$, where M is the number of given kernels and n is the number of samples. [27] gave an improved learning bound utilizing the pseudo-dimension of the given kernel class. [35] gave a convergence bound utilizing Rademacher chaos and gave some upper bounds of the Rademacher chaos utilizing the pseudo-dimension of the kernel class. [8] presented a convergence bound for a learning method with L2 regularization on the kernel weight. [10] gave the convergence rate of $\ell_p$-MKL as $M^{1-\frac{1}{p}}\sqrt{\log(M)/n}$ for $1 \le p \le 2$. [15] gave a similar convergence bound with improved constants. [16] generalized this bound to a variant of the elasticnet type regularization and widened the effective range of p to the whole range $p \ge 1$, while in the existing bounds $1 \le p \le 2$ was imposed. One concern about these bounds is that all bounds introduced above are "global" bounds in the sense that the bounds are applicable to all candidate estimators. Consequently all convergence rates presented above are of order $1/\sqrt{n}$ with respect to the number n of samples. However, by utilizing localization techniques including the so-called local Rademacher complexity [6, 17] and the peeling device [32], we can derive a faster learning rate. Instead of uniformly bounding all candidate estimators, the localized inequality focuses on a particular estimator such as the empirical risk minimizer, and thus can give a sharp convergence rate.
Localized bounds of MKL have been given mainly in sparse learning settings [18, 21, 19], and there are only a few studies for non-sparse settings in which the sparsity of the ground truth is not assumed. Recently [13] gave a localized convergence bound of $\ell_p$-MKL. However, their analysis assumed a strong condition where the RKHSs have no correlation to each other.
In this paper, we show a unified framework to derive fast convergence rates of MKL with various regularization types. The framework is applicable to arbitrary mixed-norm regularizations including $\ell_p$-MKL and elasticnet-MKL. Our learning rate utilizes the localization technique, and thus is tighter than global type learning rates. Moreover our analysis does not require the no-correlation assumption as in [13]. We apply our general framework to some examples and show our bound achieves the minimax-optimal rate. As a by-product, we obtain a tighter convergence rate of $\ell_p$-MKL than existing results. Finally, we show that dense type regularizations can outperform sparse $\ell_1$ regularization when the complexities of the RKHSs are not uniformly the same.
2 Preliminary
In this section, we give the problem formulation, the notations and the assumptions required for the
convergence analysis.
2.1
Problem Formulation
Suppose that we are given n i.i.d. samples $\{(x_i, y_i)\}_{i=1}^n$ distributed from a probability distribution P on $\mathcal{X} \times \mathbb{R}$ where $\mathcal{X}$ is an input space. We denote by Π the marginal distribution of P on $\mathcal{X}$. We are given M reproducing kernel Hilbert spaces (RKHS) $\{H_m\}_{m=1}^M$, each of which is associated with a kernel $k_m$. We consider a mixed-norm type regularization with respect to an arbitrary given norm $\|\cdot\|_\psi$, that is, the regularization is given by the norm $\|(\|f_m\|_{H_m})_{m=1}^M\|_\psi$ of the vector $(\|f_m\|_{H_m})_{m=1}^M$ for $f_m \in H_m$ $(m = 1, \dots, M)$†. For notational simplicity, we write $\|f\|_\psi = \|(\|f_m\|_{H_m})_{m=1}^M\|_\psi$ for $f = \sum_{m=1}^M f_m$ $(f_m \in H_m)$.
The general formulation of MKL that we consider in this paper fits a function $f = \sum_{m=1}^M f_m$ ($f_m \in H_m$) to the data by solving the following optimization problem:
$$\hat{f} = \sum_{m=1}^M \hat{f}_m = \mathop{\arg\min}_{f_m \in H_m\ (m=1,\dots,M)}\ \frac{1}{n}\sum_{i=1}^n \Big(y_i - \sum_{m=1}^M f_m(x_i)\Big)^2 + \lambda_1^{(n)} \|f\|_\psi^2. \qquad (1)$$
We call this "ψ-norm MKL". This formulation covers many practically used MKL methods (e.g., $\ell_p$-MKL, elasticnet-MKL, variable sparsity kernel learning (see later for their definitions)), and is solvable by a finite dimensional optimization procedure due to the representer theorem [12]. In this paper, we focus on the regression problem (the squared loss). However the discussion presented here can be generalized to Lipschitz continuous and strongly convex losses [6].
†We assume that the mixed-norm $\|(\|f_m\|_{H_m})_{m=1}^M\|_\psi$ satisfies the triangle inequality with respect to $(f_m)_{m=1}^M$, that is, $\|(\|f_m + f'_m\|_{H_m})_{m=1}^M\|_\psi \le \|(\|f_m\|_{H_m})_{m=1}^M\|_\psi + \|(\|f'_m\|_{H_m})_{m=1}^M\|_\psi$. To satisfy this condition, it is sufficient if the norm is monotone, i.e., $\|a\|_\psi \le \|a+b\|_\psi$ for all $a, b \ge 0$.
Example 1: $\ell_p$-MKL  The first motivating example of ψ-norm MKL is $\ell_p$-MKL [14] that employs the $\ell_p$-norm for $1 \le p \le \infty$ as the regularizer: $\|f\|_\psi = \|(\|f_m\|_{H_m})_{m=1}^M\|_{\ell_p} = (\sum_{m=1}^M \|f_m\|_{H_m}^p)^{\frac{1}{p}}$. If p is strictly greater than 1 ($p > 1$), the solution of $\ell_p$-MKL becomes dense. In particular, $p = 2$ corresponds to averaging candidate kernels with uniform weight [22]. It is reported that $\ell_p$-MKL with p greater than 1, say $p = 4/3$, often shows better performance than the original sparse $\ell_1$-MKL [10].
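For intuition about how problem (1) is solved in practice for the squared loss, here is a minimal sketch of the standard alternating scheme for $\ell_p$-MKL: kernel ridge regression with the weighted kernel $\sum_m \theta_m k_m$, alternated with a closed-form update of the kernel weights. The update rule follows the $\ell_p$-norm MKL literature (e.g., [15]); the λ parametrization, the initialization, and the iteration count are our own choices, and this weighted-kernel form matches (1) only up to a reparametrization of the regularization:

    import numpy as np

    def lp_mkl(Ks, y, p=4/3, lam=1e-2, n_iter=50):
        """Ks: list of M (n x n) Gram matrices; y: (n,) regression targets."""
        M, n = len(Ks), len(y)
        theta = np.full(M, M ** (-1.0 / p))      # feasible: ||theta||_p = 1
        for _ in range(n_iter):
            K = sum(t * Km for t, Km in zip(theta, Ks))
            alpha = np.linalg.solve(K + lam * np.eye(n), y)  # kernel ridge
            # block norms: ||f_m||_{H_m} = theta_m * sqrt(alpha' K_m alpha)
            norms = np.array([t * np.sqrt(max(alpha @ Km @ alpha, 0.0))
                              for t, Km in zip(theta, Ks)])
            # closed-form weight update keeping ||theta||_p = 1
            num = norms ** (2.0 / (p + 1))
            den = np.sum(norms ** (2.0 * p / (p + 1))) ** (1.0 / p) + 1e-12
            theta = num / den
        return alpha, theta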
Example 2: Elasticnet-MKL  The second example is elasticnet-MKL [26, 31] that employs a mixture of $\ell_1$ and $\ell_2$ norms as the regularizer: $\|f\|_\psi = \tau\|f\|_{\ell_1} + (1-\tau)\|f\|_{\ell_2} = \tau\sum_{m=1}^M \|f_m\|_{H_m} + (1-\tau)\big(\sum_{m=1}^M \|f_m\|_{H_m}^2\big)^{\frac{1}{2}}$ with $\tau \in [0, 1]$. Elasticnet-MKL shares the same spirit with $\ell_p$-MKL in the sense that it bridges sparse $\ell_1$-regularization and dense $\ell_2$-regularization. An efficient optimization method for elasticnet-MKL is proposed by [30].
Example 3: Variable Sparsity Kernel Learning  Variable Sparsity Kernel Learning (VSKL) proposed by [1] divides the RKHSs into M' groups $\{H_{j,k}\}_{k=1}^{M_j}$ $(j = 1, \dots, M')$ and imposes a mixed norm regularization
$$\|f\|_\psi = \|f\|_{(p,q)} = \Big\{\sum_{j=1}^{M'}\Big(\sum_{k=1}^{M_j} \|f_{j,k}\|_{H_{j,k}}^p\Big)^{\frac{q}{p}}\Big\}^{\frac{1}{q}},$$
where $1 \le p, q$ and $f_{j,k} \in H_{j,k}$. An advantageous point of VSKL is that by adjusting the parameters p and q, various levels of sparsity can be introduced; that is, the parameters can control the level of sparsity within groups and between groups. This point is beneficial especially for multi-modal tasks like object categorization.
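All three regularizers above act on the same vector of RKHS norms $(\|f_m\|_{H_m})_m$ and differ only in how it is collapsed to a scalar. A small sketch (representing VSKL groups as lists of indices is our own convention):

    import numpy as np

    def lp_norm(h, p):
        # l_p-MKL regularizer; p = np.inf gives max_m ||f_m||_{H_m}
        return np.linalg.norm(h, ord=p)

    def elasticnet_norm(h, tau):
        # tau * l_1 + (1 - tau) * l_2 mixture
        return tau * np.sum(h) + (1 - tau) * np.sqrt(np.sum(h ** 2))

    def vskl_norm(h, groups, p, q):
        # groups: list of index lists, one per group j
        inner = [np.sum(h[g] ** p) ** (1.0 / p) for g in groups]
        return np.sum(np.array(inner) ** q) ** (1.0 / q)

    h = np.array([0.5, 1.0, 0.2, 0.8])
    print(lp_norm(h, 4 / 3), elasticnet_norm(h, 0.5),
          vskl_norm(h, [[0, 1], [2, 3]], p=2, q=1))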
2.2
Notations and Assumptions
Here, we prepare notations and assumptions that are used in the analysis. Let $\mathcal{H}^{\oplus M} = H_1 \oplus \cdots \oplus H_M$. Throughout the paper, we assume the following technical conditions (see also [3]).
Assumption 1. (Basic Assumptions)
(A1) There exists $f^* = (f_1^*, \dots, f_M^*) \in \mathcal{H}^{\oplus M}$ such that $E[Y|X] = f^*(X) = \sum_{m=1}^M f_m^*(X)$, and the noise $\epsilon := Y - f^*(X)$ is bounded as $|\epsilon| \le L$.
(A2) For each $m = 1, \dots, M$, $H_m$ is separable (with respect to the RKHS norm) and $\sup_{X \in \mathcal{X}} |k_m(X, X)| < 1$.
The first assumption in (A1) ensures that the model $\mathcal{H}^{\oplus M}$ is correctly specified, and the technical assumption $|\epsilon| \le L$ allows the loss evaluated at f to be Lipschitz continuous with respect to f. The noise boundedness can be relaxed to the unbounded situation as in [24], but we don't pursue that direction for simplicity.
Let the integral operator $T_{k_m} : L_2(\Pi) \to L_2(\Pi)$ corresponding to a kernel function $k_m$ be
$$T_{k_m} f = \int k_m(\cdot, x)\, f(x)\, d\Pi(x).$$
It is known that this operator is compact, positive, and self-adjoint (see Theorem 4.27 of [28]). Thus it has at most countably many non-negative eigenvalues. We denote by $\mu_{\ell,m}$ the $\ell$-th largest eigenvalue (with possible multiplicity) of the integral operator $T_{k_m}$. Then we assume the following on the decreasing rate of $\mu_{\ell,m}$.
Assumption 2. (Spectral Assumption) There exist $0 < s_m < 1$ and $0 < c$ such that
(A3)  $\mu_{\ell,m} \le c\,\ell^{-\frac{1}{s_m}} \quad (\forall \ell \ge 1,\ 1 \le \forall m \le M),$
where $\{\mu_{\ell,m}\}_{\ell=1}^\infty$ is the spectrum of the operator $T_{k_m}$ corresponding to the kernel $k_m$.
It was shown that the spectral assumption (A3) is equivalent to the classical covering number assumption [29]. Recall that the ε-covering number $N(\varepsilon, B_{H_m}, L_2(\Pi))$ with respect to $L_2(\Pi)$ is the minimal number of balls with radius ε needed to cover the unit ball $B_{H_m}$ in $H_m$ [33]. If the spectral assumption (A3) holds, there exists a constant C that depends only on s and c such that
$$\log N(\varepsilon, B_{H_m}, L_2(\Pi)) \le C \varepsilon^{-2s_m}, \qquad (2)$$
Table 1: Summary of the constants we use in this article.
  n    : the number of samples
  M    : the number of candidate kernels
  s_m  : the spectral decay coefficient; see (A3)
  κ_M  : the smallest eigenvalue of the design matrix; see Eq. (3)
and the converse is also true (see [29, Theorem 15] and [28] for details). Therefore, if $s_m$ is large, the RKHSs are regarded as "complex", and if $s_m$ is small, the RKHSs are "simple".
An important class of RKHSs where $s_m$ is known is the Sobolev spaces. (A3) holds with $s_m = \frac{d}{2\alpha}$ for the Sobolev space of α-times continuous differentiability on the Euclidean ball of $\mathbb{R}^d$ [11]. Moreover, for α-times continuously differentiable kernels on a closed Euclidean ball in $\mathbb{R}^d$, it holds with $s_m = \frac{d}{2\alpha}$ [28, Theorem 6.26]. According to Theorem 7.34 of [28], for Gaussian kernels with a compact support distribution, it holds for arbitrarily small $0 < s_m$. The covering number of Gaussian kernels with an unbounded support distribution is also described in Theorem 7.34 of [28].
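In practice $s_m$ is unknown, but since the eigenvalues of the normalized Gram matrix $K_m/n$ approximate the spectrum of $T_{k_m}$, a rough empirical estimate can be read off a log-log fit of eigenvalue against rank. This estimator is our own illustration and is not part of the paper's theory:

    import numpy as np

    def estimate_s(K, top=50):
        """Fit mu_l ~ l^{-1/s} on the leading spectrum of K / n."""
        n = K.shape[0]
        mu = np.sort(np.linalg.eigvalsh(K / n))[::-1][:top]
        mu = mu[mu > 1e-12]
        ranks = np.arange(1, len(mu) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(mu), 1)
        return -1.0 / slope   # log mu = log c - (1/s) log l  =>  s = -1/slope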
Let $\kappa_M$ be defined as follows:
$$\kappa_M := \sup\Bigg\{\kappa \ge 0 \;\Bigg|\; \kappa \le \frac{\big\|\sum_{m=1}^M f_m\big\|_{L_2(\Pi)}^2}{\sum_{m=1}^M \|f_m\|_{L_2(\Pi)}^2},\ \ \forall f_m \in H_m\ (m = 1, \dots, M)\Bigg\}. \qquad (3)$$
$\kappa_M$ represents the correlation of the RKHSs. We assume all RKHSs are not completely correlated to each other.
Assumption 3. (Incoherence Assumption) $\kappa_M$ is strictly bounded from below; there exists a constant $C_0 > 0$ such that
(A4)  $0 < C_0^{-1} < \kappa_M$.
This condition is motivated by the incoherence condition [18, 21] considered in sparse MKL settings. This ensures the uniqueness of the decomposition $f^* = \sum_{m=1}^M f_m^*$ of the ground truth. [3] also assumed this condition to show the consistency of $\ell_1$-MKL.
Finally we give a technical assumption with respect to the $\infty$-norm.
Assumption 4. (Embedded Assumption) Under the Spectral Assumption, there exists a constant $C_1 > 0$ such that
(A5)  $\|f_m\|_\infty \le C_1\, \|f_m\|_{H_m}^{s_m}\, \|f_m\|_{L_2(\Pi)}^{1-s_m}$.
This condition is met when the input distribution Π has a density with respect to the uniform distribution on $\mathcal{X}$ that is bounded away from 0, and the RKHSs are continuously embedded in a Sobolev space $W^{\alpha,2}(\mathcal{X})$ where $s_m = \frac{d}{2\alpha}$, d is the dimension of the input space $\mathcal{X}$ and α is the "smoothness" of the Sobolev space. Many practically used kernels satisfy this condition (A5). For example, the RKHSs of Gaussian kernels can be embedded in all Sobolev spaces. Therefore the condition (A5) seems rather common and practical. More generally, there is a clear characterization of the condition (A5) in terms of real interpolation of spaces. One can find detailed and formal discussions of interpolations in [29], and Proposition 2.10 of [7] gives the necessary and sufficient condition for the assumption (A5).
Constants we use later are summarized in Table 1.
3
Convergence Rate Analysis of ψ-norm MKL
Here we derive the learning rate of ψ-norm MKL in a most general setting. We suppose that the number of kernels M can increase along with the number of samples n. The motivation of our analysis is summarized as follows:
• Give a unifying framework to derive a sharp convergence rate of ψ-norm MKL.
• (homogeneous complexity) Show the convergence rate of some examples using our general framework, and prove its minimax optimality under the condition that the complexities $s_m$ of all RKHSs are the same.
• (inhomogeneous complexity) Discuss how dense type regularization outperforms sparse type regularization when the complexities $s_m$ of the RKHSs are not uniformly the same.
Now we define $\eta(t) := \eta_n(t) = \max(1, \sqrt{t}, t/\sqrt{n})$ for $t > 0$, and, for given positive reals $\{r_m\}_{m=1}^M$ and given n, we define $\alpha_1, \alpha_2, \beta_1, \beta_2$ as follows:
$$\alpha_1 := \alpha_1(\{r_m\}) = 3\Big(\sum_{m=1}^M \frac{r_m^{-2s_m}}{n}\Big)^{\frac{1}{2}}, \qquad \alpha_2 := \alpha_2(\{r_m\}) = 3\Big(\frac{s_m\, r_m^{1-s_m}}{\sqrt{n}}\Big)_{m=1}^M,$$
$$\beta_1 := \beta_1(\{r_m\}) = 3\Big(\sum_{m=1}^M \frac{r_m^{-\frac{2s_m(3-s_m)}{1+s_m}}}{n^{\frac{2}{1+s_m}}}\Big)^{\frac{1}{2}}, \qquad \beta_2 := \beta_2(\{r_m\}) = 3\Big(\frac{s_m\, r_m^{\frac{(1-s_m)^2}{1+s_m}}}{n^{\frac{1}{1+s_m}}}\Big)_{m=1}^M, \qquad (4)$$
(note that $\alpha_1, \alpha_2, \beta_1, \beta_2$ implicitly depend on the reals $\{r_m\}_{m=1}^M$). Then the following theorem gives the general form of the learning rate of ψ-norm MKL.
Theorem 1. Suppose Assumptions 1-4 are satisfied. Let $\{r_m\}_{m=1}^M$ be arbitrary positive reals that can depend on n, and assume $\lambda_1^{(n)} = \alpha_1^2 + \beta_1^2$. Then for all n and $t'$ that satisfy
$$\frac{\log(M)}{n} \le \Big(\frac{\|\alpha_2\|_{\psi^*}}{\alpha_1}\Big)^2 + \Big(\frac{\|\beta_2\|_{\psi^*}}{\beta_1}\Big)^2 \le \frac{\kappa_M}{4\sqrt{n}} \quad\text{and}\quad \eta(t')\sqrt{\max\Big\{\alpha_1^2,\ \beta_1^2,\ \frac{M\log(M)}{n}\Big\}} \le \frac{1}{12},$$
and for all $t \ge 1$, we have
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 \le \frac{24\,\eta(t)^2}{\kappa_M}\Bigg[\alpha_1^2 + \beta_1^2 + \frac{M\log(M)}{n} + 4\bigg(\Big(\frac{\|\alpha_2\|_{\psi^*}}{\alpha_1}\Big)^2 + \Big(\frac{\|\beta_2\|_{\psi^*}}{\beta_1}\Big)^2\bigg)\|f^*\|_\psi^2\Bigg] \qquad (5)$$
with probability $1 - \exp(-t) - \exp(-t')$.
The proof will be given in Appendix D in the supplementary material. One can also find an outline
of the proof in Appendix A in the supplementary material.
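Before specializing the theorem, it may help to see that the quantities $\alpha_1, \alpha_2, \beta_1, \beta_2$ of Eq. (4) are elementary to evaluate for given radii; the following is a direct transcription of those definitions (the array conventions are ours):

    import numpy as np

    def complexities(r, s, n):
        """alpha_1, alpha_2 (vector), beta_1, beta_2 (vector) from Eq. (4)."""
        r, s = np.asarray(r, float), np.asarray(s, float)
        a1 = 3 * np.sqrt(np.sum(r ** (-2 * s)) / n)
        a2 = 3 * s * r ** (1 - s) / np.sqrt(n)
        b1 = 3 * np.sqrt(np.sum(r ** (-2 * s * (3 - s) / (1 + s))
                                / n ** (2 / (1 + s))))
        b2 = 3 * s * r ** ((1 - s) ** 2 / (1 + s)) / n ** (1 / (1 + s))
        return a1, a2, b1, b2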
The statement of Theorem 1 itself is complicated. Thus we will show later concrete learning rates on some examples such as $\ell_p$-MKL. The convergence rate (5) depends on the positive reals $\{r_m\}_{m=1}^M$, but the choice of $\{r_m\}_{m=1}^M$ is arbitrary. Thus by minimizing the right-hand side of Eq. (5), we obtain a tight convergence bound as follows:
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Bigg(\min_{\{r_m\}_{m=1}^M:\, r_m > 0}\bigg\{\alpha_1^2 + \beta_1^2 + \bigg[\Big(\frac{\|\alpha_2\|_{\psi^*}}{\alpha_1}\Big)^2 + \Big(\frac{\|\beta_2\|_{\psi^*}}{\beta_1}\Big)^2\bigg]\|f^*\|_\psi^2\bigg\} + \frac{M\log(M)}{n}\Bigg). \qquad (6)$$
There is a trade-off between the first two terms $(a) := \alpha_1^2 + \beta_1^2$ and the third term $(b) := \big[(\|\alpha_2\|_{\psi^*}/\alpha_1)^2 + (\|\beta_2\|_{\psi^*}/\beta_1)^2\big]\|f^*\|_\psi^2$: if we take $\{r_m\}_m$ large, then the term (a) becomes small and the term (b) becomes large; on the other hand, if we take $\{r_m\}_m$ small, then this results in a large (a) and a small (b). Therefore we need to balance the two terms (a) and (b) to obtain the minimum in Eq. (6).
We discuss the obtained learning rate in two situations: (i) the homogeneous complexity situation, and (ii) the inhomogeneous complexity situation:
(i) (homogeneous) All $s_m$ are the same: there exists $0 < s < 1$ such that $s_m = s$ $(\forall m)$ (Sec. 3.1).
(ii) (inhomogeneous) The $s_m$ are not all the same: there exist $m, m'$ such that $s_m \ne s_{m'}$ (Sec. 3.2).
3.1 Analysis on Homogeneous Settings
Here we assume all $s_m$ are the same, say $s_m = s$ for all m (the homogeneous setting). If we further restrict the situation so that all $r_m$ are the same ($r_m = r$ $(\forall m)$ for some r), then the minimization in Eq. (6) can be easily carried out, as in the following lemma. Let $\mathbf{1}$ be the M-dimensional vector each element of which is 1: $\mathbf{1} := (1, \dots, 1)^\top \in \mathbb{R}^M$, and let $\|\cdot\|_{\psi^*}$ be the dual norm of the ψ-norm‡.
Lemma 2. When $s_m = s$ $(\forall m)$ with some $0 < s < 1$ and $n \ge (\|\mathbf{1}\|_{\psi^*}\|f^*\|_\psi / M)^{\frac{4s}{1-s}}$, the bound (6) indicates that
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Big(M^{1-\frac{2s}{1+s}}\, n^{-\frac{1}{1+s}}\, \big(\|\mathbf{1}\|_{\psi^*}\|f^*\|_\psi\big)^{\frac{2s}{1+s}} + \frac{M\log(M)}{n}\Big). \qquad (7)$$
‡The dual of the norm $\|\cdot\|_\psi$ is defined as $\|b\|_{\psi^*} := \sup_a\{b^\top a \mid \|a\|_\psi \le 1\}$.
The proof is given in Appendix G.1 in the supplementary material. Lemma 2 is derived by assuming $r_m = r$ $(\forall m)$, which might make the bound loose. However, when the norm $\|\cdot\|_\psi$ is isotropic (whose definition will appear later), that restriction ($r_m = r$ $(\forall m)$) does not make the bound loose; that is, the upper bound obtained in Lemma 2 is tight and achieves the minimax optimal rate (the minimax optimal rate is the one that cannot be improved by any estimator). In the following, we investigate the general result of Lemma 2 through some important examples.
Convergence Rate of $\ell_p$-MKL  Here we derive the convergence rate of $\ell_p$-MKL ($1 \le p \le \infty$), where $\|f\|_\psi = (\sum_{m=1}^M \|f_m\|_{H_m}^p)^{\frac{1}{p}}$ (for $p = \infty$, it is defined as $\max_m \|f_m\|_{H_m}$). It is well known that the dual norm of the $\ell_p$-norm is the $\ell_q$-norm, where q is the real satisfying $\frac{1}{p} + \frac{1}{q} = 1$. For notational simplicity, let $R_p := \big(\sum_{m=1}^M \|f_m^*\|_{H_m}^p\big)^{\frac{1}{p}}$. Then substituting $\|f^*\|_\psi = R_p$ and $\|\mathbf{1}\|_{\psi^*} = \|\mathbf{1}\|_{\ell_q} = M^{\frac{1}{q}} = M^{1-\frac{1}{p}}$ into the bound (7), the learning rate of $\ell_p$-MKL is given as
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Big(n^{-\frac{1}{1+s}}\, M^{1-\frac{2s}{p(1+s)}}\, R_p^{\frac{2s}{1+s}} + \frac{M\log(M)}{n}\Big). \qquad (8)$$
If we further assume n is sufficiently large so that $n \ge M^{\frac{2}{p}} R_p^{-2} (\log M)^{\frac{1+s}{s}}$, the leading term is the first one, and thus we have
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Big(n^{-\frac{1}{1+s}}\, M^{1-\frac{2s}{p(1+s)}}\, R_p^{\frac{2s}{1+s}}\Big). \qquad (9)$$
Note that as the complexity s of the RKHSs becomes small, the convergence rate becomes fast. It is known that $n^{-\frac{1}{1+s}}$ is the minimax optimal learning rate for single kernel learning. The derived rate of $\ell_p$-MKL is obtained by multiplying a coefficient depending on M and $R_p$ by the optimal rate of single kernel learning. To investigate the dependency of $R_p$ on the learning rate, let us consider two extreme settings, i.e., a sparse setting $(\|f_m^*\|_{H_m})_{m=1}^M = (1, 0, \dots, 0)$ and a dense setting $(\|f_m^*\|_{H_m})_{m=1}^M = (1, \dots, 1)$, as in [15] (a short arithmetic check follows this list).
• $(\|f_m^*\|_{H_m})_{m=1}^M = (1, 0, \dots, 0)$: $R_p = 1$ for all p. Therefore the convergence rate $n^{-\frac{1}{1+s}} M^{1-\frac{2s}{p(1+s)}}$ is fast for small p, and the minimum is achieved at $p = 1$. This means that $\ell_1$ regularization is preferred for a sparse truth.
• $(\|f_m^*\|_{H_m})_{m=1}^M = (1, \dots, 1)$: $R_p = M^{\frac{1}{p}}$, thus the convergence rate is $M\, n^{-\frac{1}{1+s}}$ for all p. Interestingly, for a dense ground truth, there is no dependency of the convergence rate on the parameter p (later we will show that this is not the case in inhomogeneous settings (Sec. 3.2)). That is, the convergence rate is M times the optimal learning rate of single kernel learning ($n^{-\frac{1}{1+s}}$) for all p. This means that for dense settings, the complexity of solving the MKL problem is equivalent to that of solving M single kernel learning problems.
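The two bullet points above reduce to exponent arithmetic on M; the following lines (with arbitrary example values of s and p) confirm that the M-exponent collapses to exactly 1 for the dense truth regardless of p:

    for s in (0.25, 0.5, 0.9):
        for p in (1.0, 4 / 3, 2.0, 8.0):
            sparse_exp = 1 - 2 * s / (p * (1 + s))            # R_p = 1
            dense_exp = sparse_exp + (1 / p) * 2 * s / (1 + s)  # R_p = M^{1/p}
            assert abs(dense_exp - 1.0) < 1e-12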
Comparison with Existing Bounds  Here we compare the bound for $\ell_p$-MKL derived above with the existing bounds. Let $\mathcal{H}_{\ell_p}(R)$ be the $\ell_p$-mixed-norm ball with radius R: $\mathcal{H}_{\ell_p}(R) := \{f = \sum_{m=1}^M f_m \mid (\sum_{m=1}^M \|f_m\|_{H_m}^p)^{\frac{1}{p}} \le R\}$. [10, 16, 15] gave "global" type bounds for $\ell_p$-MKL as
$$R(f) \le \hat{R}(f) + C\, M^{1-\frac{1}{p}}\sqrt{\frac{\log(M)}{n}}\, R \quad \text{for all } f \in \mathcal{H}_{\ell_p}(R), \qquad (10)$$
where $R(f)$ and $\hat{R}(f)$ are the population risk and the empirical risk. The first observation is that the bounds by [10] and [15] are restricted to the situation $1 \le p \le 2$. On the other hand, our analysis and that of [16] cover all $p \ge 1$. Second, since our bound is specialized to the regularized risk minimizer $\hat{f}$ defined in Eq. (1), while the existing bound (10) is applicable to all $f \in \mathcal{H}_{\ell_p}(R)$, our bound is sharper than theirs for sufficiently large n. To see this, suppose $n \ge M^{\frac{2}{p}} R_p^{-2}$; then we have $n^{-\frac{1}{1+s}} M^{1-\frac{2s}{p(1+s)}} \le n^{-\frac{1}{2}} M^{1-\frac{1}{p}}$. Moreover we should note that s can be as large as the Spectral Assumption (A3) allows. Thus the bound (10) is formally recovered by our analysis by letting s approach 1.
Recently [13] gave a tighter convergence rate utilizing the localization technique:
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Big(\min_{p' \ge p}\ \sqrt{{p'}^{\,p'-1}}\; n^{-\frac{1}{1+s}}\, M^{1-\frac{2s}{p'(1+s)}}\, R_{p'}^{\frac{2s}{1+s}}\Big),$$
under a strong condition $\kappa_M = 1$ that imposes that all RKHSs are completely uncorrelated to each other. Comparing our bound with their result, the factors $\min_{p' \ge p}$ and $\sqrt{{p'}^{\,p'-1}}$ do not appear in our bound (if the term $\sqrt{{p'}^{\,p'-1}}$ were absent, the minimum of $\min_{p' \ge p}$ would be attained at $p' = p$, and thus our bound is tighter); moreover our analysis does not need the strong assumption $\kappa_M = 1$.
Convergence Rate of Elasticnet-MKL  Elasticnet-MKL employs a mixture of the $\ell_1$ and $\ell_2$ norms as the regularizer: $\|f\|_\psi = \tau\|f\|_{\ell_1} + (1-\tau)\|f\|_{\ell_2}$ where $\tau \in [0, 1]$. Then its dual norm is given by $\|b\|_{\psi^*} = \min_{a \in \mathbb{R}^M} \max\Big\{\frac{\|a\|_{\ell_\infty}}{\tau},\ \frac{\|a - b\|_{\ell_2}}{1-\tau}\Big\}$. Therefore, by a simple calculation, we have $\|\mathbf{1}\|_{\psi^*} = \frac{\sqrt{M}}{1-\tau+\tau\sqrt{M}}$. Hence Eq. (7) gives the convergence rate of elasticnet-MKL as
$$\|\hat{f} - f^*\|_{L_2(\Pi)}^2 = O_p\Bigg(n^{-\frac{1}{1+s}}\, \frac{M^{1-\frac{s}{1+s}}}{(1-\tau+\tau\sqrt{M})^{\frac{2s}{1+s}}}\, \big(\tau\|f^*\|_{\ell_1} + (1-\tau)\|f^*\|_{\ell_2}\big)^{\frac{2s}{1+s}} + \frac{M\log(M)}{n}\Bigg).$$
Note that, when $\tau = 0$ or $\tau = 1$, this rate is identical to that of $\ell_2$-MKL or $\ell_1$-MKL obtained in Eq. (8), respectively.
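The closed form for $\|\mathbf{1}\|_{\psi^*}$ can be sanity-checked against the min-max expression for the dual norm with a general-purpose optimizer (a rough numerical sketch; the starting point and tolerances are ours):

    import numpy as np
    from scipy.optimize import minimize

    def dual_norm_one(M, tau):
        b = np.ones(M)
        # ||b||_{psi*} = min_a max( ||a||_inf / tau, ||a - b||_2 / (1 - tau) )
        obj = lambda a: max(np.max(np.abs(a)) / tau,
                            np.linalg.norm(a - b) / (1 - tau))
        res = minimize(obj, x0=b / 2, method="Nelder-Mead",
                       options={"xatol": 1e-8, "fatol": 1e-8,
                                "maxiter": 20000})
        return res.fun

    M, tau = 4, 0.3
    closed_form = np.sqrt(M) / (1 - tau + tau * np.sqrt(M))
    print(dual_norm_one(M, tau), closed_form)  # should roughly agree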
3.1.1
Minimax Lower Bound
In this section, we show that the derived learning rate (7) achieves the minimax learning rate on the ψ-norm ball
$$\mathcal{H}_\psi(R) := \Big\{f = \sum_{m=1}^M f_m \ \Big|\ \|f\|_\psi \le R\Big\},$$
when the norm is isotropic. We say the ψ-norm $\|\cdot\|_\psi$ is isotropic when there exists a universal constant $\tilde{c}$ such that
$$\|\mathbf{1}\|_{\psi^*}\,\|\mathbf{1}\|_\psi \le \tilde{c}\,M = \tilde{c}\,\|\mathbf{1}\|_{\ell_1}, \qquad \|b\|_\psi \le \|b'\|_\psi \ \ (\text{if } 0 \le b_m \le b'_m\ (\forall m)), \qquad (11)$$
(note that the inverse inequality $M \le \|\mathbf{1}\|_{\psi^*}\|\mathbf{1}\|_\psi$ of the first condition always holds by the definition of the dual norm). Practically used regularizations usually satisfy this isotropic property. In fact, $\ell_p$-MKL, elasticnet-MKL and VSKL satisfy the isotropic property with $\tilde{c} = 1$.
We derive the minimax learning rate in a simpler situation. First we assume that each RKHS is the same as the others. That is, the input vector is decomposed into M components like $x = (x^{(1)}, \dots, x^{(M)})$ where $\{x^{(m)}\}_{m=1}^M$ are M i.i.d. copies of a random variable X, and $H_m = \{f_m \mid f_m(x) = f_m(x^{(1)}, \dots, x^{(M)}) = \tilde{f}_m(x^{(m)}),\ \tilde{f}_m \in \tilde{H}\}$ where $\tilde{H}$ is an RKHS shared by all $H_m$. Thus $f \in \mathcal{H}^{\oplus M}$ is decomposed as $f(x) = f(x^{(1)}, \dots, x^{(M)}) = \sum_{m=1}^M \tilde{f}_m(x^{(m)})$ where each $\tilde{f}_m$ is a member of the common RKHS $\tilde{H}$. We denote by $\tilde{k}$ the kernel associated with the RKHS $\tilde{H}$.
In addition to the condition on the upper bound of the spectrum (Spectral Assumption (A3)), we assume that the spectra of all the RKHSs $H_m$ have the same lower bound of polynomial rate.
Assumption 5. (Strong Spectral Assumption) There exist $0 < s < 1$ and $0 < c, \tilde{c}'$ such that
(A6)  $\tilde{c}'\,\ell^{-\frac{1}{s}} \le \tilde{\mu}_\ell \le c\,\ell^{-\frac{1}{s}} \quad (1 \le \forall \ell),$
where $\{\tilde{\mu}_\ell\}_{\ell=1}^\infty$ is the spectrum of the integral operator $T_{\tilde{k}}$ corresponding to the kernel $\tilde{k}$. In particular, the spectrum of $T_{k_m}$ also satisfies $\mu_{\ell,m} \ge \tilde{c}'\,\ell^{-\frac{1}{s}}$ $(\forall \ell, m)$.
Without loss of generality, we may assume that $E[\tilde{f}(X)] = 0$ $(\forall \tilde{f} \in \tilde{H})$. Since each $f_m$ receives an i.i.d. copy of X, the spaces $H_m$ are orthogonal to each other:
$$E[f_m(X) f_{m'}(X)] = E\big[\tilde{f}_m(X^{(m)})\,\tilde{f}_{m'}(X^{(m')})\big] = 0 \quad (\forall f_m \in H_m,\ \forall f_{m'} \in H_{m'},\ \forall m \ne m').$$
We also assume that the noise $\{\epsilon_i\}_{i=1}^n$ is an i.i.d. normal sequence with standard deviation $\sigma > 0$. Under the assumptions described above, we have the following minimax $L_2(\Pi)$-error.
Theorem 3. Suppose $R > 0$ is given and $n > \frac{\tilde{c}^2 M^2}{R^2 \|\mathbf{1}\|_{\psi^*}^2}$ is satisfied. Then the minimax learning rate on $\mathcal{H}_\psi(R)$ for an isotropic norm $\|\cdot\|_\psi$ is lower bounded as
$$\min_{\hat{f}}\ \max_{f^* \in \mathcal{H}_\psi(R)} E\,\big\|\hat{f} - f^*\big\|_{L_2(\Pi)}^2 \ \ge\ C\, M^{1-\frac{2s}{1+s}}\, n^{-\frac{1}{1+s}}\, \big(\|\mathbf{1}\|_{\psi^*} R\big)^{\frac{2s}{1+s}}, \qquad (12)$$
where the inf is taken over all measurable functions of the n samples $\{(x_i, y_i)\}_{i=1}^n$.
The proof will be given in Appendix F in the supplementary material. One can see that the convergence rate derived in Eq. (7) achieves the minimax rate on the ψ-norm ball (Theorem 3) up to the $\frac{M\log(M)}{n}$ term, which is negligible when the number of samples is large. This means that the ψ-norm regularization is well suited to making the estimator included in the ψ-norm ball.
3.2 Analysis on Inhomogeneous Settings
In the previous section (the analysis of homogeneous settings), we have not seen any theoretical justification supporting the fact that dense MKL methods like $\ell_{4/3}$-MKL can outperform the sparse $\ell_1$-MKL [10]. In this section, we show that dense type regularizations can outperform the sparse regularization in inhomogeneous settings (there exist $m, m'$ such that $s_m \ne s_{m'}$). For simplicity, we focus on $\ell_p$-MKL and discuss the relation between the learning rate and the norm parameter p.
Let us consider an extreme situation where $s_1 = s$ for some $0 < s < 1$ and $s_m = 0$ $(m > 1)$†. In this situation, we have
$$\alpha_1 = 3\Big(\frac{r_1^{-2s} + M - 1}{n}\Big)^{\frac{1}{2}},\quad \alpha_2 = 3\,\frac{s\,r_1^{1-s}}{\sqrt{n}},\quad \beta_1 = 3\Big(\frac{r_1^{-\frac{2s(3-s)}{1+s}} + M - 1}{n^{\frac{2}{1+s}}}\Big)^{\frac{1}{2}},\quad \beta_2 = 3\,\frac{s\,r_1^{\frac{(1-s)^2}{1+s}}}{n^{\frac{1}{1+s}}},$$
for all p. Note that these $\alpha_1, \alpha_2, \beta_1$ and $\beta_2$ have no dependency on p. Therefore the learning bound (6) is smallest when $p = \infty$, because $\|f^*\|_{\ell_\infty} \le \|f^*\|_{\ell_p}$ for all $1 \le p < \infty$. In particular, when $(\|f_m^*\|_{H_m})_{m=1}^M = \mathbf{1}$, we have $\|f^*\|_{\ell_1} = M\|f^*\|_{\ell_\infty}$, and thus obviously the learning rate of $\ell_\infty$-MKL given by Eq. (6) is faster than that of $\ell_1$-MKL. In fact, through a somewhat cumbersome calculation, one can check that $\ell_\infty$-MKL can be $M^{\frac{2s}{1+s}}$ times faster than $\ell_1$-MKL in the worst case. This indicates that, when the complexities of the RKHSs are inhomogeneous, the generalization abilities of dense type regularizations (e.g., $\ell_\infty$-MKL) can be better than the sparse type regularization ($\ell_1$-MKL). In real settings, it is likely that one uses various types of kernels and the complexities of the RKHSs become inhomogeneous. As mentioned above, it has often been reported that $\ell_1$-MKL is outperformed by dense type MKL such as $\ell_{4/3}$-MKL in numerical experiments [10]. Our theoretical analysis explains well these experimental results.
†In our assumption $s_m$ should be greater than 0. However we formally put $s_m = 0$ $(m > 1)$ for simplicity of discussion. For a rigorous discussion, one might consider arbitrarily small $s_m \le s$.
4
Conclusion
We have shown a unified framework to derive the learning rate of MKL with arbitrary mixed-norm-type regularization. To analyze the general result, we considered two situations: homogeneous settings and inhomogeneous settings. We have seen that the convergence rate of $\ell_p$-MKL obtained in homogeneous settings is tighter and requires a less restrictive condition than existing results. We have also shown the convergence rate of elasticnet-MKL, and proved that the derived learning rate is minimax optimal. Furthermore, we observed that our bound well explains the favorable experimental results for dense type MKL by considering the inhomogeneous settings. This is the first result that strongly justifies the effectiveness of dense type regularizations in MKL.
Acknowledgement This work was partially supported by MEXT Kakenhi 22700289 and the Aihara Project, the FIRST program from JSPS, initiated by CSTP.
References
[1] J. Aflalo, A. Ben-Tal, C. Bhattacharyya, J. S. Nath, and S. Raman. Variable sparsity kernel learning. Journal of Machine Learning Research, 12:565-592, 2011.
[2] A. Argyriou, R. Hauser, C. A. Micchelli, and M. Pontil. A DC-programming algorithm for kernel selection. In the 23rd ICML, pages 41-48, 2006.
[3] F. R. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179-1225, 2008.
[4] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Advances in Neural Information Processing Systems 21, pages 105-112, 2009.
[5] F. R. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In the 21st ICML, pages 41-48, 2004.
[6] P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33:1487-1537, 2005.
[7] C. Bennett and R. Sharpley. Interpolation of Operators. Academic Press, Boston, 1988.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In UAI 2009, 2009.
[9] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning non-linear combinations of kernels. In Advances in Neural Information Processing Systems 22, pages 396-404, 2009.
[10] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In the 27th ICML, pages 247-254, 2010.
[11] D. E. Edmunds and H. Triebel. Function Spaces, Entropy Numbers, Differential Operators. Cambridge University Press, Cambridge, 1996.
[12] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82-95, 1971.
[13] M. Kloft and G. Blanchard. The local Rademacher complexity of lp-norm multiple kernel learning, 2011. arXiv:1103.0790.
[14] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K.-R. Müller, and A. Zien. Efficient and accurate lp-norm multiple kernel learning. In Advances in Neural Information Processing Systems 22, pages 997-1005, 2009.
[15] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. lp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953-997, 2011.
[16] M. Kloft, U. Rückert, and P. L. Bartlett. A unifying view of multiple kernel learning. In ECML/PKDD, 2010.
[17] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34:2593-2656, 2006.
[18] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. In COLT, pages 229-238, 2008.
[19] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics, 38(6):3660-3695, 2010.
[20] G. Lanckriet, N. Cristianini, L. El Ghaoui, P. Bartlett, and M. Jordan. Learning the kernel matrix with semi-definite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[21] L. Meier, S. van de Geer, and P. Bühlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779-3821, 2009.
[22] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099-1125, 2005.
[23] C. S. Ong, A. J. Smola, and R. C. Williamson. Learning the kernel with hyperkernels. Journal of Machine Learning Research, 6:1043-1071, 2005.
[24] G. Raskutti, M. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Technical report, 2010. arXiv:1008.3654.
[25] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[26] J. Shawe-Taylor. Kernel learning for novelty detection. In NIPS 2008 Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, Whistler, 2008.
[27] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In COLT, pages 169-183, 2006.
[28] I. Steinwart. Support Vector Machines. Springer, 2008.
[29] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
[30] T. Suzuki and R. Tomioka. SpicyMKL: A fast algorithm for multiple kernel learning with thousands of kernels. Machine Learning, 85(1):77-108, 2011.
[31] R. Tomioka and T. Suzuki. Sparsity-accuracy trade-off in MKL. In NIPS 2009 Workshop: Understanding Multiple Kernel Learning Methods, Whistler, 2009.
[32] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[33] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York, 1996.
[34] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In the 26th ICML, pages 1065-1072, 2009.
[35] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. In COLT, 2009.
[36] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68(1):49-67, 2006.
3,536 | 4,201 | A Pylon Model for Semantic Segmentation
Victor Lempitsky
Andrea Vedaldi
Andrew Zisserman
Visual Geometry Group, University of Oxford*
{vilem,vedaldi,az}@robots.ox.ac.uk
Abstract
Graph cut optimization is one of the standard workhorses of image segmentation since for
binary random field representations of the image, it gives globally optimal results and there
are efficient polynomial time implementations. Often, the random field is applied over a
flat partitioning of the image into non-intersecting elements, such as pixels or super-pixels.
In the paper we show that if, instead of a flat partitioning, the image is represented by a
hierarchical segmentation tree, then the resulting energy combining unary and boundary
terms can still be optimized using graph cut (with all the corresponding benefits of global
optimality and efficiency). As a result of such inference, the image gets partitioned into a
set of segments that may come from different layers of the tree.
We apply this formulation, which we call the pylon model, to the task of semantic segmentation where the goal is to separate an image into areas belonging to different semantic
classes. The experiments highlight the advantage of inference on a segmentation tree (over
a flat partitioning) and demonstrate that the optimization in the pylon model is able to flexibly choose the level of segmentation across the image. Overall, the proposed system has
superior segmentation accuracy on several datasets (Graz-02, Stanford background) compared to previously suggested approaches.
1
Introduction
Semantic segmentation (i.e. the task of assigning each pixel of a photograph to a semantic class label) is
often tackled via a "flat" conditional random field model [10, 29]. This model considers the subdivision
of an image into small non-overlapping elements (pixels or small superpixels). It then learns and evaluates
the likelihood of each element as belonging to one of the semantic classes (unary terms) and combine these
likelihoods with pairwise terms that encourage neighboring elements to take the same labels, and in this way
propagates the information from elements that are certain about their labels to uncertain ones. The appeal of
the flat CRF model is the availability of efficient MAP inference based on graph cut [7], which is exact for
two-label problems with submodular pairwise terms [4, 16] and gets very close to global optima for many
practical cases of multi-label segmentation [31].
The main limitation of the flat CRF model is that since each superpixel takes only one semantic label, superpixels have to be small, so that they do not straddle class boundaries too often. Thus, the amount of visual
information inside the superpixel is limited. The best performing CRF models therefore consider wider local
context around each superpixel, but as the object and class boundaries are not known in advance, the support
area over which such context information is aggregated is not adapted. For this reason, such context-based
descriptors have limited repeatability and may not allow reliable classification. This is, in fact, a manifestation of a well-known chicken-and-egg problem between segmentation and recognition (given spatial support
based on proper segmentation, recognition is easy [20], but to get the proper segmentation prior recognition
is needed).
Recently, several semantic segmentation methods that explicitly interleave segmentation and recognition have
been proposed. Such methods [8, 11, 18] consider a large pool of overlapping segments that are much bigger
*Victor Lempitsky is currently with Yandex, Moscow. This work was supported by ERC grant VisRec no. 228180 and by the PASCAL Network of Excellence.
Figure 1: Pool-based binary segmentation. For binary semantic segmentation, the pylon model is able
to find a globally optimal subset of segments and their labels (bottom row), while optimizing unary and
boundary costs. Here we show a result of such inference for images from each of the Graz-02 [23] datasets
(people and bikes ? left, cars ? right).
than superpixels in flat CRF approaches. These methods then perform joint optimization over the choice of
several non-overlapping segments from the pool and the semantic labels of the chosen segments. As a result,
in the ideal case, a photograph is pieced from a limited number of large segments, each of which can be
unambiguously assigned to one of the semantic classes, based on the information contained in it. Essentially,
the photograph is then ?explained? by these segments that often correspond to objects or their parts. Such
scene explanation can then be used as a basis for more high-level scene understanding than just semantic
segmentation.
In this work, we present a pylon model for semantic segmentation which largely follows the pool-based
semantic segmentation approach from [8, 11, 18]. Our goal is to overcome the main problem of existing
pool-based approaches, which is the fact that they all face very hard optimization problems and tackle them
with rather inexact and slow algorithms (greedy local moves for [11], loose LP relaxations in [8, 18]). Our
aim is to integrate the exact and efficient inference employed by flat CRF methods with the strong scene
interpretation properties of the pool-based approaches.
Like previous pool-based approaches, the pylon model ?explains? each image as a union of non-intersecting
segments. We achieve the tractability of the inference by restricting the pool of segments to come from a
segmentation tree. Segmentation trees have been investigated for a long time, and several efficient algorithms
have been developed [1, 2, 38, 27]. Furthermore, any binary unsupervised algorithm (e.g. normalized cut
[28]) can be used to obtain a segmentation tree via iterative application. As segmentation trees reflect the
hierarchical nature of visual scenes, algorithms based on segmentation-trees achieved very impressive results
for visual-recognition tasks [13, 22, 34]. For our purpose, the important property of tree-based segment pool
is that each image region is covered by segments of very different sizes and there is a good chance that one
such segment does not straddle object boundaries but is still big enough to contain enough visual information
for a reliable class identification.
Inference in pylons optimizes the sum of the real-valued costs of the segments selected to explain the image. Similarly to random field approaches, pylons also include spatial smoothness terms that encourage
the boundary compactness of the resulting segmentations (this could be e.g. the popular contrast-dependent
Potts-potentials). Such boundary terms often remedy the imperfections of segmentation trees by propagating
the information from big segments that fit within object boundaries to smaller ones that have to supplement
the big segments to fit class boundaries accurately.
The most important advantage of pylons over previous pool-based methods [8, 11, 18] is the tractability of
inference. Similarly to flat CRFs, in the two-class (e.g. foreground-background) case, the globally optimal
set of segments can be found exactly and efficiently via graph cut (Figure 1). Such inference can then be
extended to multi-label problems via an alpha-expansion procedure [7] that gives solutions close to a global
optimum. Effectively, inference in pylons is as ?easy? as in the flat CRF approach. We then utilize such a
?free lunch? to achieve a better than state-of-the-art performance on several datasets (Graz-02 datasets[23]
for binary label segmentations, Stanford background dataset [11] for multi-label segmentation). At least
in part, the excellent performance of our system is explained by the fact that we can learn both unary and
boundary term parameters within a standard max-margin approach developed for CRFs [32, 33, 35], which is
not easily achievable with the approximate and slow inference in previous pool-based methods [17]. We also
demonstrate that the pylon model achieves higher segmentation accuracy than flat CRFs, or non-loopy pylon
models without boundary terms, given the same features and the same learning procedure.
Other related work. The use of segmentation trees for semantic segmentation has a long history. The
older works of [5] and [9] as well as a recent work [22] use a sequence of top-down inference processes
on a segmentation tree to infer the class labels at the leaf level. Our work is probably more related to the
approaches performing MAP estimation in tree-structured/hierarchical random fields. For this, Awasthi et
al. [3], Reynolds and Murphy [25] and Plath et al. [24] use pure tree-based random fields without boundary
terms, while Schnitzspan et al. [26] and Ladicky et al. [19] incorporate boundary terms and perform semantic
segmentation at different levels of granularity. The weak consistency between levels is then enforced with
higher-order potentials. Overall, our philosophy is different from all these works as we obtain an explicit
scene interpretation as a union of few non-intersecting segments, while the tree-structured/hierarchical CRF
works assign class labels and aggregate unary terms over all segments in the tree/hierarchy. Our inference
however is similar to that of [19]. In fact, while below we demonstrate how inference in pylons can be
reduced to submodular pseudo-boolean quadratic optimization, it can also be reduced to the hierarchical
associative CRFs introduced in [19]. We also note that another interesting approach to joint segmentation
and classification based on this class of CRFs has been recently proposed by Singaraju and Vidal [30].
2
Random fields, Pool-based models, and Pylons
We now derive a joint framework covering the flat random field models, the preceding pool-based models,
and the pylon model introduced in this paper.
We consider a semantic segmentation problem for an image I and a set of K semantic classes, so that each
part of the image domain has to be assigned to one of the classes. Let S = {Si |i = 1 . . . N } be a pool of
segments, i.e. a set of sub-regions of the image domain. For a traditional (flat) random field approach, this
pool comes from an image partitioned into a set of small non-intersecting segments (or pixels); in the case
of the pool-based models this is an arbitrary set of many segments coming from multiple flat segmentations
[18] or explored via local moves [11]. In the pylon case, S contains all segments in a segmentation tree
computed for an image I.
A segmentation f then assigns each S_i an integer label f_i within the range from 0 to K. The special label f_i = 0
means that the segment is not included into the segmentation, while the rest of the labels mean that the
segment participates in the explanation of the scene and is assigned to the semantic class f_i. Not all labelings
are consistent and correspond to valid segmentations. First of all, the completeness constraint requires that
each image pixel p is covered by a segment with non-zero label:

    ∀p ∈ I, ∃i : S_i ∋ p, f_i > 0    (1)
For the flat random field case, this means that zero labels are prohibited and each segment has to be assigned
some non-zero label. For pool-based methods and the pylon model, this is not the case, as each pixel has a
multitude of segments in S covering it. Thus, zero labels are allowed. Furthermore, non-zero labels should
be controlled by the non-overlap constraint requiring that overlapping segments cannot both take non-zero labels:

    ∀i ≠ j : S_i ∩ S_j ≠ ∅ ⇒ f_i · f_j = 0.    (2)
Once again, the constraint (2) is not needed for flat CRFs as their pools do not contain overlapping segments.
It is, however, non-trivial for the existing pool-based models and for the pylon model, where overlapping
(nested) segments exist. Under the constraints (1) and (2), each pixel p in the image is covered by exactly
one segment with non-zero label and we define the number of this segment as i(p). The semantic label f (p)
of the pixel p is then determined as fi(p) .
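As a concrete illustration (our own sketch, not part of the paper's implementation), the following Python snippet checks constraints (1) and (2) for a candidate labeling f over a segment pool and returns the induced pixel labeling f(p); the segment representation and all names are hypothetical.

    # A minimal sketch (ours) of validating a labeling f over a segment pool:
    # segments are represented as Python sets of pixel indices.
    def check_and_project(segments, f, num_pixels):
        """segments: list of sets of pixel ids; f: list of labels in 0..K."""
        pixel_label = [0] * num_pixels
        covered = [False] * num_pixels
        for seg, label in zip(segments, f):
            if label == 0:
                continue  # label 0: segment not included in the segmentation
            for p in seg:
                if covered[p]:
                    # two overlapping segments carry non-zero labels
                    raise ValueError("non-overlap constraint (2) violated")
                covered[p] = True
                pixel_label[p] = label  # f(p) = f_{i(p)}
        if not all(covered):
            raise ValueError("completeness constraint (1) violated")
        return pixel_label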
To formulate the energy function, we define the set of real-valued unary terms U_i(f_i), where each U_i specifies
the cost of including a segment S_i into the segmentation with the label f_i > 0. Furthermore, we associate
the non-negative boundary cost V_pq with any pair of pixels adjacent in the image domain (p, q) ∈ N. For
any segmentation f we then define the boundary cost as the sum of boundary costs over the sets of adjacent
pixel pairs (p, q) that straddle the boundaries between classes induced by this segmentation (i.e. (p, q) ∈
N : f(p) ≠ f(q)). In other words, the boundary terms are accumulated along the boundary between pool
segments that are assigned different non-zero semantic labels.

Overall, the energy that we are interested in is defined as:

    E(f) = Σ_{i ∈ 1..N : f_i > 0} U_i(f_i) + Σ_{(p,q) ∈ N : f(p) ≠ f(q)} V_pq    (3)
Figure 2: Inference in the pylon model (best viewed in color): a tree segmentation of an image (left) and
a corresponding graphical model for the 2-class pylon (right). Each pair of nodes in the graphical model
corresponds to a segment in the segmentation tree, while each edge corresponds to a pairwise term in the
pseudo-boolean energy (9)-(10). Blue edges (4) enforce the segment cost potentials (U-terms) as well as
consistency of x (children of a shaded node have to be shaded). Red edges (6) and magenta edges (7)
enforce non-overlap and completeness. Green edges (8) encode boundary terms. Shading gives an example
of a valid labeling for the x variables (x_i^t = 1 are shaded). Left: the corresponding semantic segmentation on the
segmentation tree consisting of three segments is highlighted.
and we wish to minimize this energy subject to the constraints (1) and (2). The energy (3) contains the contribution
of unary terms only from those segments that are selected to explain the image (f_i > 0).

Note that the energy functional has the same form as that of a traditional random field (with weighted Potts
boundary terms). The pool-based model in [18] is also similar, but lacks the boundary terms. It is well-known
that for flat random fields, the optimal segmentation f in the binary case K = 2 with V_pq ≥ 0 can be found
with graph cut [7, 12, 16]. Furthermore, for K > 2 one can get very close to the global optimum (within a factor
of 2 with a guarantee [7], but much closer in practice [31]) by applying graph cut-based alpha-expansions [7].
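To make the energy (3) concrete, here is a small sketch (ours, with assumed data structures): U[i][k] is the unary cost of segment i under class k, V maps an adjacent pixel pair (p, q) to its boundary cost V_pq, and pixel_label can be obtained with the projection sketched above.

    # A small sketch (ours) of evaluating the energy (3) for a valid labeling f.
    def energy(f, U, V, pixel_label):
        unary = sum(U[i][fi] for i, fi in enumerate(f) if fi > 0)
        boundary = sum(cost for (p, q), cost in V.items()
                       if pixel_label[p] != pixel_label[q])
        return unary + boundary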
For pylons as well as for the pool-based approaches [11, 18], the segment pool is much richer. As a consequence, the constraints (1) and (2), which are trivial to enforce in the case of a flat random field, become
non-trivial. In the next section, we demonstrate that in the case of a tree-based pool of segments (pylon
model), one can still find the globally optimal f in the case K = 2 and V_pq ≥ 0, and use alpha-expansions in
the case K > 2.
1-class model. Before discussing the inference and learning in the pylon model, we briefly introduce a
modification of the generic model derived above, which we call a 1-class model. A 1-class model can be
used for semantic foreground-background segmentation tasks (e.g. segmenting out people in an image). The
2-class model defined in (1)-(3) for K = 2 can of course also be used for this purpose. The difference is
that the 1-class model treats the foreground and background in an asymmetric way. Namely, in the 1-class case
the labels f_i can only take the values 0 or 1 (i.e. K = 1) and the completeness constraint (1) is omitted. As
such, each segmentation f defines the foreground as the set of segments with f_i = 1, and the semantic label of
a pixel f(p) is defined to be 1 if p belongs to some segment S_i with f_i = 1 and f(p) = 0 otherwise. In the
1-class case, each segment thus has a single unary cost U_i = U_i(1) associated with it. The energy remains
the same as in (3).
For the flat random field case, the 1-class and 2-class models are equivalent (one can just define U_i^{1class} =
U_i^{2class}(2) − U_i^{2class}(1) to get the same energy up to an additive constant). For pool-based models and pylons,
this is no longer the case, and the 1-class model is non-trivially different from the 2-class model. Intuitively,
a 1-class model only "explains" the foreground as a union of segments, while leaving the background part
"unexplained". As shown in our experiments, this may be beneficial, e.g. when the visual appearance of the
foreground is more repeatable than that of the background.
3 Inference in pylon models
Two-class case. We first demonstrate how the energy (3) can be globally minimized subject to (1)-(2) in the
case of a tree-based pool and K = 2. Later, we will outline inference in the case K > 2 and in the case of
the 1-class model K = 1. For each segment number i = 1..N we define p(i) to be the number of its parent
segment in the tree. We further assume that the first L segments correspond to leaves in the segmentation tree
and that the last segment S_N is the root (i.e. the entire image).
For each segment i, we introduce two binary variables x_i^1 and x_i^2 indicating whether the segment falls entirely
into the part of the image assigned to class 1 or 2. The exact semantic meaning and relation of these labels
to the variables f is as follows: x_i^t equals 1 if and only if one of its ancestors j up the tree (including the segment i
itself) has the label f_j = t. We now re-express the constraints (1)-(2) and the energy (3) via a real-valued (i.e.
pseudo-boolean) energy of the newly-introduced variables that involves pairwise terms only (Figure 2).

First of all, the definition of the x variables implies that if x_i^t is zero, then x_{p(i)}^t has to be zero as well.
Furthermore, x_i^t = 1 and x_{p(i)}^t = 0 imply that the segment i has the label f_i = t (incurring the cost U_i(t) in
(3)). These two conditions can be expressed with a bottom-up pairwise term on the variables x_i^t and x_{p(i)}^t
(one term for each t = 1, 2):

    E_i^t(0, 0) = 0,  E_i^t(0, 1) = +∞,  E_i^t(1, 0) = U_i(t),  E_i^t(1, 1) = 0.    (4)
These potentials express almost all unary terms in (3) except for the unary term of the root node, which can be
expressed as a sum of two unary terms on the new variables (one term for each t = 1, 2):

    E_N^t(0) = 0,  E_N^t(1) = U_N(t).    (5)
The non-overlap constraint (2) can be enforced by demanding that at most one of x_i^1 and x_i^2 can be 1 at
the same time (as otherwise there are two overlapping segments with non-zero f-variables), introducing the
following exclusion pairwise term on the variables x_i^1 and x_i^2:

    E_i^EXC(0, 0) = E_i^EXC(0, 1) = E_i^EXC(1, 0) = 0,  E_i^EXC(1, 1) = +∞.    (6)
The completeness constraint (1) can be expressed by demanding that each leaf segment is covered by either
an ancestor segment with label 1 or with label 2. Consequently, at each leaf node, at least one of x_i^1 and x_i^2 has
to be 1, hence the following pairwise completeness potential for all leaf segments i = 1..L:

    E_i^CPL(0, 0) = +∞,  E_i^CPL(0, 1) = E_i^CPL(1, 0) = E_i^CPL(1, 1) = 0.    (7)
Finally, the only unexpressed part of the optimization problem is the boundary term in (3). To express the
boundary term, we consider the set P of pairs of numbers of adjacent leaf segments. For each such pair (i, j)
of leaf segments (S_i, S_j) we consider all pairs of adjacent pixels (p, q). The boundary cost V_ij between S_i
and S_j is then defined as the sum of pixel-level pairwise costs V_ij = Σ V_pq over all pairs of adjacent pixels
(p, q) ∈ N such that p ∈ S_i and q ∈ S_j or vice versa (i.e. p ∈ S_j and q ∈ S_i). The boundary terms can then
be expressed with pairwise terms over the variables x_i^1 and x_j^1 for all (i, j) ∈ P:

    E_ij^BND(0, 0) = E_ij^BND(1, 1) = 0,  E_ij^BND(0, 1) = E_ij^BND(1, 0) = V_ij.    (8)
Overall, the constrained minimization problem (1)-(3) in the variables f is expressed as the unconstrained
minimization of the following energy of the boolean variables x^1, x^2:

    E(x^1, x^2) = Σ_{t=1,2} Σ_{i=1}^{N−1} E_i^t(x_i^t, x_{p(i)}^t) + Σ_{t=1,2} E_N^t(x_N^t) + Σ_{(i,j)∈P} E_ij^BND(x_i^1, x_j^1) +    (9)

                  Σ_{i=1}^{N} E_i^EXC(x_i^1, x_i^2) + Σ_{i=1}^{L} E_i^CPL(x_i^1, x_i^2)    (10)
The energy (9)-(10) contains two parts. The pairwise terms in the first part (9) involve only such pairs of
variables where both come either from the x^1 set or from the x^2 set. All the pairwise terms in (9) are submodular,
i.e. they obey E(0,0) + E(1,1) ≤ E(0,1) + E(1,0). The pairwise terms in the second part (10) involve
only such pairs of variables where one comes from the x^1 set and the other from the x^2 set. All terms
in (10) are supermodular, i.e. obey E(0,0) + E(1,1) ≥ E(0,1) + E(1,0).
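The two conditions above translate directly into code; the following pair of helpers (ours) states them for a pairwise term E over {0,1}^2, given as a 2x2 nested list E[a][b].

    def is_submodular(E):
        return E[0][0] + E[1][1] <= E[0][1] + E[1][0]

    def is_supermodular(E):
        return E[0][0] + E[1][1] >= E[0][1] + E[1][0]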
Thus, in the energy (9)-(10), submodular terms act within the x^1 and x^2 sets of variables and supermodular
terms act only across the two sets. One can then perform the variable substitution x^2 = 1 − x̄^2 and get a
new energy function E(x^1, x̄^2). During the substitution, the terms (9) remain submodular, while the terms
(10) change from being supermodular to being submodular in the new variables. As a result, one gets a
pseudo-boolean pairwise energy with submodular terms only, which can therefore be minimized exactly, in
time low-polynomial in N, through graph cut on a specially constructed graph [4, 6, 16].
Figure 3: Several examples from the Stanford background dataset [11], where the ability of the pylon model
(middle row) to choose big enough segments allowed it to obtain better semantic segmentation compared to
a flat CRF defined on leaf segments (bottom row). Colors: grey=sky, olive=tree, purple=road, green=grass,
blue=water, red=building, orange=foreground.
Given the optimal values for x^1 and x̄^2, it is trivial to infer the optimal values for x^2 and ultimately for the f variables
(for the latter step, one goes up the tree and sets f_i = t whenever x_i^t = 1 and x_{p(i)}^t = 0).
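This final decoding step is simple enough to spell out; the sketch below (ours, with hypothetical names) assumes x[t][i] holds the optimal binary values on the tree and parent[i] gives the parent's index.

    # A minimal sketch (ours): recover segment labels f_i in {0, 1, 2} from x.
    def decode_f(x, parent, root):
        n = len(parent)
        f = [0] * n
        for i in range(n):
            for t in (1, 2):
                above = 0 if i == root else x[t][parent[i]]
                if x[t][i] == 1 and above == 0:
                    f[i] = t  # class t "enters" the tree exactly at segment i
        return f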
One-class case. Inference in the one-class case is simpler than in the two-class case. As one may expect, it
is sufficient to introduce just a single set of binary variables {x_i^1} and to omit the pairwise terms (6) and (7)
altogether. The resulting energy function is then:

    E(x^1) = Σ_{i=1}^{N−1} E_i^1(x_i^1, x_{p(i)}^1) + E_N^1(x_N^1) + Σ_{(i,j)∈P} E_ij^BND(x_i^1, x_j^1)    (11)

In this case, the non-overlap constraint is enforced by the infinite terms within (4). The pseudo-boolean energy
(11) is submodular and, hence, can be optimized directly via graph cut.
Multi-class case. As in the flat CRF case, the alpha-expansion procedure [7] can be used to extend the 2-class
inference procedure to the case K > 2. Alpha-expansion is an iterative convergent process, where 2-class
inference is applied at each iteration. In our case, given the current labeling f and a particular α ∈ 1...K,
each segment has the following options: (1a) a segment with a non-zero label can retain it, (1b) a
segment with zero label can change it to the current non-zero label of its ancestor (if any), (2) the label f_i can
be changed to α, (3) the label f_i can be changed to 0 (or kept at 0 if already there). Thus, each step results in
a 2-class inference task, where the U and V potentials of the 2-class inference are induced by the U and V
potentials of the multi-label problem (in fact, some boundary terms then become asymmetric if one of the
adjacent segments has the current label α; we do not detail this case here since it is handled in exactly the
same way as in [7]). Alpha-expansion then performs a series of 2-class inferences with α sweeping the range
1...K multiple times until convergence (a schematic outer loop is sketched below).
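The following schematic sketch (ours) shows the alpha-expansion outer loop; expand(f, alpha) stands for one graph-cut solve of the induced 2-class problem and is assumed, not provided here.

    def alpha_expansion(f, K, expand, max_sweeps=10):
        for _ in range(max_sweeps):
            changed = False
            for alpha in range(1, K + 1):
                f_new = expand(f, alpha)   # optimal expansion move w.r.t. alpha
                if f_new != f:
                    f, changed = f_new, True
            if not changed:
                break                      # no alpha lowers the energy: converged
        return f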
4 Implementation and Experiments
Segmentation tree. For this paper, we used the popular segmentation tree approach [2] that is based on the
powerful gPb edge detector and is known to produce high-quality segmentation trees. The implementation
[2] is rather slow (orders of magnitude slower than our inference) and we plan to explore faster segmentation
tree approaches.
Features. We use the following features to describe a segment S_i: (1) a histogram h_i^SIFT of densely sampled
visual SIFT words computed with VLFeat [36]; we use a codebook of size 512, and soft-assign each word
to the 5 nearest codewords via locality-constrained linear coding [39]; (2) a histogram h_i^COL of RGB colors
(codebook size 128; hard-assignment); (3) a histogram h_i^LOC of locations (where each pixel corresponds to a
number from 1 to 36 depending on its position in a uniform 6 × 6 grid); (4) the "contour shape" descriptor
h_i^SHP from [13] (a binned histogram of oriented gPb edge detector responses). Each of the four histograms is
then normalized and mapped by a non-linear coordinate-wise mapping H(·) to a higher-dimensional space,
where the inner product (linear kernel) closely approximates the χ²-kernel in the original space [37]. The
unary term U_i^t is then computed as a scalar product of the stacked descriptor and the parameter weight vector
w_U^t:

    U_i^t = s_i · [H(h_i^SIFT)^T  H(h_i^COL)^T  H(h_i^LOC)^T  H(h_i^SHP)^T  1] · w_U^t.    (12)

Note that each unary term is also multiplied by s_i, the size of the segment S_i. Without such
multiplication, the inference process would be biased towards small segments (leaves in the segmentation
trees).
The boundary cost for a pair of pixels (p, q) ∈ N is set based on the local boundary strength γ_pq estimated
with the gPb edge detector. The exact value of V_pq is then computed as a linear combination of exponentiated
γ_pq with several bandwidths:

    V_pq = [exp(−γ_pq/10)  exp(−γ_pq/40)  exp(−γ_pq/100)  1] · w_V    (13)
We discuss the learning of parameters w below. The meta-parameters (codebook sizes, number of words in
soft-assignment, number of bins for location and contour shape descriptors, bandwidths) were not tweaked
(we set them based on previous experience and have not tried other values).
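For concreteness, here is a small sketch (ours, under the reading of (13) above) of the boundary-cost parameterization: stack the exponentiated boundary strength at the three bandwidths plus a constant 1, then take a scalar product with the learned weight vector w_V (elementwise non-negative, which keeps V_pq non-negative).

    import numpy as np

    def boundary_cost(gamma_pq, w_V, bandwidths=(10.0, 40.0, 100.0)):
        feats = np.array([np.exp(-gamma_pq / b) for b in bandwidths] + [1.0])
        return float(feats @ w_V)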
Max-margin learning of parameters. Denote by w = [w_U^1, ..., w_U^K, w_V], w_V ≥ 0, the parameters of the
pylon model, and by (x̂^1(w), x̂^2(w)) the minimizer of the energy E(x^1, x^2) given in (9)-(10). The
goal is to find a parameter w such that (x̂^1(w), x̂^2(w)) has a small Hamming distance Δ(x̂^1(w), x̂^2(w))
to the segmentation x̄^1, x̄^2 of a training image. The Hamming distance is simply the number of pixels
incorrectly labeled. To obtain a convex optimization problem and regularize its solution, we use the large
margin formulation of [33, 14]. The first step is to rewrite the optimization task (9)-(10) as:

    (x̂^1(w), x̂^2(w)) = argmax_{x^1,x^2} −E(x^1, x^2) = argmax_{x^1,x^2} F(x^1, x^2) + ⟨Ψ(x^1, x^2), w⟩,    (14)

where Ψ(x^1, x^2) is a concatenation of the summed coefficients of (12) and (13) and F(x^1, x^2) accounts for
the terms of E(x^1, x^2) that do not depend on w. Then margin rescaling [14] is used to construct a convex
upper bound Δ₀(w) of the Hamming loss Δ(x̂^1(w), x̂^2(w)):

    Δ₀(w) = max_{x^1,x^2} { Δ(x^1, x^2) + F(x^1, x^2) − F(x̄^1, x̄^2) + ⟨Ψ(x^1, x^2), w⟩ − ⟨Ψ(x̄^1, x̄^2), w⟩ }    (15)

The function Δ₀(w) is convex because it is the upper envelope of a family of planes, one for each setting of x^1, x^2. This allows learning the parameter w as the minimizer of the convex objective function
λ‖w‖²/2 + Δ₀(w), where λ controls overfitting. Optimization uses the cutting plane algorithm described
in [14], which gradually approximates Δ₀(w) by selecting a small representative subset of the exponential
number of planes that figure in (15). These representative planes are found by maximizing (15), which can
be done by the algorithm described in Sect. 3 after accounting for the loss Δ(x^1, x^2) in a suitable adjustment
of the potentials.
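A deliberately simplified sketch (ours): instead of the cutting-plane QP of [14], the snippet below runs plain subgradient descent on λ‖w‖²/2 + Δ₀(w). The oracle loss_aug_inference(w) is assumed to return the joint feature vector of the maximizer of (15), obtained via the loss-adjusted graph-cut inference of Sect. 3, together with the ground-truth feature vector; it is not implemented here.

    import numpy as np

    def learn_w(loss_aug_inference, dim, lam=1e-2, steps=200):
        w = np.zeros(dim)
        for t in range(1, steps + 1):
            psi_hat, psi_gt = loss_aug_inference(w)
            grad = lam * w + (psi_hat - psi_gt)  # subgradient of the objective
            w -= grad / (lam * t)                # Pegasos-style step size
        return w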
Datasets. We consider the three Graz-02 datasets [23] that, to the best of our knowledge, represent the
most challenging datasets for semantic binary (foreground-background) segmentation. Each Graz-02 dataset
has one class of interest (bicycles, cars, and people). The datasets are loosely annotated at the pixel level.
Previous methods reported performance for the fixed splits including 150 training and 150 testing images.
The customary performance measure is the equal recall-precision rate averaged over all pixels in the test
set. In general, when trained with the Hamming loss, our method produces recall slightly lower than precision.
We therefore retrained our system with a weighted Hamming loss (so that false negatives are penalized higher
than false positives), tuning the balancing constant to achieve approximately equal recall and precision (an
alternative would be to use parametric maxflow [15]).
We also consider the Stanford background dataset [11] containing 715 images of outdoor scenes with
pixel-accurate annotations into 8 semantic classes (sky, tree, road, grass, water, building, mountain, and foreground
object). Similarly to previous approaches, we report the percentage of correctly labeled pixels on 5 random
splits of a fixed size (572 training, 143 testing).
Results. We compare the performance of our system with the state-of-the-art in Table 1. We note that our
approach performs considerably better than the state-of-the-art including the CRF-based method [10], the
pool-based methods [11, 18], and the approach based on the same gPb-based tree [22].
Graz-02 datasets [23] (equal recall-precision)

Method                   Bikes   Cars   People
Marszalek&Schmid [21]    53.8    44.1   61.8
Fulkerson et al. [10]    72.2    72.2   66.3
1-class pylon            83.4    84.9   81.5
2-class pylon            83.7    83.3   82.5

Stanford background dataset [11]

Method               correct %
Gould et al. [11]    76.4 ± 1.22
Munoz et al. [22]    76.9
Kumar&Koller [18]    79.42 ± 1.41
8-class pylon        81.90 ± 1.09

Table 1: Comparison with the state-of-the-art. Left: equal recall-precision on the Graz datasets (pylon models
were trained with class-weighted Hamming loss to achieve approximately equal recall-precision). Right:
percentage of correctly labelled pixels on the Stanford dataset. For all datasets, our system achieves a
considerable improvement over the state-of-the-art.
Model                     Graz-02 Bikes       Graz-02 Cars        Graz-02 People      Stanford background
                          rec.  prec.  Ham.   rec.  prec.  Ham.   rec.  prec.  Ham.   mean    diff. to full
1-class pylon             80.8  86.9    7.7   81.7  87.0    3.1   77.3  85.0    6.4   --      --
2/8-class pylon           81.2  86.1    7.8   80.4  85.6    3.4   78.7  84.4    6.3   81.90    0.00 ± 0.00
Flat CRF @ 0              79.4  83.8    8.8   80.7  86.8    3.3   73.7  79.8    7.9   80.07   -1.84 ± 0.15
Flat CRF @ 20             81.3  84.6    8.2   81.1  83.7    3.6   76.7  80.7    7.3   81.13   -0.78 ± 0.42
Flat CRF @ 40             78.3  85.4    8.6   81.2  82.1    3.8   76.0  80.6    7.4   80.25   -1.65 ± 0.69
Flat CRF @ 60             71.2  84.2   10.3   79.5  80.8    4.1   71.6  79.0    8.4   77.99   -3.91 ± 0.74
Flat CRF @ 80             64.5  81.1   12.4   74.7  76.8    4.9   68.9  80.2    8.4   75.01   -6.89 ± 0.47
1-class pylon (no bnd)    78.3  85.7    8.5   76.7  83.9    3.9   76.3  84.9    6.6   --      --
2/8-class pylon (no bnd)  79.6  85.7    8.3   77.9  84.0    3.8   76.6  82.9    6.9   81.29   -0.62 ± 0.24

Table 2: Comparison with baseline methods using the same features and the same training procedure (unweighted Hamming loss was used in all cases). "Flat CRF @ X" corresponds to a flat random field trained and
evaluated on the segmentation obtained by thresholding the segmentation tree at level X. The last two lines
correspond to the pylon model trained and evaluated with boundary terms disabled. For Graz-02, recall,
precision and Hamming error on the predefined splits are given. For the Stanford background dataset, the percentage of correctly-labeled pixels is measured over 5 random splits, then the mean and the difference to the full pylon model are
given. For all datasets, the full pylon models perform better than the baselines (the best baseline for each
dataset is underlined).
There are probably three reasons for this higher performance: superior features, a superior learning procedure, and a superior model
(pylon).
To clarify the benefit of the pylon model alone, we perform an extensive comparison with baselines
(Table 2). We compare with flat CRF approaches, where the partitions are obtained by thresholding the
segmentation tree at different levels. We also determine the benefit of having boundary terms by comparing
with the pylon model without these terms. All baseline models used the same features and the same max-margin
learning procedure. The full pylon model performs better than the baselines, although the advantage is
not as large as that over the preceding methods.
Efficiency. The runtime of the entire framework is dominated by the pre-computation of segmentation trees
and of the features. After such pre-computation, our graph cut inference is extremely fast: less than 0.1s per
image/label, which is orders of magnitude faster than inference in previous pool-based methods. Training the
model (after the precomputation) takes 85 minutes for one split of the Stanford background dataset (compared
to 55 minutes for the flat CRF).
5 Discussion
Despite the very strong performance of our system in the experiments, we believe that the main appeal of the
pylon model is in the combination of interpretability, tractability, and flexibility. The interpretability is not
adequately measured by the quantitative evaluation, but it may be observed in qualitative examples (Figures
1 and 3), where many segments chosen by the pylon model to "explain" a photograph correspond to objects
or their high-level parts. The pylon model generalizes the flat CRF model for semantic segmentation that
operates with small low-level structural elements. Notably, despite this generalization, inference and
max-margin learning in the pylon model are as easy as in the flat CRF model.
References

[1] N. Ahuja. A transform for multiscale image segmentation by integrated edge and region detection. IEEE Trans. Pattern Anal. Mach. Intell., 18(12), 1996.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898-916, 2011.
[3] P. Awasthi, A. Gagrani, and B. Ravindran. Image modeling using tree structured conditional random fields. In IJCAI, pages 2060-2065, 2007.
[4] E. Boros and P. L. Hammer. Pseudo-boolean optimization. Discrete Applied Mathematics, 123(1-3):155-225, 2002.
[5] C. A. Bouman and M. Shapiro. A multiscale random field model for bayesian image segmentation. IEEE Transactions on Image Processing, 3(2):162-177, 1994.
[6] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell., 26(9):1124-1137, 2004.
[7] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell., 23(11):1222-1239, 2001.
[8] X. Chen, A. Jain, A. Gupta, and L. Davis. Piecing together the segmentation jigsaw using context. In CVPR, 2011.
[9] X. Feng, C. K. I. Williams, and S. N. Felderhof. Combining belief networks and neural networks for scene segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 24(4):467-483, 2002.
[10] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In ICCV, pages 670-677, 2009.
[11] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent regions. In ICCV, pages 1-8, 2009.
[12] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, 51(2), 1989.
[13] C. Gu, J. J. Lim, P. Arbelaez, and J. Malik. Recognition using regions. In CVPR, pages 1030-1037, 2009.
[14] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1), 2009.
[15] V. Kolmogorov, Y. Boykov, and C. Rother. Applications of parametric maxflow in computer vision. In ICCV, pages 1-8, 2007.
[16] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Anal. Mach. Intell., 26(2):147-159, 2004.
[17] A. Kulesza and F. Pereira. Structured learning with approximate inference. In NIPS, 2007.
[18] M. P. Kumar and D. Koller. Efficiently selecting regions for scene understanding. In CVPR, 2010.
[19] L. Ladicky, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical CRFs for object class image segmentation. In ICCV, pages 739-746, 2009.
[20] T. Malisiewicz and A. A. Efros. Improving spatial support for objects via multiple segmentations. In BMVC, September 2007.
[21] M. Marszalek and C. Schmid. Accurate object localization with shape masks. In CVPR, 2007.
[22] D. Munoz, J. A. Bagnell, and M. Hebert. Stacked hierarchical labeling. In ECCV (6), pages 57-70, 2010.
[23] A. Opelt, A. Pinz, M. Fussenegger, and P. Auer. Generic object recognition with boosting. IEEE Trans. Pattern Anal. Mach. Intell., 28(3):416-431, 2006.
[24] N. Plath, M. Toussaint, and S. Nakajima. Multi-class image segmentation using conditional random fields and global classification. In ICML, page 103, 2009.
[25] J. Reynolds and K. Murphy. Figure-ground segmentation using a hierarchical conditional random field. In CRV, pages 175-182, 2007.
[26] P. Schnitzspan, M. Fritz, and B. Schiele. Hierarchical support vector random fields: Joint training to combine local and global features. In ECCV (2), pages 527-540, 2008.
[27] E. Sharon, A. Brandt, and R. Basri. Fast multiscale image segmentation. In CVPR, 2000.
[28] J. Shi and J. Malik. Normalized cuts and image segmentation. In CVPR, pages 731-737, 1997.
[29] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV (1), pages 1-15, 2006.
[30] D. Singaraju and R. Vidal. Using global bag of features models in random fields for joint categorization and segmentation of objects. In CVPR, 2011.
[31] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. F. Tappen, and C. Rother. A comparative study of energy minimization methods for markov random fields with smoothness-based priors. IEEE Trans. Pattern Anal. Mach. Intell., 30(6):1068-1080, 2008.
[32] M. Szummer, P. Kohli, and D. Hoiem. Learning CRFs using graph cuts. In ECCV, 2008.
[33] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003.
[34] S. Todorovic and N. Ahuja. Learning subcategory relevances for category recognition. In CVPR, 2008.
[35] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[36] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.
[37] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. In CVPR, 2010.
[38] O. Veksler. Image segmentation by nested cuts. In CVPR, pages 1339-, 2000.
[39] J. Wang, J. Yang, K. Yu, F. Lv, T. S. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, pages 3360-3367, 2010.
On U-processes and clustering performance
Stéphan Clémençon*
LTCI UMR Telecom ParisTech/CNRS No. 5141
Institut Telecom, Paris, 75634 Cedex 13, France
stephan.clemencon@telecom-paristech.fr
Abstract

Many clustering techniques aim at optimizing empirical criteria that are of the
form of a U-statistic of degree two. Given a measure of dissimilarity between
pairs of observations, the goal is to minimize the within-cluster point scatter over
a class of partitions of the feature space. It is the purpose of this paper to define
a general statistical framework, relying on the theory of U-processes, for studying the performance of such clustering methods. In this setup, under adequate
assumptions on the complexity of the subsets forming the partition candidates, the
excess of clustering risk is proved to be of the order O_P(1/√n). Based on recent
results related to the tail behavior of degenerate U-processes, it is also shown how
to establish tighter rate bounds. Model selection issues, related to the number of
clusters forming the data partition in particular, are also considered.
1 Introduction
In cluster analysis, the objective is to segment a dataset into subgroups, such that data points in the
same subgroup are more similar to each other (in a sense that will be specified) than to those in
other subgroups. Given the wide range of applications of the clustering paradigm, numerous data
segmentation procedures have been introduced in the machine-learning literature (see Chapter 14 in
[HTF09] and Chapter 8 in [CFZ09] for recent overviews of ?off-the-shelf? clustering techniques).
Whereas the design of clustering algorithms is still receiving much attention in machine-learning
(see [WT10] and the references therein for instance), the statistical study of their performance,
with the notable exception of the celebrated K-means approach, see [Har78, Pol81, Pol82, BD04]
and more recently [BDL08] in the functional data analysis setting, may appear to be not sufficiently
well-documented in contrast, as pointed out in [vLBD05, BvL09]. Indeed, in the K-means situation,
the specific form of the criterion (and of its expectation, the clustering risk), as well as that of the
cells defining the clusters and forming a partition of the feature space (Voronoi cells), permits one to
use, in a straightforward manner, results of the theory of empirical processes in order to control
the performance of empirical clustering risk minimizers. Unfortunately, this center-based approach
does not carry over into more general situations, where the dissimilarity measure is not a squared
Hilbertian norm anymore, unless one loses the possibility to interpret the clustering criterion as a
function of pairwise dissimilarities between the observations (cf. K-medians).
performance. The present analysis is based on the observation that many statistical criteria for
measuring clustering accuracy are (symmetric) U -statistics (of degree two), functions of a matrix
of dissimilarities between pairs of data points. Such statistics have recently received a good deal of
attention in the machine-learning literature, insofar as empirical performance measures of predictive
rules in problems such as statistical ranking (when viewed as pairwise classification), see [CLV08],
or learning on graphs ([BB06]), are precisely functionals of this type, generalizing sample mean
statistics. By means of uniform deviation results for U -processes, the Empirical Risk Minimization
* http://www.tsi.enst.fr/~clemenco/.
paradigm (ERM) can be extended to situations where natural estimates of the risk are U-statistics.
In this way, we establish here a rate bound of order O_P(1/√n) for the excess of clustering risk
of empirical minimizers under adequate complexity assumptions on the cells forming the partition
candidates (the bias term is neglected in the present analysis). A linearization technique, combined
with sharper tail results in the case of degenerate U-processes, is also used in order to show that
tighter rate bounds can be obtained. Finally, it is shown how to use the upper bounds established
in this analysis in order to deal with the problem of automatic model selection, that of selecting the
number of clusters in particular, through complexity penalization.

The paper is structured as follows. In section 2, the notations are set out, a formal description
of cluster analysis, from the "pairwise dissimilarity" perspective, is given and the main theoretical
concepts involved in the present analysis are briefly recalled. In section 3, an upper bound for the
performance of empirical minimization of the clustering risk is established in the context of general
dissimilarity measures. Section 4 shows how to refine the rate bound previously obtained by means
of a recent inequality for degenerate U-processes, while section 5 deals with automatic selection of
the optimal number of clusters. Technical proofs are deferred to the Appendix section.
2 Theoretical background
In this section, after a brief description of the probabilistic framework of the study, the general
formulation of the clustering objective, based on the notion of dissimilarity between pairs of observations, is recalled, and the connection of the problem of investigating clustering performance with
the theory of U-statistics and U-processes is highlighted. Concepts pertaining to this theory and
involved in the subsequent analysis are next recalled.
2.1 Probabilistic setup and first notations
Here and throughout, (X_1, ..., X_n) denotes a sample of i.i.d. random vectors, valued in a high-dimensional feature space X, typically a subset of the Euclidean space R^d with d >> 1, with common probability distribution μ(dx). With no loss of generality, we assume that the feature space X
coincides with the support of the distribution μ(dx). The indicator function of any event E will be
denoted by I{E}, the usual l_p norm on R^d by ||x||_p = (Σ_{i=1}^d |x_i|^p)^{1/p} when 1 ≤ p < ∞ and by
||x||_∞ = max_{1≤i≤d} |x_i| in the case p = ∞, with x = (x_1, ..., x_d) ∈ R^d. When well-defined, the
expectation and the variance of a r.v. Z are denoted by E[Z] and Var(Z) respectively. Finally, we
denote by x_+ = max(0, x) the positive part of any real number x.
2.2 Cluster analysis
The goal of clustering techniques is to partition the data (X_1, ..., X_n) into a given finite number
of groups, K << n say, so that the observations lying in a same group are more similar to each
other than to those in other groups. When equipped with a (borelian) measure of dissimilarity
D : X² → R_+, the clustering task can be rigorously cast as the problem of minimizing the criterion

    Ŵ_n(P) = (2 / (n(n−1))) Σ_{k=1}^{K} Σ_{1≤i<j≤n} D(X_i, X_j) · I{(X_i, X_j) ∈ C_k²},    (1)
over all possible partitions P = {C_k : 1 ≤ k ≤ K} of the feature space X. The quantity
(1) is generally called the intra-cluster similarity or the within-cluster point scatter. The function
D aiming at measuring dissimilarity between pairs of observations, we suppose that it fulfills the
following properties:

- (SYMMETRY) For all (x, x') ∈ X²: D(x, x') = D(x', x).
- (SEPARATION) For all (x, x') ∈ X²: D(x, x') = 0 ⇔ x = x'.

Typical choices for the dissimilarity measure are of the form D(x, x') = Φ(||x − x'||_p), where p ≥ 1
and Φ : R_+ → R_+ is a nondecreasing function such that Φ(0) = 0 and Φ(t) > 0 for all t > 0. This
includes the so-termed "standard K-means" setup, where the dissimilarity measure coincides with
the squared Euclidean norm (in this case, p = 2 and Φ(t) = t² for t ≥ 0). Notice that the expectation
of the r.v. (1) is equal to the following quantity:

    W(P) = Σ_{k=1}^{K} E[ D(X, X') · I{(X, X') ∈ C_k²} ],    (2)

where (X, X') denotes a pair of independent r.v.'s drawn from μ(dx). It will be referred to as the
clustering risk of the partition P, while its statistical counterpart (1) will be called the empirical
clustering risk. Optimal partitions of the feature space X are defined as those that minimize W(P).
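As a concrete illustration (our own sketch, not from the paper), the empirical clustering risk (1) can be computed from a precomputed dissimilarity matrix D[i, j] and cluster assignments c[i] in {1, ..., K} induced by a partition P.

    import numpy as np

    def empirical_clustering_risk(D, c):
        c = np.asarray(c)
        n = len(c)
        same = c[:, None] == c[None, :]      # (X_i, X_j) in C_k^2 for some k
        iu = np.triu_indices(n, k=1)         # pairs with i < j
        return 2.0 / (n * (n - 1)) * D[iu][same[iu]].sum()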
Remark 1 (MAXIMIZATION FORMULATION) It is well-known that minimizing the empirical clustering risk (1) is equivalent to maximizing the between-cluster point scatter, which is given by
(1/(n(n−1))) Σ_{k≠l} Σ_{i,j} D(X_i, X_j) · I{(X_i, X_j) ∈ C_k × C_l}, the sum of these two statistics being
independent from the partition P = {C_k : 1 ≤ k ≤ K} considered.
Suppose we are given a (hopefully sufficiently rich) class Π of partitions of the feature space X.
Here we consider minimizers of the empirical risk Ŵ_n over Π, i.e. partitions P̂_n* in Π such that

    Ŵ_n(P̂_n*) = min_{P∈Π} Ŵ_n(P).    (3)

The design of practical algorithms for computing (approximately) empirical clustering risk minimizers is beyond the scope of this paper (refer to [HTF09] for an overview of "off-the-shelf" clustering
methods). Here, the focus is on the performance of such empirically defined rules.
2.3 U-statistics and U-processes
The subsequent analysis crucially relies on the fact that the quantity (1) that one seeks to optimize
is a U-statistic. For clarity's sake, we recall the definition of this class of statistics, generalizing
sample means.
Definition 1 (U-STATISTIC OF DEGREE TWO) Let X_1, ..., X_n be independent copies of a
random vector X drawn from a probability distribution μ(dx) on the space X and K : X² → R be
a symmetric function such that K(X_1, X_2) is square integrable. By definition, the functional

    U_n = (2 / (n(n−1))) Σ_{1≤i<j≤n} K(X_i, X_j)    (4)

is a (symmetric) U-statistic of degree two, with kernel K. It is said to be degenerate when
K^(1)(x) := E[K(x, X)] = 0 with probability one for all x ∈ X, and non-degenerate otherwise.
The statistic (4) is a natural (unbiased) estimate of the quantity θ = ∫∫ K(x, x')μ(dx)μ(dx'). The
class of U-statistics is very large and includes most dispersion measures, including the variance or the
Gini mean difference (with K(x, x') = (x − x')² and K(x, x') = |x − x'| respectively, (x, x') ∈ R²),
as well as the celebrated Wilcoxon location test statistic (with K(x, x') = I{x + x' > 0} for
(x, x') ∈ R² in this case). Although the dependence structure induced by the summation over all
pairs of observations makes its study more difficult than that of basic sample means, this estimator
has nice properties. It is well-known folklore in mathematical statistics that it is the most efficient
estimator among all unbiased estimators of the parameter θ (i.e. that with minimum variance),
see [vdV98]. Precisely, when non-degenerate, it is asymptotically normal with limiting variance
4 · Var(K^(1)(X)) (refer to Chapter 5 in [Ser80] for an account of the asymptotic analysis of U-statistics).
As shall be seen in section 4, the reduced variance property of U-statistics is crucial when it comes
to establishing tight rate bounds.
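For illustration, here is a tiny sketch (ours) of a degree-two U-statistic (4) for a generic symmetric kernel; e.g. kernel = lambda a, b: abs(a - b) gives the Gini mean difference.

    def u_statistic(X, kernel):
        n = len(X)
        total = sum(kernel(X[i], X[j]) for i in range(n) for j in range(i + 1, n))
        return 2.0 * total / (n * (n - 1))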
Going back to the U-statistic of degree two (1) estimating (2), observe that its symmetric kernel is:

    ∀(x, x') ∈ X², K_P(x, x') = Σ_{k=1}^{K} D(x, x') · I{(x, x') ∈ C_k²}.    (5)
Assuming that E[D²(X_1, X_2) · I{(X_1, X_2) ∈ C_k²}] < ∞ for all k ∈ {1, ..., K} and placing
ourselves in the situation where K ≥ 1 is less than X's cardinality, the U-statistic (1) is always non-degenerate, except in the (sole) case where X is made of exactly K elements and all of P's cells are
singletons. Indeed, for all x ∈ X, denoting by k(x) the index in {1, ..., K} such that x ∈ C_k(x),
we have:

    K_P^(1)(x) := E[K_P(x, X)] = ∫_{x'∈C_k(x)} D(x, x')μ(dx').    (6)

As μ's support coincides with X and the separation property is fulfilled by D, the quantity
above is zero iff C_k(x) = {x}. In the non-degenerate case, notice finally that the asymptotic
variance of √n{Ŵ_n(P) − W(P)} is equal to 4 · Var(D(X, C_k(X))), where we set D(x, C) =
∫_{x'∈C} D(x, x')μ(dx') for all x ∈ X and any measurable set C ⊂ X.
By definition, a U-process is a collection of U-statistics; one may refer to [dlPG99] for an account
of the theory of U-processes. Echoing the role played by the theory of empirical processes in the
study of the ERM principle in binary classification, the control of the fluctuations of the U-process

    { Ŵ_n(P) − W(P) : P ∈ Π }

indexed by a set Π of partition candidates will naturally lie at the heart of the present analysis. As
shall be seen below, this can be achieved mainly by means of the Hoeffding representations of
U-statistics, see [Hoe48].
3 A bound for the excess of clustering risk
Here we establish an upper bound for the performance of an empirical minimizer of the clustering
risk over a class Π_K of partitions of X with K ≥ 1 cells, K being fixed here and supposed to be
smaller than X's cardinality. We denote by W_K* the clustering risk minimum over all partitions of
X with K cells. The following global suprema of empirical Rademacher averages, characterizing
the complexity of the cells forming the partition candidates, shall be involved in the subsequent rate
analysis: for all n ≥ 2,

    A_{K,n} = sup_{C∈P, P∈Π_K} (1/⌊n/2⌋) | Σ_{i=1}^{⌊n/2⌋} ε_i D(X_i, X_{i+⌊n/2⌋}) · I{(X_i, X_{i+⌊n/2⌋}) ∈ C²} |,    (7)

where ε = (ε_i)_{i≥1} is a Rademacher chaos, independent from the X_i's, see [Kol06].
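A Monte-Carlo sketch (ours, with hypothetical names) of the Rademacher average (7) for a finite list of candidate cells: D_half[i] = D(X_i, X_{i+m}) with m = floor(n/2), and pair_in_cell is a list of boolean arrays (one per cell C) marking the split pairs (X_i, X_{i+m}) lying in C x C.

    import numpy as np

    def rademacher_average(D_half, pair_in_cell, num_draws=200, seed=0):
        rng = np.random.default_rng(seed)
        m = len(D_half)
        draws = []
        for _ in range(num_draws):
            eps = rng.choice([-1.0, 1.0], size=m)      # Rademacher signs
            sup = max(abs(np.sum(eps * D_half * mask)) / m
                      for mask in pair_in_cell)        # sup over candidate cells
            draws.append(sup)
        return float(np.mean(draws))                   # estimates E_eps[A_{K,n}]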
The following theorem reveals that the clustering performance of the empirical minimizer (3) is of
the order O_P(1/√n), when neglecting the bias term (which depends on the richness of Π_K solely).

Theorem 1 Consider a class Π_K of partitions with K ≥ 1 cells and suppose that:

- there exists B < ∞ such that for all P in Π_K and any C in P, sup_{(x,x')∈C²} D(x, x') ≤ B,
- the expectation of the Rademacher average A_{K,n} is of the order O(n^{−1/2}).

Let δ > 0. For any empirical clustering risk minimizer P̂_n*, we have with probability at least 1 − δ: for all n ≥ 2,

    W(P̂_n*) − W_K* ≤ 4K E[A_{K,n}] + 2BK √(2 log(1/δ)/n) + { inf_{P∈Π_K} W(P) − W_K* }
                   ≤ c(B, δ) · K/√n + { inf_{P∈Π_K} W(P) − W_K* },    (8)

for some constant c(B, δ) < ∞, independent from n and K.
The key to proving (8) is to express the U-statistic Ŵ_n(P) in terms of sums of i.i.d. r.v.'s, such as the one
involved in the Rademacher average (7):

    Ŵ_n(P) = (1/n!) Σ_{σ∈S_n} (1/⌊n/2⌋) Σ_{i=1}^{⌊n/2⌋} K_P(X_{σ(i)}, X_{σ(i+⌊n/2⌋)}),    (9)
where the average is taken over S_n, the symmetric group of order n. The main point lies in the fact
that standard techniques from empirical process theory can then be used to control Ŵ_n(P) − W(P)
uniformly over Π_K under adequate hypotheses; see the proof in the Appendix for technical details.
We underline that, naturally, the complexity assumption is also a crucial ingredient of the result
stated above, and more generally of clustering consistency results, see Example 1 in [BvL09]. We
also point out that the ERM approach is by no means the sole method to obtain error bounds in the
clustering context. Just like in binary classification (see [KN02]), one may use a notion of stability
of a clustering algorithm to establish such results, see [vL09, ST09] and the references therein. Refer
to [vLBD06, vLBD08] for error bounds proved through the stability approach. Before showing how
the bound for the excess of risk stated above can be improved, a few remarks are in order.
Remark 2 (ON THE COMPLEXITY ASSUMPTION) We point out that standard entropy metric arguments can be used in order to bound the expected value of the Rademacher average A_{K,n}, see [BBL05]
for instance. In particular, if the set of functions F_{Π_K} = {(x, x') ∈ X² ↦ D(x, x') · I{(x, x') ∈
C²} : C ∈ P, P ∈ Π_K} is a VC major class with finite VC dimension V (see [Dud99]), then
E[A_{K,n}] ≤ c √(V/n) for some universal constant c < ∞. This covers a wide variety of situations,
including the case where D(x, x') = ||x − x'||_p^α and the class of sets {C : C ∈ P, P ∈ Π_K} is of
finite VC dimension.
Remark 3 (K-MEANS) In the standard K-means approach, the dissimilarity measure is
D(x, x') = ||x − x'||₂² and partition candidates are indexed by a collection c of distinct "centers"
c_1, ..., c_K in X: P_c = {C_1, ..., C_K} with C_k = {x ∈ X : ||x − c_k||₂ = min_{1≤l≤K} ||x − c_l||₂}
for 1 ≤ k ≤ K (with adequate distance-tie breaking). One may easily check that for this specific
collection of partitions Π_K and this choice of the dissimilarity measure, the class F_{Π_K} is a VC
major class with finite VC dimension, see section 19.1 in [DGL96] for instance. Additionally, it
should be noticed that in most practical clustering procedures, center candidates are picked in a
data-driven fashion, being taken as the averages of the observations lying in each cluster/cell. In this
respect, the M-estimation problem formulated here can be considered, to a certain extent, as closer to
what is actually achieved by K-means clustering techniques in practice than the usual formulation
of the K-means problem (as an optimization problem over c = (c_1, ..., c_K) namely).
Remark 4 (WEIGHTED CLUSTERING CRITERIA) Notice that, in practice, the measure D involved
in (1) may depend on the data. For scaling purposes, one could assign data-dependent weights
σ̂ = (σ̂_i)_{1≤i≤d} in a coordinatewise manner, leading to D̂(x, x') = Σ_{i=1}^{d} (x_i − x'_i)²/σ̂_i² for instance,
where σ̂_i² denotes the sample variance related to the i-th coordinate. Although the criterion reflecting
the performance is not a U-statistic anymore, the theory we develop here can be straightforwardly
used for investigating clustering accuracy in such a case. Indeed, it is easy to control the difference
between the latter and the U-statistic (1) with D(x, x') = Σ_{i=1}^{d} (x_i − x'_i)²/σ_i², the σ_i²'s denoting
the theoretical variances of μ's marginals, under adequate moment assumptions.
4 Tighter bounds for empirical clustering risk minimizers
We now show that one may refine the rate bound established above by considering another representation of the U-statistic (1), its Hoeffding decomposition (see [Ser80]): for all partitions P,

    Ŵ_n(P) − W(P) = 2 L_n(P) + M_n(P),    (10)

with L_n(P) = (1/n) Σ_{i=1}^{n} Σ_{C∈P} H_C^(1)(X_i) being a simple average of i.i.d. r.v.'s where, for (x, x') ∈ X²,

    H_C(x, x') = D(x, x') · I{(x, x') ∈ C²} and H_C^(1)(x) = D(x, C) · I{x ∈ C} − D(C, C),

where D(C, C) = ∫_{x∈C} D(x, C)μ(dx) and E[H_C(x, X)] = D(x, C) · I{x ∈ C}, and M_n(P) being a
degenerate U-statistic based on the X_i's with kernel given by Σ_{C∈P} H_C^(2)(x, x'), where

    H_C^(2)(x, x') = H_C(x, x') − H_C^(1)(x) − H_C^(1)(x') − D(C, C),

for all (x, x') ∈ X². The leading term in (10) is the (centered) sample mean 2 L_n(P), of the
order O_P(√(1/n)), while the second term is of the order O_P(1/n). Hence, provided this holds true
uniformly over P, the main contribution to the rate bound should arise from the quantity

    sup_{P∈Π_K} |2 L_n(P)| ≤ 2K sup_{C∈P, P∈Π_K} | (1/n) Σ_{i=1}^{n} { D(X_i, C) · I{X_i ∈ C} − D(C, C) } |,

which thus leads to consider the following suprema of empirical Rademacher averages:

    R_{K,n} = sup_{C∈P, P∈Π_K} | (1/n) Σ_{i=1}^{n} ε_i D(X_i, C) · I{X_i ∈ C} |.    (11)
This supremum clearly has smaller mean and variance than (7). We also introduce the quantities:

    Z = sup_{C∈P, P∈Π_K} | Σ_{i,j} ε_i ε_j H_C^(2)(X_i, X_j) |,

    U = sup_{C∈P, P∈Π_K} sup_{α : Σ_j α_j² ≤ 1} Σ_{i,j} ε_i α_j H_C^(2)(X_i, X_j),

    M = sup_{C∈P, P∈Π_K} sup_{1≤j≤n} | Σ_i ε_i H_C^(2)(X_i, X_j) |.
Theorem 2 Consider a class Π_K of partitions with K cells and suppose that:

- there exists B < ∞ such that sup_{(x,x')∈C²} D(x, x') ≤ B for all P ∈ Π_K, C ∈ P.

Let δ > 0. For any empirical clustering risk minimizer P̂_n*, with probability at least 1 − δ: for all n ≥ 2,

    W(P̂_n*) − W_K* ≤ 4K E[R_{K,n}] + 2BK √(log(2/δ)/n) + K κ(n, δ) + { inf_{P∈Π_K} W(P) − W_K* },    (12)

where we set, for some universal constant C < ∞, independent from n and K:

    κ(n, δ) = C ( E[Z] + √(log(1/δ)) E[U] + (n + E[M]) log(1/δ) ) / n².    (13)

The result above relies on the moment inequality for degenerate U-processes proved in [CLV08].
Remark 5 (LOCALIZATION) The same argument can be used to decompose Λ_n(P) − Λ(P),
where Λ_n(P) = Ŵ_n(P) − W_K* is an estimate of the excess of risk Λ(P) = W(P) − W_K*, and, by
means of concentration inequalities, to next obtain a sharp upper bound that involves the modulus
of continuity of the variance of the Rademacher average indexed by the convex hull of the set of
functions { Σ_{C∈P} D(x, C) · I{x ∈ C} − Σ_{C*∈P*} D(x, C*) · I{x ∈ C*} : P ∈ Π_K }, following in
the footsteps of recent advances in binary classification, see [Kol06] and subsection 5.3 in [BBL05].
Owing to space limitations, this will be dealt with in a forthcoming article.
5 Model selection - choosing the number of clusters
A crucial issue in data segmentation is to determine the number $K$ of cells that best exhibits the clustering phenomenon in the data. A variety of automatic procedures for choosing a good value for $K$ have been proposed in the literature, based on data splitting, resampling or sampling techniques ([PFvN89, TWH01, ST08]). Here we consider a complexity regularization method that avoids having recourse to such techniques and uses a data-dependent penalty term based on the analysis carried out above.
Suppose that we have a sequence $\Pi_1, \Pi_2, \ldots$ of collections of partitions of the feature space $\mathcal{X}$ such that, for all $K \ge 1$, the elements of $\Pi_K$ are made of $K$ cells and fulfill the assumptions of Theorem 1. In order to avoid overfitting, consider the (data-driven) complexity penalty given by

$$\mathrm{pen}(n, K) = 3K\,\mathbb{E}_\epsilon[\mathcal{A}_{K,n}] + \frac{27BK\log K}{n} + \sqrt{\frac{2B\log K}{n}} \tag{14}$$

and the minimizer $\widehat{\mathcal{P}}_{\widehat{K},n}$ of the penalized empirical clustering risk, with

$$\widehat{K} = \arg\min_{K\ge 1}\Big\{\widehat{W}_n(\widehat{\mathcal{P}}_{K,n}) + \mathrm{pen}(n, K)\Big\} \quad\text{and}\quad \widehat{W}_n(\widehat{\mathcal{P}}_{K,n}) = \min_{\mathcal{P}\in\Pi_K} \widehat{W}_n(\mathcal{P}).$$
The next result shows that the partition thus selected nearly achieves the performance that would be obtained with the help of an oracle, revealing the value of the index $K$ that minimizes $\mathbb{E}[W(\widehat{\mathcal{P}}_{K,n})] - W^*$, with $W^* = \inf_{\mathcal{P}} W(\mathcal{P})$.
Theorem 3 (AN ORACLE INEQUALITY) Suppose that, for all $K \ge 1$, the assumptions of Theorem 1 are fulfilled. Then, we have:

$$\mathbb{E}\big[W(\widehat{\mathcal{P}}_{\widehat{K},n})\big] - W^* \le \min_{K\ge 1}\big\{W_K^* - W^* + \mathrm{pen}(n, K)\big\} + \frac{\pi^2}{6}\Big(2B\sqrt{\frac{2}{n}} + \frac{18B}{n}\Big). \tag{15}$$
Of course, the penalty could be slightly refined using the results of Section 4. Due to space limitations, such an analysis is not carried out here and is left to the reader.
6  Conclusion
Whereas, until now, the theoretical analysis of clustering performance was mainly limited to the K-means situation (but not only, cf. [BvL09] for instance), this paper establishes bounds for the success of empirical clustering risk minimization in a general "pairwise dissimilarity" framework, relying on the theory of $U$-processes. The excess of risk of empirical minimizers of the clustering risk is proved to be of the order $O_P(n^{-1/2})$ under mild assumptions on the complexity of the cells forming the partition candidates. It is also shown how to refine this upper bound slightly through a linearization technique and the use of recent inequalities for degenerate $U$-processes. Although the improvement displayed here may appear modest at first glance, our approach suggests that much sharper data-dependent bounds could be established this way. To the best of our knowledge, the present analysis is the first to state results of this nature. As regards complexity regularization, while the focus here is on the choice of the number of clusters, the argument used in this paper also paves the way for investigating more general model selection issues, including choices related to the geometry/complexity of the cells of the partition considered.
Appendix - Technical proofs
Proof of Theorem 1
We may classically write:

$$W(\widehat{\mathcal{P}}_n) - W_K^* \le 2\sup_{\mathcal{P}\in\Pi_K}\big|\widehat{W}_n(\mathcal{P}) - W(\mathcal{P})\big| + \Big(\inf_{\mathcal{P}\in\Pi_K} W(\mathcal{P}) - W_K^*\Big) \le 2K \sup_{\mathcal{C}\in\mathcal{P},\,\mathcal{P}\in\Pi_K} |U_n(\mathcal{C}) - u(\mathcal{C})| + \Big(\inf_{\mathcal{P}\in\Pi_K} W(\mathcal{P}) - W_K^*\Big), \tag{16}$$
where $U_n(\mathcal{C})$ denotes the $U$-statistic with kernel $H_{\mathcal{C}}(x, x') = D(x, x')\cdot\mathbb{I}\{(x, x') \in \mathcal{C}^2\}$ based on the sample $X_1, \ldots, X_n$, and $u(\mathcal{C})$ its expectation. Therefore, mimicking the argument of Corollary 3 in [CLV08], based on the so-termed first Hoeffding representation of $U$-statistics (see Lemma A.1 in [CLV08]), we may straightforwardly derive the lemma below.
Proposition 1 (UNIFORM DEVIATIONS) Suppose that Theorem 1's assumptions are fulfilled. Let $\delta > 0$. With probability at least $1 - \delta$, we have: $\forall n \ge 2$,

$$\sup_{\mathcal{C}\in\mathcal{P},\,\mathcal{P}\in\Pi_K} |U_n(\mathcal{C}) - u(\mathcal{C})| \le 2\,\mathbb{E}[\mathcal{A}_{K,n}] + B\sqrt{\frac{2\log(1/\delta)}{n}}. \tag{17}$$
PROOF. The argument follows in the footsteps of Corollary 3's proof in [CLV08]. It is based on the so-termed first Hoeffding representation of $U$-statistics (9), which provides an immediate control of the moment generating function of the supremum $\sup_{\mathcal{C}} |U_n(\mathcal{C}) - u(\mathcal{C})|$ by that of the norm of an empirical process, namely $\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})|$, where, for all $\mathcal{C} \in \mathcal{P}$ and $\mathcal{P} \in \Pi_K$:

$$A_n(\mathcal{C}) = \frac{1}{\lfloor n/2\rfloor} \sum_{i=1}^{\lfloor n/2\rfloor} D(X_i, X_{i+\lfloor n/2\rfloor})\cdot\mathbb{I}\{(X_i, X_{i+\lfloor n/2\rfloor}) \in \mathcal{C}^2\}.$$
Lemma 1 (see Lemma A.1 in [CLV08]) Let $\psi : \mathbb{R} \to \mathbb{R}$ be convex and nondecreasing. We have:

$$\mathbb{E}\,\psi\Big(\sup_{\mathcal{C}} |U_n(\mathcal{C}) - u(\mathcal{C})|\Big) \le \mathbb{E}\,\psi\Big(\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})|\Big). \tag{18}$$

Now, using standard symmetrization and randomization tricks, one obtains that: $\forall \lambda > 0$,

$$\mathbb{E}\exp\Big(\lambda\,\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})|\Big) \le \mathbb{E}\big[\exp(2\lambda\,\mathcal{A}_{K,n})\big]. \tag{19}$$

Observing that the value of $\mathcal{A}_{K,n}$ cannot change by more than $2B/n$ when one of the $(\epsilon_i, X_i, X_{i+\lfloor n/2\rfloor})$'s is changed, while the others are kept fixed, the standard bounded differences inequality argument applies and yields:

$$\mathbb{E}\big[\exp(2\lambda\,\mathcal{A}_{K,n})\big] \le \exp\Big(2\lambda\,\mathbb{E}[\mathcal{A}_{K,n}] + \frac{\lambda^2 B^2}{2n}\Big). \tag{20}$$

Next, Markov's inequality with $\lambda = n(t - 2\mathbb{E}[\mathcal{A}_{K,n}])/B^2$ gives: $\mathbb{P}\{\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})| > t\} \le \exp(-n(t - 2\mathbb{E}[\mathcal{A}_{K,n}])^2/(2B^2))$. The desired result is then immediate.
The rate bound is finally established by combining bounds (16) and (17).
Proof of Theorem 2 (Sketch)
The theorem can be proved by using the decomposition (10), applying the argument above in order to control $\sup_{\mathcal{P}} |L_n(\mathcal{P})|$, and the lemma below to handle the degenerate part. The latter is based on a recent moment inequality for degenerate $U$-processes, proved in [CLV08]. Due to space limitations, technical details are left to the reader.

Lemma 2 (see Theorem 11 in [CLV08]) Suppose that Theorem 2's assumptions are fulfilled. There exists a universal constant $C < \infty$ such that for all $\delta \in (0, 1)$, we have with probability at least $1 - \delta$: $\forall n \ge 2$,

$$\sup_{\mathcal{P}\in\Pi_K} |M_n(\mathcal{P})| \le K\,\kappa(n, \delta).$$
Proof of Theorem 3
The proof mimics the argument of Theorem 8.1 in [BBL05]. We thus obtain that: $\forall K \ge 1$,

$$\mathbb{E}\big[W(\widehat{\mathcal{P}}_{\widehat{K},n})\big] - W^* \le \mathbb{E}\big[W(\widehat{\mathcal{P}}_{K,n})\big] - W^* + \mathrm{pen}(K, n) + \sum_{k\ge 1}\mathbb{E}\Big[\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} - \mathrm{pen}(n, k)\Big]_+.$$

Reproducing the argument of Theorem 1's proof, one may easily show that: $\forall k \ge 1$,

$$\mathbb{E}\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \le 2k\,\mathbb{E}[\mathcal{A}_{k,n}].$$

Thus, for all $k \ge 1$, the quantity $\mathbb{P}\{\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \ge \mathrm{pen}(n, k) + 2\epsilon\}$ is bounded by

$$\mathbb{P}\Big\{\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \ge \mathbb{E}\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} + \sqrt{(2B\log k)/n} + \epsilon\Big\} + \mathbb{P}\Big\{3k\,\mathbb{E}_\epsilon[\mathcal{A}_{k,n}] \le 2k\,\mathbb{E}[\mathcal{A}_{k,n}] - \frac{27Bk\log k}{n} - \epsilon\Big\}.$$

By virtue of the bounded differences inequality (jumps being bounded by $2B/n$), the first term is bounded by $\exp(-n\epsilon^2/(2B^2))/k^2$, while the second term is bounded by $\exp(-n\epsilon/(9Bk))/k^3$, as shown by Lemma 8.2 in [BBL05] (see the third inequality therein). Integrating over $\epsilon$, one obtains:

$$\mathbb{E}\Big[\sup_{\mathcal{P}\in\Pi_k}\{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} - \mathrm{pen}(n, k)\Big]_+ \le \big(2B\sqrt{2/n} + 18B/n\big)\big/k^2.$$

Summing next the bounds thus obtained over $k$ leads to the oracle inequality stated in the theorem.
References
[BB06] G. Biau and L. Bleakley. Statistical Inference on Graphs. Statistics & Decisions, 24:209–232, 2006.
[BBL05] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of Classification: A Survey of Some Recent Advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[BD04] S. Ben-David. A framework for statistical clustering with a constant time approximation algorithms for k-median clustering. In Proceedings of COLT'04, Lecture Notes in Computer Science, Volume 3120/2004, 415–426, 2004.
[BDL08] G. Biau, L. Devroye, and G. Lugosi. On the Performance of Clustering in Hilbert Space. IEEE Trans. Inform. Theory, 54(2):781–790, 2008.
[BvL09] S. Bubeck and U. von Luxburg. Nearest neighbor clustering: A baseline method for consistent clustering with arbitrary objective functions. Journal of Machine Learning Research, 10:657–698, 2009.
[CFZ09] B. Clarke, E. Fokoué, and H. Zhang. Principles and Theory for Data-Mining and Machine-Learning. Springer, 2009.
[CLV08] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36(2):844–874, 2008.
[DGL96] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[dlPG99] V. de la Peña and E. Giné. Decoupling: from Dependence to Independence. Springer, 1999.
[Dud99] R.M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[Har78] J.A. Hartigan. Asymptotic distributions for clustering criteria. The Annals of Statistics, 6:117–131, 1978.
[Hoe48] W. Hoeffding. A class of statistics with asymptotically normal distribution. Ann. Math. Stat., 19:293–325, 1948.
[HTF09] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning (2nd ed.), pages 520–528. Springer, 2009.
[KN02] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2002.
[Kol06] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization (with discussion). The Annals of Statistics, 34:2593–2706, 2006.
[PFvN89] R. Peck, L. Fisher, and J. van Ness. Bootstrap confidence intervals for the number of clusters in cluster analysis. J. Am. Stat. Assoc., 84:184–191, 1989.
[Pol81] D. Pollard. Strong consistency of k-means clustering. The Annals of Statistics, 9:135–140, 1981.
[Pol82] D. Pollard. A central limit theorem for k-means clustering. The Annals of Probability, 10:919–926, 1982.
[Ser80] R.J. Serfling. Approximation theorems of mathematical statistics. Wiley, 1980.
[ST08] O. Shamir and N. Tishby. Model selection and stability in k-means clustering. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
[ST09] O. Shamir and N. Tishby. On the reliability of clustering stability in the large sample regime. In Advances in Neural Information Processing Systems 21, 2009.
[TWH01] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters in a data set via the gap statistic. J. Royal Stat. Soc., 63(2):411–423, 2001.
[vdV98] A. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[vL09] U. von Luxburg. Clustering stability: An overview. Foundations and Trends in Machine Learning, 2(3):235–274, 2009.
[vLBD05] U. von Luxburg and S. Ben-David. Towards a statistical theory of clustering. In PASCAL Workshop on Statistics and Optimization of Clustering, 2005.
[vLBD06] U. von Luxburg and S. Ben-David. A sober look at clustering stability. In Proceedings of the 19th Conference on Learning Theory, 2006.
[vLBD08] U. von Luxburg and S. Ben-David. Relating clustering stability to properties of cluster boundaries. In Proceedings of the 21st Conference on Learning Theory, 2008.
[WT10] D.M. Witten and R. Tibshirani. A framework for feature selection in clustering. J. Amer. Stat. Assoc., 105(490):713–726, 2010.
3,538 | 4,203 | Greedy Model Averaging
Dong Dai
Department of Statistics Rutgers University, New Jersey, 08816
[email protected]
Tong Zhang
Department of Statistics, Rutgers University, New Jersey, 08816
[email protected]
Abstract
This paper considers the problem of combining multiple models to achieve a
prediction accuracy not much worse than that of the best single model for least
squares regression. It is known that if the models are mis-specified, model averaging is superior to model selection. Specifically, let n be the sample size, then
the worst case regret of the former decays at the rate
? of O(1/n) while the worst
case regret of the latter decays at the rate of O(1/ n). In the literature, the most
important and widely studied model averaging method that achieves the optimal
O(1/n) average regret is the exponential weighted model averaging (EWMA) algorithm. However this method suffers from several limitations. The purpose of
this paper is to present a new greedy model averaging procedure that improves
EWMA. We prove strong theoretical guarantees for the new procedure and illustrate our theoretical results with empirical examples.
1
Introduction
This paper considers the model combination problem, where the goal is to combine multiple models
in order to achieve improved accuracy. This problem is important for practical applications because
it is often the case that single learning models do not perform as well as their combinations. In
practice, model combination is often achieved through the so-called ?stacking? procedure, where
multiple models {f1 (x), . . . , fM (x)} are first learned based on a shared ?training dataset?. Then
these models are combined on a separate ?validation dataset?. This paper is motivated by this scenario. In particular, we assume that M models {f1 (x), . . . , fM (x)} are given a priori (e.g., we may
regard them as being obtained with a separate training set), and we are provided with n labeled data
points (validation data) {(X1 , Y1 ), . . . , (Xn , Yn )} to combine these models.
For simplicity and clarity, our analysis focuses on least squares regression in fixed design although
similar analysis can be extended to random design and to other loss functions. In this setting, for
notation convenience, we can represent the k-th model on the validation data as a vector f k =
[fk (X1 ), . . . , fk (Xn )] ? Rn , and we let the observation vector y = [Y1 , . . . , Yn ] ? Rn . Let
g = Ey be the mean. Our goal (in the fixed design or denoising setting) is to estimate the mean
vector g from y using the M existing models F = {f 1 , . . . f M }. Here, we can write
y = g + ?,
where we assume that ? are iid Gaussian noise: ? ? N (0, ? 2 I n?n ) for simplicity. This iid Gaussian
assumption isn?t critical, and the results remain the same for independent sub-Gaussian noise.
We assume that the models may be mis-specified. That is, let k? be the best single model defined as:
2
k? = argmin kf k ? gk2 ,
k
1
(1)
then f k? 6= g.
We are interested in an estimator f? of g that achieves a small regret
2 1
2
1
R(f? ) =
f? ? g
?
f k? ? g
2 .
n
n
2
This paper considers a special class of model combination methods which we refer to as model
averaging, with combined estimators of the form
f? =
M
X
w
?k f k ,
k=1
P
where w
?k ? 0 and k w
?k = 1. A standard method for ?model averaging? is model selection, where
?
we choose the model k with the smallest least squares error:
f? M S = f k? ;
2
k? = arg min kf k ? yk2 .
k
? However, it is well known that
?k = 0 when k 6= k.
This corresponds to the choice of w
?k? = 1 and w
p
the worst case regret this procedure can achieve is R(f? M S ) = O( ln M/n) [1]. Another standard
model averaging method is the Exponential Weighted Model Averaging (EWMA) estimator defined
as
2
M
X
qk e??kf k ?yk2
?
f EW M A =
w
?k f k , w
?k = P
(2)
2 ,
M
q e??kf j ?yk2
k=1
j=1 j
with a tuned parameter ? ? 0. The extra parameters {qj }j=1,...,M are priors that P
impose bias
favoring some models over some other models. Here we assume that qj ? 0 and j qj = 1.
In this setting, the most common prior choice is the flat prior qj = 1/M . It should be pointed
out that a progressive variant of (2), which returns the average of n + 1 EWMA estimators with
Si = {(X1 , Y1 ), . . . , (Xi , Yi )} for i = 0, 1, . . . , n, was often analyzed in the earlier literature
[2, 9, 5, 1]. Nevertheless, the non progressive version presented in (2) is clearly a more natural
estimator, and this is the form that has been studied in more recent work [3, 6, 8]. Our current paper
does not differentiate these two versions of EWMA because they have similar theoretical properties.
In particular, our experiments only compare to the non-progressive version (2) that performs better
in practice.
It is known that exponential model averaging leads to an average regret of O(ln M/n) which
achieves the optimal rate; however it was pointed out in [1] that the rate does notp
hold with large
probability. Specifically, EWMA only leads to a sub-optimal deviation bound of O( ln M/n) with
large probability. To remedy this sub-optimality, an empirical star algorithm (which we will refer to
as STAR from now on) was then proposed in [1]; it was shown that the algorithm gives O(ln M/n)
deviation bound with large probability under the flat prior qi = 1/M . One major issue of the STAR
algorithm is that its average performance is often inferior to EWMA, as we can see from our empirical examples. Therefore although theoretically interesting, it is not an algorithm that can be
regarded as a replacement of EWMA for practical purposes. Partly for this reason, a more recent
study [7] re-examined the problem of improving EWMA, where different estimators were proposed
in order to achieve optimal deviation for model averaging. However, the proposed algorithms are
rather complex and difficult to implement. The purpose of this paper is to present a simple greedy
model averaging (GMA) algorithm that gives the optimal O(ln M/n) deviation bound with large
probability, and it can be applied with arbitrary prior qi . Moreover, unlike STAR which has average
performance inferior to EWMA, the average performance of GMA algorithm is generally superior
to EWMA as we shall illustrate with examples. It also has some other advantages which we will
discuss in more details later in the paper.
2
Greedy Model Averaging
This paper studies a new model averaging procedure presented in Algorithm 1. The procedure has L
stages, and each time adds an additional model f k?(`) into the ensemble. It is based on a simple, but
2
important modification of a classical sequential greedy approximation procedure in the literature [4],
which corresponds to setting ?(`) = 0, ? = 0 in Algorithm 1 with ?(`) optimized over [0, 1]. The
(2)
STAR algorithm corresponds to the stage-2 estimator f? with the above mentioned classical greedy
procedure of [4]. However, in order to prove the desired deviation bound, our analysis critically
2
(`?1)
? f
which isn?t present in the classical procedure (that
depends on the extra term ?(`)
f?
j
2
is, our proof does not apply to the procedure of [4]). As we will see in Section 4, this extra term
does have a positive impact under suitable conditions that correspond to Theorem 1 and Theorem 2
below, and thus this term is not only for theoretical interest, but also it leads to practical benefits
under the right conditions.
Another difference between GMA and the greedy algorithm in [4] is that our procedure allows the
use of non-flat priors through the extra penalty term ?c(`) ln(1/qj ). This generality can be useful
for some applications. Moreover, it is useful to notice that if we choose the flat prior qj = 1/M ,
then the term ?c(`) ln(1/qj ) is identical for all models, and thus this term can be removed from the
optimization. In this case, the proposed method has the advantage of being parameter free (with the
default choice of ? = 0.5). This advantage is also shared by the STAR algorithm.
: noisy observation y and static models f 1 , . . . , f M
(`)
output
: averaged model f?
parameters: prior {qj }j=1,...,M and regularization parameters ? and ?
input
(0)
let f? = 0
for ` = 1, 2, . . . , L do
let ?(`) = (` ? 1)/`
let ?(1) = 0; ?(2) = 0.05; ?(`) = ?(` ? 1)/`2 if ` > 2
let c(1) = 1; c(2) = 0.25; and c(`) = [20?(1 ? ?)(` ? 1)]?1 if ` > 2
let k?(`) = argminj Q(`) (j), where
2
2
(`) ? (`?1)
1
(`)
(`)
(`)
? (`?1)
(`)
Q (j) :=
? f
+ (1 ? ? )f j ? y
+ ?
f
? f j
+ ?c ln qj
2
let f?
end
(`)
= ?(`) f?
(`?1)
2
+ (1 ? ?(`) )f k?(`)
Algorithm 1: Greedy Model Averaging (GMA)
Observe that the first stage of GMA corresponds to the standard model selection procedure:
h
i
2
k?(1) = argmin
f j ? y
2 + ? ln(1/qj ) ,
j
f?
(1)
= f k?(1) .
?
As we have pointed out earlier, it is well known that only O(1/ n) regret can be achieved by
?
any model selection procedure (that is, any procedure that returns a single model f? k? for some k).
However, a combination of only two models will allow us to achieve the optimal O(1/n) rate. In
(2)
fact, f? achieves this rate. For clarity, we rewrite this stage 2 estimator as
"
#
2
2 ?
1
1
?
(2)
?
k
= argmin
(f k?(1) + f j ) ? y
+
f k?(1) ? f j
+ ln(1/qj ) ,
2
20
4
2
j
2
(2)
f?
=
1
(f ?(1) + f k?(2) ).
2 k
Theorem 1 shows that this simple stage 2 estimator achieves O(1/n) regret. A similar result was
shown in [1] for the STAR algorithm under the flat prior qj = 1/M , which corresponds to the stage
2 estimator of the classical greedy algorithm in [4]. Theoretically our result has several advantages
over that of the classical EWMA method. First it produces a sparse estimator while exponential
averaging estimator is dense; second the performance bound is scale free in the sense that the bound
3
depends only on the noise variance but not the magnitude of maxj
f j
; third the optimal bound
holds with high probability while EWMA only achieves optimal bound on average but not with large
probability; and finally if we choose a flat prior qj = 1/M , the estimator is parameter free because
we can exclude the term ? ln(1/qj ) from the estimators. This result also improves the recent work
of [7] in that the resulting bound is scale free while the algorithm itself is significantly simpler. One
disadvantage of this stage-2 estimator (and similarly the STAR estimator of [1]) is that its average
performance is generally inferior to that of EWMA, mainly due to the relatively large constant in
Theorem 1 (the same issue holds for the STAR algorithm). For this reason, the stage-2 estimator is
not a practical replacement of EWMA. This is the main reason why it is necessary to run GMA for
L > 2 stages, which leads to reduced constants (see Theorem 2) below. Our empirical experiments
show that in order to compete with EWMA for average performance, it is important to take L > 2.
However a relatively small L (as small as L = 5) is often sufficient, and in such case the resulting
estimator is still quite sparse.
M
P
Theorem 1 Given qj ? 0 such that
qj = 1. If ? ? 40? 2 , then with probability 1 ? 2? we have
j=1
R(f?
(2)
)?
? 3
1
ln(1/qk? ) + ln(1/?) .
n 4
2
(2)
While the stage-2 estimator f?
achieves the optimal rate, running GMA for more more stages
can further improve the performance. The following theorem shows that similar bounds can be
2
obtained for GMA at stages larger than 2. However, the constant before ?n ln qk1 ? approaches 8
?
when ` ? ? (with default ? = 0.5), which is smaller than the constant of Theorem 1 which is
about 30. This implies potential improvement when we run more stages, and this improvement is
confirmed in our empirical study. In fact, with relatively large `, the GMA method not only has the
theoretical advantage of achieving smaller regret in deviation (that is, the regret bound holds with
large probability) but also achieves better average performance in practice.
Theorem 2 Given $q_j \ge 0$ such that $\sum_{j=1}^M q_j = 1$. If $\lambda \ge 40\sigma^2$ and $0 < \nu < 1$ in Algorithm 1, then with probability $1 - 2\delta$ we have

$$R(\hat{f}^{(\ell)}) \le \frac{\lambda}{n}\cdot\frac{\nu(\ell - 2) + \ln(\ell - 1) + 30\nu(1 - \nu)}{20\nu(1 - \nu)\,\ell}\,\ln\frac{1}{q_{k_*}\delta}.$$
Another important advantage of running GMA for $\ell > 2$ stages is that the resulting estimator not only competes with the best single estimator $f_{k_*}$, but also competes with the best estimator in the convex hull $\mathrm{cov}(\mathcal{F})$ (with the parameter $\nu$ appropriately tuned). Note that the latter can be significantly better than the former. Define the convex hull of $\mathcal{F}$ as

$$\mathrm{cov}(\mathcal{F}) = \Big\{\sum_{j=1}^M w_j f_j : w_j \ge 0;\ \sum_j w_j = 1\Big\}.$$

The following theorem shows that as $\ell \to \infty$, the prediction error of $\hat{f}^{(\ell)}$ is no more than $O(1/\sqrt{n})$ worse than that of the optimal $\bar{f} \in \mathrm{cov}(\mathcal{F})$ when we choose a sufficiently small $\nu = O(1/\sqrt{n})$ in Algorithm 1. Note that in this case, it is beneficial to use a parameter $\nu$ smaller than the default choice of $\nu = 0.5$. This phenomenon is also confirmed by our experiments.
Theorem 3 Given $q_j \ge 0$ such that $\sum_{j=1}^M q_j = 1$. Consider any $\{w_j : j = 1, \ldots, M\}$ such that $\sum_j w_j = 1$ and $w_j \ge 0$, and let $\bar{f} = \sum_j w_j f_j$. If $\lambda \ge 40\sigma^2$ and $0 < \nu < 1$ in Algorithm 1, then with probability $1 - 2\delta$, when $\ell \to \infty$:

$$\frac{1}{n}\big\|\hat{f}^{(\ell)} - g\big\|_2^2 \le \frac{1}{n}\big\|\bar{f} - g\big\|_2^2 + \frac{\nu}{n}\sum_k w_k\big\|f_k - \bar{f}\big\|_2^2 + \frac{\lambda}{20\nu(1-\nu)n}\sum_k w_k\ln\frac{1}{\delta q_k} + O\Big(\frac{1}{\ell}\Big).$$
3 Experiments
The point of these experiments is to show that the consequences of our theoretical analysis can be observed in practice, which supports the main conclusions we reach. For this purpose, we consider the model $g = Xw + 0.5\epsilon_g$, where $X = (f_1, \ldots, f_M)$ is an $n \times M$ matrix with independent standard Gaussian entries, and $\epsilon_g \sim N(0, I_{n\times n})$ implies that the model is mis-specified.

The noise vector is $\epsilon \sim N(0, \sigma^2 I_{n\times n})$, generated independently of $X$. The coefficient vector $w = (w_1, \ldots, w_M)^\top$ is given by $w_i = |u_i|/\sum_{j=1}^s |u_j|$ for $i = 1, \ldots, s$ (the remaining coordinates being zero), where $u_1, \ldots, u_s$ are independent standard uniform random variables for some fixed $s$.

The performance of an estimator $\hat{f}$ measured here is the mean squared error (MSE) defined as

$$\mathrm{MSE}(\hat{f}) = \frac{1}{n}\big\|\hat{f} - g\big\|_2^2.$$
We run the Greedy Model Averaging (GMA) algorithm for $L$ stages up to $L = 40$. The EWMA parameter is tuned via 10-fold cross-validation. Moreover, we also list the performance of EWMA with projection, which is the method that runs EWMA but with each model $f_k$ replaced by the model $\tilde{f}_k = \alpha_k f_k$, where $\alpha_k = \arg\min_{\alpha\in\mathbb{R}} \|\alpha f_k - y\|_2^2$. That is, $\tilde{f}_k$ is the best linear scaling of $f_k$ to predict $y$. Note that this is a special case of the class of methods studied in [6] (which considers more general projections) that leads to non-progressive regret bounds, and this is the method of significant current interest [3, 8]. However, at least for the scenario considered in our paper, the projected EWMA method never improves performance in our experiments. Finally, for reference purposes, we also report the MSE of the best single model (BSM) $f_{k_*}$, where $k_*$ is given by (1). The model $f_{k_*}$ is clearly not a valid estimator because it depends on the unobserved $g$; however its performance is informative, and thus included in the tables. For simplicity, all algorithms use the flat prior $q_k = 1/M$.
4 Illustration of Theorem 1 and Theorem 2
The first set of experiments is performed with the parameters $n = 50$, $M = 200$, $s = 1$ and $\sigma = 2$. Five hundred replications are run, and the MSE performance of the different algorithms is reported in Table 1 using the "mean ± standard deviation" format.

Note that with $s = 1$, the target is $g = f_1 + 0.5\epsilon_g$. Since $f_1$ and $\epsilon_g$ are random Gaussian vectors, the best single model is likely $f_1$. The noise $\sigma = 2$ is relatively large. This is thus the situation in which model averaging does not achieve as good a performance as that of the best single model. This corresponds to the scenario considered in Theorem 1 and Theorem 2.

The results indicate that for GMA, from $L = 1$ (corresponding to model selection) to $L = 2$ (stage-2 model averaging of Theorem 1), there is a significant reduction of error. The performance of GMA with $L = 2$ is comparable to that of the STAR algorithm. This isn't surprising, because STAR can be regarded as the stage-2 estimator based on the more classical greedy algorithm of [4]. We also observe that the error keeps decreasing (but at a slower pace) when $L > 2$, which is consistent with Theorem 2. It means that in order to achieve good performance, it is necessary to use more stages than $L = 2$ (although this doesn't change the $O(1/n)$ rate for the regret, it can significantly reduce the constant). It becomes better than EWMA when $L$ is as small as 5, which still gives a relatively sparse averaged model. EWMA with projection does not perform as well as the standard EWMA method in this setting. Moreover, we note that in this scenario, the standard choice of $\nu = 0.5$ in Theorem 2 is superior to choosing the smaller $\nu = 0.1$ or $\nu = 0.01$. This is consistent with Theorem 2, which shows that the new term we added into the greedy algorithm is indeed useful in this scenario.
5 Illustration of Theorem 3

The second set of experiments is performed with the parameters $n = 50$, $M = 200$, $s = 10$ and $\sigma = 0.5$. Five hundred replications are run, and the MSE performance of the different algorithms is reported in Table 2 using the "mean ± standard deviation" format.
Table 1: MSE of different algorithms: best single model is superior to averaged models

  STAR: 0.663 ± 0.4    EWMA: 0.645 ± 0.5    EWMA (with projection): 0.744 ± 0.5    BSM: 0.252 ± 0.05

  GMA        L=1           L=2           L=5           L=20          L=40
  ν = 0.5    0.735 ± 0.74  0.689 ± 0.4   0.58 ± 0.39   0.566 ± 0.37  0.567 ± 0.38
  ν = 0.1    0.735 ± 0.74  0.689 ± 0.4   0.645 ± 0.31  0.623 ± 0.29  0.622 ± 0.29
  ν = 0.01   0.735 ± 0.74  0.689 ± 0.4   0.663 ± 0.3   0.638 ± 0.28  0.639 ± 0.28
Note that with $s = 10$, the target is $g = \bar{f} + 0.5\epsilon_g$ for some $\bar{f} \in \mathrm{cov}(\mathcal{F})$. The noise $\sigma = 0.5$ is relatively small, which makes it beneficial to compete with the best model $\bar{f}$ in the convex hull even though GMA has a larger regret of $O(1/\sqrt{n})$ when competing with $\bar{f}$. This is thus the situation considered in Theorem 3, which means that model averaging can achieve better performance than that of the best single model.

The results again show that for GMA, from $L = 1$ (corresponding to model selection) to $L = 2$ (stage-2 model averaging of Theorem 1), there is a significant reduction of error. The performance of GMA with $L = 2$ is again comparable to that of the STAR algorithm. Again we observe that even with the standard choice of $\nu = 0.5$, the error keeps decreasing (but at a slower pace) when $L > 2$, which is consistent with Theorem 2. It becomes better than EWMA when $L$ is as small as 5, which still gives a relatively sparse averaged model. EWMA with projection again does not perform as well as the standard EWMA method in this setting. Moreover, we note that in this scenario, the standard choice of $\nu = 0.5$ in Theorem 2 is inferior to choosing smaller parameter values of $\nu = 0.1$ or $\nu = 0.01$. This is consistent with Theorem 3, where it is beneficial to use a smaller value for $\nu$ in order to compete with the best model in the convex hull.
Table 2: MSE of different algorithms: best single model is inferior to averaged model

  STAR: 0.443 ± 0.08    EWMA: 0.316 ± 0.087    EWMA (with projection): 0.364 ± 0.078    BSM: 0.736 ± 0.083

  GMA        L=1           L=2            L=5            L=20           L=40
  ν = 0.5    0.809 ± 0.12  0.456 ± 0.081  0.305 ± 0.062  0.266 ± 0.057  0.265 ± 0.057
  ν = 0.1    0.809 ± 0.12  0.456 ± 0.081  0.269 ± 0.056  0.214 ± 0.046  0.211 ± 0.045
  ν = 0.01   0.809 ± 0.12  0.456 ± 0.081  0.268 ± 0.053  0.211 ± 0.045  0.207 ± 0.045
6 Conclusion
This paper presents a new model averaging scheme which we call greedy model averaging (GMA). It is shown that the new method can achieve a regret bound of $O(\ln M/n)$ with large probability when competing with the single best model. Moreover, it can also compete with the best combined model in the convex hull. Both our theory and experimental results suggest that the proposed GMA algorithm is superior to the standard EWMA procedure. Due to the simplicity of our proposal, GMA may be regarded as a valid alternative to the more widely studied EWMA procedure both for practical applications and for theoretical purposes. Finally, we shall point out that while this work only considers static model averaging where the models $\mathcal{F}$ are finite, similar results can be obtained for affine estimators or infinite models considered in recent work [3, 6, 8]. Such an extension will be left to the extended report.
A Proof Sketches
We only include proof sketches, and leave the details to the supplemental material that accompanies
the submission. First we need the following standard Gaussian tail bounds. The proofs can be found
in the supplemental material.
Proposition 1 Let $f_j \in \mathbb{R}^n$ be a set of fixed vectors ($j = 1, \ldots, M$), and assume that $q_j \ge 0$ with $\sum_j q_j = 1$. Let $k_*$ be a fixed integer between 1 and $M$. Define event $E_1$ as

$$E_1 = \Big\{\forall j:\ (f_j - f_{k_*})^\top\epsilon \le \sigma\|f_j - f_{k_*}\|_2\sqrt{2\ln(1/(\delta q_j))}\Big\}$$

and define event $E_2$ as

$$E_2 = \Big\{\forall j, k:\ (f_j - f_k)^\top\epsilon \le \sigma\|f_j - f_k\|_2\sqrt{2\ln(1/(\delta q_j q_k))}\Big\};$$

then $P(E_1) \ge 1 - \delta$ and $P(E_2) \ge 1 - \delta$.
A.1 Proof Sketch of Theorem 1

A more detailed proof can be found in the supplemental material. Note that with probability $1 - 2\delta$, both event $E_1$ and event $E_2$ of Proposition 1 hold. Moreover, we have

$$\big\|\hat{f}^{(2)} - g\big\|_2^2 = \big\|\alpha^{(2)}\hat{f}^{(1)} + (1 - \alpha^{(2)})f_{\hat{k}^{(2)}} - g\big\|_2^2$$
$$\le \big\|\alpha^{(2)}\hat{f}^{(1)} + (1 - \alpha^{(2)})f_{k_*} - g\big\|_2^2 + 2(1 - \alpha^{(2)})\epsilon^\top(f_{\hat{k}^{(2)}} - f_{k_*}) + \nu^{(2)}\big\|\hat{f}^{(1)} - f_{k_*}\big\|_2^2 - \nu^{(2)}\big\|\hat{f}^{(1)} - f_{\hat{k}^{(2)}}\big\|_2^2 + \lambda c^{(2)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(2)}})\big).$$

In the above derivation, the inequality is equivalent to $Q^{(2)}(\hat{k}^{(2)}) \le Q^{(2)}(k_*)$, which is a simple fact of the definition of $\hat{k}^{(\ell)}$ in the algorithm. Also, we can rewrite the fact that $Q^{(1)}(\hat{k}^{(1)}) \le Q^{(1)}(k_*)$ as

$$\big\|\hat{f}^{(1)} - g\big\|_2^2 \le \big\|f_{k_*} - g\big\|_2^2 + 2\epsilon^\top(f_{\hat{k}^{(1)}} - f_{k_*}) + \lambda c^{(1)}\ln(q_{\hat{k}^{(1)}}/q_{k_*}).$$
By combining the above two inequalities, we obtain
$$\big\|\hat{f}^{(2)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2 \le \alpha^{(2)}\Big[2\epsilon^\top(f_{\hat{k}^{(1)}} - f_{k_*}) + \lambda c^{(1)}\ln(q_{\hat{k}^{(1)}}/q_{k_*})\Big] + 2(1 - \alpha^{(2)})\epsilon^\top(f_{\hat{k}^{(2)}} - f_{k_*}) + \big[\nu^{(2)} - \alpha^{(2)}(1 - \alpha^{(2)})\big]\big\|f_{\hat{k}^{(1)}} - f_{k_*}\big\|_2^2 - \nu^{(2)}\big\|f_{\hat{k}^{(1)}} - f_{\hat{k}^{(2)}}\big\|_2^2 + \lambda c^{(2)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(2)}})\big).$$

Since $\alpha^{(2)} = 1/2$, we obtain

$$\big\|\hat{f}^{(2)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2 \le \Big(\tfrac{1}{2}\lambda c^{(1)} + \lambda c^{(2)}\Big)\ln(1/q_{k_*}) - \tfrac{1}{2}\lambda c^{(1)}\ln(1/q_{\hat{k}^{(1)}}) - \lambda c^{(2)}\ln(1/q_{\hat{k}^{(2)}})$$
$$\quad + 2\big\|f_{\hat{k}^{(1)}} - f_{k_*}\big\|_2\,\sigma\sqrt{2\ln\tfrac{1}{q_{\hat{k}^{(1)}}\delta}} + 2\sigma\big\|f_{\hat{k}^{(2)}} - f_{\hat{k}^{(1)}}\big\|_2\sqrt{2\ln\tfrac{1}{q_{\hat{k}^{(1)}}q_{\hat{k}^{(2)}}\delta}} + \big(\nu^{(2)} - \tfrac{1}{4}\big)\big\|f_{\hat{k}^{(1)}} - f_{k_*}\big\|_2^2 - \nu^{(2)}\big\|f_{\hat{k}^{(1)}} - f_{\hat{k}^{(2)}}\big\|_2^2$$
$$\le \Big(\tfrac{1}{2}\lambda c^{(1)} + \lambda c^{(2)}\Big)\ln(1/q_{k_*}) + (2r_1 + 2r_2)\ln(1/\delta).$$

The first inequality above uses the tail probability bounds in the events $E_1$ and $E_2$. We then use the algebraic inequality $2a_1b_1 \le a_1^2/r_1 + r_1b_1^2$ and $2a_2b_2 \le a_2^2/r_2 + r_2b_2^2$ to obtain the last inequality, which implies the desired bound.
A.2 Proof Sketch of Theorem 2

Again, a more detailed proof can be found in the supplemental material. With probability $1 - 2\delta$, both event $E_1$ and event $E_2$ of Proposition 1 hold. This implies that the claim of Theorem 1 also holds.
Now consider any $\ell \ge 3$. We have

$$\big\|\hat{f}^{(\ell)} - g\big\|_2^2 \le \big\|\alpha^{(\ell)}\hat{f}^{(\ell-1)} + (1 - \alpha^{(\ell)})f_{k_*} - g\big\|_2^2 + 2\epsilon^\top\big[(1 - \alpha^{(\ell)})f_{\hat{k}^{(\ell)}} - (1 - \alpha^{(\ell)})f_{k_*}\big] + \lambda c^{(\ell)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(\ell)}})\big) + \nu^{(\ell)}\Big(\big\|\hat{f}^{(\ell-1)} - f_{k_*}\big\|_2^2 - \big\|\hat{f}^{(\ell-1)} - f_{\hat{k}^{(\ell)}}\big\|_2^2\Big).$$

The inequality is equivalent to $Q^{(\ell)}(\hat{k}^{(\ell)}) \le Q^{(\ell)}(k_*)$, which is a simple fact of the definition of $\hat{k}^{(\ell)}$ in the algorithm. We can rewrite the above inequality as

$$\big\|\hat{f}^{(\ell)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2 \le \alpha^{(\ell)}\Big(\big\|\hat{f}^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2\Big) + \lambda c^{(\ell)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(\ell)}})\big) + 2(1 - \alpha^{(\ell)})\epsilon^\top(f_{\hat{k}^{(\ell)}} - f_{k_*}) - \nu^{(\ell)}\big\|f_{\hat{k}^{(\ell)}} - \hat{f}^{(\ell-1)}\big\|_2^2 + \big[\nu^{(\ell)} - \alpha^{(\ell)}(1 - \alpha^{(\ell)})\big]\big\|\hat{f}^{(\ell-1)} - f_{k_*}\big\|_2^2$$
$$\le \alpha^{(\ell)}\Big(\big\|\hat{f}^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2\Big) + \lambda c^{(\ell)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(\ell)}})\big) - \frac{\nu^{(\ell)}\big[\alpha^{(\ell)}(1 - \alpha^{(\ell)}) - \nu^{(\ell)}\big]}{\alpha^{(\ell)}(1 - \alpha^{(\ell)})}\big\|f_{\hat{k}^{(\ell)}} - f_{k_*}\big\|_2^2 + \frac{2}{\ell}\big\|f_{\hat{k}^{(\ell)}} - f_{k_*}\big\|_2\,\sigma\sqrt{2\ln\frac{1}{q_{\hat{k}^{(\ell)}}\delta}}$$
$$\le \frac{\ell - 1}{\ell}\Big(\big\|\hat{f}^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k_*} - g\big\|_2^2\Big) + \lambda c^{(\ell)}\big(\ln(1/q_{k_*}) - \ln(1/q_{\hat{k}^{(\ell)}})\big) + \Big(\frac{2\sigma^2}{\ell^2 r_\ell} - \frac{2\nu(1 - \nu)}{\ell^2}\Big)\big\|f_{\hat{k}^{(\ell)}} - f_{k_*}\big\|_2^2 + \frac{2r_\ell}{\ell^2}\ln\frac{1}{q_{\hat{k}^{(\ell)}}\delta}.$$

The second inequality uses the fact that $-p\|a\|^2 - q\|b\|^2 \le -\frac{pq}{p+q}\|a + b\|^2$, which implies that

$$\big[\nu^{(\ell)} - \alpha^{(\ell)}(1 - \alpha^{(\ell)})\big]\big\|\hat{f}^{(\ell-1)} - f_{k_*}\big\|_2^2 - \nu^{(\ell)}\big\|f_{\hat{k}^{(\ell)}} - \hat{f}^{(\ell-1)}\big\|_2^2 \le -\frac{\nu^{(\ell)}\big[\alpha^{(\ell)}(1 - \alpha^{(\ell)}) - \nu^{(\ell)}\big]}{\alpha^{(\ell)}(1 - \alpha^{(\ell)})}\big\|f_{\hat{k}^{(\ell)}} - f_{k_*}\big\|_2^2,$$

and uses the Gaussian tail bound in the event $E_1$. The last inequality uses $2ab \le a^2/r_\ell + r_\ell b^2$, where $r_\ell > 0$ is $r_\ell = \lambda c^{(\ell)}/2$. Denote by $R^{(\ell)} = \|\hat{f}^{(\ell)} - g\|_2^2 - \|f_{k_*} - g\|_2^2$; then, since the choice of parameters is $c^{(\ell)} = [20\nu(1 - \nu)(\ell - 1)]^{-1}$, we obtain

$$R^{(\ell)} \le \frac{\ell - 1}{\ell}R^{(\ell-1)} + \lambda c^{(\ell)}\ln\frac{1}{q_{k_*}\delta}.$$

Solving this recursion for $R^{(\ell)}$ leads to the desired bound.

A.3 Proof Sketch of Theorem 3

Again, a more detailed proof can be found in the supplemental material. Consider any $\ell \ge 3$. We have

$$\big\|\hat{f}^{(\ell)} - g\big\|_2^2 \le \sum_k w_k\Big\|\alpha^{(\ell)}\hat{f}^{(\ell-1)} + (1 - \alpha^{(\ell)})f_k - g\Big\|_2^2 + \nu^{(\ell)}\Big(\sum_k w_k\big\|\hat{f}^{(\ell-1)} - f_k\big\|_2^2 - \big\|\hat{f}^{(\ell-1)} - f_{\hat{k}^{(\ell)}}\big\|_2^2\Big) + \lambda c^{(\ell)}\Big(\sum_k w_k\ln(1/q_k) - \ln(1/q_{\hat{k}^{(\ell)}})\Big) + 2\epsilon^\top\Big[(1 - \alpha^{(\ell)})f_{\hat{k}^{(\ell)}} - (1 - \alpha^{(\ell)})\sum_k w_k f_k\Big].$$

The inequality is equivalent to $Q^{(\ell)}(\hat{k}^{(\ell)}) \le \sum_k w_k Q^{(\ell)}(k)$, which is a simple fact of the definition of $\hat{k}^{(\ell)}$ in the algorithm. Denote by $R^{(\ell)} = \|\hat{f}^{(\ell)} - g\|_2^2 - \|\bar{f} - g\|_2^2$; then the same derivation as that of Theorem 2 implies that

$$R^{(\ell)} \le \frac{\ell - 1}{\ell}R^{(\ell-1)} + \lambda c^{(\ell)}\sum_k w_k\ln(1/(\delta q_k)) + \big[\nu^{(\ell)} + (1 - \alpha^{(\ell)})^2\big]\sum_k w_k\big\|f_k - \bar{f}\big\|_2^2.$$

Now by solving the recursion, we obtain the theorem.
References
[1] Jean-Yves Audibert. Progressive mixture rules are deviation suboptimal. In NIPS'07, 2008.
[2] Olivier Catoni. Statistical learning theory and stochastic optimization. Springer-Verlag, 2004.
[3] Arnak Dalalyan and Joseph Salmon. Optimal aggregation of affine estimators. In COLT, 2011.
[4] L.K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Statist., 20(1):608–613, 1992.
[5] Anatoli Juditsky, Philippe Rigollet, and Alexandre Tsybakov. Learning by mirror averaging. The Annals of Statistics, 36:2183–2206, 2008.
[6] Gilbert Leung and A.R. Barron. Information theory and mixing least-squares regressions. Information Theory, IEEE Transactions on, 52(8):3396–3410, Aug. 2006.
[7] Philippe Rigollet. Kullback-Leibler aggregation and misspecified generalized linear models. arXiv:0911.2919, November 2010.
[8] Philippe Rigollet and Alexandre Tsybakov. Exponential Screening and optimal rates of sparse estimation. The Annals of Statistics, 39:731–771, 2011.
[9] Yuhong Yang. Adaptive regression by mixing. Journal of the American Statistical Association, 96:574–588, 2001.
3,539 | 4,204 | Dynamic Pooling and Unfolding Recursive
Autoencoders for Paraphrase Detection
Richard Socher, Eric H. Huang, Jeffrey Pennington*, Andrew Y. Ng, Christopher D. Manning
Computer Science Department, Stanford University, Stanford, CA 94305, USA
* SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94309, USA
[email protected], {ehhuang,jpennin,ang,manning}@stanford.edu
Abstract
Paraphrase detection is the task of examining two sentences and determining
whether they have the same meaning. In order to obtain high accuracy on this
task, thorough syntactic and semantic analysis of the two statements is needed.
We introduce a method for paraphrase detection based on recursive autoencoders
(RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn
feature vectors for phrases in syntactic trees. These features are used to measure
the word- and phrase-wise similarity between two sentences. Since sentences may
be of arbitrary length, the resulting matrix of similarity measures is of variable
size. We introduce a novel dynamic pooling layer which computes a fixed-sized
representation from the variable-sized matrices. The pooled representation is then
used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.
1 Introduction
Paraphrase detection determines whether two phrases of arbitrary length and form capture the same
meaning. Identifying paraphrases is an important task that is used in information retrieval, question
answering [1], text summarization, plagiarism detection [2] and evaluation of machine translation
[3], among others. For instance, in order to avoid adding redundant information to a summary one
would like to detect that the following two sentences are paraphrases:
S1 The judge also refused to postpone the trial date of Sept. 29.
S2 Obus also denied a defense motion to postpone the September trial date.
We present a joint model that incorporates the similarities between both single word features as well
as multi-word phrases extracted from the nodes of parse trees. Our model is based on two novel
components as outlined in Fig. 1. The first component is an unfolding recursive autoencoder (RAE)
for unsupervised feature learning from unlabeled parse trees. The RAE is a recursive neural network.
It learns feature representations for each node in the tree such that the word vectors underneath each
node can be recursively reconstructed.
These feature representations are used to compute a similarity matrix that compares both the single
words as well as all nonterminal node features in both sentences. In order to keep as much of the
resulting global information of this comparison as possible and deal with the arbitrary length of
the two sentences, we then introduce our second component: a new dynamic pooling layer which
outputs a fixed-size representation. Any classifier such as a softmax classifier can then be used to
classify whether the two sentences are paraphrases or not.
We first describe the unsupervised feature learning with RAEs followed by a description of pooling
and classification. In experiments we show qualitative comparisons of different RAE models and describe our state-of-the-art results on the Microsoft Research Paraphrase (MSRP) Corpus introduced
by Dolan et al. [4]. Lastly, we discuss related work.
[Figure 1 appears here. Panels: Recursive Autoencoder (left); Dynamic Pooling and Classification (right), with a Softmax Classifier applied to a Fixed-Sized Matrix produced by the Dynamic Pooling Layer from a Variable-Sized Similarity Matrix. Example sentences: "The cats catch mice" and "Cats eat mice".]

Figure 1: An overview of our paraphrase model. The recursive autoencoder learns phrase features for each node in a parse tree. The distances between all nodes then fill a similarity matrix whose size depends on the length of the sentences. Using a novel dynamic pooling layer we can compare the variable-sized sentences and classify pairs as being paraphrases or not.
2 Recursive Autoencoders
In this section we describe two variants of unsupervised recursive autoencoders which can be used
to learn features from parse trees. The RAE aims to find vector representations for variable-sized
phrases spanned by each node of a parse tree. These representations can then be used for subsequent
supervised tasks. Before describing the RAE, we briefly review neural language models which
compute word representations that we give as input to our algorithm.
2.1 Neural Language Models
The idea of neural language models as introduced by Bengio et al. [5] is to jointly learn an embedding of words into an $n$-dimensional vector space and to use these vectors to predict how likely a word is given its context. Collobert and Weston [6] introduced a new neural network model to compute such an embedding. When these networks are optimized via gradient ascent, the derivatives modify the word embedding matrix $L \in \mathbb{R}^{n\times|V|}$, where $|V|$ is the size of the vocabulary. The word vectors inside the embedding matrix capture distributional syntactic and semantic information via the word's co-occurrence statistics. For further details and evaluations of these embeddings, see [5, 6, 7, 8].

Once this matrix is learned on an unlabeled corpus, we can use it for subsequent tasks by using each word's vector (a column in $L$) to represent that word. In the remainder of this paper, we represent a sentence (or any n-gram) as an ordered list of these vectors $(x_1, \ldots, x_m)$. This word representation is better suited for autoencoders than the binary number representations used in previous related autoencoder models such as the recursive auto-associative memory (RAAM) model of Pollack [9, 10] or recurrent neural networks [11], since the activations are inherently continuous.
2.2 Recursive Autoencoder

Fig. 2 (left) shows an instance of a recursive autoencoder (RAE) applied to a given parse tree as introduced by [12]. Unlike in that work, here we assume that such a tree is given for each sentence by a parser. Initial experiments showed that having a syntactically plausible tree structure is important for paraphrase detection. Assume we are given a list of word vectors $x = (x_1, \ldots, x_m)$ as described in the previous section. The binary parse tree for this input is in the form of branching triplets of parents with children: $(p \to c_1 c_2)$. The trees are given by a syntactic parser. Each child can be either an input word vector $x_i$ or a nonterminal node in the tree. For both examples in Fig. 2, we have the following triplets: $((y_1 \to x_2 x_3), (y_2 \to x_1 y_1))$, $\forall x, y \in \mathbb{R}^n$.

Given this tree structure, we can now compute the parent representations. The first parent vector $p = y_1$ is computed from the children $(c_1, c_2) = (x_2, x_3)$ by one standard neural network layer:

$$p = f(W_e[c_1; c_2] + b), \tag{1}$$

where $[c_1; c_2]$ is simply the concatenation of the two children, $f$ an element-wise activation function such as tanh, and $W_e \in \mathbb{R}^{n\times 2n}$ the encoding matrix that we want to learn. One way of assessing how well this $n$-dimensional vector represents its direct children is to decode their vectors in a
[Figure 2 appears here, showing the two models with encoding matrices $W_e$ and decoding matrices $W_d$ over nodes $x_1, x_2, x_3, y_1, y_2$ and reconstructions $x'_1, x'_2, x'_3, y'_1$.]

Figure 2: Two autoencoder models with details of the reconstruction at node $y_2$. For simplicity we left out the reconstruction layer at the first node $y_1$ which is the same standard autoencoder for both models. Left: A standard autoencoder that tries to reconstruct only its direct children. Right: The unfolding autoencoder which tries to reconstruct all leaf nodes underneath each node.
reconstruction layer and then to compute the Euclidean distance between the original input and its reconstruction:

$$[c'_1; c'_2] = f(W_d p + b_d), \qquad E_{rec}(p) = \big\|[c_1; c_2] - [c'_1; c'_2]\big\|^2. \tag{2}$$

In order to apply the autoencoder recursively, the same steps repeat. Now that $y_1$ is given, we can use Eq. 1 to compute $y_2$ by setting the children to be $(c_1, c_2) = (x_1, y_1)$. Again, after computing the intermediate parent vector $p = y_2$, we can assess how well this vector captures the content of the children by computing the reconstruction error as in Eq. 2. The process repeats until the full tree is constructed and each node has an associated reconstruction error.

During training, the goal is to minimize the reconstruction error of all input pairs at nonterminal nodes $p$ in a given parse tree $T$:

$$E_{rec}(T) = \sum_{p\in T} E_{rec}(p). \tag{3}$$

For the example in Fig. 2 (left), we minimize $E_{rec}(T) = E_{rec}(y_1) + E_{rec}(y_2)$.

Since the RAE computes the hidden representations it then tries to reconstruct, it could potentially lower the reconstruction error by shrinking the norms of the hidden layers. In order to prevent this, we add a length normalization layer $p = p/\|p\|$ to this RAE model (referred to as the standard RAE). Another, more principled solution is to use a model in which each node tries to reconstruct its entire subtree and we then measure the reconstruction of the original leaf nodes. Such a model is described in the next section.
2.3 Unfolding Recursive Autoencoder
The unfolding RAE has the same encoding scheme as the standard RAE. The difference is in the
decoding step which tries to reconstruct the entire spanned subtree underneath each node as shown
in Fig. 2 (right). For instance, at node y2 , the reconstruction error is the difference between the
leaf nodes underneath that node [x1; x2; x3] and their reconstructed counterparts [x1'; x2'; x3']. The
unfolding produces the reconstructed leaves by starting at y2 and computing

[x1'; y1'] = f(Wd y2 + bd).    (4)

Then it recursively splits y1' again to produce vectors

[x2'; x3'] = f(Wd y1' + bd).    (5)
In general, we repeatedly use the decoding matrix Wd to unfold each node with the same tree
structure as during encoding. The reconstruction error is then computed from a concatenation of the
word vectors in that node's span. For a node y that spans words i to j:
Erec(y_(i,j)) = ||[xi; ...; xj] − [xi'; ...; xj']||^2.    (6)
The unfolding autoencoder essentially tries to encode each hidden layer such that it best reconstructs
its entire subtree to the leaf nodes. Hence, it will not have the problem of hidden layers shrinking
in norm. Another potential problem of the standard RAE is that it gives equal weight to the last
merged phrases even if one is only a single word (in Fig. 2, x1 and y1 have similar weight in the last
merge). In contrast, the unfolding RAE captures the increased importance of a child when the child
represents a larger subtree.
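The unfolding decoder reuses the encoding tree's shape, which the following sketch makes explicit; the nested-tuple tree format and all function names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of the unfolding reconstruction (Eqs. 4-6): encode a binary
# tree bottom-up, then decode top-down along the same tree shape and compare
# against the concatenated original leaf vectors.
import numpy as np

def encode_tree(tree, W_e, b):
    """Return (node vector, list of leaf vectors) for a nested-tuple tree."""
    if isinstance(tree, np.ndarray):              # leaf: a word vector
        return tree, [tree]
    left, right = tree
    c1, l1 = encode_tree(left, W_e, b)
    c2, l2 = encode_tree(right, W_e, b)
    p = np.tanh(W_e @ np.concatenate([c1, c2]) + b)
    return p, l1 + l2

def unfold(p, tree, W_d, b_d):
    """Decode p down the given tree shape; return reconstructed leaf vectors."""
    if isinstance(tree, np.ndarray):
        return [p]
    c = np.tanh(W_d @ p + b_d)                    # split decoded vector in half
    n = len(c) // 2
    left, right = tree
    return unfold(c[:n], left, W_d, b_d) + unfold(c[n:], right, W_d, b_d)

def unfolding_error(tree, W_e, b, W_d, b_d):
    """Eq. 6 at the root: squared distance between leaves and reconstructions."""
    p, leaves = encode_tree(tree, W_e, b)
    rec = unfold(p, tree, W_d, b_d)
    return sum(np.sum((x - xr) ** 2) for x, xr in zip(leaves, rec))

# Example with the tree of Fig. 2: y1 = (x2, x3), y2 = (x1, y1).
rng = np.random.default_rng(0)
n = 4
W_e, b   = rng.normal(scale=0.1, size=(n, 2 * n)), np.zeros(n)
W_d, b_d = rng.normal(scale=0.1, size=(2 * n, n)), np.zeros(2 * n)
x1, x2, x3 = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
print(unfolding_error((x1, (x2, x3)), W_e, b, W_d, b_d))
```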
2.4 Deep Recursive Autoencoder
Both types of RAE can be extended to have multiple encoding layers at each node in the tree. Instead
of transforming both children directly into parent p, we can have another hidden layer h in between.
While the top layer at each node has to have the same dimensionality as each child (in order for
the same network to be recursively compatible), the hidden layer may have arbitrary dimensionality.
For the two-layer encoding network, we would replace Eq. 1 with the following:
h = f(We^(1) [c1; c2] + be^(1)),    (7)
p = f(We^(2) h + be^(2)).    (8)
2.5 RAE Training
For training we use a set of parse trees and then minimize the sum of all nodes' reconstruction errors.
We compute the gradient efficiently via backpropagation through structure [13]. Even though the
objective is not convex, we found that L-BFGS run with mini-batch training works well in practice.
Convergence is smooth and the algorithm typically finds a good locally optimal solution.
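The optimization itself is standard once all parameters are packed into one flat vector. The sketch below shows only the mini-batch L-BFGS outer loop; `rae_loss_and_grad` is a stand-in for the actual objective of Eq. 3 with gradients from backpropagation through structure [13].

```python
# Sketch of the training loop only, not the structure-backprop gradient itself.
import numpy as np
from scipy.optimize import minimize

def rae_loss_and_grad(theta, batch):
    # Placeholder: in the real model this evaluates Eq. 3 over the batch and
    # returns (loss, gradient) from backpropagation through structure [13].
    return 0.5 * np.dot(theta, theta), theta

def train_rae(theta0, trees, batch_size=512, n_epochs=5):
    theta = theta0
    for _ in range(n_epochs):
        for start in range(0, len(trees), batch_size):
            batch = trees[start:start + batch_size]
            res = minimize(rae_loss_and_grad, theta, args=(batch,),
                           jac=True, method="L-BFGS-B",
                           options={"maxiter": 10})
            theta = res.x
    return theta

theta = train_rae(np.ones(10), [None] * 1000)    # toy usage with the stub
```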
After the unsupervised training of the RAE, we demonstrate that the learned feature representations
capture syntactic and semantic similarities and can be used for paraphrase detection.
3 An Architecture for Variable-Sized Similarity Matrices
Now that we have described the unsupervised feature learning, we explain how to use
these features to classify sentence pairs as being in a paraphrase relationship or not.
3.1 Computing Sentence Similarity Matrices
Our method incorporates both single word and phrase similarities
in one framework. First, the RAE computes phrase vectors for the
nodes in a given parse tree. We then compute Euclidean distances
between all word and phrase vectors of the two sentences. These
distances fill a similarity matrix S as shown in Fig. 1. For computing the similarity matrix, the rows and columns are first filled by the
words in their original sentence order. We then add to each row and
column the nonterminal nodes in a depth-first, right-to-left order.
Simply extracting aggregate statistics of this table such as the average distance or a histogram of distances cannot accurately capture the global structure of the similarity comparison. For instance,
paraphrases often have low or zero Euclidean distances in elements
close to the diagonal of the similarity matrix. This happens when
similar words align well between the two sentences. However, since
the matrix dimensions vary based on the sentence lengths one cannot simply feed the similarity matrix into a standard neural network
or classifier.
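Filling S reduces to a pairwise distance computation, roughly as in this sketch (the node ordering is assumed to follow the convention above: words first, then nonterminals in depth-first, right-to-left order).

```python
# Sketch of Sec. 3.1: given the word and nonterminal vectors of two sentences,
# fill the similarity matrix S with pairwise Euclidean distances.
import numpy as np

def similarity_matrix(nodes_a, nodes_b):
    A = np.stack(nodes_a)                        # (2n-1) x d for n words
    B = np.stack(nodes_b)                        # (2m-1) x d for m words
    diff = A[:, None, :] - B[None, :, :]         # broadcast all pairs
    return np.sqrt(np.sum(diff ** 2, axis=-1))   # S, shape (2n-1) x (2m-1)
```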
Figure 3: Example of the dynamic min-pooling layer finding the smallest number in a pooling window region of the original similarity matrix S.
3.2 Dynamic Pooling
Consider a similarity matrix S generated by sentences of lengths n and m. Since the parse trees are binary and we also compare all nonterminal nodes, S ∈ R^{(2n−1)×(2m−1)}. We would like to map S into a matrix Spooled of fixed size, np × np. Our first step in constructing such a map is to partition the rows and columns of S into np roughly equal parts, producing an np × np grid.[1] We then define Spooled to be the matrix of minimum values of each rectangular region within this grid, as shown in Fig. 3.
The matrix Spooled loses some of the information contained in the original similarity matrix but it still
captures much of its global structure. Since elements of S with small Euclidean distances show that
[1] The partitions will only be of equal size if 2n − 1 and 2m − 1 are divisible by np. We account for this in the following way, although many alternatives are possible. Let the number of rows of S be R = 2n − 1. Each pooling window then has ⌊R/np⌋ many rows. Let M = R mod np be the number of remaining rows. We then evenly distribute these extra rows to the last M window regions, which will have ⌊R/np⌋ + 1 rows. The same procedure applies to the number of columns for the windows. This procedure will have a slightly finer granularity for the single word similarities, which is desired for our task since word overlap is a good indicator for paraphrases. In the rare cases when np > R, the pooling layer needs to first up-sample. We achieve this by simply duplicating pixels row-wise until R ≥ np.
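Putting the footnote's partitioning rule together with min-pooling gives something like the following sketch; the per-matrix normalization at the end is a simplification (the text normalizes each entry, presumably across the training set).

```python
# Sketch of dynamic min-pooling: floor(R/np) rows per window, the last
# R mod np windows get one extra row, same for columns; then take the min
# of each window. Up-sampling by duplication handles the rare case R < np.
import numpy as np

def window_bounds(R, n_p):
    base, extra = divmod(R, n_p)
    sizes = [base + (1 if i >= n_p - extra else 0) for i in range(n_p)]
    return np.cumsum([0] + sizes)                # n_p + 1 boundaries, ends at R

def dynamic_min_pool(S, n_p=15):
    R, C = S.shape
    if R < n_p:                                  # up-sample rows by duplication
        S = np.repeat(S, int(np.ceil(n_p / R)), axis=0)
        R = S.shape[0]
    if C < n_p:                                  # up-sample columns likewise
        S = np.repeat(S, int(np.ceil(n_p / C)), axis=1)
        C = S.shape[1]
    rb, cb = window_bounds(R, n_p), window_bounds(C, n_p)
    pooled = np.empty((n_p, n_p))
    for i in range(n_p):
        for j in range(n_p):
            pooled[i, j] = S[rb[i]:rb[i + 1], cb[j]:cb[j + 1]].min()
    # Approximation: normalize per matrix; the paper normalizes each entry
    # to zero mean and unit variance (over the training data).
    return (pooled - pooled.mean()) / (pooled.std() + 1e-8)
```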
Center Phrase | Recursive Average | RAE | Unfolding RAE
the U.S. | the U.S. and German | the Swiss | the former U.S.
suffering low morale | suffering a 1.9 billion baht UNK 76 million | suffering due to no fault of my own | suffering heavy casualties
to watch hockey | to watch one Jordanian border policeman stamp the Israeli passports | to watch television | to watch a video
advance to the next round | advance to final qualifying round in Argentina | advance to the semis | advance to the final of the UNK 1.1 million Kremlin Cup
a prominent political figure | such a high-profile figure | a powerful business figure | the second high-profile opposition figure
Seventeen people were killed | "Seventeen people were killed, including a prominent politician" | Fourteen people were killed | Fourteen people were killed
conditions of his release | "conditions of peace, social stability and political harmony" | conditions of peace, social stability and political harmony | negotiations for their release

Table 1: Nearest neighbors of randomly chosen phrases. Recursive averaging and the standard RAE focus mostly on the last merged words and incorrectly add extra information. The unfolding RAE captures most closely both syntactic and semantic similarities.
there are similar words or phrases in both sentences, we keep this information by applying a min
function to the pooling regions. Other functions, like averaging, are also possible, but might obscure
the presence of similar phrases. This dynamic pooling layer could make use of overlapping pooling
regions, but for simplicity, we consider only non-overlapping pooling regions. After pooling, we
normalize each entry to have 0 mean and variance 1.
4 Experiments
For unsupervised RAE training we used a subset of 150,000 sentences from the NYT and AP sections of the Gigaword corpus. We used the Stanford parser [14] to create the parse trees for all
sentences. For initial word embeddings we used the 100-dimensional vectors computed via the
unsupervised method of Collobert and Weston [6] and provided by Turian et al. [8].
For all paraphrase experiments we used the Microsoft Research paraphrase corpus (MSRP) introduced by Dolan et al. [4]. The dataset consists of 5,801 sentence pairs. The average sentence
length is 21, the shortest sentence has 7 words and the longest 36. 3,900 are labeled as being in
the paraphrase relationship (technically defined as "mostly bidirectional entailment"). We use the
standard split of 4,076 training pairs (67.5% of which are paraphrases) and 1,725 test pairs (66.5%
paraphrases). All sentences were labeled by two annotators who agreed in 83% of the cases. A third
annotator resolved conflicts. During dataset collection, negative examples were selected to have
high lexical overlap to prevent trivial examples. For more information see [4, 15].
As described in Sec. 2.4, we can have deep RAE networks with two encoding or decoding layers.
The hidden RAE layer (see h in Eq. 8) was set to have 200 units for both standard and unfolding
RAEs.
4.1 Qualitative Evaluation of Nearest Neighbors
In order to show that the learned feature representations capture important semantic and syntactic
information even for higher nodes in the tree, we visualize nearest neighbor phrases of varying
length. After embedding sentences from the Gigaword corpus, we compute nearest neighbors for
all nodes in all trees. In Table 1 the first phrase is a randomly chosen phrase and the remaining
phrases are the closest phrases in the dataset that are not in the same sentence. We use Euclidean
distance between the vector representations. Note that we do not constrain the neighbors to have
the same word length. We compare the two autoencoder models above: RAE and unfolding RAE
without hidden layers, as well as a recursive averaging baseline (R.Avg). R.Avg recursively takes the
average of both child vectors in the syntactic tree. We only report results of RAEs without hidden
layers between the children and parent vectors. Even though the deep RAE networks have more
parameters to learn complex encodings they do not perform as well in this and the next task. This is
likely due to the fact that they get stuck in local optima during training.
Encoding Input | Generated Text from Unfolded Reconstruction
a December summit | a December summit
the first qualifying session | the first qualifying session
English premier division club | Irish presidency division club
the safety of a flight | the safety of a flight
the signing of the accord | the signing of the accord
the U.S. House of Representatives | the U.S. House of Representatives
enforcement of the economic embargo | enforcement of the national embargo
visit and discuss investment possibilities | visit and postpone financial possibilities
the agreement it made with Malaysia | the agreement it made with Malaysia
the full bloom of their young lives | the lower bloom of their democratic lives
the organization for which the men work | the organization for Romania the reform work
a pocket knife was found in his suitcase in the plane's cargo hold | a bomb corpse was found in the mission in the Irish car language case

Table 2: Original inputs and generated output from unfolding and reconstruction. Words are the nearest neighbors to the reconstructed leaf node vectors. The unfolding RAE can reconstruct perfectly almost all phrases of 2 and 3 words and many with up to 5 words. Longer phrases start to get incorrect nearest neighbor words. For the standard RAE, good reconstructions are only possible for two words. Recursive averaging cannot recover any words.
Table 1 shows several interesting phenomena. Recursive averaging is almost entirely focused on an exact string match of the last merged words of the current phrase in the tree, and it ignores syntactic similarity. This leads its nearest neighbors to incorrectly add various extra information which would break the paraphrase relationship if we only considered the top node vectors. The standard
RAE does well though it is also somewhat focused on the last merges in the tree. Finally, the
unfolding RAE captures most closely the underlying syntactic and semantic structure.
4.2 Reconstructing Phrases via Recursive Decoding
In this section we analyze the information captured by the unfolding RAE's 100-dimensional phrase vectors. We show that these 100-dimensional vector representations can not only capture and memorize single words but also longer, unseen phrases.
In order to show how much of the information can be recovered we recursively reconstruct sentences
after encoding them. The process is similar to unfolding during training. It starts from a phrase
vector of a nonterminal node in the parse tree. We then unfold the tree as given during encoding
and find the nearest neighbor word to each of the reconstructed leaf node vectors. Table 2 shows
that the unfolding RAE can very well reconstruct phrases of up to length five. No other method that
we compared had such reconstruction capabilities. Longer phrases retain some correct words and
usually the correct part of speech but the semantics of the words get merged. The results are from
the unfolding RAE that directly computes the parent representation as in Eq. 1.
4.3 Evaluation on Full-Sentence Paraphrasing
We now turn to evaluating the unsupervised features and our dynamic pooling architecture in our
main task of paraphrase detection.
Methods which are based purely on vector representations invariably lose some information. For
instance, numbers often have very similar representations, but even small differences are crucial to
reject the paraphrase relation in the MSRP dataset. Hence, we add three number features. The first is
1 if two sentences contain exactly the same numbers or no number and 0 otherwise, the second is 1
if both sentences contain the same numbers and the third is 1 if the set of numbers in one sentence is
a strict subset of the numbers in the other sentence. Since our pooling-layer cannot capture sentence
length or the number of exact string matches, we also add the difference in sentence length and the
percentage of words and phrases in one sentence that are in the other sentence and vice-versa. We
also report performance without these three features (only S).
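A rough sketch of these hand-crafted features follows; the regular-expression number extractor and the word-level overlap (the text also counts phrases) are simplifying assumptions.

```python
# Sketch of the three number features plus length-difference and overlap.
import re

def numbers(sentence):
    # Illustrative extractor: integers and simple decimals.
    return set(re.findall(r"\d+(?:\.\d+)?", sentence))

def pair_features(s1, s2):
    n1, n2 = numbers(s1), numbers(s2)
    f1 = 1.0 if n1 == n2 else 0.0                # same numbers, or none at all
    f2 = 1.0 if (n1 and n1 == n2) else 0.0       # both contain the same numbers
    f3 = 1.0 if (n1 < n2 or n2 < n1) else 0.0    # strict subset, either way
    w1, w2 = s1.split(), s2.split()
    overlap12 = sum(w in w2 for w in w1) / max(len(w1), 1)
    overlap21 = sum(w in w1 for w in w2) / max(len(w2), 1)
    return [f1, f2, f3, abs(len(w1) - len(w2)), overlap12, overlap21]
```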
For all of our models and training setups, we perform 10-fold cross-validation on the training set to choose the best regularization parameters and np, the size of the pooled matrix Spooled ∈ R^{np×np}. In our best model, the regularization for the RAE was 10^{-5} and 0.05 for the softmax classifier. The
best pooling size was consistently np = 15, slightly less than the average sentence length. For all
sentence pairs (S1 , S2 ) in the training data, we also added (S2 , S1 ) to the training set in order to
make the most use of the training data. This improved performance by 0.2%.
Model | Acc. | F1
All Paraphrase Baseline | 66.5 | 79.9
Rus et al. (2008) [16] | 70.6 | 80.5
Mihalcea et al. (2006) [17] | 70.3 | 81.3
Islam and Inkpen (2007) [18] | 72.6 | 81.3
Qiu et al. (2006) [19] | 72.0 | 81.6
Fernando and Stevenson (2008) [20] | 74.1 | 82.4
Wan et al. (2006) [21] | 75.6 | 83.0
Das and Smith (2009) [15] | 73.9 | 82.3
Das and Smith (2009) + 18 Features | 76.1 | 82.7
Unfolding RAE + Dynamic Pooling | 76.8 | 83.6

Table 3: Test results on the MSRP paraphrase corpus. Comparisons of unsupervised feature learning methods, similarity feature extraction and supervised classification methods, and other approaches.
In our first set of experiments we compare several unsupervised feature learning methods: Recursive
averaging as defined in Sec. 4.1, standard RAEs and unfolding RAEs. For each of the three methods,
we cross-validate on the training data over all possible hyperparameters and report the best performance. We observe that the dynamic pooling layer is very powerful because it captures the global
structure of the similarity matrix which in turn captures the syntactic and semantic similarities of the
two sentences. With the help of this powerful dynamic pooling layer and good initial word vectors
even the standard RAE and recursive averaging perform well on this dataset with an accuracy of
75.5% and 75.9% respectively. We obtain the best accuracy of 76.8% with the unfolding RAE without hidden layers. We tried adding 1 and 2 hidden encoding and decoding layers but performance
only decreased by 0.2% and training became slower.
Next, we compare the dynamic pooling to simpler feature extraction methods. Our comparison
shows that the dynamic pooling architecture is important for achieving high accuracy. For every
setting we again exhaustively cross-validate on the training data and report the best performance.
The settings and their accuracies are:
(i) S-Hist: 73.0%. A histogram of values in the matrix S. The low performance shows that our
dynamic pooling layer better captures the global similarity information than aggregate statistics.
(ii) Only Feat: 73.2%. Only the three features described above. This shows that simple binary string
and number matching can detect many of the simple paraphrases but fails to detect complex cases.
(iii) Only Spooled : 72.6%. Without the three features mentioned above. This shows that some information still gets lost in Spooled and that a better treatment of numbers is needed. In order to better
recover exact string matches it may be necessary to explore overlapping pooling regions.
(iv) Top Unfolding RAE Node: 74.2%. Instead of Spooled , use Euclidean distance between the two
top sentence vectors. The performance shows that while the unfolding RAE is by itself very powerful, the dynamic pooling layer is needed to extract all information from its trees.
Table 3 shows our results compared to previous approaches (see next section). Our unfolding RAE
and dynamic similarity pooling architecture achieves state-of-the-art performance without handdesigned semantic taxonomies and features such as WordNet. Note that the effective range of the
accuracy lies between 66% (most frequent class baseline) and 83% (interannotator agreement).
In Table 4 we show several examples of correctly classified paraphrase candidate pairs together
with their similarity matrix after dynamic min-pooling. The first and last pair are simple cases of
paraphrase and not paraphrase. The second example shows a pooled similarity matrix when large
chunks are swapped in both sentences. Our model is very robust to such transformations and gives
a high probability to this pair. Even more complex examples such as the third with very few direct
string matches (few blue squares) are correctly classified. The second to last example is highly
interesting. Even though there is a clear diagonal with good string matches, the gap in the center
shows that the first sentence contains much extra information. This is also captured by our model.
5 Related Work
The field of paraphrase detection has progressed immensely in recent years. Early approaches were
based purely on lexical matching techniques [22, 23, 19, 24]. Since these methods are often based on
exact string matches of n-grams, they fail to detect similar meaning that is conveyed by synonymous
words. Several approaches [17, 18] overcome this problem by using Wordnet- and corpus-based
semantic similarity measures. In their approach they choose for each open-class word the single
most similar word in the other sentence. Fernando and Stevenson [20] improved upon this idea
by computing a similarity matrix that captures all pair-wise similarities of single words in the two
sentences. They then threshold the elements of the resulting similarity matrix and compute the mean
L=P, Pr=0.95:
(1) LLEYTON Hewitt yesterday traded his tennis racquet for his first sporting passion - Australian football - as the world champion relaxed before his Wimbledon title defence
(2) LLEYTON Hewitt yesterday traded his tennis racquet for his first sporting passion - Australian rules football - as the world champion relaxed ahead of his Wimbledon defence

L=P, Pr=0.82:
(1) The lies and deceptions from Saddam have been well documented over 12 years
(2) It has been well documented over 12 years of lies and deception from Saddam

L=P, Pr=0.67:
(1) Pollack said the plaintiffs failed to show that Merrill and Blodget directly caused their losses
(2) Basically, the plaintiffs did not show that omissions in Merrill's research caused the claimed losses

L=N, Pr=0.49:
(1) Prof Sally Baldwin, 63, from York, fell into a cavity which opened up when the structure collapsed at Tiburtina station, Italian railway officials said
(2) Sally Baldwin, from York, was killed instantly when a walkway collapsed and she fell into the machinery at Tiburtina station

L=N, Pr=0.44:
(1) Bremer, 61, is a onetime assistant to former Secretaries of State William P. Rogers and Henry Kissinger and was ambassador-at-large for counterterrorism from 1986 to 1989
(2) Bremer, 61, is a former assistant to former Secretaries of State William P. Rogers and Henry Kissinger

L=N, Pr=0.11:
(1) The initial report was made to Modesto Police December 28
(2) It stems from a Modesto police report

Table 4: Examples of sentence pairs with: ground truth labels L (P - Paraphrase, N - Not Paraphrase), the probabilities our model assigns to them (Pr(S1, S2) > 0.5 is assigned the label Paraphrase), and their similarity matrices after dynamic min-pooling (Sim.Mat. images omitted here). Simple paraphrase pairs have clear diagonal structure due to perfect word matches with Euclidean distance 0 (dark blue). That structure is preserved by our min-pooling layer. Best viewed in color. See text for details.
of the remaining entries. There are two shortcomings of such methods: They ignore (i) the syntactic
structure of the sentences (by comparing only single words) and (ii) the global structure of such a
similarity matrix (by computing only the mean).
Instead of comparing only single words, [21] adds features from dependency parses. Most recently,
Das and Smith [15] adopted the idea that paraphrases have related syntactic structure. Their quasi-synchronous
recognizer, a part-of-speech tagger, and the dependency labels from the aligned trees. In order to
obtain high performance they combine their parsing-based model with a logistic regression model
that uses 18 hand-designed surface features.
We merge these word-based models and syntactic models in one joint framework: Our matrix consists of phrase similarities and instead of just taking the mean of the similarities we can capture the
global layout of the matrix via our min-pooling layer.
The idea of applying an autoencoder in a recursive setting was introduced by Pollack [9] and extended recently by [10]. Pollack's recursive auto-associative memories are similar to ours in that
they are a connectionist, feedforward model. One of the major shortcomings of previous applications of recursive autoencoders to natural language sentences was their binary word representation
as discussed in Sec. 2.1. Recently, Bottou discussed related ideas of recursive autoencoders [25]
and recursive image and text understanding but without experimental results. Larochelle [26] investigated autoencoders with an unfolded "deep objective". Supervised recursive neural networks have
been used for parsing images and natural language sentences by Socher et al. [27, 28]. Lastly, [12]
introduced the standard recursive autoencoder as mentioned in Sect. 2.2.
6 Conclusion
We introduced an unsupervised feature learning algorithm based on unfolding recursive autoencoders. The RAE captures syntactic and semantic information as shown qualitatively with nearest
neighbor embeddings and quantitatively on a paraphrase detection task. Our RAE phrase features
allow us to compare both single word vectors as well as phrases and complete syntactic trees. In
order to make use of the global comparison of variable length sentences in a similarity matrix we
introduce a new dynamic pooling architecture that produces a fixed-sized representation. We show
that this pooled representation captures enough information about the sentence pair to determine the
paraphrase relationship on the MSRP dataset with a higher accuracy than any previously published
results.
References
[1] E. Marsi and E. Krahmer. Explorations in sentence fusion. In European Workshop on Natural Language
Generation, 2005.
[2] P. Clough, R. Gaizauskas, S. S. L. Piao, and Y. Wilks. METER: MEasuring TExt Reuse. In ACL, 2002.
[3] C. Callison-Burch. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings
of EMNLP, pages 196-205, 2008.
[4] B. Dolan, C. Quirk, and C. Brockett. Unsupervised construction of large paraphrase corpora: exploiting
massively parallel news sources. In COLING, 2004.
[5] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. J. Mach.
Learn. Res., 3, March 2003.
[6] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks
with multitask learning. In ICML, 2008.
[7] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[8] J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384-394, 2010.
[9] J. B. Pollack. Recursive distributed representations. Artificial Intelligence, 46, November 1990.
[10] T. Voegtlin and P. Dominey. Linear Recursive Distributed Representations. Neural Networks, 18(7), 2005.
[11] J. L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine
Learning, 7(2-3), 1991.
[12] R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. Semi-Supervised Recursive
Autoencoders for Predicting Sentiment Distributions. In EMNLP, 2011.
[13] C. Goller and A. Küchler. Learning task-dependent distributed representations by backpropagation
through structure. In Proceedings of the International Conference on Neural Networks (ICNN-96), 1996.
[14] D. Klein and C. D. Manning. Accurate unlexicalized parsing. In ACL, 2003.
[15] D. Das and N. A. Smith. Paraphrase identification as probabilistic quasi-synchronous recognition. In Proc. of ACL-IJCNLP, 2009.
[16] V. Rus, P. M. McCarthy, M. C. Lintean, D. S. McNamara, and A. C. Graesser. Paraphrase identification
with lexico-syntactic graph subsumption. In FLAIRS Conference, 2008.
[17] R. Mihalcea, C. Corley, and C. Strapparava. Corpus-based and Knowledge-based Measures of Text Semantic Similarity. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1,
2006.
[18] A. Islam and D. Inkpen. Semantic Similarity of Short Texts. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2007), 2007.
[19] L. Qiu, M. Kan, and T. Chua. Paraphrase recognition via dissimilarity significance classification. In
EMNLP, 2006.
[20] S. Fernando and M. Stevenson. A semantic similarity approach to paraphrase detection. Proceedings of
the 11th Annual Research Colloquium of the UK Special Interest Group for Computational Linguistics,
2008.
[21] S. Wan, M. Dras, R. Dale, and C. Paris. Using dependency-based features to take the "para-farce" out of
paraphrase. In Proceedings of the Australasian Language Technology Workshop 2006, 2006.
[22] R. Barzilay and L. Lee. Learning to paraphrase: an unsupervised approach using multiple-sequence
alignment. In NAACL, 2003.
[23] Y. Zhang and J. Patrick. Paraphrase identification by text canonicalization. In Proceedings of the Australasian Language Technology Workshop 2005, 2005.
[24] Z. Kozareva and A. Montoyo. Paraphrase Identification on the Basis of Supervised Machine Learning
Techniques. In Advances in Natural Language Processing, 5th International Conference on NLP, FinTAL,
2006.
[25] L. Bottou. From machine learning to machine reasoning. CoRR, abs/1102.1808, 2011.
[26] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural
networks. JMLR, 10, 2009.
[27] R. Socher, C. D. Manning, and A. Y. Ng. Learning continuous phrase representations and syntactic parsing
with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised
Feature Learning Workshop, 2010.
[28] R. Socher, C. Lin, A. Y. Ng, and C.D. Manning. Parsing Natural Scenes and Natural Language with
Recursive Neural Networks. In ICML, 2011.
3,540 | 4,205 | Emergence of Multiplication in a Biophysical Model
of a Wide-Field Visual Neuron for Computing Object
Approaches: Dynamics, Peaks, & Fits
Matthias S. Keil*
Department of Basic Psychology
University of Barcelona
E-08035 Barcelona, Spain
[email protected]
Abstract
Many species show avoidance reactions in response to looming object approaches.
In locusts, the corresponding escape behavior correlates with the activity of the
lobula giant movement detector (LGMD) neuron. During an object approach, its
firing rate was reported to gradually increase until a peak is reached, and then
it declines quickly. The η-function predicts that the LGMD activity is a product between an exponential function of angular size, exp(−Θ), and angular velocity, Θ̇, and that peak activity is reached before time-to-contact (ttc). The η-function has
become the prevailing LGMD model because it reproduces many experimental
observations, and even experimental evidence for the multiplicative operation was
reported. Several inconsistencies remain unresolved, though. Here we address these issues with a new model (ψ-model), which explicitly connects Θ and Θ̇ to biophysical quantities. The ψ-model avoids biophysical problems associated with implementing exp(·), implements the multiplicative operation of η via divisive inhibition, and explains why activity peaks could occur after ttc.
predicts response features of the LGMD, and provides excellent fits to published
experimental data, with goodness of fit measures comparable to corresponding fits
with the η-function.
1 Introduction: τ and η
Collision sensitive neurons were reported in species as different as monkeys [5, 4], pigeons
[36, 34], frogs [16, 20], and insects [33, 26, 27, 10, 38]. This indicates a high ecological relevance,
and raises the question about how neurons compute a signal that eventually triggers corresponding
movement patterns (e.g. escape behavior or interceptive actions). Here, we will focus on visual
stimulation. Consider, for simplicity, a circular object (diameter 2l), which approaches the eye at
a collision course with constant velocity v. If we do not have any a priori knowledge about the
object in question (e.g. its typical size or speed), then we will be able to access only two information
sources. These information sources can be measured at the retina and are called optical variables
(OVs). The first is the visual angle Θ, which can be derived from the number of stimulated photoreceptors (spatial contrast). The second is its rate of change dΘ(t)/dt ≡ Θ̇(t). Angular velocity Θ̇ is related to temporal contrast.
How should we combine Θ and Θ̇ in order to track an imminent collision? The perhaps simplest combination is τ(t) ≈ Θ(t)/Θ̇(t) [13, 18]. If the object hit us at time tc, then τ(t) ≈ tc − t will

* Also: www.ir3c.ub.edu, Research Institute for Brain, Cognition, and Behaviour (IR3C), Edifici de Ponent, Campus Mundet, Universitat de Barcelona, Passeig Vall d'Hebron, 171. E-08035 Barcelona
give us a running estimation of the time that is left until contact.[1] Moreover, we do not need to know anything about the approaching object: The ttc estimation computed by τ is practically independent of object size and velocity. Neurons with τ-like responses were indeed identified in the nucleus rotundus of the pigeon brain [34]. In humans, only fast interceptive actions seem to rely exclusively on τ [37, 35]. Accurate ttc estimation, however, seems to involve further mechanisms (rate of disparity change [31]).
Another function of OVs with biological relevance is η ≡ Θ̇ exp(−αΘ), with α = const. [10].
While η-type neurons were found again in pigeons [34] and bullfrogs [20], most data were gathered from the LGMD[2] in locusts (e.g. [10, 9, 7, 23]). The η-function is a phenomenological model for the LGMD, and implies three principal hypotheses: (i) An implementation of an exponential function exp(·). Exponentiation is thought to take place in the LGMD axon, via active membrane conductances [8]. Experimental data, though, seem to favor a third-power law rather than exp(·). (ii) The LGMD carries out biophysical computations for implementing the multiplicative operation. It has been suggested that multiplication is done within the LGMD itself, by subtracting the logarithmically encoded variables log Θ̇ − αΘ [10, 8]. (iii) The peak of the η-function occurs before ttc, at visual angle Θ(t̂) = 2 arctan(1/α) [9]. It follows ttc for certain stimulus configurations (e.g. l/|v| ≲ 5 ms). In principle, t̂ > tc can be accounted for by η(t + δ) with a fixed delay δ < 0 (e.g. −27 ms). But other researchers observed that LGMD activity continues to rise after ttc even for l/|v| ≳ 5 ms [28]. These discrepancies remain unexplained so far [29], but stimulation dynamics perhaps plays a role.

We will address these three issues by comparing the novel function ψ with the η-function.
2 LGMD computations with the ψ-function: No multiplication, no exponentiation
A circular object which starts its approach at distance x0 and with speed v projects a visual angle
Θ(t) = 2 arctan[l/(x0 − vt)] on the retina [34, 9]. The kinematics is hence entirely specified by the half-size-to-velocity ratio l/|v|, and x0. Furthermore, Θ̇(t) = 2lv/((x0 − vt)² + l²).
In order to define ψ, we consider at first the LGMD neuron as an RC-circuit with membrane potential[3] V [17]:

Cm dV/dt = β (Vrest − V) + gexc (Vexc − V) + ginh (Vinh − V)    (1)

Cm = membrane capacity[4]; β ≡ 1/Rm denotes leakage conductance across the cell membrane (Rm: membrane resistance); gexc and ginh are excitatory and inhibitory inputs. Each conductance gi (i = exc, inh) can drive the membrane potential to its associated reversal potential Vi (usually Vinh ≤ Vexc). Shunting inhibition means Vinh = Vrest. Shunting inhibition lurks "silently" because it gets effective only if the neuron is driven away from its resting potential. With synaptic input, the neuron decays into its equilibrium state

V∞ ≡ (Vrest β + Vexc gexc + Vinh ginh) / (β + gexc + ginh)    (2)

according to V(t) = V∞ (1 − exp(−t/τm)). Without external input, V(t ≫ 1) ≈ Vrest. The time scale is set by τm. Without synaptic input, τm ≡ Cm/β. Slowly varying inputs gexc, ginh > 0 modify the time scale to approximately τm/(1 + (gexc + ginh)/β). For highly dynamic inputs, such as in the late phase of the object approach, the time scale gets dynamical as well. The ψ-model assigns
synaptic inputs[5]

gexc(t) = Θ̂̇(t),   Θ̂̇(t) = ζ1 Θ̂̇(t − Δtstim) + (1 − ζ1) Θ̇(t)    (3a)
ginh(t) = [γ Θ̂(t)]^e,   Θ̂(t) = ζ0 Θ̂(t − Δtstim) + (1 − ζ0) Θ(t)    (3b)

[1] This linear approximation gets worse with increasing Θ, but turns out to work well until short before ttc (τ adopts a minimum at tc − 0.428978 · l/|v|).
[2] LGMD activity is usually monitored via its postsynaptic neuron, the Descending Contralateral Movement Detector (DCMD) neuron. This represents no problem as LGMD spikes follow DCMD spikes 1:1 under visual stimulation [22] from 300 Hz [21] to at least 400 Hz [24].
[3] Here we assume that the membrane potential serves as a predictor for the LGMD's mean firing rate.
[4] Set to unity for all simulations.
[5] LGMD receives also inhibition from a laterally acting network [21]. The η-function considers only direct feedforward inhibition [22, 6], and so do we.
[Figure 1 plots omitted: (a) discretized optical variables; (b) ψ versus η.]
Figure 1: (a) The continuous visual angle of an approaching object is shown along with its discretized version. Discretization transforms angular velocity from a continuous variable into a series of "spikes" (rescaled). (b) The ψ function with the inputs shown in (a), with nrelax = 25 relaxation time steps. Its peak occurs tmax = 56 ms before ttc (tc = 300 ms). An η function (α = 3.29) that was fitted to ψ shows good agreement. For continuous optical variables, the peak would occur 4 ms earlier, and η would have α = 4.44 with R² = 1. For nrelax = 10, ψ is farther away from its equilibrium at V∞, and its peak moves 19 ms closer to ttc.
[Figure 2 plots omitted: (a) different nrelax; (b) different Δtstim.]
Figure 2: The figures plot the relative time tmax ≡ tc − t̂ of the response peak of ψ, V(t̂), as a function of half-size-to-velocity ratio (points). Line fits with slope α and intercept δ were added (lines). The predicted linear relationship in all cases is consistent with experimental evidence [9]. (a) The stimulus time scale is held constant at Δtstim = 1 ms, and several LGMD time scales are defined by nrelax (= number of intercalated relaxation steps for each integration time step). Bigger values of nrelax move V(t) closer to its equilibrium V∞(t), implying higher slopes α in turn. (b) LGMD time scale is fixed at nrelax = 25, and Δtstim is manipulated. Because of the discretization of optical variables (OVs) in our simulation, increasing Δtstim translates to an overall smaller number of jumps in OVs, but each with higher amplitude.
Thus, we say ψ(t) ≡ V(t) if and only if gexc and ginh are defined with the last equation. The time scale of stimulation is defined by Δtstim (by default 1 ms). The variables Θ̂ and Θ̂̇ are lowpass filtered angular size and rate of expansion, respectively. The amount of filtering is defined by memory constants ζ0 and ζ1 (no filtering if zero). The idea is to continue with generating synaptic input after ttc, where Θ(t > tc) = const and thus Θ̇(t > tc) = 0. Inhibition is first weighted by γ, and then potentiated by the exponent e. Hodgkin-Huxley potentiates gating variables n, m ∈ [0, 1] instead (potassium ∝ n⁴, sodium ∝ m³ [12]) and multiplies them with conductances. Gabbiani and co-workers found that the function which transforms membrane potential to firing rate is better described by a power function with e = 3 than by exp(·) (Figure 4d in [8]).
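For readers who want to reproduce the qualitative behavior, the following sketch integrates Eqs. 1-3 with a simple Euler scheme; parameter values echo the figure captions, while the step size and the omission of discretization and lowpass filtering (ζ0 = ζ1 = 0) are simplifying assumptions.

```python
# Minimal Euler-integration sketch of the psi-model (Eqs. 1-3) for a looming
# stimulus. beta, gamma, e, Vinh follow the figure captions; everything else
# (step size, no discretization/filtering) is an illustrative simplification.
import numpy as np

l, v, x0 = 0.06, 2.0, 1.0            # half-size [m], speed [m/s], start [m]
tc = x0 / v                          # time to contact: l/|v| = 30 ms here
dt = 1e-4                            # integration step [s]
beta, gamma, e_exp, Vinh = 1.0, 7.5, 3.0, -0.001
Vrest, Vexc, Cm = 0.0, 1.0, 1.0

t = np.arange(0.0, tc, dt)
theta = 2.0 * np.arctan(l / (x0 - v * t))                # visual angle
theta_dot = 2.0 * l * v / ((x0 - v * t) ** 2 + l ** 2)   # rate of expansion

V = Vrest
trace = np.empty_like(t)
for i in range(len(t)):
    g_exc = theta_dot[i]                         # Eq. 3a, unfiltered
    g_inh = (gamma * theta[i]) ** e_exp          # Eq. 3b, unfiltered
    dV = (beta * (Vrest - V) + g_exc * (Vexc - V) + g_inh * (Vinh - V)) / Cm
    V += dt * dV                                 # Euler step of Eq. 1
    trace[i] = V

print("peak %.1f ms before ttc" % (1e3 * (tc - t[np.argmax(trace)])))
```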
3 Dynamics of the ψ-function
Discretization. In a typical experiment, a monitor is placed a short distance away from the insect's eye, and an approaching object is displayed. Computer screens have a fixed spatial resolution, and as a consequence size increments of the displayed object proceed in discrete jumps. The locust retina is furthermore composed of a discrete array of ommatidia units. We therefore can expect a corresponding step-wise increment of Θ with time, although optical and neuronal filtering may smooth Θ to some extent again, resulting in Θ̂ (figure 1). Discretization renders Θ̇ discontinuous, what again will be alleviated in Θ̂̇. For simulating the dynamics of ψ, we discretized angular size with floor(Θ), and Θ̇(t) ≈ [Θ(t + Δtstim) − Θ(t)]/Δtstim. Discretized optical variables (OVs) were re-normalized to match the range of original (i.e. continuous) OVs.
To peak, or not to peak? Rind & Simmons reject the hypothesis that the activity peak signals impending collision on grounds of two arguments [28]: (i) If Θ(t + Δtstim) − Θ(t) ≳ 3° in consecutively displayed stimulus frames, the illusion of an object approach would be lost. Such stimulation would rather be perceived as a sequence of rapidly appearing (but static) objects, causing reduced responses. (ii) After the last stimulation frame has been displayed (that is, Θ = const), LGMD responses keep on building up beyond ttc. This behavior clearly depends on l/|v|, also according to their own data (e.g. Figure 4 in [26]): Response build up after ttc is typically observed for sufficiently small values of l/|v|. Input into ψ in situations where Θ = const and Θ̇ = 0, respectively, is accommodated by Θ̂ and Θ̂̇, respectively.
We simulated (i) by setting Δtstim = 5 ms, thus producing larger and more infrequent jumps in discrete OVs than with Δtstim = 1 ms (default). As a consequence, Θ̂(t) grows more slowly (delayed build up of inhibition), and the peak occurs later (tmax ≡ tc − t̂ = 10 ms with everything else identical with figure 1b). The peak amplitude V̂ ≡ V(t̂) decreases nearly sixfold with respect to default. Our model thus predicts the reduced responses observed by Rind & Simmons [28].
Linearity. Time of peak firing rate is linearly related to l/|v| [10, 9]. The η-function is consistent with this experimental evidence: t̂ = tc − αl/|v| + δ (e.g. α = 4.7, δ = −27 ms). The ψ-function reproduces this relationship as well (figure 2), where α depends critically on the time scale of biophysical processes in the LGMD. We studied the impact of this time scale by choosing 10 μs for the numerical integration of equation 1 (algorithm: 4th order Runge-Kutta). Apart from improving the numerical stability of the integration algorithm, ψ is far from its equilibrium V∞(t) in every moment t, given the stimulation time scale Δtstim = 1 ms.[6] Now, at each value of Θ(t) and Θ̇(t), respectively, we intercalated nrelax iterations for integrating ψ. Each iteration takes V(t) asymptotically closer to V∞(t), and lim_{nrelax → ∞} V(t) = V∞(t). If the internal processes in the LGMD cannot keep up with stimulation (nrelax = 0), we obtain slope values that underestimate experimentally found values (figure 2a). In contrast, for nrelax ≳ 25 we get an excellent agreement with the experimentally determined α. This means that, under the reported experimental stimulation conditions (e.g. [9]), the LGMD would operate relatively close to its steady state.[7]
Now we fix nrelax at 25 and manipulate Δtstim instead (figure 2b). The default value Δtstim = 1 ms corresponds to α = 3.91. Slightly bigger values of Δtstim (2.5 ms and 5 ms) underestimate the experimental α. In addition, the line fits also return smaller intercept values then. We see tmax < 0 up to l/|v| ≈ 13.5 ms: LGMD activity peaks after ttc! Or, in other words, LGMD activity continues to increase after ttc. In the limit, where stimulus dynamics is extremely fast, and LGMD processes are kept far from equilibrium at each instant of the approach, α gets very small. As a consequence, tmax gets largely independent of l/|v|: The activity peak would cling to tmax although we varied l/|v|.
4 Freeze! Experimental data versus steady state of "psi"

In the previous section, experimentally plausible values for α were obtained if ψ is close to equilibrium at each instant of time during stimulation. In this section we will thus introduce a steady-state

[6] Assuming one Δtstim for each integration time step. This means that by default stimulation and biophysical dynamics will proceed at identical time scales.
[7] Notice that in this moment we can only make relative statements - we do not have data at hand for defining absolute time scales.
[Figure 3 plots omitted: (a) β varies; (b) e varies; (c) γ varies; all with tc = 500 ms, v = 2.00 m/s.]
Figure 3: Each curve shows how the peak ψ̂∞ ≡ ψ∞(t̂) depends on the half-size-to-velocity ratio. In each display, one parameter of ψ∞ is varied (legend), while the others are held constant (figure title). Line slopes vary according to parameter values. Symbol sizes are scaled according to rmse (see also figure 4). Rmse was calculated between normalized ψ∞(t) and normalized η(t) (i.e. both functions ∈ [0, 1], with original minimum and maximum indicated by the textbox). To this end, the peak of the η-function was placed at tc, by choosing, at each parameter value, α = |v| · (tc − t̂)/l (for determining correlation, the mean value of α was taken across l/|v|).
[Figure 4 plots omitted: (a) β varies; (b) e varies; (c) γ varies; all with tc = 500 ms, v = 2.00 m/s.]
Figure 4: This figure complements figure 3. It visualizes the time-averaged absolute difference between normalized ψ∞(t) and normalized η(t). For η, its value of α was chosen such that the maxima of both functions coincide. Although not being a fit, it gives a rough estimate of how the shapes of both curves deviate from each other. The maximum possible difference would be one.
version of ψ (i.e. equation 2 with Vrest = 0, Vexc = 1, and equations 3 plugged in),

ψ∞(t) ≡ (Θ̇(t) + Vinh [γΘ(t)]^e) / (β + Θ̇(t) + [γΘ(t)]^e)    (4)

(Here we use continuous versions of angular size and rate of expansion.) The ψ∞-function makes life easier when it comes to fitting experimental data. However, it has its limitations, because we brushed the whole dynamic of ψ under the carpet. Figure 3 illustrates how the linear relationship ("linearity") between tmax ≡ tc − t̂ and l/|v| is influenced by changes in parameter values. Changing any of the values of e, β, γ predominantly causes variation in line slopes. The smallest slope changes are obtained by varying Vinh (data not shown; we checked Vinh = 0, −0.001, −0.01, −0.1). For Vinh ≲ −0.01, linearity is getting slightly compromised, as slope increases with l/|v| (e.g. Vinh = −1 ⇒ α ∈ [4.2, 4.7]).
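Because Eq. 4 is a closed-form expression, comparing ψ∞ against η takes only a few lines of code; the sketch below does so for one l/|v|, with the value of α for η an arbitrary illustrative choice.

```python
# Sketch: evaluate the steady state psi_inf (Eq. 4) and the eta-function on the
# same approach, and compare their peak times. alpha for eta is an assumption.
import numpy as np

l, v, x0 = 0.06, 2.0, 1.0
tc = x0 / v
t = np.arange(0.0, tc, 1e-4)
theta = 2.0 * np.arctan(l / (x0 - v * t))
theta_dot = 2.0 * l * v / ((x0 - v * t) ** 2 + l ** 2)

beta, gamma, e_exp, Vinh = 2.5, 3.5, 3.0, -0.001
g_inh = (gamma * theta) ** e_exp
psi_inf = (theta_dot + Vinh * g_inh) / (beta + theta_dot + g_inh)   # Eq. 4

alpha = 4.0
eta = theta_dot * np.exp(-alpha * theta)

for name, f in [("psi_inf", psi_inf), ("eta", eta)]:
    print(name, "peaks %.1f ms before ttc" % (1e3 * (tc - t[np.argmax(f)])))
```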
In order to get a notion about how well the shape of ψ∞(t) matches η(t), we computed time-averaged difference measures between normalized versions of both functions (details: figures 3 & 4). Bigger values of β match η better at smaller, but worse at bigger, values of l/|v| (figure 4a). Smaller β cause less variation across l/|v|. As to variation of e, overall curve shapes seem to be best aligned with e = 3 to e = 4 (figure 4b). Furthermore, better matches between ψ∞(t) and η(t) correspond to bigger values of γ (figure 4c). And finally, Vinh marches again to a different tune (data not shown). Vinh = −0.1 leads to the best agreement (≈ 0.04 across l/|v|) of all Vinh, quite different from the other considered values. For the rest, ψ∞(t) and η(t) align the same (all have maximum 0.094),
[Figure 5 plots omitted: (a) Θ̇ = 126°/s; (b) Θ̇ = 63°/s.]
Figure 5: The original data (legend label "HaGaLa95") were resampled from ref. [10] and show DCMD responses to an object approach with Θ̇ = const. Thus, Θ increases linearly with time. The η-function (fitting function: Aη(t + δ) + o) and ψ∞ (fitting function: Aψ∞(t) + o) were fitted to these data: (a) (Figure 3Di in [10]) Good fits for ψ∞ are obtained with e = 5 or higher (e = 3: R² = 0.35 and rmse = 0.644; e = 4: R² = 0.45 and rmse = 0.592). "Psi" adopts a sigmoid-like curve form which (subjectively) appears to fit the original data better than η. (b) (Figure 3Dii in [10]) "Psi" yields an excellent fit for e = 3.
[Figure 6 plots: (a) spike trace of a gregarious locust, l/|v| = 30 ms (legend 'RoHaTo10', trial 'e011pos014'), Savitzky-Golay filtering ('sgolay', window 100), tmax = 107 ms, ttc = 5.00 s; time axis 3.4 to 5.2 s. Fit statistics: η adj. R² = 0.95 (LM:3); Ψ̃(t) adj. R² = 1 (TR:1). Ψ̃: R² = 0.95, rmse = 0.004, 3 coefficients (ζ = 2.22, γ = 0.70, e = 3.00, Vinh = −0.001, A = 0.07, o = 0.02, δ = 0.00 ms). η: R² = 1.00, rmse = 0.001 (α = 3.30, A = 0.08, o = 0.0, δ = −10.5 ms). (b) α versus γ.]
Figure 6: (a) DCMD activity in response to a black square (l/|v| = 30 ms, legend label 'e011pos14', ref. [30]) approaching the eye center of a gregarious locust (final visual angle 50°). Data show the first stimulation, so habituation is minimal. The spike trace (sampled at 10^4 Hz) was full-wave rectified, lowpass filtered, and sub-sampled to 1 ms resolution. Firing rate was estimated with Savitzky-Golay filtering ('sgolay'). The fits of the η-function (Aη(t + δ) + o; 4 coefficients) and the Ψ̃-function (AΨ̃(t) with fixed e, o, δ, Vinh; 3 coefficients) both provide excellent fits to the firing rate. (b) The fitting coefficient α (η-function) inversely correlates with γ (Ψ̃) when fitting the firing rates of another 5 trials as just described (continuous line = line fit to the data points). Similar correlation values would be obtained if e is fixed at values e = 2.5, 4, 5: c = −0.95, −0.96, −0.91. If o was determined by the fitting algorithm, then c = −0.70. No clear correlations with α were obtained for ζ.
despite covering different orders of magnitude with Vinh = 0, −0.001, −0.01.

Decelerating approach. Hatsopoulos et al. [10] recorded DCMD activity in response to an approaching object which projected image edges on the retina moving at constant velocity: Θ̇ = const implies Θ(t) = Θ0 + Θ̇t. This "linear approach" is perceived as if the object is getting increasingly slower. But what appears a relatively unnatural movement pattern serves as a test for the functions η & Ψ̃. Figure 5 illustrates that Ψ̃ passes the test, and consistently predicts that activity sharply rises in the initial approach phase, and subsequently declines (η passed this test already in the year 1995).
Spike traces. We re-sampled about 30 curves obtained from LGMD recordings from a variety of publications, and fitted η- & Ψ̃-functions. We cannot show the results here, but in terms of goodness-of-fit measures, both functions are in the same ballpark. Rather, figure 6a shows a representative example [30]. When α and γ are plotted against each other for five trials, we see a strong inverse correlation (figure 6b). Although five data points are by no means a firm statistical sample, the strong correlation could indicate that α and γ play similar roles in both functions. Biophysically, γ is the leakage conductance, which determines the (passive) membrane time constant τm ∝ 1/γ of the neuron. Voltage drops within τm to exp(−1) times its initial value. Bigger values of γ mean shorter τm (i.e., "faster neurons"). Getting back to α, this would suggest α ∝ τm, such that higher (absolute) values of α would possibly indicate a slower dynamic of the underlying processes.
5 Discussion ("The Good, the Bad, and the Ugly")
Up to now, mainly two classes of LGMD models existed: the phenomenological η-function on the one hand, and computational models with neuronal layers presynaptic to the LGMD on the other (e.g. [25, 15]; real-world video sequences & robotics: e.g. [3, 14, 32, 2]). Computational models predict that LGMD response features originate from excitatory and inhibitory interactions in, and between, presynaptic neuronal layers. Put differently, non-linear operations are generated in the presynaptic network, and can be a function of many (model) parameters (e.g. synaptic weights, time constants, etc.). In contrast, the η-function assigns concrete nonlinear operations to the LGMD [7]. The η-function is accessible to mathematical analysis, whereas computational models have to be probed with videos or artificial stimulus sequences. The η-function is vague about biophysical parameters, whereas (good) computational models need to be precise at each (model) parameter value. The η-function establishes a clear link between physical stimulus attributes and LGMD activity: it postulates what is to be computed from the optical variables (OVs). But in computational models, such a clear understanding of LGMD inputs cannot always be expected: presynaptic processing may strongly transform OVs.
The Ψ-function thus represents an intermediate model class: it takes OVs as input, and connects them with biophysical parameters of the LGMD. For the neurophysiologist, the situation could hardly be any better. Psi implements the multiplicative operation of the η-function by shunting inhibition (equation 1: Vexc ≫ Vrest and Vinh ≈ Vrest). The Ψ-function fits η very well according to our dynamical simulations (figure 1), and satisfactorily by the approximate criterion of figure 4.
We can conclude that Ψ implements the η-function in a biophysically plausible way. However, Ψ neither explicitly specifies η's multiplicative operation, nor its exponential function exp(·). Instead we have an interaction between shunting inhibition and a power law (·)^e, with e ≈ 3. So what about power laws in neurons?
Because of e > 1, we have an expansive nonlinearity. Expansive power-law nonlinearities are well established in phenomenological models of simple cells of the primate visual cortex [1, 11]. Such models approximate a simple cell's instantaneous firing rate r from linear filtering of a stimulus (say Y) by r ∝ ([Y]+)^e, where [·]+ sets all negative values to zero and lets all positive values pass. Although experimental evidence favors linear thresholding operations like r ∝ [Y − Ythres]+, neuronal responses can behave according to power-law functions if Y includes stimulus-independent noise [19]. Given this evidence, the power-law function of the inhibitory input into Ψ could possibly be interpreted as a phenomenological description of presynaptic processes.
The power law would also be the critical feature by means of which the neurophysiologist could distinguish between the η-function and Ψ. A study of Gabbiani et al. aimed to provide direct evidence for a neuronal implementation of the η-function [8]. Consequently, the study would be evidence for a biophysical implementation of "direct" multiplication via log Θ̇ − αΘ. Their experimental evidence fell somewhat short in the last part, where "exponentiation through active membrane conductances" should invert the logarithmic encoding. Specifically, the authors observed that "In 7 out of 10 neurons, a third-order power law best described the data" (sixth-order in one animal). Alea iacta est.
Acknowledgments
MSK would like to thank Stephen M. Rogers for kindly providing the recording data for compiling figure 6. MSK furthermore acknowledges support from the Spanish Government through the Ramón y Cajal program and the research grant DPI2010-21513.
References
[1] D.G. Albrecht and D.B. Hamilton, Striate cortex of monkey and cat: contrast response function, Journal of Neurophysiology 48 (1982), 217–237.
[2] S. Bermudez i Badia, U. Bernardet, and P.F.M.J. Verschure, Non-linear neuronal responses as an emergent property of afferent networks: A case study of the locust lobula giant movement detector, PLoS Computational Biology 6 (2010), no. 3, e1000701.
[3] M. Blanchard, F.C. Rind, and P.F.M.J. Verschure, Collision avoidance using a model of the locust LGMD neuron, Robotics and Autonomous Systems 30 (2000), 17–38.
[4] D.F. Cooke and M.S.A. Graziano, Super-flinchers and nerves of steel: Defensive movements altered by chemical manipulation of a cortical motor area, Neuron 43 (2004), no. 4, 585–593.
[5] L. Fogassi, V. Gallese, L. Fadiga, G. Luppino, M. Matelli, and G. Rizzolatti, Coding of peripersonal space in inferior premotor cortex (area F4), Journal of Neurophysiology 76 (1996), 141–157.
[6] F. Gabbiani, I. Cohen, and G. Laurent, Time-dependent activation of feed-forward inhibition in a looming-sensitive neuron, Journal of Neurophysiology 94 (2005), 2150–2161.
[7] F. Gabbiani, H.G. Krapp, N. Hatsopoulos, C.H. Mo, C. Koch, and G. Laurent, Multiplication and stimulus invariance in a looming-sensitive neuron, Journal of Physiology - Paris 98 (2004), 19–34.
[8] F. Gabbiani, H.G. Krapp, C. Koch, and G. Laurent, Multiplicative computation in a visual neuron sensitive to looming, Nature 420 (2002), 320–324.
[9] F. Gabbiani, H.G. Krapp, and G. Laurent, Computation of object approach by a wide-field, motion-sensitive neuron, Journal of Neuroscience 19 (1999), no. 3, 1122–1141.
[10] N. Hatsopoulos, F. Gabbiani, and G. Laurent, Elementary computation of object approach by a wide-field visual neuron, Science 270 (1995), 1000–1003.
[11] D.J. Heeger, Modeling simple-cell direction selectivity with normalized, half-squared, linear operators, Journal of Neurophysiology 70 (1993), 1885–1898.
[12] A.L. Hodgkin and A.F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, Journal of Physiology 117 (1952), 500–544.
[13] F. Hoyle, The Black Cloud, Penguin Books, London, 1957.
[14] M.S. Keil, E. Roca-Moreno, and A. Rodríguez-Vázquez, A neural model of the locust visual system for detection of object approaches with real-world scenes, Proceedings of the Fourth IASTED International Conference (Marbella, Spain), vol. 5119, 6-8 September 2004, pp. 340–345.
[15] M.S. Keil and A. Rodríguez-Vázquez, Towards a computational approach for collision avoidance with real-world scenes, Proceedings of SPIE: Bioengineered and Bioinspired Systems (Maspalomas, Gran Canaria, Canary Islands, Spain) (A. Rodríguez-Vázquez, D. Abbott, and R. Carmona, eds.), vol. 5119, SPIE - The International Society for Optical Engineering, 19-21 May 2003, pp. 285–296.
[16] J.G. King, J.Y. Lettvin, and E.R. Gruberg, Selective, unilateral, reversible loss of behavioral responses to looming stimuli after injection of tetrodotoxin or cadmium chloride into the frog optic nerve, Brain Research 841 (1999), no. 1-2, 20–26.
[17] C. Koch, Biophysics of Computation: Information Processing in Single Neurons, Oxford University Press, New York, 1999.
[18] D.N. Lee, A theory of visual control of braking based on information about time-to-collision, Perception 5 (1976), 437–459.
[19] K.D. Miller and T.W. Troyer, Neural noise can explain expansive, power-law nonlinearities in neuronal response functions, Journal of Neurophysiology 87 (2002), 653–659.
[20] H. Nakagawa and K. Hongjian, Collision-sensitive neurons in the optic tectum of the bullfrog, Rana catesbeiana, Journal of Neurophysiology 104 (2010), no. 5, 2487–2499.
[21] M. O'Shea and C.H.F. Rowell, Protection from habituation by lateral inhibition, Nature 254 (1975), 53–55.
[22] M. O'Shea and J.L.D. Williams, The anatomy and output connection of a locust visual interneurone: the lobula giant movement detector (LGMD) neurone, Journal of Comparative Physiology 91 (1974), 257–266.
[23] S. Peron and F. Gabbiani, Spike frequency adaptation mediates looming stimulus selectivity, Nature Neuroscience 12 (2009), no. 3, 318–326.
[24] F.C. Rind, A chemical synapse between two motion detecting neurones in the locust brain, Journal of Experimental Biology 110 (1984), 143–167.
[25] F.C. Rind and D.I. Bramwell, Neural network based on the input organization of an identified neuron signaling impending collision, Journal of Neurophysiology 75 (1996), no. 3, 967–985.
[26] F.C. Rind and P.J. Simmons, Orthopteran DCMD neuron: a reevaluation of responses to moving objects. I. Selective responses to approaching objects, Journal of Neurophysiology 68 (1992), no. 5, 1654–1666.
[27] F.C. Rind and P.J. Simmons, Orthopteran DCMD neuron: a reevaluation of responses to moving objects. II. Critical cues for detecting approaching objects, Journal of Neurophysiology 68 (1992), no. 5, 1667–1682.
[28] F.C. Rind and P.J. Simmons, Signaling of object approach by the DCMD neuron of the locust, Journal of Neurophysiology 77 (1997), 1029–1033.
[29] F.C. Rind and P.J. Simmons, Reply, Trends in Neurosciences 22 (1999), no. 5, 438.
[30] S.M. Rogers, G.W.J. Harston, F. Kilburn-Toppin, T. Matheson, M. Burrows, F. Gabbiani, and H.G. Krapp, Spatiotemporal receptive field properties of a looming-sensitive neuron in solitarious and gregarious phases of the desert locust, Journal of Neurophysiology 103 (2010), 779–792.
[31] S.K. Rushton and J.P. Wann, Weighted combination of size and disparity: a computational model for timing a ball catch, Nature Neuroscience 2 (1999), no. 2, 186–190.
[32] S. Yue, F.C. Rind, M.S. Keil, J. Cuadri, and R. Stafford, A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment, Neurocomputing 69 (2006), 1591–1598.
[33] G.R. Schlotterer, Response of the locust descending movement detector neuron to rapidly approaching and withdrawing visual stimuli, Canadian Journal of Zoology 55 (1977), 1372–1376.
[34] H. Sun and B.J. Frost, Computation of different optical variables of looming objects in pigeon nucleus rotundus neurons, Nature Neuroscience 1 (1998), no. 4, 296–303.
[35] J.R. Tresilian, Visually timed action: time-out for 'tau'?, Trends in Cognitive Sciences 3 (1999), no. 8.
[36] Y. Wang and B.J. Frost, Time to collision is signalled by neurons in the nucleus rotundus of pigeons, Nature 356 (1992), 236–238.
[37] J.P. Wann, Anticipating arrival: is the tau-margin a specious theory?, Journal of Experimental Psychology: Human Perception and Performance 22 (1996), 1031–1048.
[38] M. Wicklein and N.J. Strausfeld, Organization and significance of neurons that detect change of visual depth in the hawk moth Manduca sexta, The Journal of Comparative Neurology 424 (2000), no. 2, 356–376.
3,541 | 4,206 | History distribution matching method for predicting
effectiveness of HIV combination therapies
Jasmina Bogojeska
Max-Planck Institute for Computer Science
Campus E1 4
66123 Saarbrücken, Germany
[email protected]
Abstract
This paper presents an approach that predicts the effectiveness of HIV combination therapies by simultaneously addressing several problems affecting the available HIV clinical data sets: the different treatment backgrounds of the samples, the
uneven representation of the levels of therapy experience, the missing treatment
history information, the uneven therapy representation and the unbalanced therapy outcome representation. The computational validation on clinical data shows
that, compared to the most commonly used approach that does not account for
the issues mentioned above, our model has significantly higher predictive power.
This is especially true for samples stemming from patients with longer treatment
history and samples associated with rare therapies. Furthermore, our approach is
at least as powerful for the remaining samples.
1 Introduction
According to [18], more than 33 million people worldwide are infected with the human immunodeficiency virus (HIV), for which there exists no cure. HIV patients are treated by administration of
combinations of antiretroviral drugs, which succeed in suppressing the virus much longer than the
monotherapies based on a single drug. Eventually, the drug combinations also become ineffective
and need to be replaced. On such occasion, the very large number of potential therapy combinations
makes the manual search for an effective therapy increasingly impractical. The search is particulary
challenging for patients in the mid to late stages of antiretroviral therapy because of the accumulated
drug resistance from all previous therapies. The availability of large clinical data sets enables the
development of statistical methods that offer an automated procedure for predicting the outcome
of potential antiretroviral therapies. An estimate of the therapy outcome can assist physicians in
choosing a successful regimen for an HIV patient.
However, the HIV clinical data sets suffer from several problems. First of all, the clinical data
comprise therapy samples that originate from patients with different treatment backgrounds. Also
the various levels of therapy experience ranging from therapy-naïve to heavily pretreated are represented with different sample abundances. Second, the samples on different combination therapies
have widely differing frequencies. In particular, many therapies are only represented with very few
data points. Third, the clinical data do not necessarily have the complete information on all administered HIV therapies for all patients and the information on whether all administered therapies is
available or not is also missing for many of the patients. Finally, the imbalance between the effective and the ineffective therapies is increasing over time: due to the knowledge acquired from HIV
research and clinical practice the quality of treating HIV patients has largely increased in the recent
years rendering the amount of effective therapies in recently collected data samples much larger
than the amount of ineffective ones. These four problems create bias in the data sets which might
negatively affect the usefulness of the derived statistical models.
In this paper we present an approach that addresses all these problems simultaneously. To tackle the
issues of the uneven therapy representation and the different treatment backgrounds of the samples,
we use information on both the current therapy and the patient's treatment history. Additionally, our
method uses a distribution matching approach to account for the problems of missing information in
the treatment history and the growing gap between the abundances of effective and ineffective HIV
therapies over time. The performance of our history distribution matching approach is assessed by
comparing it with two common reference methods in the so-called time-oriented validation scenario,
where all models are trained on data from the more distant past, while their performance is assessed
on data from the more recent past. In this way we account for the evolving trends in composing drug
combination therapies for treating HIV patients.
Related work. Various statistical learning methods, including artificial neural networks, decision
trees, random forests, support vector machines (SVMs) and logistic regression [19, 11, 14, 10, 16,
1, 15], have been used to predict the effectiveness of HIV combination therapies from clinical data.
None of these methods considers the problems affecting the available clinical data sets: different
treatment backgrounds of the samples, uneven representations of therapies and therapy outcomes,
and incomplete treatment history information. Some approaches [2, 4] deal with the uneven therapy
representation by training a separate model for each combination therapy on all available samples
with properly derived sample weights. The weights reflect the similarities between the target therapy
and all training therapies. However, the therapy-specific approaches do not address the bias originating from the different treatment backgrounds of the samples, or the missing treatment history
information.
2 Problem setting
Each therapy sample comprises the viral genotype g represented as a binary vector indicating the occurrence of a set of resistance-relevant mutations, the therapy combination z encoded as a binary vector that indicates the individual drugs comprising the current therapy, the binary vector h representing the drugs administered in all known previous therapies, and the label y indicating the success (1) or failure (−1) of the therapy z. Let D = {(g1, z1, h1, y1), . . . , (gm, zm, hm, ym)}
denote the training set and let s refer to the therapy sample of interest. Let start(s) refer to the point
of time when the therapy s was started and patient(s) refer to the patient identifier corresponding
to the therapy sample s. Then:
r(s) = {z | (start(z) ≤ start(s)) and (patient(z) = patient(s))}
denotes the complete treatment data associated with the therapy sample s and will be referred to as
therapy sequence. It contains all known therapies administered to patient(s) not later than start(s)
ordered by their corresponding starting times. We point out that each therapy sequence also contains
the current therapy, i.e., the most recent therapy in the therapy sequence r(s) is s. Our goal is to train
a model f (g, s, h) that addresses the different types of bias associated with the available clinical data
sets when predicting the outcome of the therapy s. In the rest of the paper we denote the set of input
features (g, s, h) by x.
3 History distribution matching method
The main idea behind the history distribution matching method we present in this paper is that the
predictions for a given patient should originate from a model trained using samples from patients
with treatment backgrounds similar as the one of the target patient. The details of this method are
summarized in Algorithm 1. In what follows, we explain each step of this algorithm.
3.1 Clustering based on similarities of therapy sequences
Clustering partitions a set of objects into clusters, such that the objects within each cluster are more
similar to one another than to the objects assigned to a different cluster [7]. In the first step of
Algorithm 1, all available training samples are clustered based on the pairwise dissimilarity of their
corresponding therapy sequences. In the following, we first describe a similarity measure for therapy
sequences and then present the details of the clustering.
Algorithm 1: History distribution matching method
1. Cluster the training samples by using the pairwise dissimilarities of their corresponding
therapy sequences.
2. For each (target) cluster:
? Compute sample weights that match the distribution of all available training
samples to the distribution of samples in the target cluster.
? Train a sample-weighted logistic regression model using the sample weights
computed in the previous distribution matching step.
Similarity of therapy sequences. In order to quantify the pairwise similarity of therapy sequences
we use a slightly modified version of the alignment similarity measure introduced in [5]. It adapts
sequence alignment techniques [13] to the problem of aligning therapy sequences by considering the
specific therapies given to a patient, their respective resistance-relevant mutations, the order in which
they were applied and the length of the therapy history. The alphabet used for the therapy sequence
alignment comprises all distinct drug combinations making up the clinical data set. The pairwise
similarities between the different drug combinations are quantified with the resistance mutations
kernel [5], which uses the table of resistance-associated mutations of each drug afforded by the
International AIDS society [8]. First, binary vectors indicating resistance-relevant mutations for the
set of drugs occurring in a combination are calculated for each therapy. Then, the similarity score
of two therapies of interest is computed as normalized inner product between their corresponding
resistance mutation vectors. In this way, the therapy similarity also accounts for the similarity of the
genetic fingerprint of the potential latent virus populations of the compared therapies. Each therapy
sequence ends with the current (most recent) therapy, the one that determines the label of the sample, and the sequence alignment is adapted such that the most recent therapies are always matched.
Therefore, it also accounts for the problem of uneven representation of the different therapies in the
clinical data. It has one parameter that specifies the linear gap cost penalty.
For the history distribution matching method, we modified the alignment similarity kernel described
in the paragraph above such that it also takes the importance of the different resistance-relevant mutations into account. This is achieved by updating the resistance mutations kernel, where instead of
using binary vectors that indicate the occurrence of a set of resistance-relevant mutations, we use
vectors that indicate their importance. If two or more drugs from a certain drug group that comprise a target therapy share a resistance mutation, then we consider its maximum importance score.
Importance scores for the resistance-relevant mutations are derived from in-vivo experiments and
can be obtained from the Stanford University HIV Drug Resistance Database [12]. Furthermore, we
want to keep the cluster similarity measure parameter-free, such that in the process of model selection the clustering step (Step 1 in Algorithm 1) is decoupled from Step 2 and is computed only once. This is achieved by computing the alignments with zero gap costs and ensures a time-efficient model selection procedure. However, in this case only the similarities of the matched therapies comprising
the two compared therapy sequences contribute to the similarity score and thus the differing lengths
of the therapy sequences are not accounted for. Having a clustering similarity measure that addresses
the differing therapy lengths is important for tackling the uneven sample representation with respect
to the level of therapy experience. In order to achieve this we normalize each pairwise similarity
score with the length of the longer therapy sequence. This yields pairwise similarity values in the
interval [0, 1] which can easily be converted to dissimilarity values in the same range by subtracting
them from 1.
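To fix ideas, here is a minimal Python sketch of one way such a normalized, zero-gap-cost similarity could be realized (function and variable names are ours, and the published alignment kernel of [5] may differ in detail; with zero gap costs the alignment reduces to a weighted longest-common-subsequence recursion, and the most recent therapies are matched by construction):

import numpy as np

def mutation_kernel(u, v):
    # Normalized inner product of resistance-mutation importance vectors
    # (u, v are numpy arrays derived from the drugs of two therapies).
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / n if n > 0 else 0.0

def seq_similarity(seq_a, seq_b):
    # seq_a, seq_b: lists of mutation vectors, most recent therapy last.
    a, b = seq_a[:-1], seq_b[:-1]        # histories without the current therapy
    s = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):       # weighted LCS-style recursion
        for j in range(1, len(b) + 1):
            s[i, j] = max(s[i - 1, j], s[i, j - 1],
                          s[i - 1, j - 1] + mutation_kernel(a[i - 1], b[j - 1]))
    # The current (most recent) therapies are always matched.
    score = s[-1, -1] + mutation_kernel(seq_a[-1], seq_b[-1])
    return score / max(len(seq_a), len(seq_b))   # length normalization

def seq_dissimilarity(seq_a, seq_b):
    return 1.0 - seq_similarity(seq_a, seq_b)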
Clustering. Once we have a measure of dissimilarity of therapy sequences, we cluster our data
using the most popular version of K-medoids clustering [7], referred to as partitioning around
medoids (PAM) [9]. The main reason why we choose this approach instead of the simpler K-means
clustering [7] is that it can use any precomputed dissimilarity matrix. We select the number of
clusters with the silhouette validation technique [17], which uses the so-called silhouette value to
assess the quality of the clustering and select the optimal number of clusters.
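A sketch of this step with off-the-shelf tools (ours; it assumes the third-party scikit-learn-extra package for PAM, which the original work need not have used):

import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # PAM-style K-medoids

def cluster_histories(diss, k_range=range(2, 11)):
    # diss: precomputed (n x n) dissimilarity matrix of therapy sequences.
    best = (-1.0, None, None)
    for k in k_range:
        km = KMedoids(n_clusters=k, metric="precomputed",
                      method="pam", random_state=0).fit(diss)
        sil = silhouette_score(diss, km.labels_, metric="precomputed")
        if sil > best[0]:
            best = (sil, k, km)
    sil, k, km = best
    return k, km.labels_, km.medoid_indices_   # chosen k, labels, medoid rows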
3.2 Cluster distribution matching
The clustering step of our method groups the training data into different bins based on their therapy
sequences. However, the complete treatment history is not necessarily available for all patients in
our clinical data set. Therefore, by restricting the prediction model for a target sample only to the
data from its corresponding cluster, the model might ignore relevant information from the other
clusters. The approach we use to deal with this issue is inspired by the multi-task learning with
distribution matching method introduced in [2].
In our current problem setting, the goal is to train a prediction model fc : x ? y for each cluster
c of similar treatment sequences, where x denotes the input features and y denotes the label. The
straightforward approach to achieve this is to train a prediction model by using only the samples
in cluster c. However, since the available treatment history for some samples might be incomplete,
totally excluding the samples from all other clusters (6= c) ignores relevant information about the
model fc . Furthermore, the cluster-specific tasks are related and the samples from the other clusters
? especially those close to the cluster boundaries of cluster c ? also carry valuable information
for the model fc . Therefore, we use a multi-task learning approach where a separate model is
trained for each cluster by not only using the training samples from the target cluster, but also the
available training samples from the remaining clusters with appropriate sample-specific weights.
These weights are computed by matching the distribution of all samples to the distribution of the
samples of the target cluster and they thereby reflect the relevance of each sample for the target
cluster. In this way, the model for the target cluster uses information from the input features to
extract relevant knowledge from the other clusters.
More formally, let D = {(x1, y1, c1), . . . , (xm, ym, cm)} denote the training data, where ci denotes the cluster associated with the training sample (xi, yi) in the history-based clustering. The training data are governed by the joint training distribution Σc p(c)p(x, y|c). The most accurate model for a
given target cluster t minimizes the loss with respect to the conditional probability p(x, y|t) referred
to as the target distribution. In [2] it is shown that:
    E_(x,y)∼p(x,y|t)[ℓ(ft(x))] = E_(x,y)∼Σc p(c)p(x,y|c)[rt(x, y) ℓ(ft(x))],    (1)

where:

    rt(x, y) = p(x, y|t) / Σc p(c)p(x, y|c).    (2)
In other words, by using sample-specific weights rt(x, y) that match the training distribution Σc p(c)p(x, y|c) to the target distribution p(x, y|t), we can minimize the expected loss with respect to the target distribution by minimizing the expected loss with respect to the training distribution.
The weighted training data are governed by the correct target distribution p(x, y|t) and the sample
weights reflect the relevance of each training sample for the target model. The weights are derived
based on information from the input features. If a sample was assigned to the wrong cluster due to
the incompleteness of the treatment history, by matching the training to the target distribution it can
still receive high sample weight for the model of its correct cluster.
In order to avoid the estimation of the high-dimensional densities p(x, y|t) and p(x, y|c) in Equation 2, we follow the example of [3, 2] and compute the sample weights rt (x, y) using a discriminative model for a conditional distribution with a single variable:
    rt(x, y) = p(t|x, y) / p(t),    (3)
where p(t|x, y) quantifies the probability that a sample (x, y) randomly drawn from the training set
D belongs to the target cluster t. p(t) is the prior probability which can easily be estimated from the
training data.
As in [2], p(t|x, y) is modeled for all clusters jointly using a kernelized version of multi-class logistic
regression with a feature mapping that separates the effective from the ineffective therapies:
    Φ(x, y) = [ δ(y, +1) x ; δ(y, −1) x ],    (4)
where δ is the Kronecker delta (δ(a, b) = 1 if a = b, and δ(a, b) = 0 if a ≠ b). In this way, we can
train the cluster-discriminative models for the effective and the ineffective therapies independently,
and thus, by proper time-oriented model selection address the increasing imbalance in their representation over time. Formally, the multi-class model is trained by maximizing the log-likelihood
over the training data using a Gaussian prior on the model parameters:
    arg max_v Σ_(xi, yi, ci) ∈ D log(p(ci | xi, yi, v)) + vᵀ Σ⁻¹ v,
where v are the model parameters (a concatenation of the cluster-specific parameters vc), and Σ is
the covariance matrix of the Gaussian prior.
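A compact sketch of the weight estimation (ours; it assumes cluster ids 0..K−1 and substitutes scikit-learn's L2-penalized multi-class logistic regression, corresponding to the Gaussian prior, for the kernelized variant described above):

import numpy as np
from sklearn.linear_model import LogisticRegression

def distribution_matching_weights(X, y, clusters):
    # Estimate r_t(x, y) = p(t | x, y) / p(t) (equation 3) for every sample
    # and every target cluster t; y in {-1, +1}, clusters in {0, ..., K-1}.
    pos = (y == 1).astype(float)[:, None]
    phi = np.hstack([pos * X, (1.0 - pos) * X])     # feature map of equation 4
    clf = LogisticRegression(max_iter=1000).fit(phi, clusters)
    p_t_given_xy = clf.predict_proba(phi)           # shape (n, K)
    p_t = np.bincount(clusters) / len(clusters)     # prior p(t)
    return p_t_given_xy / p_t                       # weights[i, t] = r_t(x_i, y_i)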
3.3 Sample-weighted logistic regression method
As described in the previous subsection, we use a multi-task distribution matching procedure to
obtain sample-specific weights for each cluster, which reflect the relevance of each sample for the
corresponding cluster. Then, a separate logistic regression model that uses all available training data
with the proper sample weights is trained for each cluster. More formally, let t denote the target
cluster and let rt (x, y) denote the weight of the sample (x, y) for the cluster t. Then, the prediction
model for the cluster t that minimizes the loss over the weighted training samples is given by:
    arg min_wt (1/|D|) Σ_(xi, yi) ∈ D rt(xi, yi)^σ ℓ(ft(xi), yi) + λ wtᵀ wt,    (5)
where wt are the model parameters, λ is the regularization parameter, σ is a smoothing parameter for the sample-specific weights, and ℓ(f(x, wt), y) = ln(1 + exp(−y wtᵀ x)) is the loss of linear logistic regression.
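In scikit-learn, the resulting cluster-specific training step can be sketched as follows (sigma and lam stand for the smoothing and regularization parameters of equation 5 under our naming):

from sklearn.linear_model import LogisticRegression

def fit_cluster_model(X, y, r_t, sigma=0.5, lam=1.0):
    # Sample-weighted logistic regression for one target cluster (equation 5);
    # r_t are the distribution-matching weights for this cluster, and
    # scikit-learn's C corresponds to 1 / lam.
    model = LogisticRegression(C=1.0 / lam, max_iter=1000)
    model.fit(X, y, sample_weight=r_t ** sigma)
    return model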
All in all, our method first clusters the training data based on their corresponding therapy sequences
and then learns a separate model for each cluster by using relevant data from the remaining clusters.
By doing so it tackles the problems of the different treatment backgrounds of the samples and the
uneven sample representation in the clinical data sets with respect to the level of therapy experience.
Since the alignment kernel considers the most recent therapy and the drugs comprising this therapy
are encoded as a part of the input feature space, our method also deals with the differing therapy
abundances in the clinical data sets. Once we have the models for each cluster, we use them to
predict the label of a given test sample x as follows: First of all, we use the therapy sequence of the
target sample to calculate its dissimilarity to the therapy sequences of each of the cluster centers.
Then, we assign the sample x to the cluster c with the closest cluster center. Finally, we use the
logistic regression model trained for cluster c to predict the label y for the target sample x.
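This prediction procedure can be sketched as a nearest-medoid lookup followed by the corresponding cluster model (reusing seq_dissimilarity from the alignment sketch above; all names are ours):

import numpy as np

def predict_outcome(sample_seq, x_features, medoid_seqs, cluster_models):
    # Assign the test sample to the cluster whose medoid therapy sequence is
    # most similar, then apply that cluster's weighted logistic regression.
    d = [seq_dissimilarity(sample_seq, m) for m in medoid_seqs]
    c = int(np.argmin(d))
    return cluster_models[c].predict(x_features.reshape(1, -1))[0]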
4 Experiments and results

4.1 Data
The clinical data for our model are extracted from the EuResist [16] database that contains information on 93014 antiretroviral therapies administered to 18325 HIV (subtype B) patients from several
countries in the period from 1988 to 2008. The information employed by our model is extracted
from these data: the viral sequence g assigned to each therapy sample is obtained shortly before
the respective therapy was started (up to 90 days before); the individual drugs of the currently administered therapy z; all available (known) therapies administered to each patient h, r(z); and the
response to a given therapy quantified with a label y (success or failure) based on the virus load values (copies of viral RNA per ml blood plasma) measured during its course (for more details see [4]
and the Supplementary material). Finally, our training set comprises 6537 labeled therapy samples
from 690 distinct therapy combinations.
4.2 Validation setting
Time-oriented validation scenario. The trends of treating HIV patients change over time as a
result of the gathered practical experience with the drugs and the introduction of new antiretroviral
drugs. In order to account for this phenomenon we use the time-oriented validation scenario [4]
which makes a time-oriented split when selecting the training and the test set. First, we order all
available training samples by their corresponding therapy starting dates. We then make a timeoriented split by selecting the most recent 20% of the samples as the test set and the rest as the
training set. For the model selection we split the training set further in a similar manner. We take
the most recent 25% of the training set for selecting the best model parameters (see Supplementary
material) and refer to this set as tuning set. In this way, our models are trained on the data from the
more distant past, while their performance is measured on the data from the more recent past. This
scenario is more realistic than other scenarios since it captures how a given model would perform on
the recent trends of combining the drugs. The details of the data sets resulting from this scenario are
given in Table 1, where one can also observe the large gap between the abundances of the effective
and ineffective therapies, especially for the most recent data.
Table 1: Details on the data sets generated in the time-oriented validation scenario.

Data set     Sample count    Success rate
training     3596            69%
tuning       1634            79%
test         1307            83%
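A sketch of this split (ours; it assumes one therapy start date per sample):

import numpy as np

def time_oriented_split(start_dates):
    # Order samples by therapy start date: the most recent 20% become the
    # test set, and the most recent 25% of the remainder the tuning set.
    order = np.argsort(start_dates)
    n_test = int(round(0.20 * len(order)))
    train_all, test = order[:-n_test], order[-n_test:]
    n_tune = int(round(0.25 * len(train_all)))
    train, tune = train_all[:-n_tune], train_all[-n_tune:]
    return train, tune, test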
The search for an effective HIV therapy is particularly challenging for patients in the mid to late
stages of antiretroviral therapy when the number of therapy options is reduced and effective therapies are increasingly hard to find because of the accumulated drug resistance mutations from all
previous therapies. The therapy samples gathered in the HIV clinical data sets are associated with
patients whose treatment histories differ in length: while some patients receive their first antiretroviral treatment, others are heavily pretreated. These different sample groups, from treatment-naïve to
heavily pretreated, are represented unevenly in the HIV clinical data with fewer samples associated
to therapy-experienced patients (see Figure 1 (a) in the Supplementary material). In order to assess
the ability of a given target model to address this problem, we group the therapy samples in the test
set into different bins based on the number of therapies administered prior to the therapy of interest
(the current therapy; see Table 1 in the Supplementary material). Then, we assess the quality of a
given target model by reporting its performance for each of the bins. In this way we can assess the
predictive power of the models in dependence on the level of therapy experience.
Another important property of an HIV model is its ability to address the uneven representation of
the different therapies (see Figure 1 (b) in the Supplementary material). In order to achieve this we
group the therapies in the test set based on the number of samples they have in the training set, and
then we measure the model performance on each of the groups. The details on the sample counts in
each of the bins are given in Table 2 of the Supplementary material. In this manner we can evaluate
the performance of the models for the rare therapies. Due to the lack of data and practical experience
for the rare HIV combination therapies, predicting their efficiency is more challenging compared to
estimating the efficiency of the frequent therapies.
Reference methods. In our computational experiments we compare the results of our history distribution matching approach, denoted as transfer history clustering validation scenario, to those of
three reference approaches, namely the one-for-all validation scenario, the history-clustering validation scenario, and the therapy-specific validation scenario. The one-for-all method mimics the
most common approaches in the field [16, 1, 19] that train a single model (here logistic regression)
on all available therapy samples in the data set. The information on the individual drugs comprising
the target (most recent) therapy and the drugs administered in all its available preceding therapies
are encoded in a binary vector and supplied as input features. The history-clustering method implements a modified version of Algorithm 1 that skips the distribution matching step. In other words, a
separate model is trained for each cluster by using only the data from the respective cluster. We introduce this approach to assess the importance of the distribution matching step. The therapy-specific
scenario implements the drugs kernel therapy similarity model described in [4]. It represents the
approaches that train a separate model for each combination therapy by using not only the samples from the target therapy but also the available samples from similar therapies with appropriate
sample-importance weights.
Performance measures. The performance of all considered methods is assessed by reporting their
corresponding accuracies (ACC) and AUCs (Area Under the ROC Curve). The accuracy reflects the
ability of the methods to make correct predictions, i.e., to discriminate between successful and failing HIV combination therapies. With the AUC we are able to assess the quality of the ranking based
on the probability of therapy success. For this reason, we carry out the model selection based on
both accuracy and AUC and then use accuracy or AUC, respectively, to assess the model performance. In order to compare the performance of two methods on a separate test set, the significance
of the difference of two accuracies as well as their standard deviations are calculated based on a
paired t-test. The standard deviations of the AUC values and the significance of the difference of
two AUCs used for the pairwise method comparison are estimated as described in [6].
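For reference, the widely used Hanley-McNeil standard error of a single AUC can be sketched as follows (ours; the correlated-case comparison of [6] additionally requires a correlation term between the two AUCs, whose tabulated estimation is omitted here):

import numpy as np

def hanley_mcneil_se(auc, n_pos, n_neg):
    # Standard error of an AUC estimate (Hanley & McNeil's formula);
    # n_pos/n_neg are the numbers of positive and negative test samples.
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    var = (auc * (1.0 - auc)
           + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return np.sqrt(var)

def auc_diff_z(auc1, se1, auc2, se2, corr=0.0):
    # z statistic for the difference of two AUCs; corr > 0 accounts for
    # both AUCs being computed on the same cases, as in [6].
    return (auc1 - auc2) / np.sqrt(se1 ** 2 + se2 ** 2 - 2.0 * corr * se1 * se2)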
4.3 Experimental results
According to the results from the silhouette validation technique [17] displayed in Figure 2 in the
Supplementary material, the first clustering step of Algorithm 1 divides our training data into two
clusters ? one comprises the samples with longer therapy sequences (with average treatment history
length of 5.507 therapies), and the other one those with shorter therapy sequences (with average
treatment history length of 0.308 therapies). Thus, the transfer history distribution matching method
trains two models, one for each cluster. The clustering results are depicted in Figure 3 in the Supplementary material. In what follows, we first present the results of the time-oriented validation
scenario stratified for the length of treatment history, followed by the results stratified for the abundance of the different therapies. In both cases we report both the accuracies and the AUCs for all
considered methods.
The computational results for the transfer history method and the three reference methods stratified
for the length of the therapy history are summarized in Figure 1, where (a) depicts the accuracies,
and (b) depicts the AUCs. For samples with a small number (≤ 5) of previously administered therapies, i.e., with short treatment histories, all considered models have comparable accuracies. For test samples from patients with longer (> 5) treatment histories, the transfer history clustering approach achieves significantly better accuracy (p-values ≤ 0.004) compared to those of the reference
methods. According to the paired difference test described in [6], the transfer history approach has
significantly better AUC performance for test samples with longer (> 5) treatment histories compared to the one-for-all (p-value = 0.043) and the history-clustering (p-value = 0.044) reference
methods. It also has better AUC performance compared to that of the therapy-specific model, yet
this improvement is not significant (p-value = 0.253). Furthermore, the transfer history approach
achieves better AUCs for test samples with less than five previously administered therapies compared to all reference methods. However, the improvement is only significant for the one-for-all
method (p-value = 0.007). The corresponding p-values for the history-clustering method and the
therapy-specific method are 0.080 and 0.178, respectively.
[Figure 1 plots: (a) ACC and (b) AUC versus the number of preceding treatments (groups 0–5 and >5); legend: transfer history clustering, history clustering, therapy specific, one-for-all.]
Figure 1: Accuracy (a) and AUC (b) results of the different models obtained on the test set in the
time-oriented validation scenario. Error bars indicate the standard deviations of each model. The
test samples are grouped based on their corresponding number of known previous therapies.
The experimental results, stratified for the abundance of the therapies and summarizing the accuracies
and AUCs for all considered methods, are depicted in Figure 2 (a) and (b), respectively. As can
be observed from Figure 2 (a), all considered methods have comparable accuracies for the test
therapies with more than seven samples. The transfer history method achieves significantly better
accuracy (p-values ≤ 0.0001) compared to all reference methods for the test therapies with few (0–7) available training samples. Considering the AUC results in Figure 2 (b), the transfer history approach outperforms all the reference models for the rare test therapies (with 0–7 training samples) with estimated p-values of 0.05 for the one-for-all, 0.042 for the therapy-specific and 0.1 for
the history-clustering model. The one-for-all and the therapy-specific models have slightly better
AUC performance compared to the transfer history and the history-clustering approaches for test
therapies with 8–30 available training samples. However, according to the paired difference test
described in [6], the improvements are not significant with p-values larger than 0.141 for all pairwise comparisons. Moreover, considering the test therapies with more than 30 training samples the
transfer history approach significantly outperforms the one-for-all approach with estimated p-value
of 0.037. It also has slightly better AUC performance than the history-clustering model and the
therapy-specific model, however these improvements are not significant with estimated p-values of
0.064 and 0.136, respectively.
[Figure 2 plots: (a) ACC and (b) AUC versus the number of available training samples (groups 0–7, 8–30, >30); legend: transfer history clustering, history clustering, therapy specific, one-for-all.]
Figure 2: Accuracy (a) and AUC (b) results of the different models obtained on the test set in the
time-oriented validation scenario. Error bars indicate the standard deviations of each model. The
test samples are grouped based on the number of available training examples for their corresponding
therapy combinations.
5 Conclusion
This paper presents an approach that simultaneously considers several problems affecting the available HIV clinical data sets: the different treatment backgrounds of the samples, the uneven representation of the different levels of therapy experience, the missing treatment history information,
the uneven therapy representation and the unbalanced therapy outcome representation especially
pronounced in recently collected samples. The transfer history clustering model has its prime advantage for samples stemming from patients with long treatment histories and for samples associated
with rare therapies. In particular, for these two groups of test samples it achieves significantly better
accuracy than all considered reference approaches. Moreover, the AUC performance of our method
for these test samples is also better than all reference methods and significantly better compared
to the one-for-all method. For the remaining test samples both the accuracy and the AUC performance of the transfer history method are at least as good as the corresponding performances of all
considered reference methods.
Acknowledgments
We gratefully acknowledge the EuResist EEIG for providing the clinical data. We thank Thomas Lengauer
for the helpful comments and for supporting this work. We also thank Levi Valgaerts for the constructive
suggestions. This work was funded by the Cluster of Excellence (Multimodal Computing and Interaction).
References
[1] A. Altmann, M. Däumer, N. Beerenwinkel, E. Peres, Y. Schülter, A. Büch, S. Rhee, A. Sönnerborg, W.J. Fessel, R.W. Shafer, M. Zazzi, R. Kaiser, and T. Lengauer. Predicting response to combination antiretroviral therapy: retrospective validation of geno2pheno-THEO on a large clinical database. Journal of Infectious Diseases, 199:999–1006, 2009.
[2] S. Bickel, J. Bogojeska, T. Lengauer, and T. Scheffer. Multi-task learning for HIV therapy screening. In Proceedings of the International Conference on Machine Learning, 2008.
[3] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the International Conference on Machine Learning, 2007.
[4] J. Bogojeska, S. Bickel, A. Altmann, and T. Lengauer. Dealing with sparse data in predicting outcomes of HIV combination therapies. Bioinformatics, 26:2085–2092, 2010.
[5] J. Bogojeska, D. Stöckel, M. Zazzi, R. Kaiser, F. Incardona, M. Rosen-Zvi, and T. Lengauer. History-alignment models for bias-aware prediction of virological response to HIV combination therapy. Submitted, 2011.
[6] J. Hanley and B. McNeil. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148:839–843, 1983.
[7] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2009.
[8] V.A. Johnson, F. Brun-Vézinet, B. Clotet, H.F. Günthard, D.R. Kuritzkes, D. Pillay, J.M. Schapiro, and D.D. Richman. Update of the drug resistance mutations in HIV-1: December 2008. Topics in HIV Medicine, 16:138–145, 2008.
[9] L. Kaufman and P.J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley and Sons, Inc., 1990.
[10] B. Larder, D. Wang, A. Revell, J. Montaner, R. Harrigan, F. De Wolf, J. Lange, S. Wegner, L. Ruiz, M.J. Pérez-Elías, S. Emery, J. Gatell, A. D'Arminio Monforte, C. Torti, M. Zazzi, and C. Lane. The development of artificial neural networks to predict virological response to combination HIV therapy. Antiviral Therapy, 12:15–24, 2007.
[11] R.H. Lathrop and M.J. Pazzani. Combinatorial optimization in rapidly mutating drug-resistant viruses. Journal of Combinatorial Optimization, 3:301–320, 1999.
[12] T.F. Liu and R.W. Shafer. Web resources for HIV type 1 genotypic-resistance test interpretation. Clinical Infectious Diseases, 42, 2006.
[13] S. Needleman and C. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443–453, 1970.
[14] D.A. Ouattara. Mathematical analysis of the HIV-1 infection: parameter estimation, therapies effectiveness and therapeutical failures. In Engineering in Medicine and Biology Society, 2005.
[15] M. Prosperi, A. Altmann, M. Rosen-Zvi, E. Aharoni, G. Borgulya, F. Bazso, A. Sönnerborg, E. Schülter, D. Struck, G. Ulivi, A. Vandamme, J. Vercauteren, and M. Zazzi. Investigation of expert rule bases, logistic regression, and non-linear machine learning techniques for predicting response to antiretroviral treatment. Antiviral Therapy, 14:433–442, 2009.
[16] M. Rosen-Zvi, A. Altmann, M. Prosperi, E. Aharoni, H. Neuvirth, A. Sönnerborg, E. Schülter, D. Struck, Y. Peres, F. Incardona, R. Kaiser, M. Zazzi, and T. Lengauer. Selecting anti-HIV therapies based on a variety of genomic and clinical factors. Proceedings of the ISMB, 2008.
[17] P.J. Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987.
[18] UNAIDS/WHO. Report on the global AIDS epidemic: 2010. 2010.
[19] D. Wang, B.A. Larder, A. Revell, R. Harrigan, and J. Montaner. A neural network model using clinical cohort data accurately predicts virological response and identifies regimens with increased probability of success in treatment failures. Antiviral Therapy, 8:U99–U99, 2003.
9
Variance Penalizing AdaBoost
Tony Jebara
Department of Computer Science
Columbia University, New York NY
[email protected]
Pannagadatta K. Shivaswamy
Department of Computer Science
Cornell University, Ithaca NY
[email protected]
Abstract
This paper proposes a novel boosting algorithm called VadaBoost which is motivated by recent empirical Bernstein bounds. VadaBoost iteratively minimizes a
cost function that balances the sample mean and the sample variance of the exponential loss. Each step of the proposed algorithm minimizes the cost efficiently
by providing weighted data to a weak learner rather than requiring a brute force
evaluation of all possible weak learners. Thus, the proposed algorithm solves a
key limitation of previous empirical Bernstein boosting methods which required
brute force enumeration of all possible weak learners. Experimental results confirm that the new algorithm achieves the performance improvements of EBBoost
yet goes beyond decision stumps to handle any weak learner. Significant performance gains are obtained over AdaBoost for arbitrary weak learners including
decision trees (CART).
1 Introduction
Many machine learning algorithms implement empirical risk minimization or a regularized variant
of it. For example, the popular AdaBoost [4] algorithm minimizes exponential loss on the training
examples. Similarly, the support vector machine [11] minimizes hinge loss on the training examples.
The convexity of these losses is helpful for computational as well as generalization reasons [2].
The goal of most learning problems, however, is not to obtain a function that performs well on
training data, but rather to estimate a function (using training data) that performs well on future
unseen test data. Therefore, empirical risk minimization on the training set is often performed
while regularizing the complexity of the function classes being explored. The rationale behind this
regularization approach is that it ensures that the empirical risk converges (uniformly) to the true
unknown risk. Various concentration inequalities formalize the rate of convergence in terms of the
function class complexity and the number of samples.
A key tool in obtaining such concentration inequalities is Hoeffding's inequality, which relates the empirical mean of a bounded random variable to its true mean. Bernstein's and Bennett's inequalities relate the true mean of a random variable to the empirical mean but also incorporate the true variance of the random variable. If the true variance of a random variable is small, these bounds can be significantly tighter than Hoeffding's bound. Recently, there have been empirical counterparts of Bernstein's inequality [1, 5]; these bounds incorporate the empirical variance of a random variable rather than its true variance. The advantage of these bounds is that the quantities they involve are empirical. Previously, these bounds have been applied in sampling procedures [6] and in multi-armed bandit problems [1]. An alternative to empirical risk minimization, called sample variance penalization [5], has been proposed and is motivated by empirical Bernstein bounds.
A new boosting algorithm is proposed in this paper which implements sample variance penalization. The algorithm minimizes the empirical risk on the training set as well as the empirical variance. The two quantities (the risk and the variance) are traded off through a scalar parameter. Moreover, the
algorithm proposed in this article does not require exhaustive enumeration of the weak learners
(unlike an earlier algorithm by [10]).
Assume that a training set (X_i, y_i)_{i=1}^n is provided, where X_i ∈ X and y_i ∈ {±1} are drawn independently and identically distributed (iid) from a fixed but unknown distribution D. The goal is to learn a classifier, i.e., a function f : X → {±1}, that performs well on test examples drawn from the same distribution D. In the rest of this article, G : X → {±1} denotes the so-called weak learner. The notation G^s denotes the weak learner in a particular iteration s. Further, the two index sets I_s and J_s, respectively, denote examples that the weak learner G^s correctly classified and misclassified, i.e., I_s := {i | G^s(X_i) = y_i} and J_s := {j | G^s(X_j) ≠ y_j}.
Algorithm 1 AdaBoost
Require: (X_i, y_i)_{i=1}^n, and weak learners H
Initialize the weights: w_i ← 1/n for i = 1, . . . , n; initialize f to predict zero on all inputs.
for s ← 1 to S do
  Estimate a weak learner G^s(·) from training examples weighted by (w_i)_{i=1}^n.
  α_s = (1/2) log( Σ_{i: G^s(X_i) = y_i} w_i / Σ_{j: G^s(X_j) ≠ y_j} w_j )
  if α_s ≤ 0 then break end if
  f(·) ← f(·) + α_s G^s(·)
  w_i ← w_i exp(−y_i G^s(X_i) α_s) / Z_s, where Z_s is such that Σ_{i=1}^n w_i = 1.
end for
Algorithm 2 VadaBoost
Require: (X_i, y_i)_{i=1}^n, scalar parameter 0 ≤ λ ≤ 1, and weak learners H
Initialize the weights: w_i ← 1/n for i = 1, . . . , n; initialize f to predict zero on all inputs.
for s ← 1 to S do
  u_i ← λ n w_i² + (1 − λ) w_i
  Estimate a weak learner G^s(·) from training examples weighted by (u_i)_{i=1}^n.
  α_s = (1/4) log( Σ_{i: G^s(X_i) = y_i} u_i / Σ_{j: G^s(X_j) ≠ y_j} u_j )
  if α_s ≤ 0 then break end if
  f(·) ← f(·) + α_s G^s(·)
  w_i ← w_i exp(−y_i G^s(X_i) α_s) / Z_s, where Z_s is such that Σ_{i=1}^n w_i = 1.
end for
2 Algorithms
In this section, we briefly discuss AdaBoost [4] and then propose a new algorithm called VadaBoost. The derivation of VadaBoost will be provided in detail in the next section.
AdaBoost (Algorithm 1) assigns a weight w_i to each training example. In each step of AdaBoost, a weak learner G^s(·) is obtained on the weighted examples and a weight α_s is assigned to it. Thus, AdaBoost iteratively builds Σ_{s=1}^S α_s G^s(·). If a training example is correctly classified, its weight is exponentially decreased; if it is misclassified, its weight is exponentially increased. The process is repeated until a stopping criterion is met. AdaBoost essentially performs empirical risk minimization, min_{f∈F} (1/n) Σ_{i=1}^n e^{−y_i f(X_i)}, by greedily constructing the function f(·) via Σ_{s=1}^S α_s G^s(·).
Recently an alternative to empirical risk minimization has been proposed. This new criterion, known as sample variance penalization [5], trades off the empirical risk with the empirical variance:

  argmin_{f∈F} (1/n) Σ_{i=1}^n l(f(X_i), y_i) + τ √( V̂[l(f(X), y)] / n ),    (1)

where τ ≥ 0 explores the trade-off between the two quantities. The motivation for sample variance penalization comes from the following theorem [5]:
Theorem 1 Let (X_i, y_i)_{i=1}^n be drawn iid from a distribution D. Let F be a class of functions f : X → R. Then, for a loss l : R × Y → [0, 1] and any δ > 0, with probability at least 1 − δ, for all f ∈ F:

  E[l(f(X), y)] ≤ (1/n) Σ_{i=1}^n l(f(X_i), y_i) + √( 18 V̂[l(f(X), y)] ln(M(n)/δ) / (n − 1) ) + 15 ln(M(n)/δ) / n,    (2)

where M(n) is a complexity measure.
From the above uniform convergence result, it can be argued that future loss can be minimized by minimizing the right hand side of the bound on training examples. Since the variance V̂[l(f(X), y)] has a multiplicative factor involving M(n), δ and n, for a given problem it is difficult to specify the relative importance between empirical risk and empirical variance a priori. Hence, sample variance penalization (1) necessarily involves a trade-off parameter τ.
Empirical risk minimization or sample variance penalization on the 0−1 loss is a hard problem; this problem is often circumvented by minimizing a convex upper bound on the 0−1 loss. In this paper, we consider the exponential loss l(f(X), y) := e^{−y f(X)}. With the above loss, it was shown by [10] that sample variance penalization is equivalent to minimizing the following cost:

  ( Σ_{i=1}^n e^{−y_i f(X_i)} )² + λ ( n Σ_{i=1}^n e^{−2 y_i f(X_i)} − ( Σ_{i=1}^n e^{−y_i f(X_i)} )² ).    (3)
Theorem 1 requires that the loss function be bounded. Even though the exponential loss is unbounded, boosting is typically performed only for a finite number of iterations in most practical
applications. Moreover, since weak learners typically perform only slightly better than random
guessing, each α_s in AdaBoost (or in VadaBoost) is typically small, thus limiting the range of the
function learned. Furthermore, experiments will confirm that sample variance penalization results
in a significant empirical performance improvement over empirical risk minimization.
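For concreteness, cost (3) reduces to a few lines of numpy. The sketch below is our own illustration (not the authors' code); it takes the vector of ensemble scores F with F_i = f(X_i) and labels y in {−1, +1}:

import numpy as np

def svp_exponential_cost(F, y, lam):
    """Cost (3): (sum_i e^{-y_i F_i})^2 + lam * (n * sum_i e^{-2 y_i F_i} - (sum_i e^{-y_i F_i})^2)."""
    n = len(y)
    m = np.exp(-y * F)              # e^{-y_i f(X_i)} for each example
    first = m.sum() ** 2            # squared empirical-risk term
    return first + lam * (n * (m ** 2).sum() - first)   # variance penalty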
Our proposed algorithm, called VadaBoost¹, is described in Algorithm 2. VadaBoost performs sample variance penalization iteratively: it minimizes the cost (3) one step at a time. Clearly, VadaBoost shares the simplicity and ease of implementation found in AdaBoost.
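To illustrate this simplicity, here is a minimal Python sketch of Algorithm 2 (our own rendering, not the authors' implementation). The weak_learner_factory argument is a hypothetical stand-in for any learner exposing an sklearn-style fit(X, y, sample_weight)/predict(X) interface with labels in {−1, +1}, e.g. a decision stump:

import numpy as np

def vadaboost(X, y, weak_learner_factory, lam=0.5, n_rounds=100):
    """Returns a list of (alpha_s, G_s) pairs; lam is the trade-off in [0, 1]."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # example weights, kept normalized
    ensemble = []
    for _ in range(n_rounds):
        u = lam * n * w ** 2 + (1.0 - lam) * w   # variance-penalized weights
        G = weak_learner_factory()
        G.fit(X, y, sample_weight=u)
        pred = G.predict(X)
        correct = pred == y
        u_plus, u_minus = u[correct].sum(), u[~correct].sum()
        if u_minus == 0.0:                       # perfect weak learner; stop
            ensemble.append((1.0, G))            # (finite alpha here is a heuristic)
            break
        alpha = 0.25 * np.log(u_plus / u_minus)
        if alpha <= 0.0:                         # no better than weighted chance
            break
        ensemble.append((alpha, G))
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()                             # renormalize so sum(w) = 1
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * G.predict(X) for a, G in ensemble))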
3 Derivation of VadaBoost
In the s-th iteration, our objective is to choose a weak learner G^s and a weight α_s such that Σ_{t=1}^{s−1} α_t G^t(·) + α_s G^s(·) reduces the cost (3). Denote by w_i the quantity e^{−y_i Σ_{t=1}^{s−1} α_t G^t(X_i)} / Z_s. Given a candidate weak learner G^s(·), the cost (3) for the function Σ_{t=1}^{s−1} α_t G^t(·) + α G^s(·) can be expressed as a function of α:

  V(α; w, λ, I, J) := ( Σ_{i∈I} w_i e^{−α} + Σ_{j∈J} w_j e^{α} )² + λ ( n Σ_{i∈I} w_i² e^{−2α} + n Σ_{j∈J} w_j² e^{2α} − ( Σ_{i∈I} w_i e^{−α} + Σ_{j∈J} w_j e^{α} )² ),    (4)
up to a multiplicative factor. In the quantity above, I and J are the two index sets (of correctly classified and incorrectly classified examples) over G^s. Let the vector w, whose i-th component is w_i, denote the current set of weights on the training examples. Here, we have dropped the subscripts/superscripts s for brevity.
Lemma 2 The update of α_s in Algorithm 2 minimizes the cost

  U(α; w, λ, I, J) := ( Σ_{i∈I} (λ n w_i² + (1 − λ) w_i) ) e^{−2α} + ( Σ_{j∈J} (λ n w_j² + (1 − λ) w_j) ) e^{2α}.    (5)

¹The V in VadaBoost emphasizes the fact that Algorithm 2 penalizes the empirical variance.
Proof By obtaining the second derivative of the above expression (with respect to α), it is easy to see that it is convex in α. Thus, setting the derivative with respect to α to zero gives the optimal choice of α as shown in Algorithm 2.
Theorem 3 Assume that 0 ≤ λ ≤ 1 and Σ_{i=1}^n w_i = 1 (i.e., normalized weights). Then V(α; w, λ, I, J) ≤ U(α; w, λ, I, J) and V(0; w, λ, I, J) = U(0; w, λ, I, J). That is, U is an upper bound on V, and the bound is exact at α = 0.
Proof Denoting 1 − λ by λ̄, we have:

V(α; w, λ, I, J)
  = ( Σ_{i∈I} w_i e^{−α} + Σ_{j∈J} w_j e^{α} )² + λ ( n Σ_{i∈I} w_i² e^{−2α} + n Σ_{j∈J} w_j² e^{2α} ) − λ ( Σ_{i∈I} w_i e^{−α} + Σ_{j∈J} w_j e^{α} )²
  = λ̄ ( Σ_{i∈I} w_i e^{−α} + Σ_{j∈J} w_j e^{α} )² + λ ( n Σ_{i∈I} w_i² e^{−2α} + n Σ_{j∈J} w_j² e^{2α} )
  = λ ( n Σ_{i∈I} w_i² e^{−2α} + n Σ_{j∈J} w_j² e^{2α} ) + λ̄ ( (Σ_{i∈I} w_i)² e^{−2α} + (Σ_{j∈J} w_j)² e^{2α} + 2 (Σ_{i∈I} w_i)(Σ_{j∈J} w_j) )
  = λ ( n Σ_{i∈I} w_i² e^{−2α} + n Σ_{j∈J} w_j² e^{2α} ) + λ̄ ( (Σ_{i∈I} w_i)(1 − Σ_{j∈J} w_j) e^{−2α} + (Σ_{j∈J} w_j)(1 − Σ_{i∈I} w_i) e^{2α} + 2 (Σ_{i∈I} w_i)(Σ_{j∈J} w_j) )
  = Σ_{i∈I} (λ n w_i² + λ̄ w_i) e^{−2α} + Σ_{j∈J} (λ n w_j² + λ̄ w_j) e^{2α} + λ̄ (Σ_{i∈I} w_i)(Σ_{j∈J} w_j) ( 2 − e^{2α} − e^{−2α} )
  ≤ Σ_{i∈I} (λ n w_i² + λ̄ w_i) e^{−2α} + Σ_{j∈J} (λ n w_j² + λ̄ w_j) e^{2α}
  = U(α; w, λ, I, J).

On line two, terms were simply regrouped. On line three, the square from line two was expanded. On the next line, we used the fact that Σ_{i∈I} w_i + Σ_{j∈J} w_j = Σ_{i=1}^n w_i = 1. On the fifth line, we once again regrouped terms; the last term in this expression involves e^{2α} + e^{−2α} − 2, which can be written as (e^α − e^{−α})² ≥ 0, so dropping it gives the inequality. When α = 0 this term vanishes. Hence the bound is exact at α = 0.
Corollary 4 VadaBoost monotonically decreases the cost (3).
The above corollary follows from:

  V(α_s; w, λ, I, J) ≤ U(α_s; w, λ, I, J) < U(0; w, λ, I, J) = V(0; w, λ, I, J).

In the above, the first inequality follows from Theorem 3. The second, strict inequality holds because α_s is a minimizer of U by Lemma 2; it is not hard to show from the termination criterion of VadaBoost that U(α_s; w, λ, I, J) is strictly less than U(0; w, λ, I, J). The third equality again follows from Theorem 3. Finally, we note that V(0; w, λ, I, J) merely corresponds to the cost (3) at Σ_{t=1}^{s−1} α_t G^t(·). Thus, we have shown that taking a step α_s decreases the cost (3).
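As a quick numerical sanity check of Theorem 3 (ours, not from the paper), one can verify that U dominates V for random normalized weights and any 0 ≤ λ ≤ 1, with equality at α = 0:

import numpy as np

rng = np.random.default_rng(0)
w = rng.random(10); w /= w.sum()        # normalized weights
mask = rng.random(10) < 0.6             # arbitrary split into I (True) and J
wI, wJ = w[mask], w[~mask]
n, lam = len(w), 0.7

def V(a):
    s = wI.sum() * np.exp(-a) + wJ.sum() * np.exp(a)
    q = n * ((wI ** 2).sum() * np.exp(-2 * a) + (wJ ** 2).sum() * np.exp(2 * a))
    return s ** 2 + lam * (q - s ** 2)

def U(a):
    uI = (lam * n * wI ** 2 + (1 - lam) * wI).sum()
    uJ = (lam * n * wJ ** 2 + (1 - lam) * wJ).sum()
    return uI * np.exp(-2 * a) + uJ * np.exp(2 * a)

for a in np.linspace(-1.0, 1.0, 9):
    assert V(a) <= U(a) + 1e-12
assert abs(V(0.0) - U(0.0)) < 1e-12     # bound is exact at alpha = 0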
Figure 1: Typical upper bound U(α; w, λ, I, J) and actual cost function V(α; w, λ, I, J) values under varying α. The bound is exact at α = 0 and gets closer to the actual function value as λ grows. The left plot shows the bound for λ = 0 and the right plot shows it for λ = 0.9.
We point out that we use a different upper bound in each iteration, since V and U are parameterized by the current weights in the VadaBoost algorithm. Also note that our upper bound holds only for 0 ≤ λ ≤ 1. Although the choice 0 ≤ λ ≤ 1 seems restrictive, intuitively it is natural to place a higher penalization on the empirical mean than on the empirical variance during minimization. Also, a closer look at the empirical Bernstein inequality in [5] shows that the empirical variance term is multiplied by √(1/n) while the empirical mean is multiplied by one. Thus, for large values of n, the weight on the sample variance is small. Furthermore, our experiments suggest that restricting λ to this range does not significantly change the results.
4 How good is the upper bound?
First, we observe that our upper bound is exact when λ = 1, and loosest for the case λ = 0. We visualize the upper bound and the true cost for two settings of λ in Figure 1.
Since the cost (4) is minimized via an upper bound (5), a natural question is: how good is this approximation? We evaluate the tightness of this upper bound by considering its impact on learning efficiency. As is clear from Figure 1, when λ = 1, the upper bound is exact and incurs no inefficiency. In the other extreme, when λ = 0, the cost of VadaBoost coincides with that of AdaBoost and the bound is effectively at its loosest. Even in this extreme case, VadaBoost, though derived through an upper bound, requires at most twice the number of iterations of AdaBoost to achieve a particular cost.
The following theorem shows that our algorithm remains efficient even in this worst-case scenario.
Theorem 5 Let O_A denote the squared cost obtained by AdaBoost after S iterations. For weak learners in any iteration achieving a fixed error rate ε < 0.5, VadaBoost with the setting λ = 0 attains a cost at least as low as O_A in no more than 2S iterations.
Proof Denote the weight on example i in the s-th iteration by w_i^s. The weighted error rate of the s-th classifier is ε_s = Σ_{j∈J_s} w_j^s. We have, for both algorithms,

  w_i^{S+1} = w_i^S exp(−y_i α_S G^S(X_i)) / Z_S = exp(−y_i Σ_{s=1}^S α_s G^s(X_i)) / (n Π_{s=1}^S Z_s).    (6)

The value of the normalization factor in the case of AdaBoost is

  Z_s^a = Σ_{j∈J_s} w_j^s e^{α_s} + Σ_{i∈I_s} w_i^s e^{−α_s} = 2 √( ε_s (1 − ε_s) ).    (7)

Similarly, the value of the normalization factor for VadaBoost is given by

  Z_s^v = Σ_{j∈J_s} w_j^s e^{α_s} + Σ_{i∈I_s} w_i^s e^{−α_s} = ( ε_s (1 − ε_s) )^{1/4} ( √ε_s + √(1 − ε_s) ).    (8)

The squared cost function of AdaBoost after S steps is given by

  O_A = ( Σ_{i=1}^n exp(−y_i Σ_{s=1}^S α_s G^s(X_i)) )² = ( n Π_{s=1}^S Z_s^a Σ_{i=1}^n w_i^{S+1} )² = n² Π_{s=1}^S (Z_s^a)² = n² Π_{s=1}^S 4 ε_s (1 − ε_s).

We used (6), (7) and the fact that Σ_{i=1}^n w_i^{S+1} = 1 to derive the above expression. Similarly, for λ = 0 the cost of VadaBoost satisfies

  O_V = ( Σ_{i=1}^n exp(−y_i Σ_{s=1}^S α_s G^s(X_i)) )² = n² Π_{s=1}^S (Z_s^v)² = n² Π_{s=1}^S ( 2 ε_s (1 − ε_s) + √( ε_s (1 − ε_s) ) ).

Now, suppose that ε_s = ε for all s. Then the squared cost achieved by AdaBoost is given by n² (4 ε (1 − ε))^S. To achieve the same cost value, VadaBoost with weak learners of the same error rate needs at most

  S · log( 4 ε (1 − ε) ) / log( 2 ε (1 − ε) + √( ε (1 − ε) ) )

iterations. Within the range of interest for ε, the term multiplying S above is at most 2.
Although the above worst-case bound achieves a factor of two, for ε > 0.4, VadaBoost requires only about 33% more iterations than AdaBoost. To summarize, even in the worst possible scenario where λ = 0 (when the variational bound is at its loosest), the VadaBoost algorithm takes no more than double (a small constant factor) the number of iterations of AdaBoost to achieve the same cost.
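The worst-case factor in this argument is easy to tabulate; the short check below (our own, not from the paper) confirms the constants quoted above:

import numpy as np

# Iterations needed by VadaBoost (lam = 0) per AdaBoost iteration, as a
# function of the common weak-learner error rate eps (see the proof above).
eps = np.linspace(0.01, 0.49, 49)
ratio = np.log(4 * eps * (1 - eps)) / np.log(2 * eps * (1 - eps) + np.sqrt(eps * (1 - eps)))
assert ratio.max() <= 2.0               # never more than twice as many rounds
print(ratio[eps > 0.4].max())           # roughly 1.33 for eps > 0.4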
Algorithm 3 EBBoost:
Require: (X_i, y_i)_{i=1}^n, scalar parameter λ ≥ 0, and weak learners H
Initialize the weights: w_i ← 1/n for i = 1, . . . , n; initialize f to predict zero on all inputs.
for s ← 1 to S do
  Get a weak learner G^s(·) that minimizes (3) with the following choice of α_s:
  α_s = (1/4) log( [ (1 − λ)(Σ_{i∈I_s} w_i)² + λ n Σ_{i∈I_s} w_i² ] / [ (1 − λ)(Σ_{i∈J_s} w_i)² + λ n Σ_{i∈J_s} w_i² ] )
  if α_s < 0 then break end if
  f(·) ← f(·) + α_s G^s(·)
  w_i ← w_i exp(−y_i G^s(X_i) α_s) / Z_s, where Z_s is such that Σ_{i=1}^n w_i = 1.
end for
5 A limitation of the EBBoost algorithm
A sample variance penalization algorithm known as EBBoost was previously explored [10]. While this algorithm was simple to implement and showed significant improvements over AdaBoost, it suffers from a severe limitation: it requires enumeration and evaluation of every possible weak learner per iteration. Recall the steps implementing EBBoost in Algorithm 3. An implementation of EBBoost requires exhaustive enumeration of weak learners in search of the one that minimizes cost (3). It is preferable, instead, to find the best weak learner by providing weights on the training examples and efficiently computing the rule whose performance on that weighted set of examples is guaranteed to be better than random guessing. However, with the EBBoost algorithm, the weight on all the misclassified examples is Σ_{j∈J_s} w_j² + (Σ_{j∈J_s} w_j)², and the weight on correctly classified examples is Σ_{i∈I_s} w_i² + (Σ_{i∈I_s} w_i)²; these aggregate weights on misclassified and correctly classified examples do not translate into weights on the individual examples. Thus, it becomes necessary to exhaustively enumerate weak learners in Algorithm 3. While enumeration of weak learners is possible in the case of decision stumps, it poses serious difficulties in the case of weak learners such as decision trees, ridge regression, etc. Thus, VadaBoost is the more versatile boosting algorithm for sample variance penalization.
²The cost which VadaBoost minimizes at λ = 0 is the squared cost of AdaBoost; we do not square it again.
Table 1: Mean and standard errors with decision stump as the weak learner.

Dataset     AdaBoost      EBBoost       VadaBoost     RLP-Boost     RQP-Boost
a5a         16.15 ± 0.1   16.05 ± 0.1   16.22 ± 0.1   16.21 ± 0.1   16.04 ± 0.1
abalone     21.64 ± 0.2   21.52 ± 0.2   21.63 ± 0.2   22.29 ± 0.2   21.79 ± 0.2
image        3.37 ± 0.1    3.14 ± 0.1    3.14 ± 0.1    3.18 ± 0.1    3.09 ± 0.1
mushrooms    0.02 ± 0.0    0.02 ± 0.0    0.01 ± 0.0    0.01 ± 0.0    0.00 ± 0.0
musk         3.84 ± 0.1    3.51 ± 0.1    3.59 ± 0.1    3.60 ± 0.1    3.41 ± 0.1
mnist09      0.89 ± 0.0    0.85 ± 0.0    0.84 ± 0.0    0.98 ± 0.0    0.88 ± 0.0
mnist14      0.64 ± 0.0    0.58 ± 0.0    0.60 ± 0.0    0.68 ± 0.0    0.63 ± 0.0
mnist27      2.11 ± 0.1    1.86 ± 0.1    2.01 ± 0.1    2.06 ± 0.1    1.95 ± 0.1
mnist38      4.45 ± 0.1    4.12 ± 0.1    4.32 ± 0.1    4.51 ± 0.1    4.25 ± 0.1
mnist56      2.79 ± 0.1    2.56 ± 0.1    2.62 ± 0.1    2.77 ± 0.1    2.72 ± 0.1
ringnorm    13.16 ± 0.6   11.74 ± 0.6   12.46 ± 0.6   13.02 ± 0.6   12.86 ± 0.6
spambase     5.90 ± 0.1    5.64 ± 0.1    5.78 ± 0.1    5.81 ± 0.1    5.75 ± 0.1
splice       8.83 ± 0.2    8.33 ± 0.1    8.48 ± 0.1    8.55 ± 0.2    8.47 ± 0.1
twonorm      3.16 ± 0.1    2.98 ± 0.1    3.09 ± 0.1    3.29 ± 0.1    3.07 ± 0.1
w4a          2.60 ± 0.1    2.38 ± 0.1    2.50 ± 0.1    2.44 ± 0.1    2.36 ± 0.1
waveform    10.99 ± 0.1   10.96 ± 0.1   10.75 ± 0.1   10.95 ± 0.1   10.60 ± 0.1
wine        23.62 ± 0.2   23.52 ± 0.2   23.41 ± 0.1   24.16 ± 0.1   23.61 ± 0.1
wisc         5.32 ± 0.3    4.38 ± 0.2    5.00 ± 0.2    4.96 ± 0.3    4.72 ± 0.3
Table 2: Mean and standard errors with CART as the weak learner.

Dataset     AdaBoost      VadaBoost     RLP-Boost     RQP-Boost
a5a         17.59 ± 0.2   17.16 ± 0.1   18.24 ± 0.1   17.99 ± 0.1
abalone     21.87 ± 0.2   21.30 ± 0.2   22.16 ± 0.2   21.84 ± 0.2
image        1.93 ± 0.1    1.98 ± 0.1    1.99 ± 0.1    1.95 ± 0.1
mushrooms    0.01 ± 0.0    0.01 ± 0.0    0.02 ± 0.0    0.01 ± 0.0
musk         2.36 ± 0.1    2.07 ± 0.1    2.40 ± 0.1    2.29 ± 0.1
mnist09      0.73 ± 0.0    0.72 ± 0.0    0.76 ± 0.0    0.71 ± 0.0
mnist14      0.52 ± 0.0    0.50 ± 0.0    0.55 ± 0.0    0.52 ± 0.0
mnist27      1.31 ± 0.0    1.24 ± 0.0    1.32 ± 0.0    1.29 ± 0.0
mnist38      1.89 ± 0.1    1.72 ± 0.1    1.88 ± 0.1    1.87 ± 0.1
mnist56      1.23 ± 0.1    1.17 ± 0.0    1.20 ± 0.0    1.19 ± 0.1
ringnorm     7.94 ± 0.4    7.78 ± 0.4    8.60 ± 0.5    7.84 ± 0.4
spambase     6.14 ± 0.1    5.76 ± 0.1    6.25 ± 0.1    6.03 ± 0.1
splice       4.02 ± 0.1    3.67 ± 0.1    4.03 ± 0.1    3.97 ± 0.1
twonorm      3.40 ± 0.1    3.27 ± 0.1    3.50 ± 0.1    3.38 ± 0.1
w4a          2.90 ± 0.1    2.90 ± 0.1    2.90 ± 0.1    2.90 ± 0.1
waveform    11.09 ± 0.1   10.59 ± 0.1   11.11 ± 0.1   10.82 ± 0.1
wine        21.94 ± 0.2   21.18 ± 0.2   22.44 ± 0.2   22.18 ± 0.2
wisc         4.61 ± 0.2    4.18 ± 0.2    4.63 ± 0.2    4.37 ± 0.2
6 Experiments
In this section, we evaluate the empirical performance of the VadaBoost algorithm with respect to
several other algorithms. The primary purpose of our experiments is to compare sample variance
penalization versus empirical risk minimization and to show that we can efficiently perform sample
variance penalization for weak learners beyond decision stumps. We compared VadaBoost against
EBBoost, AdaBoost, regularized LP and QP boost algorithms [7]. All the algorithms except AdaBoost have one extra parameter to tune.
Experiments were performed on benchmark datasets that have been previously used in [10]. These
datasets include a variety of tasks including all digits from the MNIST dataset. Each dataset was
divided into three parts: 50% for training, 25% for validation and 25% for test. The total number
of examples was restricted to 5000 in the case of MNIST and musk datasets due to computational
restrictions of solving LP/QP.
The first set of experiments uses decision stumps as the weak learners; the second uses Classification and Regression Trees (CART) [3]. A standard MATLAB implementation of CART was used without modification. For all the datasets, in both experiments,
AdaBoost, VadaBoost and EBBoost (in the case of stumps) were run until there was no drop in the error rate on the validation set for 100 consecutive iterations. The values of the parameters for
VadaBoost and EBBoost were chosen to minimize the validation error upon termination. RLP-Boost
and RQP-Boost were given the predictions obtained by AdaBoost. Their regularization parameter
was also chosen to minimize the error rate on the validation set. Once the parameter values were
fixed via the validation set, we noted the test set error corresponding to that parameter value. The
entire experiment was repeated 50 times by randomly selecting train, test and validation sets. The numbers reported here are averages over these runs.
The results for the decision stump and CART experiments are reported in Tables 1 and 2. For each
dataset, the algorithm with the best percentage test error is represented by a dark shaded cell. All
lightly shaded entries in a row denote results that are not significantly different from the minimum
error (according to a paired t-test at a 1% significance level). With decision stumps, both EBBoost
and VadaBoost have comparable performance and significantly outperform AdaBoost. With CART
as the weak learner, VadaBoost is once again significantly better than AdaBoost.
We gave a guarantee on the number of iterations required in the worst case for VadaBoost (which approximately matches the AdaBoost cost, squared, in Theorem 5). An assumption in that theorem was that the error rate of each weak learner was fixed. However, in practice, the error rates of the weak learners are not constant over the iterations. To see this behavior in practice, we show results for the MNIST 3 versus 8 classification experiment. In Figure 2 we plot the cost (plus 1) of each algorithm (the AdaBoost cost has been squared) versus the number of iterations, using a logarithmic scale on the Y-axis. Since at λ = 0 EBBoost reduces to AdaBoost, we omit its plot at that setting. From the figure, it can be seen that the number of iterations required by VadaBoost is roughly twice the number required by AdaBoost. At λ = 0.5, there is only a minor difference in the number of iterations required by EBBoost and VadaBoost.

Figure 2: 1 + cost versus the number of iterations (logarithmic Y-axis) for AdaBoost, EBBoost (λ = 0.5), VadaBoost (λ = 0), and VadaBoost (λ = 0.5).
7 Conclusions
This paper identified a key weakness in the EBBoost algorithm and proposed a novel algorithm
that efficiently overcomes the limitation to enumerable weak learners. VadaBoost reduces a well
motivated cost by iteratively minimizing an upper bound which, unlike EBBoost, allows the boosting
method to handle any weak learner by estimating weights on the data. The update rule of VadaBoost
has a simplicity that is reminiscent of AdaBoost. Furthermore, despite the use of an upper bound,
the novel boosting method remains efficient. Even when the bound is at its loosest, the number
of iterations required by VadaBoost is a small constant factor more than the number of iterations
required by AdaBoost. Experimental results showed that VadaBoost outperforms AdaBoost in terms of classification accuracy while applying efficiently to any family of weak learners. The effectiveness of boosting has been explained via margin theory [9], though it has taken a number of years to settle
certain open questions [8]. Considering the simplicity and effectiveness of VadaBoost, one natural
future research direction is to study the margin distributions it obtains. Another future research
direction is to design efficient sample variance penalization algorithms for other problems such as
multi-class classification, ranking, and so on.
Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 1117631, by a Google Research Award, and by the Department of Homeland
Security under Grant No. N66001-09-C-0080.
References
[1] J.-Y. Audibert, R. Munos, and C. Szepesvári. Tuning bandit algorithms in stochastic environments. In ALT, 2007.
[2] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman and Hall, New York, 1984.
[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[5] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. In COLT, 2009.
[6] V. Mnih, C. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In COLT, 2008.
[7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 43:287–320, 2001.
[8] L. Reyzin and R. Schapire. How boosting the margin can also boost classifier complexity. In ICML, 2006.
[9] R. E. Schapire, Y. Freund, P. L. Bartlett, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, 1998.
[10] P. K. Shivaswamy and T. Jebara. Empirical Bernstein boosting. In AISTATS, 2010.
[11] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, NY, 1995.
Spectral Methods for
Learning Multivariate Latent Tree Structure
Animashree Anandkumar
UC Irvine
Kamalika Chaudhuri
UC San Diego
Daniel Hsu
Microsoft Research
[email protected]
[email protected]
[email protected]
Sham M. Kakade
Microsoft Research &
University of Pennsylvania
Le Song
Carnegie Mellon University
Tong Zhang
Rutgers University
[email protected]
[email protected]
[email protected]
Abstract
This work considers the problem of learning the structure of multivariate linear
tree models, which include a variety of directed tree graphical models with continuous, discrete, and mixed latent variables such as linear-Gaussian models, hidden
Markov models, Gaussian mixture models, and Markov evolutionary trees. The
setting is one where we only have samples from certain observed variables in the
tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables).
We propose the Spectral Recursive Grouping algorithm, an efficient and simple
bottom-up procedure for recovering the tree structure from independent samples
of the observed variables. Our finite sample size bounds for exact recovery of
the tree structure reveal certain natural dependencies on underlying statistical and
structural properties of the underlying joint distribution. Furthermore, our sample
complexity guarantees have no explicit dependence on the dimensionality of the
observed variables, making the algorithm applicable to many high-dimensional
settings. At the heart of our algorithm is a spectral quartet test for determining the
relative topology of a quartet of variables from second-order statistics.
1 Introduction
Graphical models are a central tool in modern machine learning applications, as they provide a
natural methodology for succinctly representing high-dimensional distributions. As such, they have
enjoyed much success in various AI and machine learning applications such as natural language
processing, speech recognition, robotics, computer vision, and bioinformatics.
The main statistical challenges associated with graphical models include estimation and inference.
While the body of techniques for probabilistic inference in graphical models is rather rich [1], current
methods for tackling the more challenging problems of parameter and structure estimation are less
developed and understood, especially in the presence of latent (hidden) variables. The problem of
parameter estimation involves determining the model parameters from samples of certain observed
variables. Here, the predominant approach is the expectation maximization (EM) algorithm, and
only rather recently is the understanding of this algorithm improving [2, 3]. The problem of structure
learning is to estimate the underlying graph of the graphical model. In general, structure learning is
NP-hard and becomes even more challenging when some variables are unobserved [4]. The main
approaches for structure estimation are either greedy or local search approaches [5, 6] or, more
recently, based on convex relaxation [7].
Figure 1: The four possible (undirected) tree topologies over leaves {z1, z2, z3, z4}: (a) {{z1, z2}, {z3, z4}}, (b) {{z1, z3}, {z2, z4}}, (c) {{z1, z4}, {z2, z3}}, and (d) {{z1, z2, z3, z4}} (the star topology with a single hidden node h).
This work focuses on learning the structure of multivariate latent tree graphical models. Here, the
underlying graph is a directed tree (e.g., hidden Markov model, binary evolutionary tree), and only
samples from a set of (multivariate) observed variables (the leaves of the tree) are available for
learning the structure. Latent tree graphical models are relevant in many applications, ranging from
computer vision, where one may learn object/scene structure from the co-occurrences of objects to
aid image understanding [8]; to phylogenetics, where the central task is to reconstruct the tree of life
from the genetic material of surviving species [9].
Generally speaking, methods for learning latent tree structure exploit structural properties afforded
by the tree that are revealed through certain statistical tests over every choice of four variables in the
tree. These quartet tests, which have origins in structural equation modeling [10, 11], are hypothesis
tests of the relative configuration of four (possibly non-adjacent) nodes/variables in the tree (see
Figure 1); they are also related to the four point condition associated with a corresponding additive
tree metric induced by the distribution [12]. Some early methods for learning tree structure are based
on the use of exact correlation statistics or distance measurements (e.g., [13, 14]). Unfortunately,
these methods ignore the crucial aspect of estimation error, which ultimately governs their sample
complexity. Indeed, this (lack of) robustness to estimation error has been quantified for various
algorithms (notably, for the popular Neighbor Joining algorithm [15, 16]), and therefore serves as a
basis for comparing different methods. Subsequent work in the area of mathematical phylogenetics
has focused on the sample complexity of evolutionary tree reconstruction [17, 15, 18, 19]. The basic
model there corresponds to a directed tree over discrete random variables, and much of the recent
effort deals exclusively in the regime for a certain model parameter (the Kesten-Stigum regime [20])
that allows for a sample complexity that is polylogarithmic in the number of leaves, as opposed
to polynomial [18, 19]. Finally, recent work in machine learning has developed structure learning
methods for latent tree graphical models that extend beyond the discrete distributions of evolutionary
trees [21], thereby widening their applicability to other problem domains.
This work extends beyond previous studies, which have focused on latent tree models with either
discrete or scalar Gaussian variables, by directly addressing the multivariate setting where hidden
and observed nodes may be random vectors rather than scalars. The generality of our techniques
allows us to handle a much wider class of distributions than before, both in terms of the conditional
independence properties imposed by the models (i.e., the random vector associated with a node need
not follow a distribution that corresponds to a tree model), as well as other characteristics of the node
distributions (e.g., some nodes in the tree could have discrete state spaces and others continuous, as
in a Gaussian mixture model).
We propose the Spectral Recursive Grouping algorithm for learning multivariate latent tree structure.
The algorithm has at its core a multivariate spectral quartet test, which extends the classical quartet tests for scalar variables by applying spectral techniques from multivariate statistics (specifically
canonical correlation analysis [22, 23]). Spectral methods have enjoyed recent success in the context
of parameter estimation [24, 25, 26, 27]; our work shows that they are also useful for structure learning. We use the spectral quartet test in a simple modification of the recursive grouping algorithm
of [21] to perform the tree reconstruction. The algorithm is essentially a robust method for reasoning
about the results of quartet tests (viewed simply as hypothesis tests); the tests either confirm or reject
hypotheses about the relative topology over quartets of variables. By carefully choosing which tests
to consider and properly interpreting their results, the algorithm is able to recover the correct latent
tree structure (with high probability) in a provably efficient manner, in terms of both computational
and sample complexity. The recursive grouping procedure is similar to the short quartet method
from phylogenetics [15], which also guarantees efficient reconstruction in the context of evolutionary trees. However, our method and analysis applies to considerably more general high-dimensional
settings; for instance, our sample complexity bound is given in terms of natural correlation conditions that generalize the more restrictive effective depth conditions of previous works [15, 21]. Finally, we note that while we do not directly address the question of parameter estimation, provable parameter estimation methods may be derived using the spectral techniques from [24, 25].
2 Preliminaries
2.1 Latent variable tree models
Let T be a connected, directed tree graphical model with leaves V_obs := {x1, x2, . . . , xn} and internal nodes V_hid := {h1, h2, . . . , hm} such that every node has at most one parent. The leaves are termed the observed variables and the internal nodes hidden variables. Note that all nodes in this work generally correspond to multivariate random vectors; we will abuse terminology and still refer to these random vectors as random variables. For any h ∈ V_hid, let Children_T(h) ⊆ V_T denote the children of h in T.
Each observed variable x ∈ V_obs is modeled as a random vector in R^d, and each hidden variable h ∈ V_hid as a random vector in R^k. The joint distribution over all the variables V_T := V_obs ∪ V_hid is assumed to satisfy conditional independence properties specified by the tree structure over the variables. Specifically, for any disjoint subsets V1, V2, V3 ⊆ V_T such that V3 separates V1 from V2 in T, the variables in V1 are conditionally independent of those in V2 given V3.
2.2 Structural and distributional assumptions
The class of models considered is specified by the following structural and distributional assumptions.
Condition 1 (Linear conditional means). Fix any hidden variable h ∈ V_hid. For each hidden child g ∈ Children_T(h) ∩ V_hid, there exists a matrix A(g|h) ∈ R^{k×k} such that

  E[g|h] = A(g|h) h;

and for each observed child x ∈ Children_T(h) ∩ V_obs, there exists a matrix C(x|h) ∈ R^{d×k} such that

  E[x|h] = C(x|h) h.
We refer to the class of tree graphical models satisfying Condition 1 as linear tree models. Such
models include a variety of continuous and discrete tree distributions (as well as hybrid combinations
of the two, such as Gaussian mixture models) which are widely used in practice. Continuous linear
tree models include linear-Gaussian models and Kalman filters. In the discrete case, suppose that
the observed variables take on d values, and hidden variables take k values. Then, each variable is
represented by a binary vector in {0, 1}s , where s = d for the observed variables and s = k for
the hidden variables (in particular, if the variable takes value i, then the corresponding vector is the
i-th coordinate vector), and any conditional distribution between the variables is represented by a
linear relationship. Thus, discrete linear tree models include discrete hidden Markov models [25]
and Markovian evolutionary trees [24].
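To make these definitions concrete, the following small simulation (our own illustration; all matrices and noise scales are arbitrary choices, not from the paper) draws samples from a linear-Gaussian tree with hidden variables h and g arranged as in Figure 1(a):

import numpy as np

rng = np.random.default_rng(1)
k, d, N = 2, 5, 100_000
A = rng.normal(size=(k, k))                        # A(g|h), full rank a.s.
C = [rng.normal(size=(d, k)) for _ in range(4)]    # C(x_i | parent)

h = rng.normal(size=(N, k))                        # E[h h^T] = I_k has rank k
g = h @ A.T + 0.1 * rng.normal(size=(N, k))        # E[g | h] = A(g|h) h
parents = [h, h, g, g]                             # x1, x2 under h; x3, x4 under g
x = [p @ C[i].T + 0.1 * rng.normal(size=(N, d))    # E[x_i | parent] = C(x_i|parent) parent
     for i, p in enumerate(parents)]
Sigma_13 = x[0].T @ x[2] / N                       # empirical E[x1 x3^T], a d x d matrix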
In addition to the linearity, the following conditions are assumed in order to recover the hidden tree structure. For any matrix M, let σ_t(M) denote its t-th largest singular value.
Condition 2 (Rank condition). The variables in V_T = V_hid ∪ V_obs obey the following rank conditions.
1. For all h ∈ V_hid, E[hh^⊤] has rank k (i.e., σ_k(E[hh^⊤]) > 0).
2. For all h ∈ V_hid and hidden child g ∈ Children_T(h) ∩ V_hid, A(g|h) has rank k.
3. For all h ∈ V_hid and observed child x ∈ Children_T(h) ∩ V_obs, C(x|h) has rank k.
The rank condition is a generalization of parameter identifiability conditions in latent variable models [28, 24, 25] which rules out various (provably) hard instances in discrete variable settings [24].
Figure 2: Set of trees F_{h4} = {T1, T2, T3} obtained if h4 is removed.
Condition 3 (Non-redundancy condition). Each hidden variable has at least three neighbors. Furthermore, there exists ρ²_max > 0 such that for each pair of distinct hidden variables h, g ∈ V_hid,

  det(E[hg^⊤])² / ( det(E[hh^⊤]) det(E[gg^⊤]) ) ≤ ρ²_max < 1.
The requirement that each hidden node have at least three neighbors is natural; otherwise, the hidden node can be eliminated. The quantity ρ_max is a natural multivariate generalization of correlation. First, note that ρ_max ≤ 1, and that if ρ_max = 1 is achieved with some h and g, then h and g are completely correlated, implying the existence of a deterministic map between hidden nodes h and g; hence simply merging the two nodes into a single node h (or g) resolves this issue. Therefore the non-redundancy condition simply means that any two hidden nodes h and g cannot be further reduced to a single node. Clearly, this condition is necessary for the goal of identifying the correct tree structure, and it is satisfied as soon as h and g have limited correlation in just a single direction. Previous works [13, 29] show that an analogous condition ensures identifiability for general latent tree models (and in fact, the conditions are identical in the Gaussian case). Condition 3 is therefore a generalization of this condition suitable for the multivariate setting.
Our learning guarantees also require a correlation condition that generalizes the explicit depth conditions considered in the phylogenetics literature [15, 24]. To state this condition, first define F_h to be the set of subtrees of T that remain after a hidden variable h ∈ V_hid is removed from T (see Figure 2). Also, for any subtree T′ of T, let V_obs[T′] ⊆ V_obs be the observed variables in T′.
Condition 4 (Correlation condition). There exists γ_min > 0 such that for all hidden variables h ∈ V_hid and all triples of subtrees {T1, T2, T3} ⊆ F_h in the forest obtained if h is removed from T,

  max_{x1 ∈ V_obs[T1], x2 ∈ V_obs[T2], x3 ∈ V_obs[T3]}  min_{{i,j} ⊂ {1,2,3}}  σ_k(E[x_i x_j^⊤]) ≥ γ_min.
The quantity γ_min is related to the effective depth of T, which is the maximum graph distance between a hidden variable and its closest observed variable [15, 21]. The effective depth is at most logarithmic in the number of variables (as achieved by a complete binary tree), though it can also be a constant if every hidden variable is close to an observed variable (e.g., in a hidden Markov model, the effective depth is 1, even though the true depth, or diameter, is m + 1). If the matrices giving the (conditionally) linear relationship between neighboring variables in T are all well-conditioned, then γ_min is at worst exponentially small in the effective depth, and therefore at worst polynomially small in the number of variables.
Finally, also define

  γ_max := max_{{x1, x2} ⊆ V_obs} σ_1(E[x1 x2^⊤])

to be the largest spectral norm of any second-moment matrix between observed variables. Note that γ_max ≤ 1 in the discrete case and, in the continuous case, γ_max ≤ 1 if each observed random vector is in isotropic position.
In this work, the Euclidean norm of a vector x is denoted by ‖x‖, and the (induced) spectral norm of a matrix A is denoted by ‖A‖, i.e., ‖A‖ := σ_1(A) = sup{‖Ax‖ : ‖x‖ = 1}.
Algorithm 1 SpectralQuartetTest on observed variables {z1, z2, z3, z4}.
Input: For each pair {i, j} ⊂ {1, 2, 3, 4}, an empirical estimate Σ̂_{i,j} of the second-moment matrix E[z_i z_j^⊤] and a corresponding confidence parameter Δ_{i,j} > 0.
Output: Either a pairing {{z_i, z_j}, {z_{i′}, z_{j′}}} or ⊥.
1: if there exists a partition of {z1, z2, z3, z4} = {z_i, z_j} ∪ {z_{i′}, z_{j′}} such that

  Π_{s=1}^k [σ_s(Σ̂_{i,j}) − Δ_{i,j}]_+ [σ_s(Σ̂_{i′,j′}) − Δ_{i′,j′}]_+ > Π_{s=1}^k (σ_s(Σ̂_{i′,j}) + Δ_{i′,j}) (σ_s(Σ̂_{i,j′}) + Δ_{i,j′})

then return the pairing {{z_i, z_j}, {z_{i′}, z_{j′}}}.
2: else return ⊥.
3 Spectral quartet tests
This section describes the core of our learning algorithm, a spectral quartet test that determines the topology of the subtree induced by four observed variables {z1, z2, z3, z4}. There are four possibilities for the induced subtree, as shown in Figure 1. Our quartet test either returns the correct induced subtree among the possibilities in Figure 1(a)–(c), or it outputs ⊥ to indicate abstinence. If the test returns ⊥, then no guarantees are provided on the induced subtree topology. If it does return a subtree, then the output is guaranteed to be the correct induced subtree (with high probability).
The quartet test proposed is described in Algorithm 1 (SpectralQuartetTest). The notation [a]_+ denotes max{0, a}, and [t] (for an integer t) denotes the set {1, 2, . . . , t}.
The quartet test is defined with respect to four observed variables Z := {z_1, z_2, z_3, z_4}. For each pair of variables z_i and z_j, it takes as input an empirical estimate Σ̂_{i,j} of the second-moment matrix E[z_i z_j^⊤], and confidence bound parameters Δ_{i,j}, which are functions of N, the number of samples used to compute the Σ̂_{i,j}'s, a confidence parameter δ, and of properties of the distributions of z_i and z_j. In practice, one uses a single threshold Δ for all pairs, which is tuned by the algorithm. Our theoretical analysis also applies to this case. The output of the test is either ⊥ or a pairing of the variables {{z_i, z_j}, {z_{i′}, z_{j′}}}. For example, if the output is the pairing {{z_1, z_2}, {z_3, z_4}}, then Figure 1(a) is the output topology.
Even though the configuration in Figure 1(d) is a possibility, the spectral quartet test never returns {{z_1, z_2, z_3, z_4}}, as there is no correct pairing of Z. The topology {{z_1, z_2, z_3, z_4}} can be viewed as a degenerate case of {{z_1, z_2}, {z_3, z_4}} (say) where the hidden variables h and g are deterministically identical, and Condition 3 fails to hold with respect to h and g.
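For concreteness, the following is a minimal NumPy sketch of the test in Algorithm 1; the interface (dictionaries sigma_hat and delta keyed by sorted index pairs, and the rank parameter k) is an illustrative assumption, not the paper's notation.

```python
import numpy as np

def spectral_quartet_test(sigma_hat, delta, k):
    """Return a pairing ((i, j), (i2, j2)) over indices 1..4, or None (= the paper's "bot")."""
    def svals(pair):
        key = tuple(sorted(pair))
        return np.linalg.svd(sigma_hat[key], compute_uv=False)[:k], delta[key]

    for (i, j), (i2, j2) in [((1, 2), (3, 4)), ((1, 3), (2, 4)), ((1, 4), (2, 3))]:
        s_ij, d_ij = svals((i, j))
        s_i2j2, d_i2j2 = svals((i2, j2))
        s_i2j, d_i2j = svals((i2, j))
        s_ij2, d_ij2 = svals((i, j2))
        # lower-bound product for the candidate pairing ...
        lower = np.prod(np.maximum(s_ij - d_ij, 0.0) *
                        np.maximum(s_i2j2 - d_i2j2, 0.0))
        # ... against an upper-bound product over the cross pairs
        upper = np.prod((s_i2j + d_i2j) * (s_ij2 + d_ij2))
        if lower > upper:
            return (i, j), (i2, j2)   # confident pairing
    return None                        # abstain
```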
3.1 Properties of the spectral quartet test
With exact second moments: The spectral quartet test is motivated by the following lemma, which shows the relationship between the singular values of second-moment matrices of the z_i's and the induced topology among them in the latent tree. Let $\det_k(M) := \prod_{s=1}^{k} \sigma_s(M)$ denote the product of the k largest singular values of a matrix M.
Lemma 1 (Perfect quartet test). Suppose that the observed variables Z = {z_1, z_2, z_3, z_4} have the true induced tree topology shown in Figure 1(a), and the tree model satisfies Condition 1 and Condition 2. Then

$$\frac{\det_k(\mathbb{E}[z_1 z_3^\top])\det_k(\mathbb{E}[z_2 z_4^\top])}{\det_k(\mathbb{E}[z_1 z_2^\top])\det_k(\mathbb{E}[z_3 z_4^\top])} \;=\; \frac{\det_k(\mathbb{E}[z_1 z_4^\top])\det_k(\mathbb{E}[z_2 z_3^\top])}{\det_k(\mathbb{E}[z_1 z_2^\top])\det_k(\mathbb{E}[z_3 z_4^\top])} \;=\; \frac{\det(\mathbb{E}[hg^\top])^2}{\det(\mathbb{E}[hh^\top])\det(\mathbb{E}[gg^\top])} \;\le\; 1 \qquad (1)$$

and $\det_k(\mathbb{E}[z_1 z_3^\top])\det_k(\mathbb{E}[z_2 z_4^\top]) = \det_k(\mathbb{E}[z_1 z_4^\top])\det_k(\mathbb{E}[z_2 z_3^\top])$.
This lemma shows that given the true second-moment matrices and assuming Condition 3, the inequality in (1) becomes strict and thus can be used to deduce the correct topology: the correct pairing is {{z_i, z_j}, {z_{i′}, z_{j′}}} if and only if

$$\det_k(\mathbb{E}[z_i z_j^\top])\det_k(\mathbb{E}[z_{i'} z_{j'}^\top]) \;>\; \det_k(\mathbb{E}[z_{i'} z_j^\top])\det_k(\mathbb{E}[z_i z_{j'}^\top]).$$
Reliability: The next lemma shows that even if the singular values of E[z_i z_j^⊤] are not known exactly, then with valid confidence intervals (that contain these singular values) a robust test can be constructed which is reliable in the following sense: if it does not output ⊥, then the output topology is indeed the correct topology.
Lemma 2 (Reliability). Consider the setup of Lemma 1, and suppose that Figure 1(a) is the correct topology. If for all pairs {z_i, z_j} ⊆ Z and all s ∈ [k], σ_s(Σ̂_{i,j}) − Δ_{i,j} ≤ σ_s(E[z_i z_j^⊤]) ≤ σ_s(Σ̂_{i,j}) + Δ_{i,j}, and if SpectralQuartetTest returns a pairing {{z_i, z_j}, {z_{i′}, z_{j′}}}, then {{z_i, z_j}, {z_{i′}, z_{j′}}} = {{z_1, z_2}, {z_3, z_4}}.
In other words, the spectral quartet test never returns an incorrect pairing as long as the singular values of E[z_i z_j^⊤] lie in an interval of length 2Δ_{i,j} around the singular values of Σ̂_{i,j}. The lemma below shows how to set the Δ_{i,j}'s as a function of N, δ, and properties of the distributions of z_i and z_j so that this required event holds with probability at least 1 − δ. We remark that any valid confidence intervals may be used; the one described below is particularly suitable when the observed variables are high-dimensional random vectors.
Lemma 3 (Confidence intervals). Let Z = {z_1, z_2, z_3, z_4} be four random vectors. Let ‖z_i‖ ≤ M_i almost surely, and let δ ∈ (0, 1/6). If each empirical second-moment matrix Σ̂_{i,j} is computed using N iid copies of z_i and z_j, and if

$$\tilde d_{i,j} := \frac{\mathbb{E}[\|z_i\|^2 \|z_j\|^2] - \operatorname{tr}\big(\mathbb{E}[z_i z_j^\top]\, \mathbb{E}[z_i z_j^\top]^\top\big)}{\max\big\{ \big\|\mathbb{E}[\|z_j\|^2 z_i z_i^\top]\big\|,\ \big\|\mathbb{E}[\|z_i\|^2 z_j z_j^\top]\big\| \big\}}, \qquad t_{i,j} := 1.55 \ln(24 \tilde d_{i,j} / \delta),$$

$$\Delta_{i,j} \;\ge\; \sqrt{\frac{2 \max\big\{ \big\|\mathbb{E}[\|z_j\|^2 z_i z_i^\top]\big\|,\ \big\|\mathbb{E}[\|z_i\|^2 z_j z_j^\top]\big\| \big\}\, t_{i,j}}{N}} \;+\; \frac{M_i M_j t_{i,j}}{3N},$$

then with probability 1 − δ, for all pairs {z_i, z_j} ⊆ Z and all s ∈ [k],

$$\sigma_s(\hat\Sigma_{i,j}) - \Delta_{i,j} \;\le\; \sigma_s(\mathbb{E}[z_i z_j^\top]) \;\le\; \sigma_s(\hat\Sigma_{i,j}) + \Delta_{i,j}. \qquad (2)$$
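A plug-in sketch of this confidence width follows, with the population moments in Lemma 3 replaced by empirical averages (that substitution is an assumption made here for illustration; the lemma itself is stated in terms of true moments):

```python
import numpy as np

def confidence_width(Zi, Zj, delta_prob):
    """Zi, Zj: (N, d_i) and (N, d_j) arrays of iid samples; returns Delta_{i,j}."""
    N = Zi.shape[0]
    Mi = np.linalg.norm(Zi, axis=1).max()          # empirical bound on ||z_i||
    Mj = np.linalg.norm(Zj, axis=1).max()
    Sij = Zi.T @ Zj / N                            # estimate of E[z_i z_j^T]
    ni = (Zi ** 2).sum(axis=1)                     # ||z_i||^2 per sample
    nj = (Zj ** 2).sum(axis=1)
    # spectral norms of the weighted second-moment matrices
    Ai = np.linalg.norm((Zi * nj[:, None]).T @ Zi / N, 2)  # ~ ||E[||z_j||^2 z_i z_i^T]||
    Aj = np.linalg.norm((Zj * ni[:, None]).T @ Zj / N, 2)  # ~ ||E[||z_i||^2 z_j z_j^T]||
    B = max(Ai, Aj)
    d_tilde = (np.mean(ni * nj) - np.trace(Sij @ Sij.T)) / B
    t = 1.55 * np.log(24.0 * d_tilde / delta_prob)
    return np.sqrt(2.0 * B * t / N) + Mi * Mj * t / (3.0 * N)
```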
Conditions for returning a correct pairing: The conditions under which SpectralQuartetTest returns an induced topology (as opposed to ⊥) are now provided. An important quantity in this analysis is the level of non-redundancy between the hidden variables h and g. Let

$$\rho^2 := \frac{\det(\mathbb{E}[hg^\top])^2}{\det(\mathbb{E}[hh^\top])\det(\mathbb{E}[gg^\top])}. \qquad (3)$$

If Figure 1(a) is the correct induced topology among {z_1, z_2, z_3, z_4}, then the smaller ρ is, the greater the gap between det_k(E[z_1 z_2^⊤]) det_k(E[z_3 z_4^⊤]) and either of det_k(E[z_1 z_3^⊤]) det_k(E[z_2 z_4^⊤]) and det_k(E[z_1 z_4^⊤]) det_k(E[z_2 z_3^⊤]). Therefore, ρ also governs how small the Δ_{i,j} need to be for the quartet test to return a correct pairing; this is quantified in Lemma 4. Note that Condition 3 implies ρ ≤ ρ_max < 1.
Lemma 4 (Correct pairing). Suppose that (i) the observed variables Z = {z_1, z_2, z_3, z_4} have the true induced tree topology shown in Figure 1(a); (ii) the tree model satisfies Condition 1, Condition 2, and ρ < 1 (where ρ is defined in (3)); and (iii) the confidence bounds in (2) hold for all {i, j} and all s ∈ [k]. If

$$\Delta_{i,j} \;<\; \frac{1}{8k} \cdot \min\left\{1,\ \frac{1}{\rho} - 1\right\} \cdot \min_{\{i,j\}}\big\{\sigma_k(\mathbb{E}[z_i z_j^\top])\big\}$$

for each pair {i, j}, then SpectralQuartetTest returns the correct pairing {{z_1, z_2}, {z_3, z_4}}.
4 The Spectral Recursive Grouping algorithm
The Spectral Recursive Grouping algorithm, presented as Algorithm 2, uses the spectral quartet test discussed in the previous section to estimate the structure of a multivariate latent tree distribution from iid samples of the observed leaf variables.¹ The algorithm is a modification of the recursive

¹ To simplify notation, we assume that the estimated second-moment matrices Σ̂_{x,y} and threshold parameters Δ_{x,y} ≥ 0 for all pairs {x, y} ⊆ V_obs are globally defined. In particular, we assume the spectral quartet tests use these quantities.
Algorithm 2 Spectral Recursive Grouping.
Input: Empirical second-moment matrices Σ̂_{x,y} for all pairs {x, y} ⊆ V_obs computed from N iid samples from the distribution over V_obs; threshold parameters Δ_{x,y} for all pairs {x, y} ⊆ V_obs.
Output: Tree structure T̂ or "failure".
1: let R := V_obs, and for all x ∈ R, T[x] := rooted single-node tree x and L[x] := {x}.
2: while |R| > 1 do
3:   let pair {u, v} ∈ {{ũ, ṽ} ⊆ R : Mergeable(R, L[·], ũ, ṽ) = true} be such that max{σ_k(Σ̂_{x,y}) : (x, y) ∈ L[u] × L[v]} is maximized. If no such pair exists, then halt and return "failure".
4:   let result := Relationship(R, L[·], T[·], u, v).
5:   if result = "siblings" then
6:     Create a new variable h, create subtree T[h] rooted at h by joining T[u] and T[v] to h with edges {h, u} and {h, v}, and set L[h] := L[u] ∪ L[v].
7:     Add h to R, and remove u and v from R.
8:   else if result = "u is parent of v" then
9:     Modify subtree T[u] by joining T[v] to u with an edge {u, v}, and modify L[u] := L[u] ∪ L[v].
10:    Remove v from R.
11:  else if result = "v is parent of u" then
12:    {Analogous to above case.}
13:  end if
14: end while
15: Return T̂ := T[h] where R = {h}.
grouping (RG) procedure proposed in [21]. RG builds the tree in a bottom-up fashion, where the initial working set of variables is the set of observed variables. The variables in the working set always
correspond to roots of disjoint subtrees of T discovered by the algorithm. (Note that because these
subtrees are rooted, they naturally induce parent/child relationships, but these may differ from those
implied by the edge directions in T.) In each iteration, the algorithm determines which variables in
the working set to combine. If the variables are combined as siblings, then a new hidden variable
is introduced as their parent and is added to the working set, and its children are removed. If the
variables are combined as neighbors (parent/child), then the child is removed from the working set.
The process repeats until the entire tree is constructed.
Our modification of RG uses the spectral quartet tests from Section 3 to decide which subtree roots in the current working set to combine. Note that because the test may return ⊥ (a null result), our algorithm uses the tests to rule out possible siblings or neighbors among variables in the working set; this is encapsulated in the subroutine Mergeable (Algorithm 3), which tests quartets of observed variables (leaves) in the subtrees rooted at working set variables. For any pair {u, v} ⊆ R submitted to the subroutine (along with the current working set R and leaf sets L[·]):

- Mergeable returns false if there is evidence (provided by a quartet test) that u and v should first be joined with different variables (u′ and v′, respectively) before joining with each other; and
- Mergeable returns true if no quartet test provides such evidence.
The subroutine is also used by the subroutine Relationship (Algorithm 4) which determines whether
a candidate pair of variables should be merged as neighbors (parent/child) or as siblings: essentially,
to check if u is a parent of v, it checks if v is a sibling of each child of u. The use of unreliable
estimates of long-range correlations is avoided by only considering highly-correlated variables as
candidate pairs to merge (where correlation is measured using observed variables in their corresponding subtrees as proxies). This leads to a sample-efficient algorithm for recovering the hidden
tree structure.
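A high-level Python skeleton of this loop may make the bookkeeping concrete; mergeable, relationship, and sigma_k stand in for Algorithms 3 and 4 and for σ_k of an empirical second-moment matrix, and all names here are illustrative assumptions:

```python
from itertools import combinations

def spectral_recursive_grouping(leaves, sigma_k, mergeable, relationship):
    R = set(leaves)
    tree = {x: [] for x in leaves}     # children lists of the rooted subtrees
    L = {x: {x} for x in leaves}       # observed leaves under each root
    fresh = 0
    while len(R) > 1:
        cands = [(u, v) for u, v in combinations(list(R), 2)
                 if mergeable(R, L, u, v)]
        if not cands:
            return "failure"
        # pick the mergeable pair with the strongest observed correlation proxy
        u, v = max(cands, key=lambda p: max(sigma_k(x, y)
                                            for x in L[p[0]] for y in L[p[1]]))
        rel = relationship(R, L, tree, u, v)
        if rel == "siblings":
            fresh += 1
            h = ("hidden", fresh)      # introduce a new hidden variable
            tree[h] = [u, v]
            L[h] = L[u] | L[v]
            R -= {u, v}
            R.add(h)
        elif rel == "u_parent_of_v":
            tree[u].append(v); L[u] |= L[v]; R.remove(v)
        else:                          # "v_parent_of_u"
            tree[v].append(u); L[v] |= L[u]; R.remove(u)
    return tree
```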
The Spectral Recursive Grouping algorithm enjoys the following guarantee.
Theorem 1. Let δ ∈ (0, 1). Assume the directed tree graphical model T over variables (random vectors) V_T = V_obs ∪ V_hid satisfies Conditions 1, 2, 3, and 4. Suppose the Spectral Recursive
Algorithm 3 Subroutine Mergeable(R, L[·], u, v).
Input: Set of nodes R; leaf sets L[v] for all v ∈ R; distinct u, v ∈ R.
Output: true or false.
1: if there exist distinct u′, v′ ∈ R \ {u, v} and (x, y, x′, y′) ∈ L[u] × L[v] × L[u′] × L[v′] s.t. SpectralQuartetTest({x, y, x′, y′}) returns {{x, x′}, {y, y′}} or {{x, y′}, {x′, y}} then return false.
2: else return true.
Algorithm 4 Subroutine Relationship(R, L[·], T[·], u, v).
Input: Set of nodes R; leaf sets L[v] for all v ∈ R; rooted subtrees T[v] for all v ∈ R; distinct u, v ∈ R.
Output: "siblings", "u is parent of v" ("u → v"), or "v is parent of u" ("v → u").
1: if u is a leaf then assert "u ↛ v".
2: if v is a leaf then assert "v ↛ u".
3: let R[w] := (R \ {w}) ∪ {w′ : w′ is a child of w in T[w]} for each w ∈ {u, v}.
4: if there exists a child u_1 of u in T[u] s.t. Mergeable(R[u], L[·], u_1, v) = false then assert "u ↛ v".
5: if there exists a child v_1 of v in T[v] s.t. Mergeable(R[v], L[·], u, v_1) = false then assert "v ↛ u".
6: if both "u ↛ v" and "v ↛ u" were asserted then return "siblings".
7: else if "u ↛ v" was asserted then return "v is parent of u" ("v → u").
8: else return "u is parent of v" ("u → v").
Grouping algorithm (Algorithm 2) is provided N independent samples from the distribution over V_obs, and uses parameters given by

$$\Delta_{x_i,x_j} := \sqrt{\frac{2 B_{x_i,x_j}\, t_{x_i,x_j}}{N}} + \frac{M_{x_i} M_{x_j}\, t_{x_i,x_j}}{3N} \qquad (4)$$

where

$$B_{x_i,x_j} := \max\big\{ \big\|\mathbb{E}[\|x_i\|^2 x_j x_j^\top]\big\|,\ \big\|\mathbb{E}[\|x_j\|^2 x_i x_i^\top]\big\| \big\},$$

$$\tilde d_{x_i,x_j} := \frac{\mathbb{E}[\|x_i\|^2 \|x_j\|^2] - \operatorname{tr}\big(\mathbb{E}[x_i x_j^\top]\, \mathbb{E}[x_j x_i^\top]\big)}{\max\big\{ \big\|\mathbb{E}[\|x_j\|^2 x_i x_i^\top]\big\|,\ \big\|\mathbb{E}[\|x_i\|^2 x_j x_j^\top]\big\| \big\}},$$

$$M_{x_i} \ge \|x_i\| \ \text{almost surely}, \qquad t_{x_i,x_j} := 4 \ln(4 \tilde d_{x_i,x_j} n / \delta).$$

Let B := max_{x_i,x_j ∈ V_obs} {B_{x_i,x_j}}, M := max_{x_i ∈ V_obs} {M_{x_i}}, t := max_{x_i,x_j ∈ V_obs} {t_{x_i,x_j}}. If

$$N \;>\; \frac{200\, k^2\, B\, t}{\big(\gamma_{\min}^2\,(1-\rho_{\max})/\rho_{\max}\big)^{2}} \;+\; \frac{7\, k\, M^2\, t}{\gamma_{\min}^2\,(1-\rho_{\max})/\rho_{\max}},$$

then with probability at least 1 − δ, the Spectral Recursive Grouping algorithm returns a tree T̂ with the same undirected graph structure as T.
Consistency is implied by the above theorem with an appropriate scaling of δ with N. The theorem
reveals that the sample complexity of the algorithm depends solely on intrinsic spectral properties
of the distribution. Note that there is no explicit dependence on the dimensions of the observable
variables, which makes the result applicable to high-dimensional settings.
Acknowledgements
Part of this work was completed while DH was at the Wharton School of the University of Pennsylvania and at Rutgers University. AA was supported in part by the setup funds at UCI and the AFOSR Award FA9550-10-1-0310.
References
[1] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[2] S. Dasgupta and L. Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8(Feb):203–226, 2007.
[3] K. Chaudhuri, S. Dasgupta, and A. Vattani. Learning mixtures of Gaussians using the k-means algorithm, 2009. arXiv:0912.0086.
[4] D. M. Chickering, D. Heckerman, and C. Meek. Large-sample learning of Bayesian networks is NP-hard. Journal of Machine Learning Research, 5:1287–1330, 2004.
[5] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[6] N. Friedman, I. Nachman, and D. Pe'er. Learning Bayesian network structure from massive datasets: the "sparse candidate" algorithm. In Fifteenth Conference on Uncertainty in Artificial Intelligence, 1999.
[7] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[8] M. J. Choi, J. J. Lim, A. Torralba, and A. S. Willsky. Exploiting hierarchical context on a large database of object categories. In IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[9] R. Durbin, S. R. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1999.
[10] J. Wishart. Sampling errors in the theory of two factors. British Journal of Psychology, 19:180–187, 1928.
[11] K. Bollen. Structural Equation Models with Latent Variables. John Wiley & Sons, 1989.
[12] P. Buneman. The recovery of trees from measurements of dissimilarity. In F. R. Hodson, D. G. Kendall, and P. Tautu, editors, Mathematics in the Archaeological and Historical Sciences, pages 387–395. 1971.
[13] J. Pearl and M. Tarsi. Structuring causal trees. Journal of Complexity, 2(1):60–77, 1986.
[14] N. Saitou and M. Nei. The neighbor-joining method: A new method for reconstructing phylogenetic trees. Molecular Biology and Evolution, 4:406–425, 1987.
[15] P. L. Erdős, L. A. Székely, M. A. Steel, and T. J. Warnow. A few logs suffice to build (almost) all trees: Part II. Theoretical Computer Science, 221:77–118, 1999.
[16] M. R. Lacey and J. T. Chang. A signal-to-noise analysis of phylogeny estimation by neighbor-joining: insufficiency of polynomial length sequences. Mathematical Biosciences, 199(2):188–215, 2006.
[17] P. L. Erdős, L. A. Székely, M. A. Steel, and T. J. Warnow. A few logs suffice to build (almost) all trees (I). Random Structures and Algorithms, 14:153–184, 1999.
[18] E. Mossel. Phase transitions in phylogeny. Transactions of the American Mathematical Society, 356(6):2379–2404, 2004.
[19] C. Daskalakis, E. Mossel, and S. Roch. Evolutionary trees and the Ising model on the Bethe lattice: A proof of Steel's conjecture. Probability Theory and Related Fields, 149(1–2):149–189, 2011.
[20] H. Kesten and B. P. Stigum. Additional limit theorems for indecomposable multidimensional Galton-Watson processes. Annals of Mathematical Statistics, 37:1463–1481, 1966.
[21] M. J. Choi, V. Tan, A. Anandkumar, and A. Willsky. Learning latent tree graphical models. Journal of Machine Learning Research, 12:1771–1812, 2011.
[22] M. S. Bartlett. Further aspects of the theory of multiple regression. Mathematical Proceedings of the Cambridge Philosophical Society, 34:33–40, 1938.
[23] R. J. Muirhead and C. M. Waternaux. Asymptotic distributions in canonical correlation analysis and other multivariate procedures for nonnormal populations. Biometrika, 67(1):31–43, 1980.
[24] E. Mossel and S. Roch. Learning nonsingular phylogenies and hidden Markov models. Annals of Applied Probability, 16(2):583–614, 2006.
[25] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. In Twenty-Second Annual Conference on Learning Theory, 2009.
[26] S. M. Siddiqi, B. Boots, and G. J. Gordon. Reduced-rank hidden Markov models. In Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[27] L. Song, S. M. Siddiqi, G. J. Gordon, and A. J. Smola. Hilbert space embeddings of hidden Markov models. In International Conference on Machine Learning, 2010.
[28] E. S. Allman, C. Matias, and J. A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, 37(6A):3099–3132, 2009.
[29] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[30] D. Hsu, S. M. Kakade, and T. Zhang. Dimension-free tail inequalities for sums of random matrices, 2011. arXiv:1104.1672.
3,544 | 4,209 | Learning Higher-Order Graph Structure with
Features by Structure Penalty
Shilin Ding¹, Grace Wahba¹²³, and Xiaojin Zhu²
Department of {¹Statistics, ²Computer Sciences, ³Biostatistics and Medical Informatics}
University of Wisconsin-Madison, WI 53705
{sding, wahba}@stat.wisc.edu, [email protected]
Abstract
In discrete undirected graphical models, the conditional independence of node
labels Y is specified by the graph structure. We study the case where there is
another input random vector X (e.g. observed features) such that the distribution
P (Y | X) is determined by functions of X that characterize the (higher-order)
interactions among the Y's. The main contribution of this paper is to learn the
graph structure and the functions conditioned on X at the same time. We prove
that discrete undirected graphical models with feature X are equivalent to multivariate discrete models. The reparameterization of the potential functions in
graphical models by conditional log odds ratios of the latter offers advantages
in representation of the conditional independence structure. The functional spaces
can be flexibly determined by kernels. Additionally, we impose a Structure Lasso
(SLasso) penalty on groups of functions to learn the graph structure. These groups
with overlaps are designed to enforce hierarchical function selection. In this way,
we are able to shrink higher order interactions to obtain a sparse graph structure.
1
Introduction
In undirected graphical models (UGMs), a graph is defined as G = (V, E), where V = {1, ? ? ? , K}
is the set of nodes and E ? V ? V is the set of edges between the nodes. The graph structure specifies the conditional independence among nodes. Much prior work has focused on graphical model
structure learning without conditioning on X. For instance, Meinshausen and Bühlmann [1] and Peng et al. [2] studied sparse covariance estimation of Gaussian Markov Random Fields. The covariance matrix fully determines the dependence structure in the Gaussian distribution. But it is not the case for non-elliptical distributions, such as the discrete UGMs. Ravikumar et al. [3] and Höfling and Tibshirani [4] studied variable selection of Ising models based on the ℓ1 penalty. Ising models are
special cases of discrete UGMs with (usually) only pairwise interactions, and without features. We
focused on discrete UGMs with both higher order interactions and features. It is important to note
that the graph structure may change conditioned on different X's, thus our approach may lead to
better estimates and interpretation.
In addressing the problem of structure learning with features, Liu et al. [5] assumed Gaussian distributed Y given X, and they partitioned the space of X into bins. Schmidt et al. [6] proposed a
framework to jointly learn pairwise CRFs and parameters with block-l1 regularization. Bradley and
Guestrin [7] learned tree CRF that recovers a max spanning tree of a complete graph based on heuristic pairwise link scores. These methods utilize only pairwise information to scale to large graphs.
The closest work is Schmidt and Murphy [8], which examined the higher-order graphical structure
* SD wishes to acknowledge the valuable comments from Stephen J. Wright and Sijian Wang. Research of SD and GW is supported in part by NIH Grant EY09946, NSF Grant DMS-0906818 and ONR Grant N00014-09-1-0655. Research of XZ is supported in part by NSF IIS-0953219, IIS-0916038.
learning problem without considering features. They used an active set method to learn higher order
interactions in a greedy manner. Their model is over-parameterized, and the hierarchical assumption
is sufficient but not necessary for conditional independence in the graph.
To the best of our knowledge, no previous work addressed the issue of graph structure learning of
all orders while conditioning on input features. Our contributions include a reparameterization of
UGMs with bivariate outcomes into multivariate Bernoulli (MVB) models. The set of conditional
log odds ratios in MVB models are complete to represent the effects of features on responses and
their interactions at all levels. The sparsity in the set of functions are sufficient and necessary for the
conditional independence in the graph, i.e., two nodes are conditionally independent iff the pairwise
interaction is constant zero; and the higher order interaction among a subset of nodes means none of
the variables is separable from the others in the joint distribution.
To obtain a sparse graph structure, we impose Structure Lasso (SLasso) penalty on groups of functions with overlaps. SLasso can be viewed as group lasso with overlaps. Group lasso [9] leads to
selection of variables in groups. Jacob et al. [10] considered the penalty on groups with arbitrary
overlaps. Zhao et al. [11] set up the general framework for hierarchical variable selection with overlapping groups, which we adopt here for the functions. Our groups are designed to shrink higher
order interactions, similar to the hierarchical inclusion restriction in Schmidt and Murphy [8]. We give
a proximal linearization algorithm that efficiently learns the complete model. Global convergence is
guaranteed [12]. We then propose a greedy search algorithm to scale our method up to large graphs
as the number of parameters grows exponentially.
2 Conditional Independence in Discrete Undirected Graphical Models
In this section, we first discuss the relationship between the multivariate Bernoulli (MVB) model
and the UGM whose nodes are binary, i.e. Yi = 0 or 1. At the end, we will give the representation
of the general discrete UGM where Yi takes value in {0, ? ? ? , m ? 1}. In UGMs, the distribution of
multivariate discrete random variables Y1 , . . . , YK given X is:
$$P(Y_1 = y_1, \ldots, Y_K = y_K \mid X) = \frac{1}{Z(X)} \prod_{C \in \mathcal{C}} \Psi_C(y_C; X) \qquad (1)$$

where Z(X) is the normalization factor. The distribution is factorized according to the cliques in the graph. A clique C ⊆ Ω = {1, ..., K} is a set of nodes that are fully connected. Ψ_C(y_C; X) is the potential function on C, indexed by y_C = (y_i)_{i∈C}. This factorization follows from the Markov property: any two nodes not in a clique are conditionally independent given the others [13]. So C does not have to comply with the graph structure, as long as it is sufficient. For example, the most general choice for any given graph is C = {Ω}. See Theorem 2.1 and Example 2.1 for details.
(a) Graph 1
(b) Graph 2
(c) Graph 3
(d) Graph 4
Figure 1: Graphical model examples.
Given the graph structure, the potential functions characterize the distribution on the graph. But if
the graph is unknown in advance, estimating the potential functions on all possible cliques tends
to be over-parameterized [8]. Furthermore, log Ψ_C(y_C; X) = 0 is sufficient for the conditional independence among the nodes but not necessary (see Example 2.1). To avoid these problems, we introduce the MVB model that is equivalent to (1) with binary nodes, i.e. Y_i = 0 or 1. The MVB distribution is:

$$P(Y_1 = y_1, \ldots, Y_K = y_K \mid X = x) = \exp\Big( \sum_{\omega \in \Omega_K \setminus \{\emptyset\}} y^{\omega} f^{\omega} - b(f) \Big) \qquad (2)$$
$$= \exp\big( y_1 f^1(x) + \cdots + y_K f^K(x) + \cdots + y_1 y_2 f^{1,2}(x) + \cdots + y_1 \cdots y_K f^{1,\ldots,K}(x) - b(f) \big)$$
Here, we use the following notations. Let Ω_K be the power set of Ω = {1, ..., K}; the nonempty sets ω ∈ Ω_K \ {∅} index the 2^K − 1 f^ω's in (2). Let ω denote a set in Ω_K, and define Y = (y^1, ..., y^ω, ..., y^Ω) to be the augmented response with y^ω = ∏_{i∈ω} y_i. Then f = (f^1, ..., f^ω, ..., f^Ω) is the vector of conditional log odds ratios [14]. We assume f^ω is in a Reproducing Kernel Hilbert Space (RKHS) H^ω with kernel K^ω [15]. For example, in our simulation we choose f^ω to be a B-spline (see the supplementary material). We focus on estimating the set of f^ω(x) with feature x, where the sparsity in the set specifies the graph structure.
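As a sanity check on this parameterization, the following hedged sketch enumerates the MVB likelihood (2) by brute force for small K; representing f as a dict from nonempty subsets (frozensets) to values f^ω(x) at a fixed x is an assumption made for illustration, with absent subsets treated as f^ω = 0:

```python
from itertools import chain, combinations
import math

def mvb_probs(f, K):
    """f: dict mapping frozenset omega -> f^omega(x); returns P(Y = y | x) for all y."""
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(range(K), r) for r in range(1, K + 1))]
    scores = {}
    for y in range(2 ** K):
        ones = {i for i in range(K) if (y >> i) & 1}
        # y^omega = prod_{i in omega} y_i equals 1 iff omega is contained in ones
        scores[y] = sum(f.get(w, 0.0) for w in subsets if w <= ones)
    b = math.log(sum(math.exp(s) for s in scores.values()))   # b(f), the log partition
    return {y: math.exp(s - b) for y, s in scores.items()}
```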
We present the following lemma and theorem which show the equivalence between UGM and MVB:
Lemma 2.1. In a MVB model, define the odd-even partition of the power set of ω as: Ω^ω_odd = {κ ⊆ ω : |κ| = |ω| − k, where k is odd}, and Ω^ω_even = {κ ⊆ ω : |κ| = |ω| − k, where k is even}. Note |Ω^ω_odd| = |Ω^ω_even| = 2^{|ω|−1}. The following property holds:

$$f^{\omega} = \log \frac{\prod_{\kappa \in \Omega^{\omega}_{even}} P(Y_i = 1, i \in \kappa;\ Y_j = 0, j \in \omega\setminus\kappa \mid X)}{\prod_{\kappa \in \Omega^{\omega}_{odd}} P(Y_i = 1, i \in \kappa;\ Y_j = 0, j \in \omega\setminus\kappa \mid X)}, \qquad b(f) = \log \frac{Z(x)}{\prod_{C \in \mathcal{C}} \Psi_C(\mathbf{0}; x)} \qquad (3)$$
Theorem 2.1. A UGM of the general form (1) with binary nodes is equivalent to a MVB model of (2). In addition, the following are equivalent: 1) There is no |C|-order interaction in {Y_i, i ∈ C}; 2) There is no clique C ∈ Ω_K in the graph; 3) f^ω = 0 for all ω such that C ⊆ ω.

A proof is given in the Appendix. It states that there is a clique C in the graph iff there is some ω ⊇ C with f^ω ≠ 0 in the MVB model. The advantage of modeling by MVB is that the sparsity in the f^ω's is sufficient and necessary for the conditional independence in the graph, thus fully specifying the graph structure. Specifically, Y_i and Y_j are conditionally independent iff f^ω = 0 for all ω ⊇ {i, j}. This
shows the interaction is non-zero iff all the nodes involved are not conditionally independent.

Example 2.1. When K = 2, Ω = {1, 2}, C = {Ω}; denote Ψ_Ω(Y_1 = 1, Y_2 = 1; X) as Ψ_{11} for simplicity, so P(Y_1 = 1, Y_2 = 1 | X) = (1/Z) Ψ_{11}. Define Ψ_{10}, Ψ_{01}, Ψ_{00} similarly; then the distribution with the UGM parameterization is determined. The relation between UGM and MVB is

$$f^1 = \log \frac{\Psi_{10}}{\Psi_{00}}, \qquad f^2 = \log \frac{\Psi_{01}}{\Psi_{00}}, \qquad f^{1,2} = \log \frac{\Psi_{11} \cdot \Psi_{00}}{\Psi_{01} \cdot \Psi_{10}}.$$

Note that the independence between Y_1 and Y_2 implies f^{1,2} = 0, or Ψ_{11} · Ψ_{00} = Ψ_{01} · Ψ_{10}. Therefore, f^{1,2} being zero in the MVB model is sufficient and necessary for the conditional independence in the
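A small numeric illustration of Example 2.1, with made-up positive potentials (the values are arbitrary, chosen only to show the computation):

```python
import math

# hypothetical potentials psi_{y1 y2}; any positive numbers work
psi = {"11": 2.0, "10": 1.5, "01": 0.5, "00": 1.0}
f1  = math.log(psi["10"] / psi["00"])
f2  = math.log(psi["01"] / psi["00"])
f12 = math.log(psi["11"] * psi["00"] / (psi["01"] * psi["10"]))
# f12 == 0 exactly when psi11 * psi00 == psi01 * psi10, i.e. under independence
```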
The distribution of a general discrete UGM where Y_k ∈ {0, ..., m − 1} can be extended from (2).

Lemma 2.2. Let V = {1, ..., m − 1} and y_ω = (y_i)_{i∈ω}; then

$$P(Y_1 = y_1, \cdots, Y_K = y_K \mid X) = \exp\Big( \sum_{\omega} \sum_{v \in V^{|\omega|}} I(y_{\omega} = v)\, f^{\omega}_{v} - b(f) \Big) \qquad (4)$$

where I is an indicator function and V^n is the tensor product of n V's. Each f^ω is a |V|^{|ω|} vector.
3 Structure Penalty
In many applications, the assumption is that the graph has very few large cliques. Similar to the hierarchical inclusion restriction in Schmidt and Murphy [8], we will include a higher order interaction only when all its subsets are included. Our model is very flexible in that f^ω(x) can be in an arbitrary RKHS.

Let y(i) = (y_1(i), ..., y_K(i)), x(i) = (x_1(i), ..., x_p(i)) be the ith data point. There are |Ω_K| = 2^K − 1 functions in total. We first consider learning the full model when K is small, and later propose a greedy search algorithm to scale to large graphs. The penalized log likelihood model is:

$$\min\ I_\lambda(f) = L(f) + \lambda J(f) = \sum_{i=1}^{n} \big( -\mathbf{Y}(i)^\top f(x(i)) + b(f) \big) + \lambda J(f) \qquad (5)$$
where L(f) is the negative log likelihood and J(·) is the structure penalty. The hierarchical assumption is that if there is no interaction on clique C, then all f^ω should be zero, for ω ⊇ C. The penalty is designed to shrink such f^ω toward zero. We consider the Structure Lasso (SLasso) penalty guided by the lattice in Figure 2. The lattice T has 2^K − 1 nodes: 1, ..., ω, ..., Ω. There is an edge from ω_1 to ω_2 if and only if ω_1 ⊂ ω_2 and |ω_1| + 1 = |ω_2|. Jenatton et al. [16] discussed how to define the groups to achieve different nonzero patterns.
Figure 2: Hierarchical lattice for penalty
Let T_v = {ω ∈ Ω_K : v ⊆ ω} be the subgraph rooted at v in T, including all the descendants of v. Denote f_{T_v} = (f^ω)_{ω∈T_v}. All the functions are categorized into groups with overlaps as (T_1, ..., T_Ω). The SLasso penalty on the group T_v is $J(f_{T_v}) = p_v \sqrt{\sum_{\omega \in T_v} \|f^{\omega}\|^2_{H^{\omega}}}$, where p_v is the weight for the penalty on T_v, empirically chosen as 1/|T_v|. Then, the objective is:

$$\min_f\ I_\lambda(f) = L(f) + \lambda \sum_v p_v \sqrt{\sum_{\omega \in T_v} \|f^{\omega}\|^2_{H^{\omega}}} \qquad (6)$$
The following theorem shows that by minimizing the objective (6), f^{ω_1} will enter the model before f^{ω_2} if ω_1 ⊂ ω_2. That is to say, if f^{ω_1} is zero, there will be no higher order interactions on ω_2. It is an extension of Theorem 1 in Zhao et al. [11], and the proof is given in the Appendix.

Theorem 3.1. Objective (6) is convex, thus the minimum is attainable. Let ω_1, ω_2 ∈ Ω_K and ω_1 ⊂ ω_2. If f̂ is the minimizer of (6) given the observations, that is, 0 ∈ ∂I_λ(f̂), which is the subgradient of I_λ at f̂, then f̂^{ω_2} = 0 almost surely if f̂^{ω_1} = 0.
Example 3.1. If K = 3, f = (f^1, f^2, f^3, f^{1,2}, f^{1,3}, f^{2,3}, f^{1,2,3}). The group at node 1 in Figure 2 is f_{T_1} = (f^1, f^{1,2}, f^{1,3}, f^{1,2,3}) and $J(f_{T_1}) = p_1 \sqrt{\|f^1\|^2 + \|f^{1,2}\|^2 + \|f^{1,3}\|^2 + \|f^{1,2,3}\|^2}$.
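A short sketch of computing the SLasso penalty (6) over the lattice groups; the dict-of-coefficient-vectors representation of f is an assumption made for illustration, while the weight choice p_v = 1/|T_v| follows the text:

```python
from itertools import chain, combinations
import numpy as np

def slasso_penalty(f, K):
    """f: dict mapping frozenset omega -> coefficient vector for f^omega."""
    nodes = [frozenset(s) for s in chain.from_iterable(
        combinations(range(K), r) for r in range(1, K + 1))]
    total = 0.0
    for v in nodes:
        Tv = [w for w in nodes if v <= w]    # v and all its descendants in the lattice
        p_v = 1.0 / len(Tv)                  # empirical weight from the text
        total += p_v * np.sqrt(sum(np.sum(f[w] ** 2) for w in Tv))
    return total
```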
4 Parameter Estimation
In this section, we discuss parameter estimation where the ωth function space is linear, H^ω = {1} ⊕ H^ω_1, for simplicity. {1} refers to the constant function space, and H^ω_1 is a RKHS with a linear kernel. The functions in H^ω have the form $f^{\omega}(x) = c^{\omega}_0 + \sum_{j=1}^{p} c^{\omega}_j x_j$. Its norm is ‖f^ω‖_{H^ω} = ‖c^ω‖, where ‖·‖ stands for the Euclidean ℓ2 norm. Here, we denote c^ω = (c^ω_0, ..., c^ω_p)^⊤ ∈ R^{p+1} as a vector of length p + 1, and c = (c^ω)_{ω∈Ω_K} ∈ R^{p*} is the concatenated vector of all parameters, of length p* = (p + 1) × |Ω_K|. Let c_{T_v} = (c^ω)_{ω∈T_v} be a (p + 1) × |T_v| vector; then the objective (6) is now:

$$\min_c\ I_\lambda(c) = L(c) + \lambda \sum_v p_v \|c_{T_v}\| \qquad (7)$$
4.1 Estimating the complete model on small graphs
Many applications do not involve a large number of responses, so it is desirable to learn the complete model when the graph is small, for consistency reasons. We propose a method to optimize (7) of the
Algorithm 1 Proximal Linearization Algorithm
Input: c_0, μ_0, η > 1, tol > 0
repeat
  Choose μ_k ∈ [μ_min, μ_max]
  Solve Eq (8) for d_k = c − c_k
  while Δ_k = I_λ(c_k) − I_λ(c_k + d_k) < ‖d_k‖³ do   // Insufficient decrease
    Set μ_k = max(μ_min, η μ_k)
    Solve Eq (8) for d_k
  end while
  Set μ_{k+1} = μ_k / η
  Set c_{k+1} = c_k + d_k
until Δ_k < tol
complete model with all interaction levels by iteratively solving the following proximal linearization problem, as discussed in Wright [12]:

$$\min_c\ L_k + \nabla L_k^{\top} (c - c_k) + \frac{\mu_k}{2} \|c - c_k\|^2 + \lambda J(c) \qquad (8)$$

where L_k = L(c_k), and μ_k is a positive scalar chosen adaptively at the kth step. With slight abuse of notation, we denote c_k as the value of c at the kth step. Algorithm 1 summarizes the framework for solving (7). Following the analysis in Wright [12], we can ensure that the proximal linearization algorithm will converge for the negative log-likelihood loss function with the SLasso penalty.
However, solving group lasso with overlaps is not trivial due to the non-smoothness at the singular point. In recent years, several papers have addressed this problem. Jacob et al. [10] duplicated the design matrix columns that appear in group overlaps, then solved the problem as group lasso without overlaps. Kim and Xing [17] reparameterized the group norm with additional dummy variables; they alternately optimized the model parameters and the dummy ones at each step. This is efficient for the quadratic loss function on Gaussian data, but might not scale well in our case. Instead, we solve (8) by its smooth and convex dual problem [18]. The details are in the supplementary material.
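A hedged sketch of the outer loop of Algorithm 1 follows; prox stands in for a solver of Eq (8) (in the paper this subproblem is solved via its smooth dual, which is not reproduced here), and all parameter names are assumptions:

```python
import numpy as np

def proximal_linearization(c0, objective, grad_loss, prox,
                           mu=1.0, eta=2.0, mu_min=1e-4,
                           tol=1e-6, max_iter=200):
    """prox(c, g, mu) should return the minimizer of Eq (8) linearized around c."""
    c = c0
    for _ in range(max_iter):
        g = grad_loss(c)
        d = prox(c, g, mu) - c
        # damp mu until the sufficient-decrease test of Algorithm 1 holds
        while objective(c) - objective(c + d) < np.linalg.norm(d) ** 3:
            mu = max(mu_min, eta * mu)
            d = prox(c, g, mu) - c
        c = c + d
        mu = mu / eta
        if np.linalg.norm(d) < tol:
            break
    return c
```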
4.2 Estimating large graphs
The above algorithm is efficient on small graphs (K < 20); it usually terminates within 20 iterations in our experiments. However, the issue with estimating a complete model is the exponential number of f^ω's and the same number of groups involved in objective (7). It is intractable when the graph becomes large. The hierarchical assumption and the SLasso penalty lend themselves naturally to a greedy search algorithm:
1. Start from the set of main effects, A_0 = {f^1, ..., f^K}.
2. In step i, remove the nodes that are not in A_i from the lattice in Figure 2. Obtain a sparse estimate of the functions in A_i by Algorithm 1. Denote the resulting sparse set Â_i.
3. Let A_{i+1} = Â_i. Keep adding a higher order interaction into A_{i+1} if all its subsets of interactions are included in Â_i, and also add this node into the lattice in Figure 2.
Iterate steps 2 and 3 until convergence. The algorithm is similar to the active set method in Schmidt and Murphy [8]. It runs Algorithm 1 multiple times to enforce the hierarchical assumption. It is not guaranteed to converge to the global optimum. Nonetheless, our empirical experiments show its ability to scale to large graphs.
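A minimal sketch of this greedy active-set loop; fit_sparse is an assumed interface standing in for one run of the SLasso solver restricted to the current active set:

```python
from itertools import combinations

def greedy_slasso(K, fit_sparse, max_rounds=20):
    A = {frozenset([i]) for i in range(K)}      # step 1: main effects
    for _ in range(max_rounds):
        selected = fit_sparse(A)                # step 2: sparse fit within A
        grown = set(selected)
        # step 3: admit a higher-order node once all its immediate subsets survived
        for w1, w2 in combinations(selected, 2):
            cand = w1 | w2
            if all(frozenset(s) in selected
                   for s in combinations(cand, len(cand) - 1)):
                grown.add(cand)
        if grown == A:                          # active set stopped changing
            return selected
        A = grown
    return fit_sparse(A)
```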
5 Experiments

5.1 Toy Data
In the simulation, we create 6 toy graphs. The first four graphs are depicted in Figure 1. Graph 5
has 100 nodes where the first 8 nodes have the same structure as in Figure 1(c) and the others are
independent. Graph 6 also has 100 nodes where the first 10 nodes have the same connection as in
Figure 1(d) and the others are independent. We generate 100 datasets for each structure to evaluate the performance. The sample size of each dataset is 1000. Here is how the first data set is generated:
The length of the feature vector, p, is set to 5 in our experiment, i.e. X = (X_1, ..., X_5). Each

$$f^{\omega}(x) = c^{\omega}_0 + \sum_{j=1}^{5} g^{\omega}_j(x_j), \quad \text{where } g^{\omega}_j(x_j) = \sum_{k=1}^{D} c^{\omega}_{jk} B_k(x_j)$$

is spanned by the B-spline basis functions {B_k(·)}_{k=1,...,D} (see the supplementary material), where D is chosen to be 5. The true set of the model parameters, c^ω_{jk}, is uniformly sampled from {−5, −4, ..., 5}. We set the intercepts c^ω_0 in main effects to 1, and those in second or higher order interactions to 2. The features, X_j, are i.i.d. uniform on [−1, 1]. Then, Y is sampled according to the probability in equation (2).
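A sketch of this generator follows; a polynomial basis stands in for the B-spline basis B_k so the snippet stays self-contained (an assumption; the paper uses B-splines), and Y would then be drawn from the MVB probabilities as in the earlier mvb_probs sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
p, D, K = 5, 5, 3                                # K kept small for the sketch
basis = lambda x: np.array([x ** k for k in range(1, D + 1)])  # stand-in for B_k

def sample_f(intercept):
    C = rng.integers(-5, 6, size=(p, D))         # c_jk uniform on {-5, ..., 5}
    return lambda x: intercept + sum(C[j] @ basis(x[j]) for j in range(p))

# intercept 1 for main effects, 2 for higher-order interactions, as in the text
fs = {frozenset([i]): sample_f(1.0) for i in range(K)}
fs[frozenset([0, 1])] = sample_f(2.0)            # one example pairwise interaction

x = rng.uniform(-1.0, 1.0, size=p)               # features i.i.d. uniform on [-1, 1]
f_at_x = {w: g(x) for w, g in fs.items()}        # feed into mvb_probs(f_at_x, K)
```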
We use GACV (generalized approximate cross validation) and BGACV (B-type GACV) [19] to
choose the regularization parameter λ for the complete model (graphs 1-4). We call these variants of SLasso Complete-GACV and Complete-BGACV. We use AIC for greedy search (Greedy-AIC) in graphs 5 and 6 due to computational considerations. The range of λ is chosen according to Koh
et al. [20]. The details of the tuning methods are discussed in the supplementary material. The R
package, BMN, is used as a baseline [4].
Table 1: Number of true positive and false positive functions

| Graph | Method | f^{1,2} | f^{1,3} | f^{2,3} | f^{3,4} | f^{1,2,3} | f^{5,7,8} | f^{5,6,7,8} | FP |
|---|---|---|---|---|---|---|---|---|---|
| 1 | BMN | 60 | 76 | 70 | 60 | 0 | - | - | 162 |
| 1 | Complete-GACV | 100 | 100 | 100 | 94 | 84 | - | - | 136 |
| 1 | Complete-BGACV | 86 | 83 | 83 | 72 | 14 | - | - | 11 |
| 2 | BMN | 44 | 50 | 38 | 58 | 0 | - | - | 412 |
| 2 | Complete-GACV | 100 | 99 | 100 | 99 | 83 | - | - | 341 |
| 2 | Complete-BGACV | 88 | 91 | 88 | 78 | 33 | - | - | 64 |
| 3 | BMN | 72 | 64 | 60 | 60 | 0 | 0 | 0 | 830 |
| 3 | Complete-GACV | 91 | 87 | 81 | 92 | 62 | 71 | 33 | 412 |
| 3 | Complete-BGACV | 36 | 22 | 23 | 93 | 0 | 39 | 0 | 162 |
| 4 | BMN | 48 | 34 | 37 | 29 | 0 | 0 | - | 774 |
| 4 | Complete-GACV | 92 | 98 | 94 | 90 | 54 | 45 | - | 693 |
| 4 | Complete-BGACV | 68 | 68 | 71 | 62 | 0 | 0 | - | 144 |
| 5 | BMN | 38 | 28 | 26 | 22 | 0 | 0 | 0 | 9476 |
| 5 | Greedy-AIC | 99 | 99 | 98 | 97 | 22 | 21 | 0 | 1997 |
| 6 | BMN | 28 | 26 | 14 | 26 | 0 | 0 | - | 9672 |
| 6 | Greedy-AIC | 100 | 100 | 100 | 99 | 24 | 15 | - | 3458 |
In Table 1, we count, for each function f^ω, the number of runs out of 100 where f^ω is recovered (‖ĉ^ω‖ ≠ 0). If a recovered function is in the true model, it is considered a true positive, otherwise a
false positive. The main effects are always detected correctly, thus are not listed in the table. SLasso
is more effective compared to BMN which only considers pairwise interactions.
In Figure 3, we show the learning results in terms of true positive rate (TPR) as sample size increases
from 100 to 1000. The experimental setting is the same as before. The TPRs improve with increasing sample size. GACV achieves better TPR, but higher FPR compared to BGACV. Our method
outperforms BMN in all six graphs.
5.2 Case Study: Census Bureau County Data
We use the county data from U.S. Census Bureau1 to validate our method. We remove the counties
that have missing values and obtain 2668 entries in total. The outcomes of this study are summarized
in Table 2. "Vote" [21] is coded as 1 if the Republican candidate won in the 2004 presidential
election. To dichotomize the remaining outcomes, the national mean is selected as a threshold. The
data is standardized to mean 0 and variance 1. The following features are included: Housing unit
change in percent from 2000-2006, percent of ethnic groups, percent foreign born, percent people
over 65, percent people under 18, percent people with a high school education, percent people
with a bachelors degree; birth rate, death rate, per capita government expenditure in dollars. By
adjusting λ, we observe new interactions enter the model. The graph structure for λ = 0.1559 is

¹ http://www.census.gov/statab/www/ccdb.html
[Figure 3: six panels, (a) Graph 1 (5%), (b) Graph 2 (5%), (c) Graph 3 (1%), (d) Graph 4 (0.5%), (e) Graph 5 (< 10⁻²⁰), (f) Graph 6 (< 10⁻²⁰), each plotting True Positive Rate (0.4–1.0) against sample size (200–1000) for GACV, BGACV, BMN, and Greedy-AIC.]
Figure 3: The True Positive Rate (TPR) of graph structure learning methods with increasing sample size. The percentage in the bracket is the upper bound of False Positive Rate (FPR) in each
experiment. BMN always has larger FPR compared to SLasso.
Table 2: Selected response variables

| Response | Description | Positive% |
|---|---|---|
| Vote | 2004 votes for Republican presidential candidate | 81.11 |
| Poverty | Poverty Rate | 52.70 |
| VCrime | Violent Crime Rate, e.g. murder, robbery | 23.09 |
| PCrime | Property Crime Rate, e.g. burglary | 6.82 |
| URate | Unemployment Rate | 51.35 |
| PChange | Population change in percent from 2000 to 2006 | 64.96 |
shown in Figure 4(a). The results of BMN (the tuning parameter is 0.015) are shown in Figure 4(b). The
unemployment rate plays an important role as a hub as discovered by SLasso, but not by BMN.
(a) SLasso-Complete
(b) BMN
Figure 4: Interactions of response variables in the Census Bureau data. The first number on the edge
is the order at which the link is recovered. The number in brackets is the function norm on the clique and the absolute value of the elements in the concentration matrix, respectively. We note that SLasso discovers, at the 7th step, two third-order interactions, which are displayed by two circles in (a).
We analyze the link between "Vote" and "PChange". Though the marginal correlation between them (without X) is only 0.0389, which is the second lowest absolute pairwise correlation, the link is the first recovered by SLasso. It has been suggested that there is indeed a connection². This shows that after taking features into account, the dependence structure of response variables may change, and hidden relations could be discovered. The main factors in this case are "percentage of housing unit change" (X_1) and "population percentage of people over 65" (X_2). The part of the fitted model shown below suggests that as housing units increase, the counties are more likely to have both positive results for "Vote" and "PChange". But this tendency will be counteracted by the increase of people over 65: the responses are less likely to take both positive values.
$$\hat f_{\text{Vote}} = 0.2913 \cdot X_1 + 0.3475 \cdot X_2 + \cdots$$
$$\hat f_{\text{PChange}} = 1.4726 \cdot X_1 - 0.3709 \cdot X_2 + \cdots$$
$$\hat f_{\text{Vote,PChange}} = 0.1358 \cdot X_1 - 0.0458 \cdot X_2 + \cdots$$
6 Conclusions
Our SLasso method can learn the graph structure that is specified by the conditional log odds ratios conditioned on input features X, which allows the graphical model to depend on features. The model interprets well, since f^ω = 0 iff there is no corresponding clique. An efficient algorithm is given to estimate the complete model. A greedy approach is applied when the graph is large. SLasso can be extended to model a general discrete UGM, where Y_k takes values in {0, ..., m − 1}. Also, there exists a rich selection of function forms, which makes the model more flexible and powerful, though modification is needed in solving the proximal subproblem for non-parametric families.
A Proof

A.1 Proof of Theorem 2.1

Proof. Given UGM (1), the corresponding parameterization in the MVB model is shown in (3) of Lemma 2.1. Conversely, given the MVB model of (2), the cliques can be determined by the nonzero f^ω: clique C exists if C = ω and f^ω ≠ 0. Then the maximal cliques can be inferred from the graph structure. Suppose they are C_1, ..., C_m. Let ω_i = C_i, for i = 1, ..., m, and κ_1 = ∅, κ_i = C_i ∩ (C_{i−1} ∪ ··· ∪ C_1), i = 2, ..., m. Then the parameterization is:

$$\Psi_{C_i}(y_{C_i}; x) = \exp\big( S^{\omega_i}(y; x) - S^{\kappa_i}(y; x) \big) \quad \text{and} \quad Z(x) = \exp(b(f)) \qquad (9)$$

where $S^{\omega}(y; x) = \sum_{\kappa \subseteq \omega} y^{\kappa} f^{\kappa}(x)$. Thus, UGM (1) with bivariate nodes is equivalent to MVB (2).

In the latter part of the theorem, 1 ⇒ 2 and 3 ⇒ 1 follow naturally from the Markov property of graphical models. To show 2 ⇒ 3, let y^{κ*}_C be a realization of y_C such that y^{κ*}_C = (y^*_i)_{i∈C}, where y^*_i = 1 if i ∈ κ and y^*_i = 0 otherwise. Notice that whenever κ̃ ∩ C = κ ∩ C, we have y^{κ̃*}_C = y^{κ*}_C. For any possible v = κ ∩ C, every κ̃ ∈ {κ̃ : κ̃ = v ∪ u, s.t. u ⊆ ω \ v} will satisfy the condition κ̃ ∩ C = v. There are 2^{|ω\v|} such κ̃ in total, due to the choice of u. Also, they appear in the numerator and denominator of equation (3) equally. So, for any C ∈ C,

$$\prod_{\kappa \in \Omega^{\omega}_{odd}} \Psi_C(y^{\kappa*}_C; x) = \prod_{\kappa \in \Omega^{\omega}_{even}} \Psi_C(y^{\kappa*}_C; x) \qquad (10)$$

It follows that f^ω = 0 by (3).
A.2 Proof of Theorem 3.1

Proof. We give the proof for the linear case. The convexity of I_λ is easy to check, since L and the J(f_{T_v}) are all convex in c. Suppose there is some ω_2 ⊃ ω_1 s.t. ĉ^{ω_2} ≠ 0 and ĉ^{ω_1} = 0; by the groups constructed through Figure 2, ‖ĉ_{T_v}‖ = ‖(ĉ^κ)_{v⊆κ}‖ ≠ 0 for all v ⊆ ω_1. So the partial derivative of the objective (7) with respect to c^{ω_1} at ĉ^{ω_1} is

$$\left.\frac{\partial L}{\partial c^{\omega_1}}\right|_{c^{\omega_1} = \hat c^{\omega_1}} + \lambda \sum_{v \subseteq \omega_1} p_v \frac{\hat c^{\omega_1}}{\|\hat c_{T_v}\|} = 0 \qquad (11)$$

Thus, the probability of {ĉ^{ω_2} ≠ 0} equals the probability of $\big\{ \left.\frac{\partial L}{\partial c^{\omega_1}}\right|_{c^{\omega_1} = \hat c^{\omega_1}} = 0 \big\}$, which is 0.

² http://www.ipsos-mori.com/researchpublications/researcharchive/2545/Analysis-Population-change-turnout-the-election.aspx
References
[1] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3):1436–1462, 2006.
[2] J. Peng, P. Wang, N. Zhou, and J. Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009.
[3] P. Ravikumar, M.J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010.
[4] H. Höfling and R. Tibshirani. Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. The Journal of Machine Learning Research, 10:883–906, 2009.
[5] Han Liu, Xi Chen, John Lafferty, and Larry Wasserman. Graph-valued regression. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1423–1431. 2010.
[6] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion abnormality detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2008.
[7] J.K. Bradley and C. Guestrin. Learning tree conditional random fields. In Proceedings of the 27th International Conference on Machine Learning, pages 127–134, 2010.
[8] M. Schmidt and K. Murphy. Convex structure learning in log-linear models: Beyond pairwise potentials. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[9] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
[10] L. Jacob, G. Obozinski, and J.P. Vert. Group Lasso with overlap and graph Lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440, 2009.
[11] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468–3497, 2009.
[12] S.J. Wright. Accelerated block-coordinate relaxation for regularized optimization. Technical report, Department of Computer Science, University of Wisconsin-Madison, 2010.
[13] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–305, 2008.
[14] F. Gao, G. Wahba, R. Klein, and B. Klein. Smoothing Spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data. Journal of the American Statistical Association, 96(453):127, 2001.
[15] G. Wahba. Spline Models for Observational Data. Society for Industrial Mathematics, 1990.
[16] R. Jenatton, J.Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. arXiv:0904.3523, 2009.
[17] S. Kim and E.P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In Proceedings of the 27th International Conference on Machine Learning, pages 543–550, Haifa, Israel, 2010.
[18] J. Liu and J. Ye. Fast overlapping group lasso. arXiv:1009.0306v1, 2010.
[19] Xiwen Ma. Penalized Regression in Reproducing Kernel Hilbert Spaces With Randomized Covariate Data. PhD thesis, Department of Statistics, University of Wisconsin-Madison, 2010.
[20] K. Koh, S.J. Kim, and S. Boyd. An interior-point method for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 8(8):1519–1555, 2007.
[21] R.M. Scammon, A.V. McGillivray, and R. Cook. America Votes 26: 2003-2004, Election Returns By State. CQ Press, 2005.
3,545 | 421 | Analog Computation at a Critical Point: A Novel
Function for Neuronal Oscillations?
Leonid Kruglyak and William Bialek
Department of Physics
University of California at Berkeley
Berkeley, California 94720
and NEC Research Institute*
4 Independence Way
Princeton, New Jersey 08540
*Current address.
Abstract
We show that a simple spin system biased at its critical point can encode spatial characteristics of external signals, such as the dimensions of
"objects" in the visual field, in the temporal correlation functions of individual spins. Qualitative arguments suggest that regularly firing neurons
should be described by a planar spin of unit length, and such XY models
exhibit critical dynamics over a broad range of parameters. We show how
to extract these spins from spike trains and then measure the interaction
Hamiltonian using simulations of small clusters of cells. Static correlations among spike trains obtained from simulations of large arrays of cells
are in agreement with the predictions from these Hamiltonians, and dynamic correlations display the predicted encoding of spatial information.
We suggest that this novel representation of object dimensions in temporal
correlations may be relevant to recent experiments on oscillatory neural
firing in the visual cortex.
1
INTRODUCTION
Physical systems at a critical point exhibit long-range correlations even though
the interactions among the constituent particles are of short range. Through the
fluctuation-dissipation theorem this implies that the dynamics at one point in the
system are sensitive to external perturbations which are applied very far away. If
we build an analog computer poised precisely at such a critical point it should be
possible to evaluate highly non-local functionals of the input signals using a locally
interconnected architecture. Such a scheme would be very useful for visual computations, especially those which require comparisons of widely separated regions of
the image. From a biological point of view long-range correlations at a critical point
might provide a robust scenario for "responses from beyond the classical receptive
field" [1].
In this paper we present an explicit model for analog computation at a critical
point and show that this model has a remarkable consequence: because of dynamic
scaling, spatial properties of input signals are mapped into temporal correlations
of the local dynamics. One can, for example, measure the size and topology of
"objects" in a scene using only the temporal correlations in the output of a single
computational unit (neuron) located within the object. We then show that our
abstract model can be realized in networks of semi-realistic spiking neurons. The
key to this construction is that neurons biased in a regime of regular or oscillatory
firing can be mapped to XY or planar spins [2,3], and two-dimensional arrays of
these spins exhibit a broad range of parameters in which the system is generically
at a critical point. Non-oscillatory neurons cannot, in general, be forced to operate
at a critical point without delicate fine tuning of the dynamics, fine tuning which
is implausible both for biology and for man-made analog circuits. We suggest that
these arguments may be relevant to the recent observations of oscillatory firing in
the visual cortex [4,5,6].
2
A STATISTICAL MECHANICS MODEL
We consider a simple two-dimensional array of spins whose states are defined by unit
two-vectors S_n. These spins interact with their neighbors so that the total energy of
the system is H = -J Σ S_n·S_m, with the sum restricted to nearest neighbor pairs.
This is the XY model, which is interesting in part because it possesses not a critical
point but a critical line [7]. At a given temperature, for all J > J_c one finds
that correlations among spins decay algebraically, ⟨S_n·S_m⟩ ∝ 1/|r_n - r_m|^η, so that
there is no characteristic scale or correlation length; more precisely the correlation
length is infinite. In contrast, for J < J_c we have ⟨S_n·S_m⟩ ∝ exp[-|r_n - r_m|/ξ],
which defines a finite correlation length ξ.
In the algebraic phase the dynamics of the spins on long length scales are rigorously
described by the spin wave approximation, in which one assumes that fluctuations
in the angle between neighboring spins are small. In this regime it makes sense to
use a continuum approximation rather than a lattice, and the energy of the system
becomes H = (J/2) ∫ d²x |∇φ(x)|², where φ(x) is the orientation of the spin at position
x. The dynamics of the system are determined by the Langevin equation
∂φ(x,t)/∂t = J ∇²φ(x,t) + η(x,t),                                   (1)
where η is a Gaussian thermal noise source with
⟨η(x,t) η(x',t')⟩ = 2k_B T δ(x - x') δ(t - t').                     (2)
We can then show that the time correlation function of the spin at a single site x
is given by
⟨S(x,t)·S(x,0)⟩ = exp[ -2k_B T ∫ d²k/(2π)² (1 - e^{-Jk²|t|}) / (2Jk²) ].          (3)
In fact Eq. 3 is valid only for an infinite array of spins. Imagine that external signals
to this array of spins can "activate" and "deactivate" the spins so that one must
really solve Eq. 1 on finite regions or clusters of active spins. Then we can write
the analog of Eq. 3 as
⟨S(x,t)·S(x,0)⟩ = exp[ -k_B T Σ_n (1/λ_n) |ψ_n(x)|² (1 - e^{-Jλ_n|t|}) ],          (4)
where ψ_n and λ_n are the eigenfunctions and associated eigenvalues of (-∇²) on
the region of active spins. The key point here is that the spin auto-correlation
function in time determines the spectrum of the Laplacian on the region of activity.
But from the classic work of Kac [8] we know that this spectrum gives a great
deal of information about the size and shape of the active region - we can in
general determine the area, the length of the perimeter, and the topology (number
of holes) from the set of eigenvalues {λ_n}, and this is true regardless of the absolute
dimensions of the region. Thus by operating at a critical point we can achieve
a scale-independent encoding of object dimension and topology in the temporal
correlations of a locally connected system.
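For concreteness, the following minimal Python sketch evaluates Eq. 4 by diagonalizing the discrete Laplacian on a cluster of active sites. It is an illustration under arbitrary assumptions (the region shapes, J, k_B T, and the free-boundary Laplacian are our own choices, and the near-zero uniform mode is simply dropped); the function names are ours.

import numpy as np

def laplacian_spectrum(mask):
    # Graph Laplacian of the active region (free boundaries): only bonds
    # between pairs of active sites contribute.
    sites = [tuple(s) for s in np.argwhere(mask)]
    index = {s: i for i, s in enumerate(sites)}
    L = np.zeros((len(sites), len(sites)))
    for (x, y), i in index.items():
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in index:
                L[i, index[nb]] -= 1.0
                L[i, i] += 1.0
    lam, psi = np.linalg.eigh(L)        # eigenvalues lam_n, eigenvectors psi_n
    return sites, lam, psi

def spin_autocorrelation(mask, site, times, J=1.0, kT=0.2):
    # Eq. 4: <S(x,t).S(x,0)> at one site, from the Laplacian spectrum.
    sites, lam, psi = laplacian_spectrum(mask)
    i = sites.index(site)
    keep = lam > 1e-10                  # drop the uniform (zero) mode
    w = psi[i, keep] ** 2 / lam[keep]
    return np.array([np.exp(-kT * np.sum(w * (1 - np.exp(-J * lam[keep] * t))))
                     for t in times])

# Two "objects" of different size give different single-site correlations.
small = np.zeros((12, 12), bool); small[3:7, 3:7] = True
large = np.zeros((12, 12), bool); large[1:11, 1:11] = True
times = np.linspace(0.0, 20.0, 5)
print(spin_autocorrelation(small, (4, 4), times))
print(spin_autocorrelation(large, (5, 5), times))

The two printed curves relax differently because the two regions have different Laplacian spectra, which is exactly the size information discussed above.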
3
MAPPING REAL NEURONS ONTO THE
STATISTICAL MODEL
All current models of neural networks are based on the hope that most microscopic
("biological") details are unimportant for the macroscopic, collective computational
behavior of the system as a whole. Here we provide a rigorous connection between a
more realistic neural model and a simplified model with spin variables and effective
interactions, essentially the XY model discussed above. A more detailed account is
given in [2,3].
We use the Fitzhugh-Nagumo (FN) model [9,10] to describe the electrical dynamics
of an individual neuron. This model demonstrates a threshold for firing action
potentials, a refractory period, and single-shot as well as repetitive firing - in
short, all the qualitative properties of neural firing. It is also known to provide a
reasonable quantitative description of several cell types. To be realistic it is essential
to add a noise current δI_n(t) which we take to be Gaussian, spectrally white, and
independent in each cell n.
We connect each neuron to its neighbors in regular one- and two-dimensional arrays.
More general local connections are easily added and do not significantly change the
results presented below. We model a synapse between two neurons by exponentiating the voltage from one and injecting it as current into the other. Our choice is
motivated by the fact that the number of transmitter vesicles released at a synapse
is exponential in the presynaptic voltage [11]; other synaptic transfer characteristics, including small delays, give results qualitatively similar to those described
here. The resulting equations of motion are
dV_n/dt = (1/τ_1) [ I_0 + δI_n(t) - V_n(V_n² - 1) - W_n + Σ_m J_nm exp{V_m(t)/V_0} ],          (5)
where V_n is the transmembrane voltage in cell n, I_0 is the DC bias current, and the
W_n are auxiliary variables; V_0 sets the scale of voltage sensitivity in the synapse.
Voltages and currents are dimensionless, and the parameters of the system are
expressed in terms of the time constants τ_1 and τ_2 and a dimensionless ratio α.
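A minimal Euler-Maruyama sketch of integrating Eq. 5 follows; the parameter values, the spike threshold, and the assumed recovery equation dW_n/dt = (V_n - αW_n)/τ_2 are our own placeholders (not taken from the original), and they may need tuning to reach the regularly firing regime.

import numpy as np

def simulate_fn_chain(n=8, steps=20000, dt=0.01, tau1=0.1, tau2=1.0,
                      alpha=0.5, I0=0.5, J=0.05, V0=1.0, noise=0.05, seed=0):
    # Euler-Maruyama integration of Eq. 5 for a chain of FN neurons
    # coupled by exponential synapses between nearest neighbors.
    rng = np.random.default_rng(seed)
    V = rng.normal(0.0, 0.1, n)
    W = np.zeros(n)
    spikes = [[] for _ in range(n)]
    above = np.zeros(n, bool)
    for k in range(steps):
        syn = np.zeros(n)
        syn[1:] += J * np.exp(V[:-1] / V0)    # input from the left neighbor
        syn[:-1] += J * np.exp(V[1:] / V0)    # input from the right neighbor
        dI = noise * rng.normal(0.0, 1.0, n) / np.sqrt(dt)
        dV = (I0 + dI - V * (V**2 - 1.0) - W + syn) / tau1
        dW = (V - alpha * W) / tau2           # assumed recovery dynamics
        V = V + dt * dV
        W = W + dt * dW
        crossed = (V > 1.0) & ~above          # crude upward-crossing detector
        for i in np.flatnonzero(crossed):
            spikes[i].append(k * dt)
        above = V > 1.0
    return spikes

print([len(s) for s in simulate_fn_chain()])  # spike counts per neuron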
From the voltage traces we extract the spike arrival times in the nth neuron, {t_i}.
With the appropriate choice of parameters the FN model can be made to fire
regularly: the interspike intervals are tightly clustered around a mean value. The
power spectrum of the spike train s(t) = Σ_i δ(t - t_i) has well resolved peaks at ±ω_0,
±2ω_0, .... We then low-pass filter s(t) to keep only the ±ω_0 peaks, obtaining a
phase-modulated cosine,
[Fs](t) ∝ cos[ω_0 t + φ(t)],                                        (6)
where [Fs](t) denotes the filtered spike train. By looking at [Fs](t) and its time
derivative, we can extract the phase φ(t) which describes the oscillation that underlies regular firing. Since the orientation of a planar spin is also described by a
single phase variable, we can reduce the spike train to a time-dependent planar spin
S(t). We now want to see how these spins interact when we connect two cells via
synapses.
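A sketch of this reduction from spike trains to phases is given below; we substitute the analytic-signal construction for the explicit band-pass filter of Eq. 6 (an assumed, equivalent route), and the smoothing width and sampling step are arbitrary.

import numpy as np
from scipy.signal import hilbert

def spikes_to_phase(spike_times, dt=0.001, width=0.01):
    # Smooth the spike train, then read the instantaneous phase of its
    # fundamental component off the analytic signal.
    t = np.arange(0.0, max(spike_times) + 5 * width, dt)
    s = np.zeros_like(t)
    for tk in spike_times:                    # smoothed delta train
        s += np.exp(-0.5 * ((t - tk) / width) ** 2)
    s -= s.mean()                             # remove the DC component
    phi = np.unwrap(np.angle(hilbert(s)))     # phase of the fundamental
    return t, phi

# A regular 20 Hz spike train yields a nearly linear phase phi(t) ~ w0 t.
t, phi = spikes_to_phase(np.arange(0.05, 1.0, 0.05))
print(np.polyfit(t[200:-200], phi[200:-200], 1)[0])  # ~ 2*pi*20 rad/s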
We characterize the two-neuron interaction by accumulating a histogram of the
phase differences between two connected neurons. This probability distribution
defines an effective Hamiltonian, P(φ_1, φ_2) ∝ exp[-H(φ_1 - φ_2)]. With excitatory
synapses (J > 0) the interaction is ferromagnetic, as expected (see Fig. 1). The
Hamiltonian takes other interesting forms for inhibitory, delayed, and nonreciprocal
synapses. By simulating small clusters of cells we find that interactions other than
nearest neighbor are negligible. This leads us to predict that the entire network is
described by the effective Hamiltonian H = Σ_ij H_ij(φ_i - φ_j), where H_ij(φ_i - φ_j)
is the effective Hamiltonian measured for the pair of connected cells i, j.
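Reading off the effective Hamiltonian from phase samples takes only a few lines (a sketch; the temperature kT merely sets the units here, and the test data below are synthetic, drawn from an assumed cosine Hamiltonian).

import numpy as np

def effective_hamiltonian(phi1, phi2, bins=36, kT=1.0):
    # P(dphi) ~ exp(-H(dphi)/kT)  =>  H = -kT log P, up to a constant.
    dphi = np.mod(phi1 - phi2 + np.pi, 2 * np.pi) - np.pi
    p, edges = np.histogram(dphi, bins=bins, range=(-np.pi, np.pi),
                            density=True)
    H = -kT * np.log(np.maximum(p, 1e-12))
    return 0.5 * (edges[:-1] + edges[1:]), H - H.min()

# Synthetic check: phase differences drawn from a ferromagnetic
# Hamiltonian H(dphi) = 2(1 - cos dphi), via rejection sampling.
rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, 200000)
keep = rng.uniform(0.0, 1.0, x.size) < np.exp(2.0 * (np.cos(x) - 1.0))
centers, H = effective_hamiltonian(x[keep], np.zeros(keep.sum()))
print(np.round(H[::6], 2))   # should track 2*(1 - cos dphi)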
One crucial consequence of Eq. 6 is that correlations of the filtered spike trains
are exactly proportional to the spin-spin correlations which are natural objects in
statistical mechanics. Specifically, if we have two cells n and m,
⟨[Fs_n](t) [Fs_m](t)⟩ ∝ ⟨S_n(t)·S_m(t)⟩.                            (7)
This relation shows us how the statistical description of the network can be tested
in experiments which monitor actual neural spike trains.
4
DOES THE MAPPING WORK?
When planar spins are connected in a one-dimensional chain with nearest-neighbor
interactions, correlations between spins drop off exponentially with distance. To test
this prediction we have run simulations on chains of 32 Fitzhugh-Nagumo neurons
connected to their nearest neighbors. Correlations computed directly from the
filtered spike trains as indicated above indeed decay exponentially, as seen in the
inset to Fig. 1. Fig. 1 shows that the predictions for the correlation length from
the simple model are in excellent agreement with the correlation lengths observed
in the simulations of spiking neurons; there are no free parameters.
Figure 1: Correlation length obtained from fits to the simulation data vs. correlation
length predicted from the Hamiltonians. Inset, upper left: Correlation function vs.
distance from simulations, with exponential fit. Inset, lower right: Corresponding
Hamiltonian as a function of phase difference.
In the two-dimensional case we connect each neuron to its four nearest neighbors
on a square lattice. The corresponding spin model is essentially the XY model.
Hence we expect a low-temperature (high synaptic strength) phase with correlations that decay slowly (as a small power of distance) and a high-temperature (low
synaptic strength) disordered phase with exponential decay. These predictions were
confirmed by large-scale simulations of two-dimensional arrays [2].
5
OBJECT DIMENSIONS FROM TEMPORAL
CORRELATIONS
We believe that we have presented convincing evidence for the description of regularly firing neurons in terms of XY spins, at least as regards their static or equilibrium correlations. In our theoretical discussion we showed that the temporal correlation functions of XY spins in the algebraic phase contained information about the
Figure 2: Auto-correlation functions for the spike trains of single cells at the center
of square arrays of different sizes.
dimensions of "objects." Here we test this idea in a very simple numerical experiment. Imagine that we have an array of N x N connected cells which are excited
by incoming signals so that they are in the oscillatory regime. Obviously we can
measure the size of this "object" by looking at the entire network, but our theoretical results suggest that one can sense these dimensions (N) using the temporal
correlations in just one cell, most simply the cell in the center of the array.
In Fig. 2 we show the auto-correlation functions for the spike trains of the center
cell in arrays of different dimensions. It is clear that changing the dimensions
of the array of active cells has profound effects on these spatially local temporal
correlations. Because of the fact that the model is on a critical line these correlations
continue to change as the dimensions of the array increase, rather than saturating
after some finite correlation length is reached. Qualitatively similar results are
expected throughout the algebraic phase of the associated spin model.
Recently it has been shown that when cells in the cat visual cortex are excited by
appropriate stimuli they enter a regime of regular firing. These firing statistics are
somewhat more complex than simulated here because there are a variable number
of spikes per cycle, but we have reproduced all of our major results in models which
capture this feature of the real data. We have seen that networks of regularly
firing cells are capable of qualitatively different types of computation because these
networks can be placed at a critical point without fine tuning of parameters. Most
dramatically, dynamic scaling allows us to trade spatial and temporal features and
thereby encode object dimension in temporal correlations of single cells, as in Fig.
2. To see if such novel computations are indeed mediated by cortical oscillations
we suggest the direct analog of our numerical experiment, in which the correlation
functions of single cells would be monitored in response to structured stimuli (e.g.,
textures) with different total spatial extent in the two dimensions of the visual
field. We predict that these correlation functions will show a clear dependence on
the area of the visual field being excited, with some sensitivity to the shape and
topology as well. Most importantly this dependence on "object" dimension will
extend to very large objects because the network is at a critical point. In this sense
the temporal correlations of single cells will encode any object dimension, rather
than being detectors for objects of some critical size.
Acknowledgements
We thank O. Alvarez, D. Arovas, A. B. Bonds, K. Brueckner, M. Crair, E.
Knobloch, H. Lecar, and D. Rohksar for helpful discussions. Work at Berkeley
was supported in part by the National Science Foundation through a Presidential
Young Investigator Award (to W.B.), supplemented by funds from Cray Research,
Sun Microsystems, and the NEC Research Institute, by the Fannie and John Hertz
Foundation through a Graduate Fellowship (to L.K.), and by the USPHS through
a Biomedical Research Support Grant.
References
[1] J. Allman, F. Miezin, and E. McGuinness. Ann. Rev. Neurosci., 8:407, 1985.
[2] L. Kruglyak. From biological reality to simple physical models: Networks of
oscillating neurons and the XY model. PhD thesis, University of California at
Berkeley, Berkeley, California, 1990.
[3] W. Bialek. In E. Jen, editor, 1989 Lectures in Complex Systems, SFI Studies in the Sciences of Complexity, volume 2, pages 513-595. Addison-Wesley,
Reading, Mass., 1990.
[4] R. Eckhorn, R. Bauer, W. Jordan, M. Brosch, W. Kruse, M. Munk, and H. J.
Reitboeck. Biol. Cybern., 60:121, 1988.
[5] C. M. Gray and W. Singer. Proc. Nat. Acad. Sci. USA, 86:1698, 1989.
[6] C. M. Gray, P. Konig, A. K. Engel, and W. Singer. Nature, 338:334, 1989.
[7] D. R. Nelson. In C. Domb and J. L. Lebowitz, editors, Phase Transitions and
Critical Phenomena, volume 7, chapter 1. Academic Press, London, 1983.
[8] M. Kac. The American Mathematical Monthly, 73:1-23, 1966.
[9] Richard Fitzhugh. Biophysical Journal, 1:445-466, 1961.
[10] J. S. Nagumo, S. Arimoto, and S. Yoshizawa. Proc. I.R.E., 50:2061, 1962.
[11] D. J. Aidley. The Physiology of Excitable Cells. Cambridge University Press,
Cambridge, 1971.
Part IV
Temporal Reasoning
3,546 | 4,210 | Learning Patient-Specific Cancer Survival
Distributions as a Sequence of Dependent Regressors
Chun-Nam Yu, Russell Greiner, Hsiu-Chin Lin
Department of Computing Science
University of Alberta
Edmonton, AB T6G 2E8
Vickie Baracos
Department of Oncology
University of Alberta
Edmonton, AB T6G 1Z2
{chunnam,rgreiner,hsiuchin}@ualberta.ca
[email protected]
Abstract
An accurate model of patient survival time can help in the treatment and care
of cancer patients. The common practice of providing survival time estimates
based only on population averages for the site and stage of cancer ignores many
important individual differences among patients. In this paper, we propose a local
regression method for learning patient-specific survival time distribution based
on patient attributes such as blood tests and clinical assessments. When tested
on a cohort of more than 2000 cancer patients, our method gives survival time
predictions that are much more accurate than popular survival analysis models
such as the Cox and Aalen regression models. Our results also show that using
patient-specific attributes can reduce the prediction error on survival time by as
much as 20% when compared to using cancer site and stage only.
1
Introduction
When diagnosed with cancer, most patients ask about their prognosis: "how long will I live?" and
"what is the success rate of each treatment option?" Many doctors provide patients with statistics
on cancer survival based only on the site and stage of the tumor. Commonly used statistics include
the 5-year survival rate and median survival time, e.g., a doctor can tell a specific patient with early
stage lung cancer that s/he has a 50% 5-year survival rate.
In general, today's cancer survival rates and median survival times are estimated from a large group
of cancer patients; while these estimates do apply to the population in general, they are not particularly accurate for individual patients, as they do not include patient-specific information such as age
and general health conditions. While doctors can make adjustments to their survival time predictions based on these individual differences, it is better to directly incorporate these important factors
explicitly in the prognostic models - e.g., by incorporating the clinical information, such as blood
tests and performance status assessments [1] that doctors collect during the diagnosis and treatment
of cancer. These data reveal important information about the state of the immune system and organ functioning of the patient, and therefore are very useful for predicting how well a patient will
respond to treatments and how long s/he will survive. In this work, we develop machine learning
techniques to incorporate this wealth of healthcare information to learn a more accurate prognostic
model that uses patient-specific attributes. With improved prognostic models, cancer patients and
their families can make more informed decisions on treatments, lifestyle changes, and sometimes
end-of-life care.
In survival analysis [2], the Cox proportional hazards model [3] and other parametric survival distributions have long been used to fit the survival time of a population. Researchers and clinicians
usually apply these models to compare the survival time of two populations or to test for significant
risk factors affecting survival; n.b., these models are not designed for the task of predicting survival
time for individual patients. Also, as these models work with the hazard function instead of the survival function (see Section 2), they might not give good calibrated predictions on survival rates for
individuals. In this work we propose a new method, multi-task logistic regression (MTLR), to learn
patient-specific survival distributions. MTLR directly models the survival function by combining
multiple local logistic regression models in a dependent manner. This allows it to handle censored
observations and the time-varying effects of features naturally. Compared to survival regression
methods such as the Cox and Aalen regression models, MTLR gives significantly more accurate
predictions on survival rates over several datasets, including a large cohort of more than 2000 cancer
patients. MTLR also reduces the prediction error on survival time by 20% when compared to the
common practice of using the median survival time based on cancer site and stage.
Section 2 surveys basic survival analysis and related works. Section 3 introduces our method for
learning patient-specific survival distributions. Section 4 evaluates our learned models on a large
cohort of cancer patients, and also provides additional experiments on two other datasets.
2
Survival Time Prediction for Cancer Patients
In most regression problems, we know both the covariates and "outcome" values for all individuals.
By contrast, it is typical to not know many of the outcome values in survival data. In many medical
studies, the event of interest for many individuals (death, disease recurrence) might not have occurred within the fixed period of study. In addition, other subjects could move out of town or decide
to drop out any time. Here we know only the date of the final visit, which provides a lower bound
on the survival time. We refer to the time recorded as the "event time", whether it is the true survival
time, or just the time of the last visit (censoring time). Such datasets are considered censored.
Survival analysis provides many tools for modeling the survival time T of a population, such as a
group of stage-3 lung cancer patients. A basic quantity of interest is the survival function S(t) =
P(T ≥ t), which is the probability that an individual within the population will survive longer than
time t. Given the survival times of a set of individuals, we can plot the proportion of surviving
individuals against time, as a way to visualize S(t). The plot of this empirical survival distribution
is called the Kaplan-Meier curve [4] (Figure 1(left)).
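For concreteness, here is a minimal sketch of the product-limit (Kaplan-Meier) estimator; this is the standard construction, run on invented toy data.

import numpy as np

def kaplan_meier(times, event):
    # Product-limit estimate of S(t); event = 1 for death, 0 for censored.
    times = np.asarray(times, float)
    event = np.asarray(event, int)
    S, curve = 1.0, []
    for t in np.unique(times[event == 1]):
        at_risk = np.sum(times >= t)          # still under observation at t
        deaths = np.sum((times == t) & (event == 1))
        S *= 1.0 - deaths / at_risk
        curve.append((float(t), S))
    return curve

print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))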
This is closely related to the hazard function λ(t), which describes the instantaneous rate of failure
at time t:
λ(t) = lim_{δt→0} P(t ≤ T < t + δt | T ≥ t) / δt,   and   S(t) = exp( -∫₀ᵗ λ(u) du ).
2.1
Regression Models in Survival Analysis
One of the most well-known regression models in survival analysis is Cox's proportional hazards
model [3]. It assumes the hazard function λ(t) depends multiplicatively on a set of features x:
λ(t | x) = λ_0(t) exp(θ · x).
It is called the proportional hazards model because the hazard rates of two individuals with features
x_1 and x_2 differ by a ratio exp(θ · (x_1 - x_2)). The function λ_0(t), called the baseline hazard, is
usually left unspecified in Cox regression. The regression coefficients θ are estimated by maximizing a partial likelihood objective, which depends only on the relative ordering of survival time of
individuals but not on their actual values. Cox regression is mostly used for identifying important
risk factors associated with survival in clinical studies. It is typically not used to predict survival
time since the hazard function is incomplete without the baseline hazard λ_0. Although we can fit
a non-parametric survival function for λ_0(t) after the coefficients of Cox regression are determined
[2], this requires a cumbersome 2-step procedure. Another weakness of the Cox model is its proportional hazards assumption, which restricts the effect of each feature on survival to be constant over
time.
There are alternatives to the Cox model that avoid the proportional hazards restriction, including
the Aalen additive hazards model [5] and other time-varying extensions to the Cox model [6]. The
Aalen linear hazard model assumes the hazard function has the form
λ(t | x) = λ(t) · x,                                                (1)
where λ(t) on the right-hand side is a vector of time-varying coefficients.
Figure 1: (Left) Kaplan-Meier curve: each point (x, y) means proportion y of the patients are alive
at time x. Vertical line separates those who have died versus those who survive at t = 20 months.
(Middle) Example binary encoding for patient 1 (uncensored) with survival time 21.3 months and for
patient 2 (censored), with last visit time at 21.3 months. (Right) Example discrete survival function
for a single patient predicted by MTLR.
While there are now many estimation techniques, goodness-of-fit tests, and hypothesis tests for these
survival regression models, they are rarely evaluated on the task of predicting survival time of individual patients. Moreover, it is not easy to choose between the various assumptions imposed by
these models, such as whether the hazard rate should be a multiplicative or additive function of the
features. In this paper we will test our MTLR method, which directly models the survival function,
against Cox regression and Aalen regression as representatives of these survival analysis models.
In machine learning, there are a few recently proposed regression techniques for survival prediction
[7, 8, 9, 10]. These methods attempt to optimize specific loss functions or performance measures,
which usually involve modifying the common regression loss functions to handle censored data. For
example, Shivaswamy et al. [7] modified the support vector regression (SVR) loss function from
max{ |y - θ · x| - ε, 0 }   to   max{ (y - θ · x) - ε, 0 },
where y is the time of censoring and ε is a tolerance parameter. In this way any prediction θ · x above
the censoring time y is deemed consistent with observation and is not penalized. This class of direct
regression methods usually gives very good results on the particular loss functions they optimize
over, but could fail if the loss function is non-convex or difficult to optimize. Moreover, these
methods only predict a single survival time value (a real number) without an associated confidence
on prediction, which is a serious drawback in clinical applications.
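In code, the modification is a one-sided truncation of the standard ε-insensitive loss (a sketch; ε = 0.5 is an arbitrary choice and the function names are ours):

import numpy as np

def svr_loss(pred, y, eps=0.5):
    # Standard epsilon-insensitive SVR loss.
    return np.maximum(np.abs(y - pred) - eps, 0.0)

def censored_svr_loss(pred, y, eps=0.5):
    # One-sided variant of [7]: predictions above the censoring
    # time y are consistent with the observation and incur no loss.
    return np.maximum((y - pred) - eps, 0.0)

print(svr_loss(12.0, 10.0), censored_svr_loss(12.0, 10.0))  # 1.5 0.0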
Our MTLR model below is closely related to local regression models [11] and varying coefficient
models [12] in statistics. Hastie and Tibshirani [12] described a very general class of regression
models that allow the coefficients to change with another set of variables called "effect modifiers";
they also discussed an application of their model to overcome the proportional hazards assumption
in Cox models. While we focus on predicting survival time, they instead focused on evaluating the
time-varying effect of prognostic factors and worked with the rank-based partial likelihood objective.
3
Survival Distribution Modeling via a Sequence of Dependent Regressors
Consider a simpler classification task of predicting whether an individual will survive for more than
t months. A common approach for this classification task is the logistic regression model [13],
where we model the probability of surviving more than t months as:
P_θ(T ≥ t | x) = (1 + exp(θ · x + b))^{-1}.
The parameter vector θ describes how the features x affect the chance of survival, with
the threshold b. This task corresponds to a specific time point on the Kaplan-Meier curve, which
attempts to discriminate those who survive against those who have died, based on the features x
(Figure 1(left)). Equivalently, the logistic regression model can be seen as modeling the individual
survival probabilities of cancer patients at the time snapshot t.
Taking this idea one step further, consider modeling the probability of survival of patients at each of
a vector of time points τ = (t_1, t_2, . . . , t_m), e.g., τ could be the 60 monthly intervals from 1 month
up to 60 months. We can set up a series of logistic regression models for each of these:
P_{θ_i}(T ≥ t_i | x) = (1 + exp(θ_i · x + b_i))^{-1},   1 ≤ i ≤ m,          (2)
where θ_i and b_i are time-specific parameter vectors and thresholds. The input features x stay the
same for all these classification tasks, but the binary labels y_i = [T ≥ t_i] can change depending
on the threshold t_i. This particular setup allows us to answer queries about the survival probability
of individual patients at each of the time snapshots {t_i}, getting close to our goal of modeling
a personal survival time distribution for individual patients. The use of time-specific parameter
vectors naturally allows us to capture the effect of time-varying covariates, similar to many dynamic
regression models [14, 12].
However the outputs of these logistic regression models are not independent, as a death event at
or before time t_i implies death at all subsequent time points t_j for all j > i. MTLR enforces
the dependency of the outputs by predicting the survival status of a patient at each of the time
snapshots t_i jointly instead of independently. We encode the survival time s of a patient as a binary
sequence y = (y_1, y_2, . . . , y_m), where y_i ∈ {0, 1} denotes the survival status of the patient at time
t_i, so that y_i = 0 (no death event yet) for all i with t_i < s, and y_i = 1 (death) for all i with
t_i ≥ s (see Figure 1(middle)). We denote such an encoding of the survival time s as y(s), and
let y_i(s) be the value at its ith position. Here there are m + 1 possible legal sequences of the form
(0, 0, . . . , 1, 1, . . . , 1), including the sequence of all "0"s and the sequence of all "1"s.
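The encoding y(s) is mechanical; a sketch, using the 60 monthly snapshots as an example:

import numpy as np

def encode_survival(s, time_points):
    # Binary target sequence y(s): 0 while alive (t_i < s), 1 from death on.
    return (np.asarray(time_points) >= s).astype(int)

tau = np.arange(1, 61)                  # monthly snapshots t_1..t_60
print(encode_survival(21.3, tau)[:25])  # 0 through month 21, 1 from month 22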
The probability of observing the survival status sequence y = (y_1, y_2, . . . , y_m) can be represented by the following
generalization of the logistic regression model:
P_Θ(Y = (y_1, y_2, . . . , y_m) | x) = exp( Σ_{i=1}^{m} y_i(θ_i · x + b_i) ) / Σ_{k=0}^{m} exp( f_Θ(x, k) ),
where Θ = (θ_1, . . . , θ_m), and f_Θ(x, k) = Σ_{i=k+1}^{m} (θ_i · x + b_i) for 0 ≤ k ≤ m is the score of the
sequence with the event occurring in the interval [t_k, t_{k+1}) before taking the logistic transform, with
the boundary case f_Θ(x, m) = 0 being the score for the sequence of all "0"s. This is similar to the
objective of conditional random fields [15] for sequence labeling, where the labels at each node are
scored and predicted jointly.
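A direct sketch of this probability follows (our own illustrative implementation; Theta is an m x d matrix stacking the θ_i, and a suffix-sum computes all f_Θ(x, k) at once):

import numpy as np

def sequence_scores(Theta, b, x):
    # f(x, k) = sum_{i=k+1..m} (theta_i . x + b_i) for k = 0..m; f(x, m) = 0.
    g = Theta @ x + b                        # g[i-1] = theta_i . x + b_i
    return np.concatenate([np.cumsum(g[::-1])[::-1], [0.0]])

def sequence_probabilities(Theta, b, x):
    # P(event in [t_k, t_k+1)) for k = 0..m: softmax over the m+1 sequences.
    f = sequence_scores(Theta, b, x)
    p = np.exp(f - f.max())                  # subtract max for stability
    return p / p.sum()

rng = np.random.default_rng(0)
m, d = 60, 5
Theta = rng.normal(0.0, 0.1, (m, d))
b = rng.normal(0.0, 0.1, m)
x = rng.normal(size=d)
p = sequence_probabilities(Theta, b, x)
print(p.shape, round(float(p.sum()), 6))     # (61,) 1.0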
Therefore the log likelihood of a set of uncensored patients with survival times s_1, s_2, . . . , s_n and
feature vectors x_1, x_2, . . . , x_n is
Σ_{i=1}^{n} [ Σ_{j=1}^{m} y_j(s_i)(θ_j · x_i + b_j) - log Σ_{k=0}^{m} exp f_Θ(x_i, k) ].
Instead of directly maximizing this log likelihood, we solve the following optimization problem:
min_Θ  (C_1/2) Σ_{j=1}^{m} ||θ_j||² + (C_2/2) Σ_{j=1}^{m-1} ||θ_{j+1} - θ_j||²
       - Σ_{i=1}^{n} [ Σ_{j=1}^{m} y_j(s_i)(θ_j · x_i + b_j) - log Σ_{k=0}^{m} exp f_Θ(x_i, k) ].          (3)
The first regularizer, over ||θ_j||², ensures the norm of the parameter vector is bounded to prevent
overfitting. The second regularizer, ||θ_{j+1} - θ_j||², ensures the parameters vary smoothly across consecutive time points, and is especially important for controlling the capacity of the model when the
time points become dense. The regularization constants C_1 and C_2, which control the amount of
smoothing for the model, can be estimated via cross-validation. As the above optimization problem
is convex and differentiable, optimization algorithms such as Newton's method or quasi-Newton
methods can be applied to solve it efficiently. Since we model the survival distribution as a series of
dependent prediction tasks, we call this model multi-task logistic regression (MTLR). Figure 1(right)
shows an example survival distribution predicted by MTLR for a test patient.
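A sketch of the objective in Eq. 3 for the uncensored case, written so it could be handed to a generic gradient-based optimizer (the data and the constants C_1, C_2 below are placeholders of our own choosing):

import numpy as np

def mtlr_objective(Theta, b, X, s, time_points, C1=1.0, C2=1.0):
    # Regularized negative log-likelihood of Eq. 3, uncensored patients only.
    reg = 0.5 * C1 * np.sum(Theta**2)
    reg += 0.5 * C2 * np.sum((Theta[1:] - Theta[:-1])**2)   # smoothness
    nll = 0.0
    for x, si in zip(X, s):
        g = Theta @ x + b
        y = (np.asarray(time_points) >= si).astype(float)   # y_j(s_i)
        f = np.concatenate([np.cumsum(g[::-1])[::-1], [0.0]])
        logZ = np.logaddexp.reduce(f)                       # log partition
        nll -= y @ g - logZ
    return reg + nll

rng = np.random.default_rng(0)
m, d, n = 20, 4, 50
Theta, b = np.zeros((m, d)), np.zeros(m)
X = rng.normal(size=(n, d))
s = rng.uniform(1, m, n)
print(mtlr_objective(Theta, b, X, s, np.arange(1, m + 1)))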
3.1
Handling Censored Data
Our multi-task logistic regression model can handle censoring naturally by marginalizing over the
unobserved variables in a survival status sequence (y_1, y_2, . . . , y_m). For example, suppose a patient
with features x is censored at time s_c, and t_j is the closest time point after s_c. Then all the sequences
Table 1: Left: number of cancer patients for each site and stage in the cancer registry dataset. Right:
features used in learning survival distributions
site \ stage        1    2    3    4
Bronchus & Lung     61   44   186  390
Colorectal          15   157  233  545
Head and Neck       6    8    14   206
Esophagus           0    1    1    63
Pancreas            1    3    0    134
Stomach             0    0    1    128
Other Digestive     0    1    0    77
Misc                1    0    3    123

basic:              age, sex, weight gain/loss, BMI, cancer site, cancer stage
general wellbeing:  no appetite, nausea, sore mouth, taste funny, constipation, pain,
                    dental problem, dry mouth, vomit, diarrhea, performance status
blood test:         granulocytes, LDH-serum, HGB, lymphocytes, platelet, WBC count,
                    calcium-serum, creatinine, albumin
y = (y_1, y_2, . . . , y_m) with y_i = 0 for i < j are consistent with this censored observation (see
Figure 1(middle)). The likelihood of this censored patient is
P_Θ(T ≥ t_j | x) = Σ_{k=j}^{m} exp(f_Θ(x, k)) / Σ_{k=0}^{m} exp(f_Θ(x, k)),          (4)
where the numerator is the sum over all consistent sequences. While the sum in the numerator makes
the log-likelihood non-concave, we can still learn the parameters effectively using EM or gradient
descent with suitable initialization.
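The censored term of Eq. 4 reuses the same suffix scores; a sketch (note the 0-based index j returned by searchsorted plays the role of the paper's t_j, and the toy parameters are arbitrary):

import numpy as np

def censored_log_likelihood(Theta, b, x, s_c, time_points):
    # Eq. 4: log P(T >= t_j | x), with t_j the first snapshot at or after
    # the censoring time s_c.
    g = Theta @ x + b
    f = np.concatenate([np.cumsum(g[::-1])[::-1], [0.0]])
    j = int(np.searchsorted(np.asarray(time_points, float), s_c))
    num = np.logaddexp.reduce(f[j:])     # consistent sequences, k >= j
    den = np.logaddexp.reduce(f)         # all m + 1 legal sequences
    return num - den

rng = np.random.default_rng(0)
m, d = 20, 4
Theta = rng.normal(0.0, 0.1, (m, d))
print(censored_log_likelihood(Theta, np.zeros(m), rng.normal(size=d),
                              7.5, np.arange(1, m + 1)))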
In summary, the proposed MTLR model holds several advantages over classical regression models
in survival analysis for survival time prediction. First, it directly models the more intuitive survival
function rather than the hazard function (conditional rate of failure/death), avoiding the difficulties
of choosing between different forms of hazards. Second, by modeling the survival distribution as
the joint output of a sequence of dependent local regressors, we can capture the time-varying effects
of features and handle censored data easily and naturally. Third, we will see that our model can give
more accurate predictions on survival and better calibrated probabilities (see Section 4), which are
important in clinical applications.
Our goal here is not to replace these tried-and-tested models in survival analysis, which are very
effective for hypothesis testing and prognostic factor discovery. Instead, we want a tool that can
accurately and effectively predict an individual?s survival time.
3.2
Relations to Other Machine Learning Models
The objective of our MTLR model is of the same form as a general CRF [15], but there are several
important differences from typical applications of CRFs for sequence labeling. First, MTLR has
no transition features (edge potentials) (Eq (3)); instead the dependencies between labels in the
sequence are enforced implicitly by only allowing a linear number (m+1) of legal labelings. Second,
in most sequence labeling applications of CRFs, the weights for the node potentials are shared across
nodes to share statistical strength and improve generalization. Instead, MTLR uses a different weight
vector θ_i at each node to capture the time-varying effects of input features. Unlike typical sequence
labeling problems, the sequence construction of our model might be better viewed as a device to
obtain a flexible discrete approximation of the survival distribution of individual patients.
Our approach can also be seen as an instance of multi-task learning [16], where the prediction of
individual survival status at each time snapshot t_j can be regarded as a separate task. The smoothing
penalty ||θ_j - θ_{j+1}||² is used by many multi-task regularizers to encourage weight sharing between
related tasks. However, unlike typical multi-task learning problems, in our model the outputs of
different tasks are dependent to satisfy the monotone condition of a survival function.
4
Experiments
Our main dataset comes from the Alberta Cancer Registry obtained through the Cross Cancer Institute at the University of Alberta, which included 2402 cancer patients with tumors at different sites.
About one third of the patients have censored survival times. Table 1 shows the groupings of cancer
patients in the dataset and the patient-specific attributes for learning survival distributions. All these
measurements are taken before the first chemotherapy.
5
In all experiments we report five-fold cross validation (5CV) results, where MTLR's regularization
parameters C1 and C2 are selected by another 5CV within the training fold, based on log likelihood.
We pick the set of time points τ in these experiments to be the 100 points from the 1st percentile
up to the 100th percentile of the event time (true survival time or censoring time) over all patients.
Since all the datasets contain censored data, we first train an MTLR model using the event time
(survival/censoring) as regression targets (no hidden variables). Then the trained model is used as
the initial weights in the EM procedure in Eq (4) to train the final model.
The Cox proportional hazards model is trained using the survival package in R, followed by the
fitting of the baseline hazard λ_0(t) using the Kalbfleisch-Prentice estimator [2]. The Aalen linear
hazards model is trained using the timereg package. Both the Cox and the Aalen models are
trained using the same set of 25 features. As a baseline for this cancer registry dataset, we also
provide a prediction based on the median survival time and survival probabilities of the subgroup of
patients with cancer at a specific site and at a specific stage, estimated from the training fold.
4.1
Survival Rate Prediction
Our first evaluation focuses on the classification accuracy and calibration of predicted survival probabilities at different time thresholds. In addition to giving a binary prediction on whether a patient
would survive beyond a certain time period, say 2 years, it is very useful to give an associated confidence of the prediction in terms of probabilities (survival rate). We use mean square error (MSE),
also called the Brier score in this setting [17], to measure the quality of probability predictions.
Previous work [18] showed that MSE can be decomposed into two components, one measuring
calibration and one measuring discriminative power (i.e., classification accuracy) of the probability
predictions.
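The Brier score itself is one line (a sketch; the calibration/refinement decomposition of [18] is not reproduced here):

import numpy as np

def brier_score(p_survive, survived):
    # Mean squared error between predicted survival probabilities
    # and 0/1 outcomes at a fixed time threshold.
    p = np.asarray(p_survive, float)
    y = np.asarray(survived, float)
    return np.mean((p - y) ** 2)

print(brier_score([0.9, 0.2, 0.7], [1, 0, 0]))   # 0.18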
Table 2 shows the classification accuracy and MSE on the predicted probabilities of different models
at 5, 12, and 22 months, which correspond to the 25% lower quantile, median, and 75% upper
quantile of the survival time of all the cancer patients in the dataset. Our MTLR models produce
predictions on survival status and survival probability that are much more accurate than the Cox
and Aalen regression models. This shows the advantage of directly modeling the survival function
instead of going through the hazard function when predicting survival probabilities. The Cox model
and the Aalen model have classification accuracies and MSE that are similar to one another on
this dataset. All regression models (MTLR, Cox, Aalen) beat the baseline prediction using median
survival time based on cancer stage and site only, indicating that there is substantial advantage of
employing extra clinical information to improve survival time predictions given to cancer patients.
4.2
Visualization
Figure 2 visualizes the MTLR, Cox and Aalen regression models for two patients on a test fold.
Patient 1 is a short survivor who lives for only 3 months from diagnosis, while patient 2 is a long
survivor whose survival time is censored at 46 months. All three regression models (correctly) give
poor prognosis for patient 1 and good prognosis for patient 2, but there are a few interesting differences when we examine the plots. The MTLR model is able to produce smooth survival curves
of different shapes for the two patients (one convex, the other slightly concave), while the
Cox model always predicts survival curves of similar shapes because of the proportional hazards assumption. Indeed it is well known that the survival curves of two individuals never cross for a
Cox model. For the Aalen model, we observe that the survival function is not (locally) monotonically decreasing. This is a consequence of the linear hazards assumption (Eq (1)), which allows the
hazard to become negative and therefore the survival function to increase. This problem is less common when predicting survival curves at population level, but could be more frequent for individual
survival distribution predictions.
4.3
Survival Time Predictions Optimizing Different Loss Functions
Our third evaluation on the predicted survival distributions involves applying them to make predictions that minimize different clinically-relevant loss functions. For example, if the patient is
interested in knowing whether s/he has weeks, months, or years to live, then measuring errors in
terms of the logarithm of the survival time can be appropriate. In this case we can measure the loss
Table 2: Classification accuracy and MSE of survival probability predictions on cancer registry
dataset (standard error of 5CV shown in brackets). Bold numbers indicate significance with a paired
t-test at p = 0.05 level (this applies to all subsequent tables).
Accuracy    5 month       12 month      22 month
MTLR        86.5 (0.7)    76.1 (0.9)    74.5 (1.3)
Cox         74.5 (0.9)    59.3 (1.1)    62.8 (3.5)
Aalen       73.3 (1.2)    61.0 (1.7)    59.6 (3.6)
Baseline    69.2 (0.3)    56.2 (2.0)    57.0 (1.4)

MSE         5 month        12 month       22 month
MTLR        0.101 (0.005)  0.158 (0.004)  0.170 (0.007)
Cox         0.196 (0.009)  0.270 (0.008)  0.232 (0.016)
Aalen       0.198 (0.004)  0.278 (0.008)  0.288 (0.020)
Baseline    0.227 (0.012)  0.299 (0.011)  0.243 (0.012)

[Figure 2 panels: predicted P(survival) vs. months (0-60) for patients 1 and 2 under MTLR, Cox, and Aalen; see caption below.]
Figure 2: Predicted survival function for two patients in test set: MTLR (left), Cox (center), Aalen
(right). Patient 1 lives for 3 months while patient 2 has survival time censored at 46 months.
using the absolute error (AE) over log survival time
l_{AE-log}(p, t) = | log p - log t |,                               (5)
where p and t are the predicted and true survival time respectively.
In other scenarios, we might be more concerned about the difference of the predicted and true
survival time. For example, as the cost of hospital stays and medication scales linearly with the
survival time, the AE loss on the survival time could be appropriate, i.e.,
l_{AE}(p, t) = | p - t |.                                           (6)
We also consider an error measure called the relative absolute error (RAE):
l_{RAE}(p, t) = min{ |(p - t)/p|, 1 },                              (7)
which is essentially AE scaled by the predicted survival time p, since p is known at prediction time
in clinical applications. The loss is truncated at 1 to prevent large penalizations for small predicted
survival time. Knowing that the average RAE of a predictor is 0.3 means we can expect the true
survival time to be within 30% of the predicted time.
Given any of these loss models l above, we can make a point prediction h_l(x) of the survival time
for a patient with features x using the survival distribution P_Θ estimated by our MTLR model:
h_l(x) = argmin_{p ∈ {t_1,...,t_m}} Σ_{k=0}^{m} l(p, t_k) P_Θ(Y = y(t_k) | x),          (8)
where y(t_k) is the survival time encoding defined in Section 3.
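Making the point prediction of Eq. 8 from the m + 1 interval probabilities is a small exercise (a sketch; using t_1 as the representative time for the k = 0 interval is our own arbitrary choice, and the toy probabilities are invented):

import numpy as np

def point_prediction(probs, time_points, loss):
    # Eq. 8: choose p in {t_1..t_m} minimizing the expected loss under the
    # m + 1 interval probabilities; t_1 stands in for the k = 0 interval.
    ts = np.concatenate([[time_points[0]], time_points])
    candidates = np.asarray(time_points, float)
    risks = [np.sum(loss(p, ts) * probs) for p in candidates]
    return candidates[int(np.argmin(risks))]

rae = lambda p, t: np.minimum(np.abs((p - t) / p), 1.0)
probs = np.array([0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05])   # m = 6 intervals
print(point_prediction(probs, np.arange(1.0, 7.0), rae))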
Table 3 shows the results on optimizing the three proposed loss functions using the individual survival distribution learned with MTLR against other methods. For this particular evaluation, we also
implemented the censored support vector regression (CSVR) proposed in [7, 8]. We train two CSVR
models, one using the survival time and the other using logarithm of the survival time as regression
targets, which correspond to minimizing the AE and AE-log loss functions. For RAE we report the
best result from linear and log-scale CSVR in the table, since this non-convex loss is not minimized
by either of them. As we do not know the true survival time for censored patients, we adopt the
approach of not penalizing a prediction p for a patient with censoring time t if p > t, i.e., l(p, t) = 0
for the loss functions defined in Eqs (5) to (7) above. This is exactly the same censored training loss
used in CSVR. Note that it is undesirable to test on uncensored patients only, as the survival time
distributions are very different for censored and uncensored patients. For Cox and Aalen models we
report results using predictions based on the median, as optimizing for different loss functions using
Eq (8) with the distributions predicted by Cox and Aalen models give inferior results.
The results in Table 3 show that, although CSVR has the advantage of optimizing the loss function
directly during training, our MTLR model is still able to make predictions that improve on CSVR,
Table 3: Results on Optimizing Different Loss Functions on the Cancer Registry Dataset
            MTLR          Cox           Aalen         CSVR          Baseline
AE          9.58 (0.11)   10.76 (0.12)  19.06 (2.04)  9.96 (0.32)   11.73 (0.62)
AE-log      0.56 (0.02)   0.61 (0.02)   0.76 (0.06)   0.56 (0.02)   0.70 (0.05)
RAE         0.40 (0.01)   0.44 (0.02)   0.44 (0.02)   0.44 (0.03)   0.53 (0.02)
Table 4: (Top) MSE of Survival Probability Predictions on SUPPORT2 (left) and RHC (right).
(Bottom) Results on Optimizing Different Loss Functions: SUPPORT2 (left), RHC (right)
MSE, SUPPORT2:
            14 day         58 day         252 day
MTLR        0.102 (0.002)  0.162 (0.002)  0.189 (0.004)
Cox         0.152 (0.003)  0.213 (0.004)  0.199 (0.006)
Aalen       0.141 (0.003)  0.195 (0.004)  0.195 (0.008)

MSE, RHC:
            8 day          27 day         163 day
MTLR        0.121 (0.002)  0.175 (0.005)  0.201 (0.004)
Cox         0.180 (0.005)  0.239 (0.004)  0.223 (0.004)
Aalen       0.176 (0.004)  0.229 (0.006)  0.221 (0.006)

Loss, SUPPORT2:
            AE            AE-log       RAE
MTLR        11.74 (0.35)  1.19 (0.03)  0.53 (0.01)
Cox         14.08 (0.49)  1.35 (0.03)  0.71 (0.01)
Aalen       14.61 (0.66)  1.28 (0.04)  0.65 (0.01)
CSVR        11.62 (0.15)  1.18 (0.02)  0.65 (0.01)

Loss, RHC:
            AE           AE-log       RAE
MTLR        2.90 (0.09)  1.07 (0.02)  0.49 (0.01)
Cox         3.08 (0.09)  1.10 (0.02)  0.53 (0.01)
Aalen       3.55 (0.85)  1.10 (0.06)  0.54 (0.01)
CSVR        2.96 (0.07)  1.09 (0.02)  0.58 (0.01)
sometimes significantly. Moreover MTLR is able to make survival time prediction with improved
RAE, which is difficult for CSVR to optimize directly. MTLR also beats the Cox and Aalen models
on all three loss functions. When compared to the baseline of predicting the median survival time
by cancer site and stage, MTLR is able to employ extra clinical features to reduce the absolute error
on survival time from 11.73 months to 9.58 months, and the error ratio between true and predicted
survival time from being off by exp(0.70) ≈ 2.01 times to exp(0.56) ≈ 1.75 times. Both error
measures are reduced by about 20%.
4.4
Evaluation on Other Datasets
As additional evaluations, we also tested our model on the SUPPORT2 and RHC datasets (available
at http://biostat.mc.vanderbilt.edu/wiki/Main/DataSets), which record the
survival time for patients hospitalized with severe illnesses. SUPPORT2 contains over 9000 patients
(32% censored) while RHC contains over 5000 patients (35% censored).
Table 4 (top) shows the MSE on survival probability prediction over the SUPPORT2 dataset and
RHC dataset (we omit classification accuracy due to lack of space). The thresholds are again chosen
at 25% lower quantile, median, and 75% upper quantile of the population survival time. The MTLR
model, again, produces significantly more accurate probability predictions when compared against
the Cox and Aalen regression models. Table 4 (bottom) shows the results on optimizing different
loss functions for SUPPORT2 and RHC. The results are consistent with the cancer registry dataset,
with MTLR beating Cox and Aalen regressions while tying with CSVR on AE and AE-log.
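For concreteness, the sketch below shows one way to compute the MSE of survival probability predictions at a threshold t*. It is our reconstruction, not code from the paper: we assume each patient contributes the squared difference between the predicted probability of surviving past t* and the observed 0/1 status, and that patients censored before t* (whose status at t* is unknown) are excluded.

def survival_prob_mse(preds, times, censored, t_star):
    """MSE of predicted P(T > t_star) against observed survival status at t_star.

    preds[i]    -- predicted probability that patient i survives past t_star
    times[i]    -- observed survival or censoring time of patient i
    censored[i] -- True if the survival time of patient i is censored
    """
    sq_errors = []
    for p, t, c in zip(preds, times, censored):
        if c and t < t_star:
            continue  # status at t_star is unknown for this patient (assumed convention)
        outcome = 1.0 if t > t_star else 0.0
        sq_errors.append((p - outcome) ** 2)
    return sum(sq_errors) / len(sq_errors)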
5 Conclusions
We have presented a new method for learning patient-specific survival distributions. Experiments on a large cohort of cancer patients show that our model gives much more accurate predictions of survival rates when compared to the Cox or Aalen survival regression models. Our results demonstrate that incorporating patient-specific features can significantly improve the accuracy of survival prediction over just using cancer site and stage, with prediction errors reduced by as much as 20%.

We plan to extend our model to an online system that can update survival predictions with new measurements. Our current data come from measurements taken when cancers are first diagnosed; it would be useful to be able to update survival predictions for patients incrementally, based on new blood tests or a physician's assessments.
Acknowledgments
This work is supported by Alberta Innovates Centre for Machine Learning (AICML) and NSERC.
We would also like to thank the Alberta Cancer Registry for the datasets used in this study.
References
[1] M.M. Oken, R.H. Creech, D.C. Tormey, J. Horton, T.E. Davis, E.T. McFadden, and P.P. Carbone. Toxicity and response criteria of the Eastern Cooperative Oncology Group. American Journal of Clinical Oncology, 5(6):649, 1982.
[2] J.D. Kalbfleisch and R.L. Prentice. The Statistical Analysis of Failure Time Data. Wiley, New York, 1980.
[3] D.R. Cox. Regression models and life-tables. Journal of the Royal Statistical Society, Series B (Methodological), 34(2):187–220, 1972.
[4] E.L. Kaplan and P. Meier. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457–481, 1958.
[5] O.O. Aalen. A linear regression model for the analysis of life times. Statistics in Medicine, 8(8):907–925, 1989.
[6] T. Martinussen and T.H. Scheike. Dynamic Regression Models for Survival Data. Springer Verlag, 2006.
[7] P.K. Shivaswamy, W. Chu, and M. Jansche. A support vector approach to censored targets. In ICDM 2007, pages 655–660. IEEE, 2008.
[8] A. Khosla, Y. Cao, C.C.Y. Lin, H.K. Chiu, J. Hu, and H. Lee. An integrated machine learning approach to stroke prediction. In KDD, pages 183–192. ACM, 2010.
[9] V. Raykar, H. Steck, B. Krishnapuram, C. Dehing-Oberije, and P. Lambin. On ranking in survival analysis: Bounds on the concordance index. NIPS, 20, 2007.
[10] G.C. Cawley, N.L.C. Talbot, G.J. Janacek, and M.W. Peck. Sparse Bayesian kernel survival analysis for modeling the growth domain of microbial pathogens. IEEE Transactions on Neural Networks, 17(2):471–481, 2006.
[11] W.S. Cleveland and S.J. Devlin. Locally weighted regression: an approach to regression analysis by local fitting. Journal of the American Statistical Association, 83(403):596–610, 1988.
[12] T. Hastie and R. Tibshirani. Varying-coefficient models. Journal of the Royal Statistical Society, Series B (Methodological), 55(4):757–796, 1993.
[13] B. Efron. Logistic regression, survival analysis, and the Kaplan-Meier curve. Journal of the American Statistical Association, 83(402):414–425, 1988.
[14] D. Gamerman. Dynamic Bayesian models for survival data. Applied Statistics, 40(1):63–79, 1991.
[15] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289, 2001.
[16] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[17] G.W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
[18] M.H. DeGroot and S.E. Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society, Series D (The Statistician), 32(1):12–22, 1983.
Modelling Genetic Variations with
Fragmentation-Coagulation Processes
Yee Whye Teh, Charles Blundell and Lloyd T. Elliott
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, United Kingdom
{ywteh,c.blundell,elliott}@gatsby.ucl.ac.uk
Abstract
We propose a novel class of Bayesian nonparametric models for sequential data
called fragmentation-coagulation processes (FCPs). FCPs model a set of sequences using a partition-valued Markov process which evolves by splitting and
merging clusters. An FCP is exchangeable, projective, stationary and reversible,
and its equilibrium distributions are given by the Chinese restaurant process. As
opposed to hidden Markov models, FCPs allow for flexible modelling of the number of clusters, and they avoid label switching non-identifiability problems. We
develop an efficient Gibbs sampler for FCPs which uses uniformization and the
forward-backward algorithm. Our development of FCPs is motivated by applications in population genetics, and we demonstrate the utility of FCPs on problems
of genotype imputation with phased and unphased SNP data.
1 Introduction
We are interested in probabilistic models for sequences arising from the study of genetic variations
in a population of organisms (particularly humans). The most commonly studied class of genetic
variations in humans are single nucleotide polymorphisms (SNPs), with large quantities of data now
available (e.g. from the HapMap [1] and 1000 Genomes projects [2]). SNPs play an important role
in our understanding of genetic processes, human historical migratory patterns, and in genome-wide
association studies for discovering the genetic basis of diseases, which in turn are useful in clinical
settings for diagnoses and treatment recommendations.
A SNP is a specific location in the genome where a mutation has occurred to a single nucleotide at
some time during the evolutionary history of a species. Because the rate of such mutations is low
in human populations the chances of two mutations occurring in the same location is small and so
most SNPs have only two variants (wild type and mutant) in the population. The SNP variants on
a chromosome of an individual form a sequence, called a haplotype, with each entry being binary
valued coding for the two possible variants at that SNP. Due to the effects of gene conversion and
recombination, the haplotypes of a set of individuals often has a ?mosaic? structure where contiguous subsequences recur across multiple individuals [3]. Hidden Markov Models (HMMs) [4] are
often used as the basis of existing models of genetic variations that exploit this mosaic structure
(e.g. [3, 5]). However, HMMs, as dynamic generalisations of finite mixture models, cannot flexibly
model the number of states needed for a particular dataset, and suffer from the same label switching
non-identifiability problems of finite mixture models [6] (see Section 3.2). While nonparametric
generalisations of HMMs [7, 8, 9] allow for flexible modelling of the number of states, they still
suffer from label switching problems.
In this paper we propose alternative Bayesian nonparametric models for genetic variations called
fragmentation-coagulation processes (FCPs). An FCP defines a Markov process on the space of partitions of haplotypes, such that the random partition at each time is marginally a Chinese restaurant
process (CRP). The clusters of the FCP are used in the place of HMM states. FCPs do not require
the number of clusters in each partition to be specified, and do not have explicit labels for clusters
thus avoid label switching problems. The partitions of FCPs evolve via a series of events, each of
which involves either two clusters merging into one, or one cluster splitting into two. We will see
that FCPs are natural models for the mosaic structure of SNP data since they can flexibly accommodate varying numbers of subsequences and they do not have the label switching problems inherent
in HMMs. Further, computations in FCPs scale well.
There is a rich literature on modelling genetic variations. The standard coalescent with recombination (also known as the ancestral recombination graph) model describes the genealogical history of
a set of haplotypes using coalescent, recombination and mutation events [10]. Though an accurate
model of the genetic process, inference is unfortunately highly intractable. PHASE [11, 12] and IMPUTE [13] are a class of HMM based models, where each HMM state corresponds to a haplotype
in a reference panel (training set). This alleviates the label switching problem, but incurs higher
computational costs than the normal HMMs or our FCP since there are now as many HMM states
as reference haplotypes. BEAGLE [14] introduces computational improvements by collapsing the
multiple occurrences of the same mosaic subsequence across the reference haplotypes into a single
node of a graph, with the graph constructed in a very efficient but somewhat ad hoc manner.
Section 2 introduces preliminary notation and describes random partitions and the CRP. In Section 3
we introduce FCPs, discuss their more salient properties, and describe how they are used to model
SNP data. Section 4 describes an auxiliary variables Gibbs sampler for our model. Section 5 presents
results on simulated and real data, and Section 6 concludes.
2 Random Partitions

Let S denote a set of n SNP sequences. Label the sequences by the integers 1, . . . , n so that S can be taken to be [n] = {1, . . . , n}. A partition π of S is a set of disjoint non-empty subsets of S (called clusters) whose union is S. Denote the set of partitions of S by Π_S. If a ⊆ S, define the projection π|_a of π onto a to be the partition of a obtained by removing the elements of S\a, as well as any resulting empty subsets, from π. The canonical distribution over Π_S is the Chinese restaurant process (CRP) [15, 16]. It can be described using an iterative generative process: n customers enter a Chinese restaurant one at a time. The first customer sits at some table, and each subsequent customer sits at a table with m current customers with probability proportional to m, or at a new table with probability proportional to α, where α is a parameter of the CRP. The seating arrangement of customers around tables forms a partition π of S, with occupied tables corresponding to the clusters in π. We write π ∼ CRP(α, S) if π ∈ Π_S is a CRP-distributed random partition over S. Multiplying the conditional probabilities together gives the probability mass function of the CRP:

f_{\alpha,S}(\pi) = \frac{\alpha^{|\pi|} \Gamma(\alpha)}{\Gamma(n + \alpha)} \prod_{a \in \pi} \Gamma(|a|)    (1)

where Γ is the gamma function. The CRP is exchangeable (invariant to permutations of S) and projective (the probability of the projection π|_a is simply f_{α,a}(π|_a)), so it can be extended in a natural manner to partitions of ℕ, and it is related via de Finetti's theorem to the Dirichlet process [17].
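The sequential description above translates directly into code. The sketch below draws π ∼ CRP(α, S) by seating customers one at a time and evaluates the log of the probability mass function (1); it is an illustration with names of our choosing, not code from the paper.

import math
import random

def sample_crp(n, alpha, rng=random):
    """Draw a partition of {0, ..., n-1} by sequential CRP seating."""
    tables = []  # each table is a list of customer indices
    for i in range(n):
        weights = [len(t) for t in tables] + [alpha]  # existing tables, then a new one
        r = rng.random() * sum(weights)
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(tables):
            tables.append([i])       # sit at a new table, with probability prop. to alpha
        else:
            tables[k].append(i)      # join table k, with probability prop. to its size
    return tables

def crp_log_pmf(partition, alpha):
    """log f_{alpha,S}(pi) from Eq (1), computed with log-gamma for stability."""
    n = sum(len(a) for a in partition)
    return (len(partition) * math.log(alpha)
            + math.lgamma(alpha) - math.lgamma(n + alpha)
            + sum(math.lgamma(len(a)) for a in partition))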
3 Fragmentation-Coagulation Processes

A fragmentation-coagulation process (FCP) is a continuous-time Markov process π = (π(t), t ∈ [0, T]) over a time interval [0, T] where each π(t) is a random partition in Π_S. Since the space of partitions for a finite S is finite, the FCP is a Markov jump process (MJP) [18]: it evolves according to a discrete series of random events (or jumps) at which it changes state, and at all other times the state remains unchanged. In particular, the jump events in an FCP are either fragmentations or coagulations. A fragmentation at time t involves a cluster c ∈ π(t−) splitting into exactly two non-empty clusters a, b ∈ π(t) (all other clusters stay unchanged; the t− notation means an infinitesimal time before t), and a coagulation at t involves two clusters a, b ∈ π(t−) merging to form a single cluster c = a ∪ b ∈ π(t) (see Figure 1). Note that fragmentations and coagulations are converses of each other; as we will see later, this will lead to some important properties of the FCP.
[Figure 1: FCP cartoon. Each line is a sequence and bundled lines form clusters. C: coagulation event. F: fragmentation event. The fractions are, for the orange sequence, from left to right: the probability of joining cluster c at time 0, the probability of following cluster a at a fragmentation event, the rate of starting a new table (creating a fragmentation), and the rate of joining an existing table (creating a coagulation).]
Following the various popular culinary processes in Bayesian nonparametrics, we will start by describing the law of π in terms of the conditional distribution of the cluster membership of each sequence i given those of 1, . . . , i − 1. Since we have a Markov process with a time index, the metaphor is of a Chinese restaurant operating from time 0 to time T, where customers (sequences) may move from one table (cluster) to another and tables may split and merge at different points in time, so that the seating arrangements (partition structures) at different times might not be the same. To be more precise, define π|[i−1] = (π|[i−1](t), t ∈ [0, T]) to be the projection of π onto the first i − 1 sequences. π|[i−1] is piecewise constant, with π|[i−1](t) ∈ Π_[i−1] describing the partitioning of the sequences 1, . . . , i − 1 (the seating arrangement of customers 1, . . . , i − 1) at time t. Let a_i(t) = c\{i}, where c is the unique cluster in π|[i](t) containing i. Note that either a_i(t) ∈ π|[i−1](t), meaning customer i sits at an existing table in π|[i−1](t), or a_i(t) = ∅, which will mean that customer i sits at a new table. Thus the function a_i describes customer i's choice of table to sit at through times [0, T]. We define the conditional distribution of a_i given π|[i−1] as a Markov jump process evolving from time 0 to T with two parameters α > 0 and R > 0 (see Figure 1):
i = 1: The first customer sits at a table for the duration of the process, i.e. a_1(t) = ∅ for all t ∈ [0, T].

t = 0: Each subsequent customer i starts at time t = 0 by sitting at a table according to CRP probabilities with parameter α. So a_i(0) = c ∈ π|[i−1](0) with probability proportional to |c|, and a_i(0) = ∅ with probability proportional to α.

F1: At time t > 0, if customer i is sitting at table a_i(t−) = c ∈ π|[i−1](t−), and the table c fragments into two tables a, b ∈ π|[i−1](t), customer i will move to table a with probability |a|/|c|, and to table b with probability |b|/|c|.

C1: If the table c merges with another table at time t, the customer simply follows the other customers to the resulting merged table.

F2: At all other times t, if customer i is sitting at some existing table a_i(t−) = c ∈ π|[i−1](t), then the customer will move to a new empty table (a_i(t) = ∅) with rate R/|c|.

C2: Finally, if i is sitting by himself (a_i(t−) = ∅), then he will join an existing table a_i(t) = c ∈ π|[i−1](t) with rate R/α. The total rate of joining any existing table is |π|[i−1](t)| R/α.
Note that when customer i moves to a new table in step F2, a fragmentation event is created, and all subsequent customers who end up in the same table will have to decide at step F1 whether to move to the original table or to the table newly created by i. The probabilities in steps F1 and F2 are exactly the same as those for a Dirichlet diffusion tree [19] with constant divergence function R. Similarly, step C2 creates a coagulation event in which subsequent customers seated at the two merging tables will move to the merged table in step C1, and the probabilities are exactly the same as those for Kingman's coalescent [20, 21]. Thus our FCP is a combination of the Dirichlet diffusion tree and Kingman's coalescent. Theorem 3 below shows that this combination results in FCPs being stationary Markov processes with CRP equilibrium distributions. Further, FCPs are reversible, so in a sense the Dirichlet diffusion tree and Kingman's coalescent are duals of each other.
Given π|[i−1], π|[i] is uniquely determined by a_i and vice versa, so the seating of all n customers through times [0, T], a_1, . . . , a_n, uniquely determines the sequential partition structure π. We now investigate various properties of π that follow from the iterative construction above. The first is an alternative characterisation of π as an MJP whose transitions are fragmentations or coagulations, an unsurprising observation since both the Dirichlet diffusion tree and Kingman's coalescent, as partition-valued processes, are Markov.
Theorem 1. π is an MJP with initial distribution π(0) ∼ CRP(α, S) and stationary transition rates

q(\pi, \rho) = R \frac{\Gamma(|a|)\Gamma(|b|)}{\Gamma(|c|)}, \qquad q(\rho, \pi) = \frac{R}{\alpha}    (2)

where π, ρ ∈ Π_S are such that ρ is obtained from π by fragmenting a cluster c ∈ π into two clusters a, b ∈ ρ (at rate q(π, ρ)), and conversely π is obtained from ρ by coagulating a, b into c (at rate q(ρ, π)). The total rate of transition out of π is:

q(\pi, \cdot) \equiv \sum_{\rho} q(\pi, \rho) = R \sum_{c \in \pi} H_{|c|-1} + \frac{R}{\alpha} \cdot \frac{|\pi|(|\pi|-1)}{2}    (3)

where H_{|c|−1} is the (|c| − 1)st harmonic number.
Proof. The initial distribution follows from the CRP probabilities of step t = 0. For every i, a_i is Markov and a_i(t) depends only on a_i(t−) and π|[i−1](t), thus (a_i(s), s ∈ [0, t]) depends only on (a_j(s), s ∈ [0, t], j < i), and the Markovian structure of π follows by induction. Since Π_S is finite, π is an MJP. Further, the probabilities and rates in steps F1, F2, C1 and C2 do not depend explicitly on t, so π has stationary transition rates. By construction, q(π, ρ) is only non-zero if π and ρ are related by a complementary pair of fragmentation and coagulation events, as in the theorem.

To derive the transition rates (2), recall that a transition rate r from state s to state s′ means that if the MJP is in state s at time t then it will transit to state s′ by an infinitesimal time later, t + δ, with probability δr. For the fragmentation rate q(π, ρ), the probability of transiting from π to ρ in an infinitesimal time δ is δ times the rate at which a customer starts his own table in step F2, times the probabilities of subsequent customers choosing either table in step F1 to form the two tables a and b. Dividing this product by δ gives the rate q(π, ρ). Without loss of generality, suppose that the table started by the customer eventually becomes a and that there were j other customers at the existing table which eventually becomes b. Then the rate of the customer starting his own table is R/j, and the product of the probabilities of the subsequent customer choices in step F1 is

\frac{1 \cdot 2 \cdots (|a|-1) \cdot j \cdots (|b|-1)}{(j+1) \cdots (|c|-1)}.

Multiplying these together gives q(π, ρ) in (2). Similarly, the coagulation rate q(ρ, π) is the product of the rate R/α at which a customer moves from his own table to an existing table in step C2 and the probability of all subsequent customers in either table moving to the merged table (which is just 1).

Finally, the total transition rate q(π, ·) is a sum over all possible fragmentations and coagulations of π. There are |π|(|π| − 1)/2 possible pairs of clusters to coagulate, giving the second term of (3). The first term is obtained by summing over all c ∈ π, and over all unordered pairs a, b resulting from fragmenting c, using the identity Σ_{a,b} Γ(|a|)Γ(|b|)/Γ(|c|) = H_{|c|−1}.
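Theorem 1 also gives a direct recipe for simulating an FCP as a Markov jump process. The sketch below runs a Gillespie-style simulation using the rates in (2); since it enumerates all two-block splits of every cluster, it is exponential in cluster size and intended only as an illustration for a handful of sequences (n ≥ 2). Initialising with a draw from sample_crp above keeps the chain at its CRP(α, S) equilibrium (Theorem 3).

import itertools
import math
import random

def frag_rate(na, nb, R):
    # R * Gamma(|a|) * Gamma(|b|) / Gamma(|c|), with |c| = |a| + |b|, as in Eq (2)
    return R * math.exp(math.lgamma(na) + math.lgamma(nb) - math.lgamma(na + nb))

def simulate_fcp(initial_partition, alpha, R, T, rng=random):
    """Simulate (pi(t), t in [0, T]); returns the list of (jump time, partition)."""
    pi = [frozenset(c) for c in initial_partition]
    t, history = 0.0, [(0.0, list(pi))]
    while True:
        events = []
        for c in pi:  # every way of fragmenting a cluster into two non-empty blocks
            members = sorted(c)
            for r in range(1, len(members)):
                for a in itertools.combinations(members[1:], r):  # members[0] stays in b
                    a = frozenset(a)
                    events.append((frag_rate(len(a), len(c) - len(a), R),
                                   ("F", c, a, c - a)))
        for a, b in itertools.combinations(pi, 2):  # every pair may coagulate
            events.append((R / alpha, ("C", a, b)))
        total = sum(rate for rate, _ in events)
        t += rng.expovariate(total)  # exponential holding time with the total rate (3)
        if t > T:
            return history
        u = rng.random() * total     # pick an event proportional to its rate
        for rate, ev in events:
            u -= rate
            if u <= 0:
                break
        if ev[0] == "F":
            _, c, a, b = ev
            pi = [k for k in pi if k != c] + [a, b]
        else:
            _, a, b = ev
            pi = [k for k in pi if k not in (a, b)] + [a | b]
        history.append((t, list(pi)))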
Theorem 2. π is projective and exchangeable. Thus it can be extended naturally to a Markov process over partitions of ℕ.

Proof. Both properties follow from the fact that both the initial distribution CRP(α, S) and the transition rates (2) are projective and exchangeable. Here we give more direct arguments for the theorem. Projectivity is a direct consequence of the iterative construction, showing that the law of π|[i] does not depend on the clustering trajectories a_j of subsequent customers j > i. We can show exchangeability of π by deriving the joint probability density of a sample path of π (the density exists since both Π_S and T are finite, so π has a finite number of events on [0, T]) and seeing that it is invariant to permutations of S. For an MJP, the probability of a sample path is the probability of the initial state (f_{α,S}(π(0))) times, for each subsequent jump, the probability of staying in the current state π until the jump (the holding time is exponentially distributed with rate q(π, ·)) and of the transition from π to the next state ρ (this is the ratio q(π, ρ)/q(π, ·)), and finally the probability of not transiting from the last jump time to T. Multiplying these probabilities together gives, after simplification:

p(\pi) = R^{|C|+|F|}\, \alpha^{|A|-2|C|-2|F|}\, \frac{\Gamma(\alpha)}{\Gamma(\alpha+n)} \exp\left(-\int_0^T q(\pi(t), \cdot)\, dt\right) \frac{\prod_{a \in A^{<>}} \Gamma(|a|)}{\prod_{a \in A^{><}} \Gamma(|a|)}    (4)

with |C| the number of coagulations, |F| the number of fragmentations, and A, A^{<>}, A^{><} sets of paths in π. A path is a cluster created either at time 0 or at a coagulation or fragmentation, and it exists for a definite amount of time until it is terminated at time T or by another event (these are the horizontal bundles of lines in Figure 1). A is the set of all paths in π, A^{<>} the set of paths created either at time 0 or by a fragmentation and terminated either at time T or by a coagulation, and A^{><} the set of paths created by a coagulation and terminated by a fragmentation or at time T.
Theorem 3. π is ergodic and has equilibrium distribution CRP(α, S). Further, it is reversible, with (π(T − t), t ∈ [0, T]) having the same law as π.

Proof. Ergodicity follows from the fact that for any T > 0 and any two partitions π, ρ ∈ Π_S, there is positive probability that a chain starting at π(0) = π ends with π(T) = ρ. For example, it may undergo a sequence of fragmentations until each sequence belongs to its own cluster, then a sequence of coagulations forming the clusters in ρ. Reversibility and the equilibrium distribution can be demonstrated by detailed balance. Suppose π, ρ ∈ Π_S and a, b, c are related as in Theorem 1. Then

f_{\alpha,S}(\pi)\, q(\pi, \rho) = \frac{\alpha^{|\pi|} \Gamma(\alpha)}{\Gamma(n+\alpha)} \prod_{k \in \pi} \Gamma(|k|) \cdot R \frac{\Gamma(|a|)\Gamma(|b|)}{\Gamma(|c|)}
= \frac{\alpha^{|\pi|+1} \Gamma(\alpha)}{\Gamma(n+\alpha)}\, \Gamma(|a|)\Gamma(|b|) \prod_{k \in \pi, k \neq c} \Gamma(|k|) \cdot \frac{R}{\alpha} = f_{\alpha,S}(\rho)\, q(\rho, \pi)    (5)

Finally, the terms in (4) are invariant to time reversal, i.e. p((π(T − t), t ∈ [0, T])) = p(π).

Theorem 3 shows that the α parameter controls the marginal distributions of π(t), while (2) indicates that the R parameter controls the rate at which π evolves.
3.1 A Model of SNP Sequences

We model the n SNP sequences (haplotypes) with an FCP π over partitions of S = [n]. Let the m assayed SNP locations on a chunk of the chromosome be at positions t_1 < t_2 < ··· < t_m. The i-th haplotype consists of observations x_{i1}, . . . , x_{im} ∈ {0, 1}, each corresponding to a binary SNP variant. For j = 1, . . . , m, and for each cluster c ∈ π(t_j) at position t_j, we have a parameter θ_{cj} ∼ Bernoulli(β_j) which denotes the variant at location t_j of the corresponding subsequence. For each i ∈ c we model x_{ij} as equal to θ_{cj} with probability 1 − ε, where ε is a noise probability. We place a prior β_j ∼ Beta(λβ̄_j, λ(1 − β̄_j)) with mean β̄_j given by the empirical mean of variant 1 at SNP j among the observed haplotypes. We place uninformative uniform priors on log R, log α and log λ over a bounded but large range such that the boundaries were never encountered.
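The observation model is simple to write down generatively. The sketch below samples haplotype observations given fixed partitions at the SNP locations; the symbol names mirror our reconstruction of the model above (θ for cluster variants, β for SNP frequencies, ε for the noise probability) and should be read as assumptions rather than the paper's code.

import random

def sample_haplotypes(partitions, beta_bar, lam, eps, rng=random):
    """Sample x[i][j] given partitions[j], the clusters at SNP j (lists of sequence ids).

    beta_bar[j] -- empirical frequency of variant 1 at SNP j (prior mean)
    lam         -- concentration of the Beta prior on beta_j
    eps         -- probability that an observation flips the cluster's variant
    """
    n = sum(len(c) for c in partitions[0])
    m = len(partitions)
    x = [[0] * m for _ in range(n)]
    for j, clusters in enumerate(partitions):
        beta_j = rng.betavariate(lam * beta_bar[j], lam * (1.0 - beta_bar[j]))
        for c in clusters:
            theta_cj = 1 if rng.random() < beta_j else 0  # variant carried by cluster c
            for i in c:
                flip = rng.random() < eps
                x[i][j] = 1 - theta_cj if flip else theta_cj
    return x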
The properties of FCPs in Theorems 1-3 are natural in the modelling setting here. Projectivity and
exchangeability relate to the assumption that sequence labels should not have an effect on the model,
while stationarity and reversibility arise from the simplifying assumption that we do not expect the
genetic processes operating in different parts of the genome to be different. These are also properties
of the standard coalescent with recombination model of genetic variations [10]. Incidentally the
coalescent with recombination model is not Markov, though there have been Markov approximations
[22, 23], and all practical HMM based methods are Markov.
3.2 HMMs and the Label Switching Problem
HMMs can also be interpreted as sequential partitioning processes in which each state at time step t
corresponds to a cluster in the partition at t. Since each sequence can be in different states at different
times this automatically induces a partition-structured Markov process, where each partition consists
of at most K clusters (K being the number of states in the HMM), and where each cluster is labelled
with an HMM state. This labelling of the clusters in HMMs is a significant, but subtle, difference
between HMMs and FCPs. Note that the clusters in FCPs are unlabelled, and defined purely in
terms of the sequences they contain. This labelling of the clusters in HMMs are a significant source
of non-identifiability in HMMs, since the likelihoods of data items (and often even the priors over
transition probabilities) are invariant to the labels themselves so that each permutation over labels
creates a mode in the posterior. This is the so called ?label switching problem? for finite mixture
models [6]. Since the FCP clusters are unlabelled they do not suffer from label switching problems.
On the other hand, by having labelled clusters HMMs can share statistical strength among clusters
across time steps (e.g. by enforcing the same emission probabilities from each cluster across time),
while FCPs do not have a natural way of sharing statistical strength across time. This means that
FCPs are not suitable for sequential data where there is no natural correspondence between times
across different sequences, e.g. time series data like speech and video.
3.3 Discrete Time Markov Chain Construction

FCPs can be derived as continuous time limits of discrete time Markov chains constructed from fragmentation and coagulation operators [24]. This construction is more intuitive but lacks the rigour of the development described here. Let CRP(α, d, S) be a generalisation of the CRP on S with an additional discount parameter d (see [25] for details). For any Δ > 0, construct a Markov chain over π(0), π(Δ), π(2Δ), . . . as follows: π(0) ∼ CRP(α, 0, S); then for every m ≥ 1, define ρ(mΔ) to be the partition obtained by fragmenting each cluster c ∈ π((m−1)Δ) by a partition drawn independently from CRP(0, RΔ, c), and π(mΔ) is constructed by coagulating into one the clusters of ρ(mΔ) belonging to the same cluster in a draw from CRP(α/RΔ, 0, ρ(mΔ)). Results from [26] (see also [27]) show that marginally each ρ(mΔ) ∼ CRP(α, RΔ, S) and π(mΔ) ∼ CRP(α, 0, S). The various properties of FCPs, i.e. Markov, projectivity, exchangeability, stationarity, and reversibility, hold for this discrete time Markov chain, and the continuous time π can be derived by taking Δ → 0.
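This construction is straightforward to implement given a sampler for the two-parameter CRP. The sketch below performs one fragment-then-coagulate step; the seating rule used (a new table with probability proportional to α + d·(number of tables), an existing table proportional to its size minus d) is the standard two-parameter CRP seating scheme, assumed here to match [25], and the variable names are ours.

import random

def sample_crp2(items, alpha, d, rng=random):
    """Two-parameter CRP(alpha, d) partition of `items` by sequential seating."""
    tables = []
    for x in items:
        weights = [len(t) - d for t in tables] + [alpha + d * len(tables)]
        r = rng.random() * sum(weights)
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(tables):
            tables.append([x])
        else:
            tables[k].append(x)
    return [frozenset(t) for t in tables]

def fc_step(pi, alpha, R, delta, rng=random):
    """One step of the discrete-time chain: fragment, then coagulate."""
    # Fragment each cluster c by an independent draw from CRP(0, R*delta, c).
    rho = [a for c in pi for a in sample_crp2(sorted(c), 0.0, R * delta, rng)]
    # Merge clusters that fall in the same block of a CRP(alpha/(R*delta), 0) draw.
    grouping = sample_crp2(rho, alpha / (R * delta), 0.0, rng)
    return [frozenset().union(*g) for g in grouping]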
4 Gibbs Sampling using Uniformization

We use a Gibbs sampler for inference in the FCP given SNP haplotype data. Each iteration of the sampler involves treating the i-th haplotype sequence as the last sequence to be added into the FCP partition structure (making use of exchangeability), so that the iterative procedure described in Section 3 gives the conditional prior of a_i given π|S\{i}. Coupling this with the likelihood terms of x_{i1}, . . . , x_{im} gives us the desired conditional distribution of a_i. Since this conditional distribution of a_i is Markov, we can make use of the forward filtering-backward sampling procedure to sample it. However, a_i is a continuous-time MJP, so a direct application of the typical forward-backward algorithm is not possible. One possibility is to marginalise out the sample path of a_i except at a finite number of locations (corresponding to the jumps in π|S\{i} and the SNP locations). This approach is computationally expensive as it requires many matrix exponentiations, and it does not resolve the issue of obtaining a full sample path of a_i, which may involve jumps at random locations we have marginalised out.

Instead, we make use of a recently developed MCMC inference method for MJPs [28]. This sampler introduces as auxiliary variables a set of "potential jump points" distributed according to a Poisson process with piecewise constant rates, such that conditioned on them the posterior of a_i becomes a Markov chain that can only transition at either its previous jump locations or the potential jump points, and we can then apply standard forward-backward to sample a_i. For each t the state space of a_i(t) is C_i^t = π|S\{i}(t) ∪ {∅}. For s, s′ ∈ C_i^t let Q_t(s, s′) be the transition rate from state s to s′ given in Section 3, with Q_t(s, s) = −Σ_{s′≠s} Q_t(s, s′). Let Ω_t > max_{s∈C_i^t} −Q_t(s, s) be an upper bound on the transition rates of a_i at time t, let a_i^0 be the previous sample path of a_i, let J^0 be the jumps in a_i^0, and let E consist of the m SNP locations and the event times in π|S\{i}. Let M_t(s) be the forward message at time t and state s ∈ C_i^t. The resulting forward-backward sampling algorithm is given below. In addition, we update the logarithms of R, α and λ by slice sampling.

1. Sample potential jumps J^aux ∼ Poisson(ω) with rate ω(t) = Ω_t + Q_t(a_i^0(t), a_i^0(t)).

2. Compute forward messages by iterating over t ∈ {0} ∪ J^aux ∪ J^0 ∪ E from left to right:

2a. At t = 0, set M_t(s) ∝ |s| for s ∈ π|S\{i}(0) and M_t(∅) ∝ α.

2b. At a fragmentation in π|S\{i}, say of c into a, b, set M_t(a) = (|a|/|c|) M_{t−}(c), M_t(b) = (|b|/|c|) M_{t−}(c), and M_t(k) = M_{t−}(k) for k ≠ a, b, c. Here t− denotes the time of the previous iteration.

2c. At a coagulation in π|S\{i}, say of a, b into c, set M_t(c) = M_{t−}(a) + M_{t−}(b).

2d. At an observation, say t = t_j, set M_t(s) = p(x_{ij} | θ_{sj}) M_{t−}(s). We integrate out θ_{·j} and β_j.

2e. At a potential jump in J^aux ∪ J^0, set M_t(s) = Σ_{s′∈C_i^t} M_{t−}(s′) (1(s′ = s) + Q_t(s′, s)/Ω_t).

3. Get a new sample path a_i by backward sampling. This is straightforward and involves reversing the message computations above. Note that a_i can only jump at the times in J^aux ∪ J^0, and change state at times in E if it was involved in the fragmentation or coagulation event.
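The following sketch illustrates the uniformization idea of [28] for a generic finite-state MJP with a constant rate bound Ω and a fixed rate matrix Q. It is a simplification, not the FCP conditional: the time-varying state space and the observation and event updates 2a-2d above are omitted, so this resamples a path from the prior.

import random

def resample_mjp_path(path, Q, Omega, T, pi0, rng=random):
    """One uniformization resampling step for a finite-state MJP (after [28]).

    path  -- current trajectory as a time-sorted list of (time, state), path[0][0] == 0.0
    Q     -- rate matrix: Q[s][s2] >= 0 for s != s2, Q[s][s] = -(sum of off-diagonal row)
    Omega -- uniformization rate with Omega > max_s(-Q[s][s])
    pi0   -- initial distribution over states
    """
    S = range(len(Q))
    state_at = lambda t: [s for (u, s) in path if u <= t][-1]
    # 1. Potential jumps: thin a rate-Omega Poisson process to rate Omega + Q[s][s].
    times, t = [], 0.0
    while True:
        t += rng.expovariate(Omega)
        if t > T:
            break
        if rng.random() < 1.0 + Q[state_at(t)][state_at(t)] / Omega:
            times.append(t)
    grid = sorted(set(times) | {u for (u, _) in path if u > 0.0})
    # 2. Forward messages under the transition matrix B = I + Q/Omega.
    B = [[(1.0 if s == s2 else 0.0) + Q[s][s2] / Omega for s2 in S] for s in S]
    M = [list(pi0)]
    for _ in grid:
        M.append([sum(M[-1][s] * B[s][s2] for s in S) for s2 in S])
    # 3. Backward sampling of the state on each inter-jump interval.
    s = rng.choices(list(S), weights=M[-1])[0]
    new_path = [(grid[k], s) for k in ()]  # filled below, newest-to-oldest
    new_path = []
    for k in range(len(grid) - 1, -1, -1):
        new_path.append((grid[k], s))
        s = rng.choices(list(S), weights=[M[k][s0] * B[s0][s] for s0 in S])[0]
    new_path.append((0.0, s))
    return sorted(new_path)  # self-transitions at virtual jumps can be pruned afterwards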
5 Experiments
Label switching problem. Figure 2 demonstrates the label switching problem (Section 3.2) during block Gibbs sampling of a 2-state Bayesian HMM (BHMM) compared to inference in an FCP.

[Figure 2: Label switching problem. Left: each line is the median, over 10 runs, of the normalized log-likelihoods of a Bayesian HMM (blue) and an FCP (red) at each iteration of MCMC; lighter polygons are the 25% and 75% percentiles. Right: number of MCMC iterations before each model first encounters the optimum states.]

The observed data comprises 16 sequences of length 16. Eight of the sequences consist of just zeros
and the others consist of just ones. Each of the binary BHMM states z_ij ∈ {0, 1}, with i indexing the sequence and j the position within sequence i, transits to the same state with probability τ, with a prior τ ∼ Beta(10.0, 0.1) encouraging self transitions. The observations of the BHMM have distribution x_ij ∼ Bernoulli(φ_{z_ij}) where φ_1 = 1 − φ_0 and φ_0 ∼ Beta(1.0, 1.0). The optimal clustering under both models assigns all zero observations to one state and all ones to another state.
As shown in Figure 2, due to the lack of identifiability of its states, the BHMM requires more MCMC
iterations through the data before inference converges upon an optimal state, whilst an FCP is able
to find the correct state much more quickly. This is reflected in both the normalized log-likelihood
of the models in Figure 2(left) and in the number of iterations before reaching the optimal state,
Figure 2(right).
Imputation from phased data. To reduce costs, typically not all known SNPs are assayed for each participant in a large association study. The problem of inferring the variants of unassayed SNPs in a study using a larger dataset (e.g. HapMap or 1000 Genomes) is called genotype imputation [13].

Figure 3 compares the genotype imputation accuracy of the FCP with that of fastPHASE [5] and BEAGLE [14], two state-of-the-art methods. We used 3000 MCMC iterations for inference with the FCP, with the first 1000 iterations discarded as burn-in. We used 320 genes from 47 individuals in the Seattle SNPs dataset [29]. Each gene consists of 94 sequences, of length between 13 and 416 SNPs. We held out 10%–50% of the SNPs uniformly among all haplotypes for testing. Our model had higher accuracy than both fastPHASE and BEAGLE.

[Figure 3: Accuracy vs proportion of missing data for imputation from phased data. Lines are drawn at the means and error bars at the standard error of the means.]
Imputation from unphased data. In humans,
most chromosomes come in pairs. Current assaying methods are unable to determine from which
of these two chromosomes each variant originates without employing expensive protocols, thus the
data for each individual in large datasets actually consist of sequences of unordered pairs of variants
(called genotypes). This includes the Seattle SNPs dataset (the haplotypes provided by [29] in the
previous experiment were phased using PHASE [11, 12]).
In this experiment, we performed imputation using the original unphased genotypes, using an extension of the FCP able to handle this sort of data. Figure 4 shows the genotype imputation accuracies
and run-times of the FCP model (with 60, 600 or 3000 MCMC iterations of which 30, 200 or 600
were discarded for burn-in) and state-of-the-art software (fastPHASE [5], IMPUTE2 [30], BEAGLE
[14]).

[Figure 4: Time and accuracy performance of genotype imputation on 231 Seattle SNPs genes.
Left: accuracies evaluated by removing 10%–50% of SNPs from 10%–50% of individuals, repeated five times on each gene with the same hold-out proportions. Centers of crosses correspond to median accuracy and times, whiskers to the extent of the inter-quartile range.
Middle: lines are accuracy averaged over five repetitions of each gene with 30% of shared SNPs removed from 10%–50% of individuals. Each repetition uses a different subset of SNPs and individuals. Lighter polygons are standard errors.
Right: as Middle, except with 10%–50% of shared SNPs removed from 30% of individuals.]

We held out 10%–50% of the shared SNPs in 10%–50% of the 47 individuals of the Seattle SNPs dataset. This paradigm mimics a popular experimental setting in which the genotypes of
sparsely assayed individuals are imputed using a densely assayed reference panel [30]. We discarded
89 of the genes as they were unable to be properly pre-processed for use with IMPUTE2.
As can be seen in Figure 4, FCP achieves similar state-of-the-art accuracy to IMPUTE2 and fastPHASE. Given enough iterations, the FCP outperforms all other methods in terms of accuracy.
With 600 iterations, FCP has almost the same accuracy and run-time as fastPHASE. With just 60
iterations, FCP performs comparably to IMPUTE2 but is an order of magnitude faster. Note that
IMPUTE2 scales quadratically in the number of genotypes, so we expect FCPs to be more scalable.
Finally, BEAGLE is the fastest algorithm but has worst accuracies.
6 Discussion
We have proposed a novel class of Bayesian nonparametric models called fragmentation-coagulation
processes (FCPs), and applied them to modelling population genetic variations, showing encouraging empirical results on genotype imputation. FCPs are the simplest non-trivial examples of exchangeable fragmentation-coalescence processes (EFCPs) [31]. In general EFCPs, the fragmentation
and coagulation events may involve more than two clusters. They also have an erosion operation,
where a single element of S forms a single element cluster. EFCPs were studied by probabilists for
their theoretical properties, and our work represents the first application of EFCPs as probabilistic
models of real data, and the first inference algorithm derived for EFCPs.
There are many interesting avenues for future research. Firstly, we are currently exploring a number
of other applications in population genetics, including phasing and genome-wide association studies.
Secondly, it would be interesting to explore the discrete time Markov chain version of FCPs, which
although not as elegant will have simpler and more scalable inference. Thirdly, the haplotype graph
in BEAGLE is constructed via a series of cluster splits and merges, and bears striking resemblance
to the partition structures inferred by FCPs. It would be interesting to explore the use of BEAGLE
as a fast initialisation of FCPs, and to use FCPs as a Bayesian interpretation of BEAGLE. Finally,
beyond population genetics, FCPs can also be applied to other time series and sequential data, e.g.
the time evolution of community structure in network data, or topical change in document corpora.
Acknowledgements
We thank the Gatsby Charitable Foundation for generous funding, and Vinayak Rao, Andriy Mnih,
Chris Holmes and Gil McVean for fruitful discussions.
References
[1] The International HapMap Consortium. The international HapMap project. Nature, 426:789–796, 2003.
[2] The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature, 467:1061–1073, 2010.
[3] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and R. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29:229–232, 2001.
[4] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77:257–285, 1989.
[5] P. Scheet and M. Stephens. A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotypic phase. The American Journal of Human Genetics, 78(4):629–644, 2006.
[6] A. Jasra, C. C. Holmes, and D. A. Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50–67, 2005.
[7] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, 2002.
[8] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[9] E. P. Xing and K. Sohn. Hidden Markov Dirichlet process: Modeling genetic recombination in open ancestral space. Bayesian Analysis, 2(2), 2007.
[10] R. R. Hudson. Properties of a neutral allele model with intragenic recombination. Theoretical Population Biology, 23(2):183–201, 1983.
[11] M. Stephens and P. Donnelly. A comparison of Bayesian methods for haplotype reconstruction from population genotype data. American Journal of Human Genetics, 73:1162–1169.
[12] N. Li and M. Stephens. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics, 165(4):2213–2233, 2003.
[13] J. Marchini, B. Howie, S. Myers, G. McVean, and P. Donnelly. A new multipoint method for genome-wide association studies by imputation of genotypes. Nature Genetics, 39(7):906–913, 2007.
[14] B. L. Browning and S. R. Browning. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. American Journal of Human Genetics, 84:210–223, 2009.
[15] D. Aldous. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII–1983, pages 1–198. Springer, Berlin, 1985.
[16] J. Pitman. Combinatorial Stochastic Processes. Lecture Notes in Mathematics. Springer-Verlag, 2006.
[17] D. Blackwell and J. B. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1:353–355, 1973.
[18] E. Çinlar. Introduction to Stochastic Processes. Prentice Hall, 1975.
[19] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[20] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982. Essays in Statistical Science.
[21] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235–248, 1982.
[22] G. A. T. McVean and N. J. Cardin. Approximating the coalescent with recombination. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360(1459):1387–1393, 2005.
[23] P. Marjoram and J. Wall. Fast "coalescent" simulation. BMC Genetics, 7(1):16, 2006.
[24] J. Bertoin. Random Fragmentation and Coagulation Processes. Cambridge University Press, 2006.
[25] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[26] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870–1902, 1999.
[27] J. Gasthaus and Y. W. Teh. Improvements to the sequence memoizer. In Advances in Neural Information Processing Systems, 2010.
[28] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[29] NHLBI Program for Genomic Applications. SeattleSNPs. June 2011. http://pga.gs.washington.edu.
[30] B. N. Howie, P. Donnelly, and J. Marchini. A flexible and accurate genotype imputation method for the next generation of genome-wide association studies. PLoS Genetics, (6), 2009.
[31] J. Berestycki. Exchangeable fragmentation-coalescence processes and their equilibrium measures. http://arxiv.org/abs/math/0403154, 2004.
Fast and Balanced: Efficient Label Tree Learning for
Large Scale Object Recognition
Jia Deng1,2 , Sanjeev Satheesh1 , Alexander C. Berg3 , Li Fei-Fei1
Computer Science Department, Stanford University1
Computer Science Department, Princeton University2
Computer Science Department, Stony Brook University3
Abstract
We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique
to simultaneously determine the structure of the tree and learn the classifiers for
each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced
trees. Experiments are performed on large scale image classification with 10184
classes and 9 million images. We demonstrate significant improvements in test
accuracy and efficiency with less training time and more balanced trees compared
to the previous state of the art by Bengio et al.
1 Introduction
Classification problems with many classes arise in many important domains and pose significant
computational challenges. One prominent example is recognizing tens of thousands of visual object
categories, one of the grand challenges of computer vision. The large number of classes renders the
standard one-versus-all multiclass approach too costly, as the complexity grows linearly with the
number of classes, for both training and testing, making it prohibitive for practical applications that
require low latency or high throughput, e.g. those in robotics or in image retrieval.
Classification with many classes has received increasing attention recently and most approaches
appear to have converged to tree based models [2, 3, 9, 1]. In particular, Bengio et al. [1] proposes
a label tree model, which has been shown to achieve state of the art performance in testing. In a
label tree, each node is associated with a subset of class labels and a linear classifier that determines
which branch to follow. In performing the classification task, a test example travels from the root
of the tree to a leaf node associated with a single class label. Therefore for a well balanced tree,
the time required for evaluation is reduced from O(DK) to O(D log K), where K is the number
of classes and D is the feature dimensionality. The technique can be combined with an embedding technique, so that the evaluation cost can be further reduced to O(D̂ log K + DD̂), where D̂ ≪ D is the dimensionality of an embedded label space.
ensuring good testing accuracy and efficiency, has several limitations. Learning the tree structure
(determining how to split the classes into subsets) involves first training one-vs-all classifiers for all
K classes to obtain a confusion matrix, and then using spectral clustering to split the classes into disjoint subsets. First, learning one-vs-all classifiers is costly for large number of classes. Second, the
partitioning of classes does not allow overlap, which can be unnecessarily difficult for classification.
Third, the tree structure may be unbalanced, which can result in sub-optimal test efficiency.
In this paper, we address these issues by observing that (1)determining the partition of classes and
learning a classifier for each child can be performed jointly, and (2)allowing overlapping of class
1
labels among children leads to an efficient optimization that also enables precise control of the
accuracy vs efficiency trade-off, which can in turn guarantee balanced trees. This leads to a novel
label tree learning technique that is more efficient and effective. Specifically, we eliminate the onevs-all training step while improving both efficiency and accuracy in testing.
2
Related Work
Our approach is directly motivated by the label tree embedding technique proposed by Bengio et
al. in [1], which is among the few approaches that address sublinear testing cost for multi-class
classification problems with a large number of classes and has been shown to outperform alternative
approaches including Filter Tree [2] and Conditional Probability Tree (CPT) [3]. Our contribution is
a new technique to achieve more efficient and effective learning for label trees. For a comprehensive
discussion on multi-class classification techniques, we refer the reader to [1].
Classifying a large number of object classes has received increasing attention in computer vision as
datasets with many classes such as ImageNet [7] become available. One line of work is concerned
with developing effective feature representations [13, 16, 15, 10] and achieving state-of-the-art performance. Another direction of work explores methods for exploiting the structure between object
classes. In particular, it has been observed that object classes can be organized in a tree-like structure
both semantically and visually [9, 11, 6], making tree based approaches especially attractive. Our
work follows this direction, focusing on effective learning methods for building tree models.
Our framework of explicitly controlling accuracy or efficiency is connected to Weiss et al.'s
work [14] on building a cascade of graphical models with increasing complexity for structured
prediction. Our work differs in that we reduce the label space instead of the model space.
3 Label Tree and Label Tree Learning by Bengio et al.
Here we briefly review the label tree learning technique proposed by Bengio et al. and then discuss
the limitations we attempt to address.
A label tree is a tree T = (V, E) with nodes V and edges E. Each node r ∈ V is associated with a set of class labels ℓ(r) ⊆ {1, . . . , K}. Let σ(r) ⊂ V be its set of children. For each child c, there is a linear classifier w_c ∈ R^D, and we require that its label set is a subset of its parent's, that is, ℓ(c) ⊆ ℓ(r) for all c ∈ σ(r).
To make a prediction given an input x ∈ R^D, we use Algorithm 1. We travel from the root until we reach a leaf node, at each node following the child that has the largest classifier score. There is a slight difference from the algorithm in [1] in that the leaf node is not required to have only one class label. If there is more than one label, an arbitrary label from the set is predicted.
Algorithm 1 Predict the class of x given the root node r
s ← r
while σ(s) ≠ ∅ do
    s ← argmax_{c ∈ σ(s)} w_c^T x
end while
return an arbitrary k ∈ ℓ(s), or NULL if ℓ(s) = ∅
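To make the traversal concrete, here is a minimal Python sketch of Algorithm 1. The node representation (a Node holding its label set, its children, and the children's weight vectors stacked in a matrix) is our own assumption, not a structure specified in [1].

import numpy as np

class Node:
    """Hypothetical label tree node: `labels` is l(r), `children` is
    sigma(r), and `W` stacks one weight vector w_c per child (Q x D)."""
    def __init__(self, labels, children=(), W=None):
        self.labels = set(labels)
        self.children = list(children)
        self.W = W

def predict(root, x):
    """Algorithm 1: follow the highest-scoring child until a leaf."""
    s = root
    while s.children:                          # sigma(s) != empty set
        s = s.children[int(np.argmax(s.W @ x))]
    return next(iter(s.labels), None)          # arbitrary label, or NULL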
Learning the tree structure is a fundamentally hard problem because brute force search for the optimal combination of tree structure and classifier weights is intractable. Bengio et al. [1] instead
propose to solve two subproblems: learning the tree structure and learning the classifier weights.
To learn the tree structure, K one versus all classifiers are trained first to obtain a confusion matrix
C ∈ R^{K×K} on a validation set. The class labels are then clustered into disjoint sets by spectral clustering with the confusion between classes as affinity measure. This procedure is applied recursively
to build a complete tree. Given the tree structure, all classifier weights are then learned jointly to
optimize the misclassification loss of the tree.
We first analyze the cost of learning by showing that training, with m examples, K classes and D-dimensional features, costs O(mDK). Assume optimistically that the optimization algorithm converges
after only one pass of the data and that we use first order methods that cost O(D) at each iteration, with feature dimensionality D. Therefore learning one versus all classifiers costs O(mDK).
Spectral clustering only depends on K and does not depend on D or m, and therefore its cost is negligible. In learning the classifier weights on the tree, each training example is affected by only the
classifiers on its path, i.e. O(Q log K) classifiers, where Q ≪ K is the number of children for each
node. Hence the training cost is O(mDQ log K). This analysis indicates that learning K one versus
all classifiers dominates the cost. This is undesirable in large scale learning because with bounded
time, accommodating a large number of classes entails using less expressive and lower dimensional
features.
Moreover, spectral clustering only produces disjoint subsets. It can be difficult to learn a classifier
for disjoint subsets when examples of certain classes cannot be reliably classified to one subset. If
such mistakes are made at higher level of the tree, then it is impossible to recover later. Allowing
overlap potentially yields more flexibility and avoids such errors. In addition, spectral clustering
does not guarantee balanced clusters and thus cannot ensure a desired speedup. We seek a novel
learning technique that overcomes these limitations.
4 New Label Tree Learning
To address the limitations, we start by considering simple and less expensive alternatives of generating the splits. For example, we can sub-sample the examples for one-vs-all training, or generate
the splits randomly, or use a human-constructed semantic hierarchy (e.g. WordNet [8]). However, as
shown in [1], improperly partitioning the classes can greatly reduce testing accuracy and efficiency.
To preserve accuracy, it is important to split the classes such that they can be easily separated. To
gain efficiency, it is important to have balanced splits.
We therefore propose a new technique that jointly learns the splits and classifier weights. By tightly
coupling the two, this approach eliminates the need for one-vs-all training and brings the total learning cost down to O(mDQ log K). By allowing overlapping splits and explicitly modeling the accuracy-efficiency trade-off, this approach also improves testing accuracy and efficiency.
Our approach processes one node of the tree at a time, starting with the root node. It partitions the
classes into a fixed number of child nodes and learns the classifier weights for each of the children.
It then recursively repeats for each child.
In learning a tree model, accuracy and efficiency are inherently conflicting goals and some trade-off
must be made. Therefore we pose the optimization problem as maximizing efficiency given a constraint on accuracy, i.e. requiring that the error rate cannot exceed a certain threshold. Alternatively
one can also optimize accuracy given efficiency constraints. We will first describe the accuracy constrained optimization and then briefly discuss the efficiency constrained variant. In practice, one can
choose between the two formulations depending on convenience.
For the rest of this section, we first express all the desiderata in one single optimization problem (Sec. 4.1), including defining the optimization variables (classifier weights and partitions), objectives (efficiency) and constraints (accuracy). Then in Sec. 4.2 & 4.3 we show how to solve the main optimization by alternating between learning the classifier weights and determining the partitions. We then summarize the complete algorithm (Sec. 4.4) and conclude with an alternative formulation using efficiency constraints (Sec. 4.5).
4.1 Main optimization
Formally, let the current node r represent class labels ℓ(r) = {1, . . . , K} and let Q be the specified number of children we wish to follow. The goal is to determine: (1) a partition matrix P ∈ {0, 1}^{Q×K} that represents the assignment of classes to the children, i.e. P_{qk} = 1 if class label k appears in child q and P_{qk} = 0 otherwise; (2) the classifier weights w ∈ R^{D×Q}, where a column w_q holds the classifier weights for child q ∈ σ(r).
We measure accuracy by examining whether an example is classified to the correct child, i.e. a child that includes its true class label. Let x ∈ R^D be a training example and y ∈ {1, . . . , K} be its true label. Let q̂ = argmax_{q ∈ σ(r)} w_q^T x be the child that x follows. Given w, P, x, y, the classification
loss at the current node r is then
L(w, x, y, P) = 1 − P(q̂, y).   (1)
Note that the final prediction of the example is made at a leaf node further down the tree, if the
child to follow is not already a leaf node. Therefore L is a lower bound of the actual loss. It is thus
important to achieve a smaller L because it could be a bottleneck of the final accuracy.
We measure efficiency by how fast the set of possible class labels shrinks. Efficiency is maximized when each child has a minimal number of class labels so that an unambiguous prediction can be made; otherwise we incur further cost for traveling down the tree. Given a test example, we define ambiguity as our efficiency measure, i.e. the size of the label set of the child that the example follows, relative to its parent's size. Specifically, given w and P, the ambiguity for an example x is
A(w, x, P) = (1/K) Σ_{k=1}^{K} P(q̂, k).   (2)
Note that A ∈ [0, 1]. A perfectly balanced K-nary tree would result in an ambiguity of 1/K for all
examples at each node.
One important note is that the classification loss (accuracy) and ambiguity (efficiency) measures as defined in Eqn. 1 and Eqn. 2 are local to the current node being considered in greedily building the tree. They serve as proxies for the global accuracy and efficiency of the entire tree. For the rest of this paper, we will omit the "local" and "global" qualifiers when it is clear from context.
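As a sanity check on these definitions, here is a short numpy sketch (ours, with our own variable names) that evaluates the local loss of Eqn. 1 and the ambiguity of Eqn. 2 on a batch of examples:

import numpy as np

def local_loss_and_ambiguity(w, P, X, y):
    """w: (D, Q) classifier weights, P: (Q, K) 0/1 partition matrix,
    X: (m, D) examples, y: (m,) true labels in {0, ..., K-1}.
    Returns the average loss (Eqn. 1) and average ambiguity (Eqn. 2)."""
    Q, K = P.shape
    q_hat = np.argmax(X @ w, axis=1)        # child followed by each example
    loss = 1.0 - P[q_hat, y]                # Eqn. 1, per example
    ambiguity = P[q_hat].sum(axis=1) / K    # Eqn. 2, per example
    return loss.mean(), ambiguity.mean()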
Let ε > 0 be the maximum classification loss we are willing to tolerate. Given a training set (x_i, y_i), i = 1, . . . , m, we seek to minimize the average ambiguity of all examples while keeping the classification loss below ε, which leads to the following optimization problem:
OP1. Optimizing efficiency with accuracy constraints.
minimize_{w,P}  (1/m) Σ_{i=1}^{m} A(w, x_i, P)
subject to  (1/m) Σ_{i=1}^{m} L(w, x_i, y_i, P) ≤ ε
            P ∈ {0, 1}^{Q×K}.
There are no further constraints on P other than that its entries are integers 0 and 1. We do not
require that the children cover all the classes in the parent. It is legal that one class in the parent
can be assigned to none of the children, in which case we give up on the training examples from the
class. In doing so, we pay a price on accuracy, i.e. those examples will have a misclassification loss
of 1. Therefore a partition P with all zeros is unlikely to be a good solution. We also allow overlap
of label sets between children. If we cannot classify the examples from a class perfectly into one
of the children, we allow them to go to more than one child. We pay a price on efficiency since we
make less progress in eliminating possible class labels. This is different from the disjoint label sets
in [1]. Overlapping label sets give more flexibility and in fact lead to a simpler optimization, as will
become clear in Sec. 4.3.
Directly solving OP1 is intractable. However, with proper relaxation, we can alternate between
optimizing over w and over P where each is a convex program.
4.2 Learning classifier weights w given partitions P
Observe that fixing P and optimizing over w is similar to learning a multi-class classifier except for the overlapping classes. We relax the loss L by a convex loss L̃ similar to the hinge loss:
L̃(w, x_i, y_i, P) = max{0, 1 + max_{q ∈ A_i, r ∈ B_i} (w_r^T x_i − w_q^T x_i)}
where A_i = {q | P_{q,y_i} = 1} and B_i = {r | P_{r,y_i} = 0}. Here A_i is the set of children that contain class y_i and B_i is the rest of the children. The responses of the classifiers in A_i are encouraged to be bigger than those in B_i; otherwise the loss L̃ increases. It is easily verifiable that L̃ upper-bounds L. We then obtain the following convex optimization problem.
OP2. Optimizing over w given P.
minimize_w  λ Σ_{q=1}^{Q} ‖w_q‖_2^2 + (1/m) Σ_{i=1}^{m} L̃(w, x_i, y_i, P)
Note that here the objective is no longer the ambiguity A. This is because the influence of w on A is typically very small. When the partition P is fixed, w can lower A by classifying examples into the child with the smallest label set. However, the way w classifies examples is mostly constrained by the accuracy cap ε, especially for small ε. Empirically we also found that in optimizing L̃ over w, A remains almost constant. Therefore for simplicity we assume that A is constant w.r.t. w, and the optimization becomes minimizing the classification loss to move w to the feasible region. We also added a regularization term λ Σ_{q=1}^{Q} ‖w_q‖_2^2.
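A minimal sketch of how OP2 might be solved, assuming plain stochastic subgradient steps on the relaxed loss L̃ plus the ℓ2 term; the paper's experiments use parallel SGD, and the step size, epoch count, and update rule below are our illustrative choices, not the authors' implementation.

import numpy as np

def solve_op2(P, X, y, lam=1e-4, lr=0.1, epochs=5, seed=0):
    """Subgradient descent on OP2: lam * sum_q ||w_q||^2 + mean L-tilde.
    P: (Q, K) 0/1 partition, X: (m, D) examples, y: (m,) labels."""
    m, D = X.shape
    Q, _ = P.shape
    w = np.zeros((D, Q))
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(m):
            xi, yi = X[i], y[i]
            A = np.flatnonzero(P[:, yi] == 1)   # children containing y_i
            B = np.flatnonzero(P[:, yi] == 0)   # the remaining children
            if A.size == 0 or B.size == 0:
                continue                        # class dropped or shared by all
            s = xi @ w                          # scores of all Q children
            q = A[np.argmin(s[A])]              # weakest child holding y_i
            r = B[np.argmax(s[B])]              # strongest competing child
            if 1.0 + s[r] - s[q] > 0.0:         # hinge of L-tilde is active
                w[:, q] += lr * xi
                w[:, r] -= lr * xi
            w *= 1.0 - 2.0 * lr * lam           # gradient of the L2 term
    return w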
4.3 Determining partitions P given classifier weights w
If we fix w and optimize over P , rearranging terms gives the following integer program.
OP3. Optimizing over P.
minimize_P  A(P) = Σ_{q,k} P_{qk} · (1/(mK)) Σ_{i=1}^{m} 1(q̂_i = q)
subject to  1 − Σ_{q,k} P_{qk} · (1/m) Σ_{i=1}^{m} 1(q̂_i = q ∧ y_i = k) ≤ ε
            P_{qk} ∈ {0, 1}, ∀q, k.
Integer programming in general is NP-hard. However, for this integer program, we can solve it
by relaxing it to a linear program and then taking the ceiling of the solution. We show that this
solution is in fact near optimal by showing that the number of non-integers can be very few, due
to the fact that the LP has few constraints other than that the variables lie in [0, 1] and most of the
[0, 1] constraints will be active. Specifically, we use Lemma 4.1 (proof in supplementary materials)
to bound the rounded LP solution in Theorem 4.2.
Lemma 4.1. For the LP problem
minimize_x  c^T x
subject to  Ax ≤ b
            0 ≤ x ≤ 1,
where A ∈ R^{m×n}, m < n, if it is feasible, then there exists an optimal solution with at most m non-integer entries, and such a solution can be found in polynomial time.
Theorem 4.2. Let A* be an optimal value of OP3. A solution P′ can be computed within polynomial time such that A(P′) ≤ A* + 1/K.
Proof. We relax OP3 to an LP by replacing the constraint P_{qk} ∈ {0, 1}, ∀q, k with P_{qk} ∈ [0, 1], ∀q, k. Apply Lemma 4.1 and we obtain an optimal solution P″ of the LP with at most 1 non-integer. We take the ceiling of the fraction and obtain an integer solution P′ to OP3. The value of the LP, a lower bound of A*, increases by at most 1/K, since (1/(mK)) Σ_{i=1}^{m} 1(q̂_i = q) ≤ 1/K, ∀q.
Note that the ambiguity is a quantity in [0, 1] and K is the number of classes. Therefore for large
numbers of classes the rounded solution is almost optimal.
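The relax-and-round step of Theorem 4.2 can be sketched with an off-the-shelf LP solver. The encoding below (flattening P row-major and writing the single accuracy constraint as an upper-bound inequality) is our own, and rounding every fractional entry up is a simplification of taking the ceiling of the at-most-one fraction guaranteed at an LP vertex.

import numpy as np
from scipy.optimize import linprog

def solve_op3(q_hat, y, Q, K, eps):
    """LP relaxation of OP3 followed by rounding up (Theorem 4.2).
    q_hat: (m,) child followed by each example, y: (m,) true labels."""
    m = len(y)
    a = np.bincount(q_hat, minlength=Q) / (m * K)   # ambiguity weight per child
    c = np.repeat(a, K)                             # objective over P[q, k]
    b = np.zeros((Q, K))                            # b[q,k]: fraction with q_hat=q, y=k
    np.add.at(b, (q_hat, y), 1.0 / m)
    # accuracy: 1 - sum P[q,k] b[q,k] <= eps  <=>  -b . P <= eps - 1
    res = linprog(c, A_ub=-b.reshape(1, -1), b_ub=[eps - 1.0],
                  bounds=[(0.0, 1.0)] * (Q * K))
    if not res.success:
        raise ValueError("LP infeasible; increase eps")
    return np.ceil(res.x - 1e-9).reshape(Q, K).astype(int)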
4.4 Summary of algorithm
Now all ingredients are in place for an iterative algorithm to build the tree, except that we need to
initialize the partition P or the weights w. We find that a random initialization of P works well in
practice. Specifically, for each child, we randomly pick one class, without replacement, from the
label set of the parent. That is, for each row of P , randomly pick a column and set the column to 1.
This is analogous to picking the cluster seeds in the K-means algorithm.
We summarize the algorithm for building one level of tree nodes in Algorithm 2. The procedure is
applied recursively from the root. Note that each training example only affects classifiers on one
path of the tree, hence the training cost is O(mD log K) for a balanced tree.
Algorithm 2 Grow a single node r
Input: Q, ε, and training examples classified into node r by its ancestors.
Initialize P: for each child, randomly pick one class label from the parent, without replacement.
for t = 1 → T do
    Fix P, solve OP2 and update w.
    Fix w, solve OP3 and update P.
end for
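Putting the pieces together, a hypothetical driver for Algorithm 2 that alternates between the two sketches above (it assumes Q ≤ K so the random seeding without replacement is possible):

import numpy as np

def grow_node(X, y, Q, K, eps, T=3, seed=0):
    """Algorithm 2 driver: alternate solve_op2 / solve_op3 (sketches above)."""
    rng = np.random.default_rng(seed)
    P = np.zeros((Q, K), dtype=int)
    P[np.arange(Q), rng.choice(K, size=Q, replace=False)] = 1  # seed each child
    w = None
    for _ in range(T):
        w = solve_op2(P, X, y)                  # fix P, update w (OP2)
        q_hat = np.argmax(X @ w, axis=1)        # child followed under new w
        P = solve_op3(q_hat, y, Q, K, eps)      # fix w, update P (OP3)
    return w, P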
4.5 Efficiency constrained formulations
As mentioned earlier, we can also optimize accuracy given explicit efficiency constraints. Let τ be the maximum ambiguity we can tolerate. Let OP1′, OP2′, OP3′ be the counterparts of OP1, OP2 and OP3. We obtain OP1′ by replacing ε with τ and switching L(w, x_i, y_i, P) and A(w, x_i, P) in OP1. OP2′ is the same as OP2 because we also treat A as constant and minimize the classification loss unconstrained. OP3′ can also be formulated in a straightforward manner, and solved nearly optimally by rounding from the LP (Theorem 4.3).
Theorem 4.3. Let L* be the optimal value of OP3′. A solution P′ can be computed within polynomial time such that L(P′) ≤ L* + max_k γ_k, where γ_k = (1/m) Σ_{i=1}^{m} 1(y_i = k) is the percentage of training examples from class k.
Proof. We relax OP3′ to an LP. Apply Lemma 4.1 and obtain an optimal solution P″ with at most 1 non-integer. We take the floor of P″ and obtain a feasible solution P′ to OP3′. The value of the LP, a lower bound of L*, increases by at most max_k γ_k, since (1/m) Σ_i 1(q̂_i = q ∧ y_i = k) ≤ (1/m) Σ_{i=1}^{m} 1(y_i = k) ≤ max_k γ_k, ∀k, q.
For a uniform distribution of examples among classes, max_k γ_k = 1/K and the rounded solution is near optimal for large K. If the distribution is highly skewed, for example with a heavy tail, then the rounding can give a poor approximation. One simple workaround is to split the big classes into artificial subclasses, or to treat the classes in the tail as one big class, to "equalize" the distribution. Then the same learning techniques can be applied. In this paper we focus on the near-uniform case and leave further discussion of the skewed case as future work.
5 Experiments
We use two datasets for evaluation: ILSVRC2010 [12] and ImageNet10K [6]. In ILSVRC2010,
there are 1.2M images from 1k classes for training, 50k images for validation and 150k images
for test. For each image in ILSVRC2010 we compute the LLC [13] feature with SIFT on a 10k
codebook and use a two-level spatial pyramid (1x1 and 2x2 grids) to obtain a 50k dimensional feature
vector. In ImageNet10K, there are 9M images from 10184 classes. We use 50% for training, 25%
for validation, and the rest 25% for testing. For ImageNet10K, we compute LLC similarly except
that we use no spatial pyramid, obtaining a 10k dimensional feature vector.
We use parallel stochastic gradient descent (SGD) [17] for training. SGD is especially suited for
large scale learning [4] where the learning is bounded by the time and the features can no longer fit
into memory (the LLC features take 80G in sparse format). Parallelization makes it possible to use
multiple CPUs to improve wall time.
We compare our algorithm with the original label tree learning method by Bengio et al. [1]. For
both algorithms, we fix two parameters, the number of children Q for each node, and the maximum
depth H of the tree. The depth of each node is defined as the maximum distance to the root (the root
            T_{32,2}             T_{10,3}             T_{6,4}              T_{101,2}
            Acc%  C_tr  S_te     Acc%  C_tr  S_te     Acc%  C_tr  S_te     Acc%  C_tr  S_te
Ours        11.9  259   10.3     8.92  104   18.2     5.62  50.2  31.3     3.4   685   32.4
[1]         8.33  321   10.3     5.99  193   15.2     5.88  250   9.32     2.7   1191  32.4
Table 1: Global accuracy (Acc), training cost (C_tr), and test speedup (S_te) on ILSVRC2010 1K classes (T_{32,2}, T_{10,3}, T_{6,4}) and on ImageNet10K classes (T_{101,2}). Training and test costs are measured as the average number of vector operations performed per example. Test speedup is the one-vs-all test cost divided by the label tree test cost. Ours outperforms the Bengio et al. [1] approach by achieving comparable or better accuracy and efficiency with less training cost, even with the one-vs-all training cost of Bengio et al. [1] excluded from their total.
Tree                   T_{32,2}       T_{10,3}             T_{6,4}
Depth                  0     1        0     1     2        0     1     2     3
Loss (%)   Ours        49.9  76.1     34.6  52.6  71.2     30.0  48.8  55.9  64.4
           Bengio [1]  76.6  64.8     62.8  53.7  65.3     56.2  34.8  37.3  65.8
Amb. (%)   Ours        6.49  1.55     18.9  18.4  2.96     24.7  24.1  23.5  7.15
           Bengio [1]  6.49  1.87     19.0  25.9  2.95     24.7  59.6  56.5  2.02
Table 2: Local classification loss (Eqn. 1) and ambiguity (Eqn. 2) measured at different depth levels for all trees on the ILSVRC2010 test set (1k classes). T_{6,4} of Bengio et al. is less balanced (large ambiguity). Our trees are more balanced, as efficiency is explicitly enforced by capping the ambiguity throughout all levels.
has depth 0). We require every internal node to split into Q children, with two exceptions: nodes at depth H − 1 (parents of leaves) and nodes with fewer than Q classes. In both cases, we split the node fully, i.e. grow one child node per class. We use T_{Q,H} to denote a tree built with parameters Q and H. We set Q and H such that for a well balanced tree, the number of leaf nodes Q^H approximates the number of classes K.
We evaluate the global classification accuracy and computational cost in both training and test. The
main costs of learning consist of two operations, evaluating the gradient and updating the weights,
i.e. vector dot products and vector additions (possibly with scaling). We treat both operations as costing the same¹. To measure the cost, we count the number of vector operations performed per training example. For instance, running one-versus-all SGD (either independent or single-machine SVMs [5]) for K classes costs 2K per example for one pass through the data, as in each iteration all K classifiers are evaluated against the feature vector (dot product) and updated (addition).
For both algorithms, we build three trees T_{32,2}, T_{10,3}, T_{6,4} for the ILSVRC2010 1k classes and build one tree T_{101,2} for the ImageNet10K classes. For the Bengio et al. method, we first train one-versus-all classifiers with one pass of parallel SGD. This results in a cost of 2000 per example for ILSVRC2010 and 20368 for ImageNet10K. After forming the tree skeleton by spectral clustering using the confusion matrix from the validation set, we learn the weights by solving a joint optimization (see [1]) with two passes of parallel SGD. For our method, we do three iterations of Algorithm 2. In each iteration, we do one pass of parallel SGD to solve OP2′, such that the computation is comparable to that of Bengio et al. (excluding the one-versus-all training). We then solve OP3′ on the validation set to update the partition. To set the efficiency constraint, we measure the average (local) ambiguity of the root node of the tree generated by the Bengio et al. approach, on the validation set. We use it as our ambiguity cap throughout our learning, in an attempt to produce a similarly structured tree.
We report the test results in Table 1. The results show that for all types of trees, our method achieves comparable or significantly better accuracy while achieving better speed-up with much less training cost, even after excluding the one-versus-all training in Bengio et al.'s method. It is worth noting that for the Bengio et al. approach, T_{6,4} fails to further speed up testing compared to the other, shallower trees. The reason is that at depth 1 (one level down from the root), the splits become highly imbalanced and do not shrink the class sets fast enough until the height limit is reached. This is revealed in Table 2, where we measure the average local ambiguity (Eqn. 2) and classification loss (Eqn. 1) at each depth on the test set to shed more light on the structure of the trees. Observe that our trees have
¹ This is inconsequential, as a vector addition always pairs with a dot product for all training in this paper.
Figure 1: Comparison of partition matrices (32 × 1000) of the root node of T_{32,2} for our approach (top) and the Bengio et al. approach (bottom). Each entry represents the membership of a class label (column) in a child (row). The columns are ordered by a depth-first search of WordNet. Columns belonging to certain WordNet subtrees are marked by red boxes.
Figure 2: Paths of the tree T_{6,4} taken by two test examples. The class labels shown are randomly subsampled to fit into the space.
almost constant average ambiguity at each level, as enforced in learning. This shows an advantage
of our algorithm, since we are able to explicitly enforce a balanced tree, while in Bengio et al. [1] no
such control is possible, although spectral clustering encourages balanced splits.
In Fig. 1, we visualize the partition matrices of the root of T_{32,2} for both algorithms. The columns are ordered by a depth-first search of the WordNet tree so that neighboring columns are likely to be semantically similar classes. We observe that for both methods, there is visible alignment with the WordNet ordering. We further illustrate the semantic alignment by showing the paths of our T_{6,4} traveled by two test examples. Also observe that our partition is notably "noisier", despite the fact that both partitions have the same average ambiguity. This is a result of overlapping partitions, which in fact improves accuracy (as shown in Table 2) because it avoids the mistakes made by forcing all examples of a class to commit to one child.
Also note that Bengio et al. showed in [1] that optimizing the classifiers on the tree jointly is significantly better than independently training the classifiers for each node, as it encodes the dependency
of the classifiers along a tree path. This does not contradict our results. Although we have no explicit
joint learning of classifiers over the entire tree, we train the classifiers of each node using examples
already filtered by classifiers of the ancestors, thus implicitly enforcing the dependency.
6 Conclusion
We have presented a novel approach to efficiently learn a label tree for large scale classification with
many classes, allowing a fine-grained efficiency-accuracy tradeoff. Experimental results demonstrate
more efficient trees at better accuracy with less training cost compared to previous work.
Acknowledgment
L. F-F is partially supported by an NSF CAREER grant (IIS-0845230), the DARPA CSSG grant,
and a Google research award.
References
[1] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In
Advances in Neural Information Processing Systems (NIPS), 2010.
[2] A. Beygelzimer, J. Langford, and P. Ravikumar. Multiclass classification with filter trees.
Preprint, June, 2007.
[3] Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory B. Sorkin, and Alexander L. Strehl.
Conditional probability tree estimation analysis and algorithms. Computing Research Repository, 2009.
[4] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in neural information processing systems, 20:161?168, 2008.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based
vector machines. The Journal of Machine Learning Research, 2:265?292, 2002.
[6] J. Deng, A.C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image
categories tell us? In ECCV10.
[7] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR09, 2009.
[8] C. Fellbaum. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[9] Gregory Griffin and Pietro Perona. Learning and using taxonomies for fast visual categorization. CVPR08, 2008.
[10] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image
classification: Fast feature extraction and svm training. In Conference on Computer Vision and
Pattern Recognition, page (to appear), volume 1, page 3, 2011.
[11] A. Torralba, R. Fergus, and W.T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine
Intelligence, pages 1958?1970, 2008.
[12] http://www.image-net.org/challenges/LSVRC/2010/.
[13] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for
image classification. 2010.
[14] D. Weiss, B. Sapp, and B. Taskar. Sidestepping intractable inference with structured ensemble
cascades. In NIPS, volume 1281, pages 1282?1284, 2010.
[15] K. Yu and T. Zhang. Improved local coordinate coding using local tangents. ICML09, 2010.
[16] X. Zhou, K. Yu, T. Zhang, and T. Huang. Image classification using super-vector coding of
local image descriptors. Computer Vision?ECCV 2010, pages 141?154, 2010.
[17] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In
J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances
in Neural Information Processing Systems 23, pages 2595?2603. 2010.
ShareBoost: Efficient Multiclass Learning with
Feature Sharing
Shai Shalev-Shwartz
Yonatan Wexler
Amnon Shashua
Abstract
Multiclass prediction is the problem of classifying an object into a relevant target
class. We consider the problem of learning a multiclass predictor that uses only
few features, and in particular, the number of used features should increase sublinearly with the number of possible classes. This implies that features should be
shared by several classes. We describe and analyze the ShareBoost algorithm for
learning a multiclass predictor that uses few shared features. We prove that ShareBoost efficiently finds a predictor that uses few shared features (if such a predictor
exists) and that it has a small generalization error. We also describe how to use
ShareBoost for learning a non-linear predictor that has a fast evaluation time. In a
series of experiments with natural data sets we demonstrate the benefits of ShareBoost and evaluate its success relatively to other state-of-the-art approaches.
1 Introduction
Learning to classify an object into a relevant target class surfaces in many domains such as document categorization, object recognition in computer vision, and web advertisement. In multiclass
learning problems we use training examples to learn a classifier which will later be used for accurately classifying new objects. Typically, the classifier first calculates several features from the input
object and then classifies the object based on those features. In many cases, it is important that the
runtime of the learned classifier will be small. In particular, this requires that the learned classifier
will only rely on the value of few features.
We start with predictors that are based on linear combinations of features. Later, in Section 3, we
show how our framework enables learning highly non-linear predictors by embedding non-linearity
in the construction of the features. Requiring the classifier to depend on few features is therefore
equivalent to sparseness of the linear weights of features. In recent years, the problem of learning
sparse vectors for linear classification or regression has been given significant attention. While, in
general, finding the most accurate sparse predictor is known to be NP hard, two main approaches
have been proposed for overcoming the hardness result. The first approach uses the ℓ1 norm as a surrogate for sparsity (e.g. the Lasso algorithm [33] and the compressed sensing literature [5, 11]). The
second approach relies on forward greedy selection of features (e.g. Boosting [15] in the machine
learning literature and orthogonal matching pursuit in the signal processing community [35]).
A popular model for multiclass predictors maintains a weight vector for each one of the classes. In
such case, even if the weight vector associated with each class is sparse, the overall number of used
features might grow with the number of classes. Since the number of classes can be rather large,
and our goal is to learn a model with an overall small number of features, we would like that the
weight vectors will share the features with non-zero weights as much as possible. Organizing the
weight vectors of all classes as rows of a single matrix, this is equivalent to requiring sparsity of the
columns of the matrix.
School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
OrCam Ltd., Jerusalem, Israel
In this paper we describe and analyze an efficient algorithm for learning a multiclass predictor whose
corresponding matrix of weights has a small number of non-zero columns. We formally prove that
if there exists an accurate matrix with a number of non-zero columns that grows sub-linearly with
the number of classes, then our algorithm will also learn such a matrix. We apply our algorithm
to natural multiclass learning problems and demonstrate its advantages over previously proposed
state-of-the-art methods.
Our algorithm is a generalization of the forward greedy selection approach to sparsity in columns.
An alternative approach, which has recently been studied in [26, 12], generalizes the ℓ1-norm based approach and relies on mixed-norms. We discuss the advantages of the greedy approach over mixed-norms in Section 1.2.
1.1 Formal problem statement
Let V be the set of objects we would like to classify. For example, V can be the set of gray scale images of a certain size. For each object v ∈ V, we have a pool of predefined d features, each of which is a real number in [−1, 1]. That is, we can represent each v ∈ V as a vector of features x ∈ [−1, 1]^d. We note that the mapping from v to x can be non-linear and that d can be very large. For example, we can define x so that each element x_i corresponds to some patch, p ∈ {±1}^{q×q}, and a threshold θ, where x_i equals 1 if there is a patch of v whose inner product with p is higher than θ. We discuss some generic methods for constructing features in Section 3. From this point onward we assume that x is given.
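For illustration, here is a naive Python version of the patch-threshold feature just described. The value −1 for the negative case is our own assumption, since the text only pins down the positive case, and the sliding-window scan is the simplest possible realization.

import numpy as np

def patch_feature(v, p, theta):
    """x_i = 1 if some q-by-q patch of image v has inner product with the
    +/-1 template p above theta; -1 otherwise (negative value is our choice)."""
    q = p.shape[0]
    H, W = v.shape
    for r in range(H - q + 1):
        for c in range(W - q + 1):
            if float(np.sum(v[r:r + q, c:c + q] * p)) > theta:
                return 1.0
    return -1.0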
The set of possible classes is denoted by Y = {1, . . . , k}. Our goal is to learn a multiclass predictor, which is a mapping from the features of an object into Y. We focus on the set of predictors parametrized by matrices W ∈ R^{k,d} that take the following form:
h_W(x) = argmax_{y ∈ Y} (W x)_y.   (1)
That is, the matrix W maps each d-dimensional feature vector into a k-dimensional score vector, and the actual prediction is the index of the maximal element of the score vector. If the maximizer is not unique, we break ties arbitrarily.
Recall that our goal is to find a matrix W with few non-zero columns. We denote by W_{·,i} the i-th column of W and use the notation ‖W‖_{1,0} = |{i : ‖W_{·,i}‖_1 > 0}| to denote the number of columns of W which are not identically the zero vector. More generally, given a matrix W and a pair of norms ‖·‖_p, ‖·‖_r, we denote ‖W‖_{p,r} = ‖(‖W_{·,1}‖_p, . . . , ‖W_{·,d}‖_p)‖_r; that is, we apply the p-norm to the columns of W and the r-norm to the resulting d-dimensional vector.
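In numpy terms, the two norms can be computed as follows (a small sketch of ours):

import numpy as np

def norm_1_0(W):
    """||W||_{1,0}: number of columns of W that are not identically zero."""
    return int(np.count_nonzero(np.abs(W).sum(axis=0) > 0))

def norm_p_r(W, p, r):
    """||W||_{p,r}: p-norm of each column, then r-norm of those values."""
    return np.linalg.norm(np.linalg.norm(W, ord=p, axis=0), ord=r)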
The 0–1 loss of a multiclass predictor h_W on an example (x, y) is defined as 1[h_W(x) ≠ y]. That is, the 0–1 loss equals 1 if h_W(x) ≠ y and 0 otherwise. Since this loss function is not convex with respect to W, we use a surrogate convex loss function based on the following easy-to-verify inequalities:
1[h_W(x) ≠ y] ≤ 1[h_W(x) ≠ y] − (W x)_y + (W x)_{h_W(x)}
              ≤ max_{y′ ∈ Y} ( 1[y′ ≠ y] − (W x)_y + (W x)_{y′} )   (2)
              ≤ ln Σ_{y′ ∈ Y} e^{1[y′ ≠ y] − (W x)_y + (W x)_{y′}}.   (3)
We use the notation ℓ(W, (x, y)) to denote the right-hand side (eqn. (3)) of the above. The loss given in eqn. (2) is the multi-class hinge loss [7] used in Support Vector Machines, whereas ℓ(W, (x, y)) is the result of performing a "soft-max" operation: max_x f(x) ≈ (1/p) ln Σ_x e^{p f(x)}, where equality holds for p → ∞.
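For concreteness, the surrogate loss of eqn. (3) can be computed stably with a log-sum-exp; this is a sketch of ours, not the authors' code:

import numpy as np
from scipy.special import logsumexp

def surrogate_loss(W, x, y):
    """l(W, (x, y)) of eqn. (3): ln sum_{y'} exp(1[y'!=y] - (Wx)_y + (Wx)_{y'})."""
    s = W @ x                      # score vector W x, shape (k,)
    z = 1.0 - s[y] + s             # exponent for every y' != y
    z[y] = 0.0                     # the y' = y term contributes exp(0)
    return float(logsumexp(z))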
This logistic multiclass loss function ℓ(W, (x, y)) has several nice properties; see for example [39]. Besides being a convex upper bound on the 0–1 loss, it is smooth. The reason we need the loss function to be both convex and smooth is as follows. If a function is convex, then its first order approximation at any point gives us a lower bound on the function at any other point. When the function is also smooth, the first order approximation gives us both lower and upper bounds on the value of the function at any other point¹. ShareBoost uses the gradient of the loss function at the current solution (i.e. the first order approximation of the loss) to make a greedy choice of which column to update. To ensure that this greedy choice indeed yields a significant improvement we must know that the first order approximation is indeed close to the actual loss function, and for that we need both lower and upper bounds on the quality of the first order approximation.
Given a training set S = (x_1, y_1), . . . , (x_m, y_m), the average training loss of a matrix W is L(W) = (1/m) Σ_{(x,y) ∈ S} ℓ(W, (x, y)). We aim at approximately solving the problem
min_{W ∈ R^{k,d}} L(W)  s.t.  ‖W‖_{1,0} ≤ s.   (4)
That is, find the matrix W with minimal training loss among all matrices with column sparsity of at most s, where s is a user-defined parameter. Since ℓ(W, (x, y)) is an upper bound on 1[h_W(x) ≠ y], by minimizing L(W) we also decrease the average 0–1 error of W over the training set. In Section 4 we show that for sparse models, a small training error is likely to yield a small error on unseen examples as well.
Regrettably, the constraint ‖W‖_{1,0} ≤ s in eqn. (4) is non-convex, and solving the optimization problem in eqn. (4) is NP-hard [24, 9]. To overcome the hardness result, the ShareBoost algorithm follows the forward greedy selection approach. The algorithm comes with formal generalization and sparsity guarantees (described in Section 4) that make ShareBoost an attractive multiclass learning engine due to efficiency (both during training and at test time) and accuracy.
1.2 Related Work
The centrality of the multiclass learning problem has spurred the development of various approaches
for tackling the task. Perhaps the most straightforward approach is a reduction from multiclass to
binary, e.g. the one-vs-rest or all pairs constructions. The more direct approach we choose, in
particular, the multiclass predictors of the form given in eqn. (1), has been extensively studied and
showed great success in practice; see for example [13, 37, 7].
An alternative construction, abbreviated as the single-vector model, shares a single weight vector,
for all the classes, paired with class-specific feature mappings. This construction is common in
generalized additive models [17], multiclass versions of boosting [16, 28], and has been popularized
lately due to its role in prediction with structured output where the number of classes is exponentially
large (see e.g. [31]). While this approach can yield predictors with a rather mild dependency of the
required features on k (see for example the analysis in [39, 31, 14]), it relies on a-priori assumptions
on the structure of X and Y. In contrast, in this paper we tackle general multiclass prediction
problems, like object recognition or document classification, where it is not straightforward or even
plausible how one would go about to construct a-priori good class specific feature mappings, and
therefore the single-vector model is not adequate.
The class of predictors of the form given in eqn. (1) can be trained using Frobenius norm regularization (as done by multiclass SVM; see e.g. [7]) or using ℓ1 regularization over all the entries of W. However, as pointed out in [26], these regularizers might yield a matrix with many non-zero columns, and hence will lead to a predictor that uses many features.
The alternative approach, and the most relevant to our work, is the use of mix-norm regularizations like ‖W‖_{∞,1} or ‖W‖_{2,1} [21, 36, 2, 3, 26, 12, 19]. For example, [12] solves the following problem:
min_{W ∈ R^{k,d}} L(W) + λ ‖W‖_{∞,1}.   (5)
which can be viewed as a convex approximation of our objective (eqn. (4)). This is advantageous
from an optimization point of view, as one can find the global optimum of a convex problem, but
it remains unclear how well the convex program approximates the original goal. For example,
in Section C we show cases where mix-norm regularization does not yield sparse solutions while
ShareBoost does yield a sparse solution. Despite the fact that ShareBoost tackles a non-convex
program, and thus limited to local optimum solutions, we prove in Theorem 2 that under mild
1
Smoothness guarantees that |f (x) f (x0 ) rf (x0 )(x x0 )| ? kx x0 k2 for some and all x, x0 .
Therefore one can approximate f (x) by f (x0 ) + rf (x0 )(x x0 ) and the approximation error is upper bounded
by the difference between x, x0 .
3
conditions ShareBoost is guaranteed to find an accurate sparse solution whenever such a solution
exists and that the generalization error is bounded as shown in Theorem 1.
We note that several recent papers (e.g. [19]) established exact recovery guarantees for mixed norms,
which may seem to be stronger than our guarantee given in Theorem 2. However, the assumptions
in [19] are much stronger than the assumptions of Theorem 2. In particular, they have strong noise
assumptions and a group RIP like assumption (Assumption 4.1-4.3 in their paper). In contrast,
we impose no such restrictions. We would like to stress that in many generic practical cases, the
assumptions of [19] will not hold. For example, when using decision stumps, features will be highly
correlated which will violate Assumption 4.3 of [19].
Another advantage of ShareBoost is that its only parameter is the desired number of non-zero
columns of W. Furthermore, obtaining the whole regularization path of ShareBoost, that is, the
curve of accuracy as a function of sparsity, can be performed by a single run of ShareBoost, which
is much easier than obtaining the whole regularization path of the convex relaxation in eqn. (5).
Last but not least, ShareBoost can work even when the initial number of features, d, is very large,
as long as there is an efficient way to choose the next feature. For example, when the features are
constructed using decision stumps, d will be extremely large, but ShareBoost can still be implemented efficiently. In contrast, when d is extremely large mix-norm regularization techniques yield
challenging optimization problems.
As mentioned before, ShareBoost follows the forward greedy selection approach for tackling the
hardness of solving eqn. (4). The greedy approach has been widely studied in the context of learning
sparse predictors for linear regression. However, in multiclass problems, one needs sparsity of
groups of variables (columns of W ). ShareBoost generalizes the fully corrective greedy selection
procedure given in [29] to the case of selection of groups of variables, and our analysis follows
similar techniques.
Obtaining group sparsity by greedy methods has been also recently studied in [20, 23], and indeed,
ShareBoost shares similarities with these works. We differ from [20] in that our analysis does not
impose strong assumptions (e.g. group-RIP) and so ShareBoost applies to a much wider array of
applications. In addition, the specific criterion for choosing the next feature is different. In [20], a
ratio between the difference in objective and the difference in costs is used. In ShareBoost, the ℓ1 norm of
the gradient matrix is used. For the multiclass problem with log loss, the criterion of ShareBoost
is much easier to compute, especially in large scale problems. [23] suggested many other selection
rules that are geared toward the squared loss, which is far from being an optimal loss function for
multiclass problems.
Another related method is the JointBoost algorithm [34]. While the original presentation in
[34] seems rather different from the type of predictors we describe in eqn. (1), it is possible
to show that JointBoost in fact learns a matrix W with additional constraints. In particular,
the features x are assumed to be decision stumps and each column W_{·,i} is constrained to be α_i (1[1 ∈ C_i], . . . , 1[k ∈ C_i]), where α_i ∈ R and C_i ⊆ Y. That is, the stump is shared by all classes in the subset C_i. JointBoost chooses such shared decision stumps in a greedy manner by
applying the GentleBoost algorithm on top of this presentation. A major disadvantage of JointBoost
is that in its pure form, it should exhaustively search for C among all 2^k possible subsets of Y. In practice, [34] relies on heuristics for finding C at each boosting step. In contrast, ShareBoost allows
the columns of W to be any real numbers, thus allowing "soft" sharing between classes. Therefore,
ShareBoost has the same (or even richer) expressive power comparing to JointBoost. Moreover,
ShareBoost automatically identifies the relatedness between classes (corresponding to choosing the
set C) without having to rely on exhaustive search. ShareBoost is also fully corrective, in the sense
that it extracts all the information from the selected features before adding new ones. This leads to
higher accuracy while using less features as was shown in our experiments on image classification.
Lastly, ShareBoost comes with theoretical guarantees.
Finally, we mention that feature sharing is merely one way for transferring information across classes
[32] and several alternative ways have been proposed in the literature such as target embedding
[18, 4], shared hidden structure [22, 1], shared prototypes [27], or sharing underlying metric [38].
2 The ShareBoost Algorithm
ShareBoost is a forward greedy selection approach for solving eqn. (4). Usually, in a greedy approach, we update the weight of one feature at a time. Now, we will update one column of W at a time (since the desired sparsity is over columns). We will choose the column that maximizes the ℓ1 norm of the corresponding column of the gradient of the loss at W. Since W is a matrix, ∇L(W) is a matrix of the partial derivatives of L. Denote by ∇_r L(W) the r-th column of ∇L(W), that is, the vector (∂L(W)/∂W_{1,r}, . . . , ∂L(W)/∂W_{k,r}). A standard calculation shows that
∂L(W)/∂W_{q,r} = (1/m) Σ_{(x,y) ∈ S} Σ_{c ∈ Y} ρ_c(x, y) x_r (1[q = c] − 1[q = y])
where
ρ_c(x, y) = e^{1[c ≠ y] − (W x)_y + (W x)_c} / Σ_{y′ ∈ Y} e^{1[y′ ≠ y] − (W x)_y + (W x)_{y′}}.   (6)
Note that Σ_c ρ_c(x, y) = 1 for all (x, y). Therefore, we can rewrite ∂L(W)/∂W_{q,r} = (1/m) Σ_{(x,y)} x_r (ρ_q(x, y) − 1[q = y]). Based on the above we have
‖∇_r L(W)‖_1 = (1/m) Σ_{q ∈ Y} | Σ_{(x,y)} x_r (ρ_q(x, y) − 1[q = y]) |.   (7)
Finally, after choosing the column for which ‖∇_r L(W)‖_1 is maximized, we re-optimize all the
columns of W which were selected so far. The resulting algorithm is given in Algorithm 1.
Algorithm 1 ShareBoost
1: Initialize: W = 0; I = ∅
2: for t = 1, 2, ..., T do
3:   For each class c and example (x, y) define ρ_c(x, y) as in eqn. (6)
4:   Choose feature r that maximizes the right-hand side of eqn. (7)
5:   I ← I ∪ {r}
6:   Set W ← argmin_W L(W) s.t. W_{·,i} = 0 for all i ∉ I
7: end for
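A minimal NumPy sketch of Algorithm 1 follows (our code, not the authors'; for simplicity, the fully corrective step 6 is carried out by plain gradient descent on the selected columns instead of Nesterov's accelerated method, and all function names are ours):

import numpy as np

def rho(W, X, Y):
    # rho_c(x, y) of eqn. (6): a softmax over the shifted margins
    # 1[c != y] - (W x)_y + (W x)_c.  X: m x d, Y: length-m int labels, W: k x d.
    m = X.shape[0]
    scores = X @ W.T                                   # row i holds (W x_i)_c
    shift = 1.0 - np.eye(W.shape[0])[Y]                # 1[c != y_i]
    logits = shift + scores - scores[np.arange(m), Y][:, None]
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)            # m x k

def grad(W, X, Y):
    # dL/dW_{q,r} = (1/m) sum_i x_{i,r} (rho_q(x_i, y_i) - 1[q = y_i]);  k x d.
    R = rho(W, X, Y) - np.eye(W.shape[0])[Y]
    return R.T @ X / X.shape[0]

def shareboost(X, Y, k, T, inner_steps=200, lr=0.5):
    W, I = np.zeros((k, X.shape[1])), []
    for _ in range(T):
        G = grad(W, X, Y)
        r = int(np.argmax(np.abs(G).sum(axis=0)))      # eqn. (7): max ell_1 column
        if r not in I:
            I.append(r)
        for _ in range(inner_steps):                   # step 6: refit selected columns
            W[:, I] -= lr * grad(W, X, Y)[:, I]
    return W, I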
The runtime of ShareBoost is as follows. Steps 3–5 require O(mdk). Step 6 is a convex optimization problem in tk variables and can be performed using various methods. In our experiments, we
used Nesterov's accelerated gradient method [25], whose runtime is O(mtk/√ε) for a smooth objective, where ε is the desired accuracy. Therefore, the overall runtime is O(Tmdk + T²mk/√ε). It is
interesting to compare this runtime to the complexity of minimizing the mixed-norm regularization
objective given in eqn. (5). Since that objective is no longer smooth, the runtime of using Nesterov's
accelerated method would be O(mdk/ε), which can be much larger than the runtime of ShareBoost
when d ≫ T.
2.1 Variants of ShareBoost
We now describe several variants of ShareBoost. The analysis we present in Section 4 can be easily
adapted for these variants as well.
Modifying the Greedy Choice Rule ShareBoost chooses the feature r which maximizes the ℓ1
norm of the r-th column of the gradient matrix. Our analysis shows that this choice leads to a sufficient decrease of the objective function. However, one can easily develop other ways of choosing
a feature which may potentially lead to an even larger decrease of the objective. For example, we
can choose a feature r that minimizes L(W) over matrices W with support I ∪ {r}. This will
lead to the maximal possible decrease of the objective function at the current iteration. Of course,
the runtime of choosing r will now be much larger. Some intermediate options are to choose the r that
minimizes min_{η∈ℝ} L(W + η ∇_r L(W) ē_r) or the r that minimizes min_{w∈ℝ^k} L(W + w ē_r), where ē_r is
the all-zero row vector except for a 1 in the r-th position.
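For illustration, the first of these intermediate rules can be sketched by a grid search over the step size, reusing rho and grad from the sketch above (our naming; the exact one-dimensional minimization is replaced by a coarse grid, so this only conveys the idea):

def loss(W, X, Y):
    # L(W) = (1/m) sum_i log sum_{y'} exp(1[y' != y_i] - (W x_i)_{y_i} + (W x_i)_{y'})
    m = X.shape[0]
    scores = X @ W.T
    logits = (1.0 - np.eye(W.shape[0])[Y]) + scores - scores[np.arange(m), Y][:, None]
    return float(np.mean(np.log(np.exp(logits).sum(axis=1))))

def choose_feature_linesearch(W, X, Y, etas=np.linspace(-2.0, 2.0, 41)):
    # Pick r minimizing min_eta L(W + eta * grad_r(W) e_r^T), a rank-one update.
    G = grad(W, X, Y)
    best_r, best_val = 0, np.inf
    for r in range(W.shape[1]):
        E = np.zeros_like(W)
        for eta in etas:
            E[:, r] = eta * G[:, r]
            val = loss(W + E, X, Y)
            if val < best_val:
                best_r, best_val = r, val
    return best_r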
5
Selecting a Group of Features at a Time In some situations, features can be divided into groups
where the runtime of calculating a single feature in each group is almost the same as the runtime of
calculating all features in the group. In such cases, it makes sense to choose groups of features at
each iteration of ShareBoost. This can be easily done by simply choosing the group of features J
that maximizes Σ_{j∈J} ‖∇_j L(W)‖_1.
Adding Regularization Our analysis implies that when |S| is significantly larger than Õ(Tk),
ShareBoost will not overfit. When this is not the case, we can incorporate regularization in the
objective of ShareBoost in order to prevent overfitting. One simple way is to add to the objective
function L(W) a Frobenius norm regularization term of the form λ Σ_{i,j} W_{i,j}², where λ is a regularization parameter. It is easy to verify that this is a smooth and convex function and therefore
we can easily adapt ShareBoost to deal with this regularized objective. It is also possible to rely
on other norms such as the ℓ1 norm or the ℓ1/ℓ∞ mixed-norm. However, there is one technicality due to the fact that these norms are not smooth. We can overcome this problem by defining
smooth approximations to these norms. The main idea is to first note that for a scalar a we have
|a| = max{a, −a}, and therefore we can rewrite the aforementioned norms using max and sum
operations. Then, we can replace each max expression with its soft-max counterpart and obtain a
smooth version of the overall norm function. For example, a smooth version of the ℓ1/ℓ∞ norm
will be

  ‖W‖_{1,∞} ≈ (1/β) Σ_{j=1}^{d} log( Σ_{i=1}^{k} (e^{β W_{i,j}} + e^{−β W_{i,j}}) ),

where β ≥ 1 controls the tradeoff between quality of approximation and smoothness.
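A minimal sketch of this smoothed penalty (our naming; beta plays the role of β above):

import numpy as np
from scipy.special import logsumexp

def smooth_l1_linf(W, beta=5.0):
    # (1/beta) * sum_j log sum_i (e^{beta*W_ij} + e^{-beta*W_ij}); as beta grows
    # this approaches sum_j max_i |W_ij|, i.e. the ell_1/ell_inf norm of W.
    Z = np.logaddexp(beta * W, -beta * W)   # log(e^{bW} + e^{-bW}), elementwise
    return float(logsumexp(Z, axis=0).sum() / beta)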
3 Non-Linear Prediction Rules
We now demonstrate how ShareBoost can be used for learning non-linear predictors. The main idea
is similar to the approach taken by Boosting and SVM. That is, we construct a non-linear predictor
by first mapping the original features into a higher dimensional space and then learning a linear
predictor in that space, which corresponds to a non-linear predictor over the original feature space.
To illustrate this idea we present two concrete mappings. The first is the decision stumps method
which is widely used by Boosting algorithms. The second approach shows how to use ShareBoost
for learning piece-wise linear predictors and is inspired by the super-vectors construction recently
described in [40].
3.1 ShareBoost with Decision Stumps
Let v ∈ ℝ^p be the original feature vector representing an object. A decision stump is a binary
feature of the form 1[v_i ≤ θ], for some feature i ∈ {1, ..., p} and threshold θ ∈ ℝ. To construct
a non-linear predictor we can map each object v into a feature vector x that contains all possible
decision stumps. Naturally, the dimensionality of x is very large (in fact, it can even be infinite),
and calculating Step 4 of ShareBoost may take forever. Luckily, a simple trick yields an efficient
solution. First note that for each i, all stump features corresponding to i can take at most m + 1
values on a training set of size m. Therefore, if we sort the values of v_i over the m examples in the
training set, we can calculate the value of the right-hand side of eqn. (7) for all possible values of
θ in total time O(m). Thus, ShareBoost can be implemented efficiently with decision stumps.
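The sorting trick can be sketched as follows for a single raw coordinate (our naming; R is the m x k matrix of the terms ρ_q(x, y) − 1[q = y] from eqn. (7), and ties between equal feature values are ignored for brevity):

import numpy as np

def best_stump_for_coordinate(v_i, R):
    # The stump 1[v_i <= theta] selects a prefix of the examples sorted by v_i,
    # so cumulative sums of the rows of R score every threshold in one pass.
    order = np.argsort(v_i)
    prefix = np.cumsum(R[order], axis=0)               # m x k
    scores = np.abs(prefix).sum(axis=1) / len(v_i)     # eqn. (7) per threshold
    j = int(np.argmax(scores))
    return v_i[order][j], scores[j]                    # best theta and its score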
3.2 Learning Piece-wise Linear Predictors with ShareBoost
To motivate our next construction let us consider first a simple one-dimensional function estimation problem. Given a sample
(x_1, y_1), ..., (x_m, y_m) we would like to find a function f : ℝ → ℝ such that f(x_i) ≈ y_i for all i. The class of piece-wise linear
functions can be a good candidate for the approximation function f. See for example the illustration in Fig. 1. In fact, it is easy to
verify that all smooth functions can be approximated by piece-wise linear functions (see for example the discussion in [40]).

[Figure 1: Motivating super vectors.]

In general, we can express piece-wise linear vector-valued functions as

  f(v) = Σ_{j=1}^{q} 1[‖v − v_j‖ < r_j] (⟨u_j, v⟩ + b_j),

where q is the number of pieces, (u_j, b_j) represents the linear function corresponding to piece j, and (v_j, r_j)
represents the center and radius of piece j. This expression can also be written as a linear function
over a different domain, f(v) = ⟨w, ψ(v)⟩, where

  ψ(v) = [ 1[‖v − v_1‖ < r_1] [v, 1], ..., 1[‖v − v_q‖ < r_q] [v, 1] ].
In the case of learning a multiclass predictor, we shall learn a predictor v ↦ W ψ(v), where W will
be a k by dim(ψ(v)) matrix. ShareBoost can be used for learning W. Furthermore, we can apply
the variant of ShareBoost described in Section 2.1 to learn a piece-wise linear model with few pieces
(that is, each group of features will correspond to one piece of the model). In practice, we first define
a large set of candidate centers by applying some clustering method to the training examples, and
second we define a set of possible radii by taking quantile values from the training examples.
Then, we train ShareBoost so as to choose a multiclass predictor that uses only a few pairs (v_j, r_j).
The advantage of using ShareBoost here is that while it learns a non-linear model it will try to find
a model with few linear "pieces", which is advantageous both in terms of test runtime as well as in
terms of generalization performance.
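A sketch of this construction (our naming and parameter choices; k-means supplies the candidate centers and distance quantiles the radii, as described above):

import numpy as np
from sklearn.cluster import KMeans

def build_pieces(V, q=20, radius_quantile=0.5):
    # V: m x p matrix of raw feature vectors.
    centers = KMeans(n_clusters=q, n_init=10).fit(V).cluster_centers_
    dists = np.linalg.norm(V[:, None, :] - centers[None, :, :], axis=2)
    radii = np.quantile(dists, radius_quantile, axis=0)
    return centers, radii

def psi(v, centers, radii):
    # psi(v) = [ 1[||v - v_j|| < r_j] * [v, 1] ]_{j=1..q}, flattened.
    blocks = []
    for cj, rj in zip(centers, radii):
        active = float(np.linalg.norm(v - cj) < rj)
        blocks.append(active * np.append(v, 1.0))
    return np.concatenate(blocks)                      # length q * (p + 1)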
4 Analysis
In this section we provide formal guarantees for the ShareBoost algorithm. The proofs are deferred
to the appendix. We first show that if the algorithm has managed to find a matrix W with a small
number of non-zero columns and a small training error, then the generalization error of W is also
small. The bound below is in terms of the 0–1 loss. A related bound, which is given in terms of the
convex loss function, is described in [39].

Theorem 1 Suppose that the ShareBoost algorithm runs for T iterations and let W be its output
matrix. Then, with probability of at least 1 − δ over the choice of the training set S we have that

  P_{(x,y)∼D}[h_W(x) ≠ y] ≤ P_{(x,y)∼S}[h_W(x) ≠ y] + O( √( (Tk log(Tk) log(k) + T log(d) + log(1/δ)) / |S| ) ).
Next, we analyze the sparsity guarantees of ShareBoost. As mentioned previously, exactly solving
eqn. (4) is known to be NP hard. The following main theorem gives an interesting approximation
guarantee. It tells us that if there exists an accurate solution with small ℓ1,∞ norm, then the ShareBoost algorithm will find a good sparse solution.

Theorem 2 Let ε > 0 and let W⋆ be an arbitrary matrix. Assume that we run the ShareBoost
algorithm for T = ⌈4‖W⋆‖²_{1,∞}/ε⌉ iterations and let W be the output matrix. Then ‖W‖_{1,0} ≤ T
and L(W) ≤ L(W⋆) + ε.
5 Experiments
In this section we demonstrate the merits (and pitfalls) of ShareBoost by comparing it to alternative
algorithms in different scenarios. The first experiment exemplifies the feature sharing property of
ShareBoost. We perform experiments with an OCR data set and demonstrate a mild growth of the
number of features as the number of classes grows from 2 to 36. The second experiment shows
that ShareBoost can construct predictors with state-of-the-art accuracy while requiring only a few
features, which amounts to fast prediction runtime. The third experiment, which due to lack of
space is deferred to Appendix A.3, compares ShareBoost to mixed-norm regularization and to the
JointBoost algorithm of [34]. We follow the same experimental setup as in [12]. The main finding
is that ShareBoost outperforms the mixed-norm regularization method when the output predictor
needs to be very sparse, while mixed-norm regularization can be better in the regime of rather dense
predictors. We also show that ShareBoost is both faster and more accurate than JointBoost.
Feature Sharing The main motivation for deriving the ShareBoost algorithm is the need for
a multiclass predictor that uses only a few features, and in particular, the number of features
should increase slowly with the number of classes. To demonstrate this property of ShareBoost we experimented with the Char74k data set, which consists of images of digits and
letters. We trained ShareBoost with the number of classes varying from 2 classes to the
36 classes corresponding to the 10 digits and 26 capital letters. We calculated how many
features were required to achieve a certain fixed accuracy as a function of the number of
classes. Due to lack of space, the description of the feature space is deferred to the appendix.
We compared ShareBoost to the 1-vs-rest approach, where in the
latter, we trained each binary classifier using the same mechanism
as used by ShareBoost. Namely, we minimize the binary logistic
loss using a greedy algorithm. Both methods aim at constructing sparse predictors using the same greedy approach. The difference between the methods is that ShareBoost selects features
in a shared manner while the 1-vs-rest approach selects features
for each binary problem separately. In Fig. 2 we plot the overall
number of features required by both methods to achieve a fixed
accuracy on the test set as a function of the number of classes. As
can be easily seen, the increase in the number of required features
is mild for ShareBoost but significant for the 1-vs-rest approach.
[Figure 2: The number of features required to achieve a fixed accuracy as a function of the number of classes for ShareBoost (dashed) and the 1-vs-rest approach (solid circles). The blue lines are for a target error of 20% and the green lines are for 8%.]
Constructing fast and accurate predictors The goal of this experiment is to show that ShareBoost achieves state-of-the-art performance while constructing very fast predictors. We
experimented with the MNIST digit dataset, which consists of a training set of 60,000 digits represented by centered, size-normalized 28 × 28 images, and a test set of 10,000 digits. The MNIST dataset has been extensively studied and is considered a standard test for multiclass classification of handwritten digits. The SVM algorithm with a Gaussian kernel achieves an error rate of 1.4% on the test set.
The error rates achieved by the most advanced algorithms are below 1% on the test set. See
http://yann.lecun.com/exdb/mnist/. In particular, the top MNIST performer [6] uses
a feed-forward neural net with 7.6 million connections, which roughly translates to 7.6 million
multiply-accumulate (MAC) operations at run-time as well. During training, geometrically distorted versions of the original examples were generated in order to expand the training set, following
[30], who introduced a warping scheme for that purpose. The top performance error rate stands at
0.35% at a run-time cost of 7.6 million MAC operations per test example.
The error rate of ShareBoost with T = 266 rounds stands at 0.71% using the original training set, and at 0.47% with the expanded training set of 360,000 examples generated by adding
five deformed instances per original example and with T = 305 rounds. Fig. 3 displays the convergence curve of the error rate as a
function of the number of rounds. Note that the training error is higher than the test error. This follows from the fact that the
training set was expanded with 5 fairly strong deformed versions of each input, using the method in [30]. As can be seen, fewer than
75 features suffice to obtain an error rate of < 1%.

[Figure 3: The test error rate of ShareBoost on the MNIST dataset as a function of the number of rounds using patch-based features.]

In terms of run-time on a test image, the system requires 305 convolutions of 7×7 templates and 540 dot-product operations, which
totals roughly 3.3×10^6 MAC operations, compared to around 7.5×10^6 MAC operations of the top MNIST performer. The error
rate of 0.47% is better than that reported by [10], who used a 1-vs-all SVM with a 9-degree polynomial kernel and an expanded training set of 780,000 examples. The number of support vectors
(accumulated over the ten separate binary classifiers) was 163,410, giving rise to a run-time 21-fold larger than that of ShareBoost. Moreover, due to the fast convergence of ShareBoost, 75 rounds are
enough for achieving less than 1% error.
Acknowledgements: We would like to thank Itay Erlich and Zohar Bar-Yehuda for their contribution to the implementation of ShareBoost and to Ronen Katz for helpful comments.
References
[1] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In International Conference on Machine Learning, 2007.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, pages 41–48, 2006.
[3] F. R. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
[4] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In NIPS, 2011.
[5] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51:4203–4215, 2005.
[6] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets excel on handwritten digit recognition. CoRR, 2010.
[7] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[8] A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass learnability and the ERM principle. In COLT, 2011.
[9] G. Davis, S. Mallat, and M. Avellaneda. Greedy adaptive approximation. Journal of Constructive Approximation, 13:57–98, 1997.
[10] D. Decoste and B. Schölkopf. Training invariant support vector machines. Machine Learning, 46:161–190, 2002.
[11] D. L. Donoho. Compressed sensing. Technical Report, Stanford University, 2006.
[12] J. Duchi and Y. Singer. Boosting with structural sparsity. In Proc. ICML, pages 297–304, 2009.
[13] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
[14] M. Fink, S. Shalev-Shwartz, Y. Singer, and S. Ullman. Online multiclass learning by interclass hypothesis sharing. In International Conference on Machine Learning, 2006.
[15] Y. Freund and R. E. Schapire. A short introduction to boosting. Journal of Japanese Society for AI, pages 771–780, 1999.
[16] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, pages 119–139, 1997.
[17] T. J. Hastie and R. J. Tibshirani. Generalized Additive Models. Chapman & Hall, 1995.
[18] D. Hsu, S. M. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, 2010.
[19] J. Huang and T. Zhang. The benefit of group sparsity. Annals of Statistics, 38(4), 2010.
[20] J. Huang, T. Zhang, and D. N. Metaxas. Learning with structured sparsity. In ICML, 2009.
[21] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, pages 27–72, 2004.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pages 2278–2324, 1998.
[23] A. Majumdar and R. K. Ward. Fast group sparse classification. Canadian Journal of Electrical and Computer Engineering, 34(4):136–144, 2009.
[24] B. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, pages 227–234, 1995.
[25] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Netherlands, 2004.
[26] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for l1,∞ regularization. In ICML, page 108, 2009.
[27] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008.
[28] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1–40, 1999.
[29] S. Shalev-Shwartz, T. Zhang, and N. Srebro. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 20:2807–2832, 2010.
[30] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Document Analysis and Recognition, 2003.
[31] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[32] S. Thrun. Learning to Learn: Introduction. Kluwer Academic Publishers, 1996.
[33] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58(1):267–288, 1996.
[34] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pages 854–869, 2007.
[35] J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655–4666, 2007.
[36] B. A. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 47, 2005.
[37] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[38] E. Xing, A. Y. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS, 2003.
[39] T. Zhang. Class-size independent generalization analysis of some discriminative multi-category classification. In NIPS, 2004.
[40] X. Zhou, K. Yu, T. Zhang, and T. Huang. Image classification using super-vector coding of local image descriptors. In Computer Vision – ECCV 2010, pages 141–154, 2010.
3,550 | 4,214 | Estimating time-varying input signals and ion
channel states from a single voltage trace of a neuron
Ryota Kobayashi*
Department of Human and Computer Intelligence, Ritsumeikan University
Siga 525-8577, Japan
[email protected]
Yasuhiro Tsubo
Laboratory for Neural Circuit Theory, Brain Science Institute, RIKEN
2-1 Hirosawa Wako, Saitama 351-0198, Japan
[email protected]
Petr Lansky
Institute of Physiology, Academy of Sciences of the Czech Republic
Videnska 1083, 142 20 Prague 4, Czech Republic
[email protected]
Shigeru Shinomoto
Department of Physics, Kyoto University
Kyoto 606-8502, Japan
[email protected]
Abstract
State-of-the-art statistical methods in neuroscience have enabled us to fit mathematical models to experimental data and subsequently to infer the dynamics of
hidden parameters underlying the observable phenomena. Here, we develop a
Bayesian method for inferring the time-varying mean and variance of the synaptic
input, along with the dynamics of each ion channel from a single voltage trace of a
neuron. An estimation problem may be formulated on the basis of the state-space
model with prior distributions that penalize large fluctuations in these parameters.
After optimizing the hyperparameters by maximizing the marginal likelihood, the
state-space model provides the time-varying parameters of the input signals and
the ion channel states. The proposed method is tested not only on the simulated
data from Hodgkin–Huxley type models but also on experimental data obtained from a cortical slice in vitro.
1 Introduction
Owing to the great advancements in measurement technology, a huge amount of data is generated
in the field of science, engineering, and medicine, and accordingly, there is an increasing demand
for estimating the hidden states underlying the observed signals. Neurons transmit information by
transforming synaptic inputs into action potentials; therefore, it is essential to investigate the dynamics of the synaptic inputs to understand the mechanism of the information processing in neuronal
systems. Here we propose a method to deduce the dynamics from experimental data.
* Webpage: http://www.ritsumei.ac.jp/~r-koba84/index.html
Cortical neurons in vivo receive synaptic bombardments from thousands of neurons, which cause the
membrane voltage to ?uctuate irregularly. As each synaptic input is small and the synaptic input rate
is high, the total input can be characterized only with its mean and variance, as in the mathematical
description of Brownian motion of a small particle suspended in a ?uid. Given the information of
the mean and variance of the synaptic input, it is possible to estimate the underlying excitatory and
inhibitory ?ring rates from respective populations of neurons.
The membrane voltage ?uctuations in a neuron are caused not only by the synaptic input but also
by the hidden dynamics of ionic channels. These dynamics can be described by conductance-based
models, including the Hodgkin?Huxley model. Many studies have been reported on the dynamics
of ionic channels and their impact on neural coding properties [1].
There have been attempts to decode a voltage trace in terms of input parameters; the maximum likelihood estimator for current inputs was derived under an assumption of linear leaky integration [2, 3].
Empirical attempts were made to infer conductance inputs by fitting an approximate distribution of
the membrane voltage to the experimental data [4, 5]. A linear regression method was proposed to
infer the maximal ionic conductances and single synaptic inputs in the dendrites [6]. In all these studies,
the input parameters were assumed to be constant in time. In practice, however, such an assumption
of constant input parameters is too strong a simplification for neuronal firing [7, 8].
In this paper, we propose a method for the simultaneous identification of the time-varying input
parameters and of the ion-channel dynamics from a single voltage trajectory. The problem is ill-posed, in the sense that the set of parameters giving rise to a particular voltage trace cannot be
uniquely determined. However, the problem may be formulated as a statistical problem of estimating
the hidden state using a state-space model, and then it is solvable. We verify the proposed method
by applying it not only to numerical data obtained from Hodgkin–Huxley type models but also
to biological data obtained in an in vitro experiment.
2 Model
2.1 Conductance-based model
We start from the conductance-based neuron model [1]:

  dV/dt = −ḡ_leak (V − E_leak) − Σ_ion J_ion(V, w⃗) + J_syn(t),   (1)

where ḡ_leak := g_leak/C_m, J_ion := I_ion/C_m, J_syn(t) := I_syn(t)/C_m,
V is the membrane voltage, ḡ_leak is the normalized leak conductance, E_leak is the reversal potential,
J_ion are the voltage-dependent ionic inputs, w⃗ := (w_1, w_2, ..., w_d) are the gating variables that
characterize the states of ion channels, J_syn is a synaptic input, C_m is the membrane capacitance,
I_ion are the voltage-dependent ionic currents and I_syn(t) is a synaptic input current. The ionic
inputs J_ion are a nonlinear function of V and w⃗. Each gating variable w_i (i = 1, ..., d) follows the
Langevin equation [9]:

  dw_i/dt = α_i(V)(1 − w_i) − β_i(V) w_i + s_i ξ_i(t),   (2)

where α_i(V), β_i(V) are nonlinear functions of the voltage, s_i is the standard deviation of the channel
noise, and ξ_i(t) is an independent Gaussian white noise with zero mean and unit variance. The
synaptic input J_syn(t) is the sum of the synaptic inputs from a large number of presynaptic neurons.
If each synaptic input is weak and the synaptic time constants are small, we can adopt a diffusion
approximation [10],

  J_syn(t) = μ(t) + σ(t) ξ(t),   (3)

where μ(t), σ(t) are the instantaneous mean and standard deviation of the synaptic input, and ξ(t)
is Gaussian white noise with zero mean and unit variance. The components μ(t) and σ²(t) are
considered to be the input signals to a neuron.
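For concreteness, eqs. (1)–(3) can be integrated with a simple Euler–Maruyama scheme; the following sketch is ours (not the authors' simulation code), with the ionic term J_ion and the rate functions alpha, beta supplied as callables:

import numpy as np

def simulate(V0, w0, g_leak, E_leak, J_ion, alpha, beta, s, mu, sigma,
             dt=0.01, T=1000.0, seed=0):
    # Euler-Maruyama integration of eqs. (1)-(3); alpha(V), beta(V), s are
    # per-channel arrays, mu(t), sigma(t) the input signals.
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    V, w, Vs = float(V0), np.array(w0, dtype=float), np.empty(n)
    for j in range(n):
        t = j * dt
        dw = (alpha(V) * (1.0 - w) - beta(V) * w) * dt \
             + s * np.sqrt(dt) * rng.standard_normal(w.shape)          # eqn. (2)
        dV = (-g_leak * (V - E_leak) - J_ion(V, w) + mu(t)) * dt \
             + sigma(t) * np.sqrt(dt) * rng.standard_normal()          # eqns. (1), (3)
        V, w = V + dV, np.clip(w + dw, 0.0, 1.0)
        Vs[j] = V
    return Vs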
2.2 Estimation Problem
The problem is to find the parameters of model (1–3) from a single voltage trace {V(t)}. There are
three kinds of parameters in the model. The first kind is the input signals {μ(t), σ²(t)}. The second
kind is the gating variables {w⃗(t)} that characterize the activity of the ionic channels. The remaining
parameters are the intrinsic parameters of a neuron, such as the standard deviation of the channel
noise, the functional form of the voltage-dependent ionic inputs, and that of the rate constants. Some
of these parameters, i.e., J_ion(V, w⃗), α_i(V), β_i(V), ḡ_leak and E_leak, are measurable by additional
experiments. After determining such intrinsic parameters of the third group by separate experiments,
we estimate the parameters of the first and second group from a single voltage trace.
3 Method

Because of the ill-posedness of the estimation problem, we cannot determine the input signals from
a voltage trace alone. To overcome this, we introduce random-walk-type priors for the input signals.
Then, we determine the hyperparameters using the EM algorithm. Finally, we evaluate the Bayesian
estimate for the input signals and the ion channel states with the Kalman filter and smoothing algorithm. Figure 1 is a schematic of the estimation method.
3.1 Priors for Estimating Input Parameters
Let us assume, for the sake of simplicity, that the voltage is sampled at N equidistant steps Δt,
denoting by V_j the observed voltage at time jΔt. To apply the Bayesian approach, the conductance-based model (1, 3) is modified into the discretized form:

  V_{j+1} = V_j + { −ḡ_leak (V_j − E_leak) − Σ_ion J_ion(V_j, w⃗_j) + M_j } Δt + S_j √Δt ξ_j,   (4)

where {M_j, S_j} are random functions of time and ξ_j is a standard Gaussian random variable. It is
not possible to infer the large set of parameters {M_j, S_j} from a single voltage trace {V_j} alone,
because the number of parameters overwhelms the number of data points. To resolve this, we introduce
random-walk-type priors, i.e., we assume that the random functions are sufficiently smooth to satisfy
the following conditions [11]:

  P[M_{j+1} | M_j = m] ∼ N(m, γ_M² Δt),   (5)
  P[S_{j+1} | S_j = s] ∼ N(s, γ_S² Δt),   (6)

where γ_M and γ_S are hyperparameters that regulate the smoothness of M(t) and S(t), respectively,
and N(μ, σ²) represents the Gaussian distribution with mean μ and variance σ².
3.2 Formulation as a State Space model
The model described in the previous sections can be represented as a state-space model, in which
x⃗_j ≡ (M_j, S_j, w⃗_j) are the (d+2)-dimensional states, and Z_j ≡ V_{j+1} − V_j (j = 1, ..., N−1) are
the observations. The kinetic equations (2) and the prior distributions (5, 6) can be rewritten as

  x⃗_{j+1} = F_j x⃗_j + u⃗_j + G ξ⃗_j,   (7)

where

  F_j = diag(1, 1, a_{1;j}, a_{2;j}, ..., a_{d;j}),
  G = diag(γ_M √Δt, γ_S √Δt, s_1 √Δt, s_2 √Δt, ..., s_d √Δt),
  u⃗_j = (0, 0, b_{1;j}, b_{2;j}, ..., b_{d;j})^T,

F_j and G are (d+2) × (d+2) diagonal matrices, u⃗_j is a (d+2)-dimensional vector, and ξ⃗_j is a
(d+2)-dimensional independent Gaussian random vector with zero mean and unit variance.
a_{i;j} and b_{i;j} are given by

  a_{i;j} = 1 − {α_i(V_j) + β_i(V_j)} Δt,   b_{i;j} = α_i(V_j) Δt.

The observation equation is obtained from Eq. (4):

  Z_j = −ḡ_leak (V_j − E_leak) Δt − Σ_ion J_ion(V_j, w⃗_j) Δt + M_j Δt + S_j √Δt ξ_j,   (8)

where ξ_j is an independent Gaussian random variable with zero mean and unit variance. In the
estimation problem, only {V_j}_{j=1}^{N} are observable; {x⃗_j}_{j=1}^{N} are hidden variables because they
cannot be observed in an experiment.
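For concreteness, the per-step quantities of eqn. (7) can be assembled directly from the recorded voltage; a minimal sketch under our naming (alpha and beta return length-d arrays of the rate constants at a given voltage; gammaM, gammaS and s are the hyperparameters):

import numpy as np

def transition(Vj, alpha, beta, s, gammaM, gammaS, dt):
    # F_j, u_j and G for the state x_j = (M_j, S_j, w_1, ..., w_d).
    a = 1.0 - (alpha(Vj) + beta(Vj)) * dt              # a_{i;j}
    b = alpha(Vj) * dt                                 # b_{i;j}
    F = np.diag(np.concatenate(([1.0, 1.0], a)))
    u = np.concatenate(([0.0, 0.0], b))
    G = np.diag(np.concatenate(([gammaM, gammaS], s)) * np.sqrt(dt))
    return F, u, G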
Figure 1: A schema of the estimation procedure: A conductance-based model neuron [12] is driven
by a fluctuating input with mean μ(t) and variance σ²(t) varying in time. The μ(t) (black line) and
the μ(t) ± σ(t) (black dotted lines) are depicted in the second panel from the top. We estimate the
input signals {μ(t), σ²(t)} and the gating variables {m(t), h(t), n(t), p(t)} from a single voltage
trace (blue line). The estimated results are shown in the bottom panels. The input signals are in the
two panels and the ion channel states are in the right shaded box. Gray dashed lines are the true
values and red lines are their estimates.
3.3 Hyperparameter Optimization

We determine the d + 2 hyperparameters q⃗ := (γ_M², γ_S², s_1², ..., s_d²) by maximizing the marginal likelihood via the EM algorithm [13]. We maximize the likelihood integrated over the hidden variables
{x⃗_t}_{t=1}^{N−1}:

  q⃗_ML = argmax_{q⃗} p(Z_{1:N−1} | q⃗) = argmax_{q⃗} ∫ p(Z_{1:N−1}, x⃗_{1:N−1} | q⃗) dx⃗_{1:N−1},   (9)

where Z_{1:N−1} := {Z_j}_{j=1}^{N−1}, x⃗_{1:N−1} := {x⃗_j}_{j=1}^{N−1}, and dx⃗_{1:N−1} := Π_{j=1}^{N−1} dx⃗_j. The maximization
can be achieved by iteratively maximizing the Q function, the conditional expectation of the log
likelihood:

  q⃗_{k+1} = argmax_{q⃗} Q(q⃗ | q⃗_k),   (10)

where Q(q⃗ | q⃗_k) := E[ log(P[Z_{1:N−1}, x⃗_{1:N−1} | q⃗]) | Z_{1:N−1}, q⃗_k ],
q⃗_k is the k-th iterated estimate of q⃗, E[X|Y] is the conditional expectation of X given the value of
Y, and P[X|Y] is the conditional probability distribution of X given the value of Y.

The Q function can be written as

  Q(q⃗ | q⃗_k) = Σ_{j=1}^{N−1} E[ log(P[Z_j | x⃗_j]) | Z_{1:N−1}, q⃗_k ] + Σ_{j=1}^{N−2} E[ log(P[x⃗_{j+1} | x⃗_j, q⃗]) | Z_{1:N−1}, q⃗_k ].   (11)

The (k+1)-th iterated estimate of q⃗ is determined by the conditions ∂Q/∂q_i = 0:

  q_{i,k+1} = (1/((N−2)Δt)) Σ_{j=1}^{N−2} E[ (x_{i,j+1} − f_{i,j} x_{i,j} − u_{i,j})² | Z_{1:N−1}, q⃗_k ],   (12)

where q_{i,k+1} is the i-th component of q⃗_{k+1}, x_{i,j} is the i-th component of x⃗_j, f_{i,j} is the i-th diagonal
component of F_j, and u_{i,j} is the i-th component of u⃗_j. As the EM algorithm increases the marginal
likelihood at each iteration, the estimate converges to a local maximum. We calculate the conditional
expectations in Eq. (12) using the Kalman filter and smoothing algorithm [11, 14, 15, 16, 17].
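As a concrete illustration (ours, not the authors' code), the update of eqn. (12) can be written directly in terms of the smoothed posterior moments that a Kalman smoother returns (a filter-smoother sketch follows eqn. (13) below); Exx1 denotes the lag-one moments E[x⃗_{j+1} x⃗_j^T | Z], which standard smoother recursions also provide:

import numpy as np

def m_step(Ex, Exx, Exx1, Fdiag, U, dt):
    # Ex[j]: E[x_j | Z, q_k]; Exx[j]: E[x_j x_j^T | Z, q_k]; Exx1[j]: lag-one
    # moments; Fdiag[j], U[j]: diagonal of F_j and the vector u_j.
    N1, dim = Ex.shape
    q = np.zeros(dim)
    for j in range(N1 - 1):                            # expand the square in (12)
        f, u = Fdiag[j], U[j]
        q += (np.diag(Exx[j + 1]) - 2.0 * f * np.diag(Exx1[j])
              + f ** 2 * np.diag(Exx[j])
              - 2.0 * u * Ex[j + 1] + 2.0 * f * u * Ex[j] + u ** 2)
    return q / ((N1 - 1) * dt)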
3.4 Bayesian estimator for the input signal

After fitting the hyperparameters, we evaluate the Bayesian estimator for the input signals and the
gating variables:

  x⃗*_j = E[ x⃗_j | Z_{1:N−1}, q⃗ ],   (13)

where x⃗*_j is the Bayesian estimator for x⃗_j. Using this estimator, we can estimate not only the
(smoothly) time-varying mean and variance of the synaptic input {μ(t), σ²(t)}, but also the time
evolution of the gating variables w⃗(t). We evaluate the estimator (13) using a Kalman filter and
smoothing algorithm [11, 14, 15, 16, 17].
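The estimator (13) and the conditional expectations above are computed by a forward Kalman filter followed by a backward Rauch-Tung-Striebel (RTS) pass. The following generic sketch is ours, not the authors' implementation; because the observation (8) is nonlinear in the gating variables and its noise variance depends on S_j, the arrays Hs, c and Rs stand for a local, extended-Kalman-style linearization of eqn. (8) around the current state estimate (the lag-one covariances needed by eqn. (12) can be accumulated in the backward pass and are omitted here for brevity):

import numpy as np

def kalman_rts(Z, Fs, Us, Gs, Hs, c, Rs, x0, P0):
    # State: x_{j+1} = F_j x_j + u_j + G_j xi_j; observation: Z_j ~ N(h_j.x_j + c_j, R_j).
    N = len(Z)
    xf, Pf, xp, Pp = [], [], [], []
    x, P = np.array(x0, float), np.array(P0, float)
    for j in range(N):
        if j > 0:                                      # predict
            x = Fs[j - 1] @ x + Us[j - 1]
            P = Fs[j - 1] @ P @ Fs[j - 1].T + Gs[j - 1] @ Gs[j - 1].T
        xp.append(x); Pp.append(P)
        h = Hs[j]                                      # update (scalar observation)
        S = h @ P @ h + Rs[j]
        K = P @ h / S
        x = x + K * (Z[j] - (h @ x + c[j]))
        P = P - np.outer(K, h @ P)
        xf.append(x); Pf.append(P)
    xs, Ps = xf[:], Pf[:]                              # backward (RTS) smoothing
    for j in range(N - 2, -1, -1):
        C = Pf[j] @ Fs[j].T @ np.linalg.inv(Pp[j + 1])
        xs[j] = xf[j] + C @ (xs[j + 1] - xp[j + 1])
        Ps[j] = Pf[j] + C @ (Ps[j + 1] - Pp[j + 1]) @ C.T
    return np.array(xs), Ps

Here xs[j] is the Bayesian estimate x⃗*_j of eqn. (13).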
4 Applications

4.1 Estimating time-varying input signals and ion channel states in a conductance-based model
To test the accuracy and robustness of our method, we applied the proposed method to simulated
voltage traces. We adopted a Hodgkin–Huxley model with a microscopic description of ionic channels [18], which consists of two ionic inputs J_ion (ion ∈ {Na, Kd}): J_Na = γ_Na [m3h1] (V − E_Na)
and J_Kd = γ_K [n4] (V − E_K), where γ_Na(K) is the conductance of a single sodium (potassium)
ion channel in the open state, [m3h1] ([n4]) is the number of sodium (potassium) channels that are
open and E_Na(K) is the sodium (potassium) reversal potential. There are 8 (5) states in a sodium
(potassium) channel and the state transitions are described by a Markov chain model. Details of this
model can be found in [18].
First, we apply the proposed method to sinusoidally modulated input signals. Figure 2B compares
the time-varying input signals {μ(t), σ²(t)} with their estimates and Figure 2C compares the open
probability of each ion channel with its estimate. It is observed in this case that the method provides
an accurate estimate. Second, we examine whether the method can also work in the presence
of discontinuities in the input signals. Though discontinuous inputs do not satisfy the smoothness
assumptions (5, 6), the method gives accurate estimates (Figure 3A). Third, the estimation method is
applied to a conductance input model, which is given by

  J_syn(t) = ḡ_E Σ_{j,k} δ(t − t^k_{E,j}) (V_E − V(t)) + ḡ_I Σ_{j,k} δ(t − t^k_{I,j}) (V_I − V(t)),

where the subscript E (I) denotes the excitatory (inhibitory) synapse,
ḡ_E(I) is the normalized postsynaptic conductance, V_E(I) is the reversal potential, t^k_{E(I),j} is the
k-th spike time of the j-th presynaptic neuron, and δ(t) is the Dirac delta function. It can be seen
from Figure 3B that the method can provide accurate estimates except during action potentials, when
the input undergoes a rapid modulation. Fourth, the effect of observation noise on the estimation
accuracy is investigated. We introduce observation noise in the following manner: Z_{obs,j} =
Z_j + σ_obs ξ_j, where Z_{obs,j} := V_{obs,j+1} − V_{obs,j} is the observed value, V_{obs,j} is the recorded voltage
at time step j, σ_obs is the standard deviation of the observation noise and ξ_j is an independent
Gaussian random variable with zero mean and unit variance. Mathematically, this is equivalent to
assuming the observation noise is an additive Gaussian white noise on the voltage. In such a case, the
estimation method reckons the input variance as the sum of the original input variance σ²(t) and the
observation noise variance σ²_obs (Figure 3C).
Furthermore, we also tested the present framework for its potential applicability to more complicated conductance-based models, which have slow ionic currents. To observe this, we adopted a
conductance-based model proposed by Pospischil et al. [12] that has three ionic inputs J_ion (ion ∈
{Na, Kd, M}): J_Na = ḡ_Na m³h (V − E_Na), J_Kd = ḡ_Kd n⁴ (V − E_K) and J_M = ḡ_M p (V − E_K),
where {m, h, n, p} are the gating variables, ḡ_ion represents the normalized ionic conductances and
E_ion are the reversal potentials. (See [12] for details.) An example of the estimation result is shown
in Figure 1.
Figure 2: Estimation of input signals and ion channel states from the simulated data: A. Voltage
trace. B. Estimates of the mean μ and variance σ² of the input signals. C. Estimates of the ion channel
states. The time evolution of the open probabilities of sodium (Na) and potassium (K) channels are
shown. The gray dashed lines and red lines represent the true values and the estimates, respectively.
4.2 Estimating time-varying input signals and ion channel states in experimental data
We applied the proposed method to experimental data. A randomly fluctuating current, generated by
the sum of filtered time-dependent Poisson processes, was injected into a neuron in the rat motor
cortex and the membrane voltage was recorded intracellularly in vitro. Details of the experimental
procedure can be found in [19, 20]. We adopted the neuron model proposed by Pospischil et
al. [12] for the membrane voltage. After tuning the ionic conductances and kinetic parameters, the six
hyperparameters γ_{M,S} and s_{m,h,n,p} were optimized using Eq. (12). To avoid over-fitting, we set
the upper limit s_max = 0.002 for the hyperparameters of the gating variables. The observation noise
variance was estimated from data recorded in the absence of stimulation: σ²_obs = 0.66 [(mV)²/ms].
The variance of the input signal was estimated by subtracting the observation noise variance from
the estimated variance. In this way, the mean and standard deviation (SD) of the input as well as
the gating variables were estimated. The time-varying mean and SD of the input are compared with
their estimates in Figure 4B. The results suggest that the proposed method is applicable to such
experimental data.

Figure 3: Robustness of the estimation method: A. Constant input with a jump. B. Conductance
input. C. Sinusoidal input with observation noise. Voltage traces used for the estimation and estimates of the input signals {μ(t), σ²(t)} are shown. In A and B, the gray dashed and red lines
represent the true and the estimated input signals, respectively. In C, the blue dotted line represents the true input variance σ²(t), the gray dotted line represents the sum of the true input variance
and the true observation noise variance σ²_obs = 1.6 [(mV)²/ms], and the red line represents the
estimated variance.
5 Discussion

We have developed a method for estimating not only the time-varying mean and variance of the
synaptic input but also the ion channel states from a single voltage trace of a neuron. It was confirmed that the proposed method is capable of providing accurate estimates by applying it to simulated data. We also tested the general applicability of this method by applying it to experimental
data obtained with current injection into a neuron in a cortical slice preparation.
Until now, several attempts have been made to estimate the synaptic input from experimental data [2,
4, 5, 8, 21, 22]. The new aspects introduced in this paper are the implementation of a state-space
model that allows the input signals to fluctuate in time and the gating variables to vary according
to the voltage. However, the present method is implemented under several
simplifying assumptions, whose validity should be verified.
First, we approximated the synaptic inputs by white (uncorrelated) noise. In practice, the synaptic
inputs are conductance-based and inevitably have a correlation of a few milliseconds. We have
confirmed the applicability of the model to the numerical data generated with conductance input,
and also to the experimental data in which temporally correlated current is injected into a neuron. These
results indicate that the white noise assumption in our method applies robustly in practice.
Second, we constructed the state-space method by assuming smooth fluctuation of the input
signals, or equivalently, by penalizing rapid fluctuations in the prior distribution. By applying the
present method to the case of a stepwise change in the input signals, we found that the method is
rather robust against an abrupt change.
Third, we also approximated the channel noise by white noise. We tested our method by applying
it to a more realistic Hodgkin–Huxley type model in which the individual channels are modeled by
a Markov chain [18]. It was confirmed that the present white noise approximation is acceptable for
such realistic models.
Figure 4: Estimation of input signals and ion channel states from experimental data. A. Voltage trace
recorded intracellularly in vitro. A fluctuating current, with sinusoidally modulated mean and standard
deviation (SD), was injected into the neuron. B. Estimation of the time-varying mean and SD of the
input. The gray dashed and red lines represent the true values and the estimates, respectively. C. Estimation
of the ion channel states. The red lines represent the estimates of the gating variables.
Fourth, we ignored possible nonlinear effects in dendritic conduction such as dendritic spikes
and backpropagating action potentials. It would be worthwhile to consider augmenting the model by
dividing it into multiple compartments, as has been done in Huys et al. [6].
Fifth, in analyzing experimental data, we employed fixed functions for the ionic currents and the
rate constants and assumed that some of the intrinsic parameters are known. It may be possible
to infer the maximal ionic conductances using the particle filter method developed by Huys and
Paninski [23], but their method is not able to identify the ionic currents and the rate constants. In
our examination of biological data, we have explored parameters empirically from current-voltage
data. It would be an important direction of this study to develop the method such that models are
selected solely from the voltage trace.
Acknowledgments
This study was supported by the Support Center for Advanced Telecommunications Technology Research, Foundation; the Yazaki Memorial Foundation for Science and Technology; and the Ritsumeikan
University Research Funding Research Promoting Program "Young Scientists (Start-up)" and "General Research" to R.K.; a Grant-in-Aid for Young Scientists (B) from the MEXT Japan (22700323) to
Y.T.; Grants-in-Aid for Scientific Research from the MEXT Japan (20300083, 23115510) to S.S.;
and the Center for Neurosciences LC554, Grant No. AV0Z50110509 and the Grant Agency of the
Czech Republic, project P103/11/0282, to P.L.
References
[1] Koch, C. (1999) Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press.
[2] Lansky, P. (1983) Math. Biosci. 67: 247-260.
[3] Lansky, P. & Ditlevsen S. (2008) Biol. Cybern. 99: 253-262.
8
[4] Rudolph, M., Piwkowska, Z., Badoual, M., Bal, T. & Destexhe, A. (2004) J. Neurophysiol.
91: 2884-2896.
[5] Pospischil, M., Piwkowska, Z., Bal, T. & Destexhe, A. (2009) Neurosci. 158: 545-552.
[6] Huys, Q.J.M., Ahrens, M.B. & Paninski, L. (2006) J. Neurophysiol. 96: 872-890.
[7] Shinomoto, S., Sakai, S. & Funahashi, S. (1999) Neural Comput. 11: 935-951.
[8] DeWeese, M.R. & Zador, A.M. (2006) J. Neurosci. 26: 12206-12218.
[9] Fox, R.F. (1997) Biophys. J. 72: 2068-2074.
[10] Burkitt, A.N. (2006) Biol. Cybern. 95: 1-19.
[11] Kitagawa, G. & Gersch, W. (1996) Smoothness Priors Analysis of Time Series. New York:
Springer-Verlag.
[12] Pospischil, M., Toledo-Rodriguez, M., Monier, C., Piwkowska, Z., Bal, T., Fregnac, Y.,
Markram, H. & Destexhe, A. (2008) Biol. Cybern. 99: 427-441.
[13] Dempster, A.P., Laird, N.M. & Rubin, D.B. (1977) J. R. Stat. Soc. 39: 1-38.
[14] Smith, A.C. & Brown, E.N. (2003) Neural Comput. 15: 965-991.
[15] Eden, U.T., Frank, L.M., Barbieri, R., Solo, V. & Brown, E.N., (2004) Neural Comput. 16:
971-998.
[16] Paninski, L., Ahmadian, Y., Ferreira, D.G., Koyama, S., Rad, K.R., Vidne, M., Vogelstein, J.
& Wu, W. (2010) J. Comput. Neurosci. 29: 107-126.
[17] Koyama, S., Pérez-Bolde, L.C., Shalizi, C.R. & Kass, R.E. (2010) J. Am. Stat. Assoc. 105:
170-180.
[18] Schneidman, E., Freedman, B. & Segev, I. (1998) Neural Comput. 10: 1679-1703.
[19] Tsubo, Y., Takada, M., Reyes, A. D. & Fukai, T. (2007) Eur. J. Neurosci. 25: 3429-3441.
[20] Kobayashi, R., Tsubo, Y. & Shinomoto, S. (2009) Front. Comput. Neurosci. 3: 9.
[21] Lansky, P., Sanda, P. & He, J. (2006) J. Comput. Neurosci. 21: 211-223.
[22] Kobayashi, R., Shinomoto, S. & Lansky, P. (2011) Neural Comput. 23: 3070-3093.
[23] Huys, Q.J.M. & Paninski, L. (2009) PLoS Comput. Biol. 5: e1000379.
3,551 | 4,215 | Demixed Principal Component Analysis
Ranulfo Romo
Instituto de Fisiología Celular
Universidad Nacional Autónoma de México
Mexico City, Mexico
Wieland Brendel
Ecole Normale Supérieure, Paris, France
Champalimaud Neuroscience Programme
Lisbon, Portugal
Christian K. Machens
Ecole Normale Supérieure, Paris, France
Champalimaud Neuroscience Programme, Lisbon, Portugal
Abstract
In many experiments, the data points collected live in high-dimensional observation spaces, yet can be assigned a set of labels or parameters. In electrophysiological recordings, for instance, the responses of populations of neurons generally
depend on mixtures of experimentally controlled parameters. The heterogeneity and diversity of these parameter dependencies can make visualization and interpretation of such data extremely difficult. Standard dimensionality reduction
techniques such as principal component analysis (PCA) can provide a succinct
and complete description of the data, but the description is constructed independent of the relevant task variables and is often hard to interpret. Here, we start
with the assumption that a particularly informative description is one that reveals
the dependency of the high-dimensional data on the individual parameters. We
show how to modify the loss function of PCA so that the principal components
seek to capture both the maximum amount of variance about the data, while also
depending on a minimum number of parameters. We call this method demixed
principal component analysis (dPCA) as the principal components here segregate
the parameter dependencies. We phrase the problem as a probabilistic graphical
model, and present a fast Expectation-Maximization (EM) algorithm. We demonstrate the use of this algorithm for electrophysiological data and show that it serves
to demix the parameter-dependence of a neural population response.
1
Introduction
Samples of multivariate data are often connected with labels or parameters. In fMRI data or electrophysiological data from awake behaving humans and animals, for instance, the multivariate data
may be the voxels of brain activity or the firing rates of a population of neurons, and the parameters
may be sensory stimuli, behavioral choices, or simply the passage of time. In these cases, it is often
of interest to examine how the external parameters or labels are represented in the data set.
Such data sets can be analyzed with principal component analysis (PCA) and related dimensionality
reduction methods [4, 2]. While these methods are usually successful in reducing the dimensionality
of the data, they do not take the parameters or labels into account. Not surprisingly, then, they
often fail to represent the data in a way that simplifies the interpretation in terms of the underlying
parameters. On the other hand, dimensionality reduction methods that can take parameters into
account, such as canonical correlation analysis (CCA) or partial least squares (PLS) [1, 5], impose a
specific model of how the data depend on the parameters (e.g. linearly), which can be too restrictive.
We illustrate these issues with neural recordings collected from the prefrontal cortex (PFC) of monkeys performing a two-frequency discrimination task [9, 3, 7]. In this experiment a monkey received
two mechanical vibrations with frequencies f1 and f2 on its fingertip, delayed by three seconds. The
monkey then had to make a binary decision d depending on whether f1 > f2 . In the data set, each
neuron has a unique firing pattern, leading to a large diversity of neural responses. The firing rates of
three neurons (out of a total of 842) are plotted in Fig. 1, top row. The responses of the neurons mix
information about the different task parameters, a common observation for data sets of recordings
in higher-order brain areas, and a problem that exacerbates interpretation of the data.
Here we address this problem by modifying PCA such that the principal components depend on
individual task parameters while still capturing as much variance as possible. Previous work has
addressed the question of how to demix data depending on two [7] or several parameters [8], but did
not allow components that capture nonlinear mixtures of parameters. Here we extend this previous
work threefold: (1) we show how to systematically split the data into univariate and multivariate
parameter dependencies; (2) we show how this split suggests a simple loss function, capable of
demixing data with arbitrary combinations of parameters, (3) we introduce a probabilistic model for
our method and derive a fast algorithm using expectation-maximization.
2
Principal component analysis and the demixing problem
The firing rates of the neurons in our dataset depend on three external parameters: the time t, the
stimulus s = f1 , and the decision d of the monkey. We omit the second frequency f2 since this
parameter is highly correlated with f1 and d (the monkey makes errors in < 10% of the trials). Each
sample of firing rates in the population, yn , is therefore tagged with parameter values (tn , sn , dn ).
For notational simplicity, we will assume that each data point is associated with a unique set of
parameter values so that the parameter values themselves can serve as indices for the data points yn .
In turn, we drop the index n, and simply write ytsd .
The main aim of PCA is to find a new coordinate system in which the data can be represented in
a more succinct and compact fashion. The covariance matrix of the firing rates summarizes the
second-order statistics of the data set,
$$C = \langle y_{tsd}\, y_{tsd}^\top \rangle_{tsd} \qquad (1)$$
and has size $D \times D$, where $D$ is the number of neurons in the data set (we will assume the data are
centered throughout the paper). The angular bracket denotes averaging over all parameter values
(t, s, d), which corresponds to averaging over all data points. Given the covariance matrix, we can
compute the firing rate variance that falls along arbitrary directions in state space. For instance, the
variance captured by a coordinate axis given by a normalized vector w is simply
$$L = w^\top C w\,. \qquad (2)$$
The first principal component corresponds to the axis that captures most of the variance of the data,
and thereby maximizes the function $L$ subject to the normalization constraint $w^\top w = 1$. The
second principal component maximizes variance in the orthogonal subspace and so on [4, 2].
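For concreteness, Eqs. (1)-(2) translate into a few lines of numpy; this sketch (ours, not the paper's) stacks the data points as rows and reads off the top-Q axes by eigendecomposition:

```python
import numpy as np

def pca_components(Y, Q):
    """Y: (n_samples, D) centered data; returns top-Q axes and their variances."""
    C = Y.T @ Y / Y.shape[0]             # covariance matrix, Eq. (1)
    evals, evecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:Q]
    W = evecs[:, order]                  # principal axes w
    return W, evals[order]               # evals[order] = w^T C w, Eq. (2)
```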
PCA succeeds nicely in summarizing the population response for our data set: the first ten principal
components capture more than 90% of the variance of the data. However, PCA completely ignores
the causes of firing rate variability. Whether firing rates have changed due to the first stimulus
frequency s = f1 , due to the passage of time, t, or due to the decision, d, they will enter equally into
the computation of the covariance matrix and therefore do not influence the choice of the coordinate
system constructed by PCA. To clarify this observation, we will segregate the data ytsd into pieces
capturing the variability caused by different parameters.
Marginalized average. Let us denote the set of parameters by S = {t, s, d}. For every subset of
S we construct a "marginalized average",
$$\bar y_t := \langle y_{tsd} \rangle_{sd}\,, \qquad \bar y_s := \langle y_{tsd} \rangle_{td}\,, \qquad \bar y_d := \langle y_{tsd} \rangle_{ts}\,, \qquad (3)$$
$$\bar y_{ts} := \langle y_{tsd} \rangle_{d} - \bar y_t - \bar y_s\,, \qquad \bar y_{td} := \langle y_{tsd} \rangle_{s} - \bar y_t - \bar y_d\,, \qquad \bar y_{sd} := \langle y_{tsd} \rangle_{t} - \bar y_s - \bar y_d\,, \qquad (4)$$
$$\bar y_{tsd} := y_{tsd} - \bar y_{ts} - \bar y_{td} - \bar y_{sd} - \bar y_t - \bar y_s - \bar y_d\,, \qquad (5)$$
where $\langle y_{tsd} \rangle_{\phi}$ denotes the average of the data over the subset $\phi \subseteq S$. In $\bar y_t = \langle y_{tsd} \rangle_{sd}$, for instance,
we average over all parameter values $(s, d)$ such that the remaining variation of the averaged data
only comes from $t$.
[Figure 1 graphic: firing-rate traces (Hz, 0-60) over time (s, 0-4) and variance-composition bars for the panels "sample neurons", "PCA", and "naive demixing".]
Figure 1: (Top row) Firing rates of three (out of D = 842) neurons recorded in the PFC of monkeys
discriminating two vibratory frequencies. The two stimuli were presented during the shaded periods.
The rainbow colors indicate different stimulus frequencies f1 , black and gray indicate the decision
of the monkey during the interval [3.5,4.5] sec. (Bottom row) Relative contribution of time (blue),
stimulus (light blue), decision (green), and non-linear mixtures (yellow) to the total variance for a
sample of 14 neurons (left), the top 14 principal components (middle), and naive demixing (right).
In $\bar y_{ts}$, we subtract all variation due to $t$ or $s$ individually, leaving only variation
that depends on combined changes of $(t, s)$. These marginalized averages are orthogonal, so that
$$\langle \bar y_\phi^\top \bar y_{\phi'} \rangle = 0 \quad \text{if } \phi \neq \phi'\,, \ \forall\, \phi, \phi' \subseteq S\,. \qquad (6)$$
At the same time, their sum reconstructs the original data,
$$y_{tsd} = \bar y_t + \bar y_s + \bar y_d + \bar y_{ts} + \bar y_{td} + \bar y_{sd} + \bar y_{tsd}\,. \qquad (7)$$
The latter two properties allow us to segregate the covariance matrix of $y_{tsd}$ into "marginalized
covariance matrices" that capture the variance in a subset of parameters $\phi \subseteq S$,
$$C = C_t + C_s + C_d + C_{ts} + C_{td} + C_{sd} + C_{tsd}\,, \qquad \text{with} \qquad C_\phi = \langle \bar y_\phi\, \bar y_\phi^\top \rangle\,.$$
Note that here we use the parameters $\{t, s, d\}$ as labels, whereas they are indices in Eq. (3)-(5)
and (7). For a given component $w$, the marginalized covariance matrices allow us to calculate the
variance $x_\phi$ of $w$ conditioned on $\phi \subseteq S$ as
$$x_\phi^2 = w^\top C_\phi w\,,$$
so that the total variance is given by $L = \sum_\phi x_\phi^2 =: \|x\|_2^2$.
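To make the split concrete, here is a minimal numpy sketch of Eqs. (3)-(7) and the marginalized covariances; the array layout (time × stimulus × decision × neurons) and the helper names are our own illustration, not code from the paper:

```python
import numpy as np
from itertools import combinations

def marginalize(Y):
    """Eqs. (3)-(5): Y is a centered (T, S, D_dec, D) array of firing rates."""
    axes = {'t': 0, 's': 1, 'd': 2}
    parts = {}
    for k in (1, 2, 3):
        for phi in combinations('tsd', k):
            other = tuple(axes[a] for a in axes if a not in phi)
            avg = Y.mean(axis=other, keepdims=True)
            for psi in parts:                      # subtract lower-order terms
                if set(psi) < set(phi):
                    avg = avg - parts[psi]
            parts[phi] = avg
    return parts                                   # Eq. (7): parts sum back to Y

def marginal_covariances(parts, shape):
    """C_phi = <ybar_phi ybar_phi^T>, averaged over all parameter values."""
    D = shape[-1]
    covs = {}
    for phi, p in parts.items():
        flat = np.broadcast_to(p, shape).reshape(-1, D)
        covs[phi] = flat.T @ flat / flat.shape[0]
    return covs
```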
Using this segregation, we are able to examine the distribution of variance in the PCA components and the original data. In Fig. 1, bottom row, we plot the relative contributions of time (blue;
computed as $x_t^2/\|x\|_2^2$), decision (light blue; computed as $(x_d^2 + x_{td}^2)/\|x\|_2^2$), stimulus (green; computed as $(x_s^2 + x_{ts}^2)/\|x\|_2^2$), and nonlinear mixtures of stimulus and decision (yellow; computed as
$(x_{sd}^2 + x_{tsd}^2)/\|x\|_2^2$) for a set of sample neurons (left) and for the first fourteen components of PCA
different task parameters, reaffirming the heterogeneity of neural responses. While the situation is
slightly better for the PCA components, we still find a strong mixing of the task parameters.
To improve visualization of the data and to facilitate the interpretation of individual components, we
would prefer components that depend on only a single parameter, or, more generally, that depend on
the smallest number of parameters possible. At the same time, we would want to keep the attractive
properties of PCA in which every component captures as much variance as possible about the data.
Naively, we could simply combine eigenvectors from the marginalized covariance matrices. For example, consider the first Q eigenvectors of each marginalized covariance matrix.
[Figure 2 graphic: three contour plots of the objective over $(x_1, x_2)$ for PCA/dPCA with $\lambda = 0$, $\lambda = 1$, and $\lambda = 4$; both axes run from 0 to 100.]
Figure 2: Illustration of the objective functions. The PCA objective function corresponds to the
L2-norm in the space of standard deviations, x. Whether a solution falls into the center or along
the axis does not matter, as long as it captures a maximum of overall variance. The dPCA objective
functions (with parameters $\lambda = 1$ and $\lambda = 4$) prefer solutions along the axes over solutions in the
center, even if the solutions along the axes capture less overall variance.
Apply symmetric orthogonalization to these eigenvectors and choose the Q coordinates that capture the most variance.
The resulting variance distribution is plotted in Fig. 1 (bottom, right). While the parameter dependence of the components is sparser than in PCA, there is a strong bias towards time, and variance
induced by the decision of the monkey is squeezed out. As a further drawback, naive demixing
covers only 84.6% of the total variance compared with 91.7% for PCA. We conclude that we have
to rely on a more systematic approach based specifically on an objective that promotes demixing.
3
Demixed principal component analysis (dPCA): Loss function
With respect to the segregated covariances, the PCA objective function, Eq. (2), can be written as
$L = w^\top C w = \sum_\phi w^\top C_\phi w = \sum_\phi x_\phi^2 = \|x\|_2^2$. This function is illustrated in Fig. 2 (left), and
shows that PCA will maximize variance, no matter whether this variance comes about through a
single marginalized variance, or through mixtures thereof.
Consequently, we need to modify this objective function such that solutions $w$ that do not mix
variances, thereby falling along one of the axes in x-space, are favored over solutions $w$ that fall
into the center in x-space. Hence, we seek an objective function $L = L(x)$ that grows monotonically
with any $x_\phi$ such that more variance is better, just as in PCA, and that grows faster along the axes
than in the center so that mixtures of variances get punished. A simple way of imposing this is
$$L_{\mathrm{dPCA}} = \|x\|_2^2 \left( \frac{\|x\|_2}{\|x\|_1} \right)^{\lambda} \qquad (8)$$
where $\lambda \ge 0$ controls the tradeoff. This objective function is illustrated in Fig. 2 (center and right)
for two values of $\lambda$. Here, solutions $w$ that lead to mixtures of variances are punished against
solutions that do not mix variances.
Note that the objective function is a function of the coordinate axis w, and the aim is to maximize
LdPCA with respect to w. A generalization to a set of Q components w1 , . . . , wQ is straightforward
by maximizing L in steps for every component and ensuring orthonormality by means of symmetric
orthogonalization [6] after each step. We call the resulting algorithm demixed principal component
analysis (dPCA), since it essentially can be seen as a generalization of standard PCA.
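The loss in Eq. (8) is cheap to evaluate once the marginalized covariances are in hand; the sketch below (again our own code) scores a candidate axis w, and a gradient-ascent or black-box optimizer over w could maximize it as described above:

```python
import numpy as np

def dpca_loss(w, covs, lam):
    """Eq. (8) for a single candidate axis w.

    covs: iterable of marginalized covariance matrices C_phi.
    lam = 0 recovers the plain PCA objective ||x||_2^2.
    """
    w = w / np.linalg.norm(w)
    x = np.sqrt([max(float(w @ C @ w), 0.0) for C in covs])
    l2 = np.linalg.norm(x)
    l1 = np.sum(np.abs(x))
    return l2 ** 2 * (l2 / l1) ** lam
```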
4
Probabilistic principal component analysis with orthogonality constraint
We introduced dPCA by means of a modification of the objective function of PCA. It is straightforward to build a gradient ascent algorithm to solve Eq. (8). However, we aim for a superior algorithm by framing dPCA in a probabilistic framework. A probabilistic model provides several benefits
that include dealing with missing data and the inclusion of prior knowledge [see 2, p. 570]. Since
the probabilistic treatment of dPCA requires two modifications over the conventional expectation-maximization (EM) algorithm for probabilistic PCA (PPCA), we here review PPCA [11, 10], and
show how to introduce an explicit orthogonality constraint on the mixing matrix.
4
In PPCA, the observed data y are linear combinations of latent variables z,
$$y = Wz + \epsilon_y \qquad (9)$$
where $\epsilon_y \sim \mathcal N(0, \sigma^2 I_D)$ is isotropic Gaussian noise with variance $\sigma^2$ and $W \in \mathbb R^{D \times Q}$ is the mixing
matrix. In turn, $p(y|z) = \mathcal N(y \mid Wz, \sigma^2 I_D)$. The latent variables are assumed to follow a zero-mean,
unit-covariance Gaussian prior, $p(z) = \mathcal N(z \mid 0, I_Q)$. These equations completely specify the model
of the data and allow us to compute the marginal distribution $p(y)$.
Let $Y = \{y_n\}$ be the set of data points, with $n = 1 \ldots N$, and $Z = \{z_n\}$ the corresponding values
of the latent variables. Our aim is to maximize the likelihood of the data, $p(Y) = \prod_n p(y_n)$, with
respect to the parameters $W$ and $\sigma$. To this end, we use the EM algorithm, in which we first calculate
the statistics (mean and covariance) of the posterior distribution, $p(Z|Y)$, given fixed values for $W$
and $\sigma^2$ (Expectation step). Then, using these statistics, we compute the expected complete-data
likelihood, $E[p(Y, Z)]$, and maximize it with respect to $W$ and $\sigma^2$ (Maximization step). We cycle
through the two steps until convergence.
Expectation step. The posterior distribution $p(Z|Y)$ is again Gaussian and given by
$$p(Z|Y) = \prod_{n=1}^N \mathcal N\!\left( z_n \,\middle|\, M^{-1} W^\top y_n,\ \sigma^2 M^{-1} \right) \qquad \text{with} \qquad M = W^\top W + \sigma^2 I_Q\,. \qquad (10)$$
Mean and covariance can be read off the arguments, and we note in particular that $E[z_n z_n^\top] = \sigma^2 M^{-1} + E[z_n]\, E[z_n]^\top$. We can then take the expectation of the complete-data log likelihood with
respect to this posterior distribution, so that
$$E\big[ \ln p(Y, Z \mid W, \sigma^2) \big] = -\sum_{n=1}^N \Big\{ \tfrac D2 \ln 2\pi\sigma^2 + \tfrac 1{2\sigma^2} \|y_n\|^2 - \tfrac 1{\sigma^2}\, E[z_n]^\top W^\top y_n + \tfrac 1{2\sigma^2} \operatorname{Tr}\!\big( E[z_n z_n^\top]\, W^\top W \big) + \tfrac Q2 \ln(2\pi) + \tfrac 12 \operatorname{Tr}\!\big( E[z_n z_n^\top] \big) \Big\}\,. \qquad (11)$$
Maximization step. Next, we need to maximize Eq. (11) with respect to $\sigma$ and $W$. For $\sigma$, we obtain
$$(\sigma^*)^2 = \frac 1{ND} \sum_{n=1}^N \Big\{ \|y_n\|^2 - 2\, E[z_n]^\top W^\top y_n + \operatorname{Tr}\!\big( E[z_n z_n^\top]\, W^\top W \big) \Big\}\,. \qquad (12)$$
For $W$, we need to deviate from the conventional PPCA algorithm, since the development of probabilistic dPCA requires an explicit orthogonality constraint on $W$, which had so far not been included
in PPCA. To impose this constraint, we factorize $W$ into an orthogonal and a diagonal matrix,
$$W = U\Lambda\,, \qquad U^\top U = I_Q \qquad (13)$$
where $U \in \mathbb R^{D \times Q}$ has orthogonal columns of unit length and $\Lambda \in \mathbb R^{Q \times Q}$ is diagonal. In order to
maximize Eq. (11) with respect to $U$ and $\Lambda$ we make use of infinitesimal translations in the respective
restricted space of matrices,
$$U \to (I_D + \epsilon A)\, U\,, \qquad \Lambda \to (I_Q + \epsilon\, \mathrm{diag}(b))\, \Lambda\,, \qquad (14)$$
where $A \in \mathrm{Skew}_D$ is $D \times D$ skew-symmetric, $b \in \mathbb R^Q$, and $\epsilon \ll 1$. The set of $D \times D$ skew-symmetric matrices are the generators of rotations in the space of orthogonal matrices. The necessary conditions for a maximum of the likelihood function at $U^*, \Lambda^*$ are
$$E\big[ \ln p\big( Y, Z \mid (I_D + \epsilon A)\, U^* \Lambda, \sigma^2 \big) \big] - E\big[ \ln p\big( Y, Z \mid U^* \Lambda, \sigma^2 \big) \big] = 0 + \mathcal O(\epsilon^2) \quad \forall A \in \mathrm{Skew}_D\,, \qquad (15)$$
$$E\big[ \ln p\big( Y, Z \mid U^* (I_Q + \epsilon\, \mathrm{diag}(b))\, \Lambda^*, \sigma^2 \big) \big] - E\big[ \ln p\big( Y, Z \mid U^* \Lambda^*, \sigma^2 \big) \big] = 0 + \mathcal O(\epsilon^2) \quad \forall b \in \mathbb R^Q\,. \qquad (16)$$
Given the reduced singular value decomposition¹ of $\sum_n y_n E[z_n]^\top \Lambda = K \Delta L^\top$, the maximum is
$$U^* = K L^\top \qquad (17)$$
$$\Lambda^* = \mathrm{diag}\Big( U^{*\top} \sum_n y_n E[z_n]^\top \Big)\, \mathrm{diag}\Big( \sum_n E[z_n z_n^\top] \Big)^{-1} \qquad (18)$$

¹The reduced singular value decomposition factorizes a $D \times Q$ matrix $A$ as $A = KDL^*$, where $K$ is a $D \times Q$
unitary matrix, $D$ is a $Q \times Q$ nonnegative, real diagonal matrix, and $L^*$ is a $Q \times Q$ unitary matrix.
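In code, the constrained update is a single reduced SVD; the following numpy sketch of Eqs. (17)-(18) is our own illustration (`Ez` stacks the posterior means E[z_n] row-wise, `Ezz` is the summed second moment):

```python
import numpy as np

def update_W(Y, Ez, Ezz, lam):
    """Constrained M-step: W = U diag(lam) with orthonormal columns U.

    Y: (N, D) data, Ez: (N, Q) posterior means, Ezz: (Q, Q) summed
    second moments sum_n E[z_n z_n^T], lam: current diagonal of Lambda.
    """
    A = Y.T @ Ez @ np.diag(lam)                 # sum_n y_n E[z_n]^T Lambda
    K, _, Lt = np.linalg.svd(A, full_matrices=False)
    U = K @ Lt                                  # Eq. (17)
    lam_new = np.diag(U.T @ (Y.T @ Ez)) / np.diag(Ezz)   # Eq. (18)
    return U, lam_new
```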
[Figure 3 graphic: panel (a) shows data y1…y5 mapped through W to latent variables z1…z4, with parameters θ1, θ2 feeding into the latents; panel (b) shows the corresponding plate diagram over N data points.]
Figure 3: (a) Graphical representation of the general idea of dPCA. Here, the data y are projected on
a subspace z of latent variables. Each latent variable $z_i$ depends on a set of parameters $\theta_j \in S$. To
ease interpretation of the latent variables $z_i$, we impose a sparse mapping between the parameters
and the latent variables. (b) Full graphical model of dPCA.
where diag(A) returns a square matrix with the same diagonal as A but with all off-diagonal elements
set to zero.
5
Probabilistic demixed principal component analysis
We described a PPCA EM-algorithm with an explicit constraint on the orthogonality of the columns
of W. So far, variance due to different parameters in the data set are completely mixed in the latent
variables z. The essential idea of dPCA is to demix these parameter dependencies by sparsifying
the mapping from parameters to latent variables (see Fig. 3a). Since we do not want to impose
the nature of this mapping (which is to remain non-parametric), we suggest a model in which each
latent variable $z_i$ is segregated into (and replaced by) a set of $R$ latent variables $\{z_{\phi,i}\}$, each of which
depends on a subset $\phi \subseteq S$ of parameters. Note that $R$ is the number of all subsets of $S$, exempting
the empty set. We require $z_i = \sum_{\phi \subseteq S} z_{\phi,i}$, so that
$$y = \sum_{\phi \subseteq S} W z_\phi + \epsilon_y \qquad (19)$$
with $\epsilon_y \sim \mathcal N(0, \sigma^2 I_D)$, see also Fig. 3b. The priors over the latent variables are specified as
$$p(z_\phi) = \mathcal N(z_\phi \mid 0, \mathrm{diag}\,\lambda_\phi) \qquad (20)$$
where $\lambda_\phi$ is a row in $\bar\Lambda \in \mathbb R^{R \times Q}$, the matrix of variances for all latent variables. The covariance of
the sum of the latent variables shall again be the identity matrix,
$$\sum_{\phi \subseteq S} \mathrm{diag}\,\lambda_\phi = I_Q\,. \qquad (21)$$
This completely specifies our model. As before, we will use the EM-algorithm to maximize the
model evidence $p(Y)$ with respect to the parameters $\sigma$, $W$, $\bar\Lambda$. However, we additionally impose that
each column $\lambda_i$ of $\bar\Lambda$ shall be sparse, thereby ensuring that the diversity of parameter dependencies
of the latent variables $z_i = \sum_\phi z_{\phi,i}$ is reduced. Note that $\lambda_i$ is proportional to the vector $x$ with
elements $x_\phi$ introduced in section 3. This links the probabilistic model to the loss function in Eq. (8).
Expectation step. Due to the implicit parameter dependencies of the latent variables, the sets of
variables $Z_\phi = \{z_\phi^n\}$ can only depend on the respective marginalized averages of the data. The
posterior distribution over all latent variables $Z = \{Z_\phi\}$ therefore factorizes such that
$$p(Z|Y) = \prod_{\phi \subseteq S} p(Z_\phi \mid \bar Y_\phi) \qquad (22)$$
Algorithm 1: demixed Principal Component Analysis (dPCA)
Input: Data Y, # components Q
Algorithm:
$U^{(k=1)} \leftarrow$ first Q principal components of y, $\Lambda^{(k=1)} \leftarrow I_Q$
repeat
  $M_\phi^{(k)}, U^{(k)}, \Lambda^{(k)}, \sigma^{(k)}, \bar\Lambda^{(k)} \leftarrow$ update using (25), (17), (18), (12) and (30)
  $k \leftarrow k + 1$
until $p(Y)$ converges
where $\bar Y_\phi = \{\bar y_\phi^n\}$ are the marginalized averages over the complete data set. For three parameters,
the marginalized averages were specified in Eq. (3)-(7). For more than three parameters, we obtain
$$\bar y_\phi^n = \langle y \rangle^n_{(S \setminus \phi)} + \sum_{\emptyset \neq \rho \subseteq \phi} (-1)^{|\rho|}\, \langle y \rangle^n_{(S \setminus \phi) \cup \rho}\,, \qquad (23)$$
where $\langle y \rangle^n_\psi$ denotes averaging of the data over the parameter subset $\psi$. The index $n$ refers the
average to the respective data point.² In turn, the posterior of $Z_\phi$ takes the form
$$p(Z_\phi \mid \bar Y_\phi) = \prod_{n=1}^N \mathcal N\!\left( z_\phi^n \,\middle|\, M_\phi^{-1} W^\top \bar y_\phi^n,\ \sigma^2 M_\phi^{-1} \right) \qquad (24)$$
where
$$M_\phi = W^\top W + \sigma^2\, \mathrm{diag}\,\lambda_\phi^{-1}\,. \qquad (25)$$
Hence, the expectation of the complete-data log-likelihood function is modified from Eq. (11),
$$E\big[ \ln p(Y, Z \mid W, \sigma^2) \big] = -\sum_{n=1}^N \Bigg\{ \tfrac D2 \ln 2\pi\sigma^2 + \tfrac 1{2\sigma^2} \|y^n\|^2 + \sum_{\phi \subseteq S} \Big[ \tfrac Q2 \ln(2\pi) + \tfrac 1{2\sigma^2} \operatorname{Tr}\!\big( E[z_\phi^n z_\phi^{n\top}]\, W^\top W \big) - \tfrac 1{\sigma^2}\, E[z_\phi^n]^\top W^\top \bar y_\phi^n + \tfrac 12 \ln\det \mathrm{diag}(\lambda_\phi) + \tfrac 12 \operatorname{Tr}\!\big( E[z_\phi^n z_\phi^{n\top}]\, \mathrm{diag}(\lambda_\phi)^{-1} \big) \Big] \Bigg\}\,. \qquad (26)$$
Maximization Step. Comparison of Eq. (11) and Eq. (26) shows that the maximum-likelihood
estimates of $W = U\Lambda$ and of $\sigma^2$ are unchanged (this can be seen by substituting $z$ for the sum
of marginalized averages, $z = \sum_\phi z_\phi$, so that $E[z] = \sum_\phi E[z_\phi]$ and $E[zz^\top] = \sum_\phi E[z_\phi z_\phi^\top]$). The
maximization with respect to $\bar\Lambda$ is more involved because we have to respect constraints from two
sides. First, Eq. (21) constrains the $L_1$-norm of the columns $\lambda_i$ of $\bar\Lambda$. Second, since we aim for
components depending only on a small subset of parameters, we have to introduce another constraint
to promote sparsity of $\lambda_i$. Though this constraint is rather arbitrary, we found that constraining all
but one entry of $\lambda_i$ to be zero works quite effectively, so that $\|\lambda_i\|_0 = 1$. Consequently, for each
column $\lambda_i$ of $\bar\Lambda$, the maximization of the expected likelihood, $L$, Eq. (26), is given by
$$\lambda_i \leftarrow \arg\max_{\lambda_i} L(\lambda_i) \qquad \text{s.t.} \qquad \|\lambda_i\|_1 = 1 \ \text{ and } \ \|\lambda_i\|_0 = 1\,. \qquad (27)$$
Defining $B_{\phi i} = \sum_n E[z_{\phi i}^n z_{\phi i}^n]$, the relevant terms in the likelihood can be written as
$$L(\lambda_i) = -\sum_\phi \big( \ln \lambda_{\phi i} + B_{\phi i}\, \lambda_{\phi i}^{-1} \big) \qquad (28)$$
$$= -\ln(1 - m\epsilon) - B_{\phi_0 i}\, (1 - m\epsilon)^{-1} - \sum_{\phi \in J} \big( \ln \epsilon + B_{\phi i}\, \epsilon^{-1} \big)\,, \qquad (29)$$

²To see through this notation, notice that the n-th data point $y_n$ or $y^n$ is tagged with parameter values
$\theta^n = (\theta_{1,n}, \theta_{2,n}, \ldots)$. Any average over a subset $\psi = S \setminus \phi$ of the parameters leaves vectors $\langle y \rangle_\psi$ that still
depend on some remaining parameters, $\phi = \phi_{\text{rest}}$. We can therefore take their values for the n-th data point,
$\phi^n_{\text{rest}}$, and assign the respective value of the average to the n-th data point as well, writing $\langle y \rangle^n_\psi$.
[Figure 4 graphic: relative-variance bars for the 14 highest dPCA components (left) and firing-rate traces (Hz) of six dPCA components over time (s), in three columns (right).]
Figure 4: On the left we plot the relative variance of the fourteen highest components in dPCA
conditioned on time (blue), stimulus (light blue), decision (green) and non-linear mixtures (yellow).
On the right the firing rates of six dPCA components are displayed in three columns separated into
components with the highest variance in time (left), in decision (middle) and in the stimulus (right).
where $\phi_0$ is the index of the non-zero entry of $\lambda_i$, and $J$ is the complementing index set (of length
$m = R - 1$) of all zero-entries, which have been set to $\epsilon$ for regularization purposes. Since $\epsilon$ is
small, its inverse is very large. Accordingly, the likelihood is maximized for the index $\phi_0$ referring
to the largest entry in $B_{\phi i}$, so that
$$\lambda_{\phi i} = \begin{cases} 1 & \text{if } \sum_n E[z_{\phi i}^n z_{\phi i}^n] \ge \sum_n E[z_{\psi i}^n z_{\psi i}^n] \ \text{for all } \psi \neq \phi \\ 0 & \text{otherwise} \end{cases} \qquad (30)$$
More generally, it is possible to substitute the sparsity constraint with $\|\lambda_i\|_0 = K$ for $K > 1$ and
maximize $L(\lambda_i)$ numerically. The full algorithm for dPCA is summarized in Algorithm 1.
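Putting the pieces together, a hypothetical skeleton of Algorithm 1 might look as follows (reusing `update_W` from the sketch above; the σ² re-estimation of Eq. (12) is kept fixed here for brevity, and all helper names are ours):

```python
import numpy as np

def dpca_em(Ybar, Q, n_iter=50, sigma2=1.0, eps=1e-12):
    """Hypothetical skeleton of Algorithm 1.

    Ybar: dict mapping each parameter subset phi to its stacked
    marginalized averages, each of shape (N, D).
    """
    phis = list(Ybar)
    N, D = next(iter(Ybar.values())).shape
    U = np.linalg.qr(np.random.randn(D, Q))[0]      # init: random orthonormal
    lam = np.ones(Q)
    Lam = np.full((len(phis), Q), 1.0 / len(phis))  # rows lam_phi, Eq. (21)
    for _ in range(n_iter):
        W = U @ np.diag(lam)
        Ez, Ezz = {}, {}
        for i, phi in enumerate(phis):              # E-step, Eqs. (24)-(25)
            M = W.T @ W + sigma2 * np.diag(1.0 / np.maximum(Lam[i], eps))
            Minv = np.linalg.inv(M)                 # eps plays the role of the
            Ez[phi] = Ybar[phi] @ W @ Minv          # epsilon regularizer above
            Ezz[phi] = N * sigma2 * Minv + Ez[phi].T @ Ez[phi]
        # M-step: pool the moments as in the text, reuse Eqs. (17)-(18)
        Y = sum(Ybar.values())                      # Eq. (7): data = sum of parts
        U, lam = update_W(Y, sum(Ez.values()), sum(Ezz.values()), lam)
        # Eq. (30): winner-take-all update of the variance matrix
        B = np.array([np.diag(Ezz[phi]) for phi in phis])
        Lam = np.zeros_like(B)
        Lam[np.argmax(B, axis=0), np.arange(Q)] = 1.0
    return U, lam, Lam
```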
6
Experimental results
The results of the dPCA algorithm applied to the electrophysiological data from the PFC are shown
in Fig. 4. With 90% of the total variance in the first fourteen components, dPCA captures a comparable amount of variance as PCA (91.7%). The distribution of variances in the dPCA components
is shown in Fig. 4, left. Note that, compared with the distribution in the PCA components (Fig. 1,
bottom, center), the dPCA components clearly separate the different sources of variability. More
specifically, the neural population is dominated by components that only depend on time (blue), yet
also features separate components for the monkey's decision (green) and the perception of the stimulus (light blue). The components of dPCA, of which the six most prominent are displayed in Fig. 4,
right, therefore reflect and separate the parameter dependencies of the data, even though these dependencies were completely intermingled on the single neuron level (compare Fig. 1, bottom, left).
7
Conclusions
Dimensionality reduction methods that take labels or parameters into account have recently found a
resurgence of interest. Our study was motivated by the specific problems related to electrophysiological data sets. The main aim of our method, demixing parameter dependencies of high-dimensional
data sets, may be useful in other contexts as well. Very similar problems arise in fMRI data, for instance, and dPCA could provide a useful alternative to other dimensionality reduction methods such
as CCA, PLS, or Supervised PCA [1, 12, 5]. Furthermore, the general aim of demixing dependencies
could likely be extended to other methods (such as ICA) as well. Ultimately, we see dPCA as a particular data visualization technique that will prove useful if a demixing of parameter dependencies
aids in understanding data.
The source code both for Python and Matlab can be found at https://sourceforge.net/projects/dpca/.
References
[1] F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical Report 688, University of California, Berkeley, 2005.
[2] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2006.
[3] C. D. Brody, A. Hernández, A. Zainos, and R. Romo. Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cerebral Cortex, 13(11):1196-1207, 2003.
[4] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[5] A. Krishnan, L. J. Williams, A. R. McIntosh, and H. Abdi. Partial least squares (PLS) methods for neuroimaging: a tutorial and review. NeuroImage, 56:455-475, 2011.
[6] P.-O. Löwdin. On the non-orthogonality problem connected with the use of atomic wave functions in the theory of molecules and crystals. The Journal of Chemical Physics, 18(3):365, 1950.
[7] C. K. Machens. Demixing population activity in higher cortical areas. Frontiers in Computational Neuroscience, 4(October):8, 2010.
[8] C. K. Machens, R. Romo, and C. D. Brody. Functional, but not anatomical, separation of "what" and "when" in prefrontal cortex. Journal of Neuroscience, 30(1):350-360, 2010.
[9] R. Romo, C. D. Brody, A. Hernández, and L. Lemus. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature, 399(6735):470-473, 1999.
[10] S. Roweis. EM algorithms for PCA and SPCA. Advances in Neural Information Processing Systems, 10:626-632, 1998.
[11] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society - Series B: Statistical Methodology, 61(3):611-622, 1999.
[12] S. Yu, K. Yu, V. Tresp, H. P. Kriegel, and M. Wu. Supervised probabilistic principal component analysis. Proceedings of the 12th ACM SIGKDD International Conf. on KDD, 10, 2006.
3,552 | 4,216 | Optimal learning rates for least squares SVMs using
Gaussian kernels
M. Eberts, I. Steinwart
Institute for Stochastics and Applications
University of Stuttgart
D-70569 Stuttgart
{eberts,ingo.steinwart}@mathematik.uni-stuttgart.de
Abstract
We prove a new oracle inequality for support vector machines with Gaussian RBF
kernels solving the regularized least squares regression problem. To this end, we
apply the modulus of smoothness. With the help of the new oracle inequality we
then derive learning rates that can also be achieved by a simple data-dependent
parameter selection method. Finally, it turns out that our learning rates are asymptotically optimal for regression functions satisfying certain standard smoothness
conditions.
1
Introduction
On the basis of i.i.d. observations $D := ((x_1, y_1), \ldots, (x_n, y_n))$ of input/output pairs drawn
from an unknown distribution $P$ on $X \times Y$, where $Y \subset \mathbb R$, the goal of non-parametric least squares
regression is to find a function $f_D : X \to \mathbb R$ such that, for the least squares loss $L : Y \times \mathbb R \to [0, \infty)$
defined by $L(y, t) = (y - t)^2$, the risk
$$\mathcal R_{L,P}(f_D) := \int_{X \times Y} L(y, f_D(x))\, dP(x, y) = \int_{X \times Y} (y - f_D(x))^2\, dP(x, y)$$
is small. This means $\mathcal R_{L,P}(f_D)$ has to be close to the optimal risk
$$\mathcal R^*_{L,P} := \inf\{ \mathcal R_{L,P}(f) \mid f : X \to \mathbb R \text{ measurable} \}\,,$$
called the Bayes risk with respect to $P$ and $L$. It is well known that the function $f^*_{L,P} : X \to \mathbb R$
defined by $f^*_{L,P}(x) = \mathbb E_P(Y|x)$, $x \in X$, is the only function for which the Bayes risk is attained.
Furthermore, some simple transformations show
$$\mathcal R_{L,P}(f) - \mathcal R^*_{L,P} = \int_X \big( f - f^*_{L,P} \big)^2\, dP_X = \big\| f - f^*_{L,P} \big\|^2_{L_2(P_X)}\,, \qquad (1)$$
where $P_X$ is the marginal distribution of $P$ on $X$.
In this paper, we assume that $X \subset \mathbb R^d$ is a non-empty, open and bounded set such that its boundary
$\partial X$ has Lebesgue measure 0, $Y := [-M, M]$ for some $M > 0$, and $P$ is a probability measure on
$X \times Y$ such that $P_X$ is the uniform distribution on $X$. In Section 2 we also discuss that this condition
can easily be generalized by assuming that $P_X$ on $X$ is absolutely continuous with respect to the
Lebesgue measure on $X$ such that the corresponding density of $P_X$ is bounded away from 0 and $\infty$.
Recall that because of the first assumption, it suffices to restrict considerations to decision functions
$f : X \to [-M, M]$. To be more precise, if we denote the clipped value of some $t \in \mathbb R$ by $\hat t$, that is
$$\hat t := \begin{cases} -M & \text{if } t < -M \\ t & \text{if } t \in [-M, M] \\ M & \text{if } t > M\,, \end{cases}$$
then it is easy to check that
$$\mathcal R_{L,P}(\hat f\,) \le \mathcal R_{L,P}(f)\,, \qquad \text{for all } f : X \to \mathbb R\,.$$
The non-parametric least squares problem can be solved in many ways. Several of them are e.g. described in [1]. In this paper, we use SVMs to find a solution for the non-parametric least squares
problem by solving the regularized problem
$$f_{D,\lambda} = \arg\min_{f \in H} \lambda \|f\|_H^2 + \mathcal R_{L,D}(f)\,. \qquad (2)$$
Here, $\lambda > 0$ is a fixed real number, $H$ is a reproducing kernel Hilbert space (RKHS) over $X$, and
$\mathcal R_{L,D}(f)$ is the empirical risk of $f$, that is
$$\mathcal R_{L,D}(f) = \frac 1n \sum_{i=1}^n L(y_i, f(x_i))\,.$$
In this work we restrict our considerations to Gaussian RBF kernels $k_\gamma$ on $X$, which are defined by
$$k_\gamma(x, x') = \exp\left( -\frac{\|x - x'\|_2^2}{\gamma^2} \right)\,, \qquad x, x' \in X\,,$$
for some width $\gamma \in (0, 1]$. Our goal is to deduce asymptotically optimal learning rates for the SVMs
(2) using the RKHS $H_\gamma$ of $k_\gamma$. To this end, we first establish a general oracle inequality. Based on
this oracle inequality, we then derive learning rates if the regression function is contained in some
Besov space. It will turn out that these learning rates are asymptotically optimal. Finally, we show
that these rates can be achieved by a simple data-dependent parameter selection method based on a
hold-out set.
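Although the paper is about rates rather than implementation, the learner (2) itself is just regularized kernel least squares; by the representer theorem its solution is $f(x) = \sum_i \alpha_i k_\gamma(x, x_i)$ with $\alpha = (K + n\lambda I)^{-1} y$, which the following numpy sketch (ours) computes for a fixed $(\lambda, \gamma)$:

```python
import numpy as np

def gaussian_gram(X1, X2, gamma):
    """Gram matrix of the Gaussian RBF kernel k_gamma."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / gamma**2)

def lssvm_fit(X, y, lam, gamma):
    """Solve (2): alpha = (K + n*lam*I)^{-1} y."""
    n = len(X)
    K = gaussian_gram(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def lssvm_predict(alpha, X_train, X_new, gamma, M):
    f = gaussian_gram(X_new, X_train, gamma) @ alpha
    return np.clip(f, -M, M)        # clipped decision function, as above
```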
The rest of this paper is organized as follows: The next section presents the main theorems and, as a
consequence of these theorems, some corollaries yielding asymptotically optimal learning rates for
regression functions contained in Sobolev or Besov spaces. Section 3 states some lemmata needed
for the proof of the main statement and a version of [2, Theorem 7.23] applied to our special
case, as well as the proof of the main theorem. Some further proofs and additional technical results
can be found in the appendix.
2
Results
In this section we present our main results including the optimal rates for LS-SVMs using Gaussian
kernels. To this end, we first need to introduce some function spaces, which are later assumed to
contain the regression function.
Let us begin by recalling from, e.g. [3, p. 44], [4, p. 398], and [5, p. 360], the modulus of smoothness:
Definition 1. Let $\Omega \subset \mathbb R^d$ with non-empty interior, $\nu$ be an arbitrary measure on $\Omega$, and $f : \Omega \to \mathbb R$
be a function with $f \in L_p(\nu)$ for some $p \in (0, \infty)$. For $r \in \mathbb N$, the r-th modulus of smoothness of
$f$ is defined by
$$\omega_{r, L_p(\nu)}(f, t) = \sup_{\|h\|_2 \le t} \big\| \triangle_h^r(f, \cdot) \big\|_{L_p(\nu)}\,, \qquad t \ge 0\,,$$
where $\|\cdot\|_2$ denotes the Euclidean norm and the r-th difference $\triangle_h^r(f, \cdot)$ is defined by
$$\triangle_h^r(f, x) = \begin{cases} \sum_{j=0}^r \binom rj (-1)^{r-j} f(x + jh) & \text{if } x \in \Omega_{r,h} \\ 0 & \text{if } x \notin \Omega_{r,h} \end{cases}$$
for $h = (h_1, \ldots, h_d) \in \mathbb R^d$ with $h_i \ge 0$ and $\Omega_{r,h} := \{x \in \Omega : x + sh \in \Omega\ \forall\, s \in [0, r]\}$.
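As a quick numerical illustration of Definition 1 (our own sketch, which ignores the boundary set $\Omega_{r,h}$ by assuming $f$ is defined on all of $\mathbb R^d$), the r-th difference and a Monte-Carlo estimate of the modulus can be written as:

```python
import numpy as np
from math import comb

def rth_difference(f, x, h, r):
    """r-th forward difference along step vector h (componentwise h >= 0)."""
    return sum((-1) ** (r - j) * comb(r, j) * f(x + j * h) for j in range(r + 1))

def modulus_estimate(f, xs, t, r, n_dirs=100, p=2):
    """Crude estimate of omega_{r, L_p} at scale t over sample points xs."""
    best = 0.0
    for _ in range(n_dirs):
        h = np.abs(np.random.randn(xs.shape[1]))       # nonnegative direction
        h *= np.random.uniform(0, t) / np.linalg.norm(h)   # ||h||_2 <= t
        vals = np.array([rth_difference(f, x, h, r) for x in xs])
        best = max(best, float(np.mean(np.abs(vals) ** p) ** (1 / p)))
    return best
```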
It is well-known that the modulus of smoothness with respect to $L_p(\nu)$ is a nondecreasing function
of $t$ and, for the Lebesgue measure on $\Omega$, it satisfies
$$\omega_{r, L_p(\Omega)}(f, t) \le \left( 1 + \frac ts \right)^r \omega_{r, L_p(\Omega)}(f, s)\,, \qquad (3)$$
for all $f \in L_p(\Omega)$ and all $s > 0$, see e.g. [6, (2.1)]. Moreover, the modulus of smoothness can be
used to define the scale of Besov spaces. Namely, for $1 \le p, q \le \infty$, $\alpha > 0$, $r := \lfloor \alpha \rfloor + 1$, and an
arbitrary measure $\nu$, the Besov space $B^\alpha_{p,q}(\nu)$ is
$$B^\alpha_{p,q}(\nu) := \big\{ f \in L_p(\nu) : |f|_{B^\alpha_{p,q}(\nu)} < \infty \big\}\,,$$
where, for $1 \le q < \infty$, the seminorm $|\cdot|_{B^\alpha_{p,q}(\nu)}$ is defined by
$$|f|_{B^\alpha_{p,q}(\nu)} := \left( \int_0^\infty \big( t^{-\alpha}\, \omega_{r, L_p(\nu)}(f, t) \big)^q\, \frac{dt}{t} \right)^{1/q}\,,$$
and, for $q = \infty$, it is defined by
$$|f|_{B^\alpha_{p,\infty}(\nu)} := \sup_{t > 0}\ t^{-\alpha}\, \omega_{r, L_p(\nu)}(f, t)\,.$$
In both cases the norm of $B^\alpha_{p,q}(\nu)$ can be defined by $\|f\|_{B^\alpha_{p,q}(\nu)} := \|f\|_{L_p(\nu)} + |f|_{B^\alpha_{p,q}(\nu)}$, see
e.g. [3, pp. 54/55] and [4, p. 398]. Finally, for $q = \infty$, we often write $B^\alpha_{p,\infty}(\nu) = \mathrm{Lip}^*(\alpha, L_p(\nu))$
and call $\mathrm{Lip}^*(\alpha, L_p(\nu))$ the generalized Lipschitz space of order $\alpha$. In addition, it is well-known,
see e.g. [7, p. 25 and p. 44], that the Sobolev spaces $W_p^\alpha(\mathbb R^d)$ fall into the scale of Besov spaces,
namely
$$W_p^\alpha(\mathbb R^d) \subset B^\alpha_{p,q}(\mathbb R^d) \qquad (4)$$
for $\alpha \in \mathbb N$, $p \in (1, \infty)$, and $\max\{p, 2\} \le q \le \infty$, and especially $W_2^\alpha(\mathbb R^d) = B^\alpha_{2,2}(\mathbb R^d)$.
For our results we need to extend functions $f : \Omega \to \mathbb R$ to functions $\tilde f : \mathbb R^d \to \mathbb R$ such that the
smoothness properties of $f$ described by some Sobolev or Besov space are preserved by $\tilde f$. Recall
that Stein's Extension Theorem guarantees the existence of such an extension whenever $\Omega$ is a
bounded Lipschitz domain. To be more precise, in this case there exists a linear operator $E$ mapping
functions $f : \Omega \to \mathbb R$ to functions $Ef : \mathbb R^d \to \mathbb R$ with the properties:
(a) $E(f)_{|\Omega} = f$, that is, $E$ is an extension operator.
(b) $E$ continuously maps $W_p^m(\Omega)$ into $W_p^m(\mathbb R^d)$ for all $p \in [1, \infty]$ and all integer $m \ge 0$.
That is, there exist constants $a_{m,p} \ge 0$ such that, for every $f \in W_p^m(\Omega)$, we have
$$\|Ef\|_{W_p^m(\mathbb R^d)} \le a_{m,p}\, \|f\|_{W_p^m(\Omega)}\,. \qquad (5)$$
(c) $E$ continuously maps $B^\alpha_{p,q}(\Omega)$ into $B^\alpha_{p,q}(\mathbb R^d)$ for all $p \in (1, \infty)$, $q \in (0, \infty]$ and all $\alpha > 0$.
That is, there exist constants $a_{\alpha,p,q} \ge 0$ such that, for every $f \in B^\alpha_{p,q}(\Omega)$, we have
$$\|Ef\|_{B^\alpha_{p,q}(\mathbb R^d)} \le a_{\alpha,p,q}\, \|f\|_{B^\alpha_{p,q}(\Omega)}\,.$$
For detailed conditions on $\Omega$ ensuring the existence of $E$, we refer to [8, p. 181] and [9, p. 83].
Property (c) follows by an interpolation argument since $B^\alpha_{p,q}$ can be interpreted as an interpolation
space of the Sobolev spaces $W_p^{m_0}$ and $W_p^{m_1}$ for $q \in [1, \infty]$, $p \in (1, \infty)$, $\theta \in (0, 1)$ and $m_0, m_1 \in \mathbb N_0$
with $m_0 \neq m_1$ and $\alpha = m_0(1 - \theta) + m_1\theta$, see [10, pp. 65/66] for more details. In the following,
we always assume that we do have such an extension operator $E$. Moreover, if $\nu$ is the Lebesgue
measure on $\Omega$, such that $\partial\Omega$ has Lebesgue measure 0, the canonical extension of $\nu$ to $\mathbb R^d$ is given
by $\tilde\nu(A) := \nu(A \cap \Omega)$ for all measurable $A \subset \mathbb R^d$. However, in a slight abuse of notation, we
often write $\nu$ instead of $\tilde\nu$, since this simplifies the presentation. Analogously, we proceed for the
uniform distribution on $\Omega$ and its canonical extension to $\mathbb R^d$, and the same convention will be applied
to measures $P_X$ on $\Omega$ that are absolutely continuous w.r.t. the Lebesgue measure.
Finally, in order to state our main results, we denote the closed unit ball of the d-dimensional Euclidean space by $B_{\ell_2^d}$.
Theorem 1. Let $X \subset B_{\ell_2^d}$ be a domain such that we have an extension operator $E$ in the above
sense. Furthermore, let $M > 0$, $Y := [-M, M]$, and $P$ be a distribution on $X \times Y$ such that
$P_X$ is the uniform distribution on $X$. Assume that we have fixed a version $f^*_{L,P}$ of the regression
function such that $f^*_{L,P}(x) = \mathbb E_P(Y|x) \in [-M, M]$ for all $x \in X$. Assume that, for $\alpha \ge 1$ and
$r := \lfloor \alpha \rfloor + 1$, there exists a constant $c > 0$ such that, for all $t \in (0, 1]$, we have
$$\omega_{r, L_2(\mathbb R^d)}\big( E f^*_{L,P},\, t \big) \le c\, t^\alpha\,. \qquad (6)$$
Then, for all $\varepsilon > 0$ and $p \in (0, 1)$ there exists a constant $K > 0$ such that, for all $n \ge 1$, $\tau \ge 1$, and
$\lambda, \gamma > 0$, the SVM using the RKHS $H_\gamma$ satisfies
$$\lambda \|f_{D,\lambda}\|_{H_\gamma}^2 + \mathcal R_{L,P}\big( \hat f_{D,\lambda} \big) - \mathcal R^*_{L,P} \;\le\; K \lambda \gamma^{-d} + K c^2 \gamma^{2\alpha} + K\, \frac{\gamma^{-(1-p)(1+\varepsilon)d}}{\lambda^p\, n} + K\, \frac{\tau}{n}$$
with probability $P^n$ not less than $1 - e^{-\tau}$.
With this oracle inequality we can derive learning rates for the learning method (2).
Corollary 1. Under the assumptions of Theorem 1 and for $\varepsilon > 0$, $p \in (0, 1)$, and $\alpha \ge 1$ fixed, we
have, for all $n \ge 1$,
$$\mathcal R_{L,P}\big( \hat f_{D,\lambda_n} \big) - \mathcal R^*_{L,P} \le C\, n^{-\frac{2\alpha}{2\alpha + 2\alpha p + dp + (1-p)(1+\varepsilon)d}}$$
and with
$$\lambda_n = c_1\, n^{-\frac{2\alpha + d}{2\alpha + 2\alpha p + dp + (1-p)(1+\varepsilon)d}}\,, \qquad \gamma_n = c_2\, n^{-\frac{1}{2\alpha + 2\alpha p + dp + (1-p)(1+\varepsilon)d}}\,.$$
Here, $c_1 > 0$ and $c_2 > 0$ are user-specified constants and $C > 0$ is a constant independent of $n$.
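For reference, the schedule in Corollary 1 is easy to compute; a small helper (ours) reads:

```python
def schedule(n, alpha, d, p, eps, c1=1.0, c2=1.0):
    """lambda_n, gamma_n from Corollary 1 (c1, c2 user-chosen)."""
    denom = 2 * alpha + 2 * alpha * p + d * p + (1 - p) * (1 + eps) * d
    lam_n = c1 * n ** (-(2 * alpha + d) / denom)
    gamma_n = c2 * n ** (-1.0 / denom)
    return lam_n, gamma_n
```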
Note that for every $\xi > 0$ we can find $\varepsilon, p \in (0, 1)$ sufficiently close to 0 such that the learning rate
in Corollary 1 is at least as fast as
$$n^{-\frac{2\alpha}{2\alpha + d} + \xi}\,.$$
To achieve these rates, however, we need to set $\lambda_n$ and $\gamma_n$ as in Corollary 1, which in turn requires
us to know $\alpha$. Since in practice we usually do not know this value, we now show that a standard
training/validation approach, see e.g. [2, Chapters 6.5, 7.4, 8.2], achieves the same rates adaptively,
i.e. without knowing $\alpha$. To this end, let $\Lambda := (\Lambda_n)$ and $\Gamma := (\Gamma_n)$ be sequences of finite subsets
$\Lambda_n, \Gamma_n \subset (0, 1]$. For a data set $D := ((x_1, y_1), \ldots, (x_n, y_n))$, we define
$$D_1 := ((x_1, y_1), \ldots, (x_m, y_m))\,,$$
$$D_2 := ((x_{m+1}, y_{m+1}), \ldots, (x_n, y_n))\,,$$
where $m := \lfloor n/2 \rfloor + 1$ and $n \ge 4$. We will use $D_1$ as a training set by computing the SVM decision
functions
$$f_{D_1, \lambda, \gamma} := \arg\min_{f \in H_\gamma} \lambda \|f\|_{H_\gamma}^2 + \mathcal R_{L,D_1}(f)\,, \qquad (\lambda, \gamma) \in \Lambda_n \times \Gamma_n\,,$$
and use $D_2$ to determine $(\lambda, \gamma)$ by choosing a $(\lambda_{D_2}, \gamma_{D_2}) \in \Lambda_n \times \Gamma_n$ such that
$$\mathcal R_{L,D_2}\big( f_{D_1, \lambda_{D_2}, \gamma_{D_2}} \big) = \min_{(\lambda, \gamma) \in \Lambda_n \times \Gamma_n} \mathcal R_{L,D_2}\big( f_{D_1, \lambda, \gamma} \big)\,.$$
Theorem 2. Under the assumptions of Theorem 1 we fix sequences $\Lambda := (\Lambda_n)$ and $\Gamma := (\Gamma_n)$
of finite subsets $\Lambda_n, \Gamma_n \subset (0, 1]$ such that $\Lambda_n$ is a $\delta_n$-net of $(0, 1]$ and $\Gamma_n$ is an $\varepsilon_n$-net of $(0, 1]$
with $\delta_n \le n^{-1}$ and $\varepsilon_n \le n^{-\frac{1}{2+d}}$. Furthermore, assume that the cardinalities $|\Lambda_n|$ and $|\Gamma_n|$ grow
polynomially in $n$. Then, for all $\xi > 0$, the TV-SVM producing the decision functions $f_{D_1, \lambda_{D_2}, \gamma_{D_2}}$
learns with the rate
$$n^{-\frac{2\alpha}{2\alpha + d} + \xi} \qquad (7)$$
with probability $P^n$ not less than $1 - e^{-\tau}$.
What is left to do is to relate Assumption (6) with the function spaces introduced earlier, such
that we can show that the learning rates deduced earlier are asymptotically optimal under some
circumstances.
Corollary 2. Let $X \subset B_{\ell_2^d}$ be a domain such that we have an extension operator $E$ of the form
described in front of Theorem 1. Furthermore, let $M > 0$, $Y := [-M, M]$, and $P$ be a distribution
on $X \times Y$ such that $P_X$ is the uniform distribution on $X$. If, for some $\alpha \in \mathbb N$, we have $f^*_{L,P} \in W_2^\alpha(P_X)$, then, for all $\xi > 0$, both the SVM considered in Corollary 1 and the TV-SVM considered
in Theorem 2 learn with the rate
$$n^{-\frac{2\alpha}{2\alpha + d} + \xi}$$
with probability $P^n$ not less than $1 - e^{-\tau}$. Moreover, if $\alpha > d/2$, then this rate is asymptotically
optimal in a minmax sense.
Similar to Corollary 2 we can show assumption (6) and asymptotically optimal learning rates if the
regression function is contained in a Besov space.
Corollary 3. Let $X \subset B_{\ell_2^d}$ be a domain such that we have an extension operator $E$ of the form
described in front of Theorem 1. Furthermore, let $M > 0$, $Y := [-M, M]$, and $P$ be a distribution
on $X \times Y$ such that $P_X$ is the uniform distribution on $X$. If, for some $\alpha \ge 1$, we have $f^*_{L,P} \in B^\alpha_{2,\infty}(P_X)$, then, for all $\xi > 0$, both the SVM considered in Corollary 1 and the TV-SVM considered
in Theorem 2 learn with the rate
$$n^{-\frac{2\alpha}{2\alpha + d} + \xi}$$
with probability $P^n$ not less than $1 - e^{-\tau}$.
Since the entropy numbers satisfy $e_i\big( \mathrm{id} : B^\alpha_{2,\infty}(P_X) \to L_2(P_X) \big) \asymp i^{-\alpha/d}$ (cf. [7, p. 151])
and since $B^\alpha_{2,\infty}(P_X) = B^\alpha_{2,\infty}(X)$ is continuously embedded into the space $\ell_\infty(X)$ of all bounded
functions on $X$, we obtain by [11, Theorem 2.2] that $n^{-\frac{2\alpha}{2\alpha+d}}$ is the optimal learning rate in a
minimax sense for $\alpha > d$ (cf. [12, Theorem 13]). Therefore, for $\alpha > d$, the learning rates obtained
in Corollary 3 are asymptotically optimal.
So far, we always assumed that $P_X$ is the uniform distribution on $X$. This can be generalized by assuming that $P_X$ is absolutely continuous w.r.t. the Lebesgue measure $\mu$ such that the corresponding
density is bounded away from zero and from infinity. Then we have $L_2(P_X) = L_2(\mu)$ with equivalent norms and the results for $\mu$ hold for $P_X$ as well. Moreover, to derive learning rates, we actually
only need that the Lebesgue density of $P_X$ is upper bounded. The assumption that the density is
bounded away from zero is only needed to derive the lower bounds in Corollaries 2 and 3.
Furthermore, we assumed $\gamma \in (0, 1]$ in Theorem 1, and hence in Corollary 1 and Theorem 2 as
well. Note that $\gamma$ does not need to be restricted by one. Instead $\gamma$ only needs to be bounded from
above by some constant such that estimates on the entropy numbers for Gaussian kernels as used in
the proofs can be applied. For the sake of simplicity we have chosen one as upper bound; another
upper bound would only have influence on the constants.
There have already been several investigations of learning rates for SVMs using the least
squares loss, see e.g. [13, 14, 15, 16, 17] and the references therein. In particular, optimal rates
have been established in [16] if $f_P^* \in H$ and the eigenvalue behavior of the integral operator
associated to $H$ is known. Moreover, if $f_P^* \notin H$, then [17] and [12] both establish learning rates of
the form $n^{-\frac{\beta}{\beta+p}}$, where $\beta$ is a parameter describing the approximation properties of $H$ and
$p$ is a parameter describing the eigenvalue decay. Furthermore, in the introduction of [17] it is
mentioned that the assumptions on the eigenvalues and eigenfunctions also hold for Gaussian kernels
with fixed width, but this case as well as the more interesting case of Gaussian kernels with variable
widths are not further investigated. In the first case, where Gaussian kernels with fixed width are
considered, the approximation error behaves very badly as shown in [18], and fast rates cannot be
expected as we discuss below. In the second case, where variable widths are considered as in our
paper, it is crucial to carefully control the influence of $\gamma$ on all arising constants, which unfortunately
has not been worked out in [17], either. In [17] and [12], however, additional assumptions on the
interplay between $H$ and $L_2(P_X)$ are required, and [17] actually considers a different exponent
in the regularization term of (2). On the other hand, [12] shows that the rate $n^{-\frac{\beta}{\beta+p}}$ is often
asymptotically optimal in a minmax sense. In particular, the latter is the case for $H = W_2^m(X)$,
$f \in W_2^s(X)$, and $s \in (d/2, m]$, that is, when using a Sobolev space as the underlying RKHS $H$,
then all target functions contained in a Sobolev space of lower smoothness $s > d/2$ can be learned with the
asymptotically optimal rate $n^{-\frac{2s}{2s+d}}$. Here we note that the condition $s > d/2$ ensures by Sobolev's
embedding theorem that $W_2^s(X)$ consists of bounded functions, and hence $Y = [-M, M]$ does not
impose an additional assumption on $f^*_{L,P}$. If $s \in (0, d/2]$, then the results of [12] still yield the
above mentioned rates, but we no longer know whether they are optimal in a minmax sense, since
$Y = [-M, M]$ does impose an additional assumption. In addition, note that for Sobolev spaces this
result, modulo an extra log factor, has already been proved by [1]. This result suggests that by using
a $C^\infty$-kernel such as the Gaussian RBF kernel, one could actually learn the entire scale of Sobolev
spaces with the above mentioned rates. Unfortunately, however, there are good reasons to believe
that this is not the case. Indeed, [18] shows that for many analytic kernels the approximation error
can only have polynomial decay if $f^*_{L,P}$ is analytic, too. In particular, for Gaussian kernels with
fixed width and $f^*_{L,P} \notin C^\infty$ the approximation error does not decay polynomially fast, see [18,
Proposition 1.1.], and if $f^*_{L,P} \in W_2^m(X)$, then, in general, the approximation error function only
has a logarithmic decay. Since it seems rather unlikely that these poor approximation properties can
be balanced by superior bounds on the estimation error, the above-mentioned results indicate that
Gaussian kernels with fixed width may have a poor performance. This conjecture is backed up by
much empirical experience gained throughout the last decade. Beginning with [19], research has thus
focused on the learning performance of SVMs with varying widths. The result that is probably the
closest to ours is [20]. Although these authors actually consider binary classification using convex
loss functions including the least squares loss, it is relatively straightforward to translate
their findings to our least squares regression scenario. The result is the learning rate $n^{-\frac{m}{m+2d+2}}$, again
under the assumption $f^*_{L,P} \in W_2^m(X)$ for some $m > 0$. Furthermore, [21] treats the case where $X$
is isometrically embedded into a t-dimensional, connected and compact $C^\infty$-submanifold of $\mathbb R^d$. In
this case, it turns out that the resulting learning rate does not depend on the dimension $d$, but on the
intrinsic dimension $t$ of the data. Namely, the authors show the rate $n^{-\frac{s}{8s+4t}}$ modulo a logarithmic
factor, where $s \in (0, 1]$ and $f^*_{L,P} \in \mathrm{Lip}(s)$. Another direction of research that can be applied to
Gaussian kernels with varying widths are multi-kernel regularization schemes, see [22, 23, 24] for
some results in this direction. For example, [22] establishes learning rates of the form $n^{-\frac{2m-d}{4(4m-d)}+\xi}$
whenever $f^*_{L,P} \in W_2^m(X)$ for some $m \in (d/2, d/2 + 2)$, where again $\xi > 0$ can be chosen to be
arbitrarily close to 0. Clearly, all these results provide rates that are far from being optimal, so that
it seems fair to say that our results represent a significant advance. Furthermore, we can conclude
that, in terms of asymptotic minmax rates, multi-kernel approaches applied to Gaussian RBFs cannot
provide any significant improvement over a simple training/validation approach for determining the
kernel width and the regularization parameter, since the latter already leads to rates that are optimal
modulo an arbitrarily small $\xi$ in the exponent.
3
Proof of the main result
To prove Theorem 1 we deduce an oracle inequality for the least squares loss by specializing [2,
Theorem 7.23] (cf. Theorem 3). To be finally able to show Theorem 1 originating from Theorem 3,
we have to estimate the approximation error.
Lemma 1. Let $X \subset \mathbb R^d$ be a domain such that we have an extension operator $E$ of the form described in front of Theorem 1, let $P_X$ be the uniform distribution on $X$, and let $f \in L_\infty(X)$. Furthermore,
let $\tilde f$ be defined by
$$\tilde f(x) := \big( \gamma \sqrt\pi \big)^{-\frac d2}\, Ef(x) \qquad (8)$$
for all $x \in \mathbb R^d$ and, for $r \in \mathbb N$ and $\gamma > 0$, let $K : \mathbb R^d \to \mathbb R$ be defined by
$$K(\cdot) := \sum_{j=1}^r \binom rj (-1)^{1-j}\, \frac 1{j^d} \left( \frac 2{\gamma\sqrt\pi} \right)^{\frac d2} K_{j\gamma/\sqrt 2}(\cdot) \qquad (9)$$
with
$$K_\sigma(\cdot) := \exp\left( -\frac{\|\cdot\|_2^2}{\sigma^2} \right)\,.$$
Then, for $r \in \mathbb N$, $\gamma > 0$, and $q \in [1, \infty)$, we have $Ef \in L_q(\tilde P_X)$ and
$$\big\| K * \tilde f - f \big\|_{L_q(P_X)}^q \le C_{r,q}\, \omega_{r, L_q(\mathbb R^d)}(Ef, \gamma/2)^q\,,$$
where $C_{r,q}$ is a constant only depending on $r$, $q$ and $\lambda(X)$.
In order to use the conclusion of Lemma 1 in the proof of Theorem 1 it is necessary to know some
properties of $K * \tilde f$. Therefore, we need the next two lemmata.
Lemma 2. Let $g \in L_2(\mathbb R^d)$, let $H_\gamma$ be the RKHS of the Gaussian RBF kernel $k_\gamma$ over $X \subset \mathbb R^d$ and let
$$K(x) := \sum_{j=1}^r \binom rj (-1)^{1-j}\, \frac 1{j^d} \left( \frac 2{\gamma\sqrt\pi} \right)^{\frac d2} \exp\left( -\frac{2\|x\|_2^2}{j^2\gamma^2} \right)$$
for $x \in \mathbb R^d$ and a fixed $r \in \mathbb N$. Then we have
$$K * g \in H_\gamma\,, \qquad \|K * g\|_{H_\gamma} \le (2^r - 1)\, \|g\|_{L_2(\mathbb R^d)}\,.$$
Lemma 3. Let $g \in L_\infty(\mathbb R^d)$, let $H_\gamma$ be the RKHS of the Gaussian RBF kernel $k_\gamma$ over $X \subset \mathbb R^d$ and let
$K$ be as in Lemma 2. Then
$$|K * g(x)| \le \big( \sqrt\pi\, \gamma \big)^{\frac d2}\, (2^r - 1)\, \|g\|_{L_\infty(\mathbb R^d)}$$
holds for all $x \in X$. Additionally, we assume that $X$ is a domain in $\mathbb R^d$ such that we have an
extension operator $E$ of the form described in front of Theorem 1, $Y := [-M, M]$ and, for all $x \in \mathbb R^d$, $\tilde f(x) := (\gamma\sqrt\pi)^{-d/2}\, E f^*_{L,P}(x)$, where $f^*_{L,P}$ denotes a version of the conditional expectation
such that $f^*_{L,P}(x) = \mathbb E_P(Y|x) \in [-M, M]$ for all $x \in X$. Then we have $\tilde f \in L_\infty(\mathbb R^d)$ and
$$|K * \tilde f(x)| \le a_{0,\infty}\, (2^r - 1)\, M$$
for all $x \in X$, which implies
$$L\big( y, K * \tilde f(x) \big) \le 4^r a^2 M^2$$
for the least squares loss $L$ and all $(x, y) \in X \times Y$, where $a := \max\{a_{0,\infty}, 1\}$.
Next, we modify [2, Theorem 7.23], so that the proof of Theorem 1 can be built upon it.
Theorem 3. Let $X \subset B_{\ell_2^d}$, let $Y := [-M, M] \subset \mathbb R$ be a closed subset with $M > 0$ and let $P$ be a
distribution on $X \times Y$. Furthermore, let $L : Y \times \mathbb R \to [0, \infty)$ be the least squares loss, $k_\gamma$ be
the Gaussian RBF kernel over $X$ with width $\gamma \in (0, 1]$ and $H_\gamma$ be the associated RKHS. Fix an
$f_0 \in H_\gamma$ and a constant $B_0 \ge 4M^2$ such that $\|L \circ f_0\|_\infty \le B_0$. Then, for all fixed $\tau \ge 1$, $\lambda > 0$,
$\varepsilon > 0$ and $p \in (0, 1)$, the SVM using $H_\gamma$ and $L$ satisfies
$$\lambda \|f_{D,\lambda}\|_{H_\gamma}^2 + \mathcal R_{L,P}\big( \hat f_{D,\lambda} \big) - \mathcal R^*_{L,P} \le 9\big( \lambda \|f_0\|_{H_\gamma}^2 + \mathcal R_{L,P}(f_0) - \mathcal R^*_{L,P} \big) + C_{\varepsilon,p}\, \frac{\gamma^{-(1-p)(1+\varepsilon)d}}{\lambda^p\, n} + \frac{\big( 3456 M^2 + 15 B_0 \big)(\ln(3) + 1)\tau}{n}$$
with probability $P^n$ not less than $1 - e^{-\tau}$, where $C_{\varepsilon,p}$ is a constant only depending on $\varepsilon$, $p$ and $M$.
With the previous results we are finally able to prove the oracle inequality stated in Theorem 1.

Proof of Theorem 1. First of all, we want to apply Theorem 3 for $f_0 := K * \tilde f$ with
$$K(x) := \sum_{j=1}^{r} \binom{r}{j} (-1)^{1-j} \frac{1}{j^d} \left(\frac{2}{\sigma^2\pi}\right)^{d/2} \exp\left(-\frac{2\|x\|_2^2}{j^2\sigma^2}\right)$$
and
$$\tilde f(x) := \left(\sigma\sqrt{\pi}\right)^{-d/2} E f^*_{L,P}(x)$$
for all $x \in \mathbb{R}^d$. The choice $f^*_{L,P}(x) \in [-M, M]$ for all $x \in X$ implies $f^*_{L,P} \in L_2(X)$, and the latter together with $X \subset B_{\ell_2^d}$ and (5) yields
$$\|\tilde f\|_{L_2(\mathbb{R}^d)} = \left(\sigma\sqrt{\pi}\right)^{-d/2} \big\|E f^*_{L,P}\big\|_{L_2(\mathbb{R}^d)} \le \left(\sigma\sqrt{\pi}\right)^{-d/2} a_{0,2}\, \big\|f^*_{L,P}\big\|_{L_2(X)} \le 2^{d/2} \left(\sigma\sqrt{\pi}\right)^{-d/2} a_{0,2}\, M, \qquad (10)$$
i.e. $\tilde f \in L_2(\mathbb{R}^d)$. Because of this and Lemma 2,
$$f_0 = K * \tilde f \in H_\sigma$$
is satisfied, and with Lemma 3 we have
$$\|L \circ f_0\|_\infty = \sup_{(x,y) \in X \times Y} |L(y, f_0(x))| = \sup_{(x,y) \in X \times Y} L\big(y, K * \tilde f(x)\big) \le 4^r a^2 M^2 =: B_0.$$
Furthermore, (1) and Lemma 1 yield
$$\mathcal{R}_{L,P}(f_0) - \mathcal{R}^*_{L,P} = \mathcal{R}_{L,P}\big(K * \tilde f\big) - \mathcal{R}^*_{L,P} = \big\|K * \tilde f - f^*_{L,P}\big\|_{L_2(P_X)}^2 \le C_{r,2}\, \omega_{r,L_2(\mathbb{R}^d)}^2\big(E f^*_{L,P}, \sigma/2\big) \le C_{r,2}\, c^2 \sigma^{2\alpha},$$
where we used the assumption $\omega_{r,L_2(\mathbb{R}^d)}\big(E f^*_{L,P}, t\big) \le c\, t^\alpha$ for $\sigma \in (0, 1]$, $\alpha \ge 1$, $r = \lfloor \alpha \rfloor + 1$ and a constant $c > 0$ in the last step. By Lemma 2 and (10) we know
$$\|f_0\|_{H_\sigma} = \big\|K * \tilde f\big\|_{H_\sigma} \le (2^r - 1)\, \|\tilde f\|_{L_2(\mathbb{R}^d)} \le (2^r - 1)\, 2^{d/2} \left(\sigma\sqrt{\pi}\right)^{-d/2} a_{0,2}\, M.$$
Therefore, Theorem 3 and the above choice of $f_0$ yield, for all fixed $\tau \ge 1$, $\lambda > 0$, $\varepsilon > 0$ and $p \in (0, 1)$, that the SVM using $H_\sigma$ and $L$ satisfies
$$\lambda \|f_{D,\lambda}\|_{H_\sigma}^2 + \mathcal{R}_{L,P}\big(\hat f_{D,\lambda}\big) - \mathcal{R}^*_{L,P} \le 9\left((2^r - 1)^2\, 2^d \pi^{-d/2} a_{0,2}^2 M^2\, \lambda \sigma^{-d} + C_{r,2}\, c^2 \sigma^{2\alpha}\right) + C_{\varepsilon,p}\, \frac{\sigma^{-(1-p)(1+\varepsilon)d}}{\lambda^p n} + \frac{C_2\, \tau}{n}$$
$$\le C_1\, \lambda \sigma^{-d} + 9\, C_r\, c^2 \sigma^{2\alpha} + C_{\varepsilon,p}\, \frac{\sigma^{-(1-p)(1+\varepsilon)d}}{\lambda^p n} + \frac{C_2\, \tau}{n}$$
with probability $P^n$ not less than $1 - e^{-\tau}$ and with constants $C_1 := 9\, (2^r - 1)^2\, 2^d \pi^{-d/2} a_{0,2}^2 M^2$, $C_2 := (\ln(3) + 1)\big(3456 + 15 \cdot 4^r a^2\big) M^2$, $a := \max\{a_{0,\infty}, 1\}$, $C_r := C_{r,2}$ only depending on $r$ and $\mu(X)$, and $C_{\varepsilon,p}$ as in Theorem 3.
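As a rough illustration of how the four terms of this bound trade off, one can minimize the right-hand side numerically over $\lambda$ and $\sigma$ for growing $n$; with all constants set to one, the optimum roughly tracks the rate $n^{-2\alpha/(2\alpha+d)}$ up to the slack introduced by $p$ and $\varepsilon$. The constants, grids and parameter values below are assumptions for illustration only, not quantities from the paper.

```python
import numpy as np

# Numerical exploration of the bound's trade-off (all constants set to 1,
# grids ad hoc). The minimized bound should decay roughly like
# n^{-2*alpha/(2*alpha + d)}.
d, alpha, p, eps, tau = 2, 1.5, 0.01, 0.01, 1.0

def best_bound(n):
    lams = np.logspace(-14.0, 0.0, 240)
    sigs = np.logspace(-4.0, 0.0, 240)
    L, S = np.meshgrid(lams, sigs)
    B = (L * S**(-d)                                   # ~ C1 * lam * sig^-d
         + S**(2 * alpha)                              # approximation error
         + S**(-(1 - p) * (1 + eps) * d) / (L**p * n)  # stochastic term
         + tau / n)                                    # confidence term
    return B.min()

for n in [1e4, 1e6, 1e8]:
    rate = n ** (-2 * alpha / (2 * alpha + d))
    print(f"n = {n:8.0e}:  min bound = {best_bound(n):.3e},  "
          f"n^(-2a/(2a+d)) = {rate:.3e}")
```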
References
[1] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer-Verlag, New York, 2002.
[2] I. Steinwart and A. Christmann. Support Vector Machines. Springer-Verlag, New York, 2008.
[3] R. A. DeVore and G. G. Lorentz. Constructive Approximation. Springer-Verlag, Berlin Heidelberg, 1993.
[4] R. A. DeVore and V. A. Popov. Interpolation of Besov spaces. AMS, Volume 305, 1988.
[5] H. Berens and R. A. DeVore. Quantitative Korovkin theorems for positive linear operators on L_p-spaces. AMS, Volume 245, 1978.
[6] H. Johnen and K. Scherer. On the equivalence of the K-functional and moduli of continuity and some applications. In Lecture Notes in Math., volume 571, pages 119–140. Springer-Verlag, Berlin, 1976.
[7] D. E. Edmunds and H. Triebel. Function Spaces, Entropy Numbers, Differential Operators. Cambridge University Press, 1996.
[8] E. M. Stein. Singular Integrals and Differentiability Properties of Functions. Princeton Univ. Press, 1970.
[9] R. A. Adams and J. J. F. Fournier. Sobolev Spaces. Academic Press, 2nd edition, 2003.
[10] H. Triebel. Theory of Function Spaces III. Birkhäuser Verlag, 2006.
[11] V. Temlyakov. Optimal estimators in learning theory. Banach Center Publications, Inst. Math. Polish Academy of Sciences, 72:341–366, 2006.
[12] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[13] F. Cucker and S. Smale. On the mathematical foundations of learning. Bull. Amer. Math. Soc., 39:1–49, 2002.
[14] E. De Vito, A. Caponnetto, and L. Rosasco. Model selection for regularized least-squares algorithm in learning theory. Found. Comput. Math., 5:59–85, 2005.
[15] S. Smale and D.-X. Zhou. Learning theory estimates via integral operators and their approximations. Constr. Approx., 26:153–172, 2007.
[16] A. Caponnetto and E. De Vito. Optimal rates for regularized least squares algorithm. Found. Comput. Math., 7:331–368, 2007.
[17] S. Mendelson and J. Neeman. Regularization in kernel learning. Ann. Statist., 38:526–565, 2010.
[18] S. Smale and D.-X. Zhou. Estimating the approximation error in learning theory. Anal. Appl., Volume 1, 2003.
[19] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Ann. Statist., 35:575–607, 2007.
[20] D.-H. Xiang and D.-X. Zhou. Classification with Gaussians and convex loss. J. Mach. Learn. Res., 10:1447–1468, 2009.
[21] G.-B. Ye and D.-X. Zhou. Learning and approximation by Gaussians on Riemannian manifolds. Adv. Comput. Math., Volume 29, 2008.
[22] Y. Ying and D.-X. Zhou. Learnability of Gaussians with flexible variances. J. Mach. Learn. Res., 8, 2007.
[23] C. A. Micchelli, M. Pontil, Q. Wu, and D.-X. Zhou. Error bounds for learning the kernel. 2005.
[24] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. In S. Dasgupta and A. Klivans, editors, Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
3,553 | 4,217 | Reinforcement Learning using Kernel-Based
Stochastic Factorization
André M. S. Barreto
School of Computer Science
McGill University
Montreal, Canada
Doina Precup
School of Computer Science
McGill University
Montreal, Canada
Joelle Pineau
School of Computer Science
McGill University
Montreal, Canada
Abstract
Kernel-based reinforcement-learning (KBRL) is a method for learning a decision
policy from a set of sample transitions which stands out for its strong theoretical
guarantees. However, the size of the approximator grows with the number of transitions, which makes the approach impractical for large problems. In this paper
we introduce a novel algorithm to improve the scalability of KBRL. We resort
to a special decomposition of a transition matrix, called stochastic factorization,
to fix the size of the approximator while at the same time incorporating all the
information contained in the data. The resulting algorithm, kernel-based stochastic factorization (KBSF), is much faster but still converges to a unique solution.
We derive a theoretical upper bound for the distance between the value functions
computed by KBRL and KBSF. The effectiveness of our method is illustrated with
computational experiments on four reinforcement-learning problems, including a
difficult task in which the goal is to learn a neurostimulation policy to suppress
the occurrence of seizures in epileptic rat brains. We empirically demonstrate that
the proposed approach is able to compress the information contained in KBRL's
model. Also, on the tasks studied, KBSF outperforms two of the most prominent reinforcement-learning algorithms, namely least-squares policy iteration and
fitted Q-iteration.
1
Introduction
Recent years have witnessed the emergence of several reinforcement-learning techniques that make
it possible to learn a decision policy from a batch of sample transitions. Among them, Ormoneit
and Sen's kernel-based reinforcement learning (KBRL) stands out for two reasons [1]. First, unlike
other approximation schemes, KBRL always converges to a unique solution. Second, KBRL is
consistent in the statistical sense, meaning that adding more data always improves the quality of the
resulting policy and eventually leads to optimal performance.
Despite its nice theoretical properties, KBRL has not been widely adopted by the reinforcement
learning community. One possible explanation for this is its high computational complexity. As
discussed by Ormoneit and Glynn [2], KBRL can be seen as the derivation of a finite Markov
decision process whose number of states coincides with the number of sample transitions collected
to perform the approximation. This gives rise to a dilemma: on the one hand one wants as much
data as possible to describe the dynamics of the decision problem, but on the other hand the number
of transitions should be small enough to allow for the numerical solution of the resulting model.
In this paper we describe a practical way of weighting the relative importance of these two conflicting objectives. We rely on a special decomposition of a transition matrix, called stochastic
factorization, to rewrite it as the product of two stochastic matrices of smaller dimension. As we
will see, the stochastic factorization possesses a very useful property: if we swap its factors, we
obtain another transition matrix which retains some fundamental characteristics of the original one.
We exploit this property to fix the size of KBRL's model. The resulting algorithm, kernel-based
stochastic factorization (KBSF), is much faster than KBRL but still converges to a unique solution.
We derive a theoretical bound on the distance between the value functions computed by KBRL and
KBSF. We also present experiments on four reinforcement-learning domains, including the double
pole-balancing task, a difficult control problem representative of a wide class of unstable dynamical
systems, and a model of epileptic rat brains in which the goal is to learn a neurostimulation policy
to suppress the occurrence of seizures. We empirically show that the proposed approach is able to
compress the information contained in KBRL's model, outperforming both the least-squares policy
iteration algorithm and fitted Q-iteration on the tasks studied [3, 4].
2
Background
The KBRL algorithm solves a continuous state-space Markov decision process (MDP) using a finite model approximation. A finite MDP is defined by a tuple $M \equiv (S, A, P^a, r^a, \gamma)$ [5]. The finite sets $S$ and $A$ are the state and action spaces. The matrix $P^a \in \mathbb{R}^{|S| \times |S|}$ gives the transition probabilities associated with action $a \in A$ and the vector $r^a \in \mathbb{R}^{|S|}$ stores the corresponding expected rewards. The discount factor $\gamma \in [0, 1)$ is used to give smaller weights to rewards received further in the future. In the case of a finite MDP, we can use dynamic programming to find an optimal decision policy $\pi^* \in A^{|S|}$ in polynomial time [5]. As is well known, this is done using the concept of a value function. Throughout the paper, we use $v \in \mathbb{R}^{|S|}$ to denote the state-value function and $Q \in \mathbb{R}^{|S| \times |A|}$ to refer to the action-value function. Let the operator $\Gamma : \mathbb{R}^{|S| \times |A|} \mapsto \mathbb{R}^{|S|}$ be given by $\Gamma Q = v$, with $v_i = \max_j q_{ij}$, and define $\Delta : \mathbb{R}^{|S|} \mapsto \mathbb{R}^{|S| \times |A|}$ as $\Delta v = Q$, where the $a$th column of $Q$ is given by $q_a = r^a + \gamma P^a v$. A fundamental result in dynamic programming states that, starting from $v^{(0)} = 0$, the expression $v^{(t)} = \Gamma \Delta v^{(t-1)}$ gives the optimal $t$-step value function, and as $t \to \infty$ the vector $v^{(t)}$ approaches $v^*$, from which any optimal decision policy $\pi^*$ can be derived [5].
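The $\Gamma$ and $\Delta$ operators translate directly into code. The sketch below runs value iteration on a small random MDP; the problem instance, sizes and stopping rule are illustrative choices, not taken from the paper.

```python
import numpy as np

# A direct implementation of the Gamma and Delta operators on a random
# finite MDP (sizes, rewards and tolerance are illustrative).
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P^a, rows sum to 1
r = rng.standard_normal((n_actions, n_states))                    # r^a

def Delta(v):
    # Delta v = Q, whose a-th column is q_a = r^a + gamma * P^a v.
    return np.stack([r[a] + gamma * P[a] @ v for a in range(n_actions)], axis=1)

def Gamma(Q):
    # Gamma Q = v, with v_i = max_j q_ij.
    return Q.max(axis=1)

v = np.zeros(n_states)
for _ in range(10000):                 # v^(t) = Gamma Delta v^(t-1) -> v*
    v_next = Gamma(Delta(v))
    if np.max(np.abs(v_next - v)) < 1e-12:
        break
    v = v_next
pi_star = Delta(v).argmax(axis=1)      # a greedy policy derived from v ~ v*
```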
Consider now an MDP with continuous state space $\mathcal{S} \subset \mathbb{R}^d$ and let $S^a = \{(s^a_k, r^a_k, \hat{s}^a_k) \,|\, k = 1, 2, \ldots, n_a\}$ be a set of sample transitions associated with action $a \in A$, where $s^a_k, \hat{s}^a_k \in \mathcal{S}$ and $r^a_k \in \mathbb{R}$. The model constructed by KBRL has the following transition and reward functions:
$$\hat{P}^a(\hat{s}_j \,|\, s_i) = \begin{cases} \kappa^a(s_i, s^a_k), & \text{if } \hat{s}_j = \hat{s}^a_k, \\ 0, & \text{otherwise,} \end{cases} \qquad \text{and} \qquad \hat{R}^a(s_i, \hat{s}_j) = \begin{cases} r^a_k, & \text{if } \hat{s}_j = \hat{s}^a_k, \\ 0, & \text{otherwise,} \end{cases}$$
where $\kappa^a(\cdot, s^a_k)$ is a weighting kernel centered at $s^a_k$ and defined in such a way that $\sum_{k=1}^{n_a} \kappa^a(s_i, s^a_k) = 1$ for all $s_i \in \mathcal{S}$ (for example, $\kappa^a$ can be a normalized Gaussian function; see [1] and [2] for a formal definition and other examples of valid kernels). Since only transitions ending in the states $\hat{s}^a_k$ have a non-zero probability of occurrence, one can solve a finite MDP $\hat{M}$ whose space is composed solely of these $n = \sum_a n_a$ states [2, 6]. After the optimal value function of $\hat{M}$ has been found, the value of any state $s_i \in \mathcal{S}$ can be computed as $Q(s_i, a) = \sum_{k=1}^{n_a} \kappa^a(s_i, s^a_k) \big[ r^a_k + \gamma \hat{V}^*(\hat{s}^a_k) \big]$. Ormoneit and Sen [1] proved that, if $n_a \to \infty$ for all $a \in A$ and the widths of the kernels $\kappa^a$ shrink at an "admissible" rate, the probability of choosing a suboptimal action based on $Q(s_i, a)$ converges to zero.
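A minimal sketch of KBRL's model construction for a single action follows; the Gaussian kernel with width `tau` and the toy data are illustrative assumptions (the paper leaves the kernel choice abstract).

```python
import numpy as np

# Sketch of KBRL's finite model for one action a. The normalized Gaussian
# kernel and its width `tau` are one admissible choice; data are synthetic.
def kbrl_model(S, R, S_hat, tau=0.1):
    """S, S_hat: (n_a, d) start/end states of the sampled transitions;
    R: (n_a,) sampled rewards. Returns the n_a x n_a transition matrix over
    the end states s_hat^a_k (rows indexed by those same end states)."""
    diff = S_hat[:, None, :] - S[None, :, :]
    W = np.exp(-np.sum(diff**2, axis=-1) / (2.0 * tau**2))
    P = W / W.sum(axis=1, keepdims=True)   # kappa^a(s_i, s^a_k), rows sum to 1
    return P, R                            # landing in s_hat^a_k pays r^a_k

rng = np.random.default_rng(1)
n_a, d = 50, 2
S = rng.uniform(size=(n_a, d))
S_hat = S + 0.05 * rng.standard_normal((n_a, d))
P, R = kbrl_model(S, rng.uniform(size=n_a), S_hat)
assert np.allclose(P.sum(axis=1), 1.0)
```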
As discussed in the introduction, the problem with the practical application of KBRL is that, as n
increases, so does the cost of solving the MDP derived by this algorithm. To alleviate this problem,
Jong and Stone [6] propose growing incrementally the set of sample transitions, using a prioritized
sweeping approach to guide the exploration of the state space. In this paper we present a new method
for addressing this problem, using stochastic factorization.
3
Stochastic factorization
A stochastic matrix has only non-negative elements and each of its rows sums to 1. That said, we
can introduce the concept that will serve as a cornerstone for the rest of the paper:
Definition 1 Given a stochastic matrix $P \in \mathbb{R}^{n \times p}$, the relation $P = DK$ is called a stochastic factorization of $P$ if $D \in \mathbb{R}^{n \times m}$ and $K \in \mathbb{R}^{m \times p}$ are also stochastic matrices. The integer $m > 0$ is the order of the factorization.
This mathematical concept has been explored before. For example, Cohen and Rothblum [7] briefly
discuss it as a special case of non-negative matrix factorization, while Cutler and Breiman [8] focus
on slightly modified versions of the stochastic factorization for statistical data analysis. However, in
this paper we will focus on a useful property of this type of factorization that seems to have passed unnoticed thus far. We call it the "stochastic-factorization trick":

Given a stochastic factorization of a square matrix, $P = DK$, swapping the factors of the factorization yields another transition matrix $\bar{P} = KD$, potentially much smaller than the original, which retains the basic topology and properties of $P$.

The stochasticity of $\bar{P}$ follows immediately from the same property of $D$ and $K$. What is perhaps more surprising is the fact that this matrix shares some fundamental characteristics with the original matrix $P$. Specifically, it is possible to show that: (i) for each recurrent class in $P$ there is a corresponding class in $\bar{P}$ with the same period and, given some simple assumptions about the factorization, (ii) $P$ is irreducible if and only if $\bar{P}$ is irreducible and (iii) $P$ is regular if and only if $\bar{P}$ is regular (see [9] for details).
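The trick is easy to verify numerically. The sketch below builds random stochastic factors, checks that both $DK$ and $KD$ are stochastic, and illustrates one concrete correspondence between the two chains: if $\bar{\mu}$ is stationary for $KD$, then $\bar{\mu} K$ is stationary for $DK$. All sizes and seeds are arbitrary choices.

```python
import numpy as np

# Demonstration of the swap: P = DK (n x n) and P_bar = KD (m x m) are both
# stochastic, and a stationary distribution of P_bar yields one of P via
# mu = mu_bar K (since mu_bar K D = mu_bar implies (mu_bar K)(D K) = mu_bar K).
rng = np.random.default_rng(2)
n, m = 8, 3
D = rng.dirichlet(np.ones(m), size=n)      # n x m, rows sum to 1
K = rng.dirichlet(np.ones(n), size=m)      # m x n, rows sum to 1
P, P_bar = D @ K, K @ D

assert np.allclose(P.sum(axis=1), 1.0) and np.allclose(P_bar.sum(axis=1), 1.0)

def stationary(T):
    w, V = np.linalg.eig(T.T)              # left eigenvector for eigenvalue 1
    mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return mu / mu.sum()

mu_bar = stationary(P_bar)
mu = mu_bar @ K
assert np.allclose(mu @ P, mu)             # mu_bar K is stationary for P = DK
```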
Given the strong connection between $P \in \mathbb{R}^{n \times n}$ and $\bar{P} \in \mathbb{R}^{m \times m}$, the idea of replacing the former by the latter comes almost inevitably. The motivation for this would be, of course, to save computational resources when $m < n$. In this paper we are interested in exploiting the stochastic-factorization trick to reduce the computational cost of dynamic programming. The idea is straightforward: given stochastic factorizations of the transition matrices $P^a$, we can apply our trick to obtain a reduced MDP that will be solved in place of the original one. In the most general scenario, we would have one independent factorization $P^a = D^a K^a$ for each action $a \in A$. However, in the current work we will focus on a particular case which will prove to be convenient both mathematically and computationally. When all factorizations share the same matrix $D$, it is easy to derive theoretical guarantees regarding the quality of the solution of the reduced MDP:
Proposition 1 Let $M \equiv (S, A, P^a, r^a, \gamma)$ be a finite MDP with $|S| = n$ and $0 \le \gamma < 1$. Let $D K^a = P^a$ be $|A|$ stochastic factorizations of order $m$ and let $\bar{r}^a$ be vectors in $\mathbb{R}^m$ such that $D \bar{r}^a = r^a$ for all $a \in A$. Define the MDP $\bar{M} \equiv (\bar{S}, A, \bar{P}^a, \bar{r}^a, \gamma)$, with $|\bar{S}| = m$ and $\bar{P}^a = K^a D$. Then,
$$\|v^* - \bar{v}\|_\infty \le \frac{\bar{C}}{(1 - \gamma)^2} \max_i \big(1 - \max_j d_{ij}\big), \qquad (1)$$
where $\bar{v} = \Gamma D \bar{Q}^*$, $\bar{C} = \max_{a,k} \bar{r}^a_k - \min_{a,k} \bar{r}^a_k$, and $\|\cdot\|_\infty$ is the maximum norm.

Proof. Since $r^a = D \bar{r}^a$ and $D \bar{P}^a = D K^a D = P^a D$ for all $a \in A$, the stochastic matrix $D$ satisfies Sorg and Singh's definition of a soft homomorphism between $M$ and $\bar{M}$ (see equations (25)–(28) in [10]). Applying Theorem 1 by the same authors, we know that $\big\|\Gamma (Q^* - D \bar{Q}^*)\big\|_\infty \le (1 - \gamma)^{-1} \sup_{i,t} (1 - \max_j d_{ij})\, \delta_i^{(t)}$, where $\delta_i^{(t)} = \max_{j: d_{ij} > 0,\, k} \bar{q}^{(t)}_{jk} - \min_{j: d_{ij} > 0,\, k} \bar{q}^{(t)}_{jk}$ and the $\bar{q}^{(t)}_{jk}$ are elements of the optimal $t$-step action-value function of $\bar{M}$, $\bar{Q}^{(t)} = \Delta \bar{v}^{(t-1)}$. In order to obtain our bound, we note that $\|\Gamma Q^* - \Gamma D \bar{Q}^*\|_\infty \le \big\|\Gamma (Q^* - D \bar{Q}^*)\big\|_\infty$ and, for all $t > 0$, $\delta_i^{(t)} \le (1 - \gamma)^{-1} \big(\max_{a,k} \bar{r}^a_k - \min_{a,k} \bar{r}^a_k\big)$. $\square$
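The bound in (1) can be checked empirically by solving both MDPs. In the sketch below the factors, rewards and sizes are arbitrary test values; the final assertion should hold whenever Proposition 1 applies.

```python
import numpy as np

# Empirical check of the bound in (1) on a random factored MDP.
rng = np.random.default_rng(3)
n, m, nA, gamma = 20, 4, 2, 0.9
D = rng.dirichlet(np.ones(m), size=n)                   # shared n x m factor
Ka = [rng.dirichlet(np.ones(n), size=m) for _ in range(nA)]
r_bar = [rng.uniform(size=m) for _ in range(nA)]
P = [D @ Ka[a] for a in range(nA)]                      # P^a = D K^a
r = [D @ rb for rb in r_bar]                            # r^a = D r_bar^a

def solve(Ps, rs, k):                                   # value iteration
    v = np.zeros(k)
    for _ in range(3000):
        v = np.max([rs[a] + gamma * Ps[a] @ v for a in range(len(Ps))], axis=0)
    return v

v_star = solve(P, r, n)                                 # v* of the big MDP
P_bar = [Ka[a] @ D for a in range(nA)]                  # reduced MDP
v_bar = solve(P_bar, r_bar, m)
Q_bar = np.stack([r_bar[a] + gamma * P_bar[a] @ v_bar for a in range(nA)], axis=1)
v_tilde = (D @ Q_bar).max(axis=1)                       # Gamma D Q_bar^*

C = max(rb.max() for rb in r_bar) - min(rb.min() for rb in r_bar)
bound = C / (1.0 - gamma) ** 2 * np.max(1.0 - D.max(axis=1))
assert np.max(np.abs(v_star - v_tilde)) <= bound + 1e-8
```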
Proposition 1 elucidates the basic mechanism through which one can exploit the stochastic-factorization trick to reduce the number of states in an MDP. However, in order to apply this idea in practice, one must actually compute the factorizations. This computation can be expensive, exceeding the computational effort necessary to calculate $v^*$ [11, 9]. In the next section we discuss a
strategy to reduce the computational cost of the proposed approach.
4
Kernel-based stochastic factorization
In Section 2 we presented KBRL, an approximation scheme for reinforcement learning whose main
drawback is its high computational complexity. In Section 3 we discussed how the stochastic-factorization trick can in principle be useful to reduce an MDP, as long as one circumvents the
computational burden imposed by the calculation of the matrices involved in the process. We now
show how to leverage these two components to produce an algorithm called kernel-based stochastic
factorization (KBSF) that overcomes these computational limitations.
As outlined in Section 2, KBRL defines the probability of a transition from state $\hat{s}^b_i$ to state $\hat{s}^a_k$ via kernel averaging, formally denoted $\kappa^a(\hat{s}^b_i, s^a_k)$, where $a, b \in A$. So for each action $a \in A$, the state $\hat{s}^b_i$ has an associated stochastic vector $\hat{p}^a_i \in \mathbb{R}^{1 \times n}$ whose non-zero entries correspond to the function $\kappa^a(\hat{s}^b_i, \cdot)$ evaluated at $s^a_k$, $k = 1, 2, \ldots, n_a$. Recall that we are dealing with a continuous state space, so it is possible to compute an analogous vector for any $s_i \in \mathcal{S}$. Therefore, we can link each state of the original MDP with $|A|$ $n$-dimensional stochastic vectors. The core strategy of KBSF is to find a set of $m$ representative states associated with vectors $k^a_i \in \mathbb{R}^{1 \times n}$ whose convex combination can approximate the rows of the corresponding $\hat{P}^a$.

KBRL's matrices $\hat{P}^a$ have a very specific structure, since only transitions ending in states $\hat{s}^a_i$ associated with action $a$ have a non-zero probability of occurrence. Suppose now we want to apply the stochastic-factorization trick to KBRL's MDP. Assuming that the matrices $K^a$ have the same structure as $\hat{P}^a$, when computing $\bar{P}^a = K^a D$ we only have to look at the submatrices of $K^a$ and $D$ corresponding to the $n_a$ non-zero columns of $K^a$. We call these matrices $\dot{K}^a \in \mathbb{R}^{m \times n_a}$ and $\dot{D}^a \in \mathbb{R}^{n_a \times m}$. Let $\{\bar{s}_1, \bar{s}_2, \ldots, \bar{s}_m\}$ be a set of representative states in $\mathcal{S}$. KBSF computes matrices $\dot{D}^a$ and $\dot{K}^a$ with elements $\dot{d}^a_{ij} = \bar{\kappa}(\hat{s}^a_i, \bar{s}_j)$ and $\dot{k}^a_{ij} = \kappa^a(\bar{s}_i, s^a_j)$, where $\bar{\kappa}$ is also a kernel. Obviously, once we have $\dot{D}^a$ and $\dot{K}^a$ it is trivial to compute $D$ and $K^a$. Depending on how the states $\bar{s}_i$ and the kernels $\bar{\kappa}$ are defined, we have $D K^a \approx \hat{P}^a$ for all $a \in A$. The important point here is that the matrices $P^a = D K^a$ are never actually computed, but instead we solve an MDP with $m$ states whose dynamics are given by $\bar{P}^a = K^a D = \dot{K}^a \dot{D}^a$. Algorithm 1 gives a step-by-step description of KBSF.
Algorithm 1 KBSF
Input: $S^a$ for all $a \in A$, $m$
Select a set of representative states $\{\bar{s}_1, \bar{s}_2, \ldots, \bar{s}_m\}$
for each $a \in A$ do
    Compute matrix $\dot{D}^a$: $\dot{d}^a_{ij} = \bar{\kappa}(\hat{s}^a_i, \bar{s}_j)$
    Compute matrix $\dot{K}^a$: $\dot{k}^a_{ij} = \kappa^a(\bar{s}_i, s^a_j)$
    Compute vector $\bar{r}^a$: $\bar{r}^a_i = \sum_j \dot{k}^a_{ij} r^a_j$
end for
Solve $\bar{M} \equiv (\bar{S}, A, \bar{P}^a, \bar{r}^a, \gamma)$, with $\bar{P}^a = \dot{K}^a \dot{D}^a$
Return $\tilde{v} = \Gamma \bar{D} \bar{Q}^*$, where $\bar{D} = \big[ (\dot{D}^{a_1})^\top\, (\dot{D}^{a_2})^\top \cdots (\dot{D}^{a_{|A|}})^\top \big]^\top$
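A compact transcription of Algorithm 1 follows. The Gaussian kernels, the widths `tau` and `tau_bar`, and the value-iteration solver for the reduced MDP are illustrative choices; the algorithm itself only requires kernel rows that sum to one.

```python
import numpy as np

# Sketch of Algorithm 1 (KBSF); kernel and solver choices are assumptions.
def normalized_kernel(X, Y, width):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * width ** 2))
    return W / W.sum(axis=1, keepdims=True)      # rows sum to 1

def kbsf(samples, S_rep, gamma=0.99, tau=0.1, tau_bar=0.1, n_iter=2000):
    """samples: dict a -> (S, R, S_hat) arrays; S_rep: (m, d) representative
    states. Returns Q_bar^* over the representative states and the D-dot
    blocks needed to read off values at the sampled end states."""
    actions = sorted(samples)
    m = S_rep.shape[0]
    D_dot, K_dot, r_bar, P_bar = {}, {}, {}, {}
    for a in actions:
        S, R, S_hat = samples[a]
        D_dot[a] = normalized_kernel(S_hat, S_rep, tau_bar)   # n_a x m
        K_dot[a] = normalized_kernel(S_rep, S, tau)           # m x n_a
        r_bar[a] = K_dot[a] @ R                               # r_bar^a
        P_bar[a] = K_dot[a] @ D_dot[a]                        # m x m
    v = np.zeros(m)                                           # solve M_bar
    for _ in range(n_iter):
        v = np.max([r_bar[a] + gamma * P_bar[a] @ v for a in actions], axis=0)
    Q_bar = np.stack([r_bar[a] + gamma * P_bar[a] @ v for a in actions], axis=1)
    return Q_bar, D_dot
```

Approximate action values at the sampled end states for action $a$ then follow as `D_dot[a] @ Q_bar`, matching the return step of Algorithm 1.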
Observe that we did not describe how to define the representative states $\bar{s}_i$. Ideally, these states would be linked to vectors $k^a_i$ forming a convex hull which contains the rows of $\hat{P}^a$. In practice, we can often resort to simple methods to pick states $\bar{s}_i$ in strategic regions of $\mathcal{S}$. In Section 5 we give an example of how to do so. Also, the reader might have noticed that the stochastic factorizations computed by KBSF are in fact approximations of the matrices $\hat{P}^a$. The following proposition extends the result of the previous section to the approximate case:
Proposition 2 Let $\hat{M} \equiv (\hat{S}, A, \hat{P}^a, \hat{r}^a, \gamma)$ be the finite MDP derived by KBRL and let $D$, $K^a$, and $\bar{r}^a$ be the matrices and vectors computed by KBSF. Then,
$$\|\hat{v}^* - \tilde{v}\|_\infty \le \frac{1}{1 - \gamma} \max_a \|\hat{r}^a - D \bar{r}^a\|_\infty + \frac{\bar{C}}{2(1 - \gamma)} \max_a \big\|\hat{P}^a - D K^a\big\|_\infty + \frac{\bar{C}}{(1 - \gamma)^2} \max_i \big(1 - \max_j d_{ij}\big), \qquad (2)$$
where $\bar{C} = \max_{a,i} \bar{r}^a_i - \min_{a,i} \bar{r}^a_i$.

Proof. Let $M \equiv (S, A, D K^a, D \bar{r}^a, \gamma)$. It is obvious that
$$\|\hat{v}^* - \tilde{v}\|_\infty \le \|\hat{v}^* - v^*\|_\infty + \|v^* - \tilde{v}\|_\infty. \qquad (3)$$
In order to provide a bound for $\|\hat{v}^* - v^*\|_\infty$, we apply Whitt's Theorem 3.1 and Corollary (b) of his Theorem 6.1 [12], with all mappings between $\hat{M}$ and $M$ taken to be identities, to obtain
$$\|\hat{v}^* - v^*\|_\infty \le \frac{1}{1 - \gamma} \max_a \|\hat{r}^a - D \bar{r}^a\|_\infty + \frac{\bar{C}}{2(1 - \gamma)} \max_a \big\|\hat{P}^a - D K^a\big\|_\infty. \qquad (4)$$
Resorting to Proposition 1, we can substitute (1) and (4) in (3) to obtain (2). $\square$
Notice that when $D$ is deterministic, that is, when all its non-zero elements are 1, expression (2) reduces to Whitt's classical result regarding state aggregation in dynamic programming [12]. On the other hand, when the stochastic factorizations are exact, we recover (1), which is a computable version of Sorg and Singh's bound for soft homomorphisms [10]. Finally, if we have exact deterministic factorizations, the right-hand side of (2) reduces to zero. This also makes sense, since in this case the stochastic-factorization trick gives rise to an exact homomorphism [13].
As shown in Algorithm 1, KBSF is very simple to understand and to implement. It is also fast,
requiring only $O(nm^2|A|)$ operations to build a reduced version of an MDP. Finally, and perhaps
most importantly, KBSF always converges to a unique solution whose distance to the optimal one is
bounded. In the next section we show how all these qualities turn into practical benefits.
5
Experiments
We now present a series of computational experiments designed to illustrate the behavior of KBSF
in a variety of challenging domains. We start with a simple problem showing that KBSF is indeed
capable of compressing the information contained in KBRL's model. We then move to more difficult
tasks, and compare KBSF with other state-of-the-art reinforcement-learning algorithms.
All problems considered in this section have a continuous state space and a finite number of actions
and were modeled as discounted tasks with $\gamma = 0.99$. The algorithms' results correspond to the
performance of the greedy decision policy derived from the final value function computed. In all
cases, the decision policies were evaluated on a set of test states from which the tasks cannot be
easily solved. This makes the tasks considerably harder, since the algorithms must provide a good
approximation of the value function over a larger region of the state space.
The experiments were carried out in the same way for all tasks: first, we collected a set of $n$ sample transitions $(s^a_k, r^a_k, \hat{s}^a_k)$ using a uniformly-random exploration policy. Then the states $\hat{s}^a_k$ were grouped by the k-means algorithm into $m$ clusters and a Gaussian kernel $\bar{\kappa}$ was positioned at the center of each resulting cluster [14]. These kernels defined the models used by KBSF to approximate the value function. This process was repeated 50 times for each task.
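For concreteness, the clustering step of this protocol might look as follows; the toy transitions, the choice `m = 50` and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Pool the sampled end states and take k-means centers as the m
# representative states (synthetic data; m = 50 is a placeholder).
rng = np.random.default_rng(4)
samples = {a: (rng.uniform(size=(500, 2)),          # start states
               rng.uniform(size=500),               # rewards
               rng.uniform(size=(500, 2)))          # end states
           for a in range(2)}
end_states = np.vstack([samples[a][2] for a in samples])
S_rep = KMeans(n_clusters=50, n_init=10).fit(end_states).cluster_centers_
# S_rep can now be passed to the kbsf() sketch given after Algorithm 1.
```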
We adopted the same width for all kernels. The algorithms were executed on each task with the following values for this parameter: {1, 0.1, 0.01}. The results reported represent the best performance
of the algorithms over the 50 runs; that is, for each n and each m we picked the width that generated the maximum average return. Throughout this section we use the following convention to refer
to specific instances of each method: the first number enclosed in parentheses after an algorithm's
name is n, the number of sample transitions used in the approximation, and the second one is m, the
size of the model used to approximate the value function. Note that for KBRL n and m coincide.
Figure 1 shows the results obtained by KBRL and KBSF on the puddle-world task [15]. In Figure 1a and 1b we observe the effect of fixing the number of transitions n and varying the number
of representative states $m$. As expected, the results of KBSF improve as $m \to n$. More surprising
is the fact that KBSF has essentially the same performance as KBRL using models one order of
magnitude smaller. This indicates that KBSF is summarizing well the information contained in the
data. Depending on the values of n and m, this compression may represent a significant reduction of
computational resources. For example, by replacing KBRL(8000) with KBSF(8000, 90), we obtain
a decrease of more than 99% on the number of operations performed to find a policy, as shown in
Figure 1b (the cost of constructing KBSF's MDP is included in all reported run times).
In Figures 1c and 1d we fix m and vary n. Observe in Figure 1c how KBRL and KBSF have similar
performances, and both improve as $n \to \infty$. However, since KBSF is using a model of fixed size, its computational cost depends only linearly on $n$, whereas KBRL's cost grows with $n^3$. This explains the huge difference in the algorithms' run times shown in Figure 1d.
Next we evaluate how KBSF compares to other reinforcement-learning approaches. We first contrast
our method with Lagoudakis and Parr's least-squares policy iteration algorithm (LSPI) [3]. LSPI is
a natural candidate here because it also builds an approximator of fixed size out of a batch of sample
transitions. In all experiments LSPI used the same data and approximation architectures as KBSF
(to be fair, we fixed the width of KBSF's kernel $\kappa^a$ at 1 in the comparisons).
Figure 2 shows the results of LSPI and KBSF on the single and double pole-balancing tasks [16].
We call attention to the fact that the version of the problems used here is significantly harder than
[Figure 1 plots omitted in this extraction. Panels: (a) performance as a function of m; (b) run time as a function of m; (c) performance as a function of n; (d) run time as a function of n. Curves compare KBRL(8000) with KBSF(8000, m) in (a)-(b) and KBRL(n) with KBSF(n, 100) in (c)-(d); vertical axes show return or run time in seconds (log scale).]
Figure 1: Results on the puddle-world task averaged over 50 runs. The algorithms were evaluated on two sets of states distributed over the region of the state space surrounding the "puddles". The first set was a $3 \times 3$ grid over $[0.1, 0.3] \times [0.3, 0.5]$ and the second one was composed of four states: $\{0.1, 0.3\} \times \{0.9, 1.0\}$. The shadowed regions represent 99% confidence intervals.
the more commonly-used variants in which the decision policies are evaluated on a single state close
to the origin. This is probably the reason why LSPI achieves a success rate of no more than 60% on
the single pole-balancing task, as shown in Figure 2a. In contrast, KBSF's decision policies are able
to balance the pole in 90% of the attempts, on average, using as few as m = 30 representative states.
The results of KBSF on the double pole-balancing task are still more impressive. As Wieland [17]
rightly points out, this version of the problem is considerably more difficult than its single pole
variant, and previous attempts to apply reinforcement-learning techniques to this domain resulted
in disappointing performance [18]. As shown in Figure 2c, KBSF($10^6$, 200) is able to achieve a
success rate of more than 80%. To put this number in perspective, recall that some of the test states
are quite challenging, with the two poles inclined and falling in opposite directions.
The good performance of KBSF comes at a relatively low computational cost. A conservative estimate reveals that, were KBRL($10^6$) run on the same computer used for these experiments, we would have to wait for more than 6 months to see the results. KBSF($10^6$, 200) delivers a decision policy in less than 7 minutes. KBSF's computational cost also compares well with that of LSPI, as shown in Figures 2b and 2d. LSPI's policy-evaluation step involves the update and solution of a linear system of equations, which take $O(nm^2)$ and $O(m^3|A|^3)$ operations, respectively. In addition, the policy-update stage requires the definition of $\phi(\hat{s}^a_k)$ for all $n$ states in the set of sample transitions. In contrast, KBSF only performs $O(m^3)$ operations to evaluate a decision policy and $O(m^2|A|)$ operations to update it.
We conclude our empirical evaluation of KBSF by using it to learn a neurostimulation policy for the
treatment of epilepsy. In order to do so, we use a generative model developed by Bush et al. [19]
based on real data collected from epileptic rat hippocampus slices. This model was shown to
[Figure 2 plots omitted in this extraction. Panels: (a) performance on single pole-balancing; (b) run time on single pole-balancing; (c) performance on double pole-balancing; (d) run time on double pole-balancing. Curves compare LSPI(5x10^4, m) with KBSF(5x10^4, m) in (a)-(b) and LSPI(10^6, m) with KBSF(10^6, m) in (c)-(d); vertical axes show the fraction of successful episodes or run time in seconds (log scale).]
Figure 2: Results on the pole-balancing tasks averaged over 50 runs. The values correspond to the
fraction of episodes initiated from the test states in which the pole(s) could be balanced for 3000
steps (one minute of simulated time). The test sets were regular grids defined over the hypercube
centered at the origin and covering 50% of the state-space axes in each dimension (we used a resolution of 3 and 2 cells per dimension for the single and double versions of the problem, respectively).
Shadowed regions represent 99% confidence intervals.
reproduce the seizure pattern of the original dynamical system and was later validated through the
deployment of a learned treatment policy on a real brain slice [20]. The associated decision problem
has a five-dimensional continuous state space and extremely non-linear dynamics. At each state the
agent must choose whether or not to apply an electrical pulse. The goal is to suppress seizures while
minimizing the total amount of stimulation needed to do so.
We use as a baseline for our comparisons the fixed-frequency stimulation policies usually adopted
in standard in vitro clinical studies [20]. In particular, we considered policies that apply electrical
pulses at frequencies of 0 Hz, 0.5 Hz, 1 Hz, and 1.5 Hz. For this task we ran LSPI and KBSF
with sparse kernels, that is, we only computed the value of the Gaussian function at the 6-nearest
neighbors of a given state (thus defining a simplex with the same dimension as the state space). This
modification made it possible to use m = 50,000 representative states with KBSF. Since for LSPI
the reduction on the computational cost was not very significant, we fixed m = 50 to keep its run
time within reasonable bounds.
We compare the decision policies returned by KBSF and LSPI with those computed by fitted Q-iteration using Ernst et al.'s extra-trees algorithm [4]. This approach has shown excellent performance on several reinforcement-learning tasks [4]. We used the extra-trees algorithm to build an
ensemble of 30 trees. The algorithm was run for 50 iterations, with the structure of the trees fixed
after the 10th one. The number of cut-directions evaluated at each node was fixed at dim(S) = 5, and
the minimum number of elements required to split a node, denoted here by $n_{\min}$, was selected from the set {20, 30, ..., 50, 100, 150, ..., 200}. In general, we observed that the performance of the tree-based method improved with smaller values for $n_{\min}$, with an expected increase in the computational cost. Thus, in order to give an overall characterization of the performance of fitted Q-iteration, we report the results obtained with the extreme values of $n_{\min}$. The respective instances of the tree-based approach are referred to as T20 and T200.
approach are referred to as T20 and T200.
Figure 3 shows the results on the epilepsy-suppression task. In order to obtain different compromises between the problem's two conflicting objectives, we varied the relative magnitude of the penalties
associated with the occurrence of seizures and with the application of an electrical pulse [19, 20].
In particular, we fixed the latter at $-1$ and varied the former with values in $\{-10, -20, -40\}$. This appears in the plots as subscripts next to the algorithms' names. As shown in Figure 3a, LSPI's policies seem to prioritize reduction of stimulation at the expense of higher seizure occurrence, which
is clearly sub-optimal from a clinical point of view. T200 also performs poorly, with solutions representing no advance over the fixed-frequency stimulation strategies. In contrast, T20 and KBSF
are both able to generate decision policies superior to the 1 Hz policy, which is the most efficient
stimulation regime known to date in the clinical literature [21]. However, as shown in Figure 3b,
KBSF is able to do it at least 100 times faster than the tree-based method.
[Figure 3 plots omitted in this extraction. Panel (a): fraction of seizures versus fraction of stimulation for LSPI, KBSF, T20 and T200 under the three penalty settings (subscripts -10, -20, -40), together with the fixed-frequency baselines 0 Hz, 0.5 Hz, 1 Hz and 1.5 Hz; the lengths of the rectangles' edges represent 99% confidence intervals. Panel (b): run times in seconds (log scale); confidence intervals do not show up in logarithmic scale.]
Figure 3: Results on the epilepsy-suppression problem averaged over 50 runs. The algorithms used $n = 500{,}000$ sample transitions to build the approximations. The decision policies were evaluated on episodes of $10^5$ transitions starting from a fixed set of 10 test states drawn uniformly at random.
6
Conclusions
We presented KBSF, a reinforcement-learning algorithm that emerges from the application of the
stochastic-factorization trick to KBRL. As discussed, our algorithm is simple, fast, has good theoretical guarantees, and always converges to a unique solution. Our empirical results show that KBSF
is able to learn very good decision policies with relatively low computational cost. It also has predictable behavior, generally improving its performance as the number of sample transitions or the
size of its approximation model increases. In the future, we intend to investigate more principled
strategies to select the representative states, based on the large body of literature available on kernel
methods. We also plan to extend KBSF to the on-line scenario, where the intermediate decision
policies generated during the learning process guide the collection of new sample transitions.
Acknowledgments
The authors would like to thank Keith Bush for making the epilepsy simulator available and also Yuri
Grinberg for helpful discussions regarding this work. Funding for this research was provided by the
National Institutes of Health (grant R21 DA019800) and the NSERC Discovery Grant program.
References
[1] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2-3):161–178, 2002.
[2] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average-cost problems. IEEE Transactions on Automatic Control, 47(10):1624–1636, 2002.
[3] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[4] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503–556, 2005.
[5] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
[6] N. Jong and P. Stone. Kernel-based models for reinforcement learning in continuous state spaces. In Proceedings of the International Conference on Machine Learning, Workshop on Kernel Machines and Reinforcement Learning, 2006.
[7] J. E. Cohen and U. G. Rothblum. Nonnegative ranks, decompositions and factorizations of nonnegative matrices. Linear Algebra and its Applications, 190:149–168, 1991.
[8] A. Cutler and L. Breiman. Archetypal analysis. Technometrics, 36(4):338–347, 1994.
[9] A. M. S. Barreto and M. D. Fragoso. Computing the stationary distribution of a finite Markov chain through stochastic factorization. SIAM Journal on Matrix Analysis and Applications. In press.
[10] J. Sorg and S. Singh. Transfer via soft homomorphisms. In Autonomous Agents & Multiagent Systems / Agent Theories, Architectures, and Languages, pages 741–748, 2009.
[11] S. A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20:1364–1377, 2009.
[12] W. Whitt. Approximations of dynamic programs, I. Mathematics of Operations Research, 3(3):231–243, 1978.
[13] B. Ravindran. An Algebraic Approach to Abstraction in Reinforcement Learning. PhD thesis, University of Massachusetts, Amherst, MA, 2004.
[14] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley and Sons, 1990.
[15] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems, volume 8, pages 1038–1044, 1996.
[16] F. J. Gomez. Robust Non-linear Control Through Neuroevolution. PhD thesis, The University of Texas at Austin, 2003.
[17] A. P. Wieland. Evolving neural network controllers for unstable systems. In Proceedings of the International Joint Conference on Neural Networks, volume 2, pages 667–673, 1991.
[18] F. Gomez, J. Schmidhuber, and R. Miikkulainen. Efficient non-linear control through neuroevolution. In Proceedings of the 17th European Conference on Machine Learning, pages 654–662, 2006.
[19] K. Bush, J. Pineau, and M. Avoli. Manifold embeddings for model-based reinforcement learning of neurostimulation policies. In Proceedings of the ICML/UAI/COLT Workshop on Abstraction in Reinforcement Learning, 2009.
[20] K. Bush and J. Pineau. Manifold embeddings for model-based reinforcement learning under partial observability. In Advances in Neural Information Processing Systems, volume 22, pages 189–197, 2009.
[21] K. Jerger and S. J. Schiff. Periodic pacing an in vitro epileptic focus. Journal of Neurophysiology, (2):876–879, 1995.
3,554 | 4,218 | Minimax Localization of Structural Information in
Large Noisy Matrices
Mladen Kolar†⋆
Sivaraman Balakrishnan†⋆
Alessandro Rinaldo‡
Aarti Singh†
† School of Computer Science and ‡ Department of Statistics, Carnegie Mellon University
Abstract
We consider the problem of identifying a sparse set of relevant columns and rows
in a large data matrix with highly corrupted entries. This problem of identifying groups from a collection of bipartite variables such as proteins and drugs,
biological species and gene sequences, malware and signatures, etc., is commonly
referred to as biclustering or co-clustering. Despite its great practical relevance,
and although several ad-hoc methods are available for biclustering, theoretical
analysis of the problem is largely non-existent. The problem we consider is also
closely related to structured multiple hypothesis testing, an area of statistics that
has recently witnessed a flurry of activity. We make the following contributions:
1. We prove lower bounds on the minimum signal strength needed for successful recovery of a bicluster as a function of the noise variance, size of the
matrix and bicluster of interest.
2. We show that a combinatorial procedure based on the scan statistic achieves
this optimal limit.
3. We characterize the SNR required by several computationally tractable procedures for biclustering including element-wise thresholding, column/row
average thresholding and a convex relaxation approach to sparse singular
vector decomposition.
1
Introduction
Biclustering is the problem of identifying a (typically) sparse set of relevant columns and rows
in a large, noisy data matrix. This problem along with the first algorithm to solve it were proposed by Hartigan [14] as a way to directly cluster data matrices to produce clusters with greater
interpretability. Biclustering routinely arises in several applications such as discovering groups of
proteins and drugs that interact with each other [19], learning phylogenetic relationships between
different species based on alignments of snippets of their gene sequences [30], identifying malware
that have similar signatures [7] and identifying groups of users with similar tastes for commercial
products [29]. In these applications, the data matrix is often indexed by (object, feature) pairs and
the goal is to identify clusters in this set of bipartite variables.
In standard clustering problems, the goal is only to identify meaningful groups of objects and the
methods typically use the entire feature vector to define a notion of similarity between the objects.
⋆ These authors contributed equally to this work
Biclustering can be thought of as high-dimensional clustering where only a subset of the features
are relevant for identifying similar objects, and the goal is to identify not only groups of objects
that are similar, but also which features are relevant to the clustering task. Consider, for instance
gene expression data where the objects correspond to genes, and the features correspond to their expression levels under a variety of experimental conditions. Our present understanding of biological
systems leads us to expect that subsets of genes will be co-expressed only under a small number
of experimental conditions. Although, pairs of genes are not expected to be similar under all experimental conditions it is critical to be able to discover local expression patterns, which can for
instance correspond to joint participation in a particular biological pathway or process. Thus, while
clustering aims to identify global structure in the data, biclustering take a more local approach by
jointly clustering both objects and features.
Prevalent techniques for finding biclusters are typically heuristic procedures with little or no theoretical underpinning. In order to study, understand and compare biclustering algorithms we consider
a simple theoretical model of biclustering [18, 17, 26]. This model is akin to the spiked covariance
model of [15] widely used in the study of PCA in high-dimensions.
We will focus on the following simple observation model for the matrix $A \in \mathbb{R}^{n_1 \times n_2}$:
$$A = \beta\, uv' + \Delta, \qquad (1)$$
where $\Delta = \{\Delta_{ij}\}_{i \in [n_1], j \in [n_2]}$ is a random matrix whose entries are i.i.d. $N(0, \sigma^2)$ with $\sigma > 0$ known, $u = \{u_i : i \in [n_1]\}$ and $v = \{v_i : i \in [n_2]\}$ are unknown deterministic unit vectors in $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$, respectively, and $\beta > 0$ is a constant. To simplify the presentation, we assume that $u \propto \{0, 1\}^{n_1}$ and $v \propto \{0, 1\}^{n_2}$. Let $K_1 = \{i : u_i \neq 0\}$ and $K_2 = \{i : v_i \neq 0\}$ be the sets indexing the non-zero components of the vectors $u$ and $v$, respectively. We assume that $u$ and $v$ are sparse, that is, $k_1 := |K_1| \ll n_1$ and $k_2 := |K_2| \ll n_2$. While the sets $(K_1, K_2)$ are unknown, we assume that their cardinalities are known. Notice that the magnitude of the signal for all the coordinates in the bicluster $K_1 \times K_2$ is $\beta/\sqrt{k_1 k_2}$. The parameter $\beta$ measures the strength of the signal, and is the key quantity we will be studying.

We focus on the case of a single bicluster that appears as an elevated sub-matrix of size $k_1 \times k_2$ with signal strength $\beta$ embedded in a large $n_1 \times n_2$ data matrix with entries corrupted by additive Gaussian noise with variance $\sigma^2$. Under this model, the biclustering problem is formulated as the problem of estimating the sets $K_1$ and $K_2$, based on a single noisy observation $A$ of the unknown signal matrix $\beta uv'$. Biclustering is most subtle when the matrix is large with several irrelevant variables, the entries are highly noisy, and the bicluster is small as defined by a sparse set of rows/columns. We provide a sharp characterization of tuples $(\beta, n_1, n_2, k_1, k_2, \sigma^2)$ under which it is possible to recover the bicluster, and we study several common methods and establish the regimes under which they succeed.
We establish minimax lower and upper bounds for the following class of models. Let
$$\Theta(\beta_0, k_1, k_2) := \{(\beta, K_1, K_2) : \beta \ge \beta_0,\ |K_1| = k_1,\ K_1 \subset [n_1],\ |K_2| = k_2,\ K_2 \subset [n_2]\} \qquad (2)$$
be a set of parameters. For a parameter $\theta \in \Theta$, let $\mathbb{P}_\theta$ denote the joint distribution of the entries of $A = \{a_{ij}\}_{i \in [n_1], j \in [n_2]}$, whose density with respect to the Lebesgue measure is
$$\prod_{ij} N\big(a_{ij};\ \beta (k_1 k_2)^{-1/2}\, \mathbb{1}\{i \in K_1, j \in K_2\},\ \sigma^2\big), \qquad (3)$$
where the notation $N(z; \mu, \sigma^2)$ denotes the density $p(z) \sim N(\mu, \sigma^2)$ of a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, and $\mathbb{1}$ denotes the indicator function.
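For concreteness, data from model (1)-(3) can be generated as follows; all sizes and parameter values below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sampling from the observation model (1)-(3); every entry of the bicluster
# K1 x K2 has mean beta / sqrt(k1 * k2), all other entries have mean 0.
rng = np.random.default_rng(5)
n1, n2, k1, k2 = 200, 300, 20, 30
beta, sigma = 25.0, 1.0
K1 = rng.choice(n1, size=k1, replace=False)
K2 = rng.choice(n2, size=k2, replace=False)
u = np.zeros(n1); u[K1] = 1.0 / np.sqrt(k1)   # unit vector supported on K1
v = np.zeros(n2); v[K2] = 1.0 / np.sqrt(k2)   # unit vector supported on K2
A = beta * np.outer(u, v) + sigma * rng.standard_normal((n1, n2))
```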
We derive a lower bound that identifies tuples $(\beta, n_1, n_2, k_1, k_2, \sigma^2)$ under which we can recover the true biclustering from a noisy high-dimensional matrix. We show that a combinatorial procedure based on the scan statistic achieves the minimax optimal limits; however, it is impractical as it requires enumerating all possible sub-matrices of a given size in a large matrix. We analyze the scalings (i.e., the relation between $\beta$ and $(n_1, n_2, k_1, k_2, \sigma^2)$) under which some computationally tractable procedures for biclustering, including element-wise thresholding, column/row average thresholding and sparse singular vector decomposition (SSVD), succeed with high probability.
We consider the detection of both small and large biclusters of weak activation, and show that at the
minimax scaling the problem is surprisingly subtle (e.g., even detecting big clusters is quite hard).
In Table 1, we describe our main findings and compare the scalings under which the various algorithms succeed.
Table 1:

Algorithm            | SNR scaling  | Bicluster size                                        | Analysis
Combinatorial        | Minimax      | Any                                                   | Theorem 2
Thresholding         | Weak         | Any                                                   | Theorem 3
Row/Column Averaging | Intermediate | k1 = Θ(n1^(1/2+δ)), k2 = Θ(n2^(1/2+δ)), δ ∈ (0, 1/2) | Theorem 4
Sparse SVD           | Weak         | Any                                                   | Theorem 5

where the scalings are:

1. Minimax: β ≍ σ max( √(k1 log(n1 − k1)), √(k2 log(n2 − k2)) )
2. Weak: β ≳ σ max( √(k1 k2 log(n1 − k1)), √(k1 k2 log(n2 − k2)) )
3. Intermediate (for large clusters): β ≳ σ max( √(k1 k2 log(n1 − k1)) / n2^δ, √(k1 k2 log(n2 − k2)) / n1^δ )
Element-wise thresholding does not take advantage of any structure in the data matrix and hence
does not achieve the minimax scaling for any bicluster size. If the clusters are big enough, row/column averaging performs better than element-wise thresholding since it can take advantage
of structure. We also study a convex relaxation for sparse SVD, based on the DSPCA algorithm proposed by [11] that encourages the singular vectors of the matrix to be supported over a sparse set of
variables. However, despite the increasing popularity of this method, we show that it is only guaranteed to yield a sparse set of singular vectors when the SNR is quite high, equivalent to element-wise
thresholding, and fails at weaker scalings of the SNR.
1.1 Related work
Due to its practical importance and difficulty, biclustering has attracted considerable attention (for some recent surveys see [9, 27, 20, 22]). Broadly, algorithms for biclustering can be categorized as
either score-based searches, or spectral algorithms. Many of the proposed algorithms for identifying
relevant clusters are based on heuristic searches whose goal is to identify large average sub-matrices
or sub-matrices that are well fit by a two-way ANOVA model. Sun et al. [26] provide some statistical backing for these exhaustive search procedures. In particular, they show how to construct a test via exhaustive search to distinguish when there is a small sub-matrix of weak activation from the "null" case when there is no bicluster.
The premise behind the spectral algorithms is that if there was a sub-matrix embedded in a large
matrix, then this sub-matrix could be identified from the left and right singular vectors of A. In the
case when exactly one of u and v is random, the model (1) can be related to the spiked covariance
model of [15]. In the case when v is random, the matrix A has independent columns and dependent
rows. Therefore, A'A is a spiked covariance matrix and it is possible to use the existing theoretical results on the first eigenvalue to characterize the left singular vector of A. A lot of recent work has dealt with estimation of sparse eigenvectors of A'A, see for example [32, 16, 24, 31, 2]. For biclustering applications, the assumption that exactly one of u and v is random is not justifiable; therefore, theoretical results for the spiked covariance model do not translate directly. Singular vectors of the
model (1) have been analyzed by [21], improving on earlier results of [6]. These results, however, are
asymptotic and do not consider the case when u and v are sparse.
Our setup for the biclustering problem also falls in the framework of structured normal means multiple hypothesis testing problems, where for each entry in the matrix the hypotheses are that the entry
has mean 0 versus an elevated mean. The presence of a bicluster (sub-matrix) however imposes
structure on which elements are elevated concurrently. Recently, several papers have investigated
the structured normal means setting for ordered domains. For example, [5] consider the detection of
elevated intervals and other parametric structures along an ordered line or grid, [4] consider detection of elevated connected paths in tree and lattice topologies, [3] considers nonparametric cluster
structures in a regular grid. In addition, [1] consider testing of different elevated structures in a general but known graph topology. Our setup for the biclustering problem requires identification of an
elevated submatrix in an unordered matrix. At a high level, all these results suggest that it is possible
to leverage the structure to improve the SNR threshold at which the hypothesis testing problem is
3
feasible. However, computationally efficient procedures that achieve the minimax SNR thresholds
are only known for a few of these problems. Our results for biclustering have a similar flavor, in
that the minimax threshold requires a combinatorial procedure whereas the computationally efficient
procedures we investigate are often sub-optimal.
The rest of this paper is organized as follows. In Section 2, we provide a lower bound on the
minimum signal strength needed for successfully identifying the bicluster. Section 3 presents a
combinatorial procedure which achieves the lower bound and hence is minimax optimal. We investigate some computationally efficient procedures in Section 4. Simulation results are presented in
Section 5 and we conclude in Section 6. All proofs are deferred to the Appendix.
2 Lower bound
In this section, we derive a lower bound for the problem of identifying the correct bicluster, indexed
by K1 and K2, in model (1). In particular, we derive conditions on (β, n1, n2, k1, k2, σ²) under which any method is going to make an error when estimating the correct cluster. Intuitively, if either the signal-to-noise ratio β/σ or the cluster size is small, the minimum signal strength needed will
be high since it is harder to distinguish the bicluster from the noise.
Theorem 1. Let ε ∈ (0, 1/8) and

β_min = β_min(n1, n2, k1, k2, σ)
      = σ √ε max( √(k1 log(n1 − k1)), √(k2 log(n2 − k2)), √( k1 k2 log((n1 − k1)(n2 − k2)) / (k1 + k2 − 1) ) ).   (4)

Then for all β0 ≤ β_min,

inf_ψ sup_{θ ∈ Θ(β0, k1, k2)} P_θ[ ψ(A) ≠ (K1(θ), K2(θ)) ] ≥ (√M / (1 + √M)) (1 − 2ε − √(2ε / log M)) → 1 − 2ε as n1, n2 → ∞,   (5)

where M = min(n1 − k1, n2 − k2), Θ(β0, k1, k2) is given in (2) and the infimum is over all measurable maps ψ : R^(n1×n2) → 2^[n1] × 2^[n2].
The result can be interpreted in the following way: for any biclustering procedure ψ, if β0 ≤ β_min, then there exists some element in the model class Θ(β0, k1, k2) such that the probability of incorrectly identifying the sets K1 and K2 is bounded away from zero.
The proof is based on a standard technique described in Chapter 2.6 of [28]. We start by identifying
a subset of parameter tuples that are hard to distinguish. Once a suitable finite set is identified, tools
for establishing lower bounds on the error in multiple-hypothesis testing can be directly applied.
These tools only require computing the Kullback-Leibler (KL) divergence between two distributions P_θ1 and P_θ2, which in the case of model (1) are two multivariate normal distributions. These
constructions and calculations are described in detail in the Appendix.
3 Minimax optimal combinatorial procedure
We now investigate a combinatorial procedure achieving the lower bound of Theorem 1, in the sense
that, for any θ ∈ Θ(β_min, k1, k2), the probability of recovering the true bicluster (K1, K2) tends to
one as n1 and n2 grow unbounded. This scan procedure consists in enumerating all possible pairs
of subsets of the row and column indexes of size k1 and k2 , respectively, and choosing the one
whose corresponding submatrix has the largest overall sum. In detail, for an observed matrix A and two candidate subsets K̂1 ⊂ [n1] and K̂2 ⊂ [n2], we define the associated score

S(K̂1, K̂2) := Σ_{i ∈ K̂1} Σ_{j ∈ K̂2} a_ij.

The estimated bicluster is the pair of subsets of sizes k1 and k2 achieving the highest score:

ψ(A) := argmax_{(K̂1, K̂2)} S(K̂1, K̂2)   subject to |K̂1| = k1, |K̂2| = k2.   (6)

The following theorem determines the signal strength β needed for the decoder ψ to find the true bicluster.
Theorem 2. Let A ~ P_θ with θ ∈ Θ(β, k1, k2) and assume that k1 ≤ n1/2 and k2 ≤ n2/2. If

β ≥ 4σ max( √(k1 log(n1 − k1)), √(k2 log(n2 − k2)), √( 2 k1 k2 log((n1 − k1)(n2 − k2)) / (k1 + k2) ) )   (7)

then P[ψ(A) ≠ (K1, K2)] ≤ 2[(n1 − k1)^(−1) + (n2 − k2)^(−1)], where ψ is the decoder defined in (6).
Comparing to the lower bound in Theorem 1, we observe that the combinatorial procedure using the
decoder ψ that looks for all possible clusters and chooses the one with the largest score achieves the lower bound up to constants. Unfortunately, this procedure is not practical for data sets commonly encountered in practice, as it requires enumerating all (n1 choose k1)·(n2 choose k2) possible sub-matrices of size k1 × k2. The combinatorial procedure requires the signal to be positive, but not necessarily constant
throughout the bicluster. In fact it is easy to see that provided the average signal in the bicluster is
larger than that stipulated by the theorem this procedure succeeds with high probability irrespective
of how the signal is distributed across the bicluster. Finally, we remark that the estimation of the
cluster is done under the assumption that k1 and k2 are known. Establishing minimax lower bounds
and a procedure that adapts to unknown k1 and k2 is an open problem.
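For intuition only, here is a direct rendering of the scan decoder (6) — our own sketch, exponential in k1 and therefore usable only on toy matrices:

```python
import numpy as np
from itertools import combinations

def scan_decoder(A, k1, k2):
    """Enumerate candidate row sets and return the support maximizing the score S in (6)."""
    n1, n2 = A.shape
    best_score, best_pair = -np.inf, None
    for rows in combinations(range(n1), k1):
        col_totals = A[list(rows), :].sum(axis=0)        # column sums restricted to these rows
        cols = np.argsort(col_totals)[::-1][:k2]         # best k2 columns for this row set
        score = col_totals[cols].sum()
        if score > best_score:
            best_score, best_pair = score, (set(rows), set(cols.tolist()))
    return best_pair
```

Picking the top-k2 columns for each candidate row set is equivalent to the full double enumeration, but the row enumeration alone is already (n1 choose k1) and hence impractical, as noted above.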
4 Computationally efficient biclustering procedures
In this section we investigate the performance of various procedures for biclustering, that, unlike the
optimal scan statistic procedure studied in the previous section, are computationally tractable. For
each of these procedures however, computational ease comes at the cost of suboptimal performance:
recovery of the true bicluster is only possible if the signal strength β is much larger than the minimax signal strength
of Theorem 1.
4.1 Element-wise thresholding
The simplest procedure that we analyze is based on element-wise thresholding. The bicluster is estimated as

ψ_thr(A, τ) := {(i, j) ∈ [n1] × [n2] : |a_ij| ≥ τ}   (8)

where τ > 0 is a parameter. The following theorem characterizes the signal strength β required for element-wise thresholding to succeed in recovering the bicluster.
Theorem 3. Let A ~ P_θ with θ ∈ Θ(β, k1, k2) and fix δ > 0. Set the threshold τ as

τ = σ √( 2 log( ((n1 − k1)(n2 − k2) + k1(n2 − k2) + k2(n1 − k1)) / δ ) ).

If

β ≥ √(k1 k2) σ ( √(2 log(k1 k2 / δ)) + √( 2 log( ((n1 − k1)(n2 − k2) + k1(n2 − k2) + k2(n1 − k1)) / δ ) ) )

then P[ψ_thr(A, τ) ≠ K1 × K2] = o(δ/(k1 k2)).
Comparing Theorem 3 with the lower bound in Theorem 1, we observe that the signal strength β needs to be O(max(√k1, √k2)) larger than the lower bound. This is not surprising, since element-wise thresholding is not exploiting the structure of the problem, but is assuming that the large elements of the matrix A are positioned randomly. From the proof it is not hard to see that this upper bound is tight up to constants, i.e. if β ≤ c √(k1 k2) σ ( √(2 log(k1 k2 / δ)) + √( 2 log( ((n1 − k1)(n2 − k2) + k1(n2 − k2) + k2(n1 − k1)) / δ ) ) ) for a small enough constant c, then thresholding will no longer recover the bicluster with probability at least 1 − δ. It is also worth noting that thresholding requires the signal in the bicluster to be neither constant nor positive, provided it is larger in magnitude, at every entry, than the threshold specified in the theorem.
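A minimal sketch of the thresholding estimator (8) follows (our own code; the default threshold implements our reading of the garbled statement of Theorem 3 and should be treated as an assumption):

```python
import numpy as np

def threshold_decoder(A, tau):
    """Element-wise thresholding (8): return all (i, j) with |a_ij| >= tau."""
    idx = np.where(np.abs(A) >= tau)
    return set(zip(idx[0].tolist(), idx[1].tolist()))

def theorem3_tau(n1, n2, k1, k2, sigma, delta=0.1):
    """Threshold suggested by Theorem 3, as reconstructed above."""
    m = (n1 - k1) * (n2 - k2) + k1 * (n2 - k2) + k2 * (n1 - k1)
    return sigma * np.sqrt(2.0 * np.log(m / delta))
```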
4.2 Row/Column averaging
Next, we analyze another procedure, based on column and row averaging. When the bicluster is large this procedure exploits the structure of the problem and outperforms the simple element-wise thresholding and the sparse SVD, which is discussed in the following section. The averaging procedure works well only if the bicluster is "large", as specified below, since otherwise the row or column average is dominated by the noise.
More precisely, the averaging procedure computes the average of each row and column of A and outputs the k1 rows and k2 columns with the largest averages. Let {r_(r,i)}_{i∈[n1]} and {r_(c,j)}_{j∈[n2]} denote the positions of the rows and columns when they are ordered according to row and column averages in descending order. The bicluster is then estimated as

ψ_avg(A) := {i ∈ [n1] : r_(r,i) ≤ k1} × {j ∈ [n2] : r_(c,j) ≤ k2}.   (9)
The following theorem characterizes the signal strength β required for the averaging procedure to succeed in recovering the bicluster.

Theorem 4. Let A ~ P_θ with θ ∈ Θ(β, k1, k2). If k1 = Θ(n1^(1/2+δ)) and k2 = Θ(n2^(1/2+δ)), where δ ∈ (0, 1/2) is a constant, and

β ≥ 4σ max( √(k1 k2 log(n1 − k1)) / n2^δ , √(k1 k2 log(n2 − k2)) / n1^δ )

then P[ψ_avg(A) ≠ (K1, K2)] ≤ n1^(−1) + n2^(−1).
Comparing to Theorem 3, we observe that averaging requires a lower signal strength than element-wise thresholding when the bicluster is large, that is, k1 = Ω(√n1) and k2 = Ω(√n2). Unless both k1 = Θ(n1) and k2 = Θ(n2), the procedure does not achieve the lower bound of Theorem 1; however, the procedure is simple and computationally efficient. It is also not hard to show that this theorem is sharp in its characterization of the averaging procedure. Further, unlike thresholding, averaging requires the signal to be positive in the bicluster.
It is interesting to note that a large bicluster can also be identified without assuming the normality of the noise matrix Γ. This non-parametric extension is based on a simple sign-test, and the details are provided in the Appendix.
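A sketch of the averaging estimator (9), in our own code:

```python
import numpy as np

def averaging_decoder(A, k1, k2):
    """Row/column averaging (9): keep the k1 rows and k2 columns with the largest means."""
    top_rows = np.argsort(A.mean(axis=1))[::-1][:k1]
    top_cols = np.argsort(A.mean(axis=0))[::-1][:k2]
    return set(top_rows.tolist()), set(top_cols.tolist())
```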
4.3 Sparse singular value decomposition (SSVD)
An alternate way to estimate K1 and K2 would be based on the singular value decomposition (SVD), i.e. finding û and v̂ that maximize ⟨û, Av̂⟩, and then thresholding the elements of û and v̂. Unfortunately, such a method would perform poorly when the signal is weak and the dimensionality is high, since, due to the accumulation of noise, û and v̂ are poor estimates of u and v and do not exploit the fact that u and v are sparse.

In fact, it is now well understood [8] that SVD is strongly inconsistent when the signal strength is weak, i.e. ∠(û, u) → π/2 (and similarly for v) almost surely. See [26] for a clear exposition and discussion of this inconsistency in the SVD setting.
To properly exploit the sparsity in the singular vectors, it seems natural to impose a cardinality constraint to obtain a sparse singular vector decomposition (SSVD):

max_{u ∈ S^(n1−1), v ∈ S^(n2−1)} ⟨u, Av⟩   subject to ||u||_0 ≤ k1, ||v||_0 ≤ k2,

which can be further rewritten as

max_{Z ∈ R^(n2×n1)} tr(AZ)   subject to Z = vu', ||u||_2 = 1, ||v||_2 = 1, ||u||_0 ≤ k1, ||v||_0 ≤ k2.   (10)
The above problem is non-convex and computationally intractable.
Inspired by the convex relaxation methods for sparse principal component analysis proposed by [11], we consider the following relaxation of the SSVD:

max_{X ∈ R^((n1+n2)×(n1+n2))} tr(AX_21) − λ 1'|X_21|1   subject to X ⪰ 0, tr(X_11) = 1, tr(X_22) = 1,   (11)

where X is the block matrix

X = [ X_11  X_12 ; X_21  X_22 ],

with the block X_21 corresponding to Z in (10). If the optimal solution X̂ is of rank 1 then, necessarily, X̂ = (û; v̂)(û; v̂)'. Based on the sparse singular vectors û and v̂, we estimate the bicluster as

K̂1 = {j ∈ [n1] : û_j ≠ 0}   and   K̂2 = {j ∈ [n2] : v̂_j ≠ 0}.   (12)

The user-defined parameter λ controls the sparsity of the solution X̂_21 and, therefore, provided the solution is of rank one, it also controls the sparsity of the vectors û and v̂ and of the estimated bicluster.
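Solving the semidefinite program (11) requires an SDP solver (the experiments in Section 5 use the DSPCA implementation of [11]). As a self-contained point of comparison, here is a sketch (our own code) of the plain SVD-plus-thresholding baseline that the beginning of this section argues is inconsistent in the weak-signal regime:

```python
import numpy as np

def svd_threshold_decoder(A, k1, k2):
    """Naive baseline: leading singular vectors of A, keeping the k largest entries
    in magnitude. Sparsity is ignored during estimation, which is why this degrades
    when the signal is weak and the dimension is high."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    K1_hat = set(np.argsort(np.abs(U[:, 0]))[::-1][:k1].tolist())
    K2_hat = set(np.argsort(np.abs(Vt[0, :]))[::-1][:k2].tolist())
    return K1_hat, K2_hat
```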
The following theorem provides sufficient conditions for the solution X̂ to be rank one and to recover the bicluster.

Theorem 5. Consider the model in (1). Assume k1 ≤ k2, k1 ≤ n1/2 and k2 ≤ n2/2. If

β ≥ 2σ √( k1 k2 log((n1 − k1)(n2 − k2)) )   (13)

then the solution X̂ of the optimization problem in (11) with λ = β / (2√(k1 k2)) is of rank 1 with probability 1 − O(k1^(−1)). Furthermore, we have that (K̂1, K̂2) = (K1, K2) with probability 1 − O(k1^(−1)).

It is worth noting that SSVD correctly recovers the signed vectors û and v̂ under this signal strength. In particular, the procedure works even if the u and v in Equation 1 are signed.
The following theorem establishes necessary conditions for the SSVD to have a rank 1 solution that correctly identifies the bicluster.

Theorem 6. Consider the model in (1). Fix c ∈ (0, 1/2). Assume that k1 ≤ k2, k1 = o(n1^(1/2−c)) and k2 = o(n2^(1/2−c)). If

β ≤ σ √( c k1 k2 log max(n1 − k1, n2 − k2) ),   (14)

with λ = β / (2√(k1 k2)), then the optimization problem (11) does not have a rank 1 solution that correctly recovers the sparsity pattern with probability at least 1 − O(exp(−(√k1 + √k2)²)) for sufficiently large n1 and n2.
From Theorem 6 we observe that the sufficient conditions of Theorem 5 are sharp. In particular, the two theorems establish that the SSVD does not achieve the lower bound given in Theorem 1. The signal strength needs to be of the same order as for element-wise thresholding, which is somewhat surprising since, from the formulation of the SSVD optimization problem, it seems that the procedure uses the structure of the problem. From the numerical simulations in Section 5 we observe that although SSVD requires the same scaling as thresholding, it consistently performs slightly better at a fixed signal strength.
5 Simulation results
We test the performance of the three computationally efficient procedures on synthetic data: thresholding, averaging and sparse SVD. For sparse SVD we use an implementation posted online by [11]. We generate data from (1) with n = n1 = n2, k = k1 = k2, σ² = 1 and u = v ∝ (1'_k, 0'_(n−k))'. For each algorithm we plot the Hamming fraction (i.e. the Hamming distance between the estimated and the true bicluster, rescaled to be between 0 and 1) against the rescaled signal strength. In each case we average the results over 50 runs.

For thresholding and sparse SVD the rescaled scaling (x-axis) is β / (k √(log(n − k))), and for averaging the rescaled scaling (x-axis) is β n^δ / (k √(log(n − k))).
We observe that there is a sharp threshold between success
and failure of the algorithms, and the curves show good agreement with our theory.
The vertical line shows the point after which successful recovery happens for all values of n. We can
make a direct comparison between thresholding and sparse SVD (since the curves are identically
rescaled) to see that at least empirically sparse SVD succeeds at a smaller scaling constant than
thresholding even though their asymptotic rates are identical.
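For completeness, a sketch (our own code) of the error metric plotted in Figures 1–3, given an estimated and a true bicluster support:

```python
import numpy as np

def hamming_fraction(est, truth, n1, n2):
    """Hamming distance between the estimated and true bicluster indicator matrices,
    rescaled to lie in [0, 1]."""
    est_rows, est_cols = est
    true_rows, true_cols = truth
    est_mask = np.zeros((n1, n2), dtype=bool)
    true_mask = np.zeros((n1, n2), dtype=bool)
    est_mask[np.ix_(sorted(est_rows), sorted(est_cols))] = True
    true_mask[np.ix_(sorted(true_rows), sorted(true_cols))] = True
    return float((est_mask ^ true_mask).mean())
```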
[Figure 1: Thresholding — Hamming fraction versus rescaled signal strength, for k = log(n), k = n^(1/3) and k = 0.2n, with n ∈ {100, 200, 300, 400, 500}.]
[Figure 2: Averaging — Hamming fraction versus rescaled signal strength, for k = n^(1/2+δ) (δ = 0.1) and k = 0.2n, with n ∈ {100, 200, 300, 400, 500}.]
[Figure 3: Sparse SVD — Hamming fraction versus rescaled signal strength, for k = log(n), k = n^(1/3) and k = 0.2n, with n ∈ {100, 200, 300, 400, 500}.]
6 Discussion
In this paper, we analyze biclustering using a simple statistical model (1), where a sparse rank one
matrix is perturbed with noise. Using this model, we have characterized the minimal signal strength
below which no procedure can succeed in recovering the bicluster. This lower bound can be matched
using an exhaustive search technique. However, it is still an open problem to find a computationally
efficient procedure that is minimax optimal.
Amini et al. [2] analyze the convex relaxation procedure proposed in [11] for high-dimensional
sparse PCA. Under the minimax scaling for this problem they show that provided a rank-1 solution
exists it has the desired sparsity pattern (they were however not able to show that a rank-1 solution
exists with high probability). Somewhat surprisingly, we show that in the SVD case a rank-1 solution
with the desired sparsity pattern does not exist with high probability. The two settings however are
not identical since the noise in the spiked covariance model is Wishart rather than Gaussian, and
has correlated entries. It would be interesting to analyze whether our negative result has similar
implications for the sparse PCA setting.
The focus of our paper has been on a model with one cluster, which although simple, provides
several interesting theoretical insights. In practice, data often contains multiple clusters which need
to be estimated. Many existing algorithms (see e.g. [17] and [18]) try to estimate multiple clusters
and it would be useful to analyze these theoretically.
Furthermore, the algorithms that we have analyzed assume knowledge of the size of the cluster,
which is used to select the tuning parameters. It is a challenging problem of great practical relevance
to find data driven methods to select these tuning parameters.
7 Acknowledgments
We would like to thank Arash Amini and Martin Wainwright for fruitful discussions, and Larry
Wasserman for his ideas, indispensable advice and wise guidance. This research is supported in
part by AFOSR under grant FA9550-10-1-0382 and NSF under grant IIS-1116458. SB would also
like to thank Jaime Carbonell and Srivatsan Narayanan for several valuable comments and thoughtprovoking discussions.
References
[1] Louigi Addario-Berry, Nicolas Broutin, Luc Devroye, and Gábor Lugosi. On combinatorial testing problems. Ann. Statist., 38(5):3063–3092, 2010.
[2] A. A. Amini and M. J. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5B):2877–2921, 2009.
[3] Ery Arias-Castro, Emmanuel J. Candès, and Arnaud Durand. Detection of an anomalous cluster in a network. Ann. Stat., 39(1):278–304, 2011.
[4] Ery Arias-Castro, Emmanuel J. Candès, Hannes Helgason, and Ofer Zeitouni. Searching for a trail of evidence in a maze. Ann. Statist., 36(4):1726–1757, 2008.
[5] Ery Arias-Castro, David L. Donoho, and Xiaoming Huo. Adaptive multiscale detection of filamentary structures in a background of uniform random points. Ann. Statist., 34(1):326–349, 2006.
[6] Jushan Bai. Inferential theory for factor models of large dimensions. Econometrica, 71(1):135–171, 2003.
[7] Ulrich Bayer, Paolo Milani Comparetti, Clemens Hlauscheck, Christopher Kruegel, and Engin Kirda. Scalable, behavior-based malware clustering. In 16th Symposium on Network and Distributed System Security (NDSS), 2009.
[8] F. Benaych-Georges and R. Rao Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. ArXiv e-prints, March 2011.
[9] S. Busygin, O. Prokopyev, and P. M. Pardalos. Biclustering in data mining. Computers & Operations Research, 35(9):2964–2987, 2008.
[10] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009.
[11] Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49:434–448, 2007.
[12] K. R. Davidson and S. J. Szarek. Local operator theory, random matrices and Banach spaces. Handbook of the Geometry of Banach Spaces, 1:317–366, 2001.
[13] R. Fletcher. Semi-definite matrix constraints in optimization. SIAM Journal on Control and Optimization, 23:493, 1985.
[14] J. A. Hartigan. Direct clustering of a data matrix. Journal of the American Statistical Association, 67(337):123–129, 1972.
[15] I. M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. The Annals of Statistics, 29(2):295–327, 2001.
[16] I. M. Johnstone and A. Y. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486):682–693, 2009.
[17] L. Lazzeroni and A. Owen. Plaid models for gene expression data. Statistica Sinica, 12:61–86, 2002.
[18] Mihee Lee, Haipeng Shen, Jianhua Z. Huang, and J. S. Marron. Biclustering via sparse singular value decomposition. Biometrics, 66(4):1087–1095, 2010.
[19] Jinze Liu and Wei Wang. OP-cluster: Clustering by tendency in high dimensional space. In Proceedings of the Third IEEE International Conference on Data Mining (ICDM '03), pages 187–, Washington, DC, USA, 2003. IEEE Computer Society.
[20] S. C. Madeira and A. L. Oliveira. Biclustering algorithms for biological data analysis: a survey. IEEE Transactions on Computational Biology and Bioinformatics, pages 24–45, 2004.
[21] A. Onatski. Asymptotics of the principal components estimator of large factor models with weak factors. Economics Department, Columbia University, 2009.
[22] L. Parsons, E. Haque, and H. Liu. Subspace clustering for high dimensional data: a review. ACM SIGKDD Explorations Newsletter, 6(1):90–105, 2004.
[23] R. T. Rockafellar. The Theory of Subgradients and its Applications to Problems of Optimization: Convex and Nonconvex Functions. Heldermann, 1981.
[24] H. Shen and J. Z. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99(6):1015–1034, 2008.
[25] G. W. Stewart. Perturbation theory for the singular value decomposition. Computer Science Technical Report Series, Vol. CS-TR-2539, page 13, 1990.
[26] X. Sun and A. B. Nobel. On the maximal size of large-average and ANOVA-fit submatrices in a Gaussian random matrix. ArXiv e-prints, September 2010.
[27] A. Tanay, R. Sharan, and R. Shamir. Biclustering algorithms: A survey. Handbook of Computational Molecular Biology, 2004.
[28] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[29] Lyle Ungar and Dean P. Foster. A formal statistical approach to collaborative filtering. In CONALD, 1998.
[30] S. Wang, R. R. Gutell, and D. P. Miranker. Biclustering as a method for RNA local multiple sequence alignment. Bioinformatics, 23:3289–3296, Dec 2007.
[31] D. M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10(3):515, 2009.
[32] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.
3,555 | 4,219 | Automated Refinement of Bayes Networks?
Parameters based on Test Ordering Constraints
Omar Zia Khan & Pascal Poupart
David R. Cheriton School of Computer Science
University of Waterloo
Waterloo, ON Canada
{ozkhan,ppoupart}@cs.uwaterloo.ca
John Mark Agosta*
Intel Labs
Santa Clara, CA, USA
[email protected]
Abstract
In this paper, we derive a method to refine a Bayes network diagnostic model by
exploiting constraints implied by expert decisions on test ordering. At each step,
the expert executes an evidence-gathering test, which suggests the test's relative diagnostic value. We demonstrate that consistency with an expert's test selection
leads to non-convex constraints on the model parameters. We incorporate these
constraints by augmenting the network with nodes that represent the constraint
likelihoods. Gibbs sampling, stochastic hill climbing and greedy search algorithms are proposed to find a MAP estimate that takes into account test ordering
constraints and any data available. We demonstrate our approach on diagnostic
sessions from a manufacturing scenario.
1 INTRODUCTION
The problem of learning-by-example holds the promise of creating strong models from a restricted number of cases; certainly humans show the ability to generalize from limited experience. Machine
Learning has seen numerous approaches to learning task performance by imitation, going back to
some of the approaches to inductive learning from examples [14]. Of particular interest are problemsolving tasks that use a model to infer the source, or cause of a problem from a sequence of investigatory steps or tests. The specific example we adopt is a diagnostic task such as appears in medicine,
electro-mechanical fault isolation, customer support and network diagnostics, among others.
We define a diagnostic sequence as consisting of the assignment of values to a subset of tests. The
diagnostic process embodies the choice of the best next test to execute at each step in the sequence,
by measuring the diagnostic value among the set of available tests at each step, that is, the ability of
a test to distinguish among the possible causes. One possible implementation with which to carry
out this process, the one we apply, is a Bayes network [9]. As with all model-based approaches,
provisioning an adequate model can be daunting, resulting in a "knowledge elicitation bottleneck."
A recent approach for easing the bottleneck grew out of the realization that the best time to gain an
expert's insight into the model structure is during the diagnostic process. Recent work in "Query-Based Diagnostics" [1] demonstrated a way to improve model quality by merging model use and
model building into a single process. More precisely the expert can take steps to modify the network
structure to add or remove nodes or links, interspersed within the diagnostic sequence. In this paper
we show how to extend this variety of learning-by-example to include also refinement of model
parameters based on the expert's choice of test, from which we determine constraints. The nature
of these constraints, as shown herein, is derived from the value of the tests to distinguish causes, a
value referred to informally as value of information [10]. It is the effect of these novel constraints
on network parameter learning that is elucidated in this paper.
* J. M. Agosta is no longer affiliated with Intel Corporation.
Conventional statistical learning approaches are not suited to this problem, since the number of cases
available from diagnostic sessions is small, and the data from any case is sparse. (Only a fraction of
the tests are taken.) But more relevant is that one diagnostic sequence from an expert user represents
the true behavior expected of the model, rather than a noisy realization of a case generated by the
true model. We adopt a Bayesian approach, which offers a principled way to incorporate knowledge
(constraints and data, when available) and also consider weakening the constraints, by applying a
likelihood to them, so that possibly conflicting constraints can be incorporated consistently.
Sec. 2 reviews related work and Sec. 3 provides some background on diagnostic networks and model
consistency. Then, Sec. 4 describes an augmented Bayesian network that incorporates constraints
implied by an expert?s choice of tests. Some sampling techniques are proposed to find the Maximum
a posterior setting of the parameters given the constraints (and any data available). The approach is
evaluated in Sec. 5 on synthetic data and a real world manufacturing diagnostic scenario. Finally,
Sec. 6 discusses some future work.
2 RELATED WORK
Parameter learning for Bayesian networks can be viewed as searching in a high-dimensional space.
Adopting constraints on the parameters based on some domain knowledge is a way of pruning this
search space and learning the parameters more efficiently, both in terms of data needed and time
required. Qualitative probabilistic networks [17] allow qualitative constraints on the parameter space
to be specified by experts. For instance, the influence of one variable on another, or the combined
influence of multiple variables on another variable [5] leads to linear inequalities on the parameters.
Wittig and Jameson [18] explain how to transform the likelihood of violating qualitative constraints
into a penalty term to adjust maximum likelihood, which allows gradient ascent and Expectation
Maximization (EM) to take into account linear qualitative constraints.
Other examples of qualitative constraints include some parameters being larger than others, bounded
in a range, within ε of each other, etc. Various proposals have been made that exploit such constraints. Altendorf et al. [2] provide an approximate technique based on constrained convex optimization for parameter learning. Niculescu et al. [15] also provide a technique based on constrained
optimization with closed form solutions for different classes of constraints. Feelders [6] provides an
alternate method based on isotonic regression while Liao and Ji [12] combine gradient descent with
EM. de Campos and Ji [4] also use constrained convex optimization, however, they use Dirichlet
priors on the parameters to incorporate any additional knowledge. Mao and Lebanon [13] also use
Dirichlet priors, but they use probabilistic constraints to allow inaccuracies in the specification of
the constraints.
A major difference between our technique and previous work is on the type of constraints. Our
constraints do not need to be explicitly specified by an expert. Instead, we passively observe the
expert and learn from what choices are made and not made [16]. Furthermore, as we shall show
later, our constraints are non-convex, preventing the direct application of existing techniques that
assume linear or convex functions. We use Beta priors on the parameters, which can easily be extended to Dirichlet priors like previous work. We incorporate constraints in an augmented Bayesian
network, similar to Liang et al. [11], though their constraints are on model predictions as opposed
to ours which are on the parameters of the network. Finally, we also use the notion of probabilistic
constraints to handle potential mistakes made by experts.
3 BACKGROUND
3.1 DIAGNOSTIC BAYES NETWORKS
We consider the class of bipartite Bayes networks that are widely used as diagnostic models, though
our approach can be used for networks with any structure. The network forms a sparse, directed,
causal graph, where arcs go from causes to observable node variables. We use upper case to denote
random variables; C for causes, and T for observables (tests). Lower case letters denote values in
the domain of a variable, e.g. c ? dom(C) = {c, c?}, and bold letters denote sets of variables. A
set of marginally independent binary-valued node variables C with distributions Pr(C) represent
unobserved causes, and condition the remaining conditionally independent binary-valued test vari2
able nodes T. Each cause conditions one or more tests; likewise each test is conditioned by one or
more causes, resulting in a graph with one or more possibly multiply-connected components. The
test variable distributions Pr(T |C) incorporate the further modeling assumption of Independence of
Causal Influence, the most familiar example being the Noisy-Or model [8]. To keep the exposition
simple, we assume that all variables are binary and that conditional distributions are parametrized by
the Noisy-Or; however, the algorithms described in the rest of the paper generalize to any discrete
non-binary variable models.
Conventionally, unobserved tests are ranked in a diagnostic Bayes network by their Value Of Information (VOI) conditioned on tests already observed. To be precise, VOI is the expected gain in
utility if the test were to be observed. The complete computation requires a model equivalent to a
partially observable Markov decision process. Instead, VOI is commonly approximated by a greedy
computation of the Mutual Information between a test and the set of causes [3]. In this case, it
is easy to show that Mutual Information is in turn well approximated to second order by the Gini
impurity [7] as shown in Equation 1.
GI(C|T) = Σ_t Pr(T = t) [ Σ_c Pr(C = c | T = t) (1 − Pr(C = c | T = t)) ]   (1)
We will use the Gini measure as a surrogate for VOI, as a way to rank the best next test in the
diagnostic sequence.
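As a concrete illustration (our own code, not the authors'), the Gini impurity of Equation 1 can be computed from the joint distribution over cause configurations and a test's outcomes:

```python
import numpy as np

def gini_impurity(joint):
    """GI(C|T) = sum_t Pr(t) * sum_c Pr(c|t) (1 - Pr(c|t)), where joint[c, t] = Pr(C=c, T=t)."""
    gi = 0.0
    p_t = joint.sum(axis=0)                       # marginal Pr(T = t)
    for t in range(joint.shape[1]):
        if p_t[t] > 0.0:
            p_c_given_t = joint[:, t] / p_t[t]    # posterior Pr(C = c | T = t)
            gi += p_t[t] * float(np.sum(p_c_given_t * (1.0 - p_c_given_t)))
    return gi
```

The unobserved test with the smallest GI(C|T) is then ranked as the best next test.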
3.2 MODEL CONSISTENCY
A model that is consistent with an expert would generate Gini impurity rankings consistent with
the expert's diagnostic sequence. We interpret the expert's test choices as implying constraints on Gini impurity rankings between tests. To that effect, [1] defines the notions of Cause Consistency and Test Consistency, which indicate whether the cause and test orderings induced by the posterior distribution over causes and the VOI of each test agree with an expert's observed choice. Assuming that the expert greedily chooses the most informative test T* (i.e., the test that yields the lowest Gini impurity) at each step, then the model is consistent with the expert's choices when the following constraints are satisfied:
GI(C|T*) ≤ GI(C|Ti)   ∀i   (2)
We demonstrate next how to exploit these constraints to refine the Bayes network.
4 MODEL REFINEMENT
Consider a simple diagnosis example with two possible causes C1 and C2 and two tests T1 and T2 as
shown in Figure 1. To keep the exposition simple, suppose that the priors for each cause are known
(generally separate data is available to estimate these), but the conditional distribution of each test
is unknown. Using the Noisy-OR parameterizations for the conditional distributions, the number of
parameters is linear in the number of parents instead of exponential.
Pr(Ti = true | C) = 1 − (1 − θ0i) ∏_{j : Cj = true} (1 − θji)   (3)
Here, θ0i = Pr(Ti = true | Cj = false ∀j) is the leak probability that Ti will be true when none of the causes are true, and θji = Pr(Ti = true | Cj = true, Ck = false ∀k ≠ j) is the link reliability, which indicates the independent contribution of cause Cj to the probability that test Ti will be true.
In the rest of this section, we describe how to learn the θ parameters while respecting the constraints
implied by test consistency.
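A minimal sketch of the Noisy-OR conditional in Eq. 3 (our own code; θ0i is the leak and the θji are the link reliabilities):

```python
def noisy_or(cause_values, leak, links):
    """Pr(T_i = true | C) = 1 - (1 - theta_0i) * prod over true causes of (1 - theta_ji)."""
    p_all_fail = 1.0 - leak
    for c_j, theta_ji in zip(cause_values, links):
        if c_j:
            p_all_fail *= (1.0 - theta_ji)
    return 1.0 - p_all_fail

# Example: leak 0.01, two causes with link reliabilities 0.8 and 0.7, first cause active:
# noisy_or([True, False], 0.01, [0.8, 0.7]) == 1 - 0.99 * 0.2 == 0.802
```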
4.1 TEST CONSISTENCY CONSTRAINTS
Suppose that an expert chooses test T1 instead of test T2 during the diagnostic process. This ordering
by the expert implies that the current model (parametrized by the θ's) must be consistent with the constraint GI(C|T2) − GI(C|T1) ≥ 0. Using the definition of Gini impurity in Eq. 1, we can rewrite
3
Figure 1: Network with 2
causes and 2 tests
Figure 2: Augmented network with parameters and
constraints
Figure 3: Augmented network extended to handle inaccurate feedback
the constraint for the network shown in Fig. 1 as follows:
Σ_{t2} ( Pr(t2) − Σ_{c1,c2} (Pr(t2 | c1, c2) Pr(c1) Pr(c2))² / Pr(t2) )
  − Σ_{t1} ( Pr(t1) − Σ_{c1,c2} (Pr(t1 | c1, c2) Pr(c1) Pr(c2))² / Pr(t1) ) ≥ 0   (4)
Furthermore, using the Noisy-OR encoding from Eq. 3, we can rewrite the constraint as a polynomial in the θ's. This polynomial is non-linear and, in general, not concave. The feasible space may consist of disconnected regions. Fig. 4 shows the surface corresponding to the polynomial for the case where θ0i = 0 and θ1i = 0.5 for each test i, which leaves θ21 and θ22 as the only free variables. The parameters' feasible space, satisfying the constraint, consists of the two disconnected regions where the surface is positive.
4.2 AUGMENTED BAYES NETWORK
Our objective is to learn the θ parameters of diagnostic Bayes networks given test constraints of the form described in Eq. 4. To deal with non-convex constraints and disconnected feasible regions, we pursue a Bayesian approach whereby we explicitly model the parameters and constraints as random variables in an augmented Bayes network (see Fig. 2). This allows us to frame the problem of learning the parameters as an inference problem in a hybrid Bayes network of discrete (T, C, V) and continuous (θ) variables. As we will see shortly, this augmented Bayes network provides a unifying framework to simultaneously learn from constraints and data, to deal with possibly inconsistent constraints, and to express preferences over the degree of satisfaction of the constraints.
We encode the constraint derived from the expert feedback as a binary random variable V in the
Bayes network. If V is true the constraint is satisfied, otherwise it is violated. Thus, if V is true
then θ lies in the positive region of Fig. 4, and if V is false then θ lies in the negative region. We model the CPT for V as Pr(V = true|θ) = max(0, Δ), where Δ = GI(C|T2) − GI(C|T1). Note that the value of GI(C|T) lies in the interval [0,1], so the probability Δ will always be normalized. The intuition behind this definition of the CPT for V is that a constraint is more likely to be satisfied if the parameters lie in the interior of the constraint region.
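To make the constraint node concrete, the following sketch (our own code; the helper structure is hypothetical) evaluates Δ = GI(C|T2) − GI(C|T1) for the two-cause, two-test network of Fig. 1, and hence the CPT entry Pr(V = true|θ) = max(0, Δ):

```python
import numpy as np
from itertools import product

def constraint_probability(p_c1, p_c2, theta1, theta2):
    """Pr(V = true | theta) = max(0, GI(C|T2) - GI(C|T1)); theta_k = (leak, link_c1, link_c2)."""
    def joint_for_test(theta):
        leak, l1, l2 = theta
        joint = np.zeros((4, 2))                  # rows: joint cause states; cols: t = false/true
        for idx, (c1, c2) in enumerate(product([0, 1], repeat=2)):
            p_c = (p_c1 if c1 else 1.0 - p_c1) * (p_c2 if c2 else 1.0 - p_c2)
            p_t = 1.0 - (1.0 - leak) * (1.0 - l1) ** c1 * (1.0 - l2) ** c2   # noisy-OR
            joint[idx, 0], joint[idx, 1] = p_c * (1.0 - p_t), p_c * p_t
        return joint

    def gini(joint):
        p_t = joint.sum(axis=0)
        return sum(p_t[t] * float(np.sum((joint[:, t] / p_t[t]) * (1.0 - joint[:, t] / p_t[t])))
                   for t in range(2) if p_t[t] > 0.0)

    delta = gini(joint_for_test(theta2)) - gini(joint_for_test(theta1))
    return max(0.0, delta)
```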
We place a Beta prior over each θ parameter. Since the test variables are conditioned on the θ parameters that are now part of the network, their conditional distributions become known. For instance, the conditional distribution for Ti (given in Eq. 3) is fully defined given the noisy-or parameters θji. Hence the problem of learning the parameters becomes an inference problem to compute posteriors over the parameters given that the constraint is satisfied (and any data). In practice, it is more convenient to obtain a single value for the parameters instead of a posterior distribution since it is easier to make diagnostic predictions based on one Bayes network. We estimate the parameters by computing a maximum a posteriori (MAP) hypothesis given that the constraint is satisfied (and any data): θ* = argmax_θ Pr(θ | V = true).
Algorithm 1 Pseudo code for Gibbs sampling, stochastic hill climbing and greedy search
1  Fix observed variables, let V = true and randomly sample a feasible starting state S
2  for i = 1 to #samples
3    for j = 1 to #hiddenVariables
4      acceptSample = false; k = 0
5      repeat
6        Sample s' from the conditional of the j-th hidden variable Sj
7        S' = S; S'_j = s'
8        if Sj is a cause or a test, then acceptSample = true
9        elseif S' obeys the constraints V
10         if algo == Gibbs
11           Sample u from the uniform distribution U(0, 1)
12           if u < p(S') / (M q(S')), where p and q are the true and proposal distributions and M > 1
13             acceptSample = true
14         elseif algo == StochasticHillClimbing
15           if likelihood(S') > likelihood(S), then acceptSample = true
16         elseif algo == Greedy, then acceptSample = true
17       elseif algo == Greedy
18         k = k + 1
19         if k == maxIterations, then s' = Sj; acceptSample = true
20     until acceptSample == true
21     Sj = s'
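To make the accept/reject step concrete, here is a simplified Python sketch of resampling one θ parameter (our own code; the callables stand in for quantities computed by the surrounding Gibbs sweep and are a hypothetical interface, not the paper's API):

```python
import numpy as np

def resample_theta(target_over_proposal, constraint_ok, a=2.0, b=2.0,
                   M=10.0, max_tries=1000, seed=None):
    """Propose theta ~ Beta(a, b) (the constraint-free conditional), reject draws that
    violate the test-ordering constraint, and accept with the usual ratio u < p/(M q)."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        theta = rng.beta(a, b)
        if not constraint_ok(theta):              # enforce V = true (rejection sampling)
            continue
        if rng.uniform() < target_over_proposal(theta) / M:
            return theta
    return None                                   # the greedy variant gives up after max_tries
```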
4.3 MAP ESTIMATION
Previous approaches for parameter learning with domain knowledge include modified versions of
EM or some other optimization techniques that account for linear/convex constraints on the parameters. Since our constraints are non-convex, we propose a new approach based on Gibbs sampling
to approximate the posterior distribution, from which we compute the MAP estimate. Although
the technique converges to the MAP in the limit, it may require excessive time. Hence, we modify
Gibbs sampling to obtain more efficient stochastic hill climbing and greedy search algorithms with
anytime properties.
The pseudo code for our Gibbs sampler is provided in Algorithm 1. The two key steps are sampling the conditional distributions of each variable (line 6) and rejection sampling to ensure that the
constraints are satisfied (lines 9 and 12). We sample each variable given the rest according to the
following distributions:
ti ~ Pr(Ti | c, θi)   ∀i   (5)

cj ~ Pr(Cj | c − cj, t, θ) ∝ Pr(Cj) ∏_i Pr(ti | c, θi)   ∀j   (6)

θji ~ Pr(θji | θ − θji, t, c, v) ∝ Pr(v | t, θ) ∏_i Pr(ti | c, θi)   ∀i, j   (7)
The tests and causes are easily sampled from the multinomials as described in the equations above.
However, sampling the θ's is more difficult due to the factor Pr(v|θ, t) = max(0, Δ), which is a truncated mixture of Betas. So, instead of sampling θ from its true conditional, we sample it from a proposal distribution that replaces max(0, Δ) by an un-truncated mixture of Betas equal to Δ + a, where a is a constant that ensures that Δ + a is always positive. This is equivalent to ignoring the
constraints. Then we ensure that the constraints are satisfied by rejecting the samples that violate the
constraints. Once Gibbs sampling has been performed, we obtain a sample that approximates the
posterior distribution over the parameters given the constraints (and any data). We return a single
setting of the parameters by selecting the sampled instance with the highest posterior probability
(i.e., MAP estimate). Since we will only return the MAP estimate, it is possible to speed up the
search by modifying Gibbs sampling. In particular, we obtain a stochastic hill climbing algorithm
by accepting a new sample only if its posterior probability improves upon that of the previous sample
(line 15).

[Figure 4: Difference in Gini impurity for the network in Fig. 1 when θ21 and θ22 are the only parameters allowed to vary; axes are the link reliabilities of test 2 with causes 1 and 2.]
[Figure 5: Posterior over parameters computed through calculation after discretization.]
[Figure 6: Posterior over parameters calculated through sampling.]

Thus, each iteration of the stochastic hill climber requires more time, but always improves
the solution.
As the number of constraints grows and the feasibility region shrinks, the Gibbs sampler and stochastic hill climber will reject most samples. We can mitigate this by using a Greedy sampler that caps
the number of rejected samples, after which it abandons the sampling for the current variable to
move on to the next variable (line 19). Even though the feasibility region is small overall, it may still
be large in some dimensions, so it makes sense to try sampling another variable (that may have a
larger range of feasible values) when it is taking too long to find a new feasible value for the current
variable.
4.4 MODEL REFINEMENT WITH INCONSISTENT CONSTRAINTS
So far, we have assumed that the expert's actions generate a feasible region as a consequence of consistent constraints. We handle inconsistencies by further extending our augmented diagnostic Bayes network. We treat the observed constraint variable, V, as a probabilistic indicator of the true constraint V* as shown in Figure 3. We can easily extend our techniques for computing the MAP to
cater for this new constraint node by sampling an extra variable.
5 EVALUATION AND EXPERIMENTS
5.1 EVALUATION CRITERIA
Formally, for M*, the true model that we aim to learn, the diagnostic process determines the choice of best next test as the one with the smallest Gini impurity. If the correct choice for the next test is known (such as demonstrated by an expert), we can use this information to include a constraint on the model. We denote by V+ the set of observed constraints and by V* the set of all possible constraints that hold for M*. Having only observed V+, our technique will consider any model in 𝕄+ as a possible true model, where 𝕄+ is the set of all models that obey V+. We denote by 𝕄* the set of all models that are diagnostically equivalent to M* (i.e., obey V* and would recommend the same steps as M*) and by M_MAP(V+) the particular model obtained by MAP estimation based on the constraints V+. Similarly, when a dataset D is available, we denote by M_MAP(D) the model obtained by MAP estimation based on D, and by M_MAP(D, V+) the model based on both D and V+.

Ideally we would like to find the true underlying model M*; hence we will report the KL divergence between the models found and M*. However, other diagnostically equivalent models may recommend the same tests as M* and thus have similar constraints, so we also report test consistency with M* (i.e., the number of recommended tests that are the same).
5.2 CORRECTNESS OF MODEL REFINEMENT
Given V*, our technique for model adjustment is guaranteed to choose a model M_MAP ∈ 𝕄* by construction. If any constraint in V* is violated, the rejection sampling step of our technique
would reject that set of parameters.

[Figure 7: Mean KL-divergence and one standard deviation for a 3-cause, 3-test network on learning with data, constraints, and data+constraints, as the number of constraints used increases.]
[Figure 8: Test consistency (percentage of tests correctly predicted) for a 3-cause, 3-test network on learning with data, constraints, and data+constraints.]
[Figure 9: Convergence rate comparison: negative log likelihood of the MAP estimate versus elapsed time (log scale, 0 to 1500 seconds) for Gibbs sampling, stochastic hill climbing, and greedy sampling.]

To illustrate this, consider the network in Fig. 2. There are six
parameters (four link reliabilities and two leak parameters). Let us fix the leak parameters and the
link reliability from the first cause to each test. Now we can compute the posterior surface over
the two variable parameters after discretizing each parameter in small steps and then calculating the
posterior probability at each step as shown in Fig. 5. We can compare this surface with that obtained
after Gibbs sampling using our technique as shown in Fig. 6. We can see that our technique recovers
the posterior surface from which we can compute the MAP. We obtain the same MAP estimate with
the stochastic hill climbing and greedy search algorithms.
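For concreteness, the rejection step can be sketched as follows (a simplified illustration, not the exact implementation; `propose` and `expected_gini` are caller-supplied helpers we assume for the example):

```python
def satisfies_all(theta, constraints, expected_gini):
    """True iff every observed test-ordering constraint holds under theta.

    Each constraint (better, worse, state) says that at diagnostic state
    `state`, test `better` must have lower expected Gini impurity than `worse`.
    """
    return all(
        expected_gini(theta, better, state) < expected_gini(theta, worse, state)
        for better, worse, state in constraints
    )

def rejection_step(propose, constraints, expected_gini, max_tries=10000):
    """Draw parameter proposals until one lands in the feasible region."""
    for _ in range(max_tries):
        theta = propose()
        if satisfies_all(theta, constraints, expected_gini):
            return theta
    raise RuntimeError("no feasible sample found within max_tries")
```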
5.3 EXPERIMENTAL RESULTS ON SYNTHETIC PROBLEMS
We start by presenting our results on a 3-cause by 3-test fully-connected bipartite Bayes network.
We assume that there exists some M* ∈ ℳ* that we want to learn given V+. We use our technique
to find M^MAP. To evaluate M^MAP, we first compute the constraints V* for M* to get the feasible
region associated with the true model. Next, we sample 100 other models from this feasible region that are diagnostically equivalent. We compare these models with M^MAP (after collecting 200
samples with non-informative priors for the parameters).
We compute the KL-divergence of M^MAP with respect to each sampled model. We expect the
KL-divergence to decrease as the number of constraints in V+ increases, since the feasible region
becomes smaller. Figure 7 confirms this trend and shows that M^MAP_{D,V+} has lower mean KL-divergence
than M^MAP_{V+}, which in turn has lower mean KL-divergence than M^MAP_D. The data points in D are limited
to the results of the diagnostic sessions needed to obtain V+. As constraints increase, more data is
available, and so the results for the data-only approach also improve with increasing constraints.
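The KL computation in this protocol reduces to comparing discrete joint distributions; a sketch (assuming a hypothetical `joint_of_tests` helper that runs exact inference over the test variables of a model):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete joint distributions of the same shape."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Usage against the 100 sampled models (joint_of_tests is a hypothetical
# helper computing the exact joint over all test variables of a model):
#   kls = [kl_divergence(joint_of_tests(m_map), joint_of_tests(m))
#          for m in sampled_models]
#   print(np.mean(kls), np.std(kls))
```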
We also compare the test consistency when learning from data only, constraints only or both. Given
a fixed number of constraints, we enumerate the unobserved trajectories, and then compute the
highest ranked test using the learnt model and the sampled true models, for each trajectory. The test
consistency is reported as a percentage, with 100% consistency indicating that the learned and true
models had the same highest-ranked tests on every trajectory. Figure 8 presents these percentages
for the greedy sampling technique (the results are similar for the other techniques). It again appears
that learning parameters with both constraints and data is better than learning with constraints only,
which in turn is usually better than learning with data only.
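A sketch of how this percentage can be computed (names are ours; `top_test` stands in for ranking tests by expected Gini impurity under a model):

```python
def test_consistency(learned_model, sampled_true_models, trajectories, top_test):
    """Percentage of (model, trajectory) pairs on which the learned model and
    a sampled true model agree on the highest-ranked next test.

    top_test(model, trajectory) is a caller-supplied (hypothetical) helper
    returning the minimum-expected-Gini test at the trajectory's state.
    """
    agree = total = 0
    for true_model in sampled_true_models:
        for trajectory in trajectories:
            agree += top_test(learned_model, trajectory) == top_test(true_model, trajectory)
            total += 1
    return 100.0 * agree / total
```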
Figure 9 compares the convergence rate of each technique to find the MAP estimate. As expected,
Stochastic Hill Climbing and Greedy Sampling take less time than Gibbs sampling to find parameter
settings with high posterior probability.
5.4 EXPERIMENTAL RESULTS ON REAL-WORLD PROBLEMS
We evaluate our technique on a real-world diagnostic network collected and reported by Agosta et
al. [1], where the authors collected detailed session logs over a period of seven weeks in which the
entire diagnostic sequence was recorded.

[Figure 10: Diagnostic Bayesian network collected from user trials and pruned to retain sub-networks with at least one constraint.]

[Figure 11: KL divergence (computed on the joint over all tests) as the number of constraints increases for the real-world problem, comparing data only, constraints only, and data+constraints.]

The sequences intermingle model building and querying
phases. The model network structure was inferred from an expert's sequence of positing causes
and tests. Test-ranking constraints were deduced from the expert's test query sequences once the
network structure was established.
The 157 sessions captured over the seven weeks resulted in a Bayes network with 115 tests, 82 root
causes and 188 arcs. The network consists of several disconnected sub-networks, each identified
with a symptom represented by the first test in the sequence, and all subsequent tests applied within
the same subnet. There were 20 sessions from which we were able to observe trajectories with
at least two tests, resulting in a total of 32 test constraints. We pruned our diagnostic network to
remove the sub-networks with no constraints to get a Bayes network with 54 tests, 30 root causes,
and 67 parameters divided among 7 sub-networks, as shown in Figure 10, on which we apply our model
refinement technique to learn the parameters for each sub-network separately.
Since we don't have the true underlying network and the full set of constraints (more constraints
could be observed in future diagnostic sessions), we treated the 32 constraints as if they were V*
and the corresponding feasible region ℳ* as if it contained models diagnostically equivalent to
the unknown true model. Figure 11 reports the KL divergence between the models found by our
algorithms and sampled models from ℳ* as we increase the number of constraints. With such
limited constraints and consequently large feasible regions, it is not surprising that the variation in
KL divergence is large. Again, the MAP estimate based on both the constraints and the data has
lower KL divergence than constraints only and data only.
6 CONCLUSION AND FUTURE WORK
In summary, we presented an approach that can learn the parameters of a Bayes network based on
constraints implied by test consistency and any data available. While several approaches exist to
incorporate qualitative constraints in learning procedures, our work makes two important contributions. First, this is the first approach that exploits implicit constraints based on value-of-information
assessments. Second, it is the first approach that can handle non-convex constraints. We demonstrated the approach on synthetic data and on a real-world manufacturing diagnostic problem. Since
data is generally sparse in diagnostics, this work makes an important advance to mitigate the model
acquisition bottleneck, which has prevented the widespread application of diagnostic networks so
far. In the future, it would be interesting to generalize this work to reinforcement learning in applications where data is sparse, but constraints may be inferred from expert interactions.
Acknowledgments
This work was supported by a grant from Intel Corporation.
References
[1] John Mark Agosta, Omar Zia Khan, and Pascal Poupart. Evaluation results for a query-based
diagnostics application. In The Fifth European Workshop on Probabilistic Graphical Models
(PGM 10), Helsinki, Finland, September 13–15, 2010.
[2] Eric E. Altendorf, Angelo C. Restificar, and Thomas G. Dietterich. Learning from sparse
data by exploiting monotonicity constraints. In Proceedings of the Twenty First Conference on
Uncertainty in Artificial Intelligence (UAI), Edinburgh, Scotland, July 2005.
[3] Brigham S. Anderson and Andrew W. Moore. Fast information value for graphical models.
In Proceedings of the Nineteenth Annual Conference on Neural Information Processing Systems
(NIPS), pages 51–58, Vancouver, BC, Canada, December 2005.
[4] Cassio P. de Campos and Qiang Ji. Improving Bayesian network parameter learning using
constraints. In International Conference on Pattern Recognition (ICPR), Tampa, FL, USA,
2008.
[5] Marek J. Druzdzel and Linda C. van der Gaag. Elicitation of probabilities for belief networks:
combining qualitative and quantitative information. In Proceedings of the Eleventh Annual
Conference on Uncertainty in Artificial Intelligence (UAI), pages 141–148, Montreal, QC,
Canada, 1995.
[6] Ad J. Feelders. A new parameter learning method for Bayesian networks with qualitative influences. In Proceedings of the Twenty Third International Conference on Uncertainty in Artificial
Intelligence (UAI), Vancouver, BC, July 2007.
[7] Mara Angeles Gil and Pedro Gil. A procedure to test the suitability of a factor for stratification
in estimating diversity. Applied Mathematics and Computation, 43(3):221–229, 1991.
[8] David Heckerman and John S. Breese. Causal independence for probability assessment and
inference using Bayesian networks. IEEE Systems, Man, and Cybernetics, 26(6):826–831,
November 1996.
[9] David Heckerman, John S. Breese, and Koos Rommelse. Decision-theoretic troubleshooting.
Communications of the ACM, 38(3):49–56, 1995.
[10] Ronald A. Howard. Information value theory. IEEE Transactions on Systems Science and
Cybernetics, 2(1):22–26, August 1966.
[11] Percy Liang, Michael I. Jordan, and Dan Klein. Learning from measurements in exponential families. In Proceedings of the Twenty Sixth Annual International Conference on Machine
Learning (ICML), Montreal, QC, Canada, June 2009.
[12] Wenhui Liao and Qiang Ji. Learning Bayesian network parameters under incomplete data with
domain knowledge. Pattern Recognition, 42:3046–3056, 2009.
[13] Yi Mao and Guy Lebanon. Domain knowledge uncertainty and probabilistic parameter constraints. In Proceedings of the Twenty Fifth Conference on Uncertainty in Artificial Intelligence
(UAI), Montreal, QC, Canada, 2009.
[14] Ryszard S. Michalski. A theory and methodology of inductive learning. Artificial Intelligence,
20:111–116, 1984.
[15] Radu Stefan Niculescu, Tom M. Mitchell, and R. Bharat Rao. Bayesian network learning with
parameter constraints. Journal of Machine Learning Research, 7:1357–1383, 2006.
[16] Mark A. Peot and Ross D. Shachter. Learning from what you don't observe. In Proceedings
of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 439–446,
Madison, WI, July 1998.
[17] Michael P. Wellman. Fundamental concepts of qualitative probabilistic networks. Artificial
Intelligence, 44(3):257–303, August 1990.
[18] Frank Wittig and Anthony Jameson. Exploiting qualitative knowledge in the learning of conditional probabilities of Bayesian networks. In Proceedings of the Sixteenth Conference on
Uncertainty in Artificial Intelligence (UAI), San Francisco, CA, July 2000.
A Model of Distributed Sensorimotor Control in
the Cockroach Escape Turn
R.D. Beer 1,2, G.J. Kacmarcik 1, R.E. Ritzmann 2 and H.J. Chiel 2
Departments of 1 Computer Engineering and Science, and 2 Biology
Case Western Reserve University
Cleveland, OH 44106
Abstract
In response to a puff of wind, the American cockroach turns away and runs.
The circuit underlying the initial turn of this escape response consists of
three populations of individually identifiable nerve cells and appears to employ distributed representations in its operation. We have reconstructed
several neuronal and behavioral properties of this system using simplified
neural network models and the backpropagation learning algorithm constrained by known structural characteristics of the circuitry. In order to
test and refine the model, we have also compared the model's responses to
various lesions with the insect's responses to similar lesions.
1 INTRODUCTION
It is becoming generally accepted that many behavioral and cognitive capabilities
of the human brain must be understood as resulting from the cooperative activity
of populations of nerve cells rather than the individual activity of any particular cell. For example, distributed representation of orientation by populations of
directionally-tuned neurons appears to be a common principle of many mammalian
motor control systems (Georgopoulos et al., 1988; Lee et al., 1988). While the general principles of distributed processing are evident in these mammalian systems,
however, the details of their operation are not. Without a deeper knowledge of the
underlying neuronal circuitry and its inputs and outputs, it is difficult to answer
such questions as how the population code is formed, how it is read out, and what
precise role it plays in the operation of the nervous system as a whole. In this paper,
we describe our work with an invertebrate system, the cockroach escape response,
which offers the possibility of addressing these questions.
2 THE COCKROACH ESCAPE RESPONSE
Any sudden puff of wind directed toward the American cockroach (Periplaneta
americana), such as from an attacking predator, evokes a rapid directional turn
away from the wind source followed by a run (Ritzmann, 1984). The initial turn is
generally completed in approximately 60 msec after the onset of the wind. During
this time, the insect must integrate information from hundreds of sensors to direct
a very specific set of leg movements involving dozens of muscles distributed among
three distinct pairs of multisegmented legs. In addition, the response is known to
exhibit various forms of plasticity, including adaptation to sensory lesions. This
system has also recently been shown to be capable of multiphasic responses (e.g. an
attack from the front may elicit a sequence of escape movements rather than a single
turn) and context-dependent responses (e.g. if the cockroach is in antennal contact
with an obstacle, it may modify its escape movements accordingly) (Ritzmann et
al., in preparation).
The basic architecture of the neuronal circuitry responsible for the initial turn of
the escape response is known (Daley and Camhi, 1988; Ritzmann and Pollack,
1988; Ritzmann and Pollack, 1990). Characteristics of the initiating wind puff
are encoded by a population of several hundred broadly-tuned wind-sensitive hairs
located on the bottom of the insect's cerci (two antennae-like structures found at the
rear of the animal). The sensory neurons which innervate these hairs project to a
small population of four pairs of ventral giant interneurons (the vGIs). These giant
interneurons excite a larger population of approximately 100 interneurons located
in the thoracic ganglia associated with each pair of legs. These type A thoracic
interneurons (the TIAs) integrate information from a variety of other sources as
well, including leg proprioceptors. Finally, the TIAs project to local interneurons
and motor neurons responsible for the control of each leg.
Perhaps what is most interesting about this system is that, despite the complexity
of the response it controls, and despite the fact that its operation appears to be
distributed across several populations of interneurons, the individual members of
these populations are uniquely identifiable. For this reason, we believe that the
cockroach escape response is an excellent model system for exploring the neuronal
basis of distributed sensorimotor control at the level of identified nerve cells. As an
integral part of that effort, we are constructing a computer model of the cockroach
escape response.
3 NEURAL NETWORK MODEL
While a great deal is known about the overall response properties of many of the
individual neurons in the escape circuit, as well as their architecture of connectivity, little detailed biophysical data is currently available. For this reason, our initial
models have employed simplified neural network models and learning techniques.
This approach has proven to be effective for analyzing a variety of neuronal circuits (e.g. Lockery et al., 1989; Anastasio and Robinson, 1989). Specifically, using
backpropagation, we train model neurons to reproduce the observed properties of
identified nerve cells in the escape circuit.
In order to ensure that the resulting models are biologically relevant, we constrain
[Figure: four polar windfield plots, one each for Left vGI1, Left vGI2, Left vGI3, and Left vGI4.]
Figure 1: Windfields of Left Model Ventral Giant Interneurons
backpropagation to produce solutions which are consistent with the known structural characteristics of the circuit. The most important constraints we have utilized
to date are the existence or nonexistence of specific connections between identified
cells and the signs of existing connections. Other constraints that we are exploring
include the firing curves and physiological operating ranges of identified neurons in
the circuit. It is important to emphasize that we employed backpropagation solely
as a means for finding the appropriate connection weights given the known structure
of the circuit, and no claim is being made about its biological validity.
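A sketch of one way such structural constraints can be imposed during delta-rule training (our own illustration, not the authors' code): after each update, connections known to be absent are zeroed, and connections with a known sign are clipped back to the allowed half-line.

```python
import numpy as np

def delta_rule_epoch(W, X, T, mask, sign, lr=0.1):
    """One epoch of the delta rule with structural constraints.

    W: (n_out, n_in) weights; X: (n_samples, n_in) inputs; T: targets.
    mask[i, j] = 1 if connection j -> i is known to exist, else 0.
    sign[i, j] = +1 (excitatory), -1 (inhibitory), or 0 (unconstrained).
    """
    for x, t in zip(X, T):
        y = 1.0 / (1.0 + np.exp(-W @ x))              # logistic units
        W += lr * np.outer((t - y) * y * (1 - y), x)  # delta-rule update
        W *= mask                                     # absent connections stay 0
        fixed = sign != 0
        # clip constrained weights back to the correct sign
        W[fixed] = sign[fixed] * np.clip(sign[fixed] * W[fixed], 0.0, None)
    return W
```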
As an example of this approach, we have reconstructed the observed windfields of
the eight ventral giant interneurons which serve as the first stage of interneuronal
processing in the escape circuit. These windfields, which represent the intensity of a
cell's response to wind puffs from different directions, have been well characterized
in the insect (Westin, Langberg, and Camhi, 1977). The windfields of individual
cercal sensory neurons have also been mapped (Westin, 1979; Daley and Camhi,
1988). The response of each hair is broadly tuned about a single preferred direction,
which we have modeled as a cardioid. The cercal hairs are arranged in nine major
columns on each cercus. All of the hairs in a single column share similar responses.
Together, the responses of the hairs in all eighteen columns provide overlapping
coverage of most directions around the insect's body. The connectivity between
each major cercal hair column and each ventral giant interneuron is known, as are
the signs of these connections (Daley and Camhi, 1988). Using these data, each
model vGI was trained to reproduce the corresponding windfield by constrained
backpropagation.1 The resulting responses of the left four model vGIs are shown
in Figure 1. These model windfields closely approximate those observed in the
cockroach. Further details concerning vGI windfield reconstruction will be given in
a forthcoming paper.
[Footnote 1: Strictly speaking, we are only using the delta rule here. The full power of backpropagation is not needed for this task since we are training only a single layer of weights.]
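The cardioid tuning described above is simple to write down; a brief sketch (our own illustration; the actual preferred directions and gains come from the measured hair columns):

```python
import numpy as np

def hair_response(wind_dir, preferred_dir):
    """Cardioid tuning: maximal for wind at the preferred direction,
    zero for wind from the opposite direction. Angles in radians."""
    return 0.5 * (1.0 + np.cos(wind_dir - preferred_dir))

# eighteen columns (nine per cercus), illustratively given evenly spaced
# preferred directions so that they cover most directions around the body
preferred = np.linspace(0.0, 2.0 * np.pi, 18, endpoint=False)
wind = np.deg2rad(45.0)
column_activity = hair_response(wind, preferred)  # input vector to the vGIs
```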
4 ESCAPE TURN RECONSTRUCTIONS
Ultimately, we are interested in simulating the entire escape response. This requires some way to connect our neural models to behavior, an approach that we
have termed computational neuroethology (Beer, 1990). Toward that end, we have
also constructed a three-dimensional kinematic model of the insect's body which accurately represents the essential degrees of freedom of the legs during escape turns.
[Figure 2: Model Escape Turns for Wind from Different Directions]
For our purposes here, the essential joints are the coxal-femur (CF) and femur-tibia
(FT) joints of each leg. The leg segment lengths and orientations, as well as the
joint angles and axes of rotation, were derived from actual measurements (Nye and
Ritzmann, unpublished data). The active leg movements during escape turns of a
tethered insect, in which the animal is suspended by a rod above a greased plate,
have been shown to be identical to those of a free ranging animal (Camhi and Levy,
1988). Because an insect thus tethered is neither supporting its own weight nor generating appreciable forces with its legs, a kinematic body model can be defended as
an adequate first approximation.
The leg movements of the simulated body were controlled by a neural network
model of the entire escape circuit. Where sufficient data was available, the structure
of this network was constrained appropriately. The first layer of this circuit was
described in the previous section and is prevented from further training here. There
are six groups of six representative TIAs, one group for each leg. Within a group,
representative members of each identified class of TIA are modeled. Where known,
the connectivity from the vGIs to each class of TIAs was enforced and all connections
from vGIs to TIAs were constrained to be excitatory (Ritzmann and Pollack, 1988).
Model TIAs also receive inputs from leg proprioceptors which encode the angle of
each joint (Murrain and Ritzmann, 1988). The TIA layer for each side of the body
was fully connected to 12 local interneurons, which were in turn fully connected to
motor neurons which encode the change in angle of each joint in the body model.
High speed video films of the leg movements underlying actual escape turns in the
tethered preparation for a variety of different wind angles and initial joint angles
have been made (Nye and Ritzmann, 1990). The angles of each joint before wind
onset and immediately after completion of the initial turn were used as training
data for the model escape circuit. Only movements of the middle and hind legs were
considered because individual joint angles of the front legs were far more variable.
After training with constrained backpropagation, the model successfully reproduced
the essential features of this data (Figure 2). Wind from the rear always caused
[Figure 3: Model Escape Turns Following Left Cercal Ablation]
the rear legs of the model to thrust back, which would propel the body forward
in a freely moving insect, while wind from the front caused the rear legs to move
forward, pulling the body back. The middle legs always turned the body away from
the direction of the wind.
5 MODEL MANIPULATIONS
The results described above demonstrate that several neuronal and behavioral properties of this system can be reproduced using only simplified but biologically constrained neural network models. However, to serve as a useful tool for understanding
the neuronal basis of the cockroach escape response, it is not enough for the model
to simply reproduce what is already known about the normal operation of the system. In order to test and refine the model, we must also examine its responses
to various lesions and compare them to the responses of the insect to analogous
lesions. Here we report the results of two experiments of this sort.
Immediately following removal of the left cercus, cockroaches make a much higher
proportion of incorrect turns (i.e. turns toward rather than away from the wind
source) in response to wind from the left, while turns in response to wind from the
right are largely unaffected. (Vardi and Camhi, 1982a). These results suggest that,
despite the redundant representation of wind direction by each cercus, the insect
integrates information from both cerci in order to compute the appropriate direction
of movement. As shown in Figure 3, the response of the model to a left cercal
ablation is consistent with these results. In response to wind from the unlesioned
side, the model generates leg movements which would turn the body away from the
wind. However, in response to wind from the lesioned side, the model generates leg
movements which would turn the body toward the wind.
It is interesting to note that, following an approximately thirty-day recovery
period, the directionality of a cercally ablated cockroach's escape response is largely
restored (Vardi and Camhi, 1982a). While the mechanisms underlying this adaptation are not yet fully understood, they appear to involve a reorganization of the
vGI connections from the intact cercus (Vardi and Camhi, 1982b). After a cercal
ablation, the windfields of the vGIs on the ablated side are significantly reduced.
Following the thirty day recovery period, however, these windfields are largely restored. We have also examined these effects in the model. After cercal ablation,
the model vGI windfields show some similarities to those of similarly lesioned insects. In addition, using vGI retraining to simulate the adaptation process, we have
found that the model can effect a similar recovery of vGI windfields by adjusting
the connections from the intact cercus. However, due to space limitations, these
results will be described in detail elsewhere.
A second experimental manipulation that has been performed on this system is the
selective lesion of individual ventral giant interneurons (Comer, 1985). The only
result that we will describe here is the lesion of vGI1. In the animal, this results
in a behavioral deficit similar to that observed with cereal ablation. Correct turns
result for wind from the unlesioned side, but a much higher proportion of incorrect
turns are observed in response to wind from the lesioned side. The response of the
model to this lesion is also similar to its response to a cercal ablation (Figure 3)
and is thus consistent with these experimental results.
6 CONCLUSIONS
With the appropriate caveats, invertebrate systems offer the possibility of addressing
important neurobiological questions at a much finer level than is generally possible
in mammalian systems. In particular, the cockroach escape response is a complex
sensorimotor control system whose operation is distributed across several populations of interneurons, but is nevertheless amenable to a detailed cellular analysis.
Due to the overall complexity of such circuits and the wealth of data which can be
extracted from them, modeling must play a crucial role in this endeavor. However,
in order to be useful, models must make special efforts to remain consistent with
known biological data and constantly be subjected to experimental test. Experimental work in turn must be responsive to model demands and predictions. This
paper has described our initial results with this cooperative approach to the cockroach escape response. Our future work will focus on extending the current model
in a similar manner.
Acknowledgements
This work was supported by ONR grant N00014-90-J-1545 to RDB, a CAISR graduate fellowship from the Cleveland Advanced Manufacturing Program to GJK, NIH
grant NS 17411 to RER, and NSF grant BNS-8810757 to HJC.
References
Anastasio, T.J. and Robinson, D.A. (1989). Distributed parallel processing in the
vestibulo-oculomotor system. Neural Computation 1:230-241.
Beer, R.D. (1990). Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology. Academic Press.
Camhi, J.M. and Levy, A. (1988). Organization of a complex motor act: Fixed
and variable components of the cockroach escape behavior. J. Comp. Physiology
163:317-328.
Comer, C.M. (1985). Analyzing cockroach escape behavior with lesions of individual
giant interneurons. Brain Research 335:342-346.
Daley, D.L. and Camhi, J.M. (1988). Connectivity pattern of the cercal-to-giant
interneuron system of the American cockroach. J. Neurophysiology 60:1350-1368.
Georgopoulos, A.P., Kettner, R.E. and Schwartz, A.B. (1988). Primate motor
cortex and free arm movements to visual targets in three-dimensional space. II.
Coding of the direction of movement by a neuronal population. J. Neuroscience
8:2928-2937.
Lee, C., Rohrer, W.H. and Sparks, D.L. (1988). Population coding of saccadic eye
movements by neurons in the superior colliculus. Nature 332:357-360.
Lockery, S.R., Wittenberg, G., Kristan, W.B. Jr. and Cottrell, G.W. (1989). Function of identified interneurons in the leech elucidated using neural networks trained
by back-propagation. Nature 340:468-471.
Murrain, M. and Ritzmann, R.E. (1988). Analysis of proprioceptive inputs to DPG
interneurons in the cockroach. J. Neurobiology 19:552-570.
Nye, S.W. and Ritzmann, R.E. (1990). Videotape motion analysis of leg joint angles
during escape turns of the cockroach. Society for Neurosciences Abstracts 16:759.
Ritzmann, R.E. (1984). The cockroach escape response. In R.C. Eaton (Ed.)
Neural Mechanisms of Startle Behavior (pp. 93-131). New York: Plenum.
Ritzmann, R.E. and Pollack, A.J. (1988). Wind activated thoracic interneurons
of the cockroach: II. Patterns of connection from ventral giant interneurons. J.
Neurobiology 19:589-611.
Ritzmann, R.E. and Pollack, A.J. (1990). Parallel motor pathways from thoracic
interneurons of the ventral giant interneuron system of the cockroach, Periplaneta
americana. J. Neurobiology 21:1219-1235.
Ritzmann, R.E., Pollack, A.J., Hudson, S. and Hyvonen, A. (in preparation). Thoracic interneurons in the escape system of the cockroach, Periplaneta americana,
are multi-modal interneurons.
Vardi, N. and Camhi, J.M. (1982). Functional recovery from lesions in the escape
system of the cockroach. I. Behavioral recovery. J. Comp. Physiology 146:291-298.
Vardi, N. and Camhi, J.M. (1982). Functional recovery from lesions in the escape
system of the cockroach. II. Physiological recovery of the giant interneurons. J.
Comp. Physiology 146:299-309.
Westin, J. (1979). Responses to wind recorded from the cercal nerve of the cockroach Periplaneta americana. I. Response properties of single sensory neurons. J.
Comp. Physiology 133:97-102.
Westin, J., Langberg, J.J. and Camhi, J.M. (1977). Response properties of giant interneurons of the cockroach Periplaneta americana to wind puffs of different
directions and velocities. J. Comp. Physiology 121:307-324.
Collective Graphical Models
Thomas G. Dietterich
Oregon State University
[email protected]
Daniel Sheldon
Oregon State University
[email protected]
Abstract
There are many settings in which we wish to fit a model of the behavior of individuals but where our data consist only of aggregate information (counts or
low-dimensional contingency tables). This paper introduces Collective Graphical Models, a framework for modeling and probabilistic inference that operates
directly on the sufficient statistics of the individual model. We derive a highly efficient Gibbs sampling algorithm for sampling from the posterior distribution
of the sufficient statistics conditioned on noisy aggregate observations, prove its
correctness, and demonstrate its effectiveness experimentally.
1 Introduction
In fields such as ecology, marketing, and the social sciences, data about identifiable individuals is
rarely available, either because of privacy issues or because of the difficulty of tracking individuals
over time. Far more readily available are aggregated data in the form of counts or low-dimensional
contingency tables. Despite the fact that only aggregated data are available, researchers often seek
to build models and test hypotheses about individual behavior. One way to build a model connecting
individual-level behavior to aggregate data is to explicitly model each individual in the population,
together with the aggregation mechanism that yields the observed data.
However, with large populations it is infeasible to reason about each individual. Luckily, for many
purposes it is also unnecessary. To fit a probabilistic model of individual behavior, we only need
the sufficient statistics of that model. This paper introduces a formalism in which one starts with a
graphical model describing the behavior of individuals, and then derives a new graphical model,
the Collective Graphical Model (CGM), on the sufficient statistics of a population drawn from
that model. Remarkably, the CGM has a structure similar to that of the original model.
This paper is devoted to the problem of inference in CGMs, where the goal is to calculate conditional
probabilities over the sufficient statistics given partial observations made at the population level. We
consider both an exact observation model where subtables of the sufficient statistics are observed
directly, and a noisy observation model where these counts are corrupted. A primary application is
learning: for example, computing the expected value of the sufficient statistics comprises the "E"
step of an EM algorithm for learning the individual model from aggregate data.
Main concepts. The ideas behind CGMs are best illustrated by an example. Figure 1(a) shows the
graphical model plate notation for the bird migration model from [1, 2], in which birds transition
stochastically among a discrete set of locations (say, grid cells on a map) according to a Markov
chain (the individual model). The variable Xtm denotes the location of the mth bird at time t, and
birds are independent and identically distributed. This model gives an explicit way to reason about
the interplay between individual-level behavior (inside the plate) and aggregate data. Suppose, for
example, that very accurate surveys reveal the number of birds nt (i) in each location i at each time
t, and these numbers are collected into a single vector nt for each time step. Then, for example, one
can compute the likelihood of the survey data given parameters of the individual model by summing
out the individual variables. However, this is highly impractical: if our map has L grid cells, then
the variable elimination algorithm run on this model would instantiate tabular potentials of size L^M.
Figure 1: Collective graphical model of bird migration: (a) replicates of the individual model connected to
population-level observations, (b) CGM after marginalizing away individuals, (c) trellis graph on locations
{i, j} for T = 3, M = 10; numbers on edges indicate flow amounts, (d) a degree-one cycle; flows remain
non-negative for δ ∈ {−3, . . . , 1}, (e) a degree-two cycle; flows remain non-negative for δ ∈ {−2, . . . , 1}.
Figure 1(b) shows the CGM for this model, which we obtain by analytically marginalizing away the
individual variables to get a new model on their sufficient statistics, which are the tables nt,t+1 with
entries nt,t+1 (i, j) equaling the number of birds that fly from i to j from time t to t + 1. A much
better inference approach would be to conduct variable elimination or message passing directly in
the CGM. However, this would still instantiate potentials that are much too big for realistic problems
due to the huge state space: e.g., there are (M + L^2 − 1 choose L^2 − 1) = O(M^(L^2 − 1)) possible
values for the table nt,t+1.
Instead, we will perform approximate inference using MCMC. Here, we are faced with yet another
challenge: the CGM has hard constraints encoded into its distribution, and our MCMC moves must
preserve these constraints yet still connect the state space. To understand this, observe that the
hidden variables in this example comprise a flow of M units through the trellis graph of the Markov
chain, with the interpretation that nt,t+1(i, j) birds "flow" along edge (i, j) at time t (see Figure
1(c) and [1]). The constraints are that (1) flow is conserved at each trellis node, and (2) the number
of birds that enter location i at time t equals the observed number nt (i). (In the case of noisy or
partial observations, the latter constraint may not be present.)
How can we design a set of moves that connect any two M -unit flows while preserving these constraints? The answer is to make moves that send flow around cycles. Cycles of the form illustrated
in Figure 1(d) preserve flow conservation but change the amount of flow through some trellis nodes.
Cycles of the form in Figure 1(e) preserve both constraints. One can show by graph-theoretic arguments that moves of these two general classes are enough to connect any two flows.
This gives us the skeleton of an ergodic MCMC sampler: starting with a feasible flow, select cycles
from these two classes uniformly at random and propose moves that send δ units of flow around
the cycle. There is one unassuming but crucially important final question: how to select δ? The
following is a form of Gibbs sampler: from all values that preserve non-negativity, select δ with
probability proportional to that of the new flow. Such moves are always accepted. Remarkably,
even though δ may take on as many as M different values, the resulting distribution over δ has an
extremely tractable form (either binomial or hypergeometric), and thus it is possible to select δ
in constant time, so we can make very large moves in time independent of the population size.
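A brute-force sketch of such a cycle move (our own illustration: for a small example we simply enumerate δ and score each candidate flow; the closed-form sampling of δ described above is exactly what makes the real algorithm fast):

```python
import numpy as np

def gibbs_cycle_move(flow, cycle, log_prob, rng):
    """One Gibbs move: send delta units of flow around a cycle.

    flow: dict mapping edge -> nonnegative integer count.
    cycle: list of (edge, sign) pairs; the move adds sign * delta to each edge.
    log_prob: callable giving the (unnormalized) log probability of a flow.
    """
    lo = -min(flow[e] for e, s in cycle if s > 0)   # keep '+' edges nonnegative
    hi = min(flow[e] for e, s in cycle if s < 0)    # keep '-' edges nonnegative
    deltas = np.arange(lo, hi + 1)
    logps = []
    for d in deltas:                                # score each feasible flow
        for e, s in cycle:
            flow[e] += s * int(d)
        logps.append(log_prob(flow))
        for e, s in cycle:
            flow[e] -= s * int(d)
    logps = np.asarray(logps, dtype=float)
    probs = np.exp(logps - logps.max())
    probs /= probs.sum()
    d = int(rng.choice(deltas, p=probs))            # Gibbs choice of delta
    for e, s in cycle:
        flow[e] += s * d
    return flow
```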
Contributions. This paper formally develops these concepts in a way that generalizes the construction of Figure 1 to allow arbitrary graphical models inside the plate, and a more general observation
model that includes both noisy observations and observations involving multiple variables. We develop an efficient Gibbs sampler to conduct inference in CGMs that builds on existing work for conducting exact tests in contingency tables and makes several novel technical contributions. Foremost
is the analysis of the distribution over the move size δ, which we show to be a discrete univariate
distribution that generalizes both the binomial and hypergeometric distributions. In particular, we
prove that it is always log-concave [3], so it can be sampled in constant expected running time. We
2
show empirically that resulting inference algorithm runs in time that is independent of the population
size, and is dramatically faster than alternate approaches.
Related Work. The bird migration model of [1, 2] is a special case of CGMs where the individual
model is a Markov chain and observations are made for single variables only. That work considered
only maximum a posteriori (MAP) inference; the method of this paper could be used for learning in
that application. Sampling methods for exact tests in contingency tables (e.g. [4]) generate tables
with the same sufficient statistics as an observed table. Our work differs in that our observations
are not sufficient, and we are sampling the sufficient statistics instead of the complete contingency
table. Diaconis and Sturmfels [5] broadly introduced the concept of Markov bases, which are sets
of moves that connect the state space when sampling from conditional distributions by MCMC. We
construct a Markov basis in Section 3.1 based on work of Dobra [6]. Lauritzen [7] discusses the
problem of exact tests in nested decomposable models, a setup that is similar to ours. Inference
in CGMs can be viewed as a form of lifted inference [8–12]. The counting arguments used to derive the CGM distribution (see below) are similar to the operations of counting elimination [9] and
counting conversion [10] used in exact lifted inference algorithms for first-order probabilistic models. However, those algorithms do not replicate the CGM construction when applied to a first-order
representation of the underlying population model. For example, when applied to the bird migration
model, the C-FOVE algorithm of Milch et al. [10] cannot introduce contingency tables over pairs
of variables (Xt , Xt+1 ) as required to represent the sufficient statistics; it can only introduce histograms over single variables Xt . Apsel and Brafman [13] have recently taken a step in this direction
by introducing a lifting operation to construct the Cartesian product of two first-order formulas. In
the applications we are considering, exact inference (even when lifted) is intractable.
2 Problem Setup
Let (X1 , X2 , . . . , X|V | ) be a set of discrete random variables indexed by the finite set V , where Xv
takes values in the set Xv . Let x = (x1 , . . . , x|V | ) denote a joint setting for these variables from the
set X = X1 × · · · × X|V|. For our individual model, we consider graphical models of the form:

    p(x) = (1/Z) ∏_{C∈C} φC(xC).    (1)
Here, C is the set of cliques of the independence graph, the functions φC : XC → R+ are potentials,
and Z is a normalization constant. For A ⊆ V, we use the notation xA to indicate the sub-vector
of variables with indices belonging to A, and use similar notation for the corresponding domain
XA. We also assume that p(x) > 0 for all x ∈ X, which is required for our sampler to be ergodic.
Models that fail this restriction can be modified by adding a small positive amount to each potential.
A collection A is a set of subsets of V. For collections A and B, define A ≼ B to mean that
each A ∈ A is contained in some B ∈ B. A collection A is decomposable if there is a junction tree
T = (A, E(T)) on vertex set A [7]. Any collection A can be extended to a decomposable collection
B such that A ≼ B; this corresponds to adding fill-in edges to a graphical model.
Consider a sample {x(1), . . . , x(M)} from the graphical model. A contingency table n = (n(i))_{i∈X}
has entries n(i) = Σ_{m=1}^{M} I{x(m) = i} that count the number of times each element i ∈ X appears in the sample. We use index variables such as i, j ∈ X (instead of x ∈ X) to refer to
cells of the contingency table, where i = (i1, . . . , i|V|) is a vector of indices and iA is the subvector corresponding to A ⊆ V. Let tbl(A) denote the set of all valid contingency tables on the
domain XA. A valid table is indexed by elements iA ∈ XA and has non-negative integer entries.
For a full table n ∈ tbl(V) and A ⊆ V, let the marginal table n ↓ A ∈ tbl(A) be defined as

    (n ↓ A)(iA) = Σ_{m=1}^{M} I{x(m)_A = iA} = Σ_{iB ∈ X_{V\A}} n(iA, iB).

When A = ∅, define n ↓ A to be the scalar M, the grand total of the table. Write nA ≼ nB to
mean that nA is a marginal table of nB (i.e., A ⊆ B and nA = nB ↓ A).
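These definitions translate directly into code; a small NumPy sketch (the sizes and variables are illustrative):

```python
import numpy as np

# sample of M individuals over variables V = (0, 1, 2), each with 3 states
rng = np.random.default_rng(0)
sample = rng.integers(0, 3, size=(1000, 3))    # rows are the x(m)

def full_table(sample, sizes):
    """Contingency table n(i) counting occurrences of each joint setting."""
    n = np.zeros(sizes, dtype=int)
    for row in sample:
        n[tuple(row)] += 1
    return n

def marginal(n, A):
    """n 'down' A: sum out every axis not in A."""
    keep = set(A)
    axes = tuple(ax for ax in range(n.ndim) if ax not in keep)
    return n.sum(axis=axes)

n_V = full_table(sample, (3, 3, 3))
n_01 = marginal(n_V, (0, 1))                   # a clique/margin table
assert n_01.sum() == len(sample)               # grand total is M
```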
Our observation model is as follows. We assume that a sample {x(1), . . . , x(M)} is drawn from the
individual model, resulting in a complete, but unobserved, contingency table nV. We then observe
the marginal tables nD = nV ↓ D for each set D in a collection of observed margins D, which
we require to be decomposable. Write this overall collection of tables as nD = {nD}_{D∈D}. We
consider noisy observations in Section 3.3.
Building the CGM. In a discrete graphical model, the sufficient statistics are the contingency tables
nC = {nC}_{C∈C} over cliques. Our approach relies on the ability to derive a tractable probabilistic
model for these statistics by marginalizing out the sample. If C is decomposable, this is possible, so
let us assume that C has a junction tree TC (if not, fill-in edges must be added to the original model).
Let µC be the table of marginal probabilities for clique C (i.e. µC(iC) = Pr(XC = iC)). Let S
be the collection of separators of TC (with repetition if the same set appears as a separator multiple
times) and let nS and µS be the tables of counts and marginal probabilities for the separator S ∈ S.
The distribution of nC was first derived by Sundberg [14]:

    p(nC) = M! ( ∏_{C∈C} ∏_{iC∈XC} µC(iC)^{nC(iC)} / nC(iC)! ) ( ∏_{S∈S} ∏_{iS∈XS} µS(iS)^{nS(iS)} / nS(iS)! )^{−1},    (2)
which can be understood as a product of multinomial distributions corresponding to a sampling
scheme for nC (details omitted). It is this distribution that we call the collective graphical model;
the parameters are the marginal probabilities of the individual model. To understand the conditional
distribution given the observations, let us further assume that D ≼ C (if not, add additional fill-in edges for variables that co-occur within D), so that each observed table is determined by some
clique table. Write nD ≼ nC to express the condition that the tables nC produce the observations nD:
formally, this means that D ≼ C and that D ⊆ C implies nD ≼ nC. Let I{·} be an indicator
variable. Then

    p(nC | nD) ∝ p(nC, nD) = p(nC) I{nD ≼ nC}.    (3)
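Equation (2) is convenient to evaluate in log space; a sketch (assuming strictly positive marginal probabilities, consistent with the positivity assumption on p(x)):

```python
import numpy as np
from scipy.special import gammaln   # gammaln(n + 1) = log(n!)

def log_cgm_prob(clique_tables, clique_margs, sep_tables, sep_margs, M):
    """log p(nC) per Sundberg's formula (2).

    clique_tables/sep_tables: lists of integer count arrays nC, nS.
    clique_margs/sep_margs: matching arrays of marginal probabilities
    (assumed strictly positive).
    """
    lp = gammaln(M + 1)
    for n, mu in zip(clique_tables, clique_margs):
        lp += np.sum(n * np.log(mu) - gammaln(n + 1))
    for n, mu in zip(sep_tables, sep_margs):
        lp -= np.sum(n * np.log(mu) - gammaln(n + 1))
    return float(lp)
```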
In general, the number of contingency tables over small sets of variables leads to huge state spaces
that prohibit exact inference schemes using (2) and (3). Thus, our approach is based on Gibbs
sampling. However, there are two constraints that significantly complicate sampling. First, the
clique tables must match the observations (i.e., nD ≼ nC). Second, implicit in (2) is the constraint
that the tables nC must be consistent in the sense that they are the sufficient statistics of some sample,
otherwise p(nC ) = 0.
Definition 1. Refer to the set of contingency tables nA = {nA}_{A∈A} as a configuration. A configuration is (globally) consistent if there exists nV ∈ tbl(V) such that nA = nV ↓ A for all A ∈ A.
Consistency requires, for example, that any two tables must agree on their common marginal, which
yields the flow conservation constraints in the bird migration model. Table entries must be carefully
updated in concert to maintain these constraints. A full discussion follows.
3 Inference
Our goal is to develop a sampler for p(nC | nD) given the observed tables nD. We assume that the
CGM specified in Equations (1) and (2) satisfies D ≼ C, and that the configuration nD is consistent.
Initialization. The first step is to construct a valid initial value for nC, which must be a globally
consistent configuration satisfying nD ≼ nC. Doing so without instantiating huge intermediate
tables requires a careful sequence of operations on the two junction trees TC and TD . We state one
key theorem, but defer the full algorithm, which is lengthy and technical, to the supplement.
Theorem 1. Let 𝒜 be a decomposable collection with junction tree T_𝒜. Say that the configuration n_𝒜 is locally consistent if it agrees on edges of T_𝒜, i.e., if n_A ↓ S = n_B ↓ S for all (A, B) ∈ E(T_𝒜) with S = A ∩ B. If n_𝒜 is locally consistent, then it is also globally consistent.
In the bird migration example, Theorem 1 guarantees that preserving flow conservation is enough to
maintain consistency. It is structurally equivalent to the "junction tree theorem" (e.g., [15]) which asserts that marginal probability tables {μ_A}_{A∈𝒜} that are locally consistent are realizable as the
marginals of some joint distribution p(x). Like that result, Theorem 1 also has a constructive proof,
which is the foundation for our initialization algorithm. However, the integrality requirements of
contingency tables necessitate a different style of construction.
3.1 Markov Basis
The first key challenge in designing the MCMC sampler is constructing a set of moves that preserve
the constraints mentioned above, yet still connect any two points in the support of the distribution.
Such a set of moves is called a Markov basis [5].
Definition 2. A set of moves M is a Markov basis for the set F if, for any two configurations n, n′ ∈ F, there is a sequence of moves z_1, …, z_L ∈ M such that: (i) n′ = n + Σ_{ℓ=1}^{L} z_ℓ, and (ii) n + Σ_{ℓ=1}^{L′} z_ℓ ∈ F for all L′ = 1, …, L − 1.
In our problem, the set we wish to connect is the support of p(n_𝒞 | n_𝒟). Our positivity assumption on p(x) implies that any consistent configuration n_𝒞 has positive probability, and thus the support of p(n_𝒞 | n_𝒟) is exactly the set of consistent configurations that match the observations:

F_{n_𝒟} = {n_𝒞 : n_𝒞 is consistent and n_𝒟 ⊑ n_𝒞}.
It is useful at this point to think of the configuration n_𝒞 as a vector obtained by sorting the table entries in any consistent fashion (e.g., lexicographically first by C ∈ 𝒞 and then by i_C ∈ 𝒳_C). A move can be expressed as n′_𝒞 = n_𝒞 + z where z is an integer-valued vector of the same dimension as n_𝒞 that may have negative entries.
The Dobra Markov basis for complete tables. Dobra [6] showed how to construct a Markov basis for moves in a complete contingency table given a decomposable set of margins. Specifically, let 𝒜 be decomposable and let n_𝒜 be consistent, with ∪𝒜 = V, so that each variable is part of an observed margin. Define F̄_{n_𝒜} = {n_V ∈ tbl(V) : n_𝒜 ⊑ n_V}. Dobra gave a Markov basis for F̄_{n_𝒜} consisting of only degree-two moves:
Definition 3. Let (A, S, B) be a partition of V. A degree-two move z has two positive entries and two negative entries:

z(i, j, k) = 1, \quad z(i, j, k') = -1, \quad z(i', j, k) = -1, \quad z(i', j, k') = 1,   (4)

where i ≠ i′ ∈ 𝒳_A, j ∈ 𝒳_S, k ≠ k′ ∈ 𝒳_B. Let M^{d=2}(A, S, B) be the set of all degree-two moves generated from this partition.
These are extensions of the well-known "swap moves" for two-dimensional contingency tables (e.g. [5]) to the subtable n(·, j, ·), and they can be visualized as shown below:

        k    k'
   i    +    -
   i'   -    +

In this arrangement, it is clear that any such move preserves the marginal table n_A (row sums) and the marginal table n_B (column sums); in other words, z ↓ A = 0 and z ↓ B = 0. Moreover, because j is fixed, it is straightforward to see that z ↓ A ∪ S = 0 and z ↓ B ∪ S = 0. The cycle in Figure 1(e) is a degree-two move on the table n_{1,2}, with A = {X1}, S = ∅, B = {X2}.
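In code, a degree-two move is just four signed entries, and the margin-preservation claims can be verified directly (a hypothetical sketch of ours; the complete table is indexed as (X_A, X_S, X_B)):

```python
import numpy as np

def degree_two_move(shape, i, i2, j, k, k2):
    """Eq. (4): the four signed entries of a swap move."""
    z = np.zeros(shape, dtype=int)
    z[i, j, k] = 1
    z[i2, j, k2] = 1
    z[i, j, k2] = -1
    z[i2, j, k] = -1
    return z

n = np.random.randint(1, 5, size=(2, 3, 2))   # complete table over (A, S, B)
z = degree_two_move(n.shape, i=0, i2=1, j=2, k=0, k2=1)
m = n + z
assert np.array_equal(n.sum(axis=2), m.sum(axis=2))   # margin over A, S preserved
assert np.array_equal(n.sum(axis=0), m.sum(axis=0))   # margin over S, B preserved
```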
Theorem 2 (Dobra [6]). Let 𝒜 be decomposable with ∪𝒜 = V. Let M̄_𝒜 be the union of the sets of degree-two moves M^{d=2}(A, S, B) where S is a separator of T_𝒜 and (A, S, B) is the corresponding decomposition of V. Then M̄_𝒜 is a Markov basis for F̄_{n_𝒜}.
Adaptation of the Dobra basis to F_{n_𝒟}. We now adapt the Dobra basis to our setting. Consider a complete table n ∈ tbl(V) and the configuration n_𝒞 = {n ↓ C}_{C∈𝒞}. Because marginalization is a linear operation, there is a linear operator A such that n_𝒞 = A n_V. Moreover, F_{n_𝒜} is the image of F̄_{n_𝒜} under A. Thus, the image of the Dobra basis under A is a Markov basis for F_{n_𝒜}.

Lemma 1. Let M̄_𝒜 be a Markov basis for F̄_{n_𝒜}. Then M_𝒜 = {Az : z ∈ M̄_𝒜} is a Markov basis for F_{n_𝒜}. We call M_𝒜 the projected Dobra basis.
Proof. Let n_𝒞, n′_𝒞 ∈ F_{n_𝒜}. By consistency, there exist n_V, n′_V ∈ F̄_{n_𝒜} such that n_𝒞 = A n_V and n′_𝒞 = A n′_V. There is a sequence of moves z_1, …, z_L ∈ M̄_𝒜 leading from n′_V to n_V, meaning that n′_V = n_V + Σ_{ℓ=1}^{L} z_ℓ. By applying the linear operator A to both sides of this equation, we have that n′_𝒞 = n_𝒞 + Σ_{ℓ=1}^{L} A z_ℓ. Furthermore, each intermediate configuration n_𝒞 + Σ_{ℓ=1}^{L′} A z_ℓ = A(n_V + Σ_{ℓ=1}^{L′} z_ℓ) ∈ F_{n_𝒜}. Thus M_𝒜 = {Az : z ∈ M̄_𝒜} is a Markov basis for F_{n_𝒜}.
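The projection of Lemma 1 is ordinary marginalization of the move table, and, as discussed next, many cliques receive an all-zero update. A small illustration (ours) on a three-variable chain whose observed margins are the single-variable tables:

```python
import numpy as np

def project(z, keep):
    """Marginalize a complete-table move z onto the axes in `keep`."""
    drop = tuple(ax for ax in range(z.ndim) if ax not in keep)
    return z.sum(axis=drop)

# V = {X1, X2, X3}, cliques {X1,X2} and {X2,X3}; observed margins are the
# single-variable tables. One degree-two Dobra move for the decomposition
# A = {X1}, S = empty, B = {X2, X3}:
z = np.zeros((2, 2, 2), dtype=int)
z[0, 0, 0] = z[1, 1, 1] = 1
z[0, 1, 1] = z[1, 0, 0] = -1

for v in range(3):                       # every observed margin is untouched
    assert np.all(project(z, (v,)) == 0)
print(project(z, (0, 1)))                # clique {X1,X2}: a nonzero update
print(project(z, (1, 2)))                # clique {X2,X3}: all zeros (locality)
```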
Locality of moves. First consider the case where all variables are part of some observed table, as in Dobra's setting. The practical message so far is that to sample from p(n_𝒞 | n_𝒟), it suffices to generate moves from the projected Dobra basis M_𝒟. This is done by first selecting a degree-two move z ∈ M̄_𝒟, and then marginalizing z onto each clique of 𝒞. Naively, it appears that a single move may require us to update each clique. However, we will show that z ↓ C will be zero for many cliques, a fact we can exploit to implement moves more efficiently. Let (A, S, B) be the partition used to generate z. We deduce from the discussion following Definition 3 that z ↓ C = 0 unless C has a nonempty intersection with both A and B, so we may restrict our attention to these cliques, which form a connected subtree (Proposition S.1 in supplementary material). An implementation can then exploit this by pre-computing the connected subtrees for each separator and only generating the necessary components of the move. Algorithm 1 gives the details of generating moves.
Algorithm 1: The projected Dobra basis M_𝒜
Input: junction tree T_𝒜 with separators 𝒮_𝒜
1. Before sampling:
2.   For each S ∈ 𝒮_𝒜, find the associated decomposition (A, S, B).
3.   Find the cliques C ∈ 𝒞 that have non-empty intersection with both A and B. These form a subtree of T_𝒞. Denote these cliques by 𝒞_S and let V_S = ∪𝒞_S.
4.   Let A_S = A ∩ V_S and B_S = B ∩ V_S.
5. During sampling, to generate a move for separator S ∈ 𝒮_𝒜:
6.   Select z ∈ M^{d=2}(A_S, S, B_S).
7.   For each clique C ∈ 𝒞_S, calculate z ↓ C.

Unobserved variables. Let us now consider settings where some variables are not part of any observed table, which may happen when the individual model has hidden variables, or, later, with noisy observations. Additional moves are needed to connect two configurations that disagree on marginal tables involving unobserved variables. Several approaches are possible. All require the introduction of degree-one moves z ∈ M^{d=1}(A, B), which partition the variables into two sets (A, B) and have two nonzero entries z(i, j) = 1, z(i′, j) = −1 for i ≠ i′ ∈ 𝒳_A, j ∈ 𝒳_B. In the parlance of two-dimensional tables, these moves adjust two entries in a single column, so they preserve the column sums (n_B) but modify the row sums (n_A). The cycle in Figure 1(d) is a degree-one move which adjusts the marginal table over A = {X2}, but preserves the marginal table over B = {X1, X3}. We proceed once again by constructing a basis for complete tables and then marginalizing the moves onto cliques.
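A degree-one move is even simpler; continuing in the same style (indices illustrative), this one modifies the margin over A = {X2} while preserving the margin over B = {X1, X3}, mirroring the Figure 1(d) description:

```python
import numpy as np

z1 = np.zeros((2, 2, 2), dtype=int)   # axes: (X1, X2, X3)
z1[0, 0, 0] = 1                       # two entries in a single "column",
z1[0, 1, 0] = -1                      # with (X1, X3) fixed at (0, 0)
print(z1.sum(axis=1))                 # margin over B = {X1, X3}: all zeros
print(z1.sum(axis=(0, 2)))            # margin over A = {X2}: [ 1 -1 ]
```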
Theorem 3. Let 𝒰 be any decomposable collection on the set of unobserved variables U = V \ ∪𝒟, and let 𝒟′ = 𝒟 ∪ 𝒰. Let M̄ consist of the moves M̄_{𝒟′} together with the moves M^{d=1}(A, V \ A) for each A ∈ 𝒰. Then M̄ is a Markov basis for F̄_{n_𝒟}, and M = {Az : z ∈ M̄} is a Markov basis for F_{n_𝒟}.
Theorem 3 is proved in the supplementary material. The degree-one moves also become local upon marginalization: it is easy to check that z ↓ C is zero unless C ∩ A is nonempty. These cliques also form a connected subtree. We recommend choosing 𝒰 by restricting T_𝒞 to the variables in U. This has the effect of adding degree-one moves for each clique of 𝒞. By matching the structure of T_𝒞, many of the additional degree-two moves become zero upon marginalization.
3.2 Constructing an efficient MCMC sampler
The second key challenge in constructing the MCMC sampler is utilizing the moves from the Markov basis in a way that efficiently explores the state space. A standard approach is to select a random move z, a direction δ = ±1 (each with probability 1/2), and then propose the move n_𝒞 + δz in a Metropolis-Hastings sampler. Although these moves are enough to connect any two configurations, we are particularly interested in problems where M is large, for which moving by increments of ±1 will be prohibitively slow.
For general Markov bases, Diaconis and Sturmfels [5] suggest instead to construct a Gibbs sampler that uses the moves as directions for longer steps, by choosing the value of δ from the following distribution:

p(\delta) \propto p(n_\mathcal{C} + \delta z \mid n_\mathcal{D}), \qquad \delta \in \{\delta : n_\mathcal{C} + \delta z \ge 0\}.   (5)
Lemma 2 (Adapted from Diaconis and Sturmfels [5]). Let M be a Markov basis for F_{n_𝒟}. Consider the Markov chain with moves δz generated by first choosing z uniformly at random from M and then choosing δ according to (5). This is a connected, reversible, aperiodic Markov chain on F_{n_𝒟} with stationary distribution p(n_𝒞 | n_𝒟).
However, it is not obvious how to sample from p(δ). They suggest running a Markov chain in δ, again having the property of moving in increments of one (see also [16]). In our case, the support of p(δ) may be as big as the population size M, so this solution remains unsatisfactory.
Fortunately, p(δ) has several properties that allow us to create a very efficient sampling algorithm. For a separator S ∈ 𝒮, define z_S as z_C ↓ S for any clique C containing S.
Algorithm 2: Sampling from p(δ) in constant time
Input: move z and current configuration n_𝒞, with |𝒞(z)| > 1
1. Calculate δ_min and δ_max using (8).
2. Extend the function f(δ) := log p(δ) to the real line using the equality n! = Γ(n + 1) in Equation (7) for each constituent function f_A(δ) := log p_A(δ), A ∈ 𝒮(z) ∪ 𝒞(z).
3. Use the logarithm of Equation (6) to evaluate f(δ) (for sampling) and its derivatives (for Newton's method):
   f^{(q)}(\delta) = \sum_{C \in \mathcal{C}(z)} f_C^{(q)}(\delta) - \sum_{S \in \mathcal{S}(z)} f_S^{(q)}(\delta), \qquad q = 0, 1, 2.
   Evaluate the derivatives of f_A(δ) using the logarithm of Equation (7) and the digamma and trigamma functions ψ(n) = (d/dn) log Γ(n) and ψ₁(n) = (d²/dn²) log Γ(n).
4. Find the mode δ* by first using Newton's method to find δ′ maximizing f(δ) over the real line, and then letting δ* be the value in {⌊δ′⌋, ⌈δ′⌉, δ_min, δ_max} that attains the maximum.
5. Run the rejection sampling algorithm of Devroye [3].

[Figure 2 (plots omitted). Top: running time vs. M for a small CGM (seconds vs. population size, log-log; VE and MCMC). Bottom: convergence of MCMC for random Bayes nets (relative error vs. seconds; exact-nodes, exact-chain, noisy-nodes, noisy-chain).]

Now let 𝒞(z) be the
set of cliques C for which z_C is nonzero, and let 𝒮(z) be defined analogously. For A ∈ 𝒮 ∪ 𝒞, let I⁺(z_A) ⊆ 𝒳_A be the indices of the +1 entries of z_A and let I⁻(z_A) be the indices of the −1 entries. By ignoring constant terms in (2), we can write (5) as
p(\delta) \propto \prod_{C \in \mathcal{C}(z)} p_C(\delta) \prod_{S \in \mathcal{S}(z)} p_S(\delta)^{-1},   (6)

where

p_A(\delta) := \prod_{i \in I^+(z_A)} \frac{\mu_A(i)^{\delta}}{(n_A(i) + \delta)!} \; \prod_{j \in I^-(z_A)} \frac{\mu_A(j)^{-\delta}}{(n_A(j) - \delta)!}, \qquad A \in \mathcal{S} \cup \mathcal{C}.   (7)
To maintain the non-negativity of n_𝒞, δ is restricted to the support δ_min, …, δ_max with:

\delta_{\min} := -\min_{C \in \mathcal{C}(z),\; i \in I^+(z_C)} n_C(i), \qquad \delta_{\max} := \min_{C \in \mathcal{C}(z),\; j \in I^-(z_C)} n_C(j).   (8)
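Evaluating the unnormalized log p(δ) via (6)-(8) touches only a handful of table entries. A sketch (our helper; `pos` and `neg` hold (count, marginal-probability) pairs for the +1 and −1 entries of one table; per (6), clique tables contribute with sign +1 and separator tables with sign −1):

```python
import numpy as np
from scipy.special import gammaln

def log_factor(delta, pos, neg):
    """log p_A(delta) from Eq. (7) for one table A."""
    out = 0.0
    for n, mu in pos:
        out += delta * np.log(mu) - gammaln(n + delta + 1)
    for n, mu in neg:
        out += -delta * np.log(mu) - gammaln(n - delta + 1)
    return out

pos, neg = [(4, 0.2), (7, 0.1)], [(3, 0.3), (5, 0.15)]
d_min = -min(n for n, _ in pos)            # Eq. (8)
d_max = min(n for n, _ in neg)
for d in range(d_min, d_max + 1):
    print(d, round(float(log_factor(d, pos, neg)), 3))
```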
Notably, each move in our basis satisfies |I⁺(z_A) ∪ I⁻(z_A)| ≤ 4, so p(δ) can be evaluated by examining at most four entries in each table for cliques in 𝒞(z). It is worth noting that Equation (7) reduces to the binomial distribution for degree-one moves and the (noncentral) hypergeometric distribution for degree-two moves, so we may sample from these distributions directly when |𝒞(z)| = 1. More importantly, we will now show that p(δ) is always a member of the log-concave class of distributions, which are unimodal and can be sampled very efficiently.
Definition 4. A discrete distribution {p_k} is log-concave if p_k² ≥ p_{k−1} p_{k+1} for all k [3].
Theorem 4. For any degree-one or degree-two move z, the distribution p(δ) is log-concave.
It is easy to show that both p_C(δ) and p_S(δ) are log-concave. The proof of Theorem 4, which is found in the supplementary material, then pairs each separator S with a clique C and uses properties of the moves to show that p_C(δ)/p_S(δ) is also log-concave. Then, by Equation (6), we see that p(δ) is a product of log-concave distributions, which is also log-concave.
We have implemented the rejection sampling algorithm of Devroye [3], which applies to any discrete log-concave distribution and is simple to implement. The expected number of times it evaluates p(δ) (up to normalization) is fewer than 5. We must also provide the mode of the distribution, which we find by Newton's method, usually taking only a few steps. The running time for each move is thus independent of the population size. Additional details are given in Algorithm 2.
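For exposition only, here is a deliberately simple sampler for p(δ): it enumerates the bounded support and draws by inverse CDF, which costs O(δ_max − δ_min) per move rather than the constant time of Devroye's rejection method, but it targets the same distribution. The pmf below is the single-clique, hypergeometric-type case noted above, with illustrative μ values:

```python
import numpy as np
from scipy.special import gammaln

def sample_delta(logp_fn, d_min, d_max, rng):
    """O(d_max - d_min) stand-in for the constant-time sampler of Algorithm 2."""
    deltas = np.arange(d_min, d_max + 1)
    logp = np.array([logp_fn(d) for d in deltas], dtype=float)
    p = np.exp(logp - logp.max())          # normalize stably
    p /= p.sum()
    return rng.choice(deltas, p=p)

n_pos, n_neg = np.array([4, 7]), np.array([3, 5])
mus = (0.2, 0.1, 0.3, 0.15)
def logp(d):                               # Eq. (7) for one clique's four entries
    return (d * np.log(mus[0] * mus[1] / (mus[2] * mus[3]))
            - gammaln(n_pos[0] + d + 1) - gammaln(n_pos[1] + d + 1)
            - gammaln(n_neg[0] - d + 1) - gammaln(n_neg[1] - d + 1))

rng = np.random.default_rng(0)
print([sample_delta(logp, -n_pos.min(), n_neg.min(), rng) for _ in range(5)])
```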
3.3 Noisy Observations
Population-level counts from real survey data are rarely exact, and it is thus important to incorporate noisy observations into our model. In this section, we describe how to modify the sampler for the case when all observations are noisy; it is a straightforward generalization to allow both noisy and exact observations. Suppose that we make noisy observations y_ℛ = {y_R : R ∈ ℛ} corresponding to the true marginal tables n_R for a collection ℛ ⊑ 𝒞 (that need not be decomposable). For simplicity, we restrict our attention to models where each entry n in the true table is corrupted independently according to a univariate noise model p(y | n).
We assume that the noise model is log-concave, meaning in this case that log p(y | n) is a concave function of the parameter n. Most commonly-used univariate densities are log-concave with respect to various parameters [17]. A canonical example from the bird migration model is p(y | n) = Poisson(λn), so the survey count is Poisson with mean proportional to the true number of birds present. This example and others are discussed in [2]. We also assume that the support of p(y | n) does not depend on n, so that observations do not restrict the support of the sampling distribution. For example, we must modify our Poisson noise model to be p(y | n) = Poisson(λn + λ₀) with small background rate λ₀ to avoid the hard constraint that n must be positive if y is positive.
In analogy with (3), we can then write p(n_𝒞 | y_ℛ) ∝ p(n_𝒞) p(y_ℛ | n_𝒞) (the hard constraint is now replaced with the likelihood term p(y_ℛ | n_𝒞)). Given our assumption on p(y | n), the support of p(n_𝒞 | y_ℛ) is the same as the support of p(n_𝒞), and a Markov basis can be constructed using the tools from Section 3.1, with all variables being unobserved. In the sampler, the expression for p(δ) must now be updated to incorporate the likelihood term p(y_ℛ | n_𝒞 + δz). Following reasoning similar to before, we let ℛ(z) be the sets in ℛ for which z ↓ R is nonzero and find that Equation (6) gains the additional factor ∏_{R ∈ ℛ(z)} p_R(δ), where
p_R(\delta) = \prod_{i \in I^+(z_R)} p\big(y_R(i) \mid n_R(i) + \delta\big) \; \prod_{j \in I^-(z_R)} p\big(y_R(j) \mid n_R(j) - \delta\big).   (9)
Each factor in (9) is log-concave in δ by our assumption on p(y | n), and hence the overall distribution p(δ) remains log-concave. To update the sampler for p(δ), modify line 3 of Algorithm 2 in the obvious fashion to include these new factors when computing log p(δ) and its derivatives.
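The change to line 3 of Algorithm 2 just adds log-likelihood terms per Equation (9); with the Poisson noise model from the text (rates here illustrative):

```python
import numpy as np
from scipy.special import gammaln

def log_poisson(y, n, lam=0.2, lam0=0.1):
    """log p(y | n) for y ~ Poisson(lam * n + lam0); log-concave in n."""
    rate = lam * n + lam0
    return y * np.log(rate) - rate - gammaln(y + 1)

def log_noisy_factor(delta, y_pos, n_pos, y_neg, n_neg):
    """log of Eq. (9) for one observed set R touched by the move."""
    out = 0.0
    for y, n in zip(y_pos, n_pos):
        out += log_poisson(y, n + delta)
    for y, n in zip(y_neg, n_neg):
        out += log_poisson(y, n - delta)
    return out

print(log_noisy_factor(2, y_pos=[5], n_pos=[20], y_neg=[3], n_neg=[15]))
```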
4 Experiments
We implemented our sampler in MATLAB using Murphy's Bayes net toolbox [18] for the underlying operations on graphical models and junction trees. Figure 2 (top) compares the running time of our method vs. exact inference in the CGM by variable elimination (VE) for a very small model. The task was to estimate E[n_{2,3} | n_1, n_3] in the bird migration model for L = 2, T = 3, and varying M. The running time of VE is O(M^{L²−1}), which is cubic in M (linear on a log-log plot), while the time for our method to estimate the same quantity within 2% relative error actually decreases slightly with population size. Figure 2 (bottom) shows convergence of the sampler for more complex models. We
generated 30 random Bayes nets on 10 binary variables, and generated two sets of observed tables for a population of M = 100,000: the set NODES has a table for each single variable, while the set CHAIN has tables for pairs of variables that are adjacent in a random ordering. We repeated the same process with the noise model p(y | n) = Poisson(0.2n + 0.1) to generate noisy observations. We then ran our sampler to estimate E[n_𝒞 | n_𝒟] as would be done in the EM algorithm. The plots show relative error in this estimate as a function of time, averaged over the 30 nets. For more details, including how we derived the correct answer for comparison, see Section 4.1 in the supplementary material. The sampler converged quickly in all cases, with the more complex CHAIN observation model taking longer than NODES, and noisy observations taking slightly longer than exact ones. We found (not shown) that the biggest source of variability in convergence time was due to individual Bayes nets, while repeat trials using the same net demonstrated very similar behavior.
Concluding Remarks. An important area of future research is to further explore the use of CGMs
within learning algorithms, as well as the limitations of that approach: when is it possible to learn individual models from aggregate data? We believe that the ability to model noisy observations will be
an indispensable tool in real applications. For complex models, convergence may be difficult to diagnose. Some mixing results are known for samplers in related problems with hard constraints [16];
any such results for our model would be a great advance. The use of distributional approximations
for the CGM model and other methods of approximate inference also hold promise.
Acknowledgments. We thank Lise Getoor for pointing out the connection between CGMs and lifted
inference. This research was supported in part by the grant DBI-0905885 from the NSF.
References
[1] D. Sheldon, M. A. S. Elmohamed, and D. Kozen. Collective inference on Markov models for modeling bird migration. In Advances in Neural Information Processing Systems (NIPS 2007), pages 1321–1328, Cambridge, MA, 2008. MIT Press.
[2] Daniel Sheldon. Manipulation of PageRank and Collective Hidden Markov Models. PhD thesis, Cornell University, 2009.
[3] L. Devroye. A simple generator for discrete log-concave distributions. Computing, 39(1):87–91, 1987.
[4] A. Agresti. A survey of exact inference for contingency tables. Statistical Science, 7(1):131–153, 1992.
[5] P. Diaconis and B. Sturmfels. Algebraic algorithms for sampling from conditional distributions. The Annals of Statistics, 26(1):363–397, 1998. ISSN 0090-5364.
[6] A. Dobra. Markov bases for decomposable graphical models. Bernoulli, 9(6):1093–1108, 2003. ISSN 1350-7265.
[7] S. L. Lauritzen. Graphical Models. Oxford University Press, USA, 1996.
[8] D. Poole. First-order probabilistic inference. In Proc. IJCAI, volume 18, pages 985–991, 2003.
[9] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. Introduction to Statistical Relational Learning, page 433, 2007.
[10] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. Proc. 23rd AAAI, pages 1062–1068, 2008.
[11] P. Sen, A. Deshpande, and L. Getoor. Bisimulation-based approximate lifted inference. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 496–505. AUAI Press, 2009.
[12] J. Kisynski and D. Poole. Lifted aggregation in directed first-order probabilistic models. In Proc. IJCAI, volume 9, pages 1922–1929, 2009.
[13] Udi Apsel and Ronen Brafman. Extended lifted inference with joint formulas. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 11–18, Corvallis, Oregon, 2011. AUAI Press.
[14] R. Sundberg. Some results about decomposable (or Markov-type) models for multidimensional contingency tables: distribution of marginals and partitioning of tests. Scandinavian Journal of Statistics, 2(2):71–79, 1975.
[15] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[16] P. Diaconis, S. Holmes, and R. M. Neal. Analysis of a nonreversible Markov chain sampler. The Annals of Applied Probability, 10(3):726–752, 2000.
[17] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society, Series C (Applied Statistics), 41(2):337–348, 1992. ISSN 0035-9254.
[18] K. Murphy. The Bayes net toolbox for MATLAB. Computing Science and Statistics, 33(2):1024–1034, 2001.
Additive Gaussian Processes
David Duvenaud
Department of Engineering
Cambridge University
[email protected]
Hannes Nickisch
MPI for Intelligent Systems
Tübingen, Germany
[email protected]
Carl Edward Rasmussen
Department of Engineering
Cambridge University
[email protected]
Abstract
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions,
each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use
squared-exponential kernels. Hyperparameter learning in this model can be seen
as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but
tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased
interpretability, as well as state-of-the-art predictive power in regression tasks.
1 Introduction
Most statistical regression models in use today are of the form: g(y) = f(x₁) + f(x₂) + ⋯ + f(x_D).
Popular examples include logistic regression, linear regression, and Generalized Linear Models [1].
This family of functions, known as Generalized Additive Models (GAM) [2], are typically easy
to fit and interpret. Some extensions of this family, such as smoothing-splines ANOVA [3], add
terms depending on more than one variable. However, such models generally become intractable
and difficult to fit as the number of terms increases.
At the other end of the spectrum are kernel-based models, which typically allow the response to
depend on all input variables simultaneously. These have the form: y = f (x1 , x2 , . . . , xD ). A
popular example would be a Gaussian process model using a squared-exponential (or Gaussian)
kernel. We denote this model as SE-GP. This model is much more flexible than the GAM, but its
flexibility makes it difficult to generalize to new combinations of input variables.
In this paper, we introduce a Gaussian process model that generalizes both GAMs and the SE-GP.
This is achieved through a kernel which allow additive interactions of all orders, ranging from first
order interactions (as in a GAM) all the way to Dth-order interactions (as in a SE-GP). Although
this kernel amounts to a sum over an exponential number of terms, we show how to compute this
kernel efficiently, and introduce a parameterization which limits the number of hyperparameters to
O(D). A Gaussian process with this kernel function (an additive GP) constitutes a powerful model
that allows one to automatically determine which orders of interaction are important. We show
that this model can significantly improve modeling efficacy, and has major advantages for model
interpretability. This model is also extremely simple to implement, and we provide example code.
We note that a similar breakthrough has recently been made, called Hierarchical Kernel Learning
(HKL) [4]. HKL explores a similar class of models, and sidesteps the possibly exponential number of interaction terms by cleverly selecting only a tractable subset. However, this method suffers
considerably from the fact that cross-validation must be used to set hyperparameters. In addition,
the machinery necessary to train these models is immense. Finally, on real datasets, HKL is outperformed by the standard SE-GP [4].
[Figure 1 (plots omitted). Panels: the one-dimensional kernels k₁(x₁) and k₂(x₂); the 1st-order kernel k₁(x₁) + k₂(x₂); the 2nd-order kernel k₁(x₁)k₂(x₂); and draws f₁(x₁), f₂(x₂) from 1D GP priors, f₁(x₁) + f₂(x₂) from the 1st-order GP prior, and f(x₁, x₂) from the 2nd-order GP prior.]
Figure 1: A first-order additive kernel, and a product kernel. Left: a draw from a first-order additive
kernel corresponds to a sum of draws from one-dimensional kernels. Right: functions drawn from a
product kernel prior have weaker long-range dependencies, and less long-range structure.
2 Gaussian Process Models
Gaussian processes are a flexible and tractable prior over functions, useful for solving regression
and classification tasks [5]. The kind of structure which can be captured by a GP model is mainly
determined by its kernel: the covariance function. One of the main difficulties in specifying a
Gaussian process model is in choosing a kernel which can represent the structure present in the data.
For small to medium-sized datasets, the kernel has a large impact on modeling efficacy.
Figure 1 compares, for two-dimensional functions, a first-order additive kernel with a second-order
kernel. We can see that a GP with a first-order additive kernel is an example of a GAM: Each
function drawn from this model is a sum of orthogonal one-dimensional functions. Compared to
functions drawn from the higher-order GP, draws from the first-order GP have more long-range
structure.
We can expect many natural functions to depend only on sums of low-order interactions. For example, the price of a house or car will presumably be well approximated by a sum of prices of
individual features, such as a sun-roof. Other parts of the price may depend jointly on a small set of
features, such as the size and building materials of a house. Capturing these regularities will mean
that a model can confidently extrapolate to unseen combinations of features.
3 Additive Kernels
We now give a precise definition of additive kernels. We first assign each dimension i ∈ {1, …, D} a one-dimensional base kernel k_i(x_i, x_i′). We then define the first-order, second-order and nth-order additive kernels as:
k_{add_1}(x, x') = \sigma_1^2 \sum_{i=1}^{D} k_i(x_i, x_i')   (1)

k_{add_2}(x, x') = \sigma_2^2 \sum_{i=1}^{D} \sum_{j=i+1}^{D} k_i(x_i, x_i')\, k_j(x_j, x_j')   (2)

k_{add_n}(x, x') = \sigma_n^2 \sum_{1 \le i_1 < i_2 < \cdots < i_n \le D} \; \prod_{d=1}^{n} k_{i_d}(x_{i_d}, x_{i_d}')   (3)
where D is the dimension of our input space, and σ_n² is the variance assigned to all nth-order interactions. The nth covariance function is a sum of (D choose n) terms. In particular, the Dth-order additive covariance function has (D choose D) = 1 term, a product of each dimension's covariance function:

k_{add_D}(x, x') = \sigma_D^2 \prod_{d=1}^{D} k_d(x_d, x_d')   (4)
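A direct transcription of Equations (1)-(4) (a sketch of ours; the sum over index sets is exponential in D, which is exactly the cost that Section 3.3 removes):

```python
import itertools
import numpy as np

def additive_kernel_order_n(x, xp, n, base_kernel, sigma_n=1.0):
    """Eq. (3): sum over all n-subsets of dimensions of the product of
    one-dimensional base kernels. O(choose(D, n)) -- for exposition only."""
    D = len(x)
    z = [base_kernel(x[d], xp[d]) for d in range(D)]
    total = sum(np.prod([z[d] for d in idx])
                for idx in itertools.combinations(range(D), n))
    return sigma_n ** 2 * total

se = lambda a, b, ell=1.0: np.exp(-(a - b) ** 2 / (2 * ell ** 2))
x, xp = np.array([0.1, 0.5, -0.3]), np.array([0.0, 1.0, 0.2])
for n in (1, 2, 3):
    print(n, additive_kernel_order_n(x, xp, n, se))
```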
In the case where each base kernel is a one-dimensional squared-exponential kernel, the Dth-order term corresponds to the multivariate squared-exponential kernel:

k_{add_D}(x, x') = \sigma_D^2 \prod_{d=1}^{D} k_d(x_d, x_d') = \sigma_D^2 \prod_{d=1}^{D} \exp\!\left(-\frac{(x_d - x_d')^2}{2 l_d^2}\right) = \sigma_D^2 \exp\!\left(-\sum_{d=1}^{D} \frac{(x_d - x_d')^2}{2 l_d^2}\right)   (5)

also commonly known as the Gaussian kernel. The full additive kernel is a sum of the additive kernels of all orders.

3.1 Parameterization
The only design choice necessary in specifying an additive kernel is the selection of a one-dimensional base kernel for each input dimension. Any parameters (such as length-scales) of the base kernels can be learned as usual by maximizing the marginal likelihood of the training data. In addition to the hyperparameters of each dimension-wise kernel, additive kernels are equipped with a set of D hyperparameters σ₁², …, σ_D² controlling how much variance we assign to each order of interaction. These "order variance" hyperparameters have a useful interpretation: the dth order variance hyperparameter controls how much of the target function's variance comes from interactions of the dth order. Table 1 shows examples of normalized order variance hyperparameters learned on real datasets.
Table 1: Relative variance contribution of each order in the additive model, on different datasets. Here, the maximum order of interaction is set to 10, or smaller if the input dimension is less than 10. Values are normalized to sum to 100. (Blank cells: the dataset has fewer input dimensions than the order.)

Dataset        1st    2nd    3rd    4th    5th    6th    7th    8th    9th    10th
pima           0.1    0.1    0.1    0.3    1.5    96.4   1.4    0.0
liver          0.0    0.2    99.7   0.1    0.0    0.0
heart          77.6   0.0    0.0    0.0    0.1    0.1    0.1    0.1    0.1    22.0
concrete       70.6   13.3   13.8   2.3    0.0    0.0    0.0    0.0
pumadyn-8nh    0.0    0.1    0.1    0.1    0.1    0.1    0.1    99.5
servo          58.7   27.4   0.0    13.9
housing        0.1    0.6    80.6   1.4    1.8    0.8    0.7    0.8    0.6    12.7
On different datasets, the dominant order of interaction estimated by the additive model varies
widely. An additive GP with all of its variance coming from the 1st order is equivalent to a GAM;
an additive GP with all its variance coming from the Dth order is equivalent to a SE-GP.
Because the hyperparameters can specify which degrees of interaction are important, the additive
GP is an extremely general model. If the function we are modeling is, in fact, decomposable into a
sum of low-dimensional functions, our model can discover this fact (see Figure 5) and exploit it. If
this is not the case, the hyperparameters can specify a suitably flexible model.
3.2 Interpretability

As noted by Plate [6], one of the chief advantages of additive models such as GAM is their interpretability. Plate also notes that by allowing high-order interactions as well as low-order interactions, one can trade off interpretability with predictive accuracy. In the case where the hyperparameters indicate that most of the variance in a function can be explained by low-order interactions, it is useful and easy to plot the corresponding low-order functions, as in Figure 2.
[Figure 2 (plots omitted). Panels: Strength vs. Water; Strength vs. Age; and a two-dimensional posterior surface over Water and Age.]

Figure 2: Low-order functions on the concrete dataset. Left, Centre: By considering only first-order terms of the additive kernel, we recover a form of Generalized Additive Model, and can plot the corresponding 1-dimensional functions. Green points indicate the original data; blue points are data after the mean contribution from the other dimensions' first-order terms has been subtracted. The black line is the posterior mean of a GP with only one term in its kernel. Right: The posterior mean of a GP with only one second-order term in its kernel.
3.3 Efficient Evaluation of Additive Kernels

An additive kernel over D inputs with interactions up to order n has O(2^n) terms. Naïvely summing over these terms quickly becomes intractable. In this section, we show how one can evaluate the sum over all terms in O(D²).
The nth-order additive kernel corresponds to the nth elementary symmetric polynomial [7][8], which we denote e_n. For example: if x has 4 input dimensions (D = 4), and if we let z_i = k_i(x_i, x_i′), then

k_{add_1}(x, x′) = e_1(z_1, z_2, z_3, z_4) = z_1 + z_2 + z_3 + z_4
k_{add_2}(x, x′) = e_2(z_1, z_2, z_3, z_4) = z_1 z_2 + z_1 z_3 + z_1 z_4 + z_2 z_3 + z_2 z_4 + z_3 z_4
k_{add_3}(x, x′) = e_3(z_1, z_2, z_3, z_4) = z_1 z_2 z_3 + z_1 z_2 z_4 + z_1 z_3 z_4 + z_2 z_3 z_4
k_{add_4}(x, x′) = e_4(z_1, z_2, z_3, z_4) = z_1 z_2 z_3 z_4
The Newton-Girard formulae give an efficient recursive form for computing these polynomials. If we define s_k to be the kth power sum, s_k(z_1, …, z_D) = Σ_{i=1}^{D} z_i^k, then

k_{add_n}(x, x') = e_n(z_1, \ldots, z_D) = \frac{1}{n} \sum_{k=1}^{n} (-1)^{k-1} e_{n-k}(z_1, \ldots, z_D)\, s_k(z_1, \ldots, z_D)   (6)
where e₀ ≜ 1. The Newton-Girard formulae have time complexity O(D²), while computing a sum over an exponential number of terms.
Conveniently, we can use the same trick to efficiently compute all of the necessary derivatives of the additive kernel with respect to the base kernels. We merely need to remove the kernel of interest from each term of the polynomials:

\frac{\partial k_{add_n}}{\partial z_j} = e_{n-1}(z_1, \ldots, z_{j-1}, z_{j+1}, \ldots, z_D)   (7)

This trick allows us to optimize the base kernel hyperparameters with respect to the marginal likelihood.
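The recursion (6) and derivative trick (7) take only a few lines; the assertion checks the recursion against naive enumeration (this mirrors the math, not the authors' released implementation):

```python
import itertools
import numpy as np

def elementary_symmetric(z, n_max):
    """e_0..e_{n_max} via the Newton-Girard recursion, Eq. (6): O(D * n_max)."""
    s = [float(np.sum(np.asarray(z) ** k)) for k in range(n_max + 1)]
    e = [1.0]                                     # e_0 = 1
    for n in range(1, n_max + 1):
        e.append(sum((-1) ** (k - 1) * e[n - k] * s[k] for k in range(1, n + 1)) / n)
    return e

def naive_e(z, n):
    return sum(np.prod([z[i] for i in idx])
               for idx in itertools.combinations(range(len(z)), n))

z = [0.9, 0.2, 0.5, 0.7]
e = elementary_symmetric(z, len(z))
assert all(np.isclose(e[n], naive_e(z, n)) for n in range(1, len(z) + 1))

# Eq. (7): the derivative of e_n w.r.t. z_j is e_{n-1} with z_j removed.
j, n = 1, 3
deriv = elementary_symmetric(z[:j] + z[j + 1:], n - 1)[n - 1]
print(e, deriv)
```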
3.4 Computation

The computational cost of evaluating the Gram matrix of a product kernel (such as the SE kernel) is O(N²D), while the cost of evaluating the Gram matrix of the additive kernel is O(N²DR), where R is the maximum degree of interaction allowed (up to D). In higher dimensions, this can be a significant cost, even relative to the fixed O(N³) cost of inverting the Gram matrix. However, as our experiments show, typically only the first few orders of interaction are important for modeling a given function; hence if one is computationally limited, one can simply limit the maximum degree of interaction without losing much accuracy.
[Figure 3 (diagrams omitted). Panels, left to right: HKL kernel; GP-GAM kernel; Squared-exp GP kernel; Additive GP kernel. Nodes are interaction terms over inputs {1, 2, 3, 4}.]
Figure 3: A comparison of different models. Nodes represent different interaction terms, ranging from first-order to fourth-order interactions. Far left: HKL can select a hull of interaction terms, but must use a pre-determined weighting over those terms. Far right: the additive GP model can weight each order of interaction separately. Neither the HKL nor the additive model dominates the other in terms of flexibility; however, the GP-GAM and the SE-GP are special cases of additive GPs.
Additive Gaussian processes are particularly appealing in practice because their use requires only the specification of the base kernel. All other aspects of GP inference remain the same. All of the experiments in this paper were performed using the standard GPML toolbox¹; code to perform all experiments is available at the author's website.²
Related Work
Plate [6] constructs a form of additive GP, but using only the first-order and Dth order terms. This
model is motivated by the desire to trade off the interpretability of first-order models, with the flexibility of full-order models. Our experiments show that often, the intermediate degrees of interaction
contribute most of the variance.
A related functional ANOVA GP model [9] decomposes the mean function into a weighted sum of
GPs. However, the effect of a particular degree of interaction cannot be quantified by that approach.
Also, computationally, the Gibbs sampling approach used in [9] is disadvantageous.
Christoudias et al. [10] previously showed how mixtures of kernels can be learnt by gradient descent
in the Gaussian process framework. They call this Bayesian localized multiple kernel learning.
However, their approach learns a mixture over a small, fixed set of kernels, while our method learns
a mixture over all possible products of those kernels.
4.1 Hierarchical Kernel Learning
Bach [4] uses a regularized optimization framework to learn a weighted sum over an exponential number of kernels which can be computed in polynomial time. The subsets of kernels considered by this method are restricted to be a hull of kernels.³ Given each dimension's kernel, and a pre-defined weighting over all terms, HKL performs model selection by searching over hulls of interaction terms. In [4], Bach also fixes the relative weighting between orders of interaction with a single term α, computing the sum over all orders by:

k_a(x, x') = v_D^2 \prod_{d=1}^{D} \big(1 + \alpha\, k_d(x_d, x_d')\big)   (8)

which has computational complexity O(D). However, this formulation forces the weight of all nth-order terms to be weighted by α^n.
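The tied weighting in Equation (8) is easy to see numerically: expanding the product collects the nth-order terms e_n with weight α^n, whereas the additive kernel gives each order its own free σ_n². A quick check (ours):

```python
import numpy as np

def elem_sym(z, n_max):
    """Elementary symmetric polynomials e_0..e_{n_max} via Newton-Girard."""
    s = [float(np.sum(z ** k)) for k in range(n_max + 1)]
    e = [1.0]
    for n in range(1, n_max + 1):
        e.append(sum((-1) ** (k - 1) * e[n - k] * s[k] for k in range(1, n + 1)) / n)
    return e

z = np.array([0.9, 0.2, 0.5, 0.7])   # base kernel evaluations k_d(x_d, x_d')
alpha, v2 = 0.3, 1.0
e = elem_sym(z, len(z))

hkl = v2 * np.prod(1 + alpha * z)                           # Eq. (8)
assert np.isclose(hkl, v2 * sum(alpha ** n * e[n] for n in range(len(z) + 1)))
# The additive kernel replaces the fixed weights alpha**n by free sigma_n**2:
sigma2 = [1.0, 0.1, 0.1, 0.5]
print(hkl, sum(s2 * e[n + 1] for n, s2 in enumerate(sigma2)))
```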
Figure 3 contrasts the HKL hull-selection method with the Additive GP hyperparameter-learning method. Neither method dominates the other in flexibility.

¹ Available at http://www.gaussianprocess.org/gpml/code/
² http://mlg.eng.cam.ac.uk/duvenaud/
³ In the setting we are considering in this paper, a hull can be defined as a subset of all terms such that if the term ∏_{j∈J} k_j(x, x′) is included in the subset, then so are all terms ∏_{j∈J\{i}} k_j(x, x′), for all i ∈ J. For details, see [4].

The main difficulty with the approach
of [4] is that hyperparameters are hard to set other than by cross-validation. In contrast, our method optimizes the hyperparameters of each dimension's base kernel, as well as the relative weighting of each order of interaction.
4.2 ANOVA Procedures
Vapnik [11] introduces the support vector ANOVA decomposition, which has the same form as our additive kernel. However, they recommend approximating the sum over all D orders with only one term "of appropriate order", presumably because of the difficulty of setting the hyperparameters of an SVM. Stitson et al. [12] performed experiments which favourably compared the support vector ANOVA decomposition to polynomial and spline kernels. They too allowed only one order to be active, and set hyperparameters by cross-validation.
A closely related procedure from the statistics literature is smoothing-splines ANOVA (SS-ANOVA)
[3]. An SS-ANOVA model is estimated as a weighted sum of splines along each dimension, plus
a sum of splines over all pairs of dimensions, all triplets, etc, with each individual interaction term
having a separate weighting parameter. Because the number of terms to consider grows exponentially in the order, in practice, only terms of first and second order are usually considered. Learning
in SS-ANOVA is usually done via penalized-maximum likelihood with a fixed sparsity hyperparameter.
In contrast to these procedures, our method can easily include all D orders of interaction, each
weighted by a separate hyperparameter. As well, we can learn kernel hyperparameters individually
per input dimension, allowing automatic relevance determination to operate.
4.3 Non-local Interactions
By far the most popular kernels for regression and classification tasks on continuous data are the squared-exponential (Gaussian) kernel and the Matérn kernels. These kernels depend only on the scaled Euclidean distance between two points, both having the form k(x, x') = f\big(\sum_{d=1}^{D} (x_d - x_d')^2 / l_d^2\big). Bengio et al. [13] argue that models based on squared-exponential kernels
are particularly susceptible to the curse of dimensionality. They emphasize that the locality of the
kernels means that these models cannot capture non-local structure. They argue that many functions
that we care about have such structure. Methods based solely on local kernels will require training
examples at all combinations of relevant inputs.
[Figure 4 (isocontour plots omitted). Panels: 1st-order interactions k₁ + k₂ + k₃; 2nd-order interactions k₁k₂ + k₂k₃ + k₁k₃; 3rd-order interactions k₁k₂k₃ (the squared-exp kernel); all interactions (the additive kernel).]

Figure 4: Isocontours of additive kernels in 3 dimensions. The third-order kernel only considers nearby points relevant, while the lower-order kernels allow the output to depend on distant points, as long as they share one or more input value.
Additive kernels have a much more complex structure, and allow extrapolation based on distant
parts of the input space, without spreading the mass of the kernel over the whole space. For example,
additive kernels of the second order allow strong non-local interactions between any points which are
similar in any two input dimensions. Figure 4 provides a geometric comparison between squared-exponential kernels and additive kernels in 3 dimensions.
5 Experiments

5.1 Synthetic Data
Because additive kernels can discover non-local structure in data, they are exceptionally well-suited
to problems where local interpolation fails. Figure 5 shows a dataset which demonstrates this feature
[Figure 5 (plots omitted). Panels: the true function and training-data locations; the squared-exp GP posterior mean; the additive GP posterior mean; and the additive GP's inferred first-order functions f₁(x₁), f₂(x₂).]
Figure 5: Long-range inference in functions with additive structure.
of additive GPs, consisting of data drawn from a sum of two axis-aligned sine functions. The
training set is restricted to a small, L-shaped area; the test set contains a peak far from the training
set locations. The additive GP recovered both of the original sine functions (shown in green), and
inferred correctly that most of the variance in the function comes from first-order interactions. The
ability of additive GPs to discover long-range structure suggests that this model may be well-suited
to deal with covariate-shift problems.
5.2 Experimental Setup
On a diverse collection of datasets, we compared five different models. In the results tables below,
GP Additive refers to a GP using the additive kernel with squared-exp base kernels. For speed,
we limited the maximum order of interaction in the additive kernels to 10. GP-GAM denotes an
additive GP model with only first-order interactions. GP Squared-Exp is a GP model with a squared-exponential ARD kernel. HKL⁴ was run using the all-subsets kernel, which corresponds to the same
set of kernels as considered by the additive GP with a squared-exp base kernel.
For all GP models, we fit hyperparameters by the standard method of maximizing training-set
marginal likelihood, using L-BFGS [14] for 500 iterations, allowing five random restarts. In addition
to learning kernel hyperparameters, we fit a constant mean function to the data. In the classification
experiments, GP inference was done using Expectation Propagation [15].
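As a concrete, minimal stand-in for this setup (ours, in NumPy rather than the GPML toolbox, with fixed rather than learned hyperparameters), GP regression with a first-order additive kernel, i.e. a GP-GAM, looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

def k_add1(X, Xp, ells, sigma1=1.0):
    """First-order additive kernel: sigma1**2 * sum_d SE(x_d, x'_d)."""
    K = np.zeros((X.shape[0], Xp.shape[0]))
    for d in range(X.shape[1]):
        sq = (X[:, d][:, None] - Xp[:, d][None, :]) ** 2
        K += np.exp(-sq / (2.0 * ells[d] ** 2))
    return sigma1 ** 2 * K

# Toy additive data, in the spirit of the Figure 5 experiment.
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sin(2 * X[:, 0]) + np.sin(2 * X[:, 1]) + 0.05 * rng.standard_normal(40)
Xs = rng.uniform(-2, 2, size=(5, 2))

ells, noise = [1.0, 1.0], 0.05
K = k_add1(X, X, ells) + noise ** 2 * np.eye(len(X))
mean = k_add1(Xs, X, ells) @ np.linalg.solve(K, y)   # GP posterior mean
print(np.round(mean, 2))
print(np.round(np.sin(2 * Xs[:, 0]) + np.sin(2 * Xs[:, 1]), 2))
```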
5.3 Results
Tables 2, 3, 4 and 5 show mean performance across 10 train-test splits. Because HKL does not
specify a noise model, it could not be included in the likelihood comparisons.
Table 2: Regression Mean Squared Error

Method               bach    concrete   pumadyn-8nh   servo   housing
Linear Regression    1.031   0.404      0.641         0.523   0.289
GP GAM               1.302   0.142      0.602         0.281   0.179
HKL                  0.199   0.147      0.346         0.199   0.151
GP Squared-exp       0.045   0.159      0.317         0.124   0.092
GP Additive          0.045   0.097      0.317         0.110   0.102
The model with best performance on each dataset is in bold, along with all other models that were not significantly different under a paired t-test. The additive model never performs significantly worse than any other model, and sometimes performs significantly better than all other models.

⁴ Code for HKL available at http://www.di.ens.fr/~fbach/hkl/
Table 3: Regression Negative Log Likelihood

Method               bach     concrete   pumadyn-8nh   servo   housing
Linear Regression    2.430    1.403      1.881         1.678   1.052
GP GAM               1.746    0.433      1.167         0.800   0.563
GP Squared-exp       -0.131   0.412      0.843         0.425   0.208
GP Additive          -0.131   0.181      0.843         0.309   0.161

Table 4: Classification Percent Error

Method               breast   pima     sonar    ionosphere   liver    heart
Logistic Regression  7.611    24.392   26.786   16.810       45.060   16.082
GP GAM               5.189    22.419   15.786   8.524        29.842   16.839
HKL                  5.377    24.261   21.000   9.119        27.270   18.975
GP Squared-exp       4.734    23.722   16.357   6.833        31.237   20.642
GP Additive          5.566    23.076   15.714   7.976        30.060   18.496
Table 5: Classification Negative Log Likelihood

Method               breast   pima    sonar   ionosphere   liver   heart
Logistic Regression  0.247    0.560   4.609   0.878        0.864   0.575
GP GAM               0.163    0.461   0.377   0.312        0.569   0.393
GP Squared-exp       0.146    0.478   0.425   0.236        0.601   0.480
GP Additive          0.150    0.466   0.409   0.295        0.588   0.415
The difference between all methods is larger in the case of regression experiments. The performance of
HKL is consistent with the results in [4], performing competitively but slightly worse than SE-GP.
Because the additive GP is a superset of both the GP-GAM model and the SE-GP model, instances
where the additive GP model performs significantly worse are presumably due to over-fitting, or due
to the hyperparameter optimization becoming stuck in a local maximum. Additive GP performance
can be expected to benefit significantly from integrating out the kernel hyperparameters.
6 Conclusion
We present additive Gaussian processes: a simple family of models which generalizes two widely-used
framework. Our experiments indicate that such additive structure is present in real datasets, allowing
our model to perform better than standard GP models. In the case where no such structure exists,
our model can recover arbitrarily flexible models, as well.
In addition to improving modeling efficacy, the additive GP also improves model interpretability:
the order variance hyperparameters indicate which sorts of structure are present in our model.
Compared to HKL, which is the only other tractable procedure able to capture the same types of
structure, our method benefits from being able to learn individual kernel hyperparameters, as well
as the weightings of different orders of interaction. Our experiments show that additive GPs are a
state-of-the-art regression model.
Acknowledgments
The authors would like to thank John J. Chew and Guillaume Obozinski for their helpful comments.
References
[1] J. A. Nelder and R. W. M. Wedderburn. Generalized linear models. Journal of the Royal Statistical Society, Series A (General), 135(3):370–384, 1972.
[2] T. J. Hastie and R. J. Tibshirani. Generalized Additive Models. Chapman & Hall/CRC, 1990.
[3] G. Wahba. Spline Models for Observational Data. Society for Industrial Mathematics, 1990.
[4] Francis Bach. High-dimensional non-linear variable selection through hierarchical kernel learning. CoRR, abs/0909.0844, 2009.
[5] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA, USA, 2006.
[6] T. A. Plate. Accuracy versus interpretability in flexible modeling: Implementing a tradeoff using Gaussian process models. Behaviormetrika, 26:29–50, 1999.
[7] I. G. Macdonald. Symmetric Functions and Hall Polynomials. Oxford University Press, USA, 1998.
[8] R. P. Stanley. Enumerative Combinatorics. Cambridge University Press, 2001.
[9] C. G. Kaufman and S. R. Sain. Bayesian functional ANOVA modeling using Gaussian process prior distributions. Bayesian Analysis, 5(1):123–150, 2010.
[10] M. Christoudias, R. Urtasun, and T. Darrell. Bayesian localized multiple kernel learning. 2009.
[11] V. N. Vapnik. Statistical Learning Theory, volume 2. Wiley, New York, 1998.
[12] M. Stitson, A. Gammerman, V. Vapnik, V. Vovk, C. Watkins, and J. Weston. Support vector regression with ANOVA decomposition kernels. Advances in Kernel Methods: Support Vector Learning, pages 285–292, 1999.
[13] Y. Bengio, O. Delalleau, and N. Le Roux. The curse of highly variable functions for local kernel machines. Advances in Neural Information Processing Systems, 18, 2006.
[14] J. Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782, 1980.
[15] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, volume 17, pages 362–369, 2001.
paired:1 impact:1 regression:15 breast:2 expectation:2 iteration:1 kernel:104 represent:2 sometimes:1 achieved:1 addition:4 operate:1 fbach:1 comment:1 seperately:1 call:1 intermediate:1 bengio:2 easy:2 split:1 superset:1 xj:1 fit:4 zi:1 hastie:1 wahba:1 tradeoff:1 shift:1 motivated:1 e3:1 york:1 generally:1 useful:3 se:9 amount:1 http:3 zj:3 estimated:2 per:1 correctly:1 tibshirani:1 blue:1 zd:5 diverse:1 gammerman:1 hyperparameter:6 mat:1 drawn:4 neither:2 anova:11 nocedal:1 merely:1 sum:20 run:1 powerful:1 fourth:1 uncertainty:1 family:3 x0j:1 draw:7 capturing:1 ki:4 strength:3 x2:10 nearby:1 aspect:1 speed:1 extremely:2 discoverable:1 performing:1 ern:1 department:2 combination:3 kd:3 cleverly:1 smaller:1 remain:1 across:1 slightly:1 appealing:1 explained:1 restricted:2 heart:3 computationally:2 previously:1 tractable:5 end:1 generalizes:2 available:3 competitively:1 gam:15 hierarchical:4 appropriate:1 subtracted:1 original:2 denotes:1 include:2 newton:3 exploit:1 k1:7 approximating:1 society:2 usual:1 gradient:1 kth:1 distance:1 separate:2 thank:1 macdonald:1 tue:1 vd:1 enumerative:1 argue:2 considers:1 urtasun:1 water:2 code:4 length:1 z3:12 difficult:2 susceptible:1 setup:1 pima:3 negative:2 design:1 perform:2 allowing:4 datasets:7 descent:1 precise:1 inferred:1 david:1 inverting:1 pair:1 z1:17 learned:2 dth:7 able:2 usually:2 below:1 sparsity:1 confidently:1 hkl:16 interpretability:8 green:2 royal:1 power:2 difficulty:3 natural:1 regularized:1 force:1 nth:6 improve:1 axis:1 kj:3 prior:7 literature:1 geometric:1 relative:4 expect:1 versus:1 localized:2 age:2 validation:3 degree:5 consistent:1 share:1 penalized:1 rasmussen:2 allow:5 weaker:1 benefit:2 dimension:18 evaluating:2 gram:3 author:2 made:1 commonly:1 collection:1 stuck:1 far:4 approximate:1 emphasize:1 active:1 summing:1 nelder:1 xi:4 spectrum:1 continuous:1 triplet:1 decomposes:2 chief:1 table:8 sk:3 sonar:2 learn:3 improving:1 complex:1 main:2 whole:1 noise:1 hyperparameters:20 n2:2 allowed:2 girard:2 x1:10 en:5 wiley:1 fails:1 exponential:11 house:2 watkins:1 weighting:6 third:1 learns:2 e4:1 formula:2 covariate:1 svm:1 ionosphere:2 dominates:1 intractable:2 exists:1 vapnik:3 corr:1 widelyused:1 locality:1 suited:2 simply:1 conveniently:1 desire:1 corresponds:4 ma:1 weston:1 sized:1 price:3 exceptionally:1 hard:1 included:2 determined:2 vovk:1 called:1 squaredexponential:2 experimental:1 select:1 guillaume:1 support:4 relevance:1 evaluate:1 extrapolate:1 |
Universal low-rank matrix recovery
from Pauli measurements
Yi-Kai Liu
Applied and Computational Mathematics Division
National Institute of Standards and Technology
Gaithersburg, MD, USA
[email protected]
Abstract
We study the problem of reconstructing an unknown matrix M of rank r and dimension d using O(rd poly log d) Pauli measurements. This has applications in
quantum state tomography, and is a non-commutative analogue of a well-known
problem in compressed sensing: recovering a sparse vector from a few of its
Fourier coefficients.
We show that almost all sets of O(rd log⁶ d) Pauli measurements satisfy the rank-r restricted isometry property (RIP). This implies that M can be recovered from a fixed ("universal") set of Pauli measurements, using nuclear-norm minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error. A similar result holds for any class of measurements that use an orthonormal operator basis whose elements have small operator norm. Our proof uses Dudley's inequality for Gaussian processes, together with bounds on covering numbers obtained via entropy duality.
1 Introduction
Low-rank matrix recovery is the following problem: let M be some unknown matrix of dimension d and rank r ≪ d, and let A_1, A_2, ..., A_m be a set of measurement matrices; then can one reconstruct M from its inner products tr(M†A_1), tr(M†A_2), ..., tr(M†A_m)? This problem has many applications in machine learning [1, 2], e.g., collaborative filtering (the Netflix problem). Remarkably, it turns out that for many useful choices of measurement matrices, low-rank matrix recovery is possible, and can even be done efficiently. For example, when the A_i are Gaussian random matrices, then it is known that m = O(rd) measurements are sufficient to uniquely determine M, and furthermore, M can be reconstructed by solving a convex program (minimizing the nuclear norm) [3, 4, 5]. Another example is the "matrix completion" problem, where the measurements return a random subset of matrix elements of M; in this case, m = O(rd poly log d) measurements suffice, provided that M satisfies some "incoherence" conditions [6, 7, 8, 9, 10].
The focus of this paper is on a different class of measurements, known as Pauli measurements. Here, the A_i are randomly chosen elements of the Pauli basis, a particular orthonormal basis of C^{d×d}. The Pauli basis is a non-commutative analogue of the Fourier basis in C^d; thus, low-rank matrix recovery using Pauli measurements can be viewed as a generalization of the idea of compressed sensing of sparse vectors using their Fourier coefficients [11, 12]. In addition, this problem has applications in quantum state tomography, the task of learning an unknown quantum state by performing measurements [13]. This is because most quantum states of physical interest are accurately described by density matrices that have low rank; and Pauli measurements are especially easy to carry out in an experiment (due to the tensor product structure of the Pauli basis).
In this paper we show stronger results on low-rank matrix recovery from Pauli measurements. Previously [13, 8], it was known that, for every rank-r matrix M ∈ C^{d×d}, almost all choices of m = O(rd poly log d) random Pauli measurements will lead to successful recovery of M. Here we show a stronger statement: there is a fixed ("universal") set of m = O(rd poly log d) Pauli measurements, such that for all rank-r matrices M ∈ C^{d×d}, we have successful recovery.¹ We do this by showing that the random Pauli sampling operator obeys the "restricted isometry property" (RIP). Intuitively, RIP says that the sampling operator is an approximate isometry, acting on the set of all low-rank matrices. In geometric terms, it says that the sampling operator embeds the manifold of low-rank matrices into O(rd poly log d) dimensions, with low distortion in the 2-norm.
RIP for low-rank matrices is a very strong property, and prior to this work, it was only known to hold for very unstructured types of random measurements, such as Gaussian measurements [3], which are unsuitable for most applications. RIP was known to fail in the matrix completion case, and whether it held for Pauli measurements was an open question. Once we have established RIP for Pauli measurements, we can use known results [3, 4, 5] to show low-rank matrix recovery from a universal set of Pauli measurements. In particular, using [5], we can get nearly-optimal universal bounds on the error of the reconstructed density matrix, when the data are noisy; and we can even get bounds on the recovery of arbitrary (not necessarily low-rank) matrices. These RIP-based bounds are qualitatively stronger than those obtained using "dual certificates" [14] (though the latter technique is applicable in some situations where RIP fails).
In the context of quantum state tomography, this implies that, given a quantum state that consists of a low-rank component M_r plus a residual full-rank component M_c, we can reconstruct M_r up to an error that is not much larger than M_c. In particular, let ‖·‖_* denote the nuclear norm, and let ‖·‖_F denote the Frobenius norm. Then the error can be bounded in the nuclear norm by O(‖M_c‖_*) (assuming noiseless data), and it can be bounded in the Frobenius norm by O(‖M_c‖_F poly log d) (which holds even with noisy data²). This shows that our reconstruction is nearly as good as the best rank-r approximation to M (which is given by the truncated SVD). In addition, a completely arbitrary quantum state can be reconstructed up to an error of O(1/√r) in Frobenius norm. Lastly, the RIP gives some insight into the optimal design of tomography experiments, in particular, the tradeoff between the number of measurement settings (which is essentially m), and the number of repetitions of the experiment at each setting (which determines the statistical noise that enters the data) [15].
These results can be generalized beyond the class of Pauli measurements. Essentially, one can replace the Pauli basis with any orthonormal basis of C^{d×d} that is incoherent, i.e., whose elements have small operator norm (of order O(1/√d), say); a similar generalization was noted in the earlier results of [8]. Also, our proof shows that the RIP actually holds in a slightly stronger sense: it holds not just for all rank-r matrices, but for all matrices X that satisfy ‖X‖_* ≤ √r‖X‖_F.
To prove this result, we combine a number of techniques that have appeared elsewhere. RIP results were previously known for Gaussian measurements and some of their close relatives [3]. Also, restricted strong convexity (RSC), a similar but somewhat weaker property, was recently shown in the context of the matrix completion problem (with additional "non-spikiness" conditions) [10]. These results follow from covering arguments (i.e., using a concentration inequality to upper-bound the failure probability on each individual low-rank matrix X, and then taking the union bound over all such X). Showing RIP for Pauli measurements seems to be more delicate, however. Pauli measurements have more structure and less randomness, so the concentration of measure phenomena are weaker, and the union bound no longer gives the desired result.
Instead, one must take into account the favorable correlations between the behavior of the sampling operator on different matrices: intuitively, if two low-rank matrices M and M′ have overlapping supports, then good behavior on M is positively correlated with good behavior on M′. This can be done by transforming the problem into a Gaussian process, and using Dudley's entropy bound. This is the same approach used in classical compressed sensing, to show RIP for Fourier measurements [12, 11]. The key difference is that in our case, the Gaussian process is indexed by low-rank matrices, rather than sparse vectors. To bound the correlations in this process, one then needs to bound the covering numbers of the nuclear norm ball (of matrices), rather than the ℓ₁ ball (of vectors). This requires a different technique, using entropy duality, which is due to Guédon et al. [16]. (See also the related work in [17].)
¹ Note that in the universal result, m is slightly larger, by a factor of poly log d.
² However, this bound is not universal.
As a side note, we remark that matrix recovery can sometimes fail because there exist large sets of up to d Pauli matrices that all commute, i.e., they have a simultaneous eigenbasis ψ_1, ..., ψ_d. (These ψ_i are of interest in quantum information: they are called stabilizer states [18].) If one were to measure such a set of Paulis, one would gain complete knowledge about the diagonal elements of the unknown matrix M in the ψ_i basis, but one would learn nothing about the off-diagonal elements. This is reminiscent of the difficulties that arise in matrix completion. However, in our case, these pathological cases turn out to be rare, since it is unlikely that a random subset of Pauli matrices will all commute.
Finally, we note that there is a large body of related work on estimating a low-rank matrix by solving
a regularized convex program; see, e.g., [19, 20].
This paper is organized as follows. In section 2, we state our results precisely, and discuss some
specific applications to quantum state tomography. In section 3 we prove the RIP for Pauli matrices,
and in section 4 we discuss some directions for future work. Some technical details appear in
sections A and B in the supplementary material [21].
Notation: For vectors, ‖·‖₂ denotes the ℓ₂ norm. For matrices, ‖·‖_p denotes the Schatten p-norm, ‖X‖_p = (Σ_i σ_i(X)^p)^{1/p}, where σ_i(X) are the singular values of X. In particular, ‖·‖_* = ‖·‖₁ is the trace or nuclear norm, ‖·‖_F = ‖·‖₂ is the Frobenius norm, and ‖·‖ = ‖·‖_∞ is the operator norm. Finally, for matrices, A† is the adjoint of A, and (·,·) is the Hilbert–Schmidt inner product, (A, B) = tr(A†B). Calligraphic letters denote superoperators acting on matrices. Also, |A)(A| is the superoperator that maps every matrix X ∈ C^{d×d} to the matrix A tr(A†X).
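To make these norm conventions concrete, here is a minimal NumPy sketch (the function name is ours, not from the paper) computing Schatten p-norms from the singular value decomposition:

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm: the l_p norm of the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    if np.isinf(p):
        return s[0]                          # operator norm = largest singular value
    return (s ** p).sum() ** (1.0 / p)

X = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
nuclear   = schatten_norm(X, 1)        # trace / nuclear norm  ||X||_*
frobenius = schatten_norm(X, 2)        # Frobenius norm        ||X||_F
operator  = schatten_norm(X, np.inf)   # operator norm         ||X||

# Sanity checks against NumPy's built-in matrix norms:
assert np.isclose(frobenius, np.linalg.norm(X, 'fro'))
assert np.isclose(operator, np.linalg.norm(X, 2))
assert np.isclose(nuclear, np.linalg.norm(X, 'nuc'))
```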
2 Our Results
We will consider the following approach to low-rank matrix recovery. Let M ∈ C^{d×d} be an unknown matrix of rank at most r. Let W_1, ..., W_{d²} be an orthonormal basis for C^{d×d}, with respect to the inner product (A, B) = tr(A†B). We choose m basis elements, S_1, ..., S_m, iid uniformly at random from {W_1, ..., W_{d²}} ("sampling with replacement"). We then observe the coefficients (S_i, M). From this data, we want to reconstruct M.
For this to be possible, the measurement matrices W_i must be "incoherent" with respect to M. Roughly speaking, this means that the inner products (W_i, M) must be small. Formally, we say that the basis W_1, ..., W_{d²} is incoherent if the W_i all have small operator norm,

    ‖W_i‖ ≤ K/√d,    (1)

where K is a constant.³ (This assumption was also used in [8].)
Before proceeding further, let us sketch the connection between this problem and quantum state tomography. Consider a system of n qubits, with Hilbert space dimension d = 2^n. We want to learn the state of the system, which is described by a density matrix ρ ∈ C^{d×d}; ρ is positive semidefinite, has trace 1, and has rank r ≪ d when the state is nearly pure. There is a class of convenient (and experimentally feasible) measurements, which are described by Pauli matrices (also called Pauli observables). These are matrices of the form P_1 ⊗ ··· ⊗ P_n, where ⊗ denotes the tensor product (Kronecker product), and each P_i is a 2×2 matrix chosen from the following four possibilities:

    I = [1 0; 0 1],   σ_x = [0 1; 1 0],   σ_y = [0 -i; i 0],   σ_z = [1 0; 0 -1].    (2)

One can estimate expectation values of Pauli observables, which are given by (ρ, (P_1 ⊗ ··· ⊗ P_n)). This is a special case of the above measurement model, where the measurement matrices W_i are the (scaled) Pauli observables (P_1 ⊗ ··· ⊗ P_n)/√d, and they are incoherent with ‖W_i‖ ≤ K/√d, K = 1.
³ Note that ‖W_i‖ is the maximum inner product between W_i and any rank-1 matrix M (normalized so that ‖M‖_F = 1).
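As an illustration of the tensor-product structure, the following sketch (our own code, not from the paper) samples a random n-qubit Pauli observable via Kronecker products and checks the incoherence bound after scaling:

```python
import numpy as np

# Single-qubit Pauli matrices, as in equation (2).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, sx, sy, sz]

def random_pauli(n, rng):
    """A uniformly random n-qubit Pauli observable P_1 x ... x P_n."""
    P = np.array([[1.0 + 0j]])
    for _ in range(n):
        P = np.kron(P, PAULIS[rng.integers(4)])
    return P

rng = np.random.default_rng(0)
n = 3
d = 2 ** n
W = random_pauli(n, rng) / np.sqrt(d)   # scaled basis element
# Incoherence: the scaled Pauli has operator norm exactly 1/sqrt(d) (K = 1).
assert np.isclose(np.linalg.norm(W, 2), 1 / np.sqrt(d))
# Normalization: tr(W† W) = 1, as for an orthonormal basis element.
assert np.isclose(np.trace(W.conj().T @ W).real, 1.0)
```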
Now we return to our discussion of the general problem. We choose S_1, ..., S_m iid uniformly at random from {W_1, ..., W_{d²}}, and we define the sampling operator A : C^{d×d} → C^m as

    (A(X))_i = (d/√m) tr(S_i† X),   i = 1, ..., m.    (3)

The normalization is chosen so that E A*A = I. (Note that A*A = (d²/m) Σ_{j=1}^m |S_j)(S_j|.)
We assume we are given the data y = A(M) + z, where z ∈ C^m is some (unknown) noise contribution. We will construct an estimator M̂ by minimizing the nuclear norm, subject to the constraints specified by y. (Note that one can view the nuclear norm as a convex relaxation of the rank function; thus these estimators can be computed efficiently.) One approach is the matrix Dantzig selector:

    M̂ = argmin_X ‖X‖_*  such that  ‖A*(y − A(X))‖ ≤ λ.    (4)

Alternatively, one can solve a regularized least-squares problem, also called the matrix Lasso:

    M̂ = argmin_X (1/2)‖A(X) − y‖₂² + μ‖X‖_*.    (5)

Here, the parameters λ and μ are set according to the strength of the noise component z (we will discuss this later). We will be interested in bounding the error of these estimators. To do this, we will show that the sampling operator A satisfies the restricted isometry property (RIP).
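As a concrete (and deliberately simple) illustration of the matrix Lasso (5), the following sketch solves it by proximal gradient descent, where the proximal step for μ‖·‖_* is singular-value soft-thresholding. The step size, iteration count, Gaussian measurement matrices, and parameter values are our own illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def sampling_operator(S):
    """Build A and its adjoint A* from measurement matrices S_1..S_m, as in (3)."""
    m = len(S)
    d = S[0].shape[0]
    c = d / np.sqrt(m)
    A  = lambda X: c * np.array([np.trace(Si.conj().T @ X) for Si in S])
    At = lambda y: c * sum(yi * Si for yi, Si in zip(y, S))
    return A, At

def svt(X, tau):
    """Singular-value soft-thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vh

def matrix_lasso(A, At, y, d, mu, step=0.05, iters=300):
    """Proximal gradient for min_X 0.5*||A(X) - y||_2^2 + mu*||X||_*.
    The step size must stay below 1/||A||^2; 0.05 is a conservative toy choice."""
    X = np.zeros((d, d), dtype=complex)
    for _ in range(iters):
        grad = At(A(X) - y)              # gradient of the quadratic term
        X = svt(X - step * grad, step * mu)
    return X

# Toy usage: recover a random rank-1 Hermitian M from Gaussian measurement matrices.
rng = np.random.default_rng(1)
d, m = 8, 120
v = rng.normal(size=d) + 1j * rng.normal(size=d)
M = np.outer(v, v.conj()); M /= np.trace(M).real          # rank-1, trace 1
S = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(m)]
A, At = sampling_operator(S)
y = A(M)                                                   # noiseless data
M_hat = matrix_lasso(A, At, y, d, mu=1e-3)
print(np.linalg.norm(M_hat - M, 'fro'))                    # small recovery error
```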
2.1 RIP for Pauli Measurements
Fix some constant 0 ≤ δ < 1. Fix d, and some set U ⊆ C^{d×d}. We say that A satisfies the restricted isometry property (RIP) over U if, for all X ∈ U, we have

    (1 − δ)‖X‖_F ≤ ‖A(X)‖₂ ≤ (1 + δ)‖X‖_F.    (6)

(Here, ‖A(X)‖₂ denotes the ℓ₂ norm of a vector, while ‖X‖_F denotes the Frobenius norm of a matrix.) When U is the set of all X ∈ C^{d×d} with rank r, this is precisely the notion of RIP studied in [3, 5]. We will show that Pauli measurements satisfy the RIP over a slightly larger set (the set of all X ∈ C^{d×d} such that ‖X‖_* ≤ √r‖X‖_F), provided the number of measurements m is at least Ω(rd poly log d). This result generalizes to measurements in any basis with small operator norm.
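A quick Monte-Carlo sanity check of (6) on a random rank-r test matrix (our own illustration; checking random test matrices is of course much weaker than the uniform guarantee of Theorem 2.1 below, and we use Gaussian measurements here, a case where RIP is classical [3]):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, m = 16, 2, 600

# Random rank-r test matrix, normalized so that ||X||_F = 1.
X = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
X /= np.linalg.norm(X, 'fro')

# Measurement matrices with roughly unit Frobenius norm, so that E A*A = I.
S = [rng.normal(size=(d, d)) / d for _ in range(m)]
c = d / np.sqrt(m)
AX = c * np.array([np.trace(Si.T @ X) for Si in S])

# The ratio ||A(X)||_2 / ||X||_F should be close to 1 when RIP holds.
print(np.linalg.norm(AX))
```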
Theorem 2.1. Fix some constant 0 ≤ δ < 1. Let {W_1, ..., W_{d²}} be an orthonormal basis for C^{d×d} that is incoherent in the sense of (1). Let m = C K² · rd log⁶ d, for some constant C that depends only on δ, C = O(1/δ²). Let A be defined as in (3). Then, with high probability (over the choice of S_1, ..., S_m), A satisfies the RIP over the set of all X ∈ C^{d×d} such that ‖X‖_* ≤ √r‖X‖_F. Furthermore, the failure probability is exponentially small in δ²C.
We will prove this theorem in section 3. In the remainder of this section, we discuss its applications
to low-rank matrix recovery, and quantum state tomography in particular.
2.2 Applications
By combining Theorem 2.1 with previous results [3, 4, 5], we immediately obtain bounds on the
accuracy of the matrix Dantzig selector (4) and the matrix Lasso (5). In particular, for the first time
we can show universal recovery of low-rank matrices via Pauli measurements, and near-optimal
bounds on the accuracy of the reconstruction when the data is noisy [5]. (Similar results hold for
measurements in any incoherent operator basis.) These RIP-based results improve on the earlier
results based on dual certificates [13, 8, 14]. See [3, 4, 5] for details.
Here, we will sketch a couple of these results that are of particular interest for quantum state tomography. Here, M is the density matrix describing the state of a quantum mechanical object, and
A(M ) is a vector of Pauli expectation values for the state M . (M has some additional properties:
it is positive semidefinite, and has trace 1; thus A(M ) is a real vector.) There are two main issues
that arise. First, M is not precisely low-rank. In many situations, the ideal state has low rank (for
instance, a pure state has rank 1); however, for the actual state observed in an experiment, the density matrix M is full-rank with decaying eigenvalues. Typically, we will be interested in obtaining a
good low-rank approximation to M , ignoring the tail of the spectrum.
Secondly, the measurements of A(M ) are inherently noisy. We do not observe A(M ) directly;
rather, we estimate each entry (A(M ))i by preparing many copies of the state M , measuring the
Pauli observable Si on each copy, and averaging the results. Thus, we observe yi = (A(M ))i + zi ,
where zi is binomially distributed. When the number of experiments being averaged is large, zi can
be approximated by Gaussian noise. We will be interested in getting an estimate of M that is stable
with respect to this noise. (We remark that one can also reduce the statistical noise by performing
more repetitions of each experiment. This suggests the possibility of a tradeoff between the accuracy
of estimating each parameter, and the number of parameters one chooses to measure overall. This
will be discussed elsewhere [15].)
We would like to reconstruct M up to a small error in the nuclear or Frobenius norm. Let M̂ be our estimate. Bounding the error in nuclear norm implies that, for any measurement allowed by quantum mechanics, the probability of distinguishing the state M̂ from M is small. Bounding the error in Frobenius norm implies that the difference M̂ − M is highly "mixed" (and thus does not contribute to the coherent or "quantum" behavior of the system).
We now sketch a few results from [4, 5] that apply to this situation. Write M = M_r + M_c, where M_r is a rank-r approximation to M, corresponding to the r largest singular values of M, and M_c is the residual part of M (the "tail" of M). Ideally, our goal is to estimate M up to an error that is not much larger than M_c. First, we can bound the error in nuclear norm (assuming the data has no noise):

Proposition 2.2 (Theorem 5 from [4]). Let A : C^{d×d} → C^m be the random Pauli sampling operator, with m = C rd log⁶ d, for some absolute constant C. Then, with high probability over the choice of A, the following holds:
Let M be any matrix in C^{d×d}, and write M = M_r + M_c, as described above. Say we observe y = A(M), with no noise. Let M̂ be the Dantzig selector (4) with λ = 0. Then

    ‖M̂ − M‖_* ≤ C₀′ ‖M_c‖_*,    (7)

where C₀′ is an absolute constant.
We can also bound the error in Frobenius norm, allowing for noisy data:

Proposition 2.3 (Lemma 3.2 from [5]). Assume the same set-up as above, but say we observe y = A(M) + z, where z ~ N(0, σ²I). Let M̂ be the Dantzig selector (4) with λ = 8√d σ, or the Lasso (5) with μ = 16√d σ. Then, with high probability over the noise z,

    ‖M̂ − M‖_F ≤ C₀ √(rd) σ + C₁ ‖M_c‖_*/√r,    (8)

where C₀ and C₁ are absolute constants.
This bounds the error of M̂ in terms of the noise strength σ and the size of the tail M_c. It is universal: one sampling operator A works for all matrices M. While this bound may seem unnatural because it mixes different norms, it can be quite useful. When M actually is low-rank (with rank r), then M_c = 0, and the bound (8) becomes particularly simple. The dependence on the noise strength σ is known to be nearly minimax-optimal [5]. Furthermore, when some of the singular values of M fall below the "noise level" √d σ, one can show a tighter bound, with a nearly-optimal bias-variance tradeoff; see Theorem 2.7 in [5] for details.
On the other hand, when M is full-rank, then the error of M̂ depends on the behavior of the tail M_c.
We will consider a couple of cases. First, suppose we do not assume anything about M, besides the fact that it is a density matrix for a quantum state. Then ‖M‖_* = 1, hence ‖M_c‖_* ≤ 1 − r/d, and we can use (8) to get ‖M̂ − M‖_F ≤ C₀ √(rd) σ + C₁/√r. Thus, even for arbitrary (not necessarily low-rank) quantum states, the estimator M̂ gives nontrivial results. The O(1/√r) term can be interpreted as the penalty for only measuring an incomplete subset of the Pauli observables.
Finally, consider the case where M is full-rank, but we do know that the tail M_c is small. If we know that M_c is small in nuclear norm, then we can use equation (8). However, if we know that M_c is small in Frobenius norm, one can give a different bound, using ideas from [5], as follows.
Proposition 2.4. Let M be any matrix in C^{d×d}, with singular values σ₁(M) ≥ ··· ≥ σ_d(M). Choose a random Pauli sampling operator A : C^{d×d} → C^m, with m = C rd log⁶ d, for some absolute constant C. Say we observe y = A(M) + z, where z ~ N(0, σ²I). Let M̂ be the Dantzig selector (4) with λ = 16√d σ, or the Lasso (5) with μ = 32√d σ. Then, with high probability over the choice of A and the noise z,

    ‖M̂ − M‖_F² ≤ C₀ Σ_{i=1}^r min(σ_i²(M), dσ²) + C₂ (log⁶ d) Σ_{i=r+1}^d σ_i²(M),    (9)

where C₀ and C₂ are absolute constants.
This bound can be interpreted as follows. The first term expresses the bias-variance tradeoff for estimating M_r, while the second term depends on the Frobenius norm of M_c. (Note that the log⁶ d factor may not be tight.) In particular, this implies: ‖M̂ − M‖_F ≤ C₀ √(rd) σ + C₂ (log³ d) ‖M_c‖_F. This can be compared with equation (8) (involving ‖M_c‖_*). This bound will be better when ‖M_c‖_F ≪ ‖M_c‖_*, i.e., when the tail M_c has slowly-decaying eigenvalues (in physical terms, it is highly mixed).
Proposition 2.4 is an adaptation of Theorem 2.8 in [5]. We sketch the proof in section B in [21]. Note that this bound is not universal: it shows that for all matrices M, a random choice of the sampling operator A is likely to work.
3 Proof of the RIP for Pauli Measurements
We now prove Theorem 2.1. The general approach involving Dudley's entropy bound is similar to [12], while the technical part of the proof (bounding certain covering numbers) uses ideas from [16]. We summarize the argument here; the details are given in section A in [21].
3.1 Overview
Let U₂ = {X ∈ C^{d×d} : ‖X‖_F ≤ 1, ‖X‖_* ≤ √r ‖X‖_F}. Let B be the set of all self-adjoint linear operators from C^{d×d} to C^{d×d}, and define the following norm on B:

    ‖M‖_(r) = sup_{X ∈ U₂} |(X, M X)|.    (10)

(Suppose r ≥ 2, which is sufficient for our purposes. It is straightforward to show that ‖·‖_(r) is a norm, and that B is a Banach space with respect to this norm.) Then let us define

    δ_r(A) = ‖A*A − I‖_(r).    (11)

By an elementary argument, in order to prove RIP, it suffices to show that δ_r(A) < 2δ − δ². We will proceed as follows: we will first bound E δ_r(A), then show that δ_r(A) is concentrated around its mean.
Using a standard symmetrization argument, we have that

    E δ_r(A) ≤ 2 E ‖(d²/m) Σ_{j=1}^m ε_j |S_j)(S_j|‖_(r),

where the ε_j are Rademacher (iid ±1) random variables. Here the round ket notation |S_j) means we view the matrix S_j as an element of the vector space C^{d²} with Hilbert–Schmidt inner product; the round bra (S_j| denotes the adjoint element in the (dual) vector space.
Now we use the following lemma, which we will prove later. This bounds the expected magnitude in (r)-norm of a Rademacher sum of a fixed collection of operators V_1, ..., V_m that have small operator norm.

Lemma 3.1. Let m ≤ d². Fix some V_1, ..., V_m ∈ C^{d×d} that have uniformly bounded operator norm, ‖V_i‖ ≤ K (for all i). Let ε_1, ..., ε_m be iid uniform ±1 random variables. Then

    E_ε ‖Σ_{i=1}^m ε_i |V_i)(V_i|‖_(r) ≤ C₅ · ‖Σ_{i=1}^m |V_i)(V_i|‖_(r)^{1/2},    (12)

where C₅ = √r · C₄ K log^{5/2} d log^{1/2} m and C₄ is some universal constant.
After some algebra, one gets that E δ_r(A) ≤ 2(E δ_r(A) + 1)^{1/2} · C₅ √(d/m), where C₅ = √r · C₄ K log³ d. By finding the roots of this quadratic equation, we get the following bound on E δ_r(A). Let λ ≥ 1. Assume that m ≥ λ d (2C₅)² = λ · 4C₄² · rd · K² log⁶ d. Then we have the desired result:

    E δ_r(A) ≤ 1/√λ + 1/λ.    (13)

It remains to show that δ_r(A) is concentrated around its expectation. For this we use a concentration inequality from [22] for sums of independent symmetric random variables that take values in some Banach space. See section A in [21] for details.
3.2 Proof of Lemma 3.1 (bounding a Rademacher sum in (r)-norm)

Let L₀ = E_ε ‖Σ_{i=1}^m ε_i |V_i)(V_i|‖_(r); this is the quantity we want to bound. Using a standard comparison principle, we can replace the ±1 random variables ε_i with iid N(0, 1) Gaussian random variables g_i; then we get

    L₀ ≤ √(π/2) · E_g sup_{X ∈ U₂} |G(X)|,   G(X) = Σ_{i=1}^m g_i |(V_i, X)|².    (14)
The random variables G(X) (indexed by X ∈ U₂) form a Gaussian process, and L₀ is upper-bounded by the expected supremum of this process. Using the fact that G(0) = 0 and G(·) is symmetric, and Dudley's inequality (Theorem 11.17 in [22]), we have

    L₀ ≤ √(2π) · E_g sup_{X ∈ U₂} G(X) ≤ 24√(2π) ∫₀^∞ log^{1/2} N(U₂, d_G, ε) dε,    (15)

where N(U₂, d_G, ε) is a covering number (the number of balls in C^{d×d} of radius ε in the metric d_G that are needed to cover the set U₂), and the metric d_G is given by

    d_G(X, Y) = (E[(G(X) − G(Y))²])^{1/2}.    (16)
Define a new norm (actually a semi-norm) ‖·‖_X on C^{d×d}, as follows:

    ‖M‖_X = max_{i=1,...,m} |(V_i, M)|.    (17)

We use this to upper-bound the metric d_G. An elementary calculation shows that d_G(X, Y) ≤ 2R ‖X − Y‖_X, where R = ‖Σ_{i=1}^m |V_i)(V_i|‖_(r)^{1/2}. This lets us upper-bound the covering numbers in d_G with covering numbers in ‖·‖_X:

    N(U₂, d_G, ε) ≤ N(U₂, ‖·‖_X, ε/(2R)) = N((1/√r)·U₂, ‖·‖_X, ε/(2R√r)).    (18)
We will now bound these covering numbers. First, we introduce some notation: let ‖·‖_p denote the Schatten p-norm on C^{d×d}, and let B_p be the unit ball in this norm. Also, let B_X be the unit ball in the ‖·‖_X norm.
Observe that (1/√r)·U₂ ⊆ B₁ ⊆ K·B_X. (The second inclusion follows because ‖M‖_X ≤ max_{i=1,...,m} ‖V_i‖ ‖M‖_* ≤ K‖M‖_*.) This gives a simple bound on the covering numbers:

    N((1/√r)·U₂, ‖·‖_X, ε) ≤ N(B₁, ‖·‖_X, ε) ≤ N(K·B_X, ‖·‖_X, ε).    (19)

This is 1 when ε ≥ K. So, in Dudley's inequality, we can restrict the integral to the interval [0, K]. When ε is small, we will use the following simple bound (equation (5.7) in [23]):

    N(K·B_X, ‖·‖_X, ε) ≤ (1 + 2K/ε)^{2d²}.    (20)
When ε is large, we will use a more sophisticated bound based on Maurey's empirical method and entropy duality, which is due to [16] (see also [17]):

    N(B₁, ‖·‖_X, ε) ≤ exp(C₁² K²/ε² · log³ d · log m),   for some constant C₁.    (21)

We defer the proof of (21) to the next section.
Using (20) and (21), we can bound the integral in Dudley's inequality. We get

    L₀ ≤ C₄ R √r K log^{5/2} d log^{1/2} m,    (22)

where C₄ is some universal constant. This proves the lemma.
3.3 Proof of Equation (21) (covering numbers of the nuclear-norm ball)
Our result will follow easily from a bound on covering numbers introduced in [16] (where it appears as Lemma 1):

Lemma 3.2. Let E be a Banach space, having modulus of convexity of power type 2 with constant λ(E). Let E* be the dual space, and let T₂(E*) denote its type 2 constant. Let B_E denote the unit ball in E.
Let V_1, ..., V_m ∈ E*, such that ‖V_j‖_{E*} ≤ K (for all j). Define the norm on E,

    ‖M‖_X = max_{j=1,...,m} |(V_j, M)|.    (23)

Then, for any ε > 0,

    ε · log^{1/2} N(B_E, ‖·‖_X, ε) ≤ C₂ λ(E)² T₂(E*) K log^{1/2} m,    (24)

where C₂ is some universal constant.
The proof uses entropy duality to reduce the problem to bounding the "dual" covering number. The basic idea is as follows. Let ℓ_p^m denote the complex vector space C^m with the ℓ_p norm. Consider the map S : ℓ₁^m → E* that takes the j-th coordinate vector to V_j. Let N(S) denote the number of balls in E* needed to cover the image (under the map S) of the unit ball in ℓ₁^m. We can bound N(S) using Maurey's empirical method. Also define the dual map S* : E → ℓ_∞^m, and the associated dual covering number N(S*). Then N(B_E, ‖·‖_X, ε) is related to N(S*). Finally, N(S) and N(S*) are related via entropy duality inequalities. See [16] for details.
We will apply this lemma as follows, using the same approach as [17]. Let S_p denote the Banach space consisting of all matrices in C^{d×d} with the Schatten p-norm. Intuitively, we want to set E = S₁ and E* = S_∞, but this won't work because λ(S₁) is infinite. Instead, we let E = S_p, p = (log d)/(log d − 1), and E* = S_q, q = log d. Note that ‖M‖_p ≤ ‖M‖_*, hence B₁ ⊆ B_p and

    ε · log^{1/2} N(B₁, ‖·‖_X, ε) ≤ ε · log^{1/2} N(B_p, ‖·‖_X, ε).    (25)

Also, we have λ(E) ≤ 1/√(p − 1) = √(log d − 1) and T₂(E*) ≤ λ(E) ≤ √(log d − 1) (see the Appendix in [17]). Note that ‖M‖_q ≤ e‖M‖, thus we have ‖V_j‖_q ≤ eK (for all j). Then, using the lemma, we have

    ε · log^{1/2} N(B_p, ‖·‖_X, ε) ≤ C₂ log^{3/2} d · (eK) · log^{1/2} m,    (26)

which proves the claim.
4 Outlook
We have shown that random Pauli measurements obey the restricted isometry property (RIP), which implies strong error bounds for low-rank matrix recovery. The key technical tool was a bound on covering numbers of the nuclear norm ball, due to Guédon et al. [16].
An interesting question is whether this method can be applied to other problems, such as matrix completion, or constructing embeddings of low-dimensional manifolds into linear spaces with slightly higher dimension. For matrix completion, one can compare with the work of Negahban and Wainwright [10], where the sampling operator satisfies restricted strong convexity (RSC) over a certain set of "non-spiky" low-rank matrices. For manifold embeddings, one could try to generalize the results of [24], which use the sparse-vector RIP to construct Johnson-Lindenstrauss metric embeddings.
There are also many questions pertaining to low-rank quantum state tomography. For example, how does the matrix Lasso compare to the traditional approach using maximum likelihood estimation? Also, there are several variations on the basic tomography problem, and alternative notions of sparsity (e.g., elementwise sparsity in a known basis) [25], which have not been fully explored.
Acknowledgements: Thanks to David Gross, Yaniv Plan, Emmanuel Candès, Stephen Jordan, and
the anonymous reviewers, for helpful suggestions. Parts of this work were done at the University
of California, Berkeley, and supported by NIST grant number 60NANB10D262. This paper is
a contribution of the National Institute of Standards and Technology, and is not subject to U.S.
copyright.
References
[1] M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford, 2002.
[2] N. Srebro. Learning with Matrix Factorizations. PhD thesis, MIT, 2004.
[3] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[4] M. Fazel, E. Candes, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In 42nd Asilomar Conference on Signals, Systems and Computers, pages 1043–1047, 2008.
[5] E. J. Candes and Y. Plan. Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements. 2009.
[6] E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Found. of Comput. Math., 9:717–772.
[7] E. J. Candes and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2009.
[8] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory, to appear. arXiv:0910.1879, 2010.
[9] B. Recht. A simpler approach to matrix completion. J. Machine Learning Research (to appear), 2010.
[10] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. arXiv:1009.2118, 2010.
[11] E. J. Candes and T. Tao. Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inform. Theory, 52:5406–5425, 2004.
[12] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure and Applied Math., 61:1025–1045, 2008.
[13] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert. Quantum state tomography via compressed sensing. Phys. Rev. Lett., 105(15):150401, Oct 2010. arXiv:0909.3304.
[14] E. J. Candes and Y. Plan. Matrix completion with noise. Proc. IEEE, 98(6):925–936, 2010.
[15] B. Brown, S. Flammia, D. Gross, and Y.-K. Liu. In preparation, 2011.
[16] O. Guédon, S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann. Majorizing measures and proportional subsets of bounded orthonormal systems. Rev. Mat. Iberoamericana, 24(3):1075–1095, 2008.
[17] G. Aubrun. On almost randomizing channels with a short Kraus decomposition. Commun. Math. Phys., 288:1103–1116, 2009.
[18] M. A. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2001.
[19] A. Rohde and A. Tsybakov. Estimation of high-dimensional low-rank matrices. arXiv:0912.5338, 2009.
[20] V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear norm penalization and optimal rates for noisy low rank matrix completion. arXiv:1011.6256, 2010.
[21] Y.-K. Liu. Universal low-rank matrix recovery from Pauli measurements. arXiv:1103.2816, 2011.
[22] M. Ledoux and M. Talagrand. Probability in Banach spaces. Springer, 1991.
[23] G. Pisier. The volume of convex bodies and Banach space geometry. Cambridge, 1999.
[24] F. Krahmer and R. Ward. New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property. SIAM J. Math. Anal., 43(3):1269–1281, 2011.
[25] A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M. P. Almeida, A. Fedrizzi, and A. G. White. Efficient measurement of quantum dynamics via compressive sensing. Phys. Rev. Lett., 106(10):100401, 2011.
[26] P. Wojtaszczyk. Stability and instance optimality for gaussian measurements in compressed sensing. Found. Comput. Math., 10(1):1–13, 2009.
Committing Bandits
Loc Bui*
MS&E Department
Stanford University
Ramesh Johari†
MS&E Department
Stanford University
Shie Mannor‡
EE Department
Technion
Abstract
We consider a multi-armed bandit problem where there are two phases. The first phase is an experimentation phase where the decision maker is free to explore multiple options. In the second phase the decision maker has to commit to one of the arms and stick with it. Cost is incurred during both phases with a higher cost during the experimentation phase. We analyze the regret in this setup, and both propose algorithms and provide upper and lower bounds that depend on the ratio of the duration of the experimentation phase to the duration of the commitment phase. Our analysis reveals that if given the choice, it is optimal to experiment Θ(ln T) steps and then commit, where T is the time horizon.
1 Introduction
In a range of applications, a dynamic decision making problem exhibits two distinctly different kinds
of phases: experimentation and commitment. In the first phase, the decision maker explores multiple
options, to determine which might be most suitable for the task at hand. However, eventually the
decision maker must commit to a choice, and use that decision for the duration of the problem
horizon. A notable feature of these phases in the models we study is that costs are incurred during
both phases; that is, experimentation is not carried out "offline," but rather is run "live" in the actual
system.
For example, consider the design of a recommendation engine for an online retailer (such as Amazon). Experimentation amounts to testing different recommendation strategies on arriving customers. However, such testing is not carried out without consequences; the retailer might lose
potential rewards if experimentation leads to suboptimal recommendations. Eventually, the recommendation engine must be stabilized (both from a software development standpoint and a customer
expectation standpoint), and when this happens the retailer has effectively committed to one strategy moving forward. As another example, consider product design and delivery (e.g., tapeouts in
semiconductor manufacturing, or major releases in software engineering). The process of experimentation during design entails costs to the producer, but eventually the experimentation must stop
and the design must be committed. Another example is that of dating followed by marriage to
hopefully, the best possible mate.
In this paper we consider a class of multi-armed bandit problems (which we call committing bandit
problems) that mix these two features: the decision maker is allowed to try different arms in each
period until commitment, at which point a final choice is made ("committed") and the chosen arm
is used until the end of the horizon. Of course, models that investigate each phase in isolation are
extensively studied. If the problem consists of only experimentation, then we have the classical
multi-armed bandit problem, where the decision maker is interested in minimizing the expected
total regret against the best arm [9, 2]. At the other extreme, several papers have studied the pure
* Email: [email protected]
† Email: [email protected]
‡ Email: [email protected]
exploration or budgeted learning problem, where the goal is to output the best arm at the end of an
experimentation phase [13, 6, 4]; no costs are incurred for experimentation, but after finite time a
single decision must be chosen (see [12] for a review).
Formally, in a committing bandit problem, the decision maker can experiment without constraints for
the first N of T periods, but must commit to a single decision for the last T − N periods, where T is
the problem horizon. We first consider the soft deadline setting where the experimentation deadline
N can be chosen by the decision maker, but there is a cost incurred per experimentation period.
We divide this setting into two regimes depending on how N is chosen: the non-adaptive regime
(Section 3) in which the decision maker has to choose N before the algorithm begins running, and
the adaptive regime (Section 4) in which N can be chosen adaptively as the algorithm runs.
We obtain two main results for the soft deadline setting. First, in both regimes, we find that the best tradeoff between experimentation and commitment (in terms of expected regret performance) is essentially obtained by experimenting for N = Θ(ln T) periods, and then committing to the empirical best action for the remaining T − Θ(ln T) periods; this yields an expected average regret of Θ(ln T/T). Second, and somewhat surprisingly, we find that if the algorithm has access to distributional information about the arms, then adaptivity provides no additional benefit (at least in terms of expected regret performance); however, as we observe via simulations, on a sample path basis adaptive algorithms can outperform nonadaptive algorithms due to the additional flexibility. Finally, we demonstrate that if the algorithm has no initial distributional information, adaptivity is beneficial: we demonstrate an adaptive algorithm that achieves Θ(ln T/T) regret in this case.
We then study the hard deadline regime where the value of N is given to the decision maker in advance (Section 5). This is a sensible assumption for problems where the decision maker cannot control how long the experimentation period is; for example, in the product design example above, the release date is often fixed well in advance, and the engineers are not generally free to alter it. We propose the UCB-poly(β) algorithm for this setting, where the parameter β ∈ (0, 1) reflects the tradeoff between experimentation and commitment. We show how to tune the algorithm to optimally choose β, based on the relative values of N and T.
We mention in passing that the celebrated exploration-exploitation dilemma is also a major issue in our setup. During the first N periods the tradeoff between exploration and exploitation exists, bearing in mind that the last T − N periods will be used solely for exploitation. This changes the standard setup so that exploration in the first N periods becomes more important, as we shall see in our results.
2 The committing bandit problem
We first describe the setup of the classical stochastic multi-armed bandit problem, as it will serve as background for the committing bandit problem. In a stochastic multi-armed bandit problem, there are K independent arms; each arm i, when pulled, returns a reward which is independently and identically drawn from a fixed Bernoulli distribution¹ with unknown parameter θ_i ∈ [0, 1]. Let I_t denote the index of the arm pulled at time t (I_t ∈ {1, 2, ..., K}), and let X_t denote the associated reward. Note that E[X_t] = θ_{I_t}. Also, we define the following notation:

    θ* := max_{1≤i≤K} θ_i,   i* := argmax_{1≤i≤K} θ_i,   Δ_i := θ* − θ_i,   Δ := min_{i: Δ_i > 0} Δ_i.
An allocation policy is an algorithm that chooses the next arm to pull based on the sequence of past pulled arms and obtained rewards. The cumulative regret of an allocation policy A after time n is:

    R_n = Σ_{t=1}^n (X_t* − X_t),

where X_t* is the reward that the algorithm would have received at time t if it had pulled the optimal arm i*. In other words, R_n is the cumulative loss due to the fact that the allocation policy does not always pull the optimal arm. Let T_i(n) be the number of times that arm i is pulled up to time n.
¹ We assume Bernoulli distributions throughout the paper. Our results hold with minor modification for any distribution with bounded support.
Then:

    E[R_n] = θ* n − Σ_{i=1}^K θ_i E[T_i(n)] = Σ_{i≠i*} Δ_i E[T_i(n)].

The reader is referred to the supplementary material for some well-known allocation policies, e.g., Unif (Uniform allocation) and UCB (Upper Confidence Bound) [2].
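For reference, here is a minimal sketch of the UCB allocation policy mentioned above (this follows the standard UCB1 index of [2]; the exploration constant is the textbook choice, not taken from the supplementary material):

```python
import math
import random

def ucb_allocate(K, n, pull):
    """Run UCB1 for n rounds; `pull(i)` returns a Bernoulli reward for arm i."""
    counts = [0] * K
    sums = [0.0] * K
    history = []
    for t in range(1, n + 1):
        if t <= K:
            i = t - 1                      # pull each arm once to initialize
        else:
            # UCB1 index: empirical mean plus confidence radius.
            i = max(range(K),
                    key=lambda a: sums[a] / counts[a]
                                  + math.sqrt(2 * math.log(t) / counts[a]))
        x = pull(i)
        counts[i] += 1
        sums[i] += x
        history.append((i, x))
    return history

# Toy usage with Bernoulli arms:
theta = [0.4, 0.5, 0.7]
hist = ucb_allocate(len(theta), 2000,
                    lambda i: 1.0 if random.random() < theta[i] else 0.0)
```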
A recommendation policy is an algorithm that tries to recommend the "best" arm based on the sequence of past pulled arms and obtained rewards. Suppose that after time n, a recommendation policy R recommends the arm J_n as the "best" arm. Then the regret of recommendation policy R after time n, called the simple regret in [4], is defined as

    r_n = θ* − θ_{J_n} = Δ_{J_n}.

The reader is also referred to the supplementary material for some natural recommendation policies, e.g., EBA (Empirical Best Arm) and MPA (Most Played Arm).
The committing bandit problem considered in this paper is a version of the stochastic multi-armed bandit problem in which the algorithm is forced to commit to only one arm after some period of time. More precisely, the problem setting is as follows. Let T be the time horizon of the problem. From time 1 to some time N (N < T), the algorithm can pull any arm in {1, 2, ..., K}. Then, from time N + 1 to the end of the horizon (time T), it must commit to pull only one arm. The first phase (time 1 to N) is called the experimentation phase, and the second phase (time N + 1 to T) is called the commitment phase. We refer to time N as the experimentation deadline.
An algorithm for the committing bandit problem is a combination of an allocation and a recommendation policy. That is, the algorithm has to decide which arm to pull during the first N slots, and then choose an arm to commit to during the remaining T − N slots. Because we consider settings where the algorithm designer can choose the experimentation deadline, we also assume a cost is imposed during the experimentation phase; otherwise, it is never optimal to be forced to commit. In particular, we assume that the reward earned during the experimentation phase is reduced by a constant factor α ∈ [0, 1). Thus the expected regret E[Reg] of such an algorithm is the average regret across both phases, i.e.:
" T
#
N
T
!
!
1 ! ?
E[RN ] T ? N
N ??
E[Reg] =
? ??
E[?It ] ?
E[?JN ] = ?
+
E[rN ]+(1??)
.
T t=1
T
T
T
t=1
t=N +1
2.1 Committing bandit regimes
We focus on three distinct regimes that differ in the level of control given to the algorithm designer
in choosing the experimentation deadline.
Regime 1: Soft experimentation deadline, non-adaptive. In this regime, the value of T is given
to the algorithm. For a given value of T, the value of N can be chosen freely between 1 and T − 1,
but the choice must be made before the process begins.
Regime 2: Soft experimentation deadline, adaptive. The setting in this regime is the same as
the previous one, except for the fact that the algorithm can choose the value of N adaptively as
outcomes of past pulls are observed.
Regime 3: Hard experimentation deadline. In this regime, both N and T are fixed and given to
the algorithm. That is, the algorithm cannot control the experimentation deadline N . We are mainly
interested in the asymptotic behavior of the algorithm when both N and T go to infinity.
2.2 Known lower-bounds
As mentioned in the Introduction section, the experimentation and commitment phases have each
been extensively studied in isolation. In this subsection, we only summarize briefly the known lower
bounds on cumulative regret and simple regret that will be used in the paper.
Result 1 (Distribution-dependent lower bound on cumulative regret [9]). For any allocation policy, and for any set of reward distributions such that their parameters θ_i are not all equal, there exists an ordering of (θ_1, ..., θ_K) such that

    E[R_n] ≥ [ Σ_{i≠i*} Δ_i / D(p_i ‖ p*) + o(1) ] ln n,

where D(p_i ‖ p*) = p_i log(p_i/p*) + (1 − p_i) log((1 − p_i)/(1 − p*)) is the Kullback-Leibler divergence between two Bernoulli reward distributions p_i (of arm i) and p* (of the optimal arm), and o(1) → 0 as n → ∞.
Result 2 (Distribution-free lower bound on cumulative regret [13]). There exist positive constants c and N₀ such that for any allocation policy, there exists a set of Bernoulli reward distributions such that

    E[R_n] ≥ cK(ln n − ln K),   for all n ≥ N₀.
The difference between Result 1 and Result 2 is that the lower bound in the former depends on the
parameters of reward distributions (hence, called distribution-dependent), while the lower bound
in the latter does not (hence, called distribution-free). That means, in the latter case, the reward
distributions can be chosen adversarially. Therefore, it should be clear that the distribution-free
lower bound is always higher than the distribution-dependent lower bound.
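For intuition, the distribution-dependent constant in Result 1 is easy to evaluate numerically. A small sketch (our own, using the Bernoulli Kullback-Leibler divergence written out above):

```python
import math

def kl_bernoulli(p, q):
    """D(p || q) for Bernoulli distributions with means p, q in (0, 1)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta = [0.4, 0.5, 0.7]            # arm parameters; the last arm is optimal
theta_star = max(theta)
const = sum((theta_star - th) / kl_bernoulli(th, theta_star)
            for th in theta if th < theta_star)
# Result 1 then says E[R_n] >= (const + o(1)) * ln n for some ordering of the arms.
print(const)
```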
Result 3 (Distribution-dependent bound on simple regret [4]). For any pair of allocation and recommendation policies, if the allocation policy can achieve an upper bound such that for all (Bernoulli) reward distributions θ_1, ..., θ_K, there exists a constant C ≥ 0 with

    E[R_n] ≤ C f(n),

then for all sets of K ≥ 3 Bernoulli reward distributions with parameters θ_i that are all distinct and all different from 1, there exists an ordering (θ_1, ..., θ_K) such that

    E[r_n] ≥ (Δ/2) e^{−D f(n)},

where D is a constant which can be calculated in closed form from C, and θ_1, ..., θ_K.
In particular, since E[R_n] ≤ θ* n for any allocation policy, there exists a constant γ depending only on θ_1, ..., θ_K such that E[r_n] ≥ (Δ/2) e^{−γ n}.
Result 4 (Distribution-free lower bound on simple regret [4]). For any pair of allocation and recommendation policies, there exists a set of Bernoulli reward distributions such that E[r_n] ≥ (1/20) √(K/n).
In the subsequent sections we analyze each of the committing bandit regimes in detail; in particular,
we provide constructive upper bounds and matching lower bounds on the regret in each regime. The
detailed proofs of all the results in this paper are presented in the supplementary material.
3 Regime 1: Soft experimentation deadline, non-adaptive
In this regime, for a given value of T, the value of N can be chosen freely between 1 and T − 1, but only before the algorithm begins pulling arms. Our main insight is that there exist matching upper and lower bounds of order Θ(ln T/T); further, we propose an algorithm that can achieve this performance.
Theorem 1. (1) Distribution-dependent lower bound: In Regime 1, for any algorithm, and any set
of K ≥ 3 Bernoulli reward distributions such that the μ_i are all distinct and all different from 1, there
exists an ordering (μ_1, . . . , μ_K) such that

    E[Reg] ≥ max{ (1 − γ)Δ*/β , Σ_{i≠i*} Δ_i / D(p_i ‖ p*) + o(1) } · (ln T)/T,

where o(1) → 0 as T → ∞, and β is the constant discussed in Result 3.
(2) Distribution-free lower bound: Also, for any algorithm in Regime 1, there exists a set of Bernoulli
reward distributions such that

    E[Reg] ≥ cK ( 1 − (ln K)/(ln T) ) · (ln T)/T,

where c is the constant in Result 2.
We now show that the Non-adaptive Unif-EBA algorithm (Algorithm 1) achieves the matching
upper bound, as stated in the following theorem.
Algorithm 1 Non-adaptive Unif-EBA
Input: a set of arms {1, 2, . . . , K}, T, ε
repeat
    Sample each arm in {1, 2, . . . , K} in round-robin fashion.
until each arm has been chosen ⌈ln T/ε²⌉ times.
Commit to the arm with maximum empirical average reward for the remaining periods.
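A minimal simulation sketch of Algorithm 1, assuming Bernoulli arms; the instance, the horizon and the inline Bernoulli sampler below are illustrative choices of ours, not taken from the paper.

import math
import random

def unif_eba(mus, T, eps):
    K = len(mus)
    n = math.ceil(math.log(T) / eps ** 2)    # pulls per arm before committing
    pulls = [0] * K
    totals = [0.0] * K
    t = 0
    while min(pulls) < n and t < T:          # experimentation: round robin
        i = t % K
        totals[i] += float(random.random() < mus[i])   # one Bernoulli reward
        pulls[i] += 1
        t += 1
    best = max(range(K), key=lambda i: totals[i] / max(pulls[i], 1))
    return best, T - t                       # committed arm, committed periods

print(unif_eba([0.75, 0.73, 0.5], T=10_000, eps=0.1))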
Theorem 2. For the Non-adaptive Unif-EBA algorithm (Algorithm 1),

    E[Reg] ≤ (2/ε²) ( (1 − γ)Δ* + (γ/K) Σ_{i≠i*} Δ_i ) · (K ln T)/T.
This matches the lower bounds in Theorem 1 to the correct order in T . Observe that in this regime,
both distribution-dependent and distribution-free lower bounds have the same asymptotic order of
ln T /T. However, the preceding algorithm requires knowing the value of ε. If ε is unknown, a low
regret algorithm that matches the lower bound does not seem to be possible in this regime, because
of the relative nature of the regret. An algorithm may be unable to choose an N that explores
sufficiently long when arms are difficult to distinguish, and yet commits quickly when arms are easy
to distinguish.
4 Regime 2: Soft experimentation deadline, adaptive
The setting in this regime is the same as the previous one, except that the algorithm is not required
to choose N before it runs, i.e., N can be chosen adaptively. Thus, in particular, it is possible for
the algorithm to reject bad arms or to estimate Δ as it runs.
We first present the lower bounds on regret for any algorithm in this regime.
Theorem 3. (1) Distribution-dependent lower bound: In Regime 2, for any algorithm, and any set
of K ≥ 3 Bernoulli reward distributions such that the μ_i are all distinct and all different from 1, there
exists an ordering (μ_1, . . . , μ_K) such that

    E[Reg] ≥ ( Σ_{i≠i*} Δ_i / D(p_i ‖ p*) + o(1) ) · (ln T)/T,

where o(1) → 0 as T → ∞.
(2) Distribution-free lower bound: Also, for any algorithm in Regime 2, there exists a set of Bernoulli
reward distributions such that

    E[Reg] ≥ cK ( 1 − (ln K)/(ln T) ) · (ln T)/T,

where c is the constant in Result 2.
Next, we derive several sequential algorithms with matching upper bounds on regret. The first algorithm is called Sequential Elimination & Commitment 1 (SEC1) (Algorithm 2); this algorithm
requires the values of ε and μ*.
Theorem 4. For the SEC1 algorithm (Algorithm 2),

    E[Reg] ≤ (2/ε²) ( (1 − γ)Δ* + (γ/K) Σ_{i≠i*} Δ_i + bγ ) · (K ln T)/T,

where b = ( 2 + ε²(K + 2)/(1 − e^{−ε²/2})² ) · (1/ln T) → 0 as T → ∞.
Algorithm 2 Sequential Elimination & Commitment 1 (SEC1)
Input: A set of arms {1, 2, . . . , K}, T, ε, μ*
Initialization: Set m = 0, B_0 = {1, 2, . . . , K}, τ = 1/ε², ℓ_1 = 1/ε, ℓ_2 = ε/2.
repeat
    Sample each arm in B_m once. Let S_m^i be the total reward obtained from arm i so far.
    Set B_{m+1} = B_m, m = m + 1.
    for i ∈ B_m do
        if m ≤ ⌈τ ln T⌉ and |m μ* − S_m^i| > ℓ_1 ln T then
            Delete arm i from B_m.
        end if
        if m > ⌈τ ln T⌉ and |m μ* − S_m^i| > ℓ_2 m then
            Delete arm i from B_m.
        end if
    end for
until there is only one arm in B_m (then commit to that arm) or the horizon T is reached.
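A sketch of SEC1 under the same Bernoulli simulator as above. The two elimination tests mirror the pseudocode, with τ = 1/ε², ℓ1 = 1/ε and ℓ2 = ε/2 as in the initialization; the guard that never empties the active set is a defensive addition of ours.

import math
import random

def sec1(mus, T, eps, mu_star):
    K = len(mus)
    tau, ell1, ell2 = 1 / eps ** 2, 1 / eps, eps / 2
    cap = math.ceil(tau * math.log(T))       # phase switch after ceil(tau ln T) rounds
    B = set(range(K))
    S = [0.0] * K                            # total reward of each arm so far
    m = t = 0
    while len(B) > 1 and t < T:
        for i in B:                          # sample each surviving arm once
            S[i] += float(random.random() < mus[i])
            t += 1
        m += 1
        for i in list(B):
            if len(B) == 1:
                break                        # never empty the active set
            dev = abs(m * mu_star - S[i])
            if (m <= cap and dev > ell1 * math.log(T)) or (m > cap and dev > ell2 * m):
                B.discard(i)
    return max(B, key=lambda i: S[i]), T - t  # committed arm, committed periods

print(sec1([0.75, 0.73, 0.5], T=10_000, eps=0.1, mu_star=0.75))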
Observe that this algorithm matches the lower bounds in Theorem 3 to the correct order in T . We
note that when N can be chosen adaptively, both distribution-dependent and distribution-free lower
bounds have the same asymptotic order of ln T /T as the ones in the non-adaptive regime. In the
distribution-dependent case, therefore, we obtain the surprising conclusion that adaptivity does not
reduce the optimal expected regret. Indeed, the regret bound of SEC1 in Theorem 4 is exactly
the same as for Non-adaptive Unif-EBA in Theorem 2. We conjecture that the constant 1/ε² is
actually the best achievable constant on expected regret.
What is the benefit of adaptivity then? As simulation results in Section 6 suggest, SEC1 performs
much better than Non-adaptive Unif-EBA in practice. The reason is rather intuitive: due to its
adaptive nature, SEC1 is able to eliminate poor arms much earlier than the ⌈ln T/ε²⌉ threshold,
while Non-adaptive Unif-EBA has to wait until that point to make decisions.
Remark 1. Although SEC1 requires the value of μ*, that requirement can be relaxed as μ* can
be estimated by the maximum empirical average reward across arms. In fact, as we will see in
the simulations (Section 6), another version of SEC1 (called SEC2) in which m μ* is replaced by
max_{j∈B_m} S_m^j achieves a nearly identical performance.
Now, if the value of Δ is unknown, we have the following Sequential Committing UCB (SC-UCB)
algorithm, which is based on the improved UCB algorithm in [3]. The idea is to maintain an estimate
of Δ and reduce it over time.
Algorithm 3 Sequential Committing UCB (SC-UCB)
Input: A set of arms {1, 2, . . . , K}, T
Initialization: Set m = 0, Δ̃_0 = 1, B_0 = {1, 2, . . . , K}.
for m = 0, 1, 2, . . . , ⌊log₂(T/e)/2⌋ do
    if |B_m| > 1 then
        Sample each arm in B_m until each arm has been chosen n_m = ⌈2 ln(T Δ̃_m²)/Δ̃_m²⌉ times.
        Let S_m^i be the total reward obtained from arm i so far.
        Delete all arms i from B_m for which
            max_{j∈B_m} S_m^j − S_m^i > √( 2 n_m ln(T Δ̃_m²) )
        to obtain B_{m+1}.
        Set Δ̃_{m+1} = Δ̃_m/2.
    else
        Commit to the single arm in B_m.
    end if
end for
Commit to any arm in B_m.
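A sketch of SC-UCB, again over simulated Bernoulli arms. It follows the pseudocode with Δ̃_0 = 1 as in the improved UCB of [3]; for brevity it ignores the bookkeeping that would cap the total number of pulls at T.

import math
import random

def sc_ucb(mus, T):
    K = len(mus)
    B = set(range(K))
    S = [0.0] * K
    pulls = [0] * K
    delta = 1.0                                   # gap estimate, halved each round
    for m in range(int(math.log2(T / math.e) / 2) + 1):
        if len(B) == 1:
            break
        n_m = math.ceil(2 * math.log(T * delta ** 2) / delta ** 2)
        for i in B:                               # bring every survivor up to n_m pulls
            while pulls[i] < n_m:
                S[i] += float(random.random() < mus[i])
                pulls[i] += 1
        lead = max(S[j] for j in B)
        thresh = math.sqrt(2 * n_m * math.log(T * delta ** 2))
        B = {i for i in B if lead - S[i] <= thresh}  # the leader always survives
        delta /= 2.0
    return max(B, key=lambda i: S[i])             # commit

print(sc_ucb([0.75, 0.73, 0.5], T=10_000))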
Theorem 5. For the SC-UCB algorithm (Algorithm 3),

    E[Reg] ≤ Σ_{i≠i*} ( (γΔ_i + (1 − γ)Δ*) ln(T Δ_i²)/Δ_i² ) · ( 32 + 96/ln(T Δ_i²) ) · (1/T).

This matches the lower bounds in Theorem 3 to the correct order in T.
5 Regime 3: Hard experimentation deadline
We now investigate the third regime where, in contrast to the previous two, the experimentation
deadline N is fixed exogenously together with T . We consider the asymptotic behavior of regret
as T and N approach infinity together. Note that since in this case the experimentation deadline is
outside the algorithm designer's control, we set the cost of experimentation γ = 1 for this section.
Because both T and N are given, the main challenge in this context is choosing an algorithm that
optimally balances the cumulative and simple regrets. We design and tune an algorithm that achieves
this balance.
We know from Result 3 that for any pair of allocation and recommendation policies, if E[R_N] ≤
C_1 f(N), then E[r_N] ≥ (Δ/2) e^{−D f(N)}. In other words, given an allocation policy A that has a
cumulative regret bound C_1 f(N) (for some constant C_1), the best (distribution-dependent) upper
bound that any recommendation policy can achieve is C_2 e^{−C_3 f(N)} (for some constants C_2 and C_3).
Assuming that there exists a recommendation policy R_A that achieves such an upper bound, we have
the following upper bound on regret when applying [A, R_A] to the committing bandit problem:

    E[Reg] ≤ C_1 f(N)/T + ((T − N)/T) C_2 e^{−C_3 f(N)}.    (1)
One can clearly see the trade-off between experimentation and commitment in (1): the smaller the
first term, the larger the second term, and vice versa. Note that ln(N) ≤ f(N) ≤ N, and we have
algorithms that give us only either one of the extremes (e.g., Unif has f (N ) = N , while UCB [2] has
f (N ) = ln N ). On the other hand, it would be useful to have an algorithm that can balance between
these two extremes. In particular, we focus on finding a pair of allocation and recommendation
policies which can simultaneously achieve the allocation bound C_1 N^α and the recommendation
bound C_2 e^{−C_3 N^α}, where 0 < α < 1.
Let us consider a modification of the UCB allocation policy called UCB-poly(α) (for 0 < α < 1),
where for t > K, with μ̂_{i,T_i(t−1)} being the empirical average of rewards from arm i so far,

    I_t = arg max_{1≤i≤K} ( μ̂_{i,T_i(t−1)} + √( 2(t − 1)^α / T_i(t − 1) ) ).
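The UCB-poly(α) index differs from standard UCB only in its exploration bonus, as the following sketch makes explicit; function and variable names are ours.

import math

def ucb_poly_index(mean_hat, t, t_i, alpha):
    # Empirical mean plus the bonus sqrt(2 (t-1)^alpha / T_i(t-1)).
    return mean_hat + math.sqrt(2 * (t - 1) ** alpha / t_i)

def select_arm(means_hat, counts, t, alpha):
    # For t > K (after each arm has been pulled once), pick the argmax index.
    return max(range(len(counts)),
               key=lambda i: ucb_poly_index(means_hat[i], t, counts[i], alpha))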
Then we have the following result on the upper bound of its cumulative regret.
Theorem 6. The cumulative regret of UCB-poly(α) is upper-bounded by

    E[R_n] ≤ Σ_{i:Δ_i>0} ( 8/Δ_i + o(1) ) n^α,

where o(1) → 0 as n → ∞. Moreover, the simple regret for the pair [UCB-poly(α), EBA] is
upper-bounded by

    E[r_n] ≤ ( 2 Σ_{i≠i*} Δ_i ) e^{−η n^α},

where η = min_{i≠i*} Δ_i²/2.
In the supplementary material (see Theorem 7 there) we show that in the limit, as T and N increase
to infinity, the optimal value of α can be chosen as lim_{N→∞} ln(ln(T(N) − N))/ln N if that limit
exists. In particular, if T(N) is super-exponential in N we get an optimal α of 1, representing pure
exploration in the experimentation phase. If T(N) is sub-exponential we get an optimal α of 0,
representing a standard UCB during the experimentation phase. If T(N) is exponential we obtain α
in between.
Figure 1: Numerical performances where K = 20, γ = 0.75, and Δ = 0.02
6 Simulations
In this section, we present numerical results on the performance of Non-adaptive Unif-EBA, SEC1,
SEC2, and SC-UCB algorithms. (Recall that the SEC2 algorithm is a version of SEC1 in which
m μ* is replaced by max_{j∈B_m} S_m^j, as discussed in Remark 1.) The simulation setting includes K
arms with Bernoulli reward distributions, the time horizon T, and the values of γ and Δ. The arm
configurations are generated as follows. For each experiment, μ* is generated independently and
uniformly in the [0.5, 1] interval, and the second-best arm reward is set as μ_2* = μ* − Δ. These two
values are then assigned to two randomly chosen arms, and the rest of the arm rewards are generated
independently and uniformly in [0, μ_2*].
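A sketch of the arm-configuration generator described above; the function name is ours.

import random

def make_instance(K, delta):
    mu_star = random.uniform(0.5, 1.0)
    mu_second = mu_star - delta
    rest = [random.uniform(0.0, mu_second) for _ in range(K - 2)]
    mus = rest + [mu_star, mu_second]
    random.shuffle(mus)            # place the two top arms at random indices
    return mus

print(make_instance(K=20, delta=0.02))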
Figure 1 shows the regrets of the above algorithms for various values of T (in logarithmic scale)
with parameters K = 20, γ = 0.75, and Δ = 0.02 (we omitted error bars because the variation
was small). Observe that the performances of SEC1 and SEC2 are nearly identical, which suggests
that the requirement of knowing μ* in SEC1 can be relaxed (see Remark 1). Moreover, SEC1 (or
equivalently, SEC2) performs much better than Non-adaptive Unif-EBA due to its adaptive nature
(see the discussion before Remark 1). In particular, the performance of Non-adaptive Unif-EBA
is quite poor when the experimentation deadline is roughly equal to T, since the algorithm does
not commit before the experimentation deadline. Finally, SC-UCB does not perform as well as the
others when T is large, but this algorithm does not need to know Δ, and thus suffers a performance
loss due to the additional effort required to estimate Δ.
Additional simulation results can be found in the supplementary material.
7 Extensions and future directions
Our work is a first step in the study of the committing bandit setup. There are several extensions that
call for future research which we outline below.
First, an extension of the basic committing bandits setup to the case of contextual bandits [10, 11]
is natural. In this setup, before choosing an arm an additional "context" is provided to the decision
maker. The problem is to choose a decision rule from a given class that prescribes what arm to
choose for every context. This setup is more realistic when the decision maker has to commit to
such a rule after some exploration time. Second, models with many arms (structured as in [8, 5])
or even infinitely many arms (as in [1, 7, 14]) are of interest here as they may lead to different
regimes and results. Third, our models assumed that the commitment time is either predetermined
or set according to the decision maker's will. There are other models of interest, such as the case
where some stochastic process determines the commitment time.
Finally, a situation where the exploration and commitment phases alternate (randomly, according
to a given schedule, or at a cost) is of practical interest. This can represent the situation where there
are a few releases of a product: exploration can be done until the time of a release, when the
product is "frozen" until a new exploration period followed by a new release.
References
[1] R. Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33(6):1926-1951, 1995.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning Journal, 47(2-3):235-256, 2002.
[3] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55-65, 2010.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832-1852, 2011.
[5] P. A. Coquelin and R. Munos. Bandit algorithms for tree search. CoRR, abs/cs/0703062, 2007.
[6] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079-1105, 2006.
[7] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681-690, 2008.
[8] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, pages 282-293, 2006.
[9] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22, 1985.
[10] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing (NIPS), 2008.
[11] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661-670, 2010.
[12] S. Mannor. k-armed bandit. In Encyclopedia of Machine Learning, pages 561-563. 2010.
[13] S. Mannor and J. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research, 5:623-648, 2004.
[14] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395-411, 2010.
Learning person-object interactions for
action recognition in still images
Vincent Delaitre*
École Normale Supérieure
Josef Sivic*
INRIA Paris - Rocquencourt
Ivan Laptev*
INRIA Paris - Rocquencourt
Abstract
We investigate a discriminatively trained model of person-object interactions for
recognizing common human actions in still images. We build on the locally
order-less spatial pyramid bag-of-features model, which was shown to perform
extremely well on a range of object, scene and human action recognition tasks.
We introduce three principal contributions. First, we replace the standard quantized local HOG/SIFT features with stronger discriminatively trained body part
and object detectors. Second, we introduce new person-object interaction features
based on spatial co-occurrences of individual body parts and objects. Third, we
address the combinatorial problem of a large number of possible interaction pairs
and propose a discriminative selection procedure using a linear support vector
machine (SVM) with a sparsity inducing regularizer. Learning of action-specific
body part and object interactions bypasses the difficult problem of estimating the
complete human body pose configuration. Benefits of the proposed model are
shown on human action recognition in consumer photographs, outperforming the
strong bag-of-features baseline.
1 Introduction
Human actions are ubiquitous and represent essential information for understanding the content
of many still images such as consumer photographs, news images, sparsely sampled surveillance
videos, and street-side imagery. Automatic recognition of human actions and interactions, however,
remains a very challenging problem. The key difficulty stems from the fact that the imaged appearance of a person performing a particular action can vary significantly due to many factors such as
camera viewpoint, person's clothing, occlusions, variation of body pose, object appearance and the
layout of the scene. In addition, motion cues often used to disambiguate actions in video [6, 27, 31]
are not available in still images.
In this work, we seek to recognize common human actions, such as ?walking?, ?running? or ?reading a book? in challenging realistic images. As opposed to action recognition in video [6, 27, 31],
action recognition in still images has received relatively little attention. A number of previous
works [21, 24, 37] focus on exploiting body pose as a cue for action recognition. In particular,
several methods address joint modeling of human poses, objects and relations among them [21, 40].
Reliable estimation of body configurations for people in arbitrary poses, however, remains a very
challenging research problem. Less structured representations, e.g. [11, 39] have recently emerged
as a promising alternative demonstrating state-of-the-art results for action recognition in static images.
In this work, we investigate discriminatively trained models of interactions between objects and
human body parts. We build on the locally orderless statistical representations based on spatial
* WILLOW project, Laboratoire d'Informatique de l'École Normale Supérieure, ENS/INRIA/CNRS UMR 8548, Paris, France
pyramids [28] and bag-of-features models [9, 16, 34], which have demonstrated excellent performance on a range of scene [28], object [22, 36, 41] and action [11] recognition tasks. Rather than
relying on accurate estimation of body part configurations or accurate object detection in the image,
we represent human actions as locally orderless distributions over body parts and objects together
with their interactions. By opportunistically learning class-specific object and body part interactions
(e.g. relative configuration of leg and horse detections for the riding horse action, see Figure 1), we
avoid the extremely challenging task of estimating the full body configuration. Towards this goal,
we consider the following challenges: (i) what should be the representation of object and body part
appearance; (ii) how to model object and human body part interactions; and (iii) how to choose
suitable interaction pairs in the huge space of all possible combinations and relative configurations
of objects and body parts.
To address these challenges, we introduce the following three contributions. First, we replace the
quantized HOG/SIFT features, typically used in bag-of-features models [11, 28, 36] with powerful,
discriminatively trained, local object and human body part detectors [7, 25]. This significantly
enhances generalization over appearance variation, due to e.g. clothing or viewpoint while providing
a reliable signal on part locations. Second, we develop a part interaction representation, capturing
pair-wise relative position and scale between object/body parts, and include this representation in a
scale-space spatial pyramid model. Third, rather than choosing interacting parts manually, we select
them in a discriminative fashion. Suitable pair-wise interactions are first chosen from a large pool of
hundreds of thousands of candidate interactions using a linear support vector machine (SVM) with
a sparsity inducing regularizer. The selected interaction features are then input into a final, more
computationally expensive, non-linear SVM classifier based on the locally orderless spatial pyramid
representation.
2 Related work
Modeling person-object interactions for action recognition has recently attracted significant attention. Gupta et al. [21], Wang et al. [37], and Yao and Fei Fei [40] develop joint models of body
pose configuration and object location within the image. While great progress has been made on
estimating body pose configurations [5, 19, 25, 33], inferring accurate human body pose in images
of common actions in consumer photographs remains an extremely challenging problem due to a
significant amount of occlusions, partial truncation by image boundaries or objects in the scene,
non-upright poses, and large variability in camera viewpoint.
While we build on the recent body pose estimation work by using strong pose-specific body part
models [7, 25], we explicitly avoid inferring the complete body configuration. In a similar spirit,
Desai et al. [13] avoid inferring body configuration by representing a small set of body postures using
single HOG templates and represent relative position of the entire person and an object using simple
relations (e.g. above, to the left). They do not explicitly model body parts and their interactions with
objects as we do in this work. Yang et al. [38] model the body pose as a latent variable for action
recognition. Differently to our method, however, they do not attempt to model interactions between
people (their body parts) and objects. In a recent work, Maji et al. [30] also represent people by
activation responses of body part detectors (rather than inferring the actual body pose), however,
they model only interactions between person and object bounding boxes, not considering individual
body parts, as we do in this work.
Learning spatial groupings of low-level (SIFT) features for recognizing person-object interactions
has been explored by Yao and Fei Fei [39]. While we also learn spatial interactions, we build on
powerful body part and object detectors pre-learnt on separate training data, providing a degree of
generalization over appearance (e.g. clothing), viewpoint and illumination variation. Differently
to [39], we deploy dicriminative selection of interactions using SVM with sparsity inducing regularizer.
Spatial-pyramid based bag-of-features models have demonstrated excellent performance on action
recognition in still images [1, 11] outperforming body pose based methods [21] or grouplet models [40] on their datasets [11]. We build on these locally orderless representations but replace the
low-level features (HOG) with strong pre-trained detectors. Similarly, the object-bank representation [29], where natural scenes are represented by response vectors of densely applied pre-trained
[Figure 1 diagram: a person bounding box containing a horse detection d_i at root position p_i and a left-thigh detection d_j at leaf position p_j, linked by the scale-space offset vector v, with the deformation cost C shown as an ellipse.]
Figure 1: Representing person-object interactions by pairs of body part (cyan) and object (blue)
detectors. To get a strong interaction response, the pair of detectors (here visualized at positions pi
and pj ) must fire in a particular relative 3D scale-space displacement (given by the vector v) with a
scale-space displacement uncertainty (deformation cost) given by a diagonal 3×3 covariance matrix
C (the spatial part of C is visualized as a yellow dotted ellipse). Our image representation is defined
by the max-pooling of interaction responses over the whole image, solved efficiently by the distance
transform.
object detectors, has shown a great promise for scene recognition. The work in [29], however, does
not attempt to model people, body parts and their interactions with objects.
Related work also includes models of contextual spatial and co-occurrence relationships between
objects [12, 32] as well as objects and the scene [22, 23, 35]. Object part detectors trained from
labelled data also form a key ingredient of attribute-based object representations [15, 26]. While we
build on this body of work, these approaches do not model interactions of people and their body
parts with objects and focus on object/scene recognition rather than recognition of human actions.
3 Representing person-object interactions
This section describes our image representation in terms of body parts, objects and interactions
among them.
3.1 Representing body parts and objects
We assume access to a set of n available detectors d1, . . . , dn which have been pre-trained for different
body parts and object classes. Each detector i produces a map of dense 3D responses di (I, p) over
locations and scales of a given image I. We express the positions of detections p in terms of scale-space coordinates p = (x, y, σ), where (x, y) corresponds to the spatial location and σ = log σ̃ is an
additive scale parameter log-related to the image scale factor σ̃, making the addition in the position
vector space meaningful.
In this paper we use two types of detectors. For objects we use the LSVM detector [17] trained on
PASCAL VOC images for ten object classes¹. For body parts we implement the method of [25]
and train ten body part detectors² for each of sixteen pose clusters, giving 160 body part detectors
in total (see [25] for further details). Both of our detectors use Histograms of Oriented Gradients
(HOG) [10] as an underlying low-level image representation.
¹ The ten object detectors correspond to object classes bicycle, car, chair, cow, dining table, horse, motorbike, person, sofa, tv/monitor.
² The ten body part detectors correspond to head, torso, {left, right} × {forearm, upper arm, lower leg, thigh}.
3.2 Representing pairwise interactions
We define interactions by the pairs of detectors (di , dj ) as well as by the spatial and scale relations
among them. Each pair of detectors constitutes a two-node tree where the position and the scale of
the leaf are related to the root by scale-space offset and a spatial deformation cost. More precisely,
an interaction pair is defined by a quadruplet q = (i, j, v, C) ∈ ℕ × ℕ × ℝ³ × M_{3,3}, where i and j
are the indices of the detectors at the root and leaf, v is the offset of the leaf relatively to the root and
C is a 3 × 3 diagonal matrix defining the displacement cost of the leaf with respect to its expected
position. Figure 1 illustrates an example of an interaction between a horse and the left thigh for the
horse riding action.
We measure the response of the interaction q located at the root position p1 by:
    r(I, q, p1) = max_{p2} [ d_i(I, p1) + d_j(I, p2) − uᵀCu ],    (1)

where u = p2 − (p1 + v) is the displacement vector corresponding to the drift of the leaf node with
respect to its expected position (p1 + v). Maximizing over p2 in (1) provides localization of the
leaf node with the optimal trade-off between the detector score and the displacement cost. For any
interaction q we compute its responses for all pairs of node positions p1 , p2 . We do this efficiently
in linear time with respect to p using distance transform [18].
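For illustration, the following sketch evaluates eq. (1) on a flattened grid of detector scores by brute force (O(P²) in the number of positions P); the paper's distance-transform implementation [18] achieves the inner maximization in linear time. Array shapes and names are assumptions of ours.

import numpy as np

def interaction_response(d_i, d_j, v, C_diag, positions):
    # d_i, d_j : detector scores, one per entry of `positions`, shape (P,)
    # v        : expected scale-space offset of the leaf, shape (3,)
    # C_diag   : diagonal of the 3x3 deformation cost matrix C, shape (3,)
    # positions: scale-space coordinates (x, y, sigma), shape (P, 3)
    r = np.empty_like(d_i)
    for a, p1 in enumerate(positions):
        u = positions - (p1 + v)                  # leaf displacement for every p2
        penalty = (u ** 2 * C_diag).sum(axis=1)   # u^T C u for diagonal C
        r[a] = d_i[a] + np.max(d_j - penalty)
    return r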
3.3 Representing images by response vectors of pair-wise interactions
Given a set of M interaction pairs q1, . . . , qM, we wish to aggregate their responses (1) over an
image region A. Here A can be (i) an (extended) person bounding box, as used for selecting discriminative interaction features (Section 4.2) or (ii) a cell of the scale-space pyramid representation,
as used in the final non-linear classifier (Section 4.3). We define score s(I, q, A) of an interaction
pair q within A of an image I by max-pooling, i.e. as the maximum response of the interaction pair
within A:
    s(I, q, A) = max_{p∈A} r(I, q, p).    (2)
An image region A is then represented by an M-vector of interaction pair scores
    z = (s1, . . . , sM) with si = s(I, qi, A).    (3)
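Given precomputed interaction response maps, the max-pooled scores of eqs. (2)-(3) reduce to a few lines; the boolean mask encoding the region A is an assumed data layout of ours.

import numpy as np

def region_score(r, in_region):
    # s(I, q, A): maximum interaction response over the positions inside A.
    return float(np.max(r[in_region]))

def region_descriptor(responses, in_region):
    # z = (s_1, ..., s_M) for M precomputed interaction response maps.
    return np.array([region_score(r, in_region) for r in responses])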
4 Learning person-object interactions
Given object and body part interaction pairs q introduced in the previous section, we wish to use
them for action classification in still images. A brute-force approach of analyzing all possible interactions, however, is computationally prohibitive since the space of all possible interactions is combinatorial in the number of detectors and scale-space relations among them. To address this problem,
we aim in this paper to select a set of M action-specific interaction pairs q1 , . . . , qM , which are both
representative and discriminative for a given action class. Our learning procedure consists of the
three main steps as follows. First, for each action we generate a large pool of candidate interactions,
each comprising a pair of (body part / object) detectors and their relative scale-space displacement.
This step is data-driven and selects candidate detection pairs which frequently occur for a particular
action in a consistent relative scale-space configuration. Next, from this initial pool of candidate interactions we select a set of M discriminative interactions which best separate the particular action
class from other classes in our training set. This is achieved using a linear Support Vector Machine
(SVM) classifier with a sparsity inducing regularizer. Finally, the discriminative interactions are
combined across classes and used as interaction features in our final non-linear spatial-pyramid like
SVM classifier. The three steps are detailed below.
4.1 Generating a candidate pool of interaction pairs
To initialize our model, we first generate a large pool of candidate interactions in a data-driven
manner. Following the suggestion in [17] that the accurate selection of the deformation cost C may
not be that important, we set C to a reasonable fixed value for all pairs, and focus on finding clusters
of frequently co-occurring detectors (di , dj ) in specific relative configurations.
For each detector i and an image I, we first collect a set of positions of all positive detector responses
P_I^i = {p | d_i(I, p) > 0}, where d_i(I, p) is the response of detector i at position p in image I. We
then apply a standard non-maxima suppression (NMS) step to eliminate multiple responses of a
detector in local image neighbourhoods and then limit P_I^i to the L top-scoring detections. The
intuition behind this step is that a part/object interaction is not likely to occur many times in an
image.
For each pair of detectors (d_i, d_j) we then gather relative displacements between their detections
from all the training images I_k: D_ij = ∪_k {p_j − p_i | p_i ∈ P_{I_k}^i and p_j ∈ P_{I_k}^j}. To discover potentially interesting interaction pairs, we perform a mean-shift clustering over D_ij using a window of
radius R ∈ ℝ³ (2D-image space and scale) equal to the inverse of the square root of the deformation
cost: R = diag(C^{−1/2}). We also discard clusters which contribute to less than θ percent of the training images. The set of m resulting candidate pairs (i, j, v_1, C), . . . , (i, j, v_m, C) is built from the
centers v_1, . . . , v_m of the remaining clusters. By applying this procedure to all pairs of detectors,
we generate a large pool (hundreds of thousands) of potentially interesting candidate interactions.
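A sketch of this candidate-generation step. It rescales the displacements so that the anisotropic window of radius R becomes a unit-bandwidth flat kernel, runs scikit-learn's mean shift, and prunes weakly supported clusters; the use of scikit-learn, and pruning by raw counts rather than by the percentage of training images, are simplifications of ours.

import numpy as np
from sklearn.cluster import MeanShift

def candidate_offsets(D_ij, R, min_support):
    # D_ij: (n, 3) relative displacements; R: (3,) window radius.
    X = np.asarray(D_ij) / np.asarray(R)      # anisotropic window -> unit bandwidth
    ms = MeanShift(bandwidth=1.0).fit(X)
    counts = np.bincount(ms.labels_)
    centers = [c * np.asarray(R)              # undo the rescaling
               for k, c in enumerate(ms.cluster_centers_)
               if counts[k] >= min_support]   # discard weakly supported clusters
    return centers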
4.2 Discriminative selection of interaction pairs
The initialization described above produces a large number of candidate interactions. Many of
them, however, may not be informative resulting in unnecessary computational load at the training
and classification times. For this reason we wish to select a smaller number of M discriminative
interactions.
Given a set of N training images, each represented by an interaction response vector zi , described
in eq. (3) where A is the extended person bounding box given for each image, and a binary label
yi (in a 1-vs-all setup for each class), the learning problem for each action class can be formulated
using the binary SVM cost function:
    J(w, b) = λ Σ_{i=1}^{N} max{0, 1 − y_i(wᵀz_i + b)} + ‖w‖₁,    (4)

where w, b are parameters of the classifier and λ is the weighting factor between the (hinge) loss on
the training examples and the L1 regularizer of the classifier.
By minimizing (4) in a one-versus-all setting for each action class we search (by binary search) for
the value of the regularization parameter λ resulting in a sparse weight vector w with M non-zero elements. Selection of the M interaction pairs corresponding to non-zero elements of w gives the M
most discriminative (according to (4)) interaction pairs per action class. Note that other discriminative feature selection strategies such as boosting [20] can also be used. However, the proposed
approach is able to jointly search the entire set of candidate feature pairs by minimizing a convex
cost given in (4), whereas boosting implements a greedy feature selection procedure, which may be
sub-optimal.
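A sketch of the selection step using scikit-learn. Note that LinearSVC with an L1 penalty optimizes the squared hinge rather than the hinge of eq. (4), a substitution made here for convenience; the binary search over the regularization strength stops when roughly M coefficients are non-zero.

import numpy as np
from sklearn.svm import LinearSVC

def select_interactions(Z, y, M, c_lo=1e-4, c_hi=1e2, iters=20):
    # Z: (N, Q) interaction scores for all Q candidate pairs; y: +/-1 labels.
    for _ in range(iters):                    # binary search on the C parameter
        c = float(np.sqrt(c_lo * c_hi))
        svm = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=c)
        svm.fit(Z, y)
        nnz = int(np.count_nonzero(svm.coef_))
        if nnz > M:
            c_hi = c                          # too many features: regularize more
        elif nnz < M:
            c_lo = c                          # too few features: regularize less
        else:
            break
    return np.flatnonzero(svm.coef_.ravel())  # indices of the selected pairs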
4.3 Using interaction pairs for classification
Given a set of M discriminative interactions for each action class obtained as described above, we
wish to train a final non-linear action classifier. We use spatial pyramid-like representation [28],
aggregating responses in each cell of the pyramid using max-pooling as described by eq. (2), where
A is one cell of the spatial pyramid. We extend the standard 2D pyramid representation to scale-space, resulting in a 3D pyramid with D = 1 + 2³ + 4³ = 73 cells. Using the scale-space pyramid
with D cells, we represent each image by concatenating M features from each of the K classes
into a M KD-dimensional vector. We train a non-linear SVM with RBF kernel and L2 regularizer
for each action class using a 5-fold cross-validation for the regularization and kernel band-width
parameters. We found that using this final non-linear classifier consistently improves classification
performance over the linear SVM given by equation (4). Note that feature selection (section 4.2) is
necessary in this case as applying the non-linear spatial pyramid classifier on the entire pool of all
candidate interactions would be computationally infeasible.
5 Experiments
We test our model on the Willow-action dataset downloaded from [4] and the PASCAL VOC 2010
action classification dataset [14]. The Willow-action dataset contains more than 900 images with
more than 1100 labelled person detections from 7 human action classes: Interaction with Computer,
Photographing, Playing Music, Riding Bike, Riding Horse, Running and Walking. The training set
contains 70 examples of each action class and the rest (at least 39 examples per class) is left for
testing. The PASCAL VOC 2010 dataset contains the 7 above classes together with 2 other actions:
Phoning and Reading. It contains a similar number of images. Each training and testing image
in both datasets is annotated with the smallest bounding box containing each person and by the
performed action(s). We follow the same experimental setup for both datasets.
Implementation details: We use our implementation of body part detectors described in [25] with
16 pose clusters trained on the publicly available 2000 image database [3], and 10 pre-trained PASCAL 2007 Latent SVM object detectors [2]: bicycle, car, chair, cow, dining table, horse, motorbike,
person, sofa, tvmonitor. In the human action training/test data, we extend each given person bounding box by 50% and resize the image so that the bounding box has a maximum size of 300 pixels.
We run the detectors over the transformed bounding boxes and consider the image scales s_k = 2^{k/10}
for k ∈ {−10, . . . , 10}. At each scale we extract the detector response every 4 pixels and 8 pixels
for the body part and object detectors, respectively. The outputs of each detector are then normalized by subtracting the mean of maximum responses within the training bounding boxes and then
normalizing the variance to 1. We generate the candidate interaction pairs by taking the mean-shift
radius R = (30, 30, log(2)/2), L = 3 and θ = 8%. The covariance of the pair deformation cost C
is fixed in all experiments to R^{−2}. We select M = 310 discriminative interaction pairs to compute
the final spatial pyramid representation of each image.
Results: Table 1 summarizes per-class action classification results (reported using average precision for each class) for the proposed method (d. Interactions), and three baselines. The first baseline
(a. BOF) is the bag-of-features classifier [11], aggregating quantized responses of densely sampled
HOG features in spatial pyramid representation, using a (non-linear) intersection kernel. Note that
this is a strong baseline, which was shown [11] to outperform the recent person-object interaction
models of [39] and [21] on their own datasets. The second baseline (b. LSVM) is the latent SVM
classifier [17] trained in a 1-vs-all fashion for each class. To obtain a single classification score for
each person bounding box, we take the maximum LSVM detection score from the detections overlapping the extended bounding box with the standard overlap score [14] higher than 0.5. The final
baseline (c. Detectors) is a SVM classifier with an RBF kernel trained on max-pooled responses of
the entire bank of body part and object detectors in a spatial pyramid representation but without interactions. This baseline is similar in spirit to the object bank representation [29], but here targeted
to action classification by including a bank of pose-specific body part detectors as well as object
detectors. On average, the proposed method (d.) outperforms all baselines, obtaining the best result
on 4 out of 7 classes. The largest improvements are obtained on Riding Bike and Horse actions,
for which reliable object detectors are available. The improvement of the proposed method d. with
respect to using the plain bank of object and body part detectors c. directly demonstrates the benefit
of modeling interactions. Example detections of interaction pairs are shown in figure 2.
Table 2 shows the performance of the proposed interaction model (d. Interactions) and its combination with the baselines (e. BOF+LSVM+Inter.) on the Pascal VOC 2010 data. Interestingly, the
proposed approach is complementary to both the BOF (51.25 mAP) and LSVM (44.08 mAP) methods and by combining all three approaches (following [11]) the overall performance improves to
60.66 mAP. We also report results of the "Poselet" method [30], which, similar to our method, is
trained from external non-Pascal data. Our combined approach achieves better overall performance
and also outperforms the "Poselet" approach on 6 out of 9 classes. Finally, our combined approach
also obtains competitive performance compared to the overall best reported result on the Pascal VOC
2010 data, "SURREY MK KDA" [1], and outperforms this method on the "Riding Horse" and
"Walking" classes.
6 Conclusion
We have developed person-object interaction features based on non-rigid relative scale-space displacement of pairs of body part and object detectors. Further, we have shown that such features can
be learnt in a discriminative fashion and can improve action classification performance over a strong
bag-of-features baseline in challenging realistic images of common human actions. In addition, the
learnt interaction features in some cases correspond to visually meaningful configurations of body
parts, and body parts with objects.
[Figure 2 panels, one row per action, each showing the pair of detectors overlaid on example images. Inter. w/ Comp.: screen (blue), left leg (cyan); Photographing: head, left thigh; Playing Instr.: left forearm, left forearm; Riding Bike: right forearm, motorbike; Riding Horse: horse, left thigh; Running: left arm, right leg; Walking: left arm, head.]
Figure 2: Example detections of discriminative interaction pairs. These body part interaction
pairs are chosen as discriminative (high positive weight wi ) for action classes indicated on the left.
In each row, the first three images show detections on the correct action class. The last image
shows a high scoring detection on an incorrect action class. In the examples shown, the interaction
features capture either a body part and an object, or two body part interactions. Note that while these
interaction pairs are found to be discriminative, due to the detection noise, they do not necessarily
localize the correct body parts in all images. However, they may still fire at consistent locations
across many images as illustrated in the second row, where the head detector consistently detects
the camera lens, and the thigh detector fires consistently at the edge of the head. Similarly, the leg
detector seems to consistently fire on keyboards (see the third image in the first row for an example),
thus improving the confidence of the computer detections for the "Interacting with computer" action.
Action / Method       a. BOF [11]   b. LSVM   c. Detectors   d. Interactions
(1) Inter. w/ Comp.      58.15        30.21       45.64           56.60
(2) Photographing        35.39        28.12       36.35           37.47
(3) Playing Music        73.19        56.34       68.35           72.00
(4) Riding Bike          82.43        68.70       86.69           90.39
(5) Riding Horse         69.60        60.12       71.44           75.03
(6) Running              44.53        51.99       57.65           59.73
(7) Walking              54.18        55.97       57.68           57.64
Average (mAP)            59.64        50.21       60.54           64.12

Table 1: Per-class average-precision for different methods on the Willow-actions dataset.
Action / Method       d. Interactions   e. BOF+LSVM+Inter.   Poselets [30]   MK-KDA [1]
(1) Phoning                42.11               48.61              49.6           52.6
(2) Playing Instr.         30.78               53.07              43.2           53.5
(3) Reading                28.70               28.56              27.7           35.9
(4) Riding Bike            84.93               80.05              83.7           81.0
(5) Riding Horse           89.61               90.67              89.4           89.3
(6) Running                81.28               85.81              85.6           86.5
(7) Taking Photo           26.89               33.53              31.0           32.8
(8) Using Computer         52.31               56.10              59.1           59.2
(9) Walking                70.12               69.56              67.9           68.6
Average (mAP)              56.30               60.66              59.7           62.2

Table 2: Per-class average-precision on the Pascal VOC 2010 action classification dataset.
We use only a small set of object detectors available at [2], however, we are now in a position
to include many more additional object (camera, computer, laptop) or texture (grass, road, trees)
detectors, trained from additional datasets, such as ImageNet or LabelMe. Currently, we consider
detections of entire objects, but the proposed model can be easily extended to represent interactions
between body parts and parts of objects [8].
Acknowledgements. This work was partly supported by the Quaero, OSEO, MSR-INRIA, ANR DETECT
(ANR-09-JCJC-0027-01) and the EIT-ICT labs.
References
[1] http://pascallin.ecs.soton.ac.uk/challenges/voc/voc2010/results/index.html.
[2] http://people.cs.uchicago.edu/~pff/latent/.
[3] http://www.comp.leeds.ac.uk/mat4saj/lsp.html.
[4] http://www.di.ens.fr/willow/research/stillactions/.
[5] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009.
[6] A. Bobick and J. Davis. The recognition of human movement using temporal templates. IEEE PAMI, 23(3):257-276, 2001.
[7] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3D human pose annotations. In ICCV, 2009.
[8] T. Brox, L. Bourdev, S. Maji, and J. Malik. Object segmentation by alignment of poselet activations to image contours. In CVPR, 2011.
[9] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In WS-SLCV, ECCV, 2004.
[10] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, pages I:886-893, 2005.
[11] V. Delaitre, I. Laptev, and J. Sivic. Recognizing human actions in still images: a study of bag-of-features and part-based representations. In Proc. BMVC, 2010. Updated version available at http://www.di.ens.fr/willow/research/stillactions/.
[12] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In ICCV, 2009.
[13] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for static human-object interactions. In SMiCV, CVPR, 2010.
[14] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. IJCV, 2010. In press.
[15] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[16] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, Jun 2005.
[17] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE PAMI, 2009.
[18] P. Felzenszwalb and D. Huttenlocher. Distance transforms of sampled functions. Technical report, Cornell University CIS, Tech. Rep. 2004-1963, 2004.
[19] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Pose search: retrieving people using their pose. In CVPR, 2009.
[20] Y. Freund and R. Schapire. A decision theoretic generalisation of online learning. Computer and System Sciences, 55(1):119-139, 1997.
[21] A. Gupta, A. Kembhavi, and L. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE PAMI, 31(10):1775-1789, 2009.
[22] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image classification. In ICCV, 2009.
[23] D. Hoiem, A. Efros, and M. Hebert. Putting objects in perspective. In CVPR, 2006.
[24] N. Ikizler, R. G. Cinbis, S. Pehlivan, and P. Duygulu. Recognizing actions from still images. In Proc. ICPR, 2008.
[25] S. Johnson and M. Everingham. Learning effective human pose estimation from inaccurate annotation. In CVPR, 2011.
[26] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[27] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008.
[28] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In CVPR, pages II:2169-2178, 2006.
[29] L. Li, H. Su, E. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene classification and semantic feature sparsification. In NIPS, 2010.
[30] S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011.
[31] T. B. Moeslund, A. Hilton, and V. Kruger. A survey of advances in vision-based human motion capture and analysis. CVIU, 103(2-3):90-126, 2006.
[32] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In ICCV, 2007.
[33] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010.
[34] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, 2003.
[35] A. Torralba. Contextual priming for object detection. IJCV, 53(2):169-191, July 2003.
[36] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV, 2009.
[37] Y. Wang, H. Jiang, M. S. Drew, Z. N. Li, and G. Mori. Unsupervised discovery of action classes. In CVPR, pages II:1654-1661, 2006.
[38] W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In CVPR, 2010.
[39] B. Yao and L. Fei-Fei. Grouplet: A structured image representation for recognizing human and object interactions. In CVPR, 2010.
[40] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. In CVPR, 2010.
[41] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: a comprehensive study. IJCV, 73(2):213-238, 2007.
3,562 | 4,225 | Finite-Time Analysis of Stratified Sampling
for Monte Carlo
Rémi Munos
INRIA Lille - Nord Europe
[email protected]
Alexandra Carpentier
INRIA Lille - Nord Europe
[email protected]
Abstract
We consider the problem of stratified sampling for Monte-Carlo integration.
We model this problem in a multi-armed bandit setting, where the arms
represent the strata, and the goal is to estimate a weighted average of the
mean values of the arms. We propose a strategy that samples the arms
according to an upper bound on their standard deviations and compare
its estimation quality to an ideal allocation that would know the standard
deviations of the strata. We provide two regret analyses: a distribution-dependent bound $\tilde{O}(n^{-3/2})$ that depends on a measure of the disparity of
the strata, and a distribution-free bound $\tilde{O}(n^{-4/3})$ that does not.
1 Introduction
Consider a polling institute that has to estimate as accurately as possible the average income
of a country, given a finite budget for polls. The institute has call centers in every region in
the country, and gives a part of the total sampling budget to each center so that they can
call random people in the area and ask about their income. A naive method would allocate
a budget proportionally to the number of people in each area. However some regions show
a high variability in the income of their inhabitants whereas others are very homogeneous.
Now if the polling institute knew the level of variability within each region, it could adjust
the budget allocated to each region in a more clever way (allocating more polls to regions
with high variability) in order to reduce the final estimation error.
This example is just one of many for which an efficient method of sampling a function with
natural strata (i.e., the regions) is of great interest. Note that even in the case that there
are no natural strata, it is always a good strategy to design arbitrary strata and allocate
a budget to each stratum that is proportional to the size of the stratum, compared to a
crude Monte-Carlo. There are many good surveys on the topic of stratified sampling for
Monte-Carlo, such as (Rubinstein and Kroese, 2008)[Subsection 5.5] or (Glasserman, 2004).
The main problem for performing an efficient sampling is that the variances within the
strata (in the previous example, the income variability per region) are usually unknown.
One possibility is to estimate the variances online while sampling the strata. There is
some interesting research along this direction, such as (Arouna, 2004) and more recently
(Etoré and Jourdain, 2010, Kawai, 2010). The work of Etoré and Jourdain (2010) matches
exactly our problem of designing an efficient adaptive sampling strategy. In this article they
propose to sample according to an empirical estimate of the variance of the strata, whereas
Kawai (2010) addresses a computational complexity problem which is slightly different from
ours. The recent work of Etoré et al. (2011) describes a strategy that enables to sample
asymptotically according to the (unknown) standard deviations of the strata and at the same
time adapts the shape (and number) of the strata online. This is a very difficult problem,
especially in high dimension, that we will not address here, although we think this is a very
interesting and promising direction for further research.
These works provide asymptotic convergence of the variance of the estimate to the targeted
stratified variance1 divided by the sample size. They also prove that the number of pulls
within each stratum converges to the desired number of pulls i.e. the optimal allocation
if the variances per stratum were known. Like Etoré and Jourdain (2010), we consider a
stratified Monte-Carlo setting with fixed strata. Our contribution is to design a sampling
strategy for which we can derive a finite-time analysis (where "time" refers to the number of
samples). This enables us to predict the quality of our estimate for any given budget n.
We model this problem using the setting of multi-armed bandits where our goal is to estimate
a weighted average of the mean values of the arms. Although our goal is different from a usual
bandit problem where the objective is to play the best arm as often as possible, this problem
also exhibits an exploration-exploitation trade-off. The arms have to be pulled both in
order to estimate the initially unknown variability of the arms (exploration) and to allocate
correctly the budget according to our current knowledge of the variability (exploitation).
Our setting is close to the one described in (Antos et al., 2010) which aims at estimating
uniformly well the mean values of all the arms. The authors present an algorithm, called
GAFS-MAX, that allocates samples proportionally to the empirical variance of the arms,
while imposing that each arm is pulled at least $\sqrt{n}$ times to guarantee a sufficiently good
estimation of the true variances.
Note though that in the Master Thesis (Grover, 2009), the author presents an algorithm
named GAFS-WL which is similar to GAFS-MAX and has an analysis close to the one of
GAFS-MAX. It deals with stratified sampling, i.e. it targets an allocation which is proportional to the standard deviation (and not to the variance) of the strata times their size2.
Some questions remain open in this work, notably that no distribution independent regret
bound is provided for GAFS-WL. We clarify this point in Section 4. Our objective is similar,
and we extend the analysis of this setting.
Contributions: In this paper, we introduce a new algorithm based on Upper-Confidence-Bounds (UCB) on the standard deviation. They are computed from the empirical standard
deviation and a confidence interval derived from Bernstein's inequalities. We provide a
finite-time analysis of its performance. The algorithm, called MC-UCB, samples the arms
proportionally to an UCB3 on the standard deviation times the size of the stratum. Note
that the idea is similar to the one in (Carpentier et al., 2011). Our contributions are the
following:
• We derive a finite-time analysis for the stratified sampling for Monte-Carlo setting
by using an algorithm based on upper confidence bounds. We show how such a
family of algorithm is particularly interesting in this setting.
• We provide two regret analyses: (i) a distribution-dependent bound $\tilde{O}(n^{-3/2})$4 that
depends on the disparity of the stratas (a measure of the problem complexity), and
which corresponds to a stationary regime where the budget n is large compared to
this complexity. (ii) A distribution-free bound $\tilde{O}(n^{-4/3})$ that does not depend on
the disparity of the stratas, and corresponds to a transitory regime where n is
small compared to the complexity. The characterization of those two regimes and
the fact that the corresponding excess error rates differ enlightens the fact that a
finite-time analysis is very relevant for this problem.
The rest of the paper is organized as follows. In Section 2 we formalize the problem and
introduce the notations used throughout the paper. Section 3 introduces the MC-UCB algorithm and reports performance bounds. We then discuss in Section 4 about the parameters
of the algorithm and its performances. In Section 5 we report numerical experiments that
illustrate our method on the problem of pricing Asian options as introduced in (Glasserman
et al., 1999). Finally, Section 6 concludes the paper and suggests future works.
1 The target is defined in [Subsection 5.5] of (Rubinstein and Kroese, 2008) and later in this
paper, see Equation 4.
2 This is explained in (Rubinstein and Kroese, 2008) and will be formulated precisely later.
3 Note that we consider a sampling strategy based on UCBs on the standard deviations of the
arms whereas the so-called UCB algorithm of Auer et al. (2002), in the usual multi-armed bandit
setting, computes UCBs on the mean rewards of the arms.
4 The notation $\tilde{O}(\cdot)$ corresponds to $O(\cdot)$ up to logarithmic factors.
2 Preliminaries
The allocation problem mentioned in the previous section is formalized as a K-armed bandit
problem where each arm (stratum) k = 1, . . . , K is characterized by a distribution $\nu_k$ with
mean value $\mu_k$ and variance $\sigma_k^2$. At each round $t \geq 1$, an allocation strategy (or algorithm) $\mathcal{A}$
selects an arm $k_t$ and receives a sample drawn from $\nu_{k_t}$ independently of the past samples.
Note that a strategy may be adaptive, i.e., the arm selected at round t may depend on
past observed samples. Let $\{w_k\}_{k=1,...,K}$ denote a known set of positive weights which sum
to 1. For example in the setting of stratified sampling for Monte-Carlo, this would be the
probability mass in each stratum. The goal is to define a strategy that estimates as precisely
as possible $\mu = \sum_{k=1}^{K} w_k \mu_k$ using a total budget of n samples.
Let us write $T_{k,t} = \sum_{s=1}^{t} \mathbb{I}\{k_s = k\}$ the number of times arm k has been pulled up to time
t, and $\hat{\mu}_{k,t} = \frac{1}{T_{k,t}} \sum_{s=1}^{T_{k,t}} X_{k,s}$ the empirical estimate of the mean $\mu_k$ at time t, where $X_{k,s}$
denotes the sample received when pulling arm k for the s-th time.
After n rounds, the algorithm $\mathcal{A}$ returns the empirical estimate $\hat{\mu}_{k,n}$ of all the arms. Note
that in the case of a deterministic strategy, the expected quadratic estimation error of the
weighted mean $\mu$ as estimated by the weighted average $\hat{\mu}_n = \sum_{k=1}^{K} w_k \hat{\mu}_{k,n}$ satisfies:
$$\mathbb{E}\Big[\big(\hat{\mu}_n - \mu\big)^2\Big] = \mathbb{E}\Big[\Big(\sum_{k=1}^{K} w_k (\hat{\mu}_{k,n} - \mu_k)\Big)^2\Big] = \sum_{k=1}^{K} w_k^2\, \mathbb{E}\Big[\big(\hat{\mu}_{k,n} - \mu_k\big)^2\Big].$$
We thus use the following measure for the performance of any algorithm $\mathcal{A}$:
$$L_n(\mathcal{A}) = \sum_{k=1}^{K} w_k^2\, \mathbb{E}\big[(\mu_k - \hat{\mu}_{k,n})^2\big]. \quad (1)$$
The goal is to define an allocation strategy that minimizes the global loss defined in Equation 1. If the variance of the arms were known in advance, one could design an optimal
static5 allocation strategy $\mathcal{A}^*$ by pulling each arm k proportionally to the quantity $w_k \sigma_k$.
Indeed, if arm k is pulled a deterministic number of times $T^*_{k,n}$, then
$$L_n(\mathcal{A}^*) = \sum_{k=1}^{K} w_k^2\, \frac{\sigma_k^2}{T^*_{k,n}}. \quad (2)$$
By choosing $T^*_{k,n}$ such as to minimize $L_n$ under the constraint that $\sum_{k=1}^{K} T^*_{k,n} = n$, the
optimal static allocation (up to rounding effects) of algorithm $\mathcal{A}^*$ is to pull each arm k,
$$T^*_{k,n} = \frac{w_k \sigma_k}{\sum_{i=1}^{K} w_i \sigma_i}\, n, \quad (3)$$
times, and achieves a global performance
$$L_n(\mathcal{A}^*) = \frac{\Sigma_w^2}{n}, \quad (4)$$
where $\Sigma_w = \sum_{i=1}^{K} w_i \sigma_i$. In the following, we write $\lambda_k = \frac{T^*_{k,n}}{n} = \frac{w_k \sigma_k}{\Sigma_w}$ the optimal allocation
proportion for arm k and $\lambda_{\min} = \min_{1 \leq k \leq K} \lambda_k$. Note that a small $\lambda_{\min}$ means a large
disparity of the $w_k \sigma_k$ and, as explained later, provides for the algorithm we build in Section
3 a characterization of the hardness of a problem.
However, in the setting considered here, the $\sigma_k$ are unknown, and thus the optimal allocation
is out of reach. A possible allocation is the uniform strategy $\mathcal{A}^u$, i.e., such that $T_k^u = \frac{w_k}{\sum_{i=1}^{K} w_i}\, n$. Its performance is
$$L_n(\mathcal{A}^u) = \sum_{k=1}^{K} w_k \cdot \frac{\sum_{k=1}^{K} w_k \sigma_k^2}{n} = \frac{\Sigma_{w,2}}{n},$$
where $\Sigma_{w,2} = \sum_{k=1}^{K} w_k \sigma_k^2$. Note that by Cauchy-Schwartz's inequality, we have $\Sigma_w^2 \leq \Sigma_{w,2}$
with equality if and only if the $(\sigma_k)$ are all equal. Thus $\mathcal{A}^*$ is always at least as good as
$\mathcal{A}^u$. In addition, since $\sum_i w_i = 1$, we have $\Sigma_{w,2} - \Sigma_w^2 = \sum_k w_k (\sigma_k - \Sigma_w)^2$. The difference
between those two quantities is the weighted quadratic variation of the $\sigma_k$ around their
weighted mean $\Sigma_w$. In other words, it is the variance of the $(\sigma_k)_{1 \leq k \leq K}$. As a result the
gain of $\mathcal{A}^*$ compared to $\mathcal{A}^u$ grows with the disparity of the $\sigma_k$.
We would like to do better than the uniform strategy by considering an adaptive strategy $\mathcal{A}$
that would estimate the $\sigma_k$ at the same time as it tries to implement an allocation strategy
as close as possible to the optimal allocation algorithm $\mathcal{A}^*$. This introduces a natural
trade-off between the exploration needed to improve the estimates of the variances and the
exploitation of the current estimates to allocate the pulls nearly-optimally.
In order to assess how well $\mathcal{A}$ solves this trade-off and manages to sample according to the
true standard deviations without knowing them in advance, we compare its performance to
that of the optimal allocation strategy $\mathcal{A}^*$. For this purpose we define the notion of regret
of an adaptive algorithm $\mathcal{A}$ as the difference between the performance loss incurred by the
algorithm and the optimal algorithm:
$$R_n(\mathcal{A}) = L_n(\mathcal{A}) - L_n(\mathcal{A}^*). \quad (5)$$
The regret indicates how much we lose in terms of expected quadratic estimation error
by not knowing in advance the standard deviations $(\sigma_k)$. Note that since $L_n(\mathcal{A}^*) = \frac{\Sigma_w^2}{n}$,
a consistent strategy i.e., asymptotically equivalent to the optimal strategy, is obtained
whenever its regret is neglectable compared to 1/n.
5 Static means that the number of pulls allocated to each arm does not depend on the received
samples.
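To make Equations 3-5 concrete, here is a small numerical sketch (ours, not from the paper) that computes the optimal static allocation and the losses of $\mathcal{A}^*$ and $\mathcal{A}^u$ for a toy set of strata; the weights and standard deviations are made-up illustrative values.

```python
import numpy as np

# Hypothetical strata: weights w_k (summing to 1) and standard deviations sigma_k.
w = np.array([0.2, 0.3, 0.5])
sigma = np.array([1.0, 0.1, 2.0])
n = 10000  # total sampling budget

# Optimal static allocation (Eq. 3): T*_k proportional to w_k * sigma_k.
lam = w * sigma / np.sum(w * sigma)   # optimal proportions lambda_k
T_star = lam * n

# Losses: optimal allocation (Eq. 4) versus uniform allocation.
Sigma_w = np.sum(w * sigma)
Sigma_w2 = np.sum(w * sigma**2)
L_opt = Sigma_w**2 / n                # L_n(A*)
L_unif = Sigma_w2 / n                 # L_n(A^u), always >= L_n(A*)

print("lambda_k:", lam, "lambda_min:", lam.min())
print("L_n(A*) = %.3e, L_n(A^u) = %.3e" % (L_opt, L_unif))
```

The gap between the two printed losses is exactly the weighted variance of the $\sigma_k$ divided by n, as discussed above.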
3 Allocation based on Monte Carlo Upper Confidence Bound
3.1 The algorithm
In this section, we introduce our adaptive algorithm for the allocation problem, called Monte
Carlo Upper Confidence Bound (MC-UCB). The algorithm computes a high-probability
bound on the standard deviation of each arm and samples the arms proportionally to their
bounds times the corresponding weights. The MC-UCB algorithm, $\mathcal{A}_{MC\text{-}UCB}$, is described
in Figure 1. It requires three parameters as inputs: $c_1$ and $c_2$ which are related to the
shape of the distributions (see Assumption 1), and $\delta$ which defines the confidence level of
the bound. In Subsection 4.2, we discuss a way to reduce the number of parameters from
three to one. The amount of exploration of the algorithm can be adapted by properly tuning
these parameters.

Input: $c_1$, $c_2$, $\delta$. Let $b = 2\sqrt{2\log(2/\delta)}\sqrt{c_1\log(c_2/\delta)} + \frac{\sqrt{2c_1\delta(1+\log(c_2/\delta))}\, n^{1/2}}{1-\delta}$.
Initialize: Pull each arm twice.
for t = 2K + 1, . . . , n do
    Compute $B_{k,t} = \frac{w_k}{T_{k,t-1}}\Big(\hat{\sigma}_{k,t-1} + b\sqrt{\frac{1}{T_{k,t-1}}}\Big)$ for each arm $1 \leq k \leq K$
    Pull an arm $k_t \in \arg\max_{1 \leq k \leq K} B_{k,t}$
end for
Output: $\hat{\mu}_{k,t}$ for each arm $1 \leq k \leq K$

Figure 1: The pseudo-code of the MC-UCB algorithm. The empirical standard deviations
$\hat{\sigma}_{k,t-1}$ are computed using Equation 6.
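The following is a minimal Python sketch of the MC-UCB loop above (our illustration, not the authors' code). The constant b is treated as a user-supplied exploration parameter of order $c\log(n)$, as recommended in Subsection 4.2, rather than computed from $c_1$, $c_2$, $\delta$; the stratum sampler functions are hypothetical stand-ins.

```python
import numpy as np

def mc_ucb(samplers, w, n, b):
    """samplers: list of K functions, each returning one draw from stratum k.
    w: stratum weights (summing to 1). n: total budget. b: exploration constant."""
    K = len(samplers)
    samples = [[] for _ in range(K)]
    # Initialize: pull each arm twice.
    for k in range(K):
        samples[k].extend([samplers[k](), samplers[k]()])
    for t in range(2 * K, n):
        # Upper confidence index B_{k,t} on w_k * sigma_k / T_k.
        B = np.empty(K)
        for k in range(K):
            T_k = len(samples[k])
            sigma_hat = np.std(samples[k], ddof=1)   # empirical std, Eq. (6)
            B[k] = (w[k] / T_k) * (sigma_hat + b * np.sqrt(1.0 / T_k))
        k_t = int(np.argmax(B))          # pull the arm with the largest index
        samples[k_t].append(samplers[k_t]())
    mu_hat = np.array([np.mean(s) for s in samples])
    return float(np.dot(w, mu_hat)), [len(s) for s in samples]

# Usage with made-up Gaussian strata:
rng = np.random.default_rng(0)
samplers = [lambda: rng.normal(0.0, 1.0),
            lambda: rng.normal(1.0, 0.1),
            lambda: rng.normal(2.0, 2.0)]
w = np.array([0.2, 0.3, 0.5])
n = 5000
est, pulls = mc_ucb(samplers, w, n, b=np.log(n))
print("estimate:", est, "pulls per stratum:", pulls)
```

Running this, the pull counts roughly track the optimal proportions $\lambda_k \propto w_k\sigma_k$ once the standard deviations are well estimated.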
The algorithm starts by pulling each arm twice in rounds t = 1 to 2K. From round t = 2K+1
on, it computes an upper confidence bound $B_{k,t}$ on the standard deviation $\sigma_k$, for each arm
k, and then pulls the one with largest $B_{k,t}$. The upper bounds on the standard deviations
are built by using Theorem 10 in (Maurer and Pontil, 2009)6 and based on the empirical
standard deviation $\hat{\sigma}_{k,t-1}$:
$$\hat{\sigma}_{k,t-1}^2 = \frac{1}{T_{k,t-1} - 1} \sum_{i=1}^{T_{k,t-1}} (X_{k,i} - \hat{\mu}_{k,t-1})^2, \quad (6)$$
where $X_{k,i}$ is the i-th sample received when pulling arm k, and $T_{k,t-1}$ is the number of pulls
allocated to arm k up to time t - 1. After n rounds, MC-UCB returns the empirical mean
$\hat{\mu}_{k,n}$ for each arm $1 \leq k \leq K$.
6 We could also have used the variant reported in (Audibert et al., 2009).
3.2 Regret analysis of MC-UCB
Before stating the main results of this section, we state the assumption that the distributions
are sub-Gaussian, which includes e.g., Gaussian or bounded distributions. See (Buldygin
and Kozachenko, 1980) for more precisions.
Assumption 1 There exist $c_1, c_2 > 0$ such that for all $1 \leq k \leq K$ and any $\epsilon > 0$,
$$\mathbb{P}_{X \sim \nu_k}(|X - \mu_k| \geq \epsilon) \leq c_2 \exp(-\epsilon^2 / c_1). \quad (7)$$
We provide two analyses, a distribution-dependent and a distribution-free, of MC-UCB,
which are respectively interesting in two regimes, i.e., stationary and transitory regimes, of
the algorithm. We will comment on this later in Section 4.
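As a concrete instance (our addition, not from the paper): a Gaussian stratum $\nu_k = \mathcal{N}(\mu_k, \sigma^2)$ satisfies (7) with $c_1 = 2\sigma^2$ and $c_2 = 2$, by the standard Gaussian tail bound $\mathbb{P}(|X - \mu_k| \geq \epsilon) \leq 2\exp(-\epsilon^2/(2\sigma^2))$; a distribution supported on an interval of length c satisfies it with $c_1 = c^2/2$ and $c_2 = 2$ by Hoeffding's inequality.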
A distribution-dependent result: We now report the first bound on the regret of the MC-UCB algorithm. The proof is reported in (Carpentier and Munos, 2011) and relies on
upper- and lower-bounds on $T_{k,t} - T^*_{k,t}$, i.e., the difference in the number of pulls of each
arm compared to the optimal allocation (see Lemma 3).
Theorem 1 Under Assumption 1 and if we choose $c_2$ such that $c_2 \geq 2Kn^{-5/2}$, the regret
of MC-UCB run with parameter $\delta = n^{-7/2}$ with $n \geq 4K$ is bounded as
$$R_n(\mathcal{A}_{MC\text{-}UCB}) \leq \frac{112\,\Sigma_w^2\sqrt{K\,c_1(c_2+2)\log(n)}}{\lambda_{\min}^{3/2}\, n^{3/2}} + \frac{19\,\Sigma_w^2}{\lambda_{\min}^{3}\, n^{2}}\Big(6K + 720\,c_1(c_2+1)\log(n)^2\Big).$$
Note that this result crucially depends on the smallest proportion $\lambda_{\min}$ which is a measure
of the disparity of the standard deviations times their weight. For this reason we refer to it
as "distribution-dependent" result.
A distribution-free result: Now we report our second regret bound that does not depend
on $\lambda_{\min}$ but whose rate is poorer. The proof is reported in (Carpentier and Munos, 2011)
and relies on other upper- and lower-bounds on $T_{k,t} - T^*_{k,t}$ detailed in Lemma 4.
Theorem 2 Under Assumption 1 and if we choose $c_2$ such that $c_2 \geq 2Kn^{-5/2}$, the regret
of MC-UCB run with parameter $\delta = n^{-7/2}$ with $n \geq 4K$ is bounded as
$$R_n(\mathcal{A}_{MC\text{-}UCB}) \leq \frac{200\sqrt{c_1(c_2+2)}\,\Sigma_w K\log(n)}{n^{4/3}} + \frac{365}{n^{3/2}}\Big(129\,c_1(c_2+2)^2 K^2\log(n)^2 + K\,\Sigma_w^2\Big).$$
This bound does not depend on $1/\lambda_{\min}$. Note that the bound is not entirely distribution
free since $\Sigma_w$ appears. But it can be proved using Assumption 1 that $\Sigma_w^2 \leq c_1 c_2$. This is
obtained at the price of the slightly worse rate $\tilde{O}(n^{-4/3})$.
4 Discussion on the results
4.1 Distribution-free versus distribution-dependent
Theorem 1 provides a regret bound of order $\tilde{O}(\lambda_{\min}^{-5/2} n^{-3/2})$, whereas Theorem 2 provides a
bound in $\tilde{O}(n^{-4/3})$ independently of $\lambda_{\min}$. Hence, for a given problem i.e., a given $\lambda_{\min}$, the
distribution-free result of Theorem 2 is more informative than the distribution-dependent
result of Theorem 1 in the transitory regime, that is to say when n is small compared to
$\lambda_{\min}^{-1}$. The distribution-dependent result of Theorem 1 is better in the stationary regime i.e.,
for large n. This distinction reminds us of the difference between distribution-dependent
and distribution-free bounds for the UCB algorithm in usual multi-armed bandits7.
7 The distribution dependent bound is in $O(K \log n/\Delta)$, where $\Delta$ is the difference between the
mean value of the two best arms, and the distribution-free bound is in $O(\sqrt{nK \log n})$ as explained
in (Auer et al., 2002, Audibert and Bubeck, 2009).
Although we do not have a lower bound on the regret yet, we believe that the rate $n^{-3/2}$
cannot be improved for general distributions. As explained in the proof in Appendix B
of (Carpentier and Munos, 2011), this rate is a direct consequence of the high probability
bounds on the estimates of the standard deviations of the arms which are in $O(1/\sqrt{n})$, and
those bounds are tight. A natural question is whether there exists an algorithm with a regret
of order $\tilde{O}(n^{-3/2})$ without any dependence in $\lambda_{\min}^{-1}$. Although we do not have an answer
to this question, we can say that our algorithm MC-UCB does not satisfy this property. In
Appendix D.1 of (Carpentier and Munos, 2011), we give a simple example where $\lambda_{\min} = 0$
and for which the rate of MC-UCB cannot be better than $\tilde{O}(n^{-4/3})$. This shows that our
analysis of MC-UCB is tight.
The problem dependent upper bound is similar to the one provided for GAFS-WL in
(Grover, 2009). We however expect that GAFS-WL has for some problems a sub-optimal
behavior: it is possible to find cases where $R_n(\mathcal{A}_{GAFS\text{-}WL}) = \Omega(1/n)$, see Appendix D.1
of (Carpentier and Munos, 2011). Note however that when there is an arm with 0 standard
deviation, GAFS-WL is likely to perform better than MC-UCB, as it will only sample this
arm $O(\sqrt{n})$ times while MC-UCB samples it $\tilde{O}(n^{2/3})$ times.
4.2 The parameters of the algorithm
Our algorithm takes three parameters as input, namely $c_1$, $c_2$ and $\delta$, but we only use a combination of them in the algorithm, with the introduction of $b = 2\sqrt{2\log(2/\delta)}\sqrt{c_1\log(c_2/\delta)} + \frac{\sqrt{2c_1\delta(1+\log(c_2/\delta))}\, n^{1/2}}{1-\delta}$.
For practical use of the method, it is enough to tune the algorithm
with a single parameter b. By the choice of the value assigned to $\delta$ in the two theorems,
b should be chosen of order $c\log(n)$, where c can be interpreted as a high probability
bound on the range of the samples. We thus simply require a rough estimate of the magnitude of the samples. Note that in the case of bounded distributions, b can be chosen as
$b = 4\sqrt{\tfrac{5}{2}}\, c\log(n)$ where c is a true bound on the variables. This result is easy to deduce
by simplifying Lemma 1 in Appendix A of (Carpentier and Munos, 2011) for the case of
bounded variables.
5 Numerical experiment: Pricing of an Asian option
We consider the pricing problem of an Asian option introduced in (Glasserman et al., 1999)
and later considered in (Kawai, 2010, Etoré and Jourdain, 2010). This uses a Black-Scholes
model with strike C and maturity T. Let $(W(t))_{0 \leq t \leq 1}$ be a Brownian motion that is
discretized at d equidistant times $\{i/d\}_{1 \leq i \leq d}$, which defines the vector $W \in \mathbb{R}^d$ with components $W_i = W(i/d)$. The discounted payoff of the Asian option is defined as a function
of W, by:
$$F(W) = \exp(-rT)\,\max\Big(\frac{1}{d}\sum_{i=1}^{d} S_0 \exp\Big(\big(r - \tfrac{1}{2}s_0^2\big)\frac{iT}{d} + s_0\sqrt{T}\, W_i\Big) - C,\ 0\Big), \quad (8)$$
where $S_0$, r, and $s_0$ are constants, and the price is defined by the expectation $p = \mathbb{E}_W F(W)$.
We want to estimate the price p by Monte-Carlo simulations (by sampling on $W = (W_i)_{1 \leq i \leq d}$). In order to reduce the variance of the estimated price, we can stratify the
space of W. Glasserman et al. (1999) suggest to stratify according to a one dimensional
projection of W, i.e., by choosing a projection vector $u \in \mathbb{R}^d$ and define the strata as the set
of W such that $u \cdot W$ lies in intervals of $\mathbb{R}$. They further argue that the best direction for
stratification is to choose $u = (0, \cdots, 0, 1)$, i.e., to stratify according to the last component
$W_d$ of W. Thus we sample $W_d$ and then conditionally sample $W_1, ..., W_{d-1}$ according to a
Brownian Bridge as explained in (Kawai, 2010). Note that this choice of stratification is also
intuitive since $W_d$ has the largest exponent in the payoff (8), and thus the highest volatility.
Kawai (2010) and Etoré and Jourdain (2010) also use the same direction of stratification.
Like in (Kawai, 2010) we consider 5 strata of equal weight. Since $W_d$ follows a $\mathcal{N}(0,1)$,
the strata correspond to the 20-percentile of a normal distribution. The left plot of Figure
2 represents the cumulative distribution function of $W_d$ and shows the strata in terms of
percentiles of $W_d$. The right plot represents, in dot line, the curve $\mathbb{E}[F(W)|W_d = x]$ versus
$\mathbb{P}(W_d < x)$ parameterized by x, and the box plot represents the expectation and standard
deviations of F(W) conditioned on each stratum. We observe that this stratification produces an important heterogeneity of the standard deviations per stratum, which indicates
that a stratified sampling would be profitable compared to a crude Monte-Carlo sampling.
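As an illustration of this setup, here is a small Python sketch (ours, not the authors' code) of the payoff (8) and of sampling W within a stratum of $W_d$: $W_d$ is drawn from its truncated-Gaussian stratum by inverse-CDF, and $W_1, ..., W_{d-1}$ are filled in with a textbook Brownian bridge, which is not necessarily the exact implementation used in the experiments. The parameter values follow the paper.

```python
import numpy as np
from scipy.stats import norm

S0, r, s0, T, d, C = 100.0, 0.05, 0.30, 1.0, 16, 90.0

def payoff(W):
    # Discounted Asian payoff, Eq. (8); W has shape (d,).
    i = np.arange(1, d + 1)
    S = S0 * np.exp((r - 0.5 * s0**2) * i * T / d + s0 * np.sqrt(T) * W)
    return np.exp(-r * T) * max(S.mean() - C, 0.0)

def sample_in_stratum(k, K, rng):
    # Stratify on W_d ~ N(0, 1): stratum k covers quantiles [k/K, (k+1)/K).
    u = rng.uniform(k / K, (k + 1) / K)
    Wd = norm.ppf(u)
    # Brownian bridge from W(0) = 0 to W(1) = Wd on the grid i/d.
    t = np.arange(1, d + 1) / d
    B = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / d), d))   # unconditioned path
    return B - t * B[-1] + t * Wd                         # pin endpoint to Wd

rng = np.random.default_rng(0)
K = 5  # equal-weight strata
draws = [payoff(sample_in_stratum(k, K, rng)) for k in range(K) for _ in range(200)]
print("rough price estimate:", np.mean(draws))
```

With equal stratum weights and an equal number of draws per stratum, the plain average over all draws is an unbiased stratified estimate of p; MC-UCB instead reallocates the per-stratum sample sizes online.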
[Figure 2 appears here: left panel, the cdf of $W_d$ with the strata boundaries; right panel, "Expectation of the payoff in every strata for $W_d$ with C=90", showing $\mathbb{E}[F(W)|W_d = x]$ against $\mathbb{P}(W_d < x)$.]
Figure 2: Left: Cdf of $W_d$ and the definition of the strata. Right: expectation and standard
deviation of F(W) conditioned on each stratum for a strike C = 90.
We choose the same numerical values as Kawai (2010): $S_0 = 100$, $r = 0.05$, $s_0 = 0.30$, $T = 1$
and $d = 16$. Note that the strike C of the option has a direct impact on the variability of
the strata. Indeed, the larger C, the more probable $F(W) = 0$ for strata with small $W_d$,
and thus, the smaller $\lambda_{\min}$.
Our two main competitors are the SSAA algorithm of Etoré and Jourdain (2010) and GAFS-WL of Grover (2009). We did not compare to (Kawai, 2010) which aims at minimizing the
computational time and not the loss considered here8. SSAA works in $K_r$ rounds of length
$N_k$ where, at each round, it allocates proportionally to the empirical standard deviations
computed in the previous rounds. Etoré and Jourdain (2010) report the asymptotic consistency of the algorithm whenever $k/N_k$ goes to 0 when k goes to infinity. Since their goal is
not to obtain a finite-time performance, they do not mention how to calibrate the length
and number of rounds in practice. We choose the same parameters as in their numerical
experiments (Section 3.2.2 of (Etoré and Jourdain, 2010)) using 3 rounds. In this setting
where we know the budget n at the beginning of the algorithm, GAFS-WL pulls each arm
$a\sqrt{n}$ times and then pulls at time t + 1 the arm $k_{t+1}$ that maximizes $\frac{w_k \hat{\sigma}_{k,t}}{T_{k,t}}$. We set $a = 1$.
As mentioned in Subsection 4.2, an advantage of our algorithm is that it requires a single
parameter to tune. We chose $b = 1000\log(n)$ where 1000 is a high-probability range of the
variables (see right plot of Figure 2). Table 1 reports the performance of MC-UCB, GAFS-WL, SSAA, and the uniform strategy, for different values of strike C i.e., for different values
of $\lambda_{\min}$ and $\Sigma_{w,2}/\Sigma_w^2 = \frac{\sum_k w_k\sigma_k^2}{(\sum_k w_k\sigma_k)^2}$. The total budget is $n = 10^5$. The results are averaged
on 50000 trials. We notice that MC-UCB outperforms SSAA, the uniform strategy, and
GAFS-WL strategy. Note however that, in the case of GAFS-WL strategy, the small gain
could come from the fact that there are more parameters in MC-UCB, and that we were
thus able to adjust them (even if we kept the same parameters for the three values of C).
In the left plot of Figure 3, we plot the rescaled regret $R_n n^{3/2}$, averaged over 50000 trials,
as a function of n, where n ranges from 50 to 5000. The value of the strike is C = 120.
Again, we notice that MC-UCB performs better than Uniform and SSAA because it adapts
8 In that paper, the computational costs for each stratum vary, i.e. it is faster to sample in some
strata than in others, and the aim of their paper is to minimize the global computational cost while
achieving a given performance.
| C | $1/\lambda_{\min}$ | $\Sigma_{w,2}/\Sigma_w^2$ | Uniform | SSAA | GAFS-WL | MC-UCB |
|---|---|---|---|---|---|---|
| 60 | 6.18 | 1.06 | 2.52 10^-2 | 5.87 10^-3 | 8.25 10^-4 | 7.29 10^-4 |
| 90 | 15.29 | 1.24 | 3.32 10^-2 | 6.14 10^-3 | 8.58 10^-4 | 8.07 10^-4 |
| 120 | 744.25 | 3.07 | 3.56 10^-2 | 6.22 10^-3 | 9.89 10^-4 | 9.28 10^-4 |

Table 1: Characteristics of the distributions ($\lambda_{\min}^{-1}$ and $\Sigma_{w,2}/\Sigma_w^2$) and regret of the Uniform,
SSAA, GAFS-WL, and MC-UCB strategies, for different values of the strike C.
faster to the distributions of the strata. But it performs very similarly to GAFS-WL. In
addition, it seems that the regret of Uniform and SSAA grows faster than the rate $n^{3/2}$,
whereas MC-UCB, as well as GAFS-WL, grow with this rate. The right plot focuses on the
MC-UCB algorithm and rescales the y-axis to observe the variations of its rescaled regret
more accurately. The curve grows first and then stabilizes. This could correspond to the
two regimes discussed previously.
[Figure 3 appears here: two plots of the rescaled regret $R_n n^{3/2}$ versus n (from 500 to 5000) for C = 120.]
Figure 3: Left: Rescaled regret ($R_n n^{3/2}$) of the Uniform, SSAA, GAFS-WL, and MC-UCB strategies.
Right: zoom on the rescaled regret for MC-UCB that illustrates the two regimes.
6 Conclusions
We provided a finite-time analysis for stratified sampling for Monte-Carlo in the case of
fixed strata. We reported two bounds: (i) a distribution dependent bound $\tilde{O}(n^{-3/2}\lambda_{\min}^{-5/2})$
which is of interest when n is large compared to a measure of disparity $\lambda_{\min}^{-1}$ of the standard
deviations (stationary regime), and (ii) a distribution free bound in $\tilde{O}(n^{-4/3})$ which is of
interest when n is small compared to $\lambda_{\min}^{-1}$ (transitory regime).
Possible directions for future work include: (i) making the MC-UCB algorithm anytime
(i.e. not requiring the knowledge of n), (ii) investigating whether there exists an algorithm
with $\tilde{O}(n^{-3/2})$ regret without dependency on $\lambda_{\min}^{-1}$, and (iii) deriving distribution-dependent
and distribution-free lower-bounds for this problem.
Acknowledgements
We thank András Antos for several comments that helped us to improve the quality of the paper. This research was partially supported by Region Nord-Pas-de-Calais Regional Council,
French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS),
and by Pascal-2.
References
András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic
noise. Theoretical Computer Science, 411:2712-2728, June 2010.
B. Arouna. Adaptative Monte Carlo method, a variance reduction technique. Monte Carlo
Methods and Applications, 10(1):1-24, 2004.
J.Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In
22nd Annual Conference on Learning Theory, 2009.
J.Y. Audibert, R. Munos, and Cs. Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902,
2009.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit
problem. Machine Learning, 47(2):235-256, 2002.
V.V. Buldygin and Y.V. Kozachenko. Sub-Gaussian random variables. Ukrainian Mathematical Journal, 32(6):483-489, 1980.
A. Carpentier and R. Munos. Finite-time analysis of stratified sampling for Monte Carlo.
Technical Report inria-00636924, INRIA, 2011.
A. Carpentier, A. Lazaric, M. Ghavamzadeh, R. Munos, and P. Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Algorithmic Learning
Theory, pages 189-203. Springer, 2011.
Pierre Etoré and Benjamin Jourdain. Adaptive optimal allocation in stratified sampling
methods. Methodol. Comput. Appl. Probab., 12(3):335-360, September 2010.
Pierre Etoré, Gersende Fort, Benjamin Jourdain, and Éric Moulines. On adaptive stratification. Ann. Oper. Res., 2011. To appear.
P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer Verlag, 2004. ISBN
0387004513.
P. Glasserman, P. Heidelberger, and P. Shahabuddin. Asymptotically optimal importance
sampling and stratification for pricing path-dependent options. Mathematical Finance, 9(2):117-152, 1999.
V. Grover. Active learning and its application to heteroscedastic problems. Department of
Computing Science, Univ. of Alberta, MSc thesis, 2009.
R. Kawai. Asymptotically optimal allocation of stratified sampling with adaptive variance reduction by strata. ACM Transactions on Modeling and Computer Simulation
(TOMACS), 20(2):1-17, 2010. ISSN 1049-3301.
A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In
Proceedings of the Twenty-Second Annual Conference on Learning Theory, pages 115-124,
2009.
S.I. Resnick. A Probability Path. Birkhäuser, 1999.
R.Y. Rubinstein and D.P. Kroese. Simulation and the Monte Carlo Method. Wiley-Interscience, 2008. ISBN 0470177942.
3,563 | 4,226 | Semantic Labeling of 3D Point Clouds for
Indoor Scenes
Hema Swetha Koppula∗, Abhishek Anand∗, Thorsten Joachims, and Ashutosh Saxena
Department of Computer Science, Cornell University.
{hema,aa755,tj,asaxena}@cs.cornell.edu
Abstract
Inexpensive RGB-D cameras that give an RGB image together with depth data
have become widely available. In this paper, we use this data to build 3D point
clouds of full indoor scenes such as an office and address the task of semantic labeling of these 3D point clouds. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and
shape cues, object co-occurence relationships and geometric relationships. With a
large number of object classes and relations, the model?s parsimony becomes important and we address that by using multiple types of edge potentials. The model
admits efficient approximate inference, and we train it using a maximum-margin
learning approach. In our experiments over a total of 52 3D scenes of homes and
offices (composed from about 550 views, having 2495 segments labeled with 27
object classes), we get a performance of 84.06% in labeling 17 object classes for
offices, and 73.38% in labeling 17 object classes for home scenes. Finally, we
applied these algorithms successfully on a mobile robot for the task of finding
objects in large cluttered rooms.1
1 Introduction
Inexpensive RGB-D sensors that augment an RGB image with depth data have recently become
widely available. At the same time, years of research on SLAM (Simultaneous Localization and
Mapping) now make it possible to reliably merge multiple RGB-D images into a single point cloud,
easily providing an approximate 3D model of a complete indoor scene (e.g., a room). In this paper,
we explore how this move from part-of-scene 2D images to full-scene 3D point clouds can improve
the richness of models for object labeling.
In the past, a significant amount of work has been done in semantic labeling of 2D images. However,
a lot of valuable information about the shape and geometric layout of objects is lost when a 2D
image is formed from the corresponding 3D world. A classifier that has access to a full 3D model,
can access important geometric properties in addition to the local shape and appearance of an object.
For example, many objects occur in characteristic relative geometric configurations (e.g., a monitor
is almost always on a table), and many objects consist of visually distinct parts that occur in a
certain relative configuration. More generally, a 3D model makes it easy to reason about a variety
of properties, which are based on 3D distances, volume and local convexity.
Some recent works attempt to first infer the geometric layout from 2D images for improving the
object detection [12, 14, 28]. However, such a geometric layout is not accurate enough to give
significant improvement. Other recent work [35] considers labeling a scene using a single 3D view
(i.e., a 2.5D representation). In our work, we first use SLAM in order to compose multiple views
from a Microsoft Kinect RGB-D sensor together into one 3D point cloud, providing each RGB
pixel with an absolute 3D location in the scene. We then (over-)segment the scene and predict
semantic labels for each segment (see Fig. 1). We predict not only coarse classes like in [1, 35] (i.e.,
1 This work was first presented at [16].
∗ indicates equal contribution.
Figure 1: Office scene (top) and Home (bottom) scene with the corresponding label coloring above the images.
The left-most is the original point cloud, the middle is the ground truth labeling and the right most is the point
cloud with predicted labels.
wall, ground, ceiling, building), but also label individual objects (e.g., printer, keyboard, mouse).
Furthermore, we model rich relational information beyond an associative coupling of labels [1].
In this paper, we propose and evaluate the first model and learning algorithm for scene understanding that exploits rich relational information derived from the full-scene 3D point cloud for object
labeling. In particular, we propose a graphical model that naturally captures the geometric relationships of a 3D scene. Each 3D segment is associated with a node, and pairwise potentials
model the relationships between segments (e.g., co-planarity, convexity, visual similarity, object
co-occurrences and proximity). The model admits efficient approximate inference [25], and we
show that it can be trained using a maximum-margin approach [7, 31, 34] that globally minimizes
an upper bound on the training loss. We model both associative and non-associative coupling of
labels. With a large number of object classes, the model's parsimony becomes important. Some
features are better indicators of label similarity, while other features are better indicators of nonassociative relations such as geometric arrangement (e.g., on-top-of, in-front-of ). We therefore introduce parsimony in the model by using appropriate clique potentials rather than using general
clique potentials. Our model is highly flexible and our software is available as a ROS package at:
http://pr.cs.cornell.edu/sceneunderstanding
To empirically evaluate our model and algorithms, we perform several experiments over a total of
52 scenes of two types: offices and homes. These scenes were built from about 550 views from
the Kinect sensor, and they are also available for public use. We consider labeling each segment
(from a total of about 50 segments per scene) into 27 classes (17 for offices and 17 for homes,
with 7 classes in common). Our experiments show that our method, which captures several local
cues and contextual properties, achieves an overall performance of 84.06% on office scenes and
73.38% on home scenes. We also consider the problem of labeling 3D segments with multiple
attributes meaningful to robotics context (such as small objects that can be manipulated, furniture,
etc.). Finally, we successfully applied these algorithms on mobile robots for the task of finding
objects in cluttered office scenes.
2 Related Work
There is a huge body of work in the area of scene understanding and object recognition from 2D images. Previous works focus on several different aspects: designing good local features such as HOG
(histogram-of-gradients) [5] and bag of words [4], and designing good global (context) features such
as GIST features [33]. However, these approaches do not consider the relative arrangement of the
parts of the object or of multiple objects with respect to each other. A number of works propose
models that explicitly capture the relations between different parts of the object e.g., Pedro et al.'s
part-based models [6], and between different objects in 2D images [13, 14]. However, a lot of valuable information about the shape and geometric layout of objects is lost when a 2D image is formed
from the corresponding 3D world. In some recent works, 3D layout or depths have been used for
improving object detection (e.g., [11, 12, 14, 20, 21, 22, 27, 28]). Here a rough 3D scene geometry
(e.g., main surfaces in the scene) is inferred from a single 2D image or a stereo video stream, respectively. However, the estimated geometry is not accurate enough to give significant improvements.
With 3D data, we can more precisely determine the shape, size and geometric orientation of the
objects, and several other properties and therefore capture much stronger context.
The recent availability of synchronized videos of both color and depth obtained from RGB-D
(Kinect-style) depth cameras, shifted the focus to making use of both visual as well as shape features
for object detection [9, 18, 19, 24, 26] and 3D segmentation (e.g., [3]). These methods demonstrate
2
that augmenting visual features with 3D information can enhance object detection in cluttered, realworld environments. However, these works do not make use of the contextual relationships between
various objects which have been shown to be useful for tasks such as object detection and scene
understanding in 2D images. Our goal is to perform semantic labeling of indoor scenes by modeling
and learning several contextual relationships.
There is also some recent work in labeling outdoor scenes obtained from LIDAR data into a few geometric classes (e.g., ground, building, trees, vegetation, etc.). [8, 30] capture context by designing
node features and [36] do so by stacking layers of classifiers; however these methods do not model
the correlation between the labels. Some of these works model some contextual relationships in the
learning model itself. For example, [1, 23] use associative Markov networks in order to favor similar
labels for nodes in the cliques. However, many relative features between objects are not associative
in nature. For example, the relationship "on top of" does not hold in between two ground segments,
i.e., a ground segment cannot be "on top of" another ground segment. Therefore, using an associa-
designed for outdoor scenes with LIDAR data (without RGB values) and therefore would not apply
directly to RGB-D data in indoor environments. Furthermore, these methods only consider very few
geometric classes (between three to five classes) in outdoor environments, whereas we consider a
large number of object classes for labeling the indoor RGB-D data.
The most related work to ours is [35], where they label the planar patches in a point-cloud of an
indoor scene with four geometric labels (walls, floors, ceilings, clutter). They use a CRF to model
geometrical relationships such as orthogonal, parallel, adjacent, and coplanar. The learning method
for estimating the parameters was based on maximizing the pseudo-likelihood resulting in a suboptimal learning algorithm. In comparison, our basic representation is a 3D segment (as compared
to planar patches) and we consider a much larger number of classes (beyond just the geometric
classes). We also capture a much richer set of relationships between pairs of objects, and use a
principled max-margin learning method to learn the parameters of our model.
3 Approach
We now outline our approach, including the model, its inference methods, and the learning algorithm. Our input is multiple Kinect RGB-D images of a scene (i.e., a room) stitched into a single 3D
point cloud using RGBDSLAM.2 Each such point cloud is then over-segmented based on smoothness (i.e., difference in the local surface normals) and continuity of surfaces (i.e., distance between
the points). These segments are the atomic units in our model. Our goal is to label each of them.
Before getting into the technical details of the model, the following outlines the properties we aim
to capture in our model:
Visual appearance. The reasonable success of object detection in 2D images shows that visual
appearance is a good indicator for labeling scenes. We therefore model the local color, texture,
gradients of intensities, etc. for predicting the labels. In addition, we also model the property that if
nearby segments are similar in visual appearance, they are more likely to belong to the same object.
Local shape and geometry. Objects have characteristic shapes: for example, a table is horizontal,
a monitor is vertical, a keyboard is uneven, and a sofa is usually smoothly curved. Furthermore,
parts of an object often form a convex shape. We compute 3D shape features to capture this.
Geometrical context. Many sets of objects occur in characteristic relative geometric configurations.
For example, a monitor is always on-top-of a table, chairs are usually found near tables, a keyboard
is in-front-of a monitor. This means that our model needs to capture non-associative relationships
(i.e., that neighboring segments differ in their labels in specific patterns).
Note the examples given above are just illustrative. For any particular practical application, there
will likely be other properties that could also be included. As demonstrated in the following section,
our model is flexible enough to include a wide range of features.
3.1 Model Formulation
We model the three-dimensional structure of a scene using a model isomorphic to a Markov Random Field with log-linear node and pairwise edge potentials. Given a segmented point cloud
$x = (x_1, ..., x_N)$ consisting of segments $x_i$, we aim to predict a labeling $y = (y_1, ..., y_N)$ for
the segments. Each segment label $y_i$ is itself a vector of K binary class labels $y_i = (y_i^1, ..., y_i^K)$,
with each $y_i^k \in \{0, 1\}$ indicating whether a segment i is a member of class k. Note that multiple $y_i^k$
can be 1 for each segment (e.g., a segment can be both a "chair" and a "movable object").
2
http://openslam.org/rgbdslam.html
For a segmented point cloud x, the prediction $\hat{y}$ is computed as the argmax of a discriminant function $f_w(x, y)$ that is parameterized by a vector of weights w:
$$\hat{y} = \arg\max_y f_w(x, y) \quad (1)$$
The discriminant function captures the dependencies between segment labels as defined by an undirected graph (V, E) of vertices V = {1, ..., N} and edges $E \subseteq V \times V$. We describe below how this graph is derived from the spatial proximity of the segments. Given (V, E), we define the following discriminant function based on individual segment features $\phi_n(i)$ and edge features $\phi_t(i, j)$:
$$f_w(y, x) = \sum_{i \in V} \sum_{k=1}^{K} y_i^k\, w_n^k \cdot \phi_n(i) + \sum_{(i,j) \in E} \sum_{T_t \in \mathcal{T}} \sum_{(l,k) \in T_t} y_i^l y_j^k\, w_t^{lk} \cdot \phi_t(i, j) \quad (2)$$

Local Shape and Geometry (node features):
N4. linearness ($\lambda_{i0} - \lambda_{i1}$), planarness ($\lambda_{i1} - \lambda_{i2}$)
N5. Scatter: $\lambda_{i0}$
N6. Vertical component of the normal: $\hat{n}_i^z$
N7. Vertical position of centroid: $c_i^z$
N8. Vert. and Hor. extent of bounding box
N9. Dist. from the scene boundary (Fig. 2)
Edge features:
E3. Horizontal distance b/w centroids.
E4. Vertical displacement b/w centroids: ($c_i^z - c_j^z$)
E5. Angle between normals (dot product): $\hat{n}_i \cdot \hat{n}_j$
E6. Diff. in angle with vert.: $\cos^{-1}(n_i^z) - \cos^{-1}(n_j^z)$
E8. Dist. between closest points: $\min_{u \in s_i, v \in s_j} d(u, v)$ (Fig. 2)
E9. Rel. position from camera (in front of/behind). (Fig. 2)
Table 1: Node and edge features. Some node features capture an object's location above ground and its shape; some capture the spatial location of an object in the scene (e.g., N9).

We connect two segments (nodes) i and j by an edge if there exists a point in segment i and a point
in segment j which are less than context range distance apart. This captures the closest distance
between two segments (as compared to centroid distance between the segments); we study the
effect of context range more in Section 4. The edge features $\phi_t(i, j)$ (Table 1-right) consist of
associative features (E1-E2) based on visual appearance and local shape, as well as non-associative
features (E3-E8) that capture the tendencies of two objects to occur in certain configurations.
Note that our features are insensitive to horizontal translation and rotation of the camera. However,
our features place a lot of emphasis on the vertical direction because gravity influences the shape
and relative positions of objects to a large extent.

Figure 2: Illustration of a few features. (Left) Features N11 and E9. Segment i is in front of segment j if
$rh_i < rh_j$. (Middle) Two connected segments i and j form a convex shape if $(r_i - r_j) \cdot \hat{n}_i \leq 0$ and
$(r_j - r_i) \cdot \hat{n}_j \leq 0$. (Right) Illustrating feature E8.
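As a sketch of how such a graph could be built (our illustration; the segment point arrays and the context-range value are hypothetical), the following connects two segments whenever their closest pair of points is within the context range, matching feature E8:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_edges(segments, context_range=0.6):
    """segments: list of (m_i, 3) arrays of 3D points, one per segment.
    Returns the edge set E as pairs (i, j) whose minimum point-to-point
    distance is below context_range."""
    trees = [cKDTree(pts) for pts in segments]
    edges = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            # Smallest distance from any point of segment j to segment i.
            d_min = trees[i].query(segments[j], k=1)[0].min()
            if d_min < context_range:
                edges.append((i, j))
    return edges
```

Using the closest-point distance rather than centroid distance keeps large, elongated segments (e.g., walls) connected to the objects that touch them.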
3.2.1 Computing Predictions
Solving the argmax in Eq. (1) for the discriminant function in Eq. (2) is NP hard. However, its
equivalent formulation as the following mixed-integer program has a linear relaxation with several
desirable properties.
$$\hat{y} = \arg\max_y \max_z \sum_{i \in V} \sum_{k=1}^{K} y_i^k\, w_n^k \cdot \phi_n(i) + \sum_{(i,j) \in E} \sum_{T_t \in \mathcal{T}} \sum_{(l,k) \in T_t} z_{ij}^{lk}\, w_t^{lk} \cdot \phi_t(i, j) \quad (3)$$
$$\forall i, j, l, k : \quad z_{ij}^{lk} \leq y_i^l, \quad z_{ij}^{lk} \leq y_j^k, \quad y_i^l + y_j^k \leq z_{ij}^{lk} + 1, \quad z_{ij}^{lk}, y_i^l \in \{0, 1\} \quad (4)$$
Note that the products $y_i^l y_j^k$ have been replaced by auxiliary variables $z_{ij}^{lk}$. Relaxing the variables $z_{ij}^{lk}$
and $y_i^l$ to the interval [0, 1] leads to a linear program that can be shown to always have half-integral
solutions (i.e. $y_i^l$ only take values {0, 0.5, 1} at the solution) [10]. Furthermore, this relaxation can
also be solved as a quadratic pseudo-Boolean optimization problem using a graph-cut method [25],
which is orders of magnitude faster than using a general purpose LP solver (i.e., 10 sec for labeling
a typical scene in our experiments). Therefore, we refer to the solution of this relaxation as $\hat{y}_{cut}$.
The relaxation solution $\hat{y}_{cut}$ has an interesting property called Persistence [2, 10]. Persistence says
that any segment for which the value of $y_i^l$ is integral in $\hat{y}_{cut}$ (i.e. does not take value 0.5) is labeled
just like it would be in the optimal mixed-integer solution.5
Since every segment in our experiments is in exactly one class, we also consider the linear relaxation
from above with the additional constraint $\forall i : \sum_{j=1}^{K} y_i^j = 1$. This problem can no longer be solved
via graph cuts and is not half-integral. We refer to its solution as $\hat{y}_{LP}$. Computing $\hat{y}_{LP}$ for a
scene takes 11 minutes on average4. Finally, we can also compute the exact mixed integer solution
including the additional constraint $\forall i : \sum_{j=1}^{K} y_i^j = 1$ using a general-purpose MIP solver4. We set
a time limit of 30 minutes for the MIP solver. This takes 18 minutes on average for a scene. All
runtimes are for single CPU implementations using 17 classes.
4 http://www.tfinley.net/software/pyglpk/readme.html
5 (We use such multi-labelings in our attribute experiments where each segment can have multiple attributes,
but not in segment labeling experiments where each segment can have only one label.)
The discriminant function captures the dependencies between segment labels as defined by an undirected graph (V, E) of vertices V = {1, ..., N} and edges E ⊆ V × V. We describe in Section 3.2 how this graph is derived from the spatial proximity of the segments. Given (V, E), we define the following discriminant function based on individual segment features φ_n(i) and edge features φ_t(i, j), as further described below.
\[
f_w(y, x) = \sum_{i \in V} \sum_{k=1}^{K} y_i^k\, w_n^k \cdot \phi_n(i) \;+\; \sum_{(i,j) \in E} \sum_{T_t \in \mathcal{T}} \sum_{(l,k) \in T_t} y_i^l y_j^k\, w_t^{lk} \cdot \phi_t(i, j) \tag{2}
\]
The node feature map φ_n(i) describes segment i through a vector of features, and there is one weight vector for each of the K classes. Examples of such features are the ones capturing local visual appearance, shape and geometry. The edge feature maps φ_t(i, j) describe the relationship between segments i and j. Examples of edge features are the ones capturing similarity in visual appearance and geometric context.³ There may be multiple types t of edge feature maps φ_t(i, j), and each type has a graph over the K classes with edges T_t. If T_t contains an edge between classes l and k, then this feature map and a weight vector w_t^{lk} are used to model the dependencies between classes l and k. If the edge is not present in T_t, then φ_t(i, j) is not used.
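To make Eq. (2) concrete, here is a small sketch (ours, not the authors' code) that evaluates f_w(y, x) for a given labeling; the container layouts are assumptions chosen for readability.

```python
# Illustrative sketch: evaluating the discriminant function of Eq. (2).
import numpy as np

def discriminant(y, phi_n, phi_t, w_n, w_t, edges, edge_types):
    """y[i] = class label of segment i (0..K-1).
    phi_n[i]: node feature vector of segment i (numpy array).
    phi_t[(i, j)][t]: feature vector of type t for edge (i, j).
    w_n[k]: weight vector for class k.
    w_t[t][(l, k)]: weight vector for edge type t and class pair (l, k),
                    present only if (l, k) is in the type's edge set T_t.
    """
    score = sum(w_n[y[i]].dot(phi_n[i]) for i in range(len(y)))
    for (i, j) in edges:
        for t in edge_types:
            lk = (y[i], y[j])
            if lk in w_t[t]:  # class pair modeled by this potential type
                score += w_t[t][lk].dot(phi_t[(i, j)][t])
    return score
```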
We say that a type t of edge features is modeled by an associative edge potential if T_t = {(k, k) | ∀k = 1..K}. It is modeled by a non-associative edge potential if T_t = {(l, k) | ∀l, k = 1..K}. Finally, it is modeled by an object-associative edge potential if T_t = {(l, k) | ∃object: l, k ∈ parts(object)}.
Parsimonious model. In our experiments we distinguished between two types of edge feature maps: "object-associative" features φ_oa(i, j) used between classes that are parts of the same object (e.g., "chair base", "chair back" and "chair back rest"), and "non-associative" features φ_na(i, j) that are used between any pair of classes. Examples of features in the object-associative feature map φ_oa(i, j) include similarity in appearance, co-planarity, and convexity, i.e., features that indicate whether two adjacent segments belong to the same class or object. A key reason for distinguishing between object-associative and non-associative features is parsimony of the model. In this parsimonious model (referred to as svm mrf parsimon), we model object-associative features using object-associative edge potentials and non-associative features as non-associative edge potentials. As not all edge features are non-associative, we avoid learning weight vectors for relationships which do not exist. Note that |T_na| ≫ |T_oa| since, in practice, the number of parts of an object is much less than K. Due to this, the model we learn with both types of edge features has far fewer parameters than a model learnt with all edge features as non-associative features.
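A small sketch (ours) of how the two class-pair sets could be constructed; the `parts` mapping is a made-up example.

```python
# Building the class-pair sets for the two edge potential types.
from itertools import product

def build_edge_sets(num_classes, parts):
    # Object-associative: only pairs of classes belonging to the same object.
    T_oa = {(l, k) for classes in parts.values()
                   for l in classes for k in classes}
    # Non-associative: all ordered class pairs.
    T_na = set(product(range(num_classes), repeat=2))
    return T_oa, T_na

# Example: a 'chair' object composed of (hypothetical) classes 3, 4, 5.
T_oa, T_na = build_edge_sets(17, {"chair": [3, 4, 5]})
assert len(T_na) == 17 * 17 and (3, 5) in T_oa  # |T_na| >> |T_oa|
```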
3.2 Features
Table 1 summarizes the features used in our experiments. λ_i0, λ_i1 and λ_i2 are the three eigenvalues of the scatter matrix computed from the points of segment i, in decreasing order. c_i is the centroid of segment i. r_i is the ray vector to the centroid of segment i from the position of the camera in which it was captured. r_hi is the projection of r_i on the horizontal plane. n̂_i is the unit normal of segment i which points towards the camera (r_i · n̂_i < 0). The node features φ_n(i) consist of visual appearance features based on the histogram of HSV values and the histogram of gradients (HOG), as well as local shape and geometry features that capture properties such as how planar a segment is, its absolute
³ Even though it is not represented in the notation, note that both the node feature map φ_n(i) and the edge feature maps φ_t(i, j) can compute their features based on the full x, not just x_i and x_j.
Node features for segment i.
  Description                                                           Count
  Visual Appearance                                                        48
  N1. Histogram of HSV color values                                        14
  N2. Average HSV color values                                              3
  N3. Average of HOG features of the blocks in image spanned
      by the points of a segment                                           31
  Local Shape and Geometry                                                  8
  N4. linearness (λ_i0 − λ_i1), planarness (λ_i1 − λ_i2)                    2
  N5. Scatter: λ_i0                                                         1
  N6. Vertical component of the normal: n̂_iz                                1
  N7. Vertical position of centroid: c_iz                                   1
  N8. Vert. and Hor. extent of bounding box                                 2
  N9. Dist. from the scene boundary (Fig. 2)                                1

Edge features for (segment i, segment j).
  Description                                                           Count
  Visual Appearance (associative)                                           3
  E1. Difference of avg HSV color values                                    3
  Local Shape and Geometry (associative)                                    2
  E2. Coplanarity and convexity (Fig. 2)                                    2
  Geometric context (non-associative)                                       6
  E3. Horizontal distance b/w centroids                                     1
  E4. Vertical displacement b/w centroids: (c_iz − c_jz)                     1
  E5. Angle between normals (dot product): n̂_i · n̂_j                         1
  E6. Diff. in angle with vert.: cos⁻¹(n_iz) − cos⁻¹(n_jz)                   1
  E7. Dist. between closest points: min_{u∈s_i, v∈s_j} d(u, v) (Fig. 2)      1
  E8. Rel. position from camera (in front of / behind) (Fig. 2)             1

Table 1: Node and edge features. Category rows give subtotals.
location above ground, and its shape. Some features capture spatial location of an object in the scene
(e.g., N9).
We connect two segments (nodes) i and j by an edge if there exists a point in segment i and a point
in segment j which are less than context range distance apart. This captures the closest distance
between two segments (as compared to centroid distance between the segments); we study the
effect of context range more in Section 4. The edge features φ_t(i, j) (Table 1, right) consist of
associative features (E1-E2) based on visual appearance and local shape, as well as non-associative
features (E3-E8) that capture the tendencies of two objects to occur in certain configurations.
Note that our features are insensitive to horizontal translation and rotation of the camera. However,
our features place a lot of emphasis on the vertical direction because gravity influences the shape
and relative positions of objects to a large extent.
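For intuition, here is a sketch (our code, not the authors') of two of these quantities: the N4 linearness/planarness features from the scatter-matrix eigenvalues, and the closest-point distance that serves both as an edge feature and as the edge-connection test; the 0.3 m default is the office-scene value reported in Section 4.

```python
# Sketch of two geometry computations used above.
import numpy as np
from scipy.spatial.distance import cdist

def linearness_planarness(points):           # points: (n, 3) array
    centered = points - points.mean(axis=0)
    lam = np.sort(np.linalg.eigvalsh(centered.T @ centered))[::-1]
    return lam[0] - lam[1], lam[1] - lam[2]   # N4: linearness, planarness

def min_distance(points_i, points_j):
    return cdist(points_i, points_j).min()    # distance between closest points

def connected(points_i, points_j, context_range=0.3):  # meters
    # Two segments get an edge if their closest points are within range.
    return min_distance(points_i, points_j) < context_range
```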
3.2.1 Computing Predictions
Solving the argmax in Eq. (1) for the discriminant function in Eq. (2) is NP hard. However, its
equivalent formulation as the following mixed-integer program has a linear relaxation with several
desirable properties.
\[
\hat y = \operatorname{argmax}_y \max_z \sum_{i \in V} \sum_{k=1}^{K} y_i^k\, w_n^k \cdot \phi_n(i) + \sum_{(i,j) \in E} \sum_{T_t \in \mathcal{T}} \sum_{(l,k) \in T_t} z_{ij}^{lk}\, w_t^{lk} \cdot \phi_t(i, j) \tag{3}
\]
\[
\forall i, j, l, k:\;\; z_{ij}^{lk} \le y_i^l, \quad z_{ij}^{lk} \le y_j^k, \quad y_i^l + y_j^k \le z_{ij}^{lk} + 1, \quad z_{ij}^{lk}, y_i^l \in \{0, 1\} \tag{4}
\]

Note that the products y_i^l y_j^k have been replaced by auxiliary variables z_ij^{lk}. Relaxing the variables z_ij^{lk} and y_i^l to the interval [0, 1] leads to a linear program that can be shown to always have half-integral solutions (i.e., y_i^l only takes values {0, 0.5, 1} at the solution) [10]. Furthermore, this relaxation can also be solved as a quadratic pseudo-Boolean optimization problem using a graph-cut method [25], which is orders of magnitude faster than using a general-purpose LP solver (i.e., 10 sec for labeling a typical scene in our experiments). Therefore, we refer to the solution of this relaxation as ŷ_cut. The relaxation solution ŷ_cut has an interesting property called Persistence [2, 10]. Persistence says that any segment for which the value of y_i^l is integral in ŷ_cut (i.e., does not take value 0.5) is labeled just like it would be in the optimal mixed-integer solution.
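As a toy illustration (entirely our own construction, with made-up scores), the relaxation of Eqs. (3)-(4) can be written down for two segments and two classes and handed to an off-the-shelf LP solver; inspecting which entries of y come out integral shows which labels persistence fixes.

```python
# Toy LP relaxation of Eqs. (3)-(4): 2 segments, 2 classes, 1 edge.
import numpy as np
from scipy.optimize import linprog

theta_n = np.array([[1.0, -0.5], [0.2, 0.3]])    # node scores theta_n[i][k]
theta_e = np.array([[0.4, -1.0], [-1.0, 0.4]])   # edge scores theta_e[l][k]

# Variable order: y[0,0] y[0,1] y[1,0] y[1,1], then z[l,k] for l,k in {0,1}.
c = -np.concatenate([theta_n.ravel(), theta_e.ravel()])  # linprog minimizes
A, b = [], []
for l in range(2):
    for k in range(2):
        zi = 4 + 2 * l + k
        row = np.zeros(8); row[zi] = 1; row[l] = -1;     A.append(row); b.append(0)  # z <= y_0^l
        row = np.zeros(8); row[zi] = 1; row[2 + k] = -1; A.append(row); b.append(0)  # z <= y_1^k
        row = np.zeros(8); row[zi] = -1; row[l] = 1; row[2 + k] = 1
        A.append(row); b.append(1)                                                    # y + y <= z + 1
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, 1)] * 8)
y = res.x[:4]
print("relaxed y:", np.round(y, 2))  # integral entries are persistent labels
```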
Since every segment in our experiments is in exactly one class, we also consider the linear relaxation from above with the additional constraint ∀i: Σ_{j=1}^K y_i^j = 1. This problem can no longer be solved via graph cuts and is not half-integral. We refer to its solution as ŷ_LP. Computing ŷ_LP for a scene takes 11 minutes on average⁴. Finally, we can also compute the exact mixed-integer solution including the additional constraint ∀i: Σ_{j=1}^K y_i^j = 1 using a general-purpose MIP solver⁴. We set a time limit of 30 minutes for the MIP solver. This takes 18 minutes on average for a scene. All runtimes are for single-CPU implementations using 17 classes.
⁴ http://www.tfinley.net/software/pyglpk/readme.html
When using this algorithm in practice on new scenes (e.g., during our robotic experiments), objects other than the 27 objects we modeled might be present (e.g., coffee mugs). So we relax the constraint ∀i: Σ_{j=1}^K y_i^j = 1 to ∀i: Σ_{j=1}^K y_i^j ≤ 1. This increases precision greatly at the cost of some drop in recall. Also, this relaxed MIP takes less time to solve. (We allow such multi-labelings in our attribute experiments, where each segment can have multiple attributes, but not in segment labeling experiments, where each segment can have only one label.)
3.2.2 Learning Algorithm
We take a large-margin approach to learning the parameter vector w of Eq. (2) from labeled training
examples (x1 , y1 ), ..., (xn , yn ) [31, 32, 34]. Compared to Conditional Random Field training [17]
using maximum likelihood, this has the advantage that the partition function normalizing Eq. (2)
does not need to be computed, and that the training problem can be formulated as a convex program
for which efficient algorithms exist.
Our method optimizes a regularized upper bound on the training error
\[
R(h) = \frac{1}{n} \sum_{j=1}^{n} \Delta(y_j, \hat y_j), \tag{5}
\]
where ŷ_j is the optimal solution of Eq. (1) and Δ(y, ŷ) = Σ_{i=1}^N Σ_{k=1}^K |y_i^k − ŷ_i^k|. To simplify notation, note that Eq. (3) can be equivalently written as w^T Ψ(x, y) by appropriately stacking the w_n^k and w_t^{lk} into w and the y_i^k φ_n(k) and z_ij^{lk} φ_t(l, k) into Ψ(x, y), where each z_ij^{lk} is consistent with Eq. (4) given y. Training can then be formulated as the following convex quadratic program [15]:
\[
\min_{w, \xi}\;\; \frac{1}{2} w^T w + C\xi \tag{6}
\]
\[
\text{s.t.}\;\; \forall \hat y_1, \dots, \hat y_n \in \{0, 0.5, 1\}^{N \times K}:\;\; \frac{1}{n}\, w^T \sum_{i=1}^{n} \big[\Psi(x_i, y_i) - \Psi(x_i, \hat y_i)\big] \ge \frac{1}{n} \sum_{i=1}^{n} \Delta(y_i, \hat y_i) - \xi
\]
While the number of constraints in this quadratic program is exponential in n, N , and K, it can
nevertheless be solved efficiently using the cutting-plane algorithm for training structural SVMs
[15]. The algorithm maintains a working set of constraints, and it can be shown to provide an ε-accurate solution after adding at most O(R²C/ε) constraints (ignoring log terms). The algorithm merely needs access to an efficient method for computing

\[
\hat y_i = \operatorname{argmax}_{y \in \{0, 0.5, 1\}^{N \times K}} \big[ w^T \Psi(x_i, y) + \Delta(y_i, y) \big]. \tag{7}
\]
Due to the structure of Δ(·, ·), this problem is identical to the relaxed prediction problem in Eqs. (3)-(4) and can be solved efficiently using graph cuts.
Since our training problem is an overgenerating formulation as defined in [7], the value of ξ at the solution is an upper bound on the training error in Eq. (5). Furthermore, [7] observed empirically that the relaxed prediction ŷ_cut after training w via Eq. (6) is typically largely integral, meaning that most labels y_i^k of the relaxed solution are the same as in the optimal mixed-integer solution due to persistence. We made the same observation in our experiments as well.
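A minimal sketch of this training loop (ours): `psi`, `argmax_loss_aug`, and `solve_qp` are placeholder callables standing in for the joint feature map, the graph-cut separation oracle of Eq. (7), and a QP solver over the working set; they are not the authors' implementations.

```python
# 1-slack cutting-plane training skeleton for Eq. (6).
import numpy as np

def hamming(y, y_hat):
    return float(np.sum(np.asarray(y) != np.asarray(y_hat)))

def train(examples, psi, argmax_loss_aug, solve_qp, C=1.0, eps=0.01, dim=100):
    w, working_set = np.zeros(dim), []
    while True:
        # Most violated constraint via loss-augmented inference, Eq. (7).
        y_hats = [argmax_loss_aug(w, x, y) for x, y in examples]
        dpsi = np.mean([psi(x, y) - psi(x, yh)
                        for (x, y), yh in zip(examples, y_hats)], axis=0)
        loss = np.mean([hamming(y, yh) for (_, y), yh in zip(examples, y_hats)])
        slack = max(0.0, max((l - w.dot(d) for d, l in working_set), default=0.0))
        if loss - w.dot(dpsi) <= slack + eps:   # no constraint violated by > eps
            return w
        working_set.append((dpsi, loss))
        w = solve_qp(working_set, C)            # re-optimize Eq. (6) on the set
```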
4 Experiments
4.1 Data
We consider labeling object segments in full 3D scenes (as compared to 2.5D data from a single
view). For this purpose, we collected data of 24 office and 28 home scenes (composed from about
550 views). Each scene was reconstructed from about 8-9 RGB-D views from a Kinect sensor and
contains about one million colored points.
We first over-segment the 3D scene (as described earlier) to obtain the atomic units of our representation. For training, we manually labeled the segments, and we selected the labels which
were present in a minimum of 5 scenes in the dataset. Specifically, the office labels are: {wall,
floor, tableTop, tableDrawer, tableLeg, chairBackRest, chairBase, chairBack, monitor, printerFront,
printerSide, keyboard, cpuTop, cpuFront, cpuSide, book, paper}, and the home labels are: {wall,
floor, tableTop, tableDrawer, tableLeg, chairBackRest, chairBase, sofaBase, sofaArm, sofaBackRest, bed, bedSide, quilt, pillow, shelfRack, laptop, book }. This gave us a total of 1108 labeled
segments in the office scenes and 1387 segments in the home scenes. Often one object may be divided into multiple segments because of over-segmentation. We have made this data available at:
http://pr.cs.cornell.edu/sceneunderstanding/data/data.php.
4.2 Results
Table 2 shows the results, performed using 4-fold cross-validation and averaging performance across
the folds for the models trained separately on home and office datasets. We use both the macro and
micro averaging to aggregate precision and recall over various classes. Since our algorithm can
only predict one label for each segment, micro precision and recall are the same as the percentage of
correctly classified segments. Macro precision and recall are respectively the averages of precision
and recall for all classes. The optimal C value is determined separately for each of the algorithms
by cross-validation.
Figure 1 shows the original point cloud, ground-truth and predicted labels for one office (top) and
one home scene (bottom). We see that on the majority of the classes we are able to predict the correct
Table 2: Learning experiment statistics. The table shows average micro precision/recall, and average macro precision and recall for home and office scenes.

                                                 Office Scenes              Home Scenes
  features                algorithm          micro   macro    macro     micro   macro    macro
                                              P/R    Prec.   Recall      P/R    Prec.   Recall
  None                    max class          26.23   26.23    5.88      29.38   29.38    5.88
  Image Only              svm node only      46.67   35.73   31.67      38.00   15.03   14.50
  Shape Only              svm node only      75.36   64.56   60.88      56.25   35.90   36.52
  Image+Shape             svm node only      77.97   69.44   66.23      56.50   37.18   34.73
  Image+Shape & context   single frames      84.32   77.84   68.12      69.13   47.84   43.62
  Image+Shape & context   svm mrf assoc      75.94   63.89   61.79      62.50   44.65   38.34
  Image+Shape & context   svm mrf nonassoc   81.45   76.79   70.07      72.38   57.82   53.62
  Image+Shape & context   svm mrf parsimon   84.06   80.52   72.64      73.38   56.81   54.80
label. It makes mistakes in some cases and these usually tend to be reasonable, such as a pillow
getting confused with the bed, and table-top getting confused with the shelf-rack.
One of our goals is to study the effect of various factors, and therefore we compared different
versions of the algorithms with various settings. We discuss them in the following.
Do Image and Point-Cloud Features Capture Complementary Information? The RGB-D data
contains both image and depth information, and enables us to compute a wide variety of features.
In this experiment, we compare the two kinds of features: Image (RGB) and Shape (Point Cloud)
features. To show the effect of the features independent of the effect of context, we only use the
node potentials from our model, referred to as svm node only in Table 2. The svm node only model
is equivalent to the multi-class SVM formulation [15]. Table 2 shows that Shape features are more effective than Image features, and the combination works better on both precision and recall.
This indicates that the two types of features offer complementary information and their combination
is better for our classification task.
How Important is Context? Using our svm mrf parsimon model as described in Section 3.1,
we show significant improvements in the performance over using svm node only model on both
datasets. In office scenes, the micro precision increased by 6.09% over the best svm node only
model that does not use any context. In home scenes the increase is much higher, 16.88%.
The type of contextual relations we capture depend on the type of edge potentials we model. To
study this, we compared our method with models using only associative or only non-associative
edge potentials referred to as svm mrf assoc and svm mrf nonassoc respectively. We observed that
modeling all edge features using associative potentials is poor compared to our full model. In fact,
using only associative potentials showed a drop in performance compared to svm node only model
on the office dataset. This indicates that it is important to capture the relations between regions having
different labels. Our svm mrf nonassoc model does so, by modeling all edge features using nonassociative potentials, which can favor or disfavor labels of different classes for nearby segments. It
gives higher precision and recall compared to svm node only and svm mrf assoc. This shows that
modeling using non-associative potentials is a better choice for our labeling problem.
However, not all the edge features are non-associative in nature; modeling all of them using non-associative potentials could be overkill (each non-associative feature adds K² more parameters to be learnt). Therefore, our svm mrf parsimon model, which models these relations with both potential types, achieves higher performance on both datasets.
How Large should the Context Range be? Context relationships of different objects can be meaningful for different
spatial distances. This range may vary depending on the environment as well. For example, in an office, keyboard and
monitor go together, but they may have little relation with a
sofa that is slightly farther away. In a house, sofa and table
may go together even if they are farther away.
In order to study this, we compared our svm mrf parsimon with varying context range for determining the neighborhood (see Figure 3 for the average micro precision vs. range plot). Note that the context range is determined from the boundary of one segment to the boundary of the other, and hence it is somewhat independent of the size of the object.
Figure 3: Effect of context range on precision (= recall here).
We note that increasing the context range increases the performance to some level, and then it drops
slightly. We attribute this to the fact that increasing the context range can connect irrelevant objects
with an edge, and with limited training data, spurious relationships may be learned. We observe that
the optimal context range for office scenes is around 0.3 meters and 0.6 meters for home scenes.
How does a Full 3D Model Compare to a 2.5D Model? In Table 2, we compare the performance of
our full model with a model that was trained and tested on single views of the same scenes. During
the comparison, the training folds were consistent with other experiments, however the segmentation
of the point clouds was different (because each point cloud is from a single view). This makes the
micro precision values meaningless because the distribution of labels is not the same for the two cases.
In particular, many large objects in scenes (e.g., wall, ground) get split up into multiple segments in
single views. We observed that the macro precision and recall are higher when multiple views are
combined to form the scene. We attribute the improvement in macro precision and recall to the fact
that larger scenes have more context, and models are more complete because of multiple views.
What is the effect of the inference method? The results for the svm mrf algorithms in Table 2 were
generated using the MIP solver. We observed that the MIP solver is typically 2-3% more accurate
than the LP solver. The graph-cut algorithm, however, gives a higher precision and lower recall on
both datasets. For example, on office data, the graphcut inference for our svm mrf parsimon gave
a micro precision of 90.25 and micro recall of 61.74. Here, the micro precision and recall are not the same, as some of the segments might not get any label. Since it is orders of magnitude faster, it is
ideal for realtime robotic applications.
4.3 Robotic experiments
The ability to label segments is very useful for robotics
applications, for example, in detecting objects (so that
a robot can find/retrieve an object on request) or for
other robotic tasks. We therefore performed two relevant
robotic experiments.
Attribute Learning: In some robotic tasks, such as
robotic grasping, it is not important to know the exact
object category, but just knowing a few attributes of an
object may be useful. For example, if a robot has to clean a floor, it would help if it knows which objects it can move and which it cannot. If it has to place an object, it should place them on horizontal surfaces, preferably where humans do not sit.
Figure 4: Cornell's POLAR robot using our classifier for detecting a keyboard in a cluttered room.
With this motivation we have designed 8 attributes each for the home and office scenes, giving 10 unique attributes in total, comprised of: wall, floor, flat-horizontal-surfaces, furniture, fabric, heavy, seating-areas, small-objects, table-top-objects, electronics. Note
that each segment in the point cloud can have multiple attributes and therefore we can learn these
attributes using our model which naturally allows multiple labels per segment. We compute the
precision and recall over the attributes by counting how many attributes were correctly inferred. In
home scenes we obtained a precision of 83.12% and 70.03% recall, and in the office scenes we
obtain 87.92% precision and 71.93% recall.
Object Detection: We finally use our algorithm on two mobile robots, mounted with Kinects, for
completing the goal of finding objects such as a keyboard in cluttered office scenes. The following
video shows our robot successfully finding a keyboard in an office: http://pr.cs.cornell.edu/sceneunderstanding/
In conclusion, we have proposed and evaluated the first model and learning algorithm for scene understanding that exploits rich relational information from the full-scene 3D point cloud. We applied
this technique to the object labeling problem, and studied the effects of various factors on a large dataset.
Our robotic application shows that such inexpensive RGB-D sensors can be extremely useful for
scene understanding for robots. This research was funded in part by NSF Award IIS-0713483.
References
[1] D. Anguelov, B. Taskar, V. Chatalbashev, D. Koller, D. Gupta, G. Heitz, and A. Ng. Discriminative
learning of markov random fields for segmentation of 3d scan data. In CVPR, 2005.
[2] E. Boros and P. Hammer. Pseudo-boolean optimization. Dis. Appl. Math., 123(1-3):155–225, 2002.
[3] A. Collet Romea, S. Srinivasa, and M. Hebert. Structure discovery in multi-modal data : a region-based
approach. In ICRA, 2011.
[4] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray. Visual categorization with bags of keypoints.
In Workshop on statistical learning in computer vision, ECCV, 2004.
8
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In CVPR, 2008.
[7] T. Finley and T. Joachims. Training structural svms when exact inference is intractable. In ICML, 2008.
[8] A. Golovinskiy, V. G. Kim, and T. Funkhouser. Shape-based recognition of 3d point clouds in urban
environments. ICCV, 2009.
[9] S. Gould, P. Baumstarck, M. Quigley, A. Y. Ng, and D. Koller. Integrating Visual and Range Data for
Robotic Object Detection. In ECCV workshop Multi-camera Multi-modal (M2SFA2), 2008.
[10] P. Hammer, P. Hansen, and B. Simeone. Roof duality, complementation and persistency in quadratic 0?1
optimization. Mathematical Programming, 28(2):121–155, 1984.
[11] V. Hedau, D. Hoiem, and D. Forsyth. Thinking inside the box: Using appearance models and context
based on room geometry. In ECCV, 2010.
[12] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for
holistic scene understanding. In NIPS, 2008.
[13] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In ECCV, 2008.
[14] D. Hoiem, A. A. Efros, and M. Hebert. Putting objects in perspective. In CVPR, 2006.
[15] T. Joachims, T. Finley, and C. Yu. Cutting-plane training of structural SVMs. Machine Learning,
77(1):27–59, 2009.
[16] H. Koppula, A. Anand, T. Joachims, and A. Saxena. Labeling 3d scenes for personal assistant robots. In
R:SS workshop on RGB-D cameras, 2011.
[17] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In ICML, 2001.
[18] K. Lai, L. Bo, X. Ren, and D. Fox. A Large-Scale Hierarchical Multi-View RGB-D Object Dataset. In
ICRA, 2011.
[19] K. Lai, L. Bo, X. Ren, and D. Fox. Sparse Distance Learning for Object Recognition Combining RGB
and Depth Information. In ICRA, 2011.
[20] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating spatial layout of rooms using volumetric
reasoning about objects and surfaces. In NIPS, 2010.
[21] B. Leibe, N. Cornelis, K. Cornelis, and L. V. Gool. Dynamic 3d scene analysis from a moving vehicle. In
CVPR, 2007.
[22] C. Li, A. Kowdle, A. Saxena, and T. Chen. Towards holistic scene understanding: Feedback enabled
cascaded classification models. In NIPS, 2010.
[23] D. Munoz, N. Vandapel, and M. Hebert. Onboard contextual classification of 3-d point clouds with
learned high-order markov random fields. In ICRA, 2009.
[24] M. Quigley, S. Batra, S. Gould, E. Klingbeil, Q. V. Le, A. Wellman, and A. Y. Ng. High-accuracy 3d
sensing for mobile manipulation: Improving object detection and door opening. In ICRA, 2009.
[25] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary mrfs via extended roof
duality. In CVPR, 2007.
[26] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz. Towards 3d point cloud based object
maps for household environments. Robot. Auton. Syst., 56:927–941, 2008.
[27] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS 18, 2005.
[28] A. Saxena, M. Sun, and A. Y. Ng. Make3d: Learning 3d scene structure from a single still image. IEEE
PAMI, 31(5):824–840, 2009.
[29] R. Shapovalov and A. Velizhev. Cutting-plane training of non-associative markov network for 3d point
cloud segmentation. In 3DIMPVT, 2011.
[30] R. Shapovalov, A. Velizhev, and O. Barinova. Non-associative markov networks for 3d point cloud
classification. In ISPRS Commission III symposium - PCV 2010, 2010.
[31] B. Taskar, V. Chatalbashev, and D. Koller. Learning associative markov networks. In ICML. ACM, 2004.
[32] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003.
[33] A. Torralba. Contextual priming for object detection. IJCV, 53(2):169–191, 2003.
[34] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[35] X. Xiong and D. Huber. Using context to create semantic 3d models of indoor environments. In BMVC,
2010.
[36] X. Xiong, D. Munoz, J. A. Bagnell, and M. Hebert. 3-d scene analysis via sequenced predictions over
points and regions. In ICRA, 2011.
3,564 | 4,227 | PAC-Bayesian Analysis of Contextual Bandits
Yevgeny Seldin1,4 Peter Auer2 François Laviolette3 John Shawe-Taylor4 Ronald Ortner2
1
Max Planck Institute for Intelligent Systems, Tübingen, Germany
2
Chair for Information Technology, Montanuniversität Leoben, Austria
3
Département d'informatique, Université Laval, Québec, Canada
D?epartement d?informatique, Universit?e Laval, Qu?ebec, Canada
4
Department of Computer Science, University College London, UK
[email protected], {auer,ronald.ortner}@unileoben.ac.at,
[email protected], [email protected]
Abstract
We derive an instantaneous (per-round) data-dependent regret bound for stochastic multiarmed bandits with side information (also known as contextual bandits).
The scaling of our regret bound with the number of states (contexts) N goes as √(N I_{π_t}(S;A)), where I_{π_t}(S;A) is the mutual information between states and actions (the side information) used by the algorithm at round t. If the algorithm uses all the side information, the regret bound scales as √(N ln K), where K is the number of actions (arms). However, if the side information I_{π_t}(S;A) is not
fully used, the regret bound is significantly tighter. In the extreme case, when
I_{π_t}(S;A) = 0, the dependence on the number of states reduces from linear to
logarithmic. Our analysis allows us to provide the algorithm a large amount of side information, let the algorithm decide which side information is relevant for the task, and penalize the algorithm only for the side information that it is using de
facto. We also present an algorithm for multiarmed bandits with side information
with O(K) computational complexity per game round.
1 Introduction
Multiarmed bandits with side information are an elegant mathematical model for many real-life
interactive systems, such as personalized online advertising, personalized medical treatment, and so
on. This model is also known as contextual bandits or associative bandits (Kaelbling, 1994, Strehl
et al., 2006, Langford and Zhang, 2007, Beygelzimer et al., 2011). In multiarmed bandits with side
information the learner repeatedly observes states (side information) {s1 , s2 , . . . } (for example,
symptoms of a patient) and has to perform actions (for example, prescribe drugs), such that the
expected regret is minimized. The regret is usually measured by the difference between the reward
that could be achieved by the best (unknown) fixed policy (for example, the number of patients that
would be cured if we knew the best drug for each set of symptoms) and the reward obtained by the
algorithm (the number of patients that were actually cured).
Most of the existing analyses of multiarmed bandits with side information have focused on the adversarial (worst-case) model, where the sequence of rewards associated with each state-action pair
is chosen by an adversary. However, many problems in real life are not adversarial. We derive data-dependent analysis for stochastic multiarmed bandits with side information. In the stochastic setting
the rewards for each state-action pair are drawn from a fixed unknown distribution. The sequence of
states is also drawn from a fixed unknown distribution. We restrict ourselves to problems with finite
number of states N and finite number of actions K and leave generalization to continuous state and
action spaces to future work. We also do not assume any structure of the state space. Thus, for us
a state is just a number between 1 and N . For example, in online advertising the state can be the
country from which a web page is accessed.
The result presented in this paper exhibits adaptive dependency on the side information (state identity) that is actually used by the algorithm. This allows us to provide the algorithm a large amount
of side information and let the algorithm decide which of this side information is actually relevant
to the task. For example, in online advertising we can increase the state resolution and provide the
algorithm the town from which the web page was accessed, but if this refined state information is not
used by the algorithm the regret bound will not deteriorate. This can be opposed to existing analysis
of adversarial multiarmed bandits, where the regret bound depends on a predefined complexity of
the underlying expert class (Beygelzimer et al., 2011). Thus, the existing analysis of adversarial
multiarmed bandits would either become looser if we add more side information or a-priori limit the
usage of the side information through its internal structure. (We note that through the relation between PAC-Bayesian analysis and the analysis of adversarial online learning described in Banerjee
(2006) it might be possible to extend our analysis to adversarial setting, but we leave this research
direction to future work.)
The idea of regularization by relevant mutual information goes back to the Information Bottleneck
principle in supervised and unsupervised learning (Tishby et al., 1999). Tishby and Polani (2010)
further suggested to measure the complexity of a policy in reinforcement learning by the mutual
information between states and actions used by the policy. We note, however, that our starting point
is the regret bound and we derive the regularization term from our analysis without introducing it
a-priori. The analysis also provides time and data dependent weighting of the regularization term.
Our results are based on PAC-Bayesian analysis (Shawe-Taylor and Williamson, 1997, Shawe-Taylor et al., 1998, McAllester, 1998, Seeger, 2002), which was developed for supervised learning
within the PAC (Probably Approximately Correct) learning framework (Valiant, 1984). In PACBayesian analysis the complexity of a model is defined by a user-selected prior over a hypothesis
space. Unlike in VC-dimension-based approaches and their successors, where the complexity is
defined for a hypothesis class, in PAC-Bayesian analysis the complexity is defined for individual
hypotheses. The analysis provides an explicit trade-off between individual model complexity and
its empirical performance and a high probability guarantee on the expected performance.
An important distinction between supervised learning and problems with limited feedback, such
as multiarmed bandits and reinforcement learning more generally, is the fact that in supervised
learning the training set is given, whereas in reinforcement learning the training set is generated by
the learner as it plays the game. In supervised learning every hypothesis in a hypothesis class can
be evaluated on all the samples, whereas in reinforcement learning rewards of one action cannot
be used to evaluate another action. Recently, Seldin et al. (2011b,a) generalized PAC-Bayesian
analysis to martingales and suggested a way to apply it under limited feedback. Here, we apply this
generalization to multiarmed bandits with side information.
The remainder of the paper is organized as follows. We start with definitions in Section 2 and provide
our main results in Section 3, which include an instantaneous regret bound and a new algorithm for
stochastic multiarmed bandits with side information. In Section 4 we present an experiment that
illustrates our theoretical results. Then, we dive into the proof of our main results in Section 5 and
discuss the paper in Section 6.
2 Definitions
In this section we provide all essential definitions for our main results in the following section. We
start with the definition of stochastic multiarmed bandits with side information. Let S be a set of
|S| = N states and let A be a set of |A| = K actions, such that any action can be performed in any
state. Let s ∈ S denote the states and a ∈ A denote the actions. Let R(a, s) be the expected reward
for performing action a in state s. At each round t of the game the learner is presented a state St
drawn i.i.d. according to an unknown distribution p(s). The learner draws an action At according to
his choice of a distribution (policy) π_t(a|s) and obtains a stochastic reward R_t with expected value R(A_t, S_t). Let {S_1, S_2, . . . } denote the sequence of observed states, {π_1, π_2, . . . } the sequence of policies played, {A_1, A_2, . . . } the sequence of actions played, and {R_1, R_2, . . . } the sequence of observed rewards. Let T_t = {{S_1, . . . , S_t}, {π_1, . . . , π_t}, {A_1, . . . , A_t}, {R_1, . . . , R_t}} denote the
history of the game up to time t.
Assume that π_t(a|s) > 0 for all t, a, and s. For t ≥ 1, a ∈ {1, . . . , K}, and the sequence of observed states {S_1, . . . , S_t}, define a set of random variables R_t^{a,S_t}:

\[
R_t^{a,S_t} = \begin{cases} \frac{1}{\pi_t(a|S_t)} R_t, & \text{if } A_t = a \\ 0, & \text{otherwise.} \end{cases}
\]

(The variables R_t^{a,s} are defined only for the observed state s = S_t.) Note that whenever defined, E[R_t^{a,S_t} | T_{t−1}, S_t] = R(a, S_t). The definition of R_t^{a,s} is generally known as importance weighted sampling (Sutton and Barto, 1998). Importance weighted sampling is required for the application of PAC-Bayesian analysis, as will be shown in the technical part of the paper.
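A small sketch (our code) of this estimator; `reward_fn` is a stand-in for the environment and only the played action contributes, scaled by the inverse probability of playing it.

```python
# Importance-weighted reward estimate for one round and one state.
import numpy as np

rng = np.random.default_rng(0)

def iw_rewards(pi_s, reward_fn, K):
    """pi_s: policy probabilities in the current state; reward_fn(a) -> reward."""
    a = rng.choice(K, p=pi_s)
    r = reward_fn(a)
    est = np.zeros(K)
    est[a] = r / pi_s[a]   # R_t^{a,S_t} for the played action
    return est             # zeros for all other actions

# Unbiasedness: E[est[a]] = pi_s[a] * R(a) / pi_s[a] = R(a) for every a.
```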
Define n_t(s) = Σ_{τ=1}^t I{S_τ = s} as the number of times state s appeared up to time t (I is the indicator function). We define the empirical rewards of state-action pairs as:

\[
\hat R_t(a, s) = \begin{cases} \frac{1}{n_t(s)} \sum_{\tau \in \{1, \dots, t\}: S_\tau = s} R_\tau^{a,s}, & \text{if } n_t(s) > 0 \\ 0, & \text{otherwise.} \end{cases}
\]
Note that whenever n_t(s) > 0 we have E[R̂_t(a, s)] = R(a, s). For every state s we define the "best" action in that state as a*(s) = argmax_a R(a, s) (if there are multiple "best" actions, one of them is chosen arbitrarily). We then define the expected and empirical regret for performing any other action a in state s as:

\[
\Delta(a, s) = R(a^*(s), s) - R(a, s), \qquad \hat\Delta_t(a, s) = \hat R_t(a^*(s), s) - \hat R_t(a, s).
\]
Let p̂_t(s) = n_t(s)/t be the empirical distribution over states observed up to time t. For any policy π(a|s) we define the empirical reward, empirical regret, and expected regret of the policy as: R̂_t(π) = Σ_s p̂_t(s) Σ_a π(a|s) R̂_t(a, s), Δ̂_t(π) = Σ_s p̂_t(s) Σ_a π(a|s) Δ̂_t(a, s), and Δ(π) = Σ_s p(s) Σ_a π(a|s) Δ(a, s).
We define the marginal distribution over actions that corresponds to a policy π(a|s) and the uniform distribution over S as π̄(a) = (1/N) Σ_s π(a|s), and the mutual information between actions and states corresponding to the policy π(a|s) and the uniform distribution over S as

\[
I_\pi(S; A) = \frac{1}{N} \sum_{s,a} \pi(a|s) \ln \frac{\pi(a|s)}{\bar\pi(a)}.
\]
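For concreteness, a short sketch (ours) of this quantity for a policy given as a row-stochastic matrix; it assumes π(a|s) > 0, which holds for the smoothed policies used below.

```python
# Mutual information I_pi(S;A) under a uniform state distribution.
import numpy as np

def mutual_information(pi):
    """pi: (N, K) array, pi[s, a] = pi(a|s), rows sum to 1, entries > 0."""
    marginal = pi.mean(axis=0)        # pi_bar(a) = (1/N) sum_s pi(a|s)
    return float(np.mean(np.sum(pi * np.log(pi / marginal), axis=1)))

uniform = np.full((5, 3), 1 / 3)      # a policy that ignores the state
print(mutual_information(uniform))    # 0: no side information used
```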
For the proof of our main result and also in order to explain the experiments we also have to define a hypothesis space for our problem. This definition is not used in the statement of the main result. Let H be a hypothesis space, such that each member h ∈ H is a deterministic mapping from S to A. Denote by a = h(s) the action assigned by hypothesis h to state s. It is easy to see that the size of the hypothesis space is |H| = K^N. Denote by R(h) = Σ_{s∈S} p(s) R(h(s), s) the expected reward of a hypothesis h. Define:

\[
\hat R_t(h) = \frac{1}{t} \sum_{\tau=1}^{t} R_\tau^{h(S_\tau), S_\tau}.
\]

Note that E[R̂_t(h)] = R(h). Let h* = argmax_{h∈H} R(h) be the "best" hypothesis (the one that chooses the "best" action in each state). (If there are multiple hypotheses achieving maximal reward, pick any of them.) Define:

\[
\Delta(h) = R(h^*) - R(h), \qquad \hat\Delta_t(h) = \hat R_t(h^*) - \hat R_t(h).
\]

Any policy π(a|s) defines a distribution over H: we can draw an action a for each state s according to π(a|s) and thus obtain a hypothesis h ∈ H. We use π(h) to denote the respective probability of drawing h. For a policy π we define Δ(π) = E_{π(h)}[Δ(h)] and Δ̂_t(π) = E_{π(h)}[Δ̂_t(h)]. By marginalization these definitions are consistent with our preceding definitions of Δ(π) and Δ̂_t(π).
Finally, let n_h(a) = Σ_{s=1}^N I{h(s) = a} be the number of states in which action a is played by the hypothesis h. Let A^h = {n_h(a)/N}_{a∈A} be the normalized cardinality profile (histogram) over the actions played by hypothesis h (with respect to the uniform distribution over S). Let H(A^h) = −Σ_a (n_h(a)/N) ln(n_h(a)/N) be the entropy of this cardinality profile. In other words, H(A^h) is the entropy of an action choice of hypothesis h (with respect to the uniform distribution over S). Note that the optimal policy π*(a|s) (the one that selects the "best" action in each state) is deterministic and we have I_{π*}(S; A) = H(A^{h*}).
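A quick numeric check of this remark (our code, with an arbitrary made-up hypothesis): for the deterministic policy induced by a hypothesis h, the mutual information indeed equals the entropy of h's action histogram.

```python
# Check: I_pi(S;A) = H(A^h) for a deterministic policy.
import numpy as np

h = np.array([0, 0, 1, 2, 1, 0])            # h(s) for N = 6 states, K = 3
N, K = len(h), 3
pi = np.eye(K)[h]                           # deterministic pi(a|s), one-hot rows
profile = np.bincount(h, minlength=K) / N   # A^h
H = -np.sum(profile[profile > 0] * np.log(profile[profile > 0]))
marginal = pi.mean(axis=0)                  # equals the profile
I = np.mean([np.log(1 / marginal[h[s]]) for s in range(N)])
print(np.isclose(I, H))                     # True
```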
3 Main Results
Our main result is a data and complexity dependent regret bound for a general class of prediction strategies of a smoothed exponential form. Let ρ_t(a) be an arbitrary distribution over actions, let

\[
\pi_t^{exp}(a|s) = \frac{\rho_t(a)\, e^{\gamma_t \hat R_t(a,s)}}{Z(\pi_t^{exp}, s)}, \tag{1}
\]

where Z(π_t^exp, s) = Σ_a ρ_t(a) e^{γ_t R̂_t(a,s)} is a normalization factor, and let

\[
\tilde\pi_t^{exp}(a|s) = (1 - K \varepsilon_{t+1})\, \pi_t^{exp}(a|s) + \varepsilon_{t+1} \tag{2}
\]

be a smoothed exponential policy. The following theorem provides a regret bound for playing π̃_t^exp at round t + 1 of the game. For generality, we assume that rounds 1, . . . , t were played according to arbitrary policies π_1, . . . , π_t.
Theorem 1. Assume that in game rounds 1, . . . , t policies {π_1, . . . , π_t} were played and assume that min_{a,s} π_t(a|s) ≥ ε_t for an arbitrary ε_t that is independent of T_t. Let ρ_t(a) be an arbitrary distribution over A that can depend on T_t and satisfies min_a ρ_t(a) ≥ ε_{t+1}. Let c > 1 be an arbitrary number that is independent of T_t. Then, with probability greater than 1 − δ over T_t, simultaneously for all policies π̃_t^exp defined by (2) that satisfy

\[
\frac{N I_{\pi_t^{exp}}(S;A) + K(\ln N + \ln K) + \ln\frac{2 m_t}{\delta}}{2(e-2)\, t\, \varepsilon_t} \le \frac{1}{c^2} \tag{3}
\]

we have:

\[
\Delta(\tilde\pi_t^{exp}) \le (1+c) \sqrt{\frac{2(e-2)\big(N I_{\pi_t^{exp}}(S;A) + K(\ln N + \ln K) + \ln\frac{2 m_t}{\delta}\big)}{t\, \varepsilon_t}} + \frac{\ln\frac{1}{\varepsilon_{t+1}}}{\gamma_t} + K \varepsilon_{t+1}, \tag{4}
\]

where m_t = ⌈ ln(√((e−2)t / ln(2/δ))) / ln c ⌉, and for all π̃_t^exp that do not satisfy (3), with the same probability:

\[
\Delta(\tilde\pi_t^{exp}) \le \frac{2\big(N I_{\pi_t^{exp}}(S;A) + K(\ln N + \ln K) + \ln\frac{2 m_t}{\delta}\big)}{t\, \varepsilon_t} + \frac{\ln\frac{1}{\varepsilon_{t+1}}}{\gamma_t} + K \varepsilon_{t+1}.
\]
Note that the mutual information in Theorem 1 is calculated with respect to π_t^exp and not π̃_t^exp. Theorem 1 allows tuning the learning rate γ_t based on the sample. It also provides an instantaneous regret bound for any algorithm that plays the policies {π̃_1^exp, π̃_2^exp, . . . } throughout the game. In order to obtain such a bound we just have to take a decreasing sequence {ε_1, ε_2, . . . } and substitute δ in Theorem 1 with δ_t = δ/(t(t+1)). Then, by the union bound, the result holds with probability greater than 1 − δ for all rounds of the game simultaneously. This leads to Algorithm 1 for stochastic multiarmed bandits with side information. Note that each round of the algorithm takes O(K) time.
Theorem 1 is based on the following regret decomposition and the subsequent theorem and two
lemmas that bound the three terms in the decomposition.
\[
\Delta(\tilde\pi_t^{exp}) = \big[\Delta(\pi_t^{exp}) - \hat\Delta_t(\pi_t^{exp})\big] + \hat\Delta_t(\pi_t^{exp}) + \big[R(\pi_t^{exp}) - R(\tilde\pi_t^{exp})\big]. \tag{5}
\]
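The decomposition (5) is an algebraic identity; spelling it out (our own derivation, writing R(π*) := Σ_s p(s) R(a*(s), s) so that Δ(π) = R(π*) − R(π)) makes the three terms transparent:

```latex
% The three terms of (5) telescope back to the regret of the smoothed policy:
\begin{align*}
\big[\Delta(\pi_t^{exp}) - \hat\Delta_t(\pi_t^{exp})\big]
  + \hat\Delta_t(\pi_t^{exp})
  + \big[R(\pi_t^{exp}) - R(\tilde\pi_t^{exp})\big]
&= \big[R(\pi^*) - R(\pi_t^{exp})\big] + \big[R(\pi_t^{exp}) - R(\tilde\pi_t^{exp})\big]\\
&= R(\pi^*) - R(\tilde\pi_t^{exp}) \;=\; \Delta(\tilde\pi_t^{exp}).
\end{align*}
```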
Theorem 2. Under the conditions of Theorem 1 on {π_1, . . . , π_t} and c, simultaneously for all policies π that satisfy (3), with probability greater than 1 − δ:

\[
\Delta(\pi) - \hat\Delta_t(\pi) \le (1 + c) \sqrt{\frac{2(e-2)\big(N I_\pi(S;A) + K(\ln N + \ln K) + \ln\frac{2 m_t}{\delta}\big)}{t\, \varepsilon_t}}, \tag{6}
\]
Algorithm 1: Algorithm for stochastic contextual bandits. (See text for definitions of ε_t and γ_t.)
Input: N, K
R̂(a, s) ← 0 for all a, s   (these are cumulative [unnormalized] rewards)
ρ(a) ← 1/K for all a
n(s) ← 0 for all s
t ← 1
while not terminated do
    Observe state S_t.
    if ε_t ≥ 1/K or n(S_t) = 0 then
        π(a|S_t) ← ρ(a) for all a
    else
        π(a|S_t) ← (1 − Kε_t) · ρ(a) e^{γ_t R̂(a,S_t)/n(S_t)} / Σ_{a'} ρ(a') e^{γ_t R̂(a',S_t)/n(S_t)} + ε_t for all a
    Draw action A_t according to π(a|S_t) and play it.
    Observe reward R_t.
    n(S_t) ← n(S_t) + 1
    R̂(A_t, S_t) ← R̂(A_t, S_t) + R_t / π(A_t|S_t)
    ρ(a) ← ((N−1)/N) ρ(a) + (1/N) π(a|S_t) for all a
    t ← t + 1
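A compact Python sketch of Algorithm 1 (ours, not reference code). The environment interface (`env.draw_state`, `env.reward`) is a made-up assumption; the ε_t schedule follows the (Kt)^{−1/3} suggestion in the comments below Theorem 1, while the γ_t schedule here is only a placeholder.

```python
# Minimal implementation sketch of Algorithm 1.
import numpy as np

def contextual_exp3(env, N, K, T,
                    eps=lambda t: (K * t) ** (-1 / 3),
                    gamma=lambda t: (K * t) ** (1 / 3)):
    rng = np.random.default_rng(0)
    R_hat = np.zeros((K, N))       # cumulative importance-weighted rewards
    n = np.zeros(N, dtype=int)     # visit counts per state
    rho = np.full(K, 1.0 / K)      # running marginal over actions
    for t in range(1, T + 1):
        s = env.draw_state()
        if eps(t) >= 1.0 / K or n[s] == 0:
            pi = rho.copy()
        else:
            logits = gamma(t) * R_hat[:, s] / n[s]
            w = rho * np.exp(logits - logits.max())   # stabilized softmax
            pi = (1 - K * eps(t)) * w / w.sum() + eps(t)
        a = rng.choice(K, p=pi / pi.sum())
        r = env.reward(a, s)
        n[s] += 1
        R_hat[a, s] += r / pi[a]                      # importance weighting
        rho = (N - 1) / N * rho + pi / N              # running average, Eq. (7)
    return R_hat, rho
```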
and for all π that do not satisfy (3), with the same probability:

\[
\Delta(\pi) - \hat\Delta_t(\pi) \le \frac{2\big(N I_\pi(S;A) + K(\ln N + \ln K) + \ln\frac{2 m_t}{\delta}\big)}{t\, \varepsilon_t}.
\]
Note that Theorem 2 holds for all possible π's, including those that do not have an exponential form.
Lemma 1. For any distribution π_t^exp of the form (1), where ρ_t(a) ≥ ε for all a, we have:

\[
\hat\Delta_t(\pi_t^{exp}) \le \frac{\ln\frac{1}{\varepsilon}}{\gamma_t}.
\]

Lemma 2. Let π̃ be an ε-smoothed version of a policy π, such that π̃(a|s) = (1 − Kε)π(a|s) + ε; then

\[
R(\pi) - R(\tilde\pi) \le K\varepsilon.
\]
Proof of Theorem 2 is provided in Section 5 and proofs of Lemmas 1 and 2 are provided in the
supplementary material.
Comments on Theorem 1. Theorem 1 exhibits what we were looking for: the regret of a policy π̃_t^exp depends on the trade-off between its complexity, N I_{π_t^exp}(S; A), and the empirical regret, which is bounded by (1/γ_t) ln(1/ε_{t+1}). We note that 0 ≤ I_{π_t}(S; A) ≤ ln K; hence, the result is interesting when N ≫ K, since otherwise the K ln K term in the bound neutralizes the advantage we get from having small mutual information values. The assumption that N ≫ K is reasonable for many applications.
We believe that the dependence of the first term of the regret bound (4) on ε_t is an artifact of our
crude upper bound on the variance of the sampling process (given in Lemma 3 in the proof of Theorem 2) and that this term should not be in the bound. This is supported by an empirical study of
stochastic multiarmed bandits (Seldin et al., 2011a). With the current bound the best choice for ε_t is ε_t = (Kt)^{−1/3}, which, by integration over the game rounds, yields O(K^{1/3} t^{2/3}) dependence of the cumulative regret on the number of arms and game rounds. However, if we manage to derive a tighter analysis and remove ε_t from the first term in (4), the best choice of ε_t will be ε_t = (Kt)^{−1/2} and the dependence of the cumulative regret on the number of arms and time horizon will improve to O((Kt)^{1/2}). One way to achieve this is to apply EXP3.P-style updates (Auer et al., 2002b); however, Seldin et al. (2011a) empirically show that in stochastic environments the EXP3 algorithm of Auer
et al. (2002b), which is closely related to Algorithm 1, has significantly better performance. Thus,
it is desirable to derive a better analysis for the EXP3 algorithm in stochastic environments. We note
that although the UCB algorithm for stochastic multiarmed bandits (Auer et al., 2002a) is asymptotically better than the EXP3 algorithm, it is not compatible with PAC-Bayesian analysis and we are
not aware of a way to derive a UCB-type algorithm and analysis for multiarmed bandits with side
information, whose dependence on the number of states would be better than O(N ln K). Seldin
et al. (2011a) also demonstrate that empirically it takes a large number of rounds until the asymptotic
advantage of UCB over EXP3 translates into a real advantage in practice.
It is not trivial to minimize (4) with respect to γ_t analytically. Generally, higher values of γ_t decrease the second term of the bound, but also lead to more concentrated policies (conditional distributions) π_t^exp(a|s) and thus higher mutual information values I_{π_t^exp}(S; A). A simple way to address this trade-off is to set γ_t such that the contribution of the second term is as close to the contribution of the first term as possible. This can be approximated by taking the value of mutual information from the previous round (or an approximation of it).
More details on parameter setting for the algorithm are provided in the supplementary material.
Comments on Algorithm 1. By regret decomposition (5) and Theorem 2, regret at round $t+1$ is minimized by a policy $\pi_t(a|s)$ that minimizes a certain trade-off between the mutual information $I_\pi(S;A)$ and the empirical regret $\hat R_t(\pi)$. This trade-off is analogous to the rate-distortion trade-off in information theory (Cover and Thomas, 1991). Minimization of the rate-distortion trade-off is achieved by iterative updates of the following form, known as the Blahut-Arimoto (BA) algorithm:
$$\pi_t^{BA}(a|s) = \frac{\pi_t^{BA}(a)\, e^{\gamma_t \hat R_t(a,s)}}{\sum_a \pi_t^{BA}(a)\, e^{\gamma_t \hat R_t(a,s)}}, \qquad \pi_t^{BA}(a) = \frac{1}{N}\sum_s \pi_t^{BA}(a|s).$$
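As a concrete illustration, here is a minimal NumPy sketch of these BA-style updates; the function name, the initialization, and the stopping rule are our own assumptions, not code from the paper.

```python
import numpy as np

def blahut_arimoto_policy(R_hat, gamma_t, n_iter=50, tol=1e-8):
    """Illustrative BA-style iteration for a policy over K actions in N states.

    R_hat   : (N, K) array of empirical reward estimates R_hat_t(a, s).
    gamma_t : learning-rate / inverse-temperature parameter.
    Returns the conditional policy pi[s, a] and the marginal pi_a[a].
    """
    N, K = R_hat.shape
    pi_a = np.full(K, 1.0 / K)                      # initial marginal over actions
    for _ in range(n_iter):
        logits = np.log(pi_a)[None, :] + gamma_t * R_hat   # (N, K)
        logits -= logits.max(axis=1, keepdims=True)        # numerical stability
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)                # conditional pi(a|s)
        new_pi_a = pi.mean(axis=0)                         # marginal: (1/N) sum_s pi(a|s)
        if np.abs(new_pi_a - pi_a).max() < tol:
            pi_a = new_pi_a
            break
        pi_a = new_pi_a
    return pi, pi_a
```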
Running similar iterations in our case would be prohibitively expensive, since they require iteration over all states $s \in S$ at each round of the game. We approximate these iterations by approximating the marginal distribution over the actions by a running average:
$$\bar\pi_{t+1}^{exp}(a) = \frac{N-1}{N}\,\bar\pi_t^{exp}(a) + \frac{1}{N}\,\pi_t^{exp}(a|S_t). \qquad (7)$$
Since $\pi_t^{exp}(a|s)$ is bounded away from zero by a decreasing sequence $\varepsilon_{t+1}$, the same automatically holds for $\bar\pi_{t+1}^{exp}(a)$ (meaning that in Theorem 1 we can take $\gamma_t = \varepsilon_t$). Note that Theorem 1 holds for any choice of $\bar\pi_t(a)$, including (7).
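A hypothetical sketch of how one round then looks when the full BA iteration is replaced by (7); all names are our own, and the importance-weighted reward update is left to the caller.

```python
import numpy as np

def play_round(weights, pi_bar, s_t, gamma_t, eps_t, rng):
    """One illustrative round: smoothed exponential-weights policy at state s_t,
    followed by the running-average marginal update (7)."""
    N, K = weights.shape
    logits = np.log(pi_bar) + gamma_t * weights[s_t]
    logits -= logits.max()                       # numerical stability
    pi = np.exp(logits)
    pi /= pi.sum()
    pi = (1.0 - K * eps_t) * pi + eps_t          # floor pi(a|s) >= eps_t (needs eps_t <= 1/K)
    a_t = rng.choice(K, p=pi)
    pi_bar = (N - 1) / N * pi_bar + pi / N       # running-average update (7)
    return a_t, pi, pi_bar
```

After observing the reward r_t, the caller would perform the usual EXP3-style importance-weighted update, e.g. `weights[s_t, a_t] += r_t / pi[a_t]`.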
We point out an interesting fact: $\bar\pi_t^{exp}(a)$ propagates information between different states, but Theorem 1 also holds for the uniform distribution $\bar\pi(a) = \frac{1}{K}$, which corresponds to applying the EXP3 algorithm in each state independently. If these independent multiarmed bandits independently converge to similar strategies, we still get a tighter regret bound. This happens because the corresponding subspace of the hypothesis space is significantly smaller than the total hypothesis space, which enables us to put a higher prior on it (Seldin and Tishby, 2010). Nevertheless, propagation of information between states via the distribution $\bar\pi_t^{exp}(a)$ helps to achieve even faster convergence of the regret, as we can see from the experiments in the next section.
Comparison with state-of-the-art. We are not aware of other algorithms for stochastic multiarmed bandits with side information. The best algorithm known to us for adversarial multiarmed bandits with side information is EXP4.P by Beygelzimer et al. (2011). EXP4.P has $O(\sqrt{Kt \ln |H|})$ regret and $O(K|H|)$ complexity per game round. In our case $|H| = K^N$, which means that EXP4.P would have $O(\sqrt{KtN \ln K})$ regret and $O(K^{N+1})$ computational complexity. For hard problems, where all side information has to be used, our regret bound is inferior to the regret bound of Beygelzimer et al. (2011) due to the $O(t^{2/3})$ dependence on the number of game rounds. However, we believe that this can be improved by a more careful analysis of the existing algorithm. For simple problems the dependence of our regret bound on the number of states is significantly better, up to the point that when the side information is irrelevant for the task we can get $O(\sqrt{K \ln N})$ dependence on the number of states versus $O(\sqrt{N \ln K})$ in EXP4.P. For $N \gg K$ this leads to tighter regret bounds for small $t$ even despite the "incorrect" dependence on $t$ of our bound, and if we improve the analysis it will lead to tighter regret bounds for all $t$. As we already said, our algorithm is able to filter relevant information from large amounts of side information automatically, whereas in EXP4.P the usage of side information has to be restricted externally through the construction of the hypothesis class.
[Figure 1: three panels (a)-(c); legend lines: H(A_{h*}) = 0, 1, 2, 3, and Baseline; horizontal axes in units of 10^6 rounds.]
Figure 1: Behavior of: (a) cumulative regret $\Delta(t)$, (b) bound on instantaneous regret $\Delta(\bar\pi_t^{exp})$, and (c) the approximation of mutual information $I_{\pi_t^{exp}}(S;A)$. "Baseline" in the first graph corresponds to playing $N$ independent multiarmed bandits, one in each state. Each line in the graphs corresponds to an average over 10 repetitions of the experiment.
The second important advantage of our algorithm is the exponential improvement of computational
complexity. This is achieved by switching from the space of experts to the state-action space in all
our calculations.
4 Experiments
We present an experiment on synthetic data that illustrates our results. We take $N = 100$, $K = 20$, a uniform distribution over states ($p(s) = 0.01$), and consider four settings, with $H(A_{h^*}) = \ln(1) = 0$, $H(A_{h^*}) = \ln(3) \approx 1$, $H(A_{h^*}) = \ln(7) \approx 2$, and $H(A_{h^*}) = \ln(20) \approx 3$, respectively. In the first case, the same action is the best in all states (and hence $H(A_{h^*}) = 0$ for the optimal hypothesis $h^*$). In the second case, for the first 33 states the best action is number 1, for the next 33 states the best action is number 2, and for the remaining third of the states the best action is number 3 (thus, depending on the state, one of the three actions is the best and $H(A_{h^*}) = \ln(3)$). In the third case, there are seven groups of 14 states and each group has its own best action. In the last case, there are 20 groups of 5 states and each of the $K = 20$ actions is the best in exactly one of the 20 groups. For all states, the reward of the best action in a state has a Bernoulli distribution with bias 0.6 and the rewards of all other actions in that state have a Bernoulli distribution with bias 0.5. We run the experiment for T = 4,000,000 rounds and calculate the cumulative regret $\Delta(t) = \sum_{\tau=1}^{t} \Delta(\bar\pi_\tau^{exp})$ and the instantaneous regret bound given in (4). For computational efficiency, the mutual information $I_{\pi_t^{exp}}(S;A)$ is approximated by a running average (see supplementary material for details).
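The following sketch builds this synthetic environment; the group sizes are only approximately equal for settings that do not divide N, and all names are our own assumptions.

```python
import numpy as np

def make_environment(setting, N=100, K=20, seed=0):
    """Build one of the four synthetic settings described above.

    setting : number of groups of states sharing a best action (1, 3, 7, or 20),
              corresponding to H(A_{h*}) = ln(setting).
    Returns best[s], the best action per state, and a Bernoulli reward sampler.
    """
    rng = np.random.default_rng(seed)
    # split the N states into `setting` roughly equal groups
    best = np.repeat(np.arange(setting), int(np.ceil(N / setting)))[:N]

    def sample_reward(s, a):
        p = 0.6 if a == best[s] else 0.5    # Bernoulli biases from the text
        return rng.binomial(1, p)

    return best, sample_reward
```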
As we can see from the graphs (see Figure 1), the algorithm exhibits sublinear cumulative regret (note the scales of the axes). Furthermore, for simple problems (with small $H(A_{h^*})$) the regret grows more slowly than for complex problems. "Baseline" in Figure 1.a shows the performance of an algorithm with the same parameter values that runs $N$ multiarmed bandits, one in each state, independently of the other states. We see that for all problems except the hardest one our algorithm performs better than the baseline, and for the hardest problem it performs almost as well as the baseline. The regret bound in Figure 1.b provides meaningful values for the simplest problem after 1 million rounds (which is on average 500 samples per state-action pair) and after 4 million rounds for all the problems (the graph starts at t = 10,000). Our estimates of the mutual information $I_{\pi_t^{exp}}(S;A)$ reflect $H(A_{h^*})$ for the corresponding problems (for $H(A_{h^*}) = 0$ it converges to zero, for $H(A_{h^*}) \approx 1$ it is approximately one, etc.).
5 Proof of Theorem 2
The proof of Theorem 2 is based on a PAC-Bayes-Bernstein inequality for martingales (Seldin et al., 2011b). Let $KL(\rho\|\mu)$ denote the KL-divergence between two distributions (Cover and Thomas, 1991). Let $\{Z_1(h), \dots, Z_n(h) : h \in H\}$ be martingale difference sequences indexed by $h$ with respect to the filtration $\sigma(U_1), \dots, \sigma(U_n)$, where $U_i = \{Z_1(h), \dots, Z_i(h) : h \in H\}$ is the subset of martingale difference variables up to index $i$ and $\sigma(U_i)$ is the $\sigma$-algebra generated by $U_i$. This means that $E[Z_i(h) \mid \sigma(U_{i-1})] = 0$, where $Z_i(h)$ may depend on $Z_j(h')$ for all $j < i$ and $h' \in H$. There might also be interdependence between $\{Z_i(h) : h \in H\}$. Let $\hat M_i(h) = \sum_{j=1}^{i} Z_j(h)$ be the corresponding martingales and let $V_i(h) = \sum_{j=1}^{i} E[Z_j(h)^2 \mid \sigma(U_{j-1})]$ be the cumulative variances of the martingales $\hat M_i(h)$. For a distribution $\rho$ over $H$, define $\hat M_i(\rho) = E_{\rho(h)}[\hat M_i(h)]$ and $V_t(\rho) = E_{\rho(h)}[V_t(h)]$ as weighted averages of the martingales and their cumulative variances according to the distribution $\rho$.
Theorem 3 (PAC-Bayes-Bernstein Inequality). Assume that $|Z_i(h)| \le b$ for all $h$ with probability 1. Fix a prior distribution $\mu$ over $H$. Pick an arbitrary number $c > 1$. Then with probability greater than $1 - \delta$ over $U_n$, simultaneously for all distributions $\rho$ over $H$ that satisfy
$$\sqrt{\frac{KL(\rho\|\mu) + \ln\frac{2m}{\delta}}{(e-2)\,V_n(\rho)}} \le \frac{1}{cb}$$
we have
$$|\hat M_n(\rho)| \le (1+c)\sqrt{(e-2)\,V_n(\rho)\left(KL(\rho\|\mu) + \ln\frac{2m}{\delta}\right)},$$
where $m = \ln\left(\sqrt{\frac{(e-2)n}{\ln\frac{2}{\delta}}}\right) / \ln(c)$, and for all other $\rho$
$$|\hat M_n(\rho)| \le 2b\left(KL(\rho\|\mu) + \ln\frac{2m}{\delta}\right).$$
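Purely for illustration, and assuming the reconstruction of $m$ above is correct, a small helper can evaluate the two regimes of the theorem for given quantities; the function name and defaults are our own.

```python
import numpy as np

def pac_bayes_bernstein_bound(KL, V_n, b, n, delta, c=1.1):
    """Evaluate the two regimes of Theorem 3 (as reconstructed above) for given
    KL(rho||mu), cumulative variance V_n(rho), range b, horizon n, and confidence delta."""
    m = np.log(np.sqrt((np.e - 2.0) * n / np.log(2.0 / delta))) / np.log(c)
    slack = KL + np.log(2.0 * m / delta)
    if np.sqrt(slack / ((np.e - 2.0) * V_n)) <= 1.0 / (c * b):
        return (1.0 + c) * np.sqrt((np.e - 2.0) * V_n * slack)
    return 2.0 * b * slack
```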
Note that $M_t(h) = t(\Delta(h) - \hat\Delta_t(h))$ are martingales and their cumulative variances are $V_t(h) = \sum_{\tau=1}^{t} E\left[\left(\left[R_\tau^{h^*(S_\tau),S_\tau} - R_\tau^{h(S_\tau),S_\tau}\right] - \left[R(h^*) - R(h)\right]\right)^2 \,\middle|\, \mathcal{T}_{\tau-1}\right]$. In order to apply Theorem 3 we have to derive an upper bound on $V_t(\rho_t^{exp})$, a prior $\mu(h)$ over $H$, and calculate (or upper bound) the KL-divergence $KL(\rho_t^{exp}\|\mu)$. This is done in the following three lemmas.
Lemma 3. If $\{\varepsilon_1, \varepsilon_2, \dots\}$ is a decreasing sequence such that $\varepsilon_t \le \min_{a,s} \pi_t(a|s)$, then for all $h$:
$$V_t(h) \le \frac{2t}{\varepsilon_t}.$$
The proof of the lemma is provided in the supplementary material. Lemma 3 provides an immediate, but crude, uniform upper bound on $V_t(h)$, which yields $V_t(\rho_t^{exp}) \le \frac{2t}{\varepsilon_t}$. Since our algorithm concentrates on hypotheses $h$ with small $\Delta(h)$, which, in turn, concentrate on the best action in each state, the variance $V_t(h)$ for the corresponding $h$-s is expected to be of the order of $2Kt$ and not $\frac{2t}{\varepsilon_t}$. However, we have not yet been able to prove that the probability $\rho_t^{exp}(h)$ of the remaining hypotheses (those with large $\Delta(h)$) gets sufficiently small (of order $K\varepsilon_t$), so that the weighted cumulative variance would be of order $2Kt$. Nevertheless, this seems to hold in practice starting from relatively small values of $t$ (Seldin et al., 2011a). Improving the upper bound on $V_t(\rho_t^{exp})$ will improve the regret bound, but for the moment we present the regret bound based on the crude upper bound $V_t(\rho_t^{exp}) \le \frac{2t}{\varepsilon_t}$.
The remaining two lemmas, which define a prior $\mu$ over $H$ and bound $KL(\rho\|\mu)$, are due to Seldin and Tishby (2010).
Lemma 4. It is possible to define a distribution $\mu$ over $H$ that satisfies:
$$\mu(h) \ge e^{-N H(A^h) - K \ln N - K \ln K}. \qquad (8)$$
Lemma 5. For the distribution $\mu$ that satisfies (8) and any distribution $\rho(a|s)$:
$$KL(\rho\|\mu) \le N I_\rho(S;A) + K \ln N + K \ln K.$$
Substitution of the upper bounds on $V_t(\rho_t^{exp})$ and $KL(\rho_t^{exp}\|\mu)$ into Theorem 3 yields Theorem 2.
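To make the substitution concrete, the following hypothetical helpers evaluate the three ingredients (the mutual information, Lemma 5's KL bound, and Lemma 3's variance bound); they are our own illustration, not code from the paper.

```python
import numpy as np

def mutual_information(pi, p_s=None):
    """I_pi(S;A) for a conditional policy pi[s, a] and state distribution p(s)."""
    N, K = pi.shape
    p_s = np.full(N, 1.0 / N) if p_s is None else p_s
    marg = p_s @ pi                                  # marginal over actions
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(pi > 0, pi * np.log(pi / marg), 0.0)
    return float(p_s @ terms.sum(axis=1))

def kl_upper_bound(I_SA, N, K):
    """Lemma 5: KL(rho || mu) <= N * I_rho(S;A) + K ln N + K ln K."""
    return N * I_SA + K * np.log(N) + K * np.log(K)

def variance_upper_bound(t, eps_t):
    """Lemma 3: V_t(h) <= 2t / eps_t for every h (hence also for V_t(rho))."""
    return 2.0 * t / eps_t
```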
6 Discussion
We presented a PAC-Bayesian analysis of stochastic multiarmed bandits with side information. Our analysis provides a data-dependent algorithm and a data-dependent regret analysis for this problem. The selection of task-relevant side information is delegated from the user to the algorithm. We also provide a general framework for deriving data-dependent algorithms and analyses for many other stochastic problems with limited feedback. The analysis of the variance of our algorithm remains to be improved and will be addressed in future work.
¹ Seldin et al. (2011b) show that $V_n(\rho)$ can be replaced by an upper bound everywhere in Theorem 3.
Acknowledgments
We would like to thank all the people with whom we discussed this work and, in particular, Nicolò Cesa-Bianchi, Gábor Bartók, Elad Hazan, Csaba Szepesvári, Miroslav Dudík, Robert Schapire, John Langford, and the anonymous reviewers, whose comments helped us to improve the final version of this manuscript. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886, and by the European Community's Seventh Framework Programme (FP7/2007-2013), under grant agreement No. 231495. This publication only reflects the authors' views.
References
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47, 2002a.
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1), 2002b.
Arindam Banerjee. On Bayesian bounds. In Proceedings of the International Conference on Machine Learning (ICML), 2006.
Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandit algorithms with supervised learning guarantees. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
Leslie Pack Kaelbling. Associative reinforcement learning: Functions in k-DNF. Machine Learning, 15, 1994.
John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NIPS), 2007.
David McAllester. Some PAC-Bayesian theorems. In Proceedings of the International Conference on Computational Learning Theory (COLT), 1998.
Matthias Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 2002.
Yevgeny Seldin and Naftali Tishby. PAC-Bayesian analysis of co-clustering and beyond. Journal of Machine Learning Research, 11, 2010.
Yevgeny Seldin, Nicolò Cesa-Bianchi, Peter Auer, François Laviolette, and John Shawe-Taylor. PAC-Bayes-Bernstein inequality for martingales and its application to multiarmed bandits. 2011a. In review. Preprint available at http://arxiv.org/abs/1110.6755.
Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. 2011b. In review. Preprint available at http://arxiv.org/abs/1110.6886.
John Shawe-Taylor and Robert C. Williamson. A PAC analysis of a Bayesian estimator. In Proceedings of the International Conference on Computational Learning Theory (COLT), 1997.
John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5), 1998.
Alexander L. Strehl, Chris Mesterharm, Michael L. Littman, and Haym Hirsh. Experience-efficient learning in associative bandit problems. In Proceedings of the International Conference on Machine Learning (ICML), 2006.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Naftali Tishby and Daniel Polani. Information theory of decisions and actions. In Vassilis Cutsuridis, Amir Hussain, John G. Taylor, and Daniel Polani, editors, Perception-Reason-Action Cycle: Models, Algorithms and Systems. Springer, 2010.
Naftali Tishby, Fernando Pereira, and William Bialek. The information bottleneck method. In Allerton Conference on Communication, Control and Computation, 1999.
Leslie G. Valiant. A theory of the learnable. Communications of the Association for Computing Machinery, 27(11), 1984.
Probabilistic Joint Image Segmentation and Labeling*
Adrian Ion^{1,2}, Joao Carreira^{1}, Cristian Sminchisescu^{1}
^{1} Faculty of Mathematics and Natural Sciences, University of Bonn
^{2} PRIP, Vienna University of Technology & Institute of Science and Technology, Austria
{ion,carreira,cristian.sminchisescu}@ins.uni-bonn.de
Abstract
We present a joint image segmentation and labeling model (JSL) which, given a
bag of figure-ground segment hypotheses extracted at multiple image locations
and scales, constructs a joint probability distribution over both the compatible
image interpretations (tilings or image segmentations) composed from those segments, and over their labeling into categories. The process of drawing samples
from the joint distribution can be interpreted as first sampling tilings, modeled
as maximal cliques, from a graph connecting spatially non-overlapping segments
in the bag [1], followed by sampling labels for those segments, conditioned on
the choice of a particular tiling. We learn the segmentation and labeling parameters jointly, based on Maximum Likelihood with a novel Incremental Saddle Point
estimation procedure. The partition function over tilings and labelings is increasingly more accurately approximated by including incorrect configurations that a
not-yet-competent model rates probable during learning. We show that the proposed methodology matches the current state of the art in the Stanford dataset [2],
as well as in VOC2010, where 41.7% accuracy on the test set is achieved.
1 Introduction
One of the main goals of scene understanding is the semantic segmentation of images: label a diverse set of object properties, at multiple scales, while at the same time identifying the spatial extent
over which such properties hold. For instance, an image may be segmented into things (man-made
objects, people or animals), amorphous regions or stuff like grass or sky, or main geometric properties like the ground plane or the vertical planes corresponding to buildings in the scene. The
optimal identification of such properties requires inference over spatial supports of different levels
of granularity, and such regions may often overlap. It appears to be now well understood that a successful extraction of such properties requires models that can make inferences over adaptive spatial
neighborhoods that span well beyond patches around individual pixels. Incorporating segmentation
information to inform the labeling process has recently become an increasingly active research area.
While initially inferences were restricted to super-pixel segmentations, recent trends emphasize joint
models with capabilities to represent the uncertainty in the segmentation process [2, 4, 5, 6, 7]. One
difficulty is the selection of segments that have adequate spatial support for reliable labeling, and
a second major difficulty is the design of models where both the segmentation and the labeling
layers can be learned jointly. In this paper, we present a joint image segmentation and labeling
model (JSL) which, given a bag of possibly overlapping figure-ground (binary) segment hypotheses, extracted independently at multiple image locations and scales, constructs a joint probability
distribution over both the compatible image interpretations (or tilings) assembled from those segments, and over their labels. For learning, we present a procedure based on Maximum Likelihood,
where the partition function over tilings and labelings is increasingly more accurately approximated
in each iteration, by including incorrect configurations that the model rates probable. This prevents
* Supported, in part, by the EC, under MCEXT-025481, and by CNCSIS-UEFISCU, PNII-RU-RC-2/2009.
[Figure 1: pipeline diagram; CPMC figure-ground segments s1, s2, ... are composed by FGTiling into tilings t1, t2, t3 and labeled by JSL (e.g., Sky, Bldg, FG-Obj, Water).]
Figure 1: Overview of our joint segment composition and categorization framework. Given an image I, we extract a bag S of figure-ground segmentations, constrained at different spatial locations and scales, using the CPMC algorithm [3] and retain the figure segments (other algorithms can be used for segment bagging). Segments are composed into image interpretations (tilings) by FGTiling [1]. In brief, segments become nodes in a consistency graph where any two segments that do not spatially overlap are connected by an edge. Valid compositions (tilings) are obtained by computing maximal cliques in the consistency graph. Multiple tilings are usually generated for each image. Tilings consist of subsets of segments in S, and may induce residual regions that contain pixels not belonging to any of the segments selected in a particular tiling. For labeling (JSL), configurations are scored based on both their category-dependent properties, measured by $F_\beta^\ell$, and their mid-level category-independent properties, measured by $F_\alpha^t$ over the dependency graph, a subset of the consistency graph connecting only spatially neighboring segments that share a boundary. The model parameters $\theta = [\alpha^\top\ \beta^\top]^\top$ are jointly learned using Maximum Likelihood based on a novel incremental Saddle Point partition function approximation. Notice that a segment appearing in different tilings of an image I is constrained to have the same label (red vertical edges).
cyclic behavior and leads to a stable optimization process. The method jointly learns both the mid-level, category-independent parameters of a segment composition model and the category-sensitive parameters of a labeling model for those segments. To our knowledge this is the first model for joint image segmentation and labeling that accommodates both inference and learning within a common, consistent probabilistic framework. We show that our procedure matches the state of the art in the Stanford [2] as well as the VOC2010 dataset, where 41.7% accuracy on the test set is achieved. Our framework is reviewed in fig. 1.
1.1 Related Work
One approach to recognize the elements of an image would be to accurately partition it into regions based on low and mid-level statistical regularities, and then label those regions, as pursued
by Barnard et al. [8]. The labeling problem can then be reduced to a relatively small number of
classification problems. However, most existing mid-level segmentation algorithms cannot generate
one unique, yet accurate segmentation per image, across multiple images, for the same set of generic
parameters [9, 10]. To achieve the best recognition, some tasks might require multiple overlapping
spatial supports which can only be provided by different segmentations.
Segmenting object parts or regions can be done at a finer granularity, with labels decided locally,
at the level of pixels [11, 12, 13] or superpixels [14, 15], based on measurements collected over
neighborhoods with limited spatial support. Inconsistent label configurations can be resolved by
smoothing neighboring responses, or by encouraging consistency among the labels belonging to regions with similar low-level properties [16, 13]. The models are effective when local appearance
statistics are discriminative, as in the case of amorphous stuff (water, grass), but inference is harder
to constrain for shape recognition, which requires longer-range interactions among groups of measurements. One way to introduce constraints is by estimating the categories likely to occur in the
image using global classifiers, then bias inference to that label distribution [12, 13, 15].
A complementary research trend is to segment and recognize categories based on features extracted
over competing image regions with larger spatial support (extended regions). The extended regions
can be rectangles produced by bounding box detectors [17, 2]. The responses are combined in a
single pixel or superpixel layer [7, 18, 17, 6] to obtain the final labeling. Extended regions can also
arise from multiple full-image segmentations [7, 18, 6]. By computing segmentations multiple times
with different parameters, chances increase that some of the segments are accurate. Multiple segmentations can also be aggregated in an inclusion hierarchy [19, 5], instead of being obtained independently. The work of Tu et al. [20] uses generative models to drive the sequential re-segmentation
process, formulated as Data Driven Markov Chain Monte Carlo inference. Recently, Gould et al.
[2] proposed a model for segmentation and labeling where new region hypotheses were generated
through a sequential procedure, where uniform label swaps for all the pixels contained inside individual segment proposals are accepted if they reduce the value of a global energy function. Kumar
and Koller [4] proposed an improved joint inference using dual-decomposition. Our approach for
segmentation and labeling is layered rather than simultaneous, and learning for the segmentation
and labeling parameters is performed jointly (rather than separately), in a probabilistic framework.
2 Probabilistic Segmentation and Labeling
Let $S = \{s_1, s_2, \dots\}$ be a set (bag) of segments from an image I. In our case, the segments $s_i$ are obtained using the publicly available CPMC algorithm [3], and represent different figure-ground hypotheses, computed independently by applying constraints at various spatial locations and scales in the image.¹ Subsets of segments in the bag S form the power set $P(S)$, with $2^{|S|}$ possible elements. We focus on a restriction of the power set of an image, its tiling set $T(I)$, with the property that all segments contained in any subset (or tiling) do not spatially overlap and the subset is maximal: $T(I) = \{t = \{\dots s_i, \dots s_j, \dots\} \in P(S) \text{ s.t. } \forall i,j,\ \mathrm{overlap}(s_i, s_j) = 0\}$. Each tiling t in $T(I)$ can have its segments labeled with one of L possible category labels. We call a labeling the mapping obtained by assigning labels to the segments in a tiling, $l(t) = \{l_1, \dots, l_{|t|}\}$, with $l_i \in \{1, \dots, L\}$ the label of segment $s_i$, and $|l(t)| = |t|$ (one label corresponds to one segment).² Let $L(I)$ be the set of all possible labelings for image I, with
$$|L(I)| = \sum_{t \in T(I)} L^{|t|} \qquad (1)$$
where we sum over all valid segment compositions (tilings) of an image, $T(I)$, and the label space
of each. We define a joint probability distribution over tilings and their corresponding labelings,
$$p_\theta(l(t), t, I) = \frac{1}{Z_\theta(I)} \exp F_\theta(l(t), t, I) \qquad (2)$$
where $Z_\theta(I) = \sum_t \sum_{l(t)} \exp F_\theta(l(t), t, I)$ is the normalizer or partition function, $l(t) \in L(I)$, $t \in T(I)$, and $\theta$ are the parameters of the model. It is a constrained probability distribution defined over two sets: a set of segments in a tiling and an index set of labels for those segments, both of the same cardinality. $F_\theta$ is defined as
$$F_\theta(l(t), t, I) = F_\beta^\ell(l(t), I) + F_\alpha^t(t, I) \qquad (3)$$
with parameters $\theta = [\alpha^\top\ \beta^\top]^\top$. The additive decomposition can be viewed as the sum of one term,
$F_\alpha^t(t, I)$, encoding a mid-level, category-independent score of a particular tiling t, and another, category-dependent score, $F_\beta^\ell(l(t), I)$, encoding the potential of a labeling l(t) for that tiling t. The components $F_\beta^\ell(l(t), I)$ and $F_\alpha^t(t, I)$ are defined as interactions over unary and pairwise terms. The potential of a labeling is
$$F_\beta^\ell(l(t), I) = \sum_{s_i \in t} \psi_{l_i}(s_i, \beta) + \sum_{s_i \in t} \sum_{s_j \in N_{s_i}^\ell} \psi_{l_i, l_j}(s_i, s_j, \beta) \qquad (4)$$
with $\psi_{l_i}$ and $\psi_{l_i,l_j}$ unary and pairwise, label-dependent potentials, and $N_{s_i}^\ell$ the label-relevant neighborhood of $s_i$. In our experiments we take $N_{s_i}^\ell = t \setminus \{s_i\}$. The unary and pairwise terms are linear
¹ Some of the figure-ground segments in S(I) can spatially overlap.
² We call a segmentation assembled from non-overlapping figure-ground segments a tiling, and the tiling together with the set of corresponding labels for its segments a labeling (rather than a labeled tiling).
in the parameters, e.g. $\psi_{l_i}(s_i, \beta) = \beta^\top \bar\psi_{l_i}(s_i)$. For example, $\psi_{l_i}(s_i, \beta)$ encodes how likely it is for segment $s_i$ to exhibit the regularities typical of objects belonging to class $l_i$. The potential of a tiling is defined as
$$F_\alpha^t(t, I) = \sum_{s_i \in t} \phi^t(s_i, \alpha) + \sum_{s_i \in t} \sum_{s_j \in N_{s_i}^t} \phi^t(s_i, s_j, \alpha) \qquad (5)$$
with $\phi^t$ unary and pairwise, label-independent potential functions, and $N_{s_i}^t$ the local image neighborhood, i.e. $N_{s_i}^t = \{s_j \in t \mid s_i, s_j \text{ share a boundary part and do not overlap}\}$. Both unary and pairwise terms $\phi^t$ are linear in the parameters, similar to the components of the category-dependent potential $F_\beta^\ell(l(t), I)$. For example, $\phi^t(s_i, \alpha)$ encodes how likely it is that segment $s_i$ exhibits generic object regularities (details on the segmentation model $F_\alpha^t(t, I)$ can be found in [1]).
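As a concrete illustration of (3)-(5), the sketch below scores one (tiling, labeling) pair from precomputed feature vectors; the data layout and all names are our own assumptions, with the feature functions standing in for those described in Section 5.

```python
import numpy as np

def score_configuration(tiling, labels, feats, theta):
    """Evaluate F_theta(l(t), t, I) = F^l_beta + F^t_alpha for one tiling.

    tiling : list of segment ids forming the tiling t.
    labels : dict segment id -> label l_i.
    feats  : dict of precomputed feature vectors:
             feats['unary_l'][(i, l)]          -> label-dependent unary features of s_i
             feats['pair_l'][(i, j, li, lj)]   -> label-dependent pairwise features
             feats['unary_t'][i]               -> category-independent unary features
             feats['pair_t'][(i, j)]           -> features of spatially neighboring pairs
    theta  : dict with parameter vectors 'beta_u', 'beta_p', 'alpha_u', 'alpha_p'.
    """
    F = 0.0
    for i in tiling:
        F += theta['beta_u'] @ feats['unary_l'][(i, labels[i])]
        F += theta['alpha_u'] @ feats['unary_t'][i]
    for i in tiling:
        for j in tiling:
            if i == j:
                continue
            # label-dependent pairwise term: N^l is fully connected within the tiling
            F += theta['beta_p'] @ feats['pair_l'][(i, j, labels[i], labels[j])]
            # tiling term: only segments that share a boundary (N^t)
            if (i, j) in feats['pair_t']:
                F += theta['alpha_p'] @ feats['pair_t'][(i, j)]
    return F
```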
Inference: Given an image I, inference for the optimal tiling and labeling $(l^*(t^*), t^*)$ is given by
$$(l^*(t^*), t^*) = \operatorname*{argmax}_{l(t),\, t}\ p_\theta(l(t), t, I) \qquad (6)$$
Our inference methodology is described in sec. 3.
Learning: During learning we optimize the parameters $\theta$ that maximize the likelihood (ML) of the ground truth under our model:
$$\theta^* = \operatorname*{argmax}_\theta \prod_I p_\theta(l^I(t^I), t^I, I) = \operatorname*{argmax}_\theta \sum_I \left[ F_\theta(l^I(t^I), t^I, I) - \log Z_\theta(I) \right] \qquad (7)$$
where $(l^I(t^I), t^I)$ are ground-truth labeled tilings for image I. Our learning methodology, including an incremental saddle point approximation for the partition function, is presented in sec. 4.
3 Inference for Tilings and Labelings
Given an image where a bag S of multiple figure-ground segments has been extracted using
CPMC [3], inference is performed by first composing a number of plausible tilings from subsets
of the segments, then labeling each tiling using spatial inference methods.
The inference algorithm for computing (sampling) tilings associates each segment to a node in a
consistency graph where an edge exists between all pairs of nodes corresponding to segments that do
not spatially overlap. The cliques of the consistency graph correspond to alternative segmentations
of the image constructed from the basic segments. The algorithm described in [1] can efficiently
find a number of plausible maximal weighted cliques, scored by (5). A maximum of |S| distinct
maximal cliques (tilings) are returned, and each segment si is contained in at least one of them.
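A simplified, greedy stand-in for this construction (the weighted maximal-clique sampler of [1] is more elaborate) might look as follows; the masks, scores, and seeding strategy are our own assumptions.

```python
import numpy as np

def consistency_graph(masks):
    """Adjacency of segments (boolean pixel masks) that do NOT spatially overlap."""
    n = len(masks)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if not np.logical_and(masks[i], masks[j]).any():
                adj[i, j] = adj[j, i] = True
    return adj

def greedy_tilings(adj, scores):
    """Grow one maximal clique (tiling) seeded at each segment, greedily by score.

    Yields at most n distinct maximal cliques, and every segment appears
    in at least one of them (its own seed clique), as in the text above.
    """
    n = adj.shape[0]
    order = np.argsort(-np.asarray(scores))       # try high-scoring segments first
    tilings = set()
    for seed in range(n):
        clique = [seed]
        for cand in order:
            if cand != seed and all(adj[cand, m] for m in clique):
                clique.append(int(cand))
        tilings.add(tuple(sorted(clique)))        # result is maximal by construction
    return [list(t) for t in tilings]
```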
Inference for the labels of the segments in each tiling can be performed using any number of reliable methods; in this work we use tree-reweighted belief propagation (TRW-S) [21]. The maximum in (6) is computed by selecting the labeling with the highest probability (2) among the tilings generated by the segmentation algorithm.
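For very small tilings and label sets, the maximization of (4) inside one tiling can even be done exhaustively; the sketch below is such a brute-force stand-in for TRW-S, with an assumed shared pairwise score matrix.

```python
import itertools
import numpy as np

def best_labeling(tiling, L, unary, pairwise):
    """Exhaustive maximization of (4) over labelings of one small tiling.

    unary    : dict segment id -> (L,) array of unary label scores.
    pairwise : (L, L) array of label co-occurrence scores, shared across pairs.
    TRW-S [21] replaces this brute force at realistic sizes (e.g., L up to 21).
    """
    n = len(tiling)
    best_lab, best_val = None, -np.inf
    for lab in itertools.product(range(L), repeat=n):
        v = sum(unary[s][lab[k]] for k, s in enumerate(tiling))
        v += sum(pairwise[lab[k], lab[m]]
                 for k in range(n) for m in range(n) if k != m)
        if v > best_val:
            best_lab, best_val = lab, v
    return best_lab, best_val
```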
Given a set of $N = |S|$ figure-ground segments, the total complexity of inference is $O(Nd^3 + NT + N)$: $O(Nd^3)$ steps are required to sample up to N tilings [1], with $d = \max_{s_i \in S} |N_{s_i}^t|$; $NT$ is the complexity of inference with TRW-S (with complexity, say, T) for each computed tiling; and N steps are needed to select the highest-scoring labeling. For $|S| = 200$ the joint inference over labelings and tilings takes under 10 seconds per image in our implementation and produces a set of plausible segmentation and labeling hypotheses which are also useful for learning, described next.
4 Incremental Saddle Point Learning
Fundamental to maximum likelihood learning is a tractable, yet stable and sufficiently accurate estimate of the partition function in (7). The number of terms in $Z_\theta(I)$ is $|L(I)|$ (1), which is exponential both in the number of figure-ground segments and in the number of labels. As reviewed in sec. 3, we approximate the tiling distribution of an image by a number of configurations bounded above by the number of figure-ground segments. This replaces one exponential set of terms in the partition function in (2) (the sum over tilings) with a set of size at most $|S|$.
In turn, each tiling can be labeled in exponentially many ways: the second sum in the partition function in (2) runs over all labelings of a tiling. One possibility for dealing with this exponential sum in models with loopy dependencies would be the Pseudo-Marginal Approximation (PMA), which estimates $Z_\theta(I)$ using loopy BP and computes gradients as expectations under the estimated marginals. Kumar et al. [22] found this approximation to perform best for learning conditional random fields for pixel labeling. However, it requires inference over all tilings at every optimization iteration. With 484 iterations required for convergence on the VOC dataset, this strategy took in our case 140 times longer than the learning strategy based on incremental saddle-point approximations presented below, which requires 1.3 hours for learning. Run for the same time, the PMA did not produce satisfactory results in our model (sec. 5).
Another possibility would be to approximate the exponential sum over labels with its largest term, obtained at the most probable configuration (the saddle-point approximation). However, this approach tends to behave erratically as a result of flips within the MAP configurations used to approximate the partition function (sec. 5).
To ensure stability and learning accuracy, we use an incremental saddle point approximation to the partition function. This is obtained by accumulating, in each learning iteration, new incorrect ("offending") labelings rated as the most probable by our current model ($L_j(I)$ denotes the set over which the partition function for image I is computed in learning iteration j):
$$L_{j+1}(I) = L_j(I) \cup \{\tilde l, t\} \quad \text{with} \quad (\tilde l, t) = \operatorname*{argmax}_{l(t),\, t} F_\theta(l(t), t, I) \qquad (8)$$
and $\tilde l \neq l^I$, with $l^I$ the ground truth labeling for image I. We set $L_0(I) = \emptyset$. The configurations in $L_j$ are also used to compute the (analytic) gradient, and we use quasi-Newton methods to optimize (7). As learning progresses, new labelings are added to the partition function estimate and it becomes more accurate.
Our learning procedure stops either (1) when all label configurations have been incrementally generated, in which case the exact value of the partition function and unbiased estimates of the parameters are obtained, or (2) when a subset of the configuration space has been considered in the partition function approximation and no new "offending" configurations outside this set have been generated during the previous learning (and inference) iteration. In this case a biased estimate is obtained. This is to some extent inevitable when learning models with loopy dependencies and exponential state spaces. In practice, for all datasets we worked on, the learning algorithm converged in 10-25 iterations. In experiments (sec. 5), we show that learning is significantly more stable than with standard saddle-point approximations.
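The procedure can be summarized by the following skeleton, where infer_best, score, and grad_score are placeholders for the inference and potentials of Sections 2-3, configurations are assumed to be hashable tuples, and a plain gradient step stands in for the quasi-Newton update:

```python
import numpy as np

def learn_incremental(images, theta0, infer_best, score, grad_score,
                      n_outer=25, lr=1e-2):
    """Skeleton of the incremental saddle-point ML procedure (8); names are ours.

    infer_best(img, theta) -> highest-scoring configuration (labeled tiling).
    score(cfg, img, theta) -> F_theta(l(t), t, I); grad_score -> its gradient.
    img['gt'] holds the ground-truth configuration of each image.
    """
    theta = theta0.copy()
    L = [set() for _ in images]                  # accumulated sets L_j(I)
    for _ in range(n_outer):
        new = 0
        for k, img in enumerate(images):         # collect "offending" configurations
            cfg = infer_best(img, theta)
            if cfg != img['gt'] and cfg not in L[k]:
                L[k].add(cfg)
                new += 1
        g = np.zeros_like(theta)                 # gradient of the approximate likelihood
        for k, img in enumerate(images):
            cfgs = list(L[k]) + [img['gt']]
            w = np.array([score(c, img, theta) for c in cfgs])
            w = np.exp(w - w.max())
            w /= w.sum()                         # softmax over the partial Z
            g += grad_score(img['gt'], img, theta)
            for wi, c in zip(w, cfgs):
                g -= wi * grad_score(c, img, theta)
        theta += lr * g                          # the paper uses quasi-Newton here
        if new == 0:                             # stopping condition (2) in the text
            break
    return theta
```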
5 Experiments
We evaluate the quality of semantic segmentation produced by our models in two different datasets:
the Stanford Background Dataset [2], and the VOC2010 Pascal Segmentation Challenge [23].
The Stanford Background Dataset contains 715 images and comprises two domains of annotation:
semantic classes and geometric classes. The task is to label each pixel in every image with both
types of properties. The dataset also contains mid-level segmentation annotations for individual
objects, which we use to initially learn the parameters of the segmentation model (see sec. 3 and [1]).
Evaluation in this dataset is performed using cross-validation over 5 folds, as in [2]. The evaluation
criterion is the mean pixel (labeling) accuracy.
The VOC2010 dataset is accepted as currently one of the most challenging object-class segmentation
benchmarks. This dataset also has annotation for individual objects, which we use to learn mid-level
segmentation parameters (α). Unlike Stanford, where all pixels are annotated, on VOC only objects
from the 20 classes have ground truth labels. The evaluation criterion is the VOC score: the average
per-class overlap between pixels labeled in each class and the respective ground truth annotation.³
Quality of segments and tilings: We generate a bag of figure-ground segments for each image
using the publicly available CPMC code [3]. CPMC is an algorithm that generates a large pool
(or bag) of figure-ground segmentations, scores them using mid-level properties, and returns the
³ The overlap measure of two segments is $O(s, s_g) = \frac{|s \cap s_g|}{|s \cup s_g|}$ [23].
Table 1 (left):
                       Stanford Geometry   Stanford Semantics   VOC2010 Object Classes
Max. pixel accuracy         93.3                85.6                     -
Max. VOC score               -                    -                     77.9

Table 1 (right):
Method              Semantic   Geometry
JSL                   75.6       88.8
Gould et al. [2]      76.4       91.0

Table 1: Left: Study of the maximum achievable labeling accuracy for our tiling set, for Stanford and VOC2010. The study uses our tiling closest to the segmentation ground truth and assigns "perfect" pixel labels to it based on that ground truth. In contrast, the best labeling accuracy we obtain automatically is 88.8 for Stanford Geometry, 75.6 for Stanford Semantics, and 41.7 for VOC2010. This shows that potential bottlenecks in reaching the maximum values have to do more with training (ranking) and labeling than with the spatial segment layouts and the tiling configurations produced. The average number of segments per tiling is 6.6 on Stanford and 7.9 on VOC. Right: Mean pixel accuracies on the Stanford Labeling Dataset. We obtain results comparable to the state of the art on a challenging full-image labeling problem. The results are significant, considering that we use tilings (image segmentations) made on average of 6.6 segments per image. The same method is also competitive on object segmentation datasets such as VOC2010, where the object granularity is much higher and regions with large spatial support are decisive for effective recognition (table 2).
top k ranked. The online version contains pre-trained models on VOC, but these tend to discard
background regions, since VOC has none. For the Stanford experiments, we retrain the CPMC
segment ranker using Stanford's segment layout annotations. We generated segment bags having up
to 200 segments on the Stanford dataset, and up to 100 segments on the VOC dataset. We model
and sample tilings using the methodology described in [1] (see also (5) and sec. 3).
Table 1 (left) gives labeling performance upper bounds on the two datasets for the figure-ground segments and tilings produced. It can be seen that the upper bounds are high for both problems; hence the quality of segments and tilings does not currently limit the final labeling performance, compared
to the current state-of-the-art. For further detail on the figure-ground segment pool quality (CPMC)
and their assembly into complete image interpretations (FGtiling), we refer to [3, 1].
Labeling performance: The tiling component of our model (5) has 41 unary and 31 pairwise parameters (α) in VOC2010, and 40 unary and 74 pairwise parameters (α) in Stanford. Detail on these features is given in [1]. In this section we discuss only the features used by the labeling component of the model (4).
In both VOC2010 and Stanford we use two meta-features for the unary, category-dependent terms.
One type of meta-feature is produced as the output of regressors trained (on specific image features
described next) to predict the overlap of input segments with putative categories. There is one such meta-feature (one regressor) for each category. A second type of meta-feature is obtained from an object
detector [24] to which a particular segment is presented. These detectors operate on bounding boxes,
so we determine segment class scores as those of the bounding box overlapping most with the
bounding box enclosing each segment.
Since the target semantic concepts of the Stanford and VOC2010 datasets are widely different, we
use label-dependent unary terms based on different features. In both cases we use pairwise features
connecting all segments belonging to the same tiling ($N_{s_i}^\ell$ encodes full connectivity). As pairwise features for $\psi_{l_i,l_j}$ we use simply a square matrix with all values set to 1, as in [5]. In this way,
the model can learn to avoid unlikely patterns of label co-occurrence.
On the Stanford Background Dataset, we train two types of unary meta-features for each class, for
semantic and geometric classes. The first unary meta-feature is the output of a regressor trained
with the publicly available features from Hoiem et al. [7], and the second one uses the features of
Gould et al. [25]. Each of the feature vectors is transformed using a randomized feature map that
approximates the Gaussian-RBF kernel [26, 27]. Using this methodology we can work with linear
models in the randomized feature map, yet exploit non-linear kernel embeddings. Summarizing,
for Stanford geometry, we have 12 parameters β (9 unary parameters from 3 classes, each with 2 meta-features and a bias, and 3 pairwise parameters), whereas for Stanford semantic labels we have 52 parameters β (24 unary from 8 classes, each with 2 meta-features and a bias, and 28 pairwise, the upper triangle of an 8x8 matrix).
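For reference, the standard randomized feature-map construction from [26] looks as follows; the dimensionality, bandwidth, and names are our own choices.

```python
import numpy as np

def rff_map(X, D=512, sigma=1.0, seed=0):
    """Random Fourier features z(x) with z(x)^T z(y) ~ exp(-||x-y||^2 / (2 sigma^2)).

    X : (n, d) input feature matrix; returns an (n, D) randomized feature map,
    so that linear models on z(x) approximate Gaussian-RBF kernel models.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # spectral samples of the RBF kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```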
[Figure 2: six example images with predicted segment labels such as person, pottedplant, bird, horse, bicycle, sofa, train, chair.]
Figure 2: (Best viewed in color.) Semantic segmentation results of our method on images from the VOC2010 test set: the first three images show cases where the algorithm performs satisfactorily, and the last three show cases where it works less well. Notice that identifying multiple objects from the same class is possible in this framework.
In the Stanford dataset, background regions such as grass and sky are shapeless and often locally
discriminative. In such cases methods relying on pixel-level descriptors usually obtain good results
(e.g. see baseline in [2]). In turn, outdoor datasets containing stuff are challenging for a method like
ours that relies on segmentations (tilings) which have an average of 6.6 segments per image (table
1, left). The results we obtain are comparable to Gould et al. [2], as visible in table 1, right. The
evaluation criterion is the same for both methods: the mean pixel accuracy.
On the VOC2010 dataset, performance is evaluated using the VOC score, the average of per-class
overlap between pixels labeled in each class and the respective ground truth class. We used two
different unary meta-features as well. The first is the output of SVM regressors trained as in [28] using their publicly available features [3]. These regressors predict class scores directly on segments,
based on several features: bag of words of gray-level SIFT [29] and color SIFT [30] defined on
the foreground and background of each individual segment, and three pyramid HOGs with different
parameters. Multiple chi-square kernels $K(x, y) = \exp(-\gamma\,\chi^2(x, y))$ are combined as in [28]. As a second unary meta-feature we use the outputs of deformable part model detectors [24]. Summarizing, we have 63 category-dependent unary parameters β (21 classes, each having 2 meta-features and a bias), and 210 category-dependent pairwise parameters β (the upper triangle of a 21x21 matrix). The
results, which match and slightly improve the recent winners in the 2010 VOC challenge, are reported in table 2. In particular, our method produces the highest VOC score average over all classes,
and also scores first on 9 individual classes. The images in fig. 2 show that our algorithm produces
correct labelings. Notice that often the boundaries produced by tilings align with the boundaries of
individual objects, even when there are multiple such nearby objects from the same class.
Impact of different segmentation and labeling methods: We also evaluate the inference method of [4] (using the code provided by the authors) on the VOC 2010 dataset, with the same input segments and potentials as for JSL. The inference time of the C++ implementation of [4] is comparable to that of our MATLAB implementations of FGtiling and JSL. The score obtained by [4] on our model
is 31.89%, 2.8% higher than the score obtained by the authors using piece-wise training and a different

Classes        JSL    CHD    BSS
Background     83.4   81.1   84.2
Aeroplane      51.6   58.3   52.5
Bicycle        25.1   23.1   27.4
Bird           52.4   39.0   32.3
Boat           35.6   37.8   34.5
Bottle         49.6   36.4   47.4
Bus            66.7   63.2   60.6
Car            55.6   62.4   54.8
Cat            44.6   31.9   42.6
Chair          10.6    9.1    9.0
Cow            41.2   36.8   32.9
DiningTable    29.9   24.6   25.2
Dog            25.5   29.4   27.1
Horse          49.8   37.5   32.4
Motorbike      47.9   60.6   47.1
Person         37.2   44.9   38.3
PottedPlant    19.3   30.1   36.8
Sheep          45.0   36.8   50.3
Sofa           24.4   19.4   21.9
Train          37.2   44.1   35.2
Tv/Monitor     43.3   35.9   40.9
Average        41.7   40.1   39.7

Table 2: Per-class results and averages obtained by our method (JSL) as well as the top-scoring methods in the VOC2010 segmentation challenge (CHD: CVC-HARMONY-DET [15], BSS: BONN-SVR-SEGM [28]). Compared to other VOC2010 participants, the proposed method obtains better scores in 9 out of 21 classes and has a superior class average, the standard measure used for ranking. Top scores for each class are marked in bold. Results for other methods can be found in [23]. Note that both JSL (the meta-features) and CHD are trained with the additional bounding box data and images from the training set for object detection. Using this additional training data, the class average obtained by BSS is 43.8 [28].
[Figure 3: three panels; vertical axes: -log Z, VOC score, nr. labelings; horizontal axes: learning iteration; legends: with/without incremental Z, labelings total/new.]
Figure 3: Left: The negative log(Z) at the end of each iteration, for the standard (non-incremental) and the incremental saddle-point approximations to the partition function. Without the stable and more accurate incremental saddle-point approximation to the partition function, the algorithm cannot learn successfully. Results are obtained by training on VOC2010's "trainval" (train+validation) dataset. Center: VOC2010 labeling score as a function of the learning iteration (training on VOC2010's "trainval"). Right: Number of new labeling configurations added to the partition function expansion as learning proceeds for VOC2010. Most configurations are added in the first few iterations.
pool of segments [23], but 9.8% lower than the score of JSL. This suggests that a layered strategy based on selecting a compact set of representative segmentations, followed by labeling, is more accurate than sequentially searching for segments and their labels.
In practice, the proposed JSL framework does not depend on FGtiling/CPMC to provide segmentations. Instead, we can use any segmentation method. We have tested the JSL framework (learning
and inference) on the Stanford dataset, using segmentations produced by the Ultrametric Contour
Map (UCM) hierarchical segmentation method [9]. To obtain a similar number of segments as for
CPMC (200 per image), we selected only the segmentation levels above 20. The features and parameters were computed exactly as before. The bag of segments for each image was derived from the UCM segmentations, and the segmentations were taken as tiling configurations for the corresponding image. In this case, the scores are 76.8 and 88.2 for the semantic and geometric classes,
respectively, showing the robustness of JSL to different input segmentations (see also table 1, right).
Learning performance: In all our learning experiments, the model parameters were initialized to the null vector before learning proceeds, except for the β corresponding to the unary terms in $F_\beta^\ell$, which were set to one. Figure 3, left and center, shows comparisons of learning with and without the incremental saddle point approximation to the partition function, for the VOC 2010 dataset. Without accumulating labelings incrementally, the learning algorithm exhibits erratic behavior and overfits: the relatively small number of labelings used to estimate the partition function produces very different results between consecutive iterations. Figure 3, right, shows the number of total and new labelings added at each learning iteration.
Learning the parameters on VOC 2010 using PMA has taken 180 hours and produced a VOC score
of 41.3%. Stopping the learning with PMA after 2 hours (slightly above the 1.3 hrs required by the
incremental saddle point approximation) results in a VOC score of 3.87%.
6 Conclusion
We have presented a joint image segmentation and labeling model (JSL) which, given a bag of
figure-ground image segment hypotheses, constructs a joint probability distribution over both the
compatible image interpretations assembled from those segments, and over their labeling. The process can be interpreted as first sampling maximal cliques from a graph connecting all segments that
do not spatially overlap, followed by sampling labels for those segments, conditioned on the choice
of their particular tiling. We propose a joint learning procedure based on Maximum Likelihood
where the partition function over tilings and labelings is increasingly more accurately approximated
during training, by including incorrect configurations that the model rates probable. This ensures
that mistakes are not carried on uncorrected in future training iterations, and produces stable and
accurate learning schedules. We show that models can be learned efficiently and match the state of
the art in the Stanford dataset, as well as VOC2010 where 41.7% accuracy on the test set is achieved.
References
[1] A. Ion, J. Carreira, and C. Sminchisescu. Image segmentation by figure-ground composition into maximal cliques. In ICCV, November 2011.
[2] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent regions. In ICCV, September 2009.
[3] J. Carreira and C. Sminchisescu. Constrained parametric min-cuts for automatic object segmentation. In CVPR, June 2010.
[4] M. P. Kumar and D. Koller. Efficiently selecting regions for scene understanding. In CVPR, 2010.
[5] S. Nowozin, P. V. Gehler, and C. H. Lampert. On parameter learning in CRF-based approaches to object class image segmentation. In ECCV, 2010.
[6] L. Ladicky, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical CRFs for object class image segmentation. In ICCV, 2009.
[7] D. Hoiem, A. Efros, and M. Hebert. Recovering surface layout from an image. IJCV, 75(1), 2007.
[8] K. Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. M. Blei, and M. Jordan. Matching words and pictures. JMLR, 3:1107-1135, March 2003.
[9] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. From contours to regions: An empirical evaluation. In CVPR, pages 2294-2301, June 2009.
[10] T. Malisiewicz and A. Efros. Improving spatial support for objects via multiple segmentations. In BMVC, 2007.
[11] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 81:2-23, 2009.
[12] X. He, R. S. Zemel, and M. Carreira-Perpinan. Multiscale conditional random fields for image labeling. CVPR, 2004.
[13] G. Csurka and F. Perronnin. An efficient approach to semantic segmentation. IJCV, pages 1-15, 2010.
[14] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In ICCV, 2009.
[15] J. M. Gonfaus, X. Boix, J. van de Weijer, A. D. Bagdanov, J. Serrat, and J. Gonzalez. Harmony potentials for joint classification and segmentation. In CVPR, 2010.
[16] P. Kohli, L. Ladicky, and P. H. S. Torr. Robust higher order potentials for enforcing label consistency. In CVPR, 2008.
[17] L. Ladicky, P. Sturgess, K. Alaharia, C. Russel, and P. H. S. Torr. What, where & how many? Combining object detectors and CRFs. In ECCV, September 2010.
[18] C. Pantofaru, C. Schmid, and M. Hebert. Object recognition by integrating multiple image segmentations. In ECCV, 2008.
[19] J. J. Lim, P. Arbelaez, Chunhui Gu, and J. Malik. Context by region ancestry. In ICCV, 2009.
[20] Z. Tu, X. Chen, A. L. Yuille, and S.-C. Zhu. Image parsing: unifying segmentation, detection, and recognition. In ICCV, 2003.
[21] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PAMI, 28(10):1568-1583, 2006.
[22] S. Kumar, J. August, and M. Hebert. Exploiting inference for approximate parameter learning in discriminative fields: An empirical study. In EMMCVPR, 2005.
[23] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results. http://www.pascal-network.org/challenges/VOC/.
[24] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32(9):1627-1645, 2010.
[25] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location prior. IJCV, 80(3):300-316, 2008.
[26] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, December 2007.
[27] F. Li, C. Ionescu, and C. Sminchisescu. Random Fourier approximations for skewed multiplicative histogram kernels. In DAGM, September 2010.
[28] F. Li, J. Carreira, and C. Sminchisescu. Object recognition by sequential figure-ground ranking. IJCV, 2012.
[29] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[30] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek. Evaluating color descriptors for object and scene recognition. PAMI, 32(9):1582-1596, 2010.
| 4228 |@word kohli:2 version:1 faculty:1 achievable:1 everingham:1 adrian:1 decomposition:2 textonboost:1 offending:2 harder:1 configuration:16 cyclic:1 score:20 selecting:3 contains:3 hoiem:2 trainval:2 ours:1 existing:1 freitas:1 current:3 si:22 yet:4 assigning:1 parsing:1 additive:1 partition:21 visible:1 shape:1 analytic:1 grass:3 pursued:1 selected:2 generative:1 plane:2 blei:1 node:3 location:5 org:1 rc:1 constructed:1 become:2 incorrect:4 ijcv:6 inside:1 introduce:1 pairwise:9 snoek:1 behavior:2 multi:2 chi:1 voc:18 relying:1 automatically:1 encouraging:1 pf:2 cardinality:1 considering:1 becomes:1 provided:2 estimating:1 bounded:1 joao:1 null:1 what:1 interpreted:2 pseudo:1 sky:5 every:2 stuff:3 ti:4 exactly:1 classifier:1 ramanan:1 segmenting:1 t1:2 before:2 understood:1 local:2 tends:1 limit:1 mistake:1 encoding:2 pami:3 might:1 bird:4 suggests:1 challenging:3 co:1 limited:1 range:1 malisiewicz:1 decided:1 unique:1 satisfactorily:1 practice:2 ucm:2 procedure:6 logz:1 maire:1 area:1 empirical:2 significantly:1 vedaldi:1 matching:1 pre:1 induce:1 word:2 integrating:1 diningtable:1 cannot:2 selection:1 layered:2 context:2 applying:1 accumulating:2 restriction:1 optimize:2 map:4 www:1 center:2 crfs:2 layout:4 williams:1 independently:3 identifying:2 assigns:1 stability:1 searching:1 fulkerson:1 ultrametric:1 hierarchy:1 target:1 exact:1 us:3 hypothesis:6 superpixel:2 associate:1 trend:2 element:2 approximated:3 recognition:8 cut:1 labeled:6 gehler:1 region:19 ensures:1 connected:1 russell:1 highest:3 complexity:3 trained:6 depend:1 segment:80 yuille:1 localization:1 distinctive:1 swap:1 triangle:2 gu:1 resolved:1 joint:16 voc2010:21 various:1 cat:1 kolmogorov:1 train:4 distinct:1 effective:2 monte:1 zemel:1 labeling:45 horse:2 neighborhood:5 outside:1 stanford:24 larger:1 plausible:3 say:1 drawing:1 widely:1 cvpr:6 statistic:1 jointly:6 final:2 cristian:2 online:1 associative:1 took:1 propose:1 interaction:2 maximal:7 erratically:1 neighboring:2 tu:2 relevant:1 combining:1 achieve:1 deformable:1 exploiting:1 convergence:1 regularity:3 produce:6 categorization:1 incremental:13 perfect:1 object:29 measured:2 progress:1 recovering:1 uncorrected:1 annotated:1 correct:1 criminisi:1 mcallester:1 require:1 probable:5 hold:1 around:1 sufficiently:1 ground:26 considered:1 exp:3 mapping:1 predict:2 bicycle:3 major:1 efros:2 consecutive:1 estimation:1 sofa:2 bag:13 label:33 currently:2 harmony:2 sensitive:1 largest:1 successfully:1 weighted:1 minimization:1 gaussian:1 super:1 rather:4 reaching:1 avoid:1 mcext:1 l0:1 focus:1 derived:1 june:2 likelihood:6 superpixels:1 contrast:1 normalizer:1 baseline:1 summarizing:2 inference:25 dependent:8 stopping:1 perronnin:1 unary:16 dagm:1 lj:6 unlikely:1 initially:2 koller:4 quasi:1 pantofaru:1 labelings:18 semantics:1 transformed:1 pixel:17 classification:2 among:4 dual:1 pascal:3 animal:1 art:5 spatial:13 weijer:1 constrained:4 smoothing:1 marginal:1 field:3 construct:3 extraction:1 having:2 sampling:5 inevitable:1 foreground:1 future:1 t2:2 few:1 composed:2 recognize:2 individual:7 argmax:4 geometry:4 detection:3 message:1 possibility:2 evaluation:5 sheep:1 chain:1 accurate:7 edge:3 respective:2 tree:2 initialized:1 re:1 girshick:1 instance:1 modeling:1 loopy:3 subset:7 uniform:1 successful:1 reported:1 dependency:3 combined:2 person:4 recht:1 fundamental:1 randomized:2 retain:1 probabilistic:4 pool:3 regressor:2 connecting:4 together:1 connectivity:1 containing:1 possibly:1 return:1 nsl:2 li:8 potential:10 de:4 sec:8 bold:1 inc:2 forsyth:1 ranking:3 
decisive:1 piece:1 performed:4 csurka:1 multiplicative:1 lowe:1 overfits:1 red:1 competitive:1 participant:1 capability:1 annotation:4 gevers:1 amorphous:2 square:2 publicly:4 accuracy:10 descriptor:2 efficiently:3 t3:2 correspond:1 identification:1 accurately:4 produced:8 lli:7 carlo:1 none:1 drive:1 finer:1 converged:1 detector:5 simultaneous:1 inform:1 energy:2 stop:1 dataset:20 austria:1 knowledge:1 color:3 car:1 lim:1 segmentation:64 schedule:1 trw:2 appears:1 higher:3 methodology:5 response:2 improved:1 bmvc:1 zisserman:1 done:2 box:5 evaluated:1 multiscale:1 overlapping:5 propagation:1 incrementally:2 quality:4 gray:1 building:1 contain:1 unbiased:1 concept:1 hence:1 soatto:1 spatially:7 satisfactory:1 semantic:11 deal:1 reweighted:2 during:4 skewed:1 criterion:3 complete:1 crf:1 performs:1 l1:1 image:63 wise:2 novel:2 recently:2 common:1 superior:1 overview:1 cohen:1 winner:1 exponentially:1 interpretation:5 approximates:1 he:1 marginals:1 rodgers:1 metafeature:1 measurement:2 composition:5 nst:1 significant:1 refer:1 automatic:1 consistency:7 mathematics:1 inclusion:1 stable:5 longer:2 surface:1 align:1 closest:1 recent:2 driven:1 discard:1 sande:1 meta:11 binary:1 scoring:2 seen:1 additional:2 aggregated:1 maximize:1 determine:1 elidan:1 multiple:15 full:3 keypoints:1 rahimi:1 segmented:1 match:4 cross:1 impact:1 basic:1 emmcvpr:1 expectation:1 iteration:16 represent:2 kernel:5 histogram:1 pyramid:1 achieved:3 ion:2 proposal:1 background:7 whereas:2 separately:1 cvc:1 winn:2 figureground:1 biased:1 operate:1 unlike:1 tend:1 thing:1 december:1 prip:1 inconsistent:1 obj:5 call:2 jordan:1 granularity:3 shotton:1 embeddings:1 competing:1 cow:1 reduce:1 det:1 ranker:1 bottleneck:1 aeroplane:1 returned:1 passing:1 adequate:1 matlab:1 useful:1 mid:6 locally:2 category:16 sturgess:1 reduced:1 generate:2 http:1 notice:3 estimated:1 per:9 ionescu:1 diverse:1 group:1 monitor:1 d3:2 rectangle:1 graph:8 sum:6 cpmc:10 run:1 bldg:3 uncertainty:1 patch:1 putative:1 gonzalez:1 comparable:3 layer:2 bound:2 followed:3 convergent:1 fold:1 replaces:1 occur:1 constraint:2 worked:1 constrain:1 bp:1 scene:5 ladicky:3 encodes:3 chd:5 nearby:1 bonn:3 generates:1 fourier:1 span:1 chair:2 kumar:4 min:1 duygulu:1 relatively:2 gould:6 tv:1 march:1 belonging:4 across:1 slightly:2 increasingly:4 s1:2 restricted:1 iccv:6 invariant:1 taken:2 bus:1 turn:2 discus:1 flip:1 tractable:1 end:1 tiling:59 available:4 decomposing:1 hierarchical:2 generic:2 appearing:1 occurrence:1 fowlkes:1 alternative:1 robustness:1 motorbike:1 bagging:1 denotes:1 running:1 ensure:1 top:3 assembly:1 x21:1 newton:1 vienna:1 unifying:1 exploit:1 malik:2 added:4 strategy:3 parametric:1 fulton:1 nr:1 exhibit:3 gradient:2 september:3 arbelaez:2 accommodates:1 extent:2 collected:1 water:4 enforcing:1 ru:1 code:2 rother:1 modeled:1 index:1 hog:1 negative:1 design:1 implementation:3 enclosing:1 perform:1 upper:4 vertical:2 markov:1 datasets:6 benchmark:1 november:1 behave:1 extended:3 august:1 bagdanov:1 pair:2 required:3 bottle:1 dog:1 learned:3 hour:3 nip:1 assembled:3 beyond:1 proceeds:2 usually:2 below:1 pattern:1 challenge:5 including:4 reliable:2 max:2 belief:1 erratic:1 power:2 overlap:12 gool:1 natural:1 difficulty:2 ranked:1 boat:1 residual:1 hr:1 zhu:1 improve:1 technology:2 brief:1 rated:1 picture:1 carried:1 x8:1 extract:1 schmid:1 prior:1 understanding:3 geometric:5 sg:3 relative:1 discriminatively:1 validation:2 consistent:2 jsl:17 share:2 nowozin:1 eccv:3 compatible:3 supported:1 last:1 hebert:3 pma:4 bias:4 institute:1 
felzenszwalb:1 fg:5 van:3 boundary:4 bs:5 valid:2 evaluating:1 contour:2 computes:1 ferent:1 author:2 made:2 adaptive:1 regressors:3 ec:1 sj:8 approximate:4 emphasize:1 uni:1 obtains:1 compact:1 clique:7 ml:1 global:2 active:1 sequentially:1 discriminative:3 ancestry:1 reviewed:2 table:8 learn:5 robust:1 composing:1 improving:1 sminchisescu:5 expansion:1 domain:1 did:1 main:2 s2:2 bounding:5 scored:2 arise:1 lampert:1 competent:1 complementary:1 fig:2 representative:1 retrain:1 boix:1 comprises:1 exponential:5 outdoor:1 perpinan:1 jmlr:1 learns:1 specific:1 sift:2 showing:1 svm:1 incorporating:1 consist:1 exists:1 sequential:3 texture:1 conditioned:2 chen:1 simply:1 saddle:12 appearance:1 likely:3 visual:1 prevents:1 contained:3 midlevel:2 corresponds:1 chance:1 truth:8 extracted:4 relies:1 russel:1 conditional:2 goal:1 formulated:1 viewed:2 marked:1 rbf:1 barnard:2 man:1 carreira:5 typical:1 except:1 torr:3 semantically:1 total:3 accepted:2 select:1 people:1 support:7 evaluate:2 tested:1 |
Manifold Précis: An Annealing Technique for Diverse
Sampling of Manifolds
Nitesh Shroff¹, Pavan Turaga², Rama Chellappa¹
¹Department of Electrical and Computer Engineering, University of Maryland, College Park
²School of Arts, Media, Engineering and ECEE, Arizona State University
Abstract
In this paper, we consider the Précis problem of sampling K representative yet
diverse data points from a large dataset. This problem arises frequently in applications such as video and document summarization, exploratory data analysis,
and pre-filtering. We formulate a general theory which encompasses not just traditional techniques devised for vector spaces, but also non-Euclidean manifolds,
thereby extending these techniques to shapes, human activities, textures and many
other image- and video-based datasets. We propose intrinsic manifold measures for
measuring the quality of a selection of points with respect to their representative
power, and their diversity. We then propose efficient algorithms to optimize the
cost function using a novel annealing-based iterative alternation algorithm. The
proposed formulation is applicable to manifolds of known geometry as well as
to manifolds whose geometry needs to be estimated from samples. Experimental
results show the strength and generality of the proposed approach.
1 Introduction
The problem of sampling K representative data points from a large dataset arises frequently in various applications. Consider analyzing large datasets of shapes, objects, documents or large video
sequences, etc. Analysts spend a large amount of time sifting through the acquired data to familiarize themselves with the content before using it for their application-specific tasks. This has
made the optimal selection of a few representative exemplars from the dataset
an important step in exploratory data analysis. Other applications include Internet-based video summarization, where providing a quick overview of a video is important for improving the browsing
experience. Similarly, in medical image analysis, picking a subset of K anatomical shapes from
a large population helps in identifying the variations within and across shape classes, providing an
invaluable tool for analysts.
Depending upon the application, several subset selection criteria have been proposed in the literature. However, there seems to be a consensus in selecting exemplars that are representative of
the dataset while minimizing the redundancy between the exemplars. Liu et al.[1] proposed that
the summary of a document should satisfy the "coverage" and "orthogonality" criteria. Shroff et
al. [2] extended this idea to selecting exemplars from videos that maximize "coverage" and "diversity". Simon et al. [3] formulated scene summarization as one of picking interesting and important
scenes with minimal redundancy. Similarly, in statistics, stratified sampling techniques sample the
population by dividing the dataset into mutually exclusive and exhaustive "strata" (sub-groups), followed by a random selection of representatives from each stratum [4]. The splitting of the population into
strata ensures that a diverse selection is obtained. The need to select diverse subsets has also been
emphasized in information retrieval applications [5, 6].
Column Subset Selection (CSS) [7, 8, 9] has been one of the popular techniques to address this problem. The goal of CSS is to select the K most "well-conditioned" columns from the matrix of data
points. One of the key assumptions behind this and other techniques is that the objects or their representations lie in the Euclidean space. Unfortunately, this assumption is not valid in many cases. In
applications like computer vision, images and videos are represented by features/models like shapes
[10], bags-of-words, linear dynamical systems (LDS) [11], etc. Many of these features/models have
been shown to lie in non-Euclidean spaces, implying that the underlying distance metric of the space
is not the usual ℓ₂/ℓ_p norm. Since these feature/model spaces have a non-trivial manifold structure,
the distance metrics are highly non-linear functions. Examples of feature/model-manifold pairs
include: shapes, the complex spherical manifold [10]; linear subspaces, the Grassmann manifold; covariance
matrices, the tensor space; histograms, the simplex in ℝⁿ; etc. Even the familiar bag-of-words
representation, used commonly in document analysis, is more naturally considered as a statistical
manifold than as a vector space [12]. The geometric properties of the non-Euclidean manifolds allow
one to develop accurate inference and classification algorithms [13, 14]. In this paper, we focus on
the problem of selecting a subset of K exemplars from a dataset of N points when the dataset has an
underlying manifold structure to it. We formulate notions of representational error and diversity
of exemplars that utilize the non-Euclidean structure of the data points, and then propose an
efficient annealing-based optimization algorithm.
Related Work: The problem of subset selection has been studied by the communities of numerical
linear algebra and theoretical computer science. Most work in the former community is related
to the Rank-Revealing QR factorization (RRQR) [7, 15, 16]. Given a data matrix Y, the goal of
RRQR factorization is to find a permutation matrix Π such that the QR factorization of YΠ reveals
the numerical rank of the matrix. The resultant matrix YΠ has as its first K columns the most
"well-conditioned" columns of the matrix Y. On the other hand, the latter community has focused
on Column Subset Selection (CSS). The goal of CSS is to pick K columns forming a matrix C ∈
ℝ^{m×K} such that the residual ‖Y − P_C Y‖_ξ is minimized over all possible choices of the matrix
C. Here P_C = CC^† denotes the projection onto the K-dimensional space spanned by the columns
of C, and ξ can represent the spectral or Frobenius norm. C^† indicates the pseudo-inverse of the matrix
C. Along these lines, different randomized algorithms have been proposed [17, 18, 9, 8]. Various
approaches include a two-stage approach [9], subspace sampling methods [8], etc.
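As a concrete illustration of this criterion, the residual for a candidate column subset can be evaluated in a few lines of numpy; the sketch below is minimal, and the matrix and chosen columns are arbitrary:

import numpy as np

def css_residual(Y, cols):
    # Frobenius-norm residual of projecting Y onto the span of the chosen columns
    C = Y[:, cols]                 # candidate column subset
    P = C @ np.linalg.pinv(C)      # projection P_C = C C^+
    return np.linalg.norm(Y - P @ Y, 'fro')

Y = np.random.randn(20, 10)
print(css_residual(Y, [0, 4, 7]))  # toy usage on an arbitrary 3-column subset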
Clustering techniques [19] have also been applied to subset selection [20, 21]. In order to select K
exemplars, data points are clustered into ℓ clusters (ℓ ≥ K), followed by the selection of one
or multiple exemplars from each cluster to obtain the best representation or low-rank approximation
of each cluster. Affinity Propagation [21], is a clustering algorithm that takes similarity measures as
input and recursively passes message between nodes until a set of exemplars emerges. As we discuss
in this paper, the problems with these approaches are that (a) the objective functions optimized by the
clustering functions do not incorporate the diversity of the exemplars, and hence can be biased towards
denser clusters and towards outliers, and (b) seeking a low-rank approximation of the data matrix
or clusters individually is not always an appropriate subset selection criterion. Furthermore, these
techniques are largely tuned towards addressing the problem in an Euclidean setting and cannot be
applied for datasets in non-Euclidean spaces.
Recently, advances have been made in utilizing non-Euclidean structure for statistical inferences
and pattern recognition [13, 14, 22, 23]. These works have addressed inferences, clustering, dimensionality reduction, etc. in non-Euclidean spaces. To the best of our knowledge, the problem of
subset selection for analytic manifolds remains largely unaddressed. While one could try to solve
the problem by obtaining an embedding of a given manifold into a larger ambient Euclidean space, it
is desirable to have a solution that is more intrinsic in nature. This is because the chosen embedding
is often arbitrary, and introduces peculiarities that result from such extrinsic approaches. Further
manifolds such as the Grassmannian or the manifold of infinite dimensional diffeomorphisms do
not admit a natural embedding into a vector space.
Contributions: 1) We present the first formal treatment of subset selection for the general case of
manifolds, 2) We propose a novel annealing-based alternation algorithm to efficiently solve the optimization problem, 3) We present an extension of the algorithm for data manifolds, and demonstrate
the favorable properties of the algorithm on real data.
2 Subset Selection on Analytic Manifolds
In this section, we formalize the subset selection problem on manifolds and propose an efficient
algorithm. First, we briefly touch upon the necessary basic concepts.
Geometric Computations on Manifolds: Let M be an m-dimensional manifold and, for a point
p ∈ M, consider a differentiable curve γ : (−ε, ε) → M such that γ(0) = p. The vector γ̇(0)
denotes the velocity of γ at p. This vector is an example of a tangent vector to M at p. The set of
all such tangent vectors is called the tangent space to M at p. If M is a Riemannian manifold then
the exponential map exp_p : T_p(M) → M is defined by exp_p(v) = γ_v(1), where γ_v is a specific
geodesic. The inverse exponential map (logarithmic map) log_p : M → T_p takes a point on the
manifold and returns a point on the tangent space, which is a Euclidean space.
Representational error on manifolds: Let us assume that we are given a set of points X =
{x1, x2, . . . , xn} which belong to a manifold M. The goal is to select a few exemplars E =
{e1, . . . , eK} from the set X, such that the exemplars provide a good representation of the given
data points, and are minimally redundant. For the special case of vector spaces, two common approaches for measuring representational error are in terms of linear spans, and nearest-exemplar error.
The linear-span error is given by min_z ‖X − Ez‖²_F, where X is the matrix form of the data, and E
is a matrix of chosen exemplars. The nearest-exemplar error is given by Σ_i Σ_{xk ∈ Ω_i} ‖xk − ei‖²,
where ei is the i-th exemplar and Ω_i corresponds to its Voronoi region.
Of these two measures, the notion of linear span, while appropriate for matrix approximation, is
not particularly meaningful for general dataset approximation problems, since the "span" of a dataset
item does not carry much perceptually meaningful information. For example, the linear span of a
vector x ∈ ℝⁿ is the set of points αx, α ∈ ℝ. However, if x were an image, the linear span of x
would be the set of images obtained by varying the global contrast level. All elements of this set,
however, are perceptually equivalent, and one does not obtain any representational advantage from
considering the span of x. Further, points sampled from the linear span of a few images would not be
meaningful images. This situation is further complicated for manifold-valued data such as shapes,
where the notion of linear span does not exist. One could attempt to define the notion of linear spans
on the manifold as the set of points lying on the geodesic shot from some fixed pole toward the given
dataset item. But points sampled from this linear span might not be very meaningful; e.g., samples
from the linear span of a few shapes would give physically meaningless shapes.
Hence, it is but natural to consider the representational error of a set X with respect to a set of
exemplars E as follows:
Jrep(E) = Σ_i Σ_{xj ∈ Ω_i} dg²(xj, ei)    (1)

Here, dg is the geodesic distance on the manifold and Ω_i is the Voronoi region of the i-th exemplar.
This boils down to the familiar K-means or K-medoids cost function for Euclidean spaces. In order
to avoid the combinatorial optimization involved in solving this problem, we use efficient approximations: we first find the mean, followed by the selection of ei as the data point that is closest to the
mean. The algorithm for optimizing Jrep is given in algorithm 1. Similar to K-means clustering,
a cluster label is assigned to each xj, followed by the computation of the mean μ_i for each cluster.
This is further followed by selecting the representative exemplar ei as the data point closest to μ_i.
Diversity measures on manifolds: The next question we consider is to define the notion of diversity of a selection of points on a manifold. We first begin by examining equivalent constructions for
ℝⁿ. One of the ways to measure diversity is simply to use the sample variance of the points. This is
similar to the construction used recently in [2]. For the case of manifolds, the sample variance can be
replaced by the sample Karcher variance, given by the function ρ(E) = (1/K) Σ_{i=1}^{K} dg²(μ, ei), where
μ is the Karcher mean [24], and the function value is the Karcher variance. However, this construction leads to highly inefficient optimization routines, essentially boiling down to a combinatorial
search over all possible K-sized subsets of X.
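The Karcher mean itself is commonly computed by a fixed-point iteration that averages the log-mapped points in the tangent space and then steps along the exponential map; in the sketch below the iteration count and tolerance are arbitrary choices:

import numpy as np

def karcher_mean(points, exp_map, log_map, iters=50, tol=1e-8):
    # Move the estimate along the mean tangent direction until it vanishes
    mu = points[0]
    for _ in range(iters):
        v = np.mean([log_map(mu, x) for x in points], axis=0)
        if np.linalg.norm(v) < tol:
            break
        mu = exp_map(mu, v)
    return mu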
An alternate formulation for vector spaces that results in highly efficient optimization routines is via
Rank-Revealing QR (RRQR) factorizations. For vector spaces, given a set of vectors X = {xi},
written in matrix form X, RRQR [7] aims to find Q, R and a permutation matrix Π ∈ ℝ^{n×n} such
that XΠ = QR reveals the numerical rank of the matrix X. This permutation XΠ = (X_K  X_{n−K})
gives X_K, the K most linearly independent columns of X. This factorization is achieved by seeking
the Π which maximizes σ(X_K) = ∏_i σ_i(X_K), the product of the singular values of the matrix X_K.
For the case of manifolds, we adopt an approximate approach in order to measure diversity in terms
of the "well-conditioned" nature of the set of exemplars projected on the tangent space at the mean.
In particular, for the dataset {xi} ⊂ M, with intrinsic mean μ, and a given selection of exemplars
Algorithm 1: Algorithm to minimize Jrep
Input: X ⊂ M, k, index vector φ, τ
Output: Permutation matrix Π
Initialize Π ← I_{n×n}
for t ← 1 to τ do
  Initialize Π^(t) ← I_{n×n}
  ei ← x_{φi} for i = {1, 2, . . . , k}
  for i ← 1 to k do
    Ω_i ← {xp : arg min_j dg(xp, ej) = i}
    μ_i ← mean of Ω_i
    ĵ ← arg min_j dg(xj, μ_i)
    Update: Π^(t) ← Π^(t) Π_{i↔ĵ}
  end
  Update: Π ← Π Π^(t), φ ← φ Π^(t)
  if Π^(t) = I_{n×n} then break
end
Algorithm 2: Algorithm for Diversity Maximization
Input: Matrix V ∈ ℝ^{d×n}, k, tolerance tol
Output: Permutation matrix Π
Initialize Π ← I_{n×n}
repeat
  Compute the QR decomposition of V to obtain R11, R12 and R22, s.t. V = Q [ R11  R12 ; 0  R22 ]
  ω_ij ← √( (R11⁻¹R12)_ij² + ‖R22 γ_j‖₂² ‖γ_i^T R11⁻¹‖₂² )
  ω_m ← max_ij ω_ij
  (î, ĵ) ← arg max_ij ω_ij
  Update: Π ← Π Π_{î↔(ĵ+k)}
  V ← V Π_{î↔(ĵ+k)}
until ω_m < tol
{ej}, we measure the diversity of exemplars as follows: the matrix TE = [log_μ(ej)] is obtained by
projecting the exemplars {ej} on the tangent space at the mean μ. Here, log(·) is the inverse exponential
map on the manifold and gives tangent vectors at μ that point towards ej.
Diversity can then be quantified as Jdiv(E) = σ(TE), where σ(TE) represents the product of the
singular values of the matrix TE. For vector spaces, this measure is related to the sample variance
of the chosen exemplars. For manifolds, this measure is related to the sample Karcher variance. If
we denote by TX = [log_μ(xi)] the matrix of tangent vectors corresponding to all data points, and if Π
is the permutation matrix that orders the columns such that the first K columns of TX correspond
to the most diverse selection, then

Jdiv(E) = σ(TE) = det(R11), where TX Π = QR = Q [ R11  R12 ; 0  R22 ]    (2)

Here, R11 ∈ ℝ^{K×K} is the upper triangular block of R ∈ ℝ^{n×n}, R12 ∈ ℝ^{K×(n−K)} and R22 ∈
ℝ^{(n−K)×(n−K)}.
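In practice, a greedy stand-in for the swap-based RRQR procedure of algorithm 2 is a column-pivoted QR factorization; the sketch below uses scipy's pivoted QR, which is an approximation to, not a re-implementation of, the paper's swap criterion:

import numpy as np
from scipy.linalg import qr

def jdiv_greedy(T_X, k):
    # Pivoted QR orders columns by how well-conditioned they are;
    # |det(R11)| over the leading k columns scores the selection.
    Q, R, piv = qr(T_X, pivoting=True)
    selected = piv[:k]                        # indices of the k chosen columns
    score = np.abs(np.prod(np.diag(R)[:k]))   # |det(R11)|, R is upper triangular
    return selected, score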
The advantage of viewing the required quantity as the determinant of a sub-matrix on the right-hand
side of the above equation is that one can obtain efficient techniques for optimizing this cost function.
The algorithm for optimizing Jdiv is adopted from [7] and described in algorithm 2. The input to the
algorithm is a matrix V created by the tangent-space projection of X, and the output is the K most
"well-conditioned" columns of V. This is achieved by first decomposing V into QR and computing
ω_ij, which indicates the benefit of swapping the i-th and j-th columns [7]. The algorithm then selects
the pair (î, ĵ) corresponding to the maximum-benefit swap ω_m, and if ω_m > tol, this swap is
accepted. This is repeated until either ω_m < tol or the maximum number of iterations is completed.

Algorithm 3: Annealing-based Alternation Algorithm for Subset Selection on Manifolds
Input: Data points X = {x1, x2, . . . , xn} ⊂ M, number of exemplars k, tolerance step Δ
Output: E = {e1, . . . , ek} ⊂ X
Initial setup:
  Compute intrinsic mean μ of X
  Compute tangent vectors vi ← log_μ(xi); V ← [v1, v2, . . . , vn]
  Let φ ← [1, 2, . . . , n] be the 1 × n index vector of X; tol ← 1
  Initialize: Π ← randomly permuted columns of I_{n×n}
  Update: V ← VΠ, φ ← φΠ
while Π ≠ I_{n×n} do
  Diversity: Π ← Div(V, k, tol) as in algorithm 2; Update: V ← VΠ, φ ← φΠ
  Representative error: Π ← Rep(X, k, φ, 1) as in algorithm 1; Update: V ← VΠ, φ ← φΠ
  tol ← tol + Δ
end
ei ← x_{φi} for i = {1, 2, . . . , k}

Representation and Diversity Trade-offs for Subset Selection: From (1) and (2), it can be seen
that we seek a solution that represents a trade-off between two conflicting criteria. As an example,
in figure 1(a) we show two cases, where Jrep and Jdiv are individually optimized. We can see that
the solutions look quite different in each case. One way to write the global cost function is as a
weighted combination of the two. However, such a formulation does not lend itself to efficient
optimization routines (c.f. [2]). Further, the choice of weights is often left unjustified. Instead,
we propose an annealing-based alternating technique of optimizing the conflicting criteria Jrep and Jdiv.
Table 1: Notation used in Algorithms 1-3.
τ : maximum number of iterations
I_{n×n} : identity matrix
Ω_i : Voronoi region of the i-th exemplar
Π_{i↔j} : permutation matrix that swaps columns i and j
Π^(t) : Π in the t-th iteration
V : matrix obtained by tangent-space projection of X
H_ij : (i, j) element of a matrix H
γ_j : j-th column of the identity matrix
Hγ_j, γ_j^T H : j-th column and row of a matrix H, respectively

Table 2: Complexity of the various computational steps (writing α and β for the assumed, manifold-specific costs of one exponential-map and one inverse-exponential-map evaluation).
Exponential map on M : O(α)
Inverse exponential map on M : O(β)
Intrinsic mean of X : O((nβ + α)τ)
Projection of X to tangent space : O(nβ)
Geodesic distances in alg. 1 : O(nKτ)
K intrinsic means : O((nβ + Kα)τ)
Alg. 2 : O(mnK log n)
G_{m,p} exponential map : O(p³)
G_{m,p} inverse exponential map : O(p³)
Optimization algorithms for Jrep and Jdiv individually are given in algorithms 1 and
2 respectively. We first optimize Jdiv to obtain an initial set of exemplars, and use this set as
an initialization for optimizing Jrep. The output of this stage is used as the current solution to
further optimize Jdiv. However, with each iteration, we increase the tolerance parameter tol in
algorithm 2. This has the effect of accepting only those permutations that increase the diversity
by a higher factor as iterations progress. This is done to ensure that the algorithm is guided
towards convergence. If the tol value is not increased at each iteration, then optimizing Jdiv will
continue to provide a new solution at each iteration that modifies the cost function only marginally.
This is illustrated in figure 1(c), where we show how the cost functions Jrep and Jdiv exhibit an
oscillatory behavior if annealing is not used. As seen in figure 1(b), the convergence of Jdiv
and Jrep is obtained very quickly when using the proposed annealing alternation technique. The
complete annealing-based alternation algorithm is described in algorithm 3. A technical detail to
be noted here is that for algorithm 2, the input matrix V ∈ ℝ^{d×n} should have d ≥ k. For cases where
d < k, algorithm 2 can be replaced by its extension proposed in [9]. Table 1 shows the notation
introduced in algorithms 1-3. Π_{i↔j} is obtained by permuting the i and j columns of the identity matrix.
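The outer loop of algorithm 3 can be sketched as follows, with div_step and rep_step standing for algorithms 2 and 1; this is an illustrative skeleton of the stated update rules, not the authors' implementation:

import numpy as np

def anneal_select(X, V, k, div_step, rep_step, delta=0.05, max_iters=100):
    # Alternate diversity (alg. 2) and representation (alg. 1) updates,
    # raising tol each round so only increasingly beneficial swaps survive.
    n = len(X)
    order = np.random.permutation(n)    # random initial column permutation
    identity = np.arange(n)
    tol = 1.0
    for _ in range(max_iters):
        pi1 = div_step(V[:, order], k, tol)   # permutation from algorithm 2
        order = order[pi1]
        pi2 = rep_step(X, k, order)           # permutation from algorithm 1
        order = order[pi2]
        if np.array_equal(pi1, identity) and np.array_equal(pi2, identity):
            break                             # no swaps accepted: converged
        tol += delta                          # anneal the tolerance
    return order[:k]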
3 Complexity, Special cases and Limitations
In this section, we discuss how the proposed method relates to the special case of M = ℝⁿ, and
to sub-manifolds of ℝⁿ specified by a large number of samples. For the case of ℝⁿ, the cost functions Jrep and Jdiv boil down to the familiar notions of clustering and low-rank matrix approximation,
respectively. In this case, algorithm 3 reduces to an alternation between clustering and matrix approximation, with the annealing ensuring that the algorithm converges. This results in a new algorithm
for subset selection in vector spaces.
For the case of manifolds implicitly specified using samples, one can approach the problem in one
of two ways. The first would be to obtain an embedding of the space into a Euclidean space and
apply the special case of the algorithm for M = ℝⁿ. The embedding here needs to preserve the
geodesic distances between all pairs of points. Multi-dimensional scaling can be used for this purpose. However, recent methods have also focused on estimating logarithmic maps numerically from
sampled data points [25]. This would make the algorithm directly applicable to such cases, without
the need for a separate embedding. Thus the proposed formalism can accommodate manifolds with
known and unknown geometries.
However, the formalism is limited to manifolds of finite dimension. The case of infinite-dimensional manifolds, such as diffeomorphisms [26], the space of closed curves [27], etc., poses problems
pairwise geodesics, making it extensible to infinite dimensional manifolds, it would have made the
optimization a significant bottleneck, as already discussed in section 2.
Computational Complexity: The computational complexity of computing the exponential map and
its inverse is specific to each manifold. Let n be the number of data points and K be the number
of exemplars to be selected. Table 2 enumerates the complexity of the different computational steps of
the algorithm. The last two rows show the complexity of an efficient algorithm proposed by [28] to
compute the exponential map and its inverse for the case of the Grassmann manifold G_{m,p}.
4 Experiments
Baselines: We compare the proposed algorithm with two baselines. The first baseline is a
clustering-based solution to subset selection, where we cluster the dataset into K clusters, and pick
as exemplar the data point that is closest to the cluster centroid. Since clustering optimizes only the
[Figure 1: (a) Subset Selection; (b) Convergence With Annealing; (c) Without Annealing]
Figure 1: Subset selection for a simple dataset consisting of unbalanced classes in ℝ⁴. (a) Data projected onto
ℝ² for visualization using PCA. While trying to minimize the representational error, Jrep picks two exemplars
from the dominant class. Jdiv picks diverse exemplars, but from the boundaries. The proposed approach strikes
a balance between the two and picks one "representative" exemplar from each class. Convergence analysis of
algorithm 3: (b) with annealing and (c) without annealing.
representation cost-function, we do not expect it to have the diversity of the proposed algorithm.
This corresponds to the special case of optimizing only Jrep . The second baseline is to apply a
tangent-space approximation to the entire data-set at the mean of the dataset, and then apply a
subset-selection algorithm such as RRQR. This corresponds to optimizing only Jdiv where the
input matrix is the matrix of tangent vectors. Since minimization of Jrep is not explicitly enforced,
we do not expect the exemplars to be the best representatives, even though the set is diverse.
A Simple Dataset: To gain some intuition, we first perform experiments on a simple synthetic
dataset. For easy visualization and understanding, we generated a dataset with 3 unbalanced classes
in the Euclidean space ℝ⁴. The individual cost functions Jrep and Jdiv were first optimized to pick three
exemplars using algorithms 1 and 2, respectively. The selected exemplars are shown in figure 1(a),
where the four-dimensional dataset has been projected into two dimensions for visualization using
Principal Component Analysis (PCA). Despite the unbalanced class sizes, when optimized individually,
Jdiv seeks to select exemplars from diverse classes but tends to pick them from the class boundaries,
while the unbalanced class sizes cause Jrep to pick two exemplars from the dominant cluster. Algorithm
3 iteratively optimizes both of these cost functions and picks an exemplar from every class. These
exemplars are closer to the centroids of the individual classes.
Figure 1(b) shows the convergence of the algorithm for this simple dataset and compares it
with the case when no annealing is applied (figure 1(c)). Jrep and Jdiv plots are shown as the
iterations of algorithm 3 progress. When annealing is applied, the tolerance value (tol) is
increased by 0.05 in each iteration. It can be noted that in this case the algorithm converges to
a steady state in 7 iterations (tol = 1.35). If no annealing is applied, the algorithm does not converge.
Shape sampling/summarization: We conducted a real shape summarization experiment on the
MPEG dataset [29]. This dataset has 70 shape classes with 20 shapes per class. For our experiments, we created a smaller dataset of 10 shape classes with 10 shapes per class. Figure 2(a) shows
the shapes used in our experiments. We use an affine-invariant representation of shapes based on
landmarks. Shape boundaries are uniformly sampled to obtain m landmark points. These points are
concatenated in a matrix to obtain the landmark matrix L ? Rm?2 . Left singular vectors (Um?2 ),
obtained by the singular value decomposition of matrix L = U ?V T , give the affine-invariant representation of shapes [30]. This affine shape-space U of m landmark points is a 2-dimensional
subspace of Rm . These p-dimensional subspaces in Rm constitute the Grassmann manifold Gm,p .
Details of the algorithms for the computation of exponential and inverse exponential map on Gm,p
can be found in [28] and has also been included in the supplemental material.
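The sketch below computes this affine-invariant representation for a single shape; centering the landmarks to remove translation is an assumption of the sketch:

import numpy as np

def affine_shape(landmarks):
    # Map an (m, 2) landmark matrix to its affine-invariant representation:
    # the left singular vectors span a 2-D subspace of R^m, a point on G(m, 2).
    L = landmarks - landmarks.mean(axis=0)         # remove translation
    U, _, _ = np.linalg.svd(L, full_matrices=False)
    return U                                       # (m, 2), orthonormal columns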
In the experiment, the cardinality of the subset was set to 10. As the number of shape classes is also
10, one would ideally seek one exemplar from each class. Algorithms 1 and 2 were first individually
optimized to select the optimal subset. Algorithm 1 was applied intrinsically on the manifold with
multiple initializations. Figure 2(b) shows the output with the least cost among these initializations.
For algorithm 2, the data points were projected on the tangent space at the mean using the inverse
exponential map, and the selected subset is shown in figure 2(c). Individual optimization of Jrep
results in 1 exemplar each from 6 classes, 2 each from 2 classes ("apple" and "flower"), and misses
2 classes ("bell" and "chopper"). Individual optimization of Jdiv alone picks 1 each from 8
classes, 2 from the class "car", and none from the class "bell". It can be observed that the exemplars
chosen by Jdiv for the classes "glass", "heart", "flower" and "apple" tend to be unusual members of the
Figure 2: (a) 10 classes from the MPEG dataset with 10 shapes per class. Comparison of the 10 exemplars selected by
(b) Jrep, (c) Jdiv and (d) the proposed approach. Jrep picks 2 exemplars each from 2 classes ("apple" and "flower")
and misses the "bell" and "chopper" classes. Jdiv picks 1 from 8 different classes, 2 exemplars from the class "car" and
none from the class "bell". It can be observed that the exemplars chosen by Jdiv for the classes "glass", "heart", "flower"
and "apple" tend to be unusual members of the class. It also picks up the flipped car, while the proposed
approach picks one representative exemplar from each class, as desired.
These exemplars picked by the three algorithms can be further used to label data points. Table 3
shows the confusion table thus obtained. For each data point, we find the nearest exemplar, and
label the data point with the ground-truth label of this exemplar. For example, consider the row
labeled as ?bell?. All the data points of the class ?bell? were labeled as ?pocket? by Jrep while Jdiv
labeled 7 data points from this class as ?chopper? and 3 as ?pocket?. This confusion is largely due
to both Jrep and Jdiv having missed out picking exemplars from this class. The proposed approach
correctly labels all data points as it picks exemplars from every class.
Table 3: Confusion table. Each entry is the tuple (Jrep, Jdiv, Proposed). Row labels correspond to
the ground-truth labels of the shapes and column labels correspond to the label of the nearest exemplar. Only
non-zero entries are shown.
Glass: Glass (10,10,10)
Heart: Heart (10,10,10)
Apple: Heart (0,1,0), Apple (8,7,10), with (2,0,0) and (0,2,0) falling in two other columns
Bell: Bell (0,0,10), Chopper (0,7,0), Pocket (10,3,0)
Baby: Baby (10,10,10)
Chopper: Chopper (0,10,10), with (2,0,0) and (8,0,0) falling in two other columns
Flower: Flower (10,10,10)
Car: Car (10,10,10)
Pocket: Pocket (10,10,10)
Teddy: Teddy (10,10,10)
KTH human action dataset: The next experiment was conducted on the KTH human action
dataset [31]. This dataset consists of videos with 6 actions conducted by 25 persons in 4 different
scenarios. For our experiment, we created a smaller dataset of 30 videos with the first 5 human
subjects conducting 6 actions in the s4 (indoor) scenario. Figure 3(a) shows sample frames from
each video. This dataset mainly consists of videos captured under constrained settings. This makes it
difficult to identify the ?usual? or ?unusual? members of a class. To better understand the performance
of the three algorithms, we synthetically added occlusion to the last video of each class. These
occluded videos serve as the ?unusual? members.
Histogram of Oriented Optical Flow (HOOF) [32] was extracted from each frame to obtain a normalized time-series for the videos. A Linear Dynamical System (LDS) is then estimated from
this time-series using the approach in [11]. This model is described by the state transition equation: x(t + 1) = Ax(t) + w(t) and the observation equation z(t) = Cx(t) + v(t), where
Figure 3: (a) Sample frames from the KTH action dataset [31]. From top to bottom the action classes are {box, run,
walk, hand-clap, hand-wave, jog}. 5 exemplars selected by: (b) Jrep, (c) Jdiv and (d) the proposed approach. Exemplars
picked by Jrep correspond to the {box, run, run, hand-clap, hand-wave} actions, while Jdiv selects {box, walk,
hand-clap, hand-wave, jog}. The proposed approach picks {box, run, walk, hand-clap, hand-wave}.
x ∈ ℝ^d is the hidden state vector, z ∈ ℝ^p is the observation vector, and w(t) and v(t) are zero-mean
Gaussian noise components. Here, A is the state-transition matrix and C is the
observation matrix. The expected observation sequence of the model (A, C) lies in the column space
of the infinitely extended "observability" matrix, which is commonly approximated by the finite matrix
O_m^T = [C^T, (CA)^T, (CA²)^T, . . . , (CA^{m−1})^T]. The column space of this matrix O_m ∈ ℝ^{mp×d} is
a d-dimensional subspace and hence lies on the Grassmann manifold.
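A sketch of this step builds the finite observability matrix from an estimated (A, C) pair and orthonormalizes its columns; returning an orthonormal basis is a convenience assumed here, since any basis of the column space identifies the same point on the Grassmann manifold:

import numpy as np

def observability_subspace(A, C, m):
    # Stack [C; CA; CA^2; ...; CA^(m-1)] and return an orthonormal basis
    # of its column space: a d-dimensional subspace, i.e. a point on G(mp, d).
    blocks, M = [], np.eye(A.shape[0])
    for _ in range(m):
        blocks.append(C @ M)   # C A^i
        M = M @ A
    O_m = np.vstack(blocks)    # shape (m*p, d)
    Q, _ = np.linalg.qr(O_m)
    return Q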
In this experiment, we consider the scenario where the number of classes in a dataset is unknown.
We asked the algorithm to pick 5 exemplars when the actual number of classes in the dataset is 6.
Figure 3(b) shows one frame from each of the videos selected when Jrep was optimized alone. It
picks 1 exemplar each from 3 classes ("box", "hand-clap" and "hand-wave") and 2 from the class "run",
while missing "walk" and "jog". On the other hand, Jdiv (when optimized alone) picks 1 each
from 5 different classes and misses the class "run". It can be seen that Jdiv picks 2 exemplars that
are "unusual" members (occluded videos) of their respective classes. The proposed approach picks
1 representative exemplar from each of 5 classes and none from the class "jog". The proposed approach
achieves both a diverse selection of exemplars, and also avoids picking outlying exemplars.
Effect of Parameters and Initialization: In our experiments, the tolerance step (Δ) has very little
effect for smaller values (< 0.1). After a few attempts, we fixed this value to 0.05 for
all our experiments. In the first iteration, we start with tol = 1. With this value, algorithm 2 accepts
any swap that increases Jdiv. This makes the output of algorithm 2 after the first iteration almost insensitive
to initialization, while in later iterations, swaps are accepted only if they increase the value of
Jdiv significantly, and hence the input to algorithm 2 becomes more important as tol increases.
5 Conclusion and Discussion
In this paper, we addressed the problem of selecting K exemplars from a dataset when the dataset has
an underlying manifold structure to it. We utilized the geometric structure of the manifold to formulate the notion of picking exemplars which minimize the representational error while maximizing the
diversity of exemplars. An iterative alternation optimization technique based on annealing has been
proposed. We discussed its convergence and complexity and showed its extension to data manifolds
and Euclidean spaces. We showed summarization experiments with real shape and human actions
dataset. Future work includes formulating subset selection for infinite dimensional manifolds and
efficient approximations for this case. Also, several special cases of the proposed approach point to
new directions of research such as the cases of vector spaces and data manifolds.
Acknowledgement: This research was funded (in part) by grant N00014-09-1-0044 from
the Office of Naval Research. The first author would like to thank Dikpal Reddy and Sima Taheri
for helpful discussions and their valuable comments.
References
[1] K. Liu, E. Terzi, and T. Grandison, "ManyAspects: a system for highlighting diverse concepts in documents," in Proceedings of VLDB Endowment, 2008.
[2] N. Shroff, P. Turaga, and R. Chellappa, "Video Précis: Highlighting diverse aspects of videos," IEEE Transactions on Multimedia, vol. 12, no. 8, pp. 853-868, Dec. 2010.
[3] I. Simon, N. Snavely, and S. Seitz, "Scene summarization for online image collections," in ICCV, 2007.
[4] W. Cochran, Sampling techniques. Wiley, 1977.
[5] Y. Yue and T. Joachims, "Predicting diverse subsets using structural SVMs," in ICML, 2008.
[6] J. Carbonell and J. Goldstein, "The use of MMR, diversity-based reranking for reordering documents and reproducing summaries," in SIGIR, 1998.
[7] M. Gu and S. Eisenstat, "Efficient algorithms for computing a strong rank-revealing QR factorization," SIAM Journal on Scientific Computing, vol. 17, no. 4, pp. 848-869, 1996.
[8] P. Drineas, M. Mahoney, and S. Muthukrishnan, "Relative-error CUR matrix decompositions," SIAM Journal on Matrix Analysis and Applications, vol. 30, pp. 844-881, 2008.
[9] C. Boutsidis, M. Mahoney, and P. Drineas, "An improved approximation algorithm for the column subset selection problem," in SODA, 2009.
[10] D. Kendall, "Shape manifolds, Procrustean metrics and complex projective spaces," Bulletin of London Mathematical Society, vol. 16, pp. 81-121, 1984.
[11] S. Soatto, G. Doretto, and Y. N. Wu, "Dynamic textures," ICCV, 2001.
[12] J. D. Lafferty and G. Lebanon, "Diffusion kernels on statistical manifolds," Journal of Machine Learning Research, vol. 6, pp. 129-163, 2005.
[13] P. T. Fletcher, C. Lu, S. M. Pizer, and S. C. Joshi, "Principal geodesic analysis for the study of nonlinear statistics of shape," IEEE Transactions on Medical Imaging, vol. 23, no. 8, pp. 995-1005, August 2004.
[14] A. Srivastava, S. H. Joshi, W. Mio, and X. Liu, "Statistical shape analysis: Clustering, learning, and testing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, 2005.
[15] G. Golub, "Numerical methods for solving linear least squares problems," Numerische Mathematik, vol. 7, no. 3, pp. 206-216, 1965.
[16] T. Chan, "Rank revealing QR factorizations," Linear Algebra and Its Applications, vol. 88, pp. 67-82, 1987.
[17] A. Frieze, R. Kannan, and S. Vempala, "Fast Monte-Carlo algorithms for finding low-rank approximations," Journal of the ACM (JACM), vol. 51, no. 6, pp. 1025-1041, 2004.
[18] A. Deshpande and L. Rademacher, "Efficient volume sampling for row/column subset selection," in Foundations of Computer Science (FOCS), 2010.
[19] G. Gan, C. Ma, and J. Wu, Data clustering: theory, algorithms, and applications. Society for Industrial and Applied Mathematics, 2007.
[20] I. Dhillon and D. Modha, "Concept decompositions for large sparse text data using clustering," Machine Learning, vol. 42, no. 1, pp. 143-175, 2001.
[21] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, pp. 972-976, Feb. 2007.
[22] R. Subbarao and P. Meer, "Nonlinear mean shift for clustering over analytic manifolds," in CVPR, 2006.
[23] A. Goh and R. Vidal, "Clustering and dimensionality reduction on Riemannian manifolds," in CVPR, 2008.
[24] H. Karcher, "Riemannian center of mass and mollifier smoothing," Communications on Pure and Applied Mathematics, vol. 30, no. 5, pp. 509-541, 1977.
[25] T. Lin and H. Zha, "Riemannian manifold learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 796-809, 2008.
[26] A. Trouvé, "Diffeomorphisms groups and pattern matching in image analysis," International Journal of Computer Vision, vol. 28, pp. 213-221, July 1998.
[27] W. Mio, A. Srivastava, and S. Joshi, "On shape of plane elastic curves," International Journal of Computer Vision, vol. 73, no. 3, pp. 307-324, 2007.
[28] K. Gallivan, A. Srivastava, X. Liu, and P. Van Dooren, "Efficient algorithms for inferences on Grassmann manifolds," in IEEE Workshop on Statistical Signal Processing, 2003.
[29] L. Latecki, R. Lakamper, and T. Eckhardt, "Shape descriptors for non-rigid shapes with a single closed contour," in CVPR, 2000.
[30] E. Begelfor and M. Werman, "Affine invariance revisited," in CVPR, 2006.
[31] C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: a local SVM approach," in ICPR, 2004.
[32] R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, "Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions," in CVPR, 2009.
The Tempo 2 Algorithm: Adjusting Time-Delays By
Supervised Learning
Ulrich Bodenhausen and Alex Waibel
School of Computer Science
Carnegie Mellon University
Pittsbwgh, PA 15213
Abstract
In this work we describe a new method that adjusts time-delays and the widths of
time-windows in artificial neural networks automatically. The input of the units
are weighted by a gaussian input-window over time which allows the learning
rules for the delays and widths to be derived in the same way as it is used for the
weights. Our results on a phoneme classification task compare well with results
obtained with the TDNN by Waibel et al., which was manually optimized for the
same task.
1 INTRODUCTION
The processing of pattern-sequences has been investigated with several neural network
architectures. One approach to processing of temporal context with neural networks is
to implement time-delays. This approach is neurophysiologically plausible, because real
axons have a limited conduction speed (which is dependent on the diameter of the axon and
whether it is myelinated or not). Additionally, the length of most axons is much greater
than the euclidean distance between the connected neurons. This leads to a great variety
of different time-delays in the brain. Artificial networks that make use of time-delays have
been suggested [10, 11, 12, 8, 2, 3].
In the TDNN [11, 12] and most other artificial neural networks with time-delays the delays
are implemented as hat-shaped input-windows over time. A unit j that is connected with
unit i by a connection with delay n is only receiving information about the activity of unit i
n time-steps ago. A set of connections with consecutive time-delays is used to let each unit
gather a certain amount of temporal context. In these networks, weights are automatically
trained but the architecture of the network (time-delays, number of connections and number
of units) have to be predetermined by laborious experiments [8, 6].
In this work we describe a new algorithm that adjusts time-delays and the width of the
input-window automatically. The learning rules require input-windows over time that can
be described by a smooth function. With these input-windows it is possible to derive
learning rules for adjusting the center and the width of the window. During training, new
connections are added if they are needed by splitting already existing connections and
training them independently.
Adaptive time-delays in neural networks could have significant advantages for the processing of pattern-sequences, especially if the relevant information is distributed across
non-consecutive patterns. A typical example of this kind of pattern sequence is a rhythm
(relevant in music and speech). In a rhythm, there are many events but also many gaps
between these events. Another example is speech, where some parts of an utterance are
more important for understanding than others (example: 'hat', 'fat', 'cat', ...). Therefore a
network that allocates existing and new resources to the parts of the input sequence that are
most helpful for the task could be more compact and efficient for various tasks.
2 THE TEMPO 2 NETWORK
The Tempo 2 network is an artificial neural network with adaptive weights, adaptive time-delays and adaptive widths of gaussian input windows over time. It is a generalization
of the Back-Propagation network proposed by Rumelhart, Hinton and Williams [9]. The
network is based on some ideas that were tested with the Tempo 1 network [2, 3].
The Tempo 2 network is designed to learn about the relevant temporal context during
training. A unit in the network is activated by input from a gaussian shaped input-window
centered around (t-d) with standard deviation σ, where d (the time-delay) and σ (the width of
the input-window) are to be learned¹ (see Fig. 1 and 2). This means that the center and the
width of each input-window can be adjusted by learning rules. The adaptive time-delays
allow the processing of temporal context that is distributed across several non-consecutive
patterns of the sequence. The adaptive width of the window enables the receiving unit to
monitor a variable sequence of consecutive activations over time of each sending unit. New
connections can be added if they are needed (see section 2.1). The input of unit j at time t,
$x(t)_j$, is

$$x(t)_j = \sum_{\tau=0}^{t} \sum_{k} y_k(\tau)\, \Theta(\tau, t, d_{jk}, \sigma_{jk})\, w_{jk}$$

with $\Theta(\tau, t, d_{jk}, \sigma_{jk})$ representing the gaussian input window given by

$$\Theta(\tau, t, d_{jk}, \sigma_{jk}) = \frac{1}{\sqrt{2\pi}\,\sigma_{jk}}\; e^{-(\tau - t + d_{jk})^2 / 2\sigma_{jk}^2}$$

where $y_k$ is the output of the previous sending unit and $w_{jk}$, $d_{jk}$ and $\sigma_{jk}$ are the weights,
delays and widths on its connections, respectively.
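As a minimal sketch of how this windowed input can be evaluated (Python/NumPy; the array layout and function names are our own assumptions, not part of the original):

```python
import numpy as np

def gaussian_window(tau, t, d, sigma):
    # Gaussian input window centered at time t - d with width sigma.
    return np.exp(-(tau - t + d) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def unit_input(t, y, w, d, sigma):
    # Net input x(t)_j of one receiving unit j.
    #   y            : (K, T) array, activations of the K sending units over T steps
    #   w, d, sigma  : (K,) arrays, weight, delay and window width per connection
    taus = np.arange(t + 1)                      # tau = 0 .. t
    x = 0.0
    for k in range(y.shape[0]):
        window = gaussian_window(taus, t, d[k], sigma[k])
        x += w[k] * np.sum(y[k, : t + 1] * window)
    return x
```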
This approach is partly motivated by neurophysiology and mathematics. In the brain, a
spike that is sent by a neuron via an axon is not received as a spike by the receiving cell.
¹Other windows are possible. The function describing the shape of the window has to be smooth.
[Figure 1 appears here: activation traces of the sending units (input 1 ... input D) plotted over time.]
Figure I: The input to one unit in the Tempo 2 network. The boxes represent the activations
of the sending units; a tall box represents a high activity.
Rather, the postsynaptic potential has a short rise and a long tail. Let us assume a situation
with two neurons. Neuron A fires at time t-d, where d is the time that the signal needs to
travel along the connection and to activate neuron B. Neuron B is activated mostly at time
t, but the postsynaptic potential will decrease slowly and neuron B will get some input at
time t+1, some smaller input at time t+2 and so on. Functionally, a spike is smeared over
time and this provides some "local memory".
For our simulations we simulate this behavior by allowing the receiving unit to be activated
by the weighted sum of activations around an input centered at time t-d. If the sending
unit ("neuron A") was activated at time t-d, then the receiving unit ("neuron Bit) will be
activated mostly at time t, will be less activated at time t+I, and so on. In our case, the
input-window function also allows the receiving unit to be (less) activated at times t-I, t-2
etc .. This enables us to formulate a learning rule that can increase and decrease time-delays.
The gaussian input-window has the advantage that it provides some robustness against temporally misaligned input tokens. By looking at Fig. 2 it is obvious that small misalignments
of the input signal do not change the input of the receiving unit significantly. The robustness
is dependent on the width of the window. Therefore a wide window would make the input
of the receiving unit more robust against signals shifted in time, but would also reduce the
time-resolution of the unit. This suggests the implementation of a learning rule that adjusts
the width of the input-windows of each connection.
With this gaussian input-window, it is possible to compute how the input of unit j would
change if the delay of a connection or the width of the input window were changed. The
formalism is the same as for the derivation of the learning rules for the weights in a standard
Back-Propagation network. The change of a delay is proportional to the derivative of the
output error with respect to the delay. The change of a width is proportional to the derivative
of the error with respect to the width. The error at the output is propagated back to the
hidden layer. The learning rules for weights w_ji, delays d_ji and widths σ_ji were derived from
[Figure 2 appears here. Left panel: adjusting the delays (a positive derivative increases the delay and moves the window left). Right panel: adjusting the width of the windows via the derivative with respect to σ, with regions A and B of the window marked.]
Figure 2: A graphical explanation of the learning rules for delays and widths: The derivative
of the gaussian input-window with respect to time is used for adjusting the time-delays.
The derivative with respect to u (dotted line) is used for adjusting the width of the window.
A majority of activation in area A will cause the window to grow. A majority of activity in
area B will cause the window to shrink.
where ε1, ε2 and ε3 are the learning rates and E is the error. As in the derivation of the
standard Back-Propagation learning rules, the chain rule is applied (z = w, d, σ):

$$\frac{\partial E}{\partial z_{ji}} = \frac{\partial E}{\partial x(t)_j} \cdot \frac{\partial x(t)_j}{\partial z_{ji}}$$

where $\partial E / \partial x(t)_j$ is the same in the learning rules for weights, delays and widths. The partial
derivatives of the input with respect to the parameters of the connections are computed as
follows:
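The derivatives themselves follow by differentiating the Gaussian window defined earlier; the sketch below is our own derivation from that definition and reuses gaussian_window from the previous sketch:

```python
def window_partials(tau, t, d, sigma):
    # Partial derivatives of Theta(tau, t, d, sigma) with respect to the
    # delay d and the width sigma, obtained by direct differentiation.
    theta = gaussian_window(tau, t, d, sigma)
    u = tau - t + d
    dtheta_dd = -theta * u / sigma ** 2                        # dTheta/dd
    dtheta_dsigma = theta * (u ** 2 / sigma ** 3 - 1.0 / sigma)  # dTheta/dsigma
    return dtheta_dd, dtheta_dsigma

def connection_gradients(t, y_i, w, d, sigma, dE_dx):
    # Gradients of the error E with respect to one connection's weight,
    # delay and width, via the chain rule dE/dz = dE/dx * dx/dz.
    taus = np.arange(t + 1)
    theta = gaussian_window(taus, t, d, sigma)
    dtheta_dd, dtheta_dsigma = window_partials(taus, t, d, sigma)
    acts = y_i[: t + 1]                 # activations of the sending unit
    dx_dw = np.sum(acts * theta)
    dx_dd = w * np.sum(acts * dtheta_dd)
    dx_dsigma = w * np.sum(acts * dtheta_dsigma)
    return dE_dx * dx_dw, dE_dx * dx_dd, dE_dx * dx_dsigma
```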
[Figure 3 appears here: schematic of a connection being split along the delay axis.]
Figure 3: Splitting of a connection. The dotted line represents the "old" window and the
solid lines represent the two windows after splitting, respectively.
2.1 ADDING NEW CONNECTIONS
Learning algorithms for neural networks that add hidden units have recently been proposed
[4,5]. In our network connections are added to the already existing ones in a similar way
as it is used by Hanson for adding units [5]. During learning, the network starts with one
connection between two units. Depending on the task this may be insufficient and it would be
desirable to add new connections where more connections are needed. New connections are
added by splitting already existing connections and afterwards training them independently
(see Fig. 3). The rule for splitting a connection is motivated by observations during training
runs. It was observed that input-windows started moving backwards and forwards (that
means the time-delays changed) after a certain level of performance was reached. This can
be interpreted as inconsistent time-delays which might be caused by temporal variability
of certain features in the samples of speech. During training we compute the standard
deviations of all delay changes and compare them with a threshold:
if

$$\sqrt{\frac{1}{\#\text{tokens}} \sum_{\text{all tokens}} \left( \Delta d_{ji}(\text{token}) \;-\; \frac{\sum_{\text{all tokens}} \Delta d_{ji}}{\#\text{tokens}} \right)^{2}} \;>\; \text{threshold}$$

then split connection ji.
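A small sketch of this splitting test, assuming the per-token delay changes Δd_ji have been recorded during an epoch:

```python
def should_split(delay_changes, threshold):
    # delay_changes: per-token delay updates Delta d_ji recorded over training.
    # Split the connection when the standard deviation of these changes
    # exceeds the threshold, i.e. when the window keeps moving back and
    # forth between tokens instead of settling on a consistent delay.
    changes = np.asarray(delay_changes)
    return changes.std() > threshold
```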
3 SIMULATIONS
The Tempo 2 network was initially tested with rhythm classification. The results were
encouraging and evaluation was carried out on a phoneme classification task. In this
application, adaptive delays can help to find important cues in a sample of speech. Units
should not accumulate information from irrelevant parts of the phonemes. Rather, they
should look at parts within the phonemes that provide the most important information for
the kind of feature extraction that is needed for the classification task. The network was
trained on the phonemes /b/, /d/ and /g/ from a single speaker. 783 tokens were used for
training and 759 tokens were used for testing.
adaptive parameters     | constant parameters | Training Set | Testing Set
------------------------|---------------------|--------------|------------
weights                 | delays, widths      | 93.2%        | 89.3%
delays                  | weights, widths     | 64.0%        | 63.0%
widths                  | weights, delays     | 63.5%        | 61.8%
delays, widths          | weights             | 70.0%        | 68.6%
weights, delays         | widths              | 98.3%        | 97.8%
weights, delays, widths | -                   | 98.8%        | 98.0%
Table 1: /b/, /d/ and /g/ classification performance with 8 hidden units in one hidden layer.
The network is initialized with random weights and constant widths.
In order to evaluate the usefulness of each adaptive parameter. the network was trained and
tested with a variety of combinations of constant and adaptive parameters (see Table 1). In
all cases the network was initialized with random weights and delays and constant widths
σ of the input windows. All results were obtained with 8 hidden units in one hidden layer.
4 DISCUSSION
The TDNN has been shown to be a very powerful approach to phoneme recognition. The
fixed time-delays and the kind of time-window were chosen partly because they were
motivated by results from earlier studies [1. 7] and because they were successful from an
engineering point of view. The architecture was optimized for the recognition of phonemes
/b/, /d/ and /g/ and could be applied to other phonemes without significant changes. In this
study we explored the performance of an artificial neural network that can automatically
learn its own architecture by learning time-delays and widths of the gaussian input windows.
The learning rules for the time-delays and the width of the windows were derived in the
same way that has been shown successful for the derivation of learning rules for weights.
Our results show that time-delays in artificial neural networks can be learned automatically.
The learning rule proposed in this study is able to improve performance significantly
compared to fixed delays if the network is initialized with random delays.
The width of an input window determines how much local temporal context is captured
by a single connection. Additionally. a large window means increased robustness against
temporal misalignments of the input tokens. A large window also means that the connection
transmits with a low temporal resolution. The learning rule for the widths of the windows
has to compromise between increased robustness against misaligned tokens and decreased
time-resolution. This is done by a gradient descent method.
If the network is initialized with the same widths that are used for the training runs with
constant widths, 70-80% of the windows in the network get smaller during training. Our
simulations show that it is possible to let a learning rule adjust parameters that determine
the temporal resolution of the network.
The comparison of the performances with one adaptive parameter set (either weights,
delays or widths) shows that the main parameters in the network are the weights. Delays
and widths seem to be of lesser importance, but in combination with the weights the
delays can improve the performance. especially generalization. A Tempo 2 network with
trained delays and widths and random weights can classify 70% of the phonemes correctly.
The Tempo 2 Algorithm: Adjusting Time-Delays By Supervised Learning
This suggests that learning temporal parameters is effective.
The network achieves results comparable to a similar network with a handtuned architecture.
This suggests that the kind of learning rule could be helpful in applying time-delay neural
networks to problems where no knowledge about optimal time windows is available.
At higher levels of processing such adaptive networks could be used to learn rhythmic
(prosodic) relationships in fluent speech and other tasks.
Acknowledgements
The authors gratefully acknowledge the support by the McDonnell-Pew Foundation (Cognitive Neuroscience Program) and ATR Interpreting Telephony Research Laboratories.
References
[1] S.E. Blumstein and K.N. Stevens. Perceptual Invariance And Onset Spectra For Stop
Consonants In Different Vowel Environments. Journal of the Acoustical Society of
America, 67:648-662, 1980.
[2] U. Bodenhausen. The Tempo Algorithm: Learning In A Neural Network With
Adaptive Time-Delays. In Proceedings of the IJCNN 90, Washington D.C., January
1990.
[3] U. Bodenhausen. Learning Internal Representations Of pattern Sequences In A Neural
Network With Adaptive Time-Delays. In Proceedings of the IJCNN 90, San Diego,
June 1990.
[4] S. Fahlman and C. Lebiere. The Cascade-Correlation Learning Architecture. In
Advances in Neural Information Processing Systems. Morgan Kaufmann, 1990.
[5] S. J. Hanson. Meiosis Networks. In Advances in Neural Information Processing
Systems. Morgan Kaufmann, 1990.
[6] C. E. Kamm. Effects Of Neural Network Input Span On Phoneme Classification. In
Proceedings of the International Joint Conference on Neural Networks, June 1990.
[7] D. Kewley-Port. Time Varying Features As Correlates Of Place Of Articulation In
Stop Consonants. Journal of the Acoustical Society of America, 73:322-335, 1983.
[8] K. J. Lang, G. E. Hinton, and A.H. Waibel. A Time-Delay Neural Network Architecture For Speech Recognition. Neural Networks Journal, 1990.
[9] D. E. Rumelhart, G. E. Hinton, and RJ. Williams. Learning Internal Representations
By Error Propagation. In J.L. McClelland and D.E. Rumelhart, editors, Parallel
Distributed Processing; Explorations in the Microstructure of Cognition, chapter 8,
pages 318-362. MIT Press, Cambridge, MA, 1986.
[10] D. W. Tank and J. J. Hopfield. Neural Computation By Concentrating Information In
Time. In Proceedings of the National Academy of Sciences, pages 1896-1900, April 1987.
[11] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. Phoneme Recognition
Using Time-Delay Neural Networks. IEEE, Transactions on Acoustics, Speech and
Signal Processing, March 1989.
[12] A. Waibel. Modular Construction Of Time-Delay Neural Networks For Speech Recognition. Neural Computation, MIT-Press, March 1989.
3,568 | 4,230 | Confidence Sets for Network Structure
Patrick Wolfe
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA 02138
[email protected]
David S. Choi
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA 02138
[email protected]
Edoardo M. Airoldi
Department of Statistics
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
Latent variable models are frequently used to identify structure in dichotomous
network data, in part because they give rise to a Bernoulli product likelihood that
is both well understood and consistent with the notion of exchangeable random
graphs. In this article we propose conservative confidence sets that hold with respect to these underlying Bernoulli parameters as a function of any given partition
of network nodes, enabling us to assess estimates of residual network structure,
that is, structure that cannot be explained by known covariates and thus cannot be
easily verified by manual inspection. We demonstrate the proposed methodology
by analyzing student friendship networks from the National Longitudinal Survey
of Adolescent Health that include race, gender, and school year as covariates. We
employ a stochastic expectation-maximization algorithm to fit a logistic regression model that includes these explanatory variables as well as a latent stochastic
blockmodel component and additional node-specific effects. Although maximumlikelihood estimates do not appear consistent in this context, we are able to evaluate confidence sets as a function of different blockmodel partitions, which enables
us to qualitatively assess the significance of estimated residual network structure
relative to a baseline, which models covariates but lacks block structure.
1
Introduction
Network datasets comprising edge measurements Aij ∈ {0, 1} of a binary, symmetric, and antireflexive relation on a set of n nodes, 1 ≤ i < j ≤ n, are fast becoming of paramount interest in the
statistical analysis and data mining literatures [1]. A common aim of many models for such data is
to test for and explain the presence of network structure, primary examples being communities and
blocks of nodes that are equivalent in some formal sense. Algorithmic formulations of this problem
take varied forms and span many literatures, touching on subjects such as statistical physics [2, 3],
theoretical computer science [4], economics [5], and social network analysis [6].
One popular modeling assumption for network data is to assume dyadic independence of the edge
measurements when conditioned on a set of latent variables [7, 8, 9, 10]. The number of latent
parameters in such models generally increases with the size of the graph, however, meaning that
computationally intensive fitting algorithms may be required and standard consistency results may
not always hold. As a result, it can often be difficult to assess statistical significance or quantify
the uncertainty associated with parameter estimates. This issue is evident in literatures focused
on community detection, where common practice is to examine whether algorithmically identified
communities agree with prior knowledge or intuition [11, 12]; this practice is less useful if additional
confirmatory information is unavailable, or if detailed uncertainty quantification is desired.
Confidence sets are a standard statistical tool for uncertainty quantification, but they are not yet
well developed for network data. In this paper, we propose a family of confidence sets for network structure that apply under the assumption of a Bernoulli product likelihood. The form of
these sets stems from a stochastic blockmodel formulation which reflects the notion of latent nodal
classes, and they provide a new tool for the analysis of estimated or algorithmically determined
network structure. We demonstrate usage of the confidence sets by analyzing a sample of 26 adolescent friendship networks from the National Longitudinal Survey of Adolescent Health (available
at http://www.cpc.unc.edu/addhealth), using a baseline model that only includes explanatory covariates and heterogeneity in the nodal degrees. We employ these confidence sets to validate departures
from this baseline model taking the form of residual community structure. Though the confidence
sets we employ are conservative, we show that they are effective in identifying putative residual
structure in these friendship network data.
2
Model Specification and Inference
We represent network data via a sociomatrix A ∈ {0, 1}^{N×N} that reflects the adjacency structure
of a simple, undirected graph on N nodes. In keeping with the latent variable network analysis
literature, we assume entries {Aij } for i < j to be independent Bernoulli random variables with
associated success probabilities {Pij }i<j , and complete A as a symmetric matrix with zeros along
its main diagonal. The corresponding data log-likelihood is given by
$$L(A; P) = \sum_{i<j} \Big[ A_{ij} \log(P_{ij}) + (1 - A_{ij}) \log(1 - P_{ij}) \Big], \qquad (1)$$
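A minimal sketch of evaluating this likelihood over the upper triangle of the sociomatrix (names and array conventions are ours):

```python
import numpy as np

def bernoulli_loglik(A, P):
    # Dyadic-independence log-likelihood of Eq. (1), summed over i < j.
    # Assumes all entries of P lie strictly in (0, 1).
    i, j = np.triu_indices_from(A, k=1)
    a, p = A[i, j], P[i, j]
    return np.sum(a * np.log(p) + (1.0 - a) * np.log(1.0 - p))
```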
where each Pij can itself be modeled as a function of latent as well as explanatory variables.
Given an instantiation of A and a latent variable model for the probabilities {Pij }i<j , it is natural
to seek a quantification of the uncertainty associated with estimates of these Bernoulli parameters.
A standard approach in non-network settings is to posit a parametric model and then compute confidence intervals, for example by appealing to standard maximum-likelihood asymptotics. However,
as mentioned earlier, the formulation of most latent variable network models dictates an increasing number of parameters as the number of network nodes grows; this amount of expressive power
appears necessary to capture many salient characteristics of network data. As a result, standard
asymptotic results do not necessarily apply, leaving open questions for inference.
2.1
A Logistic Regression Model for Network Structure
To illustrate the complexities that can arise in this inferential setting, we adopt a latent variable
network model with a standard flavor: a logistic regression model that simultaneously incorporates
aspects of blockmodels, additive effects, and explanatory variables (see [10] for a more general formulation). Specifically, we incorporate a K-class stochastic blockmodel component parameterized
in terms of a symmetric matrix θ ∈ R^{K×K} and a membership vector z ∈ {1, . . . , K}^N whose values denote the class of each node, with P_ij depending on θ_{z_i z_j}. A vector of additional node-specific
latent variables α is included to account for heterogeneity in the observed nodal degrees, along with
a vector of regression coefficients β corresponding to explanatory variables x(i, j). Thus we obtain
the log-odds parameterization

$$\log \frac{P_{ij}}{1 - P_{ij}} = \theta_{z_i z_j} + \alpha_i + \alpha_j + x(i, j)' \beta, \qquad (2)$$

where we further enforce the identifiability constraint that $\sum_i \alpha_i = 0$.
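A sketch of the resulting edge probabilities P_ij under parameterization (2), assuming the covariates are stored as an (N, N, q) array x:

```python
def edge_probs(z, theta, alpha, beta, x):
    # P_ij from the log-odds model (2); symmetric, diagonal left at zero.
    n = len(z)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            eta = theta[z[i], z[j]] + alpha[i] + alpha[j] + x[i, j] @ beta
            P[i, j] = P[j, i] = 1.0 / (1.0 + np.exp(-eta))
    return P
```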
2.2
Likelihood-Based Inference
Exact maximization of the log-likelihood L(A; z, θ, α, β, x) is computationally demanding even for
moderately large K and N , owing to the total number of nodal partitions induced by z. Algorithm 1
details a stochastic expectation-maximization (EM) algorithm to explore the likelihood space.
Algorithm 1 Stochastic Expectation-Maximization Fitting of model (2)
1. Set t = 0 and initialize (z^(0), θ^(0), α^(0), β^(0)).
2. For iteration t, do:
   E-step: Sample z^(t) ∝ exp{L(z | A; θ^(t), α^(t), β^(t), x)} (e.g., via Gibbs sampling)
   M-step: Set (θ^(t), α^(t), β^(t)) = argmax_{θ,α,β} L(θ, α, β | z^(t); A, x) (convex optimization)
3. Set t ← t + 1 and return to Step 2.
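A deliberately naive sketch of the Gibbs-sampling E-step, reusing the helpers above; it recomputes the full likelihood for each candidate label, which a practical implementation would avoid:

```python
def gibbs_sweep(A, z, theta, alpha, beta, x, K, rng):
    # One sweep of single-site Gibbs updates over the class labels z.
    for i in rng.permutation(len(z)):
        logp = np.empty(K)
        for k in range(K):
            z[i] = k
            logp[k] = bernoulli_loglik(A, edge_probs(z, theta, alpha, beta, x))
        logp -= logp.max()                 # stabilize before exponentiating
        probs = np.exp(logp)
        z[i] = rng.choice(K, p=probs / probs.sum())
    return z
```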
When α and β are fixed to zero, model (2) reduces to a re-parameterization of the standard stochastic blockmodel. Consistency results for this model have been developed for a range of conditions [7, 13, 14, 15, 16]. However, it is not clear how uncertainty in z and θ should be quantified
or even concisely expressed: in this vein, previous efforts to assess the robustness of fitted structure include [17], in which community partitions are analyzed under perturbations of the network,
and [18], in which the behavior of local minima resulting from simulated annealing is examined; a
likelihood-based test is proposed in [19] to compare sets of community divisions.
Without the blockmodel components z and θ, the model of Eq. (2) reduces to a generalized linear model whose likelihood can be maximized by standard methods. If α is further constrained
to equal 0, the model is finite dimensional and standard asymptotic results for inference can be
applied. Otherwise, the increasing dimensionality of α brings consistency into question, and in
fact certain negative results are known for a related model, known as the p1 exponential random
graph model [20]. Specifically, [21] reports that the maximum likelihood estimator for the p1 model
exhibits bias with magnitude equal to its variance. Although estimation error does converge asymptotically to zero for the p1 model, it is not known how to generate general confidence intervals or
hypothesis tests; [22] prescribes reporting standard errors only as summary statistics, with no association to p-values. The predictions of [21] were replicated (reported below) when fitting simulated
data drawn from the model of Eq. (2) with parameters matched to observed characteristics of the
Adolescent Health friendship networks.
Model selection techniques, such as out-of-sample prediction, are sometimes used to validate statistical network models. For example, [23] uses out-of-sample prediction to compare the stochastic
blockmodel to other network models. We note that model selection techniques and the confidence
estimates presented here are complementary. To choose the best model for the data, a model selection method should be used; however, if the parameter will be interpreted to draw conclusions about
the data, a confidence estimate may be desired as well.
2.3
Confidence Sets for Network Structure
Instead of quantifying the uncertainty associated with estimates of the model parameters
(z, θ, α, β), we directly find confidence sets for the Bernoulli likelihood parameters {P_ij}_{i<j}. To
this end, for any fixed K and class assignment z, define symmetric matrices Θ̄^(z), Θ̂^(z) in [0, 1]^{K×K}
element-wise for 1 ≤ a ≤ b as

$$\bar\Theta^{(z)}_{ab} = \frac{1}{n_{ab}} \sum_{i<j} P_{ij}\, 1\{z_i = a,\, z_j = b\}, \qquad \hat\Theta^{(z)}_{ab} = \frac{1}{n_{ab}} \sum_{i<j} A_{ij}\, 1\{z_i = a,\, z_j = b\},$$

with n_ab denoting the maximum number of possible edges between classes a and b (i.e., the corresponding number of Bernoulli trials). Thus Θ̄^(z)_ab is the expected proportion of edges between (or
within, if a = b) classes a and b, under class assignment z, and Θ̂^(z)_ab is its corresponding sample
proportion estimator.
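A sketch of computing the sample proportions Θ̂^(z) together with the trial counts n_ab:

```python
def block_proportions(A, z, K):
    # Sample proportions theta_hat[a, b] and trial counts n[a, b] over the
    # upper triangle, for 0 <= a <= b < K.
    n_nodes = len(z)
    edges = np.zeros((K, K))
    trials = np.zeros((K, K))
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            a, b = sorted((z[i], z[j]))
            edges[a, b] += A[i, j]
            trials[a, b] += 1
    theta_hat = np.divide(edges, trials, out=np.zeros_like(edges), where=trials > 0)
    return theta_hat, trials
```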
Intuitively, Θ̂^(z) measures assortativity by z; whenever the sociomatrix A is unstructured, elements
of Θ̂^(z) should be nearly uniform for any choice of partition z. When strong community structure is
present in A, however, these elements should instead be well separated for corresponding values of
z. Thus, it is of interest to examine a confidence set that relates Θ̂^(z)_ab to its expected value Θ̄^(z)_ab for
a range of partitions z. To this end, we may define such a set by considering a weighted sum of the
Element of β | α = 0, θ = 0   | θ = 0
-------------|----------------|---------------
Intercept    | -0.001 (0.004) | 2.26 (0.070)
Gender       | 0.003 (0.004)  | -0.005 (0.004)
Race         | -0.001 (0.004) | -0.03 (0.005)
Grade        | 0.006 (0.003)  | 0.04 (0.003)

Table 1: Empirical bias (with standard errors) of ML-estimated components of β under a baseline
model, for the cases α = 0 versus α unconstrained. Note the change in estimated bias when α is
included in the model.
form $\sum_{a \le b} n_{ab}\, D(\hat\Theta^{(z)}_{ab} \,\|\, \bar\Theta^{(z)}_{ab})$, where $D(p \| p_0) = p \log(p/p_0) + (1 - p) \log[(1 - p)/(1 - p_0)]$ denotes
the (nonnegative) Kullback-Leibler divergence of a Bernoulli random variable with parameter p_0
from that of one with parameter p. A confidence set is then obtainable via direct application of the
following theorem.
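A sketch of this Bernoulli divergence, with the usual convention 0 log 0 = 0:

```python
def bern_kl(p, q):
    # Kullback-Leibler divergence D(p || q) between Bernoulli(p) and Bernoulli(q).
    term1 = p * np.log(p / q) if p > 0 else 0.0
    term2 = (1 - p) * np.log((1 - p) / (1 - q)) if p < 1 else 0.0
    return term1 + term2
```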
Theorem 1 ([14]) Let {A_ij}_{i<j} be comprised of $\binom{N}{2}$ independent Bernoulli(P_ij) trials, and let
Z = {1, . . . , K}^N. Then with probability at least 1 − δ,

$$\sup_{z \in Z} \sum_{a \le b} n_{ab}\, D\big(\hat\Theta^{(z)}_{ab} \,\big\|\, \bar\Theta^{(z)}_{ab}\big) \;\le\; N \log K + (K^2 + K) \log\!\left(\frac{N}{K} + 1\right) + \log\frac{1}{\delta}. \qquad (3)$$
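A sketch of evaluating the right-hand side of Eq. (3) as a function of (N, K, δ), under our reading of the constants above:

```python
def divergence_bound(N, K, delta):
    # Uniform upper bound of Theorem 1 / Eq. (3), holding with prob. >= 1 - delta.
    return N * np.log(K) + (K ** 2 + K) * np.log(N / K + 1.0) + np.log(1.0 / delta)
```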
Because Eq. (3) holds uniformly over all class assignments, we may choose to apply it directly to the
value of z obtained from Algorithm 1 – and because it does not assume any particular form of latent
structure, we are able to avoid the difficulties associated with determining confidence sets directly
for the parameters of latent variable models such as Eq. (2). However, it is important to note that
this generality comes at a price: In simulation studies undertaken in [14] as well as those detailed
below, the bound of Eq. (3) is observed to be loose by a multiplicative factor ranging from 3 to 7 on
average.
2.4
Estimator Consistency and Confidence Sets
Recalling our above discussion of estimator consistency for the related p1 model, we undertook
a small simulation study to investigate the consistency of maximum-likelihood (ML) estimation
in a "baseline" version of model (2) with K = 1 and the corresponding (scalar) value of θ
set equal to zero. We compared estimates for the cases α = 0 versus α unconstrained for 500
graphs generated randomly from a model of the form specified in Eq. (2) based on school 8 of
the Add-Health data set. The number of nodes N = 204 and covariates x(i, j) matched that of
School 8 in the Adolescent Health friendship network dataset, and the regression coefficient vector
β = (−2.6, 0.025, 0.9, −1.6)', set to match the ML estimate of β for School 8, fitted via logistic regression with α = 0, θ = 0. The covariates x(i, j) comprised an intercept term, an indicator for
whether students i and j shared the same gender, an indicator for shared race, and their difference
in school grade.
The inclusion of α in the model of Eq. (2) appears to give rise to a loss of estimator consistency, as
shown in Table 1 where the empirical bias of each component of β is reported. This suggests, as
we alluded to above, that inferential conclusions based on parameter estimates from latent variable
models should be interpreted with caution.
To explore the tightness of the confidence sets given by the bound in Eq. (3), we fitted the full
model specified in Eq. (2) with K in the range 2–6 to 50 draws from a restricted version of the
model corresponding to each of the 26 schools in our dataset. In the same manner described
above, each simulated graph shared the same size and covariates as its corresponding school in
the dataset, with β fixed to its ML-fitted value with α = 0, θ = 0. The empirical divergence term
$\sum_{a \le b} n_{ab}\, D(\hat\Theta^{(z)}_{ab} \,\|\, \bar\Theta^{(z)}_{ab})$ under the approximate ML partition determined via Algorithm 1 was then
tabulated for each of these 1300 fits, and compared to its 95% confidence upper bound given by
Eq. (3). The empirical divergences are reported in the histogram of Fig. 1 as a fraction of the upper bound. It may be seen from Fig. 1 that the largest divergence observed was less than 41% of
its corresponding bound, with 95% of all divergences less than 22% of their corresponding bound.
Figure 1: Divergence terms $\sum_{a \le b} n_{ab}\, D(\hat\Theta^{(z)}_{ab} \,\|\, \bar\Theta^{(z)}_{ab})$ as fractions of 95% confidence set values,
shown for approximate maximum-likelihood fits to 1300 randomly generated graphs matched to the
26-school friendship network dataset.
This analysis provides an indication of how inflated the confidence set sizes are expected to be in
practice; while conservative in nature, they seem usable for practical situations.
3
Analysis of Adolescent Health Networks
The National Longitudinal Study of Adolescent Health (Add Health) is a study of adolescents in the
United States. To date, four waves of surveys have been collected over the course of fifteen years.
Many statistical studies have been performed using the data to explore a variety of social and health
issues1 . For example, [24, 25] discusses effects of racial diversity on community formation across
schools. Here we examine the schools individually to find residual block structure not explained by
gender, race, or grade. Since we will be unable to verify such blocks by checking against explanatory
variables, we rely on the confidence sets developed above to assess significance of the discovered
block structure.
Our approach is as follows. As discussed in Section 2.3, Eq. (3) enables us to calculate confidence
sets with respect to Bernoulli parameters {Pij } for any class membership vector z in terms of the
corresponding sample proportion matrices Θ̂^(z). Then, by comparing values of Θ̂^(z) to a baseline
model obtained by fitting K = 1, θ = 0 (thus removing the stochastic block component from
Eq. (2)), we may evaluate whether or not the observed sample counts are consistent with the structure
predicted by the baseline model. This procedure provides a kind of notional p-value to qualitatively
assess significance of the residual structure induced by any choice of z.
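Putting the earlier sketches together, the comparison described here can be expressed as follows (function names are from our previous sketches):

```python
def residual_structure_stat(A, P_baseline, z, K):
    # Empirical divergence sum_{a<=b} n_ab * D(Theta_hat || Theta_bar), where the
    # nominal Theta_bar averages the baseline-model P_ij within each block pair.
    theta_hat, trials = block_proportions(A, z, K)
    theta_bar, _ = block_proportions(P_baseline, z, K)
    stat = 0.0
    for a in range(K):
        for b in range(a, K):
            if trials[a, b] > 0:
                stat += trials[a, b] * bern_kl(theta_hat[a, b], theta_bar[a, b])
    return stat   # compare against divergence_bound(len(z), K, delta)
```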
3.1
Model Checking
We first fit model (2) with θ = 0 and α = 0, since it reduces to a logistic regression with explanatory
variables x(i, j), for which standard asymptotic results apply. The parameter fits were examined and
an analysis of deviance was conducted. The fits were observed to be well behaved in this setting;
estimates of β and their corresponding standard errors indicate a clustering effect by grade that is
stronger than that of either shared gender or race. An analysis of deviance, where each variable was
withheld from the model, resulted in similar conclusions: Average deviances across the 26 schools
were −69, −238, and −3760 for gender, race, and grade respectively, with p-values below 0.05
indicating significance in all but 3, 7, and 0 of the schools for each of the respective covariates; these
schools had small numbers of students, with a maximum N of 108.
When α was re-introduced into the model of Eq. (2), its components were observed to correlate
highly with the sequence of observed nodal degrees in the data, as expected. (Recall that consistency
results are not known for this model, so that p-values cannot be associated with deviances or standard
errors; however, in our simulations the maximum-likelihood estimates showed moderate errors, as
discussed in Section 2.4.) For two of the schools, the resulting model was degenerate, whereas for
the remaining schools the ?-degree correlation had a range of 0.78?0.94 and a median value of 0.89.
¹ For a bibliography, see http://www.cpc.unc.edu/projects/addhealth/pubs.
(a) K = 2
(b) K = 4
(c) K = 6
Figure 2: Student counts resulting from a stochastic blockmodel fit for K ? {2, . . . , 6}, arranged by
latent block and school year (grade) for School 6. The inferred block structure approximately aligns
with the grade covariate (which was not included in this model).
Estimates of β did not undergo qualitatively significant changes from their earlier values when the
restriction α = 0 was lifted.
A "pure" stochastic blockmodel (α = 0, β = 0) was fitted to our data over the range K ∈ {2, . . . , 6},
to observe if the resulting block structure replicates that of any of the known covariates. Figure 2
shows counts of students by latent class (under the approximate maximum-likelihood estimate of
z) and grade for School 6; it can be seen that the recovered grouping of students by latent class is
closely aligned with the grade covariate, particularly for grades 7?10.
3.2
Residual Block Structure
We now report on the assessment of residual block structure in the Adolescent Health friendship
network data. Recalling that the confidence sets obtained with Eq. (3) hold uniformly for all partitions of equal size, independently of how they are chosen, we therefore may freely modify the
fitting procedure of Algorithm 1 to obtain partitions that exhibit the greatest degree of structure.
Bearing in mind the high observed α-degree correlation as discussed above, we replaced the latent
variable vector α in the model of Eq. (2) with a more parsimonious categorical covariate determined
by grouping the observed network degrees according to the ranges 0–3, 4–7, and 8–∞. We also
expanded the covariates by giving each race and grade pairing its own indicator function. These
modifications would be inappropriate for the baseline model, as dyadic independence conditioned
on the covariates would be lost, and standard errors for β would be larger; however, the changes
were useful for improving the output of Algorithm 1 without invalidating Eq. (3).
Fig. 3 depicts partitions for which the observed Θ̂^(z), fitted for various K > 1 using the modified
version of Algorithm 1 detailed above, is "far" from its nominal value under the baseline model fitted
with K = 1, in the sense that the corresponding 95% Bonferroni-corrected confidence set bound is
exceeded. We observe that in each partition, the number of apparently visible communities exceeds
K, and they are comprised of small numbers of students. This effect is due to the intersection of
grade and z-induced clustering.
We take as our definition of nominal value the quantity Θ̂^(z) computed under the baseline model,
which we denote by Θ̄^(z). Table 2 lists normalized divergence terms
$\binom{N}{2}^{-1} \sum_{a \le b} n_{ab}\, D(\hat\Theta^{(z)}_{ab} \,\|\, \bar\Theta^{(z)}_{ab})$, Bonferroni-corrected 95% confidence bounds, and measures of
alignment between the corresponding partitions z and the explanatory variables. The alignments
with the covariates are small, as measured by the Jaccard similarity coefficient and the ratio of within-class to total variance², signifying the residual quality of the partitions, while the relatively large
divergence terms signify that the Bonferroni-corrected confidence set bounds for each school have
been met or exceeded.
² The alignment scores are defined as follows. The Jaccard similarity coefficient is defined as |A ∩ B| / |A ∪ B|,
where A and B are the sets of student pairings sharing the same latent class or the same covariate value,
respectively. See [12] for further network-related discussion. Variance ratio denotes the within-class degree
variance divided by the total variance, averaged over all classes.
                                             Jaccard coefficient or Variance ratio
School | Students | Edges | K | Div. (Bound)    | Gender | Race | Grade | Degree
-------|----------|-------|---|-----------------|--------|------|-------|-------
10     | 678      | 2795  | 6 | 0.0064 (0.0062) | 0.14   | 0.16 | 0.097 | 0.93
18     | 284      | 1189  | 5 | 0.0150 (0.0150) | 0.17   | 0.19 | 0.14  | 0.88
21     | 377      | 1531  | 6 | 0.0140 (0.0120) | 0.15   | 0.16 | 0.12  | 0.95
22     | 614      | 2450  | 5 | 0.0064 (0.0061) | 0.18   | 0.14 | 0.11  | 0.99
26     | 551      | 2066  | 3 | 0.0049 (0.0045) | 0.25   | 0.21 | 0.13  | 0.99
29     | 569      | 2534  | 6 | 0.0091 (0.0075) | 0.15   | 0.16 | 0.10  | 0.88
38     | 521      | 1925  | 5 | 0.0073 (0.0073) | 0.17   | 0.18 | 0.17  | 0.86
55     | 336      | 803   | 4 | 0.0100 (0.0100) | 0.20   | 0.18 | 0.21  | 0.97
56     | 446      | 1394  | 6 | 0.0120 (0.0099) | 0.15   | 0.14 | 0.15  | 0.98
66     | 644      | 2865  | 6 | 0.0069 (0.0066) | 0.15   | 0.16 | 0.099 | 0.91
67     | 456      | 926   | 3 | 0.0055 (0.0055) | 0.25   | 0.23 | 0.25  | 1.00
72     | 352      | 1398  | 4 | 0.0099 (0.0095) | 0.21   | 0.21 | 0.12  | 0.96
78     | 432      | 1334  | 6 | 0.0100 (0.0100) | 0.15   | 0.12 | 0.15  | 0.98
80     | 594      | 1745  | 4 | 0.0054 (0.0053) | 0.20   | 0.19 | 0.15  | 0.99
Table 2: Block structure assessments corresponding to Fig. 3. Small Jaccard coefficient values (for
gender, race, and grade) and variance ratios approaching 1 for degree indicate a lack of alignment
with covariates and hence the identification of residual structure in the corresponding partition.
We note that the usage of covariate information was necessary to detect small student groups; without the incorporation of grade effects, we would require a much larger value of K for Algorithm 1 to
detect the observed network structure (a concern noted by [23] in the absence of covariates), which
in turn would inflate the confidence set, leading to an inability to validate the observed structure
from that predicted by a baseline model.
4
Concluding Remarks
In this article we have developed confidence sets for assessing inferred network structure, by leveraging our result derived in [14]. We explored the use of these confidence sets with an application to
the analysis of Adolescent Health survey data comprising friendship networks from 26 schools.
Our methodology can be summarized as follows. In lieu of a parametric model, we assume dyadic
independence with Bernoulli parameters {Pji }. We introduced a baseline model (K = 1) that incorporates degree and covariate effects, without block structure. Algorithm 1 was then used to find
highly assortative partitions of students which are also far from partitions induced by the explanatory covariates in the baseline model. Differences in assortativity were quantified by an empirical
divergence statistic, which was compared to an upper bound computed from Eq. (3) to check for significance and to generate confidence sets for {Pij }. While the upper bound in Eq. (3) is known to be
loose, simulation results in Figure 1 suggest that the slack is moderate, leading to useful confidence
sets in practice.
In our procedure, we cannot quantify the uncertainty associated with the estimated baseline model,
since the parameter estimates lack consistency. As a result, we cannot conduct a formal hypothesis
test for θ = 0. However, for a baseline model where the MLE is known to be consistent, we conjecture that such a hypothesis test should be possible by incorporating the confidence set associated
with the MLE.
Despite concerns regarding estimator consistency in this and other latent variable models, we were
able to show that the notion of confidence sets may instead be used to provide a (conservative)
measure of residual block structure. We note that many open questions remain, and are hopeful
that this analysis may help to shed light on some important current issues facing practitioners and
theorists alike in statistical network analysis.
(a) School 10, K = 6
(b) School 18, K = 5
(c) School 21, K = 6
(d) School 22, K = 5
(e) School 26, K = 3
(f) School 29, K = 6
(g) School 38, K = 5
(h) School 55, K = 4
(i) School 56, K = 6
(j) School 66, K = 6
(k) School 67, K = 3
(l) School 72, K = 4
(m) School 78, K = 6
(n) School 80, K = 4
Figure 3: Adjacency matrices for schools exhibiting residual block structure as described in Section 3.2, with nodes ordered by grade (solid lines) and corresponding latent classes (dotted lines).
References
[1] A. Goldenberg, A. X. Zheng, S. E. Fienberg, and E. M. Airoldi, "A survey of statistical
network models", Foundations and Trends in Machine Learning, vol. 2, pp. 1-117, Feb. 2010.
[2] R. Albert and A. L. Barabasi, "Statistical mechanics of complex networks", Reviews of
Modern Physics, vol. 74, no. 47, Jan. 2002.
[3] M. E. J. Newman, "The structure and function of complex networks", SIAM Review, vol. 45,
pp. 167-256, June 2003.
[4] C. Cooper and A. M. Frieze, "A general model of web graphs", Random Structures and
Algorithms, vol. 22, no. 3, pp. 311-335, Mar. 2003.
[5] M. O. Jackson, Social and Economic Networks, Princeton University Press, 2008.
[6] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications, Cambridge
University Press, Cambridge, U.K., 1994.
[7] T. A. B. Snijders and K. Nowicki, "Estimation and prediction for stochastic blockmodels for
graphs with latent block structure", J. Classif., vol. 14, pp. 75-100, Jan. 1997.
[8] M. S. Handcock, A. E. Raftery, and J. M. Tantrum, "Model-based clustering for social
networks", J. R. Stat. Soc. A, vol. 170, pp. 301-354, Mar. 2007.
[9] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, "Mixed membership stochastic
blockmodels", J. Mach. Learn. Res., vol. 9, pp. 1981-2014, June 2008.
[10] P. D. Hoff, "Multiplicative latent factor models for description and prediction of social
networks", Computational Math. Organization Theory, vol. 15, pp. 261-272, Dec. 2009.
[11] M. E. J. Newman, "Modularity and community structure in networks", Proc. Natl Acad. Sci.
U.S.A., vol. 103, pp. 8577-8582, June 2006.
[12] A. L. Traud, E. D. Kelsic, P. J. Mucha, and M. A. Porter, "Comparing community structure
to characteristics in online collegiate social networks", SIAM Rev., 2011, to appear.
[13] P. J. Bickel and A. Chen, "A nonparametric view of network models and Newman-Girvan
and other modularities", Proc. Natl Acad. Sci. U.S.A., vol. 106, pp. 21068-21073, Dec. 2009.
[14] D. S. Choi, P. J. Wolfe, and E. M. Airoldi, "Stochastic blockmodels with growing numbers of
classes", Biometrika, 2011, to appear.
[15] K. Rohe, S. Chatterjee, and B. Yu, "Spectral clustering and the high-dimensional stochastic
blockmodel", Ann. Stat., 2011, to appear.
[16] A. Celisse, J.-J. Daudin, and L. Pierre, "Consistency of maximum-likelihood and variational
estimators in the stochastic block model", Arxiv preprint 1105.3288, 2011.
[17] B. Karrer, E. Levina, and M. E. J. Newman, "Robustness of community structure in networks",
Phys. Rev. E, vol. 77, pp. 46119-46128, Apr. 2008.
[18] C. P. Massen and J. P. K. Doye, "Thermodynamics of community structure", Arxiv preprint
cond-mat/0610077, 2006.
[19] J. Copic, M. O. Jackson, and A. Kirman, "Identifying community structures from network
data via maximum likelihood methods", B.E. J. Theoretical Economics, vol. 9, Sept. 2009.
[20] P. W. Holland and S. Leinhardt, "An exponential family of probability distributions for
directed graphs", J. Am. Stat. Assoc., vol. 76, pp. 33-50, Mar. 1981.
[21] S. J. Haberman, "Comment on Holland and Leinhardt", J. Am. Stat. Assoc., vol. 76, pp. 60-62,
Mar. 1981.
[22] S. Wasserman and S. O. L. Weaver, "Statistical analysis of binary relational data: parameter
estimation", J. Math. Psychol., vol. 29, pp. 406-427, Dec. 1985.
[23] P. D. Hoff, "Modeling homophily and stochastic equivalence in symmetric relational data",
in Adv. in Neural Information Processing Systems, pp. 657-664. MIT Press, 2008.
[24] S. M. Goodreau, J. A. Kitts, and M. Morris, "Birds of a feather, or friend of a friend? Using
exponential random graph models to investigate adolescent social networks", Demography,
vol. 46, pp. 103-125, Feb. 2009.
[25] M. C. González, H. J. Herrmann, J. Kertész, and T. Vicsek, "Community structure and ethnic
preferences in school friendship networks", Physica A, vol. 379, no. 1, pp. 307-316, 2007.
3,569 | 4,231 | Sparse recovery by thresholded
non-negative least squares
Martin Slawski and Matthias Hein
Department of Computer Science
Saarland University
Campus E 1.1, Saarbrücken, Germany
{ms,hein}@cs.uni-saarland.de
Abstract
Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool. At least relative to its simplicity, it often performs rather well in practice. Serious doubts
about its usefulness arise for modern high-dimensional linear models. Even in
this setting, unlike what first intuition may suggest, we show that for a broad class
of designs, NNLS is resistant to overfitting and works excellently for sparse recovery when combined with thresholding, experimentally even outperforming ℓ1-regularization. Since NNLS also circumvents the delicate choice of a regularization parameter, our findings suggest that NNLS may be the method of choice.
1
Introduction
Consider the linear regression model
y = Xβ* + ε,    (1)
where y is a vector of observations, X ∈ R^{n×p} a design matrix, ε a vector of noise and β* a vector
of coefficients to be estimated. Throughout this paper, we are concerned with a high-dimensional
setting in which the number of unknowns p is at least of the same order of magnitude as the number
of observations n, i.e. p = O(n) or even p ≫ n, in which case one cannot hope to recover the
target β* if it does not satisfy one of various kinds of sparsity constraints, the simplest being that
β* is supported on S = {j : β*_j ≠ 0}, |S| = s < n. In this paper, we additionally assume that
β* is non-negative, i.e. β* ∈ R^p_+. This constraint is particularly relevant, since non-negative data
occur frequently, e.g. in the form of pixel intensity values of an image, time measurements, histograms
or count data, and economical quantities such as prices, incomes and growth rates. Non-negativity
constraints emerge in numerous deconvolution and unmixing problems in diverse fields such as
acoustics [1], astronomical imaging [2], computer vision [3], genomics [4], proteomics [5] and
spectroscopy [6]; see [7] for a survey. Sparse recovery of non-negative signals in a noiseless setting
(ε = 0) has been studied in a series of recent papers [8, 9, 10, 11]. One important finding of this body
of work is that non-negativity constraints alone may suffice for sparse recovery, without the need to
employ sparsity-promoting ℓ1-regularization as is usual. The main contribution of the present paper
is a transfer of this intriguing result to a more realistic noisy setup, contradicting the well-established
paradigm that regularized estimation is necessary to cope with high dimensionality and to prevent
over-adaptation to noise. More specifically, we study non-negative least squares (NNLS)
min_{β⪰0} (1/n)‖y − Xβ‖₂²    (2)

with minimizer β̂ and its counterpart after hard thresholding β̂(λ),

β̂_j(λ) = β̂_j if β̂_j > λ, and β̂_j(λ) = 0 otherwise,  j = 1, . . . , p,    (3)

where λ ≥ 0 is a threshold, and state conditions under which it is possible to infer the support
S by Ŝ(λ) = {j : β̂_j(λ) > 0}. Classical work on the problem [12] gives a positive answer for
fixed p, while in case one follows the modern statistical trend, one would add a regularizer to (2) in
order to encourage sparsity: the most popular approach is ℓ1-regularized least squares (lasso, [13]),
which is easy to implement and comes with strong theoretical guarantees with regard to prediction
and estimation of β* in the ℓ2-norm over a broad range of designs (see [14] for a review). On the
other hand, the rather restrictive "irrepresentable condition" on the design is essentially necessary in
order to infer the support S from the sparsity pattern of the lasso [15, 16]. In view of its tendency
to assign non-zero weights to elements of the off-support S^c = {1, . . . , p} \ S, several researchers,
e.g. [17, 18, 19], suggest to apply hard thresholding to the lasso solution to achieve support recovery.
In light of this, thresholding a non-negative least squares solution, provided it is close to the target
w.r.t. the ℓ∞-norm, is more attractive for at least two reasons: first, there is no need to carefully
tune the amount of ℓ1-regularization prior to thresholding; second, one may hope to detect relatively
small non-zero coefficients whose recovery is negatively affected by the bias of ℓ1-regularization.
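To make the estimator concrete, the following is a minimal sketch (our own illustration, not part of the paper) of thresholded NNLS in Python, using SciPy's nnls solver; the threshold value lam is a placeholder that the theory below calibrates. Note that minimizing ‖y − Xβ‖₂ and (1/n)‖y − Xβ‖₂² have the same minimizer, so the solver can be used directly.

import numpy as np
from scipy.optimize import nnls

def thresholded_nnls(X, y, lam):
    # Solve min_{beta >= 0} (1/n) ||y - X beta||_2^2 as in (2),
    # then hard-threshold at lam as in (3).
    beta_hat, _ = nnls(X, y)
    beta_thr = np.where(beta_hat > lam, beta_hat, 0.0)
    support = np.flatnonzero(beta_thr)      # estimated support S_hat(lam)
    return beta_thr, support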
Outline. We first prove a bound on the mean square prediction error of the NNLS estimator,
demonstrating that it may be resistant to overfitting. Section 3 contains our main results on sparse
recovery with noise. Experiments providing strong support of our theoretical findings are presented
in Section 4. Most of the proofs as well as technical definitions are relegated to the supplement.
Notation. Let J, K be index sets. For a matrix A ∈ R^{n×m}, A_J denotes the matrix one obtains by
extracting the columns corresponding to J. For j = 1, . . . , m, A_j denotes the j-th column of A.
The matrix A_{JK} is the sub-matrix of A obtained by extracting rows in J and columns in K. For v ∈ R^m, v_J
is the sub-vector corresponding to J. The identity matrix is denoted by I and vectors of ones by 1.
The symbols ⪰ (≻) denote entry-wise (strict) inequalities. Lower and uppercase c's denote
positive universal constants (not depending on n, p, s) whose values may differ from line to line.
Assumptions. We here fix what is assumed throughout the paper unless stated otherwise. Model
(1) is assumed to hold. The matrix X is assumed to be non-random and scaled s.t. ‖X_j‖₂² = n ∀j.
We assume that ε has i.i.d. zero-mean sub-Gaussian entries with parameter σ > 0, cf. supplement.
2
Prediction error and uniqueness of the solution
In the following, the quantity of interest is the mean squared prediction error (MSE) (1/n)‖Xβ* − Xβ̂‖₂².
NNLS does not necessarily overfit. It is well-known that the MSE of ordinary least squares (OLS)
as well as that of ridge regression in general does not vanish unless p/n → 0. Can one do better with
non-negativity constraints? Obviously, the answer is negative for general X. To make this clear,
let a design matrix X̃ be given and set X = [X̃ −X̃] by concatenating X̃ and −X̃ columnwise.
The non-negativity constraint is then vacuous in the sense that Xβ̂ = Xβ̂^ols, where β̂^ols is any OLS
solution. However, non-negativity constraints on β can be strong when coupled with the following
condition imposed on the Gram matrix Σ = (1/n) X^⊤X.
Self-regularizing property. We call a design self-regularizing with universal constant τ₀ ∈ (0, 1] if

β^⊤Σβ ≥ τ₀ (1^⊤β)²  ∀β ⪰ 0.    (4)

The term "self-regularizing" refers to the fact that the quadratic form in β restricted to the non-negative orthant acts like a regularizer arising from the design itself. Let us consider two examples:
(1) If Σ ⪰ σ₀ > 0, i.e. all entries of the Gram matrix are at least σ₀, then (4) holds with τ₀ = σ₀.
(2) If the Gram matrix is entry-wise non-negative and if the set of predictors indexed by {1, . . . , p}
can be partitioned into subsets B₁, . . . , B_B such that min_{1≤b≤B} (1/n) X_{B_b}^⊤ X_{B_b} ⪰ σ₀, then

β^⊤Σβ ≥ Σ_{b=1}^B β_{B_b}^⊤ (1/n) X_{B_b}^⊤ X_{B_b} β_{B_b} ≥ σ₀ Σ_{b=1}^B (1^⊤β_{B_b})² ≥ (σ₀/B) (1^⊤β)².

In particular, this applies to design matrices whose entries X_ij = φ_j(u_i) contain the function evaluations of non-negative functions {φ_j}_{j=1}^p traditionally used for data smoothing, such as splines,
Gaussians and related "localized" functions, at points {u_i}_{i=1}^n in some fixed interval, see Figure 1.
For self-regularizing designs, the MSE of NNLS can be controlled as follows.
Theorem 1. Let Σ fulfill the self-regularizing property with constant τ₀. Then, with probability no
less than 1 − 2/p, the NNLS estimator obeys

(1/n)‖Xβ* − Xβ̂‖₂² ≤ √(2 log p / n) · (8σ/τ₀) ‖β*‖₁ + 8σ² log p / (τ₀ n).
The statement implies that for self-regularizing designs, NNLS is consistent in the sense that its
MSE, which is of the order O(√(log(p)/n) ‖β*‖₁), may vanish as n → ∞ even if the number of
predictors p scales up to sub-exponentially in n. It is important to note that exact sparsity of β* is
not needed for Theorem 1 to hold. The rate is the same as for the lasso if no further assumptions on
the design are made, a result that is essentially obtained in the pioneering work [20].
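As a quick numerical illustration (ours, not from the paper): on the simplex, (1^⊤β)² = 1, so by homogeneity the best constant in (4) is τ₀ = min_{β⪰0, 1^⊤β=1} β^⊤Σβ, which a generic constrained solver can evaluate; the example design and sizes below are arbitrary.

import numpy as np
from scipy.optimize import minimize

def self_regularizing_constant(Sigma):
    # tau_0 = min over the simplex of beta' Sigma beta; the quadratic form
    # is convex since Sigma is positive semidefinite, so the minimum is global.
    p = Sigma.shape[0]
    res = minimize(lambda b: b @ Sigma @ b, np.full(p, 1.0 / p),
                   jac=lambda b: 2.0 * Sigma @ b,
                   bounds=[(0.0, 1.0)] * p,
                   constraints=({'type': 'eq', 'fun': lambda b: b.sum() - 1.0},))
    return res.fun

rng = np.random.default_rng(0)
X = rng.uniform(0.2, 1.0, size=(200, 10))   # entrywise-positive design
Sigma = X.T @ X / X.shape[0]
# Example (1) above: tau_0 is at least the smallest Gram entry sigma_0.
print(self_regularizing_constant(Sigma) >= Sigma.min() - 1e-8)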
Figure 1: Block partitioning of 15 Gaussians into B = 5 blocks (B₁, . . . , B₅). The right part shows
the corresponding pattern of the Gram matrix.

Figure 2: A polyhedral cone in R³ and its intersection with the simplex (right). The point y is
contained in a face (bold) with normal vector w, whereas y′ is not.
Uniqueness of the solution. Considerable insight can be gained by looking at the NNLS problem
(2) from the perspective of convex geometry. Denote by C = XR^p_+ the polyhedral cone generated
by the columns {X_j}_{j=1}^p of X, which are henceforth assumed to be in general position in R^n. As
visualized in Figure 2, sparse recovery by non-negativity constraints can be analyzed by studying the
face lattice of C [9, 10, 11]. For F ⊆ {1, . . . , p}, we say that X_F R^{|F|}_+ is a face of C if there exists a
separating hyperplane with normal vector w passing through the origin such that ⟨X_j, w⟩ > 0, j ∉ F,
and ⟨X_j, w⟩ = 0, j ∈ F. Sparse recovery in a noiseless setting (ε = 0) can then be characterized
concisely by the following statement, which can essentially be found in prior work [9, 10, 11, 21].
Proposition 1. Let y = Xβ*, where β* ⪰ 0 has support S, |S| = s. If X_S R^s_+ is a face of C and the
columns of X are in general position in R^n, then the constrained linear system Xβ = y s.t. β ⪰ 0
has β* as its unique solution.
Proof. By definition, since X_S R^s_+ is a face of C, there exists a w ∈ R^n s.t. ⟨X_j, w⟩ = 0, j ∈ S, and
⟨X_j, w⟩ > 0, j ∈ S^c. Assume that there is a second solution β* + δ, δ ≠ 0. Expand
X_S(β*_S + δ_S) + X_{S^c} δ_{S^c} = y. Multiplying both sides by w^⊤ yields Σ_{j∈S^c} ⟨X_j, w⟩ δ_j = 0. Since
β*_{S^c} = 0, feasibility requires δ_j ≥ 0, j ∈ S^c. All inner products within the sum are positive, so
we conclude that δ_{S^c} = 0. General position implies δ_S = 0.
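The face condition of Proposition 1 can be checked directly with a linear program; the sketch below is our own illustration, in which the strict inequalities ⟨X_j, w⟩ > 0 are replaced by ⟨X_j, w⟩ ≥ 1, an equivalent condition up to rescaling of w.

import numpy as np
from scipy.optimize import linprog

def is_face(X, S):
    # X_S R^s_+ is a face of C = X R^p_+ iff there is a separating w with
    # <X_j, w> = 0 for j in S and <X_j, w> > 0 for j not in S.
    n, p = X.shape
    Sc = np.setdiff1d(np.arange(p), np.asarray(S))
    res = linprog(c=np.zeros(n),                       # pure feasibility problem
                  A_eq=X[:, S].T, b_eq=np.zeros(len(S)),
                  A_ub=-X[:, Sc].T, b_ub=-np.ones(len(Sc)),
                  bounds=[(None, None)] * n)           # w is a free variable
    return res.status == 0  # feasible <=> the face condition holds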
Given Theorem 1 and Proposition 1, we turn to uniqueness in the noisy case.
Corollary 1. In the setting of Theorem 1, if ‖β*‖₁ = o(√(n/log(p))), then the NNLS solution β̂ is
unique with high probability.
Proof. Suppose first that y ∉ C = XR^p_+; then Xβ̂, the projection of y on C, is contained in its
boundary, i.e. in a lower-dimensional face. Using general position of the columns of X, Proposition
1 implies that β̂ is unique. If y were already contained in C, one would have y = Xβ̂ and hence

(1/n)‖Xβ* − Xβ̂‖₂² = (1/n)‖Xβ* − y‖₂² = (1/n)‖ε‖₂² = Ω(1), with high probability,    (5)

using concentration of measure of the norm of the sub-Gaussian random vector ε. With the assumed
scaling for ‖β*‖₁, (1/n)‖Xβ* − Xβ̂‖₂² = o(1) in view of Theorem 1, which contradicts (5).

3
Sparse recovery in the presence of noise
Proposition 1 states that support recovery requires X_S R^s_+ to be a face of XR^p_+, which is equivalent
to the existence of a hyperplane separating X_S R^s_+ from the rest of C. For the noisy case, mere
separation is not enough: a quantification is needed, which is provided by the following two incoherence constants that are of central importance for our main result. Both are specific to NNLS and
have not been used previously in the literature on sparse recovery.
Definition 1. For some fixed S ⊆ {1, . . . , p}, the separating hyperplane constant is defined as

τ̂(S) = max_{τ,w} τ  s.t. (1/n) X_S^⊤ w = 0,  (1/n) X_{S^c}^⊤ w ⪰ τ1,  ‖w‖₂ ≤ √n,    (6)
     = min_{λ∈R^s, θ∈T^{p−s−1}} (1/√n) ‖X_S λ − X_{S^c} θ‖₂  (by duality),    (7)

where T^{m−1} = {v ∈ R^m : v ⪰ 0, 1^⊤v = 1} denotes the simplex in R^m, i.e. τ̂(S) equals the
distance of the subspace spanned by {X_j}_{j∈S} and the convex hull of {X_j}_{j∈S^c}.
We denote by Π_S and Π_S^⊥ the orthogonal projections on the subspace spanned by {X_j}_{j∈S} and its
orthogonal complement, respectively, and set Z = Π_S^⊥ X_{S^c}. One can equivalently express (7) as

τ̂²(S) = min_{θ∈T^{p−s−1}} (1/n) θ^⊤ Z^⊤ Z θ.    (8)
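Formulation (8) is a quadratic program over the simplex, which makes τ̂²(S) easy to evaluate numerically for a given design and support; the following sketch is our own addition.

import numpy as np
from scipy.optimize import minimize

def tau_hat_sq(X, S):
    # tau_hat^2(S) = min over the simplex of (1/n) theta' Z'Z theta, where
    # Z = Pi_S^perp X_{S^c} are the residuals of the off-support columns
    # after projecting onto span{X_j : j in S}.
    n, p = X.shape
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(p), S)
    XS = X[:, S]
    Z = X[:, Sc] - XS @ np.linalg.lstsq(XS, X[:, Sc], rcond=None)[0]
    G = Z.T @ Z / n
    m = len(Sc)
    res = minimize(lambda t: t @ G @ t, np.full(m, 1.0 / m),
                   jac=lambda t: 2.0 * G @ t,
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({'type': 'eq', 'fun': lambda t: t.sum() - 1.0},))
    return res.fun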
The second incoherence constant we need can be traced back to the KKT optimality conditions of
the NNLS problem. The role of the following quantity is best understood from (13) below.
Definition 2. For some fixed S ⊆ {1, . . . , p} and Z = Π_S^⊥ X_{S^c}, ω̂(S) is defined as

ω̂(S) = min_{∅≠F⊆{1,...,p−s}}  min_{v∈V(F)} ‖(1/n) Z_F^⊤ Z_F v‖_∞,   V(F) = {v ∈ R^{|F|} : ‖v‖_∞ = 1, v ⪰ 0}.    (9)

In the supplement, we show that i) ω̂(S) > 0 ⇔ τ̂(S) > 0 ⇔ X_S R^s_+ is a face of C, and ii)
ω̂(S) ≤ 1, with equality if {X_j}_{j∈S} and {X_j}_{j∈S^c} are orthogonal and (1/n) X_{S^c}^⊤ X_{S^c} is entry-wise non-negative. Denoting the entries of Σ = (1/n) X^⊤X by σ_jk, 1 ≤ j, k ≤ p, our main result additionally
involves the constants

γ(S) = max_{j∈S} max_{k∈S^c} |σ_jk|,   γ₊(S) = max_{j∈S} Σ_{k∈S^c} |σ_jk|,   β*_min(S) = min_{j∈S} β*_j,
K(S) = max_{v: ‖v‖_∞=1} ‖Σ_SS^{−1} v‖_∞,   φ_min(S) = min_{v: ‖v‖₂=1} ‖Σ_SS v‖₂.    (10)
Theorem 2. Consider the thresholded NNLS estimator β̂(λ) defined in (3) with support Ŝ(λ).

(i) If λ > (2σ/τ̂²(S)) √(2 log p / n) and

β*_min(S) > λ̃,   λ̃ = λ(1 + K(S)γ(S)) + (2σ/φ_min(S)^{1/2}) √(2 log p / n),

(ii) or if λ > (2σ/ω̂(S)) √(2 log p / n) and

β*_min(S) > λ̃,   λ̃ = λ(1 + K(S)γ₊(S)) + (2σ/φ_min(S)^{1/2}) √(2 log p / n),

then ‖β̂(λ) − β*‖_∞ ≤ λ̃ and Ŝ(λ) = S with probability no less than 1 − 10/p.

Remark. The concept of a separating functional as in (6) is also used to show support recovery for
the lasso [15, 16] as well as for orthogonal matching pursuit [22, 23]. The "irrepresentable condition"
employed in these works requires the existence of a separation constant η(S) > 0 such that
max_{j∈S^c} |X_j^⊤ X_S (X_S^⊤ X_S)^{−1} sign(β*_S)| ≤ 1 − η(S), while |X_j^⊤ X_S (X_S^⊤ X_S)^{−1} sign(β*_S)| = 1, j ∈ S,
hence {X_j}_{j∈S} and {X_j}_{j∈S^c} are separated by the functional |⟨·, X_S (X_S^⊤ X_S)^{−1} sign(β*_S)⟩|.
In order to prove Theorem 2, we need two lemmas first. The first one is immediate from the
KKT optimality conditions of the NNLS problem.
Lemma 1. β̂ is a minimizer of (2) if and only if there exists F ⊆ {1, . . . , p} such that

(1/n) X_j^⊤ (y − Xβ̂) = 0 and β̂_j > 0 for j ∈ F,   (1/n) X_j^⊤ (y − Xβ̂) ≤ 0 and β̂_j = 0 for j ∈ F^c.

The next lemma is crucial, since it permits us to decouple β̂_S from β̂_{S^c}.
Lemma 2. Consider the two non-negative least squares problems

(P1): min_{β^{(P1)}⪰0} (1/n) ‖Π_S^⊥(ε − X_{S^c} β^{(P1)})‖₂²,
(P2): min_{β^{(P2)}⪰0} (1/n) ‖Π_S y − X_S β^{(P2)} − Π_S X_{S^c} β̂^{(P1)}‖₂²

with minimizers β̂^{(P1)} of (P1) and β̂^{(P2)} of (P2), respectively. If β̂^{(P2)} ≻ 0, then setting β̂_S =
β̂^{(P2)} and β̂_{S^c} = β̂^{(P1)} yields a minimizer β̂ of the non-negative least squares problem (2).
Proof of Theorem 2. The proofs of parts (i) and (ii) overlap to a large extent. Steps specific to one of
the two parts are preceded by "(i)" or "(ii)". Consider problem (P1) of Lemma 2.
Step 1: Controlling ‖β̂^{(P1)}‖₁ via τ̂²(S), controlling ‖β̂^{(P1)}‖_∞ via ω̂(S).
(i) With ξ = Π_S^⊥ ε, since β̂^{(P1)} is a minimizer, it satisfies

(1/n)‖ξ − Zβ̂^{(P1)}‖₂² ≤ (1/n)‖ξ‖₂²  ⇒  (1/n)(β̂^{(P1)})^⊤ Z^⊤Z β̂^{(P1)} ≤ ‖β̂^{(P1)}‖₁ M,   M = max_{1≤j≤(p−s)} (2/n)|Z_j^⊤ ξ|.    (11)

As observed in (8), τ̂²(S) = min_{θ∈T^{p−s−1}} θ^⊤ (1/n) Z^⊤Z θ, s.t. the l.h.s. can be lower bounded via

(1/n)(β̂^{(P1)})^⊤ Z^⊤Z β̂^{(P1)} ≥ ( min_{θ∈T^{p−s−1}} θ^⊤ (1/n) Z^⊤Z θ ) ‖β̂^{(P1)}‖₁² = τ̂²(S) ‖β̂^{(P1)}‖₁².    (12)

Combining (11) and (12), we have ‖β̂^{(P1)}‖₁ ≤ M / τ̂²(S).
(ii) In view of Lemma 1, there exists a set F ⊆ {1, . . . , p−s} (we may assume F ≠ ∅, otherwise
β̂^{(P1)} = 0) such that β̂^{(P1)}_{F^c} = 0 and such that (1/n) Z_F^⊤ Z_F β̂^{(P1)}_F = (1/n) Z_F^⊤ ξ, so that

‖(1/n) Z_F^⊤ ξ‖_∞ = ‖(1/n) Z_F^⊤ Z_F β̂^{(P1)}_F‖_∞ ≥ ( min_{v∈V(F)} ‖(1/n) Z_F^⊤ Z_F v‖_∞ ) ‖β̂^{(P1)}‖_∞ ≥ ω̂(S) ‖β̂^{(P1)}‖_∞,    (13)

where we have used Definition 2. Since ‖(1/n) Z_F^⊤ ξ‖_∞ ≤ M, we conclude that ‖β̂^{(P1)}‖_∞ ≤ M / ω̂(S).
Step 2: Back-substitution into (P2). Equipped with the bounds just derived, we insert β̂^{(P1)} into
problem (P2) of Lemma 2, and show that in conjunction with the assumptions made for the minimum support coefficient β*_min(S), the ordinary least squares estimator corresponding to (P2),

β̄^{(P2)} = argmin_{β^{(P2)}} (1/n) ‖Π_S y − X_S β^{(P2)} − Π_S X_{S^c} β̂^{(P1)}‖₂²,

has only positive components. Lemma 2 then yields β̄^{(P2)} = β̂^{(P2)} = β̂_S. Using the closed form
expression for the ordinary least squares estimator, one obtains

β̄^{(P2)} = Σ_SS^{−1} (1/n) X_S^⊤ (X_S β*_S + Π_S ε − Π_S X_{S^c} β̂^{(P1)}) = β*_S + Σ_SS^{−1} (1/n) X_S^⊤ ε − Σ_SS^{−1} Σ_{SS^c} β̂^{(P1)}.

It remains to control the deviation terms M̃ = ‖(1/n) Σ_SS^{−1} X_S^⊤ ε‖_∞ and ‖Σ_SS^{−1} Σ_{SS^c} β̂^{(P1)}‖_∞. Using (10), we have

‖Σ_SS^{−1} Σ_{SS^c} β̂^{(P1)}‖_∞ ≤ ( max_{v: ‖v‖_∞=1} ‖Σ_SS^{−1} v‖_∞ ) ‖Σ_{SS^c} β̂^{(P1)}‖_∞ ≤ K(S) · { γ(S)‖β̂^{(P1)}‖₁ for (i);  γ₊(S)‖β̂^{(P1)}‖_∞ for (ii) }.    (14)

Step 3: Putting together the pieces. The two random terms M and M̃ are maxima of a finite collection of sub-Gaussian random variables, which can be controlled using standard techniques. Since
‖Z_j‖₂ ≤ ‖X_j‖₂ and ‖e_j^⊤ Σ_SS^{−1} X_S^⊤/√n‖₂ ≤ φ_min(S)^{−1/2} for all j, the sub-Gaussian parameters
of these collections are upper bounded by σ/√n and σ/(φ_min(S)^{1/2}√n), respectively. It follows
that the two events {M ≤ 2σ√(2 log p / n)} and {M̃ ≤ (2σ/φ_min(S)^{1/2})√(2 log p / n)} both hold with probability
no less than 1 − 10/p, cf. supplement. Subsequently, we work conditional on these two events. For
the choice of λ made for (i) and (ii), respectively, it follows that

‖β*_S − β̄^{(P2)}‖_∞ ≤ (2σ/φ_min(S)^{1/2}) √(2 log p / n) + λ K(S) · { γ(S) for (i);  γ₊(S) for (ii) },

and hence, using the lower bound on β*_min(S), that β̄^{(P2)} = β̂_S ≻ 0 and thus also that β̂^{(P1)} = β̂_{S^c}.
Subsequent thresholding with the respective choices made for λ yields the assertion.
In the sequel, we apply Theorem 2 to specific classes of designs commonly studied in the
literature, for which thresholded NNLS achieves an ℓ∞-error of the optimal order O(√(log(p)/n)).
We here only provide sketches; detailed derivations are relegated to the supplement.
Example 1: Power decay. Let the entries of the Gram matrix Σ be given by σ_jk = ρ^{|j−k|}, 1 ≤
j, k ≤ p, 0 ≤ ρ < 1, so that the {X_j}_{j=1}^p form a Markov random field in which X_j is conditionally
independent of {X_k}_{k∉{j−1,j,j+1}} given {X_{j−1}, X_{j+1}}, cf. [24]. The conditional independence
structure implies that all entries of Z^⊤Z are non-negative, such that, using the definition of ω̂(S),

ω̂(S) ≥ min_{1≤j≤(p−s)} min_{v⪰0, ‖v‖_∞=1} (1/n) |Z_j^⊤ Z v| ≥ min_{1≤j≤(p−s)} { (1/n)(Z^⊤Z)_jj + Σ_{k≠j} min{(Z^⊤Z)_jk, 0}/n },

where the sum on the r.h.s. vanishes; thus one computes ω̂(S) ≥ min_{1≤j≤(p−s)} (1/n)(Z^⊤Z)_jj ≥ 1 − 2ρ²/(1+ρ²)
for all S. For the remaining constants in (10), one can show that Σ_SS^{−1} is a band matrix of bandwidth
no more than 3 for all choices of S, such that φ_min(S) and K(S) are uniformly lower and upper
bounded, respectively, by constants depending on ρ only. By the geometric series formula, γ₊(S) ≤
2ρ/(1−ρ). In total, for a constant C_ρ > 0 depending on ρ only, one obtains an ℓ∞-error of the form

‖β̂(λ) − β*‖_∞ ≤ C_ρ σ √(2 log(p)/n).    (15)
Example 2: Equi-correlation. Suppose that σ_jk = ρ, 0 < ρ < 1, for all j ≠ k, and σ_jj = 1 for
all j. For any S, one computes that the matrix (1/n)Z^⊤Z is of the same regular structure with diagonal
entries all equal to 1 − κ and off-diagonal entries all equal to ρ − κ, where κ = ρ²s/(1 + (s − 1)ρ).
Therefore, using (8), the separating hyperplane constant (7) can be computed in closed form:

τ̂²(S) = (1 − ρ)ρ / ((s − 1)ρ + 1) + (1 − ρ)/(p − s) = O(s^{−1}).    (16)

Arguing as in (12) in the proof of Theorem 2, this allows one to show that with high probability,

‖β̂_{S^c}‖₁ ≤ 2σ√(2 log(p)/n) / τ̂²(S) ≤ ((s − 1)ρ + 1) 2σ√(2 log(p)/n) / ((1 − ρ)ρ).    (17)

On the other hand, using the same reasoning as in Example 1, ω̂(S) ≥ 1 − ρ = c_ρ > 0, say.
Choosing the threshold λ = (2σ/ω̂(S)) √(2 log p / n) as in part (ii) of Theorem 2 and combining the strong
ℓ1-bound (17) on the off-support coefficients with a slight modification of the bound (14), together
with φ_min(S) = 1 − ρ, yields again the desired optimal bound of the form (15).
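For this design, the closed form (16) can be coded directly; the numbers below (our illustration, with arbitrary parameter choices) show the O(s^{-1}) decay.

def tau_hat_sq_equicorr(rho, s, p):
    # Closed form (16) for the equi-correlated Gram matrix.
    return (1 - rho) * rho / ((s - 1) * rho + 1) + (1 - rho) / (p - s)

print([round(tau_hat_sq_equicorr(0.75, s, 500), 4) for s in (5, 20, 80)])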
Random designs. So far, the design matrix X has been assumed to be fixed. Consider the following ensemble of random matrices

Ens+ = {X = (x_ij) : {x_ij, 1 ≤ i ≤ n, 1 ≤ j ≤ p} i.i.d. from a sub-Gaussian distribution on R₊}.

Among others, the class of sub-Gaussian distributions on R₊ encompasses all distributions on a
bounded set on R₊, e.g. the family of beta distributions (with the uniform distribution as a special case) on [0, 1], Bernoulli distributions on {0, 1}, or more generally distributions on counts
{0, 1, . . . , K}, for some positive integer K. The ensemble Ens+ is well amenable to analysis,
since after suitable re-scaling the corresponding population Gram matrix Σ* = E[(1/n) X^⊤X] has
equi-correlation structure (Example 2): denoting the mean of the entries and their squares by μ and
μ₂, respectively, we have Σ* = (μ₂ − μ²)I + μ²11^⊤, such that re-scaling by 1/√μ₂ leads to equi-correlation with ρ = μ²/μ₂. As shown above, the incoherence constant τ̂²(S), which gives rise to a
strong bound on ‖β̂_{S^c}‖₁, scales favourably and can be computed in closed form. For random designs
from Ens+, one additionally has to take into account the deviation between Σ and Σ*. Using tools
from random matrix theory, we show that the deviation is moderate, of the order O(√(log(p)/n)).
Theorem 3. Let X be a random matrix from Ens+, scaled s.t. E[(1/n) X^⊤X] = ρI + (1 − ρ)11^⊤ for
some ρ ∈ (0, 1). Fix an S ⊆ {1, . . . , p}, |S| ≤ s. Then there exist constants c, c₁, c₂, c₃, C, C′ > 0
such that for all n ≥ C log(p)s²,

τ̂²(S) ≥ c s^{−1} − C′ √(log(p)/n)

with probability no less than 1 − 3/p − exp(−c₁n) − 2 exp(−c₂ log p) − exp(−c₃ log^{1/2}(p) s).
4
Experiments
Setup. We randomly generate data y = Xβ* + ε, where ε has i.i.d. standard Gaussian entries. We
consider two choices for the design X. For one set of experiments, the rows of X are drawn i.i.d.
from a Gaussian distribution whose covariance matrix has the power decay structure of Example 1
with parameter ρ = 0.7. For the second set, we pick a representative of the class Ens+ by drawing
each entry of X uniformly from [0, 1] and re-scaling s.t. the population Gram matrix Σ* has equi-correlation structure with ρ = 3/4. The target β* is generated by selecting its support S uniformly
at random and then setting β*_j = b · β_min(S)(1 + U_j), j ∈ S, where β_min(S) = C σ√(2 log(p)/n),
using upper bounds for the constant C as used for Examples 1 and 2; the {U_j}_{j∈S} are drawn i.i.d.
uniformly from [0, 1], and b is a parameter controlling the signal strength. The experiments can be
divided into two parts. In the first part, the parameter b is kept fixed while the aspect ratio p/n of X
and the fraction of sparsity s/n vary. In the second part, s/n is fixed to 0.2, while p/n and b vary.
When not fixed, s/n ∈ {0.05, 0.1, 0.15, 0.2, 0.25, 0.3}. The grid used for b is chosen specific to
the designs, calibrated such that the sparse recovery problems are sufficiently challenging. For the
design from Ens+, p/n ∈ {2, 3, 5, 10}, whereas for power decay p/n ∈ {1.5, 2, 2.5, 3, 3.5, 4}, for
reasons that become clear from the results. Each configuration is replicated 100 times for n = 500.
Comparison. Across these runs, we compare the probability of "success" of thresholded NNLS
(tNNLS), non-negative lasso (NNℓ1), thresholded non-negative lasso (tNNℓ1) and orthogonal matching pursuit (OMP, [22, 23]). For a regularization parameter λ ≥ 0, NNℓ1 is defined as a minimizer
β̂(λ) of min_{β⪰0} (1/n)‖y − Xβ‖₂² + λ1^⊤β. We also compare against the ordinary lasso (replacing 1^⊤β
by ‖β‖₁ and removing the non-negativity constraint); since its performance is mostly nearly equal,
partially considerably worse than that of its non-negative counterpart (see the bottom right panel of
Figure 4 for an example), the results are not shown in the remaining plots for the sake of better readability. "Success" is defined as follows. For tNNLS, we have "success" if min_{j∈S} β̂_j > max_{j∈S^c} β̂_j,
i.e. there exists a threshold that permits support recovery. For NNℓ1, we set λ̂ = 2‖X^⊤ε/n‖_∞,
which is the empirical counterpart to λ₀ = 2σ√(2 log(p)/n), the choice for the regularization parameter advocated in [14] to achieve the optimal rate for estimating β* in the ℓ2-norm, and compute
the whole set of solutions {β̂(λ), λ ≥ λ̂} using the non-negative lasso modification of LARS [26]
and check whether the sparsity pattern of one of these solutions recovers S. For tNNℓ1, we inspect
{β̂(λ) : λ ∈ [min{λ₀, λ̂}, max{λ₀, λ̂}]} and check whether min_{j∈S} β̂_j(λ) > max_{j∈S^c} β̂_j(λ) holds for one
of these solutions. For OMP, we check whether the support S is recovered in the first s steps. Note
that, when comparing tNNLS and tNNℓ1, the lasso is given an advantage, since we optimize over a
range of solutions.
Remark: We have circumvented the choice of the threshold λ, which is crucial in practice. In a
specific application [5], the threshold is chosen in a signal-dependent way, allowing domain experts
to interpret λ as a signal-to-noise ratio. Alternatively, one can exploit that under the conditions of
Theorem 2, the s largest coefficients of β̂ are those of the support. Given a suitable data-driven
estimate for s, e.g. that proposed in [25], λ can be chosen automatically.
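For concreteness, a single replicate of the Ens+ experiment can be sketched as follows (our illustration; the sizes and the signal constant below are stand-ins, not the paper's exact grid or calibrated constants). "Success" is the criterion used for tNNLS above.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, p, s, b = 500, 1000, 100, 3.0               # illustrative configuration

X = rng.uniform(0.0, 1.0, size=(n, p))         # design from Ens+
X /= np.sqrt((X ** 2).mean())                  # rescale so diag(E[X'X/n]) ~ 1

beta_min = np.sqrt(2 * np.log(p) / n)          # stand-in for C*sigma*sqrt(2 log p / n)
S = rng.choice(p, size=s, replace=False)
beta_star = np.zeros(p)
beta_star[S] = b * beta_min * (1 + rng.uniform(size=s))

y = X @ beta_star + rng.standard_normal(n)     # sigma = 1 noise
beta_hat, _ = nnls(X, y)

on_support = np.zeros(p, dtype=bool)
on_support[S] = True
# Success: some threshold separates the support from its complement.
success = beta_hat[on_support].min() > beta_hat[~on_support].max()
print('success:', success)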
Figure 3: Comparison of thresholded NNLS (red) and thresholded non-negative lasso (blue) for the
experiments with constant s/n, while b (abscissa) and p/n (symbols) vary. (Left panel: power decay;
right panel: Ens+; the y-axis shows the probability of success.)
Figure 4: Top: Comparison of thresholded NNLS (red) and the thresholded non-negative lasso
(blue) for the experiments with constant b, while s/n (abscissa) and p/n (symbols) vary. Bottom
left: Non-negative lasso without thresholding (blue) and orthogonal matching pursuit (magenta).
Bottom right: Thresholded non-negative lasso (blue) and thresholded ordinary lasso (green). (Panels
cover the power decay and Ens+ designs; the y-axis shows the probability of success.)
Results. The approaches NNℓ1 and OMP are not competitive: both work only with rather moderate levels of sparsity, with a breakdown at s/n = 0.15 for power decay as displayed in the bottom
left panel of Figure 4. For the second design, the results are even worse. This is in accordance with
the literature, where thresholding is proposed as a remedy [17, 18, 19]. Yet, for a wide range of configurations, tNNLS visibly outperforms tNNℓ1, a notable exception being power decay with larger
values for p/n. This is in contrast to the design from Ens+, where even p/n = 10 can be handled.
This difference requires further research.
Conclusion. To deal with higher levels of sparsity, thresholding seems to be inevitable. Thresholding the biased solution obtained by ℓ1-regularization requires a proper choice of the regularization
parameter and is likely to be inferior to thresholded NNLS with regard to the detection of small signals. The experimental results provide strong support for the central message of the paper: even in
high-dimensional, noisy settings, non-negativity constraints can be unexpectedly powerful when interacting with "self-regularizing" properties of the design. While this has previously been observed
empirically, our results provide a solid theoretical understanding of this phenomenon. A natural
question is whether this finding can be transferred to other kinds of "simple constraints" (e.g. box
constraints) that are commonly imposed.
References
[1] Y. Lin, D. Lee, and L. Saul. Nonnegative deconvolution for time of arrival estimation. In ICASSP, 2004.
[2] J. Bardsley and J. Nagy. Covariance-preconditioned iterative methods for nonnegatively constrained astronomical imaging. SIAM Journal on Matrix Analysis and Applications, 27:1184-1198, 2006.
[3] A. Szlam, Z. Guo, and S. Osher. A split Bregman method for non-negative sparsity penalized least squares with applications to hyperspectral demixing. In IEEE International Conference on Image Processing, 2010.
[4] L. Li and T. Speed. Parametric deconvolution of positive spike trains. The Annals of Statistics, 28:1279-1301, 2000.
[5] M. Slawski and M. Hein. Sparse recovery for Protein Mass Spectrometry data. In NIPS workshop on practical applications of sparse modelling, 2010.
[6] D. Donoho, I. Johnstone, J. Hoch, and A. Stern. Maximum entropy and the nearly black object. Journal of the Royal Statistical Society Series B, 54:41-81, 1992.
[7] D. Chen and R. Plemmons. Nonnegativity constraints in numerical analysis. In Symposium on the Birth of Numerical Analysis, 2007.
[8] A. Bruckstein, M. Elad, and M. Zibulevsky. On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations. IEEE Transactions on Information Theory, 54:4813-4820, 2008.
[9] D. Donoho and J. Tanner. Counting the faces of randomly-projected hypercubes and orthants, with applications. Discrete and Computational Geometry, 43:522-541, 2010.
[10] M. Wang and A. Tang. Conditions for a Unique Non-negative Solution to an Underdetermined System. In Proceedings of the Allerton Conference on Communication, Control, and Computing, 2009.
[11] M. Wang, W. Xu, and A. Tang. A unique nonnegative solution to an underdetermined system: from vectors to matrices. IEEE Transactions on Signal Processing, 59:1007-1016, 2011.
[12] C. Liew. Inequality Constrained Least-Squares Estimation. Journal of the American Statistical Association, 71:746-751, 1976.
[13] R. Tibshirani. Regression shrinkage and variable selection via the lasso. Journal of the Royal Statistical Society Series B, 58:671-686, 1996.
[14] S. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the Lasso. The Electronic Journal of Statistics, 3:1360-1392, 2009.
[15] P. Zhao and B. Yu. On model selection consistency of the lasso. Journal of Machine Learning Research, 7:2541-2567, 2006.
[16] M. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using l1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183-2202, 2009.
[17] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. The Annals of Statistics, 37:246-270, 2009.
[18] T. Zhang. Some Sharp Performance Bounds for Least Squares Regression with L1 Regularization. The Annals of Statistics, 37:2109-2144, 2009.
[19] S. Zhou. Thresholding procedures for high dimensional variable selection and statistical estimation. In NIPS, 2009.
[20] E. Greenshtein and Y. Ritov. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 6:971-988, 2004.
[21] D. Donoho and J. Tanner. Sparse nonnegative solution of underdetermined linear equations by linear programming. Proceedings of the National Academy of Sciences, 102:9446-9451, 2005.
[22] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50:2231-2242, 2004.
[23] T. Zhang. On the Consistency of Feature Selection using Greedy Least Squares Regression. Journal of Machine Learning Research, 10:555-568, 2009.
[24] H. Rue and L. Held. Gaussian Markov Random Fields. Chapman and Hall/CRC, Boca Raton, 2001.
[25] C. Genovese, J. Jin, and L. Wasserman. Revisiting Marginal Regression. Technical report, Carnegie Mellon University, 2009. http://arxiv.org/abs/0911.4080.
[26] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least Angle Regression. The Annals of Statistics, 32:407-499, 2004.
3,570 | 4,232 | Complexity of Inference in Latent Dirichlet
Allocation
David Sontag
New York University*
Daniel M. Roy
University of Cambridge
Abstract
We consider the computational complexity of probabilistic inference in Latent Dirichlet Allocation (LDA). First, we study the problem of finding
the maximum a posteriori (MAP) assignment of topics to words, where
the document's topic distribution is integrated out. We show that, when
the effective number of topics per document is small, exact inference takes
polynomial time. In contrast, we show that, when a document has a large
number of topics, finding the MAP assignment of topics to words in LDA
is NP-hard. Next, we consider the problem of finding the MAP topic distribution for a document, where the topic-word assignments are integrated
out. We show that this problem is also NP-hard. Finally, we briefly discuss
the problem of sampling from the posterior, showing that this is NP-hard
in one restricted setting, but leaving open the general question.
1
Introduction
Probabilistic models of text and topics, known as topic models, are powerful tools for exploring large data sets and for making inferences about the content of documents. Topic
models are frequently used for deriving low-dimensional representations of documents that
are then used for information retrieval, document summarization, and classification [Blei
and McAuliffe, 2008; Lacoste-Julien et al., 2009]. In this paper, we consider the computational complexity of inference in topic models, beginning with one of the simplest and most
popular models, Latent Dirichlet Allocation (LDA) [Blei et al., 2003]. The LDA model is
arguably one of the most important probabilistic models in widespread use today.
Almost all uses of topic models require probabilistic inference. For example, unsupervised
learning of topic models using Expectation Maximization requires the repeated computation
of marginal probabilities of what topics are present in the documents. For applications in
information retrieval and classification, each new document necessitates inference to determine what topics are present.
Although there is a wealth of literature on approximate inference algorithms for topic models, such as Gibbs sampling and variational inference [Blei et al., 2003; Griffiths and Steyvers,
2004; Mukherjee and Blei, 2009; Porteous et al., 2008; Teh et al., 2007], little is known
about the computational complexity of exact inference. Furthermore, the existing inference
algorithms, although well-motivated, do not provide guarantees of optimality. We choose
to study LDA because we believe that it captures the essence of what makes inference easy
or hard in topic models. We believe that a careful analysis of the complexity of popular
probabilistic models like LDA will ultimately help us build a methodology for spanning the
gap between theory and practice in probabilistic AI.
Our hope is that our results will motivate discussion of the following questions, guiding
research of both new topic models and the design of new approximate inference and learning
* This work was partially carried out while D.S. was at Microsoft Research New England.
algorithms. First, what is the structure of real-world LDA inference problems? Might there
be structure in "natural" problem instances that makes them different from hard instances
(e.g., those used in our reductions)? Second, how strongly does the prior distribution bias
the results of inference? How do the hyperparameters affect the structure of the posterior
and the hardness of inference?
We study the complexity of finding assignments of topics to words with high posterior
probability and the complexity of summarizing the posterior distributions on topics in a
document by either its expectation or points with high posterior density. In the former case,
we show that the number of topics in the maximum a posteriori assignment determines the
hardness. In the latter case, we quantify the sense in which the Dirichlet prior can be seen
to enforce sparsity and use this result to show hardness via a reduction from set cover.
2
MAP inference of word assignments
We will consider the inference problem for a single document. The LDA model states that the
document, represented as a collection of words w = (w₁, w₂, . . . , w_N), is generated as follows:
a distribution over the T topics is sampled from a Dirichlet distribution, θ ∼ Dir(α); then,
for i ∈ [N] := {1, . . . , N}, we sample a topic z_i ∼ Multinomial(θ) and word w_i ∼ β_{z_i}, where
β_t, t ∈ [T], are distributions on a dictionary of words. Assume that the word distributions
β_t are fixed (e.g., they have been previously estimated), and let l_it = log Pr(w_i | z_i = t) be
the log probability of the ith word being generated from topic t. After integrating out the
topic distribution vector, the joint distribution of the topic assignments conditioned on the
words w is given by

Pr(z₁, . . . , z_N | w) ∝ [Γ(Σ_t α_t) Π_t Γ(n_t + α_t)] / [Π_t Γ(α_t) Γ(Σ_t α_t + N)] · Π_{i=1}^N Pr(w_i | z_i),    (1)

where n_t is the total number of words assigned to topic t.
In this section, we focus on the inference problem of finding the most likely assignment of
topics to words, i.e. the maximum a posteriori (MAP) assignment. This has many possible
applications. For example, it can be used to cluster the words of a document, or as part of
a larger system such as part-of-speech tagging [Li and McCallum, 2005]. More broadly, for
many classification tasks involving topic models it may be useful to have word-level features
for whether a particular word was assigned to a given topic. From both an algorithm design
and complexity analysis point of view, this MAP problem has the additional advantage of
involving only discrete random variables.
Taking the logarithm of Eq. 1 and ignoring constants, finding the MAP assignment is seen
to be equivalent to the following combinatorial optimization problem:

Φ = max_{x_it∈{0,1}, n_t}  Σ_t log Γ(n_t + α_t) + Σ_{i,t} x_it l_it    (2)
subject to  Σ_t x_it = 1 for all i,   Σ_i x_it = n_t for all t,

where the indicator variable x_it = I[z_i = t] denotes the assignment of word i to topic t.
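For reference, the objective in Eq. 2 is cheap to evaluate for any fixed topic assignment; the following sketch (our addition) computes it up to constants.

import numpy as np
from scipy.special import gammaln

def map_objective(z, log_lik, alpha):
    # Log-posterior of a topic assignment, up to constants (Eq. 2).
    #   z:       length-N array with z[i] the topic of word i
    #   log_lik: N x T array with log_lik[i, t] = log Pr(w_i | z_i = t)
    #   alpha:   length-T array of Dirichlet hyperparameters
    T = log_lik.shape[1]
    n_t = np.bincount(z, minlength=T)              # words assigned per topic
    return gammaln(n_t + alpha).sum() + log_lik[np.arange(len(z)), z].sum()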
2.1
Exact maximization for small number of topics
Suppose a document only uses τ ≪ T topics. That is, T could be large, but we are
guaranteed that the MAP assignment for a document uses at most τ different topics. In this
section, we show how we can use this knowledge to efficiently find a maximizing assignment
of words to topics. It is important to note that we only restrict the maximum number of
topics per document, letting the Dirichlet prior and the likelihood guide the choice of the
actual number of topics present.
We first observe that, if we knew the number of words assigned to each topic, finding the
MAP assignment is easy. For t ∈ [T], let n̄_t be the number of words assigned to topic t
Figure 1: (Left) A LDA instance derived from a k-set packing instance. (Center) Plot of
F(n_t) = log Γ(n_t + α) for various values of α; the x-axis varies n_t, the number of words assigned
to topic t, and the y-axis shows F(n_t). (Right) Behavior of log Γ(n_t + α) as α → 0. The function
is stable everywhere but at zero, where the reward for sparsity increases without bound.
in the MAP assignment. Then, the MAP assignment x is found by solving the following
optimization problem:

max_{x_it∈{0,1}}  Σ_{i,t} x_it l_it    (3)
subject to  Σ_t x_it = 1 for all i,   Σ_i x_it = n̄_t for all t,

which is equivalent to weighted b-matching in a bipartite graph (the words are on one side,
the topics on the other) and can be optimally solved in time O(bm³), where b = max_t n̄_t =
O(N) and m = N + T [Schrijver, 2003].
We call (n₁, . . . , n_T) a valid partition when n_t ≥ 0 and Σ_t n_t = N. Using weighted b-matching, we can find a MAP assignment of words to topics by trying all (T choose τ) choices
of τ topics and all possible valid partitions with at most τ non-zeros.

for all subsets A ⊆ [T] such that |A| = τ do
  for all valid partitions n = (n₁, n₂, . . . , n_T) such that n_t = 0 for t ∉ A do
    Φ_{A,n} ← Weighted-B-Matching(A, n, l) + Σ_t log Γ(n_t + α_t)
  end for
end for
return arg max_{A,n} Φ_{A,n}
There are at most N^{τ−1} valid partitions with τ non-zero counts. For each of these, we solve
the b-matching problem to find the most likely assignment of words to topics that satisfies
the cardinality constraints. Thus, the total running time is O((NT)^τ (N + τ)³). This is
polynomial when the number of topics τ appearing in a document is a constant.
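A direct implementation of this enumeration (our sketch, intended only for tiny instances) reduces each weighted b-matching step to an ordinary assignment problem by duplicating every chosen topic t into n_t column copies; the constant Σ_t log Γ(α_t) is dropped since it does not depend on the assignment.

import numpy as np
from itertools import combinations
from scipy.special import gammaln
from scipy.optimize import linear_sum_assignment

def exact_map_assignment(log_lik, alpha, tau):
    # Exact MAP when the optimum uses at most tau topics; exponential in the
    # enumeration of subsets/partitions but polynomial for fixed tau.
    N, T = log_lik.shape
    best_val, best_z = -np.inf, None

    def compositions(total, parts):
        # All ways to write `total` as `parts` positive integers.
        if parts == 1:
            yield (total,)
            return
        for first in range(1, total - parts + 2):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

    for k in range(1, tau + 1):
        for A in combinations(range(T), k):
            for counts in compositions(N, k):
                cols = np.repeat(A, counts)          # n_t copies of each topic t
                rows, assign = linear_sum_assignment(-log_lik[:, cols])
                val = (log_lik[rows, cols[assign]].sum()
                       + sum(gammaln(c + alpha[t]) - gammaln(alpha[t])
                             for t, c in zip(A, counts)))
                if val > best_val:
                    z = np.empty(N, dtype=int)
                    z[rows] = cols[assign]
                    best_val, best_z = val, z
    return best_z, best_val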
2.2
Inference is NP-hard for large numbers of topics
In this section, we show that probabilistic inference is NP-hard in the general setting where a
document may have a large number of topics in its MAP assignment. Let WORD-LDA(α)
denote the decision problem of whether Φ > V (see Eq. 2) for some V ∈ R, where the
hyperparameters α_t = α for all topics. We consider both α < 1 and α ≥ 1 because, as
shown in Figure 1, the optimization problem is qualitatively different in these two cases.
Theorem 1. WORD-LDA(α) is NP-hard for all α > 0.
Proof. Our proof is a straightforward generalization of the approach used by Halperin and
Karp [2005] to show that the minimum entropy set cover problem is hard to approximate.
The proof is done by reduction from k-set packing (k-SP), for k ≥ 3. In k-SP, we are given
a collection of k-element sets over some universe of elements Ω with |Ω| = n. The goal
is to find the largest collection of disjoint sets. There exists a constant c < 1 such that
it is NP-hard to decide whether a k-SP instance has (i) a solution with n/k disjoint sets
covering all elements (called a perfect matching), or (ii) at most cn/k disjoint sets (called a
(cn/k)-matching).
We now describe how to construct a LDA inference problem from a k-SP instance. This
requires specifying the words in the document, the number of topics, and the word log
probabilities l_it. Let each element i ∈ Ω correspond to a word w_i, and let each set correspond
to one topic. The document consists of all of the words (i.e., Ω). We assign uniform
probability to the words in each topic, so that Pr(w_i | z_i = t) = 1/k for i ∈ t, and 0 otherwise.
Figure 1 illustrates the resulting LDA model. The topics are on the top, and the words
from the document are on the bottom. An edge is drawn between a topic (set) and a word
(element) if the corresponding set contains that element.
What remains is to show that we can solve some k-SP problem by using this reduction and
solving a WORD-LDA(α) problem. For technical reasons involving α > 1, we require that
k is sufficiently large. We will use the following result (we omit the proof due to space
limitations).
Lemma 2. Let P be a k-SP instance for k > (1 + α)², and let P′ be the derived WORD-LDA(α) instance. There exist constants C_U and C_L < C_U such that, if there is a perfect
matching in P, then Φ ≥ C_U. If, on the other hand, there is at most a (cn/k)-matching in
P, then Φ < C_L.
Let P be a k-SP instance for k > (3 + α)², P′ be the derived WORD-LDA(α) instance,
and C_U and C_L < C_U be as in Lemma 2. Then, by testing Φ < C_L and Φ ≥ C_U we can
decide whether P has a perfect matching or at best a (cn/k)-matching. Hence k-SP reduces
to WORD-LDA(α).
The bold lines in Figure 1 indicate the MAP assignment, which for this example corresponds
to a perfect matching for the original k-set packing instance. More realistic documents would
have significantly more words than topics used. Although this is not possible while keeping
k = 3, since the MAP assignment always has τ ≥ N/k, we can instead reduce from a k-set
packing problem with k ≫ 3. Lemma 2 shows that this is hard as well.
3
MAP inference of the topic distribution
In this section we consider the task of finding the mode of Pr(θ|w). This MAP problem
involves integrating out the topic assignments z_i, as opposed to the previously considered
MAP problem of integrating out the topic distribution θ. We will see that the MAP topic
distribution is not always well-defined, which will lead us to define and study alternative
formulations. In particular, we give a precise characterization of the MAP problem as one
of finding sparse topic distributions, and use this fact to give hardness results for several
settings. We also show settings for which MAP inference is tractable.
There are many potential applications of MAP inference of the document's topic distribution. For example, the distribution may be used for topic-based information retrieval or
as the feature vector for classification. As we will make clear later, this type of inference
results in sparse solutions. Thus, the MAP topic distribution provides a compact summary
of the document that could be useful for document summarization.
Let θ = (θ₁, . . . , θ_T). A straightforward application of Bayes' rule allows us to write the
posterior density of θ given w as

Pr(θ|w) ∝ ( Π_{t=1}^T θ_t^{α_t−1} ) · Π_{i=1}^N ( Σ_{t=1}^T θ_t β_it ),    (4)

where β_it = Pr(w_i | z_i = t). Taking the logarithm of the posterior and ignoring constants,
we obtain

ℓ(θ) = Σ_{t=1}^T (α_t − 1) log(θ_t) + Σ_{i=1}^N log( Σ_{t=1}^T θ_t β_it ).    (5)

We will use the shorthand ℓ(θ) = P(θ) + L(θ), where P(θ) = Σ_{t=1}^T (α_t − 1) log(θ_t) and
L(θ) = Σ_{i=1}^N log( Σ_{t=1}^T β_it θ_t ).
PT
To find the MAP ?, we maximize (5) subject to the constraint that t=1 ?t = 1 and ?t 0.
Unfortunately, this maximization problem can be degenerate. In particular, note that if
?t = 0 for ?t < 1, then the corresponding term in P (?) will take the value 1, overwhelming
the likelihood term. Thus, any feasible solution with the above property could be considered
?optimal?.
A similar problem arises during the maximum-likelihood estimation of a normal mixture
model, where the likelihood diverges to infinity as the variance of a mixture component
with a single data point approaches zero [Biernacki and Chr?etien, 2003; Kiefer and Wolfowitz, 1956]. In practice, one can enforce a lower bound on the variance or penalize such
configurations. Here we consider a similar tactic.
For ? > 0, let TOPIC-LDA(?) denote the optimization problem
X
?t = 1, ? ? ?t ? 1.
max (?) subject to
?
(6)
t
For ? = 0, we will denote the corresponding optimization problem by TOPIC-LDA. When
?t = ?, i.e. the prior distribution on the topic distribution is a symmetric Dirichlet,
we write TOPIC-LDA(?,? ) for the corresponding optimization problem. In the following sections we will study the structure and hardness of TOPIC-LDA, TOPIC-LDA(?) and
TOPIC-LDA(?,? ).
3.1
Polynomial-time inference for large hyperparameters (α_t ≥ 1)
When α_t ≥ 1, Eq. 5 is a concave function of θ. As a result, we can efficiently find θ* using a
number of techniques from convex optimization. Note that this is in contrast to the MAP
inference problem discussed in Section 2, which we showed was hard for all choices of α.
Since we are optimizing over the simplex (θ must be non-negative and sum to 1), we can
apply the exponentiated gradient method [Kivinen and Warmuth, 1995]. Initializing θ⁰ to
be the uniform vector, the update for time s is given by

θ_t^{s+1} = θ_t^s exp(η∇_t^s) / Σ_{t̄} θ_{t̄}^s exp(η∇_{t̄}^s),   ∇_t^s = (α_t − 1)/θ_t^s + Σ_{i=1}^N β_it / ( Σ_{t̄=1}^T θ_{t̄}^s β_{it̄} ),    (7)

where η is the step size and ∇^s is the gradient.
When α = 1 the prior disappears altogether and this algorithm simply corresponds to
optimizing the likelihood term. When α > 1, the prior corresponds to a bias toward a
particular topic distribution.
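In code, the update (7) is a few lines; the following sketch is our own illustration, assuming α_t ≥ 1 so the objective is concave, with the step size and iteration count chosen arbitrarily.

import numpy as np

def map_theta_eg(beta, alpha, eta=0.1, iters=2000):
    # Exponentiated-gradient ascent for the MAP topic distribution, Eq. (7).
    #   beta:  N x T matrix with beta[i, t] = Pr(w_i | z_i = t)
    #   alpha: length-T hyperparameters, assumed alpha_t >= 1
    N, T = beta.shape
    theta = np.full(T, 1.0 / T)                 # theta^0 is uniform
    for _ in range(iters):
        denom = beta @ theta                    # Pr(w_i | theta) for each word i
        grad = (alpha - 1.0) / theta + (beta / denom[:, None]).sum(axis=0)
        w = theta * np.exp(eta * grad)          # multiplicative update
        theta = w / w.sum()                     # renormalize onto the simplex
    return theta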
3.2
Small hyperparameters encourage sparsity (α < 1)
On the other hand, when α_t < 1, the first term in Eq. 5 is convex whereas the second term is
concave. This setting, of α much smaller than 1, occurs frequently in practice. For example,
learning a LDA model on a large corpus of NIPS abstracts with T = 200 topics, we find
that the hyperparameters found range from α_t = 0.0009 to 0.135, with the median being
0.01. Although in this setting it is difficult to find the global optimum (we will make this
precise in Theorem 6), one possibility for finding a local maximum is the Concave-Convex
Procedure [Yuille and Rangarajan, 2003].
In this section we prove structural results about the TOPIC-LDA(α, ε) solution space for
when α < 1. These results illustrate that the Dirichlet prior encourages sparse MAP solutions: the topic distribution will be large on as few topics as necessary to explain every
word of the document, and otherwise will be close to zero.
The following lemma shows that in any optimal solution to TOPIC-LDA(α, ε), for every
word, there is at least one topic that both has large probability and gives non-trivial probability to this word. We use K(α, T, N) = e^{−3/α} N^{−1} T^{−1/α} to refer to the lower bound on
the topic's probability.
Lemma 3. Let α < 1. All optimal solutions θ* to TOPIC-LDA(α, ε) have the following
property: for every word i, θ*_{t⋄} ≥ K(α, T, N), where t⋄ = arg max_t β_it θ*_t.
Proof sketch. If ε ≥ K(α, T, N) the claim trivially holds. Assume for the purpose of contradiction that there exists a word ī such that θ*_{t⋄} < K(α, T, N), where t⋄ = arg max_t β_{īt} θ*_t.
Let Y denote the set of topics t ≠ t⋄ such that θ*_t ≥ 2ε. Let Δ₁ = Σ_{t∈Y} θ*_t and Δ₂ =
Σ_{t∉Y, t≠t⋄} θ*_t. Note that Δ₂ < 2Tε. Consider

θ̄_{t⋄} = 1/n,   θ̄_t = ((1 − 1/n − Δ₂)/Δ₁) θ*_t for t ∈ Y,   θ̄_t = θ*_t for t ∉ Y, t ≠ t⋄.    (8)

It is easy to show that ∀t, θ̄_t ≥ ε, and Σ_t θ̄_t = 1. Finally, we show that ℓ(θ̄) > ℓ(θ*),
contradicting the optimality of θ*. The full proof is given in the supplementary material.
Next, we show that if a topic is not sufficiently "used" then it will be given a probability very close to zero. By used, we mean that for at least one word, the topic is close in probability to the largest contributor to the likelihood of the word. To do this, we need to define the notion of the dynamic range of a word, given as $\rho_i = \max_{t,t' :\, \beta_{it} > 0,\, \beta_{it'} > 0}\ \beta_{it'}/\beta_{it}$. We let the maximum dynamic range be $\rho = \max_i \rho_i$. Note that $\rho \ge 1$ and, for most applications, it is reasonable to expect $\rho$ to be small (e.g., less than 1000).
Lemma 4. Let $\alpha < 1$, and let $\theta^*$ be any optimal solution to TOPIC-LDA($\varepsilon,\alpha$). Suppose topic $\tilde t$ has $\theta^*_{\tilde t} < (\rho N)^{-1} K(\alpha, T, N)$. Then, $\theta^*_{\tilde t} \le e^{\frac{1}{1-\alpha}+2}\,\varepsilon$.
Proof. Suppose for the purpose of contradiction that $\theta^*_{\tilde t} > e^{\frac{1}{1-\alpha}+2}\,\varepsilon$. Consider $\hat\theta$ defined as follows: $\hat\theta_{\tilde t} = \varepsilon$, and $\hat\theta_t = \theta^*_t \frac{1-\varepsilon}{1-\theta^*_{\tilde t}}$ for $t \ne \tilde t$. We have:
$$\ell(\hat\theta) - \ell(\theta^*) = (1-\alpha)\log\frac{\theta^*_{\tilde t}}{\varepsilon} + (T-1)(1-\alpha)\log\frac{1-\theta^*_{\tilde t}}{1-\varepsilon} + \sum_{i=1}^{N}\log\left(\frac{\sum_t \hat\theta_t \beta_{it}}{\sum_t \theta^*_t \beta_{it}}\right). \tag{9}$$
Using the fact that $\log(1-z) \ge -2z$ for $z \in [0, \frac{1}{2}]$, it follows that
$$(T-1)(1-\alpha)\log\frac{1-\theta^*_{\tilde t}}{1-\varepsilon} \ge (T-1)(1-\alpha)\log\big(1-\theta^*_{\tilde t}\big) \ge -2(T-1)(1-\alpha)\,\theta^*_{\tilde t} \ge -2(T-1)(1-\alpha)(\rho N)^{-1}K(\alpha,T,N). \tag{10}$$
We have $\hat\theta_t \ge \theta^*_t$ for $t \ne \tilde t$, and so
$$\frac{\sum_t \hat\theta_t \beta_{it}}{\sum_t \theta^*_t \beta_{it}} \ge \frac{\sum_{t \ne \tilde t}\theta^*_t \beta_{it}}{\sum_{t \ne \tilde t}\theta^*_t \beta_{it} + \theta^*_{\tilde t}\,\beta_{i\tilde t}}. \tag{11}$$
Recall from Lemma 3 that, for each word $i$ and $\bar t = \arg\max_t \beta_{it}\theta^*_t$, we have $\theta^*_{\bar t} > K(\alpha,T,N)$. Necessarily $\bar t \ne \tilde t$. Therefore, using the fact that $\log\frac{1}{1+z} \ge -z$,
$$\log\left(\frac{\sum_{t\ne\tilde t}\theta^*_t\beta_{it}}{\sum_{t\ne\tilde t}\theta^*_t\beta_{it} + \theta^*_{\tilde t}\,\beta_{i\tilde t}}\right) \ge -\frac{\theta^*_{\tilde t}\,\beta_{i\tilde t}}{\sum_{t\ne\tilde t}\theta^*_t\beta_{it}} \ge -\frac{(\rho N)^{-1}K(\alpha,T,N)\,\beta_{i\tilde t}}{K(\alpha,T,N)\,\beta_{i\bar t}} \ge -\frac{1}{N}. \tag{12}$$
Thus, $\ell(\hat\theta) - \ell(\theta^*) > (1-\alpha)\log e^{\frac{1}{1-\alpha}+2} + 2(\alpha-1) - 1 = 0$, completing the proof.
Finally, putting together what we showed in the previous two lemmas, we conclude that all optimal solutions to TOPIC-LDA($\varepsilon,\alpha$) either have $\theta_t$ large or small, but not in between (that is, we have demonstrated a gap). We have the immediate corollary:
Theorem 5. For $\alpha < 1$, all optimal solutions to TOPIC-LDA($\varepsilon,\alpha$) have $\theta_t \le e^{\frac{1}{1-\alpha}+2}\,\varepsilon$ or $\theta_t \ge \rho^{-1} e^{-3/\alpha} N^{-2}\, T^{-1/\alpha}$.
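To get a feel for the size of this gap, the two thresholds of Theorem 5 are easy to evaluate numerically. The helper below is ours and purely illustrative, not part of the paper:

import math

def sparsity_gap(alpha, T, N, rho, eps):
    """Thresholds of Theorem 5: an optimal theta_t is below `low` or above `high`.

    Note: both values underflow to 0.0 in floating point for very large T and N.
    """
    low = math.exp(1.0 / (1.0 - alpha) + 2.0) * eps
    K = math.exp(-3.0 / alpha) / (N * T ** (1.0 / alpha))   # K(alpha, T, N)
    high = K / (rho * N)
    return low, high

For the small hyperparameter values observed in practice, the two regimes are separated by many orders of magnitude, which is what makes it natural to view the MAP problem as recovering a support set.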
3.3 Inference is NP-hard for small hyperparameters ($\alpha < 1$)
The previous results characterize optimal solutions to TOPIC-LDA($\varepsilon,\alpha$) and highlight the fact that optimal solutions are sparse. In this section we show that these same properties can be the source of computational hardness during inference. In particular, it is possible to encode set cover instances as TOPIC-LDA($\varepsilon,\alpha$) instances, where the set cover corresponds to those topics assigned appreciable probability.
Theorem 6. TOPIC-LDA($\varepsilon,\alpha$) is NP-hard for $\varepsilon \le K(\alpha,T,N)^{T/(1-\alpha)}\, T^{-N/(1-\alpha)}$ and $\alpha < 1$.
Proof. Consider a set cover instance consisting of a universe of elements and a family of sets, where we assume for convenience that the minimum cover is neither a singleton, all but one of the family of sets, nor the entire family of sets, and that there are at least two elements in the universe. As with our previous reduction, we have one topic per set and one word in the document for each element. We let $\Pr(w_i \mid z_i = t) = 0$ when element $w_i$ is not in set $t$, and a constant otherwise (we make every topic have the uniform distribution over the same number of words, some of which may be dummy words not appearing in the document). Let $S_i \subseteq [T]$ denote the set of topics to which word $i$ belongs. Then, up to additive constants, we have $P(\theta) = (\alpha-1)\sum_{t=1}^{T}\log(\theta_t)$ and $L(\theta) = \sum_{i=1}^{N}\log\big(\sum_{t\in S_i}\theta_t\big)$.
Let $\hat C \subseteq [T]$ be those topics $t \in [T]$ such that $\theta^*_t \ge K(\alpha, T, N)$, where $\theta^*$ is an optimal solution to TOPIC-LDA($\varepsilon,\alpha$). It immediately follows from Lemma 3 that $\hat C$ is a cover. Suppose for the purpose of contradiction that $\hat C$ is not a minimal cover. Let $\bar C$ be a minimal cover, and let $\bar\theta_t = \varepsilon$ for $t \notin \bar C$ and $\bar\theta_t = \frac{1-\varepsilon(T-|\bar C|)}{|\bar C|} > \frac{1}{T}$ otherwise. We will show that $\ell(\bar\theta) > \ell(\theta^*)$, contradicting the optimality of $\theta^*$, and thus proving that $\hat C$ is in fact minimal. This suffices to show that TOPIC-LDA($\varepsilon,\alpha$) is NP-hard in this regime.
For all $\theta$ in the simplex, we have $\sum_i \log(\max_{t\in S_i}\theta_t) \le L(\theta) \le 0$. Thus it follows that $L(\bar\theta) - L(\theta^*) \ge -N\log T$. Likewise, using the assumption that $T \ge |\bar C| + 1$, we have
$$P(\bar\theta) - P(\theta^*) \ge (1-\alpha)\Big(\log\tfrac{1}{\varepsilon} + (|\bar C|+1)\log K(\alpha,T,N)\Big) \tag{13}$$
$$\ge (1-\alpha)\Big(\log\tfrac{1}{\varepsilon} + T\log K(\alpha,T,N)\Big), \tag{14}$$
where we have conservatively only included the terms $t \notin \bar C$ for $P(\bar\theta)$ and taken $\theta^*_t \in \{\varepsilon, K(\alpha,T,N)\}$ with $|\bar C|+1$ terms taking the latter value. It follows that
$$\ell(\bar\theta) - \ell(\theta^*) = P(\bar\theta) - P(\theta^*) + L(\bar\theta) - L(\theta^*) > (1-\alpha)\log\tfrac{1}{\varepsilon} - \Big((1-\alpha)\,T\log\tfrac{1}{K(\alpha,T,N)} + N\log T\Big). \tag{15}$$
This is greater than 0 precisely when $(1-\alpha)\log\tfrac{1}{\varepsilon} > \log\big(T^N K(\alpha,T,N)^{-T}\big)$.
Note that although $\varepsilon$ is exponentially small in $N$ and $T$, the size of its representation in binary is polynomial in $N$ and $T$, and thus polynomial in the size of the set cover instance.
It can be shown that as $\varepsilon \to 0$, the solutions to TOPIC-LDA($\varepsilon,\alpha$) become degenerate, concentrating their support on the minimal set of topics $C \subseteq [T]$ such that $\forall i\ \exists t \in C$ s.t. $\beta_{it} > 0$. A generalization of this result holds for TOPIC-LDA($\varepsilon$) and suggests that, while it may be possible to give a more sensible definition of TOPIC-LDA as the set of solutions of TOPIC-LDA($\varepsilon$) as $\varepsilon \to 0$, these solutions are unlikely to be of any practical use.
4 Sampling from the posterior
The previous sections of the paper focused on MAP inference problems. In this section, we study the problem of marginal inference in LDA.
Theorem 7. For $\alpha \ge 1$, one can approximately sample from $\Pr(\theta \mid w)$ in polynomial time.
Proof sketch. The density given in Eq. 4 is log-concave when $\alpha \ge 1$. The algorithm given in Lovász and Vempala [2006] can be used to approximately sample from the posterior.
Although polynomial, it is not clear whether the algorithm given in Lovász and Vempala [2006], based on random walks, is of practical interest (e.g., the running time bound has a constant of $10^{30}$). However, we believe our observation provides insight into the complexity of sampling when $\alpha$ is not too small, and may be a starting point towards explaining the empirical success of using Markov chain Monte Carlo to do inference in LDA.
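For intuition about the MCMC approach mentioned above, a per-document collapsed Gibbs sampler in the style of Griffiths and Steyvers [2004] is easy to sketch. The code below is our illustration, not the paper's; it implements the standard update $\Pr(z_i = t \mid z_{-i}, w) \propto (n_{-i,t} + \alpha_t)\,\beta_{it}$ and averages the implied posterior means of $\theta$:

import numpy as np

def gibbs_theta_posterior(beta, alpha, iters=1000, burn=100, rng=None):
    """Collapsed Gibbs over topic assignments z for a single document.

    beta  : (N, T) word-topic likelihoods beta[i, t] = Pr(w_i | z_i = t)
    alpha : (T,) Dirichlet hyperparameters
    Returns a Monte Carlo estimate of E[theta | w].
    """
    rng = rng or np.random.default_rng(0)
    N, T = beta.shape
    z = rng.integers(T, size=N)               # random initial assignments
    counts = np.bincount(z, minlength=T).astype(float)
    theta_sum = np.zeros(T)
    for it in range(iters):
        for i in range(N):
            counts[z[i]] -= 1.0               # remove word i's assignment
            p = (counts + alpha) * beta[i]    # Pr(z_i = t | z_-i, w), unnormalized
            p /= p.sum()
            z[i] = rng.choice(T, p=p)
            counts[z[i]] += 1.0
        if it >= burn:
            # posterior mean of theta given z is (counts + alpha) / (N + sum(alpha))
            theta_sum += (counts + alpha) / (N + alpha.sum())
    return theta_sum / (iters - burn)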
Next, we show that when $\alpha$ is extremely small, it is NP-hard to sample from the posterior. We again reduce from set cover. The intuition behind the proof is that, when $\alpha$ is small enough, an appreciable amount of the probability mass corresponds to the sparsest possible $\theta$ vectors whose supported topics together cover all of the words. As a result, we could directly read off the minimal set cover from the posterior marginals $\mathbb{E}[\theta_t \mid w]$.
Theorem 8. When $\alpha < \big((4N+4)\, T^{N}\, \Gamma(N)\big)^{-1}$, it is NP-hard to approximately sample from $\Pr(\theta \mid w)$, under randomized reductions.
The full proof can be found in the supplementary material. Note that it is likely that one would need an extremely large and unusual corpus to learn an $\alpha$ so small. Our results illustrate a large gap in our knowledge about the complexity of sampling as a function of $\alpha$. We feel that tightening this gap is a particularly exciting open problem.
5 Discussion
In this paper, we have shown that the complexity of MAP inference in LDA strongly depends on the effective number of topics per document. When a document is generated from a small number of topics (regardless of the number of topics in the model), WORD-LDA can be solved in polynomial time. We believe this is representative of many real-world applications. On the other hand, if a document can use an arbitrary number of topics, WORD-LDA is NP-hard. The choice of hyperparameters for the Dirichlet does not affect these results.
We have also studied the problem of computing MAP estimates and expectations of the
topic distribution. In the former case, the Dirichlet prior enforces sparsity in a sense that
we make precise. In the latter case, we show that extreme parameterizations can similarly
cause the posterior to concentrate on sparse solutions. In both cases, this sparsity is shown
to be a source of computational hardness.
In related work, Seppänen et al. [2003] suggest a heuristic for inference that is also applicable to LDA: if there exists a word that can only be generated with high probability from one of the topics, then the corresponding topic must appear in the MAP assignment whenever that word appears in a document. Miettinen et al. [2008] give a hardness reduction and greedy algorithm for learning topic models. Although the models they consider are very different from LDA, some of the ideas may still be applicable. More broadly, it would be interesting to consider the complexity of learning the per-topic word distributions $\beta_t$.
Our paper suggests a number of directions for future study. First, our exact algorithms
can be used to evaluate the accuracy of approximate inference algorithms, for example by
comparing to the MAP of the variational posterior. On the algorithmic side, it would be
interesting to improve the running time of the exact algorithm from Section 2.1. Also, note
that we did not give an analogous exact algorithm for the MAP topic distribution when
the posterior has support on only a small number of topics. In this setting, it may be
possible to find this set of topics by trying all $S \subseteq [T]$ of small cardinality and then doing
a (non-uniform) grid search over the topic distribution restricted to support S.
Finally, our structural results on the sparsity induced by the Dirichlet prior draw connections between inference in topic models and sparse signal recovery. We proved that the MAP topic distribution has, for each topic $t$, either $\theta_t \approx \varepsilon$ or $\theta_t$ bounded below by some value (much larger than $\varepsilon$). Because of this gap, we can approximately view the MAP problem as searching for a set corresponding to the support of $\theta$. Our work motivates the study of
greedy algorithms for MAP inference in topic models, analogous to those used for set cover.
One could even consider learning algorithms that use this greedy algorithm within the inner
loop [Krause and Cevher, 2010].
Acknowledgments. D.M.R. is supported by a Newton International Fellowship. We thank Tommi Jaakkola and anonymous reviewers for helpful comments.
References
C. Biernacki and S. Chrétien. Degeneracy in the maximum likelihood estimation of univariate Gaussian mixtures with EM. Statist. Probab. Lett., 61(4):373-382, 2003. ISSN 0167-7152.
D. Blei and J. McAuliffe. Supervised topic models. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Adv. in Neural Inform. Processing Syst. 20, pages 121-128. MIT Press, Cambridge, MA, 2008.
D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, 2003. ISSN 1532-4435.
T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. Natl. Acad. Sci. USA, 101(Suppl 1):5228-5235, 2004. doi: 10.1073/pnas.0307752101.
E. Halperin and R. M. Karp. The minimum-entropy set cover problem. Theor. Comput. Sci., 348(2):240-250, 2005. ISSN 0304-3975. doi: 10.1016/j.tcs.2005.09.015.
J. Kiefer and J. Wolfowitz. Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Ann. Math. Statist., 27:887-906, 1956. ISSN 0003-4851.
J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Inform. and Comput., 132, 1995.
A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In Proc. Int. Conf. on Machine Learning (ICML), 2010.
S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Adv. in Neural Inform. Processing Syst. 21, pages 897-904. 2009.
W. Li and A. McCallum. Semi-supervised sequence modeling with syntactic topic models. In Proc. of the 20th Nat. Conf. on Artificial Intelligence, volume 2, pages 813-818. AAAI Press, 2005.
L. Lovász and S. Vempala. Fast algorithms for logconcave functions: Sampling, rounding, integration and optimization. In Proc. of the 47th Ann. IEEE Symp. on Foundations of Comput. Sci., pages 57-68. IEEE Computer Society, 2006. ISBN 0-7695-2720-5.
P. Miettinen, T. Mielikäinen, A. Gionis, G. Das, and H. Mannila. The discrete basis problem. IEEE Trans. Knowl. Data Eng., 20(10):1348-1362, 2008.
I. Mukherjee and D. M. Blei. Relative performance guarantees for approximate inference in latent Dirichlet allocation. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Adv. in Neural Inform. Processing Syst. 21, pages 1129-1136. 2009.
I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In Proc. of the 14th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pages 569-577, New York, NY, USA, 2008. ACM.
A. Schrijver. Combinatorial optimization: Polyhedra and efficiency. Vol. A, volume 24 of Algorithms and Combinatorics. Springer-Verlag, Berlin, 2003. ISBN 3-540-44389-4. Paths, flows, matchings, Chapters 1-38.
J. K. Seppänen, E. Bingham, and H. Mannila. A simple algorithm for topic identification in 0-1 data. In Proc. of the 7th European Conf. on Principles and Practice of Knowledge Discovery in Databases, pages 423-434. Springer-Verlag, 2003.
Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Adv. in Neural Inform. Processing Syst. 19, volume 19, 2007.
A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Comput., 15:915-936, April 2003. ISSN 0899-7667.
Video Annotation and Tracking with Active Learning
Carl Vondrick, UC Irvine ([email protected])
Deva Ramanan, UC Irvine ([email protected])
Abstract
We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this problem as one of active learning,
and show that we can obtain excellent performance by querying frames that, if annotated, would produce a large expected change in the estimated object track. We
implement a constrained tracker and compute the expected change for putative
annotations with efficient dynamic programming algorithms. We demonstrate our
framework on four datasets, including two benchmark datasets constructed with
key frame annotations obtained by Amazon Mechanical Turk. Our results indicate
that we could obtain equivalent labels for a small fraction of the original cost.
1 Introduction
With the decreasing costs of personal portable cameras and the rise of online video sharing services
such as YouTube, there is an abundance of unlabeled video readily available. To both train and evaluate computer vision models for video analysis, this data must be labeled. Indeed, many approaches
have demonstrated the power of data-driven analysis given labeled video footage [12, 17].
But, annotating massive videos is prohibitively expensive. The twenty-six hour VIRAT video data
set consisting of surveillance footage of cars and people cost tens of thousands of dollars to annotate
despite deploying state-of-the-art annotation protocols [13]. Existing video annotation protocols
typically work by having users (possibly on Amazon Mechanical Turk) label a sparse set of key
frames followed by either linear interpolation [16] or nonlinear tracking [1, 15].
We propose an adaptive key-frame strategy which uses active learning to intelligently query a worker
to label only certain objects at only certain frames that are likely to improve performance. This
approach exploits the fact that, for real footage, not all objects/frames are "created equal"; some objects during some frames are "easy" to automatically annotate in that they are stationary (such as parked cars in VIRAT [13]) or moving in isolation (such as a single basketball player running down the court during a fast break [15]). In these cases, a few user clicks are enough to constrain a visual tracker to produce accurate tracks. Rather, user clicks should be spent on "harder" objects/frames
Related work (Active learning): We refer the reader to the excellent survey in [14] for a contemporary review of active learning. Our approach is an instance of active structured prediction
Figure 1: Videos from the VIRAT data set [13] can have hundreds of objects per frame. Many of
those objects are easily tracked except for a few difficult cases. Our active learning framework automatically focuses the worker's effort on the difficult instances (such as occlusion or deformation).
[8, 7], since we train object models that predict a complex, structured label (an object track) rather
than a binary class output. However, rather than training a single car model over several videos
(which must be invariant to instance-specific properties such as color and shape), we train a separate car model for each car instance to be tracked. From this perspective, our training examples are
individual frames rather than videos. But notably, these examples are non-i.i.d; indeed, temporal
dependencies are crucial for obtaining tracks from sparse labels. We believe this property makes
video a prime candidate for active learning, possibly simplifying its theoretical analysis [14, 2] because one does not face an adversarial ordering of data. Our approach is similar to recent work in
active labeling [4], except we determine which part of the label the user should annotate in order to
improve performance the most. Finally, we use a novel query strategy appropriate for video: rather
than use expected information gain (expensive to compute for structured predictors) or label entropy
(too coarse an approximation), we use the expected label change to select a frame. We select the frame that, when labeled, will produce the largest change in the estimated track of an object.
Related work (Interactive video annotation): There has also been work on interactive tracking
from the computer vision community. [5] describe efficient data structures that enable interactive
tracking, but do not focus on frame query strategies as we do. [16] and [1] describe systems that
allow users to manually correct drifting trackers, but this requires annotators to watch an entire video
in order to determine such erroneous frames, a significant burden in our experience.
2 Tracking
In this section, we outline the dynamic programming tracker of [15]. We will extend it in Section
3 to construct an efficient active learning algorithm. We begin by describing a method for tracking
a single object, given a sparse set of key frame bounding-box annotations. As in [15], we use a
visual tracker to interpolate the annotations for the unlabeled in-between frames. We define $b_t^i$ to be a bounding box at frame $t$ at pixel position $i$. Let $\Lambda$ be the non-empty set of worker annotations, represented as a set of bounding boxes. Without loss of generality, assume that all paths are on the interval $0 \le t \le T$.
2.1 Discriminative Object Templates
We build a discriminative visual model of the object in order to predict its location. For every
bounding box annotation in $\Lambda$, we extract its associated image patch and resize it to the average size in the set. We then extract both histogram of oriented gradients (HOG) [9] and color features: $\phi_n(b_n) = [\mathrm{HOG}\ \ \mathrm{RGB}]^T$, where RGB are the means and covariances of the color channels. When trained with a linear classifier, these color features are able to learn a quadratic decision boundary in RGB-space. In our experiments, we used a HOG bin size of either 4 or 8 depending on the size of the object.
We then learn a model trained to discriminate the object against the background. For every annotated frame, we extract an extremely large set of negative bounding boxes that do not significantly overlap with the positive instances. Given a set of features $\phi_n(b_n)$ with labels $y_n \in \{-1, 1\}$ classifying them as positive or negative, we train a linear SVM by minimizing the loss function:
$$w^* = \operatorname*{argmin}_{w}\ \frac{1}{2}\, w \cdot w + C \sum_{n=1}^{N} \max\big(0,\ 1 - y_n\, w \cdot \phi_n(b_n)\big) \tag{1}$$
We use liblinear [10] in our experiments. Training typically took only a few seconds.
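As a concrete stand-in, the template of Eq. 1 can be trained with scikit-learn's liblinear wrapper. This sketch is ours, not the authors' code: HOG/RGB feature extraction is assumed to happen elsewhere, and `train_template` is a hypothetical helper name.

import numpy as np
from sklearn.svm import LinearSVC

def train_template(pos_feats, neg_feats, C=1.0):
    """Train the instance-specific appearance template of Eq. 1.

    pos_feats: (P, D) features of annotated (positive) boxes
    neg_feats: (Q, D) features of non-overlapping (negative) boxes
    Returns the weight vector w used by the unary cost U_t.
    """
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)), -np.ones(len(neg_feats))])
    clf = LinearSVC(C=C, loss="hinge")   # hinge loss solved via liblinear
    clf.fit(X, y)
    return clf.coef_.ravel()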
2.2 Motion Model
In order to score a putative interpolated path $b_{0:T} = \{b_0 \ldots b_T\}$, we define the energy function $E(b_{0:T})$ comprised of both unary and pairwise terms:
$$E(b_{0:T}) = \sum_{t=0}^{T} U_t(b_t) + S(b_t, b_{t-1}) \tag{2}$$
$$U_t(b_t) = \min\big(-w \cdot \phi_t(b_t),\ \lambda_1\big), \qquad S(b_t, b_{t-1}) = \lambda_2 \|b_t - b_{t-1}\|^2 \tag{3}$$
where $U_t(b_t)$ is the local match cost and $S(b_t, b_{t-1})$ is the pairwise spring. $U_t(b_t)$ scores how well a particular $b_t$ matches against the learned appearance model $w$, but is truncated by $\lambda_1$ so as to reduce the penalty when the object undergoes an occlusion. We are able to efficiently compute the dot product $w \cdot \phi_t(b_t)$ using integral images on the RGB weights [6]. $S(b_t, b_{t-1})$ favors smooth motion and prevents the tracked object from teleporting across the scene.
2.3 Efficient Optimization
We can recover the missing annotations by computing the optimal path as given by the energy function. We find the least cost path $b^*_{0:T}$ over the exponential set of all possible paths:
$$b^*_{0:T} = \operatorname*{argmin}_{b_{0:T}}\ E(b_{0:T}) \quad \text{s.t.} \quad b_t = b_t^i \ \ \forall b_t^i \in \Lambda \tag{4}$$
subject to the constraint that the path crosses through the annotations labeled by the worker in $\Lambda$. We note that these constraints can be removed by simply redefining $U_t(b_t) = \infty\ \ \forall b_t \ne b_t^i$.
A naive approach to minimizing (4) would take $O(K^T)$ for $K$ locations per frame. However, we can efficiently solve the above problem in $O(TK^2)$ by using dynamic programming through a forward pass recursion [3]:
$$C_0^{\rightarrow}(b_0) = U_0(b_0), \qquad C_t^{\rightarrow}(b_t) = U_t(b_t) + \min_{b_{t-1}}\big[C_{t-1}^{\rightarrow}(b_{t-1}) + S(b_t, b_{t-1})\big] \tag{5}$$
$$\pi_t^{\rightarrow}(b_t) = \operatorname*{argmin}_{b_{t-1}}\ C_{t-1}^{\rightarrow}(b_{t-1}) + S(b_t, b_{t-1}) \tag{6}$$
By storing the pointers in (6), we are able to reconstruct the least cost path by backtracking from the last frame $T$. We note that we can further reduce this computation to $O(TK)$ by applying distance transform speed-ups to the pairwise term in (3) [11].
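A compact sketch of the constrained DP of Eqs. 4-6 might look as follows. This is our own dense $O(TK^2)$ version, not the authors' code, and it skips the distance-transform speedup:

import numpy as np

def constrained_track(U, S, annotations):
    """Least-cost path through per-frame costs with annotation constraints.

    U : (T+1, K) unary costs U[t, i]
    S : (K, K) pairwise costs S[i, j] between consecutive boxes
    annotations : dict {t: i} of user-labeled locations
    Returns a list of K-indices b_0 ... b_T minimizing Eq. 2.
    """
    T1, K = U.shape
    U = U.copy()
    for t, i in annotations.items():      # clamp labeled frames (Eq. 4)
        mask = np.full(K, np.inf)
        mask[i] = U[t, i]
        U[t] = mask
    C = np.empty_like(U)                  # forward costs (Eq. 5)
    P = np.zeros((T1, K), dtype=int)      # backpointers (Eq. 6)
    C[0] = U[0]
    for t in range(1, T1):
        scores = C[t - 1][:, None] + S    # rows: previous box, cols: current box
        P[t] = scores.argmin(axis=0)
        C[t] = U[t] + scores.min(axis=0)
    path = [int(C[-1].argmin())]
    for t in range(T1 - 1, 0, -1):        # backtrack along stored pointers
        path.append(int(P[t, path[-1]]))
    return path[::-1]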
3 Active Learning
Let $\text{curr}_{0:T}$ be the current best estimate for the path given a set of user annotations $\Lambda$. We wish to compute which frame the user should annotate next, $t^*$. In the ideal case, if we had knowledge of the ground-truth path $b^{gt}_{0:T}$, we should select the frame $t$, that when annotated with $b_t^{gt}$, would produce a new estimated path closest to the ground-truth. Let us write $\text{next}_{0:T}(b_t^{gt})$ for the estimated track given the augmented constraint set $\Lambda' = \Lambda \cup b_t^{gt}$. The optimal next frame is:
$$t_{\text{opt}} = \operatorname*{argmin}_{0 \le t \le T}\ \sum_{j=0}^{T} \text{err}\big(b_j^{gt},\ \text{next}_j(b_t^{gt})\big) \tag{7}$$
where err could be squared error or a thresholded overlap (in which err evaluates to 0 or 1 depending upon whether the two locations sufficiently overlap or not). Unfortunately, we cannot directly compute (7) since we do not know the true labels ahead of time.
3.1 Maximum Expected Label Change (ELC)
We make two simplifying assumptions to implement the previous ideal selection strategy, inspired by the popular maximum expected gradient length (EGL) algorithm for active learning [14] (which selects an example so as to maximize the expected change in a learned model). First, we change the minimization to a maximization and replace the ground-truth error with the change in track label: $\text{err}(b_j^{gt}, \text{next}_j(b_t^{gt})) \rightarrow \text{err}(\text{curr}_j, \text{next}_j(b_t^{gt}))$. Intuitively, if we make a large change in the estimated track, we are likely to be taking a large step toward the ground-truth solution. However, this requires knowing the ground-truth location $b_t^{gt}$. We make the second assumption that we have access to an accurate estimate of $P(b_t^i)$, which is the probability that, if we show the user frame $t$, then they will annotate a particular location $i$. We can use this distribution to compute an expected change in track label:
$$t^* = \operatorname*{argmax}_{0 \le t \le T}\ \sum_{i=0}^{K} P(b_t^i) \cdot \Delta I(b_t^i) \quad \text{where} \quad \Delta I(b_t^i) = \sum_{j=0}^{T} \text{err}\big(\text{curr}_j,\ \text{next}_j(b_t^i)\big) \tag{8}$$
(a) One click: initial frame only. (b) Two clicks: initial and requested frame. (c) Identical objects. (d) About to intersect. (e) Intersection point. (f) After intersection.
Figure 2: We consider a synthetic video of two nearly identical rectangles rotating around a point: one clockwise and the other counterclockwise. The rectangles intersect every 20 frames, at which
point the tracker does not know which direction the true rectangle is following. Did they bounce or
pass through? (a) Our framework realizes the ambiguity can be resolved by requesting annotations
when they do not intersect. Due to the periodic motion, a fixed rate tracker may request annotations
at the intersection points, resulting in wasted clicks. The expected label change plateaus because
every point along the maximas provide the same amount of disambiguating information. (b) Once
the requested frame is annotated, that corresponding segment is resolved, but the others remain
ambiguous. In this example, our framework can determine the true path for a particular rectangle in
only 7 clicks, while a fixed rate tracker may require 13 clicks.
The above selects the frame that, when annotated, produces the largest expected track label change.
We now show how to compute P (bit ) and ?I(bit ) using costs and constrained paths, respectively,
from the dynamic-programming based visual tracker described in Section 2. By considering every
possible space-time location that a worker could annotate, we are able to determine which frame we
expect could change the current path the most. Even though this calculation searches over an exponential number of paths, we are able to compute it in polynomial time using dynamic programming.
Moreover, (8) can be parallelized across frames in order to guarantee a rapid response time, often
necessary due to the interactive nature of active learning.
3.2 Annotation Likelihood and Estimated Tracks
A user has access to global knowledge and video history when annotating a frame. To capture such global information, we define the annotation likelihood of location $b_t^i$ to be the score of the best track given that additional annotation:
$$P(b_t^i) \propto \exp\left(\frac{-\Omega(b_t^i)}{\sigma^2}\right) \quad \text{where} \quad \Omega(b_t^i) = E\big(\text{next}_{0:T}(b_t^i)\big) \tag{9}$$
The above formulation only assigns high probabilities to locations that lie on paths that agree with the global constraints in $\Lambda$, as explained in Fig. 2 and Fig. 3. To compute energies $\Omega(b_t^i)$ for all
Figure 3: Consider two identical rectangles that
translate, but never intersect. Although both objects have the same appearance, our framework
does not query for new annotations because the
pairwise cost has made it unlikely that the two
objects switch identities, indicated by a single
mode in the probability map. A probability exclusively using unary terms would be bimodal.
Figure 4: Consider a white rectangle moving on
a white background. Since it is impossible to
distinguish the foreground from the background,
our framework will query for the midpoint and
gracefully degrade to a fixed rate labeling. If the
object is extremely difficult to localize, the active
learner will automatically decide the optimal annotation strategy is to use fixed rate key frames.
spacetime locations $b_t^i$, we use a standard two-pass dynamic programming algorithm for computing min-marginals:
$$\Omega(b_t^i) = C_t^{\rightarrow}(b_t^i) + C_t^{\leftarrow}(b_t^i) - U(b_t^i) \tag{10}$$
where $C_t^{\leftarrow}(b_t^i)$ corresponds to intermediate costs computed by running the recursive algorithm from (5) backward in time. By caching forward and backward pointers $\pi_t^{\rightarrow}(b_t^i)$ and $\pi_t^{\leftarrow}(b_t^i)$, the associated tracks $\text{next}_{0:T}(b_t^i)$ can be found by backtracking both forward and backward from any spacetime location $b_t^i$.
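The two-pass computation is symmetric to the forward recursion above. A minimal sketch (ours, not the paper's) of Eq. 10, reusing the dense cost arrays from the earlier tracking snippet and assuming a symmetric pairwise cost $S$:

import numpy as np

def min_marginals(U, S):
    """Two-pass DP: Omega[t, i] = cost of the best full path passing through (t, i)."""
    T1, K = U.shape
    Cf = np.empty_like(U)
    Cb = np.empty_like(U)
    Cf[0] = U[0]
    for t in range(1, T1):                       # forward pass (Eq. 5)
        Cf[t] = U[t] + (Cf[t - 1][:, None] + S).min(axis=0)
    Cb[-1] = U[-1]
    for t in range(T1 - 2, -1, -1):              # same recursion, run backward
        Cb[t] = U[t] + (Cb[t + 1][None, :] + S).min(axis=1)
    return Cf + Cb - U                           # Eq. 10: count the unary once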
3.3 Label Change
We now describe a dynamic programming algorithm for computing the label change $\Delta I(b_t^i)$ for all possible spacetime locations $b_t^i$. To do so, we define intermediate quantities $\Delta_t^{\rightarrow}(b_t^i)$ which represent the label change up to time $t$ given the user annotates location $b_t^i$:
$$\Delta_0^{\rightarrow}(b_0) = \text{err}\big(\text{curr}_0, \text{next}_0(b_0)\big) \tag{11}$$
$$\Delta_t^{\rightarrow}(b_t) = \text{err}\big(\text{curr}_t, \text{next}_t(b_t)\big) + \Delta_{t-1}^{\rightarrow}\big(\pi_t^{\rightarrow}(b_t)\big) \tag{12}$$
We can compute $\Delta_t^{\leftarrow}(b_t^i)$, the expected label change due to frames $t$ to $T$ given a user annotation at $b_t^i$, by running the above recursion backward in time. The total label change is their sum, minus the double-counted error from frame $t$:
$$\Delta I(b_t^i) = \Delta_t^{\rightarrow}(b_t^i) + \Delta_t^{\leftarrow}(b_t^i) - \text{err}\big(\text{curr}_t, \text{next}_t(b_t^i)\big) \tag{13}$$
(13) is sensitive to small spatial shifts; i.e. $\Delta I(b_t^i) \not\approx \Delta I(b_t^{i+})$. To reduce the effect of imprecise human labeling (which we encounter in practice), we replace the label change with a worst-case label change computed over a neighboring window $N(b_t^i)$:
$$\widetilde{\Delta I}(b_t^i) = \min_{b_t^j \in N(b_t^i)} \Delta I(b_t^j) \tag{14}$$
By selecting frames that have a large expected "worst-case" label change, we avoid querying frames that require precise labeling and instead query for frames that are easy to label (e.g., the user may annotate any location within a small neighborhood and still produce a large label change).
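Given the cached pointer arrays from the two DP passes, Eqs. 11-13 reduce to two cheap sweeps. The sketch below is our reading of the recursion; `P_fwd[t]` and `P_bwd[t]` are assumed to hold, for each box index at frame t, the best previous- and next-frame locations, and `err` is the 0/1 overlap test:

import numpy as np

def label_change(P_fwd, P_bwd, curr, err, K):
    """Delta I of Eq. 13 at every spacetime location (t, i)."""
    T1 = len(curr)
    e = np.array([[err(curr[t], i) for i in range(K)] for t in range(T1)])
    Df = np.empty((T1, K))
    Db = np.empty((T1, K))
    Df[0] = e[0]
    for t in range(1, T1):                    # forward recursion (Eqs. 11-12)
        Df[t] = e[t] + Df[t - 1][P_fwd[t]]
    Db[-1] = e[-1]
    for t in range(T1 - 2, -1, -1):           # same recursion, run backward
        Db[t] = e[t] + Db[t + 1][P_bwd[t]]
    return Df + Db - e                        # Eq. 13: frame t counted once

The worst-case windowing of Eq. 14 is then a local minimum filter over the second axis of the returned array.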
3.4 Stopping Criteria
Our final active learning algorithm is as follows: we query a frame $t^*$ according to (8), add the user annotation to the constraint set $\Lambda$, retrain the object template with additional training examples
(a) One click: initial frame only. (b) Two clicks: initial and requested frame. (c) Training. (d) Walking, jacket on. (e) Taking off jacket. (f) Walking, jacket off.
Figure 5: We analyze a video of a man who takes off a jacket and changes his pose. A tracker trained
only on the initial frame will lose the object when his appearance changes. Our framework is able
to determine which additional frame the user should annotate in order to resolve the track. (a) Our
framework does not expect any significant label change when the person is wearing the same jacket
as in the training frame (black curve). But, when the jacket is removed and the person changes
his pose (colorful curves), the tracker cannot localize the object and our framework queries for an
additional annotation. (b) After annotating the requested frame, the tracker learns the color of the
person's shirt and gains confidence in its track estimate. A fixed rate tracker may pick a frame where
the person is still wearing the jacket, resulting in a wasted click. (c-f) The green box is the predicted
path with one click and red box is with two clicks. If there is no green box, it is the same as the red.
extracted from frame $t^*$ (according to (1)), and repeat. We stop requesting annotations once we are confident that additional annotations will not significantly change the predicted path:
$$\max_{0 \le t \le T}\ \sum_{i=0}^{K} P(b_t^i) \cdot \Delta I(b_t^i) < \text{tolerance} \tag{15}$$
We then report $b^*_{0:T}$ as the final annotated track as found in (4). We note, however, that in practice external factors, such as budget, will often trigger the stopping condition before we have obtained a perfect track. As long as the budget is sufficiently high, the reported annotations will closely match the actual location of the tracked object.
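Putting the pieces together, the overall query loop might be sketched as follows. This is our pseudocode-level sketch reusing `constrained_track` from the earlier snippet; `annotation_prob`, `label_change`, `retrain_unary`, and `ask_worker` are hypothetical stand-ins for Eq. 9, Eqs. 11-14, Eq. 1, and the crowdsourcing interface, none of which are specified as code in the paper:

def active_annotation(U, S, ask_worker, tolerance, budget):
    """Query loop of Section 3.4: pick frames by expected label change (Eq. 8)."""
    annotations = {}
    while budget > 0:
        curr = constrained_track(U, S, annotations)
        P = annotation_prob(U, S, annotations)        # Eq. 9, via min-marginals
        dI = label_change_scores(U, S, annotations, curr)  # Eqs. 11-14
        scores = (P * dI).sum(axis=1)                 # expected change per frame
        if scores.max() < tolerance:                  # stopping rule (Eq. 15)
            break
        t = int(scores.argmax())                      # Eq. 8
        annotations[t] = ask_worker(t)                # user labels frame t
        U = retrain_unary(annotations)                # update template via Eq. 1
        budget -= 1
    return constrained_track(U, S, annotations)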
We also note that one can apply our active learning algorithm in parallel for multiple objects in a
video. We maintain separate object models $w$ and constraint sets $\Lambda$ for each object. We select the object and frame with the maximum expected label change according to (8). We demonstrate that
this strategy naturally focuses labeling effort on the more difficult objects in a video.
4 Qualitative Experiments
In order to demonstrate our framework's capabilities, we show how our approach handles a couple of interesting annotation problems. We have assembled two data sets: a synthetic video of easy-to-localize rectangles maneuvering in an uncluttered background, and a real-world data set of actors following scripted walking patterns.
(a) One click: initial frame only. (b) Two clicks: initial and requested frame. (c) Training image. (d) Entering occlusion. (e) Total occlusion. (f) After occlusion.
Figure 6: We investigate a car from [13] that undergoes a total occlusion and later reappears. The
tracker is able to localize the car until it enters the occlusion, but it cannot recover when the car
reappears. (a) Our framework expects a large label change during the occlusion and when the object
is lost. The largest label change occurs when the object begins to reappear because this frame would
lock the tracker back onto the correct path. (b) When the tracker receives the requested annotation,
it is able to recover from the occlusion, but it is still confused when the object is not visible.
(a) Initial frame. (b) Rotation. (c) Scale. (d) Estimated.
Figure 7: We examine situations where there are many easy-to-localize objects (e.g., stationary
objects) and only a few difficult instances. In this example, red boxes were manually annotated and
black boxes are automatically estimated. Our framework realizes that the stationary objects are not
likely to change their label, so it focuses annotations on moving objects.
We refer the reader to the figures. Fig.2 shows how our framework is able to resolve inherently
ambiguous motion with the minimum number of annotations. Fig.3 highlights how our framework
does not request annotations when the paths of two identical objects are disjoint because the motion
is not ambiguous. Fig.4 reveals how our framework will gracefully degrade to fixed rate key frames
if the tracked object is difficult to localize. Fig.5 demonstrates motion of objects that deform. Fig.6
shows how we are able to detect occlusions and automatically recover by querying for a correct
annotation. Finally, Fig.7 shows how we are able to transfer wasted clicks from stationary objects
on to moving objects.
Figure 8: A hard scene in a basketball game [15]. Players frequently undergo total and partial
occlusion, alter their pose, and are difficult to localize due to a cluttered background.
(a) VIRAT Cars [13]
(b) Basketball Players [15]
Figure 9: We compare active key frames (green curve) vs. fixed rate key frames (red curve) on a subset (a few thousand frames) of the VIRAT videos and part of a basketball game. We could improve
performance by increasing annotation frequency, but this also increases the cost. By decreasing
the annotation frequency in the easy sections and instead transferring those clicks to the difficult
frames, we achieve superior performance over the current methods on the same budget. (a) Due to
the large number of stationary objects in VIRAT, our framework assigns a tremendous number of
clicks to moving objects, allowing us to achieve nearly zero error. (b) By focusing annotation effort
on ambiguous frames, we show nearly a 5% improvement on basketball players.
5 Benchmark Results
We validate our approach on both the VIRAT challenge video surveillance data set [13] and the
basketball game studied in [15]. VIRAT is unique for its enormous size of over three million frames
and up to hundreds of annotated objects in each frame. The basketball game is extremely difficult
due to cluttered backgrounds, motion blur, frequent occlusions, and drastic pose changes.
We evaluate the performance of our tracker using active key frames versus fixed rate key frames. A
fixed rate tracker simply requests annotations every T frames, regardless of the video content. For
active key frames, we use the annotation schedule presented in section 3. Our key frame baseline
is the state-of-the-art labeling protocol used to originally annotate both datasets [15, 13]. In a given
video, we allow our active learning protocol to iteratively pick a frame and an object to annotate
until the budget is exhausted. We then run the tracker described in section 2 constrained by these
key frames and compare its performance.
We score the two key frame schedules by determining how well the tracker is able to estimate
the ground truth annotations. For every frame, we consider a prediction to be correct as long as
it overlaps the ground truth by at least 30%, a threshold that agrees with our qualitative rating of
performance. We compare our active approach to a fixed-rate baseline for a fixed amount of user
effort: is it better to spend X user clicks on active or fixed-rate key frames? Fig.9 shows the former
strategy is better. Indeed, we can annotate the VIRAT data set for one tenth of its original cost.
Acknowledgements: Funding for this research was provided by NSF grants 0954083 and
0812428, ONR-MURI Grant N00014-10-1-0933, an NSF GRF, and support from Intel and Amazon.
References
[1] A. Agarwala, A. Hertzmann, D. Salesin, and S. Seitz. Keyframe-based tracking for rotoscoping and animation. In ACM Transactions on Graphics (TOG), volume 23, pages 584-591. ACM, 2004. 1, 2
[2] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 65-72, New York, NY, USA, 2006. ACM. 2
[3] R. Bellman. Some problems in the theory of dynamic programming. Econometrica: Journal of the Econometric Society, pages 37-48, 1954. 3
[4] S. Branson, P. Perona, and S. Belongie. Strong supervision from weak annotation: Interactive training of deformable part models. ICCV. 2
[5] A. Buchanan and A. Fitzgibbon. Interactive feature tracking using k-d trees and dynamic programming. In CVPR 06, volume 1, pages 626-633. Citeseer, 2006. 2
[6] F. Crow. Summed-area tables for texture mapping. ACM SIGGRAPH Computer Graphics, 18(3):207-212, 1984. 3
[7] A. Culotta, T. Kristjansson, A. McCallum, and P. Viola. Corrective feedback and persistent learning for information extraction. Artificial Intelligence, 170(14-15):1101-1122, 2006. 1
[8] A. Culotta and A. McCallum. Reducing labeling effort for structured prediction tasks. In Proceedings of the National Conference on Artificial Intelligence, volume 20, page 746. AAAI Press; MIT Press, 2005. 1
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, pages I: 886-893, 2005. 2
[10] R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874, 2008. 2
[11] P. Felzenszwalb and D. Huttenlocher. Distance transforms of sampled functions. Cornell Computing and Information Science Technical Report TR2004-1963, 2004. 3
[12] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. Freeman. SIFT flow: dense correspondence across different scenes. In Proceedings of the 10th European Conference on Computer Vision: Part III, pages 28-42. Springer-Verlag, 2008. 1
[13] S. Oh, A. Hoogs, A. Perera, N. Cuntoor, C.-C. Chen, J. T. Lee, S. Mukherjee, J. K. Aggarwal, H. Lee, L. Davis, E. Swears, X. Wang, Q. Ji, K. Reddy, M. Shah, C. Vondrick, H. Pirsiavash, D. Ramanan, J. Yuen, A. Torralba, B. Song, A. Fong, A. Roy-Chowdhury, and M. Desai. A large-scale benchmark dataset for event recognition in surveillance video. In CVPR, 2011. 1, 7, 8
[14] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009. 1, 2, 3
[15] C. Vondrick, D. Ramanan, and D. Patterson. Efficiently scaling up video annotation on crowdsourced marketplaces. ECCV, 2010. 1, 2, 8
[16] J. Yuen, B. Russell, C. Liu, and A. Torralba. LabelMe video: Building a video database with human annotations. 2009. 1, 2
[17] J. Yuen and A. Torralba. A data-driven approach for event prediction. Computer Vision - ECCV 2010, pages 707-720, 2010. 1
The Impact of Unlabeled Patterns in Rademacher Complexity Theory for Kernel Classifiers
Davide Anguita, Alessandro Ghio, Luca Oneto, Sandro Ridella
Department of Biophysical and Electronic Engineering
University of Genova
Via Opera Pia 11A, I-16145 Genova, Italy
{Davide.Anguita,Alessandro.Ghio}@unige.it
{Luca.Oneto,Sandro.Ridella}@unige.it
Abstract
We derive here new generalization bounds, based on Rademacher Complexity theory, for model selection and error estimation of linear (kernel) classifiers, which
exploit the availability of unlabeled samples. In particular, two results are obtained: the first one shows that, using the unlabeled samples, the confidence term
of the conventional bound can be reduced by a factor of three; the second one
shows that the unlabeled samples can be used to obtain much tighter bounds, by
building localized versions of the hypothesis class containing the optimal classifier.
1 Introduction
Understanding the factors that influence the performance of a statistical procedure is a key step for finding a way to improve it. One of the most explored procedures in the machine learning approach to pattern classification aims at solving the well-known model selection and error estimation problem, which targets the estimation of the generalization error and the choice of the optimal predictor from a set of possible classifiers. For reaching this target, several approaches have been proposed [1, 2, 3, 4], which provide an upper bound on the generalization ability of the classifier and can be used for model selection purposes as well. Typically, all these bounds consist of three terms: the first one is the empirical error of the classifier (i.e. the error performed on the training data), the second term is a bias that takes into account the complexity of the class of functions which the classifier belongs to, and the third one is a confidence term, which depends on the cardinality of the training set. These approaches are quite interesting because they investigate the finite sample behavior of a classifier, instead of the asymptotic one, even though their practical applicability has been questioned for a long time¹. One of the most recent methods for obtaining these bounds is to exploit the Rademacher Complexity, which is a powerful statistical tool that has been deeply investigated during the last years [5, 6, 7]. This approach has been shown to be of practical use, outperforming more traditional methods [8, 9] for model selection in the small-sample regime [10, 5, 6], i.e. when the dimensionality of the samples is comparable to, or even larger than, the cardinality of the training set.
We show in this work how its performance can be further improved by exploiting some extra knowledge on the problem. In fact, real-world classification problems are often composed of datasets with labeled and unlabeled data [11, 12]: for this reason an interesting challenge is finding a way to exploit the unlabeled data for obtaining tighter bounds and, therefore, better error estimations.
In this paper, we present two methods for exploiting the unlabeled data in the Rademacher Complexity theory [2]. First, we show how the unlabeled data can have a role in reducing the confidence
1
See, for example, the NIPS 2004 Workshop (Ab)Use of Bounds or the 2002 Neurocolt Workshop on Bounds
less than 0.5
1
term, by obtaining a new bound that takes into account both labeled and unlabeled data. Then, we
propose a method, based on [7], which exploits the unlabeled data for selecting a better hypothesis
space, which the classifier belongs to, resulting in a much sharper and accurate bound.
2 Theoretical framework and results
We consider the following prediction problem: based on a random observation of $X \in \mathcal{X} \subseteq \mathbb{R}^d$ one has to estimate $Y \in \mathcal{Y} \subseteq \{-1, 1\}$ by choosing a suitable prediction rule $f : \mathcal{X} \to [-1, 1]$. The generalization error $L(f) = \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\, \ell(f(X), Y)$ associated to the prediction rule is defined through a bounded loss function $\ell(f(X), Y) : [-1, 1] \times \mathcal{Y} \to [0, 1]$. We observe a set of labeled samples $D_{n_l} : \{(X_1^l, Y_1^l), \ldots, (X_{n_l}^l, Y_{n_l}^l)\}$ and a set of unlabeled ones $D_{n_u} : \{(X_1^u), \ldots, (X_{n_u}^u)\}$. The data consist of a sequence of independent, identically distributed (i.i.d.) samples with the same distribution $P(\mathcal{X}, \mathcal{Y})$ for $D_{n_l}$ and $D_{n_u}$. The goal is to obtain a bound on $L(f)$ that takes into account both the labeled and unlabeled data. As we do not know the distribution that has generated the data, we do not know $L(f)$ but only its empirical estimate $L_{n_l}(f) = \frac{1}{n_l}\sum_{i=1}^{n_l} \ell(f(X_i^l), Y_i^l)$.
In the typical context of Structural Risk Minimization (SRM) [13] we define an infinite sequence of hypothesis spaces of increasing complexity $\{F_i,\ i = 1, 2, \ldots\}$, then we choose a suitable function space $F_i$ and, consequently, a model $f^* \in F_i$ that fits the data. As we do not know the true data distribution, we can only say that:

$$\{L(f) - L_{n_l}(f)\}_{f \in F_i} \leq \sup_{f \in F_i}\{L(f) - L_{n_l}(f)\} \qquad (1)$$

or, equivalently:

$$L(f) \leq L_{n_l}(f) + \sup_{f \in F_i}\{L(f) - L_{n_l}(f)\}, \qquad \forall f \in F_i \qquad (2)$$
In this framework, the SRM procedure brings us to the following choice of the function space and the corresponding optimal classifier:

$$f^*, F^* : \arg\min_{F_i \in \{F_1, F_2, \ldots\}}\left[\min_{f \in F_i} L_{n_l}(f) + \sup_{f \in F_i}\{L(f) - L_{n_l}(f)\}\right] \qquad (3)$$

Since the generalization bias ($\sup_{f \in F_i}\{L(f) - L_{n_l}(f)\}$) is a random variable, it is possible to statistically analyze it and obtain a bound that holds with high probability [5].
From this point, we will consider two types of prediction rule with the associated loss function:

$$f_H(x) = \mathrm{sign}\left(w^T\phi(x) + b\right), \qquad \ell_H(f_H(x), y) = \frac{1 - y\,f_H(x)}{2} \qquad (4)$$

$$f_S(x) = \begin{cases} \min\left(1,\ w^T\phi(x) + b\right) & \text{if } w^T\phi(x) + b > 0 \\ \max\left(-1,\ w^T\phi(x) + b\right) & \text{if } w^T\phi(x) + b \leq 0 \end{cases}, \qquad \ell_S(f_S(x), y) = \frac{1 - y\,f_S(x)}{2} \qquad (5)$$

where $\phi(\cdot) : \mathbb{R}^d \to \mathbb{R}^D$ with $D \gg d$, $w \in \mathbb{R}^D$ and $b \in \mathbb{R}$. The function $\phi(\cdot)$ is introduced to allow for a later introduction of kernels, even though, for simplicity, we will focus only on the linear case. Note that both the hard loss $\ell_H(f_H(x), y)$ and the soft loss (or ramp loss) [14] $\ell_S(f_S(x), y)$ are bounded ($[0, 1]$) and symmetric ($\ell(f(x), y) = 1 - \ell(f(x), -y)$). Then, we recall the definition of Rademacher Complexity ($\hat{R}$) for a class of functions $F$:
$$\hat{R}_{n_l}(F) = \mathbb{E}_\sigma \sup_{f \in F} \frac{2}{n_l}\sum_{i=1}^{n_l}\sigma_i\,\ell(f(x_i), y_i) = \mathbb{E}_\sigma \sup_{f \in F} \frac{1}{n_l}\sum_{i=1}^{n_l}\sigma_i\,f(x_i) \qquad (6)$$

where $\sigma_1, \ldots, \sigma_{n_l}$ are $n_l$ independent Rademacher random variables, i.e. independent random variables for which $P(\sigma_i = +1) = P(\sigma_i = -1) = 1/2$, and the last equality holds if we use one of the losses defined before. Note that $\hat{R}$ is a computable realization of the expected Rademacher Complexity $R(F) = \mathbb{E}_{(\mathcal{X},\mathcal{Y})}\hat{R}(F)$. The most renowned result in Rademacher Complexity theory states that [2]:

$$L(f)_{f \in F} \leq L_{n_l}(f)_{f \in F} + \hat{R}_{n_l}(F) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2n_l}} \qquad (7)$$

which holds with probability $(1 - \delta)$ and allows us to solve the problem of Eq. (3).
2.1 Exploiting unlabeled samples for reducing the confidence term
Assuming that the amount of unlabeled data is larger than the number of labeled samples, we split them in blocks of similar size by defining the quantity $m = \lfloor n_u / n_l \rfloor + 1$, so that we can consider a ghost sample $D'_{mn_l}$ composed of $mn_l$ patterns. Then, we can upper bound the expected generalization bias in the following way (to simplify the notation, we write $\ell_i \equiv \ell(f(x_i), y_i)$ for the loss on the $i$-th labeled pattern and $\ell'_k$ for the loss on the $k$-th ghost pattern):

$$\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\sup_{f \in F}\{L(f) - L_{n_l}(f)\} = \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\sup_{f \in F}\left[\mathbb{E}_{\{\mathcal{X}',\mathcal{Y}'\}}\frac{1}{m}\sum_{i=1}^{m}\frac{1}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\ell'_k \;-\; \frac{1}{n_l}\sum_{i=1}^{n_l}\ell_i\right]$$

$$\leq\ \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\,\mathbb{E}_{\{\mathcal{X}',\mathcal{Y}'\}}\ \frac{1}{m}\sum_{i=1}^{m}\sup_{f \in F}\frac{1}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\left[\ell'_k - \ell_{|k|_{n_l}}\right]$$

$$=\ \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\,\mathbb{E}_{\{\mathcal{X}',\mathcal{Y}'\}}\,\mathbb{E}_\sigma\ \frac{1}{m}\sum_{i=1}^{m}\sup_{f \in F}\frac{1}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\sigma_{|k|_{n_l}}\left[\ell'_k - \ell_{|k|_{n_l}}\right]$$

$$\leq\ \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\,\mathbb{E}_\sigma\ \frac{1}{m}\sum_{i=1}^{m}\sup_{f \in F}\frac{2}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\sigma_{|k|_{n_l}}\,\ell'_k \;=\; \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\ \frac{1}{m}\sum_{i=1}^{m}\hat{R}^i_{n_l}(F)$$

where $|k|_{n_l} = (k - 1) \bmod n_l + 1$. The last quantity (that we call the Expected Extended Rademacher Complexity $\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F)$) and the expected generalization bias are both deterministic quantities and we know only one realization of them, dependent on the sample. Then, we can use McDiarmid's inequality [15] to obtain:
$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \hat{R}_{n_u}(F) + \epsilon\right] \leq \qquad (8)$$

$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\sup_{f \in F}\{L(f) - L_{n_l}(f)\} + a\epsilon\right] \;+ \qquad (9)$$

$$P\left[\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F) \geq \hat{R}_{n_u}(F) + (1-a)\epsilon\right] \leq \qquad (10)$$

$$e^{-2n_l a^2\epsilon^2} + e^{-\frac{(mn_l)(1-a)^2\epsilon^2}{2}} \qquad (11)$$

with $a \in [0, 1]$. By choosing $a = \frac{\sqrt{m}}{2 + \sqrt{m}}$, we can write:

$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \frac{1}{m}\sum_{i=1}^{m}\hat{R}^i_{n_l}(F) + \epsilon\right] \leq 2e^{-\frac{2mn_l\epsilon^2}{(2+\sqrt{m})^2}} \qquad (12)$$

and obtain an explicit bound which holds with probability $(1 - \delta)$:

$$L(f)_{f \in F} \leq L_{n_l}(f)_{f \in F} + \frac{1}{m}\sum_{i=1}^{m}\hat{R}^i_{n_l}(F) + \frac{2 + \sqrt{m}}{\sqrt{m}}\sqrt{\frac{\log\frac{2}{\delta}}{2n_l}} \qquad (13)$$
where $\hat{R}^i_{n_l}(F)$ is the Rademacher Complexity of the class $F$ computed on the $i$-th block of unlabeled data. Note that for $m = 1$ the training set does not contain any unlabeled data and the bound given by Eq. (7) is recovered, while for large $m$ the confidence term is reduced by a factor of 3. At first sight, it would seem impossible to compute the term $\hat{R}^i_{n_l}$ without knowing the labels of the data, but it is easy to show that this is not the case. In fact, let us define $K_i^+ = \left\{k \in \{(i-1)\,n_l + 1, \ldots, i\cdot n_l\} : \sigma_{|k|_{n_l}} = +1\right\}$ and $K_i^- = \left\{k \in \{(i-1)\,n_l + 1, \ldots, i\cdot n_l\} : \sigma_{|k|_{n_l}} = -1\right\}$; then we have:

$$\hat{R}_{n_u}(F) = 1 + \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_\sigma\sup_{f \in F}\frac{2}{n_l}\left[\sum_{k \in K_i^+}\left(\ell(f_k, y_k) - 1\right) - \sum_{k \in K_i^-}\ell(f_k, y_k)\right]$$

$$= 1 + \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_\sigma\sup_{f \in F}\left[-\frac{2}{n_l}\sum_{k \in K_i^+}\ell(f_k, -y_k) - \frac{2}{n_l}\sum_{k \in K_i^-}\ell(f_k, y_k)\right]$$

$$= 1 + \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_\sigma\sup_{f \in F}\left[-\frac{2}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\ell\left(f_k, -\sigma_{|k|_{n_l}}y_k\right)\right]$$

$$= 1 - \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_\sigma\inf_{f \in F}\left[\frac{2}{n_l}\sum_{k=(i-1)n_l+1}^{i\cdot n_l}\ell\left(f_k, \sigma_{|k|_{n_l}}\right)\right]$$
which corresponds to solving a classification problem using all the available data with random labels.
The expectation can be easily computed with some Monte Carlo trials.
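As an illustration, the following sketch (our own, not the authors' code) approximates the block quantities $\hat{R}^i_{n_l}(F)$ by Monte Carlo: for each block of unlabeled patterns it draws random $\pm 1$ labels, trains a classifier, and averages $1 - 2\cdot(\text{training error})$. A soft-margin linear SVM from scikit-learn is used here as a stand-in for the bounded-loss learner of Eqs. (4)-(5), so the inner infimum is only approximated:

```python
import numpy as np
from sklearn.svm import LinearSVC

def extended_rademacher(X_blocks, n_trials=10, seed=0):
    """Monte Carlo estimate of the Extended Rademacher Complexity:
    for each block, train on random +/-1 labels and average 1 - 2*error,
    i.e. one realization of 1 - inf_f (2/n_l) sum_k loss(f_k, sigma_k)."""
    rng = np.random.default_rng(seed)
    block_vals = []
    for Xb in X_blocks:                    # each block holds n_l patterns
        vals = []
        for _ in range(n_trials):
            sigma = rng.choice([-1.0, 1.0], size=Xb.shape[0])
            clf = LinearSVC(C=1.0).fit(Xb, sigma)       # stand-in learner
            err = np.mean(clf.predict(Xb) != sigma)     # hard loss; approximates the inf
            vals.append(1.0 - 2.0 * err)
        block_vals.append(np.mean(vals))
    return float(np.mean(block_vals))      # average over the m blocks
```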
Figure 1: The effect of selecting a better center for the hypothesis classes. (a) Conventional function classes; (b) localized function classes.

2.2 Exploiting the unlabeled data for tightening the bound
Another way of exploiting the unlabeled data is to use them for selecting a more suitable sequence of hypothesis spaces. For this purpose we could use some of the unlabeled samples or, even better, the $n_c = n_u - \lfloor n_u / n_l \rfloor\, n_l$ samples left over from the procedure of the previous section. The idea is inspired by the work of [3] and [7], which propose to inflate the hypothesis classes by centering them around a "good" classifier. Usually, in fact, we have no a-priori information on what can be considered a good choice of the class center, so a natural choice is the origin [13], as in Figure 1(a). However, if it happens that the center is "close" to the optimal classifier, the search for a suitable class will stop very soon and the resulting Rademacher Complexity will be consequently reduced (see Figure 1(b)). We propose here a method for finding two possible "good" centers for the hypothesis classes. Let us consider $n_c$ unlabeled samples and run a clustering algorithm on them, by setting the number of clusters to 2, and obtaining two clusters $C_1$ and $C_2$. We build two distinct labeled datasets by assigning the labels $+1$ and $-1$ to $C_1$ and $C_2$, respectively, and then vice-versa. Finally, we build two classifiers $f_{C_1}(x)$ and $f_{C_2}(x) = -f_{C_1}(x)$ by learning the two datasets³ (a sketch of this procedure is given below). The two classifiers, which have been found using only unlabeled samples, can then be used as centers for searching a better hypothesis class. It is worth noting that any supervised learning algorithm can be used [16], because the centers are only a hint for a better centered hypothesis space: their actual classification performance is not of paramount importance. The underlying principle that inspired this procedure relies on the reasonable hypothesis that $P(\mathcal{X})$ is correlated with $P(\mathcal{X}, \mathcal{Y})$: in fact, in an unlucky scenario, where the two classes are heavily overlapped, the method would obviously fail.
³Note that we could build only one classifier by assigning the most probable labels to the $n_c$ samples, according to the $n_l$ labeled ones, but, rigorously speaking, this is not allowed by the SRM principle, because it would lead to using the same data for both choosing the space of functions and computing the Rademacher Complexity.
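A minimal sketch of this center-selection step, assuming k-means as the clustering algorithm and a plain linear SVM as the supervised learner; both choices are illustrative, since the text leaves the algorithms open:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def find_center(X_unlabeled, seed=0):
    """Learn a candidate center f_C1 for the hypothesis class from
    unlabeled data only; the second center is its negation f_C2 = -f_C1."""
    labels = KMeans(n_clusters=2, random_state=seed).fit_predict(X_unlabeled)
    y = np.where(labels == 0, 1.0, -1.0)      # cluster C1 -> +1, C2 -> -1
    clf = LinearSVC(C=1.0).fit(X_unlabeled, y)
    w_c1 = clf.coef_.ravel()
    b_c1 = float(clf.intercept_[0])
    return w_c1, b_c1                         # (-w_c1, -b_c1) gives f_C2
```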
Choosing a good center for the SRM procedure can greatly reduce the second term of the bound given by Eq. (13) [7] (the bias or complexity term). Note, however, that the confidence term is not affected, so we propose here an improved bound, which makes this term depend on $\hat{R}^i_{n_l}(F)$ as well. We use a recent concentration result for Self-Bounding Functions [17], instead of the looser McDiarmid's inequality. The detailed proof is omitted due to space constraints and we give here only the sketch (it is a more general version of the proof in [18] for Rademacher Complexities):
$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \hat{R}_{n_u}(F) + \epsilon\right] \leq e^{-2n_l a^2\epsilon^2} + e^{-\frac{(mn_l)(1-a)^2\epsilon^2}{2\,\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F)}} \qquad (14)$$

with $a \in [0, 1]$. Choosing $a = \frac{\sqrt{m}}{\sqrt{m} + 2\sqrt{\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\frac{1}{m}\sum_{i=1}^{m}\hat{R}^i_{n_l}(F)}}$, we obtain:

$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \hat{R}_{n_u}(F) + \epsilon\right] \leq 2e^{-\frac{2mn_l\epsilon^2}{\left(\sqrt{m} + 2\sqrt{\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F)}\right)^2}} \qquad (15)$$

so that the following explicit bound holds with probability $(1 - \delta)$:

$$L(f)_{f \in F} \leq L_{n_l}(f)_{f \in F} + \hat{R}_{n_u}(F) + \frac{2\sqrt{\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F)} + \sqrt{m}}{\sqrt{m}}\sqrt{\frac{\log\frac{2}{\delta}}{2n_l}} \qquad (16)$$

Note that, in the worst case, $\mathbb{E}_{\{\mathcal{X},\mathcal{Y}\}}\hat{R}_{n_u}(F) = 1$ and we obtain again Eq. (13). Unfortunately, the Expected Extended Rademacher Complexity cannot be computed, but we can upper bound it with its empirical version (see, for example, [19], pages 420-422, for a justification of this step) as in Eq. (10) to obtain:

$$P\left[\sup_{f \in F}\{L(f) - L_{n_l}(f)\} \geq \hat{R}_{n_u}(F) + \epsilon\right] \leq e^{-2n_l a^2\epsilon^2} + e^{-\frac{(mn_l)(1-a)^2\epsilon^2}{2\left(\hat{R}_{n_u}(F) + (1-a)\epsilon\right)}} \qquad (17)$$

with $a \in [0, 1]$. Differently from Eq. (15), the previous expression cannot be put in explicit form, but it can simply be computed numerically by writing it as:

$$L(f)_{f \in F} \leq L_{n_l}(f)_{f \in F} + \frac{1}{m}\sum_{i=1}^{m}\hat{R}^i_{n_l}(F) + \hat{\epsilon}_u \qquad (18)$$

The value $\hat{\epsilon}_u$ can be obtained by upper bounding the last term of Eq. (17) with $\delta$ and solving the inequality with respect to $a$ and $\epsilon$, so that the bound holds with probability $(1 - \delta)$.
We can show the improvements obtained through these new results by plotting the values of the confidence terms and comparing them with the conventional one [2]. Figure 2 shows the value of $\epsilon_l$ in Eq. (7) against $\epsilon_u$, the corresponding term in Eq. (13), and $\hat{\epsilon}_u$, as a function of the number of samples.
[Figure 2 shows two plots of the confidence terms as a function of the number of samples $n$: (a) $\epsilon_l$ vs. $\epsilon_u$ for $m \in [1, 10]$; (b) $\epsilon_{n_l}$ vs. $\hat{\epsilon}_u$ with $m = 1$ and $\hat{R} \in [0, 1]$.]
Figure 2: Comparison of the new confidence terms with the conventional one.
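For concreteness, the two explicit confidence terms can be evaluated directly from their closed forms; the snippet below simply implements the expressions of Eqs. (7) and (13) and reproduces the qualitative behaviour of Figure 2(a) (it is not the authors' plotting code):

```python
import numpy as np

def eps_conventional(n_l, delta=0.05):
    """Confidence term of Eq. (7)."""
    return 3.0 * np.sqrt(np.log(2.0 / delta) / (2.0 * n_l))

def eps_unlabeled(n_l, m, delta=0.05):
    """Confidence term of Eq. (13); recovers Eq. (7) when m = 1."""
    return (2.0 + np.sqrt(m)) / np.sqrt(m) * \
        np.sqrt(np.log(2.0 / delta) / (2.0 * n_l))

for n in (40, 100, 200):
    print(n, eps_conventional(n), eps_unlabeled(n, m=10))
```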
3 Performing the Structural Risk Minimization procedure
Computing the values of the bounds described in the previous sections is a straightforward process, at least in theory. The empirical error $L_{n_l}(f)$ is found by learning a classifier with the original labeled dataset, while the (Extended) Rademacher Complexity $\hat{R}^i_{n_l}(F)$ is computed by learning the dataset composed of both labeled and unlabeled samples with random labels.
In order to apply in practice the results of the previous section, and to better control the hypothesis space, we formulate the learning phase of the classifier as the following optimization problem, based
on the Ivanov version of the Support Vector Machine (I-SVM) [13]:

$$\min_{w,b,\xi}\ \sum_{i=1}^{n}\xi_i \qquad (19)$$
$$\text{s.t.}\quad \|w - \hat{w}\|^2 \leq \rho^2, \quad y_i\left(w^T\phi(x_i) + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad \xi_i = \min(2, \xi_i)$$

where the size of the hypothesis space, centered in $\hat{w}$, is controlled by the hyperparameter $\rho$, and the last constraint is introduced for bounding the SVM loss function, which would otherwise be unbounded and would prevent the application of the theory developed so far. Note that, in practice, two sub-problems must be solved: the first one with $\hat{w} = +\hat{w}_{C_1}$ and the second one with $\hat{w} = -\hat{w}_{C_1}$; then the solution corresponding to the smaller value of the objective function is selected.
Unfortunately, solving a classification problem with a bounded loss function is computationally intractable, because the problem is no longer convex, and even state-of-the-art solvers like, for example, CPLEX [20] fail to find an exact solution when the training set size exceeds a few tens of samples. Therefore, we propose here to find an approximate solution through well-known algorithms like, for example, Peeling [6] or the Convex-Concave Constrained Programming (CCCP) technique [14, 21, 22]. Furthermore, we derive a dual formulation of problem (19) that allows us to exploit the well-known Sequential Minimal Optimization (SMO) algorithm for SVM learning [23].
Problem (19) can be rewritten in the equivalent Tikhonov formulation:

$$\min_{w,b,\xi}\ \frac{1}{2}\|w - \hat{w}\|^2 + C\sum_{i=1}^{n}\xi_i \qquad (20)$$
$$\text{s.t.}\quad y_i\left(w^T\phi(x_i) + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad \xi_i = \min(2, \xi_i)$$
which gives the same solution as the Ivanov formulation for some value of $C$ [13]. The method for finding the value of $C$ corresponding to a given value of $\rho$ is reported in [10], where it is also shown that $C$ cannot be used directly to control the hypothesis space. Then, it is possible to apply the CCCP technique, which is synthesized in Algorithm 1, by splitting the objective function into its convex part $J_{\mathrm{convex}}(\theta) = \frac{1}{2}\|w - \hat{w}\|^2 + C\sum_{i=1}^{n}\xi_i$ and its concave part $J_{\mathrm{concave}}(\theta) = -C\sum_{i=1}^{n}\zeta_i$:

$$\min_{w,b,\xi}\ \frac{1}{2}\|w - \hat{w}\|^2 + C\sum_{i=1}^{n}\xi_i - C\sum_{i=1}^{n}\zeta_i \qquad (21)$$
$$\text{s.t.}\quad y_i\left(w^T\phi(x_i) + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad \zeta_i = \max(0, \xi_i - 2)$$
where $\theta = [w\,|\,b]$ is introduced to simplify the notation. Obviously, the algorithm does not guarantee to find the optimal solution, but it converges to a (usually good) solution in a finite number of steps [14]. To apply the algorithm we must compute the derivative of the concave part of the objective function:

$$\frac{dJ_{\mathrm{concave}}(\theta)}{d\theta}\bigg|_{\theta^t}\cdot\,\theta = \sum_{i=1}^{n}\frac{d(-C\zeta_i)}{d\theta}\bigg|_{\theta^t}\cdot\,\theta = \sum_{i=1}^{n}\beta_i\,y_i\left(w^T\phi(x_i) + b\right) \qquad (22)$$

Then, the learning problem becomes:

$$\min_{w,b,\xi}\ \frac{1}{2}\|w - \hat{w}\|^2 + C\sum_{i=1}^{n}\xi_i + \sum_{i=1}^{n}\beta_i\,y_i\left(w^T\phi(x_i) + b\right) \qquad (23)$$
$$\text{s.t.}\quad y_i\left(w^T\phi(x_i) + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0$$

where

$$\beta_i = \begin{cases} C & \text{if } y_i f^t(x_i) < -1 \\ 0 & \text{otherwise} \end{cases} \qquad (24)$$
Finally, it is possible to obtain the dual formulation (the derivation is omitted due to lack of space):

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j K(x_i, x_j) + \sum_{i=1}^{n}\left[\sum_{j=1}^{n_{C_1}}\hat{\alpha}_j\,y_i\,\hat{y}_j K(\hat{x}_j, x_i) - 1\right]\alpha_i \qquad (25)$$
$$\text{s.t.}\quad -\beta_i \leq \alpha_i \leq C - \beta_i, \qquad \sum_{i=1}^{n}\alpha_i y_i = 0$$

where we have used the kernel trick [24] $K(\cdot, \cdot) = \phi(\cdot)^T\phi(\cdot)$.
4 A case study
We consider the MNIST dataset [25], which consists of 62000 images representing the numbers from 0 to 9: in particular, we consider the 13074 patterns containing 0's and 1's, allowing us to deal with a binary classification problem. We simulate the small-sample regime by randomly sampling a training set with low cardinality ($n_l < 500$), while the remaining $13074 - n_l$ images are used as a test set or as an unlabeled dataset, by simply discarding the labels. In order to build statistically relevant results, this procedure is repeated 30 times.
In Table 1 we compare the conventional bound with our proposal. In the first column the number of labeled patterns ($n_l$) is reported, while the second column shows the number of unlabeled ones ($n_u$). The optimal classifier $f^*$ is selected by varying $\rho$ in the range $[10^{-6}, 1]$ and choosing the function corresponding to the minimum of the generalization error estimate provided by each bound. Then, for each case, the selected $f^*$ is tested on the remaining $13074 - (n_l + n_u)$ samples and the classification results are reported in columns three and four, respectively. The results show that the $f^*$ selected by exploiting the unlabeled patterns behaves better than the other and, furthermore, the estimated $L(f)$, reported in columns five and six, shows that the bound is tighter, as expected from theory.
The most interesting result, however, derives from the use of the new bound of Eq. (18), as reported in Table 2, where the unlabeled data are exploited for selecting a more suitable center of the hypothesis space. The results are reported analogously to Table 1.
Algorithm 1: CCCP procedure
  Initialize $\theta^0$
  repeat
    $\theta^{t+1} = \arg\min_{\theta}\left[J_{\mathrm{convex}}(\theta) + \frac{dJ_{\mathrm{concave}}(\theta)}{d\theta}\Big|_{\theta^t}\cdot\,\theta\right]$
  until $\theta^{t+1} = \theta^t$
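A compact sketch of Algorithm 1 for the linear case, stated in terms of the primal problem (23). The inner convex step is delegated to a hypothetical routine `solve_hinge_svm` that accepts the extra linear term of Eq. (23); the paper instead solves this step through the dual (25) with SMO:

```python
import numpy as np

def cccp_ramp_svm(X, y, C, w_hat, max_iter=50):
    """CCCP for the ramp-loss I-SVM: linearize the concave part at theta^t
    (Eq. (22)) and solve the resulting convex problem (Eq. (23)) until
    theta stops changing."""
    w, b = w_hat.copy(), 0.0
    for _ in range(max_iter):
        # beta_i = C where the current classifier is badly wrong (Eq. (24))
        beta = np.where(y * (X @ w + b) < -1.0, C, 0.0)
        # hypothetical convex solver for Eq. (23); not a library routine
        w_new, b_new = solve_hinge_svm(X, y, C, w_hat, beta)
        if np.allclose(w_new, w) and np.isclose(b_new, b):
            break
        w, b = w_new, b_new
    return w, b
```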
Table 1: Model selection and error estimation, exploiting unlabeled data for tightening the bound.

n_l | n_u | Test error of f* Eq.(7) | Test error of f* Eq.(13) | Estimated L(f) Eq.(7) | Estimated L(f) Eq.(13)
10  | 20  | 13.20 ± 0.86 | 12.40 ± 0.82 | 194.00 ± 0.97 | 157.70 ± 0.97
20  | 40  |  8.93 ± 1.20 |  8.93 ± 1.29 | 142.00 ± 1.06 | 116.33 ± 1.06
40  | 80  |  6.26 ± 0.16 |  6.02 ± 0.17 | 103.00 ± 0.59 |  84.85 ± 0.59
60  | 120 |  5.95 ± 0.12 |  5.88 ± 0.13 |  85.50 ± 0.48 |  70.68 ± 0.48
80  | 160 |  5.61 ± 0.07 |  5.30 ± 0.07 |  73.70 ± 0.40 |  60.86 ± 0.40
100 | 200 |  5.36 ± 0.21 |  5.51 ± 0.22 |  66.10 ± 0.37 |  54.62 ± 0.37
120 | 240 |  4.98 ± 0.40 |  5.36 ± 0.40 |  61.30 ± 0.33 |  50.82 ± 0.33
150 | 300 |  4.41 ± 0.53 |  4.08 ± 0.51 |  55.10 ± 0.28 |  45.73 ± 0.28
170 | 340 |  3.59 ± 0.57 |  3.40 ± 0.64 |  52.40 ± 0.26 |  43.60 ± 0.26
200 | 400 |  2.75 ± 0.47 |  2.67 ± 0.48 |  48.10 ± 0.19 |  39.98 ± 0.19
250 | 500 |  2.07 ± 0.03 |  2.05 ± 0.03 |  42.70 ± 0.22 |  35.44 ± 0.22
300 | 600 |  2.02 ± 0.04 |  1.94 ± 0.04 |  39.20 ± 0.17 |  32.57 ± 0.17
400 | 800 |  1.93 ± 0.02 |  1.79 ± 0.02 |  34.90 ± 0.19 |  29.16 ± 0.19
Table 2: Model selection and error estimation, exploiting unlabeled data for selecting a more suitable hypothesis center.

n_l | n_u | Test error of f* Eq.(7) | Test error of f* Eq.(18) | Estimated L(f) Eq.(7) | Estimated L(f) Eq.(18)
7   | 3   | 13.20 ± 0.86 | 8.98 ± 1.12 | 219.15 ± 0.97 | 104.01 ± 1.62
14  | 6   |  8.93 ± 1.20 | 5.10 ± 0.67 | 159.79 ± 1.06 |  86.70 ± 0.01
28  | 12  |  6.26 ± 0.16 | 3.05 ± 0.23 | 115.58 ± 0.59 |  51.35 ± 0.00
42  | 18  |  5.95 ± 0.12 | 2.36 ± 0.23 |  95.77 ± 0.48 |  38.37 ± 0.00
56  | 24  |  5.61 ± 0.07 | 1.96 ± 0.14 |  82.59 ± 0.40 |  31.39 ± 0.00
70  | 30  |  5.36 ± 0.21 | 1.63 ± 0.11 |  74.05 ± 0.37 |  26.83 ± 0.00
84  | 36  |  4.98 ± 0.40 | 1.44 ± 0.11 |  68.56 ± 0.33 |  23.77 ± 0.00
105 | 45  |  4.41 ± 0.53 | 1.27 ± 0.09 |  61.59 ± 0.28 |  20.36 ± 0.00
119 | 51  |  3.59 ± 0.57 | 1.20 ± 0.08 |  58.50 ± 0.26 |  18.77 ± 0.00
140 | 60  |  2.75 ± 0.47 | 1.08 ± 0.09 |  53.72 ± 0.19 |  16.82 ± 0.00
175 | 75  |  2.07 ± 0.03 | 0.92 ± 0.05 |  47.73 ± 0.22 |  14.52 ± 0.00
210 | 90  |  2.02 ± 0.04 | 0.81 ± 0.07 |  43.79 ± 0.17 |  12.91 ± 0.00
280 | 120 |  1.93 ± 0.02 | 0.70 ± 0.06 |  38.88 ± 0.19 |  10.86 ± 0.00
Note that, for each experiment, 30% of the data ($n_u$) are used for selecting the hypothesis center and the remaining ones ($n_l$) are used for training the classifier. The proposed method consistently selects a better classifier, which registers a threefold classification-error reduction on the test set, especially for training sets of smaller cardinality. The estimation of $L(f)$ is largely reduced as well.
We have to consider that this very clear performance increase is also favoured by the characteristics of the MNIST dataset, which consists of well-separated classes: this particular data distribution implies that only a few samples suffice for identifying a good hypothesis center. Many more experiments with different datasets and varying ratios between labeled and unlabeled samples are needed, and are currently underway, to establish the general validity of our proposal; in any case, these results appear to be very promising.
5 Conclusion
In this paper we have studied two methods which exploit unlabeled samples to tighten the Rademacher Complexity bounds on the generalization error of linear (kernel) classifiers. The first method improves a very well-known result, while the second one aims at changing the entire approach by selecting more suitable hypothesis spaces, not only acting on the bound itself. The recent literature on the theory of bounds attempts to obtain tighter bounds through more refined concentration inequalities (e.g. improving McDiarmid's inequality), but we believe that the idea of reducing the size of the hypothesis space is a more appealing field of research because it opens the road to possible significant improvements.
References
[1] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16:264, 1971.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463-482, 2003.
[3] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4):1497-1537, 2005.
[4] O. Bousquet and A. Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2:499-526, 2002.
[5] P. L. Bartlett, S. Boucheron, and G. Lugosi. Model selection and error estimation. Machine Learning, 48(1):85-113, 2002.
[6] D. Anguita, A. Ghio, and S. Ridella. Maximal discrepancy for support vector machines. Neurocomputing, 74(9):1436-1443, 2011.
[7] D. Anguita, A. Ghio, L. Oneto, and S. Ridella. Selecting the hypothesis space for improving the generalization ability of support vector machines. In The 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, California. IEEE, 2011.
[8] S. Arlot and A. Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40-79, 2010.
[9] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall/CRC, 1993.
[10] D. Anguita, A. Ghio, L. Oneto, and S. Ridella. In-sample model selection for support vector machines. In The 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, California. IEEE, 2011.
[11] K. P. Bennett and A. Demiriz. Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 11: Proceedings of the 1998 Conference, page 368. The MIT Press, 1999.
[12] O. Chapelle, B. Scholkopf, and A. Zien. Semi-Supervised Learning. The MIT Press, 2010.
[13] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 2000.
[14] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In Proceedings of the 23rd International Conference on Machine Learning, pages 201-208. ACM, 2006.
[15] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148-188, 1989.
[16] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1994.
[17] S. Boucheron, G. Lugosi, and P. Massart. On concentration of self-bounding functions. Electronic Journal of Probability, 14:1884-1899, 2009.
[18] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities using the entropy method. The Annals of Probability, 31(3):1583-1614, 2003.
[19] G. Casella and R. L. Berger. Statistical Inference. 2001.
[20] ILOG CPLEX 11.0 User's Manual. ILOG SA, 2008.
[21] J. Wang, X. Shen, and W. Pan. On efficient large margin semisupervised learning: Method and theory. Journal of Machine Learning Research, 10:719-742, 2009.
[22] J. Wang and X. Shen. Large margin semi-supervised learning. Journal of Machine Learning Research, 8:1867-1891, 2007.
[23] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Advances in Kernel Methods - Support Vector Learning, 208:1-21, 1998.
[24] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. Advances in Large Margin Classifiers, pages 349-358, 2000.
[25] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In 24th ICML, pages 473-480, 2007.
Penalty Decomposition Methods for Rank
Minimization*
Zhaosong Lu†
Yong Zhang‡
Abstract
In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix
optimization problems can be solved as lower dimensional vector optimization
problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems. The convergence
results of the PD methods have been shown in the longer version of the paper
[19]. Finally, we test the performance of our methods by applying them to matrix
completion and nearest low-rank correlation matrix problems. The computational
results demonstrate that our methods generally outperform the existing methods
in terms of solution quality and/or speed.
1 Introduction
In this paper we consider the following rank minimization problems:

$$\min_X\ \{f(X) : \mathrm{rank}(X) \leq r,\ X \in \mathcal{X} \cap \Omega\}, \qquad (1)$$

$$\min_X\ \{f(X) + \nu\,\mathrm{rank}(X) : X \in \mathcal{X} \cap \Omega\} \qquad (2)$$

for some $r, \nu \geq 0$, where $\mathcal{X}$ is a closed convex set, $\Omega$ is a closed unitarily invariant set in $\mathbb{R}^{m \times n}$, and $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is a continuously differentiable function (for the definition of unitarily invariant set, see Section 2.1). In the literature, there are numerous application problems in the form of (1) or
(2). For example, several well-known combinatorial optimization problems such as maximal cut
(MAXCUT) and maximal stable set can be formulated as problem (1) (see, for example, [11, 1, 5]).
More generally, nonconvex quadratic programming problems can also be cast into (2) (see, for
example, [1]). Recently, some image recovery and machine learning problems are formulated as (1)
or (2) (see, for example, [27, 31]). In addition, the problem of finding nearest low-rank correlation
matrix is in the form of (1), which has important application in finance (see, for example, [4, 29, 36,
38, 25, 30, 12]).
Several approaches have recently been developed for solving problems (1) and (2) or their special
cases. In particular, for those arising in combinatorial optimization (e.g., MAXCUT), one novel
method is to first solve the semidefinite programming (SDP) relaxation of (1) and then obtain an
approximate solution of (1) by applying some heuristics to the solution of the SDP (see, for example,
[11]). Despite the remarkable success on those problems, it is not clear about the performance of this
method when extended to solve more general problem (1). In addition, the nuclear norm relaxation
approach has been proposed for problems (1) or (2). For example, Fazel et al. [10] considered a
*This work was supported in part by NSERC Discovery Grant.
†Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: [email protected]).
‡Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: [email protected]).
special case of problem (2) with $f \equiv 0$ and $\Omega = \mathbb{R}^{m \times n}$. In their approach, a convex relaxation is applied to (1) or (2) by replacing the rank of $X$ by the nuclear norm of $X$, and numerous efficient methods can then be applied to solve the resulting convex problems. Recently, Recht et al. [27] showed that under some suitable conditions, such a convex relaxation is tight when $\mathcal{X}$ is an affine manifold. The quality of such a relaxation, however, remains unknown when applied to general problems (1) and (2). Additionally, for some application problems, the nuclear norm stays constant on the feasible region. For example, for the nearest low-rank correlation matrix problem (see Subsection 3.2), any feasible point is a symmetric positive semidefinite matrix with all diagonal entries equal to one. For those problems, the nuclear norm relaxation approach is obviously inappropriate. Finally, a nonlinear programming (NLP) reformulation approach has been applied to problem (1) (see, for example, [5]). In this approach, problem (1) is cast into an NLP problem by replacing the constraint $\mathrm{rank}(X) \leq r$ by $X = UV$ where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{r \times n}$, and then numerous optimization methods can be applied to solve the resulting NLP. It is not hard to observe that such an NLP has infinitely many local minima, and moreover it can be highly nonlinear, which might be challenging for all existing numerical optimization methods for NLP. Also, it is not clear whether this approach can be applied to problem (2).
In this paper we consider general rank minimization problems (1) and (2). We first show that a
class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form
solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descend method. The
convergence of the PD methods has been shown in the longer version of the paper [19]. Finally, we
test the performance of our methods by applying them to matrix completion and nearest low-rank
correlation matrix problems. The computational results demonstrate that our methods generally
outperform the existing methods in terms of solution quality and/or speed.
The rest of this paper is organized as follows. In Subsection 1.1, we introduce the notation that is
used throughout the paper. In Section 2, we first establish some technical results on a class of rank
minimization problems and then use them to develop the penalty decomposition methods for solving
problems (1) and (2). In Section 3, we conduct numerical experiments to test the performance of
our penalty decomposition methods for solving matrix completion and nearest low-rank correlation
matrix problems. Finally, we present some concluding remarks in Section 4.
1.1 Notation

In this paper, the symbol $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and the set of all $m \times n$ matrices with real entries is denoted by $\mathbb{R}^{m \times n}$. The space of $n \times n$ symmetric matrices is denoted by $\mathcal{S}^n$. If $X \in \mathcal{S}^n$ is positive semidefinite, we write $X \succeq 0$. The cone of positive semidefinite matrices is denoted by $\mathcal{S}^n_+$. The Frobenius norm of a real matrix $X$ is defined as $\|X\|_F := \sqrt{\mathrm{Tr}(XX^T)}$ where $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix, and the nuclear norm of $X$, denoted by $\|X\|_*$, is defined as the sum of all singular values of $X$. The rank of a matrix $X$ is denoted by $\mathrm{rank}(X)$. We denote by $I$ the identity matrix, whose dimension should be clear from the context. For a real symmetric matrix $X$, $\lambda(X)$ denotes the vector of all eigenvalues of $X$ arranged in nondecreasing order and $\Lambda(X)$ is the diagonal matrix whose $i$th diagonal entry is $\lambda_i(X)$ for all $i$. Similarly, for any $X \in \mathbb{R}^{m \times n}$, $\sigma(X)$ denotes the $q$-dimensional vector consisting of all singular values of $X$ arranged in nondecreasing order, where $q = \min(m, n)$, and $\Sigma(X)$ is the $m \times n$ matrix whose $i$th diagonal entry is $\sigma_i(X)$ for all $i$ and all off-diagonal entries are 0, that is, $\Sigma_{ii}(X) = \sigma_i(X)$ for $1 \leq i \leq q$ and $\Sigma_{ij}(X) = 0$ for all $i \neq j$. We define the operator $\mathcal{D} : \mathbb{R}^q \to \mathbb{R}^{m \times n}$ as follows:

$$\mathcal{D}_{ij}(x) = \begin{cases} x_i & \text{if } i = j; \\ 0 & \text{otherwise} \end{cases} \qquad \forall x \in \mathbb{R}^q,$$

where $q = \min(m, n)$. For any real vector, $\|\cdot\|_0$, $\|\cdot\|_1$ and $\|\cdot\|_2$ denote the cardinality (i.e., the number of nonzero entries), the standard 1-norm and the Euclidean norm of the vector, respectively.
2 Penalty decomposition methods
In this section, we first establish some technical results on a class of rank minimization problems.
Then we propose penalty decomposition (PD) methods for solving problems (1) and (2) by using
these technical results.
2.1 Technical results on special rank minimization
In this subsection we first show that a class of matrix optimization problems can be solved as lower
dimensional vector optimization problems. As a consequence, we establish a result that a class
of rank minimization problems have closed form solutions, which will be used to develop penalty
decomposition methods in Subsection 2.2. The proof of the result can be found in the longer version
of the paper [19]. Before proceeding, we introduce some definitions that will be used subsequently.
Let $\mathcal{U}^n$ denote the set of all unitary matrices in $\mathbb{R}^{n \times n}$. A norm $\|\cdot\|$ is a unitarily invariant norm on $\mathbb{R}^{m \times n}$ if $\|UXV\| = \|X\|$ for all $U \in \mathcal{U}^m$, $V \in \mathcal{U}^n$, $X \in \mathbb{R}^{m \times n}$. More generally, a function $F : \mathbb{R}^{m \times n} \to \mathbb{R}$ is a unitarily invariant function if $F(UXV) = F(X)$ for all $U \in \mathcal{U}^m$, $V \in \mathcal{U}^n$, $X \in \mathbb{R}^{m \times n}$. A set $\mathcal{X} \subseteq \mathbb{R}^{m \times n}$ is a unitarily invariant set if
$$\{UXV : U \in \mathcal{U}^m,\ V \in \mathcal{U}^n,\ X \in \mathcal{X}\} = \mathcal{X}.$$
Similarly, a function $F : \mathcal{S}^n \to \mathbb{R}$ is a unitary similarity invariant function if $F(UXU^T) = F(X)$ for all $U \in \mathcal{U}^n$, $X \in \mathcal{S}^n$. A set $\mathcal{X} \subseteq \mathcal{S}^n$ is a unitary similarity invariant set if
$$\{UXU^T : U \in \mathcal{U}^n,\ X \in \mathcal{X}\} = \mathcal{X}.$$
The following result establishes that a class of matrix optimization problems over a subset of $\mathbb{R}^{m \times n}$ can be solved as lower dimensional vector optimization problems.

Proposition 2.1. Let $\|\cdot\|$ be a unitarily invariant norm on $\mathbb{R}^{m \times n}$, and let $F : \mathbb{R}^{m \times n} \to \mathbb{R}$ be a unitarily invariant function. Suppose that $\mathcal{X} \subseteq \mathbb{R}^{m \times n}$ is a unitarily invariant set. Let $A \in \mathbb{R}^{m \times n}$ be given, $q = \min(m, n)$, and let $\phi$ be a non-decreasing function on $[0, \infty)$. Suppose that $U\Sigma(A)V^T$ is the singular value decomposition of $A$. Then, $X^* = U\mathcal{D}(x^*)V^T$ is an optimal solution of the problem

$$\min\ F(X) + \phi(\|X - A\|) \quad \text{s.t.}\ X \in \mathcal{X}, \qquad (3)$$

where $x^* \in \mathbb{R}^q$ is an optimal solution of the problem

$$\min\ F(\mathcal{D}(x)) + \phi(\|\mathcal{D}(x) - \Sigma(A)\|) \quad \text{s.t.}\ \mathcal{D}(x) \in \mathcal{X}. \qquad (4)$$
As some consequences of Proposition 2.1, we next state that a class of rank minimization problems on a subset of $\mathbb{R}^{m \times n}$ can be solved as lower dimensional vector minimization problems.

Corollary 2.2. Let $\nu \geq 0$ and $A \in \mathbb{R}^{m \times n}$ be given, and let $q = \min(m, n)$. Suppose that $\mathcal{X} \subseteq \mathbb{R}^{m \times n}$ is a unitarily invariant set, and $U\Sigma(A)V^T$ is the singular value decomposition of $A$. Then, $X^* = U\mathcal{D}(x^*)V^T$ is an optimal solution of the problem

$$\min\left\{\nu\,\mathrm{rank}(X) + \frac{1}{2}\|X - A\|_F^2 : X \in \mathcal{X}\right\}, \qquad (5)$$

where $x^* \in \mathbb{R}^q$ is an optimal solution of the problem

$$\min\left\{\nu\|x\|_0 + \frac{1}{2}\|x - \sigma(A)\|_2^2 : \mathcal{D}(x) \in \mathcal{X}\right\}. \qquad (6)$$

Corollary 2.3. Let $r \geq 0$ and $A \in \mathbb{R}^{m \times n}$ be given, and let $q = \min(m, n)$. Suppose that $\mathcal{X} \subseteq \mathbb{R}^{m \times n}$ is a unitarily invariant set, and $U\Sigma(A)V^T$ is the singular value decomposition of $A$. Then, $X^* = U\mathcal{D}(x^*)V^T$ is an optimal solution of the problem

$$\min\{\|X - A\|_F : \mathrm{rank}(X) \leq r,\ X \in \mathcal{X}\}, \qquad (7)$$

where $x^* \in \mathbb{R}^q$ is an optimal solution of the problem

$$\min\{\|x - \sigma(A)\|_2 : \|x\|_0 \leq r,\ \mathcal{D}(x) \in \mathcal{X}\}. \qquad (8)$$
Remark. When $\mathcal{X}$ is simple enough, problems (5) and (7) have closed form solutions. In many applications, $\mathcal{X} = \{X \in \mathbb{R}^{m \times n} : a \leq \sigma_i(X) \leq b\ \forall i\}$ for some $0 \leq a < b \leq \infty$. For such $\mathcal{X}$, one can see that $\mathcal{D}(x) \in \mathcal{X}$ if and only if $a \leq |x_i| \leq b$ for all $i$. In this case, it is not hard to observe that problems (6) and (8) have closed form solutions (see [20]). It thus follows from Corollaries 2.2 and 2.3 that problems (5) and (7) also have closed form solutions.
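As a concrete illustration, when $\mathcal{X} = \mathbb{R}^{m \times n}$ (i.e., $a = 0$ and $b = \infty$), problem (6) is solved by hard-thresholding $\sigma(A)$ at $\sqrt{2\nu}$ and problem (8) by keeping the $r$ largest entries of $\sigma(A)$. A sketch of the resulting closed-form solutions of (5) and (7) under this assumption:

```python
import numpy as np

def solve_rank_penalized(A, nu):
    """Closed-form solution of (5) with X = R^{m x n}: keep the singular
    values s_i with s_i^2 > 2*nu (hard thresholding)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.where(s**2 > 2.0 * nu, s, 0.0)
    return (U * x) @ Vt

def solve_rank_constrained(A, r):
    """Closed-form solution of (7) with X = R^{m x n}: the best rank-r
    approximation, i.e. keep the r largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros_like(s)
    x[:r] = s[:r]          # note: numpy returns s in decreasing order
    return (U * x) @ Vt
```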
The following results are heavily used in [6, 22, 34] for developing algorithms for solving the nuclear norm relaxation of matrix completion problems. They can be immediately obtained from Proposition 2.1.

Corollary 2.4. Let $\nu \geq 0$ and $A \in \mathbb{R}^{m \times n}$ be given, and let $q = \min(m, n)$. Suppose that $U\Sigma(A)V^T$ is the singular value decomposition of $A$. Then, $X^* = U\mathcal{D}(x^*)V^T$ is an optimal solution of the problem

$$\min\ \nu\|X\|_* + \frac{1}{2}\|X - A\|_F^2,$$

where $x^* \in \mathbb{R}^q$ is an optimal solution of the problem

$$\min\ \nu\|x\|_1 + \frac{1}{2}\|x - \sigma(A)\|_2^2.$$

Corollary 2.5. Let $r \geq 0$ and $A \in \mathbb{R}^{m \times n}$ be given, and let $q = \min(m, n)$. Suppose that $U\Sigma(A)V^T$ is the singular value decomposition of $A$. Then, $X^* = U\mathcal{D}(x^*)V^T$ is an optimal solution of the problem

$$\min\{\|X - A\|_F : \|X\|_* \leq r\},$$

where $x^* \in \mathbb{R}^q$ is an optimal solution of the problem

$$\min\{\|x - \sigma(A)\|_2 : \|x\|_1 \leq r\}.$$
Clearly, the above results can be generalized to solve a class of matrix optimization problems over a subset of $\mathcal{S}^n$. The details can be found in the longer version of the paper [19].

2.2 Penalty decomposition methods for solving (1) and (2)
In this subsection, we consider the rank minimization problems (1) and (2). In particular, we first propose a penalty decomposition (PD) method for solving problem (1), and then extend it to solve problem (2) at the end of this subsection. Throughout this subsection, we make the following assumption for problems (1) and (2).

Assumption 1. Problems (1) and (2) are feasible, and moreover, at least one feasible solution, denoted by $X^{\mathrm{feas}}$, is known.

Clearly, problem (1) can be equivalently reformulated as

$$\min_{X,Y}\ \{f(X) : X - Y = 0,\ X \in \mathcal{X},\ Y \in \mathcal{Y}\}, \qquad (9)$$

where $\mathcal{Y} := \{Y \in \Omega : \mathrm{rank}(Y) \leq r\}$.

Given a penalty parameter $\varrho > 0$, the associated quadratic penalty function for (9) is defined as

$$Q_\varrho(X, Y) := f(X) + \frac{\varrho}{2}\|X - Y\|_F^2. \qquad (10)$$

We now propose a PD method for solving problem (9) (or, equivalently, (1)) in which each penalty subproblem is approximately solved by a block coordinate descent (BCD) method.
Penalty decomposition method for (9) (asymmetric matrices):

Let $\varrho_0 > 0$ and $\sigma > 1$ be given. Choose an arbitrary $Y_0^0 \in \mathcal{Y}$ and a constant $\Upsilon \geq \max\{f(X^{\mathrm{feas}}),\ \min_{X \in \mathcal{X}} Q_{\varrho_0}(X, Y_0^0)\}$. Set $k = 0$.

1) Set $l = 0$ and apply the BCD method to find an approximate solution $(X^k, Y^k) \in \mathcal{X} \times \mathcal{Y}$ of the penalty subproblem

$$\min\{Q_{\varrho_k}(X, Y) : X \in \mathcal{X},\ Y \in \mathcal{Y}\} \qquad (11)$$

by performing steps 1a)-1c):

1a) Solve $X_{l+1}^k \in \operatorname{Arg\,min}_{X \in \mathcal{X}} Q_{\varrho_k}(X, Y_l^k)$.
1b) Solve $Y_{l+1}^k \in \operatorname{Arg\,min}_{Y \in \mathcal{Y}} Q_{\varrho_k}(X_{l+1}^k, Y)$.
1c) Set $(X^k, Y^k) := (X_{l+1}^k, Y_{l+1}^k)$.

2) Set $\varrho_{k+1} := \sigma\varrho_k$.

3) If $\min_{X \in \mathcal{X}} Q_{\varrho_{k+1}}(X, Y^k) > \Upsilon$, set $Y_0^{k+1} := X^{\mathrm{feas}}$. Otherwise, set $Y_0^{k+1} := Y^k$.

4) Set $k \leftarrow k + 1$ and go to step 1).

end
Remark. We observe that the sequence $\{Q_{\varrho_k}(X_l^k, Y_l^k)\}$ is non-increasing for any fixed $k$. Thus, in practical implementation, it is reasonable to terminate the BCD method based on the relative progress of $\{Q_{\varrho_k}(X_l^k, Y_l^k)\}$. In particular, given an accuracy parameter $\epsilon_I > 0$, one can terminate the BCD method if

$$\frac{\left|Q_{\varrho_k}(X_l^k, Y_l^k) - Q_{\varrho_k}(X_{l-1}^k, Y_{l-1}^k)\right|}{\max\left(\left|Q_{\varrho_k}(X_l^k, Y_l^k)\right|,\ 1\right)} \leq \epsilon_I. \qquad (12)$$

Moreover, we can terminate the outer iterations of the above method once

$$\max_{ij}\ \left|X_{ij}^k - Y_{ij}^k\right| \leq \epsilon_O \qquad (13)$$
for some $\epsilon_O > 0$. In addition, given that problem (11) is nonconvex, the BCD method may converge to a stationary point. To enhance the quality of approximate solutions, one may execute the BCD method multiple times starting from a suitable perturbation of the current approximate solution. In detail, at the $k$th outer iteration, let $(X^k, Y^k)$ be a current approximate solution of (11) obtained by the BCD method, and let $r_k = \mathrm{rank}(Y^k)$. Assume that $r_k > 1$. Before starting the $(k+1)$th outer iteration, one can apply the BCD method again starting from $Y_0^k \in \operatorname{Arg\,min}\{\|Y - Y^k\|_F : \mathrm{rank}(Y) \leq r_k - 1\}$ (namely, a rank-one perturbation of $Y^k$) and obtain a new approximate solution $(\tilde{X}^k, \tilde{Y}^k)$ of (11). If $Q_{\varrho_k}(\tilde{X}^k, \tilde{Y}^k)$ is "sufficiently" smaller than $Q_{\varrho_k}(X^k, Y^k)$, one can set $(X^k, Y^k) := (\tilde{X}^k, \tilde{Y}^k)$ and repeat the above process. Otherwise, one can terminate the $k$th outer iteration and start the next outer iteration. Furthermore, in view of Corollary 2.3, the subproblem in step 1b) can be reduced to the problem in form of (8), which has a closed form solution when $\Omega$ is simple enough. Finally, the convergence results of this PD method have been shown in the longer version of the paper [19]. Under some suitable assumptions, we have established that any accumulation point of the sequence generated by our method when applied to problem (1) is a stationary point of a nonlinear reformulation of the problem.
Before ending this section, we extend the PD method proposed above to solve problem (2). Clearly, (2) can be equivalently reformulated as

$$\min_{X,Y}\ \{f(X) + \nu\,\mathrm{rank}(Y) : X - Y = 0,\ X \in \mathcal{X},\ Y \in \Omega\}. \qquad (14)$$

Given a penalty parameter $\varrho > 0$, the associated quadratic penalty function for (14) is defined as

$$P_\varrho(X, Y) := f(X) + \nu\,\mathrm{rank}(Y) + \frac{\varrho}{2}\|X - Y\|_F^2. \qquad (15)$$

Then we can easily adapt the PD method for solving (9) to solve (14) (or, equivalently, (2)) by setting the constant $\Upsilon \geq \max\{f(X^{\mathrm{feas}}) + \nu\,\mathrm{rank}(X^{\mathrm{feas}}),\ \min_{X \in \mathcal{X}} P_{\varrho_0}(X, Y_0^0)\}$. In addition, the set $\mathcal{Y}$ becomes $\Omega$.
In view of Corollary 2.2, the BCD subproblem in step 1b) when applied to minimize the penalty function (15) can be reduced to the problem in form of (6), which has a closed form solution when $\Omega$ is simple enough. In addition, the practical termination criteria proposed for the previous PD method can be suitably applied to this method as well. Moreover, given that the problem arising in step 1) is nonconvex, the BCD method may converge to a stationary point. To enhance the quality of approximate solutions, one may apply a similar strategy as described for the previous PD method by executing the BCD method multiple times starting from a suitable perturbation of the current approximate solution. Finally, by a similar argument as in the proof of [19, Theorem 3.1], we can show that every accumulation point of the sequence $\{(X^k, Y^k)\}$ is a feasible point of (14). Nevertheless, it is not clear whether a similar convergence result as in [19, Theorem 3.1(b)] can be established due to the discontinuity and nonconvexity of the objective function of (2).
3 Numerical results
In this section, we conduct numerical experiments to test the performance of our penalty decomposition (PD) methods proposed in Section 2 by applying them to solve matrix completion and
nearest low-rank correlation matrix problems. All computations below are performed on an Intel
Xeon E5410 CPU (2.33GHz) and 8GB RAM running Red Hat Enterprise Linux (kernel 2.6.18).
The codes of all the compared methods in this section are written in Matlab.
3.1 Matrix completion problem
In this subsection, we apply our PD method proposed in Section 2 to the matrix completion problem, which has numerous applications in control and systems theory, image recovery and data mining (see, for example, [33, 24, 9, 16]). It can be formulated as

$$\min_{X \in \mathbb{R}^{m \times n}}\ \mathrm{rank}(X) \quad \text{s.t.}\ X_{ij} = M_{ij},\ (i, j) \in \Theta, \qquad (16)$$

where $M \in \mathbb{R}^{m \times n}$ and $\Theta$ is a subset of index pairs $(i, j)$. Recently, numerous methods were proposed to solve the nuclear norm relaxation or a variant of (16) (see, for example, [18, 6, 22, 8, 13, 14, 21, 23, 32, 17, 37, 35]).
It is not hard to see that problem (16) is a special case of the general rank minimization problem (2) with $f(X) \equiv 0$, $\nu = 1$, $\Omega = \mathbb{R}^{m \times n}$, and $\mathcal{X} = \{X \in \mathbb{R}^{m \times n} : X_{ij} = M_{ij},\ (i, j) \in \Theta\}$. Thus, the PD method proposed in Subsection 2.2 for problem (2) can be suitably applied to (16). The implementation details of the PD method can be found in [19].
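A minimal sketch of the resulting PD method for (16), assuming $\Omega = \mathbb{R}^{m \times n}$ so that the Y-step has the closed form of Corollary 2.2 (hard thresholding of singular values) and the X-step is a simple re-imposition of the observed entries; the safeguard step 3) and the perturbation heuristics of Section 2 are omitted for brevity:

```python
import numpy as np

def pd_matrix_completion(M, mask, nu=1.0, rho=1.0, rho_growth=1.5,
                         n_outer=50, n_bcd=20, tol=1e-4):
    """Penalty decomposition sketch for (16): X carries the data constraint,
    Y carries the rank term; each penalty subproblem is solved by BCD."""
    X = np.where(mask, M, 0.0)
    Y = X.copy()
    for _ in range(n_outer):
        for _ in range(n_bcd):
            # X-step: projection onto {X : X_ij = M_ij on Theta}
            X = np.where(mask, M, Y)
            # Y-step (Corollary 2.2): keep singular values with s^2 > 2*nu/rho
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Y = (U * np.where(s**2 > 2.0 * nu / rho, s, 0.0)) @ Vt
        if np.max(np.abs(X - Y)) <= tol:     # outer criterion, cf. Eq. (13)
            break
        rho *= rho_growth                    # increase the penalty parameter
    return Y
```

In practice the inner loop would also use the relative-progress test (12) rather than a fixed iteration count.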
Next we conduct numerical experiments to test the performance of our PD method for solving matrix
completion problem (16) on real data. In our experiment, we aim to test the performance of our PD
method for solving a grayscale image inpainting problem [2]. This problem has been used in [22, 35]
to test FPCA and LMaFit, respectively and we use the same scenarios as generated in [22, 35]. For
an image inpainting problem, our goal is to fill the missing pixel values of the image at given pixel
locations. The missing pixel positions can be either randomly distributed or not. As shown in
[33, 24], this problem can be solved as a matrix completion problem if the image is of low-rank.
In our test, the original 512 x 512 grayscale image is shown in Figure 1(a). To obtain the data for
problem (16), we first apply the singular value decomposition to the original image and truncate
the resulting decomposition to get an image of rank 40 shown in Figure 1(e). Figures 1(b) and
1(c) are then constructed from Figures 1(a) and 1(e) by sampling half of their pixels uniformly at
random, respectively. Figure 1(d) is generated by masking 6% of the pixels of Figure 1(e) in a nonrandom fashion. We now apply our PD method to solve problem (16) with the data given in Figures
1(b), 1(c) and 1(d), and the resulting recovered images are presented in Figures 1(f), 1(g) and 1(h),
respectively. In addition, given an approximate recovery $X^*$ for $M$, we define the relative error as

$$\mathrm{rel\ err} := \frac{\|X^* - M\|_F}{\|M\|_F}.$$
We observe that the relative errors of three recovered images to the original images by our method
are 6.72e-2, 6.43e-2 and 6.77e-2, respectively, which are all smaller than those reported in [22, 35].
3.2 Nearest low-rank correlation matrix problem
In this subsection, we apply our PD method proposed in Section 2 to find the nearest low-rank correlation matrix, which has important applications in finance (see, for example, [4, 29, 36, 38, 30]). It can be formulated as

$$\min_{X \in \mathcal{S}^n}\ \frac{1}{2}\|X - C\|_F^2 \quad \text{s.t.}\ \mathrm{diag}(X) = e,\ \mathrm{rank}(X) \leq r,\ X \succeq 0 \qquad (17)$$

for some correlation matrix $C \in \mathcal{S}^n_+$ and some integer $r \in [1, n]$, where $\mathrm{diag}(X)$ denotes the vector consisting of the diagonal entries of $X$ and $e$ is the all-ones vector. Recently, a few methods have been proposed for solving problem (17) (see, for example, [28, 26, 3, 25, 12, 15]).
Figure 1: Image inpainting. (a) original image; (b) 50% masked original image; (c) 50% masked rank-40 image; (d) 6.34% masked rank-40 image; (e) rank-40 image; (f)-(h) images recovered by PD from (b), (c) and (d), respectively.
It is not hard to see that problem (17) is a special case of the general rank-constrained problem (1) with $f(X) = \frac{1}{2}\|X - C\|_F^2$, $\Omega = \mathcal{S}^n_+$, and $\mathcal{X} = \{X \in \mathcal{S}^n : \mathrm{diag}(X) = e\}$. Thus, the PD method proposed in Subsection 2.2 for problem (1) can be suitably applied to (17). The implementation details of the PD method can be found in [19].
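In this instance the Y-step of the PD method amounts to the symmetric analogue of Corollary 2.3: a nearest-point projection onto the rank-$r$ positive semidefinite matrices via eigenvalue truncation. A sketch of this projection (our own illustration, not the authors' Matlab code):

```python
import numpy as np

def project_rank_r_psd(A, r):
    """Nearest (in Frobenius norm) positive semidefinite matrix of rank <= r:
    keep the r largest eigenvalues of the symmetric part of A, floored at 0."""
    A_sym = 0.5 * (A + A.T)
    vals, vecs = np.linalg.eigh(A_sym)       # eigenvalues in increasing order
    keep = np.clip(vals[-r:], 0.0, None)     # top-r eigenvalues, clipped at 0
    V = vecs[:, -r:]
    return (V * keep) @ V.T
```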
Next we conduct numerical experiments to test the performance of our method for solving (17) on three classes of benchmark testing problems. These problems are widely used in the literature (see, for example, [3, 29, 25, 15]) and their corresponding data matrices $C$ are defined as follows:

(P1) $C_{ij} = 0.5 + 0.5\exp(-0.05|i - j|)$ for all $i, j$ (see [3]).
(P2) $C_{ij} = \exp(-|i - j|)$ for all $i, j$ (see [3]).
(P3) $C_{ij} = \mathrm{LongCorr} + (1 - \mathrm{LongCorr})\exp(\kappa|i - j|)$ for all $i, j$, where $\mathrm{LongCorr} = 0.6$ and $\kappa = -0.1$ (see [29]).
We first generate an instance for each of (P1)-(P3) by letting $n = 500$. Then we apply our PD method and the method named Major developed in [25] to solve problem (17) on the instances generated above. To fairly compare their performance, we choose the termination criterion for Major to be the one based on the relative error rather than the (default) absolute error. More specifically, it terminates once the relative error is less than $10^{-5}$. The computational results of both methods on the instances generated above with $r = 5, 10, \ldots, 25$ are presented in Table 1. The names of all problems are given in column one and they are labeled in the same manner as described in [15]. For example, P1n500r5 means that it corresponds to problem (P1) with $n = 500$ and $r = 5$. The results of both methods in terms of number of iterations, objective function value and CPU time are reported in columns two to seven of Table 1, respectively. We observe that the objective function values for both methods are comparable, though the ones for Major are slightly better on some instances. In addition, for small $r$ (say, $r = 5$), Major generally outperforms PD in terms of speed, but PD substantially outperforms Major as $r$ gets larger (say, $r = 15$).
4 Concluding remarks
In this paper we proposed penalty decomposition (PD) methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. In the longer version of the paper [19], we have shown that under some suitable assumptions any accumulation point of the sequence generated by our method when applied to the rank-constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. The computational results on matrix completion and nearest low-rank correlation matrix problems demonstrate that our
Table 1: Comparison of Major and PD
Problem
P1n500r5
P1n500r10
P1n500r15
P1n500r20
P1n500r25
P2n500r5
P2n500r10
P2n500r15
P2n500r20
P2n500r25
P3n500r5
P3n500r10
P3n500r15
P3n500r20
P3n500r25
Iter
488
836
1690
3106
5444
2126
3264
5061
4990
2995
2541
2357
2989
4086
5923
Major
Obj
3107.0
748.2
270.2
123.4
65.5
24248.5
11749.5
7584.4
5503.2
4256.0
2869.3
981.8
446.9
234.7
135.9
Time
22.9
51.5
137.0
329.1
722.0
97.8
199.6
409.9
532.0
404.1
116.4
144.2
241.9
438.4
788.3
Iter
2514
1220
804
581
480
3465
1965
1492
1216
1022
2739
1410
923
662
504
PD
Obj
3107.2
748.2
270.2
123.4
65.5
24248.5
11749.5
7584.4
5503.2
4256.0
2869.4
981.8
446.9
234.7
135.9
Time
80.7
48.4
37.3
31.5
29.4
112.3
76.6
70.4
67.2
69.2
90.4
55.4
41.6
33.0
29.5
methods generally outperform the existing methods in terms of solution quality and/or speed. More
computational results of the PD method can be found in the longer version of the paper [19].
References
[1] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, algorithms, Engineering Applications. MPS-SIAM Series on Optimization, SIAM, Philadelphia,
PA, USA, 2001.
[2] M. Bertalmío, G. Sapiro, V. Caselles and V. Ballester. Image inpainting. SIGGRAPH 2000, New Orleans, USA, 2000.
[3] D. Brigo. A note on correlation and rank reduction. Available at www.damianobrigo.it, 2002.
[4] D. Brigo and F. Mercurio. Interest Rate Models: Theory and Practice. Springer-Verlag, Berlin,
2001.
[5] S. Burer, R. D. C. Monteiro, and Y. Zhang. Maximum stable set formulations and heuristics
based on continuous optimization. Math. Program., 94:137-166, 2002.
[6] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. Technical report, 2008.
[7] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 2009.
[8] W. Dai and O. Milenkovic. SET: an algorithm for consistent matrix completion. Technical
report, Department of Electrical and Computer Engineering, University of Illinois, 2009.
[9] L. Eldén. Matrix methods in data mining and pattern recognition (fundamentals of algorithms).
SIAM, Philadelphia, PA, USA, 2009.
[10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum
order system approximation. P. Amer. Contr. Conf., 6:4734-4739, 2001.
[11] M. X. Goemans and D. P. Williamson. .878-approximation algorithms for MAX CUT and
MAX 2SAT. Lect. Notes Comput. Sc., 422-431, 1994.
[12] I. Grubišić and R. Pietersz. Efficient rank reduction of correlation matrices. Linear Algebra Appl., 422:629-653, 2007.
[13] R. H. Keshavan and S. Oh. A gradient descent algorithm on the Grassman manifold for matrix completion. Technical report, Department of Electrical Engineering, Stanford University,
2009.
[14] K. Lee and Y. Bresler. Admira: Atomic decomposition for minimum rank approximation.
Technical report, University of Illinois, Urbana-Champaign, 2009.
[15] Q. Li and H. Qi. A sequential semismooth Newton method for the nearest low-rank correlation
matrix problem. Technical report, School of Mathematics, University of Southampton, UK,
2009.
[16] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. A., 31:1235-1256, 2009.
[17] Y. Liu, D. Sun, and K. C. Toh. An implementable proximal point algorithmic framework for
nuclear norm minimization. Technical report, National University of Singapore, 2009.
[18] Z. Lu, R. D. C. Monteiro, and M. Yuan. Convex optimization methods for dimension reduction
and coefficient estimation in Multivariate Linear Regression. Accepted in Math. Program.,
2008.
[19] Z. Lu and Y. Zhang. Penalty decomposition methods for rank minimization. Technical report,
Department of Mathematics, Simon Fraser University, Canada, 2010.
[20] Z. Lu and Y. Zhang. Penalty decomposition methods for l0 minimization. Technical report,
Department of Mathematics, Simon Fraser University, Canada, 2010.
[21] R. Mazumder, T. Hastie, and R. Tibshirani. Regularization methods for learning incomplete
matrices. Technical report, Stanford University, 2009.
[22] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank
minimization. To appear in Math. Program., 2008.
[23] R. Meka, P. Jain and I. S. Dhillon. Guaranteed rank minimization via singular value projection.
Technical report, University of Texas at Austin, 2009.
[24] T. Morita and T. Kanade. A sequential factorization method for recovering shape and motion
from image streams. IEEE T. Pattern Anal., 19:858-867, 1997.
[25] R. Pietersz and I. Grubišić. Rank reduction of correlation matrices by majorization. Quant.
Financ., 4:649-662, 2004.
[26] F. Rapisarda, D. Brigo and F. Mercurio. Parametrizing correlations: a geometric interpretation.
Banca IMI Working Paper, 2002 (www.fabiomercurio.it).
[27] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. To appear in SIAM Rev., 2007.
[28] R. Rebonato. On the simultaneous calibration of multifactor lognormal interest rate models to
Black volatilities and to the correlation matrix. J. Comput. Financ., 2:5-27, 1999.
[29] R. Rebonato. Modern Pricing and Interest-Rate Derivatives. Princeton University Press, New
Jersey, 2002.
[30] R. Rebonato. Interest-rate term-structure pricing models: a review. P. R. Soc. Lond. A-Conta.,
460:667-728, 2004.
[31] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative
prediction. In Proceedings of the International Conference of Machine Learning, 2005.
[32] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized
least squares problems. Accepted in Pac. J. Optim., 2009.
[33] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vision, 9:137-154, 1992.
[34] E. van den Berg and M. P. Friedlander. Sparse optimization with least-squares constraints.
Technical Report, University of British Columbia, Vancouver, 2010.
[35] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion
by a nonlinear successive over-relaxation algorithm. Technical report, Department of Computational and Applied Mathematics, Rice University, 2010.
[36] L. Wu. Fast at-the-money calibration of the LIBOR market model using Lagrangian multipliers. J. Comput. Financ., 6:39-77, 2003.
[37] J. Yang and X. Yuan. An inexact alternating direction method for trace norm regularized least
squares problem. Technical report, Department of Mathematics, Nanjing University, China,
2010.
[38] Z. Zhang and L. Wu. Optimal low-rank approximation to a correlation matrix. Linear Algebra
Appl., 364:161-187, 2003.
3,574 | 4,236 | Image Parsing via Stochastic Scene Grammar
Yibiao Zhao*
Department of Statistics
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Song-Chun Zhu
Department of Statistics and Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
This paper proposes a parsing algorithm for scene understanding which includes
four aspects: computing 3D scene layout, detecting 3D objects (e.g. furniture), detecting 2D faces (windows, doors etc.), and segmenting background. In contrast
to previous scene labeling work that applied discriminative classifiers to pixels
(or super-pixels), we use a generative Stochastic Scene Grammar (SSG). This
grammar represents the compositional structures of visual entities from scene categories, 3D foreground/background, 2D faces, to 1D lines. The grammar includes
three types of production rules and two types of contextual relations. Production
rules: (i) AND rules represent the decomposition of an entity into sub-parts; (ii)
OR rules represent the switching among sub-types of an entity; (iii) SET rules represent an ensemble of visual entities. Contextual relations: (i) Cooperative "+" relations represent positive links between binding entities, such as hinged faces of an object or aligned boxes; (ii) Competitive "-" relations represent negative links between competing entities, such as mutually exclusive boxes. We design an efficient MCMC inference algorithm, namely hierarchical cluster sampling, to
search in the large solution space of scene configurations. The algorithm has two
stages: (i) Clustering: It forms all possible higher-level structures (clusters) from
lower-level entities by production rules and contextual relations. (ii) Sampling: It
jumps between alternative structures (clusters) in each layer of the hierarchy to
find the most probable configuration (represented by a parse tree). In our experiment, we demonstrate the superiority of our algorithm over existing methods on a public dataset. In addition, our approach achieves richer structures in the parse
tree.
1 Introduction
Scene understanding is an important task in neural information processing systems. By analogy
to natural language parsing, we pose the scene understanding problem as parsing an image into a
hierarchical structure of visual entities (in Fig.1(i)) using the Stochastic Scene Grammar (SSG). The
literature of scene parsing can be categorized into two categories: discriminative approaches and
generative approaches.
Discriminative approaches focus on classifying each pixel (or superpixel) to a semantic label
(building, sheep, road, boat etc.) by discriminative Conditional Random Fields (CRFs) model [5][7]. Without an understanding of the scene structure, the pixel-level labeling is insufficient to represent the knowledge of object occlusions, 3D relationships, functional space etc. To address this
problem, geometric descriptions were added to the scene interpretation. Hoiem et al. [1] and Saxena
et al. [8] generated the surface orientation labels and the depth labels by exploring rich geometric features and context information. Gupta et al. [9] posed the 3D objects as blocks and inferred their 3D properties, such as occlusion, exclusion, and stability, in addition to surface orientation labels. They showed that the global 3D prior does help the 2D surface labeling. For the indoor scene, Hedau et al. [2], Wang et al. [3] and Lee et al. [4] adopted different approaches to model the geometric layout of the background and/or foreground objects, and fit their models into Structured SVM (or Latent SVM) settings [10]. The Structured SVM uses features extracted jointly from input-output pairs and maximizes the margin over the structured output space. These algorithms involve hidden variables or structured labels in discriminative training. However, these discriminative approaches lack a general representation of visual vocabulary and a principled approach for exploring the compositional structure.

* http://www.stat.ucla.edu/~ybzhao/research/sceneparsing

Figure 1: A parse tree of geometric parsing result. (i) a parse tree spanning the scene, 3D background, 3D foregrounds, 2D faces, and 1D line segments; (ii) input image and line detection; (iii) geometric parsing result; (iv) reconstruction via line segments.

Figure 2: 3D synthesis of novel views based on the parse tree.
Generative approaches make efforts to model reconfigurable graph structures in generative probabilistic models. Stochastic grammars were used to parse natural languages [11]. Compositional models for hierarchical structure and sharing parts were studied in visual object recognition [12]-[15]. Zhu and Mumford [16] proposed an AND/OR Graph Model to represent compositional structures in vision. However, the expressive power of configurable graph structures comes
at the cost of high computational complexity of searching in a large configuration space. In order to
accelerate the inference, the Adaptor Grammars [17] applied the idea of an "adaptor" (re-using subtrees) that induces dependencies among successive uses. Han and Zhu [18] applied grammar rules, in a greedy manner, to detect rectangular structures in man-made scenes. Porway et al. [19][20] allowed
the Markov chain jumping between competing solutions by a C4 algorithm.
Overview of the approach. In this paper, we parse an image into a hierarchical structure, namely
a parse tree as shown in Fig.1. The parse tree covers a wide spectrum of visual entities, including
scene categories, 3D foreground/background, 2D faces, and 1D line segments. With the low-level
information of the parse tree, we reconstruct the original image by the appearance of line segments,
as shown in Fig.1(iv). With the high-level information of the parse tree, we further recover the 3D
scene by the geometry of 3D background and foreground objects, as shown in Fig.2.
This paper has two major contributions to the scene parsing problems:
(I) A Stochastic Scene Grammar (SSG) is introduced to represent the hierarchical structure of visual
entities. The grammar starts with a single root node (the scene) and ends with a set of terminal nodes
(line segments). In between, we generate all intermediate 3D/2D sub-structures by three types of
production rules and two types of contextual relations, as illustrated in Fig.3. Production rules:
AND, OR, and SET. (i) The AND rule encodes how sub-parts are composed into a larger structure. For example, three hinged faces form a 3D box, four linked line segments form a rectangle,
a background and inside objects form a scene in Fig.3(i); (ii) The SET rule represents an ensemble
of entities, e.g. a set of 3D boxes or a set of 2D regions as in Fig.3(ii); (iii) The OR rule represents a switch between different sub-types, e.g. a 3D foreground and 3D background have several switchable types in Fig.3(iii). Contextual relations: Cooperative "+" and Competitive "-". (i) If the visual entities satisfy a cooperative "+" relation, they tend to bind together, e.g. the hinged faces of a foreground box shown in Fig.3(a). (ii) If entities satisfy a competitive "-" relation, they compete with each other for presence, e.g. two exclusive foreground boxes competing for the same space in Fig.3(b).
(II) A hierarchical cluster sampling algorithm is proposed to perform inference efficiently in the SSG model. The algorithm accelerates the Markov chain search by exploring contextual relations. It has two stages: (i) Clustering. Based on the detected line segments in Fig.1(ii), we form all possible larger structures (clusters). In each layer, the entities are first filtered by the cooperative "+" constraints; they then form a cluster only if they satisfy those constraints, e.g. several faces form a cluster of a box when their edges are hinged tightly. (ii) Sampling. The sampling process makes big reversible jumps by switching among competing sub-structures (e.g. two exclusive boxes).
In summary, the Stochastic Scene Grammar is a general framework to parse a scene with a large
number of geometric configurations. We demonstrate the superiority of our algorithm over existing
methods in the experiment.
2 Stochastic Scene Grammar
The Stochastic Scene Grammar (SSG) is defined as a four-tuple G = (S, V, R, P), where S is a start symbol at the root (scene); V = V^N ∪ V^T, where V^N is a finite set of non-terminal nodes (structures or sub-structures) and V^T is a finite set of terminal nodes (line segments); R = {r : α → β} is a set of production rules, each of which represents a generating process from a parent node α to its child nodes β = Ch_α; and P(r) = P(β|α) is an expansion probability for each production rule (r : α → β). The set of all valid configurations C derived from the production rules is called a language:

L(G) = {C : S →_{{r_i}} C, {r_i} ⊂ R, C ⊆ V^T, P({r_i}) > 0}.
Production rules. We define three types of stochastic production rules, R_AND, R_OR, and R_SET, to represent the structural regularity and flexibility of visual entities. The regularity is enforced by the AND rule and the flexibility is expressed by the OR rule. The SET rule is a mixture of OR and AND rules.
(i) An AND rule (r_AND : A → a · b · c) represents the decomposition of a parent node A into three sub-parts a, b, and c. The probability P(a, b, c|A) measures the compatibility (contextual relations) among the sub-structures a, b, c. As seen in Fig.3(i), the grammar outputs a high probability if the three faces of a 3D box are well hinged, and a low probability if the foreground box lies outside the background.
(ii) An OR rule (r_OR : A → a | b) represents the switching between two sub-types a and b of a parent node A. The probability P(a|A) indicates the preference for one sub-type over others. For the 3D foreground in Fig.3(iii), the three sub-types in the third row represent objects below the horizon. These objects appear with high probabilities. Similarly, for the 3D background in Fig.3(iii), the camera rarely faces the ceiling or the ground; hence, the three sub-types in the middle row have higher probabilities (the higher, the darker). Moreover, OR rules also model the discrete size of entities, which is useful for ruling out extremely large or small entities.
(iii) A SET rule (r_SET : A → {a}^k, k ≥ 0) represents an ensemble of k visual entities. The SET rule is equivalent to a mixture of OR and AND rules (r_SET : A → ∅ | a | a·a | a·a·a | ···). It first chooses a set size k by ORing, and then forms an ensemble of k entities by ANDing. It is worth noting that the OR rule essentially changes the graph topology of the output parse tree by changing its node size k. In this way, as seen in Fig.3(ii), the SET rule generates a set of 3D/2D entities which satisfy some contextual relations.

Figure 3: Three types of production rules: AND (i), SET (ii), OR (iii), and two types of contextual relations: cooperative "+" relations (a), competitive "-" relations (b). The panels illustrate linked lines, hinged faces, an invalid scene layout, aligned faces, aligned boxes, exclusive faces, nested faces, stacked boxes, exclusive boxes, and 3D foreground/background types.
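As a rough illustration of how the three rule types might be represented in code, the sketch below encodes AND, OR, and SET rules as simple data structures. All class and field names are assumptions made for exposition, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AndRule:   # A -> a . b . c : decompose a node into fixed sub-parts
    head: str
    parts: List[str]

@dataclass
class OrRule:    # A -> a | b : switch among sub-types, with branch priors
    head: str
    branches: List[str]
    probs: List[float]

@dataclass
class SetRule:   # A -> {a}^k : an ensemble of k children, k >= 0
    head: str
    element: str
    size_probs: List[float] = field(default_factory=lambda: [0.25, 0.5, 0.25])

# Example: a 3D box is three hinged faces; a scene holds a set of boxes;
# the background layout switches among several sub-types.
box = AndRule("box3d", ["face", "face", "face"])
scene = SetRule("scene", "box3d")
layout = OrRule("background", ["left-wall", "frontal", "right-wall"], [0.3, 0.4, 0.3])
```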
Contextual relations. There are two kinds of contextual relations, cooperative "+" relations and competitive "-" relations, which are involved in the AND and SET rules.
(i) The cooperative "+" relations specify the concurrent patterns in a scene, e.g. hinged faces, nested rectangles, and aligned windows in Fig.3(a). Visual entities satisfying a cooperative "+" relation tend to bind together.
(ii) The competitive "-" relations specify the exclusive patterns in a scene. If entities satisfy competitive "-" relations, they compete with each other for presence. As shown in Fig.3(b), if a 3D box is not contained by its background, or two 2D/3D objects are exclusive with one another, these cases will rarely appear in a solution simultaneously.
Tight structures vs. loose structures: If several visual entities satisfy a cooperative "+" relation, they tend to bind together, and we call them tight structures. These tight structures are grouped into clusters in the early stage of inference (Sect.4). If the entities neither satisfy any cooperative "+" relation nor violate a competitive "-" relation, they may be loosely combined. We call them loose structures, whose combinations are sampled in a later stage of inference (Sect.4). With the three production rules and two contextual relations, SSG is able to handle an enormous number of configurations and large geometric variations, which are the major difficulties in our task.
3 Bayesian formulation of the grammar
We define a posterior distribution for a solution (a parse tree) pt conditioned on an input image I.
This distribution is specified in terms of the statistics defined over the derivation of production rules.
P(pt|I) ∝ P(pt) P(I|pt) = P(S) ∏_{v∈V^N} P(Ch_v | v) ∏_{v∈V^T} P(I | v)    (1)
where I is the input image and pt is the parse tree. The probability derivation represents a generating process of the production rules {r : v → Ch_v} from the start symbol S to the non-terminal nodes v ∈ V^N, and on to the children Ch_v of the non-terminal nodes. The generating process stops at the terminal nodes v ∈ V^T and generates the image I.
Figure 4: Learning to synthesize. Typical samples drawn from the Stochastic Scene Grammar model: (i) from the initial distribution, (ii) with cooperative (+) relations, (iii) with competitive (-) relations, and (iv) with both (+/-) relations.

We use a probabilistic graphical model of an AND/OR graph [12, 17] to formulate our grammar. The graph structure G = (V, E) consists of a set of nodes V and a set of edges E. The edges define a parent-child conditional dependency for each production rule. The posterior distribution of a parse
graph pt is given by a family of Gibbs distributions: P(pt|I; λ) = 1/Z(I; λ) exp{−E(pt|I)}, where Z(I; λ) = Σ_{pt∈Ω} exp{−E(pt|I)} is the partition function, a summation over the solution space Ω.
The energy is decomposed into three potential terms:

E(pt|I) = Σ_{v∈V^OR} E^OR(A^T(Ch_v)) + Σ_{v∈V^AND} E^AND(A^G(Ch_v)) + Σ_{Λ_v⊂Λ_I, v∈V^T} E^T(I(Λ_v))    (2)
(i) The energy for OR nodes is defined over the "type" attributes A^T(Ch_v) of the ORing child nodes. The potential captures the prior statistics on each switching branch: E^OR(A^T(v)) = −log P(v → A^T(v)) = −log{ #(v → A^T(v)) / Σ_{u∈Ch(v)} #(v → u) }. The switching probabilities of the foreground objects and the background layout are shown in Fig.3(iii).
(ii) The energy for AND nodes is defined over the "geometry" attributes A^G(Ch_v) of the ANDing child nodes. They are Markov Random Fields (MRFs) inside a tree structure. We define both "+" relations and "-" relations as E^AND = λ_+ h_+(A^G(Ch_v)) + λ_− h_−(A^G(Ch_v)), where the h(·) are sufficient statistics in the exponential model and the λ are their parameters. For 2D faces as an example, the "+" relation specifies a quadratic distance between their connected joints, h_+(A^G(Ch_v)) = Σ_{a,b∈Ch_v} (X(a) − X(b))², and the "-" relation specifies an overlap rate between their occupied image areas, h_−(A^G(Ch_v)) = (Λ_a ∩ Λ_b)/(Λ_a ∪ Λ_b), a, b ∈ Ch_v.
(iii) The energy for terminal nodes is defined over bottom-up image features I(Λ_v) on the image area Λ_v. The features used in this paper include: (a) surface labels from geometric context [1], (b) a 3D orientation map [21], and (c) the MDL coding length of line segments [20]. This term only captures the features from each node's dominant image area Λ_v, and avoids double counting of shared edges and occluded areas.
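The two AND-node statistics above are simple to compute. The sketch below is a minimal illustration, assuming joints are given as coordinate arrays and image areas as boolean occupancy masks; the weights and function names are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def h_plus(joints):
    """Cooperative '+' statistic: sum of squared distances between the connected
    joints X(a), X(b) of sibling entities (small when faces are tightly hinged)."""
    total = 0.0
    for i in range(len(joints)):
        for j in range(i + 1, len(joints)):
            total += float(np.sum((joints[i] - joints[j]) ** 2))
    return total

def h_minus(area_a, area_b):
    """Competitive '-' statistic: overlap rate (A intersect B) / (A union B)
    between two image areas given as boolean occupancy masks."""
    inter = np.logical_and(area_a, area_b).sum()
    union = np.logical_or(area_a, area_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def e_and(joints, area_a, area_b, lam_plus=1.0, lam_minus=10.0):
    # E_AND = lambda_+ h_+ + lambda_- h_-; the weights here are placeholders.
    return lam_plus * h_plus(joints) + lam_minus * h_minus(area_a, area_b)
```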
We learn the context-sensitive grammar model of SSG from a context-free grammar. Under the
learning framework of minimax entropy [25], we enforce the contextual relations by adding statistical constraints sequentially. The learning process matches the statistics between the current
distribution p and a targeted distribution f by adding the most violated constraint in each iteration.
Fig.4 shows the typical samples drawn from the learned SSG model. With more contextual relations
being added, the sampled configurations become more similar to a real scene, and the statistics of
the learned distribution become closer to those of the target distribution.
4 Inference with hierarchical cluster sampling
We design a hierarchical cluster sampling algorithm to infer the optimal parse tree for the SSG
model. A parse tree specifies a configuration of visual entities. The combination of configurations
makes the solution space expand exponentially, and it is NP-hard to enumerate all parse trees in such
a large space.
Figure 5: The hierarchical cluster sampling process (energy vs. sampling iterations, with snapshots of the parse shown at iterations 0, 50, 100, 150, 200, 250, and 300).
To detect scene components, neither sliding-window (top-down) nor binding (bottom-up) approaches can handle the large geometric variations and the enormous number of configurations. In this paper we combine the bottom-up and top-down processes by exploring the contextual relations defined on the grammar model. The algorithm first performs a bottom-up clustering stage, followed by a top-down sampling stage.
In the clustering stage, we group visual entities into clusters (tight structures) by filtering the entities based on cooperative "+" relations. Starting from the low-level line segments illustrated in Fig.1(iv), we detect substructures, such as 2D faces, aligned and nested 2D faces, 3D boxes, and aligned and stacked 3D boxes (Fig.3(a)), layer by layer. The clusters Cl are formed only if the cooperative "+" constraints are satisfied. The proposal probability for each cluster Cl is defined as
P_+(Cl|I) = ∏_{v∈Cl^OR} P^OR(A^T(v)) ∏_{u,v∈Cl^AND} P_+^AND(A^G(u), A^G(v)) ∏_{v∈Cl^T} P^T(I(Λ_v)).    (3)
Clusters with marginal probabilities below a threshold are pruned. The threshold is learned by a
probably approximately admissible (PAA) bound [23]. The clusters so defined are enumerable.
In the sampling stage, we perform efficient MCMC inference to search in the combinatorial space. In each step, the Markov chain jumps over a cluster (a big set of nodes) given the information of "what goes together" from clustering. The algorithm proposes a new parse tree pt* = pt + Cl*, with the cluster Cl* conditioned on the current parse tree pt. To avoid heavy computation, the proposal probability is defined as
Q(pt*|pt, I) = P_+(Cl*|I) ∏_{u∈Cl*^AND, v∈pt^AND} P_−^AND(A^G(u) | A^G(v)).    (4)
The algorithm gives more weight to proposals with strong bottom-up support and tight "+" relations via P_+(Cl|I), and simultaneously avoids exclusive proposals with "-" relations via P_−^AND(A^G(u)|A^G(v)). All of these probabilities are pre-computed before sampling. The marginal probability of each cluster, P_+(Cl|I), is computed during the clustering stage, and the probability for each pair-wise negative "-" relation, P_−^AND(A^G(u)|A^G(v)), is then calculated and stored in a look-up table. The algorithm also proposes a new parse tree by randomly pruning the current parse tree. By applying the Metropolis-Hastings acceptance probability

α(pt → pt*) = min{1, [Q(pt|pt*, I) / Q(pt*|pt, I)] · [P(pt*|I) / P(pt|I)]},

the Markov chain search satisfies the detailed balance principle, which implies that the Markov chain search will converge to the global optimum, as shown in Fig.5.
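A minimal sketch of one such Metropolis-Hastings move over cluster proposals is given below. The callables propose, log_post, and log_q are assumed interfaces standing in for the cluster proposals of Eq. 4 and the posterior of Eq. 2, not the authors' code.

```python
import math
import random

def mh_cluster_step(pt, propose, log_post, log_q):
    """One Metropolis-Hastings move over cluster proposals. 'pt' is any
    parse-tree state; propose(pt) draws a modified tree (adding or pruning a
    cluster); log_post(pt) is the unnormalized log posterior; and log_q(a, b)
    is the log proposal probability of moving to a from b."""
    pt_new = propose(pt)
    log_alpha = (log_post(pt_new) - log_post(pt)
                 + log_q(pt, pt_new) - log_q(pt_new, pt))
    if math.log(random.random() + 1e-300) < min(0.0, log_alpha):
        return pt_new  # accept the big reversible jump
    return pt          # reject: keep the current parse tree
```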
5 Experiments
We evaluate our algorithm on both the UIUC indoor dataset [2] and our own dataset. The UIUC dataset contains 314 cluttered indoor images, for which the ground truth consists of two label maps of the background layout, with and without foreground objects. Our dataset contains 220 images which cover six
Figure 6: Quantitative performance of 2D face detection (a) and 3D foreground detection (b) in our dataset, shown as ROC curves (true positive rate vs. false negative rate) for the cluster proposals and after inference. (c) An example of the top proposals and the result after inference.
indoor scene categories: bedroom, living room, kitchen, classroom, office room, and corridor. The
dataset is available on the project webpage (see footnote 1). The ground truths are hand-labeled segments of scene
components for each image. Our algorithm usually takes 20s in clustering, 40s in sampling, and 1m
in preparing input features.
Qualitative evaluation: The experimental results in Fig.7 are obtained by applying different production rules to images in our dataset. With the AND rules only, the algorithm obtains reasonable results and successfully recovers some salient 3D foreground objects and 2D faces. With both the AND and SET rules, the cooperative "+" relations help detect some weak visual entities. Fig.8 lists
more experimental results on the UIUC dataset. The proposed algorithm recovers most of the indoor
components. In the last row, we show some challenging images with missing detections and false
positives. Weak line information, ambiguous overlapping objects, salient patterns and clustered
structures would confuse our algorithm.
Quantitative evaluation: We first evaluate the detection of 2D faces and 3D foreground objects in our dataset. The detection error is measured at the pixel level; it indicates how many pixels are correctly labelled. In Fig.6, the red curves show the ROC of 2D face / 3D object detection in the clustering stage. They are computed by thresholding the cluster probabilities given by Eq.3. The blue curves show the ROC of the final detection given a partial parse tree after MCMC inference. They are computed by thresholding the marginal probability given by Eq.2.
our algorithm to four other state-of-the-art indoor scene parsing algorithms, Hoiem et al. [1], Hedau
et al. [2], Wang et al. [3] and Lee et al. [4]. All of these four algorithms used discriminative learning
of Structure-SVM (or Latent-SVM). By applying the production rules and the contextual relations,
our generative grammar model outperforms the others, as shown in Table 1.
6 Conclusion
In this paper, we propose a framework of geometric image parsing using Stochastic Scene Grammar
(SSG). The grammar model is used to represent the compositional structure of visual entities. It
goes beyond traditional probabilistic context-free grammars (PCFGs) in a few aspects: spatial context, production rules for multiple occurrences of objects, and richer image appearance and geometric
properties. We also design a hierarchical cluster sampling algorithm that uses contextual relations
to accelerate the Markov chain search. The SSG model is flexible enough to model other compositional structures by applying different production rules and contextual relations. An interesting extension of our work would be adding semantic labels, such as chair, desk, and shelf, to 3D objects. It would then be interesting to discover new relations between TV and sofa, desk and chair, or bed and night table, as demonstrated in [26].
Acknowledgments
The work is supported by grants from NSF IIS-1018751, NSF CNS-1028381 and ONR MURI
N00014-10-1-0933.
1. http://www.stat.ucla.edu/~ybzhao/research/sceneparsing
Figure 7: Experimental results from applying the AND/OR rules (first row) and all AND/OR/SET rules (second row) in our dataset.
Figure 8: Experimental results on more complex indoor images in the UIUC dataset [2]. The last row shows some challenging images with missing detections and false positives of the proposed algorithm.
Table 1: Segmentation precision compared with Hoiem et al. 2007 [1], Hedau et al. 2009 [2], Wang
et al. 2010 [3] and Lee et al. 2010 [4] in the UIUC dataset [2].
Segmentation precision      [1]      [2]      [3]      [4]      Our method
Without rules               73.5%    78.8%    79.9%    81.4%    80.5%
With 3D "-" constraints     -        -        -        83.8%    84.4%
With AND, OR rules          -        -        -        -        85.1%
With AND, OR, SET rules     -        -        -        -        85.5%
References
[1] Hoiem, D., Efros, A., & Hebert, M. (2007) Recovering Surface Layout from an Image. IJCV 75(1).
[2] Hedau, V., Hoiem, D., & Forsyth, D. (2009) Recovering the spatial layout of cluttered rooms. In ICCV.
[3] Wang, H., Gould, S. & Koller, D. (2010) Discriminative Learning with Latent Variables for Cluttered Indoor
Scene Understanding. ECCV.
[4] Lee, D., Gupta, A. Hebert, M., & Kanade, T. (2010) Estimating Spatial Layout of Rooms using Volumetric
Reasoning about Objects and Surfaces Advances in Neural Information Processing Systems 7, pp. 609-616.
Cambridge, MA: MIT Press.
[5] Shotton, J., & Winn, J. (2007) TextonBoost for Image Understanding: Multi-Class Object Recognition and
Segmentation by Jointly Modeling Texture, Layout, and Context. IJCV
[6] Tu, Z., & Bai, X. (2009) Auto-context and Its Application to High-level Vision Tasks and 3D Brain Image
Segmentation PAMI
[7] Lafferty, J. D., McCallum, A., & Pereira, F. C. N. (2001). Conditional random fields: probabilistic models
for segmenting and labeling sequence data. In ICML (pp. 282-289).
[8] Saxena, A., Sun, M. & Ng, A. (2008) Make3d: Learning 3D scene structure from a single image. PAMI.
[9] Gupta, A., Efros,A., & Hebert, M. (2010) Blocks World Revisited: Image Understanding using Qualitative
Geometry and Mechanics. ECCV.
[10] Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005) Large Margin Methods for Structured and Interdependent Output Variables. JMLR, Vol. 6, pages 1453-1484.
[11] Manning, C., & Schuetze, H. (1999) Foundations of statistical natural language processing. Cambridge:
MIT Press.
[12] Chen, H., Xu, Z., Liu, Z., & Zhu, S. C. (2006) Composite templates for cloth modeling and sketching. In
CVPR (1) pp. 943-950.
[13] Jin, Y., & Geman, S. (2006) Context and hierarchy in a probabilistic image model. In CVPR (2) pp.
2145-2152.
[14] Zhu, L., & Yuille, A. L. (2005) A hierarchical compositional system for rapid object detection. Advances
in Neural Information Processing Systems 7, pp. 609-616. Cambridge, MA: MIT Press.
[15] Fidler, S., & Leonardis, A. (2007) Towards Scalable Representations of Object Categories: Learning a
Hierarchy of Parts. In CVPR.
[16] Zhu, S. C., & Mumford, D. (2006) A stochastic grammar of images. Foundations and Trends in Computer
Graphics and Vision, 2(4), 259-362.
[17] Johnson, M., Griffiths, T. L, & Goldwater, S. (2007) Adaptor Grammars: A Framework for Specifying
Compositional Nonparametric Bayesian Models. In G. Tesauro, D. S. Touretzky and T.K. Leen (eds.), Advances
in Neural Information Processing Systems 7, pp. 609-616. Cambridge, MA: MIT Press.
[18] Han, F., & Zhu, S. C. (2009) Bottom-Up/Top-Down Image Parsing with Attribute Grammar PAMI
[19] Porway, J., & Zhu, S. C. (2010) Hierarchical and Contextual Model for Aerial Image Understanding. Int'l
Journal of Computer Vision, vol.88, no.2, pp 254-283.
[20] Porway, J., & Zhu, S. C. (2011) C4 : Computing Multiple Solutions in Graphical Models by Cluster
Sampling. PAMI, vol.33, no.9, 1713-1727.
[21] Lee, D., Hebert, M., & Kanade, T. (2009) Geometric Reasoning for Single Image Structure Recovery In
CVPR.
[22] Hedau, V., Hoiem, D., & Forsyth, D. (2010). Thinking Inside the Box: Using Appearance Models and
Context Based on Room Geometry. In ECCV.
[23] Felzenszwalb, P.F. (2010) Cascade Object Detection with Deformable Part Models. In CVPR.
[24] Pero, L. D., Guan, J., Brau, E. Schlecht, J. & Barnard, K. (2011) Sampling Bedrooms. In CVPR.
[25] Zhu, S. C., Wu, Y., & Mumford, D. (1997) Minimax Entropy Principle and Its Application to Texture
Modeling. Neural Computation 9(8): 1627-1660.
[26] Yu, L. F., Yeung, S. K., Tang, C. K., Terzopoulos, D., Chan, T. F. & Osher, S. (2011) Make it home:
automatic optimization of furniture arrangement. ACM Transactions on Graphics 30(4): pp.86
3,575 | 4,237 | Query-Aware MCMC
Andrew McCallum
Department of Computer Science
University of Massachusetts
Amherst, MA
[email protected]
Michael Wick
Department of Computer Science
University of Massachusetts
Amherst, MA
[email protected]
Abstract
Traditional approaches to probabilistic inference such as loopy belief propagation
and Gibbs sampling typically compute marginals for all the unobserved variables
in a graphical model. However, in many real-world applications the user's interests are focused on a subset of the variables, specified by a query. In this case it
would be wasteful to uniformly sample, say, one million variables when the query
concerns only ten. In this paper we propose a query-specific approach to MCMC
that accounts for the query variables and their generalized mutual information
with neighboring variables in order to achieve higher computational efficiency.
Surprisingly there has been almost no previous work on query-aware MCMC. We
demonstrate the success of our approach with positive experimental results on a
wide range of graphical models.
1 Introduction
Graphical models are useful for representing relationships between large numbers of random variables in probabilistic models spanning a wide range of applications, including information extraction
and data integration. Exact inference in these models is often computationally intractable due to the
dense dependency structures required in many real world problems, thus there exists a large body
of work on both variational and sampling approximations to inference that help manage large treewidth. More recently, however, inference has become difficult for a different reason: large data. The
proliferation of interconnected data and the desire to model it has given rise to graphical models with
millions or even billions of random variables. Unfortunately, there has been little research devoted
to approximate inference in graphical models that are large in terms of their number of variables.
Other than acquiring more machines and parallelizing inference [1, 2], there have been few options
for coping with this problem.
Fortunately, many inference needs are instigated by queries issued by users interested in particular
random variables. These real-world queries tend to be grounded (i.e., focused on specific data cases).
For example, a funding agency might be interested in the expected impact that funding a particular
research group has on a certain scientific topic. In these situations not all variables are of equal
relevance to the user's query; some variables become observed given the query, others become
statistically independent given the query, and the remaining variables are typically marginalized.
Thus, a user-generated query provides a tremendous amount of information that can be exploited by
an intelligent inference procedure. Unfortunately, traditional approaches to inference such as loopy
belief propagation (BP) and Gibbs sampling are query agnostic in the sense that they fail to take
advantage of this knowledge and treat each variable as equally relevant. Surprisingly, there has been
little research on query specific inference and the only existing approaches focus on loopy BP [3, 4].
In this paper we propose a query-aware approach to Markov chain Monte Carlo (QAM) that exploits
the dependency structure of the graph and the query to achieve faster convergence to the answer. Our
method selects variables for sampling in proportion to their influence on the query variables. We
determine this influence using a computationally tractable generalization of mutual information between the query variables and each variable in the graph. Because our query-specific approach to
inference is based on MCMC, we can provide arbitrarily close approximations to the query answer
while also scaling to graphs whose structure and unrolled factor density would ordinarily preclude
both exact and belief propagation inference methods. This is essential for the method to be deployable in real-world probabilistic databases where even a seemingly innocuous relational algebra
query over a simple fully independent structure can produce an inference problem that is #P-hard
[5]. We demonstrate dramatic improvements over traditional Markov chain Monte Carlo sampling
methods across a wide array of models of diverse structure.
2 Background

2.1 Graphical Models
Graphical models are a flexible framework for capturing statistical relationships between random variables. A factor graph G := ⟨x, ψ⟩ is a bipartite graph consisting of n random variables x = {x_i}_1^n and m factors ψ = {ψ_i}_1^m. Each variable x_i has a domain X_i, and we notate the entire domain space of the random variables x as X with associated σ-algebra Σ. Intuitively, a factor ψ_i is a function that maps a subset of random variable values v_i ∈ X_i to a non-negative real-valued number, thus capturing the compatibility of an assignment to those variables. The factor graph then expresses a probability measure over (X, Σ); the probability of a particular event ω ∈ Σ is given as

π(ω) = (1/Z) Σ_{v∈ω} ∏_{i=1}^{m} ψ_i(v_i),    Z = Σ_{v∈X} ∏_{i=1}^{m} ψ_i(v_i).    (1)

We will assume that π is defined so that marginalization over any subset of the variables is well defined; this is important in the sequel.
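As a toy illustration of Eq. (1), the following sketch enumerates a three-variable binary factor graph and normalizes the product of factors; the factor tables themselves are arbitrary examples chosen for exposition.

```python
import itertools

# A three-variable binary factor graph; the factors are arbitrary examples.
domains = [(0, 1), (0, 1), (0, 1)]
factors = [
    (lambda v: 2.0 if v[0] == v[1] else 1.0, (0, 1)),  # psi_1 over (x0, x1)
    (lambda v: 3.0 if v[0] != v[1] else 1.0, (1, 2)),  # psi_2 over (x1, x2)
]

def unnorm(assign):
    """Unnormalized score: the product of all factors at a full assignment."""
    p = 1.0
    for psi, scope in factors:
        p *= psi(tuple(assign[i] for i in scope))
    return p

Z = sum(unnorm(v) for v in itertools.product(*domains))       # partition function
pi = {v: unnorm(v) / Z for v in itertools.product(*domains)}  # the measure pi
print(sum(pi.values()))  # sanity check: probabilities sum to 1.0
```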
2.2 Queries on Graphical Models
Informally, a query on a graphical model is a request for some quantity of interest that the graphical model is capable of providing. That is, a query is a function mapping the graphical model to an answer set. Inference is required to recover these quantities and produce an answer to the query. While in the general case a query may contain arbitrary functions over the support of a graphical model, for this work we consider queries of the marginal form. That is, a query Q consists of three parts, Q = ⟨x_q, x_l, x_e⟩, where x_q is the set of query variables whose marginal distributions (or MAP configuration) are the answer to the query, x_e is a set of evidence variables whose values are observed, and x_l is the set of latent variables over which one typically marginalizes to obtain the statistically sound answer. Note that this class of queries is remarkably general and includes queries that require expectations over arbitrary functions. We can see this because a function over the graphical model (or a subset of the graphical model) is itself a random variable, and can therefore be included in x_q.¹ More precisely, a query over a graphical model is:

Q(x_q, x_l, x_e, π) = π(x_q | x_e = v_e) = Σ_{v_l} π(x_q, x_l | x_e = v_e)    (2)

where we assume that π is well defined with respect to marginalization over arbitrary subsets of variables.
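A brute-force reading of Eq. (2) can be written directly, clamping the evidence and marginalizing the latent variables. The sketch below is exponential in the number of variables and is meant only to illustrate the semantics of a marginal query; the function signature is an assumption.

```python
import itertools

def query_marginal(domains, factors, q, evidence):
    """Brute-force evaluation of pi(x_q | x_e = v_e), marginalizing all latent
    variables. factors is a list of (function, scope) pairs; evidence maps a
    variable index to its clamped value. Exponential cost: illustration only."""
    dist = {}
    for v in itertools.product(*domains):
        if any(v[i] != val for i, val in evidence.items()):
            continue  # inconsistent with the evidence
        w = 1.0
        for psi, scope in factors:
            w *= psi(tuple(v[i] for i in scope))
        dist[v[q]] = dist.get(v[q], 0.0) + w  # sum over latent assignments
    Z = sum(dist.values())
    return {val: w / Z for val, w in dist.items()}

# e.g. query_marginal(domains, factors, q=0, evidence={2: 1}) with the toy
# factor graph sketched above.
```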
2.3 Markov Chain Monte Carlo
Markov chain Monte Carlo (MCMC) is an important inference method for graphical models where
computing the normalization constant Z is intractable. In particular, for many MCMC schemes such
as Gibbs sampling and more generally Metropolis-Hastings, Z cancels out of the computation for
generating a single sample. MCMC has been successfully used in a wide variety of applications
including information extraction [8], data integration [9], and machine vision [10]. For simplicity,
in this work, we consider Markov chains over discrete state spaces. However, many of the results presented in this paper may be extended to arbitrary state spaces using more general statements with measure-theoretic definitions.

¹ Research in probabilistic databases has demonstrated that a large class of relational algebra queries can be represented as graphical models and answered using statistical queries of this form [6, 7].
Markov chain Monte Carlo produces a sequence of states {s_i}_1^∞ in a state space S according to a transition kernel K : S × S → R^+, which in the discrete case is a stochastic matrix: for all s ∈ S, K(s, ·) is a valid probability measure, and for all s ∈ S, K(·, s) is a measurable function. Since we are concerned with MCMC for inference in graphical models, we will from now on let S := X and use X instead. Under certain conditions the Markov chain is said to be ergodic, and the chain then exhibits two types of convergence. The first is of practical interest: a law-of-large-numbers convergence,
lim_{t→∞} (1/t) Σ_t f(s_t) = ∫_{s∈X} f(s) π(s) ds    (3)

where the s_t are empirical samples from the chain.
The second type of convergence is to the distribution π. At each time step, the Markov chain is in a time-specific distribution over the state space (encoding the probability of being in a particular state at time t). For example, given an initial distribution π_0 over the state space, the probability of being in a next state s′ is the probability over all paths beginning in starting states s with probabilities π_0(s) and transitioning to s′ with probabilities K(s, s′). Thus the time-specific (t = 1) distribution over all states is given by π^(1) = π_0 K; more generally, the distribution at time t is given by π^(t) = π_0 K^t.
Under certain conditions and regardless of the initial distribution, the Markov chain will converge to the stationary (invariant) distribution π. A sufficient (but not necessary) condition for this is to require that the Markov transition kernel obey detailed balance:

π(x) K(x, x′) = π(x′) K(x′, x)    ∀ x, x′ ∈ X    (4)
Convergence of the chain is established when repeated applications of the transition kernel maintain the invariant distribution, π = πK, and convergence is traditionally quantified using the total variation norm:

‖π^(t) − π‖_tv := sup_{A∈Σ} |π^(t)(A) − π(A)| = (1/2) Σ_{x∈X} |π^(t)(x) − π(x)|    (5)
The rate at which a Markov chain converges to the stationary distribution is proportional to the
spectral gap of the transition kernel, and so there exists a large body of literature proving bounds on
the second eigenvalue.
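The following sketch illustrates this convergence numerically for a two-state kernel, tracking the total variation distance of Eq. (5) as π^(t) = π_0 K^t approaches the stationary distribution; the kernel values are arbitrary examples.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation norm of Eq. (5) for discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# A two-state illustrative kernel; pi^{(t)} = pi_0 K^t converges to pi.
K = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # stochastic matrix: rows sum to one
pi0 = np.array([1.0, 0.0])          # initial distribution
pi_stat = np.array([2 / 3, 1 / 3])  # solves pi = pi K for this kernel
dist = pi0.copy()
for t in range(1, 6):
    dist = dist @ K
    print(t, tv_distance(dist, pi_stat))  # shrinking TV error at each step
```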
2.4 MCMC Inference in Graphical Models
MCMC is used for inference in graphical models by constructing a Markov chain with invariant distribution π (given by the graphical model). One particularly successful approach is the Metropolis-Hastings (MH) algorithm. The idea is to devise a proposal distribution T : X × X → [0, 1] from which it is always tractable to sample a next state s′ given a current state s. Then, the proposed state s′ is accepted with probability function A:

A(s, s′) = min{1, [π(s′) T(s′, s)] / [π(s) T(s, s′)]}    (6)
The resulting transition kernel K_MH is given by

K_MH(s, s′) = T(s, s′)                                            if A(s, s′) ≥ 1, s ≠ s′
K_MH(s, s′) = T(s, s′) A(s, s′)                                    if A(s, s′) < 1, s ≠ s′
K_MH(s, s′) = T(s, s′) + Σ_{r : A(s,r)<1} T(s, r)(1 − A(s, r))      if s = s′
    (7)
Further, observe that in the computation of A, the partition function Z cancels, as do factors outside the Markov blanket of the variables that have changed. As a result, generating samples from
graphical models with Metropolis-Hastings is usually inexpensive.
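Putting Eqs. (6) and (7) together for a single-site proposal gives the following sketch; it assumes strictly positive unnormalized scores and a symmetric value proposal so that T cancels, and the function names are illustrative, not a library API.

```python
import random

def mh_single_site(state, domains, unnorm, p_select, steps=1000):
    """Single-site Metropolis-Hastings sketch: select variable i ~ p_select,
    propose a uniform new value (a symmetric proposal, so T cancels in Eq. 6),
    and accept with probability min(1, pi(s')/pi(s)). Only the unnormalized
    score is needed, so Z is never computed; scores are assumed positive."""
    state = list(state)
    score = unnorm(state)
    for _ in range(steps):
        i = random.choices(range(len(state)), weights=p_select)[0]
        old = state[i]
        state[i] = random.choice(domains[i])
        new_score = unnorm(state)
        if random.random() < min(1.0, new_score / score):
            score = new_score  # accept
        else:
            state[i] = old     # reject: restore the previous value
    return tuple(state)
```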
3 Query Specific MCMC
Given a query Q = ⟨x_q, x_l, x_e⟩ and a probability distribution π encoded by a graphical model G with factors ψ and random variables x, the problem of query-specific inference is to return the highest-fidelity answer to Q given a possible time budget. We can make this statement more precise by defining "highest fidelity" as closest to the truth in total variation distance.
Our approach to query-specific inference is based on the Metropolis-Hastings algorithm described in Section 2.4. A simple yet generic case of the Metropolis-Hastings proposal distribution T (one that has been quite successful in practice) employs the following steps:
1: Beginning in a current state s, select a random variable x_i ∈ x from a probability distribution p over the indices of the variables (1, 2, ..., n).
2: Sample a new value for x_i according to some distribution q(X_i) over that variable's domain; leave all other variables unchanged and return the new state s′.
In brief, this strategy arrives at a new state s′ from a current state s by simply updating the value of one variable at a time. In traditional MCMC inference, where the marginal distributions of all
variables are of equal interest, the variables are usually sampled in a deterministic order, or selected
uniformly at random; that is, p(i) = 1/n induces a uniform distribution over the integers 1, 2, ..., n.
However, given a query Q, it is reasonable to choose a p that more frequently selects the query
variables for sampling. Clearly, the query variable marginals depend on the remaining latent variables, so we must trade off sampling between query and non-query variables. A key observation is
that not all latent variables influence the query variables equally. A fundamental question raised and
addressed in this paper is: how do we pick a variable selection distribution p for a query Q to obtain
the highest fidelity answer under a finite time budget. We propose to select variables based on their
influence on the query variable according to the graphical model.
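Given per-variable influence scores, one simple way to realize such a selection distribution p is sketched below; the normalization scheme is an illustrative assumption, not the paper's exact choice.

```python
def query_aware_selection(influence, temperature=1.0):
    """Turn per-variable influence scores on the query into the variable
    selection distribution p used by the single-site sampler above. Variables
    with zero influence are never sampled under this scheme."""
    weights = [max(s, 0.0) ** temperature for s in influence]
    total = sum(weights)
    return [w / total for w in weights]

# e.g. p = query_aware_selection([0.9, 0.05, 0.0, 0.3]); pass p as p_select.
```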
We will now formalize a broad definition of influence by generalizing mutual information. The mutual information I(x, y) = Σ_{x,y} π(x, y) log[π(x, y) / (π(x) π(y))] between two random variables measures the strength of their dependence. It is easy to check that this quantity is the KL divergence between the joint distribution of the variables and the product of the marginals: I(x, y) = KL(π(x, y) ‖ π(x) π(y)). In this sense, mutual information measures dependence as a "distance" between the full joint distribution and its independent approximation. Clearly, if x and y are independent then this distance is zero, and so is their mutual information. We produce a generalization of mutual information, which we term the influence, by substituting an arbitrary divergence function f in place of the KL divergence.
Definition 1 (Influence). Let x and y be two random variables with joint distribution π(x, y) and
marginal distributions π(x), π(y). Let f(π_1(·), π_2(·)) ↦ r, r ∈ R⁺, be a non-negative real-valued
divergence between probability distributions. The influence ι(x, y) between x and y is

\iota(x, y) := f\big(\pi(x, y),\, \pi(x)\,\pi(y)\big)    (8)
If we let f be the KL divergence then ι becomes the mutual information; however, because MCMC
convergence is more commonly assessed with the total variation norm, we define an influence metric
based on this choice for f. In particular, we define \iota_{tv}(x, y) := \|\pi(x, y) - \pi(x)\,\pi(y)\|_{tv}.
As we will now show, the total variation influence (between the query variable and the latent variables) has the important property that it is exactly the error incurred from ignoring a single latent
variable when sampling values for x_q. For example, suppose we design an approximate query specific sampler that saves computational resources by ignoring a particular random variable x_l. Then
the variable x_l will remain at its burned-in value x_l = v_l for the duration of query specific sampling.
As a result, the chain will converge to the invariant distribution π(· | x_l = v_l). If we use this conditional
distribution to approximate the marginal, then the expected error we incur is exactly the influence
score under total variation distance.
Proposition 1. If p(i) = 1(i ≠ l) · 1/(n−1) induces an MH kernel that neglects variable x_l, then the
expected total variation error ε_tv of the resulting MH sampling procedure under the model is the
total variation influence ι_tv.
Proof: The resulting chain has stationary distribution π(x_q | x_l = v_l). The expected error is:

E_\pi[\epsilon_{tv}] = \sum_{v_l \in X_l} \pi(x_l{=}v_l)\, \big\|\pi(x_q \mid x_l{=}v_l) - \pi^{(t)}(x_q)\big\|_{tv}
= \sum_{v_l \in X_l} \pi(x_l{=}v_l)\, \frac{1}{2} \sum_{v_q \in X_q} \big|\pi(x_q \mid x_l{=}v_l) - \pi^{(t)}(x_q)\big|
= \frac{1}{2} \sum_{v_l \in X_l} \sum_{v_q \in X_q} \big|\pi(x_q \mid x_l{=}v_l)\,\pi(x_l{=}v_l) - \pi^{(t)}(x_q)\,\pi(x_l{=}v_l)\big|
= \frac{1}{2} \sum_{v_l \in X_l} \sum_{v_q \in X_q} \big|\pi(x_q, x_l) - \pi^{(t)}(x_q)\,\pi(x_l)\big| = \iota_{tv}(x_q, x_l).
This demonstrates that the expected cost of not sampling a variable is exactly that variable's influence
on the query variable. We are now justified in selecting variables proportional to their influence,
to reduce the error they exert on the query marginal. For example, if a variable's influence score is
zero, this also means that there is no cost incurred from neglecting that variable (if a query renders
variables statistically independent of the query variable, then these variables will be correctly ignored
under the influence-based sampling procedure).
Note, however, that computing either ι_tv or the mutual information is as difficult as inference itself.
Thus, we define a computationally efficient variant of influence that we term the influence trail score.
The idea is to approximate the true influence as a product of factors along an active trail in the graph.
Definition 2 (Influence Trail Score). Let λ = (x_0, x_1, …, x_r) be an active trail between the query
variable x_q and x_i, where x_0 = x_q and x_r = x_i. Let π(x_i, x_j) be the approximate joint distribution
between x_i and x_j according only to the mutual factors in their scopes, and let π(x_i) = Σ_{x_j} π(x_i, x_j)
be a marginal distribution. The influence trail score with respect to an active trail λ is

\iota_\lambda(x_q, x_i) := \prod_{j=1}^{r-1} f\big(\pi_j(x_j, x_{j+1}),\, \pi_j(x_j)\,\pi_j(x_{j+1})\big)    (9)
The influence trail score is efficient to compute because all factors and variables outside the mutual
scopes of each variable pair are ignored. In the experimental results we evaluate both the influence
and the influence trail and find that they perform similarly and outperform competing graph-based
heuristics for determining p.
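A sketch of Equation 9 for tabular pairwise joints, reusing the TV divergence above; the argument layout is an assumption:

```python
import numpy as np

def trail_score(pairwise_joints, divergence):
    """Influence trail score: the product of pairwise divergences along
    an active trail.  pairwise_joints[j] is the approximate joint table
    of (x_j, x_{j+1}) built from only the factors the two variables share."""
    score = 1.0
    for joint in pairwise_joints:
        joint = joint / joint.sum()
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        score *= divergence(joint, px * py)
    return score

# e.g., with the TV divergence:
# trail_score(joints, lambda j, q: 0.5 * np.abs(j - q).sum())
```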
While in general it is difficult to state uniformly that one choice of p converges faster than another
for all models and queries, we present the following analysis showing that even an approximate
query-aware sampler can exhibit faster finite-time convergence progress than an exact sampler. Let
K be an exact MCMC kernel that converges to the correct stationary distribution and let L be an
approximate kernel that exclusively samples the query variable and thus converges to the conditional
distribution of the query variable. We now assume an ergodic scheme for the two samplers where
the convergence rates are geometrically bounded from above and below by constants φ_l and φ_k:

\|\pi_0 L^t - \pi_K\|_{tv} = \Theta(\phi_l^t)    (10)
\|\pi_0 K^t - \pi_K\|_{tv} = \Theta(\phi_k^t)    (11)
Because L only samples the query variable, the dimensionality of L's state space is much smaller
than K's state space, and we will assume that L converges more quickly to its own invariant distribution; that is, φ_l ≪ φ_k. Extrapolating Proposition 1, we know that the error incurred from neglecting
to sample the latent variables is the influence ι_tv between the joint distribution of the latent variables
and the query variable. Observe that L is simultaneously making progress towards two distributions:
its own invariant distribution, and the invariant distribution of K plus an error term. If the error term
ι_tv is sufficiently small then we can write the following inequality:

\phi_l^t + \iota_{tv} \le \phi_k^t    (12)
We want this inequality to hold for as many time steps as possible. The amount of time that L (the
query-only kernel) is closer to K's stationary distribution can be determined by solving for t,
yielding the fixed point iteration:

t = \frac{\log(\phi_l^t + \iota_{tv})}{\log \phi_k}    (13)

The one-step approximation yields a non-trivial but conservative bound: t ≥ log(φ_l + ι_tv) / log φ_k. Thus, for a
sufficiently small error, t can be positive. This implies that the strategy of exclusively sampling the
query variables can achieve faster short-term convergence to the correct invariant distribution, even
though asymptotic convergence is to an incorrect invariant distribution. Indeed, we observe this
phenomenon experimentally in Section 5.
4
Related Work
Despite the prevalence of probabilistic queries, the machine learning and statistics communities
have devoted little attention to the problem of query-specific inference. The only existing papers
of which we are aware both build on loopy belief propagation (LBP) [3, 4]; however, for many inference
problems, MCMC is a preferred alternative to LBP because it is (1) able to obtain arbitrarily close
approximations to the true marginals and (2) better able to scale to models with large or real-valued
variable domains that are necessary for state-of-the-art results in data integration [9], information
extraction [8], and deep vision tasks with many latent layers [11].
To the best of our knowledge, this paper is one of the first to propose a query-aware sampling
strategy for MCMC in either the machine learning or statistics community. The decayed MCMC
algorithm for filtering [12] can be thought of as a special case of our method where the model is a
linear chain, and the query is for the last variable in the sequence. That paper proves finite mixing-time bounds on infinitely long sequences. In contrast, we are interested in arbitrarily shaped graphs
and in the practical consideration of large finite models. MCMC has also recently been deployed
in probabilistic databases [13] where it is possible to incorporate the deterministic constraints of a
relational algebra query directly into a Metropolis-Hastings proposal distribution to obtain quicker
answers [14, 15].
A related idea from statistics is data augmentation (or auxiliary variable) approaches to sampling
where latent variables are artificially introduced into the model to improve convergence of the original variables (e.g., Swendsen-Wang [16] and slice sampling [17]). In this setting, we see QAM
as a way of determining a more sophisticated variable selection strategy that can balance sampling
efforts between the original and auxiliary variables.
5
Experiments
In this section we demonstrate the effectiveness and broad applicability of query-aware MCMC
(QAM) by showing superior convergence rates to the query marginals across a diverse range
of graphical models that vary widely in structure. In our experiments, we generate a wide range
of graphical models and evaluate the convergence of each chain exactly, avoiding noisy empirical
sampling error by performing exact computations with full transition kernels.
We evaluate the following query-aware samplers:
1. Polynomial graph distance 1 (QAM-Poly1): p(x_i) ∝ d(x_q, x_i)^{-N}, where d is the shortest-path distance;
2. Influence, exact mutual information (QAM-MI): p(x_i) ∝ I(x_q, x_i);
3. Influence, total variation distance (QAM-TV): p(x_i) ∝ ι_tv(x_q, x_i);
4. Influence trail score, total variation (QAM-TV-G): p(x_i) set according to Equation 9;
and two baseline samplers:
5. Traditional Metropolis-Hastings (Uniform): p(x_i) ∝ 1;
6. Query-only Metropolis-Hastings (qo): p(x_i) = 1(x_q = x_i);
on six different graphical models with varying parameters generated from a Beta(2,2) distribution
(this ensures an interesting dynamic range over the event space):
1. Independent: each variable is statistically independent;
2. Linear chain: a linear-chain CRF (used in NLP and information extraction);
3. Hoop: same as the linear chain, plus an additional factor to close the loop;
4. Grid: an Ising model, used in statistical physics and vision;
5. Fully connected PW: each pair of variables shares a pairwise factor;
6. Fully connected: every variable is connected through a single factor.
Mirroring many real-world conditional random field applications, the non-unary factors (those connecting more than one variable) are generated from the same factor template and thus share the same
parameters (each generated from log(Beta(2,2))). Each variable has a corresponding observation
factor whose parameters are not shared and are randomly set according to log(Beta(2,2))/2.
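A sketch of this parameter-tying scheme with NumPy; the array shapes and binary domains are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, arity = 9, 2

# One shared template for every non-unary factor (parameter tying) ...
tied_log_factor = np.log(rng.beta(2, 2, size=(arity, arity)))
# ... plus an untied, halved observation factor per variable.
obs_log_factors = [np.log(rng.beta(2, 2, size=arity)) / 2 for _ in range(n_vars)]
```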
For our experiments we randomly generate ten parameter settings for each of the six model types
and measure convergence of the six chains to the single-variable marginal query π(x_q) for each
variable in each of the sixty realized models. Convergence is measured using the total variation
norm: \|\pi(x_q) - \pi^{(t)}(x_q)\|_{tv}. In this set of experiments we do not wish to introduce empirical sampling error, so we generate models with nine variables per graph, enabling us to (1) exactly compute
the answer to the marginal query, (2) fully construct the 2^n × 2^n transition matrices, and (3) algebraically compute the time-t distributions for each chain, \pi^{(t)} = \pi_0 K_{MH}^t, given an initial uniform
distribution \pi_0(x) = 2^{-9}.
We display marginal convergence results in Figure 1. Generally, all the query specific sampling
chains converge more quickly than the uniform baseline in the early iterations, across every model.
It is interesting to compare the convergence rates of the various QAM approaches at different time
stages. The query-only and mutual information chains exhibit the most rapid convergence in the early
stages, with the query-only chain converging to an incorrect distribution and the mutual
information chain converging slowly during the later time stages. While QAM-TV exhibits convergence
patterns similar to the polynomial chains, QAM-TV slightly outperforms them in the more
connected models (grid and fully-connected-PW). Finally, notice that the influence-trail variant of
total variation influence converges at a rate similar to the actual total variation influence, and in some
cases converges more quickly (e.g., in the grid and the latter stages of the full pairwise model).
In the next experiment, we demonstrate how the size of the graphical model affects convergence of the various chains. In particular, we plot the convergence of all chains on six different hoop-structured models containing three, four, six, eight, ten, and twelve variables (Figure 2). Again, the results are averaged over ten randomly generated graphs, but this time we plot
the advantage over the uniform kernel. That is, we measure the difference in convergence rates

\|\pi - \pi_0 K_{\text{Unif}}^t\|_{tv} - \|\pi - \pi_0 K_{\text{QAM}}^t\|_{tv},

so that points above the line y = 0 mean the QAM is closer to
the answer than the uniform baseline, and points below the line mean the QAM is further from the
answer. As expected, increasing the number of variables in the graph increases the opportunities for
query specific sampling and thus increases QAM's advantage over traditional MCMC.
6
Conclusion
In this paper we presented a query-aware approach to MCMC, motivated by the need to answer
queries over large-scale graphical models. We found that the query-aware sampling methods outperform the traditional Metropolis-Hastings sampler across all models in the early time steps. Further,
as the number of variables in the models increases, the query-aware samplers not only outperform the
baseline for longer periods of time, but also exhibit more dramatic convergence-rate improvements.
Thus, query specific sampling is a promising approach for approximately answering queries on real-world probabilistic databases (and relational models) that contain billions of variables. Successfully
deploying QAM in this setting will require algorithms for efficiently constructing and sampling the
variable selection distribution. An exciting area of future work is to combine query specific sampling with adaptive MCMC techniques, allowing the kernel to evolve in response to the underlying
distribution. Further, more rapid convergence could be obtained by mixing the kernels in a way
that combines the strengths of each: some kernels converge quickly in the early stages of sampling
while others converge more quickly in the later stages; together they could provide a very powerful query specific inference tool. There has been little theoretical work on analyzing marginal
convergence of MCMC chains, and future work can help develop these tools.
[Figure 1: six panels (Independent, Linear Chain, Hoop, Grid, Fully Connected (PW), Fully Connected) plotting total variation distance versus time for the Uniform, Query-only, QAM-Poly1, QAM-MI, QAM-TV, and QAM-TV-G chains.]
Figure 1: Convergence to the query marginals of the stationary distribution from an initial uniform
distribution.
[Figure 2: six panels (3, 4, 6, 8, 10, and 12 variables) plotting improvement over the uniform kernel versus time for the Uniform, Query-only, QAM-Poly1, QAM-Poly2, QAM-MI, QAM-TV, and QAM-TV-G chains.]
Figure 2: Improvement over uniform p as the number of variables increases. Above the line y = 0
is an improvement in marginal convergence, and below is worse than the baseline. As the number of
variables increases, the improvements of the query specific techniques increase.
7
Acknowledgements
This work was supported in part by the Center for Intelligent Information Retrieval, in part by
IARPA via DoI/NBC contract #D11PC20152, in part by IARPA and AFRL contract #FA8650-10-C-7060, and in part by UPenn NSF medium IIS-0803847. The U.S. Government is authorized to
reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material
are the authors' and do not necessarily reflect those of the sponsor. The authors would also like to
thank Alexandre Passos and Benjamin Marlin for useful discussion.
References
[1] Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M.
Hellerstein. Graphlab: A new parallel framework for machine learning. In Conference on
Uncertainty in Artificial Intelligence (UAI), Catalina Island, California, July 2010.
[2] Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. Large-scale
cross-document coreference using distributed inference and hierarchical models. In Association for Computational Linguistics: Human Language Technologies (ACL HLT), 2011.
[3] Arthur Choi and Adnan Darwiche. Focusing generalizations of belief propagation on targeted
queries. In Association for the Advancement of Artificial Intelligence (AAAI), 2008.
[4] Anton Chechetka and Carlos Guestrin. Focused belief propagation for query-specific inference.
In International Conference on Artificial Intelligence and Statistics (AI STATS), 2010.
[5] Nilesh Dalvi and Dan Suciu. The dichotomy of conjunctive queries on probabilistic structures.
Technical Report 0612102, University of Washington, 2007.
[6] Prithviraj Sen, Amol Deshpande, and Lise Getoor. Exploiting shared correlations in probabilistic databases. In Very Large Data Bases (VLDB), 2008.
[7] Daisy Zhe Wang, Eirlinaios Michelakis, Minos Garofalakis, and Joseph M. Hellerstein.
BayesStore: Managing large, uncertain data repositories with probabilistic graphical models.
In Very Large Data Bases (VLDB), 2008.
[8] Hoifung Poon and Pedro Domingos. Joint inference in information extraction. In Association
for the Advancement of Artificial Intelligence, pages 913–918, Vancouver, Canada, 2007.
[9] Aron Culotta, Michael Wick, Robert Hall, and Andrew McCallum. First-order probabilistic
models for coreference resolution. In Human Language Technology Conf. of the North American Chapter of the Assoc. of Computational Linguistics (HLT/NAACL), pages 81–88, 2007.
[10] Adrian Barbu and Song Chun Zhu. Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities. IEEE Trans. Pattern Anal. Mach. Intell., 27(8):1239–1253, 2005.
[11] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann machines. In International
Conference on Artificial Intelligence and Statistics (AI STATS), 2009.
[12] Bhaskara Marthi, Hanna Pasula, Stuart Russell, and Yuval Peres. Decayed MCMC filtering.
In Conference on Uncertainty in Artificial Intelligence (UAI), pages 319–326, 2002.
[13] Michael Wick, Andrew McCallum, and Gerome Miklau. Scalable probabilistic databases with
factor graphs and MCMC. In Very Large Data Bases (VLDB), pages 794–804, 2010.
[14] Michael Wick, Andrew McCallum, and Gerome Miklau. Representing uncertainty in probabilistic databases with scalable factor graphs. Master's thesis, University of Massachusetts,
proposed September 2008 and submitted April 2009.
[15] Daisy Zhe Wang, Michael J. Franklin, Minos Garofalakis, Joseph M. Hellerstein, and
Michael L. Wick. Hybrid in-database inference for declarative information extraction. In Proceedings of the 2011 International Conference on Management of Data, SIGMOD '11, pages
517–528, New York, NY, USA, 2011. ACM.
[16] R. H. Swendsen and J. S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations.
Phys. Rev. Lett., 58(2):86–88, 1987.
[17] Radford Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.
Clustering via Dirichlet Process Mixture Models for
Portable Skill Discovery
Scott Niekum
Andrew G. Barto
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
{sniekum,barto}@cs.umass.edu
Abstract
Skill discovery algorithms in reinforcement learning typically identify single
states or regions in state space that correspond to task-specific subgoals. However,
such methods do not directly address the question of how many distinct skills are
appropriate for solving the tasks that the agent faces. This can be highly inefficient when many identified subgoals correspond to the same underlying skill,
but are all used individually as skill goals. Furthermore, skills created in this
manner are often only transferable to tasks that share identical state spaces, since
corresponding subgoals across tasks are not merged into a single skill goal. We
show that these problems can be overcome by clustering subgoal data defined in
an agent-space and using the resulting clusters as templates for skill termination
conditions. Clustering via a Dirichlet process mixture model is used to discover a
minimal, sufficient collection of portable skills.
1
Introduction
Reinforcement learning (RL) is often used to solve single tasks for which it is tractable to learn a
good policy with minimal initial knowledge. However, many real-world problems cannot be solved
in this fashion, motivating recent research on transfer and hierarchical RL methods that allow knowledge to be generalized to new problems and encapsulated in modular skills. Although skills have
been shown to improve agent learning performance [2], representational power [10], and adaptation
to non-stationarity [3], to the best of our knowledge, current methods lack the ability to automatically
discover skills that are transferable to related state spaces and novel tasks, especially in continuous
domains.
Skill discovery algorithms in reinforcement learning typically identify single states or regions in
state space that correspond to task-specific subgoals. However, such methods do not directly address
the question of how many distinct skills are appropriate for solving the tasks that the agent faces.
This can be highly inefficient when many identified subgoals correspond to the same underlying
skill, but are all used individually as skill goals. For example, opening a door ought to be the same
skill whether an agent is one inch or two inches away from the door, or whether the door is red
or blue; making each possible configuration a separate skill would be unwise. Furthermore, skills
created in this manner are often only transferable to tasks that share identical state spaces, since
corresponding subgoals across tasks are not merged into a single skill goal.
We show that these problems can be overcome by collecting subgoal data from a series of tasks
and clustering it in an agent-space [9], a shared feature space across multiple tasks. The resulting
clusters generalize subgoals within and across tasks and can be used as templates for portable skill
termination conditions. Clustering also allows the creation of skill termination conditions in a datadriven way that makes minimal assumptions and can be tailored to the domain through a careful
1
choice of clustering algorithm. Additionally, this framework extends the utility of single-state subgoal discovery algorithms to continuous domains, in which the agent may never see the same state
twice. We argue that clustering based on a Dirichlet process mixture model is appropriate in the
general case when little is known about the nature or number of skills needed in a domain. Experiments in a continuous domain demonstrate the utility of this approach and illustrate how it may be
useful even when traditional subgoal discovery methods are infeasible.
2
Background and Related Work
2.1
Reinforcement learning
The RL paradigm [20] usually models a problem faced by the agent as a Markov decision process
(MDP), expressed as M = ⟨S, A, P, R⟩, where S is the set of environment states the agent can
observe, A is the set of actions that the agent can execute, P(s, a, s′) is the probability that the
environment transitions to s′ ∈ S when action a ∈ A is taken in state s ∈ S, and R(s, a, s′) is the
expected scalar reward given to the agent when the environment transitions to state s′ from s after
the agent takes action a.
2.2
Options
The options framework [19] models skills as temporally extended actions that can be invoked like
primitive actions. An option o consists of an option policy π_o : S × A → [0, 1], giving the probability
of taking action a in state s; an initiation set I_o ⊆ S, giving the set of states from which the option
can be invoked; and a termination condition β_o : S → [0, 1], giving the probability that option
execution will terminate upon reaching state s. In this paper, termination conditions are binary, so
that we can define a termination set of states, T_o ⊆ S, in which option execution always terminates.
2.3
Agent-spaces
To facilitate option transfer across multiple tasks, Konidaris and Barto [9] propose separating problems into two representations. The first is a problem-space representation which is Markov for the
current task being faced by the agent, but may change across tasks; this is the typical formulation of
a problem in RL. The second is an agent-space representation, which is identical across all tasks to
be faced by the agent, but may not be Markov for any particular task. An agent-space is often a set
of agent-centric features, like a robot's sensor readings, that are present and retain semantics across
tasks. If the agent represents its top-level policy in a task-specific problem-space but represents its
options in an agent-space, the task at hand will always be Markov while allowing the options to
transfer between tasks.
Agent-spaces enable the transfer of an option?s policy between tasks, but are based on the assumption
that this policy was learned under an option termination set that is portable; the termination set must
accurately reflect how the goal of the skill varies across tasks. Previous work using agent-spaces has
produced portable option policies when the termination sets were hand-coded; our contribution is the
automatic discovery of portable termination sets, so that such skills can be aquired autonomously.
2.4
Subgoal discovery and skill creation
The simplest subgoal discovery algorithms analyze reward statistics or state visitation frequencies
to discover subgoal states [3]. Graph-based algorithms [18, 11] search for "bottleneck" states on
state transition graphs via clustering and other types of analysis. Algorithms based on intrinsic
motivation have included novelty metrics [17] and hand-coded salience functions [2]. Skill chaining
[10] discovers subgoals by "chaining" together options, in which the termination set of one option is
the empirically determined initiation set of the next option in the chain. HASSLE [1] clusters similar
regions of state space to identify single-task subgoals. All of these methods compute subgoals that
may be inefficient or non-portable if used alone as skill targets, but that can be used as data for our
algorithm to find portable options.
Other algorithms analyze tasks to create skills directly, rather than search for subgoals. VISA [7]
creates skills to control factored state variables in tasks with sparse causal graphs. PolicyBlocks
2
[15] looks for policy similarities that can be used as templates for skills. The SKILLS algorithm
[21] attempts to minimize description length of policies while preserving a performance metric.
However, these methods only exhibit transfer to identical state spaces and often rely on discrete
state representations. Related work has also used clustering to determine which of a set of MDPs an
agent is currently facing, but does not address the need for skills within a single MDP [22].
2.5
Dirichlet process mixture models
Many popular clustering algorithms require the number of data clusters to be known a priori or use
heuristics to choose an approximate number. By contrast, Dirichlet process mixture models (DPMMs) provide a non-parametric Bayesian framework to describe distributions over mixture models
with an infinite number of mixture components. A Dirichlet process (DP), parameterized by a base
distribution G0 and a concentration parameter ?, is used as a prior over the distribution G of mixture
components. For data points X, mixture component parameters ?, and a parameterized distribution
F , the DPMM can be written as [13]:
G|?, G0 ? DP (?, G0 )
?i |G ? G
xi |?i ? F (?i ).
One type of DPMM can be implemented as an infinite Gaussian mixture model (IGMM) in which
all parameters are inferred from the data [16]. Gibbs sampling is used to generate samples from the
posterior distribution of the IGMM and adaptive rejection sampling [4] is used for the probabilities
which are not in a standard form. After a "burn-in" period, unbiased samples from the posterior
distribution of the IGMM can be drawn from the Gibbs sampler. A hard clustering can be found
by drawing many such samples and using the sample with the highest joint likelihood of the class
indicator variables. We use a modified IGMM implementation written by M. Mandel.¹
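For readers who want to experiment with DPMM clustering without writing a Gibbs sampler, scikit-learn's variational BayesianGaussianMixture with a Dirichlet-process prior is a convenient stand-in; note that this is not the Gibbs-sampled IGMM used in this paper, and the data below is a placeholder:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Variational approximation to a DP mixture; the truncation level caps
# the number of components, and unused components get negligible weight.
dpmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
)
X = np.random.default_rng(0).normal(size=(500, 2))  # placeholder subgoal data
labels = dpmm.fit(X).predict(X)
print(len(np.unique(labels)), "occupied components")
```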
3
Latent Skill Discovery
To aid thinking about our algorithm, subgoals can be viewed as samples from the termination sets
of latent options that are implicitly defined by the distribution of tasks, the chosen subgoal discovery algorithm, and the agent definition. Specifically, we define the latent options as those whose
termination sets contain all of the sampled subgoal data and that maximize the expected discounted
cumulative reward when used by a particular agent on a distribution of tasks (assuming optimal option policies given the termination sets). When many such maximizing sets exist, we assume that
the latent options are one particular set from amongst these choices; for discussion, the particular
choice does not matter, but it is important to have a single set.
Therefore, our goal is to recover the termination sets of the latent options from the sampled subgoal
data; these can be used to construct a library of options that approximate the latent options and have
the following desirable properties:
? Recall: The termination sets of the library options should contain a maximal portion of the
termination sets of the latent options.
? Precision: The termination sets of the library options should contain minimal regions that
are not in the termination sets of the latent options.
? Separability: The termination set of each library option should be entirely contained within
the termination set of some single latent option.
? Minimality: A minimal number of options should be defined, while still meeting the above
criteria. Ideally, this will be equal to the number of latent options.
Most of these properties are straightforward, but the importance of separability should be emphasized. Imagine an agent that faces a distribution of tasks with several latent options that need to be
sequenced in various ways for each task. If a clustering breaks each latent option termination set
into two options (minimality is violated, but separability is preserved), some exploration inefficiency
¹ Source code can be found at http://mr-pc.org/work/
may be introduced, but each option will reliably terminate in a skill-appropriate state. However, if
a clustering combines the termination sets of two latent options into that of a single library option,
the library option becomes unreliable; when the functionality of a single latent option is needed, the
combined option may exhibit behavior corresponding to either.
We cannot reason directly about latent options since we do not know what they are a priori, so
we must estimate them with respect to the above constraints from sampled subgoal data alone. We
assume that subgoal samples corresponding to the same latent option form a contiguous region on
some manifold, which is reflected in the problem representation. If they do not, then our method
cannot cluster and find skills; we view this as a failing of the representation and not of our methodology.
Under this assumption, clustering of sampled subgoals can be used to approximate latent option
termination sets. We propose a method of converting clusters parameterized by Gaussians into termination sets that respect the recall and precision properties. Knowing the number of skills a priori
or discovering the appropriate number of clusters from the data satisfies the minimality property.
Separability is more complicated, but can be satisfied by any method that can handle overlapping
clusters without merging them and that is not inherently biased toward a small number of skills.
Methods like spectral clustering [14] that rely on point-wise distance metrics cannot easily handle
cluster overlap and are unsuitable for this sort of task. In the general case where little is known
about the number and nature of the latent options, IGMM-based clustering is an attractive choice, as
it can model any number of clusters of arbitrary complexity; when clusters have a complex shape,
an IGMM may over-segment the data, but this still produces separable options.
4
Algorithm
We present a general algorithm to discover latent options when using any particular subgoal discovery method and clustering algorithm. Note that some subgoal discovery methods discover state
regions, rather than single states; in such cases, sampling techniques or a clustering algorithm such
as NPClu [5] that can handle non-point data must be used. We then describe a specific implementation of the general algorithm that is used in our experiments.
4.1
General algorithm
Given an agent A, task distribution τ, subgoal discovery algorithm D, and clustering algorithm C:
1. Compute a set of sample agent-space subgoals X = {x_1, x_2, ..., x_n}, where X = D(A, τ).
2. Cluster the subgoals X into clusters with parameters Θ = {θ_1, θ_2, ..., θ_k}, where Θ = C(X). If the clustering method is parametric, then the elements of Θ are cluster parameters; otherwise they are data point assignments to clusters.
3. Define option termination sets T_1, T_2, ..., T_k, where T_i = M(θ_i), and M is a mapping from elements of Θ to termination set definitions.
4. Instantiate and train options O_1, O_2, ..., O_k using T_1, T_2, ..., T_k as termination sets (see the sketch after this list).
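A sketch of this pipeline follows; every argument is a hypothetical callable standing in for A, τ, D, C, and the mapping M, and `agent.make_option` is an assumed helper, not part of any particular RL library:

```python
def discover_portable_options(agent, task_dist, discover, cluster, to_term_set):
    """Skeleton of the general algorithm; D, C, and M are passed in."""
    X = discover(agent, task_dist)                        # 1. sample agent-space subgoals
    thetas = cluster(X)                                   # 2. cluster the subgoal samples
    term_sets = [to_term_set(theta) for theta in thetas]  # 3. map clusters to T_i
    return [agent.make_option(T) for T in term_sets]      # 4. instantiate and train options
```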
4.2
Experimental implementation
We now present an example implementation of the general algorithm that is used in our experiments.
So as not to confound error from our clustering method with error introduced by a subgoal discovery
algorithm, we use a hand-coded binary salience function; the main contribution of this work is the
clustering strategy that enables generalization and transfer, so we are not concerned with the details
of any particular subgoal discovery algorithm. This also demonstrates the possible utility of our
approach, even when automatic subgoal discovery is inappropriate or infeasible. More details on
this are presented in the following sections.
First, a distribution of tasks and an RL agent are defined. We allow the agent to solve tasks drawn
from this distribution while collecting subgoal state samples every time the salience function is
triggered. This continues until 10,000 subgoal state samples are collected. These points are then
clustered using one of two different clustering methods. Gaussian expectation-maximization (E-M),
for which we must provide the number of clusters a priori, provides an approximate upper bound
on the performance of any clustering method based on a Gaussian mixture model. We compare this
to IGMM-based clustering that must discover the number of clusters automatically. E-M is used as
a baseline metric to separate error caused by not knowing the number of clusters a priori from error
caused by using a Gaussian mixture model. Since E-M can get stuck in local minima, we run it 10
times and choose the clustering with the highest log-likelihood. For the IGMM-based clustering,
we let the Gibbs sampler burn-in for 10,000 samples and then collect an additional 10,000 samples,
from which we choose the sample with the highest joint likelihood of the class indicator variables
as defined by Rasmussen [16].
We now must define a mapping function M that maps our clusters to termination sets. Both of
our clustering methods return a list of K sets of Gaussian means μ and covariances Σ. We would
like to choose a ridge on each Gaussian to be the cluster's termination set boundary; thus, we use
the Mahalanobis distance from each cluster mean,

D_i^{\text{Mahalanobis}}(x) = \sqrt{(x - \mu_i)^\top \Sigma_i^{-1} (x - \mu_i)},

and the termination set T_i is defined as

T_i(x) = \begin{cases} 1 & \text{if } D_i^{\text{Mahalanobis}}(x) \le \epsilon_i \\ 0 & \text{otherwise,} \end{cases}

where ε_i is a threshold. An appropriate value for each ε_i is found automatically by calculating the
maximum D_i^{\text{Mahalanobis}}(x) over the subgoal state points x assigned to the ith cluster. This
makes each ε_i just large enough that all the subgoal state data points assigned to the ith cluster
are within the ε_i Mahalanobis distance of that cluster mean, satisfying both our recall and precision
conditions. Note that some states can be contained in multiple termination sets.
Using these termination sets, we create options that are given to the agent for a 100-episode "gestation period", during which the agent can learn option policies using off-policy learning, but cannot
invoke the options. After this period, the options can be invoked from any state.
5
Experiments
5.1
Light-Chain domain
We test the various implementations of our algorithm on a continuous domain similar to the Lightworld domain [9], designed to provide intuition about the capabilities of our skill discovery method.
In our version, the Light-Chain domain, an agent is placed in a 10×10 room that contains a primary
beacon, a secondary beacon, and a goal beacon placed in random locations. If the agent moves
within 1 unit of the primary beacon, the beacon becomes ?activated? for 30 time steps. Similarly,
if the agent moves within 1 unit of the secondary beacon while the primary beacon is activated, it
also becomes activated for 30 time steps. The goal of the task is for the agent to move within 1 unit
of the goal beacon while the secondary beacon is activated, upon which it receives a reward of 100,
ending the episode. In all other states, the agent receives a reward of ?1. Additionally, each beacon
emits a uniquely colored light?either red, green, or blue?that is selected randomly for each task.
Figure 1 shows two instances of the Light-Chain domain with different beacon locations and light
color assignments.
There are four actions available to the agent in every state: move north, south, east, or west. The
actions are stochastic, moving the agent between 0.9 and 1.1 units (uniformly distributed) in the
specified direction. In the case of an action that would move an agent through a wall, the agent
simply moves up to the wall and stops. The problem-space for this domain is 4-dimensional: The
x-position of the agent, the y-position of the agent, and two boolean variables denoting whether or
not the primary and secondary beacons are activated, respectively. The agent-space is 6-dimensional
and defined by RGB range sensors that the agent is equipped with. Three of the sensors describe the
north/south distance of the agent from each of the three colored lights (0 if the agent is at the light,
positive values for being north of it, and negative vales for being south of it). The other three sensors
are identical, but measure east/west distance. Since the beacon color associations change with every
task, a portable top-level policy cannot be learned in agent space, but portable agent-space options
can be learned that reliably direct the agent toward each of the lights.
Figure 1: Two instances of the Light-Chain domain. The numbers 1–3 indicate the primary, secondary, and goal beacons, respectively, while color signifies the light color each beacon emits. Notice
that both beacon placement and color associations change between tasks.
The agent's salience function is defined as:

\text{salient}(t) = \begin{cases} 1 & \text{if at time } t \text{ a beacon became activated for the first time in this episode} \\ 0 & \text{otherwise.} \end{cases}
Our algorithm clusters subgoal state data to create option termination conditions that generalize
properly within a task and across tasks. In the Light-Chain domain, there are three latent options,
one corresponding to each light color. Generalization within a task requires each option to terminate
in any state within a 1 unit radius of its corresponding light color. However, if the agent only sees
one task, all such states will be within some small fixed range of the other two lights; a termination
set built from such data would not transfer to another task, since the relative positions of the lights
would change. Thus, generalization across tasks requires each option to terminate when it is close
to the proper light, regardless of the observed positions of the other two lights. When provided with
data from many tasks, our algorithm can discover these relationships between agent-space variables
and use them to define portable options. These options can then be used in each task, although in a
different order for each, based on that task?s color associations with the beacons.
Although we provide a broad subgoal (activate beacons) to the agent through the salience function,
our algorithm does the work of discovering how many ways there are to accomplish these subgoals
(three?one for each light color) and how to achieve each of these (get within 1 unit of that light). In
each instance of the task, it is unknown which light color will correspond to each beacon. Therefore,
it is not possible to define a skill that reliably guides the agent to a particular beacon (e.g. the primary
beacon) and is portable across tasks. Instead, our algorithm discovers skills to navigate to particular
lights, leading the agent to beacons by proxy. Note that this number of skills is independent of the
number of beacons; if there were four possible colors of light, but only three beacons, four skills
would be created so that the agent could perform well when presented with any three of the four
colors in a given task. Similarly, such a setup can be used in other tasks where a broad subgoal is
known, but the different means and number of ways of achieving it are unknown a priori.
5.2
Experimental structure
Two different agent types were used in our experiments: agents with and without options. The
parameters for each agent type were optimized separately via a grid search. Top-level policies were
learned using ε-greedy SARSA(λ) (α = 0.001, γ = 0.99, λ = 0.7, ε = 0.1 without options;
α = 0.0005, γ = 0.99, λ = 0.9, ε = 0.1 with options), and the state-action value function was
represented with a linear function approximator using the third-order Fourier basis [8]. Option
policies were learned off-policy (with an option reward of 1000 when in a terminating state), using
Q(λ) (α = 0.000025, γ = 0.99, λ = 0.9) and the fifth-order independent Fourier basis.
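For reference, the Fourier basis used here assigns one feature cos(π c · s) to each coefficient vector c ∈ {0, …, k}^d, for a state s scaled to [0, 1]^d [8]. A minimal sketch (the scaling step is assumed to happen elsewhere):

```python
import itertools
import numpy as np

def fourier_features(s, order):
    """Order-k Fourier basis for a state s scaled to [0, 1]^d:
    one feature cos(pi * c . s) per coefficient vector c in {0..k}^d."""
    d = len(s)
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=d)))
    return np.cos(np.pi * coeffs @ np.asarray(s))

# Third order over the 4-D problem-space state gives (3+1)**4 = 256 features.
```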
For the agents that discover options, we used the procedure outlined in the previous section to collect
subgoal state samples and learn option policies. We compared these agents to an agent with perfect,
hand-coded termination sets (each option terminated within 1 unit of a particular light) that followed
the same learning procedure, but without the subgoal discovery step. After option policies were
learned for 100 episodes, they were frozen and agent performance was measured for 10 episodes in
[Figure 2: two scatter plots. (a) Projection onto Green-N/S and Green-E/W. (b) Projection onto Green-N/S and Blue-N/S.]
Figure 2: IGMM clusterings of 6-dimensional subgoal data projected onto 2 dimensions at a time
for visualization.
each of 1,000 novel tasks, with a maximum episode length of 5,000 steps and a maximum option
execution time of 50 steps. After each task, the top-level policy was reset, but the option policies
were kept constant. We compared performance of the agents using options to that of an agent without
options, tested under the same conditions. This entire experiment was repeated 10 times.
6
Results
Figure 2 shows an IGMM-based clustering (only 1,000 points shown for readability), in which the
original data points are projected onto 2 of the 6 agent-space dimensions at a time for visualization purposes, where cluster assignment is denoted with unique markers. It can be seen that three
clusters (the intuitively optimal number) have been found. In 2(a), the data is projected onto the
green north/south and green east/west dimensions. A central circular cluster is apparent, containing
subgoals triggered by being near the green light. In 2(b), the north/south dimensions of two different
light colors are compared. Here, there are two long clusters that each have a small variance with
respect to one color and a large variance with respect to the other. These findings correspond to our
intuitive notion of skills in this domain, in which an option should terminate when it is close to a
particular light color, regardless of the positions of the other two lights. Note that these clusters actually overlap in 6 dimensions, not just in the projected view, since the activation radii of the beacons
can occasionally overlap, depending on their placement.
Figure 3(a) compares the cumulative time it takes to solve 10 episodes for agents with no options,
IGMM options, E-M options (with three clusters), and options with perfect, hand-coded termination
sets. As expected, in all cases, options provide a significant learning advantage when facing a novel
task. The agent using E-M options performs only slightly worse than the agent using perfect, handcoded options, showing that clustering effectively discovers options in this domain and that very little
error is introduced by using a Gaussian mixture model. Possibly more surprisingly, the agent using
IGMM options performs equally as well as the agent using E-M options (making the lines difficult
to distinguish in the graph), demonstrating that estimating the number of clusters automatically is
feasible in this domain and introduces negligible error. In fact, the IGMM-based clustering finds
three clusters in all 10 trials of the experiment.
Figure 3(b) shows the performance of agents using E-M options where the number of pre-specified
clusters varies. As expected, the agent with three options (the intuitively optimal number of skills
in this domain) performs the best, but the agents using five and six options still retain a significant
advantage over an agent with no options. Most notably, when less than the optimal number of
options are used, the agent actually performs worse than the baseline agent with no options. This
confirms our intuition that option separability is more important than minimality. Thus, it seems
that E-M may be effective if the designer can come up with a good approximation of the number of
latent options, but it is critical to overestimate this number.
7
[Figure 3: two line plots of average cumulative steps to goal (over episodes) versus episodes; legends: No options, IGMM term sets, E-M term sets, Perfect term sets (panel a) and No options, E-M with 2, 3, 5, and 6 clusters (panel b).]
(a) Comparative performance of agents
(b) E-M with varying numbers of clusters
Figure 3: Agent performance in Light-Chain domain with 95% confidence intervals
7
Discussion and Conclusions
We have demonstrated a general method for clustering agent-space subgoal data to form the termination sets of portable skills in the options framework. This method works in both discrete and continuous domains and can be used with any choice of subgoal discovery and clustering algorithms. Our
analysis of the Light-Chain domain suggests that if the number of latent options is approximately
known a priori, clustering algorithms like E-M can perform well. However, in the general case,
IGMM-based clustering is able to discover an appropriate number of options automatically without
sacrificing performance.
The collection and analysis of subgoal state samples can be computationally expensive, but this
is a one-time cost. Our method is most relevant when a distribution of tasks is known ahead of
time and we can spend computational time up front to improve agent performance on new tasks
to be faced later, drawn from the same distribution. This can be beneficial when an agent will
have to face a large number of related tasks, like in DRAM memory access scheduling [6], or
for problems where fast learning and adaptation to non-stationarity is critical, such as automatic
anesthesia administration [12].
In domains where traditional subgoal discovery algorithms fail or are too computationally expensive,
it may be possible to define a salience function that specifies useful subgoals, while still allowing
the clustering algorithm to decide how many skills are appropriate. For example, it is desirable to
capture the queen in chess, but it may be beneficial to have several skills that result in different types
of board configurations after taking the queen, rather than a single monolithic skill. Such a setup is
advantageous when a broad subgoal is known a priori, but the various means and number of ways
in which the subgoal might be accomplished are unknown, as in our Light-Chain experiment. This
extends the possibility of skill discovery to a class of domains in which it may have previously been
intractable.
An agent with a library of appropriate portable options ought to be able to learn novel tasks faster
than an agent without options. However, as this library grows, the number of available actions actually increases and agent performance may begin to decline. This counter-intuitive phenomenon, commonly
known as the utility problem, reveals a fundamental problem with using skills outside the context of
hierarchies. For skill discovery to be useful in larger problems, future work will have to address basic questions about how to automatically construct appropriate skill hierarchies that allow the agent
to explore in simpler, more abstract action spaces as it gains more skills and competency.
Acknowledgments
We would like to thank Philip Thomas and George Konidaris for useful discussions. Scott Niekum
and Andrew G. Barto were supported in part by the AFOSR under grant FA9550-08-1-0418.
References
[1] Bram Bakker and Jürgen Schmidhuber. Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8th Conference on Intelligent Autonomous Systems, pages 438–445, 2004.
[2] A. G. Barto, S. Singh, and N. Chentanez. Intrinsically motivated learning of hierarchical collections of skills. In Proc. of the International Conference on Developmental Learning, pages 112–119, 2004.
[3] Bruce L. Digney. Learning hierarchical control structures for multiple tasks and changing environments. In Proc. of the 5th Conference on the Simulation of Adaptive Behavior. MIT Press, 1998.
[4] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society, Series C, 41(2):337–348, 1992.
[5] M. Halkidi and M. Vazirgiannis. NPClu: An approach for clustering spatially extended objects. Intell. Data Anal., 12:587–606, December 2008.
[6] Engin Ipek, Onur Mutlu, Jose F. Martinez, and Rich Caruana. Self-optimizing memory controllers: A reinforcement learning approach. In Proc. of the International Symposium on Computer Architecture, pages 39–50, 2008.
[7] Anders Jonsson and Andrew Barto. Causal graph based decomposition of factored MDPs. J. Mach. Learn. Res., 7:2259–2301, December 2006.
[8] G. D. Konidaris, S. Osentoski, and P. S. Thomas. Value function approximation in reinforcement learning using the Fourier basis. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, 2011.
[9] George Konidaris and Andrew G. Barto. Building portable options: Skill transfer in reinforcement learning. In Proc. of the 20th International Joint Conference on Artificial Intelligence, pages 895–900, 2007.
[10] George Konidaris and Andrew G. Barto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems 22, pages 1015–1023, 2009.
[11] Amy McGovern and Andrew G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In ICML, pages 361–368, 2001.
[12] Brett Moore, Periklis Panousis, Vivek Kulkarni, Larry Pyeatt, and Anthony Doufas. Reinforcement learning for closed-loop propofol anesthesia: A human volunteer study. In Innovative Applications of Artificial Intelligence, 2010.
[13] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
[14] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856. MIT Press, 2001.
[15] Marc Pickett and Andrew G. Barto. PolicyBlocks: An algorithm for creating useful macro-actions in reinforcement learning. In ICML, pages 506–513, 2002.
[16] Carl Edward Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12, pages 554–560. MIT Press, 2000.
[17] Özgür Şimşek and Andrew G. Barto. Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proc. of the Twenty-First International Conference on Machine Learning, pages 751–758, 2004.
[18] Özgür Şimşek and Andrew G. Barto. Skill characterization based on betweenness. In NIPS, pages 1497–1504, 2008.
[19] Richard Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[20] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[21] Sebastian Thrun and Anton Schwartz. Finding structure in reinforcement learning. In Advances in Neural Information Processing Systems 7, pages 385–392. MIT Press, 1995.
[22] Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: A hierarchical Bayesian approach. In Proc. of the 24th International Conference on Machine Learning, page 1015. ACM Press, 2007.
3,577 | 4,239 | Submodular Multi-Label Learning
James Petterson
NICTA/ANU
Canberra, Australia
Tiberio Caetano
NICTA/ANU
Sydney/Canberra, Australia
Abstract
In this paper we present an algorithm to learn a multi-label classifier which attempts at directly optimising the F-score. The key novelty of our formulation is that we explicitly allow for assortative (submodular) pairwise label interactions, i.e., we can leverage the co-occurrence of pairs of labels in order to improve the quality of prediction. Prediction in this model consists of minimising a particular submodular set function, which can be accomplished exactly and efficiently via graph-cuts. Learning however is substantially more involved and requires the solution of an intractable combinatorial optimisation problem. We present an approximate algorithm for this problem and prove that it is sound in the sense that it never predicts incorrect labels. We also present a nontrivial test of a sufficient condition for our algorithm to have found an optimal solution. We present experiments on benchmark multi-label datasets, which attest the value of the proposed technique. We also make available source code that enables the reproduction of our experiments.
1 Introduction
Research in multi-label classification has seen a substantial growth in recent years (e.g.,
[1, 2, 3, 4]). This is due to a number of reasons, including the increase in availability of
multi-modal datasets and the emergence of crowdsourcing, which naturally create settings
where multiple interpretations of a given input observation are possible (multiple labels for
a single instance). Also many classical problems are inherently multi-label, such as the
categorisation of documents [5], gene function prediction [6] and image tagging [7].
There are two desirable aspects in a multi-label classification system. The first is that a
prediction should ideally be good both in terms of precision and recall: we care not only
about predicting as many of the correct labels as possible, but also as few non-correct labels
as possible. One of the most popular measures for assessing performance is therefore the
F -score, which is the harmonic mean of precision and recall [8]. The second property we
wish is that, both during training and also at test time, the algorithm should ideally take
into account possible dependencies between the labels. For example, in automatic image
tagging, if labels ocean and ship have high co-occurrence frequency in the training set, the
model learned should somehow boost the chances of predicting ocean if there is strong visual
evidence for the label ship [9].
In this paper we present a method that directly addresses these two aspects. First, we
explicitly model the dependencies between pairs of labels, albeit restricting them to be
submodular (in rough terms, we model only the positive pairwise label correlations). This
enables exact and efficient prediction at test time, since finding an optimal subset of labels reduces to the minimisation of a particular kind of submodular set function which can be done efficiently via graph-cuts. Second, our method directly attempts at optimising a convex surrogate of the F-score. This is because we draw on the max-margin structured prediction framework from [10], which, as we will see, enables us to optimise a convex upper bound on the loss induced by the F-score. The critical technical contribution of the paper is a constraint generation algorithm for loss-augmented inference where the scoring of the pair (input-output) is a submodular set function and the loss is derived from the F-score. This
is what enables us to fit our model into the estimator from [10]. Our constraint generation
algorithm is only approximate since the problem is intractable. However we give theoretical
arguments supporting our empirical findings that the algorithm is not only very accurate in
practice, but in the majority of our real-world experiments it actually produces a solution
which is exactly optimal. We compare the proposed method with other benchmark methods
on publicly available multi-label datasets, and results favour our approach. We also provide
source code that enables the reproduction of all the experiments presented in this paper.
Related Work. A convex relaxation for F-measure optimisation in the multi-label setting was proposed recently in [11]. This can be seen as a particular case of our method when there are no explicit label dependencies. In [12] the authors propose quite general tree- and DAG-based dependencies among the labels and adapt decoding algorithms from signal processing
to the problem of finding predictions consistent with the structures learned. In [13] graphical
models are used to impose structure in the label dependencies. Both [12] and [13] are in a
sense complementary to our method since we do not enforce any particular graph topology
on the labels but instead we limit the nature of the interactions to be submodular. In [14]
the authors study the multi-label problem under the assumption that prior knowledge on
the density of label correlations is available. They also use a max-margin framework, similar
in spirit to our formulation. A quite simple and basic strategy for multi-label problems is
to treat them as multiclass classification, effectively ignoring the relationships between the
labels. One example in this class is the Binary Method [15]. The RAkEL algorithm [16] uses
instead an ensemble of classifiers, each learned on a random subset of the label set. In [17] the
authors propose a Bayesian CCA model and apply it to multi-label problems by enforcing
group sparsity regularisation in order to capture information about label co-occurrences.
2 The Model
Let x ∈ X be a vector of dimensionality D with the features of an instance (say, an image); let y ∈ Y be a set of labels for an instance (say, tags for an image), from a fixed dictionary of V possible labels, encoded as y ∈ {0, 1}^V. For example, y = [1 1 0 0] denotes the first and second labels of a set of four. We assume we are given a training set {(x^n, y^n)}_{n=1}^N, and our task is to estimate a map f : X → Y that has good agreement with the training set but also generalises well to new data. In this section we define the class of functions f that we will consider. In the next section we define the learning algorithm, i.e., a procedure to find a specific f in the class.
2.1 The Loss Function Derived from the F-Score
Our notion of "agreement" with the training set is given by a loss function. We focus on maximising the average over all instances of F, a score that considers both precision and recall and can be written in our notation as

F = (1/N) Σ_{n=1}^{N} 2 p(y^n, ȳ^n) r(y^n, ȳ^n) / (p(y^n, ȳ^n) + r(y^n, ȳ^n)),  where p(y, ȳ) = |y ∘ ȳ| / |ȳ| and r(y, ȳ) = |y ∘ ȳ| / |y|.

Here ȳ^n denotes our prediction for input instance n, y^n is the corresponding ground-truth, ∘ denotes the element-wise product and |u| denotes the 1-norm of vector u (in our case the number of 1s, since u will always be binary). Since our goal is to maximise the F-score, a suitable choice of loss function is Δ(y, ȳ) = 1 − F(y, ȳ), which is the one we adopt in this paper. The loss for a single prediction is therefore

Δ(y, ȳ) = 1 − 2 |y ∘ ȳ| / (|y| + |ȳ|)    (1)
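For concreteness, a minimal sketch of this loss on 0/1 indicator vectors (the function name and the convention for the all-empty case are ours, not the paper's):

```python
import numpy as np

def f_score_loss(y, y_pred):
    """Delta(y, y_pred) = 1 - 2|y o y_pred| / (|y| + |y_pred|), as in eq. (1).
    y, y_pred: binary 0/1 vectors of length V."""
    overlap = np.sum(y * y_pred)        # |y o y_pred|, element-wise product
    denom = np.sum(y) + np.sum(y_pred)  # |y| + |y_pred|
    # Convention for the degenerate both-empty case: zero loss (an assumption).
    return 1.0 - 2.0 * overlap / denom if denom > 0 else 0.0

y      = np.array([1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0])
print(f_score_loss(y, y_pred))  # 1 - 2*1/4 = 0.5
```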
Feature Maps and Parameterisation
We assume that the prediction for a given input x is a maximiser of a score that encodes
both the unary dependency between labels and instances as well as the pairwise dependencies
between labels:
y? ? argmax y T Ay
y?Y
2
(2)
where
?
?A is an upper-triangular matrix scoring the pair (x, y), with diagonal elements Aii =
x, ?i1 , where x is the input feature vector and ?i1 is a parameter vector that defines how
label i weighs each feature of x. The o?-diagonal elements are Aij = Cij ?ij , where Cij
2
is the normalised counts of co-occurrence of labels i and j in the training set, and ?ij
the corresponding scalar parameter which is forced to be non-negative. This will ensure
that the o?-diagonal entries of A are non-negative and therefore that problem 2 consists
of the maximisation of a supermodular function (or, equivalently, the minimisation of a
submodular function), which can be solved e?ciently via graph-cuts. We also define the
T
T
T
2
complete parameter vectors ?1 := [. . . ?i1 . . . ]T , ?2 := [. . . ?ij
. . . ]T and ? = [?1 ?2 ]T , as
well as the complete feature maps ?1 (x, y) = vec(x ? y), ?2 (y) = vec(y ? y) and ?(x, y) =
[?T1 (x, y) ?T2 (y)]T . This way the score in expression 2 can be written as y T Ay = ??(x, y), ??.
Note that the dimensionality
of ?2 is the number of non-zero elements of matrix C?in this
?V ?
setting that is 2 , but it can be reduced by setting to zero elements of C below a specified
threshold.
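A small sketch of the score and of exact prediction as in (2); the paper solves this with graph-cuts, whereas the stand-in below enumerates all 2^V labelings, which is only viable for toy V, and the matrix A here is made up:

```python
import itertools
import numpy as np

def score(A, y):
    """y^T A y for a binary label vector y (A upper-triangular)."""
    return y @ A @ y

def predict_bruteforce(A):
    """argmax_y y^T A y by enumeration; stand-in for the graph-cut solver."""
    V = A.shape[0]
    best, best_y = -np.inf, None
    for bits in itertools.product([0, 1], repeat=V):
        y = np.array(bits)
        s = score(A, y)
        if s > best:
            best, best_y = s, y
    return best_y

A = np.array([[ 0.3, 0.2, 0.0, 0.0],   # diagonal: unary scores <x, theta_i>
              [ 0.0,-0.1, 0.4, 0.0],   # off-diagonal: non-negative C_ij theta_ij
              [ 0.0, 0.0,-0.2, 0.0],
              [ 0.0, 0.0, 0.0,-0.5]])
print(predict_bruteforce(A))  # positive pairwise terms pull labels in together
```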
3 Learning Algorithm
Optimisation Problem. Direct optimisation of the loss defined in equation (1) is a highly intractable problem, since it is a discrete quantity and our parameter space is continuous. Here we will follow the program in [10] and instead construct a convex upper bound on the loss function, which can then be attacked using convex optimisation tools. The purpose of learning will be to solve the following convex optimisation problem

[θ*, ξ*] = argmin_{θ,ξ} (1/N) Σ_{n=1}^{N} ξ_n + (λ/2) ‖θ‖²    (3a)
s.t. ⟨φ(x^n, y^n), θ⟩ − ⟨φ(x^n, y), θ⟩ ≥ Δ(y, y^n) − ξ_n,  ξ_n ≥ 0,  ∀n, y ≠ y^n.    (3b)

This is the margin-rescaling estimator for structured support vector machines [10]. The constraints immediately imply that the optimal solution will be such that ξ*_n ≥ Δ(argmax_y ⟨φ(x^n, y), θ*⟩, y^n), and therefore the minimum value of the objective function upper bounds the loss, thus motivating the formulation. Since there are exponentially many constraints, we follow [10] in adopting a constraint generation strategy, which starts by solving the problem with no constraints and iteratively adds the most violated constraint for the current solution of the optimisation problem. This is guaranteed to find an ε-close approximation of the solution of (3) after including only a polynomial (O(ε⁻²)) number of constraints [10]. At each iteration we need to maximise the violation margin ξ_n, which from the constraints (3b) reduces to

y*_n ∈ argmax_{y∈Y} [Δ(y, y^n) + ⟨φ(x^n, y), θ⟩]    (4)
Learning Algorithm. The learning algorithm is described in Algorithm 1 (which requires Algorithm 2 as a subroutine). Algorithm 1 describes a particular convex solver based on bundle methods (BMRM [18]), which we use here. Other solvers could have been used instead. Our contribution lies not here, but in the constraint generation routine for Algorithm 1, which is described in Algorithm 2.
BMRM requires the solution of constraint generation and the value of the objective function for the slack corresponding to the constraint generated, as well as its gradient. Soon we will discuss constraint generation. The other two ingredients we describe here. The slack at the optimal solution is

ξ*_n = Δ(y*_n, y^n) + ⟨φ(x^n, y*_n), θ⟩ − ⟨φ(x^n, y^n), θ⟩    (5)

thus the objective function from (3) becomes

(1/N) Σ_n [Δ(y*_n, y^n) + ⟨φ(x^n, y*_n), θ⟩ − ⟨φ(x^n, y^n), θ⟩] + (λ/2) ‖θ‖²,    (6)

whose gradient is

λθ − (1/N) Σ_n (φ(x^n, y^n) − φ(x^n, y*_n))    (7)
Algorithm 1 Bundle Method for Regularised Risk Minimisation (BMRM)
1: Input: training set {(x^n, y^n)}_{n=1}^N, λ; Output: θ
2: Initialize i = 1, θ₁ = 0
3: repeat
4:   for n = 1 to N do
5:     Compute y*_n (the y^{*n}_{kmax} returned by Algorithm 2)
6:   end for
7:   Compute gradient g_i (equation (7)) and objective o_i (equation (6))
8:   θ_{i+1} := argmin_θ (λ/2) ‖θ‖² + max(0, max_{j≤i} ⟨g_j, θ⟩ + o_j); i ← i + 1
9: until converged (see [18])
10: return θ

Algorithm 2 Constraint Generation
1: Input: (x^n, y^n), θ, V; Output: y^{*n}_{kmax}
2: k = 0
3: A^{[k],n}_{ij} = θ^2_{ij} C_{ij} (for all i, j : i ≠ j)
4: while k ≤ V do
5:   diag(A^{[k],n}) = diag(A) − 2y^n / (k + |y^n|)
6:   y^{*n}_k = argmax_y yᵀ A^{[k],n} y (graph-cuts)
7:   if |y^{*n}_k| > k then
8:     kmax = |y^{*n}_k|; k = kmax
9:   else if |y^{*n}_k| = k then
10:    kmax = |y^{*n}_k|; k = kmax + 1
11:  else
12:    k = k + 1
13:  end if
14: end while
15: return y^{*n}_{kmax}
Expressions (6) and (7) are then used in Algorithm 1.
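A sketch of how (6) and (7) would be evaluated once the loss-augmented maximisers y*_n from (4) are in hand; phi, delta and data below are placeholders standing for the feature map, the loss (1) and the training set:

```python
import numpy as np

def objective_and_gradient(theta, phi, delta, data, y_star, lam):
    """data: list of (x_n, y_n); y_star: loss-augmented argmaxes from eq. (4).
    Returns the objective (6) and its gradient (7)."""
    N = len(data)
    obj = 0.5 * lam * theta @ theta
    grad = lam * theta.copy()
    for (x, y), ys in zip(data, y_star):
        obj += (delta(y, ys) + theta @ phi(x, ys) - theta @ phi(x, y)) / N
        grad -= (phi(x, y) - phi(x, ys)) / N
    return obj, grad
```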
Constraint Generation. The most challenging step consists of solving the constraint generation problem. Constraint generation for a given training instance n consists of solving the combinatorial optimisation problem in expression (4), which, using the loss in equation (1), as well as the correspondence yᵀAy = ⟨φ(x, y), θ⟩, can be written as

y^{⋆n} ∈ argmax_y yᵀ A^n(y) y    (8)

where diag(A^n) = diag(A) − 2y^n / (|y| + |y^n|) and offdiag(A^n) = offdiag(A). Note that the matrix A^n depends on y. More precisely, a subset of its diagonal elements (those A^n_{ii} for which y^n(i) = 1) depends on the quantity |y|, i.e., the number of nonzero elements in y. This makes solving problem (8) a formidable task. If A^n were independent of y, then eq. (8) could be solved exactly and efficiently via graph-cuts, just as our prediction problem in equation (2). A naïve strategy would be to aim for solving problem (8) V times, one for each value of |y|, and constraining the optimisation to only include elements y such that |y| is fixed. In other words, we can partition the optimisation problem into k optimisation problems conditioned on the sets Y_k := {y : |y| = k}:

max_y yᵀA(y)y = max_k max_{y∈Y_k} yᵀ A^{[k],n} y    (9)

where A^{[k],n} denotes the particular matrix A^n that we obtain when |y| = k. However the inner maximisation above, i.e., the problem of maximising a supermodular function (or minimising a submodular function) subject to a cardinality constraint, is itself NP-hard [19]. We therefore do not follow this strategy, but instead seek a polynomial-time algorithm that in practice will give us an optimal solution most of the time.
Algorithm 2 describes our algorithm. In the worst case it calls graph-cuts O(V) times, so the total complexity is O(V⁴).¹ The algorithm essentially searches for the largest k such that solving argmax_y yᵀ A^{[k],n} y returns a solution with k 1s. We call the k obtained kmax, and the corresponding solution y^{*n}_{kmax}. Observe the fact that, as k increases during the execution of the algorithm, A^n_{ii} increases for those i where y^n(i) = 1. The increment observed when k increases to k′ is

Δ^{k′}_k := A^{[k′],n}_{ii} − A^{[k],n}_{ii} = 2 (k′ − k) / ((k + |y^n|)(k′ + |y^n|))    (10)

which is always a positive quantity. Although this algorithm is not provably optimal, Theorem 1 guarantees that it is sound in the sense that it never predicts incorrect labels. In the next section we present additional evidence supporting this algorithm, in the form of a test that, if positive, guarantees the solution obtained is optimal.

¹ The worst-case bound of O(V³) for graph-cuts is very pessimistic; in practice the algorithm is extremely efficient.
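A transliteration of Algorithm 2, under the same caveat as before: the graph-cut call on line 6 is replaced by brute-force enumeration so the sketch stays self-contained, and it assumes |y^n| > 0 so that the denominator in line 5 is well defined:

```python
import itertools
import numpy as np

def argmax_qp(A):
    """Stand-in for the graph-cut solver: argmax_y y^T A y by enumeration."""
    V = A.shape[0]
    best, best_y = -np.inf, None
    for bits in itertools.product([0, 1], repeat=V):
        y = np.array(bits)
        s = y @ A @ y
        if s > best:
            best, best_y = s, y
    return best_y

def constraint_generation(A, y_n):
    """Algorithm 2: find the largest k such that the loss-augmented
    maximiser over A^[k],n has exactly k ones. Assumes |y^n| > 0."""
    V = A.shape[0]
    k, y_best = 0, np.zeros(V, dtype=int)
    while k <= V:
        Ak = A.copy()
        # line 5: diag(A^[k],n) = diag(A) - 2 y^n / (k + |y^n|)
        np.fill_diagonal(Ak, np.diag(A) - 2.0 * y_n / (k + y_n.sum()))
        yk = argmax_qp(Ak)
        if yk.sum() > k:                 # lines 7-8
            y_best, k = yk, yk.sum()
        elif yk.sum() == k:              # lines 9-10
            y_best, k = yk, k + 1
        else:                            # line 12
            k += 1
    return y_best
```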
We call a solution y* a partially optimal solution of argmax_y yᵀA^n(y)y if the labels it predicts as being present are indeed present in an optimal solution, i.e., if for those i for which y*(i) = 1 we also have y^{⋆n}(i) = 1, for some y^{⋆n} ∈ argmax_y yᵀA^n(y)y. Equivalently, we can write y* ∘ y^{⋆n} = y*. We have the following result

Theorem 1 Upon completion of Algorithm 2, y^{*n}_{kmax} is a partially optimal solution of argmax_y yᵀ A^n(y) y.

The proof is in Appendix A. The theorem means that whenever the algorithm predicts the presence of a label, it does so correctly; however there may be labels not predicted which are in fact present in the corresponding optimal solution.
4 Certificate of Optimality
As empirically verified in our experiments in section 5, our constraint generation algorithm (Algorithm 2) is indeed quite accurate: most of the time the solution obtained is optimal. In this section we present a test that, if positive, guarantees that an optimal solution has been obtained (i.e., a certificate of optimality). This can be used to generate empirical lower bounds on the probability that the algorithm returns an optimal solution (we explore this possibility in the experimental section).
We start by formalising the situation in which the algorithm will fail. Let Z := {i : y^{*n}_{kmax}(i) = 0}, and P_Z be the power set of Z (Z for "zeros"). Let O := {i : y^{*n}_{kmax}(i) = 1} (O for "ones"). Then the algorithm will fail if there exists ω ∈ P_Z such that

Σ_{i∈ω, j∈O} A^n_{ij} + Σ_{i,j∈ω; i≠j} A_{ij} + Σ_{i∈ω} A^{[kmax+|ω|],n}_{ii} + Δ^{kmax+|ω|}_{kmax} |y^n ∘ y^{*n}_{kmax}| > 0    (11)

(call the four terms, from left to right, (a), (b), (c) and (d)). The above expression describes the situation in which, starting with y^{*n}_{kmax}, if we insert |ω| 1s in the indices defined by index set ω, we will obtain a new vector y′ which is a feasible solution of argmax_y yᵀA^n(y)y and yet has strictly larger score than solution y^{*n}_{kmax}. This can be understood by looking closely into each of the sums in expression (11). Sums (a) and (b) describe the increase in the objective function due to the inclusion of off-diagonal terms. Both (a) and (b) are non-negative due to the submodularity assumption. Term (c) is the sum of the diagonal terms corresponding to the newly introduced 1s of y′. Term (c) is negative or zero, since each term in the sum is negative or zero (otherwise y^{*n}_{kmax} would have included it). Finally, term (d) is non-negative, being the total increase in the diagonal elements of O due to the inclusion of |ω| additional 1s. We can write (c) as

Σ_{i∈ω} A^{[kmax+|ω|],n}_{ii} = Σ_{i∈ω} A^{[kmax],n}_{ii} + Σ_{i∈ω} (A^{[kmax+|ω|],n}_{ii} − A^{[kmax],n}_{ii})    (12)

(call the two right-hand terms (e) and (f)), and the last term can be bounded as

Σ_{i∈ω} (A^{[kmax+|ω|],n}_{ii} − A^{[kmax],n}_{ii}) ≤ Δ^{kmax+|ω|}_{kmax} v_ω =: (g)    (13)

where v_ω = min[|y^n| − |y^n ∘ y^{*n}_{kmax}|, |ω|] is an upper bound on the number of indices i ∈ ω such that y^n(i) = 1, and Δ^{kmax+|ω|}_{kmax} is the increment in a diagonal element i for which y^n(i) = 1 arising from increasing the cardinality of the solution from kmax to kmax + |ω|. Incorporating bound (13) into equation (12), we get that (c) ≤ (e) + (g). We can then replace (c) in inequality (11) by (e) + (g), obtaining

Δ_{A,ω} + δ_ω > 0    (14)

where Δ_{A,ω} := Σ_{i∈ω, j∈O} A^n_{ij} + Σ_{i,j∈ω; i≠j} A_{ij} + Σ_{i∈ω} A^{[kmax],n}_{ii} and δ_ω := Δ^{kmax+|ω|}_{kmax} v_ω + Δ^{kmax+|ω|}_{kmax} |y^n ∘ y^{*n}_{kmax}|.
Algorithm 3 Compute max_ω Δ_{A,ω}
1: Input: A^{[kmax],n}, y^{*n}_{kmax}, V; Output: max
2: max = −∞
3: Z = {i : y^{*n}_{kmax}(i) = 0}
4: O = {i : y^{*n}_{kmax}(i) = 1}
5: for i ∈ Z do
6:   O′ = O ∪ i
7:   rmax = max_{y : y_{O′} = 1} yᵀ A^{[kmax],n} y (graph-cuts)
8:   if rmax > max then
9:     max = rmax
10:  end if
11: end for
12: max = max − max_y yᵀ A^{[kmax],n} y
13: return max
Table 1: Datasets. #train/#test denotes the number of observations used for training and testing respectively; V is the number of labels and D the dimensionality of the features; Avg is the average number of labels per instance.

dataset  domain   #train  #test  V   D     Avg
yeast    biology  1500    917    14  103   4.23
enron    text     1123    579    53  1001  3.37
We know that, regardless of A or ω, Δ_{A,ω} ≤ 0 (otherwise y^{*n}_{kmax} ∉ argmax_y yᵀ A^{[kmax],n} y, since Δ_{A,ω} is the increment in the objective function yᵀ A^{[kmax],n} y obtained by adding 1s in the entries of ω). The key fact coming to our aid is that δ_ω is "small", and a weak upper bound is 2. This is because

δ_ω = Δ^{kmax+|ω|}_{kmax} v_ω + Δ^{kmax+|ω|}_{kmax} |y^n ∘ y^{*n}_{kmax}| ≤ Δ^{kmax+|ω|}_{kmax} |y^n| ≤ Δ^{V}_{kmax} |y^n| ≤ Δ^{V}_{0} |y^n| = 2V|y^n| / ((V + |y^n|)|y^n|) ≤ 2    (15)

(Note that if |y^n| = 0 then δ_ω = 0 and our algorithm will always return an optimal solution since Δ_{A,ω} ≤ 0). Now, since Δ_{A,ω} ≤ 0 for any A and ω ∈ P_Z, it suffices that we study the quantity max_ω Δ_{A,ω}: if max_ω Δ_{A,ω} < −2, then Δ_{A,ω} < −2 for any ω ∈ P_Z. It is however very hard to understand theoretically the behaviour of the random variable max_ω Δ_{A,ω} even for a simplistic uniform i.i.d. assumption on the entries of A. This is because the domain of ω, P_Z, is itself a random quantity that depends on the particular A chosen. This makes computing even the expected value of max_ω Δ_{A,ω} an intractable task, let alone obtaining concentration of measure results that could give us upper bounds on the probability of condition (14) holding under the assumed distribution on A.
However, for a given A we can actually compute max_ω Δ_{A,ω} efficiently. This can be done with Algorithm 3. The algorithm effectively computes the gap between the scores of the optimal solution y^{*n}_{kmax} and the highest scoring solution if one sets to 1 at least one of the zero entries in y^{*n}_{kmax}. It does so by solving graph-cuts constraining the solution y to include the 1s present in y^{*n}_{kmax} but additionally fixing one of the zero entries of y^{*n}_{kmax} to 1 (lines 7-8). This is done for every possible zero entry of y^{*n}_{kmax}, and the maximum score is recorded (lines 7-11). The gap between this and the score of the optimal solution y^{*n}_{kmax} is then returned (line 13). This will involve V − kmax calls to graph-cuts, and therefore the total computational complexity is O(V⁴). Once we compute max_ω Δ_{A,ω}, we simply test whether max_ω Δ_{A,ω} + Δ^{|V|}_{kmax} |y^n| > 0 holds (we use Δ^{|V|}_{kmax} |y^n| rather than 2 as an upper bound for δ_ω because, as seen from (15), it is the tightest upper bound which still does not depend on ω and therefore can be computed). We have the following theorem (proven in Appendix A)

Theorem 2 Upon completion of Algorithm 3, if max_ω Δ_{A,ω} + Δ^{|V|}_{kmax} |y^n| ≤ 0, then y^{*n}_{kmax} is an optimal solution of argmax_y yᵀ A^n(y) y.
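Algorithm 3 and the Theorem 2 test admit the same kind of sketch; the constrained graph-cut of lines 7-8 is again replaced by constrained enumeration, and the increment Δ^{|V|}_{kmax} is computed from (10):

```python
import itertools
import numpy as np

def constrained_max(A, fixed_ones):
    """max_y y^T A y subject to y_i = 1 for all i in fixed_ones (enumeration)."""
    V = A.shape[0]
    best = -np.inf
    for bits in itertools.product([0, 1], repeat=V):
        y = np.array(bits)
        if all(y[i] == 1 for i in fixed_ones):
            best = max(best, y @ A @ y)
    return best

def is_certified_optimal(A_k, y_star, y_n, k_max):
    """Theorem 2 test: True guarantees y_star is optimal.
    A_k is the matrix A^[kmax],n from Algorithm 2."""
    V = A_k.shape[0]
    O = [i for i in range(V) if y_star[i] == 1]
    gap = -np.inf
    for i in range(V):
        if y_star[i] == 0:                  # lines 5-11 of Algorithm 3
            gap = max(gap, constrained_max(A_k, O + [i]))
    gap -= constrained_max(A_k, [])         # line 12: subtract the optimum
    s = y_n.sum()
    inc = 2.0 * (V - k_max) / ((k_max + s) * (V + s))  # Delta from eq. (10)
    return gap + inc * s <= 0
```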
5 Experimental Results
To evaluate our multi-label learning method we applied it to real-world datasets and compared it to state-of-the-art methods.
Datasets. For the sake of reproducibility we focused on publicly available datasets, and to ensure that the label dependencies have a reasonable impact on the results we restricted the experiments to datasets with a sufficiently large average number of labels per instance. We therefore chose two multilabel datasets from mulan²: yeast and enron. Table 1 describes them in more detail.

Figure 1: F-Score results on enron (left) and yeast (right), for different amounts of unary features. The horizontal axis denotes the proportion of the features used in training.
Experimental setting. The datasets used have very informative unary features, so to better visualise the contribution of the label dependencies to the model we trained using varying amounts (1%, 10% and 100%) of the original unary features. We compared our proposed method to RML [11] without reversion³, which is essentially our model without the quadratic term, and to other state-of-the-art methods for which source code is publicly available: BR [15], RAkEL [16] and MLKNN [20].
Model selection. Our model has two parameters: λ, the trade-off between data fitting and good generalisation, and c, a scalar that multiplies C to control the trade-off between the linear and the quadratic terms. For each experiment we selected them with 5-fold cross-validation on the training data. We also control the sparsity of C by setting C_{ij} to zero for all except the top most frequent pairs; this way we can reduce the dimensionality of θ², avoiding an excessive number of parameters for datasets with large values of V. In our experiments we used 50% of the pairs with yeast and 5% with enron (45 and 68 pairs, respectively). We experimented with other settings, but the results were very similar.
RML's only parameter, λ, was selected with 5-fold cross-validation. MLKNN's two parameters k (number of neighbors) and s (strength of the uniform prior) were kept fixed to 10 and 1.0, respectively, as was done in [20]. RAkEL's m (number of models) and t (threshold) were set to the library's defaults (respectively 2 × N and 0.5), and k (size of the labelset) was set to V/2 as suggested by [4]. For BR we kept the library's defaults.
Implementation. Our implementation is in C++, based on the source code of RML [11], which uses the Bundle Methods for Risk Minimization (BMRM) of [18]. The max-flow computations needed for graph-cuts are done with the library of [21]. The modifications necessary to enforce positivity in θ² in BMRM are described in Appendix C. Source code is available⁴ under the Mozilla Public License. Details of training time for our implementation are available in Appendix B.
Results: F-Score. In Figure 1 we plot the F-Score for varying-sized subsets of the unary features, for both enron (left) and yeast (right). The goal is to assess the benefits of explicitly modelling the pairwise label interactions, particularly when the unary information is deteriorated. As can be seen in Figure 1, when all features are available our model behaves similarly to RML. In this setting the unary features are very informative and the pairwise interactions are not helpful. As we reduce the number of available unary features (from right to left in the plots), the importance of the pairwise interactions increases, and our model demonstrates improvement over RML.
² http://mulan.sourceforge.net/datasets.html
³ RML deals mainly with the reverse problem of predicting instances given labels; however it can be applied in the forward direction as well, as described in [11].
⁴ http://users.cecs.anu.edu.au/~jpetterson/
Figure 2: Empirical analysis of Algorithms 2 and 3 during training with the yeast dataset. Left: frequency with which Algorithm 2 is optimal at each iteration (blue) and frequency with which Algorithm 3 reports an optimal solution has been found by Algorithm 2 (green). Right: difference, at each iteration, between the objective computed using the results from Algorithm 2 and exhaustive enumeration.
Results: Correctness. To evaluate how well our constraint generation algorithm performs in practice we compared its results against those of exhaustive search, which is exact but only feasible for a dataset with a small number of labels, such as yeast. We also assessed the strength of our test proposed in Algorithm 3. In Figure 2-left we plot, for the first 100 iterations of the learning algorithm, the frequency with which Algorithm 2 returns the exact solution (blue line) as well as the frequency with which the test given in Algorithm 3 guarantees the solution is exact (green line). We can see that overall in more than 50% of its executions Algorithm 2 produces an optimal solution. Our test effectively offers a lower bound which, as expected, is not tight; however it is informative in the sense that its variations reflect legitimate variations in the real quantity of interest (as can be seen by the obvious correlation between the two curves).
For the learning algorithm, however, what we are interested in is the objective o_i and the gradient g_i of line 7 of Algorithm 1, and both depend only on the compound result of N executions of Algorithm 2 at each iteration of the learning algorithm. This is illustrated in Figure 2-right, where we plot, for each iteration, the normalised difference between the objective computed with results from Algorithm 2 and the one computed with the results of an exact exhaustive search⁵. We can see that the difference is quite small: below 4% after the initial iterations.
6 Conclusion
We presented a method for learning multi-label classifiers which explicitly models label dependencies in a submodular fashion. As an estimator we use structured support vector machines solved with constraint generation. Our key contribution is an algorithm for constraint generation which is proven to be partially optimal in the sense that all labels it predicts are included in some optimal solution. We also describe an efficiently computable test that, if positive, guarantees the solution found is optimal, and can be used to generate empirical lower bounds on the probability of finding an optimal solution. We present empirical results that corroborate the fact that the algorithm is very accurate, and we illustrate the gains obtained in comparison to other popular algorithms, particularly a previous algorithm which can be seen as the particular case of ours when there are no explicit label interactions being modelled.
Acknowledgements
We thank Choon Hui Teo for his help in making the necessary modifications to BMRM.
NICTA is funded by the Australian Government as represented by the Department of
Broadband, Communications and the Digital Economy and the Australian Research Council
through the ICT Centre of Excellence program.
⁵ We repeated this experiment with several sets of parameters, with similar results.
References
[1] K. Dembczynski, W. Cheng, and E. Hüllermeier, "Bayes optimal multilabel classification via probabilistic classifier chains," in ICML, 2010.
[2] X. Zhang, T. Graepel, and R. Herbrich, "Bayesian online learning for multi-label and multi-variate performance measures," in AISTATS, 2010.
[3] P. Rai and H. Daume, "Multi-label prediction via sparse infinite CCA," in NIPS, 2009.
[4] J. Read, B. Pfahringer, G. Holmes, and E. Frank, "Classifier chains for multi-label classification," in ECML/PKDD, 2009.
[5] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor, "Kernel-based learning of hierarchical multilabel classification models," JMLR, vol. 7, pp. 1601–1626, December 2006.
[6] Z. Barutcuoglu, R. E. Schapire, and O. G. Troyanskaya, "Hierarchical multi-label prediction of gene function," Bioinformatics, vol. 22, pp. 830–836, April 2006.
[7] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, "TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation," in ICCV, 2009.
[8] M. Jansche, "Maximum expected F-measure training of logistic regression models," HLT, 2005.
[9] T. Mensink, J. Verbeek, and G. Csurka, "Learning structured prediction models for interactive image labeling," in CVPR, 2011.
[10] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun, "Large margin methods for structured and interdependent output variables," JMLR, vol. 6, pp. 1453–1484, 2005.
[11] J. Petterson and T. Caetano, "Reverse multi-label learning," in NIPS, 2010.
[12] W. Bi and J. Kwok, "Multi-label classification on tree- and DAG-structured hierarchies," in ICML, 2011.
[13] N. Ghamrawi and A. McCallum, "Collective multi-label classification," 2005.
[14] B. Hariharan, S. V. N. Vishwanathan, and M. Varma, "Large scale max-margin multi-label classification with prior knowledge about densely correlated labels," in ICML, 2010.
[15] G. Tsoumakas, I. Katakis, and I. P. Vlahavas, Mining Multi-label Data. Springer, 2009.
[16] G. Tsoumakas and I. P. Vlahavas, "Random k-labelsets: An ensemble method for multilabel classification," in ECML, 2007.
[17] S. Virtanen, A. Klami, and S. Kaski, "Bayesian CCA via group sparsity," in ICML, 2011.
[18] C. H. Teo, S. V. N. Vishwanathan, A. J. Smola, and Q. V. Le, "Bundle methods for regularized risk minimization," JMLR, vol. 11, pp. 311–365, 2010.
[19] Z. Svitkina and L. Fleischer, "Submodular approximation: Sampling-based algorithms and lower bounds," in FOCS, 2008.
[20] M.-L. Zhang and Z.-H. Zhou, "ML-KNN: A lazy learning approach to multi-label learning," Pattern Recognition, vol. 40, pp. 2038–2048, July 2007.
[21] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Trans. PAMI, 2004.
3,578 | 424 | Stochastic Neurodynamics
J.D. Cowan
Department of Mathematics, Committee on
Neurobiology, and Brain Research Institute,
The University of Chicago, 5734 S. Univ. Ave.,
Chicago, Illinois 60637
Abstract
The main point of this paper is that stochastic neural networks have a
mathematical structure that corresponds quite closely with that of
quantum field theory. Neural network Liouvillians and Lagrangians
can be derived, just as can spin Hamiltonians and Lagrangians in QFT.
It remains to show the efficacy of such a description.
1 INTRODUCTION
A basic problem in the analysis of large-scale neural network activity, is that one can
never know the initial state of such activity, nor can one safely assume that synaptic
weights are symmetric, or skew-symmetric. How can one proceed, therefore, to analyse
such activity? One answer is to use a "Master Equation" (Van Kampen, 1981). In
principle this can provide statistical information, moments and correlation functions of
network activity by making use of ensemble averaging over all possible initial states. In
what follows I give a short account of such an approach.
1.1 THE BASIC NEURAL MODEL
In this approach neurons are represented as simple gating elements which cycle through
several internal states whenever the net voltage generated at their activated post-synaptic
sites exceeds a threshold. These states are "quiescent", "activated", and "refractory", labelled 'q', 'a', and 'r' respectively. There are then four transitions to consider: q → a, r → a, a → r, and r → q. Two of these, q → a and r → a, are functions of the neural membrane current. I assume that on the time scale measured in units of τ_m, the membrane time constant, the instantaneous transition rate Λ(q → a) is a smooth function of the input current J_i(T). The transition rates Λ(q → a) and Λ(r → a) are then given by:

Λ_q = Θ[(J(T)/J_q) − 1] = Θ_q[J(T)],    (1)

and

Λ_r = Θ[(J(T)/J_r) − 1] = Θ_r[J(T)],    (2)

respectively, where J_q and J_r are the threshold currents related to θ_q and θ_r, and where Θ[x] is a suitable smoothly increasing function of x, and T = t/τ_m. The other two transition rates, Λ(a → r) and Λ(r → q), are defined simply as constants α and β.

Figure 1. Neural state transition rates

Figure 1 shows the "kinetic" scheme that results. Implicit in this scheme is the smoothing of input current pulses that takes place in the membrane, and also the smoothing caused by the
presumed asynchronous activation of synapses. This simplified description of neural
state transitions is essential to our investigation of cooperative effects in large nets.
1.2 PROBABILITY DISTRIBUTIONS FOR NEURAL NETWORK ACTIVITY
The configuration space of a neural network is the space of distinguishable patterns of
neural activity. Since each neuron can be in the state q, a or r, there are 3^N such patterns in a network of N neurons. Since N is O(10¹⁰), the configuration space is in principle
very large. This observation, together with the existence of random fluctuations of neural
activity, and the impracticability of specifying the initial states of all the neurons in a
large network, indicates the need for a probabilistic description of the formation and decay of patterns of neural activity.
Let Q(T), A(T), R(T) denote the numbers of quiescent, activated, and refractory neurons in a network of N neurons at time T. Evidently,

Q(T) + A(T) + R(T) = N.    (3)
Consider therefore N neurons in a d-dimensional lattice. Let a neural state vector be denoted by

|Ω⟩ = |v₁, v₂, …, v_N⟩    (4)

where v_i means the neuron at the site i is in the state v = q, a, or r. Let P[Ω(T)] be the probability of finding the network in state |Ω⟩ at time T, and let

|P(T)⟩ = Σ_Ω P[Ω(T)] |Ω⟩    (5)

be a neural probability state vector. Evidently

Σ_Ω P[Ω(T)] = 1.    (6)
1.3 A NEURAL NETWORK MASTER EQUATION
Now consider the most probable state transitions which can occur in an asynchronous
noisy network. These are:
(Q, A, R) → (Q, A, R): no change
(Q+1, A−1, R) → (Q, A, R): activation of a quiescent cell
(Q, A−1, R+1) → (Q, A, R): activation of a refractory cell
(Q, A+1, R−1) → (Q, A, R): an activated cell becomes refractory
(Q−1, A, R+1) → (Q, A, R): a refractory cell becomes quiescent.
All other transitions, e.g., those involving two or more transitions in time dT, are
assumed to occur with probability O(dT).
These state transitions can be represented by the action on a set of basis vectors, of
certain matrices. Let the basis vectors be:
|q⟩ = (0, 1, 0)ᵀ,  |a⟩ = (1, 0, 0)ᵀ,  |r⟩ = (0, 0, 1)ᵀ    (7)

and consider the Gell-Mann matrices representing the Lie Group SU(3) (Georgi, 1982):

λ_1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]],   λ_2 = [[0, −i, 0], [i, 0, 0], [0, 0, 0]],
λ_3 = [[1, 0, 0], [0, −1, 0], [0, 0, 0]],  λ_4 = [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
λ_5 = [[0, 0, −i], [0, 0, 0], [i, 0, 0]],  λ_6 = [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
λ_7 = [[0, 0, 0], [0, 0, −i], [0, i, 0]],  λ_8 = (1/√3) [[1, 0, 0], [0, 1, 0], [0, 0, −2]]    (8)

and the raising and lowering operators:

λ_{±1} = ½ (λ_1 ± iλ_2),  λ_{±2} = ½ (λ_4 ± iλ_5),  λ_{±3} = ½ (λ_6 ± iλ_7).    (9)
It is easy to see that these operators act on the basis vectors |v⟩ as shown in figure 2.
Figure 2. Neural State Transitions generated by the
raising and lowering operators of the Lie Group
SU(3).
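A quick numerical check of the transition structure in figure 2 and of the projector identity used in eqs. (10)-(11) below; the basis ordering and the pairing of the λ± operators with particular transitions follow the reconstruction in (7)-(9) and should be treated as an assumption, so the code defines the operators directly by their action on the basis states:

```python
import numpy as np

# Basis order (a, q, r): |a> = e1, |q> = e2, |r> = e3, as in eq. (7).
a, q, r = np.eye(3)

def hop(src, dst):
    return np.outer(dst, src)          # |dst><src|

up1, dn1 = hop(q, a), hop(a, q)        # lambda_{+1}, lambda_{-1}: q <-> a
up2, dn2 = hop(r, a), hop(a, r)        # lambda_{+2}, lambda_{-2}: r <-> a
up3, dn3 = hop(r, q), hop(q, r)        # lambda_{+3}, lambda_{-3}: r <-> q

# Consistency with eq. (9): lambda_{+1} should equal (lambda_1 + i lambda_2)/2.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
print(np.allclose(up1, (l1 + 1j * l2) / 2))        # True

# Projector identity behind eqs. (10)-(11): both products project onto |a>.
P_a = np.outer(a, a)
print(np.allclose(up1 @ dn1, P_a), np.allclose(up2 @ dn2, P_a))  # True True
```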
It also follows that

λ_{+1,i} λ_{−1,i} = λ_{+2,i} λ_{−2,i}    (10)

and that

J_i = Σ_j W_ij λ_{+1,j} λ_{−1,j} = Σ_j W_ij λ_{+2,j} λ_{−2,j}.    (11)
The entire sequence of neural state transitions into (Q, A, R) can be represented by the operator "Liouvillian":

L = α Σ_i (λ_{+2,i} − 1) λ_{−2,i} + β Σ_i (λ_{−3,i} − 1) λ_{+3,i}
  + Σ_i (λ_{−1,i} − 1) λ_{+1,i} Θ_q[J_i] + Σ_i (λ_{−2,i} − 1) λ_{+2,i} Θ_r[J_i].    (12)
This operator acts on the state function |P(T)⟩ according to the equation:

∂/∂T |P(T)⟩ = −L |P(T)⟩.    (13)

This is the neural network analogue of the Schrödinger equation, except that P[Ω(T)] =
⟨Ω|P(T)⟩ is a real probability distribution, and L is not Hermitian. In fact this equation
is a Markovian representation of neural network activity (Doi, 1976; Grassberger &
Scheunert, 1980), and is the required master equation.
1.4 A SPECIAL CASE: TWO-STATE NEURONS
It is helpful to consider the simpler case of two-state neurons first, since the group
algebra is much simpler. I therefore neglect the refractory state, and use the two-dimensional basis vectors:

|q⟩ = (1, 0)ᵀ,  |a⟩ = (0, 1)ᵀ,    (14)

corresponding to the kinetic scheme shown in Figure 3a.

Figure 3. (a) Neural state transitions in the two-state case; (b) neural state transitions generated by the raising and lowering operators of the Lie group SU(2).

The relevant matrices are the well-known Pauli spin matrices representing the Lie group
SU(2) (Georgi, 1982):

σ1 = ( 0 1 ; 1 0 ),  σ2 = ( 0 −i ; i 0 ),  σ3 = ( 1 0 ; 0 −1 ),    (15)

and the raising and lowering operators:

σ± = ½(σ1 ± iσ2),    (16)

giving the state transition diagram shown in Figure 3(b).
The corresponding neural Liouvillian is:

L = α Σᵢ (σ+i − 1) σ−i + Σᵢ (σ−i − 1) σ+i ε[Jᵢ],    (17)

where

Jᵢ = Σⱼ wᵢⱼ σ+j σ−j.    (18)

Physicists will recognize this Liouvillian as a generalization of the Regge spin
Hamiltonian of QFT:

L = α Σᵢ (σ+i − 1) σ−i + (1/N) Σᵢ Σⱼ wᵢⱼ (σ−i − 1) σ+i σ+j σ−j.    (19)

In principle, eqn. (13) with L given by eqn. (12) or (17), together with initial conditions,
contains a complete description of neural network activity, since its formal solution takes
the form:

|P(T)⟩ = exp( −∫₀ᵀ L(T′) dT′ ) |P(0)⟩.    (20)
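As a small numerical sanity check on eqs. (13) and (20) (our sketch, not from the paper), one can build the two-state master-equation generator directly from the kinetic rates, rather than through the operator algebra, for a tiny network, and evolve |P(T)⟩ by a matrix exponential. The rate function and coupling weights are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

N = 3
alpha = 1.0
w = np.ones((N, N)) - np.eye(N)       # illustrative coupling weights

def eps(x):
    return np.log1p(np.exp(x))        # illustrative smooth rate function

# Enumerate configurations: bit i of the state index = 1 if neuron i is active (a).
M = 2 ** N
G = np.zeros((M, M))                  # generator: dP/dT = G P, i.e. G = -L
for s in range(M):
    bits = [(s >> i) & 1 for i in range(N)]
    for i in range(N):
        J_i = sum(w[i, j] * bits[j] for j in range(N))
        if bits[i] == 0:              # q -> a at rate eps[(J_i / J_q) - 1], J_q = 1
            rate, t = eps(J_i - 1.0), s | (1 << i)
        else:                         # a -> q at constant rate alpha
            rate, t = alpha, s & ~(1 << i)
        G[t, s] += rate
        G[s, s] -= rate

assert np.allclose(G.sum(axis=0), 0)  # probability is conserved
P0 = np.zeros(M); P0[0] = 1.0         # start all-quiescent
PT = expm(G * 2.0) @ P0
print("P(T=2):", np.round(PT, 4), " sum =", PT.sum())
```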
1.5 MOMENT GENERATING EQUATIONS AND SPIN-COHERENT STATES
Solving this system of equations in detail, however, is a difficult problem. In practice one
is satisfied with the first few statistical moments. These can be obtained as follows (I
describe here the two-state case; similar but more complicated calculations obtain for
the three-state case).
Consider the following "spin-coherent states" (Perelomov 1986; Hecht 1987):

|α⟩ = exp( Σᵢ αᵢ* σ+i ) |0⟩,    (21)

where the αᵢ are complex numbers, and ⟨0| is the "vacuum" state ⟨q₁ q₂ ⋯ q_N|. Evidently

⟨α|P⟩ = ⟨α| Σ_Ω P[Ω(T)] |Ω⟩ = Σ_Ω P[Ω(T)] ⟨α|Ω⟩.

It can be shown that ⟨α|Ω⟩ = α₁^{ν₁} α₂^{ν₂} ⋯ α_N^{ν_N}, and that ⟨α|P⟩ = G(α₁, α₂, …, α_N),
the moment generating function for the probability distribution P[Ω(T)].
It can then be shown that:

∂G/∂T = [ α Σᵢ (D_{αᵢ} − 1) ∂/∂αᵢ + Σᵢ (∂/∂αᵢ − 1) D_{αᵢ} ε[Ĵᵢ] ] G,    (22)

where

D_{αᵢ} = αᵢ ( 1 − αᵢ ∂/∂αᵢ )  and  Ĵᵢ = Σⱼ wᵢⱼ D_{αⱼ} ∂/∂αⱼ,    (23)

i.e., the moment generating equation expressed in the "oscillator-algebra" representation.
1.6 A NEURAL NETWORK PATH INTEGRAL
The content of eqns. (22) and (23) can be summarized in a Wiener-Feynman path
integral (Schulman 1981). It can be shown that the transition probability of reaching a
state Ω′(T) given the initial state Ω(T₀), the so-called propagator G(Ω′, T | Ω, T₀),
can be expressed as the path integral:

G(Ω′, T | Ω, T₀) = ∫ ∏ᵢ Dαᵢ(T′) exp[ ∫_{T₀}^{T} { Σᵢ ½ (D′_{αᵢ} D*_{αᵢ} − D_{αᵢ} D′*_{αᵢ}) − L(D_{αᵢ}, D*_{αᵢ}) } dT′ ],    (24)

where D′_{αᵢ} = ∂D_{αᵢ}/∂T and

Dαᵢ(T′) = lim_{n→∞} (1/π)ⁿ ∏_{j=1}^{n} d²αᵢ(j) / (1 + αᵢ*(j) αᵢ(j))³,

where d²α = d(Re α) d(Im α). This propagator is sometimes written as an expectation
with respect to the Wiener measure ∏ᵢ Dαᵢ(T) as:

G(Ω′ | Ω) = ⟨ exp[ ∫_{T₀}^{T} L dT′ ] ⟩,    (25)

where the neural network Lagrangian is defined as:

L = L(D_{αᵢ}, D*_{αᵢ}) − Σᵢ ½ (D′_{αᵢ} D*_{αᵢ} − D_{αᵢ} D′*_{αᵢ}).    (26)

The propagator G contains all the statistics of the network activity. Steepest descent
methods, asymptotics, and Liapunov-Schmidt bifurcation methods may be used to
evaluate it.
2 CONCLUSIONS
The main point of this paper is that stochastic neural networks have a mathematical
structure that corresponds quite closely with that of quantum field theory. Neural
network Liouvillians and Lagrangians can be derived, just as can spin Hamiltonians and
Lagrangians in QFT. It remains to show the efficacy of such a description.
Acknowledgements
The early stages of this work were carried out in part with Alan Lapedes and David Sharp
of the Los Alamos National Laboratory. We thank the Santa Fe Institute for hospitality
and facilities during this work, which was supported in part by grant #N00014-89-J-1099
from the US Department of the Navy, Office of Naval Research.
References
Van Kampen, N. (1981), Stochastic Processes in Physics & Chemistry (N. Holland, Amsterdam).
Georgi, H. (1982), Lie Algebras in Particle Physics (Benjamin Books, Menlo Park).
Doi, M. (1976), J. Phys. A: Math. Gen. 9, 9, 1465-1477; 1479-1495.
Grassberger, P. & Scheunert, M. (1980), Fortschritte der Physik 28, 547-578.
Hecht, K.T. (1987), The Vector Coherent State Method (Springer, New York).
Perelomov, A. (1986), Generalized Coherent States and Their Applications (Springer, New York).
Matsubara, T. & Matsuda, H. (1956), A Lattice Model of Liquid Helium, I. Prog. Theoret. Phys. 16, 6, 569-582.
Schulman, L. (1981), Techniques and Applications of Path Integration (Wiley, New York).
Blending Autonomous Exploration and
3,579 | 4,240 | Blending Autonomous Exploration and
Apprenticeship Learning
Thomas J. Walsh
Center for Educational
Testing and Evaluation
University of Kansas
Lawrence, KS 66045
[email protected]
Daniel Hewlett
Clayton T. Morrison
School of Information:
Science, Technology and Arts
University of Arizona
Tucson, AZ 85721
{dhewlett@cs,clayton@sista}.arizona.edu
Abstract
We present theoretical and empirical results for a framework that combines the
benefits of apprenticeship and autonomous reinforcement learning. Our approach
modifies an existing apprenticeship learning framework that relies on teacher
demonstrations and does not necessarily explore the environment. The first change
is replacing previously used Mistake Bound model learners with a recently proposed framework that melds the KWIK and Mistake Bound supervised learning
protocols. The second change is introducing a communication of expected utility from the student to the teacher. The resulting system only uses teacher traces
when the agent needs to learn concepts it cannot efficiently learn on its own.
1 Introduction
As problem domains become more complex, human guidance becomes increasingly necessary to
improve agent performance. For instance, apprenticeship learning, where teachers demonstrate
behaviors for agents to follow, has been used to train agents to control complicated systems such
as helicopters [1]. However, most work on this topic burdens the teacher with demonstrating even
the simplest nuances of a task. By contrast, in autonomous reinforcement learning [2] a number
of domain classes can be efficiently learned by an actively exploring agent, although this class is
provably smaller than those learnable with the help of a teacher [3].
Thus the field seems to be largely bifurcated. Either agents learn autonomously and eschew the larger
learning capacity from teacher interaction, or the agent overburdens the teacher by not exploring
simple concepts it could garner on its own. Intuitively, this seems like a false choice, as human
teachers often use demonstration but also let students explore parts of the domain on their own. We
show how to build a provably efficient learning system that balances teacher demonstrations and
autonomous exploration. Specifically, our protocol and algorithms cause a teacher to only step in
when its advice will be significantly more helpful than autonomous exploration by the agent.
We extend a previously proposed apprenticeship learning protocol [3] where a learning agent and
teacher take turns running trajectories. This version of apprenticeship learning is fundamentally
different from Inverse Reinforcement Learning [4] and imitation learning [5] because our agents are
allowed to enact better policies than their teachers and observe reward signals. In this setting, the
number of times the teacher outperforms the student was proven to be related to the learnability of
the domain class in a mistake bound predictor (MBP) framework.
Our work modifies previous apprenticeship learning efforts in two ways. First, we will show that
replacing the MBP framework with a different learning architecture called KWIK-MBP (based on
a similar recently proposed protocol [6]) indicates areas where the agent should autonomously explore, and melds autonomous and apprenticeship learning. However, this change alone is not sufficient to keep the teacher from intervening when an agent is capable of learning on its own. Hence,
we introduce a communication of the agent's expected utility, which provides enough information
for the teacher to decide whether or not to provide a trace (a property not shared by any of the previous efforts). Furthermore, we show the number of such interactions grows only with the MBP
portion of the KWIK-MBP bound. We then discuss how to relax the communication requirement
when the teacher observes the student for many episodes. This gives us the first apprenticeship
learning framework where a teacher only shows demonstrations when they are needed for efficient
learning, and gracefully blends autonomous exploration and apprenticeship learning.
2 Background
The main focus of this paper is blending KWIK autonomous exploration strategies [7] and apprenticeship learning techniques [3], utilizing a framework for measuring mistakes and uncertainty based
on KWIK-MB [6]. We begin by reviewing results relating the learnability of domain parameters in
a supervised setting to the efficiency of model-based RL agents.
2.1 MDPs and KWIK Autonomous Learning
We will consider environments modeled as a Markov Decision Process (MDP) [2] ⟨S, A, T, R, γ⟩,
with states and actions S and A, transition function T: S × A → Pr[S], rewards R: S × A → ℝ, and
discount factor γ ∈ [0, 1). The value of a state under policy π: S → A is Vπ(s) = R(s, π(s)) +
γ Σ_{s′∈S} T(s, a, s′) Vπ(s′), and the optimal policy π* satisfies ∀π V_{π*} ≥ Vπ.
In model-based reinforcement learning, recent advancements [7] have linked the efficient learnability of T and R in the KWIK ("Knows What It Knows") framework for supervised learning with
PAC-MDP behavior [8]. Formally, KWIK learning is:
Definition 1. A hypothesis class H: X → Y is KWIK learnable with parameters ε and δ if the
following holds. For each (adversarial) input xt the learner predicts yt ∈ Y or "I don't know"
(⊥). With probability (1 − δ), (1) when yt ≠ ⊥, ‖yt − E[h(xt)]‖ < ε, and (2) the total number of ⊥
predictions is bounded by a polynomial function of (|H|, 1/ε, 1/δ).
Intuitively, KWIK caps the number of times the agent will admit uncertainty in its predictions. Prior
work [7] showed that if the transition and reward functions (T and R) of an MDP are KWIK learnable, then a PAC-MDP agent (which takes only a polynomial number of suboptimal steps with high
probability) can be constructed for autonomous exploration. The mechanism for this construction is
an optimistic interpretation of the learned model. Specifically, KWIK learners LT and LR are built
for T and R and the agent replaces any ⊥ predictions with transitions to a trap state with reward
Rmax, causing the agent to explore these uncertain regions. This exploration requires only a polynomial (with respect to the domain parameters) number of suboptimal steps, thus the link from KWIK
to PAC-MDP. While the class of functions that is KWIK learnable includes tabular and factored
MDPs, it does not cover many larger dynamics classes (such as STRIPS rules with conjunctions for
pre-conditions) that are efficiently learnable in the apprenticeship setting.
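As a toy illustration of Definition 1 (our sketch, not code from the paper), a KWIK learner for a deterministic function over a finite input set can simply memorize observations, predicting ⊥ until an input has been seen, so the number of ⊥ predictions is at most |X|:

```python
class MemorizingKWIK:
    """KWIK learner for deterministic functions on a finite input set.

    Predicts the stored label when the input has been seen, and
    "I don't know" (None, standing in for the bottom symbol) otherwise,
    so the number of bottom predictions is at most |X|.
    """
    def __init__(self):
        self.table = {}

    def predict(self, x):
        return self.table.get(x)          # None means "I don't know"

    def update(self, x, y):
        self.table[x] = y

learner = MemorizingKWIK()
for x, y in [("s1", 0.0), ("s2", 1.0), ("s1", 0.0)]:
    guess = learner.predict(x)
    print(x, "->", "unknown" if guess is None else guess)
    learner.update(x, y)
```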
2.2 Apprenticeship Learning with Mistake Bound Predictor
We now describe an existing apprenticeship learning framework [3], which we will be modifying
throughout this paper. In that protocol, an agent is presented with a start state s0 and is asked to take
actions according to its current policy ?A , until a horizon H or a terminal state is reached. After each
of these episodes, a teacher is allowed to (but may choose not to) demonstrate their own policy ?T
starting from s0 . The learning agent is able to fully observe each transition and reward received both
in its own trajectories as well as those of the teacher, who may be able to provide highly informative
samples. For example, in an environment with n bits representing a combination lock that can only
be opened with a single setting of the bits, the teacher can demonstrate the combination in a single
trace, while an autonomous agent could spend 2n steps trying to open it.
Also in that work, the authors describe a measure of sample complexity called PAC-MDP-Trace
(analogous to PAC-MDP from above) that measures (with probability 1 − δ) the number of episodes
where V_{πA}(s0) < V_{πT}(s0) − ε, that is where the expected value of the agent's policy is significantly
worse than the expected value of the teacher's policy (VA and VT for short). A result analogous
to the KWIK to PAC-MDP result was shown connecting a supervised framework called Mistake
Bound Predictor (MBP) to PAC-MDP-Trace behavior. MBP extends the classic mistake bound
learning framework [9] to handle data with noisy labels, or more specifically:
Definition 2. A hypothesis class H: X → Y is Mistake Bound Predictor (MBP) learnable with
parameters ε and δ if the following holds. For each adversarial input xt, the learner predicts yt ∈ Y.
If ‖E_{h*}[xt] − yt‖ > ε, then the agent has made a mistake. The number of mistakes must be bounded
by a polynomial over (1/ε, 1/δ, |H|) with probability (1 − δ).
An agent using MBP learners LT and LR for the MDP model components will be PAC-MDPTrace. The conversion mirrors the KWIK to PAC-MDP connection described earlier, except that the
interpretation of the model is strict, and often pessimistic (sometimes resulting in an underestimate
of the value function). For instance, if the transition function is based on a conjunction (e.g. our
combination lock), the MBP learners default to predicting "false" where the data is incomplete,
leading an agent to think its action will not work in those situations. Such interpretations would
be catastrophic in the autonomous case (where the agent would fail to explore such areas), but are
permissible in apprenticeship learning where teacher traces will provide the missing data.
Notice that under a criterion where the number of teacher traces is to be minimized, MBP learning
may overburden the teacher. For example, in a simple flat MDP, an MBP-Agent picks actions that
maximize utility in the part of the state space that has been exposed by the teacher, never exploring,
so the number of teacher traces scales with |S||A|. But a flat MDP is autonomously (KWIK) learnable, so no traces should be required. Ideally an agent would explore the state space where it can
learn efficiently, and only rely on the teacher for difficult to learn concepts (like conjunctions).
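For intuition about Definition 2 and the combination-lock example, here is a hedged sketch of the standard monotone-conjunction mistake-bound learner (ours, not the paper's code). It keeps the most specific conjunction consistent with the positives seen so far, defaults to predicting false, and makes at most n + 1 mistakes in the noise-free case:

```python
class ConjunctionMBP:
    """Mistake-bound learner for monotone conjunctions over n bits.

    Starts by predicting false everywhere; the first positive example fixes
    the literal set, and later positives can only remove literals, so the
    learner makes at most n + 1 mistakes in the noise-free case.
    """
    def __init__(self, n):
        self.n = n
        self.literals = None  # None encodes "predict false everywhere"

    def predict(self, x):
        if self.literals is None:
            return False
        return all(x[i] for i in self.literals)

    def update(self, x, label):
        if label and not self.predict(x):
            true_bits = {i for i in range((self.n)) if x[i]}
            self.literals = true_bits if self.literals is None else self.literals & true_bits

# A 4-bit "combination lock": the action works only when bits 0 and 2 are set.
target = lambda x: bool(x[0] and x[2])
learner = ConjunctionMBP(4)
for x in [(1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 1, 1)]:
    y = target(x)
    mistake = learner.predict(x) != y
    learner.update(x, y)
    print(x, "label:", y, "mistake:", mistake)
```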
3 Teaching by Demonstration with Mixed Interpretations
We now introduce a different criterion with the goal of minimizing teacher traces while not forcing
the agent to explore exponentially long.
Definition 3. A Teacher Interaction (TI) bound for a student-teacher pair is the number of episodes
where the teacher provides a trace to the agent that guarantees (with probability 1 − δ) that the
number of agent steps between each trace (or after the last one) where VA(s0) < VT(s0) − ε is
polynomial in 1/ε, 1/δ, and the domain parameters.
A good TI bound minimizes the teacher traces needed to achieve good behavior, but only requires
the suboptimal exploration steps to be polynomially bounded, not minimized. This reflects our
judgement that teacher interactions are far more costly than autonomous agent steps, so as long
as the latter are reasonably constrained, we should seek to minimize the former. The relationship
between TI and PAC-MDP-Trace is the following:
Theorem 1. The TI bound for a domain class and learning algorithm is upper-bounded by the
PAC-MDP-Trace bound for the same domain/algorithm with the same ε and δ parameters.
Proof. A PAC-MDP-Trace bound quantifies (with probability 1 − δ) the worst-case number of
episodes where the student performs worse than the teacher, specifically where VA(s0) < VT(s0) − ε.
Suppose an environment existed with a PAC-MDP-Trace bound of B1 and a TI bound of B2 > B1 .
This would mean the domain was learnable with at most B1 teacher traces. But this is a contradiction
because no more traces are needed to keep the autonomous exploration steps polynomial.
3.1 The KWIK-MBP Protocol
We would like to describe a supervised learning framework (like KWIK or MBP) that can quantify
the number of changes made to a model through exploration and teacher demonstrations. Here, we
propose such a model based on the recent KWIK-MB protocol [6], which we extend below to cover
stochastic labels (KWIK-MBP).
Definition 4. A hypothesis class H : X 7? Y is KWIK-MBP with parameters and ? under
the following conditions. For each (adversarial) input xt the learner must predict yt ? Y or ?.
With probability (1 ? ?), the number of ? predictions must be bounded by a polynomial K over
h|H|, 1/, 1/?i and the number of mistakes (by Definition 2) must be bounded by a polynomial M
over h|H|, 1/, 1/?i.
Algorithm 1 KWIK-MBP-Agent with Value Communication
1: The agent A knows ε, δ, S, A, H and planner P.
2: The teacher T has policy πT with expected value VT.
3: Initialize KWIK-MBP learners LT and LR to ensure ε/k value accuracy w.h.p. for k ≥ 2.
4: for each episode do
5:    s0 = Environment.startState
6:    A calculates the value function UA of πA from Ŝ, T̂ and R̂ (see construction below).
7:    A communicates its expected utility UA(s0) on this episode to T.
8:    if VT(s0) − ((k−1)/k)ε > UA(s0) then
9:       T provides a trace τ starting from s0.
10:      for all ⟨s, a, r, s′⟩ in τ: update LT(s, a, s′) and LR(s, a, r)
11:   while episode not finished and t < H do
12:      Ŝ = S ∪ {Smax}, the Rmax trap state
13:      R̂ = LR(s, a), or Rmax if LR(s, a) = ⊥
14:      T̂ = LT(s, a), or Smax if LT(s, a) = ⊥
15:      at = P.getPlan(st, Ŝ, T̂, R̂)
16:      ⟨rt, st+1⟩ = E.executeAct(at)
17:      LT.Update(st, at, st+1); LR.Update(st, at, rt)
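A minimal sketch of this loop in code (our paraphrase, using hypothetical env/teacher/planner/learner interfaces, not the authors' implementation):

```python
def kwik_mbp_episode(env, teacher, LT, LR, planner, V_T, eps, k, H):
    """One episode of the KWIK-MBP agent with value communication."""
    s0 = env.start_state()
    # Plan in the "mixed" model: trust LT/LR predictions where made, but send
    # any "I don't know" prediction to the Rmax trap state (lines 12-14 above).
    U_A = planner.value(s0, LT, LR, unknown="rmax")
    # Line 8: the teacher steps in only for a pessimistic student.
    if V_T(s0) - (k - 1) * eps / k > U_A:
        for s, a, r, s_next in teacher.trace(s0):
            LT.update(s, a, s_next)
            LR.update(s, a, r)
    s, t = s0, 0
    while not env.done(s) and t < H:
        a = planner.get_plan(s, LT, LR, unknown="rmax")
        r, s_next = env.execute(a)
        LT.update(s, a, s_next)
        LR.update(s, a, r)
        s, t = s_next, t + 1
```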
KWIK-MB was originally designed for a situation where mistakes are more costly than ⊥ predictions. So mistakes are minimized while ⊥ predictions are only bounded. This is analogous to our
TI criterion (traces minimized with exploration bounded) so we now examine a KWIK-MBP learner
3.2 Mixing Optimism and Pessimism
Algorithm 1 (KWIK-MBP-Agent) shows an apprenticeship learning agent built over KWIK-MBP
learners LT and LR. Both of these model learners are instantiated to ensure the learned value
function will have ε/k accuracy for k ≥ 2 (for reasons discussed in the main theorem), which can
be done by setting εR = ε(1−γ)/(16k) and εT = ε(1−γ)²/(16k·Vmax) (details follow the same form as standard
connections between model learners and value function accuracy, for example in Theorem 3 from
[7]). When planning with the subsequent model, the agent constructs a "mixed" interpretation,
trusting the learner's predictions where mistakes might be made, but replacing (lines 13-14) all ⊥
predictions from LR with a reward of Rmax and any ⊥ predictions from LT with transitions to the
Rmax trap state Smax. This has the effect of drawing the agent to explore explicitly uncertain regions
(⊥) and to either explore on its own or rely on the teacher for areas where a mistake might be made.
For instance, in the experiments in Figure 2 (left), discussed in depth later, a KWIK-MBP agent
only requires traces for learning the pre-conditions in a noisy blocks world but uses autonomous
exploration to discover the noise probabilities.
4 Teaching by Demonstration with Explicit Communication
Thus far we have not discussed communication from the student to the teacher in KWIK-MBP-Agent (line 7). We now show that this communication is vital in keeping the TI bound small.
Example 1. Suppose there was no communication in Algorithm 1 and the teacher provided a trace
when πA was suboptimal. Consider a domain where the pre-conditions of actions are governed by
a disjunction over the n state factors (if the disjunction fails, the action fails). Disjunctions can be
learned with M = n/3 mistakes and K = 3n/2 − 3M ⊥ predictions [6]. However, that algorithm
defaults to predicting "true" and only learns from negative examples. This optimistic interpretation
means the agent will expect success, and can learn autonomously. However, the teacher will provide
a trace to the agent since it sees it performing suboptimally during exploration. Such traces are
unnecessary and uninformative (their positive examples are useless to LT).
This illustrates the need for student communication to give some indication of its internal model
to the teacher. The protocol in Algorithm 1 captures this intuition by providing a channel (line 7)
where the student communicates its expected utility UA. The teacher then only shows a trace to
a pessimistic agent (line 8), but will "stand back" and let an over-confident student learn from its
own mistakes. We note that there are many other possible forms of this communication such as
announcing the probability of reaching a goal or an equivalence query [10] type model, where the
student exposes its entire internal model to the teacher. We focus here on the communication of
utility, which is general enough for MDP domains but has low communication overhead.
4.1 Theoretical Properties
Figure 1: The areas for UA and VA corresponding to the cases in the main theorem. In all cases UT ≤ UA, and when k = 2 the two dashed lines collapse together.

The proof of the algorithm's TI bound appears below and is illustrated in Figure 1, but intuitively we show that if we force the student to (w.h.p.) learn an ε/k-accurate value function for k ≥ 2 then we can guarantee traces where UA < VT − ε/k will be helpful, but are not needed until UA is reported below VT − (k−1)ε/k, at which point UA alone cannot guarantee that VA is within ε of VT and so a trace must be given. Because traces are only given when the student undervalues a potential policy, the number of traces is related only to the MBP portion of the KWIK-MBP bound, and more specifically to the number of pessimistic mistakes, defined as:
Definition 5. A mistake is pessimistic if and only if it causes some policy π to be undervalued in the
agent's model, that is in our case Uπ < Vπ − ε/k.
Note that by the construction of our model, KWIK-learnable parameters (⊥ replaced by Rmax-style
interpretations) never result in such pessimistic mistakes. We can now state the following:
Theorem 2. Algorithm 1 with KWIK-MBP learners will have a TI bound that is polynomial in 1/ε, 1/δ,
1/(1−γ), and P, where P is the number of pessimistic mistakes (P ≤ M) made by LT and LR.
Proof. The proof stems from an expansion of the Explore-Explain-Exploit Lemma from [3]. That
original lemma categorized the three possible outcomes of an episode in an apprenticeship learning
setting where the teacher always gives a trace and with LT and LR built to learn V within ε/2.
The three possibilities for an episode were (1) exploration, when the agent's value estimate of πA
is inaccurate, ‖VA − UA‖ > ε/2, (2) exploitation when the agent's prediction of its own return is
accurate (‖UA − VA‖ ≤ ε/2) and the agent is near-optimal with respect to the teacher (VA ≥ VT − ε),
and (3) explanation when ‖VA − UA‖ ≤ ε/2, but VA < VT − ε. Because both (1) and (3) provide
samples to LT and LR , the number of times they can occur is bounded (in the original lemma) by the
MBP bound on those learners and in both cases a relevant sample is produced with high probability
due to the simulation lemma (c.f. Lemma 9 of [7]), which states that two different value returns
from two MDPs means that, with high probability, their parameters must be different.
We need to extend the lemma to cover our change in protocol (the teacher may not step in on every
episode) and in evaluation criteria (TI bound instead of PAC-MDP-Trace). Specifically, we need to
show: (i) The number of steps between traces where VA < VT − ε is polynomially bounded. (ii) Only
a polynomial number of traces are given, and they are all guaranteed to improve some parameter in
the agent's model with high probability. (iii) Only pessimistic mistakes (Definition 5) cause a teacher
intervention. Note that properties (i) and (ii) imply that VA < VT − ε for only a polynomial number
of episodes and correspond directly to the TI criteria from Definition 3. We now consider Algorithm
1 according to these properties in all of the cases from the original explore-exploit-explain lemma.
We begin with the Explain case where VA < VT − ε and ‖UA − VA‖ ≤ ε/k. Combining these
inequalities, we know UA < VT − (k−1)ε/k, so a trace will definitely be provided. Since UT ≤ UA
(UT is the value of πT in the student's model and UA was optimal) we have at least UT < VT − ε/k
and the simulation lemma implies the trace will (with high probability) be helpful. Since there are a
limited number of such mistakes (because LR and LT are KWIK-MBP learners) we have satisfied
property (ii). Property (iii) is true because both πT and πA are undervalued.
We now consider the Exploit case where VA ≥ VT − ε and ‖UA − VA‖ ≤ ε/k. There are two possible
situations here, because UA can either be larger or smaller than VT − (k−1)ε/k. If UA ≥ VT − (k−1)ε/k
then no trace is given, but the agent's policy is near optimal so property (i) is not violated. If
UA < VT − (k−1)ε/k, then a trace is given, even in this exploit case, because the teacher does not
know VA and cannot distinguish this case from the "explain" case above. However, this trace will
still be helpful, because UT ≤ UA, so at least UT < VT − ε/k (satisfying iii), and again by the
simulation lemma, the trace will help us learn a parameter and there are a limited number of such
mistakes, so (ii) holds.
Finally, we have the Explore case, where ‖UA − VA‖ > ε/k. In that case, the agent's own experience
will help it learn a parameter, but in terms of traces we have the following cases:
UA ≥ VT − (k−1)ε/k and VA > UA + ε/k. In this case no trace is given but we have VA > VT − ε, so
property (i) holds.
UA ≥ VT − (k−1)ε/k and UA > VA + ε/k. No trace is given here, but this is the classical exploration
case (UA is optimistic, as in KWIK learning). Since UA and VA are sufficiently separated, the
agent's own experience will provide a useful sample, and because all parameters are polynomially
learnable, property (i) is satisfied.
UA < VT − (k−1)ε/k and either VA > UA + ε/k or UA > VA + ε/k. In either case, a trace will be
provided but UT ≤ UA so at least UT < VT − ε/k and the trace will be helpful (satisfying property
(ii)). Pessimistic mistakes are causing the trace (property iii) since πT is undervalued.
Our result improves on previous results by attempting to minimize the number of traces while reasonably bounding exploration. The result also generalizes earlier apprenticeship learning results
on ε/2-accurate learners [3] to ε/k-accuracy, while ensuring a more practical and stronger bound (TI
instead of PAC-MDP-Trace). The choice of k in this situation is somewhat complicated. Larger k
requires more accuracy of the learned model, but decreases the size of the "bottom region" above
where a limited number of traces may be given to an already near-optimal agent. So increasing k
can either increase or decrease the number of traces, depending on the exact problem instance.
4.2 Experiments
We now present experiments in two domains. The first domain is a blocks world with dynamics
based on stochastic STRIPS operators, a −1 step cost, and a goal of stacking the blocks. That is, the
environment state is described as a set of grounded relations (e.g. On(a, b)) and actions are described
by relational (with variables) operators that have conjunctive pre-conditions that must hold for the
action to execute (e.g. putDown(X, To) cannot execute unless the agent is holding X and To is
clear and a block). If the pre-conditions hold, then one of a set of possible effects (pairs of Add
and Delete lists), chosen based on a probability distribution over effects, will change the current
state. The actions in our blocks world are two versions of pickup(X, From) and two versions of
putDown(X, To), with one version being "reliable", producing the expected result 80% of the time
and otherwise doing nothing. The other version of each action has the probabilities reversed. The
literals in the effects of the STRIPS operators (the Add and Delete lists) are given to the learning
agents, but the pre-conditions and the probabilities of the effects need to be learned. This is an
interesting case because the effect probabilities can be learned autonomously while the conjunctive
pre-conditions (of sizes 3 and 4), require teacher input (like our combination lock example).
Figure 2, column 1, shows KWIK, MBP, and KWIK-MBP agents as trained by a teacher who uses
unreliable actions half the time. The KWIK learner never receives traces (since its expected utility,
shown in 1a, is always high), but spends an exponential (in the number of literals) time exploring
the potential pre-conditions of actions (1b). In contrast, the MBP and KWIK-MBP agents use
the first trace to learn the pre-conditions. The proportion of trials (out of 30) that the MBP and
KWIK-MBP learners received teacher traces across episodes is shown in the bar graphs 1c and 1d
of Fig. 2. The MBP learner continues to get traces for several episodes afterwards, using them to
help learn the probabilities well after the pre-conditions are learned. This probability learning could
be accomplished autonomously, but the MBP pessimistic value function prevents such exploration
in this case. By contrast, KWIK-MBP receives 1 trace to learn the pre-conditions, and then explores
the probabilities on its own. KWIK-MBP actually learns the probabilities faster than MBP because
it targets areas it does not know about rather than relying on potentially redundant teacher samples.
However, in rare cases KWIK-MBP receives additional traces; in fact there were two exceptions in
the 30 trials, indicated by ??s at episodes 5 and 19 in 1d. The reason for this is that sometimes the
learner may be unlucky and construct an inaccurate value estimate and the teacher then steps in and
provides a trace.
Figure 2: A plot matrix with rows (a) value predictions UA(s0), (b) average undiscounted cumulative reward and (c and d) the proportion of trials where MBP and KWIK-MBP received teacher traces. The left column is Blocks World and the right a modified Wumpus World. Red corresponds to KWIK, blue to MBP, and black to KWIK-MBP.

The second domain is a variant of "Wumpus World" with 5 locations in a chain, an agent who can move, fire arrows (unlimited supply) or pick berries (also unlimited), and a wumpus moving randomly. The domain is represented by a Dynamic Bayes Net (DBN) based on these factors and the reward is represented as a linear combination of the factor values (−5 for a live wumpus and +2 for picking a berry). The action effects are noisy, especially the probability of killing the wumpus, which depends on the exact (not just relative) locations of the agent, wumpus, and whether the wumpus is dead yet (three parent factors in the DBN). While the reward function is KWIK learnable through linear regression [7] and though DBN CPTs with small parent sizes are also KWIK learnable, the high connectivity of this particular DBN makes autonomous exploration of all the parent-value configurations prohibitive. Because of this, in our KWIK-MBP implementation, we combined a KWIK linear regression learner for LR with an MBP learner for LT that is given the DBN structure and learns the parameters from experience, but when entries in the conditional probability tables are the result of only a few data points, the learner predicts no change for this factor, which was generally a pessimistic outcome. We constructed an "optimal hunting" teacher that finds the best combination of locations to shoot the wumpus from/at, but ignores the berries. We concentrate on the ability of our algorithm to find a better policy than the teacher (i.e., learning to pick berries), while staying close enough to the teacher's traces that it can hunt the wumpus effectively.
Figure 2, column 2, presents the results from this experiment. In plot 2a we see the predicted values
of the three learners, while plot 2b shows their performance. The KWIK learner starts with high
UA that gradually descends (in 2a), but without traces the agent spends most of its time exploring
fruitlessly (very slowly inclining slope of 2b). The MBP agent learns to hunt from the teacher
and quickly achieves good behavior, but rarely learns to pick berries (only gaining experience on
the reward of berries if it ends up in completely unknown state and picks berries at random many
times). The KWIK-MBP learner starts with high expected utility and explores the structure of just
the reward function, discovering berries but not the proper location combinations for killing the
wumpus. Its UA thus initially drops precipitously as it thinks all it can do is collect berries. Once
this crosses the teacher's threshold, the teacher steps in with a number of traces showing the best
way to hunt the wumpus; this is seen in plot 2d with the small bump in the proportion of trials
with traces, starting at episode 2 and declining roughly linearly until episode 10. The KWIK-MBP
student is then able to fill in the CPTs with information from the teacher and reach an optimal policy
that kills the wumpus and picks berries, avoiding both the over- and under-exploration of the KWIK
and MBP agents. This increased overall performance is seen in plot 2b as KWIK-MBP's average
cumulative reward surpasses MBP between episodes 5 and 10.
5 Inferring Student Aptitude
We now describe a method for a teacher to infer the student's aptitude by using long periods without
teacher interventions as observation phases. This interaction protocol is an extension of Algorithm
1, but instead of using direct communication, the teacher will allow the student to run some number
of trajectories m from a fixed start state and then decide whether to show a trace or not.
We would like to show that the length (m) of each observation phase can be polynomially bounded
and the system as a whole can still maintain a good TI bound. We show below that such an m exists
and is related to the PAC-MDP bound for a portion of the environment we call the zone of tractable
exploration (ZTE). The ZTE (inspired by the zone of proximal development [11]) is the area of an
MDP that an agent with background knowledge B and model learners LT and LR can act in with
a polynomial number of suboptimal steps as judged only within that area. Combining the ZTE, B,
LT and LR induces a learning sub-problem where the agent must learn to act as well as possible
without the teacher's help.
Remark 1. If the learning agent is KWIK-MBP and the evaluation phase has length m = A1 + A2
where A1 is the PAC-MDP bound for the ZTE and A2 is the number of trials all starting from s0
needed to estimate VA(s0) (written V̂A) within accuracy ε/k for k ≥ 4, and the teacher only steps in when
V̂A < VT − (k−1)ε/k, the resulting interaction will have a TI bound equivalent to the earlier one,
although the student needs to wait m trials to get a trace from the teacher.
A1 trials are necessary because the agent may need to explore all the ⊥ or optimistic mistakes within
the ZTE, and each episode might contain only one of the A1 suboptimal steps. Since each trajectory
with a fixed policy results in an i.i.d. sample with mean VA, A2 can be polynomially bounded using
a Chernoff bound [12]. Note we require here that k ≥ 4 (a stricter requirement than earlier). This is
because we have errors of ‖VA − V̂A‖ ≤ ε/k and ‖UA − VA‖ ≤ ε/k, so V̂A needs to be at least 3ε/k
below VT to ensure UT < VT − ε/k, and therefore traces are helpful. But V̂A may also overestimate
VA, leading to an extra ε/k slack term, and hence k ≥ 4.
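For concreteness, a sketch of the Chernoff/Hoeffding computation behind A2 (ours, under the assumption that per-episode returns lie in [0, Vmax]):

```python
import math

def trials_for_value_estimate(eps, delta, k, v_max):
    """Trials so the empirical mean return is within eps/k of V_A with
    probability 1 - delta, for returns bounded in [0, v_max] (Hoeffding)."""
    return math.ceil((k * v_max / eps) ** 2 * math.log(2.0 / delta) / 2.0)

# e.g., eps = 0.1, delta = 0.05, k = 4, Vmax = 1 gives A2 = 2951 trials.
print(trials_for_value_estimate(eps=0.1, delta=0.05, k=4, v_max=1.0))
```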
6 Related Work and Conclusions
Our teaching protocol extends early apprenticeship learning work for linear MDPs [1], which
showed a polynomial number of upfront traces followed by greedy (not explicitly exploring) trajectories could achieve good behavior. Our protocol is similar to a recent "practice/critique" interaction [13] where a teacher observed an agent and then labeled individual actions as "good" or
"bad", but the teacher did not provide demonstrations in that work. Our setting differs from inverse
reinforcement learning [4, 5] because our student can act better than the teacher, does not know the
dynamics, and observes rewards. Studies have also been done on humans providing shaping rewards
as feedback to agents rather than our demonstration technique [14, 15].
Some works have taken a heuristic approach to mixing autonomous learning and teacher-provided
trajectories. This has been done in robot reinforcement learning domains [16] and for bootstrapping
classifiers [17]. Many such approaches give all the teacher data at the beginning, while our teaching
protocol has the teacher only step in selectively, and our theoretical results ensure the teacher will
only step in when its advice will have a significant effect.
We have shown how to use an extension of the KWIK-MB [6] (now KWIK-MBP) framework as
the basis for model-based RL agents in the apprenticeship paradigm. These agents have a "mixed"
interpretation of their learned models that admits a degree of autonomous exploration. Furthermore,
introducing a communication channel from the student to the teacher and having the teacher only
give traces when VT is significantly better than UA guarantees the teacher will only provide demonstrations that attempt to teach concepts the agent could not tractably learn on its own, which has
clear benefits when demonstrations are far more costly than exploration steps.
Acknowledgments
We thank Michael Littman and Lihong Li for discussions and DARPA-27001328 for funding.
References
[1] Pieter Abbeel and Andrew Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In ICML, 2005.
[2] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, March 1998.
[3] Thomas J. Walsh, Kaushik Subramanian, Michael L. Littman, and Carlos Diuk. Generalizing apprenticeship learning across hypothesis classes. In ICML, 2010.
[4] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
[5] Nathan Ratliff, David Silver, and J. Bagnell. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27:25-53, 2009.
[6] Amin Sayedi, Morteza Zadimoghaddam, and Avrim Blum. Trading off mistakes and don't-know predictions. In NIPS, 2010.
[7] Lihong Li, Michael L. Littman, Thomas J. Walsh, and Alexander L. Strehl. Knows what it knows: A framework for self-aware learning. Machine Learning, 82(3):399-443, 2011.
[8] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement learning in finite MDPs: PAC analysis. Journal of Machine Learning Research, 10:2413-2444, 2009.
[9] Nick Littlestone. Learning quickly when irrelevant attributes abound. Machine Learning, 2:285-318, 1988.
[10] Dana Angluin. Queries and concept learning. Machine Learning, 2(4):319-342, 1988.
[11] Lev Vygotsky. Interaction between learning and development. In Mind in Society. Harvard University Press, Cambridge, MA, 1978.
[12] Michael J. Kearns, Yishay Mansour, and Andrew Y. Ng. Approximate planning in large POMDPs via reusable trajectories. In NIPS, 1999.
[13] Kshitij Judah, Saikat Roy, Alan Fern, and Thomas G. Dietterich. Reinforcement learning via practice and critique advice. In AAAI, 2010.
[14] W. Bradley Knox and Peter Stone. Combining manual feedback with subsequent MDP reward signals for reinforcement learning. In AAMAS, 2010.
[15] Andrea Lockerd Thomaz and Cynthia Breazeal. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6-7):716-737, 2008.
[16] William D. Smart and Leslie Pack Kaelbling. Effective reinforcement learning for mobile robots. In ICRA, 2002.
[17] Sonia Chernova and Manuela Veloso. Interactive policy learning through confidence-based autonomy. Journal of Artificial Intelligence Research, 34(1):1-25, 2009.
Robust Multi-Class Gaussian Process Classification
Daniel Hernández-Lobato
ICTEAM - Machine Learning Group
Université catholique de Louvain
Place Sainte Barbe, 2
Louvain-La-Neuve, 1348, Belgium
[email protected]
José Miguel Hernández-Lobato
Department of Engineering
University of Cambridge
Trumpington Street, Cambridge
CB2 1PZ, United Kingdom
[email protected]
Pierre Dupont
ICTEAM - Machine Learning Group
Université catholique de Louvain
Place Sainte Barbe, 2
Louvain-La-Neuve, 1348, Belgium
[email protected]
Abstract
Multi-class Gaussian Process Classifiers (MGPCs) are often affected by overfitting problems when labeling errors occur far from the decision boundaries. To
prevent this, we investigate a robust MGPC (RMGPC) which considers labeling
errors independently of their distance to the decision boundaries. Expectation
propagation is used for approximate inference. Experiments with several datasets
in which noise is injected in the labels illustrate the benefits of RMGPC. This
method performs better than other Gaussian process alternatives based on considering latent Gaussian noise or heavy-tailed processes. When no noise is injected in
the labels, RMGPC still performs equal or better than the other methods. Finally,
we show how RMGPC can be used for successfully identifying data instances
which are difficult to classify correctly in practice.
1 Introduction
Multi-class Gaussian process classifiers (MGPCs) are a Bayesian approach to non-parametric multiclass classification with the advantage of producing probabilistic outputs that measure uncertainty
in the predictions [1]. MGPCs assume that there are some latent functions (one per class) whose
value at a certain location is related by some rule to the probability of observing a specific class
there. The prior for each of these latent functions is specified to be a Gaussian process. The task of
interest is to make inference about the latent functions using Bayes' theorem. Nevertheless, exact
Bayesian inference in MGPCs is typically intractable and one has to rely on approximate methods.
Approximate inference can be implemented using Markov-chain Monte Carlo sampling, the Laplace
approximation or expectation propagation [2, 3, 4, 5].
A problem of MGPCs is that, typically, the assumed rule that relates the values of the latent functions
with the different classes does not consider the possibility of observing errors in the labels of the
data, or at most, only considers the possibility of observing errors near the decision boundaries
of the resulting classifier [1]. The consequence is that over-fitting can become a serious problem
when errors far from these boundaries are observed in practice. A notable exception is found in
the binary classification case when the labeling rule suggested in [6] is used. Such rule considers
the possibility of observing errors independently of their distance to the decision boundary [7, 8].
However, the generalization of this rule to the multi-class case is difficult. Existing generalizations
are in practice simplified so that the probability of observing errors in the labels is zero [3]. Labeling
errors in the context of MGPCs are often accounted for by considering that the latent functions of the
MGPC are contaminated with additive Gaussian noise [1]. Nevertheless, this approach has again the
disadvantage of considering only errors near the decision boundaries of the resulting classifier and is
expected to lead to over-fitting problems when errors are actually observed far from the boundaries.
Finally, some authors have replaced the underlying Gaussian processes of the MGPC with heavy-tailed processes [9]. These processes have marginal distributions with heavier tails than those of a
Gaussian distribution and are in consequence expected to be more robust to labeling errors far from
the decision boundaries.
In this paper we investigate a robust MGPC (RMGPC) that addresses labeling errors by introducing
a set of binary latent variables, one per data instance. These latent variables
indicate whether the assumed labeling rule is satisfied for the associated instances or not. If such
rule is not satisfied for a given instance, we consider that the corresponding label has been randomly
selected with uniform probability among the possible classes. This is used as a back-up mechanism
to explain data instances that are highly unlikely to stem from the assumed labeling rule. The
resulting likelihood function depends only on the total number of errors, and not on the distances
of these errors to the decision boundaries. Thus, RMGPC is expected to be fairly robust when
the data contain noise in the labels. In this model, expectation propagation (EP) can be used to
efficiently carry out approximate inference [10]. The cost of EP is $O(ln^3)$, where $n$ is the number
of training instances and l is the number of different classes. RMGPC is evaluated in four datasets
extracted from the UCI repository [11] and from other sources [12]. These experiments show the
beneficial properties of the proposed model in terms of prediction performance. When labeling noise
is introduced in the data, RMGPC outperforms other MGPC approaches based on considering latent
Gaussian noise or heavy-tailed processes. When there is no noise in the data, RMGPC performs
better or equivalent to these alternatives. Extra experiments also illustrate the utility of RMGPC to
identify data instances that are unlikely to stem from the assumed labeling rule.
The organization of the rest of the manuscript is as follows: Section 2 introduces the RMGPC model.
Section 3 describes how expectation propagation can be used for approximate Bayesian inference.
Then, Section 4 evaluates and compares the predictive performance of RMGPC. Finally, Section 5
summarizes the conclusions of the investigation.
2 Robust Multi-Class Gaussian Process Classification
Consider n training instances in the form of a collection of feature vectors X = {x1 , . . . , xn } with
associated labels $\mathbf{y} = \{y_1,\dots,y_n\}$, where $y_i \in \mathcal{C} = \{1,\dots,l\}$ and $l$ is the number of classes. We
follow [3] and assume that, in the noise free scenario, the predictive rule for $y_i$ given $\mathbf{x}_i$ is

$$y_i = \arg\max_k f_k(\mathbf{x}_i)\,, \qquad (1)$$

where $f_1,\dots,f_l$ are unknown latent functions that have to be estimated. The prediction rule given by (1) is unlikely to hold always in practice. For this reason, we introduce a set of binary latent variables $\mathbf{z} = \{z_1,\dots,z_n\}$, one per data instance, to indicate whether (1) is satisfied ($z_i = 0$) or not ($z_i = 1$). In this latter case, the pair $(\mathbf{x}_i, y_i)$ is considered to be an outlier and, instead of assuming that $y_i$ is generated by (1), we assume that $\mathbf{x}_i$ is assigned a random class sampled uniformly from $\mathcal{C}$. This is equivalent to assuming that $f_1,\dots,f_l$ have been contaminated with an infinite amount of noise and serves as a back-up mechanism to explain observations which are highly unlikely to originate from (1). The likelihood function for $\mathbf{f} = (f_1(\mathbf{x}_1),\dots,f_1(\mathbf{x}_n),f_2(\mathbf{x}_1),\dots,f_2(\mathbf{x}_n),\dots,f_l(\mathbf{x}_1),\dots,f_l(\mathbf{x}_n))^T$ given $\mathbf{y}$, $\mathbf{X}$ and $\mathbf{z}$ is

$$\mathcal{P}(\mathbf{y}|\mathbf{X},\mathbf{z},\mathbf{f}) = \prod_{i=1}^{n}\left[\prod_{k\neq y_i}\Theta\big(f_{y_i}(\mathbf{x}_i)-f_k(\mathbf{x}_i)\big)\right]^{1-z_i}\left[\frac{1}{l}\right]^{z_i}\,, \qquad (2)$$

where $\Theta(\cdot)$ is the Heaviside step function. In (2), the contribution to the likelihood of each instance $(\mathbf{x}_i,y_i)$ is a mixture of two terms: a first term equal to $\prod_{k\neq y_i}\Theta(f_{y_i}(\mathbf{x}_i)-f_k(\mathbf{x}_i))$ and a second term equal to $1/l$. The mixing coefficient is the prior probability of $z_i = 1$. Note that only the first term actually depends on the accuracy of $\mathbf{f}$. In particular, it takes value 1 when the corresponding instance is correctly classified using (1) and 0 otherwise. Thus, the likelihood function described in (2) considers only the total number of prediction errors made by $\mathbf{f}$ and not the distance of these errors to the decision boundary. The consequence is that (2) is expected to be robust when the observed data contain labeling errors far from the decision boundaries.
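To make the structure of (2) concrete, the following minimal sketch (ours, not taken from the paper; names are illustrative) evaluates the likelihood contribution of a single instance given latent function values:

    import numpy as np

    def instance_likelihood(f_i, y_i, z_i, l):
        # f_i : array of shape (l,) with f_1(x_i), ..., f_l(x_i)
        # y_i : observed label in {0, ..., l-1}; z_i : outlier indicator
        if z_i == 1:
            # Outlier: the label is assumed drawn uniformly among the l classes.
            return 1.0 / l
        # Non-outlier: rule (1) must hold, i.e. f_{y_i}(x_i) > f_k(x_i) for k != y_i.
        others = np.delete(f_i, y_i)
        return float(np.all(f_i[y_i] > others))

The first branch contributes 1/l regardless of f, which is what makes the likelihood insensitive to how far a mislabeled point lies from the decision boundary.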
We do not have any preference for a particular instance to be considered an outlier. Thus, z is set to
follow a priori a factorizing multivariate Bernoulli distribution:

$$\mathcal{P}(\mathbf{z}|\rho) = \text{Bern}(\mathbf{z}|\rho) = \prod_{i=1}^{n}\rho^{z_i}(1-\rho)^{1-z_i}\,, \qquad (3)$$

where $\rho$ is the prior fraction of training instances expected to be outliers. The prior for $\rho$ is set to be a conjugate beta distribution, namely

$$\mathcal{P}(\rho) = \text{Beta}(\rho|a_0,b_0) = \frac{\rho^{a_0-1}(1-\rho)^{b_0-1}}{B(a_0,b_0)}\,, \qquad (4)$$

where $B(\cdot,\cdot)$ is the beta function and $a_0$ and $b_0$ are free hyper-parameters. The values of $a_0$ and $b_0$ do not have a big impact on the final model provided that they are consistent with the prior belief that most of the observed data are labeled using (1) ($b_0 > a_0$) and that they are small such that (4) is not too constraining. We suggest $a_0 = 1$ and $b_0 = 9$.
As in [3], the prior for $f_1,\dots,f_l$ is set to be a product of Gaussian processes with means equal to $\mathbf{0}$ and covariance matrices $\mathbf{K}_1,\dots,\mathbf{K}_l$, as computed by $l$ covariance functions $c_1(\cdot,\cdot),\dots,c_l(\cdot,\cdot)$:

$$\mathcal{P}(\mathbf{f}) = \prod_{k=1}^{l}\mathcal{N}(\mathbf{f}_k|\mathbf{0},\mathbf{K}_k)\,, \qquad (5)$$

where $\mathcal{N}(\cdot|\boldsymbol{\mu},\boldsymbol{\Sigma})$ denotes a multivariate Gaussian density with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, $\mathbf{f}$ is defined as in (2) and $\mathbf{f}_k = (f_k(\mathbf{x}_1),\dots,f_k(\mathbf{x}_n))^T$, for $k = 1,\dots,l$.
2.1 Inference, Prediction and Outlier Identification
Given the observed data $\mathbf{X}$ and $\mathbf{y}$, we make inference about $\mathbf{f}$, $\mathbf{z}$ and $\rho$ using Bayes' theorem:

$$\mathcal{P}(\rho,\mathbf{z},\mathbf{f}|\mathbf{y},\mathbf{X}) = \frac{\mathcal{P}(\mathbf{y}|\mathbf{X},\mathbf{z},\mathbf{f})\,\mathcal{P}(\mathbf{z}|\rho)\,\mathcal{P}(\rho)\,\mathcal{P}(\mathbf{f})}{\mathcal{P}(\mathbf{y}|\mathbf{X})}\,, \qquad (6)$$

where $\mathcal{P}(\mathbf{y}|\mathbf{X})$ is the model evidence, a constant useful to perform model comparison under a Bayesian setting [13]. The posterior distribution and the likelihood function can be used to compute a predictive distribution for the label $y_\star \in \mathcal{C}$ associated to a new observation $\mathbf{x}_\star$:

$$\mathcal{P}(y_\star|\mathbf{x}_\star,\mathbf{y},\mathbf{X}) = \sum_{\mathbf{z},z_\star}\int \mathcal{P}(y_\star|\mathbf{x}_\star,z_\star,\mathbf{f}_\star)\,\mathcal{P}(z_\star|\rho)\,\mathcal{P}(\mathbf{f}_\star|\mathbf{f})\,\mathcal{P}(\rho,\mathbf{z},\mathbf{f}|\mathbf{y},\mathbf{X})\;d\mathbf{f}\,d\mathbf{f}_\star\,d\rho\,, \qquad (7)$$

where $\mathbf{f}_\star = (f_1(\mathbf{x}_\star),\dots,f_l(\mathbf{x}_\star))^T$, $\mathcal{P}(y_\star|\mathbf{x}_\star,z_\star,\mathbf{f}_\star) = \left[\prod_{k\neq y_\star}\Theta(f_{y_\star}(\mathbf{x}_\star)-f_k(\mathbf{x}_\star))\right]^{1-z_\star}(1/l)^{z_\star}$, $\mathcal{P}(z_\star|\rho) = \rho^{z_\star}(1-\rho)^{1-z_\star}$ and $\mathcal{P}(\mathbf{f}_\star|\mathbf{f})$ is a product of $l$ conditional Gaussians with zero mean and covariance matrices given by the covariance functions of $\mathbf{K}_1,\dots,\mathbf{K}_l$. The posterior for $\mathbf{z}$ is

$$\mathcal{P}(\mathbf{z}|\mathbf{y},\mathbf{X}) = \int \mathcal{P}(\rho,\mathbf{z},\mathbf{f}|\mathbf{y},\mathbf{X})\;d\mathbf{f}\,d\rho\,. \qquad (8)$$

This distribution is useful to compute the posterior probability that the $i$-th training instance is an outlier, i.e., $\mathcal{P}(z_i = 1|\mathbf{y},\mathbf{X})$. For this, we only have to marginalize (8) with respect to all the components of $\mathbf{z}$ except $z_i$. Unfortunately, the exact computation of (6), (7) and $\mathcal{P}(z_i = 1|\mathbf{y},\mathbf{X})$ is intractable for typical classification problems. Nevertheless, these expressions can be approximated using expectation propagation [10].
3 Expectation Propagation
The joint probability of $\mathbf{f}$, $\mathbf{z}$, $\rho$ and $\mathbf{y}$ given $\mathbf{X}$ can be written as the product of $l(n+1)+1$ factors:

$$\mathcal{P}(\mathbf{f},\mathbf{z},\rho,\mathbf{y}|\mathbf{X}) = \mathcal{P}(\mathbf{y}|\mathbf{X},\mathbf{z},\mathbf{f})\,\mathcal{P}(\mathbf{z}|\rho)\,\mathcal{P}(\rho)\,\mathcal{P}(\mathbf{f}) = \left[\prod_{i=1}^{n}\prod_{k\neq y_i}\phi_{ik}(\mathbf{f},\mathbf{z},\rho)\right]\left[\prod_{i=1}^{n}\psi_i(\mathbf{f},\mathbf{z},\rho)\right]\omega(\mathbf{f},\mathbf{z},\rho)\left[\prod_{k=1}^{l}\varphi_k(\mathbf{f},\mathbf{z},\rho)\right], \qquad (9)$$

where each factor has the following form:

$$\phi_{ik}(\mathbf{f},\mathbf{z},\rho) = \Theta\big(f_{y_i}(\mathbf{x}_i)-f_k(\mathbf{x}_i)\big)^{1-z_i}\big(l^{-\frac{1}{l-1}}\big)^{z_i}\,,\qquad \psi_i(\mathbf{f},\mathbf{z},\rho) = \rho^{z_i}(1-\rho)^{1-z_i}\,,$$
$$\omega(\mathbf{f},\mathbf{z},\rho) = \frac{\rho^{a_0-1}(1-\rho)^{b_0-1}}{B(a_0,b_0)}\,,\qquad \varphi_k(\mathbf{f},\mathbf{z},\rho) = \mathcal{N}(\mathbf{f}_k|\mathbf{0},\mathbf{K}_k)\,. \qquad (10)$$
Let $\Phi$ be the set that contains all these exact factors. Expectation propagation (EP) approximates each $\phi \in \Phi$ using a corresponding simpler factor $\tilde\phi$ such that

$$\left[\prod_{i=1}^{n}\prod_{k\neq y_i}\phi_{ik}\right]\left[\prod_{i=1}^{n}\psi_i\right]\omega\left[\prod_{k=1}^{l}\varphi_k\right] \approx \left[\prod_{i=1}^{n}\prod_{k\neq y_i}\tilde\phi_{ik}\right]\left[\prod_{i=1}^{n}\tilde\psi_i\right]\tilde\omega\left[\prod_{k=1}^{l}\tilde\varphi_k\right]. \qquad (11)$$

In (11) the dependence of the exact and the approximate factors on $\mathbf{f}$, $\mathbf{z}$ and $\rho$ has been removed to improve readability. The approximate factors $\tilde\phi$ are constrained to belong to the same family of exponential distributions, but they do not have to integrate to one. Once normalized with respect to $\mathbf{f}$, $\mathbf{z}$ and $\rho$, (9) becomes the exact posterior distribution (6). Similarly, the normalized product of the approximate factors becomes an approximation to the posterior distribution:

$$\mathcal{Q}(\mathbf{f},\mathbf{z},\rho) = \frac{1}{Z}\left[\prod_{i=1}^{n}\prod_{k\neq y_i}\tilde\phi_{ik}(\mathbf{f},\mathbf{z},\rho)\right]\left[\prod_{i=1}^{n}\tilde\psi_i(\mathbf{f},\mathbf{z},\rho)\right]\tilde\omega(\mathbf{f},\mathbf{z},\rho)\left[\prod_{k=1}^{l}\tilde\varphi_k(\mathbf{f},\mathbf{z},\rho)\right], \qquad (12)$$

where $Z$ is a normalization constant that approximates $\mathcal{P}(\mathbf{y}|\mathbf{X})$. Exponential distributions are closed
under product and division operations. Therefore, Q has the same form as the approximate factors
and Z can be readily computed. In practice, the form of Q is selected first, and the approximate
factors are then constrained to have the same form as $\mathcal{Q}$. For each approximate factor $\tilde\phi$ define $\mathcal{Q}^{\backslash\tilde\phi} \propto \mathcal{Q}/\tilde\phi$ and consider the corresponding exact factor $\phi$. EP iteratively updates each $\tilde\phi$, one by one, so that the Kullback-Leibler (KL) divergence between $\phi\,\mathcal{Q}^{\backslash\tilde\phi}$ and $\tilde\phi\,\mathcal{Q}^{\backslash\tilde\phi}$ is minimized. The EP algorithm involves the following steps:
1. Initialize all the approximate factors $\tilde\phi$ and the posterior approximation $\mathcal{Q}$ to be uniform.
2. Repeat until $\mathcal{Q}$ converges:
   (a) Select an approximate factor $\tilde\phi$ to refine and compute $\mathcal{Q}^{\backslash\tilde\phi} \propto \mathcal{Q}/\tilde\phi$.
   (b) Update the approximate factor $\tilde\phi$ so that $\mathrm{KL}(\phi\,\mathcal{Q}^{\backslash\tilde\phi}\,\|\,\tilde\phi\,\mathcal{Q}^{\backslash\tilde\phi})$ is minimized.
   (c) Update the posterior approximation $\mathcal{Q}$ to the normalized version of $\tilde\phi\,\mathcal{Q}^{\backslash\tilde\phi}$.
3. Evaluate $Z \approx \mathcal{P}(\mathbf{y}|\mathbf{X})$ as the integral of the product of all the approximate factors.
The optimization problem in step 2-(b) is convex with a single global optimum. The solution to this problem is found by matching sufficient statistics between $\phi\,\mathcal{Q}^{\backslash\tilde\phi}$ and $\tilde\phi\,\mathcal{Q}^{\backslash\tilde\phi}$. EP is not guaranteed to converge globally but extensive empirical evidence shows that most of the times it converges to a fixed point [10]. Non-convergence can be prevented by damping the EP updates [14]. Damping is a standard procedure and consists in setting $\tilde\phi = [\tilde\phi_{\text{new}}]^{\epsilon}\,[\tilde\phi_{\text{old}}]^{1-\epsilon}$ in step 2-(b), where $\tilde\phi_{\text{new}}$ is the updated factor and $\tilde\phi_{\text{old}}$ is the factor before the update. $\epsilon \in [0,1]$ is a parameter which controls the amount of damping. When $\epsilon = 1$, the standard EP update operation is recovered. When $\epsilon = 0$, no update of the approximate factors occurs. In our experiments $\epsilon = 0.5$ gives good results and EP seems to always converge to a stationary solution. EP has shown good overall performance when compared to other methods in the task of classification with binary Gaussian processes [15, 16].
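The loop below is a generic sketch of damped EP in natural-parameter space, under the assumption that all sites belong to the same exponential family (so that factor division and multiplication reduce to subtraction and addition of natural parameters); it is not the paper's implementation, and `refine` stands in for the moment-matching step 2-(b):

    import numpy as np

    def damped_ep(site_nat, refine, q_nat, epsilon=0.5, max_iter=200, tol=1e-6):
        # site_nat : (m, d) natural parameters of the m approximate factors
        # refine   : callable(i, cavity) -> undamped natural parameters of site i
        # q_nat    : (d,) natural parameters of the posterior approximation Q
        for _ in range(max_iter):
            max_change = 0.0
            for i in range(site_nat.shape[0]):
                cavity = q_nat - site_nat[i]          # cavity Q / phi_i
                proposed = refine(i, cavity)          # moment matching, step 2-(b)
                new = epsilon * proposed + (1.0 - epsilon) * site_nat[i]  # damping
                max_change = max(max_change, float(np.max(np.abs(new - site_nat[i]))))
                q_nat = cavity + new                  # Q = cavity * phi_i_new
                site_nat[i] = new
            if max_change < tol:
                break
        return q_nat, site_nat

The convex combination of old and new factors in natural-parameter space is exactly the damping rule above, here with the paper's choice ε = 0.5 as the default.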
3.1 The Posterior Approximation
The posterior distribution (6) is approximated by a distribution $\mathcal{Q}$ in the exponential family:

$$\mathcal{Q}(\mathbf{f},\mathbf{z},\rho) = \text{Bern}(\mathbf{z}|\mathbf{p})\,\text{Beta}(\rho|a,b)\prod_{k=1}^{l}\mathcal{N}(\mathbf{f}_k|\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)\,, \qquad (13)$$

where $\mathcal{N}(\cdot|\boldsymbol{\mu},\boldsymbol{\Sigma})$ is a multivariate Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$; $\text{Beta}(\cdot|a,b)$ is a beta distribution with parameters $a$ and $b$; and $\text{Bern}(\cdot|\mathbf{p})$ is a multivariate Bernoulli distribution with parameter vector $\mathbf{p}$. The parameters $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$ for $k = 1,\dots,l$ and $\mathbf{p}$, $a$ and $b$
are estimated by EP. Note that Q factorizes with respect to fk for k = 1, . . . , l. This makes the cost
of the EP algorithm linear in l, the total number of classes. More accurate approximations can be
obtained at a cubic cost in l by considering correlations among the fk . The choice of (13) also makes
all the required computations tractable and provides good results in Section 4.
The approximate factors must have the same functional form as $\mathcal{Q}$ but they need not be normalized. However, the exact factors $\phi_{ik}$ with $i = 1,\dots,n$ and $k \neq y_i$, corresponding to the likelihood (2), only depend on $f_k(\mathbf{x}_i)$, $f_{y_i}(\mathbf{x}_i)$ and $z_i$. Thus, the beta part of the corresponding approximate factors can be removed and the multivariate Gaussian distributions simplify to univariate Gaussians. Specifically, the approximate factors $\tilde\phi_{ik}$ with $i = 1,\dots,n$ and $k \neq y_i$ are:

$$\tilde\phi_{ik}(\mathbf{f},\mathbf{z},\rho) = \tilde{s}_{ik}\exp\left\{-\frac{1}{2}\left[\frac{\big(f_{y_i}(\mathbf{x}_i)-\tilde{\mu}_{ik}^{y_i}\big)^2}{\tilde{\nu}_{ik}^{y_i}} + \frac{\big(f_k(\mathbf{x}_i)-\tilde{\mu}_{ik}\big)^2}{\tilde{\nu}_{ik}}\right]\right\}\tilde{p}_{ik}^{z_i}(1-\tilde{p}_{ik})^{1-z_i}\,, \qquad (14)$$

where $\tilde{s}_{ik}$, $\tilde{p}_{ik}$, $\tilde{\mu}_{ik}$, $\tilde{\nu}_{ik}$, $\tilde{\mu}_{ik}^{y_i}$ and $\tilde{\nu}_{ik}^{y_i}$ are free parameters to be estimated by EP. Similarly, the exact factors $\psi_i$, with $i = 1,\dots,n$, corresponding to the prior for the latent variables $\mathbf{z}$, (3), only depend on $\rho$ and $z_i$. Thus, the Gaussian part of the corresponding approximate factors can be removed and the multivariate Bernoulli distribution simplifies to a univariate Bernoulli. The resulting factors are:

$$\tilde\psi_i(\mathbf{f},\mathbf{z},\rho) = \tilde{s}_i\,\rho^{\tilde{a}_i-1}(1-\rho)^{\tilde{b}_i-1}\,\tilde{p}_i^{z_i}(1-\tilde{p}_i)^{1-z_i}\,, \qquad (15)$$

for $i = 1,\dots,n$, where $\tilde{s}_i$, $\tilde{a}_i$, $\tilde{b}_i$ and $\tilde{p}_i$ are free parameters to be estimated by EP. The exact factor $\omega$ corresponding to the prior for $\rho$, (4), need not be approximated, i.e., $\tilde\omega = \omega$. The same applies to the exact factors $\varphi_k$, for $k = 1,\dots,l$, corresponding to the priors for $f_1,\dots,f_l$, (5). We set $\tilde\varphi_k = \varphi_k$ for $k = 1,\dots,l$. All these factors $\tilde\omega$ and $\tilde\varphi_k$, for $k = 1,\dots,l$, need not be refined by EP.
3.2 The EP Update Operations
The approximate factors $\tilde\phi_{ik}$, for $i = 1,\dots,n$ and $k \neq y_i$, corresponding to the likelihood, are refined in parallel, as in [17]. This notably simplifies the EP updates. In particular, for each $\tilde\phi_{ik}$ we compute $\mathcal{Q}^{\backslash\tilde\phi_{ik}}$ as in step 2-(a) of EP. Given each $\mathcal{Q}^{\backslash\tilde\phi_{ik}}$ and the exact factor $\phi_{ik}$, we update each $\tilde\phi_{ik}$. Then, $\mathcal{Q}^{\text{new}}$ is re-computed as the normalized product of all the approximate factors. Preliminary experiments indicate that parallel and sequential updates converge to the same solution. The remaining factors, i.e., $\tilde\psi_i$, for $i = 1,\dots,n$, are updated sequentially, as in standard EP. Further details about all these EP updates are found in the supplementary material¹. The cost of EP, assuming a constant number of iterations until convergence, is $O(ln^3)$. This is the cost of inverting $l$ matrices of size $n \times n$.
3.3 Model Evidence, Prediction and Outlier Identification
Once EP has converged, we can evaluate the approximation to the model evidence as the integral of the product of all the approximate terms. This gives the following result:

$$\log Z = B + \sum_{i=1}^{n}\log D_i + \sum_{k=1}^{l}\left[C_k - \frac{1}{2}\log|\mathbf{M}_k|\right] + \sum_{i=1}^{n}\left[\sum_{k\neq y_i}\log\tilde{s}_{ik} + \log\tilde{s}_i\right], \qquad (16)$$

where

$$D_i = \tilde{p}_i\prod_{k\neq y_i}\tilde{p}_{ik} + (1-\tilde{p}_i)\prod_{k\neq y_i}(1-\tilde{p}_{ik})\,,\qquad C_k = \boldsymbol{\mu}_k^T\boldsymbol{\Sigma}_k^{-1}\boldsymbol{\mu}_k - \sum_{i=1}^{n}\eta_{ik}\,,$$
$$\eta_{ik} = \begin{cases}\sum_{k'\neq y_i}\big(\tilde{\mu}_{ik'}^{y_i}\big)^2/\tilde{\nu}_{ik'}^{y_i} & \text{if } k = y_i\,,\\[2pt] \tilde{\mu}_{ik}^2/\tilde{\nu}_{ik} & \text{otherwise}\,,\end{cases}\qquad B = \log B(a,b) - \log B(a_0,b_0)\,, \qquad (17)$$

and $\mathbf{M}_k = \tilde{\boldsymbol{\Lambda}}_k\mathbf{K}_k + \mathbf{I}$, with $\tilde{\boldsymbol{\Lambda}}_k$ a diagonal matrix defined as $\tilde{\Lambda}_k^{ii} = \sum_{k'\neq y_i}(\tilde{\nu}_{ik'}^{y_i})^{-1}$, if $y_i = k$, and $\tilde{\Lambda}_k^{ii} = \tilde{\nu}_{ik}^{-1}$ otherwise. It is possible to compute the gradient of $\log Z$ with respect to $\xi_{kj}$, i.e., the $j$-th hyper-parameter of the $k$-th covariance function used to compute $\mathbf{K}_k$. Such gradient is useful to find the covariance functions $c_k(\cdot,\cdot)$, with $k = 1,\dots,l$, that maximize the model evidence. Specifically, one can show that, if EP has converged, the gradient of the free parameters of the approximate factors with respect to $\xi_{kj}$ is zero [18]. Thus, the gradient of $\log Z$ with respect to $\xi_{kj}$ is

$$\frac{\partial\log Z}{\partial\xi_{kj}} = -\frac{1}{2}\,\text{trace}\left(\mathbf{M}_k^{-1}\tilde{\boldsymbol{\Lambda}}_k\frac{\partial\mathbf{K}_k}{\partial\xi_{kj}}\right) + \frac{1}{2}(\boldsymbol{\upsilon}^k)^T(\mathbf{M}_k^{-1})^T\frac{\partial\mathbf{K}_k}{\partial\xi_{kj}}\mathbf{M}_k^{-1}\boldsymbol{\upsilon}^k\,, \qquad (18)$$

where $\boldsymbol{\upsilon}^k = (b_1^k, b_2^k, \dots, b_n^k)^T$ with $b_i^k = \sum_{k'\neq y_i}\tilde{\mu}_{ik'}^{y_i}/\tilde{\nu}_{ik'}^{y_i}$, if $k = y_i$, and $b_i^k = \tilde{\mu}_{ik}/\tilde{\nu}_{ik}$ otherwise.

¹The supplementary material is available online at http://arantxa.ii.uam.es/%7edhernan/RMGPC/.
The predictive distribution (7) can be approximated when the exact posterior is replaced by $\mathcal{Q}$:

$$\mathcal{P}(y_\star|\mathbf{x}_\star,\mathbf{y},\mathbf{X}) \approx \frac{\bar\rho}{l} + (1-\bar\rho)\int\mathcal{N}(u|m_{y_\star},v_{y_\star})\prod_{k\neq y_\star}\Phi\!\left(\frac{u-m_k}{\sqrt{v_k}}\right)du\,, \qquad (19)$$

where $\Phi(\cdot)$ is the cumulative probability function of a standard Gaussian distribution and

$$\bar\rho = a/(a+b)\,,\qquad m_k = (\mathbf{k}_\star^k)^T\mathbf{M}_k^{-1}\boldsymbol{\upsilon}^k\,,\qquad v_k = \kappa_\star^k - (\mathbf{k}_\star^k)^T\big(\mathbf{K}_k^{-1} - \mathbf{K}_k^{-1}\boldsymbol{\Sigma}_k\mathbf{K}_k^{-1}\big)\mathbf{k}_\star^k\,, \qquad (20)$$

for $k = 1,\dots,l$, with $\mathbf{k}_\star^k$ equal to the covariances between $\mathbf{x}_\star$ and $\mathbf{X}$, and with $\kappa_\star^k$ equal to the corresponding variance at $\mathbf{x}_\star$, as computed by $c_k(\cdot,\cdot)$. There is no closed form expression for the integral in (19). However, it can be easily approximated by a one-dimensional quadrature.
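As an illustration, a simple trapezoidal rule suffices for the one-dimensional integral in (19); the sketch below (ours, not the paper's code) takes the vectors of predictive means m_k and variances v_k of (20) as inputs:

    import numpy as np
    from scipy.stats import norm

    def predictive_prob(y_star, m, v, rho_bar, n_grid=200):
        # Approximates (19) on a grid spanning +-8 predictive standard deviations.
        k = y_star
        u = np.linspace(m[k] - 8 * np.sqrt(v[k]), m[k] + 8 * np.sqrt(v[k]), n_grid)
        integrand = norm.pdf(u, loc=m[k], scale=np.sqrt(v[k]))
        for j in range(len(m)):
            if j != k:
                integrand *= norm.cdf((u - m[j]) / np.sqrt(v[j]))
        integral = np.trapz(integrand, u)
        return rho_bar / len(m) + (1.0 - rho_bar) * integral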
The posterior (8) of $\mathbf{z}$ can be similarly approximated by marginalizing $\mathcal{Q}$ with respect to $\rho$ and $\mathbf{f}$:

$$\mathcal{P}(\mathbf{z}|\mathbf{y},\mathbf{X}) \approx \text{Bern}(\mathbf{z}|\mathbf{p}) = \prod_{i=1}^{n}p_i^{z_i}(1-p_i)^{1-z_i}\,, \qquad (21)$$

where $\mathbf{p} = (p_1,\dots,p_n)^T$. Each parameter $p_i$ of $\mathcal{Q}$, with $1 \leq i \leq n$, approximates $\mathcal{P}(z_i = 1|\mathbf{y},\mathbf{X})$, i.e., the posterior probability that the $i$-th training instance is an outlier. Thus, these parameters can be used to identify the data instances that are more likely to be outliers.
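In practice, identifying the suspicious instances then amounts to sorting the EP parameters p_i, as in this small sketch (ours):

    import numpy as np

    def rank_outliers(p):
        # p : array of posterior outlier probabilities p_i = Q(z_i = 1) from (21).
        p = np.asarray(p)
        order = np.argsort(-p)             # most suspicious instances first
        flagged = np.flatnonzero(p > 0.5)  # more likely outlier than not
        return order, flagged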
The cost of evaluating (16) and (18) is respectively $O(ln^3)$ and $O(n^3)$. The cost of evaluating (19) is $O(ln^2)$ since $\mathbf{K}_k^{-1}$, with $k = 1,\dots,l$, needs to be computed only once.
4 Experiments
The proposed Robust Multi-class Gaussian Process Classifier (RMGPC) is compared in several experiments with the Standard Multi-class Gaussian Process Classifier (SMGPC) suggested in [3]. SMGPC is a particular case of RMGPC which is obtained when $b_0 \to \infty$. This forces the prior distribution for $\rho$, (4), to be a delta centered at the origin, indicating that it is not possible to observe outliers. SMGPC explains data instances for which (1) is not satisfied in practice by considering Gaussian noise in the estimation of the functions $f_1,\dots,f_l$, which is the typical approach found in the literature [1]. RMGPC is also compared in these experiments with the Heavy-Tailed Process Classifier (HTPC) described in [9]. In HTPC, the prior for each latent function $f_k$, with $k = 1,\dots,l$, is a Gaussian process that has been non-linearly transformed to have marginals that follow hyperbolic secant distributions with scale parameter $b_k$. The hyperbolic secant distribution has heavier tails than the Gaussian distribution and is expected to perform better in the presence of outliers.
4.1 Classification of Noisy Data
We carry out experiments on four datasets extracted from the UCI repository [11] and from other sources [12] to evaluate the predictive performance of RMGPC, SMGPC and HTPC when different fractions of outliers are present in the data². These datasets are described in Table 1. All have multiple classes and a fairly small number $n$ of instances. We have selected problems with small $n$ because all the methods analyzed scale as $O(n^3)$. The data for each problem are randomly split 100 times into training and test sets containing respectively 2/3 and 1/3 of the data. Furthermore, the labels of $\eta \in \{0\%, 5\%, 10\%, 20\%\}$ of the training instances are selected uniformly at random from $\mathcal{C}$. The data are normalized to have zero mean and unit standard deviation on the training set and the average balanced class rate (BCR) of each method on the test set is reported for each value of $\eta$. The BCR of a method with prediction accuracy $a_k$ on those instances of class $k$ ($k = 1,\dots,l$) is defined as $\frac{1}{l}\sum_{k=1}^{l}a_k$. BCR is preferred to prediction accuracy in datasets with unbalanced class distributions, which is the case for the datasets displayed in Table 1.

²The R source code of RMGPC is available at http://arantxa.ii.uam.es/%7edhernan/RMGPC/.
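For reference, BCR can be computed as follows (a short sketch of ours, assuming every class appears in the test set):

    import numpy as np

    def balanced_class_rate(y_true, y_pred, l):
        # BCR = (1/l) * sum_k a_k, with a_k the accuracy on instances of class k.
        return np.mean([np.mean(y_pred[y_true == k] == k) for k in range(l)])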
Table 1: Characteristics of the datasets used in the experiments.

    Dataset        # Instances   # Attributes   # Classes   Source
    New-thyroid    215           5              3           UCI
    Wine           178           13             3           UCI
    Glass          214           9              6           UCI
    SVMguide2      319           20             3           LIBSVM
In our experiments, the different methods analyzed (RMGPC, SMGPC and HTPC) use the same covariance function for each latent function, i.e., $c_k(\cdot,\cdot) = c(\cdot,\cdot)$, for $k = 1,\dots,l$, where

$$c(\mathbf{x}_i,\mathbf{x}_j) = \exp\left\{-\frac{1}{2\gamma}(\mathbf{x}_i-\mathbf{x}_j)^T(\mathbf{x}_i-\mathbf{x}_j)\right\} \qquad (22)$$

is a standard Gaussian covariance function with length-scale parameter $\gamma$. Preliminary experiments on the datasets analyzed show no significant benefit from considering a different covariance function for each latent function. The diagonals of the covariance matrices $\mathbf{K}_k$, for $k = 1,\dots,l$, of SMGPC also include an extra additive term $\sigma_k^2$ to account for latent Gaussian noise with variance $\sigma_k^2$ around $f_k$ [1]. These extra terms are used by SMGPC to explain those instances that are unlikely to stem from (1). In both RMGPC and SMGPC the parameter $\gamma$ is found by maximizing (16) using a standard gradient ascent procedure. The same method is used for tuning the parameters $\sigma_k^2$ in SMGPC. In HTPC an approximation to the model evidence is maximized with respect to $\gamma$ and the scale parameters $b_k$, with $k = 1,\dots,l$, using also gradient ascent [9].
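A vectorised evaluation of the covariance function (22) can be written as follows (our sketch):

    import numpy as np

    def gaussian_kernel(X1, X2, gamma):
        # c(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * gamma)), see (22).
        sq = (np.sum(X1 ** 2, axis=1)[:, None]
              + np.sum(X2 ** 2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-sq / (2.0 * gamma))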
Table 2: Average BCR in % of each method for each problem, as a function of η.

                     η = 0%                                     η = 5%
    Dataset        RMGPC      SMGPC       HTPC         RMGPC      SMGPC       HTPC
    New-thyroid    94.2±4.5   93.9±4.4    90.0±5.5 ◁   92.7±4.9   90.7±5.8 ◁  89.7±6.1 ◁
    Wine           98.0±1.6   98.0±1.6    97.3±2.0 ◁   97.5±1.7   97.3±2.0    96.6±2.2 ◁
    Glass          65.2±7.7   60.6±8.6 ◁  59.5±8.0 ◁   63.5±8.0   58.9±8.0 ◁  57.9±7.5 ◁
    SVMguide2      76.3±4.1   74.6±4.2 ◁  72.8±4.1 ◁   75.6±4.3   73.8±4.4 ◁  71.9±4.5 ◁

                     η = 10%                                    η = 20%
    Dataset        RMGPC      SMGPC       HTPC         RMGPC      SMGPC       HTPC
    New-thyroid    92.3±5.4   89.0±5.5 ◁  88.3±6.6 ◁   89.5±6.0   85.9±7.4 ◁  85.7±7.7 ◁
    Wine           97.0±2.2   96.4±2.6    95.6±4.6 ◁   96.6±2.7   95.5±2.6 ◁  95.1±3.0 ◁
    Glass          63.9±7.9   58.0±7.4 ◁  55.7±7.7 ◁   59.7±8.3   55.5±7.3 ◁  52.8±7.8 ◁
    SVMguide2      74.9±4.4   72.8±4.7 ◁  71.5±4.7 ◁   72.8±5.1   71.4±5.0 ◁  67.5±5.6 ◁
Table 2 displays for each problem the average BCR of each method for the different values of η considered. When the performance of a method is significantly different from the performance of RMGPC, as estimated by a Wilcoxon rank test (p-value < 1%), the corresponding BCR is marked with the symbol ◁. The table shows that, when there is no noise in the labels (i.e., η = 0%), RMGPC performs similarly to SMGPC in New-thyroid and Wine, while it outperforms SMGPC in Glass and SVMguide2. As the level of noise increases, RMGPC is found to outperform SMGPC in all the problems investigated. HTPC typically performs worse than RMGPC and SMGPC independently of the value of η. This can be a consequence of HTPC using the Laplace approximation for approximate inference [9]. In particular, there is evidence indicating that the Laplace approximation performs worse than EP in the context of Gaussian process classifiers [15]. Extra experiments comparing RMGPC, SMGPC and HTPC under 3 different noise scenarios appear in the supplementary material. They further support the better performance of RMGPC in the presence of outliers in the data.
4.2 Outlier Identification
A second batch of experiments shows the utility of RMGPC to identify data instances that are likely to be outliers. These experiments use the Glass dataset from the previous section. Recall that for this dataset RMGPC performs significantly better than SMGPC for η = 0%, which suggests the presence
of outliers. After normalizing the Glass dataset, we run RMGPC on the whole data and estimate the
posterior probability that each instance is an outlier using (21). The hyper-parameters of RMGPC
are estimated as described in the previous section. Figure 1 shows for each instance (xi , yi ) of the
Glass dataset, with i = 1, . . . , n, the value of P(zi = 1|y, X). Note that most of the instances
are considered to be outliers with very low posterior probability. Nevertheless, there is a small set
of instances that have very high posterior probabilities. These instances are unlikely to stem from
(1) and are expected to be misclassified when placed on the test set. Consider the set of instances
that are more likely to be outliers than normal instances (i.e., instances 3, 36, 127, 137, 152, 158 and
188). Assume the experimental protocol of the previous section. Table 3 displays the fraction of
times that each of these instances is misclassified by RMGPC, SMGPC and HTPC when placed on
the test set. The posterior probability that each instance is an outlier, as estimated by RMGPC, is
also reported. The table shows that all the instances are typically misclassified by all the classifiers
investigated, which confirms the difficulty of obtaining accurate predictions for them in practice.
Figure 1: Posterior probability that each data instance from the Glass dataset is an outlier.
Table 3: Average test error in % of each method on each data instance that is more likely to be an
outlier. The probability that the instance is an outlier, as estimated by RMGPC, is also displayed.
                                       Glass Data Instances
                     3rd        36th       127th      137th      152nd      158th      188th
    Test   RMGPC     100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
    Error  SMGPC     100.0±0.0  92.0±5.5   100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
           HTPC      100.0±0.0  84.0±7.5   100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0  100.0±0.0
    P(z_i = 1|y, X)  0.69       0.96       0.82       0.51       0.86       0.83       1.00
5 Conclusions
We have introduced a Robust Multi-class Gaussian Process Classifier (RMGPC). RMGPC considers
only the number of errors made, and not the distance of such errors to the decision boundaries of
the classifier. This is achieved by introducing binary latent variables that indicate when a given
instance is considered to be an outlier (wrongly labeled instance) or not. RMGPC can also identify
the training instances that are more likely to be outliers. Exact Bayesian inference in RMGPC is
intractable for typical learning problems. Nevertheless, approximate inference can be efficiently
carried out using expectation propagation (EP). When EP is used, the training cost of RMGPC is
$O(ln^3)$, where $l$ is the number of classes and $n$ is the number of training instances. Experiments in
four multi-class classification problems show the benefits of RMGPC when labeling noise is injected
in the data. In this case, RMGPC performs better than other alternatives based on considering latent
Gaussian noise or noise which follows a distribution with heavy tails. When there is no noise in the
data, RMGPC performs better or equivalent to these alternatives. Our experiments also confirm the
utility of RMGPC to identify data instances that are difficult to classify accurately in practice. These
instances are typically misclassified by different predictors when included in the test set.
Acknowledgment
All experiments were run on the Center for Intensive Computation and Mass Storage (Louvain). All authors
acknowledge support from the Spanish MCyT (Project TIN2010-21575-C02-02).
References
[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2006.
[2] Christopher K. I. Williams and David Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[3] Hyun-Chul Kim and Zoubin Ghahramani. Bayesian Gaussian process classification with the EM-EP algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1948–1959, 2006.
[4] R. M. Neal. Regression and classification using Gaussian process priors. Bayesian Statistics, 6:475–501, 1999.
[5] Matthias Seeger and Michael I. Jordan. Sparse Gaussian process classification with multiple classes. Technical report, University of California, Berkeley, 2004.
[6] M. Opper and O. Winther. Gaussian process classification and SVM: Mean field results. In P. Bartlett, B. Schölkopf, D. Schuurmans, and A. Smola, editors, Advances in Large Margin Classifiers, pages 43–65. MIT Press, 2000.
[7] Daniel Hernández-Lobato and José Miguel Hernández-Lobato. Bayes machines for binary classification. Pattern Recognition Letters, 29(10):1466–1473, 2008.
[8] Hyun-Chul Kim and Zoubin Ghahramani. Outlier robust Gaussian process classification. In Structural, Syntactic, and Statistical Pattern Recognition, volume 5342 of Lecture Notes in Computer Science, pages 896–905. Springer Berlin / Heidelberg, 2008.
[9] Fabian L. Wauthier and Michael I. Jordan. Heavy-tailed process priors for selective shrinkage. In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 2406–2414, 2010.
[10] Thomas Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[11] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[12] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines, 2001.
[13] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006.
[14] T. Minka and J. Lafferty. Expectation-propagation for the generative aspect model. In Adnan Darwiche and Nir Friedman, editors, Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, pages 352–359. Morgan Kaufmann, 2002.
[15] Malte Kuss and Carl Edward Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[16] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, 2008.
[17] Marcel van Gerven, Botond Cseke, Robert Oostenveld, and Tom Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909, 2009.
[18] Matthias Seeger. Expectation propagation for exponential families. Technical report, Department of EECS, University of California, Berkeley, 2006.
Sparse Bayesian Multi-Task Learning
Cédric Archambeau, Shengbo Guo, Onno Zoeter
Xerox Research Centre Europe
{Cedric.Archambeau, Shengbo.Guo, Onno.Zoeter}@xrce.xerox.com
Abstract
We propose a new sparse Bayesian model for multi-task regression and classification. The model is able to capture correlations between tasks, or more specifically
a low-rank approximation of the covariance matrix, while being sparse in the features. We introduce a general family of group sparsity inducing priors based on
matrix-variate Gaussian scale mixtures. We show the amount of sparsity can be
learnt from the data by combining an approximate inference approach with type
II maximum likelihood estimation of the hyperparameters. Empirical evaluations
on data sets from biology and vision demonstrate the applicability of the model,
where on both regression and classification tasks it achieves competitive predictive
performance compared to previously proposed methods.
1 Introduction
Learning multiple related tasks is increasingly important in modern applications, ranging from the
prediction of test scores in social sciences and the classification of protein functions in systems
biology to the categorisation of scenes in computer vision and more recently to web search and
ranking. In many real life problems multiple related target variables need to be predicted from a
single set of input features. A problem that attracted considerable interest in recent years is to label
an image with (text) keywords based on the features extracted from that image [26]. In general, this
multi-label classification problem is challenging as the number of classes is equal to the vocabulary
size and thus typically very large. While capturing correlations between the labels seems appealing
it is in practice difficult as it rapidly leads to numerical problems when estimating the correlations.
A naive solution is to learn a model for each task separately and to make predictions using the
independent models. Of course, this approach is unsatisfactory as it does not take advantage of
all the information contained in the data. If the model is able to capture the task relatedness, it
is expected to have generalisation capabilities that are drastically increased. This motivated the
introduction of the multi-task learning paradigm that exploits the correlations amongst multiple
tasks by learning them simultaneously rather than individually [12]. More recently, the abundant
literature on multi-task learning demonstrated that performance indeed improves when the tasks are
related [6, 31, 2, 14, 13].
The multi-task learning problem encompasses two main settings. In the first one, for every input,
every task produces an output. If we restrict ourselves to multiple regression for the time being, the
most basic multi-task model would consider $P$ correlated tasks¹, the vector of covariates and targets being respectively denoted by $\mathbf{x}_n \in \mathbb{R}^D$ and $\mathbf{y}_n \in \mathbb{R}^P$:

$$\mathbf{y}_n = \mathbf{W}\mathbf{x}_n + \boldsymbol{\mu} + \boldsymbol{\epsilon}_n\,,\qquad \boldsymbol{\epsilon}_n \sim \mathcal{N}(\mathbf{0},\boldsymbol{\Sigma})\,, \qquad (1)$$

where $\mathbf{W} \in \mathbb{R}^{P\times D}$ is the matrix of weights, $\boldsymbol{\mu} \in \mathbb{R}^P$ the task offsets and $\boldsymbol{\epsilon}_n \in \mathbb{R}^P$ the vector of residual errors with covariance $\boldsymbol{\Sigma} \in \mathbb{R}^{P\times P}$. In this setting, the output of all tasks is observed for

¹While it is straightforward to show that the maximum likelihood estimate of $\mathbf{W}$ would be the same as when considering uncorrelated noise, imposing any prior on $\mathbf{W}$ would lead to a different solution.
every input. In the second setting, the goal is to learn from a set of observed tasks and to generalise
to a new task. This approach views the multi-task learning problem as a transfer learning problem,
where it is assumed that the various tasks belong in some sense to the same environment and share
common properties [23, 5]. In general only a single task output is observed for every input.
A recent trend in multi-task learning is to consider sparse solutions to facilitate the interpretation.
Many formulate the sparse multi-task learning problem in a (relaxed) convex optimization framework [5, 22, 35, 23]. If the regularization constant is chosen using cross-validation, regularizationbased approaches often overestimate the support [32] as they select more features than the set that
generated the data. Alternatively, one can adopt a Bayesian approach to sparsity in the context of
multi-task learning [29, 21]. The main advantage of the Bayesian formalism is that it enables us to
learn the degree of sparsity supported by the data and does not require the user to specify the type
of penalisation in advance.
In this paper, we adopt the first setting for multi-task learning, but we will consider a hierarchical
Bayesian model where the entries of W are correlated so that the residual errors are uncorrelated.
This is similar in spirit as the approach taken by [18], where tasks are related through a shared kernel
matrix. We will consider a matrix-variate prior to simultaneously model task correlations and group
sparsity in W. A matrix-variate Gaussian prior was used in [35] in a maximum likelihood setting to
capture task correlations and feature correlations. While we are also interested in task correlations,
we will consider matrix-variate Gaussian scale mixture priors centred at zero to drive entire blocks
of W to zero. The Bayesian group LASSO proposed in [30] is a special case. Group sparsity [34]
is especially useful in presence of categorical features, which are in general represented as groups
of "dummy" variables. Finally, we will allow the covariance to be of low-rank so that we can deal
with problems involving a very large number of tasks.
2 Matrix-variate Gaussian prior
Before starting our discussion of the model, we introduce the matrix-variate Gaussian as it plays a key role in our work. For a matrix $\mathbf{W} \in \mathbb{R}^{P\times D}$, the matrix-variate Gaussian density [16] with mean matrix $\mathbf{M} \in \mathbb{R}^{P\times D}$, row covariance $\boldsymbol{\Omega} \in \mathbb{R}^{D\times D}$ and column covariance $\boldsymbol{\Sigma} \in \mathbb{R}^{P\times P}$ is given by

$$\mathcal{N}(\mathbf{M},\boldsymbol{\Omega},\boldsymbol{\Sigma}) \propto e^{-\frac{1}{2}\text{vec}(\mathbf{W}-\mathbf{M})^\top(\boldsymbol{\Omega}\otimes\boldsymbol{\Sigma})^{-1}\text{vec}(\mathbf{W}-\mathbf{M})} = e^{-\frac{1}{2}\text{tr}\{\boldsymbol{\Omega}^{-1}(\mathbf{W}-\mathbf{M})^\top\boldsymbol{\Sigma}^{-1}(\mathbf{W}-\mathbf{M})\}}\,. \qquad (2)$$

If we let $\boldsymbol{\Sigma} = \mathbb{E}(\mathbf{W}-\mathbf{M})(\mathbf{W}-\mathbf{M})^\top$, then $\boldsymbol{\Omega} = \mathbb{E}(\mathbf{W}-\mathbf{M})^\top(\mathbf{W}-\mathbf{M})/c$ where $c$ ensures the density integrates to one. While this introduces a scale ambiguity between $\boldsymbol{\Omega}$ and $\boldsymbol{\Sigma}$ (easily removed by means of a prior), the use of a matrix-variate formulation is appealing as it makes explicit the structure of $\text{vec}(\mathbf{W})$, which is a vector formed by the concatenation of the columns of $\mathbf{W}$. This structure is reflected in its covariance matrix which is not of full rank, but is obtained by computing the Kronecker product of the row and the column covariance matrices.
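Numerically, the Kronecker structure means the PD x PD covariance of vec(W) never has to be formed; a minimal sketch (ours) of the unnormalised log-density in (2):

    import numpy as np

    def matvar_gauss_logpdf_unnorm(W, M, Omega, Sigma):
        # Returns -0.5 * tr(Omega^{-1} (W - M)^T Sigma^{-1} (W - M)).
        R = W - M
        A = np.linalg.solve(Sigma, R)     # Sigma^{-1} (W - M), a P x D solve
        B = np.linalg.solve(Omega, R.T)   # Omega^{-1} (W - M)^T, a D x P solve
        return -0.5 * float(np.trace(B @ A))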
It is interesting to compare a matrix-variate prior for W in (1) with the classical multi-level approach
to multiple regression from statistics (see e.g. [20]). In a standard multi-level model, the rows of W
are drawn iid from a multivariate Gaussian with mean m and covariance S, and m is further drawn
from zero mean Gaussian with covariance R. Integrating out m leads then to a Gaussian distributed
vec(W) with mean zero and with a covariance matrix that has the block diagonal elements equal to
S + R and all off-diagonal elements equal to R. Hence, the standard multi-level model assumes a
very different covariance structure than the one based on (2) and incidentally cannot learn correlated
and anti-correlated tasks simultaneously.
3 A general family of group sparsity inducing priors
We seek a solution for which the expectation of W is sparse, i.e., blocks of W are driven to zero. A
straightforward way to induce sparsity, and which would be equivalent to $\ell_1$-regularisation on blocks
of W, is to consider a Laplace prior (or double exponential). Although applicable in a penalised
likelihood framework, the Laplace prior would be computationally hard in a Bayesian setting as it
is not conjugate to the Gaussian likelihood. Hence, naively using this prior would prevent us from
computing the posterior in closed form, even in a variational setting. In order to circumvent this
problem, we take a hierarchical Bayesian approach.
2
Figure 1: Graphical model for sparse Bayesian multiple regression (when excluding the dashed
arrow) and sparse Bayesian multiple classification (when considering all arrows).
We assume that the marginal prior, or effective prior, on each block $\mathbf{W}_i \in \mathbb{R}^{P\times D_i}$ has the form of a matrix-variate Gaussian scale mixture, a generalisation of the multivariate Gaussian scale mixture [3]:

$$p(\mathbf{W}_i) = \int_0^\infty \mathcal{N}(\mathbf{0},\gamma_i^{-1}\boldsymbol{\Omega}_i,\boldsymbol{\Sigma})\,p(\gamma_i)\,d\gamma_i\,,\qquad \sum_{i=1}^{Q}D_i = D\,, \qquad (3)$$

where $\boldsymbol{\Omega}_i \in \mathbb{R}^{D_i\times D_i}$, $\boldsymbol{\Sigma} \in \mathbb{R}^{P\times P}$ and $\gamma_i > 0$ is the latent precision (i.e., inverse scale) associated to block $\mathbf{W}_i$.

A sparsity inducing prior for $\mathbf{W}_i$ can then be constructed by choosing a suitable hyperprior for $\gamma_i$. We impose a generalised inverse Gaussian prior (see Supplemental Appendix A for a formal definition with special cases) on the latent precision variables:

$$\gamma_i \sim \mathcal{N}^{-1}(\omega,\chi,\phi) = \frac{(\phi/\chi)^{\frac{\omega}{2}}}{2K_\omega(\sqrt{\chi\phi})}\,\gamma_i^{\omega-1}e^{-\frac{1}{2}(\chi\gamma_i^{-1}+\phi\gamma_i)}\,, \qquad (4)$$

where $K_\omega(\cdot)$ is the modified Bessel function of the second kind, $\omega$ is the index, $\sqrt{\chi\phi}$ defines the concentration of the distribution and $\sqrt{\chi/\phi}$ defines its scale. The effective prior is then a symmetric matrix-variate generalised hyperbolic distribution:

$$p(\mathbf{W}_i) \propto \frac{K_{\omega+\frac{PD_i}{2}}\left(\sqrt{\chi\left(\phi+\text{tr}\{\boldsymbol{\Omega}_i^{-1}\mathbf{W}_i^\top\boldsymbol{\Sigma}^{-1}\mathbf{W}_i\}\right)}\right)}{\left(\sqrt{\frac{\phi+\text{tr}\{\boldsymbol{\Omega}_i^{-1}\mathbf{W}_i^\top\boldsymbol{\Sigma}^{-1}\mathbf{W}_i\}}{\chi}}\right)^{\,\omega+\frac{PD_i}{2}}}\,. \qquad (5)$$

The marginal (5) has fat tails compared to the matrix-variate Gaussian. In particular, the family contains the matrix-variate Student-t, the matrix-variate Laplace and the matrix-variate Variance-Gamma as special cases. Several of the multivariate equivalents have recently been used as priors to induce sparsity in the Bayesian paradigm, both in the context of supervised [19, 11] and unsupervised linear Gaussian models [4].
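For completeness, the generalised inverse Gaussian in (4) can be expressed through SciPy's geninvgauss, whose density is proportional to x**(p-1) * exp(-b*(x + 1/x)/2); the rescaling below (our sketch) recovers the (ω, χ, φ) parametrisation:

    import numpy as np
    from scipy.stats import geninvgauss

    def gig(omega, chi, phi):
        # gamma ~ N^{-1}(omega, chi, phi), with density proportional to
        # gamma**(omega - 1) * exp(-0.5 * (chi / gamma + phi * gamma)).
        return geninvgauss(omega, np.sqrt(chi * phi), scale=np.sqrt(chi / phi))

    # Sanity check against the moments used later in the hyperparameter updates:
    # gig(1.0, 2.0, 3.0).mean() equals
    # sqrt(chi/phi) * K_{omega+1}(sqrt(chi*phi)) / K_omega(sqrt(chi*phi)).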
4 Sparse Bayesian multiple regression

We view $\{\mathbf{W}_i\}_{i=1}^Q$, $\{\boldsymbol{\Omega}_i\}_{i=1}^Q$ and $\{\gamma_i\}_{i=1}^Q$ as latent variables that need to be marginalised over. This is motivated by the fact that overfitting is avoided by integrating out all parameters whose cardinality scales with the model complexity, i.e., the number of dimensions and/or the number of tasks. We further introduce a latent projection matrix $\mathbf{V} \in \mathbb{R}^{P\times K}$ and a set of latent matrices $\{\mathbf{Z}_i\}_{i=1}^Q$ to make a low-rank approximation of the column covariance $\boldsymbol{\Sigma}$ as explained below. Note also that $\boldsymbol{\Omega}_i$ captures the correlations between the rows of group $i$.
The complete probabilistic model is given by

$$\mathbf{y}_n|\mathbf{W},\mathbf{x}_n \sim \mathcal{N}(\mathbf{W}\mathbf{x}_n,\sigma^2\mathbf{I}_P)\,,\qquad \mathbf{W}_i|\mathbf{V},\mathbf{Z}_i,\boldsymbol{\Omega}_i,\gamma_i \sim \mathcal{N}(\mathbf{V}\mathbf{Z}_i,\gamma_i^{-1}\boldsymbol{\Omega}_i,\tau\mathbf{I}_P)\,,$$
$$\mathbf{V} \sim \mathcal{N}(\mathbf{0},\mathbf{I}_P,\mathbf{I}_K)\,,\qquad \mathbf{Z}_i|\boldsymbol{\Omega}_i,\gamma_i \sim \mathcal{N}(\mathbf{0},\gamma_i^{-1}\boldsymbol{\Omega}_i,\mathbf{I}_K)\,, \qquad (6)$$
$$\boldsymbol{\Omega}_i \sim \mathcal{W}^{-1}(\nu,\lambda\mathbf{I}_{D_i})\,,\qquad \gamma_i \sim \mathcal{N}^{-1}(\omega,\chi,\phi)\,,$$

where $\sigma^2$ is the residual noise variance and $\tau$ is the residual variance associated to $\mathbf{W}$. The graphical model is shown in Fig. 1. We reparametrise the inverse Wishart distribution and define it as follows:

$$\boldsymbol{\Omega} \sim \mathcal{W}^{-1}(\nu,\boldsymbol{\Lambda}) = \frac{|\boldsymbol{\Lambda}|^{\frac{D+\nu-1}{2}}\,|\boldsymbol{\Omega}^{-1}|^{\frac{2D+\nu}{2}}}{2^{\frac{(D+\nu-1)D}{2}}\,\Gamma_D\!\left(\frac{D+\nu-1}{2}\right)}\,e^{-\frac{1}{2}\text{tr}\{\boldsymbol{\Omega}^{-1}\boldsymbol{\Lambda}\}}\,,\qquad \nu > 0\,,$$

where $\Gamma_p(z) = \pi^{\frac{p(p-1)}{4}}\prod_{j=1}^{p}\Gamma\!\left(z+\frac{1-j}{2}\right)$.
Using the compact notations $\mathbf{W} = (\mathbf{W}_1,\dots,\mathbf{W}_Q)$, $\mathbf{Z} = (\mathbf{Z}_1,\dots,\mathbf{Z}_Q)$, $\boldsymbol{\Omega} = \text{diag}\{\boldsymbol{\Omega}_1,\dots,\boldsymbol{\Omega}_Q\}$ and $\boldsymbol{\Gamma} = \text{diag}\{\gamma_1,\dots,\gamma_1,\dots,\gamma_Q,\dots,\gamma_Q\}$ (each $\gamma_i$ being repeated $D_i$ times), we can compute the following marginal:

$$p(\mathbf{W}|\mathbf{V},\boldsymbol{\Gamma}) \propto \iint \mathcal{N}(\mathbf{V}\mathbf{Z},\boldsymbol{\Gamma}^{-1}\boldsymbol{\Omega},\tau\mathbf{I}_P)\,\mathcal{N}(\mathbf{0},\boldsymbol{\Gamma}^{-1}\boldsymbol{\Omega},\mathbf{I}_K)\,p(\boldsymbol{\Omega})\,d\mathbf{Z}\,d\boldsymbol{\Omega} = \int \mathcal{N}(\mathbf{0},\boldsymbol{\Gamma}^{-1}\boldsymbol{\Omega},\mathbf{V}\mathbf{V}^\top+\tau\mathbf{I}_P)\,p(\boldsymbol{\Omega})\,d\boldsymbol{\Omega}\,.$$

Thus, the probabilistic model induces sparsity in the blocks of $\mathbf{W}$, while taking correlations between the task parameters into account through the random matrix $\boldsymbol{\Sigma} \approx \mathbf{V}\mathbf{V}^\top + \tau\mathbf{I}_P$. This is especially useful when there is a very large number of tasks.
The latent variables $\mathcal{Z} = \{\mathbf{W},\mathbf{V},\mathbf{Z},\boldsymbol{\Omega},\boldsymbol{\Gamma}\}$ are inferred by variational EM [27], while the hyperparameters $\theta = \{\sigma^2,\tau,\nu,\lambda,\omega,\chi,\phi\}$ are estimated by type II ML [8, 25]. Using variational inference is motivated by the fact that deterministic approximate inference schemes converge faster than traditional sampling methods such as Markov chain Monte Carlo (MCMC), and their convergence can easily be monitored. The choice of learning the hyperparameters by type II ML is preferred to the option of placing vague priors over them, although this would also be a valid option.

In order to find a tractable solution, we assume that the variational posterior $q(\mathcal{Z}) = q(\mathbf{W},\mathbf{V},\mathbf{Z},\boldsymbol{\Omega},\boldsymbol{\Gamma})$ factorises as $q(\mathbf{W})q(\mathbf{V})q(\mathbf{Z})q(\boldsymbol{\Omega})q(\boldsymbol{\Gamma})$ given the data $\mathcal{D} = \{(\mathbf{y}_n,\mathbf{x}_n)\}_{n=1}^N$ [7]. The variational EM combined with the type II ML estimation of the hyperparameters cycles through the following two steps until convergence:

1. Update of the approximate posterior of the latent variables and parameters for fixed hyperparameters. The update for $\mathbf{W}$ is given by

$$q(\mathbf{W}) \propto e^{\langle\ln p(\mathcal{D},\mathcal{Z}|\theta)\rangle_{q(\mathcal{Z}/\mathbf{W})}}\,, \qquad (7)$$

where $\mathcal{Z}/\mathbf{W}$ is the set $\mathcal{Z}$ with $\mathbf{W}$ removed and $\langle\cdot\rangle_q$ denotes the expectation with respect to $q$. The posteriors of the other latent matrices have the same form.

2. Update of the hyperparameters for fixed variational posteriors:

$$\theta \leftarrow \arg\max_\theta\,\langle\ln p(\mathcal{D},\mathcal{Z}|\theta)\rangle_{q(\mathcal{Z})}\,. \qquad (8)$$
Variational EM converges to a local maximum of the log-marginal likelihood. The convergence can
be checked by monitoring the variational lower bound, which monotonically increases during the optimisation. Next, we give the explicit expression of the variational EM steps and the updates for the
hyperparameters, whereas we show that of the variational bound in the Supplemental Appendix D.
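Schematically (our sketch; the method names are illustrative placeholders for the updates of Sections 4.1 and 4.2), the optimisation alternates as follows:

    def variational_em(data, q, theta, max_iter=100, tol=1e-4):
        # q bundles the factorised posteriors q(W), q(V), q(Z), q(Omega), q(Gamma);
        # theta holds the hyperparameters {sigma^2, tau, nu, lambda, omega, chi, phi}.
        bound_old = -float("inf")
        for _ in range(max_iter):
            q.update_all(data, theta)           # E step: coordinate updates (7)
            theta = q.optimise_hyper(data)      # M step: type II ML updates (8)
            bound = q.lower_bound(data, theta)  # monotonically increasing
            if bound - bound_old < tol:
                break
            bound_old = bound
        return q, theta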
4.1 Variational E step (mean field)
Assuming a factorised posterior enables us to compute it in closed form as the priors are each conjugate to the Gaussian likelihood. The approximate posterior is given by

$$q(\mathcal{Z}) = \mathcal{N}(\mathbf{M}_W,\boldsymbol{\Omega}_W,\mathbf{S}_W)\,\mathcal{N}(\mathbf{M}_V,\boldsymbol{\Omega}_V,\mathbf{S}_V)\,\mathcal{N}(\mathbf{M}_Z,\boldsymbol{\Omega}_Z,\mathbf{S}_Z)\prod_i \mathcal{W}^{-1}(\nu_i,\boldsymbol{\Lambda}_i)\,\mathcal{N}^{-1}(\omega_i,\chi_i,\phi_i)\,. \qquad (9)$$

The expressions of the posterior parameters are given in Supplemental Appendix C. The computational bottleneck resides in the inversion of $\boldsymbol{\Omega}_W$, which is $O(D^3)$ per iteration. When $D > N$, we can use the Woodbury identity for a matrix inversion of complexity $O(N^3)$ per iteration.
4.2 Hyperparameter updates
To learn the degree of sparsity from data we optimise the hyperparameters. There are no closed form updates for $\{\omega,\chi,\phi\}$. Hence, we need to find the roots of the following expressions, e.g., by line search:

$$\omega:\quad Q\ln\sqrt{\frac{\phi}{\chi}} - Q\,\frac{d\ln K_\omega(\sqrt{\chi\phi})}{d\omega} + \sum_i\langle\ln\gamma_i\rangle = 0\,, \qquad (10)$$

$$\chi:\quad \frac{Q\omega}{\chi} - \frac{Q}{2}\sqrt{\frac{\phi}{\chi}}\,R_\omega(\sqrt{\chi\phi}) + \frac{1}{2}\sum_i\langle\gamma_i^{-1}\rangle = 0\,, \qquad (11)$$

$$\phi:\quad Q\sqrt{\frac{\chi}{\phi}}\,R_\omega(\sqrt{\chi\phi}) - \sum_i\langle\gamma_i\rangle = 0\,, \qquad (12)$$

where $R_\omega(z) \equiv K_{\omega+1}(z)/K_\omega(z)$. Unfortunately, the derivative in the first equation needs to be estimated numerically. When considering special cases of the mixing density such as the Gamma or the inverse Gamma, simplified updates are obtained and no numerical differentiation is required.

Due to space constraints, we omit the type II ML updates for the other hyperparameters.
4.3 Predictions
Predictions are performed by Bayesian averaging. The predictive distribution is approximated as follows: $p(\mathbf{y}_\star|\mathbf{x}_\star) \approx \int p(\mathbf{y}_\star|\mathbf{W},\mathbf{x}_\star)\,q(\mathbf{W})\,d\mathbf{W} = \mathcal{N}(\mathbf{M}_W\mathbf{x}_\star,(\sigma^2+\mathbf{x}_\star^\top\boldsymbol{\Omega}_W\mathbf{x}_\star)\mathbf{I}_P)$.
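In code, the predictive moments are immediate once M_W and Omega_W are available (our sketch):

    import numpy as np

    def predict_regression(x_star, M_W, Omega_W, sigma2):
        # y_star ~ N(M_W x_star, (sigma^2 + x_star^T Omega_W x_star) I_P).
        mean = M_W @ x_star
        var = sigma2 + float(x_star @ Omega_W @ x_star)
        return mean, var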
5 Sparse Bayesian multiple classification
We restrict ourselves to multiple binary classifiers and consider a probit model in which the likelihood is derived from the Gaussian cumulative density. A probit model is equivalent to a Gaussian noise and a step function likelihood [1]. Let $\mathbf{t}_n \in \mathbb{R}^P$ be the class label vectors, with $t_{np} \in \{-1,+1\}$ for all $n$. The likelihood is replaced by

$$\mathbf{y}_n|\mathbf{W},\mathbf{x}_n \sim \mathcal{N}(\mathbf{W}\mathbf{x}_n,\sigma^2\mathbf{I}_P)\,,\qquad \mathbf{t}_n|\mathbf{y}_n \sim \prod_p \mathbb{I}(t_{np}y_{np})\,, \qquad (13)$$

where $\mathbb{I}(z) = 1$ for $z > 0$ and $0$ otherwise. The rest of the model is as before; we will set $\sigma = 1$.

The latent variables to infer are now $\mathbf{Y}$ and $\mathcal{Z}$. Again, we assume a factorised posterior. We further assume the variational posterior $q(\mathbf{Y})$ is a product of truncated Gaussians (see Supplemental Appendix B):

$$q(\mathbf{Y}) \propto \prod_n\prod_p \mathbb{I}(t_{np}y_{np})\,\mathcal{N}(\nu_{np},1) = \prod_{t_{np}=+1}\mathcal{N}_+(\nu_{np},1)\prod_{t_{np}=-1}\mathcal{N}_-(\nu_{np},1)\,, \qquad (14)$$
where $\nu_{np}$ is the $p$th entry of $\boldsymbol{\nu}_n = \mathbf{M}_W\mathbf{x}_n$. The other variational and hyperparameter updates are unchanged, except that $\mathbf{Y}$ is replaced by its posterior expectation $\bar{\mathbf{Y}}$, whose elements are the means of the truncated Gaussians in (14).
5.1 Bayesian classification
In Bayesian classification the goal is to predict the label with highest posterior probability. Based on the variational approximation we propose the following classification rule:

$$\hat{\mathbf{t}}_\star = \arg\max_{\mathbf{t}_\star} P(\mathbf{t}_\star|\mathbf{T}) \approx \arg\max_{\mathbf{t}_\star}\prod_p\int \mathcal{N}_{t_{\star p}}(\nu_{\star p},1)\,dy_{\star p} = \arg\max_{\mathbf{t}_\star}\prod_p \Phi(t_{\star p}\nu_{\star p})\,, \qquad (15)$$

where $\boldsymbol{\nu}_\star = \mathbf{M}_W\mathbf{x}_\star$. Hence, to decide whether the label $t_{\star p}$ is $-1$ or $+1$ it is sufficient to use the sign of $\nu_{\star p}$ as the decision rule. However, the probability $P(t_{\star p}|\mathbf{T})$ tells us also how confident we are in the prediction we make.
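Both the decision and its confidence follow directly from (15), as in this sketch (ours):

    import numpy as np
    from scipy.stats import norm

    def classify(x_star, M_W):
        nu = M_W @ x_star                   # nu_star = M_W x_star
        t_hat = np.where(nu > 0, 1, -1)     # decision: the sign of nu_p
        confidence = norm.cdf(t_hat * nu)   # P(t_p | T) = Phi(t_p * nu_p)
        return t_hat, confidence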
Figure 2: Results for the ground truth data set. Top left: prediction accuracy on a test set as a function of training set size. Top right: estimated and true $\boldsymbol{\Sigma}$ (top), true underlying sparsity pattern (middle) and inverse of the posterior mean of $\{\gamma_i\}_i$ showing that the sparsity is correctly captured (bottom). Bottom diagrams: Hinton diagram of the true $\mathbf{W}$ (bottom), the ordinary least squares learnt $\mathbf{W}$ (middle) and the sparse Bayesian multi-task learnt $\mathbf{W}$ (top). The ordinary least squares learnt $\mathbf{W}$ contains many non-zero elements.
6 A model study with ground truth data
To understand the properties of the model we study a regression problem with known parameters. Figure 2 shows the results for 5 tasks and 50 features. Matrix $\mathbf{W}$ is drawn using $\mathbf{V} = [\sqrt{.9}\;\;\sqrt{.9}\;\;\sqrt{.9}\;\;-\!\sqrt{.9}\;\;-\!\sqrt{.9}]^\top$ and $\tau = 0.1$, i.e. the covariance for $\text{vec}(\mathbf{W})$ has 1's on the diagonal and $\pm.9$ on the off-diagonal elements. The first three tasks and the last two tasks are positively correlated. There is a negative correlation between the two groups. The active features are randomly selected among the 50 candidate features. We evaluate the models with $10^4$ test points and repeated the experiment 25 times. Gaussian noise was added to the targets ($\sigma = 0.1$).
It can be observed that the proposed model performs better and converges faster to the optimal
performance when the data set size increases, compared to ordinary least squares. Note also that both $\boldsymbol{\Sigma}$ and the sparsity pattern are correctly identified.
Table 1: Performance (with standard deviation) of classification tasks on Yeast and Scene data sets in
terms of accuracy and AUC. LR: Bayesian logistic regression; Pooling: pooling all data and learning
a single model; Xue: the matrix stick-breaking process based multi-task learning model proposed in
[33]. K = 10 for the proposed models (i.e., Laplace, Student-t, and ARD). Note that the first five
rows for Yeast and Scene data sets are reported in [29]. The reported performances are averaged
over five randomized repetitions.
                     Yeast                             Scene
    Model          Accuracy        AUC                Accuracy        AUC
    LR             0.5047          0.5049             0.7362          0.6153
    Pool           0.4983          0.5112             0.7862          0.5433
    Xue [33]       0.5106          0.5105             0.7765          0.5603
    Model-1 [29]   0.5212          0.5244             0.7756          0.6325
    Model-2 [29]   0.5424          0.5406             0.7911          0.6416
    Chen [15]      NA              0.7987±0.0044      NA              0.9160±0.0038
    Laplace        0.7987±0.0017   0.8349±0.0020      0.8892±0.0038   0.9188±0.0041
    Student        0.7988±0.0017   0.8349±0.0019      0.8897±0.0034   0.9183±0.0041
    ARD            0.7987±0.0020   0.8349±0.0020      0.8896±0.0044   0.9187±0.0042

7 Multi-task classification experiments
In this section, we evaluate the proposed model on two data sets: Yeast [17] and Scene [9], which
have been widely used as testbeds to evaluate multi-task learning approaches [28, 29, 15]. To demonstrate the superiority of the proposed models, we conduct systematic empirical evaluations including
the comparisons with (1) Bayesian logistic regression (BLR) that learns tasks separately, (2) a pooling model that pools data together and learns a single model collectively, and (3) the state-of-the-art
multi-task learning methods proposed in [33, 29, 15].
We follow the experimental setting as introduced in [29] for fair comparisons, and omit the detailed
setting due to space limitation. We evaluate all methods for the classification task using two metrics:
(1) overall accuracy at a threshold of zero and (2) the average area under the curve (AUC). Results
on the Yeast and Scence data sets using these two metrics are reported in Table 7. It is interesting
to note that even for small values of K (fewer parameters in the column covariance) the proposed
model achieves good results. We also study how the performances vary with different K on a tuning
set, and observe that there are no significant differences on performances using different K (not
shown in the paper). The results in Table 7 were produced with K = 10.
The proposed models (Laplace, Student-t, ARD) significantly outperform the Bayesian logistic regression approach that learns each task separately. This observation agrees with the previous work
[6, 31, 2, 5] demonstrating that the multi-task approach is beneficial over the naive approach of
learning tasks separately. For the Yeast data set, the proposed models are significantly better than
?Xue? [33], Model-1 and Model-2 [29], and the best performing model in [15]. For the Scene data
set, our models and the model in [15] show comparable results.
The advantage of using hierarchical priors is particularly evident in a low data regime. To study
the impact of training set size on performance, we report the accuracy and AUC as functions of the
training set sizes in Figure 3. For this experiment, we use a single test set of size 1196, which replicates the experimental setup in [29]. Figure 3 shows that the proposed Bayesian methods perform
well overall, but that the performances are not significantly impacted when the number of data is
small. Similar results were obtained for the Yeast data set.
8 Conclusion
In this work we proposed a Bayesian multi-task learning model able to capture correlations between
tasks and to learn the sparsity pattern of the data features simultaneously. We further proposed a
low-rank approximation of the covariance to handle a very large number of tasks. Combining low-rank and sparsity at the same time has been a long-standing open issue in machine learning. Here,
we are able to achieve this goal by exploiting the special structure of the parameter set. Hence, the proposed model combines sparsity and low-rank in a different manner than in [10], where a sum of a sparse and low-rank matrix is considered.
[Figure 3: two panels for the Scene data set (K = 10), plotting Accuracy and AUC against the number of training samples (400 to 1000) for ARD, Student-t, Laplace, Model-2, Model-1, and BLR.]
Figure 3: Model comparisons in terms of classification accuracy and AUC on the Scene data set
for K = 10. Error bars represent 3 times the standard deviation. Results for Bayesian logistic
regression (BLR), Model-1 and Model-2 are obtained based on the measurements using a ruler from
Figure 2 in [29], for which no error bars are given.
By considering a matrix-variate Gaussian scale mixture prior we extended the Bayesian group
LASSO to a more general family of group sparsity inducing priors. This suggests the extension
of current Bayesian methodology to learn structured sparsity from data in the future. A possible
extension is to consider the graphical LASSO to learn sparse precision matrices Σ⁻¹ and Ω⁻¹. A
similar approach was explored in [35].
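As a small illustration of the scale-mixture idea (following the classical result of Andrews and Mallows [3]; this is our sketch, not code from the paper): a Laplace prior on a weight arises from a Gaussian whose variance is exponentially distributed.

import numpy as np

rng = np.random.default_rng(0)
n, b = 200_000, 1.0                       # b: Laplace scale parameter

# Scale mixture: w | tau ~ N(0, tau), tau ~ Exp with mean 2 b^2
tau = rng.exponential(scale=2 * b**2, size=n)
w = rng.normal(0.0, np.sqrt(tau))

# Direct Laplace samples for comparison
w_direct = rng.laplace(0.0, b, size=n)
print(np.var(w), np.var(w_direct))        # both close to 2 b^2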
References
[1] J. H. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. J.A.S.A., 88(422):669–679, 1993.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6:1817–1853, 2005.
[3] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society B, 36(1):99–102, 1974.
[4] C. Archambeau and F. Bach. Sparse probabilistic projections. In NIPS. MIT Press, 2008.
[5] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73:243–272, 2008.
[6] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, 4:83–99, 2003.
[7] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[8] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985.
[9] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
[10] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58:1–37, June 2011.
[11] F. Caron and A. Doucet. Sparse Bayesian nonparametric regression. In ICML, pages 88–95. ACM, 2008.
[12] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[13] O. Chapelle, P. Shivaswamy, S. Vadrevu, K. Weinberger, Y. Zhang, and B. Tseng. Multi-task learning for boosting with application to web search ranking. In SIGKDD, pages 1189–1198, 2010.
[14] R. Chari, W. W. Lockwood, B. P. Coe, A. Chu, D. Macey, A. Thomson, J. J. Davies, C. MacAulay, and W. L. Lam. Sigma: A system for integrative genomic microarray analysis of cancer genomes. BMC Genomics, 7:324, 2006.
[15] J. Chen, J. Liu, and J. Ye. Learning incoherent sparse and low-rank patterns from multiple tasks. In SIGKDD, pages 1179–1188. ACM, 2010.
[16] A. P. Dawid. Some matrix-variate distribution theory: Notational considerations and a Bayesian application. Biometrika, 68(1):265–274, 1981.
[17] A. Elisseeff and J. Weston. A kernel method for multi-labelled classification. In NIPS. 2002.
[18] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. JMLR, 6:615–637, 2005.
[19] M. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on PAMI, 25:1150–1159, 2003.
[20] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2007.
[21] D. Hernández-Lobato, J. M. Hernández-Lobato, T. Helleputte, and P. Dupont. Expectation propagation for Bayesian multi-task feature selection. In ECML-PKDD, pages 522–537, 2010.
[22] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, pages 745–752. 2009.
[23] T. Jebara. Multitask sparsity via maximum entropy discrimination. JMLR, 12:75–110, 2011.
[24] B. Jørgensen. Statistical Properties of the Generalized Inverse Gaussian Distribution. Springer, 1982.
[25] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[26] A. Makadia, V. Pavlovic, and S. Kumar. A new baseline for image annotation. In ECCV, 2008.
[27] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. MIT Press, 1998.
[28] P. Rai and H. Daumé III. Multi-label prediction via sparse infinite CCA. In NIPS, pages 1518–1526. 2009.
[29] P. Rai and H. Daumé III. Infinite predictor subspace models for multitask learning. In AISTATS, pages 613–620, 2010.
[30] S. Raman, T. J. Fuchs, P. J. Wild, E. Dahl, and V. Roth. The Bayesian group-Lasso for analyzing contingency tables. In ICML, pages 881–888, 2009.
[31] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In CVPR, pages 762–769. IEEE Computer Society, 2004.
[32] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
[33] Y. Xue, D. Dunson, and L. Carin. The matrix stick-breaking process for flexible multi-task learning. In ICML, pages 1063–1070, 2007.
[34] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society B, 68(1):49–67, 2006.
[35] Y. Zhang and J. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In NIPS, pages 2550–2558. 2010.
3,582 | 4,243 | Environmental statistics and the trade-off between
model-based and TD learning in humans
Dylan A. Simon
Department of Psychology
New York University
New York, NY 10003
[email protected]
Nathaniel D. Daw
Center for Neural Science and Department of Psychology
New York University
New York, NY 10003
[email protected]
Abstract
There is much evidence that humans and other animals utilize a combination of
model-based and model-free RL methods. Although it has been proposed that
these systems may dominate according to their relative statistical efficiency in
different circumstances, there is little specific evidence ? especially in humans
? as to the details of this trade-off. Accordingly, we examine the relative performance of different RL approaches under situations in which the statistics of reward
are differentially noisy and volatile. Using theory and simulation, we show that
model-free TD learning is relatively most disadvantaged in cases of high volatility
and low noise. We present data from a decision-making experiment manipulating
these parameters, showing that humans shift learning strategies in accord with
these predictions. The statistical circumstances favoring model-based RL are also
those that promote a high learning rate, which helps explain why, in psychology,
the distinction between these strategies is traditionally conceived in terms of rule-based vs. incremental learning.
1 Introduction
There are many suggestions that humans and other animals employ multiple approaches to learned
decision making [1]. Precisely delineating these approaches is key to understanding human decision systems, especially since many problems of behavioral control such as addiction have been attributed to partial failures of one component [2]. In particular, understanding the trade-offs between
approaches in order to bring them under experimental control is critical for isolating their unique
contributions and ultimately correcting maladaptive behavior. Psychologists primarily distinguish
between declarative rule learning and more incremental learning of stimulus-response (S–R) habits
across a broad range of tasks [3, 4]. They have shown that large problem spaces, probabilistic feedback (as in the weather prediction task), and difficult to verbalize rules (as in information integration
tasks from category learning) all seem to promote the use of a habit learning system [5, 6, 7, 8, 9].
The alternative strategies, which these same manipulations disfavor, are often described as imputing (inherently deterministic) "rules" or "maps", and are potentially supported by dissociable neural
systems also involved in memory for one-shot episodes [10].
Neuroscientists studying rats have focused on more specific tasks that test whether animals are sensitive to changes in the outcome contingency or value of actions. For instance, under different task
circumstances or following different brain lesions, rats are more or less willing to continue working
for a devalued food reward [11]. In terms of reinforcement learning (RL) theories, such evidence
has been proposed to reflect a distinction between parallel systems for model-based vs. model-free
RL [12, 13]: a world model permits updating a policy following a change in food value, while
model-free methods preclude this.
1
Intuitively, S–R habits correspond well to the policies learned by TD methods such as actor/critic
[14, 15], and rule-based cognitive planning strategies seem to mirror model-based algorithms. However, the implication that this distinction fundamentally concerns the use or non-use of a world model
in representation and algorithm seems somewhat at odds with the conception in psychology. Specifically, neither the gradation of update (i.e., incremental vs. abrupt) nor the nature of representation
(i.e., verbalizable rules) posited in the declarative system seem obviously related to the model-use
distinction. Although there have been some suggestions about how episodic memory may support
TD learning [16], a world model as conceived in RL is typically inherently probabilistic, so as to
support computing expected action values in stochastic environments, and thus must be learned by
incrementally composing multiple experiences. It has also been suggested that episodic memory
supports yet a third decision strategy distinct from both model-based and model-free [17], although
there is no experimental evidence for such a triple dissociation or in particular for a separation between the putative episodic and model-based controllers.
Here we suggest that an explanation for this mismatch may follow from the circumstances under
which each RL approach dominates. It has previously been proposed that model-free and modelbased reasoning should be traded off according to their relative statistical efficiency (proxied by
uncertainty) in different circumstances [13]. In fact, what ultimately matters to a decision-maker is
relative advantage in terms of reward [18]. Focusing specifically on task statistics, we extend the
uncertainty framework to investigate under what circumstances the performance of a model-based
system excels sufficiently to make it worthwhile.
When the environment is completely static, TD is well known to converge to the optimal policy
almost as quickly as model-based approaches [19], and so environmental change must be key to
understanding its computational disadvantages. Primarily, model-free Monte Carlo (MC) methods
such as TD are unable to propagate learned information around the state space efficiently, and in
particular to generalize to states not observed in the current trajectory. This is not the only way in
which MC methods learn slowly, however: they must also take samples of outcomes and average
over them. This process introduces additional noise to the sampling process which must be averaged
over, as observational deviations resulting from the learner?s own choice variability or transition
stochasticity in the environment are confounded with variability in immediate rewards. In effect, this
averaging imposes an upper bound on the learning rate needed to achieve reasonable performance,
and, correspondingly, on how well it can keep up with task volatility.
Conversely, the key benefit of model-based reasoning lies in its ability to react quickly to change,
applying single-trial experience flexibly in order to construct values. We provide a more formal
argument of this observation in MDPs with dynamic rewards and static transitions, and find that
the environments in which TD is most impaired are those with frequent changes and little noise.
This suggests a strategy by which these two approaches should optimally trade-off, which we test
empirically using a decision task in humans while manipulating reward statistics. The high-volatility
environments in which model-based learning dominates are also those in which a learning rate near
one optimally applies. This may explain why a model-based system is associated with or perhaps
specialized for rapid, declarative rule learning.
2 Theory
Model-free and model-based methods differ in their strategies for estimating action values from
samples. One key disadvantage of Monte Carlo sampling of long-run values in an MDP, relative to
model-based RL (in which immediate rewards are sampled and aggregated according to the sampled
transition dynamics), is the need to average samples over both reward and state transition stochasticity. This impairs its ability to track changes in the underlying MDP, with the disadvantage most
pronounced in situations of high volatility and low noise.
Below, we develop the intuition for this disadvantage by applying Kalman filter analysis [20] to
examine uncertainties in the simplest possible MDP that exhibits the issue. Specifically, consider a
state with two actions, each associated with a pair of terminal states. Each action leads to one of the
two states with equal probability, and each of the four terminal states is associated with a reward. The
rewards are stochastic and diffusing, according to a Gaussian process, and the transitions are fixed.
We consider the uncertainty and reward achievable as a function of the volatility and observation
noise. We have here made some simplifications in order to make the intuition as clear as possible:
that each trajectory has only a single state transition and reward; that in the steady state the static
transition matrix has been fully learned; and that all analyzed distributions are Gaussian. We test
some of these assumptions empirically in section 3 by showing that the same pattern holds in more
complex tasks.
2.1 Model
In general $X_t(i)$ or just $X$ will refer to an actual sample of the $i$th variable (e.g., reward or value) at time $t$, $\bar{X}$ refers to the (latent) true mean of $X$, and $\hat{X}$ refers to estimates of $\bar{X}$ made by the learning process. Given i.i.d. Gaussian diffusion processes on each value, $X_t(i)$, described by the diffusion (or volatility) and observation noise

$$\sigma^2 = \langle (\bar{X}_{t+1}(i) - \bar{X}_t(i))^2 \rangle, \quad (1)$$
$$\varepsilon^2 = \langle (X_t(i) - \bar{X}_t(i))^2 \rangle, \quad (2)$$

the optimal learning rate that achieves the minimal uncertainty (from the Kalman gain) is:

$$\alpha^* = \frac{\sigma\left(\sqrt{\sigma^2 + 4\varepsilon^2} - \sigma\right)}{2\varepsilon^2}. \quad (3)$$

Note that this function is monotonically increasing with $\sigma$ and decreasing with $\varepsilon$ (and in particular, $\alpha^* \to 1$ as $\varepsilon \to 0$). When using this learning rate the resulting asymptotic uncertainty (variance of estimates) will be:

$$U_X(\alpha^*) = \langle (\hat{X} - \bar{X})^2 \rangle = \frac{\sigma\sqrt{\sigma^2 + 4\varepsilon^2} + \sigma^2}{2}. \quad (4)$$
This, as expected, increases monotonically in both parameters.
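Equations 3 and 4 can be sanity-checked by iterating the scalar Kalman variance recursion to its fixed point (a minimal sketch, ours, not the authors' code; P here is the predictive variance, i.e., the uncertainty about the latent mean before the next observation):

import numpy as np

def steady_state(sigma, eps, iters=5000):
    P = 1.0                              # predictive (prior) variance
    K = 0.0
    for _ in range(iters):
        K = P / (P + eps**2)             # Kalman gain = effective learning rate
        P = (1 - K) * P + sigma**2       # posterior variance plus one diffusion step
    return K, P

sigma, eps = 0.3, 0.5
K, P = steady_state(sigma, eps)
alpha_star = sigma * (np.sqrt(sigma**2 + 4 * eps**2) - sigma) / (2 * eps**2)  # Eq. (3)
U_star = (sigma * np.sqrt(sigma**2 + 4 * eps**2) + sigma**2) / 2              # Eq. (4)
print(K, alpha_star)  # both ~0.446
print(P, U_star)      # both ~0.202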
What often matters, however, is identifying the highest of multiple values, e.g., $\bar{X}(i)$ and $\bar{X}(j)$. If $\bar{X}(i) - \bar{X}(j) = d$, the marginal value of the choice will be $\pm d$. Given some uncertainty, $U$, the probability of this choice, i.e., $\hat{X}(i) > \hat{X}(j)$, compared to chance is:

$$c(U) = 2 \int_{-\infty}^{\infty} \Phi\!\left(x + \frac{d}{\sqrt{U}}\right) \phi(x)\, dx - 1 \quad (5)$$

(Where $\phi$ and $\Phi$ are the density and distribution functions for the standard normal.) The resulting value of the choice is thus $c(U)\,d$. While $c$ is flat at 1 as $U \to 0$, it shrinks as $\Theta(1/\sqrt{U})$ (since $\phi'(0) = 0$). Our goal is now to determine $c(U_Q)$ for each algorithm.
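Equation 5 also reduces to a closed form through the standard identity ∫Φ(x + a)φ(x)dx = Φ(a/√2); a sketch checking the two against each other (ours, not the authors' code):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def c_integral(U, d=1.0):
    f = lambda x: norm.cdf(x + d / np.sqrt(U)) * norm.pdf(x)
    val, _ = quad(f, -np.inf, np.inf)
    return 2 * val - 1

def c_closed(U, d=1.0):
    # Equivalent: probability of the correct choice minus the incorrect one
    return 2 * norm.cdf(d / np.sqrt(2 * U)) - 1

for U in [0.01, 0.1, 1.0, 10.0]:
    print(U, c_integral(U), c_closed(U))   # the two columns agree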
2.2 Value estimation
Consider the value of one of the actions in our two-action MDP which leads to state A or B. Here, the true expected value of the choice is $\bar{Q} = \frac{\bar{R}(A) + \bar{R}(B)}{2}$. If each reward is changing according to the Gaussian diffusion process described above, this will induce a change process on $Q$. A model-based system that has fully learned the transition dynamics will be able to estimate $\bar{R}(A)$ and $\bar{R}(B)$ separately, and thus take the expectation to produce $\bar{Q}$. By assuming each reward is sampled equally often and adopting the appropriate effective $\sigma$, the resulting uncertainty of this expectation, $U_{MB}$, follows Equation 4, with $X = Q$.

On the other hand, a Monte Carlo system that must take samples over transitions will observe $Q = R(A)$ or $Q = R(B)$. If $\bar{R}(A) - \bar{R}(B) = d$, it will observe an additional variance of $\frac{d^2}{4}$ from the mixture of the two reward distributions. Treating this noise as Gaussian and adding it to the noise of the rewards, this decreases the optimal learning rate and increases the minimal uncertainty to:

$$U_{MC} = \langle (\hat{Q} - \bar{Q})^2 \rangle = \frac{\sigma\sqrt{\sigma^2 + d^2 + 4\varepsilon^2} + \sigma^2}{2} \quad (6)$$

Other forms of stochasticity, whether from changing policies or more complex MDPs, will similarly inflate the effective noise term, albeit with a different form.
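The d²/4 term is just the variance added by an even two-component mixture of reward distributions whose means differ by d, which is easy to confirm numerically (our sketch, under the section's assumptions):

import numpy as np

rng = np.random.default_rng(1)
d, eps, n = 1.0, 0.5, 500_000
which = rng.integers(0, 2, size=n)              # state A or B, equally often
means = np.where(which == 0, d / 2, -d / 2)     # R(A) and R(B) means differ by d
samples = means + rng.normal(0.0, eps, size=n)
print(samples.var(), eps**2 + d**2 / 4)         # empirical vs. predicted variance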
Clearly $U_{MC} \geq U_{MB}$. However, the more relevant measure is how these uncertainties translate into values [18]. For this we want to compare their relative success rates, $c(U)$ from Equation 5, which scale directly to outcome. The relative advantage of the model-based (MB) approach, $c(U_{MB}) - c(U_{MC})$, is plotted in Figure 1 for an arbitrary reward deviation $d = 1$.
[Figure 1: the MB-TD advantage (probability), plotted over the volatility σ and the noise ε in [0, 0.5].]
Figure 1: Difference in theoretical success rate between MB and MC
As expected, as either the volatility or noise parameter gets very large and the task gets harder, the uncertainty increases, performance approaches chance, and the relative advantage vanishes. However, for reasonable sizes of ε, the model-based advantage first increases to a peak as σ increases, which is largest for small values of ε. No comparable increasing advantage is seen for model-based valuation for increasing ε.
While these techniques may also be extended more generally to other MDPs (see Supplemental
Materials), the core observation presented above should illuminate the remainder of our discussion.
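Putting Equations 4-6 together, the qualitative shape of Figure 1 can be regenerated directly (our sketch; the paper's exact grid is unknown, and the effective σ for the model-based learner is taken at face value here):

import numpy as np
from scipy.stats import norm

def U(sigma, eps):                         # Equation (4)
    return (sigma * np.sqrt(sigma**2 + 4 * eps**2) + sigma**2) / 2

def c(u, d=1.0):                           # Equation (5), closed form
    return 2 * norm.cdf(d / np.sqrt(2 * u)) - 1

d = 1.0
for eps in [0.1, 0.3, 0.5]:
    for sigma in [0.05, 0.15, 0.3, 0.5]:
        U_mb = U(sigma, eps)                              # model-based
        U_mc = U(sigma, np.sqrt(eps**2 + d**2 / 4))       # Monte Carlo, Equation (6)
        print(eps, sigma, round(c(U_mb, d) - c(U_mc, d), 4))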
3 Simulation
To examine our qualitative predictions in a more realistic setting, we simulated randomly generated
MDPs with 8 states, 2 actions, and transition and reward functions following the assumptions given
in the previous section, with the addition of a contractive factor on rewards, $\varphi$, to prevent divergence:

$$\bar{R}_0(s, a) \sim N(0, 1)$$
$$\varphi = \sqrt{1 - \sigma^2} \quad \text{(so that the stationary distribution has } \mathrm{var}\, \bar{R} = 1\text{)}$$
$$\bar{R}_t(s, a) = \varphi \bar{R}_{t-1}(s, a) + w_t(s, a), \quad w_t(s, a) \sim N(0, \sigma^2)$$
$$R_t(s, a) = \bar{R}_t(s, a) + v_t, \quad v_t \sim N(0, \varepsilon^2)$$
Each transition had (at most) three possible outcomes, with probabilities 0.6, 0.3, and 0.1, assigned randomly with replacement from the 8 states. In order to avoid bias related to the exploration policy, each learning algorithm observed the same set of 1000 choices (chosen according to the objectively optimal policy, plus softmax decision noise), and the greedy policy resulting from its learned values was assessed according to the true $\bar{R}$ values at that point. The entire process was repeated 5000 times for each different setting of the $\sigma$ and $\varepsilon$ parameters.
We compared the performance of a model-based approach using value iteration with a fixed, optimal
reward learning rate and transition counting (MB) against various model-free algorithms including
Q(0), SARSA(0), and SARSA(1) (with fixed optimal learning rates), all using a discount factor of
$\gamma = 0.9$. As expected, all learners showed a decrement in reward as $\sigma$ increased. Figure 2 shows the
difference in mean reward obtained between MB and SARSA(0). Q(0) and SARSA(1) showed the
same pattern of results.
The correspondence between the theoretical results and the simulation confirms that the theoretical
findings do hold more generally, and we claim that the same underlying effects drive these results.
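A minimal sketch of the generative process used in these simulations (ours, not the authors' code; it omits the learners themselves and only exercises the reward diffusion and transition structure):

import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, sigma, eps, T = 8, 2, 0.3, 0.5, 1000
phi = np.sqrt(1 - sigma**2)            # contractive factor

R_bar = rng.normal(0.0, 1.0, (n_states, n_actions))
history = []
for t in range(T):
    R_bar = phi * R_bar + rng.normal(0.0, sigma, R_bar.shape)
    R = R_bar + rng.normal(0.0, eps, R_bar.shape)     # observed rewards
    history.append(R_bar.copy())
print(np.var(np.stack(history)))       # stays near the stationary variance 1

# Random transitions: for each (s, a), three successors drawn with replacement,
# taken with probabilities 0.6, 0.3, and 0.1.
succ = rng.integers(0, n_states, (n_states, n_actions, 3))
outcome_probs = np.array([0.6, 0.3, 0.1])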
[Figure 2: the simulated MB-TD advantage (reward) as a function of the reward statistics σ and ε.]
Figure 2: Difference in reward obtained between MB and SARSA(0)
4 Human behavior
Human subjects performed a decision task that represented an MDP with 4 states and 2 actions.
The rewards followed the same contractive Gaussian diffusion process used in section 3, with the $\sigma$ and $\varepsilon$ parameters varied across subjects. We sought changes in the reliance on model-based and model-free strategies via regressions of past events onto current choices [21]. We hypothesized that model-based RL would be uniquely favored for large $\sigma$ and small $\varepsilon$.
4.1 Methods
4.1.1 Participants
55 individuals from the undergraduate subject pool and the surrounding community participated in
the experiment. Twelve received monetary compensation based on performance, and the remainder
received credit fulfilling course requirements. All participants gave informed consent and the study
was approved by the human subjects ethics board of the institute.
4.1.2 Task
Subjects viewed a graphical representation of a rotating disc with four pairs of colored squares
equally spaced around the edge. Each pair of squares constituted a state ($s \in S$ = {N, E, S, W})
and had a unique distinguishable color and icon indicating direction (an arrow of some type). Each
of the two squares in a state represented an action ($a \in A$ = {L, R}), and had a left- or right-directed
icon. During the task, only the top quadrant of the disc was visible at any time, and at decision time
subjects could select the left or right action by pressing the left or right arrow button on a keyboard.
Immediately after selecting an action, between zero and five coins (including a pie-fraction of a
coin) appeared under the selected action square, representing a reward ($R \in [0, 5]$). After 600 ms,
the disc began rotating and the reward became slowly obscured over the next 1150 ms until a new
pair of squares was at the top of the disc and the next decision could be entered, as seen in Figure 3.
The state dynamics were determined by a fixed transition function ($T : S \times A \to S$) such that each action was most likely to lead to the next adjacent state along the edge of the disc (e.g., $T(N, L) = W$). To this, additional uniform outcome noise was added with probability 0.4. The reward distribution followed the same Gaussian process given in the previous sections, except shifted and trimmed. The parameters $\sigma$ and $\varepsilon$ were varied by condition.

$$T : S \times A \times S \to [0, 1], \qquad T(s, a, s') = \begin{cases} 0.7 & \text{if } s' = T(s, a) \\ 0.1 & \text{otherwise} \end{cases}$$
$$R_t : S \times A \to [0, 5], \qquad R_t(s, a) = \min(\max(\bar{R}_t(s, a) + v_t + 2.5,\ 0),\ 5)$$
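One step of the task under these definitions (our sketch; the full ring adjacency beyond the stated T(N, L) = W example is an assumption):

import numpy as np

rng = np.random.default_rng(3)
S = ["N", "E", "S", "W"]                    # states around the disc
# Assumed geometry consistent with T(N, L) = W: L steps counterclockwise, R clockwise.
T_nominal = {("N", "L"): "W", ("W", "L"): "S", ("S", "L"): "E", ("E", "L"): "N",
             ("N", "R"): "E", ("E", "R"): "S", ("S", "R"): "W", ("W", "R"): "N"}

def step(s, a, R_bar_sa, eps):
    # Uniform outcome noise with probability 0.4 yields the stated kernel:
    # 0.6 + 0.4/4 = 0.7 to the nominal successor, 0.4/4 = 0.1 to each other state.
    s_next = S[rng.integers(0, 4)] if rng.random() < 0.4 else T_nominal[(s, a)]
    reward = min(max(R_bar_sa + rng.normal(0.0, eps) + 2.5, 0.0), 5.0)
    return s_next, reward

print(step("N", "L", R_bar_sa=0.0, eps=0.316))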
Figure 3: Abstract task layout and screen shot shortly after a choice is made (yellow box indicates
visible display): Each state has two actions, right (red) and left (blue), which lead to the indicated
state with 70% probability, and otherwise to another state at random. Each action also results in a
reward of 0?5 coins.
Each subject was first trained on the transition and reward dynamics of the task, including 16 observations of reward samples where the latent value $\bar{R}$ was shown so as to get a feeling for both the change and noise processes. They then performed 500 choice trials in a single condition. Each subject was randomly assigned to one of 12 conditions, made up of $\sigma \in$ {0.03, 0.0462, 0.0635, 0.0882, 0.1225, 0.1452} partially crossed with $\varepsilon \in$ {0, 0.126, 0.158, 0.316, 0.474, 0.506}.
4.1.3 Analysis
Because they use different sampling strategies to estimate action values, TD and model-based RL
differ in their predictions of how experience with states and rewards should affect subsequent
choices. Here, we use a regression analysis to measure the extent to which choices at a state are
influenced by recent previous events characteristic of either approach [21]. This approach has
the advantage of making only very coarse assumptions about the learning process, as opposed to
likelihood-based model-fits which may be biased by the specific learning equations assumed. By
confining our analyses to the most recent samples we remain agnostic about free parameters with
non-linear effects such as learning rates and discount factors, but rather measure the relative strength
of reliance on either sort of evidence directly using a general linear model. Regardless of the actual
learning process, the most recent sample should have the strongest effect [22]. Accordingly, below
we define explanatory variables that capture the most recently experienced reward sample that would
be relevant to a choice under either Q(1) TD or model-based planning.
The data for each subject were considered to be the sequence of states visited, St , actions taken,
At , and rewards received, Rt . We define additional vector time sequences a, j, r, q, and p, each
indexed by time and state and referred to generally as xt (s), with all x0 initially undefined. For each
observation we perform the following updates:
"stay" vs. "switch" (boolean indicator):  $w_t = [A_t \neq a_t(S_t)]$
last action:  $a_{t+1}(S_t) = A_t$
"jump" (unexpected transition):  $j_{t+1}(S_t) = [S_{t+1} \neq T(S_t, A_t)]$
immediate reward:  $r_{t+1}(S_t) = R_t$
subsequent reward:  $q_{t+1}(S_{t-1}) = R_t$
expected reward:  $p_{t+1}(S_t) = r_{t+1}(T(S_t, A_t))$
for $x = a, j, r, q,$ and $p$:  $x_{t+1}(s) = x_t(s) \ \forall s \neq S_t$
change:  $d_{t+1} = |R_t - r_t|$

For convenience, we use $x_t$ to mean $x_t(S_t)$. Note that these vectors are step functions, such that each value is updated ($x_t \neq x_{t-1}$) only when a relevant observation is made. They thus always represent the most recent relevant sample.
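These updates amount to simple bookkeeping over the trajectory. A minimal sketch (ours, not the authors' code), with dictionaries standing in for the per-state vectors and None playing the role of "initially undefined":

def update_regressors(vars_, S_t, A_t, R_t, S_next, S_prev, T_nominal):
    a, j, r, q, p = (vars_[k] for k in "ajrqp")
    w = None if S_t not in a else int(A_t != a[S_t])   # switch indicator
    a[S_t] = A_t                                       # last action
    j[S_t] = int(S_next != T_nominal[(S_t, A_t)])      # "jump"
    r_prev = r.get(S_t)
    d = None if r_prev is None else abs(R_t - r_prev)  # change
    r[S_t] = R_t                                       # immediate reward
    if S_prev is not None:
        q[S_prev] = R_t                                # subsequent reward
    p[S_t] = r.get(T_nominal[(S_t, A_t)])              # expected reward
    return w, d

vars_ = {k: {} for k in "ajrqp"}                       # all initially undefined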
Given the task dynamics, we can consider how a TD-based Q-learning system and a model-based
planning system would compute values. Both take into account the last sample of the immediate
reward, rt . They differ in how they account for the reward from the ?next state?: either, for Q(1), as
qt (the last reward received from the state visited after the last visit to St ) or, for model-based RL, as
pt (the last sample of the reward at the true successor state). That is, while TD(1) will incorporate the
reward observed following Rt , regardless of the state, a model-based system will instead consider
the expected successor state [21]. While the latter two reward observations will be the same in some
cases, they can disagree either after a jump trial (j, where the model-based and sample successor
states differ), or when the successor state has more recently been visited from a different predecessor
state (providing a reward sample known to model-based but not to TD).
Given this, we can separate the effects of model-based and model-free learning by defining additional explanatory variables:
$$r'_t = \begin{cases} q_t & \text{if } q_t = p_t \\ 0 & \text{otherwise (after mean correction)} \end{cases} \qquad \text{(common)}$$
$$q^*_t = q_t - r'_t, \qquad p^*_t = p_t - r'_t \qquad \text{(unique)}$$

While $r'$ represents the cases where the two systems use the same reward observation, $q^*$ and $p^*$ are the residual rewards unique to each learning system.
We applied a mixed-effects logistic regression model using glmer [23] to predict "stay" ($w_t = 0$)
trials. Any regressors of interest were mean-corrected before being entered into the design. Any
trial in which one of the variables was undefined (e.g., the first visit to a state) was omitted. Also,
we required that subjects have at least 50 (10%) switch trials to be included.
First we examined the main effects with a regression including fixed effects of interest for $r$, $r'$, $q^*$, $p^*$, and random effects of no interest for $r$, $q$, and $p$ (without covariances). Next, we ran a regression adding all the interactions between the condition variables ($\sigma$, $\varepsilon$) and the specific reward effects ($q^*$, $p^*$). Finally, we additionally included the interaction between change in reward on the previous trial ($d$) and the specific reward effects.
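The derived regressors can be formed in a few lines (a sketch of the construction, ours; the actual analysis fit the mixed-effects logistic regression with glmer in R [23], which we do not reproduce here):

import numpy as np

def derived_regressors(q, p):
    # q, p: per-trial samples q_t and p_t (same length)
    q, p = np.asarray(q, float), np.asarray(p, float)
    r_common = np.where(q == p, q, 0.0)   # r'_t: shared next-state reward sample
    r_common -= r_common.mean()           # mean correction
    q_star = q - r_common                 # residual unique to TD
    p_star = p - r_common                 # residual unique to model-based lookahead
    return r_common, q_star, p_star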
4.2 Results
A total of 5 subjects failed to meet the inclusion criterion of 50 switch trials (in each case because
they pressed the same button on almost all trials), leaving 500 decision trials from each of 50 subjects. Subjects were observed to switch on 143 ± 55 trials (mean ± 1 SD). As designed, there were an average of 151 ± 17 "jump" trials per subject. The number of trials in which TD and model-based disagreed as to the most recent relevant sample of the next-state reward ($r' = 0$) was 243 ± 26, and for 181 ± 19 of these, it was due to a more recent visit to the next state. The results of the regressions
are shown in Table 1.
Beyond the trivial effects of perseveration and reward, subjects showed a substantial amount of TD-type learning ($q^* > 0$), and a smaller but significant amount of model-based lookahead ($p^* > 0$). The interactions of these effects by condition demonstrated that subjects in higher drift conditions showed significantly less TD ($\sigma \cdot q^* < 0$) but unreduced model-based learning ($\sigma \cdot p^*$), possibly due to the relative disadvantage of TD with increased drift. Similarly, higher noise conditions showed decreased model-based effects ($\varepsilon \cdot p^* < 0$) and no change in TD, which may be driven by the decreasing advantage of MB. Note that, since the (nonsignificant) trend on the unaffected variable is positive, it is unlikely that either interaction effect results from a nonspecific change in performance or the "noisiness" of choices. Both of these effects are consistent with the pattern of differential
reliance predicted by the theoretical analysis. The effect of change on the previous trial (d) provides
one hint as to how subjects may adjust their reliance on either system dynamically: higher changes
are indicative of noisier environments which are thus expected to promote TD learning.
5 Discussion
We have shown that humans systematically adjust their reliance on learning approaches according
to the statistics of the task, in a way qualitatively consistent with the theoretical considerations presented.
Table 1: Behavioral effects from nested regressions (each including preceding groups)

variable | effects | z | p | description
constant | mixed | 11.61 | 10^-29 | perseveration
r | mixed | 14.99 | 10^-49 | last immediate r
r' | mixed | 5.55 | 10^-7 | common next r
q* | mixed | 6.40 | 10^-9 | TD(1) next-step r
p* | mixed | 2.51 | 0.012 | model predicted r
σ · q* | fixed | -4.07 | 0.00005 | TD with change
σ · p* | fixed | 0.67 | 0.50 | model with change
ε · q* | fixed | 0.99 | 0.32 | TD with noise
ε · p* | fixed | -2.11 | 0.035 | model with noise
d · q* | mixed | 1.63 | 0.10 | TD after change
d · p* | mixed | -3.06 | 0.0022 | model after change
Model-based methods, while always superior to TD in terms of performance, have the
largest advantage in the presence of change paired with low environmental noise, because the Monte
Carlo sampling strategy of TD interferes with tracking fast change. If the additional costs of modelbased computation are fixed, this would motivate employing the system only in the regime where
its advantage was most pronounced [18]. Consistent with this, human behavior exhibited relatively
larger use of model-based RL with increased reward volatility and lesser use of it with increased
observation noise.
Of course, increasing either the volatility or noise parameters makes the task harder, and a decline in
the marker for either sort of learning, as we observed, implies an overall decrement in performance.
However, as the decrement was specific to one or the other explanatory variable, this may also be
interpreted as a relative increase in use of the unaffected strategy. It is also worth noting that the
linearized regression analysis examines only the effect of the most recent rewards, and the weighting
of those relative to earlier samples will depend on the learning rate [22]. Thus a decrease in learning
rate for either system may be confounded with a decrease in the strength of its effect in our analysis.
However, while the optimal learning rates are also predicted to differ between conditions, these
predictions are common to both systems, and it seems unlikely that each would differentially adjust
its learning rate in response to a different manipulation.
The characteristics associated with these learning systems in psychology can be seen as consequences of the relative strengths of model-based and model-free learning. If the model-based system
is most useful in conditions of low noise and high volatility, then the appropriate learning rates for
such a system are large: there is less need and utility to take multiple samples for the purpose of
averaging. In this case of a high learning rate, model-based learning is closely aligned with single-shot episodic encoding, possibly subsuming such a system [17], as well as with learning categorical,
verbalizable rules in the psychological sense, rather than averages. This may also explain the selective engagement of putatively model-based brain regions such as the dorsolateral prefrontal cortex
in tasks with less stochastic outcomes [24]. Finally, this relates indirectly to the well known phenomenon whereby dominance shifts from the model-based to the model-free controller with overtraining: a model-based system dominates early not simply because it learns faster, but because it is
capable of better learning with fewer trials.
The specific advantage of high learning rates may well motivate the brain to use a restricted model-based system, such as one with learning rate fixed to 1. Indeed (see Supplemental materials), this restriction has little detriment on the system's advantage over TD in the circumstances where it
would be expected to be used, but causes drastic performance problems as observation noise increases, since averaging over samples is then required. Such a limitation might have useful computational advantages. Transition matrices learned this way, for instance, will be sparse: just records
of trajectories. Such matrices admit both compressed representations and more efficient planning algorithms (e.g., tree search) as, in the fully deterministic case, only one trajectory must be examined.
Conversely, evaluations in a model based system are extremely costly when transitions are highly
stochastic, since averages must be computed over exponentially many paths, while they add no cost
to model-free learning.
Acknowledgments This work was supported by Award Number R01MH087882 from NIMH as part of the
NSF/NIH CRCNS Program, and by a Scholar Award from the McKnight Foundation.
References
[1] Bernard W. Balleine, Nathaniel D. Daw, and John P. O'Doherty. Multiple forms of value learning and the function of dopamine. In Paul W. Glimcher, Colin F. Camerer, Ernst Fehr, and Russell A. Poldrack, editors, Neuroeconomics: Decision Making and the Brain, chapter 24, pages 367–387. Academic Press, London, 2008.
[2] Antoine Bechara. Decision making, impulse control and loss of willpower to resist drugs: a neurocognitive perspective. Nat Neurosci, 8(11):1458–63, 2005.
[3] Frederick Toates. The interaction of cognitive and stimulus-response processes in the control of behaviour. Neuroscience & Biobehavioral Reviews, 22(1):59–83, 1997.
[4] Peter Dayan. Goal-directed control and its antipodes. Neural Netw, 22:213–219, 2009.
[5] Neal Schmitt, Bryan W. Coyle, and Larry King. Feedback and task predictability as determinants of performance in multiple cue probability learning tasks. Organ Behav Hum Perform, 16(2):388–402, 1976.
[6] Berndt Brehmer and Jan Kuylenstierna. Task information and performance in probabilistic inference tasks. Organ Behav Hum Perform, 22:445–464, 1978.
[7] B. J. Knowlton, L. R. Squire, and M. A. Gluck. Probabilistic classification learning in amnesia. Learn Mem, 1(2):106–120, 1994.
[8] W. Todd Maddox and F. Gregory Ashby. Dissociating explicit and procedural-learning based systems of perceptual category learning. Behavioural Processes, 66(3):309–332, 2004.
[9] W. Todd Maddox, J. Vincent Filoteo, Kelli D. Hejl, and A. David Ing. Category number impacts rule-based but not information-integration category learning: Further evidence for dissociable category-learning systems. J Exp Psychol Learn Mem Cogn, 30(1):227–245, 2004.
[10] R. A. Poldrack, J. Clark, E. J. Paré-Blagoev, D. Shohamy, J. Creso Moyano, C. Myers, and M. A. Gluck. Interactive memory systems in the human brain. Nature, 414(6863):546–550, 2001.
[11] Bernard W. Balleine and Anthony Dickinson. Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology, 37(4–5):407–419, 1998.
[12] Kenji Doya. What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Netw, 12(7–8):961–974, 1999.
[13] Nathaniel D. Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci, 8(12):1704–1711, 2005.
[14] Ben Seymour, John P. O'Doherty, Peter Dayan, Martin Koltzenburg, Anthony K. Jones, Raymond J. Dolan, Karl J. Friston, and Richard S. Frackowiak. Temporal difference models describe higher-order learning in humans. Nature, 429(6992):664–667, 2004.
[15] John P. O'Doherty, Peter Dayan, Johannes Schultz, Ralf Deichmann, Karl Friston, and Raymond J. Dolan. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304(5669):452–454, 2004.
[16] Adam Johnson and A. David Redish. Hippocampal replay contributes to within session learning in a temporal difference reinforcement learning model. Neural Netw, 18(9):1163–1171, 2005.
[17] Máté Lengyel and Peter Dayan. Hippocampal contributions to control: The third way. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 889–896. MIT Press, Cambridge, MA, 2008.
[18] Mehdi Keramati, Amir Dezfouli, and Payam Piray. Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Comput Biol, 7(5):e1002055, 2011.
[19] Michael Kearns and Satinder Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Michael S. Kearns, Sara A. Solla, and David A. Cohn, editors, Advances in Neural Information Processing Systems 11, volume 11, pages 996–1002. MIT Press, Cambridge, MA, 1999.
[20] R. E. Kalman. A new approach to linear filtering and prediction problems. J Basic Eng, 82(1):35–45, 1960.
[21] Nathaniel D. Daw, S. J. Gershman, B. Seymour, P. Dayan, and R. J. Dolan. Model-based influences on humans' choices and striatal prediction errors. Neuron, 69(6):1204–1215, 2011.
[22] Brian Lau and Paul W. Glimcher. Dynamic response-by-response models of matching behavior in rhesus monkeys. J Exp Anal Behav, 84(3):555–579, 2005.
[23] Douglas Bates, Martin Maechler, and Ben Bolker. lme4: Linear mixed-effects models using S4 classes, 2011. R package version 0.999375-39.
[24] Saori C. Tanaka, Kazuyuki Samejima, Go Okada, Kazutaka Ueda, Yasumasa Okamoto, Shigeto Yamawaki, and Kenji Doya. Brain mechanism of reward prediction under predictable and unpredictable environmental dynamics. Neural Netw, 19(8):1233–1241, 2006.
3,583 | 4,244 | The Fixed Points of Off-Policy TD
J. Zico Kolter
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Off-policy learning, the ability for an agent to learn about a policy other than the one it is following, is a key element of Reinforcement Learning, and in recent years there has been much work on developing Temporal Difference (TD) algorithms that are guaranteed to converge under off-policy sampling. It has remained an open question, however, whether anything can be said a priori about the quality of the TD solution when off-policy sampling is employed with function approximation. In general the answer is no: for arbitrary off-policy sampling the error of the TD solution can be unboundedly large, even when the approximator can represent the true value function well. In this paper we propose a novel approach to address this problem: we show that by considering a certain convex subset of off-policy distributions we can indeed provide guarantees as to the solution quality similar to the on-policy case. Furthermore, we show that we can efficiently project onto this convex set using only samples generated from the system. The end result is a novel TD algorithm that has approximation guarantees even in the case of off-policy sampling and which empirically outperforms existing TD methods.
1 Introduction
In temporal prediction tasks, Temporal Difference (TD) learning provides a method for learning long-term expected rewards (the "value function") using only trajectories from the system. The algorithm is ubiquitous in Reinforcement Learning, and there has been a great deal of work studying the convergence properties of the algorithm. It is well known that for a tabular value function representation, TD converges to the true value function [3, 4]. For linear function approximation with on-policy sampling (i.e., when the states are drawn from the stationary distribution of the policy we are trying to evaluate), the algorithm converges to a well-known fixed point that is guaranteed to be close to the optimal projection of the true value function [17]. When states are sampled off-policy, standard TD may diverge when using linear function approximation [1], and this has led in recent years to a number of modified TD algorithms that are guaranteed to converge even in the presence of off-policy sampling [16, 15, 9, 10].

Of equal importance, however, is the actual quality of the TD solution under off-policy sampling. Previous work, as well as an example we present in this paper, shows that in general little can be said about this question: the solution found by TD can be arbitrarily poor in the case of off-policy sampling, even when the true value function is well-approximated by a linear basis. Pursuing a slightly different approach, other recent work has looked at providing problem-dependent bounds, which use problem-specific matrices to obtain tighter bounds than previous approaches [19]; these bounds can apply to the off-policy setting, but depend on problem data, and will still fail to provide a reasonable bound in the cases mentioned above where the off-policy approximation is arbitrarily poor. Indeed, a long-standing open question in Reinforcement Learning is whether any a priori guarantees can be made about the solution quality for off-policy methods using function approximation.

In this paper we propose a novel approach that addresses this question: we present an algorithm that looks for a subset of off-policy sampling distributions where a certain relaxed contraction property holds; for distributions in this set, we show that it is indeed possible to obtain error bounds on the solution quality similar to those for the on-policy case. Furthermore, we show that this set of feasible off-policy sampling distributions is convex, representable via a linear matrix inequality (LMI), and we demonstrate how the set can be approximated and projected onto efficiently in the finite sample setting. The resulting method, which we refer to as TD with distribution optimization (TD-DO), is thus able to guarantee a good approximation to the best possible projected value function, even for off-policy sampling. In simulations we show that the algorithm can improve significantly over standard off-policy TD.
2 Preliminaries and Background
A Markov chain is a tuple (S, P, R, γ), where S is a set of states, P : S × S → R₊ is a transition probability function, R : S → R is a reward function, and γ ∈ [0, 1) is a discount factor. For simplicity of presentation we will assume the state space is countable, and so can be indexed by the set S = {1, ..., n}, which allows us to use matrix rather than operator notation. The value function for a Markov chain, V : S → R, maps states to their long-term discounted sum of rewards, and is defined as V(s) = E[Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s]. The value function may also be expressed via Bellman's equation (in vector form)

    V = R + γPV    (1)

where R, V ∈ R^n represent vectors of all rewards and values respectively, and P ∈ R^{n×n} is a matrix of probability transitions P_ij = P(s' = j | s = i).

In linear function approximation, the value function is approximated as a linear combination of some features describing the state: V(s) ≈ w^T φ(s), where w ∈ R^k is a vector of parameters, and φ : S → R^k is a function mapping states to k-dimensional feature vectors; or, again using vector notation, V ≈ Φw, where Φ ∈ R^{n×k} is a matrix of all feature vectors. The TD solution is a fixed point of the Bellman operator followed by a projection, i.e.,

    Φw*_D = Π_D(R + γPΦw*_D)    (2)

where Π_D = Φ(Φ^T DΦ)^{-1}Φ^T D is a projection matrix weighted by the diagonal matrix D ∈ R^{n×n}. Rearranging terms gives the analytical solution

    w*_D = (Φ^T D(Φ − γPΦ))^{-1} Φ^T DR.    (3)
Although we cannot expect to form this solution exactly when P is unknown and too large to represent, we can approximate the solution via stochastic iteration (leading to the original TD algorithm), or via the least-squares TD (LSTD) algorithm, which forms the matrices

    ŵ_D = Â^{-1} b̂,    Â = (1/m) Σ_{i=1}^m φ(s^(i)) (φ(s^(i)) − γφ(s'^(i)))^T,    b̂ = (1/m) Σ_{i=1}^m φ(s^(i)) r^(i)    (4)

given a sequence of states, rewards, and next states {s^(i), r^(i), s'^(i)}_{i=1}^m where s^(i) ∼ D. When D is not the stationary distribution of the Markov chain (i.e., we are employing off-policy sampling), then the original TD algorithm may diverge (LSTD will still be able to compute the TD fixed point in this case, but has a greater computational complexity of O(k²)). Thus, there has been a great deal of work on developing O(k) algorithms that are guaranteed to converge to the LSTD fixed point even in the case of off-policy sampling [16, 15].
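As a concrete (unofficial) illustration of eq. (4), the following numpy sketch forms the LSTD estimate from sampled transitions; the function name and array layout are our own choices, not from the paper.

```python
import numpy as np

def lstd(phi, phi_next, r, gamma):
    """Least-squares TD estimate from sampled transitions, per eq. (4).

    phi, phi_next: (m, k) arrays of features for s^(i) and s'^(i);
    r: (m,) array of rewards; gamma: discount factor.
    """
    m = phi.shape[0]
    A_hat = phi.T @ (phi - gamma * phi_next) / m   # \hat{A}
    b_hat = phi.T @ r / m                          # \hat{b}
    return np.linalg.solve(A_hat, b_hat)           # \hat{w}_D
```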
We note that the above formulation avoids any explicit mention of a Markov Decision Process (MDP) or actual policies: rather, we just have tuples of the form {s, r, s'} where s is drawn from an arbitrary distribution but s' still follows the "policy" we are trying to evaluate. This is a standard formulation for off-policy learning (see e.g. [16, Section 2]); briefly, the standard way to reach this setting from the typical notion of off-policy learning (acting according to one policy in an MDP, but evaluating another) is to act according to some original policy in an MDP, and then subsample only those actions that are immediately consistent with the policy of interest. We use the above notation as it avoids the need for any explicit notation of actions and still captures the off-policy setting completely.
2.1 Error bounds for the TD fixed point
Of course, in addition to the issue of convergence, there is the question as to whether we can say anything about the quality of the approximation at this fixed point. For the case of on-policy sampling,
the answer here is an affirmative one, as formalized in the following theorem.
Figure 1: Counterexample for off-policy TD learning: (left) the Markov chain considered for the counterexample; (right) the error of the TD estimate for different off-policy distributions (plotted on a log scale), along with the error of the optimal approximation.
Theorem 1. (Tsitsiklis and Van Roy [17], Lemma 6) Let w*_D be the unique solution to Φw*_D = Π_D(R + γPΦw*_D) where D is the stationary distribution of P. Then

    ‖Φw*_D − V‖_D ≤ (1/(1 − γ)) ‖Π_D V − V‖_D.    (5)

Thus, for on-policy sampling with linear function approximation, not only does TD converge to its fixed point, but we can also bound the error of its approximation relative to ‖Π_D V − V‖_D, the lowest possible approximation error for the class of function approximators.¹

Since this theorem plays an integral role in the remainder of this paper, we want to briefly give the intuition of its proof. A fundamental property of Markov chains [17, Lemma 1] is that the transition matrix P is non-expansive in the D norm when D is the stationary distribution:

    ‖Px‖_D ≤ ‖x‖_D,  ∀x.    (6)

From this it can be shown that the Bellman operator is a γ-contraction in the D norm and Theorem 1 follows. When D is not the stationary distribution of the Markov chain, then (6) need not hold, and it remains to be seen what, if anything, can be said a priori about the TD fixed point in this situation.
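The non-expansiveness property (6) is easy to check numerically. The sketch below (our own illustration, not from the paper) builds a random row-stochastic P, computes its stationary distribution, and verifies ‖Px‖_D ≤ ‖x‖_D for a random x.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)        # row-stochastic transition matrix

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
d = np.abs(np.real(evecs[:, np.argmin(np.abs(evals - 1))]))
d /= d.sum()
D = np.diag(d)

def d_norm(x):
    return np.sqrt(x @ D @ x)

x = rng.standard_normal(n)
assert d_norm(P @ x) <= d_norm(x) + 1e-12   # property (6)
```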
3 An off-policy counterexample
Here we present a simple counterexample which shows, for general off-policy sampling, that the TD fixed point can be an arbitrarily poor approximator of the value function, even if the chosen bases can represent the true value function with low error. The same intuition has been presented previously [11], though we here present a concrete numerical example for illustration.

Example 1. Consider the two-state Markov chain shown in Figure 1, with transition probability matrix P = (1/2)11^T, discount factor γ = 0.99, and value function V = [1 1.05]^T (with R = (I − γP)V). Then for any ε > 0 and C > 0, there exists an off-policy distribution D such that using bases Φ = [1 1.05 + ε]^T gives

    ‖Π_D V − V‖ ≤ ε,  and  ‖Φw*_D − V‖ ≥ C.    (7)

Proof. (Sketch) The fact that ‖Π_D V − V‖ ≤ ε is obvious from the choice of basis. To show that the TD error can be unboundedly large, let D = diag(p, 1 − p); then, after some simplification, the TD solution is given analytically by

    w*_D = (−2961 + 4141p − 2820ε + 2820pε) / (−2961 + 4141p − 45240ε + 84840pε − 40400ε² + 40400pε²)    (8)

which is infinite (1/w = 0) when

    p = (2961 + 45240ε + 40400ε²) / (4141 + 84840ε + 40400ε²).    (9)

Since this solution is in (0, 1) for all ε, by choosing p close to this value, we can make w*_D arbitrarily large, which in turn makes the error of the TD estimate arbitrarily large.

¹ The approximation factor can be sharpened to 1/√(1 − γ²) in some settings [18], though the analysis does not carry over to our off-policy case, so we present here the simpler version.
3
Figure 1 shows a plot of k?w? ? V k2 for the example above with ? = 0.001, varying p from 0 to
1. For p ? 0.715 the error of the TD solution approaches infinity; the essential problem here is that
when D is not the stationary distribution of P , A = ?T D(? ? ?P ?) can become close to zero (or
for the matrix case, one of its eigenvalues can become zero), and the TD value function estimate can
grow unboundedly large. Thus, we argue that simple convergence for an off-policy algorithm is not
a sufficient criterion for a good learning system, since even for a convergent algorithm the quality of
the actual solution could be arbitrarily poor.
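A minimal numeric check of Example 1 (our own sketch; the specific probe values of p are arbitrary) reproduces the blow-up of the TD solution near p ≈ 0.715:

```python
import numpy as np

gamma, eps = 0.99, 0.001
P = 0.5 * np.ones((2, 2))
V = np.array([1.0, 1.05])
R = (np.eye(2) - gamma * P) @ V          # reward consistent with V
Phi = np.array([[1.0], [1.05 + eps]])    # near-perfect basis

for p in [0.1, 0.5, 0.714, 0.7145, 0.9]:
    D = np.diag([p, 1 - p])
    A = Phi.T @ D @ (Phi - gamma * P @ Phi)   # eq. (3)
    w = np.linalg.solve(A, Phi.T @ D @ R)
    print(p, np.linalg.norm(Phi @ w - V))     # error explodes near 0.715
```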
4 A convex characterization of valid off-policy distributions
Although it may seem as though the above example would imply that very little could be said about the quality of the TD fixed point under off-policy sampling, in this section we show that by imposing additional constraints on the sampling distribution, we can find a convex family of distributions for which it is possible to make guarantees.

To motivate the approach, we again note that error bounds for the on-policy TD algorithm follow from the Markov chain property that ‖Px‖_D ≤ ‖x‖_D for all x when D is the stationary distribution. However, finding a D that satisfies this condition is no easier than computing the stationary distribution directly and thus is not a feasible approach. Instead, we consider a relaxed contraction property: that the transition matrix P followed by a projection onto the bases will be non-expansive for any function already in the span of Φ. Formally, we want to consider distributions D for which

    ‖Π_D PΦw‖_D ≤ ‖Φw‖_D    (10)

for any w ∈ R^k. This defines a convex set of distributions, since

    ‖Π_D PΦw‖²_D ≤ ‖Φw‖²_D
    ⇔ w^T Φ^T P^T DΦ(Φ^T DΦ)^{-1} Φ^T DΦ(Φ^T DΦ)^{-1} Φ^T DPΦ w ≤ w^T Φ^T DΦ w
    ⇔ w^T (Φ^T P^T DΦ(Φ^T DΦ)^{-1} Φ^T DPΦ − Φ^T DΦ) w ≤ 0.    (11)

This holds for all w if and only if²

    Φ^T P^T DΦ(Φ^T DΦ)^{-1} Φ^T DPΦ − Φ^T DΦ ⪯ 0    (12)

which in turn holds if and only if³

    F ≡ [ Φ^T DΦ      Φ^T DPΦ
          Φ^T P^T DΦ  Φ^T DΦ  ] ⪰ 0.    (13)
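For intuition, a small sketch (ours, with hypothetical function names) builds F from eq. (13) for a candidate diagonal weighting and tests membership in the convex set via the smallest eigenvalue:

```python
import numpy as np

def td_lmi(Phi, P, d):
    """The 2k x 2k matrix F of eq. (13) for the weighting D = diag(d)."""
    D = np.diag(d)
    A = Phi.T @ D @ Phi
    B = Phi.T @ D @ P @ Phi
    return np.block([[A, B], [B.T, A]])

def is_valid_distribution(Phi, P, d, tol=1e-9):
    """True if d lies in the convex set, i.e. F(d) is PSD."""
    return np.linalg.eigvalsh(td_lmi(Phi, P, d)).min() >= -tol
```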
This is a linear matrix inequality (LMI) in D, and thus describes a convex set. Although the distribution D is too high-dimensional to optimize directly, analogous to LSTD, the F matrix defined above is of a representable size (2k × 2k), and can be approximated from samples. We will return to this point in the subsequent section, and for now will continue to use the notation of the true distribution D for simplicity. The chief theoretical result of this section is that if we restrict our attention to off-policy distributions within this convex set, we can prove non-trivial bounds about the approximation error of the TD fixed point.

Theorem 2. Let w* be the unique solution to Φw* = Π_D(R + γPΦw*) where D is any distribution satisfying (13). Further, let D̄ be the stationary distribution of P, and let D̃ = D̄^{-1/2} D^{1/2}. Then⁴

    ‖Φw*_D − V‖_D ≤ ((1 + γκ(D̃))/(1 − γ)) ‖Π_D V − V‖_D.    (14)

The bound here is of a similar form to the previously stated bound for on-policy TD; it bounds the error of the TD solution relative to the error of the best possible approximation, except for the additional γκ(D̃) term, which measures how much the chosen distribution deviates from the stationary distribution. When D = D̄, κ(D̃) = 1, so we recover the original bound up to a constant factor. Even though the bound does include this term that depends on the distance from the stationary distribution, no such bound is possible for D that do not satisfy the convex constraint (13), as illustrated by the previous counterexample.
² A ⪯ 0 (A ⪰ 0) denotes that A is negative (positive) semidefinite.
³ Using the Schur complement property that [A, B; B^T, C] ⪰ 0 ⇔ B^T A^{-1}B − C ⪯ 0 [2, pg 650-651].
⁴ κ(A) denotes the condition number of A, the ratio of the singular values κ(A) = σ_max(A)/σ_min(A).
Figure 2: Counterexample from Figure 1 shown with the set of all valid distributions for which F ⪰ 0 (the feasible region). Restricting the solution to this region avoids the possibility of the high error solution.
Proof. (of Theorem 2) By the triangle inequality and the definition of the TD fixed point,

    ‖Φw*_D − V‖_D ≤ ‖Φw*_D − Π_D V‖_D + ‖Π_D V − V‖_D
                  = ‖Π_D(R + γPΦw*_D) − Π_D(R + γPV)‖_D + ‖Π_D V − V‖_D
                  = γ‖Π_D PΦw*_D − Π_D PV‖_D + ‖Π_D V − V‖_D
                  ≤ γ‖Π_D PΦw*_D − Π_D PΠ_D V‖_D + γ‖Π_D PΠ_D V − Π_D PV‖_D + ‖Π_D V − V‖_D.    (15)

Since Π_D V = Φw̃ for some w̃, we can use the definition of our contraction ‖Π_D PΦw‖_D ≤ ‖Φw‖_D to bound the first term as

    ‖Π_D PΦw*_D − Π_D PΠ_D V‖_D ≤ ‖Φw*_D − Π_D V‖_D ≤ ‖Φw*_D − V‖_D.    (16)

Similarly, the second term in (15) can be bounded as

    ‖Π_D PΠ_D V − Π_D PV‖_D ≤ ‖PΠ_D V − PV‖_D ≤ ‖P‖_D ‖Π_D V − V‖_D    (17)

where ‖P‖_D denotes the matrix norm ‖A‖_D ≡ max_{‖x‖_D ≤ 1} ‖Ax‖_D. Substituting these bounds back into (15) gives

    (1 − γ)‖Φw*_D − V‖_D ≤ (1 + γ‖P‖_D)‖Π_D V − V‖_D,    (18)

so all that remains is to show that ‖P‖_D ≤ κ(D̃). To show this, first note that ‖P‖_D̄ = 1, since

    max_{‖x‖_D̄ ≤ 1} ‖Px‖_D̄ ≤ max_{‖x‖_D̄ ≤ 1} ‖x‖_D̄ = 1,    (19)

and for any nonsingular D,

    ‖P‖_D = max_{‖x‖_D ≤ 1} ‖Px‖_D = max_{‖y‖₂ ≤ 1} √(y^T D^{-1/2} P^T D P D^{-1/2} y) = ‖D^{1/2} P D^{-1/2}‖₂.    (20)

Finally, since D̄ and D are both diagonal (and thus commute),

    ‖D^{1/2} P D^{-1/2}‖₂ = ‖D̄^{-1/2} D^{1/2} D̄^{1/2} P D̄^{-1/2} D̄^{1/2} D^{-1/2}‖₂
                          ≤ ‖D̄^{-1/2} D^{1/2}‖₂ ‖D̄^{1/2} P D̄^{-1/2}‖₂ ‖D̄^{1/2} D^{-1/2}‖₂
                          = ‖D̄^{-1/2} D^{1/2}‖₂ ‖D̄^{1/2} D^{-1/2}‖₂ = κ(D̃).

The final form of the bound can be quite loose, of course, as many of the steps involved in the proof used substantial approximations and discarded problem-specific data (such as the actual ‖Π_D P‖_D term, in favor of the generic κ(D̃) term, for instance). This is in contrast to the previously mentioned work of Yu and Bertsekas [19] that uses these and similar terms to obtain much tighter, but data-dependent, bounds. Indeed, applying a theorem from this work we can arrive at a slight improvement of the bound above [13], but the focus here is just on the general form and possibility of the bound.

Returning to the counterexample from the previous section, we can visualize the feasible region for which F ⪰ 0, shown as the shaded portion in Figure 2, and so constraining the solution to this feasible region avoids the possibility of the high error solution. Moreover, in this example the optimal TD error occurs exactly at the point where λ_min(F) = 0, so that projecting an off-policy distribution onto this set will give an optimal solution for initially infeasible distributions.
4.1 Estimation from samples
Returning to the issue of optimizing this distribution only using samples from the system, we note that analogous to LSTD, for samples {s^(i), r^(i), s'^(i)}_{i=1}^m,

    F̂ = (1/m) Σ_{i=1}^m [ φ(s^(i))φ(s^(i))^T    φ(s^(i))φ(s'^(i))^T
                           φ(s'^(i))φ(s^(i))^T   φ(s^(i))φ(s^(i))^T ]  ≡  (1/m) Σ_{i=1}^m F̂_i    (21)

will be an unbiased estimate of the LMI matrix F (for a diagonal matrix D given by our sampling distribution over s^(i)). Placing a weight d_i on each sample, we could optimize the sum F̂(d) = Σ_{i=1}^m d_i F̂_i and obtain a tractable optimization problem. However, optimizing these weights freely is not permissible, since this procedure allows us to choose d_i ≠ d_j even if s^(i) = s^(j), which violates the weights in the original LMI. However, if we additionally require that s^(i) = s^(j) ⇒ d_i = d_j (or more appropriately for continuous features and states, for example that ‖d_i − d_j‖ → 0 as ‖φ(s^(i)) − φ(s^(j))‖ → 0 according to some norm) then we are free to optimize over these empirical distribution weights. In practice, we want to constrain this distribution in a manner commensurate with the complexity of the feature space and the number of samples. However, determining the best such distributions to use in practice remains an open problem for future work in this area.
Finally, since many empirical distributions satisfy F̂(d) ⋡ 0, we propose to "project" the empirical distribution onto this set by minimizing the KL divergence between the observed and optimized distributions, subject to the constraint that F̂(d) ⪰ 0. Since this constraint is guaranteed to hold at the stationary distribution, the intuition here is that by moving closer to this set, we will likely obtain a better solution. Formally, the final optimization problem, which we refer to as the TD-DO method (Temporal Difference Distribution Optimization), is given by

    min_d  −Σ_{i=1}^m p̂_i log d_i    s.t.  1^T d = 1,  F̂(d) ⪰ 0,  d ∈ C    (22)

where C is some convex set that respects the metric constraints described above. This is a convex optimization problem in d, and thus can be solved efficiently, though off-the-shelf solvers can perform quite poorly, especially for large dimension m.
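A direct, if slow, way to get a feel for problem (22) is to hand it to an off-the-shelf conic solver. The sketch below (our own illustration) uses CVXPY with the SCS solver, assumes the per-sample matrices F̂_i have been precomputed, and omits the extra constraint set C. As the text notes, generic solvers can perform poorly for large m; the paper's own method is the first-order dual approach of Section 4.2.

```python
import numpy as np
import cvxpy as cp

def td_do_project(F_blocks, p_hat):
    """Solve a version of problem (22): find weights d minimizing
    KL(p_hat || d) subject to F(d) = sum_i d_i F_i >= 0.

    F_blocks: list of m symmetric (2k x 2k) per-sample matrices F_i.
    p_hat:    length-m numpy array, the empirical distribution.
    """
    m = len(F_blocks)
    d = cp.Variable(m)
    F = sum(d[i] * F_blocks[i] for i in range(m))
    problem = cp.Problem(
        cp.Minimize(-(p_hat @ cp.log(d))),     # KL up to a constant
        [cp.sum(d) == 1, d >= 0, F >> 0],      # F >> 0 is the LMI
    )
    problem.solve(solver=cp.SCS)
    return d.value
```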
4.2 Efficient Optimization
Here we present a first-order optimization method based on solving the dual of (22). By properly exploiting the decomposability of the objective and the low-rank structure of the dual problem, we develop an iterative optimization method where each gradient step can be computed very efficiently. The presentation here is necessarily brief due to space constraints, but we also include a longer description and an implementation of the method in the supplementary material. For simplicity we present the algorithm ignoring the constraint set C, though we discuss possible additional constraints briefly in the supplementary material.

We begin by forming the Lagrangian of (22), introducing Lagrange multipliers Z ∈ R^{2k×2k} for the constraint F̂(d) ⪰ 0 and ν ∈ R for the constraint 1^T d = 1. This leads to the dual optimization problem

    max_{Z ⪰ 0, ν} min_d { −Σ_{i=1}^m p̂_i log d_i − tr(Z^T F̂(d)) + ν(1^T d − 1) }.    (23)

Treating Z as fixed, we maximize over ν and minimize over d in (23) using an equality-constrained, feasible-start Newton method [2, pg 528]. Since the objective is separable over the d_i's, the Hessian matrix is diagonal, and the Newton step can be computed in O(m) time; furthermore, since we solve this subproblem for each update of the dual variables Z, we can warm-start Newton's method from previous solutions, leading to a number of Newton steps that is virtually constant in practice.

Considering now the maximization over Z, the gradient of

    g(Z) ≡ { −Σ_i p̂_i log d*_i(Z) − tr(Z^T F̂(d*(Z))) + ν*(Z)(1^T d*(Z) − 1) }    (24)
Figure 3: Average approximation error of the TD methods (off-policy TD, off-policy TD-DO, on-policy TD, and the optimal projection), using different numbers of basis functions, for the random Markov chain (left) and diffusion chain (right).

Figure 4: Average approximation error, using off-policy distributions closer to or further from the stationary distribution (see text), for the random Markov chain (left) and diffusion chain (right).

Figure 5: Average approximation error for TD methods computed via sampling, for different numbers of samples, for the random Markov chain (left) and diffusion chain (right).
is given simply by ∇_Z g(Z) = −F̂(d*(Z)). We then exploit the fact that we expect Z to typically be low-rank: by the KKT conditions for a semidefinite program, F̂(d) and Z will have complementary ranks, and since we expect F̂(d) to be nearly full rank at the solution, we factor Z = YY^T for Y ∈ R^{2k×p} with p ≪ k. Although this is now a non-convex problem, local optimization of this objective is still guaranteed to give a global solution to the original semidefinite problem, provided we choose the rank of Y to be sufficient to represent the optimal solution [5]. The gradient of this transformed problem is ∇_Y g(YY^T) = −2F̂(d)Y, which can be computed in time O(mkp) since each F̂_i term is a low-rank matrix, and we optimize the dual objective via an off-the-shelf LBFGS solver [12, 14]. Though it is difficult to bound p a priori, we can check after the solution that our chosen value was sufficient for the global solution, and we have found that very low values (p = 1 or p = 2) were sufficient in our experiments.
5 Experiments
Here we present simple simulation experiments illustrating our proposed approach; while the evaluation is of course small scale, the results highlight the potential of TD-DO to improve TD algorithms
both practically as well as theoretically. Since the benefits of the method are clearest in terms of the mean performance over many different environments, we focus on randomly generated Markov chains of two types: a random chain and a diffusion process chain.⁵

Figure 6: (left) Effect of the number of clusters for sample-based learning on the diffusion chain; (right) performance of the algorithm on the diffusion chain versus the number of LBFGS iterations.
Figure 3 shows the average approximation error of the different algorithms with differing numbers of basis functions, over 1000 domains. In this and all experiments other than those evaluating the effect of sampling, we use the full Φ and P matrices to compute the convex set, so that we are evaluating the performance of the approach in the limit of large numbers of samples. We evaluate the approximation error ‖V̂ − V‖_D where D is the off-policy sampling distribution (so as to be as favorable as possible to off-policy TD). In all cases the TD-DO algorithm improves upon off-policy TD, though the degree of improvement can vary from minor to quite significant.

Figure 4 shows a similar result for varying the closeness of the sampling distribution to the stationary distribution; in our experiments, the off-policy distribution is sampled according to D ∼ Dir(1 + C_μ μ) where μ denotes the stationary distribution. As expected, the off-policy approaches perform similarly for larger C_μ (approaching the stationary distribution), with TD-DO having a clear advantage when the off-policy distribution is far from the stationary distribution.

In Figure 5 we consider the effect of sampling on the algorithms. For these experiments we employ a simple clustering method to compute a distribution over states d that respects the fact that φ(s^(i)) = φ(s^(j)) ⇒ d_i = d_j: we group the sampled states into k clusters via k-means clustering on the feature vectors, and optimize over the reduced distribution d ∈ R^k. In Figure 6 we vary the number of clusters k for the sampled diffusion chain, showing that the algorithm is robust to a large number of different distributional representations; we also show the performance of our method varying the number of LBFGS iterations, illustrating that performance generally improves monotonically.
6 Conclusion
The fundamental idea we have presented in this paper is that by considering a convex subset of off-policy distributions (and one which can be computed efficiently from samples), we can provide performance guarantees for the TD fixed point. While we have focused on presenting error bounds for the analytical (infinite sample) TD fixed point, a huge swath of problems in TD learning arise from this same off-policy issue: the convergence of the original TD method, the ability to find the ℓ₁-regularized TD fixed point [6], the on-policy requirement of the finite sample analysis of LSTD [8], and the convergence of TD-based policy iteration algorithms [7]. Although left for future work, we suspect that the same techniques we present here can also be extended to these other cases, potentially providing a wide range of analogous results that still apply under off-policy sampling.

Acknowledgements. We thank the reviewers for helpful comments and Bruno Scherrer for pointing out a potential improvement to the error bound. J. Zico Kolter is supported by an NSF CI Fellowship.
⁵ Experimental details: For the random Markov chain, rows of P are drawn IID from a Dirichlet distribution, and the reward and bases are random normal, with |S| = 11. For the diffusion-based chain, we sample |S| = 100 points from a 2D unit cube, x_i ∈ [0, 1]², and set p(s' = j | s = i) ∝ exp(−‖x_i − x_j‖²/(2σ²)) for bandwidth σ = 0.4. Similarly, rewards are sampled from a zero-mean Gaussian process with covariance K_ij = exp(−‖x_i − x_j‖²/(2σ²)), and for basis vectors we use the principal eigenvectors of Cov(V) = E[(I − γP)^{-1}RR^T(I − γP)^{-T}] = (I − γP)^{-1}K(I − γP)^{-T}, which are the optimal bases for representing value functions (in expectation). Some details of the domains are omitted due to space constraints, but MATLAB code for all the experiments is included in the supplementary files.
References
[1] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the International Conference on Machine Learning, 1995.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] P. Dayan. The convergence of TD(λ) for general λ. Machine Learning, 8(3-4), 1992.
[4] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6(6), 1994.
[5] M. Journee, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327-2351, 2010.
[6] J. Z. Kolter and A. Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In Proceedings of the International Conference on Machine Learning, 2009.
[7] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[8] A. Lazaric, M. Ghavamzadeh, and R. Munos. Finite-sample analysis of LSTD. In Proceedings of the International Conference on Machine Learning, 2010.
[9] H. R. Maei and R. S. Sutton. GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the Third Conference on Artificial General Intelligence, 2010.
[10] H. R. Maei, Cs. Szepesvari, S. Bhatnagar, and R. S. Sutton. Toward off-policy learning control with function approximation. In Proceedings of the International Conference on Machine Learning, 2010.
[11] R. Munos. Error bounds for approximate policy iteration. In Proceedings of the International Conference on Machine Learning, 2003.
[12] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[13] B. Scherrer. Personal communication, 2011.
[14] M. Schmidt. minFunc, 2005. Available at http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html.
[15] R. S. Sutton, H. R. Maei, D. Precup, S. Bhatnagar, D. Silver, Cs. Szepesvari, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the International Conference on Machine Learning, 2009.
[16] R. S. Sutton, Cs. Szepesvari, and H. R. Maei. A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In Advances in Neural Information Processing Systems, 2008.
[17] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42:674-690, 1997.
[18] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799-1808, 1999.
[19] H. Yu and D. P. Bertsekas. Error bounds for approximations from projected linear equations. Mathematics of Operations Research, 35:306-329, 2010.
NEWTRON: an Efficient Bandit algorithm for Online Multiclass Prediction
Elad Hazan
Department of Industrial Engineering
Technion - Israel Institute of Technology
Haifa 32000 Israel
[email protected]
Satyen Kale
Yahoo! Research
4301 Great America Parkway
Santa Clara, CA 95054
[email protected]
Abstract
We present an efficient algorithm for the problem of online multiclass prediction with bandit feedback in the fully adversarial setting. We measure its regret with respect to the log-loss defined in [AR09], which is parameterized by a scalar α. We prove that the regret of NEWTRON is O(log T) when α is a constant that does not vary with horizon T, and at most O(T^{2/3}) if α is allowed to increase to infinity with T. For α = O(log T), the regret is bounded by O(√T), thus solving the open problem of [KSST08, AR09]. Our algorithm is based on a novel application of the online Newton method [HAK07]. We test our algorithm and show it to perform well in experiments, even when α is a small constant.
1 Introduction
Classification is a fundamental task of machine learning, and is by now well understood in its basic variants. Unlike the well-studied supervised learning setting, in many recent applications (such as recommender systems, ad selection algorithms, etc.) we only obtain limited feedback about the true label of the input (e.g., in recommender systems, we only get feedback on the recommended items). Several such problems can be cast as online, bandit versions of multiclass prediction problems¹. The general framework, called the "contextual bandits" problem [LZ07], is as follows. In each round, the learner receives an input x in some high dimensional feature space (the "context"), and produces an action in response, and obtains an associated reward. The goal is to minimize regret with respect to a reference class of policies specifying actions for each context.

In this paper, we consider the special case of multiclass prediction, which is a fundamental problem in this area introduced by Kakade et al [KSST08]. Here, a learner obtains a feature vector, which is associated with an unknown label y which can take one of k values. Then the learner produces a prediction of the label, ŷ. In response, only 1 bit of information is given: whether the label is correct or incorrect. The goal is to design an efficient algorithm that minimizes regret with respect to a natural reference class of policies: linear predictors. Kakade et al [KSST08] gave an efficient algorithm, dubbed BANDITRON. Their algorithm attains regret of O(T^{2/3}) for a natural multiclass hinge loss, and they ask the question whether a better regret bound is possible. While the EXP4 algorithm [ACBFS03], applied to this setting, has an O(√(T log T)) regret bound, it is highly inefficient, requiring O(T^{n/2}) time per iteration, where n is the dimension of the feature space. Ideally, one would like to match or improve the O(√(T log T)) regret bound of the EXP4 algorithm with an efficient algorithm (for a suitable loss function).

This question has received considerable attention. In COLT 2009, Abernethy and Rakhlin [AR09] formulated the open question precisely as minimizing regret for a suitable loss function in the fully adversarial setting (and even offered a monetary reward for a resolution of the problem). Some special cases have been successfully resolved: the original paper of [KSST08] gives an O(√T) bound in the noiseless large-margin case. More recently, Crammer and Gentile [CG11] gave an O(√(T log T)) regret bound via an efficient algorithm based on the upper confidence bound method under a semi-adversarial assumption on the labels: they are generated stochastically via a specific linear model (with unknown parameters which change over time). Yet the general (fully adversarial) case has been unresolved as of now.

¹ For the basic bandit classification problem see [DHK07, RTB07, DH06, FKM05, AK08, MB04, AHR08].
In this paper we address this question and design a novel algorithm for the fully adversarial setting, with its expected regret measured with respect to the log-loss function defined in [AR09], which is parameterized by a scalar α. When α is a constant independent of T, we get a much stronger guarantee than required by the open problem: the regret is bounded by O(log T). In fact, the regret is bounded by O(√T) even for α = Θ(log T). Our regret bound for larger values of α increases smoothly to a maximum of O(T^{2/3}), matching that of BANDITRON in the worst case.

The algorithm is efficient to implement, and it is based on the online Newton method introduced in [HAK07]; hence we call the new algorithm NEWTRON. We implement the algorithm (and a faster variant, PNEWTRON) and test it on the same data sets used by Kakade et al [KSST08]. The experiments show improved performance over the BANDITRON algorithm, even for α as small as 10.
2 Preliminaries

2.1 Notation
Let [k] denote the set of integers {1, 2, ..., k}, and Δ_k ⊆ R^k the set of distributions on [k].

For R^n, let 1, 0 denote the all-1s and all-0s vectors respectively, and let I denote the identity matrix in R^{n×n}. For two (row or column) vectors v, w ∈ R^n, we denote by v · w their usual inner product, i.e. v · w = Σ_{i=1}^n v_i w_i. We denote by ‖v‖ the ℓ₂ norm of v. For a vector v ∈ R^n, denote by diag(v) the diagonal matrix in R^{n×n} whose i-th diagonal entry equals v_i.

For a matrix W ∈ R^{k×n}, denote by W_1, W_2, ..., W_k its rows, which are (row) vectors in R^n. To avoid defining unnecessary notation, we will interchangeably use W to denote both a matrix in R^{k×n} and a (column) vector in R^{kn}. The vector form of the matrix W is formed by arranging its rows one after the other, and then taking the transpose (i.e., the vector [W_1 | W_2 | ··· | W_k]^T). Thus, for two matrices V and W, V · W denotes their inner product in their vector form. For i ∈ [n] and l ∈ [k], denote by E_il the matrix which has 1 in its (i, l)-th entry, and 0 everywhere else.

For a matrix W, we denote by ‖W‖ the Frobenius norm of W, which is also the usual ℓ₂ norm of the vector form of W, and so the notation is consistent. Also, we denote by ‖W‖₂ the spectral norm of W, i.e. the largest singular value of W.

For two matrices W and V, denote by W ⊗ V their Kronecker product [HJ91]. For two square symmetric matrices W, V of like order, denote by W ⪰ V the fact that W − V is positive semidefinite, i.e. all its eigenvalues are non-negative. A useful fact of the Kronecker product is the following: if W, V are symmetric matrices such that W ⪰ V, and if U is a positive semidefinite symmetric matrix, then W ⊗ U ⪰ V ⊗ U. This follows from the fact that if W, U are both symmetric, positive semidefinite matrices, then so is their Kronecker product W ⊗ U.
2.2 Problem setup

Learning proceeds in rounds. In each round t, for t = 1, 2, ..., T, we are presented a feature vector x_t ∈ X, where X ⊆ R^n, and ‖x‖ ≤ R for all x ∈ X. Here R is some specified constant. Associated with x_t is an unknown label y_t ∈ [k]. We are required to produce a prediction, ŷ_t ∈ [k], as the label of x_t. In response, we obtain only 1 bit of information: whether ŷ_t = y_t or not. In particular, when ŷ_t ≠ y_t, the identity of y_t remains unknown (although one label, ŷ_t, is ruled out).

The learner's hypothesis class is parameterized by matrices W ∈ R^{k×n} with ‖W‖ ≤ D, for some specified constant D. Denote the set of such matrices by K. Given a matrix W ∈ K with the rows W_1, W_2, ..., W_k, the prediction associated with W for x_t is

    ŷ_t = arg max_{i∈[k]} W_i · x_t.
While ideally we would like to minimize the 0-1 loss suffered by the learner, for computational reasons it is preferable to consider convex loss functions. A natural choice, used in Kakade et al [KSST08], is the multiclass hinge loss:

    ℓ(W, (x_t, y_t)) = max_{i∈[k]\{y_t}} [1 − W_{y_t} · x_t + W_i · x_t]₊.

Other suitable loss functions ℓ(·, ·) may also be used. The ultimate goal of the learner is to minimize regret, i.e.

    Regret := Σ_{t=1}^T ℓ(W_t, (x_t, y_t)) − min_{W*∈K} Σ_{t=1}^T ℓ(W*, (x_t, y_t)).
A different loss function was proposed in an open problem by Abernethy and Rakhlin in COLT 2009 [AR09]. We use this loss function in this paper and define it now.

We choose a constant α which parameterizes the loss function. Given a matrix W ∈ K and an example (x, y) ∈ X × [k], define the function P : K × X → Δ_k as

    P(W, x)_i = exp(αW_i · x) / Σ_j exp(αW_j · x).

Now let p = P(W, x). Suppose we make our prediction ŷ_t by sampling from p. A natural loss function for this scheme is the log-loss, defined as follows:

    ℓ(W, (x, y)) = −(1/α) log(p_y) = −(1/α) log( exp(αW_y · x) / Σ_j exp(αW_j · x) )
                 = −W_y · x + (1/α) log( Σ_j exp(αW_j · x) ).

The log-loss is always positive. As α becomes large, this log-loss function has the property that when the prediction given by W for x is correct, it is very close to zero, and when the prediction is incorrect, it is roughly proportional to the margin of the incorrect prediction over the correct one.
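For concreteness, the prediction distribution P(W, x) and the log-loss can be computed as in the following sketch (ours, not the authors' code; the max-subtraction is a standard numerical-stability trick):

```python
import numpy as np

def predict_dist(W, x, alpha):
    """P(W, x): softmax of the k scores alpha * W_i . x."""
    z = alpha * (W @ x)
    z = z - z.max()               # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def log_loss(W, x, y, alpha):
    """The log-loss: -(1/alpha) * log p_y for the true label y."""
    return -np.log(predict_dist(W, x, alpha)[y]) / alpha
```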
The algorithm and its analysis depend upon the gradient and Hessian of the loss function w.r.t. W. The following lemma derives these quantities (proof in the full version). Note that in the following, W is to be interpreted as a vector W ∈ R^{kn}.

Lemma 1. Fix a matrix W ∈ K and an example (x, y) ∈ X × [k], and let p = P(W, x). Then we have

    ∇ℓ(W, (x, y)) = (p − e_y) ⊗ x    and    ∇²ℓ(W, (x, y)) = α(diag(p) − pp^T) ⊗ xx^T.

In the analysis, we need bounds on the smallest non-zero eigenvalue of the (diag(p) − pp^T) factor of the Hessian. Such bounds are given in the full version.² For the sake of the analysis, however, the matrix inequality given in Lemma 2 below suffices. It is given in terms of a parameter β, which is the minimum probability of a label in any distribution P(W, x).

Definition 1. Define β := min_{W∈K, x∈X} min_i P(W, x)_i.

We have the following (loose) bound on β, which follows easily using the fact that |W_i · x| ≤ RD:

    β ≥ exp(−2αRD)/k.    (1)

Lemma 2. Let W ∈ K be any weight matrix, and let H ∈ R^{k×k} be any symmetric matrix such that H1 = 0. Then we have

    ∇²ℓ(W, (x, y)) ⪰ (αβ/‖H‖₂) H ⊗ xx^T.

² Our earlier proof used Cheeger's inequality. We thank an anonymous referee for a simplified proof.
Algorithm 1 NEWTRON. Parameters: η, γ
1: Initialize W'_1 = 0.
2: for t = 1 to T do
3:   Obtain the example x_t.
4:   Let p_t = P(W'_t, x_t), and set p'_t = (1 − γ)·p_t + (γ/k)1.
5:   Output the label ŷ_t by sampling from p'_t. This is equivalent to playing W_t = W'_t with probability (1 − γ), and W_t = 0 with probability γ.
6:   Obtain feedback, i.e. whether ŷ_t = y_t or not.
7:   if ŷ_t = y_t then
8:     Define ∇̃_t := ((1 − p_t(y_t))/p'_t(y_t)) ((1/k)1 − e_{y_t}) ⊗ x_t and β_t := p_t(y_t).
9:   else
10:    Define ∇̃_t := (p_t(ŷ_t)/p'_t(ŷ_t)) (e_{ŷ_t} − (1/k)1) ⊗ x_t and β_t := 1.
11:  end if
12:  Define the cost function

         f_t(W) := ∇̃_t · (W − W'_t) + (1/2) β_t η (∇̃_t · (W − W'_t))².    (2)

13:  Compute

         W'_{t+1} := arg min_{W∈K} Σ_{τ=1}^t f_τ(W) + (1/(2D))‖W‖².    (3)

14: end for
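A rough sketch of one round of the algorithm (ours, not the authors' implementation) is given below; it reuses predict_dist from the earlier sketch and returns the gradient estimate ∇̃_t and β_t that the FTRL solve in step 13 would accumulate. The choice beta = p[y_true] follows our reading of step 8 above.

```python
import numpy as np

def newtron_round(W, x, y_true, alpha, gamma, k, rng):
    """One exploration/estimation round of NEWTRON (steps 3-11).

    Returns the sampled label, the gradient estimate grad_tilde
    (a k x n matrix, the vector form of nabla-tilde_t), and beta_t.
    """
    p = predict_dist(W, x, alpha)            # p_t = P(W'_t, x_t)
    p_expl = (1 - gamma) * p + gamma / k     # p'_t, mixed with uniform
    y_hat = rng.choice(k, p=p_expl)          # sample the prediction
    e = np.zeros(k)
    if y_hat == y_true:                      # feedback: correct
        e[y_true] = 1.0
        g = (1 - p[y_true]) / p_expl[y_true] * (np.ones(k) / k - e)
        beta = p[y_true]                     # beta_t per step 8
    else:                                    # feedback: incorrect
        e[y_hat] = 1.0
        g = p[y_hat] / p_expl[y_hat] * (e - np.ones(k) / k)
        beta = 1.0                           # beta_t per step 10
    grad_tilde = np.outer(g, x)              # Kronecker product with x
    return y_hat, grad_tilde, beta
```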
2.3 The FTAL Lemma

Our algorithm is based on the FTAL algorithm [HAK07]. This algorithm is an online version of the Newton step algorithm in offline optimization. The following lemma specifies the algorithm, specialized to our setting, and gives its regret bound. The proof is in the full version.

Lemma 3. Consider an online convex optimization problem over some convex, compact domain K ⊆ R^n of diameter D with cost functions f_t(w) = (v_t · w − μ_t) + (1/2)λ_t(v_t · w − μ_t)², where the vector v_t ∈ R^n and scalars μ_t, λ_t are chosen by the adversary such that for some known parameters r, a, b, we have ‖v_t‖ ≤ r, λ_t ≥ a, and |λ_t(v_t · w − μ_t)| ≤ b, for all w ∈ K. Then the algorithm that, in round t, plays

    w_t := arg min_{w∈K} Σ_{τ=1}^{t−1} f_τ(w)

has regret bounded by O((nb²/a) log(DraT/b)).
3 The NEWTRON algorithm

Our algorithm for bandit multiclass learning, dubbed NEWTRON, is shown as Algorithm 1 above. In each iteration, we randomly choose a label from the distribution specified by the current weight matrix on the current example, mixed with the uniform distribution over labels as specified by an exploration parameter γ. The parameter γ (which is similar to the exploration parameter used in the EXP3 algorithm of [ACBFS03]) is eventually tuned based on the value of the parameter α in the loss function (see Corollary 5). We then use the observed feedback to construct a quadratic loss function (which is strongly convex) that lower bounds the true loss function in expectation (see Lemma 7) and thus allows us to bound the regret. To do this we construct a randomized estimator ∇̃_t for the gradient of the loss function at the current weight matrix. Furthermore, we also choose a parameter β_t, which is an adjustment factor for the strong convexity of the quadratic loss function, ensuring that its expectation lower bounds the true loss function. Finally, we compute the new weight matrix using a Follow-The-Regularized-Leader strategy, by minimizing the sum of all quadratic loss functions so far with ℓ₂ regularization. As described in [HAK07], this convex program can be solved in quadratic time, plus a projection onto K in the norm induced by the Hessian.
Statement and discussion of main theorem. To simplify notation, define the function ℓ_t : K → R as ℓ_t(W) = ℓ(W, (x_t, y_t)). Let E_t[·] denote the conditional expectation with respect to the σ-field F_t, where F_t is the smallest σ-field with respect to which the predictions ŷ_κ, for κ = 1, 2, ..., t − 1, are measurable.

With this notation, we can state our main theorem giving the regret bound:

Theorem 4. Given α and γ ≤ 1/2, suppose we set η ≤ min{αν/10, 1/(4RD)} in the NEWTRON algorithm, for η = γ log(k)/(20αR²D²). Let ν = max{β/2, γ/k}. The NEWTRON algorithm has the following bound on the expected regret:

    Σ_{t=1}^T E[ℓ_t(W_t)] − ℓ_t(W*) = O( (kn/(νη)) log T + (γ log(k)/α) T ).
Before giving the proof of Theorem 4, we first state a corollary (a simple optimization of parameters, proved in the full version) which shows how γ in Theorem 4 can be set appropriately to get a smooth interpolation between O(log T) and O(T^{2/3}) regret based on the value of α.

Corollary 5. Given α, there is a setting of γ so that the regret of NEWTRON is bounded by

    min{ c (exp(4αRD)/α) log(T),  6cRDT^{2/3} },

where the constant c = O(k³n) is independent of α.
Discussion of the bound. The parameter α is inherent to the log-loss function as defined in [AR09]. Our main result, as given in Corollary 5, which entails logarithmic regret for constant α, contains a constant which depends exponentially on α. Empirically, it seems that α can be set to a small constant, say 10 (see Section 4), and still have good performance.

Note that even when α grows with T, as long as α ≤ (1/(8RD)) log(T), the regret can be bounded as O(cRD√T), thus solving the open problem of [KSST08, AR09] for log-loss functions with this range of α.

We can say something even stronger: our results provide a "safety net". No matter what the value of α is, the regret of our algorithm is never worse than O(T^{2/3}), matching the bound of the BANDITRON algorithm (although the latter holds for the multiclass hinge loss).
Analysis.

Proof. (Theorem 4.) The optimization (3) is essentially running the algorithm from Lemma 3 on K with the cost functions f_t(W), with nk additional initial fictitious cost functions (1/(2D))(E_il · W)² for i ∈ [n] and l ∈ [k]. These fictitious cost functions can be thought of as regularization. While technically these fictitious cost functions are not necessary to prove our regret bound, we include them since this seems to give better experimental performance and only adds a constant to the regret.

We now apply the regret bound of Lemma 3 by estimating the parameters r, a, b. This is a simple technical calculation done in Lemma 6 below, which yields the values r = R/ν, a = νη, b = 1. Hence, the regret bound of Lemma 3 implies that for any W* ∈ K,

    Σ_{t=1}^T f_t(W'_t) − f_t(W*) = O( (kn/(νη)) log T ).

Note that the bound above excludes the fictitious cost functions since they only add a constant additive term to the regret, which is absorbed by the O(log T) term. Similarly, we have also suppressed additive constants arising from the log(DraT/b) term in the regret bound of Lemma 3.

Taking expectation on both sides of the above bound with respect to the randomness in the algorithm, and using the specification (2) of f_t(W), we get

    Σ_t E[ ∇̃_t · (W'_t − W*) − (1/2)β_tη(∇̃_t · (W'_t − W*))² ] = O( (kn/(νη)) log T ).    (4)

By Lemma 7 below, we get that

    ℓ_t(W'_t) − ℓ_t(W*) ≤ E[ ∇̃_t · (W'_t − W*) − (1/2)β_tη(∇̃_t · (W'_t − W*))² ] + 20ηR²D².    (5)

Furthermore, we have

    E[ℓ_t(W_t)] − ℓ_t(W'_t) ≤ γ log(k)/α,    (6)

since W_t = W'_t with probability (1 − γ) and W_t = 0 with probability γ, and ℓ_t(0) = log(k)/α. Plugging (5) and (6) in (4), and using η = γ log(k)/(20αR²D²),

    Σ_{t=1}^T E[ℓ_t(W_t)] − ℓ_t(W*) = O( (kn/(νη)) log T + (γ log(k)/α) T ).
We now state two lemmas that were used in the proof of Theorem 4. The first one (proof in the full
version) obtains parameter settings to use Lemma 3 in Theorem 4.
Lemma 6. Assume η ≤ 1/(4RD) and γ ≤ 1/2. Let ν = max{β/2, γ/k}. Then the following are valid settings for the parameters r, a, b: r = R/ν, a = νη and b = 1.
? , a = ?? and b = 1.
The next lemma shows that in each round, the expected regret of the inner FTAL algorithm with ft
cost functions is larger than the regret of N EWTRON.
1
Lemma 7. For ? = ??
10 + ? and ? ? 2 , we have
? t ? (W0 ? W? ) ? 1 ?t ?(?
? t ? (W0 ? W? ))2 + 20?R2 D2 .
`t (Wt0 ) ? `t (W? ) ? E ?
t
t
2
t
? t ] = (p ? ey ) ? xt ,
Proof. The intuition behind the proof is the following. We show that Et [?
t
>
? t?
? ] = Ht ? xt x> for some
which by Lemma 1 equals ?`t (Wt0 ). Next, we show that Et [?t ?
t
t
matrix Ht s.t. Ht 1 = 0. By upper bounding kHt k, we then show (using Lemma 2) that for any
? ? K we have
?2 `t (?) ?Ht ? xt x>
t .
The stated bound then follows by an application of Taylor?s theorem.
The technical details for the proof are as follows. First, note that
? t ? (W0 ? W? )] = E[?
? t ] ? (W0 ? W? ).
E[?
t
t
t
t
(7)
? t ].
We now compute Et [?
?
?
X
1
?
p
(y
)
1
p
(y)
1
t
t
t
? t ] = ?p0 (yt ) ?
?
1 ? eyt +
p0t (y) ? 0
? ey?t ? 1 ? ? xt
E[?
t
p0t (yt )
k
pt (y)
k
t
y6=yt
= (pt ? eyt ) ? xt .
(8)
Next, we have
? t ? (W0 ? W? ))2 ] = (W0 ? W? )> E[?t ?
? t?
? > ](W0 ? W? ).
E[?t (?
t
t
t
t
t
t
(9)
>
? t?
? ].
We now compute Et [?t ?
t
"
2
>
1 ? pt (yt )
1
1
>
0
?
?
?
1 ? eyt
1 ? eyt
E[?t ?t ?t ] = pt (yt ) ? ?t
p0t (yt )
k
k
t
?
2
>
X
p
(y)
1
1
t
+
p0t (y) ?
? ey ? 1
ey ? 1 ? ? xt x>
t
p0t (y)
k
k
y6=yt
=: Ht ? xt x>
t ,
(10)
6
where Ht is the matrix in the brackets above. We note a few facts about Ht . First, note that
(ey ? k1 1) ? 1 = 0, and so Ht 1 = 0. Next, the spectral norm (i.e. largest eigenvalue) of Ht is
bounded as:
2 X 0
1
ey ? 1 1
2 ? 10,
kHt k2 ?
k1 1 ? eyt
+
pt (y)
k
2
(1 ? ?)
y6=yt
for ? ?
1
2.
Now, for any ? ? K, by Lemma 2, for the specified value of ? we have
??
?2 `t (?)
Ht ? xt x>
t .
10
(11)
Now, by Taylor?s theorem, for some ? on the line segment connecting Wt0 to W? , we have
1
`t (W? )?`t (Wt0 ) = ?`t (Wt0 ) ? (W? ? Wt0 ) + (W? ? Wt0 )> [?2 `t (?)](W? ? Wt0 ),
2
??
1
?
0
? ((pt ? eyt ) ? xt ) ? (W? ? Wt0 ) + (W? ? Wt0 )> [ Ht ? xt x>
t ](W ? Wt ),
2
10
(12)
where the last inequality follows from (11). Finally, we have
1
1
?
0
>
?
0 2
2 2
(W? ? Wt0 )> [?Ht ? xt x>
t ](W ? Wt ) ? ?kHt ? xt xt k2 kW ? Wt k ? 20?R D , (13)
2
2
since kW? ? Wt0 k ? 2D. Adding inequalities (12) and (13), rearranging the result and using (7),
(8), (9), and (10) gives the stated bound.
4 Experiments
While the theoretical regret bound for NEWTRON is O(log T) when α = O(1), the provable constant in the O(·) notation is quite large, leading one to question the practical performance of the algorithm. The main reason for the large constant is that the analysis requires the η parameter to be set extremely small to get the required bounds. In practice, however, one can keep η a tunable parameter and try using larger values. In this section, we give experimental evidence (replicating the experiments of [KSST08]) that shows that the practical performance of the algorithm is quite good for small values of α (like 10), and not too small values of η (like 0.01, 0.0001).
Data sets. We used three data sets from [KSST08]: SYNSEP, SYNNONSEP, and REUTERS4. The first two, SYNSEP and SYNNONSEP, are synthetic data sets, generated according to the description given in [KSST08]. These data sets have the same 10^6 feature vectors with 400 features. There are 9 possible labels. The data set SYNSEP is linearly separable, whereas the data set SYNNONSEP is made inseparable by artificially adding 5% label noise. The REUTERS4 data set is generated from the Reuters RCV1 corpus. There are 673,768 documents in the data set with 4 possible labels, and 346,810 features. Our results are reported by averaging over 10 runs of the algorithm involved.
Algorithms. We implemented the BANDITRON and NEWTRON algorithms.³ The NEWTRON algorithm is significantly slower than BANDITRON due to its quadratic running time. This makes it infeasible for really large data sets like REUTERS4. To surmount this problem, we implemented an approximate version of NEWTRON, called PNEWTRON,⁴ which runs in linear time per iteration and thus has comparable speed to BANDITRON. PNEWTRON does not have the same regret guarantees as NEWTRON, however. To derive PNEWTRON, we can restate NEWTRON equivalently as (see [HAK07]):

W′_t = arg min_{W∈K} (W − W″_t)^⊤ A_t (W − W″_t),

where W″_t = −A_t^{−1} b_t, for A_t = (1/D) I + Σ_{τ=1}^{t−1} η_τ ∇̃_τ ∇̃_τ^⊤ and b_t = Σ_{τ=1}^{t−1} (1 − η_τ ∇̃_τ • W_τ) ∇̃_τ.

PNEWTRON makes the following change, using the diagonal approximation for the Hessian, and usual Euclidean projections:

W′_t = arg min_{W∈K} (W − W″_t)^⊤ (W − W″_t),

where W″_t = −A_t^{−1} b_t, for A_t = (1/D) I + Σ_{τ=1}^{t−1} diag(η_τ ∇̃_τ ∇̃_τ^⊤) and b_t is the same as before, b_t = Σ_{τ=1}^{t−1} (1 − η_τ ∇̃_τ • W_τ) ∇̃_τ.

³ We did not implement the Confidit algorithm of [CG11] since our aim was to consider algorithms in the fully adversarial setting.
⁴ Short for pseudo-NEWTRON. The "P" may be left silent so that it's almost NEWTRON, but not quite.
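To make the diagonal approximation concrete, the following is a minimal sketch of one PNEWTRON iterate as restated above. The function names and the flattened-vector representation of W are our own, and we assume K is the unit ℓ₂ ball (so D = 1), as in the experiments below.

```python
import numpy as np

def pnewtron_update(grads, etas, ws, D=1.0):
    """One PNEWTRON iterate from the history of estimated gradients.

    grads : list of flattened gradient estimates tilde-nabla_tau (each in R^{kn})
    etas  : list of the scalars eta_tau
    ws    : list of the played parameter vectors W_tau (flattened)
    Returns W'_t, the Euclidean projection of -A_t^{-1} b_t onto the unit l2 ball.
    """
    dim = grads[0].shape[0]
    # Diagonal Hessian approximation: A_t = (1/D) I + sum diag(eta g g^T)
    a_diag = np.full(dim, 1.0 / D)
    b = np.zeros(dim)
    for g, eta, w in zip(grads, etas, ws):
        a_diag += eta * g * g                    # diag(eta * g g^T)
        b += (1.0 - eta * g.dot(w)) * g          # b_t accumulates (1 - eta g.W) g
    w_dd = -b / a_diag                           # W''_t = -A_t^{-1} b_t (diagonal solve)
    norm = np.linalg.norm(w_dd)
    return w_dd if norm <= 1.0 else w_dd / norm  # Euclidean projection onto unit ball
```

The diagonal solve replaces the quadratic-time matrix operations of NEWTRON with linear-time work per round, which is what makes the approximation feasible for REUTERS4-scale data.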
Parameter settings. In our experiments, we chose K to be the unit ℓ₂ ball in R^{kn}, so D = 1. We also choose α = 10 for all experiments in the log-loss. For BANDITRON, we chose the value of γ specified in [KSST08]: γ = 0.014, 0.006 and 0.05 for SYNSEP, SYNNONSEP and REUTERS4 respectively. For NEWTRON and PNEWTRON, we chose γ = 0.01, 0.006 and 0.05 respectively. The other parameter for NEWTRON and PNEWTRON, β, was set to the values β = 0.01, 0.01, and 0.0001 respectively. We did not tune any of the parameters α, β and γ for NEWTRON or PNEWTRON.
Evaluation. We evaluated the algorithms in terms of their error rate, i.e. the fraction of prediction mistakes made as a function of time. Experimentally, PNEWTRON has quite similar performance to NEWTRON, but is significantly faster. Figure 1 shows how BANDITRON, NEWTRON and PNEWTRON compare on the SYNNONSEP data set for 10^4 examples.⁵ It can be seen that PNEWTRON has similar behavior to NEWTRON, and is not much worse.

Figure 1: Log-log plots of error rates vs. number of examples for BANDITRON, NEWTRON and PNEWTRON on SYNNONSEP with 10^4 examples.

The rest of the experiments were conducted using only BANDITRON and PNEWTRON. The results are shown in Figure 2. It can be clearly seen that PNEWTRON decreases the error rate much faster than BANDITRON. For the SYNSEP data set, PNEWTRON very rapidly converges to the lowest possible error rate due to setting the exploration parameter γ = 0.01, viz. 0.01 × 8/9 = 0.89%. In comparison, the final error for BANDITRON is 1.91%. For the SYNNONSEP data set, PNEWTRON converges rapidly to its final value of 11.94%. BANDITRON remains at a high error level until about 10^4 examples, and at the very end catches up with and does slightly better than PNEWTRON, ending at 11.47%. For the REUTERS4 data set, both BANDITRON and PNEWTRON decrease the error rate at roughly the same pace; however PNEWTRON still obtains consistently better performance by a few percentage points. In our experiments, the final error rate for PNEWTRON is 13.08%, while that for BANDITRON is 18.10%.
Figure 2: Log-log plots of error rates vs. number of examples for BANDITRON and PNEWTRON on different data sets. Left: SYNSEP. Middle: SYNNONSEP. Right: REUTERS4.
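For readers replicating these measurements, the sketch below shows a generic bandit multiclass evaluation loop of the kind used here: the learner samples its prediction from an exploration-smoothed distribution and observes only whether the sampled label was correct. The helper names are hypothetical, and the exact exploration scheme of each algorithm may differ from this uniform γ-mixing.

```python
import numpy as np

def run_bandit_multiclass(xs, ys, predict_dist, update, gamma, k, rng):
    """Bandit multiclass loop that records the running error rate over time.

    predict_dist(x) -> probability vector p_t over the k labels.
    update(x, y_hat, correct, p_exp) -> learner update from bandit feedback only.
    """
    mistakes, history = 0, []
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        p = predict_dist(x)
        p_exp = (1.0 - gamma) * p + gamma / k     # mix in uniform exploration
        y_hat = rng.choice(k, p=p_exp)            # sample the prediction
        correct = (y_hat == y)                    # only this bit is revealed
        mistakes += (not correct)
        update(x, y_hat, correct, p_exp)
        history.append(mistakes / t)              # running error rate
    return history
```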
5 Future Work

Some interesting questions remain open. Our theoretical guarantee applies only to the quadratic-time NEWTRON algorithm. Is it possible to obtain similar regret guarantees for a linear-time algorithm? Our regret bound has an exponentially large constant, which depends on the loss function's parameters. Does there exist an algorithm with similar regret guarantees but better constants?
⁵ In the interest of reducing running time for NEWTRON, we used a smaller data set.
References

[ACBFS03] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32:48–77, January 2003.
[AHR08] Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, pages 263–274, 2008.
[AK08] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. J. Comput. Syst. Sci., 74(1):97–114, 2008.
[AR09] Jacob Abernethy and Alexander Rakhlin. An efficient bandit algorithm for √T-regret in online multiclass prediction. In COLT, 2009.
[CG11] Koby Crammer and Claudio Gentile. Multiclass classification with bandit feedback using adaptive regularization. In ICML, 2011.
[DH06] Varsha Dani and Thomas P. Hayes. Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In SODA, pages 937–943, 2006.
[DHK07] Varsha Dani, Thomas Hayes, and Sham Kakade. The price of bandit information for online optimization. In NIPS, 2007.
[FKM05] Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA, pages 385–394, 2005.
[HAK07] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[HJ91] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.
[KSST08] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Efficient bandit algorithms for online multiclass prediction. In ICML'08, pages 440–447, 2008.
[LZ07] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS, 2007.
[MB04] H. Brendan McMahan and Avrim Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In COLT, pages 109–123, 2004.
[RTB07] Alexander Rakhlin, Ambuj Tewari, and Peter Bartlett. Closing the gap between bandit and full-information online optimization: High-probability regret bound. Technical Report UCB/EECS-2007-109, EECS Department, University of California, Berkeley, Aug 2007.
Sparse Manifold Clustering and Embedding
René Vidal
Center for Imaging Science
Johns Hopkins University
[email protected]
Ehsan Elhamifar
Center for Imaging Science
Johns Hopkins University
[email protected]
Abstract
We propose an algorithm called Sparse Manifold Clustering and Embedding
(SMCE) for simultaneous clustering and dimensionality reduction of data lying
in multiple nonlinear manifolds. Similar to most dimensionality reduction methods, SMCE finds a small neighborhood around each data point and connects each
point to its neighbors with appropriate weights. The key difference is that SMCE
finds both the neighbors and the weights automatically. This is done by solving
a sparse optimization problem, which encourages selecting nearby points that lie
in the same manifold and approximately span a low-dimensional affine subspace.
The optimal solution encodes information that can be used for clustering and dimensionality reduction using spectral clustering and embedding. Moreover, the
size of the optimal neighborhood of a data point, which can be different for different points, provides an estimate of the dimension of the manifold to which the
point belongs. Experiments demonstrate that our method can effectively handle
multiple manifolds that are very close to each other, manifolds with non-uniform
sampling and holes, as well as estimate the intrinsic dimensions of the manifolds.
1 Introduction

1.1 Manifold Embedding
In many areas of machine learning, pattern recognition, information retrieval and computer vision,
we are confronted with high-dimensional data that lie in or close to a manifold of intrinsically low dimension. In this case, it is important to perform dimensionality reduction, i.e., to find a compact
representation of the data that unravels their few degrees of freedom.
The first step of most dimensionality reduction methods is to build a neighborhood graph by connecting each data point to a fixed number of nearest neighbors or to all points within a certain radius
of the given point. Local methods, such as LLE [1], Hessian LLE [2] and Laplacian eigenmaps
(LEM) [3], try to preserve local relationships among points by learning a set of weights between
each point and its neighbors. Global methods, such as Isomap [4], Semidefinite embedding [5],
Minimum volume embedding [6] and Structure preserving embedding [7], try to preserve local and
global relationships among all data points. Both categories of methods find the low-dimensional representation of the data from a few eigenvectors of a matrix related to the learned weights between
pairs of points.
For both local and global methods, a proper choice of the neighborhood size used to build the
neighborhood graph is critical. Specifically, a small neighborhood size may not capture sufficient
information about the manifold geometry, especially when it is smaller than the intrinsic dimension
of the manifold. On the other hand, a large neighborhood size could violate the principles used to
capture information about the manifold. Moreover, the curvature of the manifold and the density of
the data points may be different in different regions of the manifold, hence using a fixed neighborhood
size may be inappropriate.
1
1.2 Manifold Clustering
In many real-world problems, the data lie in multiple manifolds of possibly different dimensions.
Thus, to find a low-dimensional embedding of the data, one needs to first cluster the data according
to the underlying manifolds and then find a low-dimensional representation for the data in each
cluster. Since the manifolds can be very close to each other and they can have arbitrary dimensions,
curvature and sampling, the manifold clustering and embedding problem is very challenging.
The particular case of clustering data lying in multiple flat manifolds (subspaces) is well studied and
numerous algorithms have been proposed (see e.g., the tutorial [8]). However, such algorithms take
advantage of the global linear relations among data points in the same subspace, hence they cannot handle nonlinear manifolds. Other methods assume that the manifolds have different intrinsic
dimensions and cluster the data according to the dimensions rather than the manifolds themselves
[9, 10, 11, 12, 13]. However, in many real-world problems this assumption is violated. Moreover,
estimating the dimension of a manifold from a point cloud is a very difficult problem on its own.
When manifolds are densely sampled and sufficiently separated, existing dimensionality reduction
algorithms such as LLE can be extended to perform clustering before the dimensionality reduction
step [14, 15, 16]. More precisely, if the size of the neighborhood used to build the similarity graph
is chosen to be small enough not to include points from other manifolds and large enough to capture
the local geometry of the manifold, then the similarity graph will have multiple connected components, one per manifold. Therefore, spectral clustering methods can be employed to separate the
data according to the connected components. However, as we will see later, finding the right neighborhood size is in general difficult, especially when manifolds are close to each other. Moreover, in
some cases one cannot find a neighborhood that contains only points from the same manifold.
1.3 Paper Contributions
In this paper, we propose an algorithm, called SMCE, for simultaneous clustering and embedding
of data lying in multiple manifolds. To do so, we use the geometrically motivated assumption that
for each data point there exists a small neighborhood in which only the points that come from the
same manifold lie approximately in a low-dimensional affine subspace. We propose an optimization
program based on sparse representation to select a few neighbors of each data point that span a
low-dimensional affine subspace passing near that point. As a result, a few nonzero elements of the
solution indicate the points that are on the same manifold, hence they can be used for clustering. In
addition, the weights associated to the chosen neighbors indicate their distances to the given data
point, which can be used for dimensionality reduction. Thus, unlike conventional methods that
first build a neighborhood graph and then extract information from it, our method simultaneously
builds the neighborhood graph and obtains its weights. This leads to successful results even in
challenging situations where the nearest neighbors of a point come from other manifolds. Clustering
and embedding of the data into lower dimensions follows by taking the eigenvectors of the matrix
of weights and its submatrices, which are sparse hence can be stored and be operated on efficiently.
Thanks to the sparse representations obtained by SMCE, the number of neighbors of the data points
in each manifold reflects the intrinsic dimensionality of the underlying manifold. Finally, SMCE
has only one free parameter that, for a large range of variation, results in a stable clustering and
embedding, as the experiments will show. To the best of our knowledge, SMCE is the only algorithm
proposed to date that allows robust automatic selection of neighbors and simultaneous clustering and
dimensionality reduction in a unified manner.
2 Proposed Method
Assume we are given a collection of N data points {x_i ∈ R^D}_{i=1}^N lying in n different manifolds {M_l}_{l=1}^n of intrinsic dimensions {d_l}_{l=1}^n. In this section, we consider the problem of simultaneously clustering the data according
representation of the data points within each cluster.
We approach this problem using a spectral clustering and embedding algorithm. Specifically, we
build a similarity graph whose nodes represent the data points and whose edges represent the similarity between data points. The fundamental challenge is to decide which nodes should be connected
and how. To do clustering, we wish to connect each point to other points from the same manifold. To
2
Figure 1: For x1 ∈ M1, the smallest neighborhood containing points from M1 also contains points from M2. However, only the neighbors in M1 span a 1-dimensional subspace around x1.
do dimensionality reduction, we wish to connect each point to neighboring points with appropriate
weights that reflect the neighborhood information. To simultaneously pursue both goals, we wish to
select neighboring points from the same manifold.
We address this problem by formulating an optimization algorithm based on sparse representation.
The underlying assumption behind the proposed method is that each data point has a small neighborhood in which the minimum number of points that span a low-dimensional affine subspace passing
near that point is given by the points from the same manifold. More precisely:
Assumption 1 For each data point x_i ∈ M_l consider the smallest ball B_i ⊂ R^D that contains the d_l + 1 nearest neighbors of x_i from M_l. Let the neighborhood N_i be the set of all data points in B_i excluding x_i. In general, this neighborhood contains points from M_l as well as other manifolds. We assume that for all i there exists ε ≥ 0 such that the nonzero entries of the sparsest solution of

‖ Σ_{j∈N_i} c_{ij} (x_j − x_i) ‖₂ ≤ ε   and   Σ_{j∈N_i} c_{ij} = 1    (1)

correspond to the d_l + 1 neighbors of x_i from M_l. In other words, among all affine subspaces spanned by subsets of the points {x_j}_{j∈N_i} and passing near x_i up to error ε, the one of lowest dimension has dimension d_l and it is spanned by the d_l + 1 neighbors of x_i from M_l.
In the limiting case of densely sampled data, this affine subspace coincides with the dl -dimensional
tangent space of Ml at xi . To illustrate this, consider the two manifolds shown in Figure 1 and
assume that points x4 , x5 and x6 are closer to x1 than x2 or x3 . Then any small ball centered at
x1 ∈ M1 that contains x2 and x3 will also contain points x4, x5 and x6. In this case, among affine
spans of all possible choices of 2 points in this neighborhood, the one corresponding to x2 and x3
is the closest one to x1 , and is also close to the tangent space of M1 at x1 . On the other hand, the
affine span of any choices of 3 or more data points in the neighborhood always passes through x1 .
However, this requires a linear combination of more than 2 data points.
2.1 Optimization Algorithm
Our goal is to propose a method that selects, for each data point xi , a few neighbors that lie in the
same manifold. If the neighborhood Ni is known and of relatively small size, one can search for the
minimum number of points that satisfy (1). However, Ni is not known a priori and searching for
a few data points in Ni that satisfy (1) becomes more computationally complex as the size of the
neighborhood increases. To tackle this problem, we let the size of the neighborhood be arbitrary.
However, by using a sparse optimization program, we bias the method to select a few data points
that are close to xi and span a low-dimensional affine subspace passing near xi .
Consider a point x_i in the d_l-dimensional manifold M_l and consider the set of points {x_j}_{j≠i}. It follows from Assumption 1 that, among these points, the ones that are neighbors of x_i in M_l span a d_l-dimensional affine subspace of R^D that passes near x_i. In other words,

‖ [x_1 − x_i ··· x_N − x_i] c_i ‖₂ ≤ ε   and   1^⊤ c_i = 1    (2)

has a solution c_i whose d_l + 1 nonzero entries correspond to d_l + 1 neighbors of x_i in M_l.
Notice that after relaxing the size of the neighborhood, the solution ci that uses the minimum number
of data points, i.e., the solution ci with the smallest number of nonzero entries, may no longer be
3
unique. In the example of Figure 1, for instance, a solution of (2) with two nonzero entries can
correspond to an affine combination of x2 and x3 or an affine combination of x2 and xp . To bias
the solutions of (2) to the one that corresponds to the closest neighbors of xi in Ml , we set up an
optimization program whose objective function favors selecting a few neighbors of xi subject to the
constraint in (2), which enforces selecting points that approximately lie in an affine subspace at xi .
Before that, it is important to decouple the goal of selecting a few neighbors from that of spanning
an affine subspace. To do so, we normalize the vectors {x_j − x_i}_{j≠i} and let

X_i ≜ [ (x_1 − x_i)/‖x_1 − x_i‖₂ , … , (x_N − x_i)/‖x_N − x_i‖₂ ] ∈ R^{D×(N−1)}.    (3)

In this way, for a small δ, the locations of the nonzero entries of any solution c_i of ‖X_i c_i‖₂ ≤ δ do not depend on whether the selected points are close to or far from x_i. Now, among all the solutions of ‖X_i c_i‖₂ ≤ δ that satisfy 1^⊤ c_i = 1, we look for the one that uses a few closest neighbors of
xi . To that end, we consider an objective function that penalizes points based on their proximity to
xi . That is, points that are closer to xi get lower penalty than points that are farther away. We thus
consider the following weighted ℓ₁-optimization program

min ‖Q_i c_i‖₁ subject to ‖X_i c_i‖₂ ≤ δ, 1^⊤ c_i = 1,    (4)

where the ℓ₁-norm promotes sparsity of the solution [17] and the proximity inducing matrix Q_i,
which is a positive-definite diagonal matrix, favors selecting points that are close to xi . Note that
the elements of Qi should be chosen such that the points that are closer to xi have smaller weights,
allowing the assignment of nonzero coefficients to them. Conversely, the points that are farther from
xi should have larger weights, favoring the assignment of zero coefficients to them. A simple choice
of the proximity inducing matrix is to select the diagonal elements of Q_i to be ‖x_j − x_i‖₂ / (Σ_{t≠i} ‖x_t − x_i‖₂) ∈ (0, 1]. Also, one can use other types of weights, such as exponential weights exp(‖x_j − x_i‖₂/σ) / (Σ_{t≠i} exp(‖x_t − x_i‖₂/σ)), where σ > 0. However, the former choice of the weights, which is also tuning parameter free, works
very well in practice, as we will show later.
Another optimization program, which is related to (4) by the method of Lagrange multipliers, is

min λ‖Q_i c_i‖₁ + (1/2)‖X_i c_i‖₂²   subject to 1^⊤ c_i = 1,    (5)

where the parameter λ sets the trade-off between the sparsity of the solution and the affine reconstruction error. Notice that this new optimization program, which also prefers sparse solutions, is similar to the Lasso optimization problem [18, 17]. The only modification is the introduction of the affine constraint 1^⊤ c_i = 1. As we will show in the next section, there is a wide range of values of λ for which the optimization program in (5) successfully finds a sparse solution for each point from
neighbors in the same manifold.
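As an illustration, the following sketch solves program (5) for a single point with an off-the-shelf convex solver. It assumes the columns of X are the data points and uses the tuning-parameter-free proximity weights defined above; cvxpy is our choice here, not part of the original work, and the sketch is meant only to make the formulation concrete.

```python
import numpy as np
import cvxpy as cp

def smce_coefficients(X, i, lam=10.0):
    """Solve (5) for point x_i: min lam*||Q c||_1 + 0.5*||X_i c||_2^2, s.t. 1'c = 1."""
    N = X.shape[1]
    diffs = np.delete(X - X[:, [i]], i, axis=1)   # x_j - x_i for all j != i
    norms = np.linalg.norm(diffs, axis=0)
    Xi = diffs / norms                            # normalized directions, eq. (3)
    q = norms / norms.sum()                       # proximity weights in (0, 1]
    c = cp.Variable(N - 1)
    objective = cp.Minimize(lam * cp.norm1(cp.multiply(q, c))
                            + 0.5 * cp.sum_squares(Xi @ c))
    cp.Problem(objective, [cp.sum(c) == 1]).solve()
    return c.value, norms
```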
Notice that, in sharp contrast to the nearest neighbors-based methods, which first fix the number
of neighbors or the neighborhood radius and then compute the weights between points in each
neighborhood, we do the two steps at the same time. In other words, the optimization programs (4)
and (5) automatically choose a few neighbors of the given data point, which approximately span
a low-dimensional affine subspace at that point. In addition, by the definition of Qi and X i , the
solutions of the optimization programs (4) and (5) are invariant with respect to a global rotation,
translation, and scaling of the data points.
2.2 Clustering and Dimensionality Reduction
By solving the proposed optimization programs for each data point, we obtain the necessary
information for clustering and dimensionality reduction. This is because the solution c_i^⊤ ≜ [c_{i1} ··· c_{iN}] of the proposed optimization programs satisfies

Σ_{j≠i} (c_{ij}/‖x_j − x_i‖₂) (x_j − x_i) ≈ 0.    (6)

Hence, we can rewrite x_i ≈ [x_1 x_2 ··· x_N] w_i, where the weight vector w_i^⊤ ≜ [w_{i1} ··· w_{iN}] ∈ R^N associated to the i-th data point is defined as

w_{ii} ≜ 0,   w_{ij} ≜ (c_{ij}/‖x_j − x_i‖₂) / (Σ_{t≠i} c_{it}/‖x_t − x_i‖₂),  j ≠ i.    (7)
The indices of the few nonzero elements of wi , ideally, correspond to neighbors of xi in the same
manifold and their values indicate their (inverse) distances to xi .
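A small sketch of how the coefficients c_i and pairwise distances translate into the weight matrix of (7); the symmetrization follows footnote 1 below, and the function name is ours.

```python
import numpy as np

def smce_weights(C, dists):
    """Build the weight matrix W of eq. (7) from SMCE coefficients.

    C     : N x N matrix with C[i, j] = c_ij (and C[i, i] = 0)
    dists : N x N matrix with dists[i, j] = ||x_j - x_i||_2
    """
    N = C.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        mask = np.arange(N) != i
        ratios = np.zeros(N)
        ratios[mask] = C[i, mask] / dists[i, mask]   # c_ij / ||x_j - x_i||
        denom = ratios.sum()
        if denom != 0:
            W[i] = ratios / denom                    # normalize as in eq. (7)
    return np.maximum(np.abs(W), np.abs(W).T)        # symmetrize: max(W, W^T)
```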
Next, we use the weights wi to perform clustering and dimensionality reduction. We do so by
building a similarity graph G = (V, E) whose nodes represent the data points. We connect each
node i, corresponding to xi , to the node j, corresponding to xj , with an edge whose weight is equal
to |wij |. While, potentially, every node can get connected to all other nodes, because of the sparsity
of wi , each node i connects itself to only a few other nodes that correspond to the neighbors of xi
in the same manifold. We call such neighbors as sparse neighbors. In addition, the distances of the
sparse neighbors to xi are reflected in the weights |wij |.
The similarity graph built in this way has ideally several connected components, where points in
the same manifold are connected to each other and there is no connection between two points in
different manifolds. In other words, the similarity matrix of the graph has ideally the following form
W ≜ [ |w_1| ··· |w_N| ] = Γ [ W[1]  0  ···  0 ;  0  W[2]  ···  0 ;  ⋮  ⋮  ⋱  ⋮ ;  0  0  ···  W[n] ] Γ^⊤,    (8)

where W[l] is the similarity matrix of the data points in M_l and Γ ∈ R^{N×N} is an unknown
permutation matrix. Clustering of the data follows by applying spectral clustering [19] to W.¹ One
can also determine the number of connected components by analyzing the eigenspectrum of the
Laplacian matrix [20].
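As a sketch of this step, assuming W has already been symmetrized, spectral clustering can be carried out with standard tools. The normalized Laplacian and k-means post-processing below follow the usual recipe of [19, 20]; the exact variant used here may differ.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def spectral_cluster(W, n_clusters):
    """Cluster points from the symmetric, nonnegative similarity matrix W."""
    L = laplacian(W, normed=True)                   # normalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    U = eigvecs[:, :n_clusters]                     # bottom eigenvectors span clusters
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```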
Any of the existing dimensionality reduction techniques can be applied to the data in each cluster to
obtain a low-dimensional representation of the data in the corresponding manifold. However, this
would require new computation of neighborhoods and weights. On the other hand, the similarity
graph built by our method has a locality preserving property by the definition of the weights. Thus,
we can use the adjacency matrix, W [i], of the i-th cluster as a similarity between points in the
corresponding manifold and obtain a low-dimensional embedding of the data by taking the last few
eigenvectors of the normalized Laplacian matrix associated to W [i] [3]. Note that there are other
ways for inferring the low-dimensional embedding of the data in each cluster along the line of [21]
and [1] which is beyond the scope of the current paper.
2.3 Intrinsic Dimension Information
An advantage of the proposed sparse optimization algorithm is that it provides information about the intrinsic dimension of the manifolds. This comes from the fact that a data point x_i ∈ M_l and its neighbors in M_l lie approximately in the d_l-dimensional tangent space of M_l at x_i. Since d_l + 1 vectors in this tangent space are linearly dependent, the solution c_i of the proposed optimization programs is expected to have d_l + 1 nonzero elements. As a result, we can obtain information about the intrinsic dimension of the manifolds in the following way. Let Ω_l denote the set of indices of points that belong to the l-th cluster. For each point in Ω_l, we sort the elements of |c_i| from the largest to the smallest and denote the new vector as c_{s,i}. We define the median sparse coefficient vector of the l-th cluster as

msc(l) = median{c_{s,i}}_{i∈Ω_l},    (9)

whose j-th element is computed as the median of the j-th elements of the vectors {c_{s,i}}_{i∈Ω_l}. Thus, the number of nonzero elements of msc(l) or, more practically, the number of elements with relatively high magnitude, gives an estimate of the intrinsic dimension of the l-th manifold plus one.²
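The msc vector of (9) is straightforward to compute once cluster labels are available; a minimal sketch (names are ours):

```python
import numpy as np

def median_sparse_coefficient(C, labels, l):
    """msc(l) of eq. (9): median of sorted |c_i| over points assigned to cluster l."""
    idx = np.where(labels == l)[0]
    sorted_coeffs = np.sort(np.abs(C[idx]), axis=1)[:, ::-1]   # descending per point
    return np.median(sorted_coeffs, axis=0)
```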
An advantage of our method is that it allows us to have a different neighborhood size for each data
point, depending on the local dimension of its underlying manifold at that point. For example, in the
case of two manifolds of dimensions d1 = 2 and d2 = 30, for data points in the l-th manifold we
automatically obtain solutions with dl + 1 nonzero elements. On the other hand, methods that fix
the number of neighbors fall into trouble because the number of neighbors would be too small for
one manifold or too large for the other manifold.
¹ Note that a symmetric adjacency matrix can be obtained by taking W = max(W, W^⊤).
² One can also use the mean of the sorted coefficients in each cluster to compute the dimension of each manifold. However, we prefer to use the median for robustness reasons.
Figure 2: Top: embedding of a punctured sphere and the msc vectors obtained by SMCE for different values of λ (λ = 0.1, 1, 10, 100). Bottom: embedding obtained by LLE and LEM for different values of K (K = 5, 20).
Figure 3: Clustering and embedding for two trefoil-knots. Left: original manifolds. Middle: embedding and
msc vectors obtained by SMCE. Right: clustering and embedding obtained by LLE.
3 Experiments
In this section, we evaluate the performance of SMCE on a number of synthetic and real experiments.
For all the experiments, we use the optimization program (5), where we typically set λ = 10. However, the clustering and embedding results obtained by SMCE are stable for λ ∈ [1, 200]. Since the weighted ℓ₁-optimization does not select the points that are very far from the given point, we consider only L < N − 1 neighbors of each data point in the optimization program, where we
typically set L = N/10. As in the case of nearest neighbors-based methods, there is no guarantee
that the points in the same manifold form a single connected component of the similarity graph built
by SMCE. However, this has always been the case in our experiments, as we will show next.
3.1 Experiments with Synthetic Data
Manifold Embedding. We first evaluate SMCE for the dimensionality reduction task only. We
sample N = 1,000 data points from a 2-sphere, where a neighborhood of its north pole is excluded. We then embed the data in R^100, add small Gaussian white noise to it and apply SMCE for λ ∈ {0.1, 1, 10, 100}. Figure 2 shows the embedding results of SMCE in a 2-dimensional Euclidean space. The three large elements of the msc vector for different values of λ correctly reflect the fact that the sphere has dimension two. However, note that for very large values of λ the performance
of the embedding degrades since we put more emphasis on the sparsity of the solution. The results
in the bottom of Figure 2 show the embeddings obtained by LLE and LEM for K = 5 and K =
20 nearest neighbors. Notice that, for K = 20, nearest neighbor-based methods obtain similar
embedding results to those of SMCE, while for K = 5 they obtain poor embedding results. This
suggests that the principle used by SMCE to select the neighbors is very effective: it chooses very
few neighbors that are very informative for dimensionality reduction.
Manifold Clustering and Embedding. Next, we consider the challenging case where the manifolds are close to each other. We consider two trefoil-knots, shown in Figure 3, which are embedded
in R^100 and are corrupted with small Gaussian white noise. The data points are sampled such that
among the 2 nearest neighbors of 1% of the data points there are points from the other manifold.
Also, among the 3 and 5 nearest neighbors of 9% and 18% of the data points, respectively, there
are points from the other manifold. For such points, the nearest neighbors-based methods will connect them to nearby points in the other manifold and assign large weights to the connection. As a
result, these methods cannot obtain a proper clustering or a successful embedding. Table 1 shows
the misclassification rates of LLE and LEM for different number of nearest neighbors K as well as
the misclassification rates of SMCE for different values of λ. While there is no K for which we can successfully cluster the data using LLE and LEM, for a wide range of λ, SMCE obtains a perfect clustering. Figure 3 shows the results of SMCE for λ = 10 and LLE for K = 3. As the results
Table 1: Misclassification rates for LLE and LEM as a function of K and for SMCE as a function of λ.

K      2      3      4      5      6      8      10
LLE    15.5%  9.5%   16.5%  13.5%  16.5%  37.5%  38.5%
LEM    15.5%  13.5%  17.5%  14.5%  28.5%  28.5%  13.5%

λ      0.1    1      10     50     70     100    200
SMCE   15.5%  6.0%   0.0%   0.0%   0.0%   0.0%   0.0%
Table 2: Percentage of data points whose K nearest neighbors contain points from the other manifold.
K            1      2      3      4      7      10
Percentage   3.9%   10.2%  23.4%  35.2%  57.0%  64.8%
show, enforcing that the neighbors of a point from the same manifold span a low-dimensional affine
subspace helps to select neighbors from the correct manifold and not from the other manifolds. This
results in successful clustering and embedding of the data as well as unraveling the dimensions of
the manifolds. On the other hand, the fact that LLE and LEM choose wrong neighbors results in a
low quality embedding.
3.2 Experiments with Real Data
In this section, we examine the performance of SMCE on real datasets. We show that challenges
such as manifold proximity and non-uniform sampling are also common in real data sets, and that
our algorithm is able to handle these issues effectively.
First, we consider the problem of clustering and embedding of face images of two different subjects
from the Extended Yale B database [22]. Each subject has 64 images of 192 × 168 pixels captured under a fixed pose and expression and with varying illuminations. By applying SMCE with λ = 10 on almost 33,000-dimensional vectorized faces, we obtain a misclassification rate of 2.34%,
which corresponds to wrongly clustering 3 out of the 128 data points. Figure 4, top row, shows the
embeddings obtained by SMCE, LLE and LEM for the whole data prior to clustering. Only SMCE
reasonably separates the low-dimensional representation of face images according to the subjects.
Note that in this experiment, the space of face images under varying illumination is not densely
sampled and in addition the two manifolds are very close to each other. Table 2 shows the percentage
of points in the dataset whose K nearest neighbors contain points from the other manifold. As the
table shows, there are several points whose closest neighbor comes from the other manifold. Beside
the embedding of each method in Figure 4 (top row), we have shown the coefficient vector of a
data point in M1 whose closest neighbor comes from M2 . While nearest-neighbor-based methods
pick the wrong neighbors with strong weights, SMCE successfully selects sparse neighbors from the
correct manifold. The plots in the bottom of Figure 4 show the embedding obtained by SMCE for
each cluster. As we move along the horizontal axis, the direction of the light source changes from
left to right, while as we move along the vertical axis, the overall darkness of the images changes
from light to dark. Also, the msc vectors suggest a 2-dimensionality of the face manifolds, correctly
reflecting the number of degrees of freedom of the light sourceEmbedding
on the via
illumination
rig, which is a
SMCE
sphere in R3 .
Next, we consider the dimensionality
reduction of the images in the Frey
face dataset, which consists of 1965
face images captured under varying
pose and expression. Each image is
vectorized as a 560 element vector of
pixel intensities. Figure 5 shows the
two-dimensional embedding obtained by
SMCE. Note that the low-dimensional
representation captures well the left to
right pose variations in the horizontal
axis and the expression changes in the
vertical axis.
Figure 5: 2-D embedding of Frey face data using SMCE.
Figure 4: Clustering and embedding of two faces. Top: 2-D embedding obtained by SMCE, LLE and LEM.
The weights associated to a data point from the first subject are shown beside the embedding. Bottom: SMCE
embedding and msc vectors.
Figure 6: Clustering and embedding of five digits from the MNIST dataset. Left: 2-D embedding obtained by
SMCE for five digits {0, 3, 4, 6, 7}. Middle: 2-D embedding of the data in the first cluster that corresponds to
digit 3. Right: 2-D embedding of the data in the second cluster that corresponds to digit 6.
Finally, we consider the clustering and dimensionality reduction of the digits from the MNIST test
database [23]. We use the images from five digits {0, 3, 4, 6, 7} in the dataset where we randomly
select 200 data points from each digit. The left plot in Figure 6 shows the joint embedding of the
whole data using SMCE. One can see that the data are reasonably well separated according to their
classes. The middle and the right plots in Figure 6, show the two-dimensional embedding obtained
by SMCE for two data clusters, which correspond to the digits 3 and 6.
4 Discussion
We proposed a new algorithm based on sparse representation for simultaneous clustering and dimensionality reduction of data lying in multiple manifolds. We used the solution of a sparse optimization
program to build a similarity graph from which we obtained clustering and low-dimensional embedding of the data. The sparse representation of each data point ideally encodes information that can
be used for inferring the dimensionality of the underlying manifold around that point. Finding robust methods for estimating the intrinsic dimension of the manifolds from the sparse coefficients and
investigating theoretical guarantees under which SMCE works is the subject of our future research.
Acknowledgment
This work was partially supported by grants NSF CNS-0931805, NSF ECCS-0941463 and NSF
OIA-0941362.
References

[1] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[2] D. Donoho and C. Grimes, "Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data," National Academy of Sciences, vol. 100, no. 10, pp. 5591–5596, 2003.
[3] M. Belkin and P. Niyogi, "Laplacian eigenmaps and spectral techniques for embedding and clustering," in Neural Information Processing Systems, 2002, pp. 585–591.
[4] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319–2323, 2000.
[5] K. Q. Weinberger and L. Saul, "Unsupervised learning of image manifolds by semidefinite programming," in IEEE Conference on Computer Vision and Pattern Recognition, 2004, pp. 988–995.
[6] B. Shaw and T. Jebara, "Minimum volume embedding," in Artificial Intelligence and Statistics, 2007.
[7] ——, "Structure preserving embedding," in International Conference on Machine Learning, 2009.
[8] R. Vidal, "Subspace clustering," Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
[9] D. Barbará and P. Chen, "Using the fractal dimension to cluster datasets," in KDD '00: Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, 2000, pp. 260–264.
[10] P. Mordohai and G. G. Medioni, "Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting," in International Joint Conference on Artificial Intelligence, 2005, pp. 798–803.
[11] A. Gionis, A. Hinneburg, S. Papadimitriou, and P. Tsaparas, "Dimension induced clustering," in KDD '05: Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, 2005, pp. 51–60.
[12] E. Levina and P. J. Bickel, "Maximum likelihood estimation of intrinsic dimension," in NIPS, 2004.
[13] G. Haro, G. Randall, and G. Sapiro, "Translated poisson mixture model for stratification learning," International Journal of Computer Vision, 2008.
[14] M. Polito and P. Perona, "Grouping and dimensionality reduction by locally linear embedding," in Neural Information Processing Systems, 2002.
[15] A. Goh and R. Vidal, "Segmenting motions of different types by unsupervised manifold clustering," in IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[16] ——, "Clustering and dimensionality reduction on Riemannian manifolds," in IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[17] D. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Trans. Information Theory, vol. 47, no. 7, pp. 2845–2862, Nov. 2001.
[18] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, no. 1, pp. 267–288, 1996.
[19] A. Ng, Y. Weiss, and M. Jordan, "On spectral clustering: analysis and an algorithm," in Neural Information Processing Systems, 2001, pp. 849–856.
[20] U. von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, 2007.
[21] Z. Zhang and H. Zha, "Principal manifolds and nonlinear dimensionality reduction via tangent space alignment," SIAM J. Sci. Comput., vol. 26, no. 1, pp. 313–338, 2005.
[22] K.-C. Lee, J. Ho, and D. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, 1998, pp. 2278–2324.
Distributed Delayed Stochastic Optimization
Alekh Agarwal
John C. Duchi
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720
{alekh,jduchi}@eecs.berkeley.edu
Abstract
We analyze the convergence of gradient-based optimization algorithms
whose updates depend on delayed stochastic gradient information. The
main application of our results is to the development of distributed minimization algorithms where a master node performs parameter updates while
worker nodes compute stochastic gradients based on local information in
parallel, which may give rise to delays due to asynchrony. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible. In application to distributed optimization, we show
n-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as O(1/√(nT)), which
is known to be optimal even in the absence of delays.
1 Introduction
We focus on stochastic convex optimization problems of the form

minimize_{x∈X} f(x)   for   f(x) := E_P[F(x; ξ)] = ∫_Ξ F(x; ξ) dP(ξ),    (1)

where X ⊆ R^d is a closed convex set, P is a probability distribution over Ξ, and F(· ; ξ) is convex for all ξ ∈ Ξ, so that f is convex. Classical stochastic gradient algorithms [18, 16] iteratively update a parameter x(t) ∈ X by sampling ξ ∼ P, computing g(t) = ∇F(x(t); ξ), and performing the update x(t + 1) = Π_X(x(t) − α(t)g(t)), where Π_X denotes projection onto the set X and α(t) ∈ R is a stepsize. In this paper, we analyze asynchronous gradient
methods, where instead of receiving current information g(t), the procedure receives out
of date gradients g(t − τ(t)) = ∇F(x(t − τ(t)); ξ), where τ(t) is the (potentially random)
delay at time t. The central contribution of this paper is to develop algorithms that, under natural assumptions about the functions F in the objective (1), achieve asymptotically
optimal convergence rates for stochastic convex optimization in spite of delays.
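To make the delayed-gradient model concrete, the following sketch simulates a projected stochastic gradient iteration in which the update at time t uses a gradient computed τ steps earlier. The queue stands in for the asynchronous workers of Figure 1 below, and all names are illustrative rather than part of the paper's algorithms.

```python
import numpy as np
from collections import deque

def delayed_sgd(grad_fn, project, x0, steps, delay, step_size):
    """Projected stochastic gradient descent with a fixed gradient delay tau.

    grad_fn(x)   -> a stochastic gradient of F at x (a sample of grad F(x; xi))
    project(x)   -> Euclidean projection onto the feasible set X
    step_size(t) -> the stepsize alpha(t)
    """
    x = np.array(x0, dtype=float)
    pending = deque()                      # gradients still "in flight"
    for t in range(steps):
        pending.append(grad_fn(x))        # a worker starts computing at current x
        if len(pending) > delay:          # the gradient from time t - delay arrives
            g = pending.popleft()
            x = project(x - step_size(t) * g)
    return x
```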
Our model of delayed gradient information is particularly relevant in distributed optimization scenarios, where a master maintains the parameters x while workers compute stochastic
gradients of the objective (1) using a local subset of the data. Master-worker architectures
are natural for distributed computation, and other researchers have considered models similar to those in this paper [12, 10]. By allowing delayed and asynchronous updates, we can
avoid synchronization issues that commonly handicap distributed systems.
Distributed optimization has been studied for several decades, tracing back at least to
seminal work of Bertsekas and Tsitsiklis ([3, 19, 4]) on asynchronous computation and
minimization of smooth functions where the parameter vector is distributed. More recent
work has studied problems in which each processor or node i in a network has a local function f_i, and the goal is to minimize the sum f(x) = (1/n) Σ_{i=1}^n f_i(x) [12, 13, 17, 7]. Our
work is closest to Nedić et al.'s asynchronous incremental subgradient method [12], who
Figure 1: Cyclic delayed update architecture. Workers compute gradients in parallel, passing out-of-date (stochastic) gradients g_i(t − τ) = ∇f_i(x(t − τ)) to the master. The master responds with current parameters. The diagram shows parameters and gradients communicated between rounds t and t + n − 1.
analyze gradient projection steps taken using out-of-date gradients. See Figure 1 for an
illustration. Nedić et al. prove that the procedure converges, and a slight extension of their results shows that the optimization error of the procedure after T iterations is at most O(√(τ/T)), τ being the delay in gradients. Without delay, a centralized stochastic gradient algorithm attains convergence rate O(1/√T). All the approaches mentioned above give
slower convergence than this centralized rate in distributed settings, paying a penalty for
data being split across a network; as Dekel et al. [5] note, one would expect that parallel
computation actually speeds convergence. Langford et al. [10] also study asynchronous
methods in the setup of stochastic optimization and attempt to remove the penalty for the
delayed procedure under an additional smoothness assumption; however, their paper has a
technical error (see the long version [2] for details). The main contributions of our paper
are (1) to remove the delay penalty for smooth functions and (2) to demonstrate benefits
in convergence rate by leveraging parallel computation even in spite of delays.
We build on results of Dekel et al. [5], who give reductions of stochastic optimization algorithms (e.g. [8, 9]) to show that for smooth objectives f, when n processors compute
stochastic gradients in parallel using a common parameter x it is possible to achieve convergence rate O(1/√(Tn)). The rate holds so long as most processors remain synchronized for
most of the time [6]. We show similar results, but we analyze the effects of asynchronous gradient updates where all the nodes in the network can suffer delays, quantifying the impact of
the delays. In application to distributed optimization, we show that under different network
assumptions, we achieve convergence rates ranging from O(min{n^3/T, (n/T)^{2/3}} + 1/√(Tn))
to O(min{n/T, 1/T^{2/3}} + 1/√(Tn)), which is O(1/√(nT)) asymptotically in T. The time necessary to achieve an ε-optimal solution to the problem (1) is asymptotically O(1/(nε^2)), a factor
of n (the size of the network) better than a centralized procedure in spite of delay. Proofs of
our results can be found in the long version of this paper [2].
Notation We denote a general norm by ‖·‖, and its associated dual norm ‖·‖_* is defined as ‖z‖_* := sup_{x : ‖x‖≤1} ⟨z, x⟩. The subdifferential set of a function f at a point x is
∂f(x) := {g ∈ R^d | f(y) ≥ f(x) + ⟨g, y − x⟩ for all y ∈ dom f}. A function f is G-Lipschitz
w.r.t. the norm ‖·‖ on X if ∀x, y ∈ X, |f(x) − f(y)| ≤ G‖x − y‖, and f is L-smooth on X if

    ‖∇f(x) − ∇f(y)‖_* ≤ L‖x − y‖,   equivalently,   f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖x − y‖^2.

A convex function h is c-strongly convex with respect to a norm ‖·‖ over X if

    h(y) ≥ h(x) + ⟨g, y − x⟩ + (c/2)‖x − y‖^2   for all x, y ∈ X and g ∈ ∂h(x).        (2)
2 Setup and Algorithms
To build intuition for the algorithms we analyze, we first describe the delay-free algorithm
underlying our approach: the dual averaging algorithm of Nesterov [15].¹ The dual averaging
algorithm is based on a strongly convex proximal function ψ(x); we assume without loss of generality
that ψ(x) ≥ 0 for all x ∈ X and (by scaling) that ψ is 1-strongly convex.
¹ Essentially identical results to those we present here also hold for extensions of mirror descent [14], but we omit these for lack of space.
At time t, the algorithm updates a dual vector z(t) and primal vector x(t) ∈ X using a
subgradient g(t) ∈ ∂F(x(t); ξ(t)), where ξ(t) is drawn i.i.d. according to P:

    z(t + 1) = z(t) + g(t)   and   x(t + 1) = argmin_{x∈X} { ⟨z(t + 1), x⟩ + (1/α(t + 1)) ψ(x) }.        (3)
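For the common choice ψ(x) = (1/2)‖x‖_2^2, the argmin in (3) has a closed form: it is the Euclidean projection of −α(t + 1)z(t + 1) onto X. A sketch of one step, with an ℓ2-ball constraint as an assumed example of X:

import numpy as np

def dual_averaging_step(z, g, t, R=1.0, G=1.0):
    # One step of update (3) with psi(x) = 0.5*||x||_2^2 and
    # X = {x : ||x||_2 <= R}; alpha(t) = R/(G*sqrt(t)) as in the text.
    z = z + g                              # z(t+1) = z(t) + g(t)
    alpha = R / (G * np.sqrt(t + 1))
    x = -alpha * z                         # unconstrained minimizer of <z,x> + psi(x)/alpha
    nrm = np.linalg.norm(x)
    if nrm > R:                            # project back onto the constraint set
        x = (R / nrm) * x
    return z, x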
For the remainder of the paper, we will use the following three essentially standard assumptions [8, 9, 20] about the stochastic optimization problem (1).
Assumption I (Lipschitz Functions). For P-a.e. ξ, the function F(·; ξ) is convex. Moreover, for any x ∈ X and v ∈ ∂F(x; ξ), E[‖v‖_*^2] ≤ G^2.
Assumption II (Smooth Functions). The expected function f has L-Lipschitz continuous gradient, and for all x ∈ X the variance bound E[‖∇f(x) − ∇F(x; ξ)‖_*^2] ≤ σ^2 holds.
Assumption III (Compactness). For all x ∈ X, ψ(x) ≤ R^2/2.
Several commonly used functions satisfy the above assumptions, for example:
(i) The logistic loss: F(x; ξ) = log[1 + exp(⟨x, ξ⟩)]. The objective F satisfies Assumptions I and II so long as ‖ξ‖_* has finite second moment.
(ii) Least squares: F(x; ξ) = (a − ⟨x, b⟩)^2, where ξ = (a, b) for a ∈ R and b ∈ R^d, satisfies Assumptions I and II if X is compact and ‖ξ‖_* has finite fourth moment.
Under Assumption III, Assumptions I and II imply finite-sample convergence rates for the
update (3). Define the time-averaged vector x̂(T) := (1/T) Σ_{t=1}^T x(t + 1). Under Assumption I,
dual averaging satisfies E[f(x̂(T))] − f(x*) = O(RG/√T) for the stepsize choice α(t) = R/(G√t) [15, 20]. The result is sharp to constant factors [14, 1], but can be further improved
using Assumption II. Building on work of Juditsky et al. [8] and Lan [9], Dekel et al. [5,
Appendix A] show that the stepsize choice α(t)^{-1} = L + (σ/R)√t yields the convergence rate

    E[f(x̂(T))] − f(x*) = O( LR^2/T + σR/√T ).        (4)
Delayed Optimization Algorithms We now turn to extending the dual averaging update (3)
to the setting in which, instead of receiving a current gradient g(t) at time t, the
procedure receives a gradient g(t − τ(t)), that is, a stochastic gradient of the objective (1)
computed at the point x(t − τ(t)). Our analysis admits any sequence τ(t) of delays as long
as the mapping t ↦ t − τ(t) is one-to-one and satisfies E[τ(t)^2] ≤ B^2 < ∞.
We consider the dual averaging algorithm with g(t) replaced by g(t − τ(t)):

    z(t + 1) = z(t) + g(t − τ(t))   and   x(t + 1) = argmin_{x∈X} { ⟨z(t + 1), x⟩ + (1/α(t + 1)) ψ(x) }.        (5)

By combining the techniques Nedić et al. [12] developed with the convergence proofs of dual
averaging [15], it is possible to show that so long as E[τ(t)] ≤ B < ∞ for all t, Assumptions
I and III and the stepsize choice α(t) = R/(G√(Bt)) give E[f(x̂(T))] − f(x*) = O(RG√(B/T)). In
the next section we show how to avoid the √B penalty.
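In code, update (5) is update (3) with a buffer of in-flight gradients: only the gradient computed τ steps ago is folded into z. A minimal simulation with a fixed delay and the stepsize of Theorem 1 below; the objective and constants are assumed for illustration.

import numpy as np
from collections import deque

rng = np.random.default_rng(1)
d, T, tau, R, L_smooth, sigma = 5, 5000, 8, 10.0, 1.0, 1.0
x_star = rng.normal(size=d)

z, x = np.zeros(d), np.zeros(d)
pending = deque()
for t in range(1, T + 1):
    pending.append(x - (x_star + rng.normal(size=d)))   # gradient at current x
    if len(pending) > tau:
        g = pending.popleft()                           # arrives tau steps later: g(t - tau)
        z = z + g
        eta_t = (sigma / R) * np.sqrt(t)                # eta(t) = eta*sqrt(t) with eta = sigma/R
        x = -z / (L_smooth + eta_t)                     # alpha(t)^{-1} = L + eta(t)
        nrm = np.linalg.norm(x)
        if nrm > R:
            x *= R / nrm

print("||x - x_star|| =", np.linalg.norm(x - x_star))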
3 Convergence rates for delayed optimization of smooth functions
We now state and discuss the implications of two results for asynchronous stochastic gradient
methods. Our first convergence result is for the update rule (5), while the second averages
several stochastic subgradients for every update, each with a potentially different delay.
3.1 Simple delayed optimization
Our focus in this section is to remove the √B penalty for the delayed update rule (5) using
Assumption II; the penalty arises for non-smooth optimization because subgradients can vary
drastically even when measured at nearby points. We show that under the smoothness condition, the errors from delay become second order: the penalty is asymptotically negligible.
Theorem 1. Let x(t) be defined by the update (5). Define α(t)^{-1} = L + η(t), where
η(t) = η√t or η(t) ≡ η√T for all t. The average x̂(T) = Σ_{t=1}^T x(t + 1)/T satisfies

    E[f(x̂(T))] − f(x*) ≤ (LR^2 + 6τGR)/T + 2ηR^2/√T + σ^2/(η√T) + 4LG^2(τ + 1)^2 log T/(η^2 T).
We make a few remarks about the theorem. The log T factor on the last term is not present
when using the fixed stepsize η(t) ≡ η√T. Furthermore, though we omit it here for lack of space,
the analysis also extends to random delays as long as E[τ(t)^2] ≤ B^2; see the long version [2]
for details. Finally, based on Assumption II, we can set η = σ/R, which makes the rate
asymptotically O(σR/√T), the same as the delay-free case so long as τ = o(T^{1/4}).
The take-home message from Theorem 1 is thus that the penalty in convergence rate due
to the delay τ(t) is asymptotically negligible. In the next section, we show the implications
of this result for robust distributed stochastic optimization algorithms.
3.2 Combinations of delays
In some scenarios, including distributed settings similar to those we discuss in the next
section, the procedure has access not only to a single delayed gradient but to several stochastic gradients with different delays. To abstract away the essential parts of this situation, we
assume that the procedure receives n stochastic gradients g_1, …, g_n ∈ R^d, where each has
a potentially different delay τ(i). Let λ = (λ_i)_{i=1}^n be an (unspecified) vector in the probability
simplex. Then the procedure performs the following updates at time t:

    z(t + 1) = z(t) + Σ_{i=1}^n λ_i g_i(t − τ(i)),   x(t + 1) = argmin_{x∈X} { ⟨z(t + 1), x⟩ + (1/α(t + 1)) ψ(x) }.        (6)
The next theorem builds on the proof of Theorem 1.
Theorem 2. Under Assumptions I–III, let α(t) = (L + η(t))^{-1} and η(t) = η√t or η(t) ≡ η√T for all t. The average x̂(T) = Σ_{t=1}^T x(t + 1)/T for the update sequence (6) satisfies

    E[f(x̂(T))] − f(x*) ≤ (2LR^2 + 4 Σ_{i=1}^n λ_i τ(i) GR)/T + 6 Σ_{i=1}^n λ_i LG^2(τ(i) + 1)^2 log T/(η^2 T)
                          + 4ηR^2/√T + (1/(η√T)) E[ ‖ Σ_{i=1}^n λ_i [∇f(x(t − τ(i))) − g_i(t − τ(i))] ‖_*^2 ].
We illustrate the consequences of Theorem 2 for distributed optimization in the next section.
4 Distributed Optimization
We now turn to what we see as the main purpose and application of the above results:
developing robust and efficient algorithms for distributed stochastic optimization. Our main
motivations here are machine learning applications where the data is so large that it cannot
fit on a single computer. Examples of the form (1) include logistic or linear regression, as
described respectively in Sec. 2(i) and (ii). We consider both stochastic and online/streaming
scenarios for such problems. In the simplest setting, the distribution P in the objective (1)
is the empirical distribution over an observed dataset, that is, f(x) = (1/N) Σ_{i=1}^N F(x; ξ_i). We
divide the N samples among n workers so that each worker has an N/n-sized subset of data.
In online learning applications, the distribution P is the unknown distribution generating
the data, and each worker receives a stream of independent data points ξ ∼ P. Worker i
uses its subset of the data, or its stream, to compute g_i ∈ R^d, an estimate of the gradient
∇f of the global f. We assume that g_i is an unbiased estimate of ∇f(x), which is satisfied,
for example, in the online setting or when each worker computes the gradient g_i based on
samples picked at random without replacement from its subset of the data.
The architectural assumptions we make are based on master/worker topologies, but the
convergence results in Section 3 allow us to give procedures robust to delay and asynchrony.
The architectures build on the naïve scheme of having each worker simultaneously compute a stochastic gradient and send it to the master, which takes a gradient step on the averaged gradient.
[Figure 2: Master-worker averaging network. (a): parameters stored at different nodes at time t; a node at distance d from the master has the parameter x(t − d). (b): gradients computed at different nodes; a node at distance d from the master computes the gradient g(t − d).]
[Figure 3: Communication of gradient information toward the master node at time t from node 1 at distance d from the master. Node 1 passes up the average (1/3)g_1(t − d) + (1/3)g_2(t − d − 2) + (1/3)g_3(t − d − 2); its children at depth d + 1 pass up g_2(t − d − 1) and g_3(t − d − 1). Information stored at time t by node i is shown in brackets: node 1 holds {x(t − d), g_2(t − d − 2), g_3(t − d − 2)}, nodes 2 and 3 hold {x(t − d − 1)}.]
While the n gradients are computed in parallel in the naïve scheme,
accumulating and averaging n gradients at the master takes Ω(n) time, offsetting the gains
of parallelization, and the procedure is not robust to laggard workers.
Cyclic Delayed Architecture This protocol is the delayed update algorithm mentioned
in the introduction, and it computes n stochastic gradients of f(x) in parallel. Formally,
worker i has parameter x(t − τ) and computes g_i(t − τ) = ∇F(x(t − τ); ξ_i(t)) ∈ R^d, where ξ_i(t)
is a random variable sampled at worker i from the distribution P. The master maintains
a parameter vector x ∈ X. At time t, the master receives g_i(t − τ) from some worker i,
computes x(t + 1), and passes it back to worker i only. Other workers do not see x(t + 1)
and continue their gradient computations on stale parameter vectors. In the simplest case,
each node suffers a delay of τ = n, though our analysis applies to random delays as well.
Recall Fig. 1 for a description of the process.
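The bookkeeping of the cyclic protocol fits in a few lines: at each step the master consumes the gradient of one worker (computed n steps earlier, so τ = n) and hands only that worker the fresh parameter. The gradient itself is a placeholder here, and all names are illustrative.

import numpy as np

n, d, T = 4, 3, 12
x = np.zeros(d)
held = [None] * n                  # stale parameter copy held by each worker

def worker_gradient(x_stale):
    # placeholder: gradient of 0.5*||x||^2 at the stale parameter
    return np.zeros(d) if x_stale is None else x_stale

for t in range(T):
    i = t % n                              # round-robin over workers
    g = worker_gradient(held[i])           # a gradient at x(t - n)
    x = x - 0.1 * g                        # master step (dual averaging elided)
    held[i] = x.copy()                     # only worker i receives x(t + 1)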
Locally Averaged Delayed Architecture At a high level, the protocol we now describe
combines the delayed updates of the cyclic delayed architecture with averaging techniques
of previous work [13, 7]. We assume a network G = (V, E), where V is a set of n nodes
(workers) and E are the edges between the nodes. We select one of the nodes as the master,
which maintains the parameter vector x(t) ? X over time.
The algorithm works via a series of multicasting and aggregation steps on a spanning tree
rooted at the master node. In the broadcast phase, the master sends its current parameter
vector x(t) to its immediate neighbors. Simultaneously, every other node broadcasts its
current parameter vector (which, for a depth-d node, is x(t − d)) to its children in the
spanning tree. See Fig. 2(a). Every worker computes its local gradient at its new parameter
(see Fig. 2(b)). The communication then proceeds from leaves toward the root. The leaf
nodes communicate their gradients to their parents, and the parent takes the gradients of
the leaf nodes from the previous round (received at iteration t ? 1) and averages them with
its own gradient, passing this averaged gradient back up the tree. Again simultaneously,
each node takes the averaged gradient vectors of its children from the previous rounds,
averages them with its current gradient vector, and passes the result up the spanning tree.
See Fig. 3. The master node receives an average of delayed gradients from the entire tree,
giving rise to updates of the form (6). We note that this is similar to the MPI all-reduce
operation, except our implementation is non-blocking since we average delayed gradients
with different delays at different nodes.
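A sketch of the upward pass at a single node: the node averages its own fresh gradient with the averaged gradients its children sent in the previous round, and buffers the children's fresh messages for the next round. Iterating this up the tree yields the mixed-delay convex combinations of the form (6). The class, the uniform averaging, and the dimensions are our assumptions.

import numpy as np

class TreeNode:
    def __init__(self, num_children, d=3):
        self.prev_child_avgs = [np.zeros(d) for _ in range(num_children)]

    def upward_pass(self, own_grad, fresh_child_avgs):
        # average own gradient with children's round-(t-1) averages
        out = np.mean([own_grad] + self.prev_child_avgs, axis=0)
        self.prev_child_avgs = fresh_child_avgs   # save for round t+1
        return out                                # message sent to the parent

leaf_a, leaf_b, inner = TreeNode(0), TreeNode(0), TreeNode(2)
g = np.ones(3)
msg = inner.upward_pass(g, [leaf_a.upward_pass(g, []),
                            leaf_b.upward_pass(g, [])])
print(msg)   # a convex combination of gradients with different effective delays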
4.1 Convergence rates for delayed distributed minimization
We turn now to corollaries of the results from the previous sections, which show that even asynchronous distributed procedures achieve asymptotically faster rates than centralized procedures. The key is that workers can pipeline updates by computing asynchronously and in
parallel, so each worker can compute a low-variance estimate of the gradient ∇f(x). We
ignore the constants L, G, R, and σ, which do not depend on the characteristics of
the network. We also assume that each worker i uses m independent samples ξ_i(j) ∼ P,
j = 1, …, m, to compute the stochastic gradient as g_i(t) = (1/m) Σ_{j=1}^m ∇F(x(t); ξ_i(j)). Using
the cyclic protocol as in Fig. 1, Theorem 1 gives the following result.
Corollary 1. Let ψ(x) = (1/2)‖x‖_2^2, assume the conditions in Theorem 1, and assume that
each worker uses m samples ξ ∼ P to compute the gradient it communicates to the master.
Then with the choice η(t) = max{τ^{2/3} T^{1/3}, √(T/m)} the update (5) satisfies

    E[f(x̂(T))] − f(x*) = O( min{ τ^{2/3}/T^{2/3}, τ^2 m/T } + 1/√(Tm) ).
Proof Noting that σ^2 = E[‖∇f(x) − g_i(t)‖_2^2] = E[‖∇f(x) − ∇F(x; ξ)‖_2^2]/m = O(1/m)
when workers use m independent stochastic gradient samples, the corollary is immediate.
As in Theorem 1, the corollary generalizes to random delays as long as E[τ(t)^2] ≤ B^2 < ∞,
with τ replaced by B in the result. So long as B = o(T^{1/4}), the first term in the bound is
asymptotically negligible, and we achieve a convergence rate of O(1/√(Tn)) when m = O(n).
The cyclic delayed architecture has the drawback that information from a worker can take
τ = O(n) time to reach the master. While the algorithm is robust to delay, the downside
of the architecture is that the (essentially) τ^2 m or τ^{2/3} terms in the bounds above can be
quite large. To address the large-n drawback, we turn our attention to the locally averaged
architecture described by Figs. 2 and 3, where delays can be smaller since they depend
only on the height of a spanning tree in the network. As a result of the communication
procedure, the master receives a convex combination of the stochastic gradients evaluated at
each worker. Specifically, the master receives gradients of the form g_λ(t) = Σ_{i=1}^n λ_i g_i(t − τ(i)) for some λ in the simplex, where τ(i) is the delay of worker i, which puts us in
the setting of Theorem 2. We now make the reasonable assumption that the gradient
errors ∇f(x(t)) − g_i(t) are uncorrelated across the nodes in the network.² In statistical
applications, for example, each worker may own independent data or receive streaming data
from independent sources. We also set ψ(x) = (1/2)‖x‖_2^2, and observe

    E[ ‖ Σ_{i=1}^n λ_i [∇f(x(t − τ(i))) − g_i(t − τ(i))] ‖_2^2 ] = Σ_{i=1}^n λ_i^2 E[ ‖∇f(x(t − τ(i))) − g_i(t − τ(i))‖_2^2 ].
This gives the following corollary to Theorem 2.
Corollary 2. Set λ_i = 1/n for all i, ψ(x) = (1/2)‖x‖_2^2, and η(t) = σ√T/(R√n). Let τ̄ and τ̂^2
denote the averages of the delays τ(i) and τ(i)^2, respectively. Under the conditions of Theorem 2,

    E[f(x̂(T))] − f(x*) = O( LR^2/T + τ̄GR/T + LG^2 R^2 n τ̂^2/(σ^2 T) + Rσ/√(Tn) ).
Asymptotically, E[f(x̂(T))] − f(x*) = O(1/√(Tn)). In this architecture, the delay τ is
bounded by the graph diameter D. Furthermore, we can use a slightly different stepsize setting, as in Corollary 1, to get an improved rate of O(min{(D/T)^{2/3}, nD^2/T} + 1/√(Tn)). It is
also possible, but outside the scope of this extended abstract, to give faster convergence
rates dependent on communication costs (details can be found in the long version [2]).
4.2 Running-time comparisons
We now explicitly study the running times of the centralized stochastic gradient algorithm (3), the cyclic delayed protocol with the update (5), and the locally averaged architecture with the update (6). To make comparisons cleaner, we avoid constants, assuming that the bound σ^2 on E‖∇f(x) − ∇F(x; ξ)‖^2 is 1 and that sampling ξ ∼ P and evaluating ∇F(x; ξ) requires unit time. It is also clear that if we receive m uncorrelated samples of ξ, the variance E‖∇f(x) − (1/m) Σ_{j=1}^m ∇F(x; ξ_j)‖_2^2 ≤ 1/m.
² Similar results continue to hold under weak correlation.
Table 1: Upper bounds on optimization error after T units of time. See text for details.

    Centralized (3):  E[f(x̂)] − f(x*) = O( √(1/T) )
    Cyclic (5):       E[f(x̂)] − f(x*) = O( min{ n^{2/3}/T^{2/3}, n^3/T } + 1/√(Tn) )
    Local (6):        E[f(x̂)] − f(x*) = O( min{ D^{2/3}/T^{2/3}, nD^2/T } + 1/√(nT) )
Now we state our assumptions on the relative times used by each algorithm. Let T be
the number of units of time allocated to each algorithm, and let the centralized, cyclic
delayed, and locally averaged delayed algorithms complete T_cent, T_cycle, and T_dist iterations,
respectively, in time T. It is clear that T_cent = T. We assume that the distributed methods
use m_cycle and m_dist samples of ξ ∼ P to compute stochastic gradients. For concreteness, we
assume that communication is of the same order as computing the gradient of one sample
∇F(x; ξ). In the cyclic setup of Sec. 3.1, it is reasonable to assume that m_cycle = Ω(n)
to avoid idling of workers. For m_cycle = Ω(n), the master requires m_cycle/n units of time to
receive one gradient update, so (m_cycle/n) T_cycle = T. In the local communication framework, if
each node uses m_dist samples to compute a gradient, the master receives a gradient every
m_dist units of time, and hence m_dist T_dist = T. We summarize our assumptions by saying
that in T units of time, each algorithm performs the following number of iterations:

    T_cent = T,   T_cycle = Tn/m_cycle,   and   T_dist = T/m_dist.        (7)
Combining with the bound (4) and Corollaries 1 and 2, we get the results in Table 1.
Asymptotically in the number of units of time T , both the cyclic and locally communicating
stochastic optimization schemes have the same convergence rate. Comparing the lower
order terms, since D ≤ n for any network, the locally averaged algorithm always guarantees
better performance than the cyclic algorithm. For specific graph topologies, however, we
can quantify the time improvements (assuming we are in the n^{2/3}/T^{2/3} regime):
• n-node cycle or path: D = n, so both methods have the same convergence rate.
• √n-by-√n grid: D = √n, so the distributed method has a factor of n^{2/3}/n^{1/3} = n^{1/3} improvement over the cyclic architecture.
• Balanced trees and expander graphs: D = O(log n), so the distributed method has a factor (ignoring logarithmic terms) of n^{2/3} improvement over cyclic.
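The improvement factor in each case is just the ratio of the cyclic lower-order term n^{2/3}/T^{2/3} to the local term D^{2/3}/T^{2/3}, i.e. (n/D)^{2/3}; a toy check with an assumed network size:

import math

def improvement(n, D):
    # ratio of cyclic to locally-averaged lower-order terms: (n/D)^(2/3)
    return (n / D) ** (2.0 / 3.0)

n = 4096
print(improvement(n, n))               # cycle or path: 1 (no gain)
print(improvement(n, math.sqrt(n)))    # grid: n^(1/3) = 16
print(improvement(n, math.log(n)))     # expander: ~ n^(2/3) up to log factors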
5 Numerical Results
Though this paper focuses mostly on the theoretical analysis of delayed stochastic methods,
it is important to understand their practical aspects. To that end, we use the cyclic delayed
method (5) to solve a somewhat large logistic regression problem:

    minimize_x f(x) = (1/N) Σ_{i=1}^N log(1 + exp(−b_i ⟨a_i, x⟩))   subject to ‖x‖_2 ≤ R.        (8)
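A direct transcription of the objective (8) and its ball constraint; the random data below are a small stand-in for RCV1, and all names are our own:

import numpy as np

rng = np.random.default_rng(2)
N, d, R = 1000, 50, 5.0
A = (rng.random((N, d)) < 0.1).astype(float)    # sparse binary feature rows a_i
b = rng.choice([-1.0, 1.0], size=N)             # labels b_i

def f(x):
    # objective (8): average logistic loss over the dataset
    return np.mean(np.log1p(np.exp(-b * (A @ x))))

def stochastic_grad(x, m):
    # gradient estimate on m samples drawn without replacement (unbiased)
    idx = rng.choice(N, size=m, replace=False)
    coef = -b[idx] / (1.0 + np.exp(b[idx] * (A[idx] @ x)))
    return (A[idx] * coef[:, None]).mean(axis=0)

def project(x):
    # enforce the constraint ||x||_2 <= R
    nrm = np.linalg.norm(x)
    return x if nrm <= R else (R / nrm) * x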
We use the Reuters RCV1 dataset [11], which consists of N ≈ 800000 news articles, each labeled with a combination of the four labels economics, government, commerce, and medicine.
In the above example, the vectors a_i ∈ {0, 1}^d, d ≈ 10^5, are feature vectors representing the
words in each article, and the labels b_i are 1 if the article is about government, −1 otherwise.
We simulate the cyclic delayed optimization algorithm (5) for the problem (8) for several
choices of the number of workers n and the number of samples m computed at each worker.
We summarize the results in Figure 4. We fix an ε (in this case, ε = .05), then measure the
[Figure 4: Estimated time to compute an ε-accurate solution to the objective (8) as a function of the number of workers n (y-axis: time to ε-accuracy; x-axis: number of workers). See text for details. Plot (a): convergence time assuming the cost of communication to the master and the cost of gradient computation are the same. Plot (b): convergence time assuming the cost of communication to the master is 16 times that of gradient computation.]
time it takes the stochastic algorithm (5) to output an x̂ such that f(x̂) ≤ inf_{x∈X} f(x) + ε.
We perform each experiment ten times. The two plots differ in the amount of time C
required to communicate the parameters x between the master and the workers (relative to
the amount of time to compute the gradient on one sample in the objective (8)). For the
left plot in Fig. 4(a), we assume that C = 1, while in Fig. 4(b), we assume that C = 16.
For Fig. 4(a), each worker uses m = n samples to compute a stochastic gradient for the
objective (8). The plotted results show the delayed update (5) enjoys speedup (the ratio of
time to ε-accuracy for an n-node system versus the centralized procedure) nearly linear in
the number n of worker machines until n ≈ 15 or so. Since we use the stepsize choice η(t) ∝ √(t/n), which yields the predicted convergence rate given by Corollary 1, the n^2 m/T = n^3/T
term in the convergence rate presumably becomes non-negligible for larger n. This expands
on earlier experimental work with a similar method [10], which experimentally demonstrated
linear speedup for small values of n but did not investigate larger network sizes.
In Fig. 4(b), we study the effects of more costly communication by assuming that communication is C = 16 times more expensive than gradient computation. As argued in the
long version [2], we set the number of samples each worker computes to m = Cn = 16n
and correspondingly reduce the damping stepsize to η(t) ∝ √(t/(Cn)). In the regime of more
expensive communication, as our theoretical results predict, small numbers of workers still
enjoy significant speedups over a centralized method, but eventually the cost of communication and delays mitigate some of the benefits of parallelization. The alternate choice of
stepsize η(t) = n^{2/3} T^{1/3} gives qualitatively similar performance.
6 Conclusion and Discussion
In this paper, we have studied delayed dual averaging algorithms for stochastic optimization, showing applications of our results to distributed optimization. We showed that for
smooth problems, we can preserve the performance benefits of parallelization over centralized stochastic optimization even when we relax synchronization requirements. Specifically,
we presented methods that take advantage of distributed computational resources and are
robust to node failures, communication latency, and node slowdowns. In addition, though
we omit these results for brevity, it is possible to extend all of our expected convergence
results to guarantees that hold with high probability.
Acknowledgments
AA was supported by a Microsoft Research Fellowship and NSF grant CCF-1115788, and
JCD was supported by the NDSEG Program and Google. We are very grateful to Ofer
Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao for communicating their proof
of the bound (4). We would also like to thank Yoram Singer and Dimitri Bertsekas for
reading a draft of this manuscript and giving useful feedback and references.
References
[1] A. Agarwal, P. Bartlett, P. Ravikumar, and M. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. In Advances in Neural Information Processing Systems 23, 2009.
[2] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. URL http://arxiv.org/abs/1104.5525, 2011.
[3] D. P. Bertsekas. Distributed asynchronous computation of fixed points. Mathematical Programming, 27:107–120, 1983.
[4] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.
[5] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. URL http://arxiv.org/abs/1012.1367, 2010.
[6] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Robust distributed online prediction. URL http://arxiv.org/abs/1012.1370, 2010.
[7] J. Duchi, A. Agarwal, and M. Wainwright. Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Transactions on Automatic Control, to appear, 2011.
[8] A. Juditsky, A. Nemirovski, and C. Tauvel. Solving variational inequalities with the stochastic mirror-prox algorithm. URL http://arxiv.org/abs/0809.0815, 2008.
[9] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming Series A, 2010. Online first, to appear. URL http://www.ise.ufl.edu/glan/papers/OPT SA4.pdf.
[10] J. Langford, A. Smola, and M. Zinkevich. Slow learners are fast. In Advances in Neural Information Processing Systems 22, pages 2331–2339, 2009.
[11] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
[12] A. Nedić, D. P. Bertsekas, and V. S. Borkar. Distributed asynchronous incremental subgradient methods. In D. Butnariu, Y. Censor, and S. Reich, editors, Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, volume 8 of Studies in Computational Mathematics, pages 381–407. Elsevier, 2001.
[13] A. Nedić and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54:48–61, 2009.
[14] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, New York, 1983.
[15] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming A, 120(1):261–283, 2009.
[16] B. T. Polyak. Introduction to Optimization. Optimization Software, Inc., 1987.
[17] S. S. Ram, A. Nedić, and V. V. Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516–545, 2010.
[18] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951.
[19] J. Tsitsiklis. Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology, 1984.
[20] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
3,587 | 4,248 | Composite Multiclass Losses
Elodie Vernet
ENS Cachan
Robert C. Williamson
ANU and NICTA
Mark D. Reid
ANU and NICTA
[email protected]
[email protected]
[email protected]
Abstract
We consider loss functions for multiclass prediction problems. We show when
a multiclass loss can be expressed as a "proper composite loss", which is the
composition of a proper loss and a link function. We extend existing results for
binary losses to multiclass losses. We determine the stationarity condition, Bregman representation, order-sensitivity, existence and uniqueness of the composite
representation for multiclass losses. We subsume existing results on "classification calibration" by relating it to properness and show that the simple integral
representation for binary proper losses cannot be extended to multiclass losses.
Introduction
The motivation of this paper is to understand the intrinsic structure and properties of suitable loss
functions for the problem of multiclass prediction, which includes multiclass probability estimation.
Suppose we are given a data sample S := (xi , yi )i?[m] where xi ? X is an observation and yi ?
{1, .., n} =: [n] is its corresponding class. We assume the sample S is drawn iid according to some
distribution P = PX ,Y on X ? [n]. Given a new observation x we want to predict the probability
pi := P(Y = i|X = x) of x belonging to class i, for i ? [n]. Multiclass classification requires the
learner to predict the most likely class of x; that is to find y? = arg maxi?[n] pi .
A loss measures the quality of prediction. Let ?n := {(p1 , . . . , pn ) : ?i?[n] pi = 1, and 0 ? pi ? 1, ?i ?
[n]} denote the n-simplex. For multiclass probability estimation, ` : ?n ? Rn+ . For classification, the
loss ` : [n] ? Rn+ . The partial losses `i are the components of `(q) = (`1 (q), . . . , `n (q))0 .
Proper losses are particularly suitable for probability estimation. They have been studied in detail
when n = 2 (the ?binary case?) where there is a nice integral representation [1, 2, 3], and characterization [4] when differentiable. Classification calibrated losses are an analog of proper losses for
the problem of classification [5]. The relationship between classification calibration and properness
was determined in [4] for n = 2. Most of these results have had no multiclass analogue until now.
The design of losses for multiclass prediction has received recent attention [6, 7, 8, 9, 10, 11, 12]
although none of these papers developed the connection to proper losses, and most restrict consideration to margin losses (which imply certain symmetry conditions). Glasmachers [13] has shown
that certain learning algorithms can still behave well when the losses do not satisfy the conditions in
these earlier papers because the requirements are actually stronger than needed.
Our contributions are: We relate properness, classification calibration, and the notion used in [8],
which we rename "prediction calibrated" (§3); we provide a novel characterization of multiclass
properness (§4); we study composite proper losses (the composition of a proper loss with an invertible
link), presenting new uniqueness and existence results (§5); we show how the above results can aid in
the design of proper losses (§6); and we present a (somewhat surprising) negative result concerning
the integral representation of proper multiclass losses (§7). Many of our results are characterisations.
Full proofs are provided in the extended version [14].
2 Formal Setup
Suppose X is some set and Y = {1, …, n} = [n] is a set of labels. We suppose we are given
data (x_i, y_i)_{i∈[m]} such that y_i ∈ Y is the label corresponding to x_i ∈ X. These data follow a joint
distribution P_{X,Y}. We denote by E_{X,Y} and E_{Y|X}, respectively, the expectation and the conditional
expectation with respect to P_{X,Y}.
The conditional risk L associated with a loss ℓ is the function

    L: Δ^n × Δ^n ∋ (p, q) ↦ L(p, q) = E_{Y∼p} ℓ_Y(q) = p′·ℓ(q) = Σ_{i∈[n]} p_i ℓ_i(q) ∈ R_+,

where Y ∼ p means Y is drawn according to a multinomial distribution with parameter p. In a typical
learning problem one will make an estimate q: X → Δ^n. The full risk is L(q) = E_X E_{Y|X} ℓ_Y(q(X)).
Minimizing L(q) over q: X → Δ^n is equivalent to minimizing L(p(x), q(x)) over q(x) ∈ Δ^n for all
x ∈ X, where p(x) = (p_1(x), …, p_n(x))′, p′ is the transpose of p, and p_i(x) = P(Y = i | X = x). Thus
it suffices to only consider the conditional risk; confer [3].
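The conditional risk is a one-line computation; as an assumed running example we use the multiclass log loss, whose partial losses are ℓ_i(q) = −log q_i:

import numpy as np

def log_loss(q):
    # partial losses of the multiclass log loss
    return -np.log(q)

def conditional_risk(p, q, ell=log_loss):
    # L(p, q) = p' . ell(q)
    return np.dot(p, ell(q))

p = np.array([0.5, 0.3, 0.2])
print(conditional_risk(p, p))                   # = Shannon entropy of p
print(conditional_risk(p, np.full(3, 1/3)))     # larger: the log loss is proper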
A loss ℓ: Δ^n → R_+^n is proper if L(p, p) ≤ L(p, q), ∀p, q ∈ Δ^n. It is strictly proper if the inequality is
strict when p ≠ q. The conditional Bayes risk is L: Δ^n ∋ p ↦ inf_{q∈Δ^n} L(p, q). This function is always
concave [2]. If ℓ is proper, then L(p) = L(p, p) = p′·ℓ(p). Strictly proper losses induce Fisher-consistent estimators of probabilities: if ℓ is strictly proper, p = argmin_q L(p, q).
In order to differentiate the losses we project the n-simplex into a subset of R^{n−1}. We denote by Π_Δ: Δ^n ∋ p = (p_1, …, p_n)′ ↦ p̃ = (p_1, …, p_{n−1})′ ∈ Δ̃^n := {(p_1, …, p_{n−1})′ : p_i ≥ 0, ∀i ∈ [n−1], Σ_{i=1}^{n−1} p_i ≤ 1} the projection of the n-simplex Δ^n, and by Π_Δ^{-1}: Δ̃^n ∋ p̃ = (p̃_1, …, p̃_{n−1}) ↦ p = (p̃_1, …, p̃_{n−1}, 1 − Σ_{i=1}^{n−1} p̃_i)′ ∈ Δ^n its inverse.
The losses above are defined on the simplex Δ^n since the argument (an estimator) represents
a probability vector. However, it is sometimes desirable to use another set V of predictions.
One can consider losses ℓ: V → R_+^n. Suppose there exists an invertible function ψ: Δ^n → V.
Then ℓ can be written as the composition of a loss λ defined on the simplex with ψ^{-1}. That is,
ℓ(v) = λ^ψ(v) := λ(ψ^{-1}(v)). Such a function λ^ψ is a composite loss. If λ is proper, we say ℓ is a
proper composite loss, with associated proper loss λ and link ψ.
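A standard (assumed) example of this decomposition: take λ to be the multiclass log loss and let ψ^{-1} be the softmax map restricted to score vectors whose last coordinate is fixed to zero, which makes the link invertible. The resulting composite loss λ^ψ is the familiar cross-entropy on logits:

import numpy as np

def inv_link(v):
    # psi^{-1}: R^{n-1} -> interior of the simplex; appending a zero
    # score makes softmax invertible (inverse: v_i = log(p_i / p_n))
    z = np.append(v, 0.0)
    e = np.exp(z - z.max())
    return e / e.sum()

def composite_log_loss(v):
    # lambda^psi(v) = lambda(psi^{-1}(v)) with lambda the log loss
    return -np.log(inv_link(v))

print(composite_log_loss(np.array([1.0, -0.5])))   # n = 3 classes, 2 free scores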
We use the following notation. The kth unit vector e_k is the n-vector with all components zero except
the kth, which is 1. The n-vector 1_n := (1, …, 1)′. The derivative of a function f is denoted Df and
its Hessian Hf. Let Δ̊^n := {(p_1, …, p_n) : Σ_{i∈[n]} p_i = 1 and 0 < p_i < 1, ∀i ∈ [n]} and ∂Δ^n := Δ^n \ Δ̊^n.
3 Relating Properness to Classification Calibration
Properness is an attractive property of a loss for the task of class probability estimation. However, if
one is merely interested in classifying (predicting ŷ ∈ [n] given x ∈ X) then one requires less. We
relate classification calibration (the analog of properness for classification problems) to properness.
Suppose c ∈ Δ̊^n. We cover Δ^n with n subsets, each representing one class:

    T_i(c) := {p ∈ Δ^n : ∀j ≠ i, p_i c_j ≥ p_j c_i}.

Observe that for i ≠ j, the sets {p ∈ R^n : p_i c_j = p_j c_i} are subspaces of dimension n − 2 through c and
all e_k such that k ≠ i and k ≠ j. These subspaces partition Δ^n into two parts; the subspace T_i is the
intersection of the half-spaces delimited by the preceding (n − 2)-subspaces, on the same side as e_i.
We will make use of the following properties of T_i(c).
Lemma 1. Suppose c ∈ Δ̊^n and i ∈ [n]. Then the following hold:
1. For all p ∈ Δ^n, there exists i such that p ∈ T_i(c).
2. Suppose p ∈ Δ^n. T_i(c) ∩ T_j(c) ⊆ {p ∈ Δ^n : p_i c_j = p_j c_i}, a subspace of dimension n − 2.
3. Suppose p ∈ Δ^n. If p ∈ ∩_{i=1}^n T_i(c) then p = c.
4. For all p, q ∈ Δ^n with p ≠ q, there exist c ∈ Δ̊^n and i ∈ [n] such that p ∈ T_i(c) and q ∉ T_i(c).
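The sets T_i(c) are easy to write down directly; for the uniform reference point c = (1/n, …, 1/n)′ (denoted c_mid below), membership in T_i(c) reduces to p_i being a maximal coordinate. A small check of property 1 of Lemma 1, with illustrative values:

import numpy as np

def in_T(p, c, i):
    # p in T_i(c) iff p_i * c_j >= p_j * c_i for all j != i
    return all(p[i] * c[j] >= p[j] * c[i] for j in range(len(p)) if j != i)

p = np.array([0.6, 0.3, 0.1])
c = np.full(3, 1/3)
print([in_T(p, c, i) for i in range(3)])     # [True, False, False]
assert any(in_T(p, c, i) for i in range(3))  # property 1: some T_i(c) contains p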
Classification calibrated losses have been developed and studied under some different definitions
and names [6, 5]. Below we generalise the notion of c-calibration, which was proposed for n = 2 in
[4] as a generalisation of the notion of classification calibration in [5].
Definition 2. Suppose ℓ: Δ^n → R_+^n is a loss and c ∈ Δ̊^n. We say ℓ is c-calibrated at p ∈ Δ^n if, for all
i ∈ [n] such that p ∉ T_i(c), ∀q ∈ T_i(c), L(p) < L(p, q). We say that ℓ is c-calibrated if ∀p ∈ Δ^n,
ℓ is c-calibrated at p.
Definition 2 means that if the probability vector q one predicts doesn't belong to the same subset
(i.e. doesn't predict the same class) as the real probability vector p, then the risk is strictly larger.
Classification calibration in the sense used in [5] corresponds to ½-calibrated losses when n = 2. If
c_mid := (1/n, …, 1/n)′, then c_mid-calibration induces Fisher-consistent estimates in the case of classification.
Furthermore, "ℓ is c_mid-calibrated and, for all i ∈ [n], ℓ_i is continuous and bounded below" is
equivalent to "ℓ is infinite sample consistent as defined by [6]". This is because if ℓ is continuous
and T_i(c) is closed, then ∀q ∈ T_i(c), L(p) < L(p, q) if and only if L(p) < inf_{q∈T_i(c)} L(p, q).
The following result generalises the correspondence between binary classification calibration and
properness [4, Theorem 16] to multiclass losses (n > 2).
Proposition 3. A continuous loss ℓ: Δ^n → R_+^n is strictly proper if and only if it is c-calibrated for
all c ∈ Δ̊^n.
In particular, a continuous strictly proper loss is c_mid-calibrated. Thus for any estimator q̂_n of the
conditional probability vector one constructs by minimizing the empirical average of a continuous
strictly proper loss, one can build an estimator of the label (corresponding to the largest probability
of q̂_n) which is Fisher consistent for the problem of classification.
In the binary case, ℓ is classification calibrated if and only if the following implication holds [5]:

    L(f_n) → min_g L(g)   ⟹   P_{X,Y}(Y ≠ f_n(X)) → min_g P_{X,Y}(Y ≠ g(X)).        (1)
Tewari and Bartlett [8] have characterised when (1) holds in the multiclass case. Since there is no
reason to assume the equivalence between classification calibration and (1) still holds for n > 2, we
give different names for these two notions. We keep the name of classification calibration for the
notion linked to Fisher consistency (as defined before) and call prediction calibrated the notion of
Tewari and Bartlett (equivalent to (1)).
Definition 4. Suppose ℓ: V → R_+^n is a loss. Let C_ℓ = co({ℓ(v) : v ∈ V}), the convex hull of the
image of V. ℓ is said to be prediction calibrated if there exists a prediction function pred: R^n → [n]
such that

    ∀p ∈ Δ^n:   inf_{z∈C_ℓ : p_{pred(z)} < max_i p_i} p′·z > inf_{z∈C_ℓ} p′·z = L(p).

Observe that the class is predicted from ℓ(p) and not directly from p (which is equivalent if the
loss is invertible). Suppose that ℓ: Δ^n → R_+^n is such that ℓ is prediction calibrated and pred(ℓ(p)) ∈
argmax_i p_i. Then ℓ is c_mid-calibrated almost everywhere.
By introducing a reference "link" ψ̃ (which corresponds to the actual link if ℓ is a proper composite
loss) we now show how the pred function can be canonically expressed in terms of argmax_i p_i.
Proposition 5. Suppose ℓ: V → R_+^n is a loss. Let ψ̃(p) ∈ argmin_{v∈V} L(p, v) and λ = ℓ ∘ ψ̃. Then
λ is proper. If ℓ is prediction calibrated then pred(λ(p)) ∈ argmax_i p_i.
4 Characterizing Properness
We first present some simple (but new) consequences of properness. We say f: C ⊆ R^n → R^n is
monotone on C when for all x and y in C, (f(x) − f(y))′·(x − y) ≥ 0; confer [15].
Proposition 6. Suppose ℓ: Δ^n → R_+^n is a loss. If ℓ is proper, then −ℓ is monotone.
Proposition 7. If ℓ is strictly proper then it is invertible.
A theme of the present paper is the extensibility of results concerning binary losses to multiclass
losses. The following proposition shows how the characterisation of properness in the general (not
necessarily differentiable) multiclass case can be reduced to the binary case. In the binary case,
the two classes are often denoted −1 and 1 and the loss is denoted ℓ = (ℓ_1, ℓ_{-1})′. We project the
2-simplex Δ^2 into [0, 1]: η ∈ [0, 1] is the projection of (η, 1 − η) ∈ Δ^2.
Proposition 8. Suppose ℓ: Δ^n → R_+^n is a loss. Define

    ℓ̃^{p,q}: [0, 1] ∋ η ↦ ( ℓ̃_1^{p,q}(η), ℓ̃_{-1}^{p,q}(η) )′ = ( q′·ℓ(p + η(q − p)), p′·ℓ(p + η(q − p)) )′.

Then ℓ is (strictly) proper if and only if ℓ̃^{p,q} is (strictly) proper for all p, q ∈ ∂Δ^n.
This proposition shows that in order to check whether a loss is proper one needs only to check properness along each line. One could use the easy characterization of properness for differentiable binary
losses (ℓ: [0, 1] → R_+^2 is proper if and only if ∀η ∈ [0, 1], −ℓ′_1(η)/(1 − η) = ℓ′_{-1}(η)/η ≥ 0 [4]). However, this
needs to be checked for all lines defined by p, q ∈ ∂Δ^n. We now extend some characterisations of
properness to the multiclass case by using Proposition 8.
Lambert [16] proved that in the binary case, properness is equivalent to the fact that the further your
prediction is from reality, the larger the loss ("order sensitivity"). The result relied upon the total
order of R. In the multiclass case, there does not exist such a total order. Yet one can compare
two predictions if they lie on the same line as the true class probability. The next result is a
generalization of the binary-case equivalence of properness and order sensitivity.
Proposition 9. Suppose ℓ: Δ^n → R_+^n is a loss. Then ℓ is (strictly) proper if and only if ∀p, q ∈ Δ^n,
∀0 ≤ h_1 ≤ h_2, L(p, p + h_1(q − p)) ≤ L(p, p + h_2(q − p)) (the inequality is strict if h_1 ≠ h_2).
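Propositions 8 and 9 are easy to exercise numerically: restrict the risk to the segment from p to q and check that it is increasing in the line parameter. A sketch using the (strictly proper) log loss as an assumed example:

import numpy as np

def log_loss(r):
    return -np.log(r)

def risk_along_line(p, q, etas):
    # eta -> L(p, p + eta*(q - p)); minimized at eta = 0 for a proper loss
    return [np.dot(p, log_loss(p + eta * (q - p))) for eta in etas]

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.8])
risks = risk_along_line(p, q, np.linspace(0.0, 1.0, 11))
assert all(risks[k] < risks[k + 1] for k in range(len(risks) - 1))  # strictly increasing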
"Order sensitivity" tells us more about properness: the true class probability minimizes the risk,
and if the prediction moves away from the true class probability along a line then the risk increases.
This property appears convenient for optimization purposes: if one reaches a local minimum in the
second argument of the risk and the loss is strictly proper, then it is a global minimum. If the loss is
proper, such a local minimum is a global minimum or a constant on an open set. But observe that
typically one is minimising the full risk L(q(·)) over functions q: X → Δ^n. Order sensitivity of ℓ
does not imply this optimisation problem is well behaved; one needs convexity of q ↦ L(p, q) for
all p ∈ Δ^n to ensure convexity of the functional optimisation problem.
The order sensitivity along a line leads to a new characterisation of differentiable proper losses. As
in the binary case, one condition comes from the fact that the derivative is zero at a minimum, and
the other ensures that it is really a minimum.
Corollary 10. Suppose ℓ: Δ^n → R_+^n is a loss such that ℓ̃ = ℓ ∘ Π_Δ^{-1} is differentiable. Let M(p) =
Dℓ̃(Π_Δ(p)) · DΠ_Δ(p). Then ℓ is proper if and only if, for all q, r ∈ Δ^n and all p ∈ Δ̊^n,

    p′·M(p) = 0   and   (q − r)′·M(p)·(q − r) ≥ 0.        (2)
We know that for any loss, its Bayes risk L(p) = inf_{q∈Δ^n} L(p, q) = inf_{q∈Δ^n} p′·ℓ(q) is concave. If ℓ is
proper, L(p) = p′·ℓ(p). Rather than working with the loss ℓ: V → R_+^n we will now work with the
simpler associated conditional Bayes risk L: V → R_+.
We need two definitions from [15]. Suppose f: R^n → R is concave. Then lim_{t→0} (f(x + td) − f(x))/t exists,
and is called the directional derivative of f at x in the direction d, denoted Df(x, d). By
analogy with the usual definition of subdifferential, the superdifferential ∂f(x) of f at x is

    ∂f(x) := {s ∈ R^n : s′·y ≥ Df(x, y), ∀y ∈ R^n} = {s ∈ R^n : f(y) ≤ f(x) + s′·(y − x), ∀y ∈ R^n}.

A vector s ∈ ∂f(x) is called a supergradient of f at x.
The next proposition is a restatement of the well-known Bregman representation of proper losses;
see [17] for the differentiable case, and [2, Theorem 3.2] for the general case.
Proposition 11. Suppose ℓ: Δ^n → R_+^n is a loss. Then ℓ is proper if and only if there exists a concave
function f such that, for all q ∈ Δ^n, there exists a supergradient A(q) ∈ ∂f(q) such that

    ∀p, q ∈ Δ^n,   p′·ℓ(q) = L(p, q) = f(q) + (p − q)′·A(q).

Then f is unique and f(p) = L(p, p) = L(p).
The fact that f is defined on a simplex is not a problem. Indeed, the superdifferential becomes
∂f(x) = {s ∈ R^n : s′·d ≥ Df(x, d), ∀d ∈ Δ^n} = {s ∈ R^n : f(y) ≤ f(x) + s′·(y − x), ∀y ∈ Δ^n}. If
f̃ = f ∘ Π_Δ^{-1} is differentiable at q ∈ Δ̊^n, then A(q) = (Df̃(Π_Δ(q)), 0)′ + α 1_n, α ∈ R, and (p − q)′·A(q) =
Df̃(Π_Δ(q)) · (Π_Δ(p) − Π_Δ(q)). Hence for any concave differentiable function f, there exists a
unique proper loss whose Bayes risk is equal to f (we say that f is differentiable when f̃ is differentiable).
The last property gives us the form of the proper losses associated with a Bayes risk. Suppose
L: Δ^n → R_+ is concave. The proper losses whose Bayes risk is equal to L are

    ℓ: Δ^n ∋ q ↦ ( L(q) + (e_i − q)′·A(q) )_{i=1}^n ∈ R_+^n,   ∀A(q) ∈ ∂L(q).        (3)
This result suggests that some information is lost by representing a proper loss via its Bayes risk
(when the latter is not differentiable). The next proposition elucidates this by showing that proper
losses which have the same Bayes risk are equal almost everywhere.
Proposition 12. Two proper losses ℓ_1 and ℓ_2 have the same conditional Bayes risk function L if and
only if ℓ_1 = ℓ_2 almost everywhere. If L is differentiable, ℓ_1 = ℓ_2 everywhere.
We say that L is differentiable at p if L̃ = L ∘ Π_Δ^{-1} is differentiable at p̃ = Π_Δ(p).
Proposition 13. Suppose ℓ: Δ^n → R_+^n is a proper loss. Then ℓ is continuous on Δ̊^n if and only if L is
differentiable on Δ̊^n; ℓ is continuous at p ∈ Δ̊^n if and only if L is differentiable at p ∈ Δ̊^n.
5 The Proper Composite Representation: Uniqueness and Existence
It is sometimes helpful to define a loss on some set V rather than Δ^n; confer [4]. Composite losses
(see the definition in §2) are a way of constructing such losses: given a proper loss λ: Δ^n → R_+^n and
an invertible link ψ: Δ^n → V, one defines λ^ψ: V → R_+^n using λ^ψ = λ ∘ ψ^{-1}. We now consider the
question: given a loss ℓ: V → R_+^n, when does ℓ have a proper composite representation (whereby ℓ
can be written as ℓ = λ ∘ ψ^{-1}), and is this representation unique? We first consider the binary case
and study the uniqueness of the representation of a loss as a proper composite loss.
Proposition 14. Suppose ℓ = λ ∘ ψ^{-1}: V → R_+^2 is a proper composite loss such that the proper loss
λ is differentiable and the link function ψ is differentiable and invertible. Then the proper loss λ
is unique. Furthermore, ψ is unique if ∀v_1, v_2 ∈ R, ∃v ∈ [v_1, v_2], ℓ′_1(v) ≠ 0 or ℓ′_{-1}(v) ≠ 0. If there
exist v̄_1, v̄_2 ∈ R such that ℓ′_1(v) = ℓ′_{-1}(v) = 0 ∀v ∈ [v̄_1, v̄_2], one can choose any ψ|_{[v̄_1, v̄_2]} such that
ψ is differentiable, invertible and continuous in [v̄_1, v̄_2] and obtain ℓ = λ ∘ ψ^{-1}; ψ is uniquely
defined where ℓ is invertible.
Proposition 15. Suppose ℓ: V → R_+^2 is a differentiable binary loss such that ∀v ∈ V, ℓ′_{-1}(v) ≠ 0
or ℓ′_1(v) ≠ 0. Then ℓ can be expressed as a proper composite loss if and only if the following
three conditions hold: 1) ℓ_1 is decreasing (increasing); 2) ℓ_{-1} is increasing (decreasing); and 3)
f: V ∋ v ↦ ℓ′_1(v)/ℓ′_{-1}(v) is strictly increasing (decreasing) and continuous.
Observe that the last condition is always satisfied if both ℓ_1 and ℓ_{-1} are convex.
Suppose φ: R → R_+ is a function. The loss defined via ℓ_φ: V ∋ v ↦ (ℓ_{-1}(v), ℓ_1(v))′ =
(φ(−v), φ(v))′ ∈ R_+^2 is called a binary margin loss. Binary margin losses are often used for classification problems. We will now show how the above proposition applies to them.
Corollary 16. Suppose φ: R → R_+ is differentiable and ∀v ∈ R, φ′(v) ≠ 0 or φ′(−v) ≠ 0. Then ℓ_φ
can be expressed as a proper composite loss if and only if f: R ∋ v ↦ −φ′(v)/φ′(−v) is strictly monotonic
and continuous and φ is monotonic.
If φ is convex or concave then f defined above is monotonic. However, not all binary margin losses
are composite proper losses. One can even build a smooth margin loss which cannot be expressed as
a proper composite loss. Consider φ(x) = 1 − (1/π) arctan(x − 1). Then

    f(v) = φ′(−v)/(φ′(−v) + φ′(v)) = (v^2 − 2v + 2)/(2v^2 + 4),

which is not invertible.
We now generalize the above results to the multiclass case.
Proposition 17. Suppose ℓ has two proper composite representations ℓ = λ ∘ ψ^{-1} = μ ∘ φ^{-1}, where
λ and μ are proper losses and ψ and φ are continuous and invertible. Then λ = μ almost everywhere.
If ℓ is continuous and has a composite representation, then the proper loss (in the decomposition) is
unique (λ = μ everywhere).
If ℓ is invertible and has a composite representation, then the representation is unique.
[Figure: the superprediction set S_ℓ (the region above the curve ℓ(V)), a boundary point x = ℓ(v), and a supporting hyperplane h_q^{L(v)} = {x : x′·q = L(v)}; axes ℓ_1(v) and ℓ_2(v).]
Given a loss ℓ: V → R_+^n, we denote by S_ℓ = ℓ(V) + [0, ∞)^n = {λ : ∃v ∈ V, ∀i ∈ [n], λ_i ≥ ℓ_i(v)} the superprediction set of ℓ (confer e.g. [18]). We introduce a
set of hyperplanes: for p ∈ Δ^n and α ∈ R, h_p^α = {x ∈ R^n : x′·p = α}. A hyperplane h_p^α supports a set A at
x ∈ A when x ∈ h_p^α and either for all a ∈ A, a′·p ≥ α, or
for all a ∈ A, a′·p ≤ α. We say that S_ℓ is strictly
convex in its inner part when for all p ∈ Δ^n there exists a unique x ∈ ℓ(V) such that there exists a hyperplane h_p^α supporting S_ℓ at x. S_ℓ is said to be smooth
when for all x ∈ ℓ(V) there exists a unique hyperplane supporting S_ℓ at x. If ℓ is invertible, we can
express these two definitions in terms of v ∈ V rather
than x ∈ ℓ(V). If ℓ: V → R_+^n is strictly convex, then
S_ℓ will be strictly convex in its inner part.
Proposition 18. Suppose ℓ: V → R_+^n is a continuous invertible loss. Then ℓ has a strictly proper
composite representation if and only if S_ℓ is convex, smooth and strictly convex in its inner part.
Proposition 19. Suppose ℓ: V → R_+^n is a continuous loss. If ℓ has a proper composite representation, then S_ℓ is convex and smooth. If ℓ is also invertible, then S_ℓ is strictly convex in its inner
part.
6 Designing Proper Losses
We now build a family of conditional Bayes risks. Suppose we are given n(n − 1)/2 concave
functions {L_{i_1,i_2}: Δ^2 → R}_{1≤i_1<i_2≤n} on Δ^2, and we want to build a concave function L on Δ^n
which is equal to one of the given functions on each edge of the simplex (∀1 ≤ i_1 < i_2 ≤ n,
L(0, …, 0, p_{i_1}, 0, …, 0, p_{i_2}, 0, …, 0) = L_{i_1,i_2}(p_{i_1}, p_{i_2})). This is equivalent to choosing a binary loss function,
knowing that the observation is in the class i_1 or i_2. The result below gives one possible construction.
(There exist infinitely many solutions; one can simply add any concave function equal to zero on
each edge.)
Lemma 20. Suppose we have a family of concave functions {L_{i_1,i_2}: Δ^2 → R}_{1≤i_1<i_2≤n}. Then

    L: Δ^n ∋ p ↦ L(p_1, …, p_n) = Σ_{1≤i_1<i_2≤n} (p_{i_1} + p_{i_2}) L_{i_1,i_2}( p_{i_1}/(p_{i_1} + p_{i_2}), p_{i_2}/(p_{i_1} + p_{i_2}) )

is concave and ∀1 ≤ i_1 < i_2 ≤ n, L(0, …, 0, p_{i_1}, 0, …, 0, p_{i_2}, 0, …, 0) = L_{i_1,i_2}(p_{i_1}, p_{i_2}).
Using this family of Bayes risks, one can build a family of proper losses.
Lemma 21. Suppose we have a family of binary proper losses ℓ^{i_1,i_2}: Δ^2 → R^2. Then

    ℓ: Δ^n ∋ p ↦ ℓ(p) = ( Σ_{j=1}^{i−1} ℓ_{-1}^{j,i}( p_j/(p_i + p_j) ) + Σ_{j=i+1}^{n} ℓ_1^{i,j}( p_i/(p_i + p_j) ) )_{i=1}^{n} ∈ R_+^n

is a proper n-class loss such that

    ℓ_i((0, …, 0, p_{i_1}, 0, …, 0, p_{i_2}, 0, …, 0)) =
        ℓ_1^{i_1,i_2}(p_{i_1})     if i = i_1,
        ℓ_{-1}^{i_1,i_2}(p_{i_1})  if i = i_2,
        0                          otherwise.
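A sketch of the pairwise construction of Lemma 21, under the index convention reconstructed above (the smaller-indexed class of each pair plays the role of label 1; this reading, and the choice of the binary square loss for every pair, are our assumptions):

import numpy as np

def sq_l1(eta):    # fair binary square loss, true label 1
    return (1.0 - eta) ** 2

def sq_lm1(eta):   # fair binary square loss, true label -1
    return eta ** 2

def pairwise_loss(p):
    n = len(p)
    ell = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j or p[i] + p[j] == 0:
                continue
            if j < i:   # pair (j, i): class j is label 1, class i is label -1
                ell[i] += sq_lm1(p[j] / (p[i] + p[j]))
            else:       # pair (i, j): class i is label 1
                ell[i] += sq_l1(p[i] / (p[i] + p[j]))
    return ell

print(pairwise_loss(np.array([0.6, 0.3, 0.1])))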
Observe that it is much easier to work first with the Bayes risk and then use the correspondence
between Bayes risks and proper losses.
7
Integral Representations of Proper Losses
Unlike the natural generalisation of the results from proper binary to proper multiclass losses above, there is one result that does not carry over: the integral representation of proper losses [1]. In the binary case there exists a family of "extremal" loss functions (cost-weighted generalisations of the 0-1 loss), each parametrised by $c \in [0, 1]$ and defined for all $\hat{\eta} \in [0, 1]$ by $\ell^c_{-1}(\hat{\eta}) := c \llbracket \hat{\eta} \geq c \rrbracket$ and $\ell^c_{1}(\hat{\eta}) := (1 - c) \llbracket \hat{\eta} < c \rrbracket$. As shown in [1, 3], given these extremal functions, any proper binary loss $\ell$ can be expressed as the weighted integral $\ell = \int_0^1 \ell^c \, w(c) \, dc + \text{constant}$ with $w(c) = -L''(c)$. This representation is a special case of a representation from Choquet theory [19] which characterises when every point in some set can be expressed as a weighted combination of the "extremal points" of the set. Although there is such a representation when $n > 2$, the difficulty is that the set of extremal points is much larger and this rules out the existence of a nice small set of "primitive" proper losses when $n > 2$. The rest of this section makes this statement precise.
A convex cone $\mathcal{K}$ is a set of points closed under linear combinations of positive coefficients. That is, $\mathcal{K} = \alpha \mathcal{K} + \beta \mathcal{K}$ for any $\alpha, \beta \geq 0$. A point $f \in \mathcal{K}$ is extremal if $f = \frac{1}{2}(g + h)$ for $g, h \in \mathcal{K}$ implies $\exists \lambda \in \mathbb{R}_+$ such that $g = \lambda f$. That is, $f$ cannot be represented as a non-trivial combination of other points in $\mathcal{K}$. The set of extremal points for $\mathcal{K}$ will be denoted $\operatorname{ex} \mathcal{K}$. Suppose $U$ is a bounded closed convex set in $\mathbb{R}^d$, and $\mathcal{K}_b(U)$ is the set of convex functions on $U$ bounded by 1; then $\mathcal{K}_b(U)$ is compact with respect to the topology of uniform convergence. Theorem 2.2 of [20] shows that the extremal points of the convex cone $\mathcal{K}(U) = \{\alpha f + \beta g : f, g \in \mathcal{K}_b(U),\ \alpha, \beta \geq 0\}$ are dense (w.r.t. the topology of uniform convergence) in $\mathcal{K}(U)$ when $d > 1$. This means for any function $f \in \mathcal{K}(U)$ there is a sequence of functions $(g_i)_i$ such that for all $i$, $g_i \in \operatorname{ex} \mathcal{K}(U)$ and $\lim_{i \to \infty} \|f - g_i\|_\infty = 0$, where $\|f\|_\infty := \sup_{u \in U} |f(u)|$. We use this result to show that the set of extremal Bayes risks is dense in the set of Bayes risks when $n > 2$.
In order to simplify our analysis, we restrict attention to fair proper losses. A loss is fair if each partial loss is zero on its corresponding vertex of the simplex ($\ell_i(e_i) = 0$, $\forall i \in [n]$). A proper loss is fair if and only if its Bayes risk is zero at each vertex of the simplex (in this case the Bayes risk is also called fair). One does not lose generality by studying fair proper losses since any proper loss is a sum of a fair proper loss and a constant vector.

The set of fair proper losses defined on $\Delta^n$ forms a closed convex cone, denoted $\mathcal{L}_n$. The set of concave functions which are zero on all the vertices of the simplex $\Delta^n$ is denoted $\mathcal{F}_n$ and is also a closed convex cone.
Proposition 22 Suppose $n > 2$. Then for any fair proper loss $\ell \in \mathcal{L}_n$ there exists a sequence $(\ell_i)_i$ of extremal fair proper losses ($\ell_i \in \operatorname{ex} \mathcal{L}_n$) which converges almost everywhere to $\ell$.
The proof of Proposition 22 requires the following lemma, which relies upon the correspondence between a proper loss and its Bayes risk (Proposition 11) and the fact that two continuous functions equal almost everywhere are equal everywhere.

Lemma 23 If $\ell \in \operatorname{ex} \mathcal{L}_n$ then its corresponding Bayes risk $L$ is extremal in $\mathcal{F}_n$. Conversely, if $L \in \operatorname{ex} \mathcal{F}_n$ then all the proper losses $\ell$ with Bayes risk equal to $L$ are extremal in $\mathcal{L}_n$.
We also need a correspondence between the uniform convergence of a sequence of Bayes risk functions and the convergence of their associated proper losses.

Lemma 24 Suppose $L, L_i \in \mathcal{F}_n$ for $i \in \mathbb{N}$ and suppose $\ell$ and $\ell_i$, $i \in \mathbb{N}$, are associated proper losses. Then $(L_i)_i$ converges uniformly to $L$ if and only if $(\ell_i)_i$ converges almost everywhere to $\ell$.
Bronshtein [20] and Johansen [21] showed how to construct a set of extremal convex functions which is dense in $\mathcal{K}(U)$. With a trivial change of sign this leads to a family of extremal proper fair Bayes risks that is dense in the set of Bayes risks in the topology of uniform convergence. This means that it is not possible to have a small set of extremal ("primitive") losses from which one can construct any proper fair loss by linear combinations when $n > 2$.

Figure 1: Complexity of extremal concave functions in two dimensions (corresponds to $n = 3$). Graph of an extremal concave function in two dimensions. Lines are where the slope changes. The pattern of these lines can be arbitrarily complex.

A convex polytope is a compact convex intersection of a finite set of half-spaces and is therefore the convex hull of its vertices. Let $\{a_i\}_i$ be a finite family of affine functions defined on $\Delta^n$. Now define the convex polyhedral function $f$ by $f(x) := \max_i a_i(x)$. The set $K := \{P_i = \{x \in \Delta^n : f(x) = a_i(x)\}\}$ is a covering of $\Delta^n$ by polytopes. Theorem 2.1 of [20] shows that for $f$, $P_i$ and $K$ so defined, $f$ is extremal if the following two conditions are satisfied: 1) for all polytopes $P_i$ in $K$ and for every face $F$ of $P_i$, $F \cap \Delta^n \neq \emptyset$ implies $F$ has a vertex in $\Delta^n$; 2) every vertex of $P_i$ in $\Delta^n$ belongs to $n$ distinct polytopes of $K$. The set of all such $f$ is dense in $\mathcal{K}(U)$.
Using this result it is straightforward to exhibit some sets of extremal fair Bayes risks $\{L_c(p) : c \in \Delta^n\}$. Two examples are when
$$L_c(p) = \sum_{i=1}^{n} \frac{p_i}{c_i} \prod_{j \neq i} \left\llbracket \frac{p_i}{c_i} \leq \frac{p_j}{c_j} \right\rrbracket \quad \text{or} \quad L_c(p) = \bigwedge_{i \in [n]} \frac{1 - p_i}{1 - c_i}.$$
8 Conclusion
We considered loss functions for multiclass prediction problems and made four main contributions:
- We extended existing results for binary losses to multiclass prediction problems, including several characterisations of proper losses and the relationship between properness and classification calibration;
- We related the notion of prediction calibration to classification calibration;
- We developed some new existence and uniqueness results for proper composite losses (which are new even in the binary case) which characterise when a loss has a proper composite representation in terms of the geometry of the associated superprediction set; and
- We showed that the attractive (simply parametrised) integral representation for binary proper losses cannot be extended to the multiclass case.
Our results suggest that in order to design losses for multiclass prediction problems it is helpful to
use the composite representation, and design the proper part via the Bayes risk as suggested for the
binary case in [1]. The proper composite representation is used in [22].
Acknowledgements

The work was performed whilst Elodie Vernet was visiting ANU and NICTA, and was supported by the Australian Research Council and NICTA, through backing Australia's ability.
References
[1] Andreas Buja, Werner Stuetzle and Yi Shen. Loss functions for binary class probability estimation and classification: Structure and applications. Technical report, University of Pennsylvania, November 2005. http://www-stat.wharton.upenn.edu/~buja/PAPERS/paper-proper-scoring.pdf.
[2] Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378, March 2007.
[3] Mark D. Reid and Robert C. Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731-817, March 2011.
[4] Mark D. Reid and Robert C. Williamson. Composite binary losses. Journal of Machine Learning Research, 11:2387-2422, 2010.
[5] Peter L. Bartlett, Michael I. Jordan and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, March 2006.
[6] Tong Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225-1251, 2004.
[7] Simon I. Hill and Arnaud Doucet. A framework for kernel-based multi-category classification. Journal of Artificial Intelligence Research, 30:525-564, 2007.
[8] Ambuj Tewari and Peter L. Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8:1007-1025, 2007.
[9] Yufeng Liu. Fisher consistency of multicategory support vector machines. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pages 289-296, 2007.
[10] Raúl Santos-Rodríguez, Alicia Guerrero-Curieses, Rocío Alaiz-Rodriguez and Jesús Cid-Sueiro. Cost-sensitive learning based on Bregman divergences. Machine Learning, 76:271-285, 2009. http://dx.doi.org/10.1007/s10994-009-5132-8.
[11] Hui Zou, Ji Zhu and Trevor Hastie. New multicategory boosting algorithms based on multicategory Fisher-consistent losses. The Annals of Applied Statistics, 2(4):1290-1306, 2008.
[12] Zhihua Zhang, Michael I. Jordan, Wu-Jun Li and Dit-Yan Yeung. Coherence functions for multicategory margin-based classification methods. Proceedings of the Twelfth Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[13] Tobias Glasmachers. Universal consistency of multi-class support vector classification. Advances in Neural Information Processing Systems (NIPS), 2010.
[14] Elodie Vernet, Robert C. Williamson and Mark D. Reid. Composite multiclass losses (with proofs). To appear in NIPS 2011, October 2011. http://users.cecs.anu.edu.au/~williams/papers/P188.pdf.
[15] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of Convex Analysis. Springer, Berlin, 2001.
[16] Nicolas S. Lambert. Elicitation and evaluation of statistical forecasts. Technical report, Stanford University, March 2010. http://www.stanford.edu/~nlambert/lambert_elicitation.pdf.
[17] Jesús Cid-Sueiro and Aníbal R. Figueiras-Vidal. On the structure of strict sense Bayesian cost functions and its applications. IEEE Transactions on Neural Networks, 12(3):445-455, May 2001.
[18] Yuri Kalnishkan and Michael V. Vyugin. The weak aggregating algorithm and weak mixability. Journal of Computer and System Sciences, 74:1228-1244, 2008.
[19] Robert R. Phelps. Lectures on Choquet's Theorem, volume 1757 of Lecture Notes in Mathematics. Springer, 2nd edition, 2001.
[20] Efim Mikhailovich Bronshtein. Extremal convex functions. Siberian Mathematical Journal, 19:6-12, 1978.
[21] Søren Johansen. The extremal convex functions. Mathematica Scandinavica, 34:61-68, 1974.
[22] Tim van Erven, Mark D. Reid and Robert C. Williamson. Mixability is Bayes risk curvature relative to log loss. Proceedings of the 24th Annual Conference on Learning Theory, 2011. To appear. http://users.cecs.anu.edu.au/~williams/papers/P186.pdf.
[23] Rolf Schneider. Convex Bodies: The Brunn-Minkowski Theory. Cambridge University Press, 1993.
3,588 | 4,249 | Learning to Agglomerate Superpixel Hierarchies
Viren Jain
Janelia Farm Research Campus
Howard Hughes Medical Institute
Srinivas C. Turaga
Brain & Cognitive Sciences
Massachusetts Institute of Technology
Kevin L. Briggman, Moritz N. Helmstaedter, Winfried Denk
Department of Biomedical Optics
Max Planck Institute for Medical Research
H. Sebastian Seung
Howard Hughes Medical Institute
Massachusetts Institute of Technology
Abstract
An agglomerative clustering algorithm merges the most similar pair of clusters
at every iteration. The function that evaluates similarity is traditionally handdesigned, but there has been recent interest in supervised or semisupervised settings in which ground-truth clustered data is available for training. Here we show
how to train a similarity function by regarding it as the action-value function of a
reinforcement learning problem. We apply this general method to segment images
by clustering superpixels, an application that we call Learning to Agglomerate
Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain
images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.
1 Introduction
A clustering is defined as a partitioning of a set of elements into subsets called clusters. Roughly
speaking, similar elements should belong to the same cluster and dissimilar ones to different clusters.
In the traditional unsupervised formulation of clustering, the true membership of elements in clusters
is completely unknown. Recently there has been interest in the supervised or semisupervised setting
[5], in which true membership is known for some elements and can serve as training data. The goal
is to learn a clustering algorithm that generalizes to new elements and new clusters. A convenient
objective function for learning is the agreement between the output of the algorithm and the true
clustering, for which the standard measurement is the Rand index [25].
Clustering is relevant for many application domains. One prominent example is image segmentation, the division of an image into clusters of pixels that correspond to distinct objects in the scene.
Traditional approaches treated image segmentation as unsupervised clustering. However, it is becoming popular to utilize a supervised clustering approach in which a segmentation algorithm is
trained on a set of images for which ground truth is known [23, 32]. The Rand index has become
increasingly popular for evaluating the accuracy of image segmentation [34, 3, 13, 15, 35], and has
recently been used as an objective function for supervised learning of this task [32].
This paper focuses on agglomerative algorithms for clustering, which iteratively merge pairs of clusters that maximize a similarity function. Equivalently, the merged pairs may be those that minimize
a distance or dissimilarity function, which is like a similarity function up to a change of sign. Speed
is a chief advantage of agglomerative algorithms. The number of evaluations of the similarity function is polynomial in the number of elements to be clustered. In contrast, the popular approach of
using a Markov random field to partition a graph with nodes that are the elements to be clustered,
and edge weights given by their similarities, involves a computation that can be NP-hard [18].
Inefficient inference becomes even more costly for learning, which generally involves many iterations of inference. To deal with this problem, many researchers have developed learning methods
for graphical models that depend on efficient approximate inference. However, once such approximations are introduced, many of the desirable theoretical properties of this framework no longer
apply and performance in practice may be arbitrarily poor, as several authors have recently noted
[36, 19, 8]. Here we avoid such issues by basing learning on agglomerative clustering, which is an
efficient inference procedure in the first place.
We show that an agglomerative clustering algorithm can be regarded as a policy for a deterministic
Markov decision process (DMDP) in which a state is a clustering, an action is a merging of two
clusters, and the immediate reward is the change in the Rand index with respect to the ground truth
clustering. In this formulation, the optimal action-value function turns out to be the optimal similarity function for agglomerative clustering. This DMDP formulation is helpful because it enables
the application of ideas from reinforcement learning (RL) to find an approximation to the optimal
similarity function.
Our formalism is generally applicable to any type of clustering, but is illustrated with a specific
application to segmenting images by clustering superpixels. These are defined as groups of pixels
from an oversegmentation produced by some other algorithm [27]. Recent research has shown
that agglomerating superpixels using a hand-designed similarity function can improve segmentation
accuracy [3]. It is plausible that it would be even more powerful to learn the similarity function from
training data. Here we apply our RL framework to accomplish this, yielding a new method called
Learning to Agglomerate Superpixel Hierarchies (LASH). LASH works by iteratively updating
an approximation to the optimal similarity function. It uses the current approximation to generate
a sequence of clusterings, and then improves the approximation on all possible actions on these
clusterings.
LASH is an instance of a strategy called on-policy control in RL. This strategy has seen many empirical successes, but the theoretical guarantees are rather limited. Furthermore, LASH is implemented
here for simplicity using infinite temporal discounting, though it could be extended to the case of
finite discounting. Therefore we empirically evaluated LASH on the problem of segmenting images
of brain tissue from serial electron microscopy, which has recently attracted a great deal of interest [6, 15]. We find that LASH substantially improves upon state of the art convolutional network
and random forest boundary-detection methods for this problem, reducing segmentation error (as
measured by the Rand error) by 50% as compared to the next best technique.
We also tried the simpler strategy of directly training superpixel similarities, and then applying single
linkage clustering [2]. This produced less accurate test set segmentations than LASH.
2 Agglomerative clustering as reinforcement learning
A Markov decision process (MDP) is defined by a state $s$, a set of actions $A(s)$ at each state, a function $P(s, a, s')$ specifying the probability of the $s \to s'$ transition after taking action $a \in A(s)$, and a function $R(s, a, s')$ specifying the immediate reward. A policy $\pi$ is a map from states to actions, $a = \pi(s)$. The goal of reinforcement learning (RL) is to find a policy $\pi$ that maximizes the expected value of total reward.
Total reward is defined as the sum of immediate rewards $\sum_{t=0}^{T-1} R(s_t, a_t)$ up to some time horizon $T$. Alternatively, it is defined as the sum of discounted immediate rewards, $\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)$, where $0 \leq \gamma \leq 1$ is the discount factor. Many RL methods are based on finding an optimal action-value function $Q^*(s, a)$, which is defined as the sum of discounted rewards obtained by taking action $a$ at state $s$ and following the optimal policy thereafter. An optimal policy can be extracted from this function by $\pi^*(s) = \operatorname{argmax}_a Q^*(s, a)$.
We can define agglomerative clustering as an MDP. Its state $s$ is a clustering of a set of objects. For each pair of clusters in $s_t$, there is an action $a_t \in A(s_t)$ that merges them to yield the clustering $s_{t+1} = a_t(s_t)$. Since the merge action is deterministic, we have the special case of a deterministic MDP, rather than a stochastic one. To define the rewards of the MDP, we make use of the Rand index, a standard measure of agreement between two clusterings of the same set [25]. A clustering is equivalent to classifying all pairs of objects as belonging to the same cluster or different clusters. The Rand index $RI(s, s')$ is the fraction of object pairs on which the clusterings $s$ and $s'$ agree. Therefore, we can define the immediate reward of action $a$ as the resulting increase in the Rand index with respect to a ground truth clustering $s^*$, $R(s, a) = RI(a(s), s^*) - RI(s, s^*)$.
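A direct implementation (ours) of this pairwise definition of the Rand index:

```python
from itertools import combinations

# Direct implementation (ours) of the Rand index: the fraction of element
# pairs on which two clusterings agree about same-cluster membership.
def rand_index(labels_a, labels_b):
    n_agree, n_pairs = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        n_agree += int(same_a == same_b)
        n_pairs += 1
    return n_agree / n_pairs

print(rand_index([0, 0, 1, 1], [0, 0, 1, 2]))   # 5 of 6 pairs agree: 0.8333...
```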
An agglomerative clustering algorithm is a policy of this MDP, and the optimal similarity function is given by the optimal action-value function $Q^*$. The sum of undiscounted immediate rewards "telescopes" to the simple result $\sum_{t=0}^{T-1} R(s_t, a_t) = RI(s_T, s^*) - RI(s_0, s^*)$ [21]. Therefore RL for a finite time horizon $T$ is equivalent to maximizing the Rand index $RI(s_T, s^*)$ of the clustering at time $T$.
We will focus on the simple case of infinite discounting ($\gamma = 0$). Then the optimal action-value function $Q^*(s, a)$ is equal to $R(s, a)$. In other words, $R(s, a)$ is the best similarity function. We know $R(s, a)$ exactly for the training data, but we would also like it to apply to data for which ground truth is unknown. Therefore we train a function approximator $Q_\theta$ so that $Q_\theta(s, a) \approx R(s, a)$ on the training data, and hope that it generalizes to the test data. The following procedure is a simple way of doing this.
1. Generate an initial sequence of clusterings $(s_1, \dots, s_T)$ by using $R(s, a)$ as a similarity function: iterate $a_t = \operatorname{argmax}_a R(s_t, a)$ and $s_{t+1} = a_t(s_t)$, terminating when $\max_a R(s_t, a) \leq 0$.
2. Train the parameters $\theta$ so that $Q_\theta(s_t, a) \approx R(s_t, a)$ for all $s_t$ and for all $a \in A(s_t)$.
3. Generate a new sequence of clusterings by using $Q_\theta(s, a)$ as a similarity function: iterate $a_t = \operatorname{argmax}_a Q_\theta(s_t, a)$ and $s_{t+1} = a_t(s_t)$, terminating when $\max_a Q_\theta(s_t, a) \leq 0$.
4. Goto 2.
Here the clustering $s_1$ is the trivial one in which each element is its own cluster. (The termination of the clustering is equivalent to the continued selection of a "do-nothing" action that leaves the clustering the same, $s_{t+1} = s_t$.) This is an example of "on-policy" learning, because the function approximator $Q_\theta$ is trained on clusterings generated by using it as a policy. It makes intuitive sense to optimize $Q_\theta$ for the kinds of clusterings that it actually sees in practice, rather than for all possible clusterings. However, there is no theoretical guarantee that such on-policy learning will converge, since we are using a nonlinear function approximation. Guarantees only exist if the action-value function is represented by a lookup table or a linear approximation. Nevertheless, the nonlinear approach has achieved practical success in a number of problem domains. Later we will present empirical results supporting the effectiveness of on-policy learning in our application.
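A compact sketch (ours) of the rollout and the on-policy training round just described. The clustering is held as an array of per-element labels; the merge-candidate set, the feature map, and the regressor behind $Q_\theta$ are deliberately left abstract, and the first rollout can use $R$ itself as the score, per step 1 of the procedure. `rand_index` is as defined above.

```python
import numpy as np

def merge(s, a):
    """Action a = (keep, absorb): relabel cluster a[1] as a[0]."""
    s = s.copy()
    s[s == a[1]] = a[0]
    return s

def immediate_reward(s, a, s_true):
    return rand_index(merge(s, a), s_true) - rand_index(s, s_true)

def greedy_agglomerate(s0, candidates, score, max_steps=10000):
    """Iterate a_t = argmax_a score(s_t, a) while the best score is positive."""
    s, trajectory = np.array(s0), []
    for _ in range(max_steps):
        pairs = candidates(s)
        if not pairs:
            break
        best_val, best_a = max((score(s, a), a) for a in pairs)
        if best_val <= 0:                      # termination condition
            break
        trajectory.append((s.copy(), list(pairs)))
        s = merge(s, best_a)                   # s_{t+1} = a_t(s_t)
    return s, trajectory

def train_round(s0, s_true, candidates, Q, fit):
    """One on-policy round: roll out with Q, then fit Q toward R(s_t, a)."""
    _, traj = greedy_agglomerate(s0, candidates, lambda s, a: Q(s, a))
    examples = [(s, a) for s, pairs in traj for a in pairs]
    targets = [immediate_reward(s, a, s_true) for s, a in examples]
    fit(examples, targets)                     # update theta so Q ~ R
```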
The assumption of infinite discounting removes a major challenge of RL, dealing with temporally delayed reward. Are we losing anything by this assumption? If our approximation to the action-value function were perfect, $Q_\theta(s, a) = R(s, a)$, then agglomerative clustering would amount to greedy maximization of the Rand index. It is straightforward to show that this yields the clustering that is the global maximum. In practice, the approximation will be imperfect, and extending the above procedure to finite discounting could be helpful.
3 Agglomerating superpixels for image segmentation
The introduction of the Berkeley segmentation database (BSD) provoked a renaissance of the boundary detection and segmentation literature. The creation of a ground-truth segmentation database enabled learning-driven methods for low-level boundary detection, which were found to outperform
classic methods such as Canny's [23, 10]. Global and multi-scale features were added to improve
performance even further [26, 22, 29], and recently learning methods have been developed that
directly optimize measures of segmentation performance [32, 13].
[Figure 1 plot: "Precision-Recall of Connected Pixel Pairs." Curves for Standard CN, ilastik, BLOTC CN, MALIS CN, Single Linkage, and LASH, plotted over Recall (roughly 0.86-0.98) and Precision (roughly 0.9-1). An accompanying table lists Rand Error, Pair Recall, and Pair Precision for Baseline, CN [14, 33], ilastik [31], BLOTC CN [13], MALIS CN [32], Single Linkage, and LASH.]
Figure 1: Performance comparison on a one-megavoxel test set; parameters, such as the binarization threshold for the convolutional network (CN) affinity graphs, were determined based on the optimal value on the training set. CNs used a field of view of $16 \times 16 \times 16$, ilastik used a field of view of $23 \times 23 \times 23$, and LASH used a field of view of $50 \times 50 \times 50$. LASH leads to a substantial decrease in Rand error (1 - Rand index), and much higher connected pixel-pair precision at similar levels of recall as compared to other state-of-the-art methods. The "Connected pixel-pairs" curve measures the accuracy of connected pixel pairs relative to ground truth. This measure corrects for the imbalance in the Rand error for segmentations in which most pixels are disconnected from one another, as in the case of EM reconstruction of dense brain wiring. For example, "Trivial Baseline" above represents the trivial segmentation in which all pixels are disconnected from one another, and achieves relatively low Rand error but of course zero connected-pair recall.
However, boundary detectors alone have so far failed to produce segmentations that rival human
levels of accuracy. Therefore many recent studies use boundary detectors to generate an oversegmentation of the image into fragments, and then attempt to cluster the "superpixels". This approach
has been shown to improve the accuracy of segmenting natural images [3, 30].
A similar approach [2, 1, 17, 35, 16] has also been employed to segment 3d nanoscale images from
serial electron microscopy [11, 9]. In principle, it should be possible to map the connections between
neurons by analyzing these images [20, 12, 28]. Since this analysis is highly laborious, it would be
desirable to have automated computer algorithms for doing so [15]. First, each synapse must be
identified. Second, the "wires" of the brain, its axons and dendrites, must be traced, i.e., segmented.
If these two tasks are solved, it is then possible to establish which pairs of neurons are connected by
synapses.
For our experiments, images of rabbit retina inner plexiform layer were acquired using Serial Block
Face Scanning Electron Microscopy (SBF-SEM) [9, 4]. The tissue was specially stained to enhance
cell boundaries while suppressing contrast from intracellular structures (e.g., mitochondria). The
image volume was acquired at $22 \times 22 \times 25$ nm resolution, yielding a nearly isotropic 3d dataset
with excellent slice-to-slice registration. Two training sets were created by human tracing and proofreading of subsets of the 3d image. The training sets were augmented with their eight 3d orthogonal
rotations and reflections to yield 16 training images that contained roughly 80 megavoxels of labeled
training data. A separate one megavoxel labeled test set was used to evaluate algorithm performance.
3.1 Boundary Detectors
For comparison purposes, as well as to provide supervoxels for LASH, we tested several state of the
art boundary detection algorithms on the data. A convolutional network (CN) was trained to produce
affinity graphs that can be segmented using connected components or watershed [14, 33]. We also
trained CNs using MALIS and BLOTC, which are recently proposed machine learning algorithms
that optimize true metrics of segmentation performance. MALIS directly optimizes the Rand index
[32]. BLOTC, originally introduced for 2d boundary maps and here generalized to 3d affinity graphs,
[Figure 2 panels: "SBF-SEM Z-X Reslice," "Human Labeling," "BLOTC CN," and "LASH" (axes z, x), alongside a "Supervoxel Sizes" histogram of % of volume occupied (0-40) versus supervoxel size in voxels, binned 0 to 100, 100 to 1,000, 1,000 to 1e4, 1e4 to 1e5, and more than 1e5.]
Figure 2: (Left) Visual comparison of output from a state-of-the-art boundary detector, BLOTC CN [13], and Learning to Agglomerate Superpixel Hierarchies (LASH). Image and segmentations are from a Z-X axis resectioning of the $100 \times 100 \times 100$ voxel test set. Segmentations were performed in 3d though only a single 2d $100 \times 100$ reslice is shown here. White circle shows an example location in which BLOTC CN merged two separate objects due to weak staining in an adjacent image slice; orange ellipse shows an example location in which BLOTC CN split up a single thin object. LASH avoids both of these errors. (Right) Distribution of supervoxel sizes, as measured by percentage of image volume occupied by specific size ranges of supervoxels.
optimizes "warping error," a measure of topological disagreement derived from concepts introduced in digital topology [13].
Finally, we trained "ilastik," a random-forest based boundary detector [31]. Unlike the CNs, which operated on the raw image and learned features as part of the training process, ilastik uses a predefined set of image features that represented low-level image structure such as intensity gradients and texture. The CNs used a field of view of $16 \times 16 \times 16$ voxels to make decisions about any particular image location, while ilastik used features from a field of view of up to $23 \times 23 \times 23$ voxels.
To generate segmentations of the test set, we found connected components of the thresholded boundary detector output, and then performed marker-based watershed to grow out these regions until they
touched. Figure 1 shows the Rand index attained by the CNs and ilastik. Here we convert the index
into an error measure by subtracting it from 1. Segmentation performance is sensitive to the threshold used to binarize boundary detector output, so we used the threshold that minimized Rand error
on the training set.
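A sketch (ours) of this post-processing step using SciPy and scikit-image; the threshold would be chosen on the training set as described:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Sketch (ours) of the post-processing described above: threshold the
# boundary-detector output, label connected components, and grow the
# resulting markers with watershed until regions touch.
def segment_from_boundary_map(prob, threshold):
    """prob: 3d array; high values mean 'inside an object'."""
    markers, _ = ndimage.label(prob > threshold)    # connected components
    # Negate so that high-probability voxels flood first as markers grow.
    return watershed(-prob, markers)

seg = segment_from_boundary_map(np.random.rand(50, 50, 50), threshold=0.9)
```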
3.2 Supervoxel Agglomeration
Supervoxels were generated from BLOTC convolutional network output, using connected components applied at a high threshold (0.9) to avoid undersegmented regions (in the test set, there was
only one supervoxel in the initial oversegmentation which contained more than one ground truth
region). Regions were then grown out using marker-based watershed. The size of the supervoxels
varied considerably, but the majority of the image volume was assigned to supervoxels larger than
1,000 voxels in size (as shown in Figure 2, right).
For each pair of neighboring supervoxels, we computed a 138-dimensional feature vector, as described in the Appendix. This was used as input to the learned similarity function $Q_\theta$, which we represented by a decision-tree boosting classifier [7]. We followed the procedure given in Section 2, but with two modifications. First, the examples used in each training iteration were collected by segmenting all the images in the training set, not only a single image. Second, $Q_\theta$ was trained to approximate $H(R(s_t, a))$ rather than $R(s_t, a)$, where $H$ is the Heaviside step function and the log-loss was optimized. This was done because our function approximator was suitable for classification, but
some other approximator suitable for regression could also be used. The loop in the procedure of
Section 2 was terminated when training error stopped decreasing by a significant amount, after 3
cycles. Then the learned similarity function was applied to agglomerate supervoxels in the test set
to yield the results in Figure 1. The agglomeration terminated after around 5000 steps.
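A sketch (ours) of this classification-based training target, with scikit-learn's gradient boosting standing in for the decision-tree boosting of [7]:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Sketch (ours): train Q as a classifier of H(R(s_t, a)) on the 138-d
# supervoxel-pair feature vectors. sklearn's gradient boosting stands in
# for the decision-tree boosting of [7]; the log-loss is its default.
clf = GradientBoostingClassifier(n_estimators=200)

def fit(features, rewards):
    y = (np.asarray(rewards) > 0).astype(int)   # Heaviside of the reward
    clf.fit(np.asarray(features), y)

def similarity(feature_vector):
    # Positive-class probability, shifted so that a value > 0 means "merge".
    return clf.predict_proba([feature_vector])[0, 1] - 0.5
```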
The results show a substantial decrease in Rand error compared to state-of-the-art techniques (MALIS and BLOTC CN). A potential subtlety in interpreting these results is the small absolute values of the
Rand error for all of these techniques. The Rand error is defined as the probability of classifying
pairs of voxels as belonging to the same or different clusters. This classification task is highly
imbalanced, because the vast majority of voxel pairs belong to different ground truth clusters. Hence
even a completely trivial segmentation in which every voxel is its own cluster can achieve fairly low
Rand error (Figure 1). Precision and recall are better quantifications of performance at imbalanced
classification [23]. Figure 1 shows that LASH achieves much higher precision at similar recall.
For the task of segmenting neurons in EM images, high precision is especially important as false
positives can lead to false positive neuron-pair connections.
Visual comparison of segmentation performance is shown in Figure 2. LASH avoids both split and
merge errors that result from segmenting BLOTC CN output. BLOTC CN in turn was previously
shown to outperform other techniques such as Boosted Edge Learning, multi-scale normalized cut,
and gPb-OWT-UCM [13].
3.3 Naive training of the similarity function on superpixel pairs
In the conventional algorithms for agglomerative clustering, the similarity $S(A, B)$ of two clusters $A$ and $B$ can be reduced to the similarities $S(x, y)$ of elements $x \in A$ and $y \in B$. For example, single linkage clustering assumes that $S(A, B) = \max_{x \in A, y \in B} S(x, y)$. The maximum operation is replaced by the minimum or average in other common algorithms. LASH does not impose any such constraint of reducibility on the similarity function. Consequently, LASH must truly compute new similarities after each agglomerative step. In contrast, conventional algorithms can start by computing the matrix of similarities between the elements to be clustered, and all further similarities between clusters follow from trivial computations.
Therefore another method of learning agglomerative clustering is to train a similarity function on pairs of superpixels only, and then apply a standard agglomerative algorithm such as single linkage clustering. This has previously been done for images from serial electron microscopy [2]. (Note that single linkage clustering is equivalent to creating a graph in which nodes are superpixels and edge weights are their similarities, and then finding the connected components of the thresholded graph.) As shown in Figure 1, clustering superpixels in this way improves upon boundary detection algorithms. However, the improvement is substantially less than achieved by LASH.
Discussion
Why did LASH achieve better accuracy than other approaches? One might argue that the comparison
is unfair, because the CNs and ilastik detected boundaries using a field of view considerably smaller
than that used in the LASH features (up to $50 \times 50 \times 50$ for the SVF feature computation). If these
competing methods were allowed to use the same context, perhaps their accuracy would improve
dramatically. This is possible, but training time would also increase dramatically. Training a CN
with MALIS or BLOTC on 80 megavoxels of training data with a $16^3$ field of view already takes on the order of a week, using an optimized GPU implementation [24]. Adding the additional layers to the CN required to achieve a field of view of $50^3$ might require months of additional training.¹ In contrast, the entire LASH training process is completed within roughly one day. This can be
attributed to the efficiency gains associated with computations on supervoxels rather than voxels.
In short, LASH is more accurate because it is efficient enough to utilize more image context in its
computations.
Why does LASH outperform the naive method of directly training superpixel similarities used in
single linkage clustering? The naive method uses the same amount of image context. In this case,
¹ Using a much larger field of view with a CN will likely require new architectures that incorporate multiscale capabilities.
Figure 3: Example of SVF feature computation. Blue and red are two different supervoxels. Left panel shows
rendering of the objects, right panel shows smoothed vector fields (thin arrows), along with chosen center-of-mass orientation vectors (thick blue/red lines) and a line connecting the two centers of mass (thick green line).
The angle between the thick blue/red and green lines is used as a feature during LASH.
LASH is probably superior because it trains the similarities by optimizing the clustering that they
actually yield. The naive method resembles LASH, but with the modification that the action-value
function is trained only for the actions possible on the clustering s1 rather than on the entire sequence
of clusterings (see Step 2 of the procedure in Section 2).
We have conceptualized LASH in the framework of reinforcement learning. Previous work has
applied reinforcement learning to other structured prediction problems [21]. An additional closely
related approach to structured prediction is SEARN, introduced by Daumé et al. [8]. As in our
approach, SEARN uses a single classifier repeatedly on a (structured) input to iteratively solve an
inference problem. The major difference between our approach and theirs is the way the classifier
is trained. In particular, SEARN begins with a manually specified policy (given by ground truth or heuristics) and then iteratively degrades the policy as a classifier is trained and "replaces" the initial policy. In our approach, the initial policy may exhibit poor performance (i.e., for random initial $\theta$),
and then improves through training.
We have implemented LASH with infinite discounting of future rewards, but extending to finite discounting might produce better results. Generalizing the action space to include splitting of clusters
as well as agglomeration might also be advantageous. Finally, the objective function optimized by
learning might be tailored to better reflect more task-specific criteria, such as the number of locations
that a human might have to correct ("proofread") to yield an error-free segmentation by semiautomated
means. These directions will be explored in future work.
Appendix
Features of supervoxel pairs used by the similarity function
The similarity function that we trained with LASH required as input a set of features for each supervoxel pair that might be merged. For each supervoxel pair, we first computed a "decision point," defined as the midpoint of the shortest line that connects any two points of the supervoxels. From this decision point, we computed several types of features that encode information about the underlying affinity graph as well as the shape of the supervoxel objects near the decision point: (1) size
of each supervoxel in the pair, (2) distance between the two supervoxels, (3) analog affinity value
of the graph edge at which the two supervoxels would merge if grown out using watershed, and
the distance from the decision point to this edge, (4) "Smoothed Vector Field" (SVF), a novel shape feature described below, computed at various spatial scales (maximum $50 \times 50 \times 50$). This feature
measures the orientation of each supervoxel near the decision point.
Finally, for each supervoxel in the pair we also included the above features for the closest 4 other
decision points that involve that supervoxel. Overall, this feature set yielded a 138 dimensional
feature vector for each supervoxel pair.
The smoothed vector field (SVF) shape feature attempts to determine the orientation of a supervoxel
near some specific location (e.g., the decision point used in reference to some other supervoxel).
7
The main challenge in computing such an orientation is dealing with high-frequency noise and
irregularities in the precise shape of the supervoxel. We developed a novel approach that deals with
this issue by smoothing a vector field derived from image moments. For a binary 3d image, SVF is
computed in the following manner:
1. A spherical mask of radius 5 is selected around each image location $I_{x,y,z}$, and $\vec{v}_{x,y,z}$ is then computed as the largest eigenvector of the $3 \times 3$ second-order image moment matrix for that window.
2. The vector field is smoothed via $\sim$3 iterations of "Ising-like" interactions among nearest-neighbor vector fields: $\vec{v}_{x,y,z} \leftarrow f\!\left( \sum_{i=x-1}^{x+1} \sum_{j=y-1}^{y+1} \sum_{k=z-1}^{z+1} \vec{v}_{i,j,k} \right)$, where $f$ represents a (non-linear) renormalization such that the magnitude of each vector remains 1.
3. The smoothed vector at the center of mass of the supervoxel is used to compute the angular orientation of the supervoxel (see Figure 3).
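A sketch (ours) of steps 1 and 2 on a binary 3d volume; windows are restricted to interior voxels, and the eigenvector sign ambiguity (an orientation rather than a direction) is left unresolved:

```python
import numpy as np

# Sketch (ours) of SVF steps 1-2 on a binary 3d volume `img`.
def moment_orientation(img, x, y, z, r=5):
    """Largest eigenvector of the second-order moment matrix in a window."""
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (xx ** 2 + yy ** 2 + zz ** 2) <= r * r
    win = img[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1] * ball
    coords = np.stack([c[win > 0] for c in (xx, yy, zz)], axis=1).astype(float)
    if len(coords) < 2:
        return np.array([1.0, 0.0, 0.0])
    coords -= coords.mean(axis=0)
    M = coords.T @ coords                  # 3x3 second-order moment matrix
    _, V = np.linalg.eigh(M)
    return V[:, -1]                        # eigenvector of largest eigenvalue

def smooth_field(v, iters=3):
    """Ising-like smoothing: sum the 3x3x3 neighborhood, then renormalize (f)."""
    for _ in range(iters):
        acc = np.zeros_like(v)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    acc += np.roll(v, (dx, dy, dz), axis=(0, 1, 2))
        norm = np.linalg.norm(acc, axis=-1, keepdims=True)
        v = acc / np.maximum(norm, 1e-12)  # keep each vector at unit length
    return v
```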
References
[1] B. Andres, J. H. Kappes, U. Köthe, C. Schnörr, and F. A. Hamprecht. An empirical comparison of inference algorithms for graphical models with higher order factors using OpenGM. In M. Goesele, S. Roth, A. Kuijper, B. Schiele, and K. Schindler, editors, Pattern Recognition, volume 6376 of Lecture Notes in Computer Science, pages 353-362. Springer, 2010.
[2] B. Andres, U. Koethe, M. Helmstaedter, W. Denk, and F. Hamprecht. Segmentation of SBFSEM volume data of neural tissue by hierarchical classification. In Proceedings of the 30th DAGM Symposium on Pattern Recognition, pages 142-152. Springer, 2008.
[3] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. From contours to regions: An empirical evaluation. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 0:2294-2301, 2009.
[4] K. L. Briggman and W. Denk. Towards neural circuit reconstruction with volume electron microscopy techniques. Current Opinion in Neurobiology, 16(5):562-570, 2006.
[5] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. The MIT Press, page 528, 2010.
[6] D. Chklovskii, S. Vitaladevuni, and L. Scheffer. Semi-automated reconstruction of neural circuits using electron microscopy. Current Opinion in Neurobiology, 2010.
[7] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1):253-285, 2002.
[8] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Machine Learning, 75:297-325, 2009. 10.1007/s10994-009-5106-x.
[9] W. Denk and H. Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol, 2(11):e329, 2004.
[10] P. Dollar, Z. Tu, and S. Belongie. Supervised learning of edges and object boundaries. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2:1964-1971, 2006.
[11] K. J. Hayworth, N. Kasthuri, R. Schalek, and J. W. Lichtman. Automating the collection of ultrathin serial sections for large volume TEM reconstructions. Microscopy and Microanalysis, 12(S02):86-87, 2006.
[12] M. Helmstaedter, K. L. Briggman, and W. Denk. 3D structural imaging of the brain with photons and electrons. Current Opinion in Neurobiology, 18(6):633-641, 2008.
[13] V. Jain, B. Bollmann, M. Richardson, D. Berger, M. Helmstaedter, K. Briggman, W. Denk, J. Bowden, J. Mendenhall, W. Abraham, K. Harris, N. Kasthuri, K. Hayworth, R. Schalek, J. Tapia, J. Lichtman, and H. Seung. Boundary learning by optimization with topological constraints. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.
[14] V. Jain, J. F. Murray, F. Roth, S. C. Turaga, V. Zhigulin, K. L. Briggman, M. N. Helmstaedter, W. Denk, and H. S. Seung. Supervised learning of image restoration with convolutional networks. Computer Vision, IEEE International Conference on, 0:1-8, 2007.
[15] V. Jain, H. Seung, and S. Turaga. Machines that learn to segment images: a crucial technology for connectomics. Current Opinion in Neurobiology, 2010.
[16] E. Jurrus, R. Whitaker, B. W. Jones, R. Marc, and T. Tasdizen. An optimal-path approach for neural circuit reconstruction. In Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on, pages 1609-1612, May 2008.
[17] V. Kaynig, T. Fuchs, and J. M. Buhmann. Neuron geometry extraction by perceptual grouping in ssTEM images. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.
[18] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 147-159, 2004.
[19] A. Kulesza, F. Pereira, et al. Structured learning with approximate inference. Advances in Neural Information Processing Systems, 20, 2007.
[20] J. W. Lichtman and J. R. Sanes. Ome sweet ome: what can the genome tell us about the connectome? Curr. Opin. Neurobiol., 18(3):346-353, Jun 2008.
[21] F. Maes, L. Denoyer, and P. Gallinari. Structured prediction with reinforcement learning. Machine Learning, 77(2):271-301, 2009.
[22] M. Maire, P. Arbeláez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1-8. IEEE, 2008.
[23] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Patt. Anal. Mach. Intell., pages 530-549, 2004.
[24] J. Mutch, U. Knoblich, and T. Poggio. CNS: a GPU-based framework for simulating cortically-organized networks. Technical report, Massachusetts Institute of Technology, 2010.
[25] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850, 1971.
[26] X. Ren. Multi-scale improves boundary detection in natural images. In Proceedings of the 10th European Conference on Computer Vision: Part III, pages 533-545. Springer-Verlag, 2008.
[27] X. Ren and J. Malik. Learning a classification model for segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2, page 10. IEEE Computer Society, 2003.
[28] H. Seung. Reading the book of memory: Sparse sampling versus dense mapping of connectomes. Neuron, 62(1):17-29, 2009.
[29] E. Sharon, A. Brandt, and R. Basri. Fast multiscale image segmentation. In Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, volume 1, pages 70-77. IEEE, 2000.
[30] E. Sharon, M. Galun, D. Sharon, R. Basri, and A. Brandt. Hierarchy and adaptivity in segmenting visual scenes. Nature, 442(7104):810-813, 2006.
[31] C. Sommer, C. Straehle, U. Köthe, and F. A. Hamprecht. ilastik: Interactive learning and segmentation toolkit. In 8th IEEE International Symposium on Biomedical Imaging (ISBI 2011), in press, 2011.
[32] S. C. Turaga, K. L. Briggman, M. Helmstaedter, W. Denk, and H. S. Seung. Maximin affinity learning of image segmentation. In NIPS, 2009.
[33] S. C. Turaga, J. F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H. S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511-538, 2010.
[34] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):929, 2007.
[35] S. N. Vitaladevuni and R. Basri. Co-clustering of image segments using convex optimization applied to EM neuronal reconstruction. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.
[36] M. Wainwright. Estimating the wrong graphical model: Benefits in the computation-limited setting. The Journal of Machine Learning Research, 7:1829-1859, 2006.
3,589 | 425 | Adjoint-Functions and Temporal Learning
Algorithms in Neural Networks
N. Toomarian and J. Barhen
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
Abstract
The development of learning algorithms is generally based upon the minimization of an energy function. It is a fundamental requirement to compute the gradient of this energy function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural gain,etc.
In principle, this requires solving a system of nonlinear equations for each
parameter of the model, which is computationally very expensive. A new
methodology for neural learning of time-dependent nonlinear mappings is
presented. It exploits the concept of adjoint operators to enable a fast
global computation of the network's response to perturbations in all the
systems parameters. The importance of the time boundary conditions of
the adjoint functions is discussed. An algorithm is presented in which
the adjoint sensitivity equations are solved simultaneously (Le., forward
in time) along with the nonlinear dynamics of the neural networks. This
methodology makes real-time applications and hardware implementation
of temporal learning feasible.
1
INTRODUCTION
Early efforts in the area of training artificial neural networks have largely focused
on the study of schemes for encoding nonlinear mapping characterized by timeindependent inputs and outputs. The most widely used approach in this context
has been the error backpropagation algorithm (Werbos, 1974), which involves either
static i.e., "feedforward" (Rumelhart, 1986), or dynamic i.e., "recurrent" ( Pineda,
1988) networks. In this context ( Barhen et aI, 1989, 1990a, 1990b), have exploited
the concepts of adjoint operators and terminal attractors. These concepts provide
a firm mathematical foundation for learning such mappings with dynamical neural
networks, while achieving a considerable reduction in the overall computational
costs (Barhen et aI, 1991).
Recently, there has been a wide interest in developing learning algorithms capable
of modeling time-dependent phenomena (Hirsch, 1989). In a more restricted, application-oriented domain, attention has focused on learning temporal sequences. The
problem can be formulated as minimization, over an arbitrary but finite time interval, of an appropriate error functional. Thus, the gradients of the functional with
respect to the various parameters of the neural architecture, e.g., synaptic weights,
neural gains, etc. must be computed.
A number of methods have been proposed for carrying out this task, a recent survey of which can be found in (Pearlmutter, 1990). Here, we will briefly mention
only those which are relevant to our work. Williams and Zipser (1989) discuss a
scheme similar to the well known "Forward Sensitivity Equations" of sensitivity
theory (Cacuci, 1981 and Toomarian et aI, 1987), in which the same set of sensitivity equations has to be solved again and again for each network parameter of
interest . Clearly, this is computationally very expensive, and scales poorly to large
systems. Pearlmutter (1989), on the other hand, describes a variational approach
which yields a set of equations which are similar to the "Adjoint Sensitivity Equations" (Cacuci, 1981 and Toomarian et aI, 1987). These equations must be solved
backwards in time and involve storage of the state variables from the activation network dynamics, which is impractical. These authors ( Toomarian and Barhen, 1990
) have suggested a new method which, in contradistinction to previous approaches,
solves the adjoint system of equations forward in time, concomitantly with the neural activation dynamics. A potential drawback of this method lies in the fact that
these adjoint equations have to be treated in terms of distributions which precludes
straight-forward numerical implementation. Finally, Pineda (1990), suggests combining the existence of disparate time scales with a heuristic gradient computation.
However, the underlying adiabatic assumptions and highly "approximate" gradient
evaluation technique place severe limits on the applicability of his approach.
In this paper we introduce a rigorous derivation of two novel systems of adjoint equations, which can be solved simultaneously (i.e., forward in time) with the network
dynamics, and thereby enable the implementation of temporal learning algorithms
in a computationally efficient manner. Numerical simulations and comparison with
previously available results will be presented elsewhere( Toomarian and Barhen,
1991).
2
TEMPORAL LEARNING
We formalize a neural network as an adaptive dynamical system whose temporal
evolution is governed by the following set of coupled nonlinear differential equations:
\dot{u}_n + \kappa_n u_n = g_n\left[ \gamma_n \left( \sum_m T_{nm} u_m + I_n \right) \right], \qquad t > 0 \qquad (1)
where u_n represents the output of the n-th neuron [u_n(0) being the initial state],
and T_{nm} denotes the synaptic coupling from the m-th to the n-th neuron. The
constant \kappa_n characterizes the decay of neuron activity. The sigmoidal function g(\cdot)
modulates the neural response, with gain given by \gamma; typically, g(\gamma x) = \tanh(\gamma x).
The time-dependent "source" term, I_n(t), encodes the component-contribution of the
target temporal pattern a(t) via the expression

I_n(t) = \begin{cases} a_n(t) & \text{if } n \in S_X \\ 0 & \text{if } n \in S_H \cup S_Y \end{cases} \qquad (2)
The topographic input, output, and hidden network partitions S_X, S_Y and S_H,
respectively, are architectural requirements related to the encoding of mapping-type
problems. Details are given in Barhen et al (1989).
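To make the activation dynamics concrete, the following is a minimal sketch of integrating Eq. (1) with a forward-Euler scheme. The tanh nonlinearity follows the text; the network size, step size, and the example source term are illustrative assumptions, and the function names are ours.

```python
import numpy as np

def simulate_activation(u0, T, I_of_t, kappa, gamma, dt=0.01, steps=1000):
    """Forward-Euler integration of the activation dynamics, Eq. (1):
    du_n/dt = -kappa_n u_n + g(gamma_n (sum_m T_nm u_m + I_n(t))),
    with g(x) = tanh(x) as in the text."""
    u = u0.copy()
    traj = [u.copy()]
    for k in range(steps):
        t = k * dt
        drive = gamma * (T @ u + I_of_t(t))   # gamma_n (sum_m T_nm u_m + I_n)
        u = u + dt * (-kappa * u + np.tanh(drive))
        traj.append(u.copy())
    return np.array(traj)

# Illustrative usage with a 4-neuron network (all sizes and values assumed):
N = 4
rng = np.random.default_rng(0)
T = 0.5 * rng.standard_normal((N, N))               # synaptic couplings T_nm
kappa = np.ones(N)                                  # decay constants kappa_n
gamma = np.ones(N)                                  # gains gamma_n
I = lambda t: np.array([np.sin(t), 0.0, 0.0, 0.0])  # source term I_n(t)
traj = simulate_activation(np.zeros(N), T, I, kappa, gamma)
```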
To proceed formally with the development of a temporal learning algorithm, we
consider an approach based upon the minimization of a "neuromorphic" energy
functional E, given by the following expression
E(u, p) = \frac{1}{2} \sum_n \int_{t_0}^{t_f} r_n^2 \, dt = \int_{t_0}^{t_f} F \, dt \qquad (3)
where
r_n(t) = \begin{cases} u_n(t) - a_n(t) & \text{if } n \in S_Y \\ 0 & \text{if } n \in S_X \cup S_H \end{cases} \qquad (4)
In our model the internal dynamical parameters of interest are the synaptic
strengths T_{nm} of the interconnection topology, the characteristic decay constants
\kappa_n, and the gain parameters \gamma_n. Therefore, the vector of system parameters
(Barhen et al., 1990b) should be

p = \{ T_{11}, \ldots, T_{NN}, \kappa_1, \ldots, \kappa_N, \gamma_1, \ldots, \gamma_N \} \qquad (5a)
In this paper, however, for illustration purposes and simplicity, we will limit ourselves in terms of parameters to the synaptic interconnections only. Hence, the
vector of system parameters will have M = N^2 elements:

p = \{ T_{11}, \ldots, T_{NN} \} \qquad (5b)
We will assume that elements of p are, in principle, independent. Furthermore, we
will also assume that, for a specific choice of parameters and set of initial conditions,
a unique solution of Eq. (1) exists. Hence, u is an implicit function of p.
Lyapunov stability requires the energy functional to be monotonically decreasing
during learning time, \tau. This translates into

\frac{dE}{d\tau} = \sum_{\mu=1}^{M} \frac{dE}{dp_\mu} \frac{dp_\mu}{d\tau} < 0 \qquad (6)

Thus, one can always choose, with \eta > 0,

\frac{dp_\mu}{d\tau} = -\eta \, \frac{dE}{dp_\mu} \qquad (7)
Integrating the above dynamical system over the interval [\tau, \tau + \Delta\tau], one obtains

p_\mu(\tau + \Delta\tau) = p_\mu(\tau) - \eta \int_{\tau}^{\tau + \Delta\tau} \frac{dE}{dp_\mu} \, d\tau \qquad (8)
Equation (8) implies that, in order to update a system parameter p_\mu, one must
evaluate the gradient of E with respect to p_\mu in the interval [\tau, \tau + \Delta\tau]. Furthermore,
using Eq. (3) and observing that the time integral and the derivative with respect to
p_\mu permute, one can write
\frac{dE}{dp_\mu} = \int_{t_0}^{t_f} \frac{dF}{dp_\mu} \, dt = \int_{t_0}^{t_f} \frac{\partial F}{\partial p_\mu} \, dt + \int_{t_0}^{t_f} \frac{\partial F}{\partial \bar{u}} \cdot \frac{\partial \bar{u}}{\partial p_\mu} \, dt \qquad (9)
Since F is known analytically [viz. Eq. (3)], computation of \partial F / \partial u_n and \partial F / \partial p_\mu
is straightforward:

\partial F / \partial u_n = r_n \qquad (10a)

\partial F / \partial p_\mu = 0 \qquad (10b)
Thus, the quantity that needs to be determined is the vector \partial \bar{u} / \partial p_\mu. Differentiating
the activation dynamics, Eq. (1), with respect to p_\mu, we observe that the time
derivative and the partial derivative with respect to p_\mu commute. Using the shorthand
notation \partial(\cdots)/\partial p_\mu = (\cdots)_{,\mu}, we obtain a set of equations referred to as the
"Forward Sensitivity Equations" (FSE):

\dot{u}_{n,\mu} + \sum_m A_{nm} \, u_{m,\mu} = S_{n,\mu} \quad (t > 0); \qquad u_{n,\mu} = 0 \quad (t = 0) \qquad (12)
in which

A_{nm} = \kappa_n \, \delta_{nm} - \gamma_n \, \bar{g}_n \, T_{nm} \qquad (13)

S_{n,\mu} = \gamma_n \, \bar{g}_n \sum_m \delta_{p_\mu, T_{nm}} \, u_m \qquad (14)
where \bar{g}_n represents the derivative of g_n with respect to u_n, and \delta denotes the Kronecker symbol. Since the initial conditions of the activation dynamics, Eq. (1), are
excluded from the system parameter vector p, the initial conditions of the forward
sensitivity equations are taken as zero. Computation of the gradients, via Eq.
(9), using the forward sensitivity scheme as proposed by Williams and Zipser (1989),
would require solving Eq. (12) N^2 times, since the source term explicitly depends
on p_\mu. The system of equations (12) has N equations, each of which requires
summation over all N neurons. Hence, the amount of computation (measured in
multiply-accumulates) scales like N^4 per time step. We assume that the interval
between t_0 and t_f is divided into L time steps. Therefore, the total number of multiply-accumulates scales like N^4 L. Clearly, the scaling properties of this approach are
very poor, and it cannot practically be applied to very large networks. On the other
hand, this method also has inherent advantages. The FSE are solved forward in
time, along with the nonlinear dynamics of the neural network; therefore, there is
no need for a large amount of memory. Since u_{,\mu} has N^3 components, that is
all that needs to be stored.
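The following is a rough sketch of this forward-sensitivity gradient computation for the synaptic-weight parameters p = {T_nm}, using Euler stepping and the tanh nonlinearity; it makes the N^3 storage and the O(N^4) work per time step explicit. The external input is omitted from the pre-activation for brevity, the output partition is encoded by an assumed out_mask, and all helper names are ours.

```python
import numpy as np

def fse_gradients(u_traj, a_traj, T, kappa, gamma, dt, out_mask):
    """Forward Sensitivity Equations, Eq. (12), for p = {T_nm}.
    u_traj: (L, N) activation trajectory; a_traj: (L, N) targets;
    out_mask: 1 for neurons in S_Y, else 0. Returns dE/dT as (N, N)."""
    L, N = u_traj.shape
    # u_sens[n, p, q] = d u_n / d T_pq; N^3 storage, zero at t = 0 (Eq. 12)
    u_sens = np.zeros((N, N, N))
    dE_dT = np.zeros((N, N))
    for l in range(L):
        u = u_traj[l]
        h = gamma * (T @ u)                    # pre-activation (input omitted)
        gbar = gamma * (1.0 - np.tanh(h)**2)   # gamma_n * g'_n
        A = np.diag(kappa) - gbar[:, None] * T # Eq. (13)
        # Source S_{n,(p,q)} = (gamma_n g'_n) delta_{np} u_q   (Eq. 14)
        S = np.zeros((N, N, N))
        S[np.arange(N), np.arange(N), :] = gbar[:, None] * u[None, :]
        # Euler step of Eq. (12): du_sens/dt = -A u_sens + S; O(N^4) work
        u_sens = u_sens + dt * (-np.einsum('nm,mpq->npq', A, u_sens) + S)
        # Accumulate Eq. (9) with Eqs. (10a)-(10b): r_n = u_n - a_n on S_Y
        r = (u - a_traj[l]) * out_mask
        dE_dT += dt * np.einsum('n,npq->pq', r, u_sens)
    return dE_dT
```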
In order to reduce the computational costs, an alternative approach can be considered. It is based upon the concept of adjoint operators, and eliminates the need for
explicit appearance of u_{,\mu} in Eq. (9). A vector of adjoint functions, v, is obtained,
which contains all the information required for computing all the "sensitivities"
dE/dp_\mu. The necessary and sufficient conditions for constructing adjoint equations
are discussed elsewhere (Toomarian et al., 1987, and references therein).
It can be shown that an Adjoint System of Equations (ASE), pertaining to the forward
system of equations (12), can be formally written as

-\dot{v}_n + \sum_m A_{mn} \, v_m = s_n^*, \qquad t > 0 \qquad (15)
In order to specify Eq. (15) in closed mathematical form, we must define the
source term s_n^* and the time-boundary conditions for the system. Both should be
independent of p_\mu and its derivatives.

By identifying s_n^* with \partial F / \partial u_n and selecting the final-time condition v(t = t_f) = 0, a system of equations is obtained which is similar to those proposed by
Pearlmutter. The method requires that the neural activation dynamics, i.e., Eq.
(1), be solved first, forward in time, followed by the ASE, Eq. (15), integrated
backwards in time. The computation requirement of this approach scales as N^2 L.
However, a major drawback to date has resided with the necessity to store quantities
such as \bar{g}, S^* and S_{,\mu} at each time step. Thus, the memory requirements for this
method scale as N^2 L.
By selecting s^* = \partial F / \partial \bar{u} - v \, \delta(t - t_f) and initial conditions v(t = 0) = 0, these authors
(Toomarian and Barhen, 1990) have suggested a method which, in contradistinction
to previous approaches, enables the ASE to be integrated forward in time, i.e.,
concomitantly with the neural activation dynamics. This approach saves a large
amount of storage, which scales only as N^2. The computational complexity of this
method is similar to that of backward integration and scales as N^2 L. A potential
drawback lies in the fact that Eq. (15) must then be treated in terms of distributions,
which precludes straightforward numerical implementation.
At this stage, we introduce a new paradigm which will enable us to evolve the
adjoint dynamics, Eq. (15), forward in time, but without the difficulties associated
with solutions in the sense of distributions. We multiply the FSE, Eq. (12), by v
and the ASE, Eq. (15), by u_{,\mu}, subtract the two resulting equations, and integrate
over the time interval (t_0, t_f). This procedure yields the bilinear form:

(v \cdot u_{,\mu})_{t_f} - (v \cdot u_{,\mu})_{t_0} = \int_{t_0}^{t_f} \left[ (v \cdot S_{,\mu}) - (u_{,\mu} \cdot s^*) \right] dt \qquad (16)
To proceed, we select

s^* = \partial F / \partial \bar{u}, \qquad v(t = 0) = 0. \qquad (17)

Thus, Eq. (16) can be rewritten as:

\int_{t_0}^{t_f} \frac{\partial F}{\partial \bar{u}} \cdot u_{,\mu} \, dt = \int_{t_0}^{t_f} s^* \cdot u_{,\mu} \, dt = \int_{t_0}^{t_f} (v \cdot S_{,\mu}) \, dt - [v \cdot u_{,\mu}]_{t_f} \qquad (18)
The first term in the RHS of Eq. (18) can be computed using the values of v
obtained by solving the ASE (Eqs. (15) and (17)) forward in time. The main
difficulty resides in the evaluation of the second term in the RHS of Eq. (18), i.e.,
[v \cdot u_{,\mu}]_{t_f}. To compute it, we now introduce an auxiliary adjoint system:

-\dot{z}_n + \sum_m A_{mn} \, z_m = s_{z,n}, \qquad t > 0 \qquad (19)

in which we select

s_z = -v \, \delta(t - t_f). \qquad (20)
Note that, even though we selected z(t_f) = 0, we are also interested in solving this
auxiliary adjoint system forward in time. Thus, the critical issue is how to select
the initial condition (i.e., z(t_0)) that would result in z(t_f) = 0. The bilinear form
associated with the dynamical systems u_{,\mu} and z can be derived in a similar fashion
to Eq. (16). Its expression is:

(z \cdot u_{,\mu})_{t_f} - (z \cdot u_{,\mu})_{t_0} = \int_{t_0}^{t_f} \left[ (z \cdot S_{,\mu}) - (u_{,\mu} \cdot s_z) \right] dt \qquad (21)
Incorporating s_z, z(t_f), and the initial condition of Eq. (12) into Eq. (21), we obtain

\int_{t_0}^{t_f} (u_{,\mu} \cdot s_z) \, dt = [v \cdot u_{,\mu}]_{t_f} = \int_{t_0}^{t_f} (z \cdot S_{,\mu}) \, dt \qquad (22)
In order to provide a simple illustration on how the problem of selecting the initial
conditions for the z-dynamics can be addressed, we assume, for a moment, that the
matrix A in Eq. (19) is time independent. Hence, the formal solution of Eq. (19)
can be written as:
z(t) = z(t_0) \, e^{A^T (t - t_0)} \qquad (23a)

z(t_f) = z(t_0) \, e^{A^T (t_f - t_0)} - v(t_f) \qquad (23b)

Therefore, in principle, Eq. (22) can be expressed in terms of z(t_0), using Eq. (23a).
At time t_f, where v(t_f) is known from the solution of Eq. (15), one can calculate
the vector z(t_0) from Eq. (23b), with z(t_f) = 0.
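For the simplified time-independent case of Eqs. (23a)-(23b), the required initial condition follows from one matrix-exponential solve. The sketch below assumes A and v(t_f) are given (e.g., from Eqs. (13) and (15)); the example values are arbitrary.

```python
import numpy as np
from scipy.linalg import expm, solve

def initial_condition_z(A, v_tf, t0, tf):
    """Time-independent case of Eq. (23b): with z(t_f) = 0,
    z(t_0) satisfies expm(A^T (t_f - t_0)) z(t_0) = v(t_f)."""
    Phi = expm(A.T * (tf - t0))   # state-transition matrix e^{A^T (t_f - t_0)}
    return solve(Phi, v_tf)

# Illustrative usage (A and v(t_f) assumed given):
N = 3
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.5, 0.1],
              [0.3, 0.0, 2.0]])
v_tf = np.ones(N)
z0 = initial_condition_z(A, v_tf, t0=0.0, tf=1.0)
```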
In the problem under consideration, however, the matrix A in Eq. (19) is time
dependent (viz. Eq. (13)). Thus the auxiliary adjoint equations will be solved by
means of finite differences. Usually, the same numerical scheme that is used for Eqs.
(1) and (15) will be adopted. For illustrative purposes, we limit the discussion in
the sequel to the first-order approximation, i.e.,

-\frac{z^{l+1} - z^{l}}{\Delta t} + A^{l} \, z^{l} = 0, \qquad 0 \le l < L \qquad (24)
From this equation one can easily show that

z^{l+1} = B^{l} \cdot B^{l-1} \cdots B^{1} \cdot B^{0} \, z(t_0) \qquad (25)

in which

B^{l} = I + \Delta t \, A^{l} \qquad (26)

where I is the identity matrix and B^{(l-1)!} denotes the ordered product B^{l-1} B^{l-2} \cdots B^{0}. Thus, the RHS of Eq. (22) can be rewritten as:

[v \cdot u_{,\mu}]_{t_f} = \Big[ \sum_{l} B^{(l-1)!} \, S_{,\mu}^{l} \Big] \cdot z(t_0) \, \Delta t \qquad (27)

The initial condition z(t_0) can easily be found at time t_f, i.e., at iteration step L,
by solving the algebraic equation:

B^{(L-1)!} \, z(t_0) = v(t_f) \qquad (28)
In summary, the computation of the gradients, i.e., Eq. (8), involves two stages,
corresponding to the two terms in the RHS of Eq. (18). The first term is calculated using the adjoint functions v obtained from Eq. (15); the computational
complexity is N^2 L. The second term is calculated via Eq. (27), and involves two
steps: a) kernel propagation, which requires multiplication of the two matrices B^l and
B^{(l-1)!} at each time step; the computational complexity scales as N^3 L; and b) numerical
integration via Eq. (24), which requires a matrix-vector multiplication at each time
step; hence, it scales as N^2 L. Thus, the overall computational complexity of this
approach is of the order N^3 L. Notice, however, that here the storage needed is
minimal and equal to N^2.
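A compact sketch of this two-step procedure under the first-order discretization of Eqs. (24)-(28) follows: it accumulates the kernel products B^{(l-1)!}, solves the algebraic system (28) for z(t_0), and returns the boundary term of Eq. (27). Inputs are assumed to come from Eqs. (13)-(15); all names are ours.

```python
import numpy as np

def propagate_and_solve(A_seq, S_mu_seq, v_tf, dt):
    """Sketch of Eqs. (25)-(28). A_seq: L matrices A^l (Eq. 13 at each step);
    S_mu_seq: L source vectors S^l_{,mu} (Eq. 14); v_tf: v(t_f) from Eq. (15).
    Returns the boundary term [v . u_{,mu}]_{t_f} of Eq. (27)."""
    N = v_tf.shape[0]
    P = np.eye(N)            # running ordered product B^{(l-1)!} = B^{l-1}...B^0
    acc = np.zeros(N)        # accumulates sum_l (B^{(l-1)!})^T S^l_{,mu}
    for A_l, S_l in zip(A_seq, S_mu_seq):
        acc += P.T @ S_l                   # contribution of step l to Eq. (27)
        P = (np.eye(N) + dt * A_l) @ P     # B^l = I + dt A^l, Eq. (26)
    z0 = np.linalg.solve(P, v_tf)          # Eq. (28): B^{(L-1)!} z(t_0) = v(t_f)
    return dt * acc @ z0                   # Eq. (27)
```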
3
CONCLUSIONS
A new methodology for neural learning of time-dependent nonlinear mappings is
presented. It exploits the concept of adjoint operators. The resulting algorithm
enables computation of the gradient of an energy function with respect to various
parameters of the network architecture in a highly efficient manner. Specifically,
it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve the equations forward in time.
Not only is a large amount of computation and storage saved, but the handling
of real-time applications becomes also possible. This methodology also makes the
hardware implementation of temporal learning attractive.
Acknowledgments
This research was carried out at the Center for Space Microelectronics Technology,
Jet Propulsion Laboratory, California Institute of Technology. Support for the
work came from Agencies of the U.S. Department of Defense including the Naval
Weapons Center (China Lake, CA), and from the Office of Basic Energy Sciences
of the Department of Energy, through an agreement with the National Aeronautics
and Space Administration. The authors acknowledge helpful discussions with J.
Martin and D. Andes from the Naval Weapons Center.
References
Barhen, J., Gulati, S., and Zak, M., 1989, "Neural Learning of Constrained Nonlinear Transformations" ,IEEE Computer, 22(6), 67-76.
Barhen, J., Toomarian, N., and Gulati, S., 1990a, " Adjoint Operator Algorithms
for Faster Learning in Dynamical Neural Networks", Adv. Neur. Inf. Proc. Sys.,
2,498-508.
Barhen, J., Toomarian, N., and Gulati, S., 1990b, "Application of Adjoint Operators
to Neural Learning", Appl. Math. Lett.,3 (3), 13-18.
Barhen, J., Toomarian, N., and Gulati, S., 1991, "Fast Neural Learning Algorithms
Using Adjoint Operators", Submitted to IEEE Trans. of Neural Networks
Cacuci, D. G., 1981, "Sensitivity Theory for Nonlinear Systems", J. Math. Phys.,
22 (12), 2794-2802.
Hirsch, M. W., 1989, "Convergent Activation Dynamics in Continuous Time Networks" ,Neural Networks, 2 (5), 331-349.
Pearlmutter, B. A., 1989, "Learning State Space Trajectories in Recurrent Neural
Networks", Neural Computation, 1 (2), 263-269.
Pearlmutter, B. A., 1990, "Dynamic Recurrent Neural Networks", Technical Report CMU-CS-90-196, School of Computer Science, Carnegie Mellon University,
Pittsburgh, Pa.
Pineda, F., 1988, "Dynamics and Architecture in Neural Computation", J. of Complexity, 4, 216-245.
Pineda, F., 1990, "Time Dependent Adaptive Neural Networks", Adv. Neur. Inf.
Proc. Sys., 2, 710-718.
Rumelhart, D. E., and McClelland, J. L., 1986, Parallel and Distributed Processing,
MIT Press.
Toomarian, N., Wacholder, E., and Kaizerman, S., 1987, "Sensitivity Analysis of
Two-Phase Flow Problems", Nucl. Sci. Eng., 99 (1), 53-81.
Toomarian, N. and Barhen, J., 1990, "Adjoint Operators and Non- Adiabatic Algorithms in Neural Networks", Appl. Math. Lett., (in press).
Toomarian, N. and Barhen, J., 1991, " Learning a Trajectory Using Adjoint Functions" , submitted to Neural Networks
Werbos, P., 1974, "Beyond Regression: New Tools for Prediction and Analysis in
The Behavioral Sciences", Ph.D. Thesis, Harvard Univ.
Williams, R. J., and Zipser, D., 1989, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks", Neural Computation, 1 (2), 270-280.
Part III
Oscillations
3,590 | 4,250 | Learning a Tree of Metrics
with Disjoint Visual Features
Sung Ju Hwang
University of Texas
Austin, TX 78701
Kristen Grauman
University of Texas
Austin, TX 78701
Fei Sha
University of Southern California
Los Angeles, CA 90089
[email protected]
[email protected]
[email protected]
Abstract
We introduce an approach to learn discriminative visual representations while exploiting external semantic knowledge about object category relationships. Given
a hierarchical taxonomy that captures semantic similarity between the objects,
we learn a corresponding tree of metrics (ToM). In this tree, we have one metric
for each non-leaf node of the object hierarchy, and each metric is responsible for
discriminating among its immediate subcategory children. Specifically, a Mahalanobis metric learned for a given node must satisfy the appropriate (dis)similarity
constraints generated only among its subtree members' training instances. To further exploit the semantics, we introduce a novel regularizer coupling the metrics
that prefers a sparse disjoint set of features to be selected for each metric relative to its ancestor (supercategory) nodes' metrics. Intuitively, this reflects that
visual cues most useful to distinguish the generic classes (e.g., feline vs. canine)
should be different than those cues most useful to distinguish their component
fine-grained classes (e.g., Persian cat vs. Siamese cat). We validate our approach
with multiple image datasets using the WordNet taxonomy, show its advantages
over alternative metric learning approaches, and analyze the meaning of attribute
features selected by our algorithm.
1
Introduction
Visual recognition is a fundamental computer vision problem that demands sophisticated image
representations, due to both the large number of object categories a system should ultimately recognize and the noisy, cluttered conditions in which training examples are often captured.
The research community has made great strides in recent years by training discriminative models
with an array of well-engineered descriptors, e.g., capturing gradient texture, color, or local part
configurations. In particular, recent work shows promising results when integrating powerful feature selection techniques, whether through kernel combination [1, 2], sparse coding dictionaries [3],
structured sparsity regularization [4, 5], or metric learning approaches [6, 7, 8, 9, 10].
However, typically the semantic information embedded in the learned features is restricted to the category labels on image exemplars. For example, a learned metric generates (dis)similarity constraints
using instances with the different/same class label; multiple kernel learning methods optimize feature weights to minimize class prediction errors; group sparsity regularizers exploit class labels to
guide the selected dimensions. Unfortunately, this means richer information about the meaning of
the target object categories is withheld from the learned representations. While sufficient for objects starkly different in appearance, this omission is likely restrictive for objects with finer-grained
distinctions, or when a large number of classes densely populate the original feature space.
We propose a metric learning approach to learn discriminative visual representations while also
exploiting external knowledge about the target objects' semantic similarity.1 We assume the external
knowledge itself is available in the form of a hierarchical taxonomy over the objects (e.g., from
WordNet or some other knowledge base). Our approach exploits these semantics in two novel ways.
First, we construct a tree of metrics (ToM) to directly capture the hierarchical structure. In this tree,
each metric is responsible for discriminating among its immediate object subcategories. Specifically,
we learn one metric for each non-leaf node, and require it to satisfy (dis)similarity constraints generated among its subtree members? training instances. We use a variant of the large-margin nearest
neighbor objective [11], and augment it with a regularizer for sparsity in order to unify Mahalanobis
parameter learning with a simple means of feature selection.
Second, rather than learn the metrics at each node independently, we introduce a novel regularizer
for disjoint sparsity that couples each metric with those of its ancestors. This regularizer specifies
that a disjoint set of features should be selected for a given node and its ancestors, respectively. Intuitively, this represents that the visual features most useful to distinguish the coarse-grained classes
(e.g., feline vs. canine) should often be different than those cues most useful to distinguish their
fine-grained subclasses (e.g., Persian vs. Siamese cat, German Shepherd vs. Boxer). The resulting
optimization problem is convex, and can be optimized with a projected subgradient approach.
The ideas of exploiting label hierarchy and model sparsity are not completely new to computer
vision and machine learning researchers. Hierarchical classifiers are used to speed up classification
time and alleviate data sparsity problems [12, 13, 14, 15, 16]. Parameter sparsity is increasingly
used to derive parsimonious models with informative features [4, 5, 3].
Our novel contribution lies in the idea of ToM and disjoint sparsity together as a new strategy for
visual feature learning. Our idea reaps the benefits of both schools of thought. Rather than relying on
learners to discover both sparse features and a visual hierarchy fully automatically, we use external
?real-world? knowledge expressed in hierarchical structures to bias which sparsity patterns we want
the learned discriminative feature representations to exhibit. Thus, our end-goal is not any sparsity
pattern returned by learners, but the patterns that are in concert with rich high-level semantics.
We validate our approach with the Animals with Attributes [17] and ImageNet [18] datasets using the
WordNet taxonomy. We demonstrate that the proposed ToM outperforms both global and multiplemetric metric learning baselines that have similar objectives but lack the hierarchical structure and
proposed disjoint sparsity regularizer. In addition, we show that when the dimensions of the original
feature space are interpretable (nameable) visual attributes, the disjoint features selected for superand sub-classes by our method can be quite intuitive.
2
Related Work
A wide variety of feature learning approaches have been explored for visual recognition. Some of
the very best results on benchmark image classification tasks today use multiple kernel learning
approaches [1, 2] or sparse coding dictionaries for local features (e.g., [3]). One way to regularize
visual feature selection is to prefer that object categories share features, so as to speed up object
detection [19]; more recent work uses group sparsity to impose some sharing among the (un)selected
features within an object category or view [4, 5]. We instead seek disjoint features between coarse
and fine categories, such that the regularizer helps to focus on useful differences across levels.
Metric learning has been a subject of extensive research in recent years, in both vision and learning. Good visual metrics can be trained with boosting [20, 21], feature weight learning [6], or
Mahalanobis metric learning methods [7, 8, 10]. An array of Mahalanobis metric learners has been
developed in the machine learning literature [22, 23, 11]. The idea of using multiple ?local? metrics
to cover a complex feature space is not new [24, 9, 10, 25]; however, in contrast to our approach,
existing methods resort to clustering or (flat) class labels to determine the partitioning of training
instances to metrics. Most methods treat the partitioning and metric learning processes separately,
but some recent work integrates the grouping directly into the learning objective [21], or trains mul1
We use "learned representation" and "learned metric" interchangeably, since we deal with sparse Mahalanobis metrics, which are equivalent to selecting a subset of features and applying a linear feature space
transformation.
tiple metrics jointly across tasks [26]. No previous work explores mapping the semantic hierarchy
to a ToM, nor couples metrics across the hierarchy levels, as we propose. To show the impact, in
experiments we directly compare to a state-of-the-art approach for learning multiple metrics.
Previous metric learning work integrates feature learning and selection via a regularizer for sparsity [27], as we do here. However, whereas that approach targets sparsity in the linear transformed
space, ours targets sparsity in the original feature space, and, most importantly, also includes a disjoint sparsity regularizer. The advantage in doing so is that our learner will be able to return both
discriminative and interpretable feature dimensions, as we demonstrate in our results. Transformed
feature spaces, while suitably flexible if only discriminative power is desired, add layers that complicate interpretability, not only for models of individual classifiers but also (more seriously) when teasing
apart patterns across related categories (such as parent-child).
The "orthogonal transfer" by [28] is most closely related in spirit to our goal of selecting disjoint
features. However, unlike [28], our regularizer will yield truly disjoint features when minimized, a
property hinging on the metric-based classification scheme we have chosen. Our learning problem
is guaranteed to be convex, whereas hyperparameters need to be tuned to ensure convexity in [28].
We return to these differences in Section 3.3, after explaining our algorithm in detail.
External semantics beyond object class labels are rarely used in today's object recognition systems,
but recent work has begun to investigate new ways to integrate richer knowledge. Hierarchical
taxonomies have natural appeal, and researchers have studied ways to discover such structure automatically [29, 30, 13], or to integrate known structure to train classifiers at different levels [12, 31].
The emphasis is generally on saving prediction time (by traversing the tree from its root) or combining decisions, whereas we propose to influence feature learning based on these semantics. While
semantic structure need not always translate into helping visual feature selection, the correlation between WordNet semantics and visual confusions observed in [32] supports our use of the knowledge
base in this work. The machine learning community has also long explored hierarchical classification (e.g., [14, 15, 16]). Of this work, our goals most relate to [14], but our focus is on learning
features discriminatively and biasing toward a disjoint feature set via regularization.
Beyond taxonomies, researchers are also injecting semantics by learning mid-level nameable "attributes" for object categorization (e.g., [17, 33]). We show that when our method is applied to
attributes as base features, the disjoint sparsity effects appear to be fairly interpretable.
3
Approach
We review briefly the techniques for learning distance metrics. We then describe an `1 -norm based
regularization for selecting a sparse set of features in the context of metric learning. Building on that,
we proceed to describe our main algorithmic contribution, that is, the design of a metric learning algorithm that prefers not only sparse but also disjoint features for discriminating different categories.
3.1
Distance metric learning
Many learning algorithms depend on calculating distances between samples, notably k-nearest
neighbor classifiers or clustering. While the default is to use the Euclidean distance, the more
general Mahalanobis metric is often more suitable. For two data points x_i, x_j \in R^D, their (squared)
Mahalanobis distance is given by

d_M^2(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j), \qquad (1)

where M is a positive semidefinite matrix, M \succeq 0. Arguably, the Mahalanobis distance can better
model complex data, as it considers correlations between feature dimensions.
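A minimal sketch of Eq. (1), with an illustrative sanity check that the identity matrix recovers the squared Euclidean distance (function name and example values are ours):

```python
import numpy as np

def mahalanobis_sq(x_i, x_j, M):
    """Squared Mahalanobis distance of Eq. (1): (x_i - x_j)^T M (x_i - x_j)."""
    d = x_i - x_j
    return float(d @ M @ d)

# With M = I this reduces to the squared Euclidean distance:
x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
assert np.isclose(mahalanobis_sq(x, y, np.eye(2)), 5.0)
```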
Learning the optimal M from labeled data has been an active research topic (e.g., [23, 22, 11]).
Most methods follow an intuitively appealing strategy: a good metric M should pull data points
belonging to the same class closer and push away data points belonging to different classes. As an
illustrative example, we describe the technique used in constructing large margin nearest neighbor
(LMNN) classifiers [11], to which our empirical studies extensively compare.
In LMNN, each point x_i in the training set is associated with two sets of data points among x_i's
nearest neighbors (identified in the Euclidean distance): the "targets", whose labels are the same as
x_i's, and the "impostors", whose labels are different. Let x_i^+ denote the "target" set and
x_i^- the "impostor" set, respectively. LMNN identifies the optimal M as the solution to

\min_{M \succeq 0} \; \ell(M) = \sum_i \sum_{j \in x_i^+} d_M^2(x_i, x_j) + \gamma \sum_{ijl} \xi_{ijl} \qquad (2)

subject to 1 + d_M^2(x_i, x_j) - d_M^2(x_i, x_l) \le \xi_{ijl}; \; \xi_{ijl} \ge 0, \quad \forall j \in x_i^+, \; l \in x_i^-

where the objective function \ell(M) balances two forces: pulling the target towards x_i and pushing
the impostor away. The latter is characterized by the constraint composed of a triplet of data points:
the distance to an impostor should be greater than the distance to a target by at least a margin of 1,
possibly with the help of a slack variable \xi_{ijl}. The minimization of eq. (2) is a convex optimization
problem with semidefinite constraints on M \succeq 0, and is tractable with standard techniques.
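A small sketch of the unconstrained form of Eq. (2), folding the margin constraints into hinge terms; the index-tuple representation of targets and impostors is an assumption of ours.

```python
import numpy as np

def d2(x, y, M):
    """Squared Mahalanobis distance, as in Eq. (1)."""
    v = x - y
    return float(v @ M @ v)

def lmnn_loss(M, X, targets, impostors, gamma):
    """Unconstrained form of Eq. (2): pull term plus the hinge version of the
    slack variables xi_{ijl}. targets: (i, j) pairs; impostors: (i, j, l) triplets."""
    pull = sum(d2(X[i], X[j], M) for i, j in targets)
    push = sum(max(0.0, 1.0 + d2(X[i], X[j], M) - d2(X[i], X[l], M))
               for i, j, l in impostors)
    return pull + gamma * push
```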
Our approach extends previous work on metric learning in two aspects: i) we apply a sparsity-based
regularization to identify informative features (Section 3.2); ii) at the same time, we seek metrics that
rely on disjoint subsets of features for categories at different semantic granularities (Section 3.3).
3.2
Sparse feature selection for metric learning
How can we learn a metric such that only a sparse set of features are relevant? Examining the
definition of the Mahalanobis distance in eq. (1), we deduce that if the d-th feature of x is not to be
used, it is sufficient and necessary for the d-th diagonal element of M be zero.
Therefore, analogous to the use of the \ell_1-norm by the popular LASSO technique [34], we add the \ell_1-norm of M's diagonal elements to the large-margin metric learning criterion \ell(M) in eq. (2):

\min_{M \succeq 0} \; \sum_i \sum_{j \in x_i^+} d_M^2(x_i, x_j) + \gamma \sum_{ijl} \xi_{ijl} + \lambda \, \mathrm{Trace}[M], \qquad (3)
where we have omitted the constraints as they are not changed. \gamma and \lambda are nonnegative (hyper)parameters trading off the sparsity of the model against the other parts of the objective. Note that
since the matrix trace Trace[\cdot] is a linear function of its argument, this sparse feature metric learning
problem remains a convex optimization.
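For concreteness, a sketch of the Eq. (3) objective and of the PSD projection used after each subgradient step, reusing lmnn_loss from the sketch above; this is one plausible way to organize a projected-subgradient solver, not necessarily the authors' exact implementation.

```python
import numpy as np

def sparse_metric_objective(M, X, targets, impostors, gamma, lam):
    """Eq. (3): large-margin loss plus the l1 penalty on diag(M), which
    equals Trace[M] since M is positive semidefinite."""
    return lmnn_loss(M, X, targets, impostors, gamma) + lam * np.trace(M)

def project_psd(M):
    """Projection onto the PSD cone: clip negative eigenvalues to zero."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T
```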
3.3
Learning a tree of metrics (ToM) with disjoint visual features
How can we learn a tree of metrics so that each metric uses features disjoint from its ancestors'?

Using disjoint features To characterize the "disjointness" between two metrics M_t and M_{t'}, we
use the vectors of their nonnegative diagonal elements, v_t and v_{t'}, as proxies for which features are
(more heavily) used. This is a reasonable choice, as we use the sparsity-inducing \ell_1-norm in learning
the metrics. We measure their degree of "competition" for common features:

C(M_t, M_{t'}) = \| v_t + v_{t'} \|_2^2. \qquad (4)
Intuitively, if a feature dimension is not used by either metric, the competition for that feature is low.
If a feature dimension is used by both metrics heavily, then the competition is high. Consequently,
minimizing eq. (4) as a regularization term will encourage different metrics to use disjoint features.
Note that the measure is a convex function of v_t and v_{t'}, hence also convex in M_t and M_{t'}.
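A direct transcription of Eq. (4), with an illustrative check that disjoint diagonals compete less than overlapping ones (function name and example matrices are ours):

```python
import numpy as np

def disjoint_competition(M_t, M_a):
    """Eq. (4): C(M_t, M_a) = || diag(M_t) + diag(M_a) ||_2^2.
    Small when the two metrics load on disjoint feature dimensions."""
    v = np.diag(M_t) + np.diag(M_a)
    return float(v @ v)

# Disjoint diagonals compete less than overlapping ones:
A = np.diag([1.0, 0.0])
B = np.diag([0.0, 1.0])   # disjoint with A -> competition 2
C = np.diag([1.0, 0.0])   # overlaps A     -> competition 4
assert disjoint_competition(A, B) < disjoint_competition(A, C)
```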
Learning a tree of metrics Formally, assume we have a tree T where each node corresponds to
a category. Let t index the T non-leaf or internal nodes. We learn a metric Mt to differentiate its
children categories c(t). For any node t, we use D(t) to denote those training samples whose labeled
categories are offspring of t, and a(t) to denote the nodes on the path from the root to t.
To learn our metrics \{M_t\}_{t=1}^T, we apply strategies similar to those used for learning metrics for large-margin
nearest neighbor classifiers. We cast it as a convex optimization problem:

\min_{\{M_t\} \succeq 0} \; \sum_t \sum_{c \in c(t)} \sum_{i,j \in D(c)} d_{M_t}^2(x_i, x_j) + \gamma \sum_{t,c,r,ijl} \xi_{tcrijl} + \sum_t \lambda_t \, \mathrm{Trace}[M_t] + \sum_t \sum_{a \in a(t)} \mu_{ta} \, C(M_t, M_a) \qquad (5)

subject to, \forall t, \forall c \in c(t), \forall r \in c(t) \setminus \{c\}, \forall x_i, x_j \in D(c), x_l \in D(r):

1 + d_{M_t}^2(x_i, x_j) - d_{M_t}^2(x_i, x_l) \le \xi_{tcrijl}; \quad \xi_{tcrijl} \ge 0.
In short, there are T learning (sub)problems, one for each metric. Each metric learning problem is
in the style of the sparse feature metric learning eq. (3). However, more importantly, these metric
learning problems are coupled together through the disjoint regularization. Our disjoint regularization encourages a metric M_t to use different sets of features from its super-categories, i.e., the categories
on the tree path from the root.
Numerical optimization The optimization problem in eq. (5) is convex, though nonsmooth due
to the nonnegative slack variables. We use the subgradient method, previously used for similar
problems [11]. For problems with a large taxonomy, learning all the regularization coefficients \lambda_t
and \mu_{ta} is prohibitive, as the number of coefficient combinations is O(k^T), where T is the number
of nodes and k is the number of values a coefficient can take. Thus, for the large-scale problems we
focus on, we use a simpler and computationally more efficient strategy of Sequential Optimization
(SO) by sequentially optimizing one metric at a time. Specifically, we optimize the metric at the
root node and then its children, assuming the metric at the root is fixed. We then recursively (in
breadth-first-search) optimize the rest of the metrics, always treating the metrics at the higher level
of the hierarchy as fixed. This strategy has a significantly reduced computational cost of O(kT).
In addition, the SO procedure allows each metric to be optimized with different parameters and
prevents a badly-learned low-level metric from influencing upper-level ones through the disjoint
regularization terms. (This can also be achieved by adjusting all regularization coefficients in parallel
through extensive cross-validation, but at a much higher computational expense.)
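A minimal sketch of this Sequential Optimization driver as a breadth-first traversal. The per-node solver `learn_metric` (which would solve the Eq. (5) subproblem given the fixed ancestor metrics) and the node objects with `.children` and `.is_leaf` attributes are assumptions of ours.

```python
from collections import deque

def train_tom_sequential(root, learn_metric):
    """Sequential Optimization (SO): breadth-first over internal nodes,
    learning each metric with its ancestors' metrics held fixed; the
    coupling to ancestors enters learn_metric via C(M_t, M_a) of Eq. (4)."""
    queue = deque([(root, [])])          # (node, metrics of its ancestors)
    metrics = {}
    while queue:
        node, ancestor_Ms = queue.popleft()
        M = learn_metric(node, ancestor_Ms)
        metrics[node] = M
        for child in node.children:
            if not child.is_leaf:
                queue.append((child, ancestor_Ms + [M]))
    return metrics
```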
Using a tree of metrics for classification Once the metrics at all nodes are learned, they can be
used for several classification tasks (e.g., with k-NN or as a kernel to a SVM). In this work, we
study two tasks in particular: 1) We consider "per-node classification", where the metric at each
node is used to discriminate its sub-categories. Since decisions at higher-level nodes must span a
variety of object sub-categories, these generic decisions are interesting to test the learned features in
a broader context. 2) We consider hierarchical classification [35], a natural way to use the full ToM.
In this case, we examine the recognition accuracy for the finest-level categories only. We classify an
object from the root node down; the leaf node that terminates the path is the predicted label.
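The root-to-leaf prediction loop is simple enough to sketch directly; the k-NN subroutine `knn_at_node` (returning the winning child for a query under a node's metric) and the node attributes are assumed helpers, not part of the paper's notation.

```python
def hierarchical_predict(x, root, metrics, knn_at_node):
    """Hierarchical classification: at each internal node, use that node's
    learned metric to pick the best child via k-NN; stop at a leaf."""
    node = root
    while not node.is_leaf:
        node = knn_at_node(x, node, metrics[node])
    return node.label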
We stress that our metric learning criterion of eq. (5) aims to minimize classification errors at each
node. Thus, improvement in per-node accuracy is more directly indicative of whether the learning
has resulted in useful metrics. Understanding the relation between per-node and full multi-class
accuracy has been a challenging research problem in building hierarchical classifiers [16, 12].
Relationship to orthogonal transfer Our work shares a similar spirit with the "orthogonal transfer"
idea explored in [28]. The authors there use non-overlapping features to construct multiple SVM
classifiers for hierarchical classification of text documents. Concretely, they propose an orthogonal
regularizer \sum_{ij} K_{ij} |w_i^T w_j|, where w_i and w_j are the SVM parameters. Minimizing it will
reduce the similarity of the parameter vectors and make them "orthogonal" to each other. However, orthogonality does not necessarily imply disjoint features. This can be seen with a contrived
two-dimensional counterexample where w_i = [1, -1]^T and w_j = [-1, -1]^T: both features are
used, yet the two parameter vectors are orthogonal. In contrast, our disjoint regularizer, eq. (4), is
more indicative of true disjointness. Specifically, when our regularizer attains its minimum value of
zero, we are guaranteed that features are non-overlapping, as our v_i and v_j are nonnegative diagonal
elements of positive semidefinite matrices. Our regularizer is also guaranteed to be convex, whereas
the convexity of the regularizer in [28] depends critically on tuning K_{ij}.
[Figure 1 panels: (a) Class Hierarchy (root: {a,b,c,d}; A: {a,b}; B: {c,d}); (b) Means of the features; (c) ToM; (d) ToM + Sparsity; (e) ToM + Disjoint.]
Figure 1: Synthetic dataset example. Our disjoint regularizer yields a sparse metric that only considers the
feature dimension(s) necessary for discrimination at that given level.
4
Results
We validate our ToM approach on several datasets, and consider three baselines: 1) Euclidean:
Euclidean distance in the original feature space, 2) Global LMNN: a single global metric for all
classes learned with the LMNN algorithm [11], and 3) Multi-Metric LMNN: one metric learned
per class using the multiple metric LMNN variant [11]. We use the code provided by the authors.
To evaluate the influence of each aspect of our method, we test it under three variants: 1) ToM:
ToM learning without any regularization terms, 2) ToM+Sparsity: ToM learning with the sparsity regularization term, and 3) ToM+Disjoint: ToM learning with the disjoint regularization term.
For all experiments, we test with five random data splits of 60%/20%/20% for train/validation/test.
We use the validation data to set the regularization parameters \lambda and \mu among candidate values
{0, 1, 10, 100, 1000}, and we generate 500 (xi , xj , xl ) training triplets per class.
4.1
Proof of concept on synthetic dataset
First we use synthetic data to clearly illustrate disjoint sparsity regularization. We generate data with
precisely the property that coarser categories are distinguishable using feature dimensions distinct
from those needed to discriminate their subclasses. Specifically, we sample 2000 points from each
of four 4D Gaussians, giving four leaf classes {a, b, c, d}. They are grouped into two superclasses
A = {a, b} and B = {c, d}. The first two dimensions of all points are specific to the superclass
decision (A vs. B), while the last two are specific to the subclasses. See Fig. 1 (a) and (b).
We run hierarchical k-nearest neighbor classification (k = 3) on the test set. ToM+Sparsity increases
the recognition rate by 0.90%, while ToM+Disjoint increases it by 4.05%. Thus, as expected, disjoint sparsity does best, since it selects different features for the super- and sub-classes. Accordingly,
in the learned Mahalanobis matrices for each node (Fig. 1(c)-(e)), we see disjoint sparsity zeros out
the unneeded features for the upper-level metric, shown as black squares in figure (e). In contrast, the ToM+Sparsity features are sub-optimal and fit to some noise in the data (d).
4.2
Visual recognition experiments
Next we demonstrate our approach on challenging visual recognition tasks.
Datasets and implementation details We validate with three datasets drawn from two publicly
available image collections: Animals with Attributes (AWA) [17] and ImageNet [18, 32]. Both are
well-suited for our scenario, since they consist of fine-grained categories that can be grouped into
more general object categories. AWA contains 30,475 images and 50 animal classes, and we use
it to create two datasets: 1) AWA-PCA, which uses the provided features (SIFT, rgSIFT, PHOG,
SURF, LSS, RGB), concatenated, standardized, and PCA-reduced to 50 dimensions, and 2) AWAATTR, which uses 85-dimensional attribute predictions as the original feature space. The latter is
formed by concatenating the outputs of 85 linear SVMs trained to predict the presence/absence of
the 85 nameable properties annotated by [17], e.g., furry, white, quadrupedal, etc. For our third
dataset VEHICLE-20, we take 20 vehicle classes and 26,624 images from ImageNet, and apply
PCA to reduce the authors? provided visual word features [32] to 50 dimensions per image (The
dimensionality worked best for the Global LMNN baseline.).
We use WordNet to generate the semantic hierarchies for all datasets. We retrieve all nodes in
WordNet that contain any of the object class names on their word lists. In the case of homonyms,
we manually disambiguate the word sense. Then, we build a compact partial hierarchy over those
nodes by 1) pruning out any node that has only one child (i.e., removing superfluous nodes), and 2)
resolving any instances of multiple parentship by choosing the path from the leaf to root having the
most overlap with other classes. See Figures 2 and 3 for the resulting AWA and VEHICLE trees.
[Figure 2, top: the AWA semantic hierarchy, whose internal nodes include placental, ungulate, carnivore, aquatic mammal, primate, rodent, pinniped, dolphin, feline, dog, baleen, bear, musteline, procyonid, shepherd, ruminant, bovid, equine, deer, bovine, big cat, cat, and domestic, among others. Bottom: per-node accuracy improvement bar charts. Legend averages as recoverable: AWA-ATTR panel, Global LMNN 1.33, ToM 1.44, ToM+Sparsity 1.93, ToM+Disjoint 2.15; AWA-PCA panel, Global LMNN 1.01, ToM 1.53, ToM+Sparsity 1.94, ToM+Disjoint 2.45.]
Figure 2: Semantic hierarchy for AWA (top) and the per-node accuracy improvements relative to Euclidean
distance, for the AWA-PCA (left) and AWA-ATTR (right) datasets. Numbers in legends denote average improvement over all nodes. We generally achieve a sizable accuracy gain relative to the Global LMNN baseline
(dark left bar for each class), showing the advantage of exploiting external semantics with our ToM approach.
[Figure 3: the VEHICLE-20 semantic hierarchy, whose internal nodes include vehicle, wheeled vehicle, craft, self-propelled vehicle, vessel, ship, aircraft, boat, heavier-than-air, lighter-than-air, motor vehicle, bicycle, locomotive, car, and truck, together with a per-node accuracy improvement bar chart. Legend averages: Global LMNN 0.86, ToM 2.42, ToM+Sparsity 2.79, ToM+Disjoint 3.13.]
Figure 3: Semantic hierarchy for VEHICLE-20 and the per-node accuracy gains, plotted as above.
Throughout, we evaluate classification accuracy using k-nearest neighbors (k-NN). For ToM, at
node n we use k = 2^{l_n - 1} + 1, where l_n is the level of the node, and l_n = 1 for leaf nodes. This
means we use a larger k at the higher nodes in the tree, where there is larger intra-class variation,
in an effort to be more robust to outliers. For the Euclidean and LMNN baselines, which lack a
hierarchy, we simply use k = 3. Note that ToM's setting at the final decision nodes (just above a leaf)
is also k = 3, comparable to the baselines.
4.2.1
Per-node accuracy and analysis of the learned representations
Since our algorithm optimizes the metrics at every node, we first examine the resulting per-node
decisions. That is, how accurately can we predict the correct subcategory at any given node? The
bar charts in Figures 2 and 3 show the results, in terms of raw k-NN accuracy improvements over the
Euclidean baseline. For reference, we also show the Global LMNN baseline. Multi-Metric LMNN
is omitted here, since its metrics are only learned for the leaf node classes. We observe a good
increase for most classes, as well as a clear advantage relative to LMNN. Furthermore, our results
are usually strongest when including the novel disjoint sparsity regularizer. This result supports our
basic claim about the potential advantage of exploiting external semantics in ToM.
We find that absolute gains are similar in either the PCA or ATTR feature spaces for AWA, though
exact gains per class differ. While the ATTR variant exposes the semantic features directly to the
learner, the PCA variant encapsulates an array of low-level descriptors into its dimensions. Thus,
while we can better interpret the meaning of disjoint sparsity on the attributes, our positive result on
raw image features assures that disjoint feature selection is also amenable in the more general case.
To look more closely at this, Table 1 displays representative superclasses from AWA-ATTR together
with the attributes that ToM+Disjoint selects as discriminative for their subclasses. The attributes
shown are those with nonzero weights in the learned metrics. Intuitively, we see that often the selected attributes are indeed useful for discriminating the child classes. For example, "tusks" and
"plankton" attributes help distinguish common dolphins from killer whales, whereas "stripes" and
Superclass        | Subclasses                   | Attributes selected
dolphin           | common dolphin, killer whale | tusks, plankton, blue, gray, red, patches, slow, muscle, active, insects
equine            | horse, zebra                 | stripes, domestic, orange, red, yellow, toughskin, newworld, arctic, bush
whale             | dolphin, baleen whale        | black, white, blue, gray, toughskin, chewteeth, strainteeth, smelly, slow, muscle, active, fish, hunter, skimmer, oldworld, arctic...
odd-toed ungulate | equine, rhinoceros           | fast, longneck, hairless, black, white, yellow, patches, spots, bulbous, longleg, buckteeth, horns, tusks, smelly...
Table 1: Attributes selected by ToM+Disjoint for various superclass objects in AWA. See text.
Method            | AWA-ATTR                      | AWA-PCA                       | VEHICLE-20
                  | Correct label | Sem. similarity | Correct label | Sem. similarity | Correct label | Sem. similarity
Euclidean         | 32.36 ± 0.21  | 53.60 ± 0.26  | 17.54 ± 0.38  | 38.11 ± 0.58  | 28.51 ± 0.56  | 56.10 ± 0.41
Global LMNN       | 32.49 ± 0.42  | 53.93 ± 0.88  | 19.62 ± 0.51  | 40.34 ± 0.32  | 29.65 ± 0.44  | 57.57 ± 0.45
Multi-metric LMNN | 32.34 ± 0.35  | 53.73 ± 0.71  | 17.61 ± 0.33  | 38.94 ± 0.31  | 30.00 ± 0.51  | 57.91 ± 0.54
ToM               | 36.79 ± 0.27  | 58.36 ± 0.09  | 18.70 ± 0.41  | 43.44 ± 0.43  | 31.23 ± 0.67  | 60.72 ± 0.54
ToM + Sparsity    | 37.58 ± 0.32  | 59.29 ± 0.58  | 18.79 ± 0.46  | 43.38 ± 0.34  | 32.09 ± 0.18  | 62.66 ± 0.26
ToM + Disjoint    | 38.29 ± 0.61  | 59.72 ± 0.62  | 19.00 ± 0.30  | 43.59 ± 0.19  | 32.77 ± 0.32  | 63.01 ± 0.21
Table 2: Multi-class hierarchical classification accuracy and semantic similarity on all three datasets. Numbers are averages over 5 splits, with standard errors for the 95% confidence interval. Our method outperforms the
baselines in almost all cases, and notably provides more semantically close predictions. See text.
4.2.2 Hierarchical multi-class classification accuracy
Next we evaluate the complete multi-class classification accuracy, where we use all the learned ToM
metrics together to predict the leaf-node label of the test points. This is a 50-way task for AWA, and
a 20-way task for VEHICLES. Table 2 shows the results.
We score accuracy in two ways: Correct label records the percentage of examples assigned the
correct (leaf) label, while Semantic similarity records the semantic similarity between the predicted
and true labels. For both, higher is better. The former is standard recognition accuracy, while the
latter gives a more nuanced view of the "semantic magnitude" of the classifiers' errors. Specifically,
we calculate the semantic similarity between classes (nodes) i and j using the metric defined in [36],
which counts the number of nodes shared by their two parent branches, divided by the length of the
longest of the two branches. In the spirit of other recent evaluations [37, 32, 36], this metric reflects
that some errors are worse than others; for example, calling a Persian cat a Siamese cat is a less
glaring error than calling a Persian cat a horse. This is especially relevant in our case, since our key
motivation is to instill external semantics into the feature learning process.
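For concreteness, the following sketch (our illustration; the child-to-parent encoding and toy class names are assumptions, not the paper's data structures) implements this path-overlap score:

    # Semantic similarity in the spirit of [36]: nodes shared by the two root paths,
    # divided by the length of the longer path.
    def root_path(node, parent):
        path = [node]
        while node in parent:
            node = parent[node]
            path.append(node)
        return path

    def semantic_similarity(i, j, parent):
        pi, pj = root_path(i, parent), root_path(j, parent)
        return len(set(pi) & set(pj)) / max(len(pi), len(pj))

    # Calling a Persian cat a Siamese cat scores higher than calling it a horse.
    parent = {"persian": "cat", "siamese": "cat", "horse": "equine",
              "cat": "animal", "equine": "animal"}
    assert (semantic_similarity("persian", "siamese", parent)
            > semantic_similarity("persian", "horse", parent))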
In terms of pure label correctness, ToM improves over the strong LMNN baselines for both AWA-ATTR and VEHICLE-20. Further, in all cases, we see that disjoint sparsity is an important addition
to ToM. However, in AWA-PCA, Global LMNN produces the best results by a statistically insignificant margin. We did not find a clear rationale for this one case. For AWA-ATTR, however, our
method is substantially better than Global LMNN, perhaps due to our method's strength in exploiting
semantic features. While we initially expected Multi-Metric LMNN to outperform Global LMNN,
we suspect it struggles with clusters that are too close together. For all cases when ToM+Disjoint
outperforms the LMNN or Euclidean baselines, the improvement is statistically significant.
In terms of semantic similarity, our ToM is better than all baselines on all datasets. This is a very
encouraging result, since it suggests our approach is in fact leveraging semantics in a useful way.
In practice, the ability to make such "reasonable" errors is likely to be increasingly important as the
community tackles larger and larger multi-class recognition problems.
Conclusion We presented a new metric learning approach for visual recognition that integrates external semantics about object hierarchy. Experiments with challenging datasets indicate its promise,
and support our hypothesis that outside knowledge about how objects relate is valuable for feature
learning. In future work, we are interested in exploring local features in this context, and considering
ways to learn both the hierarchy and the useful features simultaneously.
Acknowledgments F. Sha is supported by NSF IIS-1065243 and benefited from discussions with
D. Zhou and B. Kulis. K. Grauman is supported by NSF IIS-1065390.
References
[1] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV,
2009.
[2] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[3] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image
classification. In CVPR, 2009.
[4] L.-J. Li, H. Su, E. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene
classification and semantic feature sparsification. In NIPS, 2010.
[5] Y. Jia, M. Salzmann, and T. Darrell. Factorized latent spaces with structured sparsity. In NIPS, 2010.
[6] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In
NIPS, 2006.
[7] P. Kumar, P. Torr, and A. Zisserman. An invariant large margin nearest neighbour classifier. In ICCV,
2007.
[8] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. In CVPR, 2008.
[9] D. Ramanan and S. Baker. Local distance functions: A taxonomy, new algorithms, and an evaluation. In
PAMI, 2011.
[10] Z. Wang, Y. Hu, and L.-T. Chia. Image-to-class distance metric learning for image classification. In
ECCV, 2010.
[11] K. Q. Weinberger and K. L. Saul. Distance metric learning for large margin nearest neighbor classification.
JMLR, 10:207–244, June 2009.
[12] M. Marszalek and C. Schmid. Constructing category hierarchies for visual recognition. In ECCV, 2008.
[13] G. Griffin and P. Perona. Learning and using taxonomies for fast visual category recognition. In CVPR,
2008.
[14] D. Koller and M. Sahami. Hierarchically classifying documents using very few words. In ICML, 1997.
[15] A. McCallum, R. Rosenfeld, T. Mitchell, and A. Ng. Improving text classification by shrinkage in a
hierarchy of classes. In ICML, 1998.
[16] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM,
2004.
[17] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class
attribute transfer. In CVPR, 2009.
[18] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image
database. In CVPR, 2009.
[19] A. Torralba and K. Murphy. Sharing visual features for multiclass and multiview object detection. PAMI,
29(5), 2007.
[20] G. Shakhnarovich. Learning Task-Specific Similarity. PhD thesis, MIT, 2006.
[21] B. Babenko, S. Branson, and S. Belongie. Similarity functions for categorization: from monolithic to
category specific. In ICCV, 2009.
[22] A. Globerson and S. Roweis. Metric learning by collapsing classes. In NIPS, pages 451–458. 2006.
[23] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. In ICML, 2007.
[24] K. Weinberger and L. Saul. Fast solvers and efficient implementations for distance metric learning. In
ICML, 2008.
[25] Q. Chen and S. Sun. Hierarchical large margin nearest neighbor classification. In ICPR, 2010.
[26] S. Parameswaran and K. Weinberger. Large margin multi-task metric learning. In NIPS, 2010.
[27] Y. Ying, K. Huang, and C. Campbell. Sparse metric learning via smooth optimization. In NIPS. 2009.
[28] D. Zhou, L. Xiao, and M. Wu. Hierarchical classification via orthogonal transfer. In ICML, 2011.
[29] J. Sivic, B. Russell, A. Zisserman, W. Freeman, and A. Efros. Unsupervised discovery of visual object
class hierarchies. In CVPR, 2008.
[30] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR,
2008.
[31] A. Zweig and D. Weinshall. Exploiting object hierarchy: Combining models from different category
levels. In ICCV, 2007.
[32] J. Deng, A. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us?
In ECCV, 2010.
[33] Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In ECCV, 2010.
[34] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statistical Society, 58:267–288,
1994.
[35] S. Dumais and H. Chen. Hierarchical classification of web content. In Research and Development in
Information Retrieval, 2000.
[36] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories.
In ECCV, 2010.
[37] K. Barnard, Q. Fan, R. Swaminathan, A. Hoogs, R. Collins, P. Rondot, and J. Kaufhold. Evaluation of
localized semantics: data, methodology, and experiments. Technical report, University of Arizona, 2005.
Speedy Q-Learning
Mohammad Gheshlaghi Azar
Radboud University Nijmegen
Geert Grooteplein 21N, 6525 EZ
Nijmegen, Netherlands
[email protected]
Remi Munos
INRIA Lille, SequeL Project
40 avenue Halley
59650 Villeneuve d'Ascq, France
[email protected]
Mohammad Ghavamzadeh
INRIA Lille, SequeL Project
40 avenue Halley
59650 Villeneuve d'Ascq, France
[email protected]
Hilbert J. Kappen
Radboud University Nijmegen
Geert Grooteplein 21N, 6525 EZ
Nijmegen, Netherlands
[email protected]
Abstract
We introduce a new convergent variant of Q-learning, called speedy Q-learning
(SQL), to address the problem of slow convergence in the standard form of the
Q-learning algorithm. We prove a PAC bound on the performance of SQL, which
shows that for an MDP with n state-action pairs and discount factor γ, only
T = O( log(n) / (ε²(1 − γ)⁴) ) steps are required for the SQL algorithm to converge to
an ε-optimal action-value function with high probability. This bound has a better
dependency on 1/ε and 1/(1 − γ), and thus, is tighter than the best available result
for Q-learning. Our bound is also superior to the existing results for both model-free and model-based instances of batch Q-value iteration that are considered to
be more efficient than the incremental methods like Q-learning.
1 Introduction
Q-learning [20] is a well-known model-free reinforcement learning (RL) algorithm that finds an
estimate of the optimal action-value function. Q-learning is a combination of dynamic programming,
more specifically the value iteration algorithm, and stochastic approximation. In finite state-action
problems, it has been shown that Q-learning converges to the optimal action-value function [5, 10].
However, it suffers from slow convergence, especially when the discount factor γ is close to one [8,
17]. The main reason for the slow convergence of Q-learning is the combination of the sample-based
stochastic approximation (that makes use of a decaying learning rate) and the fact that the Bellman
operator propagates information throughout the whole space (especially when γ is close to 1).
In this paper, we focus on RL problems that are formulated as finite state-action discounted infinite
horizon Markov decision processes (MDPs), and propose an algorithm, called speedy Q-learning
(SQL), that addresses the problem of slow convergence of Q-learning. At each time step, SQL uses
two successive estimates of the action-value function, which makes its space complexity twice that of the
standard Q-learning. However, this allows SQL to use a more aggressive learning rate for one of
the terms in its update rule and eventually achieves a faster convergence rate than the standard Q-learning (see Section 3.1 for a more detailed discussion). We prove
a PAC bound on the performance
of SQL, which shows that only T = O( log(n) / ((1 − γ)⁴ε²) ) samples are required for
SQL in order to guarantee an ε-optimal action-value function with high probability. This is superior
to the best result for the standard Q-learning by [8], both in terms of 1/ε and 1/(1 − γ). The rate
for SQL is even better than that for the Phased Q-learning algorithm, a model-free batch Q-value
iteration algorithm proposed and analyzed by [12]. In addition, SQL's rate is slightly better than
the rate of the model-based batch Q-value iteration algorithm in [12] and has a better computational
and memory requirement (computational and space complexity), see Section 3.3.2 for more detailed
comparisons. Similar to Q-learning, SQL may be implemented in synchronous and asynchronous
fashions. For the sake of simplicity in the analysis, we only report and analyze its synchronous
version in this paper. However, it can easily be implemented in an asynchronous fashion and our
theoretical results can also be extended to this setting by following the same path as [8].
The idea of using previous estimates of the action-values has already been used to improve the performance of Q-learning. A popular algorithm of this kind is Q(λ) [14, 20], which incorporates the
concept of eligibility traces in Q-learning, and has been empirically shown to have a better performance than Q-learning, i.e., Q(0), for suitable values of λ. Another recent work in this direction
is Double Q-learning [19], which uses two estimators for the action-value function to alleviate the
over-estimation of action-values in Q-learning. This over-estimation is caused by a positive bias introduced by using the maximum action value as an approximation for the expected action value [19].
The rest of the paper is organized as follows. After introducing the notations used in the paper
in Section 2, we present our Speedy Q-learning algorithm in Section 3. We first describe the algorithm in Section 3.1, then state our main theoretical result, i.e., a high-probability bound on the
performance of SQL, in Section 3.2, and finally compare our bound with the previous results on
Q-learning in Section 3.3. Section 4 contains the detailed proof of the performance bound of the
SQL algorithm. Finally, we conclude the paper and discuss some future directions in Section 5.
2 Preliminaries
In this section, we introduce some concepts and definitions from the theory of Markov decision
processes (MDPs) that are used throughout the paper. We start by the definition of supremum norm.
For a real-valued function g : Y → R, where Y is a finite set, the supremum norm of g is defined as
‖g‖ ≜ max_{y∈Y} |g(y)|.
We consider the standard reinforcement learning (RL) framework [5, 16] in which a learning agent
interacts with a stochastic environment and this interaction is modeled as a discrete-time discounted
MDP. A discounted MDP is a quintuple (X, A, P, R, γ), where X and A are the set of states and
actions, P is the state transition distribution, R is the reward function, and γ ∈ (0, 1) is a discount
factor. We denote by P(·|x, a) and r(x, a) the probability distribution over the next state and the
immediate reward of taking action a at state x, respectively. To keep the representation succinct, we
use Z for the joint state-action space X × A.
Assumption 1 (MDP Regularity). We assume Z and, subsequently, X and A are finite sets with
cardinalities n, |X| and |A|, respectively. We also assume that the immediate rewards r(x, a) are
uniformly bounded by R_max and define the horizon of the MDP β ≜ 1/(1 − γ) and V_max ≜ βR_max.
A stationary Markov policy π(·|x) is the distribution over the control actions given the current
state x. It is deterministic if this distribution concentrates over a single action. The value and the
action-value functions of a policy π, denoted respectively by V^π : X → R and Q^π : Z → R,
are defined as the expected sum of discounted rewards that are encountered when the policy π
is executed. Given a MDP, the goal is to find a policy that attains the best possible values,
V*(x) ≜ sup_π V^π(x), ∀x ∈ X. Function V* is called the optimal value function. Similarly
the optimal action-value function is defined as Q*(x, a) = sup_π Q^π(x, a), ∀(x, a) ∈ Z. The optimal action-value function Q* is the unique fixed-point of the Bellman optimality operator T defined
as (TQ)(x, a) ≜ r(x, a) + γ Σ_{y∈X} P(y|x, a) max_{b∈A} Q(y, b), ∀(x, a) ∈ Z. It is important to note
that T is a contraction with factor γ, i.e., for any pair of action-value functions Q and Q', we have
‖TQ − TQ'‖ ≤ γ‖Q − Q'‖ [4, Chap. 1]. Finally, for the sake of readability, we define the max
operator M over action-value functions as (MQ)(x) = max_{a∈A} Q(x, a), ∀x ∈ X.
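As a quick numerical illustration of these definitions (ours, not part of the paper), the following sketch implements the Bellman optimality operator on a small random MDP and checks the γ-contraction property:

    import numpy as np

    rng = np.random.default_rng(0)
    n_x, n_a, gamma = 4, 2, 0.9
    P = rng.dirichlet(np.ones(n_x), size=(n_x, n_a))  # P[x, a] is a distribution over next states
    r = rng.uniform(0.0, 1.0, size=(n_x, n_a))

    def T(Q):
        # (TQ)(x, a) = r(x, a) + gamma * sum_y P(y|x, a) * max_b Q(y, b)
        return r + gamma * P @ Q.max(axis=1)

    Q1, Q2 = rng.normal(size=(n_x, n_a)), rng.normal(size=(n_x, n_a))
    assert np.abs(T(Q1) - T(Q2)).max() <= gamma * np.abs(Q1 - Q2).max() + 1e-12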
3 Speedy Q-Learning
In this section, we introduce our RL algorithm, called speedy Q-Learning (SQL), derive a performance bound for this algorithm, and compare this bound with similar results on standard Q-learning.
2
The derived performance bound shows that SQL has a rate of convergence of order O(√(1/T)),
which is better than all the existing results for Q-learning.
3.1 Speedy Q-Learning Algorithm
The pseudo-code of the SQL algorithm is shown in Algorithm 1. As it can be seen, this is the
synchronous version of the algorithm, which will be analyzed in the paper. Similar to the standard
Q-learning, SQL may be implemented either synchronously or asynchronously. In the asynchronous
version, at each time step, the action-value of the observed state-action pair is updated, while the
rest of the state-action pairs remain unchanged. For the convergence of this instance of the algorithm, it is required that all the states and actions are visited infinitely many times, which makes
the analysis slightly more complicated. On the other hand, given a generative model, the algorithm may be also formulated in a synchronous fashion, in which we first generate a next state
y ∼ P(·|x, a) for each state-action pair (x, a), and then update the action-values of all the state-action pairs using these samples. We chose to include only the synchronous version of SQL in
the paper just for the sake of simplicity in the analysis. However, the algorithm can be implemented in an asynchronous fashion (similar to the more familiar instance of Q-learning) and our
theoretical results can also be extended to the asynchronous case under some mild assumptions.¹
Algorithm 1: Synchronous Speedy Q-Learning (SQL)
Input: Initial action-value function Q_0, discount factor γ, and number of iterations T
Q_{−1} := Q_0;                                                        // Initialization
for k := 0, 1, 2, 3, ..., T − 1 do                                    // Main loop
    α_k := 1/(k + 1);
    for each (x, a) ∈ Z do
        Generate the next state sample y_k ∼ P(·|x, a);
        T_k Q_{k−1}(x, a) := r(x, a) + γ MQ_{k−1}(y_k);
        T_k Q_k(x, a) := r(x, a) + γ MQ_k(y_k);                       // Empirical Bellman operator
        Q_{k+1}(x, a) := Q_k(x, a) + α_k ( T_k Q_{k−1}(x, a) − Q_k(x, a) )
                         + (1 − α_k) ( T_k Q_k(x, a) − T_k Q_{k−1}(x, a) );   // SQL update rule
    end
end
return Q_T
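The following minimal NumPy sketch (our illustration of Algorithm 1, assuming access to a generative model as in the synchronous setting above; it is not the authors' implementation) mirrors the pseudo-code:

    import numpy as np

    def speedy_q_learning(P, r, gamma, T, rng):
        # P: (n_x, n_a, n_x) transition probabilities, r: (n_x, n_a) rewards.
        n_x, n_a = r.shape
        Q_prev = np.zeros((n_x, n_a))  # Q_{k-1}, initialized to Q_0
        Q = np.zeros((n_x, n_a))       # Q_k
        for k in range(T):
            alpha = 1.0 / (k + 1)
            # One sampled next state y_k for every state-action pair (x, a).
            y = np.array([[rng.choice(n_x, p=P[x, a]) for a in range(n_a)]
                          for x in range(n_x)])
            TkQ_prev = r + gamma * Q_prev[y].max(axis=2)  # T_k Q_{k-1}
            TkQ = r + gamma * Q[y].max(axis=2)            # T_k Q_k
            Q_prev, Q = Q, Q + alpha * (TkQ_prev - Q) + (1 - alpha) * (TkQ - TkQ_prev)
        return Q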
As it can be seen from Algorithm 1, at each time step k, SQL keeps track of the action-value functions of the two time-steps k and k − 1, and its main update rule is of the following form:

    Q_{k+1}(x, a) = Q_k(x, a) + α_k ( T_k Q_{k−1}(x, a) − Q_k(x, a) ) + (1 − α_k) ( T_k Q_k(x, a) − T_k Q_{k−1}(x, a) ),    (1)

where T_k Q(x, a) = r(x, a) + γ MQ(y_k) is the empirical Bellman optimality operator for the sampled next state y_k ∼ P(·|x, a). At each time step k and for state-action pair (x, a), SQL works as
follows: (i) it generates a next state y_k by drawing a sample from P(·|x, a), (ii) it calculates two
sample estimates T_k Q_{k−1}(x, a) and T_k Q_k(x, a) of the Bellman optimality operator (for state-action
pair (x, a) using the next state y_k) applied to the estimates Q_{k−1} and Q_k of the action-value function
at the previous and current time steps, and finally (iii) it updates the action-value function of (x, a),
generating Q_{k+1}(x, a), using the update rule of Eq. 1. Moreover, we let α_k decay linearly with
time, i.e., α_k = 1/(k + 1), in the SQL algorithm.² The update rule of Eq. 1 may be rewritten in the
following more compact form:

    Q_{k+1}(x, a) = (1 − α_k)Q_k(x, a) + α_k D_k[Q_k, Q_{k−1}](x, a),    (2)

where D_k[Q_k, Q_{k−1}](x, a) ≜ kT_k Q_k(x, a) − (k − 1)T_k Q_{k−1}(x, a). This compact form will come
specifically handy in the analysis of the algorithm in Section 4.
Let us consider the update rule of Q-learning

    Q_{k+1}(x, a) = Q_k(x, a) + α_k ( T_k Q_k(x, a) − Q_k(x, a) ),

which may be rewritten as

    Q_{k+1}(x, a) = Q_k(x, a) + α_k ( T_k Q_{k−1}(x, a) − Q_k(x, a) ) + α_k ( T_k Q_k(x, a) − T_k Q_{k−1}(x, a) ).    (3)

¹See [2] for the convergence analysis of the asynchronous variant of SQL.
²Note that other (polynomial) learning steps can also be used with speedy Q-learning. However, one can
show that the rate of convergence of SQL is optimized for α_k = 1/(k + 1). This is in contrast to the standard
Q-learning algorithm for which the rate of convergence is optimized for a polynomial learning step [8].
Comparing the Q-learning update rule of Eq. 3 with the one for SQL in Eq. 1, we first notice that
the same terms: T_k Q_{k−1} − Q_k and T_k Q_k − T_k Q_{k−1} appear on the RHS of the update rules of both
algorithms. However, while Q-learning uses the same conservative learning rate α_k for both these
terms, SQL uses α_k for the first term and a bigger learning step 1 − α_k = k/(k + 1) for the second
one. Since the term T_k Q_k − T_k Q_{k−1} goes to zero as Q_k approaches its optimal value Q*, it is not
necessary that its learning rate approaches zero. As a result, using the learning rate α_k, which goes
to zero with k, is too conservative for this term. This might be a reason why SQL, which uses a more
aggressive learning rate 1 − α_k for this term, has a faster convergence rate than Q-learning.
3.2 Main Theoretical Result
The main theoretical result of the paper is expressed as a high-probability bound over the performance of the SQL algorithm.

Theorem 1. Let Assumption 1 hold and T be a positive integer. Then, at iteration T of SQL, with
probability at least 1 − δ, we have

    ‖Q* − Q_T‖ ≤ 2β²R_max ( γ/T + √( 2 log(2n/δ) / T ) ).

We report the proof of Theorem 1 in Section 4. This result, combined with the Borel–Cantelli lemma [9],
guarantees that Q_T converges almost surely to Q* with the rate √(1/T). Further, the following result,
which quantifies the number of steps T required to reach the error ε > 0 in estimating the optimal
action-value function, w.p. 1 − δ, is an immediate consequence of Theorem 1.
Corollary 1 (Finite-time PAC ("probably approximately correct") performance bound for SQL).
Under Assumption 1, for any ε > 0, after

    T = (11.66 β⁴ R_max² / ε²) log(2n/δ)

steps of SQL, the uniform approximation error ‖Q* − Q_T‖ ≤ ε, with probability at least 1 − δ.
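For a feel of the constants, a small helper (ours; the parameter values below are arbitrary) evaluates the number of iterations prescribed by Corollary 1, where each iteration draws n = |X||A| fresh samples:

    import math

    def sql_iterations(eps, delta, gamma, n, r_max=1.0):
        beta = 1.0 / (1.0 - gamma)
        return math.ceil(11.66 * beta**4 * r_max**2 / eps**2 * math.log(2 * n / delta))

    print(sql_iterations(eps=0.1, delta=0.05, gamma=0.9, n=100))  # grows as beta^4 / eps^2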
3.3 Relation to Existing Results
In this section, we first compare our results for SQL with the existing results on the convergence of
standard Q-learning. This comparison indicates that SQL accelerates the convergence of Q-learning,
especially for γ close to 1 and small ε. We then compare SQL with batch Q-value iteration (QI) in
terms of sample and computational complexities, i.e., the number of samples and the computational
cost required to achieve an ε-optimal solution w.p. 1 − δ, as well as space complexity, i.e., the
memory required at each step of the algorithm.

3.3.1 A Comparison with the Convergence Rate of Standard Q-Learning
There are not many studies in the literature concerning the convergence rate of incremental model-free RL algorithms such as Q-learning. [17] has provided the asymptotic convergence rate for Q-learning under the assumption that all the states have the same next state distribution. This result
shows that the asymptotic convergence rate of Q-learning has exponential dependency on 1 − γ, i.e.,
the rate of convergence is of Õ(1/t^(1−γ)) for γ ≥ 1/2.

The finite time behavior of Q-learning has been thoroughly investigated in [8] for different time
scales. Their main result indicates that by using the polynomial learning step α_k =
1/(k + 1)^ω, 0.5 < ω < 1, Q-learning achieves ε-optimal performance w.p. at least 1 − δ after

    T = O( [ β⁴ R_max² log( nβR_max / (δε) ) / ε² ]^(1/ω) + [ β log( βR_max / ε ) ]^(1/(1−ω)) )    (4)

steps. When γ ≈ 1, one can argue that β = 1/(1 − γ) becomes the dominant term in the bound of
Eq. 4, and thus, the optimized bound w.r.t. ω is obtained for ω = 4/5 and is of Õ(β⁵/ε^2.5). On the
other hand, SQL is guaranteed to achieve the same precision in only O(β⁴/ε²) steps. The difference
between these two bounds is significant for large values of β, i.e., γ's close to 1.
3.3.2 SQL vs. Q-Value Iteration
Finite sample bounds for both model-based and model-free (Phased Q-learning) QI have been derived in [12] and [7]. These algorithms can be considered as the batch version of Q-learning.
They show that to quantify ε-optimal action-value functions with high probability, we need
O( nβ⁵/ε² · log(1/δ)( log(nβ) + log log(1/ε) ) ) and O( nβ⁴/ε² · ( log(nβ/δ) + log log(1/ε) ) ) samples in
model-free and model-based QI, respectively. A comparison between their results and the main result of this paper suggests that the sample complexity of SQL, which is of order O( nβ⁴/ε² log(n/δ) ),³
is better than model-free QI in terms of β and log(1/ε). Although the sample complexity of SQL
is only slightly tighter than the model-based QI, SQL has a significantly better computational and
space complexity than model-based QI: SQL needs only 2n memory space, while the space complexity of model-based QI is given by either Õ(nβ⁴/ε²) or n(|X| + 1), depending on whether the
learned state transition matrix is sparse or not [12]. Also, SQL improves the computational complexity by a factor of Õ(β) compared to both model-free and model-based QI.⁴ Table 1 summarizes
the comparisons between SQL and the other RL methods discussed in this section.
Table 1: Comparison between SQL, Q-learning, model-based and model-free Q-value iteration in
terms of sample complexity (SC), computational complexity (CC), and space complexity (SPC).

Method    SQL            Q-learning (optimized)    Model-based QI    Model-free QI
SC        Õ(nβ⁴/ε²)      Õ(nβ⁵/ε^2.5)              Õ(nβ⁴/ε²)         Õ(nβ⁵/ε²)
CC        Õ(nβ⁴/ε²)      Õ(nβ⁵/ε^2.5)              Õ(nβ⁵/ε²)         Õ(nβ⁵/ε²)
SPC       Θ(n)           Θ(n)                      Õ(nβ⁴/ε²)         Θ(n)

4 Analysis
In this section, we give some intuition about the convergence of SQL and provide the full proof of
the finite-time analysis reported in Theorem 1. We start by introducing some notations.
Let F_k be the filtration generated by the sequence of all random samples {y_1, y_2, ..., y_k} drawn
from the distribution P(·|x, a), for all state-action pairs (x, a) up to round k. We define the operator
D[Q_k, Q_{k−1}] as the expected value of the empirical operator D_k conditioned on F_{k−1}:

    D[Q_k, Q_{k−1}](x, a) ≜ E( D_k[Q_k, Q_{k−1}](x, a) | F_{k−1} ) = kTQ_k(x, a) − (k − 1)TQ_{k−1}(x, a).

Thus the update rule of SQL writes

    Q_{k+1}(x, a) = (1 − α_k)Q_k(x, a) + α_k ( D[Q_k, Q_{k−1}](x, a) − ε_k(x, a) ),    (5)

³Note that at each round of SQL n new samples are generated. This combined with the result of Corollary 1
deduces the sample complexity of order O(nβ⁴/ε² log(n/δ)).
⁴SQL has the sample and computational complexity of the same order since it performs only one Q-value
update per sample, whereas, in the case of model-based QI, the algorithm needs to iterate the action-value
function of all state-action pairs at least Õ(β) times using the Bellman operator, which leads to a computational
complexity bound of order Õ(nβ⁵/ε²) given that only Õ(nβ⁴/ε²) entries of the estimated transition matrix
are non-zero [12].
where the estimation error ε_k is defined as the difference between the operator D[Q_k, Q_{k−1}] and its
sample estimate D_k[Q_k, Q_{k−1}] for all (x, a) ∈ Z:

    ε_k(x, a) ≜ D[Q_k, Q_{k−1}](x, a) − D_k[Q_k, Q_{k−1}](x, a).

We have the property that E[ε_k(x, a)|F_{k−1}] = 0, which means that for all (x, a) ∈ Z the sequence
of estimation errors {ε_1(x, a), ε_2(x, a), ..., ε_k(x, a)} is a martingale difference sequence w.r.t. the
filtration F_k. Let us define the martingale E_k(x, a) to be the sum of the estimation errors:

    E_k(x, a) ≜ Σ_{j=0}^{k} ε_j(x, a),    ∀(x, a) ∈ Z.    (6)
The proof of Theorem 1 follows the following steps: (i) Lemma 1 shows the stability of the algorithm
(i.e., the sequence of Q_k stays bounded). (ii) Lemma 2 states the key property that the SQL iterate
Q_{k+1} is very close to the Bellman operator T applied to the previous iterate Q_k plus an estimation
error term of order E_k/k. (iii) By induction, Lemma 3 provides a performance bound ‖Q* − Q_k‖
in terms of a discounted sum of the cumulative estimation errors {E_j}_{j=0:k−1}. Finally (iv) we use
a maximal Azuma's inequality (see Lemma 4) to bound E_k and deduce the finite-time performance
for SQL.

For simplicity of the notation, we remove the dependence on (x, a) (e.g., writing Q for Q(x, a),
E_k for E_k(x, a)) when there is no possible confusion.
Lemma 1 (Stability of SQL). Let Assumption 1 hold and assume that the initial action-value function Q_0 = Q_{−1} is uniformly bounded by V_max. Then we have, for all k ≥ 0,

    ‖Q_k‖ ≤ V_max,    ‖ε_k‖ ≤ 2V_max,    and    ‖D_k[Q_k, Q_{k−1}]‖ ≤ V_max.
Proof. We first prove that ‖D_k[Q_k, Q_{k−1}]‖ ≤ V_max by induction. For k = 0 we have:

    ‖D_0[Q_0, Q_{−1}]‖ ≤ ‖r‖ + γ‖MQ_{−1}‖ ≤ R_max + γV_max = V_max.

Now for any k ≥ 0, let us assume that the bound ‖D_k[Q_k, Q_{k−1}]‖ ≤ V_max holds. Thus

    ‖D_{k+1}[Q_{k+1}, Q_k]‖ ≤ ‖r‖ + γ‖(k + 1)MQ_{k+1} − kMQ_k‖
                            = ‖r‖ + γ‖(k + 1)M( (k/(k+1)) Q_k + (1/(k+1)) D_k[Q_k, Q_{k−1}] ) − kMQ_k‖
                            ≤ ‖r‖ + γ‖M( kQ_k + D_k[Q_k, Q_{k−1}] ) − M( kQ_k )‖
                            ≤ ‖r‖ + γ‖D_k[Q_k, Q_{k−1}]‖ ≤ R_max + γV_max = V_max,

and by induction, we deduce that for all k ≥ 0, ‖D_k[Q_k, Q_{k−1}]‖ ≤ V_max.

Now the bound on ε_k follows from ‖ε_k‖ = ‖E( D_k[Q_k, Q_{k−1}] | F_{k−1} ) − D_k[Q_k, Q_{k−1}]‖ ≤ 2V_max,
and the bound ‖Q_k‖ ≤ V_max is deduced by noticing that Q_k = (1/k) Σ_{j=0}^{k−1} D_j[Q_j, Q_{j−1}].
The next lemma shows that Q_k is close to TQ_{k−1}, up to a O(1/k) term plus the average cumulative
estimation error (1/k)E_{k−1}.

Lemma 2. Under Assumption 1, for any k ≥ 1:

    Q_k = (1/k)( TQ_0 + (k − 1)TQ_{k−1} − E_{k−1} ).    (7)
Proof. We prove this result by induction. The result holds for k = 1, where (7) reduces to (5). We
now show that if the property (7) holds for k then it also holds for k + 1. Assume that (7) holds for
k. Then, from (5) we have:

    Q_{k+1} = (k/(k+1)) Q_k + (1/(k+1)) ( kTQ_k − (k − 1)TQ_{k−1} − ε_k )
            = (k/(k+1)) · (1/k) ( TQ_0 + (k − 1)TQ_{k−1} − E_{k−1} ) + (1/(k+1)) ( kTQ_k − (k − 1)TQ_{k−1} − ε_k )
            = (1/(k+1)) ( TQ_0 + kTQ_k − E_{k−1} − ε_k ) = (1/(k+1)) ( TQ_0 + kTQ_k − E_k ).

Thus (7) holds for k + 1, and is thus true for all k ≥ 1.
Now we bound the difference between Q* and Q_k in terms of the discounted sum of cumulative
estimation errors {E_0, E_1, ..., E_{k−1}}.

Lemma 3 (Error Propagation of SQL). Let Assumption 1 hold and assume that the initial action-value function Q_0 = Q_{−1} is uniformly bounded by V_max. Then for all k ≥ 1, we have

    ‖Q* − Q_k‖ ≤ 2γβV_max/k + (1/k) Σ_{j=1}^{k} γ^(k−j) ‖E_{j−1}‖.    (8)
Proof. Again we prove this lemma by induction. The result holds for k = 1 as:

    ‖Q* − Q_1‖ = ‖TQ* − T_0 Q_0‖ = ‖TQ* − TQ_0 + ε_0‖
               ≤ ‖TQ* − TQ_0‖ + ‖ε_0‖ ≤ 2γV_max + ‖ε_0‖ ≤ 2γβV_max + ‖E_0‖.

We now show that if the bound holds for k, then it also holds for k + 1. Thus, assume that (8) holds
for k. By using Lemma 2:

    ‖Q* − Q_{k+1}‖ = ‖ Q* − (1/(k+1)) ( TQ_0 + kTQ_k − E_k ) ‖
                   = ‖ (1/(k+1)) (TQ* − TQ_0) + (k/(k+1)) (TQ* − TQ_k) + (1/(k+1)) E_k ‖
                   ≤ (γ/(k+1)) ‖Q* − Q_0‖ + (kγ/(k+1)) ‖Q* − Q_k‖ + (1/(k+1)) ‖E_k‖
                   ≤ (2γ/(k+1)) V_max + (kγ/(k+1)) ( 2γβV_max/k + (1/k) Σ_{j=1}^{k} γ^(k−j) ‖E_{j−1}‖ ) + (1/(k+1)) ‖E_k‖
                   = 2γβV_max/(k+1) + (1/(k+1)) Σ_{j=1}^{k+1} γ^(k+1−j) ‖E_{j−1}‖.

Thus (8) holds for k + 1, and thus for all k ≥ 1 by induction.
Now, based on Lemmas 3 and 1, we prove the main theorem of this paper.

Proof of Theorem 1. We begin our analysis by recalling the result of Lemma 3 at round T:

    ‖Q* − Q_T‖ ≤ 2γβV_max/T + (1/T) Σ_{k=1}^{T} γ^(T−k) ‖E_{k−1}‖.

Note that the difference between this bound and the result of Theorem 1 is just in the second term.
So, we only need to show that the following inequality holds, with probability at least 1 − δ:

    (1/T) Σ_{k=1}^{T} γ^(T−k) ‖E_{k−1}‖ ≤ 2βV_max √( 2 log(2n/δ) / T ).    (9)

We first notice that:

    (1/T) Σ_{k=1}^{T} γ^(T−k) ‖E_{k−1}‖ ≤ (1/T) ( Σ_{k=1}^{T} γ^(T−k) ) max_{1≤k≤T} ‖E_{k−1}‖ ≤ (β/T) max_{1≤k≤T} ‖E_{k−1}‖.    (10)

Therefore, in order to prove (9) it is sufficient to bound max_{1≤k≤T} ‖E_{k−1}‖ =
max_{(x,a)∈Z} max_{1≤k≤T} |E_{k−1}(x, a)| in high probability. We start by providing a high-probability
bound for max_{1≤k≤T} |E_{k−1}(x, a)| for a given (x, a). First notice that

    P( max_{1≤k≤T} |E_{k−1}(x, a)| > ε )
      = P( max[ max_{1≤k≤T} E_{k−1}(x, a), max_{1≤k≤T} (−E_{k−1}(x, a)) ] > ε )
      = P( { max_{1≤k≤T} E_{k−1}(x, a) > ε } ∪ { max_{1≤k≤T} (−E_{k−1}(x, a)) > ε } )
      ≤ P( max_{1≤k≤T} E_{k−1}(x, a) > ε ) + P( max_{1≤k≤T} (−E_{k−1}(x, a)) > ε ),    (11)

and each term is now bounded by using a maximal Azuma inequality, recalled next (see e.g., [6]).
Lemma 4 (Maximal Hoeffding–Azuma Inequality). Let V = {V_1, V_2, ..., V_T} be a martingale difference sequence w.r.t. a sequence of random variables {X_1, X_2, ..., X_T} (i.e.,
E(V_{k+1}|X_1, ..., X_k) = 0 for all 0 < k ≤ T) such that V is uniformly bounded by L > 0. If
we define S_k = Σ_{i=1}^{k} V_i, then for any ε > 0, we have

    P( max_{1≤k≤T} S_k > ε ) ≤ exp( −ε² / (2TL²) ).
As mentioned earlier, the sequence of random variables {ε_0(x, a), ε_1(x, a), ..., ε_k(x, a)} is
a martingale difference sequence w.r.t. the filtration F_k (generated by the random samples
{y_0, y_1, ..., y_k}(x, a) for all (x, a)), i.e., E[ε_k(x, a)|F_{k−1}] = 0. It follows from Lemma 4 that
for any ε > 0 we have:

    P( max_{1≤k≤T} E_{k−1}(x, a) > ε ) ≤ exp( −ε² / (8T V_max²) ),
    P( max_{1≤k≤T} (−E_{k−1}(x, a)) > ε ) ≤ exp( −ε² / (8T V_max²) ).    (12)

By combining (12) with (11) we deduce that P( max_{1≤k≤T} |E_{k−1}(x, a)| > ε ) ≤ 2 exp( −ε² / (8T V_max²) ),
and by a union bound over the state-action space, we deduce that

    P( max_{1≤k≤T} ‖E_{k−1}‖ > ε ) ≤ 2n exp( −ε² / (8T V_max²) ).    (13)

This bound can be rewritten as: for any δ > 0,

    P( max_{1≤k≤T} ‖E_{k−1}‖ ≤ V_max √( 8T log(2n/δ) ) ) ≥ 1 − δ,    (14)

which by using (10) proves (9) and Theorem 1.
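As a side sanity check of the concentration step (ours, not part of the proof), Lemma 4 can be probed empirically on the simplest bounded martingale difference sequence, i.i.d. ±1 signs with L = 1:

    import numpy as np

    rng = np.random.default_rng(1)
    T, L, lam, runs = 200, 1.0, 25.0, 20000
    V = rng.choice([-1.0, 1.0], size=(runs, T))   # bounded martingale differences
    S_max = np.cumsum(V, axis=1).max(axis=1)      # max_k S_k for each run
    empirical = (S_max > lam).mean()
    bound = np.exp(-lam**2 / (2 * T * L**2))
    print(empirical, bound)                       # the empirical tail sits below the bound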
5 Conclusions and Future Work
In this paper, we introduced a new Q-learning algorithm, called speedy Q-learning (SQL). We analyzed the finite-time behavior of this algorithm as well as its asymptotic convergence to the optimal
action-value function. Our result is in the form of a high-probability bound on the performance loss
of SQL, which suggests that the algorithm converges to the optimal action-value function at a faster
rate than the standard Q-learning. Overall, SQL is a simple, efficient and theoretically well-founded
reinforcement learning algorithm, which improves on existing RL algorithms such as Q-learning
and model-based value iteration.
In this work, we are only interested in the estimation of the optimal action-value function and not the
problem of exploration. Therefore, we did not compare our result to the PAC-MDP methods [15, 18]
and the upper-confidence bound based algorithms [3, 11], in which the choice of the exploration
policy impacts the behavior of the learning algorithms. However, we believe that it would be possible
to gain w.r.t. the state of the art in PAC-MDPs, by combining the asynchronous version of SQL with
a smart exploration strategy. This is mainly due to the fact that the bound for SQL has been proved to
be tighter than the RL algorithms that have been used for estimating the value function in PAC-MDP
methods, especially in the model-free case. We consider this as a subject for future research.
Another possible direction for future work is to scale up SQL to large (possibly continuous) state
and action spaces where function approximation is needed. We believe that it would be possible to
extend our current SQL analysis to the continuous case along the same path as in the fitted value
iteration analysis by [13] and [1]. This would require extending the error propagation result of
Lemma 3 to an ℓ₂-norm analysis and combining it with the standard regression bounds.
Acknowledgments
The authors appreciate support from the PASCAL2 Network of Excellence Internal-Visit Programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under
grant agreement no 231495. We also thank Peter Auer for helpful discussion and the anonymous
reviewers for their valuable comments.
References
[1] A. Antos, R. Munos, and Cs. Szepesvári. Fitted Q-iteration in continuous action-space MDPs.
In Proceedings of the 21st Annual Conference on Neural Information Processing Systems,
2007.
[2] M. Gheshlaghi Azar, R. Munos, M. Ghavamzadeh, and H.J. Kappen. Reinforcement learning
with a near optimal rate of convergence. Technical Report inria-00636615, INRIA, 2011.
[3] P. L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement
learning in weakly communicating MDPs. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[4] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume II. Athena Scientific,
Belmount, Massachusetts, third edition, 2007.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont,
Massachusetts, 1996.
[6] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University
Press, New York, NY, USA, 2006.
[7] E. Even-Dar, S. Mannor, and Y. Mansour. PAC bounds for multi-armed bandit and Markov
decision processes. In 15th Annual Conference on Computational Learning Theory, pages
255?270, 2002.
[8] E. Even-Dar and Y. Mansour. Learning rates for Q-learning. Journal of Machine Learning
Research, 5:1?25, 2003.
[9] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, 1968.
[10] T. Jaakkola, M. I. Jordan, and S. Singh. On the convergence of stochastic iterative dynamic
programming. Neural Computation, 6(6):1185?1201, 1994.
[11] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning.
Journal of Machine Learning Research, 11:1563?1600, 2010.
[12] M. Kearns and S. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 12, pages 996?1002. MIT
Press, 1999.
[13] R. Munos and Cs. Szepesv?ari. Finite-time bounds for fitted value iteration. Journal of Machine
Learning Research, 9:815?857, 2008.
[14] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22(13):283?290, 1996.
[15] A. L. Strehl, L. Li, and M. L. Littman. Reinforcement learning in finite MDPs: PAC analysis.
Journal of Machine Learning Research, 10:2413?2444, 2009.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, Massachusetts, 1998.
[17] Cs. Szepesvári. The asymptotic convergence-rate of Q-learning. In Advances in Neural Information Processing Systems 10, Denver, Colorado, USA, 1997.
[18] I. Szita and Cs. Szepesvári. Model-based reinforcement learning with nearly tight exploration
complexity bounds. In Proceedings of the 27th International Conference on Machine Learning,
pages 1031–1038. Omnipress, 2010.
[19] H. van Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems
23, pages 2613–2621, 2010.
[20] C. Watkins. Learning from Delayed Rewards. PhD thesis, Kings College, Cambridge, England,
1989.
Prismatic Algorithm for Discrete D.C. Programming Problem
Yoshinobu Kawahara∗ and Takashi Washio
The Institute of Scientific and Industrial Research (ISIR)
Osaka University
8-1 Mihogaoka, Ibaraki-shi, Osaka 567-0047 JAPAN
{kawahara,washio}@ar.sanken.osaka-u.ac.jp
Abstract
In this paper, we propose the first exact algorithm for minimizing the difference of two
submodular functions (D.S.), i.e., the discrete version of the D.C. programming problem.
The developed algorithm is a branch-and-bound-based algorithm which responds to the
structure of this problem through the relationship between submodularity and convexity.
The D.S. programming problem covers a broad range of applications in machine learning. In fact, this generalizes any set-function optimization. We empirically investigate
the performance of our algorithm, and illustrate the difference between exact and approximate solutions respectively obtained by the proposed and existing algorithms in feature
selection and discriminative structure learning.
1 Introduction
Combinatorial optimization techniques have been actively applied to many machine learning applications, where submodularity often plays an important role to develop algorithms [10, 16, 27, 14,
15, 19, 1]. In fact, many fundamental problems in machine learning can be formulated as submodular
optimization. One of the important categories would be the D.S. programming problem, i.e., the
problem of minimizing the difference of two submodular functions. This is a natural formulation
of many machine learning problems, such as learning graph matching [3], discriminative structure
learning [21], feature selection [1] and energy minimization [24].
In this paper, we propose a prismatic algorithm for the D.S. programming problem, which is a
branch-and-bound-based algorithm responding to the specific structure of this problem. To the best
of our knowledge, this is the first exact algorithm to the D.S. programming problem (although there
exists an approximate algorithm for this problem [21]). As is well known, the branch-and-bound
method is one of the most successful frameworks in mathematical programming and has been incorporated into commercial software such as CPLEX [13, 12]. We develop the algorithm based
on the analogy with the D.C. programming problem through the continuous relaxation of solution
spaces and objective functions with the help of the Lovász extension [17, 11, 18]. The algorithm is
implemented as an iterative calculation of binary-integer linear programming (BILP).
Also, we discuss applications of the D.S. programming problem in machine learning and investigate empirically the performance of our method and the difference between exact and approximate
solutions through feature selection and discriminative structure-learning problems.
The remainder of this paper is organized as follows. In Section 2, we give the formulation of the
D.S. programming problem and then describe its applications in machine learning. In Section 3,
we give an outline of the proposed algorithm for this problem. Then, in Section 4, we explain the
details of its basic operations. And finally, we give several empirical examples using artificial and
real-world datasets in Section 5, and conclude the paper in Section 6.
Preliminaries and Notation: A set function f is called submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ N, where N = {1, ..., n} [5, 7]. Throughout this paper, we denote by f̂ the Lovász extension of f, i.e., a continuous function f̂ : R^n → R defined by

f̂(p) = Σ_{j=1}^{m−1} (p̂_j − p̂_{j+1}) f(U_j) + p̂_m f(U_m),

where U_j = {i ∈ N : p_i ≥ p̂_j} and p̂_1 > ... > p̂_m are the m distinct elements of p [17, 18]. Also, we denote by I_A ∈ {0,1}^n the characteristic vector of a subset A ⊆ N, i.e., I_A = Σ_{i∈A} e_i, where e_i is the i-th unit vector. Note that, through the definition of the characteristic vector, any subset A ⊆ N is in one-to-one correspondence with a vertex of the n-dimensional cube D := {x ∈ R^n : 0 ≤ x_i ≤ 1 (i = 1, ..., n)}. And, we denote by (A, t)(T) all combinations of a real value plus a subset whose corresponding vectors (I_A, t) are inside or on the surface of a polytope T ⊂ R^{n+1}.

∗ http://www.ar.sanken.osaka-u.ac.jp/~kawahara/
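As a concrete illustration, the Lovász extension can be evaluated by sorting the coordinates of p in decreasing order and accumulating the marginal gains of f along the induced chain of sets. The following Python sketch assumes the usual normalization f(∅) = 0, under which this chain form coincides with the definition above; the function name and the frozenset representation of subsets are our own illustrative choices.

import numpy as np

def lovasz_extension(f, p):
    # f: set function on subsets of {0, ..., n-1}, with f(frozenset()) == 0
    # p: point in R^n at which to evaluate the extension f^
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)          # coordinates in decreasing order
    U, prev, val = set(), 0.0, 0.0  # prev = f(empty set) = 0
    for i in order:
        U.add(i)
        cur = f(frozenset(U))
        val += p[i] * (cur - prev)  # marginal gain of adding i to the chain
        prev = cur
    return val

For example, with f(A) = min(|A|, 1) and p = (0.7, 0.2), the chain is {0}, {0, 1} and the value is 0.7 · 1 + 0.2 · 0 = 0.7.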
2 The D.S. Programming Problem and its Applications
Let f and g be submodular functions. In this paper, we address an exact algorithm to solve the D.S. programming problem, i.e., the problem of minimizing the difference of two submodular functions:

min_{A⊆N} f(A) − g(A).   (1)
As is well known, any real-valued function whose second partial derivatives are continuous everywhere can be represented as the difference of two convex functions [12]. As well, the problem (1)
generalizes any set-function optimization problem. Problem (1) covers a broad range of applications
in machine learning [21, 24, 3, 1]. Here, we give a few examples.
Feature selection using structured-sparsity-inducing norms: Sparse methods for supervised learning, where we aim at finding good predictors from as few variables as possible, have attracted much interest from the machine learning community. This combinatorial problem is known to be a submodular maximization problem with a cardinality constraint for commonly used measures such as least-squared errors [4, 14]. And, as is well known, if we replace the cardinality function with its convex envelope such as the l1-norm, this can be turned into a convex optimization problem. Recently,
it is reported that submodular functions in place of the cardinality can give a wider family of polyhedral norms and may incorporate prior knowledge or structural constraints in sparse methods [1].
Then, the objective (that is supposed to be minimized) becomes the sum of a loss function (often,
supermodular) and submodular regularization terms.
Discriminative structure learning: It is reported that a discriminatively structured Bayesian classifier often outperforms a generatively structured one [21, 22]. One commonly used metric for discriminative structure learning is EAR (the explaining-away residual) [2]. EAR is defined as the difference between the conditional mutual information of variables given the class C and the unconditional one, i.e., I(X_i; X_j | C) − I(X_i; X_j). In structure learning, we repeatedly try to find a subset of variables that minimizes this kind of measure. Since the (symmetric) mutual information is a submodular function, this problem obviously leads to the D.S. programming problem [21].
Energy minimization in computer vision: In computer vision, an image is often modeled with a Markov random field, where each node represents a pixel. Let G = (V, E) be the undirected graph, where a label x_s ∈ L is assigned to each node. Then, many tasks in computer vision can be naturally formulated in terms of energy minimization, where the energy function has the form

E(x) = Σ_{p∈V} θ_p(x_p) + Σ_{(p,q)∈E} θ_{pq}(x_p, x_q),

where θ_p and θ_{pq} are univariate and pairwise potentials. In a pairwise potential for binarized energy (i.e., L = {0, 1}), submodularity is defined as θ_pq(1,1) + θ_pq(0,0) ≤ θ_pq(1,0) + θ_pq(0,1) (see, for example, [26]). Based on this, any energy function in computer vision can be written with a submodular function E_1(x) and a supermodular function E_2(x) as E(x) = E_1(x) + E_2(x) (ex. [24]). Or, in the case of binarized energy, even if such an explicit decomposition is not known, a non-unique decomposition into submodular and supermodular functions can always be given [25].
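The binarized submodularity condition above is easy to check directly. A minimal Python sketch (the helper name is ours, not from the paper):

def pairwise_is_submodular(theta):
    # theta: dict mapping binary label pairs (xp, xq) to potential values
    return theta[(1, 1)] + theta[(0, 0)] <= theta[(1, 0)] + theta[(0, 1)]

# Example: the Potts potential satisfies the condition, since 0 + 0 <= 1 + 1.
potts = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}
assert pairwise_is_submodular(potts)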
3 Prismatic Algorithm for the D.S. Programming Problem
By introducing an additional variable t (∈ R), Problem (1) can be converted into the equivalent problem with a supermodular objective function and a submodular feasible set, i.e.,

min_{A⊆N, t∈R} t − g(A)   s.t.   f(A) − t ≤ 0.   (2)
Obviously, if (A*, t*) is an optimal solution of Problem (2), then A* is an optimal solution of Problem (1) and t* = f(A*). The proposed algorithm is a realization of the branch-and-bound scheme which responds to this specific structure of the problem.

To this end, we first define a prism T(S) ⊂ R^{n+1} by

T = {(x, t) ∈ R^n × R : x ∈ S},

where S is an n-simplex. S is obtained from the n-dimensional cube D at the initial iteration (as described in Section 4.1), or by the subdivision operation described in the later part of this section (the details are given in Section 4.2). The prism T has n+1 edges that are vertical lines (i.e., lines parallel to the t-axis) which pass through the n+1 vertices of S, respectively [11].
Figure 1: Illustration of the prismatic algorithm (the cube D with vertices (0,0), (1,0), (0,1), (1,1); a simplex S subdivided at a point r into S1 and S2; and the prism T over S).
Our algorithm is an iterative procedure which mainly consists of two parts, branching and bounding, as in other branch-and-bound frameworks [13]. In branching, subproblems are constructed by dividing the feasible region of a parent problem. And in bounding, we judge whether an optimal solution exists in the region of a subproblem and its descendants by calculating a lower bound for the subproblem and comparing it with an upper bound for the original problem. Some more details on branching and bounding are described as follows.
Branching: The branching operation in our method is carried out using a property of simplices. That is, since, in an n-simplex, any r+1 vertices are not on an (r−1)-dimensional hyperplane for r ≤ n, any n-simplex can be divided as S = ∪_{i=1}^p S_i, where p ≥ 2 and the S_i are n-simplices such that each pair of simplices S_i, S_j (i ≠ j) intersects at most in common boundary points (the way of constructing such a partition is explained in Section 4.2). Then, T = ∪_{i=1}^p T_i, where T_i = {(x, t) ∈ R^n × R : x ∈ S_i}, is a natural prismatic partition of T induced by the above simplicial partition.
Bounding: For the bounding operation on S_k (resp., T_k) at iteration k, we consider a polyhedral convex set P_k such that P_k ⊃ D̃, where D̃ = {(x, t) ∈ R^n × R : x ∈ D, f̂(x) ≤ t} is the region corresponding to the feasible set of Problem (2). At the first iteration, such a P is obtained as

P_0 = {(x, t) ∈ R^n × R : x ∈ S, t ≥ t̄},

where t̄ is a real number satisfying t̄ ≤ min{f(A) : A ⊆ N}. Here, t̄ can be determined by using some existing submodular minimization solver [23, 8]. At later iterations, a more refined P_k, such that P_0 ⊃ P_1 ⊃ ... ⊃ D̃, is constructed as described in Section 4.4.
As described in Section 4.3, a lower bound α(T_k) of t − g(A) on the current prism T_k can be calculated through binary-integer linear programming (BILP) (or linear programming (LP)) using P_k, obtained as described above. Let β be the lowest function value (i.e., an upper bound of t − g(A) on D̃) found so far. Then, if α(T_k) ≥ β, we can conclude that there is no feasible solution in T_k which gives a function value better than β, and we can remove T_k without loss of optimality.

The pseudo-code of the proposed algorithm is given in Algorithm 1. In the following section, we explain the details of the operations involved in this algorithm.
4 Basic Operations
Obviously, the procedure described in Section 3 involves the following basic operations:

1. Construction of the first prism: A prism needs to be constructed from a hypercube at first.
2. Subdivision process: A prism is divided into a finite number of sub-prisms at each iteration.
3. Bound estimation: For each prism generated throughout the algorithm, a lower bound for the objective function t − g(A) over the part of the feasible set contained in this prism is computed.
4. Construction of cutting planes: Throughout the algorithm, a sequence of polyhedral convex sets P_0, P_1, ... is constructed such that P_0 ⊃ P_1 ⊃ ... ⊃ D̃. Each set P_j is generated by a cutting plane to cut off a part of P_{j−1}.
5. Deletion of non-optimal prisms: At each iteration, we try to delete prisms that contain no feasible solution better than the one obtained so far.
1.  Construct a simplex S_0 ⊃ D, its corresponding prism T_0 and a polyhedral convex set P_0 ⊃ D̃.
2.  Let β_0 be the best objective function value known in advance. Then, solve the BILP (5) corresponding to β_0 and T_0, and let α_0 = α(T_0, P_0, β_0) and (A*_0, t*_0) be the point satisfying α_0 = t*_0 − g(A*_0).
3.  Set R_0 ← {T_0}.
4.  while R_k ≠ ∅:
5.      Select a prism T*_k ∈ R_k satisfying α_k = α(T*_k), with (v̄_k, t̄_k) ∈ T*_k.
6.      if (v̄_k, t̄_k) ∈ D̃ then
7.          Set P_{k+1} = P_k.
8.      else
9.          Construct l_k(x, t) according to (8), and set P_{k+1} = {(x, t) ∈ P_k : l_k(x, t) ≤ 0}.
10.     Subdivide T*_k = T(S*_k) into a finite number of subprisms T_{k,j} (j ∈ J_k) (cf. Section 4.2).
11.     For each j ∈ J_k, solve the BILP (5) with respect to T_{k,j}, P_{k+1} and β_k.
12.     Delete all T_{k,j} (j ∈ J_k) satisfying (DR1) or (DR2). Let M_k denote the collection of remaining prisms T_{k,j} (j ∈ J_k), and for each T ∈ M_k set α(T) = max{α(T*_k), α(T, P_{k+1}, β_k)}.
13.     Let F_k be the set of new feasible points detected while solving the BILP in Step 11, and set β_{k+1} = min{β_k, min{t − g(A) : (A, t) ∈ F_k}}.
14.     Delete all T ∈ M_k satisfying α(T) ≥ β_{k+1} and let R_k be R_{k−1} \ {T*_k} ∪ M_k.
15.     Set α_{k+1} ← min{α(T) : T ∈ M_k} and k ← k + 1.

Algorithm 1: Pseudo-code of the prismatic algorithm for the D.S. programming problem.
4.1 Construction of the first prism
The initial simplex S_0 ⊃ D (which yields the initial prism T_0 ⊃ D̃) can be constructed as follows. Now, let v and A_v be a vertex of D and its corresponding subset of N, respectively, i.e., v = Σ_{i∈A_v} e_i. Then, the initial simplex S_0 ⊃ D can be constructed by

S_0 = {x ∈ R^n : x_i ≤ 1 (i ∈ A_v), x_i ≥ 0 (i ∈ N \ A_v), a^T x ≤ ζ},

where a = Σ_{i∈N\A_v} e_i − Σ_{i∈A_v} e_i and ζ = |N \ A_v|. The n+1 vertices of S_0 are v and the n points where the hyperplane {x ∈ R^n : a^T x = ζ} intersects the edges of the cone {x ∈ R^n : x_i ≤ 1 (i ∈ A_v), x_i ≥ 0 (i ∈ N \ A_v)}. Note this is just one option; any n-simplex S ⊃ D is available.
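For illustration, the vertices of S_0 admit a closed form: the i-th edge of the cone leaves v along −e_i (if i ∈ A_v) or +e_i (otherwise), and since a^T v = −|A_v| and ζ = n − |A_v|, each edge meets the hyperplane {a^T x = ζ} after exactly n units. A hedged Python sketch (function name ours):

import numpy as np

def initial_simplex_vertices(n, Av=frozenset()):
    # returns an (n+1) x n array whose rows are the vertices of S_0
    v = np.zeros(n)
    v[list(Av)] = 1.0
    verts = [v]
    for i in range(n):
        d = np.zeros(n)
        d[i] = -1.0 if i in Av else 1.0  # direction of the i-th cone edge
        verts.append(v + n * d)          # a^T x increases by 1 per unit step
    return np.array(verts)

# For n = 2 and Av = {}, this gives (0,0), (2,0), (0,2): a triangle
# containing the unit square D.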
4.2 Sub-division of a prism
Let S_k and T_k be the simplex and prism at the k-th iteration of the algorithm, respectively. We denote S_k as S_k = [v_k^1, ..., v_k^{n+1}] := conv{v_k^1, ..., v_k^{n+1}}, which is defined as the convex hull of its vertices v_k^1, ..., v_k^{n+1}. Then, any r ∈ S_k can be represented as

r = Σ_{i=1}^{n+1} λ_i v_k^i,   Σ_{i=1}^{n+1} λ_i = 1,   λ_i ≥ 0 (i = 1, ..., n+1).

Suppose that r ≠ v_k^i (i = 1, ..., n+1). For each i satisfying λ_i > 0, let S_k^i be the subsimplex of S_k defined by

S_k^i = [v_k^1, ..., v_k^{i−1}, r, v_k^{i+1}, ..., v_k^{n+1}].   (3)

Then, the collection {S_k^i : λ_i > 0} defines a partition of S_k, i.e., we have ∪_{λ_i>0} S_k^i = S_k and int S_k^i ∩ int S_k^j = ∅ for i ≠ j [12].

In a natural way, the prisms T(S_k^i) generated by the simplices S_k^i defined in Eq. (3) form a partition of T_k. This subdivision process of prisms is exhaustive, i.e., for every nested (decreasing) sequence of prisms {T_q} generated by this process, we have ∩_{q=0}^∞ T_q = τ, where τ is a line perpendicular to R^n (a vertical line) [11]. Although several subdivision processes can be applied, we use a classical bisection one, i.e., each simplex is divided into subsimplices by choosing in Eq. (3)

r = (v_k^{i_1} + v_k^{i_2})/2,

where ‖v_k^{i_1} − v_k^{i_2}‖ = max{‖v_k^i − v_k^j‖ : i, j ∈ {0, ..., n}, i ≠ j} (see Figure 1).
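The bisection rule is straightforward to implement: locate the longest edge, place r at its midpoint, and replace one endpoint of that edge in each of the two children, which is exactly Eq. (3) with two subsimplices. A minimal Python sketch (names ours):

import numpy as np

def bisect_simplex(V):
    # V: (n+1) x n array of simplex vertices; returns the two subsimplices
    m = V.shape[0]
    i1, i2, best = 0, 1, -1.0
    for i in range(m):                  # locate the longest edge (i1, i2)
        for j in range(i + 1, m):
            d = np.linalg.norm(V[i] - V[j])
            if d > best:
                i1, i2, best = i, j, d
    r = 0.5 * (V[i1] + V[i2])           # midpoint, as in Eq. (3)
    S1, S2 = V.copy(), V.copy()
    S1[i1] = r                          # child 1 keeps v^{i2}, replaces v^{i1}
    S2[i2] = r                          # child 2 keeps v^{i1}, replaces v^{i2}
    return S1, S2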
4.3 Lower bounds
Again, let S_k and T_k be the simplex and prism at the k-th iteration of the algorithm, respectively. And, let β be an upper bound of t − g(A), which is the smallest value of t − g(A) attained at a feasible point known so far in the algorithm. Moreover, let P_k be a polyhedral convex set which contains D̃ and is represented as

P_k = {(x, t) ∈ R^n × R : A_k x + a_k t ≤ b_k},   (4)

where A_k is a real (m × n)-matrix and a_k, b_k ∈ R^m. Now, a lower bound α(T_k, P_k, β) of t − g(A) over T_k ∩ D̃ can be computed as follows.

First, let v_k^i (i = 1, ..., n+1) denote the vertices of S_k, and define I(S_k) = {i ∈ {1, ..., n+1} : v_k^i ∈ B^n} and

ξ = min{β, min{f̂(v_k^i) − ĝ(v_k^i) : i ∈ I(S_k)}} if I(S_k) ≠ ∅,   and   ξ = β if I(S_k) = ∅.

For each i = 1, ..., n+1, consider the point (v_k^i, t_k^i) where the edge of T_k passing through v_k^i intersects the level set {(x, t) : t − ĝ(x) = ξ}, i.e.,

t_k^i = ĝ(v_k^i) + ξ   (i = 1, ..., n+1).

Then, let us denote the uniquely defined hyperplane through the points (v_k^i, t_k^i) by H = {(x, t) ∈ R^n × R : p^T x − t = δ}, where p ∈ R^n and δ ∈ R. Consider the upper and lower halfspaces generated by H, i.e., H_+ = {(x, t) ∈ R^n × R : p^T x − t ≤ δ} and H_− = {(x, t) ∈ R^n × R : p^T x − t ≥ δ}.

If T_k ∩ D̃ ⊂ H_+, then we see from the supermodularity of g(A) (the concavity of ĝ(x)) that

min{t − g(A) : (A, t) ∈ (A, t)(T_k ∩ D̃)} ≥ min{t − g(A) : (A, t) ∈ (A, t)(T_k ∩ H_+)}
  ≥ min{t − ĝ(x) : (x, t) ∈ T_k ∩ H_+}
  = t_k^i − ĝ(v_k^i) (i = 1, ..., n+1) = ξ.

Otherwise, we shift the hyperplane H (downward with respect to t) until it reaches a point z = (x*, t*) (∈ T_k ∩ P_k ∩ H_−, x* ∈ B^n), where (x*, t*) is a point with the largest distance to H and the corresponding pair (A, t) (since x* ∈ B^n) is in (A, t)(T_k ∩ P_k ∩ H_−). Let H̄ denote the resulting supporting hyperplane, and denote by H̄_+ the upper halfspace generated by H̄. Moreover, for each i = 1, ..., n+1, let z^i = (v_k^i, t̄_k^i) be the point where the edge of T_k passing through v_k^i intersects H̄. Then, it follows that (A, t)(T_k ∩ D̃) ⊂ (A, t)(T_k ∩ P_k) ⊂ (A, t)(T_k ∩ H̄_+), and hence

min{t − g(A) : (A, t) ∈ (A, t)(T_k ∩ D̃)} ≥ min{t − g(A) : (A, t) ∈ (A, t)(T_k ∩ H̄_+)}
  = min{t̄_k^i − ĝ(v_k^i) : i = 1, ..., n+1}.

Now, the above consideration leads to the following BILP in (λ, x, t):

max_{λ,x,t}  Σ_{i=1}^{n+1} t_k^i λ_i − t
s.t.  A_k x + a_k t ≤ b_k,   x = Σ_{i=1}^{n+1} λ_i v_k^i,   x ∈ B^n,
      Σ_{i=1}^{n+1} λ_i = 1,   λ_i ≥ 0 (i = 1, ..., n+1),   (5)

where A_k, a_k and b_k are given in Eq. (4).
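For concreteness, the bounding subproblem (5) can be handed to any mixed-integer linear programming solver. The sketch below uses the open-source PuLP/CBC stack purely as an illustrative assumption (the paper's experiments used CPLEX); the function and variable names are ours.

import numpy as np
import pulp

def solve_bilp5(V, t_levels, A, a, b):
    # V: (n+1) x n vertices v_k^i of S_k; t_levels: the values t_k^i;
    # (A, a, b): data of P_k = {(x, t) : A x + a t <= b}, cf. Eq. (4)
    m, n = V.shape
    prob = pulp.LpProblem("bilp5", pulp.LpMaximize)
    lam = [pulp.LpVariable("lam%d" % i, lowBound=0) for i in range(m)]
    x = [pulp.LpVariable("x%d" % j, cat="Binary") for j in range(n)]
    t = pulp.LpVariable("t")
    prob += pulp.lpSum(t_levels[i] * lam[i] for i in range(m)) - t
    prob += pulp.lpSum(lam) == 1
    for j in range(n):   # x is the barycentric combination of the vertices
        prob += x[j] == pulp.lpSum(V[i, j] * lam[i] for i in range(m))
    for r in range(A.shape[0]):   # (x, t) must lie in P_k
        prob += pulp.lpSum(A[r, j] * x[j] for j in range(n)) + a[r] * t <= b[r]
    status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[status] != "Optimal":
        return None               # no feasible point: deletion rule (DR1)
    return pulp.value(prob.objective)  # c*; (DR2) applies when c* <= 0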
Proposition 1. (a) If the system (5) has no solution, then the intersection (A, t)(T_k ∩ D̃) is empty.
(b) Otherwise, let (λ*, x*, t*) be an optimal solution of BILP (5) and c* = Σ_{i=1}^{n+1} t_k^i λ_i* − t* its optimal value, respectively. Then, the following statements hold:
(b1) If c* ≤ 0, then (A, t)(T_k ∩ D̃) ⊂ (A, t)(H_+).
(b2) If c* > 0, then z = (Σ_{i=1}^{n+1} λ_i* v_k^i, t*), z^i = (v_k^i, t̄_k^i) = (v_k^i, t_k^i − c*) and t̄_k^i − ĝ(v_k^i) = ξ − c* (i = 1, ..., n+1).
Proof. First, we prove part (a). Since every point in S_k is uniquely representable as x = Σ_{i=1}^{n+1} λ_i v_k^i, we see from Eq. (4) that the set (A, t)(T_k ∩ P_k) coincides with the feasible set of problem (5). Therefore, if the system (5) has no solution, then (A, t)(T_k ∩ P_k) = ∅, and hence (A, t)(T_k ∩ D̃) = ∅ (because D̃ ⊂ P_k). Next, we move to part (b). Since the equation of H is p^T x − t = δ, it follows that determining the hyperplane H̄ and the point z amounts to solving the binary integer linear programming problem:

max p^T x − t   s.t.   (x, t) ∈ T_k ∩ P_k, x ∈ B^n.   (6)

Here, we note that the objective of the above can be represented as

p^T x − t = p^T (Σ_{i=1}^{n+1} λ_i v_k^i) − t = Σ_{i=1}^{n+1} λ_i p^T v_k^i − t.

On the other hand, since (v_k^i, t_k^i) ∈ H, we have p^T v_k^i − t_k^i = δ (i = 1, ..., n+1), and hence

p^T x − t = Σ_{i=1}^{n+1} λ_i (δ + t_k^i) − t = Σ_{i=1}^{n+1} t_k^i λ_i − t + δ.

Thus, the two BILPs (5) and (6) are equivalent. And, if η* denotes the optimal objective function value in Eq. (6), then η* = c* + δ. If η* ≤ δ, then it follows from the definition of H_+ that H̄ is obtained by a parallel shift of H in the direction of H_+. Therefore, c* ≤ 0 implies (A, t)(T_k ∩ P_k) ⊂ (A, t)(H_+), and hence (A, t)(T_k ∩ D̃) ⊂ (A, t)(H_+).

Since H̄ = {(x, t) ∈ R^n × R : p^T x − t = η*} and H = {(x, t) ∈ R^n × R : p^T x − t = δ}, we see that for each intersection point (v_k^i, t̄_k^i) (and (v_k^i, t_k^i)) of the edge of T_k passing through v_k^i with H̄ (and H), we have p^T v_k^i − t̄_k^i = η* and p^T v_k^i − t_k^i = δ, respectively. This implies that t̄_k^i = t_k^i + δ − η* = t_k^i − c*, and (using t_k^i = ĝ(v_k^i) + ξ) that t̄_k^i = ĝ(v_k^i) + ξ − c*.

From the above, we see that, in case (b1), ξ constitutes a lower bound of (t − g(A)) whereas, in case (b2), such a lower bound is given by min{t̄_k^i − ĝ(v_k^i) : i = 1, ..., n+1}. Thus, Proposition 1 provides the lower bound

α_k(T_k, P_k, β) = +∞ if BILP (5) has no feasible point;   ξ if c* ≤ 0;   ξ − c* if c* > 0.   (7)

As stated in Section 4.5, T_k can be deleted from further consideration when α_k = ξ or α_k = +∞.
4.4 Outer approximation
The polyhedral convex set P_k ⊃ D̃ used in the preceding section is updated in each iteration, i.e., a sequence P_0 ⊃ P_1 ⊃ ... ⊃ D̃ is constructed. The update from P_k to P_{k+1} (k = 0, 1, ...) is done in a way which is standard for pure outer approximation methods [12]. That is, a certain linear inequality l_k(x, t) ≤ 0 is added to the constraint set defining P_k, i.e., we set

P_{k+1} = P_k ∩ {(x, t) ∈ R^n × R : l_k(x, t) ≤ 0}.

The function l_k(x, t) is constructed as follows. At iteration k, we have a lower bound α_k of t − g(A) as defined in Eq. (7), and a point (v̄_k, t̄_k) satisfying t̄_k − ĝ(v̄_k) = α_k. We update the outer approximation only in the case (v̄_k, t̄_k) ∉ D̃. Then, we can set

l_k(x, t) = s_k^T [(x, t) − z_k] + (f̂(v̄_k) − t̄_k),   (8)

where s_k is a subgradient of f̂(x) − t at z_k = (v̄_k, t̄_k). The subgradient can be calculated as, for example, stated in [9] (see also [7]).

Proposition 2. The hyperplane {(x, t) ∈ R^n × R : l_k(x, t) = 0} strictly separates z_k from D̃, i.e., l_k(z_k) > 0, and l_k(x, t) ≤ 0 for (x, t) ∈ D̃.

Proof. Since we assume that z_k ∉ D̃, we have l_k(z_k) = (f̂(v̄_k) − t̄_k) > 0. And, the latter inequality is an immediate consequence of the definition of a subgradient.
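One standard way to obtain such a subgradient is the greedy (Edmonds) construction for the Lovász extension: sort the coordinates in decreasing order and take the marginal gains of f along the resulting chain; a subgradient of the map (x, t) ↦ f̂(x) − t is then (s, −1). The sketch below is our reading of this construction (cf. [7, 9]), not code from the paper, and assumes f(∅) = 0.

import numpy as np

def lovasz_subgradient(f, x):
    # greedy subgradient of the Lovasz extension f^ at x
    n = len(x)
    order = np.argsort(-np.asarray(x, dtype=float))
    s = np.zeros(n)
    U, prev = set(), 0.0
    for i in order:
        U.add(i)
        cur = f(frozenset(U))
        s[i] = cur - prev   # marginal gain of element i along the chain
        prev = cur
    return s                # (s, -1) is a subgradient of (x, t) -> f^(x) - t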
4.5 Deletion rules

At each iteration of the algorithm, we try to delete certain subprisms that contain no optimal solution. To this end, we adopt the following two deletion rules:

(DR1) Delete T_k if BILP (5) has no feasible solution.
(DR2) Delete T_k if the optimal value c* of BILP (5) satisfies c* ≤ 0.

The feasibility of these rules can be seen from Proposition 1 as well as the D.C. programming problem [11]. That is, (DR1) follows from Proposition 1, since in this case T_k ∩ D̃ = ∅, i.e., the prism T_k is infeasible, and (DR2) follows from Proposition 1 and from the definition of ξ, since then the current best feasible solution cannot be improved in T_k.
Figure 2: Training errors, test errors and computational time versus λ for the prismatic algorithm (exact) and the supermodular-submodular procedure (approximate).
p     n     k     exact (PRISM)     SSP               greedy            lasso
120   150   5     1.8e-4 (192.6)    1.9e-4 (0.93)     1.8e-4 (0.45)     1.9e-4 (0.78)
120   150   10    2.0e-4 (262.7)    2.4e-4 (0.81)     2.3e-4 (0.56)     2.4e-4 (0.84)
120   150   20    7.3e-4 (339.2)    7.8e-4 (1.43)     8.3e-4 (0.59)     7.7e-4 (0.91)
120   150   40    1.7e-3 (467.6)    2.1e-3 (1.17)     2.9e-3 (0.63)     1.9e-3 (0.87)

Table 1: Normalized mean-square prediction errors of training and test data by the prismatic algorithm, the supermodular-submodular procedure, the greedy algorithm and the lasso.
5 Experimental Results
We first provide illustrations of the proposed algorithm and its solution on toy examples from feature
selection in Section 5.1, and then apply the algorithm to an application of discriminative structure
learning using the UCI repository data in Section 5.2. The experiments below were run on a 2.8
GHz 64-bit workstation using Matlab and IBM ILOG CPLEX ver. 12.1.
5.1 Application to feature selection
We compared the performance and solutions of the proposed prismatic algorithm (PRISM), the supermodular-submodular procedure (SSP) [21], the greedy method and the LASSO. To this end, we generated data as follows: Given p, n and k, the design matrix X ∈ R^{n×p} is a matrix of i.i.d. Gaussian components. A feature set J of cardinality k is chosen at random and the weights on the selected features are sampled from a standard multivariate Gaussian distribution. The weights on the other features are 0. We then take y = Xw + n^{−1/2} ‖Xw‖_2 ε, where w is the vector of weights on the features and ε is a standard Gaussian vector. In the experiment, we used the trace norm of the submatrix corresponding to J, X_J, i.e., tr(X_J^T X_J)^{1/2}. Thus, our problem is

min_{w∈R^p} (1/2n) ‖y − Xw‖_2^2 + λ · tr(X_J^T X_J)^{1/2},

where J is the support of w. Or equivalently, min_{A⊆V} g(A) + λ · tr(X_A^T X_A)^{1/2}, where g(A) := min_{w_A∈R^{|A|}} ‖y − X_A w_A‖^2. Since the first term is a supermodular function [4] and the second is a submodular function, this problem is the D.S. programming problem.
First, the graphs in Figure 2 show the training errors, test errors and computational time versus λ for PRISM and SSP (for p = 120, n = 150 and k = 10). The values in the graphs are averaged over 20 datasets. For the test errors, we generated another 100 data from the same model and applied the estimated model to the data. And, for all methods, we tried several possible regularization parameters. From the graphs, we can see the following: First, exact solutions (by PRISM) always outperform approximate ones (by SSP). This would show the significance of optimizing the submodular norm. That is, we could obtain better solutions (in the sense of prediction error) by optimizing the objective with the submodular norm more exactly. And, our algorithm took longer especially when λ is smaller.
Data            Attr.   Class   exact (PRISM)   approx. (SSP)   generative
Chess           36      2       96.6 (±0.69)    94.4 (±0.71)    92.3 (±0.79)
German          20      2       70.0 (±0.43)    69.9 (±0.43)    69.1 (±0.49)
Census-income   40      2       73.2 (±0.64)    71.2 (±0.74)    70.3 (±0.74)
Hepatitis       19      2       86.9 (±1.89)    84.3 (±2.31)    84.2 (±2.11)

Table 2: Empirical accuracy of the classifiers in [%] with standard deviation by the TANs discriminatively learned with PRISM or SSP and generatively learned with a submodular minimization solver. The numbers in parentheses are computational time in seconds.
This would be because a smaller λ basically gives a larger-size subset (solution). Also, Table 1 shows the normalized mean prediction errors by the prismatic algorithm, the supermodular-submodular procedure, the greedy method and the lasso for several k. The values are averaged over 10 datasets. This result also seems to show that optimizing the objective with the submodular norm exactly is significant in terms of prediction error.
5.2 Application to discriminative structure learning

Our second application is discriminative structure learning using the UCI machine learning repository.² Here, we used CHESS, GERMAN, CENSUS-INCOME (KDD) and HEPATITIS, which have two classes. The Bayesian network topology used was the tree-augmented naive Bayes (TAN) [22].
We estimated TANs from data both in generative and discriminative manners. To this end, we used
the procedure described in [20] with a submodular minimization solver (for the generative case), and
the one [21] combined with our prismatic algorithm (PRISM) or the supermodular-submodular procedure (SSP) (for the discriminative case). Once the structures have been estimated, the parameters
were learned based on the maximum likelihood method.
Table 2 shows the empirical accuracy of the classifier in [%] with standard deviation for these
datasets. We used the train/test scheme described in [6, 22]. Also, we removed instances with
missing values. The results seem to show that optimizing the EAR measure more exactly could
improve the performance of classification (which would mean that the EAR is significant as the
measure of discriminative structure learning in the sense of classification).
6 Conclusions
In this paper, we proposed a prismatic algorithm for the D.S. programming problem (1), which is the
first exact algorithm for this problem and is a branch-and-bound method responding to the structure
of this problem. We developed the algorithm based on the analogy with the D.C. programming
problem through the continuous relaxation of solution spaces and objective functions with the help
of the Lovász extension. We applied the proposed algorithm to several situations of feature selection
and discriminative structure learning using artificial and real-world datasets.
The D.S. programming problem addressed in this paper covers a broad range of applications in
machine learning. In future works, we will develop a series of the presented framework specialized
to the specific structure of each problem. Also, it would be interesting to investigate the extension
of our method to enumerate solutions, which could make the framework more useful in practice.
Acknowledgments
This research was supported in part by JST PRESTO PROGRAM (Synthesis of Knowledge for
Information Oriented Society), JST ERATO PROGRAM (Minato Discrete Structure Manipulation
System Project) and KAKENHI (22700147). Also, we are very grateful to the reviewers for helpful
comments.
² http://archive.ics.uci.edu/ml/index.html
References
[1] F. Bach. Structured sparsity-inducing norms through submodular functions. In Advances in Neural Information Processing Systems 23, pages 118–126, 2010.
[2] J. A. Bilmes. Dynamic Bayesian multinets. In Proc. of the 16th Conf. on Uncertainty in Artificial Intelligence (UAI'00), pages 38–45, 2000.
[3] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning graph matching. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(6):1048–1058, 2009.
[4] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In Proc. of the 40th Annual ACM Symp. on Theory of Computing (STOC'08), pages 45–54, 2008.
[5] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In R. Guy, H. Hanani, N. Sauer, and J. Schönheim, editors, Combinatorial Structures and Their Applications, pages 69–87, 1970.
[6] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29:131–163, 1997.
[7] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2nd edition, 2005.
[8] S. Fujishige, T. Hayashi, and S. Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Technical report, Research Institute for Mathematical Sciences, Kyoto University, 2006.
[9] E. Hazan and S. Kale. Beyond convexity: online submodular minimization. In Advances in Neural Information Processing Systems 22, pages 700–708, 2009.
[10] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical image classification. In Proc. of the 23rd Int'l Conf. on Machine Learning (ICML'06), pages 417–424, 2006.
[11] R. Horst, T. Q. Phong, Ng. V. Thoai, and J. de Vries. On solving a D.C. programming problem by a sequence of linear programs. Journal of Global Optimization, 1:183–203, 1991.
[12] R. Horst and H. Tuy. Global Optimization (Deterministic Approaches). Springer, 3rd edition, 1996.
[13] T. Ibaraki. Enumerative approaches to combinatorial optimization. In J. C. Baltzer and A. G. Basel, editors, Annals of Operations Research, volumes 10 and 11, 1987.
[14] Y. Kawahara, K. Nagano, K. Tsuda, and J. A. Bilmes. Submodularity cuts and applications. In Advances in Neural Information Processing Systems 22, pages 916–924. MIT Press, 2009.
[15] A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In Proc. of the 27th Int'l Conf. on Machine Learning (ICML'10), pages 567–574. Omnipress, 2010.
[16] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Journal of Machine Learning Research, 9:2761–2801, 2008.
[17] L. Lovász. Submodular functions and convexity. In A. Bachem, M. Grötschel, and B. Korte, editors, Mathematical Programming - The State of the Art, pages 235–257, 1983.
[18] K. Murota. Discrete Convex Analysis. Monographs on Discrete Math and Applications. SIAM, 2003.
[19] K. Nagano, Y. Kawahara, and S. Iwata. Minimum average cost clustering. In Advances in Neural Information Processing Systems 23, pages 1759–1767, 2010.
[20] M. Narasimhan and J. A. Bilmes. PAC-learning bounded tree-width graphical models. In Proc. of the 20th Ann. Conf. on Uncertainty in Artificial Intelligence (UAI'04), pages 410–417, 2004.
[21] M. Narasimhan and J. A. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In Proc. of the 21st Ann. Conf. on Uncertainty in Artificial Intelligence (UAI'05), pages 404–412, 2005.
[22] F. Pernkopf and J. A. Bilmes. Discriminative versus generative parameter and structure learning of Bayesian network classifiers. In Proc. of the 22nd Int'l Conf. on Machine Learning (ICML'05), pages 657–664, 2005.
[23] M. Queyranne. Minimizing symmetric submodular functions. Math. Prog., 82(1):3–12, 1998.
[24] C. Rother, T. Minka, A. Blake, and V. Kolmogorov. Cosegmentation of image pairs by histogram matching - incorporating a global constraint into MRFs. In Proc. of the 2006 IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition (CVPR'06), pages 993–1000, 2006.
[25] A. Shekhovtsov. Supermodular decomposition of structural labeling problem. Control Systems and Computers, 20(1):39–48, 2006.
[26] A. Shekhovtsov, V. Kolmogorov, P. Kohli, V. Hlaváč, C. Rother, and P. Torr. LP-relaxation of binarized energy minimization. Technical Report CTU-CMP-2007-27, Czech Technical University, 2007.
[27] M. Thoma, H. Cheng, A. Gretton, H. Han, H. P. Kriegel, A. J. Smola, L. Song, P. S. Yu, X. Yan, and K. Borgwardt. Near-optimal supervised feature selection among frequent subgraphs. In Proc. of the 2009 SIAM Conf. on Data Mining (SDM'09), pages 1076–1087, 2009.
3,593 | 4,253 | Signal Estimation Under Random Time-Warpings
and Nonlinear Signal Alignment
Sebastian Kurtek Anuj Srivastava Wei Wu
Department of Statistics
Florida State University, Tallahassee, FL 32306
skurtek,anuj,[email protected]
Abstract
While signal estimation under random amplitudes, phase shifts, and additive noise
is studied frequently, the problem of estimating a deterministic signal under random time-warpings has been relatively unexplored. We present a novel framework
for estimating the unknown signal that utilizes the action of the warping group to
form an equivalence relation between signals. First, we derive an estimator for
the equivalence class of the unknown signal using the notion of Karcher mean on
the quotient space of equivalence classes. This step requires the use of Fisher-Rao
Riemannian metric and a square-root representation of signals to enable computations of distances and means under this metric. Then, we define a notion of
the center of a class and show that the center of the estimated class is a consistent estimator of the underlying unknown signal. This estimation algorithm has
many applications: (1) registration/alignment of functional data, (2) separation of
phase/amplitude components of functional data, (3) joint demodulation and carrier estimation, and (4) sparse modeling of functional data. Here we demonstrate
only (1) and (2): Given signals are temporally aligned using nonlinear warpings
and, thus, separated into their phase and amplitude components. The proposed
method for signal alignment is shown to have state of the art performance using
Berkeley growth, handwritten signatures, and neuroscience spike train data.
1 Introduction
Consider the problem of estimating a signal from noisy observations under the model:

f(t) = c g(at − φ) + e(t),

where the random quantities are: c ∈ R is the scale, a ∈ R is the rate, φ ∈ R is the phase shift, and e(t) ∈ R is the additive noise. There has been an elaborate theory for estimation of the underlying
signal g, given one or several observations of the function f . Often one assumes that g takes a
parametric form, e.g. a superposition of Gaussians or exponentials with different parameters, and
estimates these parameters from the observed data [12]. For instance, the estimation of sinusoids or
exponentials in additive Gaussian noise is a classical problem in signal and speech processing. In
this paper we consider a related but fundamentally different estimation problem where the observed
functional data is modeled as: for t ? [0, 1],
fi (t) = ci g(?i (t)) + ei , i = 1, 2, . . . , n ,
(1)
Here ?i : [0, 1] ? [0, 1] are diffeomorphisms with ?i (0) = 0 and ?i (1) = 1. The fi s represent observations of an unknown, deterministic signal g under random warpings ?i , scalings ci and vertical
translations ei ? R. (A more general model would be to use full functions for additive noise but that
requires further discussion due to identifiability issues. Thus, we restrict to the above model in this
paper.) This problem is interesting because in many situations, including speech, SONAR, RADAR,
NMR, fMRI, and MEG applications, the noise can actually affect the instantaneous phase of the signal, resulting in an observation that is a phase (or frequency) modulation of the original signal. This problem is challenging because of the nonparametric, random nature of the warping functions γ_i. It seems difficult to recover g when its observations have been time-warped nonlinearly in a random fashion. Past papers have either restricted attention to linear warpings (e.g. γ_i(t) = a_i t − φ_i) or assumed a known g (e.g. g(t) = cos(t)). It turns out that without any further restrictions on the γ_i one can recover g only up to an arbitrary warping function. This is easy to see since g ∘ γ_i = (g ∘ γ) ∘ (γ^{−1} ∘ γ_i) for any warping function γ. (As described later, the warping functions are restricted to be automorphisms of a domain and, hence, form a group.) Under an additional condition related to the mean of (the inverses of) the γ_i, we can recover the exact signal g, as demonstrated in this paper.

Figure 1: Separation of phase and amplitude variability in function data (panels: the original data; the warping functions and phase components; the amplitude components; and the cross-sectional mean ± STD before and after warping).
In fact, this model describes several related, some even equivalent, problems but with distinct
applications:
Problem 1: Joint Phase Demodulation and Carrier Estimation: One can view this problem as that of phase (or frequency) demodulation, but without knowledge of the carrier signal g. Thus, it becomes a problem of jointly estimating the carrier signal (g) and the phase demodulations (γ_i^{−1}) of signals that share the same carrier. In case the carrier signal g is known, e.g. g is a sinusoid, then it is relatively easier to estimate the warping functions using dynamic time warping or other estimation-theoretic methods [15, 13]. So, we consider the problem of estimating g from {f_i} under the model given in Eqn. 1.
Problem 2: Phase-Amplitude Separation: Consider the set of signals shown in the top-left panel
of Fig. 1. These functions differ from each other in both heights and locations of their peaks and
valleys. One would like to separate the variability associated with the heights, called the amplitude
variability, from the variability associated with the locations, termed the phase variability. Although
this problem has been studied for almost two decades in the statistics community, see e.g. [7,
9, 4, 11, 8], it is still considered an open problem. Extracting the amplitude variability implies
temporally aligning the given functions using nonlinear time warping, with the result shown in the
bottom right. The corresponding set of warping functions, shown in the top right, represent the
phase variability. The phase component can also be illustrated by applying these warping functions
to the same function, also shown in the top right. The main reason for separating functional data into
these components is to better preserve the structure of the observed data, since a separate modeling
of amplitude and phase variability will be more natural, parsimonious and efficient. It may not be
obvious but the solution to this separation problem is intimately connected to the estimation of g in
Eqn. 1.
Problem 3: Multiple Signal/Image Registration: The problem of phase-amplitude separation is intrinsically the same as the problem of joint registration of multiple signals. The problem here is: Given a set of observed signals {f_i}, estimate the corresponding points in their domains. In other words, what are the γ_i such that, for any t_0, the values f_i(γ_i^{−1}(t_0)) correspond to each other. The bottom right panels of Fig. 1 show the registered signals. Although this problem is more commonly studied for images, its one-dimensional version is non-trivial and helps understand the basic challenges. We will study the 1D problem in this paper but, at least conceptually, the solutions extend to higher-dimensional problems also.
In this paper we provide the following specific contributions. We study the problem of estimating
g given a set {fi } under the model given in Eqn. 1 and propose a consistent estimator for this
problem, along with the supporting asymptotic theory. Also, we illustrate the use of this solution in
automated alignment of sets of given signals. Our framework is based on an equivalence relation
between signals defined as follows. Two signals are deemed equivalent if one can be time-warped
into the other; since the warping functions form a group, the equivalence class is an orbit of the
warping group. This relation partitions the set of signals into equivalence classes, and the set of
equivalence classes (orbits) forms a quotient space. Our estimation of g is based on two steps. First,
we estimate the equivalence class of g using the notion of Karcher mean on quotient space which,
in turn, requires a distance on this quotient space. This distance should respect the equivalence
structure, i.e. the distance between any elements should be zero if and only if they are in the same
class. We propose to use a distance that results from the Fisher-Rao Riemannian metric. This
metric was introduced in 1945 by C. R. Rao [10] and studied rigorously in the 70s and 80s by
Amari [1], Efron [3], Kass [6], Cencov [2], and others. While those earlier efforts were focused
on analyzing parametric families, we use the nonparametric version of the Fisher-Rao Riemannian
metric in this paper. The difficulty in using this metric directly is that it is not straightforward to
compute geodesics (remember that geodesics lengths provide the desired distances). However, a
simple square-root transformation converts this metric into the standard L2 metric and the distance
is obtainable as a simple L2 norm between the square-root forms of functions. Second, given an
estimate of the equivalence class of g, we define the notion of a center of an orbit and use that to
derive an estimator for g.
2 Background Material
We introduce some notation. Let Γ be the set of orientation-preserving diffeomorphisms of the unit interval [0, 1]: Γ = {γ : [0, 1] → [0, 1] | γ(0) = 0, γ(1) = 1, γ is a diffeomorphism}. Elements of Γ form a group, i.e. (1) for any γ_1, γ_2 ∈ Γ, their composition γ_1 ∘ γ_2 ∈ Γ; and (2) for any γ ∈ Γ, its inverse γ^{−1} ∈ Γ, where the identity is the self-mapping γ_id(t) = t. We will use ‖f‖ to denote the L² norm (∫_0^1 |f(t)|² dt)^{1/2}.
2.1 Representation Space of Functions
Let f be a real-valued function on the interval [0, 1]. We are going to restrict to those f that are absolutely continuous on [0, 1]; let F denote the set of all such functions. We define a mapping Q : R → R according to:

Q(x) ≡ x/√|x| if |x| ≠ 0,   and   Q(x) ≡ 0 otherwise.

Note that Q is a continuous map. For the purpose of studying the function f, we will represent it using a square-root velocity function (SRVF) defined as q : [0, 1] → R, where q(t) ≡ Q(ḟ(t)) = ḟ(t)/√|ḟ(t)|. It can be shown that if the function f is absolutely continuous, then the resulting SRVF is square integrable. Thus, we will define L²([0, 1], R) (or simply L²) to be the set of all SRVFs. For every q ∈ L² there exists a function f (unique up to a constant, or a vertical translation) such that the given q is the SRVF of that f. If we warp a function f by γ, the SRVF of f ∘ γ is given by

q̃(t) = (d/dt)(f ∘ γ)(t) / √|(d/dt)(f ∘ γ)(t)| = (q ∘ γ)(t) √(γ̇(t)).

We will denote this transformation by (q, γ) = (q ∘ γ) √γ̇.
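On a discrete grid, the SRVF and the warping action can be computed directly from finite differences. A minimal numpy sketch, for illustration only (function names ours):

import numpy as np

def srvf(f, t):
    # q(t) = f'(t) / sqrt(|f'(t)|), i.e. sign(f') * sqrt(|f'|), on a grid
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def group_action(q, gamma, t):
    # (q, gamma) = (q o gamma) * sqrt(gamma'), on the same grid
    dgamma = np.gradient(gamma, t)
    return np.interp(gamma, t, q) * np.sqrt(dgamma)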
2.2 Elastic Riemannian Metric
Definition 1 For any f ∈ F and v_1, v_2 ∈ T_f(F), where T_f(F) is the tangent space to F at f, the Fisher-Rao Riemannian metric is defined as the inner product:

⟨⟨v_1, v_2⟩⟩_f = (1/4) ∫_0^1 (v̇_1(t) v̇_2(t) / |ḟ(t)|) dt.   (2)
This metric has many fundamental advantages, including the fact that it is the only Riemannian metric that is invariant to domain warping [2]. This metric is somewhat complicated since it changes from point to point on F, and it is not straightforward to derive equations for computing geodesics in F. However, a small transformation provides an enormous simplification of this task. This motivates the use of SRVFs for representing and aligning elastic functions.

Lemma 1 Under the SRVF representation, the Fisher-Rao Riemannian metric becomes the standard L² metric.

This result can be used to compute the distance d_FR between any two functions by computing the L² distance between the corresponding SRVFs, that is, d_FR(f_1, f_2) = ‖q_1 − q_2‖. The next question is: What is the effect of warping on d_FR? This is answered by the following isometry result.

Lemma 2 For any two SRVFs q_1, q_2 ∈ L² and γ ∈ Γ, ‖(q_1, γ) − (q_2, γ)‖ = ‖q_1 − q_2‖.
2.3 Elastic Distance on Quotient Space
Our next step is to define an elastic distance between functions as follows. The orbit of an SRVF q ∈ L² is given by: [q] = closure{(q, γ) | γ ∈ Γ}. It is the set of SRVFs associated with all the warpings of a function, and their limit points. Let S denote the set of all such orbits. To compare any two orbits we need a metric on S. We will use the Fisher-Rao distance to induce a distance between orbits, and we can do that only because under this metric the action of Γ is by isometries.

Definition 2 For any two functions f_1, f_2 ∈ F and the corresponding SRVFs q_1, q_2 ∈ L², we define the elastic distance d on the quotient space S to be: d([q_1], [q_2]) = inf_{γ∈Γ} ‖q_1 − (q_2, γ)‖.

Note that the distance d between a function and its domain-warped version is zero. However, it can be shown that if two SRVFs belong to different orbits, then the distance between them is non-zero. Thus, this distance d is a proper distance (i.e. it satisfies non-negativity, symmetry, and the triangle inequality) on S, but not on L² itself, where it is only a pseudo-distance.
3 Signal Estimation Method
Our estimation is based on the model f_i = c_i (g ∘ γ_i) + e_i, i = 1, ..., n, where g, f_i ∈ F, c_i ∈ R_+, γ_i ∈ Γ and e_i ∈ R. Given {f_i}, our goal is to identify warping functions {γ_i} so as to reconstruct g. We will do so in three steps: 1) For a given collection of functions {f_i}, and their SRVFs {q_i}, we compute the mean of the corresponding orbits {[q_i]} in the quotient space S; we will call it [µ]_n. 2) We compute an appropriate element of this mean orbit to define a template µ_n in L². The optimal warping functions {γ_i*} are estimated by aligning the individual functions to match the template µ_n. 3) The estimated warping functions are then used to align {f_i} and reconstruct the underlying signal g.
3.1 Pre-step: Karcher Mean of Points in Γ
In this section we will define a Karcher mean of a set of warping functions {γ_i}, under the Fisher-Rao metric, using the differential geometry of Γ. Analysis on Γ is not straightforward because it is a nonlinear manifold. To understand its geometry, we will represent an element γ ∈ Γ by the square-root of its derivative, ψ = √γ̇. Note that this is the same as the SRVF defined earlier for elements of F, except that γ̇ > 0 here. Since γ(0) = 0, the mapping from γ to ψ is a bijection and one can reconstruct γ from ψ using γ(t) = ∫_0^t ψ(s)² ds. An important advantage of this transformation is that since ‖ψ‖² = ∫_0^1 ψ(t)² dt = ∫_0^1 γ̇(t) dt = γ(1) − γ(0) = 1, the set of all such ψ's is S_∞, the unit sphere in the Hilbert space L². In other words, the square-root representation simplifies the complicated geometry of Γ to the unit sphere. Recall that the distance between any two points on the unit sphere, under the Euclidean metric, is simply the length of the shortest arc of a great circle connecting them on the sphere. Using Lemma 1, the Fisher-Rao distance between any two warping functions is found to be

d_FR(γ_1, γ_2) = cos^{−1}(∫_0^1 √(γ̇_1(t)) √(γ̇_2(t)) dt).

Now that we have a proper distance on Γ, we can define a Karcher mean as follows.

Definition 3 For a given set of warping functions γ_1, γ_2, ..., γ_n ∈ Γ, define their Karcher mean to be γ̄_n = argmin_{γ∈Γ} Σ_{i=1}^n d_FR(γ, γ_i)².
The search for this minimum is performed using a standard iterative algorithm that is not repeated here to save space.
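On a grid, the arc-length distance itself is a one-liner. A sketch, assuming strictly increasing, differentiable warps sampled on a common grid t:

import numpy as np

def dFR_warps(g1, g2, t):
    # arc length on the unit sphere between psi_i = sqrt(gamma_i')
    p1 = np.sqrt(np.gradient(g1, t))
    p2 = np.sqrt(np.gradient(g2, t))
    inner = np.trapz(p1 * p2, t)
    return np.arccos(np.clip(inner, -1.0, 1.0))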
3.2 Step 1: Karcher Mean of Points in S = L²/Γ
Next we consider the problem of finding means of points in the quotient space S.

Definition 4 Define the Karcher mean [µ]_n of the given SRVF orbits {[q_i]} in the space S as a local minimum of the sum of squares of elastic distances:

[µ]_n = argmin_{[q]∈S} Σ_{i=1}^n d([q], [q_i])².   (3)
We emphasize that the Karcher mean [µ]_n is actually an orbit of functions, rather than a function. The full algorithm for computing the Karcher mean in S is given next.

Algorithm 1: Karcher Mean of {[q_i]} in S
1. Initialization Step: Select µ = q_j, where j is any index in argmin_{1≤i≤n} ‖q_i − (1/n) Σ_{k=1}^n q_k‖.
2. For each q_i find γ_i* by solving: γ_i* = argmin_{γ∈Γ} ‖µ − (q_i, γ)‖. The solution to this optimization comes from a dynamic programming algorithm in a discretized domain.
3. Compute the aligned SRVFs using q̃_i ↦ (q_i, γ_i*).
4. If the increment ‖(1/n) Σ_{i=1}^n q̃_i − µ‖ is small, then stop. Else, update the mean using µ ↦ (1/n) Σ_{i=1}^n q̃_i and return to step 2.
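The outer loop of Algorithm 1 is simple once an alignment routine is available. In the sketch below, align(q, mu, t) is a hypothetical stand-in for the dynamic-programming solver of Step 2, and group_action is the warping action sketched in Section 2.1; everything else follows Steps 1-4.

import numpy as np

def karcher_mean_S(qs, t, align, tol=1e-4, max_iter=50):
    # qs: list of SRVFs on the grid t; align(q, mu, t) -> optimal warping
    qbar = np.mean(qs, axis=0)
    j = int(np.argmin([np.linalg.norm(q - qbar) for q in qs]))
    mu = qs[j]                                   # Step 1: initialization
    aligned = list(qs)
    for _ in range(max_iter):
        aligned = [group_action(q, align(q, mu, t), t) for q in qs]  # Steps 2-3
        new_mu = np.mean(aligned, axis=0)
        if np.linalg.norm(new_mu - mu) < tol:    # Step 4: stopping rule
            break
        mu = new_mu
    return mu, aligned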
The iterative update in Steps 2-4 is based on the gradient of the cost function given in Eqn. 3. Denote the estimated mean in the k-th iteration by µ^(k). In the k-th iteration, let γ_i^(k) denote the optimal domain warping from q_i to µ^(k) and let q̃_i^(k) = (q_i, γ_i^(k)). Then,

Σ_{i=1}^n d([µ^(k)], [q_i])² = Σ_{i=1}^n ‖µ^(k) − q̃_i^(k)‖² ≥ Σ_{i=1}^n ‖µ^(k+1) − q̃_i^(k)‖² ≥ Σ_{i=1}^n d([µ^(k+1)], [q_i])².

Thus, the cost function decreases iteratively and, as zero is a lower bound, Σ_{i=1}^n d([µ^(k)], [q_i])² will always converge.
3.3 Step 2: Center of an Orbit
Here we find a particular element of this mean orbit so that it can be used as a template to align the given functions.

Definition 5 For a given set of SRVFs q_1, q_2, ..., q_n and q, define an element q̄ of [q] as the center of [q] with respect to the set {q_i} if the warping functions {γ_i}, where γ_i = argmin_{γ∈Γ} ‖q̄ − (q_i, γ)‖, have the Karcher mean γ_id.

We will prove the existence of such an element by construction.

Algorithm 2: Finding Center of an Orbit: WLOG, let q be any element of the orbit [q].
1. For each q_i find γ_i by solving: γ_i = argmin_{γ∈Γ} ‖q − (q_i, γ)‖.
2. Compute the mean γ̄_n of all {γ_i}. The center of [q] wrt {q_i} is given by q̄ = (q, γ̄_n^{−1}).
We need to show that q̄ resulting from Algorithm 2 satisfies the mean condition in Definition 5. Note that γ_i is chosen to minimize ‖q − (q_i, γ)‖, and also that ‖q̄ − (q_i, γ)‖ = ‖(q, γ̄_n^{−1}) − (q_i, γ)‖ = ‖q − (q_i, γ ∘ γ̄_n)‖. Therefore, γ_i* = γ_i ∘ γ̄_n^{−1} minimizes ‖q̄ − (q_i, γ)‖. That is, γ_i* is a warping that aligns q_i to q̄. To verify the Karcher mean of the {γ_i*}, we compute the sum of squared distances

Σ_{i=1}^n d_FR(γ, γ_i*)² = Σ_{i=1}^n d_FR(γ, γ_i ∘ γ̄_n^{−1})² = Σ_{i=1}^n d_FR(γ ∘ γ̄_n, γ_i)².

As γ̄_n is already the mean of the {γ_i}, this sum of squares is minimized when γ = γ_id. That is, the mean of the {γ_i*} is γ_id.

We will apply this setup in our problem by finding the center of [µ]_n with respect to the SRVFs {q_i}.
Figure 2: Example of consistent estimation (panels: the true signal g; the observed data {f_i}; the aligned functions {f̃_i}; the estimated g against the true g; and the estimation error as a function of n).
3.4 Steps 1-3: Complete Estimation Algorithm
Consider the observation model f_i = c_i (g ∘ γ_i) + e_i, i = 1, ..., n, where g is an unknown signal, and c_i ∈ R_+, γ_i ∈ Γ and e_i ∈ R are random. Given the observations {f_i}, the goal is to estimate the signal g. To make the system identifiable, we need some constraints on γ_i, c_i, and e_i. In this paper, the constraints are: 1) the population mean of {γ_i^{−1}} is the identity γ_id, and 2) the population Karcher means of {c_i} and {e_i} are known, denoted by E(c̄) and E(ē), respectively. Now we can utilize Algorithms 1 and 2 to present the full procedure for function alignment and signal estimation.

Complete Estimation Algorithm: Given a set of functions {f_i}_{i=1}^n on [0, 1], and population means E(c̄) and E(ē). Let {q_i}_{i=1}^n denote the SRVFs of {f_i}_{i=1}^n, respectively.
1. Compute the Karcher mean of {[q_i]} in S using Algorithm 1; denote it by [µ]_n.
2. Find the center of [µ]_n wrt {q_i} using Algorithm 2; call it µ_n.
3. For i = 1, 2, ..., n, find γ_i* by solving: γ_i* = argmin_{γ∈Γ} ‖µ_n − (q_i, γ)‖.
4. Compute the aligned SRVFs q̃_i = (q_i, γ_i*) and aligned functions f̃_i = f_i ∘ γ_i*.
5. Return the warping functions {γ_i*} and the estimated signal ĝ = ((1/n) Σ_{i=1}^n f̃_i − E(ē))/E(c̄).
Illustration. We illustrate the estimation process using an example which is a quadraticallyenveloped sine-wave function g(t) = (1 ? (1 ? 2t)2 ) sin(5?t), t ? [0, 1]. We randomly generate
n = 50 warping functions {?i } such that {?i?1 } are i.i.d with mean ?id . We also generate i.i.d
sequences {ci } and {ei } from the exponential distribution with mean 1 and the standard normal
distribution, respectively. Then we compute functions fi = ci (g ? ?i ) + ei to form the functional
data. In Fig. 2, the first panel shows the function g, and the second panel shows the data {fi }. The
Complete Estimation Algorithm results in the aligned functions {f?i = fi ? ?i? } that are are shown
in the third panel in Fig. 2. In this case, E(?
c)) = 1, E(?
e) = 0. This estimated g (red) using the
Complete Estimation Algorithm as well as the true g (blue) are shown in the fourth panel. Note
that the estimate is very successful despite large variability in the raw data. Finally, we examine
the performance of the estimator with respect to the sample size, by performing this estimation for
n equal to 5, 10, 20, 30, and 40. The estimation errors, computed using the L2 norm between estimated g?s and the true g, are shown in the last panel. As we will show in the following theoretical
development, this estimate converges to the true g when the sample size n grows large.
4 Estimator Consistency and Asymptotics
In this section we mathematically demonstrate that the proposed algorithms in Section 3 provide
a consistent estimator for the underlying function g. This or related problems have been considered previously by several papers, including [14, 9], but we are not aware of any formal statistical
solution.
At first, we establish the following useful result.
Lemma 3 For any q1 , q2 ? L2 and a constant c > 0, we have argmin??? kq1 ? (q2 , ?)k =
argmin??? kcq1 ? (q2 , ?)k.
Corollary 1 For any function q ? L2 and constant c > 0, we have ?id ? argmin??? kcq ? (q, ?)k.
Moreover, if the set {t ? [0, 1]|q(t) = 0} has (Lebesgue) measure 0, ?id = argmin??? kcq?(q, ?)k.
6
Based on Lemma 3 and Corollary 1, we have the following result on the Karcher mean in the quotient
space S.
Theorem 1 For a function g, consider a sequence of functions fi (t) = ci g(?i (t)) + ei , where ci
is a positive constant, ei is a constant, and ?i is a time warping,
Pn ? i = 1, ? ? ? , n. Denote by qg
and qi the SRVFs of g and fi , respectively, and let s? = n1 i=1 ci . Then, the Karcher mean of
{[qi ], i = 1, 2, . . . , n} in S is s?[qg ]. That is,
?N
!
X
2
[?]n ? argmin
d ([qi ], [q]) = s?[qg ] = s?{(qg , ?), ? ? ?} .
[q]
i=1
Next, we present a simple fact about the Karcher mean (see Definition 3) of warping functions.
Lemma 4 Given a set {?i ? ?|i = 1, ..., n} and a ?0 ? ?, if the Karcher mean of {?i } is ?? , then
the Karcher mean of {?i ? ?0 } is ?? ? ?0 .
Theorem 1 ensures that [?]n belongs to the orbit of [qg ] (up to a scale factor) but we are interested in
estimating g itself, rather than its orbit. We will show in two steps (Theorems 2 and 3) that finding
the center of the orbit [?]n leads to a consistent estimator for g.
Theorem 2 Under the same conditions as in Theorem 1, let ? = (?
sqg , ?0 ), for ?0 ? ?, denote an
arbitrary element of the Karcher mean class [?]n = s?[qg ]. Assume that the set {t ? [0, 1]|g(t)
?
= 0}
has Lebesgue measure zero. If the population Karcher mean of {?i?1 } is ?id , then the center of the
orbit [?]n , denoted by ?n , satisfies limn?? ?n = E(?
s)qg .
This result shows that asymptotically one can recover the SRVF of the original signal by the Karcher
mean of the SRVFs of the observed signals. Next in Theorem 3, we will show that one can also
reconstruct g using aligned functions {f?i } generated by the Alignment Algorithm in Section 3.
Theorem 3 Under the same conditions as in Theorem 2, let ?i? = argmin? k(qi , ?) ? ?n k and f?i =
Pn
Pn
Pn
fi ? ?i? . If we denote c? = n1 i=1 ci and e? = n1 i=1 ei , then limn?? n1 i=1 f?i = E(?
c)g + E(?
e).
5 Application to Signal Alignment
In this section we will focus on function alignment and comparison of alignment performance with
some previous methods on several datasets. In this case, the given signals are viewed as {fi } in the
previous set up and we estimate the center of the orbit and then use it for alignment of all signals.
The datasets include 3 real experimental applications listed below. The data are shown in Column 1
in Fig. 3.
1. Real Data 1. Berkeley Growth Data: The Berkeley growth dataset for 39 male subjects
[11]. For better illustrations, we have used the first derivatives of the growth (i.e. growth
velocity) curves as the functions {fi } in our analysis.
2. Real Data 2. Handwriting Signature Data: 20 handwritten signatures and the acceleration functions along the signature curves [8]. Let (x(t), y(t)) denote the x and y coordinates p
of a signature traced as a function of time t. We study the acceleration functions
f (t) = x
?(t)2 + y?(t)2 of the signatures.
3. Real Data 3. Neural Spike Data: Spiking activity of one motor cortical neuron in a
Macaque monkey was recorded during arm-movement behavior [16]. The smoothed (using
a Gaussian kernel) spike trains over 10 movement trials are used in this alignment analysis.
There are no standard criteria on evaluating function alignment in the current literature. Here we use
the following three criteria so that together they provide a comprehensive evaluation, where fi and
f?i , i = 1, ..., N , denote the original and the aligned functions, respectively.
1. Least Squares: ls =
1
N
R
P
(f?i (t)? N 1?1
f?j (t))2 dt
R
Pj6=i
.
2
i=1 (fi (t)? N 1?1
j6=i fj (t)) dt
PN
ls measures the cross-sectional
variance of the aligned functions, relative to original values. The smaller the value of ls,
the better the alignment is in general.
7
Original
PACE [11]
SMR [4]
MBM [5]
F-R
30
30
30
30
30
20
20
20
20
20
10
10
10
10
10
0
0
0
0
5
10
15
5
Growth-male
10
15
5
(0.91, 1.09, 0.68)
10
15
0
5
(0.45, 1.17, 0.77)
10
15
5
(0.70, 1.17, 0.62)
1.5
1.5
1.5
1.5
1.5
1
1
1
1
1
0.5
0.5
0.5
0.5
0.5
0
20
40
60
0
80
Signature
20
40
60
0
80
(0.91, 1.18, 0.84)
20
40
60
0
80
(0.62, 1.59, 0.31)
20
40
60
0
80
(0.64, 1.57, 0.46)
1.5
1.5
1.5
1.5
1
1
1
1
1
0.5
0.5
0.5
0.5
0.5
0.5
1
1.5
2
2.5
Neural data
0
0.5
1
1.5
2
0
2.5
(0.87, 1.35, 1.10)
0.5
1
1.5
2
2.5
0
(0.69, 2.54, 0.95)
0.5
1
1.5
2
(0.48, 3.06, 0.40)
15
20
40
60
80
(0.56, 1.79, 0.31)
1.5
0
10
(0.64, 1.18, 0.31)
2.5
0
0.5
1
1.5
2
2.5
(0.40, 3.77, 0.28)
Figure 3: Empirical evaluation of four methods on 3 real datasets, with the alignment performance
computed using three criteria (ls, pc, sls). The best cases are shown in boldface.
P
cc(f?i (t),f?j (t))
2. Pairwise Correlation: pc = Pi6=j cc(fi (t),fj (t)) , where cc(f, g) is the pairwise Pearson?s
i6=j
correlation between functions. Large values of pc indicate good sychronization.
3. Sobolev Least Squares: sls =
PN
R
R
Pi=1
N
i=1
PN ?? 2
?
1
(f?i (t)? N
fj ) dt
Pj=1
N
1
(f?i (t)?
f?j )2 dt
N
, This criterion measures the
j=1
total cross-sectional variance of the derivatives of the aligned functions, relative to the
original value. The smaller the value of sls, the better synchronization the method achieves.
We compare our Fisher-Rao (F-R) method with the Tang-M?uller method [11] provided in principal
analysis by conditional expectation (PACE) package, the self-modeling registration (SMR) method
presented in [4], and the moment-based matching (MBM) technique presented in [5]. Fig. 3 summarizes the values of (ls, pc, sls) for these four methods using 3 real datasets. From the results, we
can see that the F-R method does uniformly well in functional alignment under all the evaluation
metrics. We have found that the ls criterion is sometimes misleading in the sense that a low value
can result even if the functions are not very well aligned. This is the case, for example, in the male
growth data under SMR method. Here the ls = 0.45, while for our method ls = 0.64, even though
it is easy to see that latter has performed a better alignment. On the other hand, the sls criterion
seems to best correlate with a visual evaluation of the alignment. The neural spike train data is the
most challenging and no other method except ours does a good job.
6 Summary
In this paper we have described a parameter-free approach for reconstructing underlying signal using
given functions with random warpings, scalings, and translations. The basic idea is to use the FisherRao Riemannian metric and the resulting geodesic distance to define a proper distance, called elastic
distance, between warping orbits of SRVF functions. This distance is used to compute a Karcher
mean of the orbits, and a template is selected from the mean orbit using an additional condition
that the mean of the warping functions is identity. By applying these warpings on the original functions, we provide a consistent estimator of the underlying signal. One interesting application of this
framework is in aligning functions with significant x-variability. We show the the proposed FisherRao method provides better alignment performance than the state-of-the-art methods in several real
experimental data.
8
References
[1] S. Amari. Differential Geometric Methods in Statistics. Lecture Notes in Statistics, Vol. 28.
Springer, 1985.
?
[2] N. N. Cencov.
Statistical Decision Rules and Optimal Inferences, volume 53 of Translations
of Mathematical Monographs. AMS, Providence, USA, 1982.
[3] B. Efron. Defining the curvature of a statistical problem (with applications to second order
efficiency). Ann. Statist., 3:1189?1242, 1975.
[4] D. Gervini and T. Gasser. Self-modeling warping functions. Journal of the Royal Statistical
Society, Ser. B, 66:959?971, 2004.
[5] G. James. Curve alignment by moments. Annals of Applied Statistics, 1(2):480?501, 2007.
[6] R. E. Kass and P. W. Vos. Geometric Foundations of Asymptotic Inference. John Wiley &
Sons, Inc., 1997.
[7] A. Kneip and T. Gasser. Statistical tools to analyze data representing a sample of curves. The
Annals of Statistics, 20:1266?1305, 1992.
[8] A. Kneip and J. O. Ramsay. Combining registration and fitting for functional models. Journal
of American Statistical Association, 103(483), 2008.
[9] J. O. Ramsay and X. Li. Curve registration. Journal of the Royal Statistical Society, Ser. B,
60:351?363, 1998.
[10] C. R. Rao. Information and accuracy attainable in the estimation of statistical parameters.
Bulletin of Calcutta Mathematical Society, 37:81?91, 1945.
[11] R. Tang and H. G. Muller. Pairwise curve synchronization for functional data. Biometrika,
95(4):875?889, 2008.
[12] H.L. Van Trees. Detection, Estimation, and Modulation Theory, vol. I. John Wiley, N.Y., 1971.
[13] M. Tsang, J. H. Shapiro, and S. Lloyd. Quantum theory of optical temporal phase and instantaneous frequency. Phys. Rev. A, 78(5):053820, Nov 2008.
[14] K. Wang and T. Gasser. Alignment of curves by dynamic time warping. Annals of Statistics,
25(3):1251?1276, 1997.
[15] A. Willsky. Fourier series and estimation on the circle with applications to synchronous
communication?I: Analysis. IEEE Transactions on Information Theory, 20(5):577 ? 583, sep
1974.
[16] W. Wu and A. Srivastava. Towards Statistical Summaries of Spike Train Data. Journal of
Neuroscience Methods, 195:107?110, 2011.
9
| 4253 |@word trial:1 version:3 seems:2 norm:3 open:1 closure:1 q1:6 attainable:1 sychronization:1 moment:2 series:1 ours:1 past:1 ka:2 current:1 john:2 additive:4 partition:1 motor:1 update:2 selected:1 provides:1 bijection:1 location:2 height:2 mathematical:2 along:2 differential:2 prove:1 fitting:1 introduce:1 pairwise:3 behavior:1 frequently:1 examine:1 discretized:1 becomes:2 provided:1 estimating:5 underlying:6 notation:1 fsu:1 panel:7 moreover:1 what:2 argmin:12 minimizes:1 monkey:1 q2:11 finding:4 transformation:4 pseudo:1 berkeley:3 unexplored:1 remember:1 every:1 temporal:1 growth:7 biometrika:1 k2:3 nmr:1 ser:2 unit:4 carrier:6 before:1 positive:1 local:1 limit:1 despite:1 analyzing:1 id:9 modulation:2 initialization:1 studied:4 equivalence:10 challenging:2 co:2 unique:1 procedure:1 asymptotics:1 empirical:1 matching:1 word:2 induce:1 pre:1 valley:1 applying:2 restriction:1 equivalent:2 deterministic:2 demonstrated:1 center:12 map:1 straightforward:3 l:8 focused:1 estimator:9 rule:1 population:4 notion:4 coordinate:1 increment:1 annals:3 construction:1 exact:1 programming:1 mbm:2 automorphisms:1 element:10 velocity:2 std:2 observed:5 bottom:2 wang:1 tsang:1 ensures:1 connected:1 decrease:1 movement:2 monograph:1 rigorously:1 dynamic:3 geodesic:4 signature:7 radar:1 solving:3 f2:2 efficiency:1 triangle:1 sep:1 joint:4 train:4 separated:1 distinct:1 pearson:1 valued:1 amari:2 otherwise:1 reconstruct:4 statistic:7 noisy:1 itself:2 advantage:2 sequence:2 propose:2 product:1 aligned:10 combining:1 r1:4 converges:1 help:1 derive:3 illustrate:2 stat:1 job:1 quotient:9 implies:1 come:1 indicate:1 differ:1 enable:1 material:1 f1:2 mathematically:1 considered:2 normal:1 great:1 mapping:3 achieves:1 purpose:1 estimation:27 superposition:1 tf:2 tool:1 uller:1 gaussian:2 argmin1:1 always:1 rather:2 pn:17 corollary:2 focus:1 cg:1 sense:1 am:1 inference:2 kneip:2 relation:3 going:1 interested:1 issue:1 orientation:1 denoted:2 development:1 art:2 equal:1 aware:1 iif:1 fmri:1 minimized:1 others:1 fundamentally:1 randomly:1 preserve:1 comprehensive:1 individual:1 phase:18 geometry:3 lebesgue:2 n1:7 detection:1 evaluation:4 alignment:20 smr:3 male:3 pc:4 tree:1 euclidean:1 orbit:24 desired:1 circle:2 theoretical:1 instance:1 column:1 modeling:4 earlier:2 rao:10 karcher:24 cost:2 kq:3 successful:1 providence:1 peak:1 fundamental:1 connecting:1 together:1 squared:1 recorded:1 warped:3 derivative:3 american:1 return:2 li:1 lloyd:1 inc:1 later:1 root:6 view:1 performed:2 sine:1 analyze:1 red:1 wave:1 recover:3 complicated:2 identifiability:1 contribution:1 minimize:1 square:11 ni:3 accuracy:1 qk:1 variance:2 correspond:1 identify:1 conceptually:1 handwritten:2 raw:1 cc:3 j6:1 reach:1 phys:1 sebastian:1 aligns:1 definition:7 frequency:3 james:1 obvious:1 associated:3 riemannian:8 handwriting:1 stop:1 dataset:1 intrinsically:1 recall:1 knowledge:1 efron:2 hilbert:1 amplitude:10 obtainable:1 actually:2 higher:1 dt:10 wei:1 though:1 correlation:2 d:1 hand:1 eqn:4 ei:13 nonlinear:4 grows:1 usa:1 effect:1 verify:1 true:4 hence:1 sinusoid:2 iteratively:1 illustrated:1 sin:1 during:1 self:3 criterion:6 theoretic:1 demonstrate:2 complete:4 fj:3 dtd:1 image:2 instantaneous:2 novel:1 fi:28 functional:9 spiking:1 volume:1 extend:1 belong:1 association:1 significant:1 composition:1 ai:1 consistency:1 i6:1 vos:1 ramsay:2 align:3 aligning:3 curvature:1 isometry:2 inf:1 belongs:1 termed:1 inequality:1 muller:1 integrable:1 preserving:1 minimum:2 additional:2 somewhat:1 converge:1 shortest:1 signal:44 full:3 
multiple:2 match:1 cross:2 sphere:4 demodulation:4 qg:7 qi:34 basic:2 metric:20 df:5 expectation:1 iteration:2 represent:4 kernel:1 sometimes:1 background:1 interval:2 else:1 limn:2 subject:1 call:2 extracting:1 easy:2 automated:1 affect:1 restrict:2 inner:1 simplifies:1 idea:1 shift:2 qj:1 t0:2 synchronous:1 effort:1 kq1:4 speech:2 action:2 useful:1 listed:1 nonparametric:2 gasser:3 statist:1 generate:2 shapiro:1 sl:5 estimated:8 neuroscience:2 pace:2 tallahassee:1 blue:1 vol:2 group:5 four:2 enormous:1 traced:1 pj:1 registration:6 utilize:1 v1:1 asymptotically:1 convert:1 sum:3 inverse:2 package:1 fourth:1 almost:1 family:1 wu:2 utilizes:1 parsimonious:1 separation:5 sobolev:1 decision:1 summarizes:1 scaling:2 fl:1 bound:1 simplification:1 identifiable:1 activity:1 constraint:2 fourier:1 answered:1 performing:1 diffeomorphisms:2 optical:1 relatively:2 department:1 according:1 describes:1 smaller:2 reconstructing:1 intimately:1 son:1 rev:1 restricted:2 invariant:1 equation:1 previously:1 turn:2 wrt:2 studying:1 gaussians:1 apply:1 v2:2 appropriate:1 save:1 florida:1 existence:1 original:8 assumes:1 top:3 include:1 establish:1 classical:1 society:3 warping:42 question:1 quantity:1 spike:5 already:1 parametric:2 rt:1 gradient:1 kth:2 distance:28 separate:2 separating:1 manifold:1 trivial:1 reason:1 boldface:1 willsky:1 meg:1 length:2 modeled:1 index:1 illustration:2 difficult:1 setup:1 motivates:1 proper:3 unknown:5 vertical:2 observation:7 neuron:1 datasets:4 arc:1 supporting:1 situation:1 defining:1 variability:10 communication:1 smoothed:1 arbitrary:2 community:1 introduced:1 nonlinearly:1 registered:1 macaque:1 qa:1 able:1 below:1 challenge:1 including:3 royal:2 natural:1 difficulty:1 arm:1 representing:2 misleading:1 temporally:2 deemed:1 negativity:1 literature:1 l2:18 tangent:1 kf:1 geometric:2 asymptotic:2 relative:2 synchronization:2 lecture:1 interesting:2 foundation:1 consistent:6 share:1 pi:1 translation:4 summary:2 last:1 free:1 formal:1 understand:2 warp:1 template:4 bulletin:1 sparse:1 van:1 curve:7 cortical:1 evaluating:1 qn:1 quantum:1 commonly:1 collection:1 correlate:1 transaction:1 nov:1 emphasize:1 continuous:3 search:1 iterative:2 decade:1 sonar:1 nature:1 elastic:7 symmetry:1 domain:6 main:1 noise:5 repeated:1 fig:6 elaborate:1 fashion:1 wiley:2 wlog:1 exponential:3 third:1 calcutta:1 tang:2 theorem:8 specific:1 anuj:2 exists:1 ci:14 easier:1 simply:2 visual:1 sectional:2 springer:1 satisfies:3 conditional:1 identity:3 goal:2 viewed:1 acceleration:2 ann:1 towards:1 fisher:7 change:1 except:2 uniformly:1 lemma:6 principal:1 called:2 total:1 experimental:2 select:1 latter:1 absolutely:2 srivastava:2 |
3,594 | 4,254 | Relative Density-Ratio Estimation
for Robust Distribution Comparison
Makoto Yamada
Tokyo Institute of Technology
[email protected]
Takafumi Kanamori
Nagoya University
[email protected]
Taiji Suzuki
The University of Tokyo
[email protected]
Hirotaka Hachiya Masashi Sugiyama
Tokyo Institute of Technology
{hachiya@sg. sugi@}cs.titech.ac.jp
Abstract
Divergence estimators based on direct approximation of density-ratios without going through separate approximation of numerator and denominator densities have
been successfully applied to machine learning tasks that involve distribution comparison such as outlier detection, transfer learning, and two-sample homogeneity
test. However, since density-ratio functions often possess high fluctuation, divergence estimation is still a challenging task in practice. In this paper, we propose to
use relative divergences for distribution comparison, which involves approximation of relative density-ratios. Since relative density-ratios are always smoother
than corresponding ordinary density-ratios, our proposed method is favorable in
terms of the non-parametric convergence speed. Furthermore, we show that the
proposed divergence estimator has asymptotic variance independent of the model
complexity under a parametric setup, implying that the proposed estimator hardly
overfits even with complex models. Through experiments, we demonstrate the
usefulness of the proposed approach.
1
Introduction
Comparing probability distributions is a fundamental task in statistical data processing. It can be
used for, e.g., outlier detection [1, 2], two-sample homogeneity test [3, 4], and transfer learning
[5, 6].
A standard approach to comparing probability densities p(x) and p0 (x) would be to estimate a
divergence from p(x) to p0 (x), such as the Kullback-Leibler (KL) divergence [7]:
KL[p(x), p0 (x)] := Ep(x) [log r(x)] , r(x) := p(x)/p0 (x),
where Ep(x) denotes the expectation over p(x). A naive way to estimate the KL divergence is to
separately approximate the densities p(x) and p0 (x) from data and plug the estimated densities in
the above definition. However, since density estimation is known to be a hard task [8], this approach
does not work well unless a good parametric model is available. Recently, a divergence estimation
approach which directly approximates the density-ratio r(x) without going through separate approximation of densities p(x) and p0 (x) has been proposed [9, 10]. Such density-ratio approximation
methods were proved to achieve the optimal non-parametric convergence rate in the mini-max sense.
However, the KL divergence estimation via density-ratio approximation is computationally rather
expensive due to the non-linearity introduced by the ?log? term. To cope with this problem, another
divergence called the Pearson (PE) divergence [11] is useful. The PE divergence is defined as
?
?
PE[p(x), p0 (x)] := 12 Ep0 (x) (r(x) ? 1)2 .
1
The PE divergence is a squared-loss variant of the KL divergence, and they both belong to the class
of the Ali-Silvey-Csisz?ar divergences (which is also known as the f -divergences, see [12, 13]). Thus,
the PE and KL divergences share similar properties, e.g., they are non-negative and vanish if and
only if p(x) = p0 (x).
Similarly to the KL divergence estimation, the PE divergence can also be accurately estimated based
on density-ratio approximation [14]: the density-ratio approximator called unconstrained leastsquares importance fitting (uLSIF) gives the PE divergence estimator analytically, which can be
computed just by solving a system of linear equations. The practical usefulness of the uLSIF-based
PE divergence estimator was demonstrated in various applications such as outlier detection [2], twosample homogeneity test [4], and dimensionality reduction [15].
In this paper, we first establish the non-parametric convergence rate of the uLSIF-based PE divergence estimator, which elucidates its superior theoretical properties. However, it also reveals
that its convergence rate is actually governed by the ?sup?-norm of the true density-ratio function:
maxx r(x). This implies that, in the region where the denominator density p0 (x) takes small values,
the density-ratio r(x) = p(x)/p0 (x) tends to take large values and therefore the overall convergence
speed becomes slow. More critically, density-ratios can even diverge to infinity under a rather simple
setting, e.g., when the ratio of two Gaussian functions is considered [16]. This makes the paradigm
of divergence estimation based on density-ratio approximation unreliable.
In order to overcome this fundamental problem, we propose an alternative approach to distribution
comparison called ?-relative divergence estimation. In the proposed approach, we estimate the
?-relative divergence, which is the divergence from p(x) to the ?-mixture density:
q? (x) = ?p(x) + (1 ? ?)p0 (x) for 0 ? ? < 1.
For example, the ?-relative PE divergence is given by
?
?
PE? [p(x), p0 (x)] := PE[p(x), q? (x)] = 12 Eq? (x) (r? (x) ? 1)2 ,
where r? (x) is the ?-relative density-ratio of p(x) and p0 (x):
?
?
r? (x) := p(x)/q? (x) = p(x)/ ?p(x) + (1 ? ?)p0 (x) .
(1)
(2)
We propose to estimate the ?-relative divergence by direct approximation of the ?-relative densityratio.
A notable advantage of this approach is that the ?-relative density-ratio is always bounded above by
1/? when ? > 0, even when the ordinary density-ratio is unbounded. Based on this feature, we theoretically show that the ?-relative PE divergence estimator based on ?-relative density-ratio approximation is more favorable than the ordinary density-ratio approach in terms of the non-parametric
convergence speed.
We further prove that, under a correctly-specified parametric setup, the asymptotic variance of our
?-relative PE divergence estimator does not depend on the model complexity. This means that the
proposed ?-relative PE divergence estimator hardly overfits even with complex models.
Through experiments on outlier detection, two-sample homogeneity test, and transfer learning, we
demonstrate that our proposed ?-relative PE divergence estimator compares favorably with alternative approaches.
2 Estimation of Relative Pearson Divergence via Least-Squares Relative
Density-Ratio Approximation
Suppose we are given independent and identically distributed (i.i.d.) samples {xi }ni=1 from
0
a d-dimensional distribution P with density p(x) and i.i.d. samples {x0j }nj=1 from another ddimensional distribution P 0 with density p0 (x). Our goal is to compare the two underlying dis0
tributions P and P 0 only using the two sets of samples {xi }ni=1 and {x0j }nj=1 .
In this section, we give a method for estimating the ?-relative PE divergence based on direct approximation of the ?-relative density-ratio.
2
Direct Approximation of ?-Relative Density-Ratios:
Pn Let us model the ?-relative density-ratio
r? (x) (2) by the following kernel model g(x; ?) := `=1 ?` K(x, x` ), where ? := (?1 , . . . , ?n )>
are parameters to be learned from data samples, > denotes the transpose of a matrix or a vector, and
K(x, x0 ) is a kernel basis function. In the experiments, we use the Gaussian kernel.
The parameters ? in the model g(x; ?) are determined so that the following expected squared-error
J is minimized:
h
i
2
J(?) := 12 Eq? (x) (g(x; ?) ? r? (x))
?
?
?
?
2
0
= ?2 Ep(x) g(x; ?)2 + (1??)
? Ep(x) [g(x; ?)] + Const.,
2 Ep (x) g(x; ?)
where we used r? (x)q? (x) = p(x) in the third term. Approximating the expectations by empirical
averages, we obtain the following optimization problem:
i
h
1 >c
b > ? + ? ?> ? ,
b := argmin
?
H?
?
h
(3)
?
n
??R
2
2
where a penalty term ?? > ?/2 is included for regularization purposes, and ? (? 0) denotes the
b are defined as
c and h
regularization parameter. H
Pn0
Pn
1
0
0
b
b `,`0 := ? Pn K(xi , x` )K(xi , x`0 )+ (1??)
0
H
i=1
j=1 K(xj , x` )K(xj , x` ), h` := n
i=1 K(xi , x` ).
n
n0
b = (H
b
c + ?I n )?1 h,
It is easy to confirm that the solution of Eq.(3) can be analytically obtained as ?
where I n denotes the n-dimensional identity matrix. Finally, a density-ratio estimator is given as
b = Pn ?b` K(x, x` ).
rb? (x) := g(x; ?)
`=1
When ? = 0, the above method is reduced to a direct density-ratio estimator called unconstrained
least-squares importance fitting (uLSIF) [14]. Thus, the above method can be regarded as an extension of uLSIF to the ?-relative density-ratio. For this reason, we refer to our method as relative
uLSIF (RuLSIF).
The performance of RuLSIF depends on the choice of the kernel function (the kernel width in the
case of the Gaussian kernel) and the regularization parameter ?. Model selection of RuLSIF is
possible based on cross-validation (CV) with respect to the squared-error criterion J.
Using an estimator of the ?-relative density-ratio r? (x), we can construct estimators of the ?relative PE divergence (1). After a few lines of calculation, we can show that the ?-relative PE
divergence (1) is equivalently expressed as
?
?
?
?
2
0
PE? = ? ?2 Ep(x) r? (x)2 ? (1??)
+ Ep(x) [r? (x)] ? 12 = 12 Ep(x) [r? (x)] ? 12 .
2 Ep (x) r? (x)
Note that the middle expression can also be obtained via Legendre-Fenchel convex duality of the
divergence functional [17].
Based on these expressions, we consider the following two estimators:
Pn0
Pn
c ? := ? ? Pn rb? (xi )2 ? (1??)
b? (x0j )2 + n1 i=1 rb? (xi ) ? 12 ,
PE
i=1
j=1 r
2n
2n0
f ? := 1 Pn rb? (xi ) ? 1 .
PE
i=1
2n
2
(4)
(5)
We note that the ?-relative PE divergence (1) can have further different expressions than the above
ones, and corresponding estimators can also be constructed similarly. However, the above two
c ? has superior theoretical properties
expressions will be particularly useful: the first estimator PE
f
(see Section 3) and the second one PE? is simple to compute.
3 Theoretical Analysis
In this section, we analyze theoretical properties of the proposed PE divergence estimators. Since
our theoretical analysis is highly technical, we focus on explaining practical insights we can gain
from the theoretical results here; we describe all the mathematical details in the supplementary
material.
3
For theoretical analysis, let us consider a rather abstract form of our relative density-ratio estimator
described as
h P
i
Pn
n
(1??) Pn0
?
1
?
2
0 2
2
argming?G 2n
,
(6)
i=1 g(xi ) + 2n0
j=1 g(xj ) ? n
i=1 g(xi ) + 2 R(g)
where G is some function space (i.e., a statistical model) and R(?) is some regularization functional.
Non-Parametric Convergence Analysis: First, we elucidate the non-parametric convergence rate
of the proposed PE estimators. Here, we practically regard the function space G as an infinitedimensional reproducing kernel Hilbert space (RKHS) [18] such as the Gaussian kernel space, and
R(?) as the associated RKHS norm.
Let us represent the complexity of the function space G by ? (0 < ? < 2); the larger ? is, the
more complex the function class G is (see the supplementary material for its precise definition). We
analyze the convergence rate of our PE divergence estimators as n
? := min(n, n0 ) tends to infinity
for ? = ?n? under
?n? ? o(1) and ??1
n2/(2+?) ).
n
? = o(?
The first condition means that ?n? tends to zero, but the second condition means that its shrinking
speed should not be too fast.
Under several technical assumptions detailed in the supplementary material, we have the following
c ? (4) and PE
f ? (5):
asymptotic convergence results for the two PE divergence estimators PE
?1/2
2
c ? ? PE? = Op (?
PE
n
ckr? k? + ?n? max(1, R(r? ) )),
(7)
?
1/2
f ? ? PE? = Op ?1/2
PE
n
? kr? k? max{1, R(r? )}
?
(1??/2)/2
+ ?n? max{1, kr? k(1??/2)/2
, R(r? )kr? k?
, R(r? )} ,
(8)
?
where Op denotes the asymptotic order in probability,
q
q
c := (1 + ?) Vp(x) [r? (x)] + (1 ? ?) Vp0 (x) [r? (x)],
and Vp(x) denotes the variance over p(x):
?2
R?
R
Vp(x) [f (x)] =
f (x) ? f (x)p(x)dx p(x)dx.
In both Eq.(7) and Eq.(8), the coefficients of the leading terms (i.e., the first terms) of the asymptotic
convergence rates become smaller as kr? k? gets smaller. Since
??
??1 ?
?
?
kr? k? = ? ? + (1 ? ?)/r(x)
? < ?1 for ? > 0,
?
larger ? would be more preferable in terms of the asymptotic approximation error. Note that when
? = 0, kr? k? can tend to infinity even under a simple setting that the ratio of two Gaussian functions is considered [16]. Thus, our proposed approach of estimating the ?-relative PE divergence
(with ? > 0) would be more advantageous than the naive approach of estimating the plain PE
divergence (which corresponds to ? = 0) in terms of the non-parametric convergence rate.
c ? and PE
f ? have different asymptotic convergence rates. The
The above results also show that PE
1/2
?1/2
leading term in Eq.(7) is of order n
?
, while the leading term in Eq.(8) is of order ?n? , which is
?1/2
c ? would be more accurate
slightly slower (depending on the complexity ?) than n
?
. Thus, PE
0
f
than PE? in large sample cases. Furthermore, when p(x) = p (x), Vp(x) [r? (x)] = 0 holds and
c ? has the even faster
thus c = 0 holds. Then the leading term in Eq.(7) vanishes and therefore PE
convergence rate of order ?n? , which is slightly slower (depending on the complexity ?) than n
? ?1 .
Similarly, if ? is close to 1, r? (x) ? 1 and thus c ? 0 holds.
When n
? is not large enough to be able to neglect the terms of o(?
n?1/2 ), the terms of O(?n? ) matter.
If kr? k? and R(r? ) are large (this can happen, e.g., when ? is close to 0), the coefficient of the
f ? would be more favorable than
O(?n? )-term in Eq.(7) can be larger than that in Eq.(8). Then PE
c
PE? in terms of the approximation accuracy.
See the supplementary material for numerical examples illustrating the above theoretical results.
4
Parametric Variance Analysis: Next, we analyze the asymptotic variance of the PE divergence
c ? (4) under a parametric setup.
estimator PE
As the function space G in Eq.(6), we consider the following parametric model: G = {g(x; ?) | ? ?
? ? Rb } for a finite b. Here we assume that this parametric model is correctly specified, i.e., it
includes the true relative density-ratio function r? (x): there exists ? ? such that g(x; ? ? ) = r? (x).
Here, we use RuLSIF without regularization, i.e., ? = 0 in Eq.(6).
c ? (4) by V[PE
c ? ], where randomness comes from the draw of
Let us denote the variance of PE
n
0 n0
samples {xi }i=1 and {xj }j=1 . Then, under a standard regularity condition for the asymptotic
c ? ] can be expressed and upper-bounded as
normality [19], V[PE
?
?
?
?
c ? ] = Vp(x) r? ? ?r? (x)2 /2 /n + Vp0 (x) (1 ? ?)r? (x)2 /2 /n0 + o(n?1 , n0?1 ) (9)
V[PE
? kr? k2? /n + ?2 kr? k4? /(4n) + (1 ? ?)2 kr? k4? /(4n0 ) + o(n?1 , n0?1 ).
(10)
f ? by V[PE
f ? ]. Then, under a standard regularity condition for the
Let us denote the variance of PE
f ? is asymptotically expressed as
asymptotic normality [19], the variance of PE
??
? ?
f ? ] = Vp(x) r? + (1 ? ?r? )Ep(x) [?g]> H ?1
V[PE
? ?g /2 /n
??
? ? 0
?1
+ Vp0 (x) (1 ? ?)r? Ep(x) [?g]> H ?1
, n0?1 ),
(11)
? ?g /2 /n + o(n
where ?g is the gradient vector of g with respect to ? at ? = ? ? and
H ? = ?Ep(x) [?g?g > ] + (1 ? ?)Ep0 (x) [?g?g > ].
c ? depends only on the true relative
Eq.(9) shows that, up to O(n?1 , n0?1 ), the variance of PE
density-ratio r? (x), not on the estimator of r? (x). This means that the model complexity does not
affect the asymptotic variance. Therefore, overfitting would hardly occur in the estimation of the
relative PE divergence even when complex models are used. We note that the above superior property is applicable only to relative PE divergence estimation, not to relative density-ratio estimation.
This implies that overfitting occurs in relative density-ratio estimation, but the approximation error
cancels out in relative PE divergence estimation.
f ? is affected by the model G,
On the other hand, Eq.(11) shows that the variance of PE
since the factor Ep(x) [?g]> H ?1
?g
depends
on
the
model
in general. When the equality
?
?
?1
>
f
c ? are asymptotically
Ep(x) [?g] H ? ?g(x; ? ) = r? (x) holds, the variances of PE? and PE
c ? would be more recommended.
the same. However, in general, the use of PE
c ? ] can be upper-bounded by the quantity depending on kr? k? ,
Eq.(10) shows that the variance V[PE
which is monotonically lowered if kr? k? is reduced. Since kr? k? monotonically decreases as ?
increases, our proposed approach of estimating the ?-relative PE divergence (with ? > 0) would
be more advantageous than the naive approach of estimating the plain PE divergence (which corresponds to ? = 0) in terms of the parametric asymptotic variance.
See the supplementary material for numerical examples illustrating the above theoretical results.
4 Experiments
In this section, we experimentally evaluate the performance of the proposed method in two-sample
homogeneity test, outlier detection, and transfer learning tasks.
Two-Sample Homogeneity Test: First, we apply the proposed divergence estimator to twosample homogeneity test.
i.i.d.
0
i.i.d.
Given two sets of samples X = {xi }ni=1 ? P and X 0 = {x0j }nj=1 ? P 0 , the goal of the twosample homogeneity test is to test the null hypothesis that the probability distributions P and P 0
are the same against its complementary alternative (i.e., the distributions are different). By using
d of some divergence between the two distributions P and P 0 , homogeneity of two
an estimator Div
distributions can be tested based on the permutation test procedure [20].
5
Table 1: Experimental results of two-sample test. The mean (and standard deviation in the bracket)
rate of accepting the null hypothesis (i.e., P = P 0 ) for IDA benchmark repository under the significance level 5% is reported. Left: when the two sets of samples are both taken from the positive
training set (i.e., the null hypothesis is correct). Methods having the mean acceptance rate 0.95 according to the one-sample t-test at the significance level 5% are specified by bold face. Right: when
the set of samples corresponding to the numerator of the density-ratio are taken from the positive
training set and the set of samples corresponding to the denominator of the density-ratio are taken
from the positive training set and the negative training set (i.e., the null hypothesis is not correct).
The best method having the lowest mean accepting rate and comparable methods according to the
two-sample t-test at the significance level 5% are specified by bold face.
Datasets
d
n=n
banana
thyroid
titanic
diabetes
b-cancer
f-solar
heart
german
ringnorm
waveform
2
5
5
8
9
9
13
20
20
21
100
19
21
85
29
100
38
100
100
66
0
MMD
.96 (.20)
.96 (.20)
.94 (.24)
.96 (.20)
.98 (.14)
.93 (.26)
1.00 (.00)
.99 (.10)
.97 (.17)
.98 (.14)
P =
LSTT
(? = 0.0)
.93 (.26)
.95 (.22)
.86 (.35)
.87 (.34)
.91 (.29)
.91 (.29)
.85 (.36)
.91 (.29)
.93 (.26)
.92 (.27)
P0
LSTT
(? = 0.5)
.92 (.27)
.95 (.22)
.92 (.27)
.91 (.29)
.94 (.24)
.95 (.22)
.91 (.29)
.92 (.27)
.91 (.29)
.93 (.26)
LSTT
(? = 0.95)
.92 (.27)
.88 (.33)
.89 (.31)
.82 (.39)
.92 (.27)
.93 (.26)
.93 (.26)
.89 (.31)
.85 (.36)
.88 (.33)
MMD
.52 (.50)
.52 (.50)
.87 (.34)
.31 (.46)
.87 (.34)
.51 (.50)
.53 (.50)
.56 (.50)
.00 (.00)
.00 (.00)
P 6=
LSTT
(? = 0.0)
.10 (.30)
.81 (.39)
.86 (.35)
.42 (.50)
.75 (.44)
.81 (.39)
.28 (.45)
.55 (.50)
.00 (.00)
.00 (.00)
P0
LSTT
(? = 0.5)
.02 (.14)
.65 (.48)
.87 (.34)
.47 (.50)
.80 (.40)
.55 (.50)
.40 (.49)
.44 (.50)
.00 (.00)
.02 (.14)
LSTT
(? = 0.95)
.17 (.38)
.80 (.40)
.88 (.33)
.57 (.50)
.79 (.41)
.66 (.48)
.62 (.49)
.68 (.47)
.02 (.14)
.00 (.00)
When an asymmetric divergence such as the KL divergence [7] or the PE divergence [11] is adopted
for two-sample test, the test results depend on the choice of directions: a divergence from P to
P 0 or from P 0 to P . [4] proposed to choose the direction that gives a smaller p-value?it was
experimentally shown that, when the uLSIF-based PE divergence estimator is used for the twosample test (which is called the least-squares two-sample test; LSTT), the heuristic of choosing the
direction with a smaller p-value contributes to reducing the type-II error (the probability of accepting
incorrect null-hypotheses, i.e., two distributions are judged to be the same when they are actually
different), while the increase of the type-I error (the probability of rejecting correct null-hypotheses,
i.e., two distributions are judged to be different when they are actually the same) is kept moderate.
We apply the proposed method to the binary classification datasets taken from the IDA benchmark
repository [21]. We test LSTT with the RuLSIF-based PE divergence estimator for ? = 0, 0.5, and
0.95; we also test the maximum mean discrepancy (MMD) [22], which is a kernel-based two-sample
test method. The performance of MMD depends on the choice of the Gaussian kernel width. Here,
we adopt a version proposed by [23], which automatically optimizes the Gaussian kernel width. The
p-values of MMD are computed in the same way as LSTT based on the permutation test procedure.
First, we investigate the rate of accepting the null hypothesis when the null hypothesis is correct
(i.e., the two distributions are the same). We split all the positive training samples into two sets and
perform two-sample test for the two sets of samples. The experimental results are summarized in
the left half of Table 1, showing that LSTT with ? = 0.5 compares favorably with those with ? = 0
and 0.95 and MMD in terms of the type-I error.
Next, we consider the situation where the null hypothesis is not correct (i.e., the two distributions
are different). The numerator samples are generated in the same way as above, but a half of denominator samples are replaced with negative training samples. Thus, while the numerator sample set
contains only positive training samples, the denominator sample set includes both positive and negative training samples. The experimental results are summarized in the right half of Table 1, showing
that LSTT with ? = 0.5 again compares favorably with those with ? = 0 and 0.95. Furthermore,
LSTT with ? = 0.5 tends to outperform MMD in terms of the type-II error.
Overall, LSTT with ? = 0.5 is shown to be a useful method for two-sample homogeneity test. See
the supplementary material for more experimental evaluation.
Inlier-Based Outlier Detection: Next, we apply the proposed method to outlier detection.
Let us consider an outlier detection problem of finding irregular samples in a dataset (called an
?evaluation dataset?) based on another dataset (called a ?model dataset?) that only contains regular
samples. Defining the density-ratio over the two sets of samples, we can see that the density-ratio
6
Table 2: Experimental
results of outlier detection. Mean AUC score
(and standard deviation in the bracket)
over 100 trials is
reported.
The best
method having the
highest mean AUC
score and comparable
methods according to
the two-sample t-test
at the significance
level 5% are specified
by bold face.
The
datasets are sorted
in
the
ascending
order of the input
dimensionality d.
d
Datasets
IDA:banana
IDA:thyroid
IDA:titanic
IDA:diabetes
IDA:breast-cancer
IDA:flare-solar
IDA:heart
IDA:german
IDA:ringnorm
IDA:waveform
Speech
20News (?rec?)
20News (?sci?)
20News (?talk?)
USPS (1 vs. 2)
USPS (2 vs. 3)
USPS (3 vs. 4)
USPS (4 vs. 5)
USPS (5 vs. 6)
USPS (6 vs. 7)
USPS (7 vs. 8)
USPS (8 vs. 9)
USPS (9 vs. 0)
2
5
5
8
9
9
13
20
20
21
50
100
100
100
256
256
256
256
256
256
256
256
256
OSVM
(? = 0.05)
.668 (.105)
.760 (.148)
.757 (.205)
.636 (.099)
.741 (.160)
.594 (.087)
.714 (.140)
.612 (.069)
.991 (.012)
.812 (.107)
.788 (.068)
.598 (.063)
.592 (.069)
.661 (.084)
.889 (.052)
.823 (.053)
.901 (.044)
.871 (.041)
.825 (.058)
.910 (.034)
.938 (.030)
.721 (.072)
.920 (.037)
OSVM
(? = 0.1)
.676 (.120)
.782 (.165)
.752 (.191)
.610 (.090)
.691 (.147)
.590 (.083)
.694 (.148)
.604 (.084)
.993 (.007)
.843 (.123)
.830 (.060)
.593 (.061)
.589 (.071)
.658 (.084)
.926 (.037)
.835 (.050)
.939 (.031)
.890 (.036)
.859 (.052)
.950 (.025)
.967 (.021)
.728 (.073)
.966 (.023)
RuLSIF
(? = 0)
.597 (.097)
.804 (.148)
.750 (.182)
.594 (.105)
.707 (.148)
.626 (.102)
.748 (.149)
.605 (.092)
.944 (.091)
.879 (.122)
.804 (.101)
.628 (.105)
.620 (.094)
.672 (.117)
.848 (.081)
.803 (.093)
.950 (.056)
.857 (.099)
.863 (.078)
.972 (.038)
.941 (.053)
.721 (.084)
.982 (.048)
RuLSIF
(? = 0.5)
.619 (.101)
.796 (.178)
.701 (.184)
.575 (.105)
.737 (.159)
.612 (.100)
.769 (.134)
.597 (.101)
.971 (.062)
.875 (.117)
.821 (.076)
.614 (.093)
.609 (.087)
.670 (.102)
.878 (.088)
.818 (.085)
.961 (.041)
.874 (.082)
.867 (.068)
.984 (.018)
.951 (.039)
.728 (.083)
.989 (.022)
RuLSIF
(? = 0.95)
.623 (.115)
.722 (.153)
.712 (.185)
.663 (.112)
.733 (.160)
.584 (.114)
.726 (.127)
.605 (.095)
.992 (.010)
.885 (.102)
.836 (.083)
.767 (.100)
.704 (.093)
.823 (.078)
.898 (.051)
.879 (.074)
.984 (.016)
.941 (.031)
.901 (.049)
.994 (.010)
.980 (.015)
.761 (.096)
.994 (.011)
values for regular samples are close to one, while those for outliers tend to be significantly deviated
from one. Thus, density-ratio values could be used as an index of the degree of outlyingness [1, 2].
Since the evaluation dataset usually has a wider support than the model dataset, we regard the evaluation dataset as samples corresponding to the denominator density p0 (x), and the model dataset as
samples corresponding to the numerator density p(x). Then, outliers tend to have smaller densityratio values (i.e., close to zero). Thus, density-ratio approximators can be used for outlier detection.
We evaluate the proposed method using various datasets: IDA benchmark repository [21], an inhouse French speech dataset, the 20 Newsgroup dataset, and the USPS hand-written digit dataset
(the detailed specification of the datasets is explained in the supplementary material).
We compare the area under the ROC curve (AUC) [24] of RuLSIF with ? = 0, 0.5, and 0.95, and
one-class support vector machine (OSVM) with the Gaussian kernel [25]. We used the LIBSVM
implementation of OSVM [26]. The Gaussian width is set to the median distance between samples,
which has been shown to be a useful heuristic [25]. Since there is no systematic method to determine
the tuning parameter ? in OSVM, we report the results for ? = 0.05 and 0.1.
The mean and standard deviation of the AUC scores over 100 runs with random sample choice are
summarized in Table 2, showing that RuLSIF overall compares favorably with OSVM. Among the
RuLSIF methods, small ? tends to perform well for low-dimensional datasets, and large ? tends to
work well for high-dimensional datasets.
Transfer Learning: Finally, we apply the proposed method to transfer learning.
tr ntr
Let us consider a transductive transfer learning setup where labeled training samples {(xtr
j , yj )}j=1
te nte
drawn i.i.d. from p(y|x)ptr (x) and unlabeled test samples {xi }i=1 drawn i.i.d. from pte (x) (which
is generally different from ptr (x)) are available. The use of exponentially-weighted importance
weighting was shown to be useful for adaptation from ptr (x) to pte (x) [5]:
?
minf ?F
1
ntr
??
Pntr ? pte (xtr
j )
j=1
ptr (xtr
j )
?
loss(yjtr , f (xtr
))
,
j
where f (x) is a learned function and 0 ? ? ? 1 is the exponential flattening parameter. ? = 0 corresponds to plain empirical-error minimization which is statistically efficient, while ? = 1 corresponds
to importance-weighted empirical-error minimization which is statistically consistent; 0 < ? < 1
will give an intermediate estimator that balances the trade-off between statistical efficiency and consistency. ? can be determined by importance-weighted cross-validation [6] in a data dependent
fashion.
7
Table 3: Experimental results of transfer learning in human activity recognition. Mean classification
accuracy (and the standard deviation in the bracket) over 100 runs for human activity recognition of
a new user is reported. We compare the plain kernel logistic regression (KLR) without importance
weights, KLR with relative importance weights (RIW-KLR), KLR with exponentially-weighted importance weights (EIW-KLR), and KLR with plain importance weights (IW-KLR). The method having the highest mean classification accuracy and comparable methods according to the two-sample
t-test at the significance level 5% are specified by bold face.
Task
Walks vs. run
Walks vs. bicycle
Walks vs. train
KLR
(? = 0, ? = 0)
0.803
(0.082)
0.880
(0.025)
0.985
(0.017)
RIW-KLR
(? = 0.5)
0.889 (0.035)
0.892 (0.035)
0.992 (0.008)
EIW-KLR
(? = 0.5)
0.882 (0.039)
0.867 (0.054)
0.989 (0.011)
IW-KLR
(? = 1, ? = 1)
0.882
(0.035)
0.854
(0.070)
0.983
(0.021)
However, a potential drawback is that estimation of r(x) (i.e., ? = 1) is rather hard, as shown in this
paper. Here we propose to use relative importance weights instead:
i
h P
pte (xtr
ntr
j )
loss(yjtr , f (xtr
minf ?F n1tr j=1
j )) .
(1??)pte (xtr )+?ptr (xtr )
j
j
We apply the above transfer learning technique to human activity recognition using accelerometer
data. Subjects were asked to perform a specific task such as walking, running, and bicycle riding,
which was collected by iPodTouch. The duration of each task was arbitrary and the sampling rate
was 20Hz with small variations (the detailed experimental setup is explained in the supplementary
material). Let us consider a situation where a new user wants to use the activity recognition system.
However, since the new user is not willing to label his/her accelerometer data due to troublesomeness, no labeled sample is available for the new user. On the other hand, unlabeled samples for
the new user and labeled data obtained from existing users are available. Let labeled training data
tr ntr
{(xtr
j , yj )}j=1 be the set of labeled accelerometer data for 20 existing users. Each user has at most
nte
100 labeled samples for each action. Let unlabeled test data {xte
i }i=1 be unlabeled accelerometer
data obtained from the new user.
The experiments are repeated 100 times with different sample choice for ntr = 500 and nte = 200.
The classification accuracy for 800 test samples from the new user (which are different from the
200 unlabeled samples) are summarized in Table 3, showing that the proposed method using relative
importance weights for ? = 0.5 works better than other methods.
5 Conclusion
In this paper, we proposed to use a relative divergence for robust distribution comparison. We gave
a computationally efficient method for estimating the relative Pearson divergence based on direct
relative density-ratio approximation. We theoretically elucidated the convergence rate of the proposed divergence estimator under non-parametric setup, which showed that the proposed approach
of estimating the relative Pearson divergence is more preferable than the existing approach of estimating the plain Pearson divergence. Furthermore, we proved that the asymptotic variance of the
proposed divergence estimator is independent of the model complexity under a correctly-specified
parametric setup. Thus, the proposed divergence estimator hardly overfits even with complex models. Experimentally, we demonstrated the practical usefulness of the proposed divergence estimator
in two-sample homogeneity test, inlier-based outlier detection, and transfer learning tasks.
In addition to two-sample homogeneity test, inlier-based outlier detection, and transfer learning,
density-ratios can be useful for tackling various machine learning problems, for example, multi-task
learning, independence test, feature selection, causal inference, independent component analysis,
dimensionality reduction, unpaired data matching, clustering, conditional density estimation, and
probabilistic classification. Thus, it would be promising to explore more applications of the proposed relative density-ratio approximator beyond two-sample homogeneity test, inlier-based outlier
detection, and transfer learning.
Acknowledgments
MY was supported by the JST PRESTO program, TS was partially supported by MEXT KAKENHI
22700289 and Aihara Project, the FIRST program from JSPS, initiated by CSTP, TK was partially
supported by Grant-in-Aid for Young Scientists (20700251), HH was supported by the FIRST program, and MS was partially supported by SCAT, AOARD, and the FIRST program.
8
References
[1] A. J. Smola, L. Song, and C. H. Teo. Relative novelty detection. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS2009), pages 536?543, 2009.
[2] S. Hido, Y. Tsuboi, H. Kashima, M. Sugiyama, and T. Kanamori. Statistical outlier detection using direct
density ratio estimation. Knowledge and Information Systems, 26(2):309?336, 2011.
[3] A. Gretton, K. M. Borgwardt, M. Rasch, B. Sch?olkopf, and A. J. Smola. A kernel method for the twosample-problem. In B. Sch?olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information
Processing Systems 19, pages 513?520. MIT Press, Cambridge, MA, 2007.
[4] M. Sugiyama, T. Suzuki, Y. Itoh, T. Kanamori, and M. Kimura. Least-squares two-sample test. Neural
Networks, 24(7):735?751, 2011.
[5] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood
function. Journal of Statistical Planning and Inference, 90(2):227?244, 2000.
[6] M. Sugiyama, M. Krauledat, and K.-R. M?uller. Covariate shift adaptation by importance weighted cross
validation. Journal of Machine Learning Research, 8:985?1005, May 2007.
[7] S. Kullback and R. A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22:79?
86, 1951.
[8] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, NY, 1998.
[9] M. Sugiyama, T. Suzuki, S. Nakajima, H. Kashima, P. von B?unau, and M. Kawanabe. Direct importance
estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60:699?746,
2008.
[10] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood
ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847?5861, 2010.
[11] K. Pearson. On the criterion that a given system of deviations from the probable in the case of a correlated
system of variables is such that it can be reasonably supposed to have arisen from random sampling.
Philosophical Magazine, 50:157?175, 1900.
[12] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another.
Journal of the Royal Statistical Society, Series B, 28:131?142, 1966.
[13] I. Csisz?ar. Information-type measures of difference of probability distributions and indirect observation.
Studia Scientiarum Mathematicarum Hungarica, 2:229?318, 1967.
[14] T. Kanamori, S. Hido, and M. Sugiyama. A least-squares approach to direct importance estimation.
Journal of Machine Learning Research, 10:1391?1445, 2009.
[15] T. Suzuki and M. Sugiyama. Sufficient dimension reduction via squared-loss mutual information estimation. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
(AISTATS2010), pages 804?811, 2010.
[16] C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighting. In J. Lafferty, C. K. I.
Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 442?450. 2010.
[17] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, USA, 1970.
[18] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society,
68:337?404, 1950.
[19] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000.
[20] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, New York, NY, 1993.
[21] G. R?atsch, T. Onoda, and K.-R. M?uller. Soft margins for adaboost. Machine Learning, 42(3):287?320,
2001.
[22] K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Sch?olkopf, and A. J. Smola. Integrating
structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49?e57, 2006.
[23] B. Sriperumbudur, K. Fukumizu, A. Gretton, G. Lanckriet, and B. Sch?olkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Y. Bengio, D. Schuurmans, J. Lafferty,
C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages
1750?1758. 2009.
[24] A. P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms.
Pattern Recognition, 30:1145?1159, 1997.
[25] B. Sch?olkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of
a high-dimensional distribution. Neural Computation, 13(7):1443?1471, 2001.
[26] C.-C. Chang and C.h-J. Lin. LIBSVM: A Library for Support Vector Machines, 2001. Software available
at http://www.csie.ntu.edu.tw/?cjlin/libsvm.
3,595 | 4,255 | The Kernel Beta Process
Yingjian Wang?
Electrical & Computer Engineering Dept.
Duke University
Durham, NC 27708
[email protected]
Lu Ren?
Electrical & Computer Engineering Dept.
Duke University
Durham, NC 27708
[email protected]
David Dunson
Department of Statistical Science
Duke University
Durham, NC 27708
[email protected]
Lawrence Carin
Electrical & Computer Engineering Dept.
Duke University
Durham, NC 27708
[email protected]
Abstract
A new Lévy process prior is proposed for an uncountable collection of covariate-dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample ("customer"), and latent covariates learned for each feature ("dish"). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to "consider" a dish, based on the distance in covariate space between the customer and dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
1 Introduction
Feature learning is an important problem in statistics and machine learning, characterized by the goal of (typically) inferring a low-dimensional set of features for representation of high-dimensional data. It is desirable to perform such analysis in a nonparametric manner, such that the number of features may be learned, rather than a priori set. A powerful tool for such learning is the Indian buffet process (IBP) [4], in which the data samples serve as "customers", and the potential features serve as "dishes". It has recently been demonstrated that the IBP corresponds to a marginalization of a beta-Bernoulli process [15]. The IBP and beta-Bernoulli constructions have found significant utility in factor analysis [7, 17], in which one wishes to infer the number of factors needed to represent data of interest. The beta process was developed originally by Hjort [5] as a Lévy process prior for "hazard measures", and was recently extended for use in feature learning [15], the interest of this paper; we therefore here refer to it as a "feature-learning measure."
The beta process is an example of a Lévy process [6], another example of which is the gamma process [1]; the normalized gamma process is well known as the Dirichlet process [3, 14]. A key characteristic of such models is that the data samples are assumed exchangeable, meaning that the order/indices of the data may be permuted with no change in the model.
* The first two authors contributed equally to this work.
An important line of research concerns removal of the assumption of exchangeability, allowing incorporation of covariates (e.g., spatial/temporal coordinates that may be available with the data). As an example, MacEachern introduced the dependent Dirichlet process [8]. In the context of feature learning, the phylogenetic IBP removes the assumption of sample exchangeability by imposing prior knowledge on inter-sample relationships via a tree structure [9]. The form of the tree may be constituted as a result of covariates that are available with the samples, but the tree is not necessarily unique. A dependent IBP (dIBP) model has been introduced recently, with a hierarchical Gaussian process (GP) used to account for covariate dependence [16]; however, the use of a GP may constitute challenges for large-scale problems. Recently a dependent hierarchical beta process (dHBP) has been developed, yielding encouraging results [18]. However, the dHBP has the disadvantage of assigning a kernel to each data sample, and therefore it scales unfavorably as the number of samples increases.
In this paper we develop a new Lévy process prior, termed the kernel beta process (KBP), which yields an uncountable number of covariate-dependent feature-learning measures, with the beta process a special case. This model may be interpreted as inferring covariates $x_i^*$ for each feature (dish), indexed by $i$. The generative process by which the $n$th data sample, with covariates $x_n$, selects features may be viewed as a two-step process. First the $n$th customer (data sample) decides whether to "examine" dish $i$ by drawing $z_{ni}^{(1)} \sim \mathrm{Bernoulli}(K(x_n, x_i^*; \psi_i^*))$, where $\psi_i^*$ are dish-dependent kernel parameters that are also inferred (the $\{\psi_i^*\}$ defining the meaning of proximity/locality in covariate space). The kernels are designed to satisfy $K(x_n, x_i^*; \psi_i^*) \in (0, 1]$, $K(x_i^*, x_i^*; \psi_i^*) = 1$, and $K(x_n, x_i^*; \psi_i^*) \to 0$ as $\|x_n - x_i^*\|_2 \to \infty$. In the second step, if $z_{ni}^{(1)} = 1$, customer $n$ draws $z_{ni}^{(2)} \sim \mathrm{Bernoulli}(\pi_i)$, and if $z_{ni}^{(2)} = 1$, the feature associated with dish $i$ is employed by data sample $n$. The parameters $\{x_i^*, \psi_i^*, \pi_i\}$ are inferred by the model. After computing the posterior distribution on model parameters, the number of kernels required to represent the measures is defined by the number of features employed from the buffet (typically small relative to the data size); this is a significant computational savings relative to [18, 16], for which the complexity of the model is tied to the number of data samples, even if a small number of features are ultimately employed.
In addition to introducing this new Lévy process, we examine its properties, and demonstrate how it may be efficiently applied in important data analysis problems. The hierarchical construction of the KBP is fully conjugate, admitting convenient Gibbs sampling (complicated sampling methods were required for the method in [18]). To demonstrate the utility of the model we consider image-processing and music-analysis applications, for which state-of-the-art performance is demonstrated compared to other relevant methods.
2 Kernel Beta Process
2.1 Review of beta and Bernoulli processes
A beta process $B \sim \mathrm{BP}(c, B_0)$ is a distribution on positive random measures over the space $(\Omega, \mathcal{F})$. Parameter $c(\omega)$ is a positive function over $\omega \in \Omega$, and $B_0$ is the base measure defined over $\Omega$. The beta process is an example of a Lévy process, and the Lévy measure of $\mathrm{BP}(c, B_0)$ is
$$\nu(d\pi, d\omega) = c(\omega)\,\pi^{-1}(1 - \pi)^{c(\omega)-1}\, d\pi\, B_0(d\omega) \quad (1)$$
To draw $B$, one draws a set of points $(\omega_i, \pi_i) \in \Omega \times [0, 1]$ from a Poisson process with measure $\nu$, yielding
$$B = \sum_{i=1}^{\infty} \pi_i \,\delta_{\omega_i} \quad (2)$$
where $\delta_{\omega_i}$ is a unit point measure at $\omega_i$; $B$ is therefore a discrete measure, with probability one. The infinite sum in (2) is a consequence of drawing $\mathrm{Poisson}(\lambda)$ atoms $\{\omega_i, \pi_i\}$, with $\lambda = \int_{\Omega}\int_{[0,1]} \nu(d\pi, d\omega) = \infty$. Additionally, for any set $A \subset \mathcal{F}$, $B(A) = \sum_{i:\,\omega_i \in A} \pi_i$.
If $Z_n \sim \mathrm{BeP}(B)$ is the $n$th draw from a Bernoulli process, with $B$ defined as in (2), then
$$Z_n = \sum_{i=1}^{\infty} b_{ni}\,\delta_{\omega_i}, \qquad b_{ni} \sim \mathrm{Bernoulli}(\pi_i) \quad (3)$$
A set of $N$ such draws, $\{Z_n\}_{n=1,N}$, may be used to define whether feature $\omega_i \in \Omega$ is utilized to represent the $n$th data sample, where $b_{ni} = 1$ if feature $\omega_i$ is employed, and $b_{ni} = 0$ otherwise. One may marginalize out the measure $B$ analytically, yielding conditional probabilities for the $\{Z_n\}$ that correspond to the Indian buffet process [15, 4].
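To make the construction concrete, the following is a minimal sketch, assuming the finite truncation used later in the paper (drawing $\pi_i \sim \mathrm{Beta}(1/T, 1)$, which corresponds to $c = 1$); the dimensions are illustrative and this is not the authors' code.

```python
import numpy as np

# Approximate a draw from a beta process with a finite truncation, then draw
# Bernoulli-process samples Z_n from it. Each row of Z says which "dishes"
# (atoms) a "customer" (data sample) uses.

rng = np.random.default_rng(0)
T = 256                                   # truncation level (candidate dishes)
N = 10                                    # number of customers

pi = rng.beta(1.0 / T, 1.0, size=T)       # atom weights pi_i
omega = rng.standard_normal((T, 64))      # atoms omega_i from a base measure B0

Z = rng.random((N, T)) < pi[None, :]      # Z_n ~ BeP(B), one row per customer
print("average number of dishes per customer:", Z.sum(axis=1).mean())
```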
2.2 Covariate-dependent Lévy process
In the above beta-Bernoulli construction, the same measure $B \sim \mathrm{BP}(c, B_0)$ is employed for generation of all $\{Z_n\}$, implying that each of the $N$ samples have the same probabilities $\{\pi_i\}$ for use of the respective features $\{\omega_i\}$. We now assume that with each of the $N$ samples of interest there is an associated set of covariates, denoted respectively as $\{x_n\}$, with each $x_n \in \mathcal{X}$. We wish to impose that if samples $n$ and $n'$ have similar covariates $x_n$ and $x_{n'}$, it is probable that they will employ a similar subset of the features $\{\omega_i\}$; if the covariates are distinct it is less probable that feature sharing will be manifested.
Generalizing (2), consider
$$B = \sum_{i=1}^{\infty} \pi_i \,\delta_{\omega_i}, \qquad \omega_i \sim B_0 \quad (4)$$
where $\pi_i = \{\pi_i(x) : x \in \mathcal{X}\}$ is a stochastic process (random function) from $\mathcal{X} \to [0, 1]$ (drawn independently from the $\{\omega_i\}$). Hence, $B$ is a dependent collection of Lévy processes with the measure specific to covariate $x \in \mathcal{X}$ being $B_x = \sum_{i=1}^{\infty} \pi_i(x)\,\delta_{\omega_i}$. This constitutes a general specification, with several interesting special cases. For example, one might consider $\pi_i(x) = g\{\mu_i(x)\}$, where $g : \mathbb{R} \to [0, 1]$ is any monotone differentiable link function and $\mu_i(x) : \mathcal{X} \to \mathbb{R}$ may be modeled as a Gaussian process [10], or related kernel-based construction. To choose $g\{\mu_i(x)\}$ one can potentially use models for the predictor-dependent breaks in probit, logistic or kernel stick-breaking processes [13, 11, 2]. In the remainder of this paper we propose a special case for the design of $\pi_i(x)$, termed the kernel beta process (KBP).
2.3 Characteristic function of the kernel beta process
Recall from Hjort [5] that $B \sim \mathrm{BP}(c(\omega), B_0)$ is a beta process on measure space $(\Omega, \mathcal{F})$ if its characteristic function satisfies
$$\mathbb{E}[e^{juB(A)}] = \exp\Big\{\int_{[0,1]\times A} (e^{ju\pi} - 1)\,\nu(d\pi, d\omega)\Big\} \quad (5)$$
where here $j = \sqrt{-1}$, and $A$ is any subset in $\mathcal{F}$. The beta process is a particular class of the Lévy process, with $\nu(d\pi, d\omega)$ defined as in (1).
For kernel $K(x, x^*; \psi^*)$, let $x \in \mathcal{X}$, $x^* \in \mathcal{X}$, and $\psi^* \in \Psi$; it is assumed that $K(x, x^*; \psi^*) \in [0, 1]$ for all $x$, $x^*$ and $\psi^*$. As a specific example, for the radial basis function $K(x, x^*; \psi^*) = \exp[-\psi^* \|x - x^*\|^2]$, where $\psi^* \in \mathbb{R}^+$. Let $x^*$ represent random variables drawn from probability measure $H$, with support on $\mathcal{X}$, and $\psi^*$ is also a random variable drawn from an appropriate probability measure $Q$ with support over $\Psi$ (e.g., in the context of the radial basis function, $\psi^*$ are drawn from a probability measure with support over $\mathbb{R}^+$). We now define a new Lévy measure
$$\nu_{\mathcal{X}} = H(dx^*)\, Q(d\psi^*)\, \nu(d\pi, d\omega) \quad (6)$$
where $\nu(d\pi, d\omega)$ is the Lévy measure associated with the beta process, defined in (1).
Theorem 1 Assume parameters $\{x_i^*, \psi_i^*, \pi_i, \omega_i\}$ are drawn from measure $\nu_{\mathcal{X}}$ in (6), and that the following measure is constituted
$$B_x = \sum_{i=1}^{\infty} \pi_i\, K(x, x_i^*; \psi_i^*)\,\delta_{\omega_i} \quad (7)$$
which may be evaluated for any covariate $x \in \mathcal{X}$. For any finite set of covariates $S = \{x_1, \ldots, x_{|S|}\}$, we define the $|S|$-dimensional random vector $\mathcal{K} = (K(x_1, x^*; \psi^*), \ldots, K(x_{|S|}, x^*; \psi^*))^T$, with random variables $x^*$ and $\psi^*$ drawn from $H$ and $Q$, respectively. For any set $A \subset \mathcal{F}$, $B$ evaluated at covariates $S$, on the set $A$, yields an $|S|$-dimensional random vector $B(A) = (B_{x_1}(A), \ldots, B_{x_{|S|}}(A))^T$, where $B_x(A) = \sum_{i:\,\omega_i \in A} \pi_i K(x, x_i^*; \psi_i^*)$. Expression (7) is a covariate-dependent Lévy process with Lévy measure (6), and characteristic function for an arbitrary set of covariates $S$ satisfying
$$\mathbb{E}[e^{j\langle u, B(A)\rangle}] = \exp\Big\{\int_{\mathcal{X}\times\Psi\times[0,1]\times A} (e^{j\langle u, \mathcal{K}\pi\rangle} - 1)\,\nu_{\mathcal{X}}(dx^*, d\psi^*, d\pi, d\omega)\Big\} \quad (8)$$
A proof is provided in the Supplemental Material. Additionally, for notational convenience, below a draw of (7), valid for all covariates in $\mathcal{X}$, is denoted $B \sim \mathrm{KBP}(c, B_0, H, Q)$, with $c$ and $B_0$ defining $\nu(d\pi, d\omega)$ in (1).
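As a rough illustration of Eq. (7) under a finite truncation (an assumption; this is not the authors' implementation), the sketch below evaluates the atom weights of $B_x$ with the RBF kernel and shows that nearby covariates share similar weights.

```python
import numpy as np

# Evaluate the atom weights pi_i * K(x, x_i*; psi_i*) of the covariate-dependent
# measure B_x, with K(x, x*; psi*) = exp(-psi* ||x - x*||^2).

rng = np.random.default_rng(1)
T = 256
pi = rng.beta(1.0 / T, 1.0, size=T)               # beta-process weights pi_i
x_star = rng.uniform(0, 1, size=(T, 2))           # latent covariates x_i* ~ H
psi_star = rng.choice([0.1, 1.0, 10.0], size=T)   # kernel widths psi_i* ~ Q

def bx_weights(x):
    """Weights of the atoms of B_x evaluated at covariate x."""
    return pi * np.exp(-psi_star * np.sum((x_star - x) ** 2, axis=1))

w_near = bx_weights(np.array([0.50, 0.50]))
w_close = bx_weights(np.array([0.51, 0.50]))
w_far = bx_weights(np.array([0.95, 0.05]))
print(np.corrcoef(w_near, w_close)[0, 1], np.corrcoef(w_near, w_far)[0, 1])
```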
2.4 Relationship to the beta-Bernoulli process
If the covariate-dependent measure $B_x$ in (7) is employed to define covariate-dependent feature usage, then $Z_x \sim \mathrm{BeP}(B_x)$, generalizing (3). Hence, given $\{x_i^*, \psi_i^*, \pi_i\}$, the feature-usage measure is $Z_x = \sum_{i=1}^{\infty} b_{xi}\,\delta_{\omega_i}$, with $b_{xi} \sim \mathrm{Bernoulli}(\pi_i K(x, x_i^*; \psi_i^*))$. Note that it is equivalent in distribution to express $b_{xi} = z_{xi}^{(1)} z_{xi}^{(2)}$, with $z_{xi}^{(1)} \sim \mathrm{Bernoulli}(K(x, x_i^*; \psi_i^*))$ and $z_{xi}^{(2)} \sim \mathrm{Bernoulli}(\pi_i)$. This model therefore yields the two-step generalization of the generative process of the beta-Bernoulli process discussed in the Introduction. The condition $z_{xi}^{(1)} = 1$ only has a high probability when observed covariates $x$ are near the (latent/inferred) covariates $x_i^*$. It is deemed attractive that this intuitive generative process comes as a result of a rigorous Lévy process construction, the properties of which are summarized next.
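A minimal sketch of this two-step draw follows, with truncated weights and illustrative covariates (assumptions, not the authors' code); the "consider" step depends on proximity in covariate space, the "select" step on the beta-process weight.

```python
import numpy as np

# Two-step draw b_xi = z1 * z2, equal in distribution to
# b_xi ~ Bernoulli(pi_i * K(x, x_i*; psi_i*)).

rng = np.random.default_rng(2)
T = 256
pi = rng.beta(1.0 / T, 1.0, size=T)
x_star = rng.uniform(0, 1, size=(T, 2))
psi_star = rng.choice([0.1, 1.0, 10.0], size=T)

def draw_features(x):
    k = np.exp(-psi_star * np.sum((x_star - x) ** 2, axis=1))
    z1 = rng.random(T) < k      # step 1: "consider" dish i (proximity)
    z2 = rng.random(T) < pi     # step 2: "select" dish i (beta-process weight)
    return z1 & z2              # b_xi

b = draw_features(np.array([0.5, 0.5]))
print("dishes used:", np.flatnonzero(b))
```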
2.5 Properties of B
For all Borel subsets $A \subset \mathcal{F}$, if $B$ is drawn from the KBP, then for covariates $x, x' \in \mathcal{X}$ we have
$$\mathbb{E}[B_x(A)] = B_0(A)\,\mathbb{E}(K_x),$$
$$\mathrm{Cov}(B_x(A), B_{x'}(A)) = \mathbb{E}(K_x K_{x'}) \int_A \frac{B_0(d\omega)(1 - B_0(d\omega))}{c(\omega) + 1} + \mathrm{Cov}(K_x, K_{x'}) \int_A B_0^2(d\omega),$$
where $\mathbb{E}(K_x) = \int_{\mathcal{X}\times\Psi} K(x, x^*; \psi^*)\, H(dx^*)\, Q(d\psi^*)$. If $K(x, x^*; \psi^*) = 1$ for all $x \in \mathcal{X}$, then $\mathbb{E}(K_x) = \mathbb{E}(K_x K_{x'}) = 1$ and $\mathrm{Cov}(K_x, K_{x'}) = 0$, and the above results reduce to those for the original BP [15].
Assume $c(\omega) = c$, where $c \in \mathbb{R}^+$ is a constant, and let $K_x = (K(x, x_1^*; \psi_1^*), K(x, x_2^*; \psi_2^*), \ldots)^T$ represent an infinite-dimensional vector; then for fixed kernel parameters $\{x_i^*, \psi_i^*\}$,
$$\mathrm{Corr}(B_x(A), B_{x'}(A)) = \frac{\langle K_x, K_{x'} \rangle}{\|K_x\|_2 \cdot \|K_{x'}\|_2} \quad (9)$$
where it is assumed $\langle K_x, K_{x'} \rangle$, $\|K_x\|_2$, $\|K_{x'}\|_2$ are finite; the latter condition is always met when we (in practice) truncate the number of terms used in (7). The expression in (9) clearly imposes the desired property of high correlation in $B_x$ and $B_{x'}$ when $x$ and $x'$ are proximate.
Proofs of the above properties are provided in the Supplemental Material.
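The correlation in Eq. (9) is easy to check numerically; a minimal sketch under a truncation and an assumed RBF kernel:

```python
import numpy as np

# Corr(B_x(A), B_x'(A)) for fixed kernel parameters, computed directly from
# the kernel vectors K_x of Eq. (9).

rng = np.random.default_rng(3)
T = 256
x_star = rng.uniform(0, 1, size=(T, 2))
psi_star = rng.choice([0.1, 1.0, 10.0], size=T)

def kernel_vec(x):
    return np.exp(-psi_star * np.sum((x_star - x) ** 2, axis=1))

def corr(x, x_prime):
    kx, kxp = kernel_vec(x), kernel_vec(x_prime)
    return kx @ kxp / (np.linalg.norm(kx) * np.linalg.norm(kxp))

print(corr(np.array([0.5, 0.5]), np.array([0.52, 0.5])))   # near 1: proximate x
print(corr(np.array([0.1, 0.1]), np.array([0.9, 0.9])))    # smaller: distant x
```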
3 Applications
3.1 Model construction
We develop a covariate-dependent factor model, generalizing [7, 17], which did not consider covariates. Consider data $y_n \in \mathbb{R}^M$ with associated covariates $x_n \in \mathbb{R}^L$, with $n = 1, \ldots, N$. The factor loadings in the factor model here play the role of "dishes" in the buffet analogy, and we model the data as
$$y_n = D(w_n \circ b_n) + \epsilon_n$$
$$Z_{x_n} \sim \mathrm{BeP}(B_{x_n}), \qquad B \sim \mathrm{KBP}(c, B_0, H, Q), \qquad B_0 \sim \mathrm{DP}(\alpha_0 G_0)$$
$$w_n \sim \mathcal{N}(0, \gamma_1^{-1} I_T), \qquad \epsilon_n \sim \mathcal{N}(0, \gamma_2^{-1} I_M) \quad (10)$$
with gamma priors placed on $\alpha_0$, $\gamma_1$ and $\gamma_2$, with $\circ$ representing the pointwise (Hadamard) vector product, and with $I_M$ representing the $M \times M$ identity matrix. The Dirichlet process [3] base measure $G_0 = \mathcal{N}(0, \frac{1}{M} I_M)$, and the KBP base measure $B_0$ is a mixture of atoms (factor loadings). For the applications considered it is important that the same atoms be reused at different points $\{x_i^*\}$ in covariate space, to allow for repeated structure to be manifested as a function of space or time, within the image and music applications, respectively. The columns of $D$ are defined respectively by $(\omega_1, \omega_2, \ldots)$ in $B$, and the vector $b_n = (b_{n1}, b_{n2}, \ldots)$ with $b_{nk} = Z_{x_n}(\omega_k)$. Note that $B$ is drawn once from the KBP, and when drawing the $Z_{x_n}$ we evaluate $B$ as defined by the respective covariate $x_n$.
When implementing the KBP, we truncate the sum in (7) to $T$ terms, and draw the $\pi_i \sim \mathrm{Beta}(1/T, 1)$, which corresponds to setting $c = 1$. We set $T$ large, and the model infers the subset of $\{\omega_i\}_{i=1,T}$ that have significant amplitude, thereby estimating the number of factors needed for representation of the data. In practice we let $H$ and $Q$ be multinomial distributions over a discrete and finite set of, respectively, locations for $\{x_i^*\}$ and kernel parameters for $\{\psi_i^*\}$, details of which are discussed in the specific examples.
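As a rough forward-simulation of Eq. (10) (illustrative dimensions and hyperparameter values, not the authors' implementation):

```python
import numpy as np

# Generate data from the truncated KBP factor model: each sample's feature
# usage depends on its covariates through the kernel.

rng = np.random.default_rng(4)
N, M, T = 100, 64, 256
gamma1, gamma2 = 1.0, 100.0

x = rng.uniform(0, 1, size=(N, 2))               # observed covariates x_n
x_star = rng.uniform(0, 1, size=(T, 2))          # latent dish covariates x_i*
psi_star = rng.choice([0.1, 1.0, 10.0], size=T)  # kernel widths psi_i*
pi = rng.beta(1.0 / T, 1.0, size=T)
D = rng.normal(0, 1 / np.sqrt(M), size=(M, T))   # factor loadings (columns = dishes)

K = np.exp(-psi_star[None, :] * ((x[:, None, :] - x_star[None, :, :]) ** 2).sum(-1))
B = rng.random((N, T)) < pi[None, :] * K         # b_n, one row per sample
W = rng.normal(0, 1 / np.sqrt(gamma1), size=(N, T))
Y = (B * W) @ D.T + rng.normal(0, 1 / np.sqrt(gamma2), size=(N, M))
print(Y.shape, "average factors used per sample:", B.sum(1).mean())
```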
In (10), the $i$th column of $D$, denoted $D_i$, is drawn from $B_0$, with $B_0$ drawn from a Dirichlet process (DP). There are multiple ways to perform such DP clustering, and here we apply the Pólya urn scheme [3]. Assume $D_1, D_2, \ldots, D_{i-1}$ are a series of i.i.d. random draws from $B_0$; then the successive conditional distribution of $D_i$ is of the following form:
$$D_i \mid D_1, \ldots, D_{i-1}, \alpha_0, G_0 \;\sim\; \sum_{l=1}^{N_u} \frac{n_l^*}{i - 1 + \alpha_0}\,\delta_{D_l^*} + \frac{\alpha_0}{i - 1 + \alpha_0}\,G_0 \quad (11)$$
where $\{D_l^*\}_{l=1,N_u}$ are the unique dictionary elements shared by the first $i - 1$ columns of $D$, and $n_l^* = \sum_{j=1}^{i-1} \delta(D_j = D_l^*)$. For model inference, an indicator variable $c_i$ is introduced for each $D_i$, and $c_i = l$ with a probability proportional to $n_l^*$, for $l = 1, \ldots, N_u$, with $c_i$ equal to $N_u + 1$ with a probability controlled by $\alpha_0$. If $c_i = l$ for $l = 1, \ldots, N_u$, $D_i$ takes the value $D_l^*$; otherwise $D_i$ is drawn from the prior $G_0 = \mathcal{N}(0, \frac{1}{M} I_M)$, and a new dish/factor loading $D_{N_u+1}^*$ is hence introduced.
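A minimal sketch of the urn scheme in Eq. (11), drawing $T$ dictionary columns sequentially (illustrative values; not the authors' code):

```python
import numpy as np

# Polya urn: reuse an existing unique element with probability proportional
# to its count; open a new one with probability proportional to alpha0.

rng = np.random.default_rng(5)
M, T, alpha0 = 64, 256, 1.0

uniques, counts, c = [], [], []
for i in range(T):
    probs = np.array(counts + [alpha0], dtype=float)
    l = rng.choice(len(probs), p=probs / probs.sum())
    if l == len(uniques):                  # new dish from G0 = N(0, I_M / M)
        uniques.append(rng.normal(0, 1 / np.sqrt(M), size=M))
        counts.append(1)
    else:                                  # reuse existing dish D_l*
        counts[l] += 1
    c.append(l)

print("unique dictionary elements:", len(uniques))
```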
3.2 Extensions
It is relatively straightforward to include additional model sophistication into (10), one example of which we will consider in the context of the image-processing example. Specifically, in many applications it is inappropriate to assume a Gaussian model for the noise or residual $\epsilon_n$. In Section 4.3 we consider the following augmented noise model:
$$\epsilon_n = \lambda_n \circ m_n + \xi_n$$
$$\lambda_n \sim \mathcal{N}(0, \gamma_\lambda^{-1} I_M), \qquad m_{np} \sim \mathrm{Bernoulli}(\tilde{\pi}_n), \qquad \tilde{\pi}_n \sim \mathrm{Beta}(a_0, b_0), \qquad \xi_n \sim \mathcal{N}(0, \gamma_3^{-1} I_M) \quad (12)$$
with gamma priors placed on $\gamma_\lambda$ and $\gamma_3$, and with $p = 1, \ldots, M$. The term $\lambda_n \circ m_n$ accounts for "spiky" noise, with potentially large amplitude, and $\tilde{\pi}_n$ represents the probability of spiky noise in data sample $n$. This type of noise model was considered in [18], with which we compare.
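The following sketch simulates the augmented noise of Eq. (12), under assumed (illustrative) precisions that make the spikes much larger than the dense noise:

```python
import numpy as np

# Dense Gaussian noise plus sparse, high-amplitude "spiky" noise.

rng = np.random.default_rng(6)
N, M = 100, 64
a0, b0 = 1.0, 10.0
gamma_lam, gamma3 = 1e-4, 100.0   # spike variance >> dense-noise variance

pi_tilde = rng.beta(a0, b0, size=N)                       # spike prob. per sample
m = rng.random((N, M)) < pi_tilde[:, None]                # spike locations m_np
lam = rng.normal(0, 1 / np.sqrt(gamma_lam), size=(N, M))  # spike amplitudes
xi = rng.normal(0, 1 / np.sqrt(gamma3), size=(N, M))      # dense Gaussian noise
eps = lam * m + xi
print("fraction of spiky entries:", m.mean())
```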
3.3 Inference
The model inference is performed with a Gibbs sampler. Due to the limited space, only those variables having update equations distinct from those in the BP-FA of [17] are included here. Assume $T$ is the truncation level for the number of dictionary elements, $\{D_i\}_{i=1,T}$; $N_u$ is the number of unique dictionary element values in the current Gibbs iteration, $\{D_l^*\}_{l=1,N_u}$. For the applications considered in this paper, $K(x_n, x_i^*; \psi_i^*)$ is defined based on the Euclidean distance: $K(x_n, x_i^*; \psi_i^*) = \exp[-\psi_i^* \|x_n - x_i^*\|^2]$ for $i = 1, \ldots, T$; both $\psi_i^*$ and $x_i^*$ are updated from multinomial distributions (defining $Q$ and $H$, respectively) over a set of discretized values with a uniform prior for each; more details on this are discussed in Sec. 4.
• Update $\{D_l^*\}_{l=1,N_u}$: $D_l^* \sim \mathcal{N}(\mu_l, \Sigma_l)$, with
$$\mu_l = \Sigma_l \Big[\gamma_2 \sum_{n=1}^{N} \sum_{i:\,c_i = l} (b_{ni} w_{ni})\, y_n^{-l}\Big], \qquad \Sigma_l = \Big[\gamma_2 \sum_{n=1}^{N} \sum_{i:\,c_i = l} (b_{ni} w_{ni})^2 + M\Big]^{-1} I_M,$$
where $y_n^{-l} = y_n - \sum_{i:\,c_i \neq l} D_i (b_{ni} w_{ni})$.
• Update $\{c_i\}_{i=1,T}$: $p(c_i) \sim \mathrm{Mult}(p_i)$, with
$$p(c_i = l \mid -) \propto \begin{cases} \dfrac{n_l^{-i}}{T - 1 + \alpha_0} \prod_{n=1}^{N} \exp\{-\tfrac{\gamma_2}{2}\|y_n^{-i} - D_l^* (b_{ni} w_{ni})\|^2\}, & \text{if } l \text{ is previously used,} \\[6pt] \dfrac{\alpha_0}{T - 1 + \alpha_0} \prod_{n=1}^{N} \exp\{-\tfrac{\gamma_2}{2}\|y_n^{-i} - D_{l^{new}} (b_{ni} w_{ni})\|^2\}, & \text{if } l = l^{new}, \end{cases}$$
where $n_l^{-i} = \sum_{j:\,j \neq i} \delta(D_j = D_l^*)$ and $y_n^{-i} = y_n - \sum_{k:\,k \neq i} D_k (b_{nk} w_{nk})$; $p_i$ is realized by normalizing the above equation.
• Update $\{Z_{x_n}\}_{n=1,N}$: for $Z_{x_n}$, update each component $p(b_{ni}) \sim \mathrm{Bernoulli}(v_{ni})$ for $i = 1, \ldots, T$,
$$\frac{p(b_{ni} = 1)}{p(b_{ni} = 0)} = \frac{\exp\{-\tfrac{\gamma_2}{2}(D_i^T D_i w_{ni}^2 - 2 w_{ni} D_i^T y_n^{-i})\}\, \pi_i K(x_n, x_i^*; \psi_i^*)}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}.$$
$v_{ni}$ is calculated by normalizing $p(b_{ni})$ with the above constraint.
• Update $\{\pi_i\}_{i=1,T}$: introduce two sets of auxiliary variables $\{z_{ni}^{(1)}\}_{i=1,T}$ and $\{z_{ni}^{(2)}\}_{i=1,T}$ for each data $y_n$, with $z_{ni}^{(1)} \sim \mathrm{Bernoulli}(\pi_i)$ and $z_{ni}^{(2)} \sim \mathrm{Bernoulli}(K(x_n, x_i^*; \psi_i^*))$. For each specific $n$: if $b_{ni} = 1$, then $z_{ni}^{(1)} = 1$ and $z_{ni}^{(2)} = 1$; if $b_{ni} = 0$,
$$p(z_{ni}^{(1)} = 0, z_{ni}^{(2)} = 0 \mid b_{ni} = 0) = \frac{(1 - \pi_i)(1 - K(x_n, x_i^*; \psi_i^*))}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)},$$
$$p(z_{ni}^{(1)} = 0, z_{ni}^{(2)} = 1 \mid b_{ni} = 0) = \frac{(1 - \pi_i)\, K(x_n, x_i^*; \psi_i^*)}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)},$$
$$p(z_{ni}^{(1)} = 1, z_{ni}^{(2)} = 0 \mid b_{ni} = 0) = \frac{\pi_i (1 - K(x_n, x_i^*; \psi_i^*))}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}.$$
From the above equations, we derive the conditional distribution for $\pi_i$:
$$\pi_i \sim \mathrm{Beta}\Big(\frac{1}{T} + \sum_n z_{ni}^{(1)}, \; 1 + \sum_n (1 - z_{ni}^{(1)})\Big).$$
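As a minimal sketch of this last update for a single feature $i$ (assumed inputs; not the authors' code), only the marginal of $z^{(1)}$ is needed for the Beta draw:

```python
import numpy as np

# One Gibbs step for pi_i via the auxiliary variable z^(1). Given b_ni = 1 we
# have z1 = 1; given b_ni = 0, p(z1 = 1 | b = 0) = pi (1 - K) / (1 - pi K).

rng = np.random.default_rng(7)
N, T = 100, 256
pi_i = 0.5
k = rng.uniform(0.1, 1.0, size=N)        # K(x_n, x_i*; psi_i*) for each sample
b = rng.random(N) < pi_i * k             # current b_ni

z1 = np.ones(N, dtype=int)               # b_ni = 1 forces z1 = 1
idx = ~b
p_z1_one = pi_i * (1.0 - k[idx]) / (1.0 - pi_i * k[idx])
z1[idx] = rng.random(idx.sum()) < p_z1_one

pi_i = rng.beta(1.0 / T + z1.sum(), 1.0 + (1 - z1).sum())
print("sampled pi_i:", pi_i)
```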
4 Results
4.1 Hyperparameter settings
For both $\gamma_1$ and $\gamma_2$ the corresponding prior was set to $\mathrm{Gamma}(10^{-6}, 10^{-6})$; the concentration parameter $\alpha_0$ was given a prior $\mathrm{Gamma}(1, 0.1)$. For both experiments below, the number of dictionary elements $T$ was truncated to 256, the number of unique dictionary element values was initialized to 100, and $\{\pi_i\}_{i=1,T}$ were initialized to 0.5. All $\{\psi_i^*\}_{i=1,T}$ were initialized to $10^{-5}$ and updated from the set $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$ with a uniform prior $Q$. The remaining variables were initialized randomly. No parameter tuning or optimization has been performed.
4.2 Music analysis
We consider the same music piece as described in [12]: "A Day in the Life" from the Beatles' album Sgt. Pepper's Lonely Hearts Club Band. The acoustic signal was sampled at 22.05 KHz and divided into 50 ms contiguous frames; 40-dimensional Mel frequency cepstral coefficients (MFCCs) were extracted from each frame, shown in Figure 1(a).
A typical goal of music analysis is to infer interrelationships within the music piece, as a function of time [12]. For the audio data, each MFCC vector $y_n$ has an associated time index, the latter used as the covariate $x_n$. The finite set of temporal sample points (covariates) were employed to define a library for the $\{x_i^*\}$, and $H$ is a uniform distribution over this set. After 2000 burn-in iterations, we collected samples every five iterations. Figure 1(b) shows the frequency for the number of unique dictionary elements used by the data, based on the 1600 collected samples; and Figure 1(c) shows the frequency for the number of total dictionary elements used.
[Figure 1: (a) MFCCs features used in music analysis, where the horizontal axis corresponds to time, for "A Day in the Life". Based on the Gibbs collection samples: (b) frequency on number of unique dictionary elements, and (c) total number of dictionary elements.]
With the model defined in (10), the sparse vector $b_n \circ w_n$ indicates the importance of each dictionary element from $\{D_i\}_{i=1,T}$ to data $y_n$. Each of these $N$ vectors $\{b_n \circ w_n\}_{n=1,N}$ was normalized
within each Gibbs sample, and used to compute a correlation matrix associated with the $N$ time points in the music. Finally, this matrix was averaged across the collection samples, to yield a correlation matrix relating one part of the music to all others. For a fair comparison between our methods and the model proposed in [12] (which used an HMM, and computed correlations over windows of time), we divided the whole piece into multiple consecutive short-time windows. Each temporal window includes 75 consecutive feature vectors, and we compute the average correlation coefficients between the features within each pair of windows. There were 88 temporal windows in total (each temporal window is denoted as a sequence in Figure 2), and the dimension of the correlation matrix is accordingly 88 × 88. The computed correlation matrix for the proposed KBP model is presented in Figure 2(a).
We compared KBP performance with results based on BP-FA [17], in which covariates are not employed, and with results from the dynamic clustering model in [12], in which a dynamic HMM is employed (in [12] a dynamic HDP, or dHDP, was used in concert with an HMM). The BP-FA results correspond to replacing the KBP with a BP. The correlation matrices computed from the BP-FA and the dHDP-HMM [12] are shown in Figures 2(b) and (c), respectively. The dHDP-HMM results yield a reasonably good segmentation of the music, but it is unable to infer subtle differences in the music over time (for example, all voices in the music are clustered together, even if they are different). Since the BP-FA does not capture as much localized information in the music (the probability of dictionary usage is the same for all temporal positions), it does not manifest as good a music segmentation as the dHDP-HMM. By contrast, the KBP-FA model yields a good music segmentation, while also capturing subtle differences in the music over time (e.g., in voices). Note that the use of the DP to allow repeated use of dictionary elements as a function of time (covariates) is important here, due to the repetition of structure in the piece. One may listen to the music and observe the segmentation at http://www.youtube.com/watch?v=35YhHEbIlEI.
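A minimal sketch of this window-to-window correlation matrix follows, with stand-in usage vectors (assumed shapes; not the authors' code). Averaging the inner products of all frame pairs in two 75-frame windows equals taking the inner product of the window-mean vectors, which keeps the computation cheap.

```python
import numpy as np

# Window-level similarity matrix from normalized per-frame usage vectors
# (stand-ins for the inferred b_n * w_n).

rng = np.random.default_rng(8)
N, T, win = 6600, 256, 75
usage = rng.random((N, T))
usage /= np.linalg.norm(usage, axis=1, keepdims=True)

n_win = N // win                            # 88 windows for this piece
means = usage[: n_win * win].reshape(n_win, win, T).mean(axis=1)
win_corr = means @ means.T                  # (88, 88) similarity matrix
print(win_corr.shape)
```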
[Figure 2: Inference of relationships in music as a function of time, as computed via a correlation of the dictionary-usage weights, for (a) and (b), and based upon state usage in an HMM, for (c). Results are shown for "A Day in the Life." The results in (c) are from [12], as a courtesy from the authors of that paper. (a) KBP-FA, (b) BP-FA, (c) dHDP-HMM.]
4.3 Image interpolation and denoising
We consider image interpolation and denoising as two additional potential applications. In both of these examples each image is divided into $N$ $8 \times 8$ overlapping patches, and each patch is stacked into a vector of length $M = 64$, constituting observation $y_n \in \mathbb{R}^M$. The covariate $x_n$ represents the patch coordinates in the 2-D space. The probability measure $H$ corresponds to a uniform distribution over the centers of all $8 \times 8$ patches. The images were recovered based on the average of the collection samples, and each pixel was averaged across all overlapping patches in which it resided. For the image-processing examples, 5000 Gibbs samples were run, with the first 2000 discarded as burn-in.
For image interpolation, we only observe a fraction of the image pixels, sampled uniformly at random. The model infers the underlying dictionary $D$ in the presence of this missing data, as well as the weights on the dictionary elements required for representing the observed components of $\{y_n\}$; using the inferred dictionary and associated weights, one may readily impute the missing pixel values. In Table 1 we present average PSNR values on the recovered pixel values, as a function of the fraction of pixels that are observed (20% in Table 1 means that 80% of the pixels are missing uniformly at random). Comparisons are made between a model based on BP and one based on the proposed KBP; the latter generally performs better, particularly when a large fraction of the pixels are missing. The proposed algorithm yields results that are comparable to those in [18], which also employed covariates within the BP construction. However, the proposed KBP construction has the significant computational advantage of only requiring kernels centered at the locations of the dictionary-dependent covariates $\{x_i^*\}$, while the model in [18] has a kernel for each of the image patches, and therefore it scales unfavorably for large images.
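The evaluation here amounts to scoring held-out pixels with PSNR; a minimal sketch under assumed shapes (the "estimate" below is a stand-in for the model output):

```python
import numpy as np

# PSNR on the missing pixels of an interpolated image.

def psnr(truth, estimate, mask, peak=255.0):
    """PSNR computed only over the pixels indicated by mask."""
    mse = np.mean((truth[mask] - estimate[mask]) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(9)
img = rng.uniform(0, 255, size=(64, 64))
missing = rng.random(img.shape) > 0.2               # observe only 20% of pixels
estimate = img + rng.normal(0, 10, size=img.shape)  # stand-in reconstruction
print("PSNR on missing pixels: %.2f dB" % psnr(img, estimate, missing))
```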
Table 1: Comparison of BP and KBP for interpolating images with pixels missing uniformly at random, using standard image-processing images. The (BP) and (KBP) rows of each ratio show results of BP and KBP, respectively. Results are shown when 20%, 30% and 50% of the pixels are observed, selected uniformly at random.

| RATIO     | C.MAN | HOUSE | PEPPERS | LENA  | BARBARA | BOATS | F.PRINT | MAN   | COUPLE | HILL  |
|-----------|-------|-------|---------|-------|---------|-------|---------|-------|--------|-------|
| 20% (BP)  | 23.75 | 29.75 | 25.56   | 30.97 | 26.84   | 27.84 | 26.49   | 28.29 | 27.76  | 29.38 |
| 20% (KBP) | 24.02 | 30.89 | 26.29   | 31.38 | 28.93   | 28.11 | 26.89   | 28.37 | 28.03  | 29.67 |
| 30% (BP)  | 25.59 | 33.09 | 28.64   | 33.30 | 30.13   | 30.20 | 29.23   | 29.89 | 29.97  | 31.19 |
| 30% (KBP) | 25.75 | 34.02 | 29.29   | 33.33 | 31.46   | 30.24 | 29.37   | 30.12 | 30.33  | 31.25 |
| 50% (BP)  | 28.66 | 38.26 | 32.53   | 36.79 | 35.95   | 33.05 | 33.50   | 33.19 | 33.61  | 34.19 |
| 50% (KBP) | 28.78 | 38.35 | 32.69   | 35.89 | 36.03   | 33.18 | 32.18   | 32.35 | 32.35  | 32.60 |
In the image-denoising example in Figure 3 the images were corrupted with both white Gaussian noise (WGN) and sparse spiky noise, as considered in [18]. The sparse spiky noise exists in particular pixels, selected uniformly at random, with amplitude distributed uniformly between −255 and 255. For the pepper image, 15% of the pixels were corrupted by spiky noise, and the standard deviation of the WGN was 15; for the house image, 10% of the pixels were corrupted by spiky noise and the standard deviation of the WGN was 10. We compared different methods on both images: the augmented KBP-FA model (KBP-FA+) of Sec. 3.2, the BP-FA model augmented with a term for spiky noise (BP-FA+), and the original BP-FA model. The model proposed with KBP showed the best denoising result under both visual and quantitative evaluation. Again, these results are comparable to those in [18], with the significant computational advantage discussed above. Note that here the imposition of covariates and the KBP yields marked improvements in this application, relative to BP-FA alone.
Figure 3: Denoising Result: the first column shows the noisy images (PSNR is 15.56 dB for Peppers and
17.54 dB for House); the second and third column shows the results inferred from the BP-FA model (PSNR
is 16.31 dB for Peppers and 17.95 dB for House), with the dictionary elements shown in column two and the
reconstruction in column three; the fourth and fifth columns show results from BP-FA+ (PSNR is 23.06 dB
for Peppers and 26.71 dB for House); the sixth and seventh column shows the results of the KBP-FA+ (PSNR
is 27.37 dB for Peppers and 34.89 dB for House). In each case the dictionaries are ordered based on their
frequency of usage, starting from top-left.
5 Summary
A new Lévy process, the kernel beta process, has been developed for the problem of nonparametric Bayesian feature learning, with example results presented for music analysis, image denoising, and image interpolation. In addition to presenting theoretical properties of the model, state-of-the-art results are realized on these learning tasks. The inference is performed via a Gibbs sampler, with analytic update equations. Concerning computational costs, for the music-analysis problem, for example, the BP model required around 1 second per Gibbs iteration, with KBP requiring about 3 seconds, with results run on a PC with a 2.4GHz CPU, in non-optimized Matlab.
Acknowledgment
The research reported here was supported by AFOSR, ARO, DARPA, DOE, NGA and ONR.
References
[1] D. Applebaum. Lévy Processes and Stochastic Calculus. Cambridge University Press, 2009.
[2] D. B. Dunson and J.-H. Park. Kernel stick-breaking processes. Biometrika, 95:307–323, 2008.
[3] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1973.
[4] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2005.
[5] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics, 1990.
[6] J.F.C. Kingman. Poisson Processes. Oxford Press, 2002.
[7] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components analysis. In Independent Component Analysis and Signal Separation, 2007.
[8] S. N. MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science, 1999.
[9] K. Miller, T. Griffiths, and M. I. Jordan. The phylogenetic Indian buffet process: A non-exchangeable nonparametric prior for latent features. In UAI, 2008.
[10] C.E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[11] L. Ren, L. Du, L. Carin, and D. B. Dunson. Logistic stick-breaking process. J. Machine Learning Research, 2011.
[12] L. Ren, D. Dunson, S. Lindroth, and L. Carin. Dynamic nonparametric Bayesian models for analysis of music. Journal of the American Statistical Association, 105:458–472, 2010.
[13] A. Rodriguez and D. B. Dunson. Nonparametric Bayesian models through probit stick-breaking processes. Univ. California Santa Cruz Technical Report, 2009.
[14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[15] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In AISTATS, 2007.
[16] S. Williamson, P. Orbanz, and Z. Ghahramani. Dependent Indian buffet processes. In AISTATS, 2010.
[17] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. In NIPS, 2009.
[18] M. Zhou, H. Yang, G. Sapiro, D. Dunson, and L. Carin. Dependent hierarchical beta process for image interpolation and denoising. In AISTATS, 2011.
3,596 | 4,256 | Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance
Peter Vincent Gehler
Max Planck Institute for Informatics
[email protected]
Carsten Rother
Microsoft Research Cambridge
[email protected]
Martin Kiefel, Lumin Zhang, Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
{mkiefel,lumin,bs}@tuebingen.mpg.de
Abstract
We address the challenging task of decoupling material properties from lighting
properties given a single image. In the last two decades virtually all works have
concentrated on exploiting edge information to address this problem. We take a
different route by introducing a new prior on reflectance, that models reflectance
values as being drawn from a sparse set of basis colors. This results in a Random
Field model with global, latent variables (basis colors) and pixel-accurate output
reflectance values. We show that without edge information high-quality results
can be achieved, that are on par with methods exploiting this source of information. Finally, we are able to improve on state-of-the-art results by integrating edge
information into our model. We believe that our new approach is an excellent
starting point for future developments in this field.
1 Introduction
The task of recovering intrinsic images is to separate a given input image into its material-dependent properties, known as reflectance or albedo, and its light-dependent properties, such as shading, shadows, specular highlights, and inter-reflectance. A successful separation of these properties would be beneficial to a number of computer vision tasks. For example, an image which solely depends on material-dependent properties is helpful for image segmentation and object recognition [11], while a clean image of shading is a valuable input to shape-from-shading algorithms.
As in most previous work in this field, we cast the intrinsic image recovery problem into the following simplified form, where each image pixel is the product of two components:
$$I = sR. \quad (1)$$
Here $I \in \mathbb{R}^3$ is the pixel's color, in RGB space, $R \in \mathbb{R}^3$ is its reflectance and $s \in \mathbb{R}$ its "shading". Note, we use "shading" as a proxy for all light-dependent properties, e.g. shadows. The fact that shading is only a 1D entity imposes some limitations. For example, shading effects stemming from multiple light sources can only be modeled if all light sources have the same color.¹ The goal of this work is to estimate $s$ and $R$ given $I$. This problem is severely under-constrained, with 4 unknowns and 3 constraints for each pixel. Hence, a trivial solution to (1) is, for instance, $I = R$, $s = 1$ for all pixels. The main focus of this paper is on exploring sensible priors for both shading and reflectance.
Despite the importance of this problem surprisingly little research has been conducted in recent years. Most of the inventions were done in the 70s and 80s. The recent comparative study [7] has shown that the simple Retinex method [9] from the 70s is still the top performing approach.
¹ This problem can be overcome by utilizing a 3D vector for s, as done in [4], which we however do not consider in this work.
[Figure 1: (a) Image I "paper1", (b) I (in RGB), (c) Reflectance R, (d) R (in RGB), (e) Shading s. An image (a), its color in RGB space (b), the reflectance image (c), its distribution in RGB space (d), and the shading image (e). Omer and Werman [12] have shown that an image of a natural scene often contains only a few different "basis colorlines". Figure (b) shows a dominant gray-scale color-line and other color lines corresponding to the scribbles on the paper (a). These colorlines are generated by taking a small set of "basis colors" which are then linearly "smeared" out in RGB space. The basis colors are clearly visible in (d), where the cluster for white (top, right) is the dominant one. This "smearing effect" comes from properties of the scene (e.g. shading or shadows), and/or properties of the camera, e.g. motion blur. (Note, the few pixels in-between clusters are due to anti-aliasing effects.) In this work we approximate the basis colors by a simple mixture of isotropic Gaussians.]
Given the progress in the last two decades on probabilistic models, inference and learning techniques, as well as the improved computational power, we believe that now is a good time to revisit this problem. This work, together with the recent papers [14, 4, 7, 15], is a first step in this direction.
The main motivation of our work is to develop a simple, yet powerful probabilistic model for shading and reflectance estimation. In total we use three different types of factors. The first one is the most commonly used factor and is the key ingredient of all Retinex-based methods. The idea is to extract those image edges which are (potentially) true reflectance edges and then to recover a new reflectance image that contains only these edges, using a set of Poisson equations. This term on its own is enough to recover a non-trivial decomposition, i.e. s ≠ 1. The next factor is a simple smoothness prior on shading between neighboring image pixels, and has been used by some previous work, e.g. [14]. Note, there are a few works, which we discuss in more detail later, that extend these pairwise terms to become patch-based. The third prior term is the main contribution of our work and is conceptually very different from the local (pairwise or patch-based) constraints of previous works. We propose a new global (image-wide) sparsity prior on reflectance based on the findings of [12], discussed in Fig. 1. In the absence of other factors this already produces non-trivial results. This prior takes the form of a Mixture of Gaussians, and encodes the assumption that the reflectance value for each pixel is drawn from some mixing components, which in this context we refer to as "basis colors". The complete model forms a latent variable Random Field model for which we perform MAP estimation.
By combining the different terms we are able to outperform the state-of-the-art. If we use image-optimal parameter settings we perform on par with methods that use multiple images as input. To empirically validate this we use the database introduced in the comparative study [7].
2 Related Work
There is a vast amount of literature on the problem of recovering intrinsic images. We refer the reader to detailed surveys in [8, 17, 7], and limit our attention to some few related works.
Barrow and Tenenbaum [2] were the first to define the term "intrinsic image". Around the same time the first solution to this problem was developed by Land and McCann [9], known as the Retinex algorithm. After that the Retinex algorithm was extended to two dimensions by Blake [3] and Horn [8], and later applied to color images [6]. The basic Retinex algorithm is a 2-step procedure: 1) detect all image gradients which are caused by changes in reflectance; 2) recover a reflectance image which preserves the detected reflectance gradients. The basic assumption of this approach is that small image gradients are more likely caused by a shading effect and strong gradients by a change in reflectance. For color images this rule can be extended by treating changes in the 1D brightness domain differently to changes in the 2D chromaticity space.² This method, which we denote as "Color Retinex", was the top performing method in the recent comparison paper [7]. Note, the only approach which could beat Retinex utilizes multiple images [19]. Surprisingly, the study [7] also shows that more sophisticated methods for training the reflectance edge detector, using e.g. image patches, did not perform better than the basic Retinex method. In particular the study tested two methods of Tappen et al. [17, 16]. A plausible explanation is offered, namely that these methods may have over-fitted the small amount of training data. The method [17] has an additional intermediate step where a Markov Random Field (MRF) is used to "propagate" reflectance gradients along contour lines.
² Note, a gradient in chromaticity can only be caused by differently colored light sources, or inter-reflectance.
The paper [15] implements the same intuition as done here, namely that there is a sparse set of reflectances present in the scene. However, both approaches bear the following differences. In [15] a sparsity enforcing term is included, that penalizes reflectance differences from some prototype references. This term encourages all reflectances to take on the same value, while the model we propose in this paper allows for a mixture of different material reflectances and thus keeps their diversity. Also, in contrast to [15], where a gradient-aware wavelet transform is used as a new representation, here we work directly in the RGB domain. By doing so we directly extend previous intrinsic image models, which makes evident the gains that can be attributed to a global sparse reflectance term alone.
Recently, Shen et al. [14] introduced an interesting extension of the Retinex method, which bears some similarity with our approach. The key idea in their work is to perform a pre-processing step where the (normalized) reflectance image is partitioned into a few clusters. Each cluster is treated as a non-local "super-pixel". Then a variant of the Retinex method is run on this super-pixel image. The conceptual similarity to our approach is the idea of performing an image-wide clustering step. However, the differences are that they do not formulate this idea as a joint probabilistic model over latent reflectance "basis colors" and shading variables. Furthermore, every pixel in a super-pixel must have the same intensity, which is not the case in our work. Also, they need a Retinex type of edge term to avoid the trivial solution of s = 1.
Finally, let us briefly mention techniques which use patch-based constraints, instead of pair-wise terms. The seminal work of Freeman et al. on learning low-level vision [5] formulates a probabilistic model for intrinsic images. In essence, they build a patch-based prior jointly over shading and reflectance. In a new test image the best explanation for reflectance and shading is determined. The key idea is that patches do overlap, and hence form an MRF, where long-range propagation is possible. Since no large-scale ground-truth database was available at that time, they only train and test on computer generated images of blob-like textures. Another patch-based method was recently suggested in [4]. They introduce a new energy term which is satisfied when all reflectance values in a small, e.g. 3×3, patch lie on a plane in RGB space. This idea is derived from the Laplacian matrix used for image matting [10]. On its own this term gives in practice often the trivial solution s = 1. For that reason additional user scribbles are provided to achieve high-quality results.³
3 A Probabilistic Model for Intrinsic Images
The model outlined here falls into the class of Conditional Random Fields, specifying a conditional probability distribution over reflectance $R$ and shading $s$ components for a given image $I$:
$$p(s, R \mid I) \propto \exp\left(-E(s, R \mid I)\right). \quad (2)$$
Before we describe the energy function $E$ in detail, let us specify the notation. We will denote with subscripts $i$ the values at location $i$ in the image. Thus $I_i$ is an image pixel (vector of dimension 3), $R_i$ a reflectance vector (a 3-vector), $s_i$ the shading (a scalar). The total number of pixels in an image is $N$. With boldface we denote vectors of components, e.g. $\mathbf{s} = (s_1, \ldots, s_N)$.
There are two ways to use the relationship (1) to formulate a model for shading and reflectance, corresponding to two different image likelihoods $p(I \mid s, R)$. One possible way is to relax the relation (1) and for example assume a Gaussian likelihood $p(I \mid s, R) \propto \exp(-\|I - sR\|^2)$ to account for some noise in the image formation process. This yields an optimization problem with $4N$ unknowns. The second possibility is to assume a delta-prior around $sR$, which results in the following complexity reduction. Since $I_i^c = s_i R_i^c$ has to hold for all color channels $c \in \{R, G, B\}$, the unknown variables are specified up to scalar multipliers; in other words, the direction of $R_i$ is already known. We rewrite $R_i = r_i \tilde{R}_i$, with $\tilde{R}_i = I_i / \|I_i\|$, leaving $\mathbf{r} = (r_1, \ldots, r_N)$ to be the only unknown variable. The shading components can be computed using $s_i = \|I_i\| / r_i$. Thus the optimization problem is reduced to a search over $N$ variables.
The latter reduction is commonly exploited by intrinsic image algorithms in order to simplify the model [7, 14, 4], and in the remainder we will also make use of it. This allows us to write all model parts in terms of $\mathbf{r}$.
Note that there is a global scalar $k$ by which the result $s, R$ can be modified without affecting eq. (1), i.e. $I = (sk)(\frac{1}{k}R)$. For visualization purposes $k$ is chosen such that the results are visually closest to the known ground truth.
³ We performed initial tests with this term. However, we found that it did not help to improve performance.
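A minimal sketch of this reduction (assumed shapes; a small epsilon guards zero-intensity pixels): given $\mathbf{r}$, both $R$ and $s$ follow directly, and $I = sR$ holds by construction.

```python
import numpy as np

# Recover R and s from the scalar reflectance magnitudes r:
#   R_i = r_i * I_i / ||I_i||   and   s_i = ||I_i|| / r_i.

rng = np.random.default_rng(0)
H, W = 32, 32
I = rng.uniform(0.05, 1.0, size=(H, W, 3))
r = rng.uniform(0.5, 1.5, size=(H, W))        # the N unknowns being optimized

norm = np.linalg.norm(I, axis=2) + 1e-12
R = r[..., None] * I / norm[..., None]        # reflectance image
s = norm / r                                  # shading image
print(np.allclose(s[..., None] * R, I))       # True: I = s R by construction
```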
3.1 Model
The energy function we describe here consists of three different terms that are linearly combined. We will describe the three components and their influence in greater detail below; first we write the optimization problem that corresponds to a MAP solution in its most general form:
$$\min_{r_i, \alpha_i;\; i=1,\ldots,N} \; w_s E_s(\mathbf{r}) + w_r E_{ret}(\mathbf{r}) + w_{cl} E_{cl}(\mathbf{r}, \boldsymbol{\alpha}). \quad (3)$$
Note, the global scale of the energy is not important, hence we can always fix one non-zero weight $w_s, w_r, w_{cl}$ to 1.
Shading Prior ($E_s$) We expect the shading of an image to vary smoothly over the image and we encode this in the following pairwise factors:
$$E_s(\mathbf{r}) = \sum_{i \sim j} \left(r_i^{-1}\|I_i\| - r_j^{-1}\|I_j\|\right)^2, \quad (4)$$
where we use a 4-connected pixel graph to encode the neighborhood relation, which we denote with $i \sim j$. Because of the dependency on the inverse of $\mathbf{r}$, this term is not jointly convex in $\mathbf{r}$. Any model that includes this smoothness prior thus has the (potential) problem of multiple local minima. Empirically we have seen that this function is, however, very well behaved: a large range of different starting points for $\mathbf{r}$ resulted in the same minimum. Nevertheless, we use multiple restarts with different starting points; see the optimization section 3.2.
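A minimal sketch computing the energy of Eq. (4) on a 4-connected grid (assumed shapes; not the authors' code):

```python
import numpy as np

# Shading smoothness: sum over 4-neighbor pairs of (||I_i||/r_i - ||I_j||/r_j)^2.

def shading_energy(r, intensity):
    s = intensity / r                            # shading s_i = ||I_i|| / r_i
    e_h = ((s[:, 1:] - s[:, :-1]) ** 2).sum()    # horizontal neighbor pairs
    e_v = ((s[1:, :] - s[:-1, :]) ** 2).sum()    # vertical neighbor pairs
    return e_h + e_v

rng = np.random.default_rng(1)
intensity = rng.uniform(0.1, 1.0, size=(32, 32))   # per-pixel ||I_i||
r = rng.uniform(0.5, 1.5, size=(32, 32))
print(shading_energy(r, intensity))
```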
Gradient Consistency (Eret ) As discussed in the introduction, the main idea of the Retinex algorithm is to disambiguate between edges that are due to shading variations from those that are caused
by material reflectance changes. This idea is then implemented as follows. Assume that we already
know, or have classified, that an edge at location i, j in the input image is caused by a change in reflectance. Then we know the magnitude of the gradient that has to appear in the reflectance map by
~ i )?log(rj R
~ j ). Using the fact log(kIi k) = log(I c )?log(R
~ c)
noting that log(Ii )?log(Ij ) = log(ri R
i
i
(for all channels c) and assuming a squared deviation around the log gradient magnitude, this translates into the following Gaussian MRF term on the reflectances
X
2
Eret (r) =
(log(ri ) ? log(rj ) ? gij (I)(log(kIi k) ? log(kIj k))) .
(5)
i?j
It remains to specify the classification function $g(I)$ for the image edges. In this work we adopt the Color Retinex version proposed in [7]. For each pixel $i$ and a neighbor $j$ we compute the gradient of the intensity image and the gradient of the chromaticity change. If both gradients exceed a certain threshold ($\theta_g$ and $\theta_c$, respectively), the edge at $i, j$ is classified as being a 'reflectance edge', and in this case $g_{ij}(I) = 1$. The two parameters, the thresholds $\theta_g, \theta_c$ for the intensity and the chromaticity change, are then estimated using leave-one-out cross-validation. It is worth noting that this term is qualitatively different from the smoothness prior on shading (4), even for pixels where $g_{ij}(I) = 0$: here the log-difference is penalized, whereas the shading smoothness term also depends on the intensity values $\|I_i\|, \|I_j\|$. By setting $w_{cl} = w_s = 0$ in Eq. (2) we recover Color Retinex [7].
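A hedged sketch of this computation, restricted to horizontal neighbour pairs for brevity (vertical pairs are analogous); the edge map g is assumed to have been computed by the thresholding rule above, and all names are ours:

import numpy as np

def retinex_term(log_r, log_I_norm, g):
    """Gradient-consistency term E_ret of Eq. (5), horizontal pairs only.

    log_r      : (H, W) array of log r_i.
    log_I_norm : (H, W) array of log ||I_i||.
    g          : (H, W-1) binary map; g[i, j] = 1 where the edge between
                 horizontally neighbouring pixels is classified as a
                 reflectance edge (both gradients exceed their thresholds).
    """
    d_r = log_r[:, 1:] - log_r[:, :-1]            # log-reflectance gradient
    d_I = log_I_norm[:, 1:] - log_I_norm[:, :-1]  # log-intensity gradient
    return np.sum((d_r - g * d_I) ** 2)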
Global Sparse Reflectance Prior ($E_{cl}$) Motivated by the findings of [12], we include a term that acts as a global potential on the reflectances and favors the decomposition into a few reflectance clusters. We assume $C$ different reflectance clusters, each of which is denoted by $\bar{R}^c$, $c \in \{1, \ldots, C\}$. Every reflectance component $r_i$ belongs to one of the clusters, and we denote its cluster membership with the variable $\alpha_i \in \{1, \ldots, C\}$. This is summarized in the following energy term:
$$E_{cl}(r, \alpha) = \sum_{i=1}^{N} \left\| r_i \tilde{R}_i - \bar{R}_{\alpha_i} \right\|^2. \tag{6}$$

Figure 2: A crop from the image 'panther'. Left: input image $I$ and true decomposition $(R, s)$. Note, the colors in the reflectance image (True R) have been modified on purpose such that there are exactly 4 different colors. The second column shows a clustering (here from the solution with $w_s = 0$), where each cluster has an arbitrary color. The remaining columns show results with various settings for $C$ and $w_s$ (left: reflectance image, right: shading image). The top row is the result for $C = 4$ and the bottom row for $C = 50$ clusters; columns are results for $w_s = 0$, $10^{-5}$, and $0.1$. Below the images is the corresponding LMSE score (described in Section 4.1). (Note, results are visually slightly different since the unknown overall global scaling factor $k$ is set differently, that is $I = (sk)(\frac{1}{k}R)$.)
Here, both continuous $r$ and discrete $\alpha$ variables are mixed. This represents a global potential, since the cluster means depend on the assignment of all pixels in the image. For fixed $\alpha$, this term is convex in $r$, and for fixed $r$ the optimum of $\alpha$ is a simple assignment problem. The cluster means $\bar{R}^c$ are optimally determined given $r$ and $\alpha$:

$$\bar{R}^c = \frac{1}{|\{i : \alpha_i = c\}|} \sum_{i : \alpha_i = c} r_i \tilde{R}_i.$$
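The two alternating updates implied by this term can be sketched as follows (a minimal NumPy illustration under our own naming, not the authors' code):

import numpy as np

def cluster_term(r, R_tilde, alpha, R_bar):
    """Global reflectance prior E_cl of Eq. (6).

    r       : (N,) reflectance magnitudes.
    R_tilde : (N, 3) known reflectance directions I_i / ||I_i||.
    alpha   : (N,) integer cluster assignments in {0, ..., C-1}.
    R_bar   : (C, 3) cluster means.
    """
    diff = r[:, None] * R_tilde - R_bar[alpha]
    return np.sum(diff ** 2)

def update_assignments(r, R_tilde, R_bar):
    """alpha_i = argmin_c || r_i R_tilde_i - R_bar_c ||^2."""
    d = r[:, None, None] * R_tilde[:, None, :] - R_bar[None, :, :]
    return np.argmin(np.sum(d ** 2, axis=2), axis=1)

def update_means(r, R_tilde, alpha, C):
    """R_bar_c = mean of r_i R_tilde_i over pixels assigned to cluster c."""
    R_bar = np.zeros((C, 3))
    for c in range(C):
        members = alpha == c
        if members.any():
            R_bar[c] = (r[members, None] * R_tilde[members]).mean(axis=0)
    return R_bar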
Relationship between $E_{cl}$ and $E_s$ The example in Figure 2 highlights the influence of the terms. We use a simplified model (2), namely $E_{cl} + w_s E_s$, and vary $w_s$ as well as the number of clusters. Let us first consider the case where $w_s = 0$ (third column). Independent of the clustering we get an imperfect result. This is expected, since there is no constraint across clusters: the shading within one cluster looks reasonable, but is not aligned across clusters. By adding a little bit of smoothing ($w_s = 10^{-5}$; 4th column), this problem is cured for both clusterings. It is very important to note that too many clusters (here $C = 50$) do not affect the result very much. The reason is that enough clustering constraints are present to recover the variation in shading. If we were to give each pixel its own cluster this would no longer be true, and we would get the trivial solution $s = 1$. Finally, results deteriorate when the smoothing term is too strong (last column, $w_s = 0.1$), since it prefers a constant shading. Note that for this simple toy example the smoothness prior was not important; for real images, however, the best results are achieved using a non-zero $w_s$.
3.2 Optimization of (3)
The MAP problem (3) consists of both discrete and continuous variables, and we solve it using coordinate descent. The entire algorithm is summarized in Algorithm 1.⁴

Algorithm 1 Coordinate Descent for solving (3)
1: Select $r^0$ as described in the text
2: $\alpha^0 \leftarrow$ K-Means clustering of $\{r_i^0 \tilde{R}_i,\ i = 1, \ldots, N\}$
3: $t \leftarrow 0$
4: repeat
5:   $r^{t+1} \leftarrow$ optimize (3) with $\alpha^t$ fixed
6:   $\bar{R}^c \leftarrow \sum_{i : \alpha_i = c} r_i \tilde{R}_i \,/\, |\{i : \alpha_i = c\}|$
7:   $\alpha^{t+1} \leftarrow$ assign new cluster labels with $r^{t+1}$ fixed
8:   $t \leftarrow t + 1$
9: until $E(r^{t-1}, \alpha^{t-1}) - E(r^t, \alpha^t) < \epsilon$
Given an initial value for $\alpha$, we have seen empirically that our function tends to yield the same solution, irrespective of the starting point $r$. In order to also be robust with respect to this initial choice, we choose from a range of initial $r$ values, as described next. From these starting points we choose the one with the lowest objective value (energy) and its corresponding result.
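Putting the pieces together, a minimal sketch of Algorithm 1 might look as follows. We assume an `energy(r, alpha)` callable evaluating Eq. (3), reuse the `update_means`/`update_assignments` helpers sketched earlier, and substitute SciPy's conjugate-gradient routine for the solver of [1]; all of these choices are ours, not the authors'.

import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

def coordinate_descent(r0, R_tilde, energy, C=50, tol=1e-6, max_iter=100):
    """Coordinate descent for the MAP problem (3), as in Algorithm 1."""
    r = r0.copy()
    # Step 2: K-Means on the initial reflectance estimates (five restarts)
    alpha = KMeans(n_clusters=C, n_init=5).fit(r[:, None] * R_tilde).labels_
    E_old = np.inf
    for _ in range(max_iter):
        # Step 5: continuous step, optimize r with cluster labels fixed
        r = minimize(lambda rv: energy(rv, alpha), r, method="CG").x
        # Steps 6-7: recompute cluster means, then reassign labels
        R_bar = update_means(r, R_tilde, alpha, C)
        alpha = update_assignments(r, R_tilde, R_bar)
        # Step 9: stop once the energy decrease falls below the tolerance
        E_new = energy(r, alpha)
        if E_old - E_new < tol:
            break
        E_old = E_new
    return r, alpha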
⁴ Code available at http://people.tuebingen.mpg.de/mkiefel/projects/intrinsic
comment               | E_s | E_cl | E_ret | LOO-CV | best single | image opt.
Color Retinex         |     |      |   X   |  29.5  |    29.5     |   25.5
no edge information   |  X  |  X   |       |  30.0  |    30.6     |   18.2
Col-Ret + global term |     |  X   |   X   |  27.2  |    24.4     |   18.1
full model            |  X  |  X   |   X   |  27.4  |    24.4     |   16.1

Table 1: Comparing the effect of including different terms. The column 'best single' is the parameter set that works best on all 16 images jointly; 'image opt.' is the result when choosing the parameters optimally for each image individually, based on ground truth information.
We have seen empirically that this procedure gives stable results. For instance, we virtually always achieve a lower energy compared to using the ground truth $r$ as the initial starting point.
Initialization of $r$ It is reasonable to assume that the output has a fixed range, i.e. $0 \le R_i^c, s_i \le 1$ (for all $c, i$).⁵ In particular, this is true for the data in [7]. From these constraints we can derive that $\|I_i\| \le r_i \le \sqrt{3}$. Given that, we use the following three starting points for $r$, obtained by varying $\lambda \in \{0.3, 0.5, 0.7\}$: $r_i = \lambda \|I_i\| + \sqrt{3}\,(1 - \lambda)$. Additionally we choose the starting point $r = 1$. From these four different initial settings we choose the result which corresponds to the lowest final energy.
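A small sketch of these starting points, assuming the upper bound √3 reconstructed above; the function name is ours:

import numpy as np

def starting_points(I_norm):
    """The four initial r values described above: three interpolations
    between the bounds ||I_i|| and sqrt(3), plus the constant r = 1."""
    starts = [lam * I_norm + (1.0 - lam) * np.sqrt(3.0)
              for lam in (0.3, 0.5, 0.7)]
    starts.append(np.ones_like(I_norm))
    return starts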
Initialization of $\alpha$ Given an initial value for $r$ we can compute the terms in Eq. (6) and use K-Means clustering to optimize it. We use the best solution from five restarts.
Updating $r$ For a given fixed $\alpha$, this step is implemented using a conjugate gradient descent solver [1]. This typically converges in a few hundred iterations for the images used in the experiments.
Updating $\alpha$ For given $r$ this is a simple assignment problem: $\alpha_i = \arg\min_{c=1,\ldots,C} \|r_i \tilde{R}_i - \bar{R}^c\|^2$.

4 Experiments
For the empirical evaluation we use the intrinsic image database introduced in [7]. This dataset consists of 16 different images, for all of which the ground truth shading and reflectance components are available. We refer to [7] for details on how these data were collected. Some of the images can be seen in Figure 3. In all experiments we compare against Color Retinex, which was found to be the best-performing method among those that take a single image as input. The method from [19] yields better results, but requires multiple input images under different light variations.
4.1 Error metric
We report the performance of the algorithms using the two different error metrics suggested by the creators of the database [7]. The first metric is the average of the localized mean squared error (LMSE) between the predicted and true shading and the predicted and true reflectance images.⁶ Since the LMSE scores vary considerably, we also use the average rank of the algorithm.
4.2 Experimental set-up and parameter learning
All free parameters of the models, e.g. the weights $w_{cl}, w_s, w_r$ and the gradient thresholds $\theta_c, \theta_g$, have been chosen using a leave-one-out estimate (LOO-CV). Due to the high variance of the scores across images, we used the median error to score the parameters: for image $i$, the parameters were chosen that lead to the lowest median error on all images except $i$. Additionally, we record the best single parameter set that works well on all images, and the score that is obtained when using the optimal parameters on each image individually. Although the latter estimate involves knowing ground truth, we are interested in the lower bound of the performance; in an interactive scenario a user can provide additional information to achieve this, as in [4].
We select the parameters from the following ranges. Whenever used, we fix $w_{cl} = 1$, since it suffices to specify the relative difference between the parameters. For models using both the cluster and shading smoothness terms, we select from $w_s \in \{0.001, 0.01, 0.1\}$; for models that use the cluster and Color Retinex terms, $w_r \in \{0.001, 0.01, 0.1, 1, 10\}$. When all three terms are non-zero, we vary $w_s$ as above, paired with $w_r \in \{0.1 w_s, w_s, 10 w_s\}$. The gradient thresholds are varied in $\theta_g, \theta_c \in \{0.075, 1\}$, which yields four possible configurations. The reflectance cluster count is varied in $C \in \{10, 50, 150\}$.
⁵ This assumption is violated if there is no global scalar $k$ such that $0 \le \frac{1}{k}R_i^c,\; k s_i \le 1$.
⁶ We multiply by 1000 for easier readability.
4.3 Comparison - Model variations
In a first set of experiments we investigate the influence of using combinations of the prior terms
described in Section 3.1. The numerical results are summarized in Table 1.
The first observation is that the Color Retinex algorithm (1st row) performs about as well as the system using a shading smoothness prior together with the global factor $E_{cl}$ (2nd row). Note that the latter system does not use any gradient information for estimation. This confirms our intuition that the term $E_{cl}$ provides strong coupling information between reflectance components, as also discussed in Figure 2. The lower value for the image-optimal setting of 18.2, compared to 25.5 for Color Retinex, indicates that one would benefit from a better parameter estimate, i.e. the flexibility of this algorithm is higher. Equipping Color Retinex with the global reflectance term improves all recorded results (3rd vs 2nd row). Again, it seems that the LOO-CV parameter estimation is more stable in this case. Combining all three parts (4th row) does not improve the results over Color Retinex with the reflectance prior, although with knowledge of the optimal image parameters it yields a lower LMSE score (16.1 vs 18.1).
4.4 Comparison to Literature

In Table 2 we compare the numerical results of our method to other intrinsic image algorithms. We again include the single best parameter set and the image-dependent optimal parameter set. Although those are positively biased and obviously decrease with model complexity, we believe that they are informative, given the parameter estimation problems due to the diverse and small database. The full model, using all terms $E_{cl}$, $E_s$ and $E_{ret}$, improves over all the compared methods that use only a single image as input, except SHE− (see below). The difference in rank between Col-Ret and the full model indicates that the latter model is almost always better than Color Retinex alone (direct comparison: 13 out of 16 images). The full model is even better on 6/16 images than the Weiss algorithm [19], which uses multiple images. Regarding the results of SHE−, we could not resolve with certainty whether the reported results should be compared as 'best single' or 'im. opt.' (most parameters in [15] are common to all images, and the strategy for setting λmax is not entirely specified). Assuming 'best single', SHE− is better in terms of LMSE, and in direct comparison both models are better on 8/16 images. Comparing as an 'im. opt.' setting, our full model yields lower LMSE and is better on 12/16 images.

              | LOO-CV | rank | best single | im. opt.
TAP05 [17]    |   -    |  -   |    56*      |    -
TAP06 [16]    |   -    |  -   |    39*      |    -
SHE+ [14]     |  n/a   | n/a  |    56.2     |   n/a
SHE− [15]     |  n/a   | n/a  |   (20.4)    |   n/a
BAS [7]       |  72.6  | 5.1  |    60.3     |   36.6
Gray-Ret [7]  |  40.7  | 4.9  |    40.7     |   28.9
Col-Ret       |  29.5  | 3.7  |    29.5     |   25.5
full model    |  27.4  | 3.0  |    24.4     |   16.1
Weiss [19]    |  21.5  | 2.7  |    21.5     |   21.5
Weiss+Ret [7] |  16.4  | 1.7  |    16.4     |   15.0

Table 2: Method comparison with other intrinsic image algorithms, also compared in [7]. Refer to Tab. 1 for a description of the quantities. Note that the last two methods, building on [19], use multiple input images. For entries '-' we had no individual results (and no code); the two numbers marked * are estimated from Fig. 4a of [7]. SHE+ is our implementation. SHE−: note that in [15] results were only given for 13 of the 16 images from [7]; the additional data was kindly provided by the authors.
4.5 Visual Comparison

In addition to the quantitative numbers, we present some visual comparisons in Figure 3, since the numbers do not always reflect visually pleasing results. For example, note that the method BAS, which either attributes all variations to shading ($r = 1$) or to reflectance alone ($s = 1$), already yields an LMSE of 36.6 if for every image the optimal choice between the two is made. Numerically this is better than [16, 17] and Gray-Ret with proper model selection; however, the results of those algorithms are of course visually more pleasing. We have also tested our method on various other real-world images, and the results are visually similar to [15, 4]. Due to missing ground truth and lack of space we do not show them.

Figure 3 shows results with various models and settings. The 'turtle' example (top three rows) shows the effect of the global term. Without the global term (Color Retinex with LOO-CV and image-optimal settings), the result is imperfect. The key problem of Retinex is highlighted in the two zoom-in pictures with blue borders (second column, left side): the upper one shows the detected edges in black. As expected, the Retinex result has discontinuities at these edges, but over-smooths otherwise (lower picture). With a global term (remaining three results) the images look visually much better.
Figure 3: Various results obtained with different methods and settings (more in the supplementary material). For each result: left, reflectance image; right, shading image.
Note that the third row shows an extreme variation for the full model when switching from the image-optimal setting to the LOO-CV setting. The example 'teabag2' illustrates nicely the point that Color Retinex and our model without the edge term (i.e. no Retinex term) achieve very complementary results. Our model without edges is sensitive to edge transitions, while Color Retinex has problems with fine details, e.g. the small text below 'TWININGS'. Combining all terms (full model) gives the best result with the lowest LMSE score (16.4). Note that in this case we chose the image-optimal settings for both methods, to illustrate the potential of each model.
5 Discussion and Conclusion
We have introduced a new probabilistic model for intrinsic images that explicitly models the reflectance formation process. Several extensions are conceivable; e.g., one can relax the condition $I = sR$ to allow deviations. Another refinement would be to replace the Gaussian cluster term with a color-line term [12]. Building on the work of [5, 4], one can investigate various higher-order (patch-based) priors for both reflectance and shading.

A main concern is that in order to develop more advanced methods, a larger and even more diverse database than that of [7] is needed. This is especially true for enabling the learning of richer models such as Fields of Experts [13] or Gaussian CRFs [18]. We acknowledge the complexity of collecting ground truth data, but believe that the creation of a new, much enlarged dataset is a necessity for future progress in this field.
References
[1] www.gatsby.ucl.ac.uk/~edward/code/minimize.
[2] H. G. Barrow and J. M. Tenenbaum. Recovering intrinsic scene characteristics from images. Computer
Vision Systems, 1978.
[3] A. Blake. Boundary conditions for lightness computation in Mondrian world. Computer Vision, Graphics, and Image Processing, 1985.
[4] A. Bousseau, S. Paris, and F. Durand. User assisted intrinsic images. SIGGRAPH Asia, 2009.
[5] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael. Learning low-level vision. International Journal of
Computer Vision (IJCV), 2000.
[6] B. V. Funt, M. S. Drew, and M. Brockington. Recovering shading from color images. In European
Conference on Computer Vision (ECCV), 1992.
[7] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground-truth dataset and baseline evaluations for intrinsic image algorithms. In International Conference on Computer Vision (ICCV), 2009.
[8] B. K. Horn. Robot Vision. MIT press, 1986.
[9] E. Land and J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, 1971.
[10] A. Levin, D. Lischinski, and Y. Weiss. A closed form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 30(2), 2008.
[11] M. Shao and Y.-H. Wang. Recovering facial intrinsic images from a single input. Lecture Notes in Computer Science, 2009.
[12] I. Omer and M. Werman. Color lines: Image specific color representation. In IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2004.
[13] S. Roth and M. J. Black. Fields of experts. International Journal of Computer Vision (IJCV), 82(2):205-229, 2009.
[14] L. Shen, P. Tan, and S. Lin. Intrinsic image decomposition with non-local texture cues. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[15] L. Shen and C. Yeo. Intrinsic images decomposition using a local and global sparse representation of
reflectance. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[16] M. Tappen, E. Adelson, and W. Freeman. Estimating intrinsic component images using non-linear regression. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
[17] M. Tappen, W. Freeman, and E. Adelson. Recovering intrinsic images from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2005.
[18] M. Tappen, C. Liu, E. H. Adelson, and W. T.Freeman. Learning gaussian conditional random fields for
low-level vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[19] Y. Weiss. Deriving intrinsic images from image sequences. In International Conference on Computer
Vision (ICCV), 2001.
from population neural data
Biljana Petreska
Gatsby Computational Neuroscience Unit
University College London
[email protected]
John P. Cunningham
Dept of Engineering
University of Cambridge
[email protected]
Byron M. Yu
ECE and BME
Carnegie Mellon University
[email protected]
Gopal Santhanam, Stephen I. Ryu∗, Krishna V. Shenoy†
Electrical Engineering
† Bioengineering, Neurobiology and Neurosciences Program
Stanford University
∗ Dept of Neurosurgery, Palo Alto Medical Foundation
{gopals,seoulman,shenoy}@stanford.edu
Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Abstract
Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function. In principle, these dynamics might be identified by purely unsupervised, statistical means. Here, we show that a Hidden Switching Linear Dynamical Systems (HSLDS) model, in which multiple linear dynamical laws approximate a nonlinear and potentially non-stationary dynamical process, is able to distinguish different dynamical regimes within single-trial motor cortical activity associated with the preparation and initiation of hand movements. The regimes are identified without reference to behavioural or experimental epochs, but transitions between them nonetheless correlate strongly with external events whose timing may vary from trial to trial. The HSLDS model also performs better than recent comparable models in predicting the firing rate of an isolated neuron based on the firing rates of others, suggesting that it captures more of the 'shared variance' of the data. Thus, the method is able to trace the dynamical processes underlying the coordinated evolution of network activity in a way that appears to reflect its computational role.
1 Introduction
We are now able to record from hundreds, and very likely soon from thousands, of neurons in
vivo. By studying the activity of these neurons in concert we may hope to gain insight not only into
the computations performed by specific neurons, but also into the computations performed by the
population as a whole. The dynamics of such collective computations can be seen in the coordinated
activity of all of the neurons within the local network; although each individual such neuron may
reflect this coordinated component only noisily. Thus, we hope to identify the computationally-relevant network dynamics by purely statistical, unsupervised means, capturing the shared evolution through latent-variable state-space models [1, 2, 3, 4, 5, 6, 7, 8]. The situation is similar to
that of a camera operating at the extreme of its light sensitivity. A single pixel conveys very little
information about an object in the scene, both due to thermal and shot noise and due to the ambiguity of the single-channel signal. However, by looking at all of the noisy pixels simultaneously and
exploiting knowledge about the structure of natural scenes, the task of extracting the object becomes
feasible. In a similar way, noisy data from many neurons participating in a local network computation needs to be combined with the learned structure of that computation?embodied by a suitable
statistical model?to reveal the progression of the computation.
Neural spiking activity is usually analysed by averaging across multiple experimental trials, to obtain a smooth estimate of the underlying firing rates [2, 3, 4, 5]. However, even under carefully
controlled experimental conditions, the animal's behavior may vary from trial to trial. Reaction time in motor or decision-making tasks, for example, reflects internal processes that can last for
measurably different periods of time. In these cases traditional methods are challenging to apply,
as there is no obvious way of aligning the data from different trials. It is thus essential to develop
methods for the analysis of neural data that can account for the timecourse of a neural computation
during a single trial. Single-trial methods are also attractive for analysing specific trials in which
the subject exhibits erroneous behavior. In the case of a surprisingly long movement preparation
time or a wrong decision, it becomes possible to identify the sources of error at the neural level.
Furthermore, single-trial methods allow the use of more complex experimental paradigms where the
external stimuli can arise at variable times (e.g. variable time delays).
Here, we study a method for the unsupervised identification of the evolution of the network computational state on single trials. Our approach is based on a Hidden Switching Linear Dynamical
System (HSLDS) model, in which the coordinated network influence on the population is captured
by a low-dimensional latent variable which evolves at each time step according to one of a set of
available linear dynamical laws. Similar models have a long history in tracking, speech and, indeed,
neural decoding applications [9, 10, 11] where they are variously known as Switching Linear Dynamical System models, Jump Markov models or processes, switching Kalman Filters or Switching
Linear Gaussian State Space models [12]. We add the prefix ?Hidden? to stress that in our application neither the switching process nor the latent dynamical variable are ever directly observed, and so
learning of the parameters of the model is entirely unsupervised?and again, learning in such models has a long history [13]. The details of the HSLDS model, inference and learning are reviewed
in Section 2. In our models, the transitions between linear dynamical laws may serve two purposes.
First, they may provide a piecewise-linear approximation to a more accurate non-linear dynamical
model [14]. Second, they may reflect genuine changes in the dynamics of the local network, perhaps
due to changes in the goals of the underlying computation under the control of signals external to
the local area. This second role leads to a computational segmentation of individual trials, as we
will see below.
We compare the performance of the HSLDS model to Gaussian Process Factor Analysis (GPFA),
a method introduced by [8] which analyses multi-neuron data on a single-trial basis with similar motivation to our own. Instead of explicitly modeling the network computation as a dynamical process,
GPFA assumes that the computation evolves smoothly in time. In this sense, GPFA is less restrictive
and would perform better if the HSLDS provided a bad model of the real network dynamics. However GPFA assumes that the latent dimensions evolve independently, making GPFA more restrictive
than HSLDS in which the latent dimensions can be coupled. Coupling the latent dynamics introduces complex interactions between the latent dimensions, which allows a richer set of behaviors.
To validate our HSLDS model against GPFA and a single LDS we will use the cross-prediction
measure introduced with GPFA [8] in which the firing rate of each neuron is predicted using only
the firing rates of the rest of the neurons; thus the metric measures how well each model captures the
shared components of the data. GPFA and cross-prediction are reviewed briefly in Section 3, which
also introduces the dataset used; and the cross-prediction performance of the models is compared in
Section 4. Having validated the HSLDS approach, we go on to study the dynamical segmentation
identified by the model in the rest of Section 4, leading to the conclusions of Section 5.
2 Hidden Switching Linear Dynamical Systems
Our goal is to extract the structure of computational dynamics in a cortical network from the recorded
firing rates of a subset of neurons in that network. We use a Hidden Switching Linear Dynamical
Systems (HSLDS) model to capture the component of those firing rates which is shared by multiple
cells, thus exploiting the intuition that network computations should be reflected in coordinated
activity across a local population. This will yield a latent low-dimensional subspace of dynamical
states embedded within the space of noisy measured firing rates, along with a model of the dynamics
within that latent space. The dynamics of the HSLDS model combines a number of linear dynamical
systems (LDS), each of which capture linear Markovian dynamics using a first-order linear autoregressive (AR) rule [9, 15]. By combining multiple such rules, the HSLDS model can provide a
piecewise linear approximation to nonlinear dynamics, and also capture changes in the dynamics of
the local network driven by external influences that presumably reflect task demands. In the model
implemented here, transitions between LDS rules themselves form a Markov chain.
Let $x_{:,t} \in \mathbb{R}^{p \times 1}$ be the low-dimensional computational state that we wish to estimate. This latent computational state reflects the network-level computation performed at timepoint $t$ that gives rise to the observed spiking activity $y_{:,t} \in \mathbb{R}^{q \times 1}$. Note that the dimensionality $p$ of the computational state is lower than the dimensionality $q$ of the recorded neural data, which corresponds to the number of recorded neurons. The evolution of the computational state $x_{:,t}$ is given by

$$x_{:,t} \mid x_{:,t-1}, s_t \sim \mathcal{N}(A_{s_t} x_{:,t-1}, K_{s_t}) \tag{1}$$

where $\mathcal{N}(\mu, \Sigma)$ denotes a Gaussian distribution with mean $\mu$ and covariance $\Sigma$. The linear dynamics matrices $A_{s_t} \in \mathbb{R}^{p \times p}$ and innovations covariance matrices $K_{s_t} \in \mathbb{R}^{p \times p}$ are parameters of the model and need to be learned. These matrices are indexed by a switch variable $s_t \in \{1, \ldots, S\}$, such that different $A_{s_t}$ and $K_{s_t}$ need to be learned for each of the $S$ possible linear dynamical systems. If the dependencies on $s_t$ are removed, Eq. 1 defines a single LDS.
The switch variable $s_t$ specifies which linear dynamical law guides the evolution of the latent state $x_{:,t}$ at timepoint $t$, and as such provides a piecewise approximation to the nonlinear dynamics with which $x_{:,t}$ may evolve. The variable $s_t$ itself is drawn from a Markov transition matrix $M$ learned from the data:

$$s_t \sim \text{Discrete}(M_{:, s_{t-1}})$$
As mentioned above, the observed neural activity $y_{:,t} \in \mathbb{R}^{q \times 1}$ is generated by the latent dynamics and denotes the spike counts (Gaussianised as described below) of $q$ simultaneously recorded neurons at timepoints $t \in \{1, \ldots, T\}$. The observations $y_{:,t}$ are related to the latent computational states $x_{:,t}$ through a linear-Gaussian relationship:

$$y_{:,t} \mid x_{:,t} \sim \mathcal{N}(C x_{:,t} + d, R),$$

where the observation matrix $C \in \mathbb{R}^{q \times p}$, offset $d \in \mathbb{R}^{q \times 1}$, and covariance matrix $R \in \mathbb{R}^{q \times q}$ are model parameters that need to be learned. We force $R$ to be diagonal, keeping track only of the independent noise variances. This means that the firing rates of different neurons are independent conditioned on the latent dynamics, compelling the shared variance to live only in the latent space. Note that different neurons can have different independent noise variances. We use a Gaussian relationship instead of a point-process likelihood model for computational tractability. Finally, the observation dynamics do not depend on which linear dynamical system is used (i.e., they are independent of $s_t$). A graphical model of the particular HSLDS instance we have used is shown in Figure 1.
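As a concrete summary of the generative process just described, the following NumPy sketch draws one trial from an HSLDS; the parameter shapes, the uniform initial switch state and the zero initial latent state are our own illustrative assumptions, not specifications from the paper.

import numpy as np

def sample_hslds(A, K, M, C, d, R_diag, T, seed=0):
    """Draw one trial (s, x, y) from the HSLDS generative model above.

    A, K   : (S, p, p) per-state dynamics matrices A_s and innovation
             covariances K_s.
    M      : (S, S) Markov transition matrix; column M[:, s] gives
             p(s_t | s_{t-1} = s), as in s_t ~ Discrete(M[:, s_{t-1}]).
    C, d   : (q, p) observation matrix and (q,) offset.
    R_diag : (q,) independent observation-noise variances (diagonal R).
    """
    rng = np.random.default_rng(seed)
    S, p, _ = A.shape
    q = C.shape[0]
    s = np.zeros(T, dtype=int)
    x = np.zeros((T, p))            # initial latent state fixed at zero
    y = np.zeros((T, q))
    s[0] = rng.integers(S)          # uniform initial switch state (assumed)
    for t in range(1, T):
        s[t] = rng.choice(S, p=M[:, s[t - 1]])
        x[t] = rng.multivariate_normal(A[s[t]] @ x[t - 1], K[s[t]])
    for t in range(T):
        y[t] = rng.multivariate_normal(C @ x[t] + d, np.diag(R_diag))
    return s, x, y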
Inference and learning in the model are performed by approximate Expectation Maximisation (EM). Inference (the E-step) requires finding appropriate expected sufficient statistics under the distributions of the computational latent state and switch variable at each point in time given the observed neural data, $p(x_{1:T}, s_{1:T} \mid y_{1:T})$. Inference in the HSLDS is computationally intractable because of the following exponential complexity. At the initial timepoint, $s_0$ can take one of $S$ discrete values. At the next timepoint, each of the $S$ possible latent states can again evolve according to $S$ different linear dynamical laws, such that at timepoint $t$ we need to keep track of $S^t$ possible solutions. To avoid
Figure 1: Graphical model of the HSLDS. The first layer corresponds to the discrete switch variable that dictates which of the $S$ available linear dynamical systems (LDSs) will guide the latent dynamics shown in the second layer. The latent dynamics evolve as a linear dynamical system at timepoint $t$ and presumably capture relevant aspects of the computation performed at the level of the recorded neural network. The relationship between the latent dynamics and neural data (third layer) is again linear-Gaussian, such that each computational state is associated with a specific denoised firing pattern. The dimensionality of the latent dynamics $x$ is lower than that of the observations $y$ (equivalent to the number of recorded neurons), meaning that $x$ extracts relevant features reflected in the shared variance of $y$. Note that there are no connections between $x_{t-1}$ and $s_t$, nor between $s_t$ and $y$.
this exponential scaling, we use an approximate inference algorithm based on Assumed Density Filtering [16, 17, 18] and Assumed Density Smoothing [19]. The algorithm comprises a single forward pass that estimates the filtered posterior distribution $p(x_t, s_t \mid y_{1:t})$ and a single backward pass that estimates the smoothed posterior distribution $p(x_t, s_t \mid y_{1:T})$. The key idea is to approximate these posterior distributions by a simple tractable form, such as a single Gaussian. The approximated distribution is then propagated through time, conditioned on each new observation. The smoothing step requires an additional simplifying assumption, $p(x_{t+1} \mid s_t, s_{t+1}, y_{1:T}) \approx p(x_{t+1} \mid s_{t+1}, y_{1:T})$, as proposed in [19]. It is also possible to use a mixture of a fixed number of Gaussians as the approximating distribution, at the cost of greater computational time. We found that this approach yielded similar results in pilot runs, and thus retained the single-Gaussian approximation.
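The collapse at the heart of Assumed Density Filtering, matching the first two moments of a Gaussian mixture with a single Gaussian, can be sketched as follows (our own minimal implementation, not the authors' code):

import numpy as np

def collapse_gaussian_mixture(w, mus, Sigmas):
    """Moment-match a Gaussian mixture with a single Gaussian.

    w      : (S,) mixture weights, summing to one.
    mus    : (S, p) component means.
    Sigmas : (S, p, p) component covariances.
    """
    mu = np.einsum('s,sp->p', w, mus)           # weighted mean
    dev = mus - mu                              # (S, p) mean deviations
    Sigma = np.einsum('s,spq->pq', w, Sigmas) \
          + np.einsum('s,sp,sq->pq', w, dev, dev)
    return mu, Sigma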
Learning the model parameters (the M-step) can be performed using the standard procedure of maximizing the expected joint log-likelihood

$$\sum_{n=1}^{N} \left\langle \log p(x_{1:T}^n, y_{1:T}^n) \right\rangle_{p_{\text{old}}(x^n \mid y^n)}$$

with respect to the parameters $A_{s_t}$, $K_{s_t}$, $M$, $C$, $d$ and $R$, where the superscript $n$ indexes data from each of $N$ different trials. In practice, the estimated individual variance of particularly low-firing neurons was very low and likely to be incorrectly estimated. We therefore assumed a Wishart prior on the observation covariance matrix $R$, which resulted in an update rule that adds a fixed parameter $\lambda \in \mathbb{R}$ to all of the values on the diagonal. In the analyses below, $\lambda$ was fixed to the value that gave the best cross-prediction results (see Section 3.2). Finally, the most likely state of the switch variable, $\hat{s}_{1:T} = \arg\max_{s_{1:T}} p(s_{1:T} \mid y_{1:T})$, was estimated using the standard Viterbi algorithm [20], which ensures that the most likely switch-variable path is in fact possible in terms of the transitions allowed by $M$.
3 Model Comparison and Experimental Data

3.1 Gaussian Process Factor Analysis
Below, we compare the performance of the HSLDS model to Gaussian Process Factor Analysis
(GPFA), another method for estimating the functional computation of a set of neurons. GPFA is an
extension of Factor Analysis that leverages time-label information, introduced in [8]. In this model,
the latent dynamics evolve as a Gaussian Process (GP), with a smooth correlation structure between
the latent states at different points in time. This combination of FA and the GP prior work together
to identify smooth low-dimensional latent trajectories.
Formally, each dimension of the low-dimensional latent states $x_{:,t}$, indexed by $i \in \{1, \ldots, p\}$, defines a separate GP:

$$x_{i,:} \sim \mathcal{N}(0, K_i)$$

where $x_{i,:} \in \mathbb{R}^{1 \times T}$ is the trajectory in time of the $i$th latent dimension and $K_i \in \mathbb{R}^{T \times T}$ is the $i$th GP smoothing covariance matrix. $K_i$ is set to the commonly-used squared exponential (SE) covariance function, as defined in [8].
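For illustration, a generic squared exponential covariance matrix over T time bins might be constructed as below; the hyperparameter names are our own, and [8] should be consulted for the exact parameterisation used in GPFA.

import numpy as np

def squared_exponential_K(T, length_scale, sig_f2, sig_n2):
    """Generic SE covariance over T time bins: signal variance sig_f2,
    time-scale length_scale, plus independent noise variance sig_n2."""
    t = np.arange(T)
    dt2 = (t[:, None] - t[None, :]) ** 2
    return sig_f2 * np.exp(-dt2 / (2.0 * length_scale ** 2)) \
         + sig_n2 * np.eye(T)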
Whereas HSLDS explicitly models the dynamics of the network computation, GPFA only assumes that the evolution of the computational state is smooth. Thus GPFA is a less restrictive model than HSLDS, but being model-free also makes it less informative about the dynamical rules that underlie the computation. A major advantage of GPFA over HSLDS is that its solution is approximation-free and faster to run.
3.2 Cross-prediction performance measure
To compare model goodness-of-fit we adopt the cross-prediction metric of [8]. All of these models attempt to capture the shared variance in the data, and so performance may be measured by how well the activity of one neuron can be predicted using the activity of the rest of the neurons. It is important to measure the cross-prediction error on trials that have not been used for learning the parameters of the model. We arrange the observed neural data in a matrix $Y = [y_{:,1}, \ldots, y_{:,T}] \in \mathbb{R}^{q \times T}$, where each row $y_{j,:}$ represents the activity of neuron $j$ in time. The model cross-prediction for neuron $j$ is $\hat{y}_{j,:} = E[y_{j,:} \mid Y_{-j,:}]$, where $Y_{-j,:} \in \mathbb{R}^{(q-1) \times T}$ represents all but the $j$th row of $Y$. We first estimate the trajectories in the latent space using all but the $j$th neuron, $P(x_{1:p,:} \mid Y_{-j,:})$, on a set of testing trials. We then project this estimate back to the high-dimensional space to obtain the model cross-prediction $\hat{y}_{j,:}$ using $\hat{y}_{j,t} = C_{j,:} \cdot E[x_{:,t} \mid Y_{-j,:}] + d_j$. The error is computed as the sum of squared errors between the model cross-prediction and the observed Gaussianised spike counts, across all neurons and timepoints; we plot the difference between this error (per time bin) and the average temporal variance of the corresponding neuron in the corresponding trial (denoted Var-MSE).

Note that the performance of different models can be evaluated as a function of the dimensionality of the latent state. The HSLDS model has two further free parameters which influence cross-prediction performance: the number of available LDSs $S$ and the concentration $\lambda$ of the Wishart prior.
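The leave-one-neuron-out loop can be sketched as follows; `predict_latents` is a hypothetical stand-in for model-specific inference (HSLDS, LDS or GPFA) run with neuron j excluded, and all names are ours.

import numpy as np

def cross_prediction_error(Y, predict_latents, C, d):
    """Leave-one-neuron-out cross-prediction error on one test trial.

    Y               : (q, T) Gaussianised spike counts.
    predict_latents : hypothetical callable returning E[x | Y_minus_j]
                      of shape (p, T), run with neuron j excluded from
                      both the data and the observation model.
    C, d            : learned observation matrix (q, p) and offset (q,).
    """
    q, T = Y.shape
    total = 0.0
    for j in range(q):
        keep = np.arange(q) != j
        x_hat = predict_latents(Y[keep], held_out=j)  # (p, T)
        y_hat = C[j] @ x_hat + d[j]                   # predicted row j
        total += np.sum((Y[j] - y_hat) ** 2)
    return total / (q * T)                            # error per time bin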
3.3 Data
We applied the model to data recorded in the premotor and motor cortices of a rhesus macaque while it performed a delayed center-out reach task. A trial began with the animal touching and looking at an illuminated point at the center of a vertically oriented screen. A target was then illuminated at a distance of 10 cm, in one of seven directions (0°, 45°, 90°, 135°, 180°, 225°, 315°) from this central starting point. The target remained visible while the animal prepared but withheld a movement to touch it. After a random delay of between 200 and 700 ms, the illumination of the starting point was extinguished; this was the animal's cue (the 'go cue') to reach to the target to obtain a reward. Neural activity was recorded from 105 single and multi-units, using a 96-electrode array (Blackrock, Salt Lake City, UT). All active units were included in the analysis, without selection based on tuning. The spike counts were binned at a relatively fine time-scale of 10 ms (non-overlapping bins). As in [8], the observations were taken to be the square roots of these spike counts, a transformation that helps to Gaussianise and stabilise the variance of count data [21].
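This preprocessing amounts to a few lines; a minimal sketch, with our own function name:

import numpy as np

def gaussianise_counts(spike_times, bin_edges):
    """Bin spike times (here, 10 ms bins) and apply the square-root
    transform used above to stabilise the variance of the counts [21]."""
    counts, _ = np.histogram(spike_times, bins=bin_edges)
    return np.sqrt(counts)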
4 Results
We first compare the cross-prediction-derived goodness-of-fit of the HSLDS model to that of the
single LDS and GPFA models in section 4.1. We find that HSLDS provides a better model of the
shared component of the recorded data than do the two other methods. We then study the dynamical
segmentation found by the HSLDS model, first by looking at a typical example (section 4.2) and
then by correlating dynamical switches to behavioural events (section 4.3). We show that the latent
Figure 2: Performance of the HSLDS (green solid line), LDS (blue dashed) and GPFA (red dash-dotted) models. Analyses are based on one movement type, with the target in the 45° direction. Cross-prediction error was computed using 4-fold cross-validation. HSLDS with different values of $S$ also outperformed the LDS case (which is equivalent to $S = 1$). Performance was more sensitive to the strength $\lambda$ of the Wishart prior; the best-performing model is shown.
[Plot: Var-MSE (×10⁻³, ranging from 6.0 to 7.2) versus latent state dimensionality p (4 to 15), for HSLDS with S = 7, LDS, and GPFA.]
trajectories and dynamical transitions estimated by the model predict reaction time, a behavioral
covariate that varies from trial-to-trial. Finally we argue that these behavioral correlates are difficult
to obtain using a standard neural analysis method.
4.1 Cross-prediction
To validate the HSLDS model we compared it to the GPFA model described in Section 3.1 and to a single LDS model. Since all of these models attempt to capture the shared variance of the data across neurons and multiple trials, we used cross-prediction to measure their performance. Cross-prediction looks at how well the spiking activity of one neuron is predicted just by looking at the spiking activity of all of the other neurons (described in detail in Section 3.2). We found that both the single LDS and HSLDS models, which allow for coupled latent dynamics, do better than GPFA (Figure 2), which could be attributed to the fact that GPFA constrains the different dimensions of the latent computational state to evolve independently. The HSLDS model also outperforms a single LDS, yielding the lowest prediction error for all of the latent dimensionalities we have looked at, arguing that a nonlinear model of the latent dynamics is better than a linear model. Note that the minimum prediction error asymptotes after 10 latent dimensions. It is tempting to suggest that for this particular task the effective dimensionality of the spiking activity is much lower than that of the 105 recorded neurons, thereby justifying the use of a low-dimensional manifold to describe the underlying computation. This could be interpreted as evidence that neurons may carry redundant information and that the (nonlinear) computational function of the network is better reflected at the level of the population of neurons, rather than in single neurons.
4.2 Data segmentation
By definition, the HSLDS model partitions the latent dynamics underlying the observed data into
time-labeled segments that may evolve linearly. The segments found by HSLDS correspond to
periods of time in which the latent dynamics seem to evolve according to different linear dynamical
laws, suggesting that the observed firing pattern of the network has changed as a whole. Thus, by
construction, the HSLDS model can subdivide the network activity into different firing regimes for
each trial specifically.
For the purpose of visualization, we applied an additional orthonormalization post-processing step (as in [8]), which helps us order the latent dimensions according to the amount of covariance explained. The orthonormalization consists of finding the singular value decomposition of $C$, allowing us to write the product $C x_{:,t}$ as $U_C (D_C V_C^\top x_{:,t})$, where $U_C \in \mathbb{R}^{q \times p}$ is a matrix with orthonormal columns. We will refer to $\tilde{x}_{:,t} = D_C V_C^\top x_{:,t}$ as the orthonormalised latent state at time $t$. The first dimension of the orthonormalised latent state in time, $\tilde{x}_{1,:}$, then corresponds to the latent trajectory which explains the most covariance. Since the columns of $U_C$ are orthonormal, the relationship between the orthonormalised latent trajectories and the observed data can be interpreted in an intuitive way, similarly to Principal Components Analysis (PCA). The results presented here were obtained by setting the number of switching LDSs $S$, the latent space dimensionality $p$ and the Wishart prior $\lambda$ to values that yielded a reasonably low cross-prediction error.
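A minimal sketch of this post-processing step via the SVD of C (our own code, not the authors'):

import numpy as np

def orthonormalise(C, X):
    """Rewrite C x_t = U_C (D_C V_C' x_t) as described above.

    C : (q, p) observation matrix; X : (p, T) latent trajectories.
    Returns U_C with orthonormal columns and the orthonormalised
    latent trajectories, ordered by explained covariance.
    """
    U, svals, Vt = np.linalg.svd(C, full_matrices=False)
    X_orth = np.diag(svals) @ Vt @ X     # D_C V_C' x_t
    return U, X_orth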
Figure 3 shows a typical example of the HSLDS model applied to data in one movement direction,
where the different trials are fanned out vertically for illustration purposes. The first orthonormalized
Figure 3: HSLDS applied to neural data from the 45° movement direction ($S = 7$, $p = 7$, $\lambda = 0.05$). The first dimension of the orthonormalised latent trajectory is shown. The colors denote the different linear dynamical systems used by the model. Each line is a different trial, aligned to the target onset (left) and go cue (right), and sorted by reaction time. Switches reliably follow the target onset and precede the movement onset, with a time lag that is correlated with reaction time.
latent dimension indicates a transient in the recorded population activity shortly after target onset
(which is marked by the red dots) and a sustained change of activity after the go cue (marked by
the green dots). The colours of the lines indicate the most likely setting of the switching variable
at each time. It is evident that the learned solution segments each trial into a broadly reproducible
sequence of dynamical epochs. Some transitions appear to reliably follow or precede external events
(even though these events were not used to train the segmentation) and may reflect actual changes
in dynamics due to external influences. Others seem to follow each other in quick succession, and
may instead reflect linear approximations to non-linear dynamical processes?evident particularly
during transiently rapid changes in the latent state. Unfortunately, the current model does not allow
us to distinguish quantitatively between these two types of transition.
Note that the delays (time from target onset to go cue) used in the experiment varied from 200 to 700 ms, such that the model systematically detected a change in the neural firing rates shortly after the go cue on each individual trial. The model succeeds at detecting these changes in a purely unsupervised fashion, as it was not given any timing information about the external experimental inputs.
4.3 Behavioral correlates during single trials
It is not surprising that the firing rates of the recorded neurons change during different behavioral
periods. For example, neural activity is often observed to be higher during movement execution
than during movement preparation. However, the HSLDS method reliably detects the behaviourallycorrelated changes in the pattern of neural activity across many neurons on single trials.
In order to ensure that HSLDS captures trial-specific information, we looked at whether the time post-go-cue at which the model estimates the first switch in the neural dynamics could predict the subsequent onset of movement, and thus the trial reaction time (RT). We found that the filtered
model (which does not incorporate spiking data from future times into its estimate of the switching
variable) could explain 52% of the reaction time variance on average, across the 7 reach directions
(Figure 4).
Could a more conventional approach do better? We attempted to use a combination of the 'population vector' (PV) method and the 'rise-to-threshold' hypothesis. The PV sums the preferred directions of a population of neurons, weighted by the respective spike counts, in order to decode the represented direction of movement [22]. The rise-to-threshold hypothesis asserts that neural firing rates rise during a preparatory period and that movement is initiated when the population rate crosses a threshold [23]. The neural data used for this analysis were smoothed with a Gaussian window and sampled at 1 ms. We first estimated the preferred direction $\hat{p}_q$ of the neuron indexed by $q$ as the
Figure 4: Correlation ($R^2 = 0.52$) between the reaction time and the first filtered HSLDS switch following the go cue, on a trial-by-trial basis and averaged across directions. Symbols correspond to movements in different directions. Note that in two catch trials the model did not switch following the go cue, so we considered the last switch before the cue.
unit vector in the direction of $\vec{p}_q = \sum_{d=1}^{7} r_q^d \, \vec{v}^d$, where $d$ indexes the instructed movement direction $\vec{v}^d$ and $r_q^d$ is the mean firing rate of neuron $q$ during all movements in direction $d$. The preferred direction of a given neuron often differed between plan and movement activity, so we used data from movement onset until the movement end to estimate $r_q^d$, as this gave us better results when trying to estimate a threshold in the rising movement-related activity. We then estimated the instantaneous amplitude of the network PV at time $t$ as $s_t^d = \left\| \sum_{q=1}^{Q} y_{q,t} \, \hat{p}_q \right\|$, where $y_{q,t}$ is the smoothed spike count of neuron $q$ at time $t$, $Q$ is the number of neurons, and $\|\vec{w}\|$ denotes the norm of the vector $\vec{w}$. Finally, we searched for a threshold length (one per direction), such that the time at which the PV exceeded this length on each trial was best correlated with RT.
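A sketch of the threshold-crossing computation, assuming planar (2D) preferred directions and our own naming:

import numpy as np

def pv_threshold_time(Y, p_hat, theta, t_axis):
    """First time the population-vector amplitude crosses the threshold.

    Y      : (Q, T) smoothed spike counts sampled at 1 ms.
    p_hat  : (Q, 2) unit preferred directions of the Q neurons.
    theta  : scalar threshold length for this movement direction.
    t_axis : (T,) time stamps of the samples.
    """
    pv = Y.T @ p_hat                     # (T, 2) population vector over time
    amp = np.linalg.norm(pv, axis=1)     # ||sum_q y_{q,t} p_hat_q||
    above = np.nonzero(amp > theta)[0]
    return t_axis[above[0]] if above.size else None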
Note that this approach uses considerable supervision that was denied to the HSLDS model. First,
the movement epoch of each trial was identified to define the PV. Second, the thresholds were
selected so as to maximize the RT correlation, a direct form of supervision. Finally, this selection
was based on the same data as were used to evaluate the correlation score, thus leading to potential
overfitting in the explained variance. The HSLDS model was also trained on the same trials, which
could lead to some overfitting in terms of likelihood, but should not introduce overfitting in the
correlation between switch times and RT, which is not directly optimised.
Despite these considerable advantages, the PV approach did not predict RT as well as did the
HSLDS, yielding an average variance explained across conditions of 48%.
5 Conclusion
It appears that the Hidden Switching Linear Dynamical System (HSLDS) model is able to appropriately extract relevant aspects of the computation reflected in a network of firing neurons. HSLDS
explicitly models the nonlinear dynamics of the computation as a piecewise linear process that captures the shared variance in the neural data across neurons and multiple trials.
One limitation of HSLDS is the approximate EM algorithm used for inference and learning of the model parameters: we have traded off computational tractability against accuracy, such that the model may settle into a solution that is simpler than the optimum. A second limitation of HSLDS is the slow training time of EM, which forces offline learning of the model parameters.

Despite these simplifications, HSLDS can be used to dynamically segment the neural activity, at the level of the whole population of neurons, into periods of different firing regimes. We showed that in a delayed-reach task the firing regimes found correlate well with the experimental behavioral periods. The computational trajectories found by HSLDS are trial-specific, with a dimensionality that is more suitable for visualization than the high-dimensional spiking activity. Overall, HSLDS models are attractive for uncovering behavioral correlates in neural data on a single-trial basis.
Acknowledgments.
This work was supported by DARPA REPAIR (N66001-10-C-2010), the Swiss
National Science Foundation Fellowship PBELP3-130908, the Gatsby Charitable Foundation, UK EPSRC
EP/H019472/1 and NIH-NINDS-CRCNS-R01, NDSEG and NSF Graduate Fellowships, Christopher and Dana
Reeve Foundation. We are very grateful to Jacob Macke, Lars Buesing and Alexander Lerchner for discussion.
References
[1] A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Computation, 15(5):965-991, 2003.
[2] M. Stopfer, V. Jayaraman, and G. Laurent. Intensity versus identity coding in an olfactory system. Neuron, 39:991-1004, 2003.
[3] S. L. Brown, J. Joseph, and M. Stopfer. Encoding a temporally structured stimulus with a temporally structured neural representation. Nature Neuroscience, 8(11):1568-1576, 2005.
[4] R. Levi, R. Varona, Y. I. Arshavsky, M. I. Rabinovich, and A. I. Selverston. The role of sensory network dynamics in generating a motor program. Journal of Neuroscience, 25(42):9807-9815, 2005.
[5] O. Mazor and G. Laurent. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron, 48:661-673, 2005.
[6] B. M. Broome, V. Jayaraman, and G. Laurent. Encoding and decoding of overlapping odor sequences. Neuron, 51:467-482, 2006.
[7] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology, 17(5):609-618, 2007.
[8] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102:614-635, 2009.
[9] Y. Bar-Shalom and Xiao-Rong Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Norwood, MA, 1998.
[10] B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 15(6):1850-1858, 2007.
[11] W. Wu, M. J. Black, D. Mumford, Y. Gao, E. Bienenstock, and J. P. Donoghue. Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Transactions on Biomedical Engineering, 51(6):933-942, 2004.
[12] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press. In press, 2011.
[13] K. P. Murphy. Switching Kalman filters. Technical Report 98-10, Compaq Cambridge Research Lab, 1998.
[14] B. M. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1545-1552. Cambridge, MA: MIT Press, 2006.
[15] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1999.
[16] D. L. Alspach and H. W. Sorenson. Nonlinear Bayesian estimation using Gaussian sum approximations. IEEE Transactions on Automatic Control, 17(4):439-448, 1972.
[17] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence - UAI, pages 33-42. Morgan Kaufmann, 1998.
[18] T. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT Media Lab, 2001.
[19] D. Barber. Expectation correction for smoothed inference in switching linear dynamical systems. Journal of Machine Learning Research, 7:2515-2540, 2006.
[20] A. J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, IT-13:260-267, 1967.
[21] N. A. Thacker and P. A. Bromiley. The effects of a square root transform on a Poisson distributed quantity. Technical Report 2001-010, University of Manchester, 2001.
[22] A. P. Georgopoulos, A. B. Schwartz, and R. E. Kettner. Neuronal population coding of movement direction. Science, 233:1416-1419, 1986.
[23] W. Erlhagen and G. Schöner. Dynamic field theory of movement preparation. Psychol Rev, 109:545-572, 2002.
The Doubly Correlated Nonparametric Topic Model
Dae Il Kim and Erik B. Sudderth
Department of Computer Science
Brown University, Providence, RI 02906
[email protected], [email protected]
Abstract
Topic models are learned via a statistical model of variation within document collections, but designed to extract meaningful semantic structure. Desirable traits
include the ability to incorporate annotations or metadata associated with documents; the discovery of correlated patterns of topic usage; and the avoidance of
parametric assumptions, such as manual specification of the number of topics. We
propose a doubly correlated nonparametric topic (DCNT) model, the first model
to simultaneously capture all three of these properties. The DCNT models metadata via a flexible, Gaussian regression on arbitrary input features; correlations
via a scalable square-root covariance representation; and nonparametric selection
from an unbounded series of potential topics via a stick-breaking construction.
We validate the semantic structure and predictive performance of the DCNT using
a corpus of NIPS documents annotated by various metadata.
1 Introduction
The contemporary problem of exploring huge collections of discrete data, from biological sequences
to text documents, has prompted the development of increasingly sophisticated statistical models.
Probabilistic topic models represent documents via a mixture of topics, which are themselves distributions on the discrete vocabulary of the corpus. Latent Dirichlet allocation (LDA) [3] was the
first hierarchical Bayesian topic model, and remains influential and widely used. However, it suffers
from three key limitations which are jointly addressed by our proposed model.
The first assumption springs from LDA's Dirichlet prior, which implicitly neglects correlations¹ in
document-specific topic usage. In diverse corpora, true semantic topics may exhibit strong (positive
or negative) correlations; neglecting these dependencies may distort the inferred topic structure. The
correlated topic model (CTM) [2] uses a logistic-normal prior to express correlations via a latent
Gaussian distribution. However, its usage of a "soft-max" (multinomial logistic) transformation
requires a global normalization, which in turn presumes a fixed, finite number of topics.
The second assumption is that each document is represented solely by an unordered "bag of words".
However, text data is often accompanied by a rich set of metadata such as author names, publication dates, relevant keywords, etc. Topics that are consistent with such metadata may also be
more semantically relevant. The Dirichlet multinomial regression (DMR) [11] model conditions
LDA?s Dirichlet parameters on feature-dependent linear regressions; this allows metadata-specific
topic frequencies but retains other limitations of the Dirichlet. Recently, the Gaussian process topic
model [1] incorporated correlations at the topic level via a topic covariance, and the document level
via an appropriate GP kernel function. This model remains parametric in its treatment of the number of topics, and computational scaling to large datasets is challenging since learning scales superlinearly with the number of documents.
¹ One can exactly sample from a Dirichlet distribution by drawing a vector of independent gamma random variables, and normalizing so they sum to one. This normalization induces slight negative correlations.
The third assumption is the a priori choice of the number of topics. The most direct nonparametric
extension of LDA is the hierarchical Dirichlet process (HDP) [17]. The HDP allows an unbounded
set of topics via a latent stochastic process, but nevertheless imposes a Dirichlet distribution on any
finite subset of these topics. Alternatively, the nonparametric Bayes pachinko allocation [9] model
captures correlations within an unbounded topic collection via an inferred, directed acyclic graph.
More recently, the discrete infinite logistic normal [13] (DILN) model of topic correlations used an
exponentiated Gaussian process (GP) to rescale the HDP. This construction is based on the gamma
process representation of the DP [5]. While our goals are similar, we propose a rather different
model based on the stick-breaking representation of the DP [16]. This choice leads to arguably
simpler learning algorithms, and also facilitates our modeling of document metadata.
In this paper, we develop a doubly correlated nonparametric topic (DCNT) model which captures
between-topic correlations, as well as between-document correlations induced by metadata, for an
unbounded set of potential topics. As described in Sec. 2, the global soft-max transformation of
the DMR and CTM is replaced by a stick-breaking transformation, with inputs determined via both
metadata-dependent linear regressions and a square-root covariance representation. Together, these
choices lead to a well-posed nonparametric model which allows tractable MCMC learning and inference (Sec. 3). In Sec. 4, we validate the model using a toy dataset, as well as a corpus of NIPS
documents annotated by author and year of publication.
2 A Doubly Correlated Nonparametric Topic Model
The DCNT is a hierarchical, Bayesian nonparametric generalization of LDA. Here we give an
overview of the model structure (see Fig. 1), focusing on our three key innovations.
2.1 Document Metadata
Consider a collection of D documents. Let $\phi_d \in \mathbb{R}^F$ denote a feature vector capturing the metadata associated with document d, and $\phi$ an $F \times D$ matrix of corpus metadata. When metadata is unavailable, we assume $\phi_d = 1$. For each of an unbounded sequence of topics k, let $\eta_{fk} \in \mathbb{R}$ denote an associated significance weight for feature f, and $\eta_{:k} \in \mathbb{R}^F$ a vector of these weights.²
We place a Gaussian prior $\eta_{:k} \sim \mathcal{N}(\mu, \Lambda^{-1})$ on each topic's weights, where $\mu \in \mathbb{R}^F$ is a vector of mean feature responses, and $\Lambda$ is an $F \times F$ diagonal precision matrix. In a hierarchical Bayesian fashion [6], these parameters have priors $\mu_f \sim \mathcal{N}(0, \gamma_\mu)$, $\lambda_f \sim \mathrm{Gam}(a_f, b_f)$. Appropriate values for the hyperparameters $\gamma_\mu$, $a_f$, and $b_f$ are discussed later.
Given $\eta$ and $\phi_d$, the document-specific "score" for topic k is sampled as $u_{kd} \sim \mathcal{N}(\eta_{:k}^T \phi_d, 1)$. These real-valued scores are mapped to document-specific topic frequencies $\pi_{kd}$ in subsequent sections.
2.2 Topic Correlations
For topic k in the ordered sequence of topics, we define a sequence of k linear transformation weights $A_{k\ell}$, $\ell = 1, \dots, k$. We then sample a variable $v_{kd}$ as follows:
$$v_{kd} \sim \mathcal{N}\Big( \sum_{\ell=1}^{k} A_{k\ell}\, u_{\ell d},\; \lambda_v^{-1} \Big) \qquad (1)$$
Let A denote a lower triangular matrix containing these values $A_{k\ell}$, padded by zeros. Slightly abusing notation, we can then compactly write this transformation as $v_{:d} \sim \mathcal{N}(A u_{:d}, L^{-1})$, where $L = \lambda_v I$ is an infinite diagonal precision matrix. Critically, note that the distribution of $v_{kd}$ depends only on the first k entries of $u_{:d}$, not the infinite tail of scores for subsequent topics.
Marginalizing $u_{:d}$, the covariance of $v_{:d}$ equals $\mathrm{Cov}[v_{:d}] = A A^T + L^{-1} \triangleq \Sigma$. As in the classical factor analysis model, A encodes a square-root representation of an output covariance matrix. Our integration of input metadata has close connections to the semiparametric latent factor model [18], but we replace their kernel-based GP covariance representation with a feature-based regression.
² For any matrix $\eta$, we let $\eta_{:k}$ denote a column vector indexed by k, and $\eta_{f:}$ a row vector indexed by f.
Figure 1: Directed graphical representation of a DCNT model for D documents containing N words. Each of the unbounded set of topics has a word distribution $\Omega_k$. The topic assignment $z_{dn}$ for word $w_{dn}$ depends on document-specific topic frequencies $\pi_d$, which have a correlated dependence on the metadata $\phi_d$ produced by A and $\eta$. The Gaussian latent variables $u_d$ and $v_d$ implement this mapping, and simplify MCMC methods.
Given similar lower triangular representations of factorized covariance matrices, conventional Bayesian factor analysis models place a symmetric Gaussian prior $A_{k\ell} \sim \mathcal{N}(0, \lambda_A^{-1})$. Under this prior, however, $E[\Sigma_{kk}] = k\lambda_A^{-1}$ grows linearly with k. This can produce artifacts for standard factor analysis [10], and is disastrous for the DCNT where k is unbounded. We instead propose an alternative prior $A_{k\ell} \sim \mathcal{N}(0, (k\lambda_A)^{-1})$, so that the variance of entries in the k-th row is reduced by a factor of k. This shrinkage is carefully chosen so that $E[\Sigma_{kk}] = \lambda_A^{-1}$ remains constant.
If we constrain A to be a diagonal matrix, with $A_{kk} \sim \mathcal{N}(0, \lambda_A^{-1})$ and $A_{k\ell} = 0$ for $k \neq \ell$, we recover a simplified singly correlated nonparametric topic (SCNT) model which captures metadata but not topic correlations. For either model, the precision parameters are assigned conjugate gamma priors $\lambda_v \sim \mathrm{Gam}(a_v, b_v)$, $\lambda_A \sim \mathrm{Gam}(a_A, b_A)$.
2.3 Logistic Mapping to Stick-Breaking Topic Frequencies
Stick breaking representations are widely used in applications of nonparametric Bayesian models, and lead to convenient sampling algorithms [8]. Let $\pi_{kd}$ be the probability of choosing topic k in document d, where $\sum_{k=1}^{\infty} \pi_{kd} = 1$. The DCNT constructs these probabilities as follows:
$$\pi_{kd} = \sigma(v_{kd}) \prod_{\ell=1}^{k-1} \sigma(-v_{\ell d}), \qquad \sigma(v_{kd}) = \frac{1}{1 + \exp(-v_{kd})}. \qquad (2)$$
Here, $0 < \sigma(v_{kd}) < 1$ is the classic logistic function, which satisfies $\sigma(-v_{\ell d}) = 1 - \sigma(v_{\ell d})$. This same transformation is part of the so-called logistic stick-breaking process [14], but that model is motivated by different applications, and thus employs a very different prior distribution for $v_{kd}$.
Given the distribution $\pi_{:d}$, the topic assignment indicator for word n in document d is drawn according to $z_{dn} \sim \mathrm{Mult}(\pi_{:d})$. Finally, $w_{dn} \sim \mathrm{Mult}(\Omega_{z_{dn}})$, where $\Omega_k \sim \mathrm{Dir}(\beta)$ is the word distribution for topic k, sampled from a Dirichlet prior with symmetric hyperparameters $\beta$.
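A minimal sketch of the stick-breaking map (2) and the resulting word sampling (our own illustration; the truncation level, vocabulary size, and hyperparameters are arbitrary):

```python
import numpy as np

def stick_break(v):
    """Map activations v (length K-1) to probabilities pi (length K) via
    eq. (2); the last entry absorbs the remaining stick mass."""
    sig = 1.0 / (1.0 + np.exp(-v))
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - sig)))
    return np.append(sig, 1.0) * stick_left   # sums to one by construction

rng = np.random.default_rng(1)
K, W, beta, n_words = 10, 25, 0.01, 50
Omega = rng.dirichlet(beta * np.ones(W), size=K)  # topic-word distributions
v = rng.normal(size=K - 1)
pi = stick_break(v)
z = rng.choice(K, size=n_words, p=pi)             # topic assignments z_dn
words = np.array([rng.choice(W, p=Omega[k]) for k in z])
```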
3 Monte Carlo Learning and Inference
We use a Markov chain Monte Carlo (MCMC) method to approximately sample from the posterior
distribution of the DCNT. For most parameters, our choice of conditionally conjugate priors leads to
closed form Gibbs sampling updates. Due to the logistic stick-breaking transformation, closed form
resampling of v is intractable; we instead use a Metropolis independence sampler [6].
Our sampler is based on a finite truncation of the full DCNT model, which has proven useful with
other stick-breaking priors [8, 14, 15]. Let K be the maximum number of topics. As our experiments demonstrate, K is not the number of topics that will be utilized by the learned model, but rather a (possibly loose) upper bound on that number. For notational convenience, let $\bar{K} = K - 1$.
Under the truncated model, $\eta$ is an $F \times \bar{K}$ matrix of regression coefficients, and u is a $\bar{K} \times D$ matrix satisfying $u_{:d} \sim \mathcal{N}(\eta^T \phi_d, I_{\bar{K}})$. Similarly, A is a $\bar{K} \times \bar{K}$ lower triangular matrix, and $v_{:d} \sim \mathcal{N}(A u_{:d}, \lambda_v^{-1} I_{\bar{K}})$. The probabilities $\pi_{kd}$ for the first $\bar{K}$ topics are set as in eq. (2), with the final topic set so that a valid distribution is ensured: $\pi_{Kd} = 1 - \sum_{k=1}^{K-1} \pi_{kd} = \prod_{k=1}^{K-1} \sigma(-v_{kd})$.
3.1 Gibbs Updates for Topic Assignments, Correlation Parameters, and Hyperparameters
The precision parameter $\lambda_f$ controls the variability of the feature weights associated with each topic. As in many regression models, the gamma prior is conjugate so that
$$p(\lambda_f \mid \eta, a_f, b_f) \propto \mathrm{Gam}(\lambda_f \mid a_f, b_f) \prod_{k=1}^{\bar K} \mathcal{N}(\eta_{fk} \mid \mu_f, \lambda_f^{-1}) \propto \mathrm{Gam}\!\left(\lambda_f \,\Big|\, \frac{\bar K}{2} + a_f,\; \frac{1}{2}\sum_{k=1}^{\bar K}(\eta_{fk} - \mu_f)^2 + b_f\right). \qquad (3)$$
Similarly, the precision parameter $\lambda_v$ has a gamma prior and posterior:
$$p(\lambda_v \mid v, a_v, b_v) \propto \mathrm{Gam}(\lambda_v \mid a_v, b_v) \prod_{d=1}^{D} \mathcal{N}(v_{:d} \mid A u_{:d}, L^{-1}) \propto \mathrm{Gam}\!\left(\lambda_v \,\Big|\, \frac{\bar K D}{2} + a_v,\; \frac{1}{2}\sum_{d=1}^{D}(v_{:d} - A u_{:d})^T (v_{:d} - A u_{:d}) + b_v\right). \qquad (4)$$
Entries of the regression matrix A have a rescaled Gaussian prior $A_{k\ell} \sim \mathcal{N}(0, (k\lambda_A)^{-1})$. With a gamma prior, the precision parameter $\lambda_A$ nevertheless has the following gamma posterior:
$$p(\lambda_A \mid A, a_A, b_A) \propto \mathrm{Gam}(\lambda_A \mid a_A, b_A) \prod_{k=1}^{\bar K} \prod_{\ell=1}^{k} \mathcal{N}(A_{k\ell} \mid 0, (k\lambda_A)^{-1}) \propto \mathrm{Gam}\!\left(\lambda_A \,\Big|\, \frac{K(K-1)}{4} + a_A,\; \frac{1}{2}\sum_{k=1}^{\bar K}\sum_{\ell=1}^{k} k A_{k\ell}^2 + b_A\right). \qquad (5)$$
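The conjugate gamma updates (3)-(5) reduce to simple shape/rate bookkeeping. A hedged NumPy sketch of these steps (our own paraphrase of the equations, writing Kbar for $\bar K$):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_lambda_f(eta_f, mu_f, a_f, b_f):
    """Eq. (3): eta_f is the (Kbar,) row of weights for feature f."""
    shape = eta_f.size / 2.0 + a_f
    rate = 0.5 * np.sum((eta_f - mu_f) ** 2) + b_f
    return rng.gamma(shape, 1.0 / rate)   # numpy parameterizes by scale

def sample_lambda_v(V, U, A, a_v, b_v):
    """Eq. (4): V, U are (Kbar, D); A is (Kbar, Kbar) lower triangular."""
    resid = V - A @ U
    shape = V.size / 2.0 + a_v
    rate = 0.5 * np.sum(resid ** 2) + b_v
    return rng.gamma(shape, 1.0 / rate)

def sample_lambda_A(A, a_A, b_A):
    """Eq. (5): row k of A has prior precision k * lambda_A."""
    Kbar = A.shape[0]
    k = np.arange(1, Kbar + 1)[:, None]
    tril = np.tril(np.ones_like(A, dtype=bool))
    shape = tril.sum() / 2.0 + a_A        # = Kbar*(Kbar+1)/4 + a_A
    rate = 0.5 * np.sum((k * A ** 2)[tril]) + b_A
    return rng.gamma(shape, 1.0 / rate)
```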
Conditioning on the feature regression weights $\eta$, the mean weight $\mu_f$ in our hierarchical prior for each feature f has a Gaussian posterior:
$$p(\mu_f \mid \eta) \propto \mathcal{N}(\mu_f \mid 0, \gamma_\mu) \prod_{k=1}^{\bar K} \mathcal{N}(\eta_{fk} \mid \mu_f, \lambda_f^{-1}) \propto \mathcal{N}\!\left(\mu_f \,\Big|\, \frac{\gamma_\mu}{\bar K \gamma_\mu + \lambda_f^{-1}} \sum_{k=1}^{\bar K} \eta_{fk},\; \big(\gamma_\mu^{-1} + \bar K \lambda_f\big)^{-1}\right) \qquad (6)$$
To sample $\eta_{:k}$, the linear function relating metadata to topic k, we condition on all documents $u_{k:}$ as well as $\phi$, $\mu$, and $\Lambda$. Columns of $\eta$ are conditionally independent, with Gaussian posteriors:
$$p(\eta_{:k} \mid u, \phi, \mu, \Lambda) \propto \mathcal{N}(\eta_{:k} \mid \mu, \Lambda^{-1})\, \mathcal{N}(u_{k:}^T \mid \phi^T \eta_{:k}, I_D) \propto \mathcal{N}\!\big(\eta_{:k} \mid (\Lambda + \phi\phi^T)^{-1}(\phi u_{k:}^T + \Lambda\mu),\; (\Lambda + \phi\phi^T)^{-1}\big). \qquad (7)$$
Similarly, the scores $u_{:d}$ for each document are conditionally independent with Gaussian posteriors:
$$p(u_{:d} \mid v_{:d}, \eta, \phi_d, L) \propto \mathcal{N}(u_{:d} \mid \eta^T \phi_d, I_{\bar K})\, \mathcal{N}(v_{:d} \mid A u_{:d}, L^{-1}) \propto \mathcal{N}\!\big(u_{:d} \mid (I_{\bar K} + A^T L A)^{-1}(A^T L v_{:d} + \eta^T \phi_d),\; (I_{\bar K} + A^T L A)^{-1}\big). \qquad (8)$$
To resample A, we note that its rows are conditionally independent. The posterior of the k entries $A_{k:}$ in row k depends on $v_{k:}$ and $\widetilde{U}_k \triangleq u_{1:k,:}$, the first k entries of $u_{:d}$ for each document d:
$$p(A_{k:}^T \mid v_{k:}, \widetilde{U}_k, \lambda_A, \lambda_v) \propto \prod_{j=1}^{k} \mathcal{N}(A_{kj} \mid 0, (k\lambda_A)^{-1})\; \mathcal{N}(v_{k:}^T \mid \widetilde{U}_k^T A_{k:}^T, \lambda_v^{-1} I_D) \qquad (9)$$
$$\propto \mathcal{N}\!\big(A_{k:}^T \mid (k\lambda_A \lambda_v^{-1} I_k + \widetilde{U}_k \widetilde{U}_k^T)^{-1} \widetilde{U}_k v_{k:}^T,\; (k\lambda_A I_k + \lambda_v \widetilde{U}_k \widetilde{U}_k^T)^{-1}\big).$$
For the SCNT model, there is a related but simpler update (see supplemental material).
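Updates (7) and (8) are standard linear-Gaussian conditionals; a minimal sketch (our own illustration) that samples them via a Cholesky factorization of the posterior precision:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_gaussian(mean_rhs, precision):
    """Draw from N(precision^{-1} mean_rhs, precision^{-1})."""
    L = np.linalg.cholesky(precision)
    mu = np.linalg.solve(precision, mean_rhs)
    z = rng.normal(size=mu.size)
    return mu + np.linalg.solve(L.T, z)   # covariance (L L^T)^{-1}

def sample_eta_col(u_k, Phi, mu, Lambda):
    """Eq. (7): Phi is (F, D) metadata, u_k the (D,) scores for topic k."""
    prec = Lambda + Phi @ Phi.T
    rhs = Phi @ u_k + Lambda @ mu
    return sample_gaussian(rhs, prec)

def sample_u_doc(v_d, eta, phi_d, A, lam_v):
    """Eq. (8): posterior for u_{:d} given v_{:d}."""
    Kbar = A.shape[0]
    L = lam_v * np.eye(Kbar)
    prec = np.eye(Kbar) + A.T @ L @ A
    rhs = A.T @ L @ v_d + eta.T @ phi_d
    return sample_gaussian(rhs, prec)
```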
As in collapsed sampling algorithms for LDA [7], we can analytically marginalize the word distribution $\Omega_k$ for each topic. Let $M_{kw}^{\setminus dn}$ denote the number of instances of word w assigned to topic k, excluding token n in document d, and $M_{k\cdot}^{\setminus dn}$ the number of total tokens assigned to topic k. For a vocabulary with W unique word types, the posterior distribution of topic indicator $z_{dn}$ is then
$$p(z_{dn} = k \mid \pi_{:d}, z^{\setminus dn}) \propto \pi_{kd} \left( \frac{M_{kw}^{\setminus dn} + \beta}{M_{k\cdot}^{\setminus dn} + W\beta} \right). \qquad (10)$$
Recall that the topic probabilities $\pi_{:d}$ are determined from $v_{:d}$ via Equation (2).
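A hedged sketch of the collapsed resampling step (10) for a single token (our own illustration; the count arrays are assumed to be maintained incrementally across the sweep):

```python
import numpy as np

def resample_token(rng, w, old_k, pi_d, Mkw, Mk, beta):
    """Resample z_dn for word type w, currently assigned to old_k.
    Mkw: (K, W) topic-word counts; Mk: (K,) topic totals."""
    W = Mkw.shape[1]
    Mkw[old_k, w] -= 1           # remove this token from the counts
    Mk[old_k] -= 1
    probs = pi_d * (Mkw[:, w] + beta) / (Mk + W * beta)  # eq. (10)
    probs /= probs.sum()
    new_k = rng.choice(len(pi_d), p=probs)
    Mkw[new_k, w] += 1           # add the token back under the new topic
    Mk[new_k] += 1
    return new_k
```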
3.2 Metropolis Independence Sampler Updates for Topic Activations
The posterior distribution of $v_{:d}$ does not have a closed analytical form due to the logistic nonlinearity underlying our stick-breaking construction. We instead employ a Metropolis-Hastings independence sampler, where proposals $q(v_{:d}^* \mid v_{:d}, A, u_{:d}, \lambda_v) = \mathcal{N}(v_{:d}^* \mid A u_{:d}, \lambda_v^{-1} I_{\bar K})$ are drawn from the prior. Combining this with the likelihood of the $N_d$ word tokens, the proposal is accepted with probability $\min(A(v_{:d}^*, v_{:d}), 1)$, where
$$A(v_{:d}^*, v_{:d}) = \frac{p(v_{:d}^* \mid A, u_{:d}, \lambda_v)\,\prod_{n=1}^{N_d} p(z_{dn} \mid v_{:d}^*)\; q(v_{:d} \mid v_{:d}^*, A, u_{:d}, \lambda_v)}{p(v_{:d} \mid A, u_{:d}, \lambda_v)\,\prod_{n=1}^{N_d} p(z_{dn} \mid v_{:d})\; q(v_{:d}^* \mid v_{:d}, A, u_{:d}, \lambda_v)} = \prod_{n=1}^{N_d} \frac{p(z_{dn} \mid v_{:d}^*)}{p(z_{dn} \mid v_{:d})} = \prod_{k=1}^{K} \left( \frac{\pi_{kd}^*}{\pi_{kd}} \right)^{\sum_{n=1}^{N_d} \delta(z_{dn}, k)} \qquad (11)$$
Because the proposal cancels with the prior distribution in the acceptance ratio $A(v_{:d}^*, v_{:d})$, the final probability depends only on a ratio of likelihood functions, which can be easily evaluated from counts of the number of words assigned to each topic by $z_d$.
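A minimal sketch of this independence-sampler step (our own illustration; `stick_break` is the eq. (2) map from the earlier sketch, and `counts` holds the per-topic token counts for document d):

```python
import numpy as np

def mh_update_v(rng, v, counts, A, u_d, lam_v):
    """One independence-sampler step for v_{:d}.  counts: (K,) tokens per
    topic in document d; v has Kbar = K - 1 entries."""
    Kbar = v.size
    v_prop = A @ u_d + rng.normal(size=Kbar) / np.sqrt(lam_v)  # prior draw
    pi_old, pi_new = stick_break(v), stick_break(v_prop)
    # eq. (11): acceptance ratio is prod_k (pi_new_k / pi_old_k)^{counts_k}
    log_acc = np.sum(counts * (np.log(pi_new) - np.log(pi_old)))
    return v_prop if np.log(rng.uniform()) < min(log_acc, 0.0) else v
```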
4 Experimental Results
4.1 Toy Bars Dataset
Following related validations of the LDA model [7], we ran experiments on a toy corpus of "images"
designed to validate the features of the DCNT. The dataset consisted of 1,500 images (documents),
each containing a vocabulary of 25 pixels (word types) arranged in a 5x5 grid. Documents can be
visualized by displaying pixels with intensity proportional to the number of corresponding words
(see Figure 2). Each training document contained 300 word tokens.
Ten topics were defined, corresponding to all possible horizontal and vertical 5-pixel "bars". We
consider two toy datasets. In the first, a random number of topics is chosen for each document, and
then a corresponding subset of the bars is picked uniformly at random. In the second, we induce
topic correlations by generating documents that contain a combination of either only horizontal
(topics 1-5) or only vertical (topics 6-10) bars. For these datasets, there was no associated metadata,
so the input features were simply set as $\phi_d = 1$.
Using these toy datasets, we compared the LDA model to several versions of the DCNT. For LDA, we set the number of topics to the true value of K = 10. Similar to previous toy experiments [7], we set the parameters of its Dirichlet prior over topic distributions to $\alpha = 50/K$, and the topic smoothing parameter to $\beta = 0.01$. For the DCNT model, we set $\gamma_\mu = 10^6$, and all gamma prior hyperparameters as $a = b = 0.01$, corresponding to a mean of 1 and a variance of 100. To initialize the sampler, we set the precision parameters to their prior mean of 1, and sample all other variables from their prior. We compared three variants of the DCNT model: the singly correlated SCNT (A constrained to be diagonal) with K = 10, the DCNT with K = 10, and the DCNT with K = 20.
The final case explores whether our stick-breaking prior can successfully infer the number of topics.
For the toy dataset with correlated topics, the results of running all sampling algorithms for 10,000
iterations are illustrated in Figure 2. On this relatively clean data, all models limited to K = 10
Figure 2: A dataset of correlated toy bars (example document images in bottom left). Top: From left to
right, the true counts of words generated by each topic, and the recovered counts for LDA (K = 10), SCNT
(K = 10), DCNT (K = 10), and DCNT (K = 20). Note that the true topic order is not identifiable. Bottom:
Inferred topic covariance matrices for the four corresponding models. Note that LDA assumes all topics have
a slight negative correlation, while the DCNT infers more pronounced positive correlations. With K = 20
potential DCNT topics, several are inferred to be unused with high probability, and thus have low variance.
topics recover the correct topics. With K = 20 topics, the DCNT recovers the true topics, as well as
a redundant copy of one of the bars. This is typical behavior for sampling runs of this length; more
extended runs usually merge such redundant bars. The development of more rapidly mixing MCMC
methods is an interesting area for future research.
To determine the topic correlations corresponding to a set of learned model parameters, we use
a Monte Carlo estimate (details in the supplemental material). To make these matrices easier to
visualize, the Hungarian algorithm was used to reorder topic labels for best alignment with the
ground truth topic assignments. Note the significant blocks of positive correlations recovered by the
DCNT, reflecting the true correlations used to create this toy data.
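One way such a Monte Carlo estimate could be formed (the exact procedure is in the supplemental material; the following is our own assumption about its shape) is to simulate score vectors from the learned parameters, push them through the stick-breaking map, and take empirical correlations:

```python
import numpy as np

def topic_correlations(rng, eta, phi_bar, A, lam_v, n_samples=100_000):
    """Estimate Corr(pi) by simulating the learned model; reuses
    stick_break from the earlier sketch of eq. (2)."""
    F, Kbar = eta.shape
    U = eta.T @ phi_bar + rng.normal(size=(n_samples, Kbar))
    V = U @ A.T + rng.normal(size=(n_samples, Kbar)) / np.sqrt(lam_v)
    Pi = np.apply_along_axis(stick_break, 1, V)   # (n_samples, Kbar + 1)
    return np.corrcoef(Pi.T)
```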
4.2 NIPS Corpus
The NIPS corpus that we used consisted of publications from previous NIPS conferences 0-12
(1987-1999), including various metadata (year of publication, authors, and section categories). We
compared four variants of the DCNT model: a model which ignored metadata, a model with indicator features for the year of publication, a model with indicator features for year of publication
and the presence of highly prolific authors (those with more than 10 publications), and a model with
features for year of publication and additional authors (those with more than 5 publications). In all
cases, the feature matrix ? is binary. All models were truncated to use at most K = 50 topics, and
the sampler initialized as in Sec. 4.1.
4.2.1 Conditioning on Metadata
A learned DCNT model provides predictions for how topic frequencies change given particular
metadata associated with a document. In Figure 3, we show how predicted topic frequencies change
over time, conditioning also on one of three authors (Michael Jordan, Geoffrey Hinton, or Terrence
Sejnowski). For each, words from a relevant topic illustrate how conditioning on a particular author can change the predicted document content. For example, the visualization associated with
Michael Jordan shows that the frequency of the topic associated with probabilistic models gradually
increases over the years, while the topic associated with neural networks decreases. Conditioning
on Geoffrey Hinton puts larger mass on a topic which focuses on models developed by his research
group. Finally, conditioning on Terrence Sejnowski dramatically increases the probability of topics
related to neuroscience.
4.2.2 Correlations between Topics
The DCNT model can also capture correlations between topics. In Fig. 4, we visualize this using a diagram where the size of a colored grid is proportional to the magnitude of the correlation
Figure 3: The DCNT predicts topic frequencies over the years (1987-1999) for documents with (a) none of
the most prolific authors, (b) the Michael Jordan feature, (c) the Geoffrey Hinton feature, and (d) the Terrence
Sejnowski feature. The stick-breaking distribution at the top shows the frequencies of each topic, averaging
over all years; note some are unused. The middle row illustrates the word distributions for the topics highlighted
by red dots in their respective columns. Larger words are more probable.
Figure 4: A Hinton diagram of correlations between all pairs of topics, where the sizes of squares indicate the magnitude of dependence, and red and blue squares indicate positive and negative correlations, respectively. To the right are the top six words from three strongly correlated topic pairs. This visualization, along with others in this paper, is interactive and can be downloaded from this page: http://www.cs.brown.edu/~daeil.
coefficients between two topics. The results displayed in this figure are for a model trained without metadata. We can see that the model learned strong positive correlations between function and
learning topics which have strong semantic similarities, but are not identical. Another positive correlation that the model discovered was between the topics visual and neuron; of course there are
many papers at NIPS which study the brain?s visual cortex. A strong negative correlation was found
between the network and model topics, which might reflect an ideological separation between papers
studying neural networks and probabilistic models.
4.3 Predictive Likelihood
In order to quantitatively measure the generalization power of our DCNT model, we tested several
variants on two versions of the toy bars dataset (correlated & uncorrelated). We also compared
models on the NIPS corpus, to explore more realistic data where metadata is available. The test data
for the toy dataset consisted of 500 documents generated by the same process as the training data,
[Figure 5 data. Toy-data perplexity: LDA-A 10.50, HDP-A 10.52, DCNT-A 9.79, SCNT-A 10.14, LDA-B 12.08, HDP-B 12.13, DCNT-B 11.51, SCNT-B 11.75. NIPS perplexity: LDA 1975.46, HDP 2060.43, DCNT-noF 1926.42, DCNT-Y 1925.56, DCNT-YA1 1923.10, DCNT-YA2 1932.26.]
Figure 5: Perplexity scores (lower is better) computed via Chib-style estimators for several topic models.
Left: Test performance for the toy datasets with uncorrelated bars (-A) and correlated bars (-B). Right: Test
performance on the NIPS corpus with various metadata: no features (-noF), year features (-Y), year and prolific
author features (over 10 publications, -YA1), and year and additional author features (over 5 publications, -YA2).
while the NIPS corpus was split into training and test subsets containing 80% and 20% of the full
corpus, respectively. Over the years 1988-1999, there were a total of 328 test documents.
We calculated predictive likelihood estimates using a Chib-style estimator [12]; for details see the
supplemental material. In a previous comparison [19], the Chib-style estimator was found to be far
more accurate than alternatives like the harmonic mean estimator. Note that there is some subtlety
in correctly implementing the Chib-style estimator for our DCNT model, due to the possibility of
rejection of our Metropolis-Hastings proposals.
Predictive negative log-likelihood estimates were normalized by word counts to determine perplexity
scores [3]. We tested several models, including the SCNT and DCNT, LDA with $\alpha = 1$ and $\alpha = 0.01$, and the HDP with full resampling of its concentration parameters. For the toy bars data, we
set the number of topics to K = 10 for all models except the HDP, which learned K = 15. For the
NIPS corpus, we set K = 50 for all models except the HDP, which learned K = 86.
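For reference, the perplexity normalization is just the exponentiated per-word negative log-likelihood; a one-function sketch (our own restatement):

```python
import numpy as np

def perplexity(total_neg_log_lik, total_word_count):
    """perplexity = exp(NLL / number of test words); lower is better."""
    return float(np.exp(total_neg_log_lik / total_word_count))
```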
For the toy datasets, the LDA and HDP models perform similarly. The SCNT and DCNT are both
superior, apparently due to their ability to capture non-Dirichlet distributions on topic occurrence
patterns. For the NIPS data, all of the DCNT models are substantially more accurate than LDA and
the HDP. Including metadata encoding the year of publication, and possibly also the most prolific
authors, provides slight additional improvements in DCNT accuracy. Interestingly, when a larger
set of author features is included, accuracy becomes slightly worse. This appears to be an overfitting
issue: there are 125 authors with over 5 publications, and only a handful of training examples for
each one.
While it is pleasing that the DCNT and SCNT models seem to provide improved predictive likelihoods, a recent study on the human interpretability of topic models showed that such scores do
not necessarily correlate with more meaningful semantic structures [4]. In many ways, the interactive visualizations illustrated in Sec. 4.2 provide more assurance that the DCNT can capture useful
properties of real corpora.
5 Discussion
The doubly correlated nonparametric topic model flexibly allows the incorporation of arbitrary features associated with documents, captures correlations that might exist within a dataset's latent topics, and can learn an unbounded set of topics. The model uses a set of efficient MCMC techniques
for learning and inference, and is supported by a set of web-based tools that allow users to visualize
the inferred semantic structure.
Acknowledgments
This research was supported in part by IARPA under AFRL contract number FA8650-10-C-7059. Dae Il Kim was supported in part by an NSF Graduate Fellowship. The views and conclusions contained herein
are those of the authors and should not be interpreted as necessarily representing the official policies
or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.
References
[1] A. Agovic and A. Banerjee. Gaussian process topic models. In UAI, 2010.
[2] D. M. Blei and J. D. Lafferty. A correlated topic model of science. AAS, 1(1):17-35, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, March 2003.
[4] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. M. Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
[5] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. An. Stat., 1(2):209-230, 1973.
[6] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, 2004.
[7] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 2004.
[8] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, Mar. 2001.
[9] W. Li, D. Blei, and A. McCallum. Nonparametric Bayes Pachinko allocation. In UAI, 2008.
[10] H. F. Lopes and M. West. Bayesian model assessment in factor analysis. Stat. Sinica, 14:41-67, 2004.
[11] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In UAI, 2008.
[12] I. Murray and R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In NIPS 21, pages 1137-1144. 2009.
[13] J. Paisley, C. Wang, and D. Blei. The discrete infinite logistic normal distribution for mixed-membership modeling. In AISTATS, 2011.
[14] L. Ren, L. Du, L. Carin, and D. B. Dunson. Logistic stick-breaking process. JMLR, 12, 2011.
[15] A. Rodriguez and D. B. Dunson. Nonparametric Bayesian models through probit stick-breaking processes. J. Bayesian Analysis, 2011.
[16] J. Sethuraman. A constructive definition of Dirichlet priors. Stat. Sin., 4:639-650, 1994.
[17] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[18] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. In AIStats 10, 2005.
[19] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In ICML, 2009.
The Local Rademacher Complexity of ℓp-Norm
Multiple Kernel Learning
Marius Kloft*
Machine Learning Laboratory
TU Berlin, Germany
[email protected]
Gilles Blanchard
Department of Mathematics
University of Potsdam, Germany
[email protected]
Abstract
We derive an upper bound on the local Rademacher complexity of ℓp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local approaches analyzed the case p = 1 only, while our analysis covers all cases $1 \le p \le \infty$, assuming the different feature mappings corresponding to the different kernels to be uncorrelated. We also show a lower bound demonstrating that the bound is tight, and derive consequences regarding excess loss, namely fast convergence rates of the order $O(n^{-\frac{\alpha}{1+\alpha}})$, where $\alpha$ is the minimum eigenvalue decay rate of the individual kernels.
1 Introduction
Kernel methods [24, 21] allow to obtain nonlinear learning machines from simpler, linear ones;
nowadays they can almost completely be applied out-of-the-box [3]. Nevertheless, after more than
a decade of research it still remains an unsolved problem to find the best abstraction or kernel for
a problem at hand. Most frequently, the kernel is selected from a candidate set according to its
generalization performance on a validation set. Clearly, the performance of such an algorithm is
limited by the best kernel in the set. Unfortunately, in the current state of research, there is little
hope that in the near future a machine will be able to automatically find?or even engineer?the
best kernel for a particular problem at hand [25]. However, by restricting to a less general problem,
can we hope to achieve the automatic kernel selection?
In the seminal work of Lanckriet et al. [18] it was shown that learning a support vector machine
(SVM) [9] and a convex kernel combination at the same time is computationally feasible. This approach was entitled multiple kernel learning (MKL). Research in the subsequent years focused on
speeding up the initially demanding optimization algorithms [22, 26], ignoring the fact that empirical evidence for the superiority of MKL over trivial baseline approaches (not optimizing the kernel)
was missing. In 2008, negative results concerning the accuracy of MKL in practical applications accumulated: at the NIPS 2008 MKL workshop [6] several researchers presented empirical evidence
showing that traditional MKL rarely helps in practice and frequently is outperformed by a regular SVM using a uniform kernel combination, see http://videolectures.net/lkasok08_
whistler/. Subsequent research (e.g., [10]) revealed further negative evidence and peaked in the
provocative question "Can learning kernels help performance?" posed by Corinna Cortes in an
invited talk at ICML 2009 [5].
Consequently, despite all the substantial progress in the field of MKL, there remained an unsatisfied
need for an approach that is really useful for practical applications: a model that has a good chance
of improving the accuracy (over a plain sum kernel). A first step towards a model of kernel learning
* Marius Kloft is also with Friedrich Miescher Laboratory, Max Planck Society, Tübingen. A part of this work was done while Marius Kloft was with UC Berkeley, USA, and Gilles Blanchard was with Weierstraß Institute for Applied Analysis and Stochastics, Berlin.
[Figure 1 panels: left, test set accuracy; right, kernel weights θ_i output by MKL. Results shown for SVM (single kernel), 1-norm MKL, 1.07-norm MKL, 1.14-norm MKL, 1.33-norm MKL, and SVM (all kernels), across kernels C, H, P, Z, S, V, L1, L4, L14, L30, SW1, SW2.]
Figure 1: Result of a typical ℓp-norm MKL experiment in terms of accuracy (left) and kernel weights output by MKL (right).
that is useful for practical applications was made in [7, 13, 14]: by imposing an ℓq-norm penalty (q > 1) rather than an ℓ1-norm one on the kernel combination coefficients. This ℓq-norm MKL is an empirical minimization algorithm that operates on the multi-kernel class consisting of functions $f : x \mapsto \langle w, \phi_k(x) \rangle$ with $\|w\|_k \le D$, where $\phi_k$ is the kernel mapping into the reproducing kernel Hilbert space (RKHS) $H_k$ with kernel k and norm $\|\cdot\|_k$, while the kernel k itself ranges over the set of possible kernels $\big\{ k = \sum_{m=1}^{M} \theta_m k_m \,\big|\, \|\theta\|_q \le 1,\ \theta \ge 0 \big\}$. A conceptual milestone going back to the work of [1] and [20] is that this multi-kernel class can equivalently be represented as a block-norm regularized linear class in the product RKHS:
$$H_{p,D,M} = \big\{ f_w : x \mapsto \langle w, \phi(x) \rangle \;\big|\; w = (w^{(1)}, \dots, w^{(M)}),\; \|w\|_{2,p} \le D \big\}, \qquad (1)$$
where there is a one-to-one mapping of $q \in [1, \infty]$ to $p \in [1, 2]$ given by $p = \frac{2q}{q+1}$.
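To fix notation, the block norm in (1) and the q-to-p mapping can be computed as in the following sketch (our own illustration; the block contents are arbitrary):

```python
import numpy as np

def block_norm(w_blocks, p):
    """||w||_{2,p}: l2 norm within each RKHS block, lp norm across blocks."""
    block_l2 = np.array([np.linalg.norm(w) for w in w_blocks])
    return block_l2.max() if np.isinf(p) else np.linalg.norm(block_l2, ord=p)

def p_from_q(q):
    """The one-to-one map p = 2q/(q+1) between kernel-weight norm q and
    block norm p; q = inf maps to p = 2."""
    return 2.0 if np.isinf(q) else 2.0 * q / (q + 1.0)

w = [np.ones(3), 2.0 * np.ones(2), np.zeros(4)]  # M = 3 illustrative blocks
print(block_norm(w, p_from_q(1.0)))              # q = 1  ->  p = 1 (sparse MKL)
```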
In Figure 1, we show exemplary results of an ℓp-norm MKL experiment, achieved on the protein fold prediction dataset used in [4] (see supplementary material A for experimental details). We first observe that, as expected, ℓp-norm MKL enforces strong sparsity in the coefficients $\theta_m$ when p = 1 and no sparsity at all otherwise (but various degrees of soft sparsity for intermediate p). Crucially, the performance (as measured by the test error) is not monotonic as a function of p; p = 1 (sparse MKL) yields the same performance as the regular SVM using a uniform kernel combination, but optimal performance is attained for some intermediate value of p, namely p = 1.14. This is a strong empirical motivation to study theoretically the performance of ℓp-MKL beyond the limiting cases p = 1 and p = ∞.
Clearly, the complexity of (1) will be greater than one that is based on a single kernel only. However,
it is unclear whether the increase is decent or considerably high and, since there is a free parameter p, how this relates to the choice of p. To this end, the main aim of this paper is to analyze the sample
complexity of the hypothesis class (1). An analysis of this model, based on global Rademacher
complexities, was developed by [8] for special cases of p. In the present work, we base our main
analysis on the theory of local Rademacher complexities, which allows to derive improved and
more precise rates of convergence that cover the whole range of p ? [1, ?].
Outline of the contributions. This paper makes the following contributions:
• An upper bound on the local Rademacher complexity of ℓp-norm MKL is shown, from which we derive an excess risk bound that achieves a fast convergence rate of the order $O\big(M^{1+\frac{2}{1+\alpha}\frac{1}{p^*}}\, n^{-\frac{\alpha}{1+\alpha}}\big)$, where $\alpha$ is the minimum eigenvalue decay rate of the individual kernels (previous bounds for ℓp-norm MKL only achieved $O(M^{\frac{1}{p^*}}\, n^{-\frac{1}{2}})$).
• A lower bound is shown that beside absolute constants matches the upper bounds, showing that our results are tight.
• The generalization performance of ℓp-norm MKL as guaranteed by the excess risk bound is studied for varying values of p, shedding light on the appropriateness of a small/large p in various learning scenarios.
Furthermore, we also present a simpler, more general proof of the global Rademacher bound shown in [8] (at the expense of a slightly worse constant). A comparison of the rates obtained with local and global Rademacher analysis is carried out in Section 3.
Notation. We abbreviate $H_p = H_{p,D} = H_{p,D,M}$ if clear from the context. We denote the (normalized) kernel matrices corresponding to k and $k_m$ by K and $K_m$, respectively; i.e., the ij-th entry of K is $\frac{1}{n} k(x_i, x_j)$. Also, we denote $u = (u^{(m)})_{m=1}^{M} = (u^{(1)}, \dots, u^{(M)}) \in \mathcal{H} = \mathcal{H}_1 \times \dots \times \mathcal{H}_M$. Furthermore, let P be a probability measure on $\mathcal{X}$ i.i.d. generating the data $x_1, \dots, x_n$ and denote by E the corresponding expectation operator. We work with operators in Hilbert spaces and will use instead of the usual vector/matrix notation $\phi(x)\phi(x)^\top$ the tensor notation $\phi(x) \otimes \phi(x) \in \mathrm{HS}(\mathcal{H})$, which is a Hilbert-Schmidt operator $\mathcal{H} \mapsto \mathcal{H}$ defined as $(\phi(x) \otimes \phi(x))u = \langle \phi(x), u \rangle\, \phi(x)$. The space $\mathrm{HS}(\mathcal{H})$ of Hilbert-Schmidt operators on $\mathcal{H}$ is itself a Hilbert space, and the expectation $E\,\phi(x) \otimes \phi(x)$ is well-defined and belongs to $\mathrm{HS}(\mathcal{H})$ as soon as $E\|\phi(x)\|^2$ is finite, which will always be assumed. We denote by $J = E\,\phi(x) \otimes \phi(x)$ and $J_m = E\,\phi_m(x) \otimes \phi_m(x)$ the uncentered covariance operators corresponding to variables $\phi(x)$ and $\phi_m(x)$, respectively; it holds that $\mathrm{tr}(J) = E\|\phi(x)\|^2$ and $\mathrm{tr}(J_m) = E\|\phi_m(x)\|^2$.
Global Rademacher complexities. We first review global Rademacher complexities (GRC) in multiple kernel learning. Let $x_1, \dots, x_n$ be an i.i.d. sample drawn from $P$. The global Rademacher complexity is defined as $R(H_p) = E \sup_{f_w \in H_p} \langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \phi(x_i) \rangle$, where $(\sigma_i)_{1 \le i \le n}$ is an i.i.d. family (independent of $\phi(x_i)$) of Rademacher variables (random signs). Its empirical counterpart is denoted by $\hat R(H_p) = E\big[R(H_p) \,\big|\, x_1, \dots, x_n\big] = E_\sigma \sup_{f_w \in H_p} \langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \phi(x_i) \rangle$. In the recent paper of [8] it was shown that
$$\hat R(H_p) \;\le\; D\sqrt{\frac{c\,p^*}{n}\,\Big\|\big(\mathrm{tr}(K_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}$$
for $p \in [1, 2]$ and $p^*$ an integer (where $c = \frac{23}{44}$ and $p^* := \frac{p}{p-1}$ is the conjugate exponent). This bound is tight and improves on a series of loose results that were given for $p = 1$ in the past (see [8] and references therein). In fact, the above result can be extended to the whole range of $p \in [1, \infty]$ (in the supplementary material we present a quite simple proof using $c = 1$):
Proposition 1 (GLOBAL RADEMACHER COMPLEXITY BOUND). For any $p \ge 1$, the empirical version of the global Rademacher complexity of the multi-kernel class $H_p$ can be bounded as
$$\hat R(H_p) \;\le\; \min_{t \in [p, \infty]}\; D\sqrt{\frac{t^*}{n}\,\Big\|\big(\mathrm{tr}(K_m)\big)_{m=1}^M\Big\|_{\frac{t^*}{2}}}.$$
Interestingly, the above GRC bound is not monotonic in p and thus the minimum is not always
attained for t := p.
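The following sketch (ours) evaluates the bound of Proposition 1 on a grid of $t \ge p$ for assumed trace values, illustrating this non-monotonicity; the values of $D$, $n$, $p$, and the traces are placeholders.

```python
import numpy as np

def grc_bound(tr_Km, D, n, t):
    # Proposition 1 evaluated at a single t > 1, with t* = t/(t-1).
    t_star = t / (t - 1.0) if np.isfinite(t) else 1.0
    block_norm = np.sum(tr_Km ** (t_star / 2.0)) ** (2.0 / t_star)  # ||.||_{t*/2}
    return D * np.sqrt(t_star * block_norm / n)

tr_Km = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])   # assumed trace values
D, n, p = 1.0, 1000, 1.1
ts = np.linspace(p, 50.0, 500)
vals = [grc_bound(tr_Km, D, n, t) for t in ts]
best = ts[int(np.argmin(vals))]
print(f"bound at t = p: {vals[0]:.4f}; minimized at t = {best:.2f}: {min(vals):.4f}")
```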
2 The Local Rademacher Complexity of Multiple Kernel Learning
Let $x_1, \dots, x_n$ be an i.i.d. sample drawn from $P$. We define the local Rademacher complexity (LRC) of $H_p$ as
$$R_r(H_p) = E \sup_{f_w \in H_p:\, Pf_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \phi(x_i) \Big\rangle, \qquad \text{where}\quad Pf_w^2 := E\big(f_w(x)\big)^2.$$
Note that it subsumes the global RC as a special case for $r = \infty$. We will also use the following assumption in the bounds for the case $p \in [1, 2]$:
Assumption (U) (no-correlation). Let $x \sim P$. The Hilbert space valued random variables $\phi_1(x), \dots, \phi_M(x)$ are said to be (pairwise) uncorrelated if for any $m \ne m'$ and $w \in H_m$, $w' \in H_{m'}$, the real variables $\langle w, \phi_m(x) \rangle$ and $\langle w', \phi_{m'}(x) \rangle$ are uncorrelated.
For example, if $X = \mathbb{R}^M$, the above means that the input variable $x \in X$ has independent coordinates, and the kernels $k_1, \dots, k_M$ each act on a different coordinate. Such a setting was also considered by [23] (for sparse MKL). To state the bounds, note that covariance operators enjoy discrete eigenvalue-eigenvector decompositions
$$J = E\,\phi(x) \otimes \phi(x) = \sum_{j=1}^\infty \lambda_j\, u_j \otimes u_j \qquad\text{and}\qquad J_m = E\,\phi_m(x) \otimes \phi_m(x) = \sum_{j=1}^\infty \lambda_j^{(m)}\, u_j^{(m)} \otimes u_j^{(m)},$$
where $(u_j)_{j \ge 1}$ and $(u_j^{(m)})_{j \ge 1}$ form orthonormal bases of $H$ and $H_m$, respectively. We are now equipped to state our main results:
Theorem 2 (LOCAL RADEMACHER COMPLEXITY BOUND, $p \in [1, 2]$). Assume that the kernels are uniformly bounded ($\|k\|_\infty \le B < \infty$) and that Assumption (U) holds. The local Rademacher complexity of the multi-kernel class $H_p$ can be bounded for any $p \in [1, 2]$ as
$$R_r(H_p) \;\le\; \min_{t \in [p, 2]}\; \sqrt{\frac{16}{n}\,\Bigg\|\Bigg(\sum_{j=1}^\infty \min\Big(rM^{1-\frac{2}{t^*}},\; ceD^2 t^{*2} \lambda_j^{(m)}\Big)\Bigg)_{m=1}^M\Bigg\|_{\frac{t^*}{2}}} \;+\; \frac{BeDM^{\frac{1}{t^*}} t^*}{n}.$$
Theorem 3 (LOCAL RADEMACHER COMPLEXITY BOUND, $p \in [2, \infty]$). For any $p \in [2, \infty]$,
$$R_r(H_p) \;\le\; \min_{t \in [p, \infty]}\; \sqrt{\frac{2}{n}\sum_{j=1}^\infty \min\Big(r,\; D^2 M^{\frac{2}{t^*}-1} \lambda_j\Big)}.$$
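As a numerical companion (ours, not from the paper), the sketch below evaluates both upper bounds for polynomially decaying spectra $\lambda_j^{(m)} = j^{-\alpha}$. The identification $\mathrm{spec}(J) = \bigcup_m \mathrm{spec}(J_m)$ used to build the spectrum of $J$ presumes centered, uncorrelated kernels (see the discussion just below), and the constant $c$ and the truncation level of the spectra are assumptions.

```python
import numpy as np

e = np.e

def lrc_bound_p_le_2(r, lam, M, D, B, n, t):        # Theorem 2, one t in [p, 2]
    t_star = t / (t - 1.0)
    c = 23.0 / 44.0                                 # assumed same c as in the GRC bound
    per_kernel = np.sum(np.minimum(r * M ** (1 - 2 / t_star),
                                   c * e * D**2 * t_star**2 * lam), axis=1)
    block = np.sum(per_kernel ** (t_star / 2)) ** (2 / t_star)
    return np.sqrt(16 * block / n) + B * e * D * M ** (1 / t_star) * t_star / n

def lrc_bound_p_ge_2(r, lam_J, M, D, n, t):         # Theorem 3, one t in [p, inf]
    t_star = t / (t - 1.0) if np.isfinite(t) else 1.0
    return np.sqrt(2 * np.sum(np.minimum(r, D**2 * M ** (2 / t_star - 1) * lam_J)) / n)

M, n, D, B, alpha = 10, 10_000, 1.0, 1.0, 2.0
j = np.arange(1, 2001)
lam = np.tile(j ** -alpha, (M, 1))                  # identical individual spectra
lam_J = np.sort(lam.ravel())[::-1]                  # spec(J) as union of spec(J_m)
for r in (1e-3, 1e-2, 1e-1):
    print(r, lrc_bound_p_le_2(r, lam, M, D, B, n, t=2.0),
             lrc_bound_p_ge_2(r, lam_J, M, D, n, t=2.0))
```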
It is interesting to compare the above bounds for the special case $p = 2$ with the ones of Bartlett et al. [2]. The main term of the bound of Theorem 2 (taking $t = p = 2$) is then essentially determined by $O\Big(\sqrt{\frac{1}{n}\sum_{m=1}^M \sum_{j=1}^\infty \min\big(r, \lambda_j^{(m)}\big)}\Big)$. If the variables $(\phi_m(x))$ are centered and uncorrelated, this is equivalently of order $O\Big(\sqrt{\frac{1}{n}\sum_{j=1}^\infty \min\big(r, \lambda_j\big)}\Big)$, because $\operatorname{spec}(J) = \bigcup_{m=1}^M \operatorname{spec}(J_m)$; that is, $\{\lambda_i,\, i \ge 1\} = \bigcup_{m=1}^M \{\lambda_i^{(m)},\, i \ge 1\}$. This rate is also what we would obtain through Theorem 3, so both bounds on the LRC recover the rate shown in [2] for the special case $p = 2$.
It is also interesting to study the case $p = 1$: by using $t = (\log(M))^*$ in Theorem 2, we obtain the bound
$$R_r(H_1) \;\le\; \sqrt{\frac{16}{n}\,\Bigg\|\Bigg(\sum_{j=1}^\infty \min\Big(rM,\; e^3 D^2 (\log M)^2 \lambda_j^{(m)}\Big)\Bigg)_{m=1}^M\Bigg\|_{\infty}} \;+\; \frac{Be^{\frac{3}{2}} D \log(M)}{n}$$
for all $M \ge e^2$. We now turn to proving Theorem 2; the proof of Theorem 3 is straightforward and shown in the supplementary material C.
Proof of Theorem 2. Note that it suffices to prove the result for $t = p$, as trivially $\|w\|_{2,t} \le \|w\|_{2,p}$ holds for all $t \ge p$, so that $H_p \subseteq H_t$ and therefore $R_r(H_p) \le R_r(H_t)$.
STEP 1: RELATING THE ORIGINAL CLASS WITH THE CENTERED CLASS. In order to exploit the no-correlation assumption, we will work in large parts of the proof with the centered class $\widetilde H_p = \big\{\widetilde f_w : \|w\|_{2,p} \le D\big\}$, wherein $\widetilde f_w : x \mapsto \langle w, \widetilde\phi(x) \rangle$ and $\widetilde\phi(x) := \phi(x) - E\phi(x)$. We start the proof by noting that $\widetilde f_w(x) = f_w(x) - \langle w, E\phi(x) \rangle = f_w(x) - Ef_w(x)$, so that, by the bias-variance decomposition, it holds that
$$Pf_w^2 = Ef_w(x)^2 = E\big(f_w(x) - Ef_w(x)\big)^2 + \big(Ef_w(x)\big)^2 = P\widetilde f_w^2 + \big(Pf_w\big)^2. \qquad (2)$$
Furthermore, we note that by Jensen's inequality
$$\big\|E\phi(x)\big\|_{2,p^*} = \Bigg(\sum_{m=1}^M \big\langle E\phi_m(x), E\phi_m(x) \big\rangle^{\frac{p^*}{2}}\Bigg)^{\frac{1}{p^*}} \overset{\text{Jensen}}{\le} \Bigg(\sum_{m=1}^M \big(E\langle \phi_m(x), \phi_m(x) \rangle\big)^{\frac{p^*}{2}}\Bigg)^{\frac{1}{p^*}} = \sqrt{\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}, \qquad (3)$$
so that we can express the complexity of the centered class in terms of the uncentered one as follows:
$$R_r(H_p) \le E\!\!\sup_{f_w \in H_p:\, Pf_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi(x_i) \Big\rangle + E\!\!\sup_{f_w \in H_p:\, Pf_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i E\phi(x) \Big\rangle.$$
Concerning the first term of the above upper bound, using (2) we have $P\widetilde f_w^2 \le Pf_w^2$, and thus
$$E\!\!\sup_{f_w \in H_p:\, Pf_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi(x_i) \Big\rangle \le E\!\!\sup_{f_w \in H_p:\, P\widetilde f_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi(x_i) \Big\rangle = R_r(\widetilde H_p).$$
Now to bound the second term, we write
$$E\!\!\sup_{f_w \in H_p:\, Pf_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i E\phi(x) \Big\rangle \le \frac{1}{\sqrt n}\sup_{f_w \in H_p:\, Pf_w^2 \le r} \big\langle w, E\phi(x) \big\rangle.$$
Now observe that we have
$$\big\langle w, E\phi(x) \big\rangle \overset{\text{Hölder}}{\le} \|w\|_{2,p}\,\big\|E\phi(x)\big\|_{2,p^*} \overset{(3)}{\le} \|w\|_{2,p}\,\sqrt{\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}$$
as well as $\langle w, E\phi(x) \rangle = Ef_w(x) \le \sqrt{Pf_w^2}$. We finally obtain, putting together the steps above,
$$R_r(H_p) \;\le\; R_r(\widetilde H_p) + n^{-\frac{1}{2}}\min\bigg(\sqrt r,\; D\sqrt{\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}\bigg). \qquad (4)$$
This shows that there is no loss in working with the centered class instead of the uncentered one.
STEP 2: BOUNDING THE COMPLEXITY OF THE CENTERED CLASS. In this step of the proof we generalize the technique of [19] to multi-kernel classes. First we note that, since the (centered) covariance operator $E\,\widetilde\phi_m(x) \otimes \widetilde\phi_m(x)$ is also a self-adjoint Hilbert-Schmidt operator on $H_m$, there exists an eigendecomposition
$$E\,\widetilde\phi_m(x) \otimes \widetilde\phi_m(x) = \sum_{j=1}^\infty \widetilde\lambda_j^{(m)}\, \widetilde u_j^{(m)} \otimes \widetilde u_j^{(m)},$$
wherein $(\widetilde u_j^{(m)})_{j \ge 1}$ is an orthogonal basis of $H_m$. Furthermore, the no-correlation assumption (U) entails $E\,\widetilde\phi_l(x) \otimes \widetilde\phi_m(x) = 0$ for all $l \ne m$. As a consequence, for all $j$ and $m$,
$$P\widetilde f_w^2 = E\big(\widetilde f_w(x)\big)^2 = E\Bigg(\sum_{m=1}^M \big\langle w_m, \widetilde\phi_m(x) \big\rangle\Bigg)^2 = \sum_{m=1}^M \sum_{j=1}^\infty \widetilde\lambda_j^{(m)} \big\langle w_m, \widetilde u_j^{(m)} \big\rangle^2 \qquad (5)$$
$$E\Big\langle \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi_m(x_i),\, \widetilde u_j^{(m)} \Big\rangle^2 = \frac{1}{n}\Big\langle \widetilde u_j^{(m)}, \Big(\frac{1}{n}\sum_{i=1}^n E\,\widetilde\phi_m(x_i) \otimes \widetilde\phi_m(x_i)\Big)\widetilde u_j^{(m)} \Big\rangle = \frac{\widetilde\lambda_j^{(m)}}{n}. \qquad (6)$$
Let now $h_1, \dots, h_M$ be arbitrary nonnegative integers. We can express the LRC in terms of the eigendecomposition as follows:
$$R_r(\widetilde H_p) = E\!\!\!\sup_{f_w \in \widetilde H_p:\, P\widetilde f_w^2 \le r} \Big\langle w, \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi(x_i) \Big\rangle = E\!\!\!\sup_{f_w \in \widetilde H_p:\, P\widetilde f_w^2 \le r} \Big\langle \big(w^{(m)}\big)_{m=1}^M, \Big(\frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi_m(x_i)\Big)_{m=1}^M \Big\rangle$$
$$\overset{\text{C.-S., Jensen}}{\le}\; \sup_{P\widetilde f_w^2 \le r}\Bigg(\sum_{m=1}^M \sum_{j=1}^{h_m} \widetilde\lambda_j^{(m)} \big\langle w^{(m)}, \widetilde u_j^{(m)} \big\rangle^2\Bigg)^{\frac{1}{2}} \Bigg(\sum_{m=1}^M \sum_{j=1}^{h_m} \big(\widetilde\lambda_j^{(m)}\big)^{-1} E\Big\langle \frac{1}{n}\sum_{i=1}^n \sigma_i \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \Big\rangle^2\Bigg)^{\frac{1}{2}}$$
$$\qquad +\; E\sup_{f_w \in \widetilde H_p} \Big\langle w, \Big(\frac{1}{n}\sum_{i=1}^n \sum_{j=h_m+1}^\infty \big\langle \sigma_i \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle\, \widetilde u_j^{(m)}\Big)_{m=1}^M \Big\rangle,$$
so that (5) and (6) yield
$$R_r(\widetilde H_p) \;\overset{(5),(6),\text{Hölder}}{\le}\; \sqrt{\frac{r\sum_{m=1}^M h_m}{n}} + D\,E\,\Bigg\|\Bigg(\frac{1}{n}\sum_{i=1}^n \sum_{j=h_m+1}^\infty \big\langle \sigma_i \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle\, \widetilde u_j^{(m)}\Bigg)_{m=1}^M\Bigg\|_{2,p^*}.$$
STEP 3: KHINTCHINE-KAHANE'S AND ROSENTHAL'S INEQUALITIES. We use the Khintchine-Kahane (K.-K.) inequality (see Lemma B.2 in the supplementary material) to further bound the right term in the above expression as
$$E\,\Bigg\|\Bigg(\frac{1}{n}\sum_{i=1}^n \sum_{j>h_m} \big\langle \sigma_i \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle\, \widetilde u_j^{(m)}\Bigg)_{m=1}^M\Bigg\|_{2,p^*} \le \sqrt{\frac{p^*}{n}}\; E\Bigg(\sum_{m=1}^M \Bigg(\frac{1}{n}\sum_{i=1}^n \sum_{j>h_m} \big\langle \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle^2\Bigg)^{\frac{p^*}{2}}\Bigg)^{\frac{1}{p^*}}.$$
Note that for $p \le 2$ it holds that $p^*/2 \ge 1$, and thus it suffices to employ Jensen's inequality once again to move the expectation operator inside the inner term. In the general case we need a handle on the $\frac{p^*}{2}$-th moments, and to this end employ Lemma C.1 (Rosenthal + Young; see supplementary material), which yields
$$\Bigg(\sum_{m=1}^M E\Bigg(\frac{1}{n}\sum_{i=1}^n \sum_{j=h_m+1}^\infty \big\langle \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle^2\Bigg)^{\frac{p^*}{2}}\Bigg)^{\frac{1}{p^*}} \overset{\text{R+Y}}{\le} \Bigg(\sum_{m=1}^M (ep^*)^{\frac{p^*}{2}} \Bigg(\Big(\frac{B}{n}\Big)^{\frac{p^*}{2}} + \Bigg(\sum_{j=h_m+1}^\infty \frac{1}{n}\sum_{i=1}^n E\big\langle \widetilde\phi_m(x_i), \widetilde u_j^{(m)} \big\rangle^2\Bigg)^{\frac{p^*}{2}}\Bigg)\Bigg)^{\frac{1}{p^*}}$$
$$\overset{(*)}{\le} \sqrt{ep^*}\,\Bigg(\frac{BM^{\frac{2}{p^*}}}{n} + \Bigg\|\Bigg(\sum_{j=h_m+1}^\infty \widetilde\lambda_j^{(m)}\Bigg)_{m=1}^M\Bigg\|_{\frac{p^*}{2}}\Bigg)^{\frac{1}{2}},$$
where for $(*)$ we used the subadditivity of $\sqrt[p^*]{\cdot}$. Note that for all $j$ and $m$ we have $\widetilde\lambda_j^{(m)} \le \lambda_j^{(m)}$ by the Lidskii-Mirsky-Wielandt theorem, since $E\,\phi_m(x) \otimes \phi_m(x) = E\,\widetilde\phi_m(x) \otimes \widetilde\phi_m(x) + E\phi_m(x) \otimes E\phi_m(x)$.
Thus, by the subadditivity of the root function,
$$R_r(\widetilde H_p) \le \sqrt{\frac{r\sum_{m=1}^M h_m}{n}} + D\,\sqrt{\frac{ep^{*2}}{n}}\,\Bigg(\frac{BM^{\frac{2}{p^*}}}{n} + \Bigg\|\Big(\sum_{j=h_m+1}^\infty \widetilde\lambda_j^{(m)}\Big)_{m=1}^M\Bigg\|_{\frac{p^*}{2}}\Bigg)^{\frac{1}{2}}$$
$$\le \sqrt{\frac{r\sum_{m=1}^M h_m}{n}} + \sqrt{\frac{ep^{*2}D^2}{n}\,\Bigg\|\Big(\sum_{j=h_m+1}^\infty \lambda_j^{(m)}\Big)_{m=1}^M\Bigg\|_{\frac{p^*}{2}}} + \frac{BeDM^{\frac{1}{p^*}} p^*}{n}. \qquad (7)$$
STEP 4: BOUNDING THE COMPLEXITY OF THE ORIGINAL CLASS. Now note that for all nonnegative integers $h_m$ we either have
$$n^{-\frac{1}{2}}\min\bigg(\sqrt r,\; D\,\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}^{\frac{1}{2}}\bigg) \le \sqrt{\frac{ep^{*2}D^2}{n}\,\Big\|\Big(\sum_{j=h_m+1}^\infty \lambda_j^{(m)}\Big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}$$
(in case all $h_m$ are zero), or it holds
$$n^{-\frac{1}{2}}\min\bigg(\sqrt r,\; D\,\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}^{\frac{1}{2}}\bigg) \le \sqrt{\frac{r\sum_{m=1}^M h_m}{n}}$$
(in case at least one $h_m$ is nonzero), so that in any case we get
$$n^{-\frac{1}{2}}\min\bigg(\sqrt r,\; D\,\Big\|\big(\mathrm{tr}(J_m)\big)_{m=1}^M\Big\|_{\frac{p^*}{2}}^{\frac{1}{2}}\bigg) \le \sqrt{\frac{r\sum_{m=1}^M h_m}{n}} + \sqrt{\frac{ep^{*2}D^2}{n}\,\Big\|\Big(\sum_{j=h_m+1}^\infty \lambda_j^{(m)}\Big)_{m=1}^M\Big\|_{\frac{p^*}{2}}}.$$
Thus the following preliminary bound follows from (4) by (7):
$$R_r(H_p) \le \sqrt{\frac{4r\sum_{m=1}^M h_m}{n}} + \sqrt{\frac{4ep^{*2}D^2}{n}\,\Big\|\Big(\sum_{j=h_m+1}^\infty \lambda_j^{(m)}\Big)_{m=1}^M\Big\|_{\frac{p^*}{2}}} + \frac{BeDM^{\frac{1}{p^*}} p^*}{n}, \qquad (8)$$
for all nonnegative integers $h_m \ge 0$. Later, we will use the above bound (8) for the computation of the excess loss; however, to gain more insight into the bounds' properties, we express it in terms of the truncated spectra of the kernels at the scale $r$, as follows:
STEP 5: RELATING THE BOUND TO THE TRUNCATION OF THE SPECTRA OF THE KERNELS. Next, we notice that for all nonnegative real numbers $A_1, A_2$ and any $a_1, a_2 \in \mathbb{R}^M_+$ it holds for all $q \ge 1$ that
$$\sqrt{A_1} + \sqrt{A_2} \le \sqrt{2(A_1 + A_2)} \qquad (9)$$
$$\|a_1\|_q + \|a_2\|_q \le 2^{1-\frac{1}{q}}\,\|a_1 + a_2\|_q \le 2\,\|a_1 + a_2\|_q \qquad (10)$$
(the first statement follows from the concavity of the square root function and the second one is readily proved; see Lemma C.3 in the supplementary material), and thus
$$R_r(H_p) \le \sqrt{\frac{16}{n}\,\Bigg\|\Bigg(rM^{1-\frac{2}{p^*}} h_m + ep^{*2}D^2 \sum_{j=h_m+1}^\infty \lambda_j^{(m)}\Bigg)_{m=1}^M\Bigg\|_{\frac{p^*}{2}}} + \frac{BeDM^{\frac{1}{p^*}} p^*}{n},$$
where we used that for all non-negative $a \in \mathbb{R}^M$ and $0 < q < p \le \infty$ it holds ($\ell_q$-to-$\ell_p$ conversion)
$$\|a\|_q = \langle \mathbf{1}, a^q \rangle^{\frac{1}{q}} \overset{\text{Hölder}}{\le} \Big(\|\mathbf{1}\|_{(p/q)^*}\,\|a^q\|_{\frac{p}{q}}\Big)^{\frac{1}{q}} = M^{\frac{1}{q}-\frac{1}{p}}\,\|a\|_p. \qquad (11)$$
Since the above holds for all nonnegative integers $h_m$, the result follows, completing the proof.
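As a small sanity check of the $\ell_q$-to-$\ell_p$ conversion (11), the following sketch (ours) verifies the inequality on random nonnegative vectors; the trial count and value ranges are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    M = rng.integers(2, 20)
    a = rng.random(M)                               # nonnegative vector
    q, p = sorted(rng.uniform(0.1, 4.0, size=2))    # 0 < q < p
    if np.isclose(q, p):
        continue
    lhs = np.sum(a ** q) ** (1 / q)
    rhs = M ** (1 / q - 1 / p) * np.sum(a ** p) ** (1 / p)
    assert lhs <= rhs + 1e-12
print("inequality (11) held on all random trials")
```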
2.1 Lower and Excess Risk Bounds
To investigate the tightness of the presented upper bounds on the LRC of $H_p$, we consider the case where $\phi_1(x), \dots, \phi_M(x)$ are i.i.d.; for example, this happens if the original input space $X$ is $\mathbb{R}^M$, the original input variable $x \in X$ has i.i.d. coordinates, and the kernels $k_1, \dots, k_M$ are identical and each act on a different coordinate of $x$.
Theorem 4 (LOWER BOUND). Assume that the kernels are centered and i.i.d. Then there is an absolute constant $c$ such that if $\lambda_1^{(1)} \ge \frac{1}{nD^2}$, then for all $r \ge \frac{1}{n}$ and $p \ge 1$,
$$R_r(H_{p,D,M}) \;\ge\; \sqrt{\frac{c}{n}\sum_{j=1}^\infty \min\Big(rM,\; D^2 M^{\frac{2}{p^*}} \lambda_j^{(1)}\Big)}. \qquad (12)$$
Comparing the above lower bound with the upper bounds, we observe that the upper bound of Theorem 2 for centered identical independent kernels is of the order $O\Big(\sqrt{\frac{1}{n}\sum_{j=1}^\infty \min\big(rM,\; D^2 M^{\frac{2}{p^*}} \lambda_j^{(1)}\big)}\Big)$, thus matching the rate of the lower bound (the same holds for the bound of Theorem 3). This shows that the upper bounds of the previous section are tight.
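The sketch below (ours) evaluates the common truncated-spectrum quantity $\sqrt{\frac{1}{n}\sum_j \min(rM, D^2M^{2/p^*}\lambda_j^{(1)})}$ that, up to absolute constants, governs both the lower bound (12) and the matching upper bound; the absolute constant is set to 1 and all parameter values are placeholders.

```python
import numpy as np

def truncated_sum(r, lam1, M, D, p_star):
    # sum_j min(r*M, D^2 * M^(2/p*) * lambda_j^(1)) over a finite spectrum
    return np.sum(np.minimum(r * M, D**2 * M ** (2 / p_star) * lam1))

n, M, D, alpha = 10_000, 50, 1.0, 2.0
lam1 = np.arange(1, 5001) ** -alpha      # assumed decaying spectrum of one kernel
p_star = 4.0                             # conjugate exponent of p = 4/3, say
for r in (1e-4, 1e-3, 1e-2):
    val = np.sqrt(truncated_sum(r, lam1, M, D, p_star) / n)
    print(f"r = {r:g}: shared upper/lower rate = {val:.5f}")
```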
As an application of our results to prediction problems such as classification or regression, we also bound the excess loss of empirical minimization, $\hat f := \operatorname{argmin}_f \frac{1}{n}\sum_{i=1}^n l(f(x_i), y_i)$, w.r.t. a loss function $l$: $P(l_{\hat f} - l_{f^*}) := E\, l(\hat f(x), y) - E\, l(f^*(x), y)$, where $f^* := \operatorname{argmin}_f E\, l(f(x), y)$. We use the analysis of Bartlett et al. [2] to show the following excess risk bound under the assumption of algebraically decreasing eigenvalues of the kernel matrices, i.e., $\exists\, d > 0,\ \alpha > 1,\ \forall m:\ \lambda_j^{(m)} \le d\, j^{-\alpha}$ (proof shown in the supplementary material E):

Theorem 5. Assume that $\|k\|_\infty \le B$ and $\exists\, d > 0,\ \alpha > 1,\ \forall m:\ \lambda_j^{(m)} \le d\, j^{-\alpha}$. Let $l$ be a Lipschitz continuous loss with constant $L$ and assume there is a positive constant $F$ such that $P(f - f^*)^2 \le F\, P(l_f - l_{f^*})$ for all $f$. Then for all $z > 0$, with probability at least $1 - e^{-z}$, the excess loss of the multi-kernel class $H_p$ can be bounded for $p \in [1, 2]$ as
$$P(l_{\hat f} - l_{f^*}) \le \min_{t \in [p, 2]}\Bigg(186\,\big(dD^2L^2\big)^{\frac{1}{1+\alpha}}\, t^{*\frac{2}{1+\alpha}}\, F^{\frac{\alpha-1}{\alpha+1}}\, M^{1+\frac{2}{1+\alpha}\left(\frac{1}{t^*}-1\right)}\, n^{-\frac{\alpha}{1+\alpha}} + \frac{47\, BDLM^{\frac{1}{t^*}} t^*}{n} + \frac{\big(22\, BDLM^{\frac{1}{t^*}} + 27F\big)z}{n}\Bigg).$$
1
1
We see from the above bound that convergence
can be almost as slow as O p? M p? n? 2 (if ? ? 1
is small ) and almost as fast as O n?1 (if ? is large).
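The interpolation between these two regimes can be made concrete with a tiny sketch (ours); the $\alpha$ grid is arbitrary.

```python
# The exponent alpha/(1+alpha) from Theorem 5 approaches the slow 1/2 as
# alpha -> 1 and the fast 1 as alpha -> infinity.
for alpha in (1.01, 1.5, 2.0, 4.0, 10.0, 100.0):
    print(f"alpha = {alpha:>6}: excess risk decays as n^(-{alpha / (1 + alpha):.3f})")
```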
3 Interpretation of Bounds
In this section, we discuss the rates of Theorem 5 obtained by the local analysis, that is,
$$\forall t \in [p, 2]: \qquad P(l_{\hat f} - l_{f^*}) = O\Big(t^{*\frac{2}{1+\alpha}}\, D^{\frac{2}{1+\alpha}}\, M^{1+\frac{2}{1+\alpha}\left(\frac{1}{t^*}-1\right)}\, n^{-\frac{\alpha}{1+\alpha}}\Big). \qquad (13)$$
On the other hand, the global Rademacher complexity directly leads to a bound of the form [8]
$$\forall t \in [p, 2]: \qquad P(l_{\hat f} - l_{f^*}) = O\Big(t^*\, D\, M^{\frac{1}{t^*}}\, n^{-\frac{1}{2}}\Big). \qquad (14)$$
To compare the above rates, we first assume $p \le (\log M)^*$, so that the best choice is $t = p$. Clearly, the rate obtained through the local analysis is better in $n$ since $\alpha > 1$. Regarding the rate in the number of kernels $M$ and the radius $D$, a straightforward calculation shows that the local analysis improves over the global one whenever $M^{\frac{1}{p^*}}/D = O(\sqrt n)$. Interestingly, this 'phase transition' does not depend on $\alpha$ (i.e., the 'complexity' of the kernels), but only on $p$.
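The following sketch (ours) compares the two $O$-expressions (13) and (14) with constants dropped and $t = p$, illustrating this phase transition; all parameter values are assumptions.

```python
import numpy as np

def local_rate(n, M, D, p_star, alpha):
    # main term of (13) at t = p, constants dropped
    return (p_star**2 * D**2) ** (1 / (1 + alpha)) \
        * M ** (1 + 2 / (1 + alpha) * (1 / p_star - 1)) * n ** (-alpha / (1 + alpha))

def global_rate(n, M, D, p_star):
    # main term of (14) at t = p, constants dropped
    return p_star * D * M ** (1 / p_star) / np.sqrt(n)

alpha, D, p_star = 2.0, 1.0, 4.0
for M in (10, 10**3, 10**6):
    for n in (10**2, 10**4, 10**6):
        loc, glo = local_rate(n, M, D, p_star, alpha), global_rate(n, M, D, p_star)
        winner = "local" if loc < glo else "global"
        print(f"M = {M:>7}, n = {n:>7}: {winner} analysis gives the smaller bound")
```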
Second, if $p \ge (\log M)^*$, the best choice in (13) and (14) is $t = (\log M)^*$, so that
$$P(l_{\hat f} - l_{f^*}) \le O\Big(\min\Big(M n^{-1},\ \min_{t \in [p, 2]} t^*\, D\, M^{\frac{1}{t^*}}\, n^{-\frac{1}{2}}\Big)\Big) \qquad (15)$$
and the phase transition occurs for $\frac{M}{D \log M} = O(\sqrt n)$. Note that when letting $\alpha \to \infty$ the classical case of aggregation of $M$ basis functions is recovered. This situation is to be compared to the sharp analysis of the optimal convergence rate of convex aggregation of $M$ functions obtained by [27] in the framework of squared-error-loss regression, which is shown to be $O\Big(\min\Big(\frac{M}{n},\ \sqrt{\frac{1}{n}\log\big(\frac{M}{\sqrt n}\big)}\Big)\Big)$. This corresponds to the setting studied here with $D = 1$, $p = 1$ and $\alpha \to \infty$, and we see that our bound recovers (up to logarithmic factors) this sharp bound and the related phase-transition phenomenon in this case.
Please note that, by introducing an inequality in Eq. (5), Assumption (U) (a similar assumption was also used in [23]) can be relaxed to a more general, RIP-like assumption as used in [16]; this comes at the expense of an additional factor in the bounds (details omitted here).
When Can Learning Kernels Help Performance? As a practical application of the presented bounds, we analyze the impact of the norm parameter $p$ on the accuracy of $\ell_p$-norm MKL in various learning scenarios, showing why an intermediate $p$ often turns out to be optimal in practical applications. As indicated in the introduction, there is empirical evidence that the performance of $\ell_p$-norm MKL crucially depends on the choice of the norm parameter $p$ (for example, cf. Figure 1 in the introduction). The aim of this section is to relate the theoretical analysis presented here to this empirically observed phenomenon.
[Figure 2 appears here: three panels, (a) β = 2, (b) β = 1, (c) β = 0.5, each plotting w* (top) and the bound value as a function of p ∈ [1.0, 2.0] (bottom).]
Figure 2: Illustration of the three analyzed learning scenarios (TOP), differing in their soft sparsity of the Bayes hypothesis w* (parametrized by β), and the corresponding values of the bound factor ν_t as a function of p (BOTTOM): a soft sparse (LEFT), an intermediate non-sparse (CENTER), and an almost uniform w* (RIGHT).
To start with, first note that the choice of $p$ only affects the excess risk bound in the factor (cf. Theorem 5 and Equation (13))
$$\nu_t := \min_{t \in [p, 2]} D_p^{\frac{2}{1+\alpha}}\, t^{*\frac{2}{1+\alpha}}\, M^{1+\frac{2}{1+\alpha}\left(\frac{1}{t^*}-1\right)}.$$
Let us assume that the Bayes hypothesis can be represented by $w^* \in H$ such that the block components satisfy $\|w^*_m\|_2 = m^{-\beta}$, $m = 1, \dots, M$, where $\beta \ge 0$ is a parameter parameterizing the 'soft sparsity' of the components. For example, the cases $\beta \in \{0.5, 1, 2\}$ are shown in Figure 2 for $M = 2$ and rank-1 kernels. If $n$ is large, the best bias-complexity trade-off for a fixed $p$ will correspond to a vanishing bias, so that the best choice of $D$ will be close to the minimal value such that $w^* \in H_{p,D}$, that is, $D_p = \|w^*\|_p$. Plugging in this value for $D_p$, the bound factor $\nu_p$ becomes
$$\nu_p := \|w^*\|_p^{\frac{2}{1+\alpha}}\;\min_{t \in [p, 2]}\; t^{*\frac{2}{1+\alpha}}\, M^{1+\frac{2}{1+\alpha}\left(\frac{1}{t^*}-1\right)}.$$
We can now plot the value $\nu_p$ as a function of $p$, fixing $\alpha$, $M$, and $\beta$. We realized this simulation for $\alpha = 2$, $M = 1000$, and $\beta \in \{0.5, 1, 2\}$; the results are shown in Figure 2. Note that $w^*$ becomes less sparse from the left-hand side to the right-hand side. We observe that in the 'soft sparsest' scenario (LEFT) the minimum is attained for a quite small $p = 1.2$, while for the intermediate case (CENTER) $p = 1.4$ is optimal, and finally in the uniformly non-sparse scenario (RIGHT) the choice of $p = 2$ is optimal, i.e., SVM. This means that if the true Bayes hypothesis has an intermediately dense representation (which is frequently encountered in practical applications), our bound gives the strongest generalization guarantees to $\ell_p$-norm MKL using an intermediate choice of $p$.
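A sketch (ours) of this simulation is given below: it computes $\nu_p$ on a grid of $p$ for $\alpha = 2$, $M = 1000$, and $\beta \in \{0.5, 1, 2\}$, and reports the minimizing $p$. The grid resolution and the use of $\|w^*\|_p = (\sum_m m^{-\beta p})^{1/p}$ for the assumed block norms are our choices.

```python
import numpy as np

def nu_p(p, beta, M=1000, alpha=2.0):
    m = np.arange(1, M + 1)
    w_norm_p = np.sum(m ** (-beta * p)) ** (1.0 / p)   # ||w*||_p for blocks m^-beta
    ts = np.linspace(p, 2.0, 200)                      # minimize over t in [p, 2]
    t_star = ts / (ts - 1.0)
    vals = t_star ** (2 / (1 + alpha)) \
        * M ** (1 + 2 / (1 + alpha) * (1 / t_star - 1))
    return w_norm_p ** (2 / (1 + alpha)) * np.min(vals)

ps = np.linspace(1.05, 2.0, 96)
for beta in (2.0, 1.0, 0.5):                           # sparse -> nearly uniform
    curve = [nu_p(p, beta) for p in ps]
    print(f"beta = {beta}: nu_p minimized at p ~ {ps[int(np.argmin(curve))]:.2f}")
```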
4 Conclusion
We derived a sharp upper bound on the local Rademacher complexity of $\ell_p$-norm multiple kernel learning. We also proved a lower bound that matches the upper one and shows that our result is tight. Using the local Rademacher complexity bound, we derived an excess risk bound that attains the fast rate of $O(n^{-\frac{\alpha}{1+\alpha}})$, where $\alpha$ is the minimum eigenvalue decay rate of the individual kernels. In a practical case study, we found that the optimal value of that bound depends on the true Bayes-optimal kernel weights. If the true weights exhibit soft sparsity but are not strongly sparse, then the generalization bound is minimized for an intermediate $p$. This is not only intuitive but also supports empirical studies showing that sparse MKL ($p = 1$) rarely works in practice, while some intermediate choice of $p$ can improve performance.
Acknowledgments
We thank Peter L. Bartlett and K.-R. Müller for valuable comments. This work was supported by the German Science Foundation (DFG MU 987/6-1, RA 1894/1-1) and by the European Community's 7th Framework Programme under the PASCAL2 Network of Excellence (ICT-216886) and under the E.U. grant agreement 247022 (MASH Project).
References
[1] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. 21st ICML. ACM, 2004.
[2] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497–1537, 2005.
[3] R. R. Bouckaert, E. Frank, M. A. Hall, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. WEKA: experiences with a Java open-source project. Journal of Machine Learning Research, 11:2533–2541, 2010.
[4] C. Campbell and Y. Ying. Learning with Support Vector Machines. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2011.
[5] C. Cortes. Invited talk: Can learning kernels help performance? In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 1:1–1:1, New York, NY, USA, 2009. ACM. Video http://videolectures.net/icml09_cortes_clkh/.
[6] C. Cortes, A. Gretton, G. Lanckriet, M. Mohri, and A. Rostamizadeh. Proceedings of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008. URL http://videolectures.net/lkasok08_whistler/, Video http://www.cs.nyu.edu/learning_kernels.
[7] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2009.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In Proceedings, 27th ICML, 2010.
[9] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[10] P. V. Gehler and S. Nowozin. Let the kernel figure it out: Principled learning of pre-processing for kernel classifiers. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2009.
[11] R. Ibragimov and S. Sharakhmetov. The best constant in the Rosenthal inequality for nonnegative random variables. Statistics & Probability Letters, 55(4):367–376, 2001.
[12] J.-P. Kahane. Some Random Series of Functions. Cambridge University Press, 2nd edition, 1985.
[13] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K.-R. Müller, and A. Zien. Efficient and accurate lp-norm multiple kernel learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 997–1005. MIT Press, 2009.
[14] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. Lp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953–997, Mar 2011.
[15] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. Annals of Statistics, 34(6):2593–2656, 2006.
[16] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. Annals of Statistics, 38(6):3660–3695, 2010.
[17] S. Kwapień and W. A. Woyczyński. Random Series and Stochastic Integrals: Single and Multiple. Birkhäuser, Basel and Boston, M.A., 1992.
[18] G. Lanckriet, N. Cristianini, L. El Ghaoui, P. Bartlett, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. JMLR, 5:27–72, 2004.
[19] S. Mendelson. On the performance of kernel classes. J. Mach. Learn. Res., 4:759–771, December 2003.
[20] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, 2005.
[21] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Neural Networks, 12(2):181–201, May 2001.
[22] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[23] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. CoRR, abs/1008.3654, 2010.
[24] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[25] J. R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 3(03):417–424, 1980.
[26] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, July 2006.
[27] A. Tsybakov. Optimal rates of aggregation. In B. Schölkopf and M. Warmuth, editors, Computational Learning Theory and Kernel Machines (COLT-2003), volume 2777 of Lecture Notes in Artificial Intelligence, pages 303–313. Springer, 2003.